21742076 | https://en.wikipedia.org/wiki/42nd%20Electronic%20Combat%20Squadron | 42nd Electronic Combat Squadron | The 42d Electronic Combat Squadron is a United States Air Force unit. Its current assignment is with the 55th Electronic Combat Group at Davis–Monthan Air Force Base, Arizona as a geographically separated unit from its parent wing, the 55th Wing at Offutt Air Force Base, Nebraska. The squadron is the Air Force's sole Lockheed EC-130H Compass Call formal training unit.
Mission
Provides the 55th Electronic Combat Group with combat-ready Lockheed EC-130H Compass Call aircrews. Directs all EC-130H aircrew initial academic and flying qualification, difference, and requalification training for 20 different aircrew specialties, with more than 200 aircrew students trained annually. Provides registrar support to students. Maintains quality control for all aspects of contracted aircrew training and manages courseware development for 17 Air Combat Command-verified syllabi. Provides the group with simulator support for both continuation and initial qualification training.
History
Constituted as the 42d Reconnaissance Squadron, Very Long Range, Photographic, on 24 October 1945 and activated on 7 November 1945 at Smoky Hill Army Air Field, Kansas. The squadron was established under VIII Bomber Command, Continental Air Forces, as a very long-range strategic reconnaissance squadron and equipped with B-29 Superfortress bombers converted to the F-13 reconnaissance/mapping configuration. It was designated to operate from Alaska; however, the squadron suffered from a lack of personnel due to demobilization after the end of World War II, never became fully operational, and was inactivated in August 1946.
Cold War European operations
Reactivated as part of the United States Air Forces in Europe in March 1954 at Spangdahlem Air Base, West Germany. The squadron was equipped with Douglas RB-26 Invader reconnaissance aircraft, painted black for night reconnaissance, which were transferred from the 1st Tactical Reconnaissance Squadron when the 1st received Martin RB-57A Canberras.
In 1956 the B-26s were sent to reclamation and the squadron received twelve Douglas RB-66C Destroyers. The RB-66C was a specialized electronic reconnaissance and electronic countermeasures aircraft designed to jam Soviet radar. Its mission was to fly with tactical fighter and fighter-bomber aircraft and provide an electronic defensive screen. An extensive suite of specialized equipment was fitted to locate and identify enemy radar emissions, additional ECM equipment was carried in wingtip pods, and chaff-dispensing pods could be carried underneath the wings outboard of the engine nacelles. In addition, the aircraft was fitted with a removable in-flight refueling probe attached to the right side of the forward fuselage.
In 1959 the squadron moved to the United Kingdom as part of a USAFE realignment. Its parent 10th Tactical Reconnaissance Wing was assigned to RAF Alconbury, but a shortage of facilities there led to the 42d being stationed at RAF Chelveston, about 20 miles west of Alconbury, where it remained as a detachment of the 10th TRW. In 1962 the runway at Chelveston was closed, and the squadron moved to Toul-Rosières Air Base, France, where it operated for a few years as Detachment 1, 10th Tactical Reconnaissance Wing.
On 10 March 1964, a wing RB-66B took off from Toul for a mission over West Germany. Because of an equipment malfunction that went undetected by the crew, the plane continued its flight into East Germany and was shot down. The crew ejected safely but was taken prisoner, although the crewmen were released before the end of the month. This incident prompted USAFE to institute a buffer zone, in which special procedures were required for aircraft flying near the eastern border of West Germany. Starting in April 1964, thirteen of the squadron's RB-66Bs were modified under Project Brown Cradle to update their electronic warfare equipment and make other changes. By 1965 the modifications had been completed. However, the service of the Brown Cradle aircraft with the squadron was short: in December 1965 five of the aircraft deployed to Southeast Asia, and in May 1966 the remaining eight joined them.
The squadron's rotational deployments within France continued until October 1965, when the 25th Tactical Reconnaissance Wing was activated at Chambley-Bussières Air Base and the 42d was permanently assigned to it. With France's withdrawal from NATO's integrated military organization in 1966, Chambley was closed and the 25th Wing was inactivated. The specially equipped EB-66Cs of the 42d and their aircrews were sent directly to Southeast Asia for use over the skies of North Vietnam, and the squadron was inactivated.
Vietnam War
Reactivated in 1968 at Takhli Royal Thai Air Force Base under the 355th Tactical Fighter Wing. The squadron carried out electronic warfare operations over North Vietnam and along the Ho Chi Minh trail in Laos, locating and identifying the North Vietnamese radar sites that directed missile and anti-aircraft artillery fire, so that strike aircraft could avoid them. The EB-66C had no offensive capability, so it could not attack the radar sites directly. The squadron transferred to Korat Royal Thai Air Force Base in August 1970 and continued operations until the end of hostilities in January 1973, remaining in Thailand until it was inactivated in March 1974.
Eldorado Canyon and Desert Storm
Reactivated in 1983 at RAF Upper Heyford, England, with the General Dynamics EF-111A Raven, the replacement for the Douglas EB-66 electronic warfare aircraft. The squadron performed ECM operations for NATO aircraft and provided electronic countermeasures for US Navy aircraft during combat over Libya on 15 April 1986 as part of Operation El Dorado Canyon. During that mission, the 42d provided three EF-111As plus two spare aircraft to jam the Libyan radar network.
Deployed flights to Turkey and Saudi Arabia in 1991 as part of Operation Desert Shield and engaged in combat operations that year as part of Operation Desert Storm. Eighteen EF-111A Ravens flew over 900 sorties; none were lost in combat, but one was lost in a non-combat accident in which both crew members were killed. The 42d was initially credited with a "kill" during Desert Storm: on the night of 17 January 1991, an Iraqi Mirage F.1 flew into the ground while chasing EF-111A serial number 66-16, and although the Raven was unarmed and had no air-to-air capability, its crew was given credit for the kill. After review of AWACS tapes and other data, however, the credit was withdrawn. The official award for the Mirage F.1 kill went to Capt Robert E. Graeter, 33 TFW (Special Order CCAF SO GA-1, 1991), as a maneuvering kill.
Inactivated in 1992 as part of the post-Cold War drawdown of the USAF, with its aircraft reassigned to the 429th Electronic Combat Squadron at Cannon Air Force Base, New Mexico.
Airborne Command and Control (ABCCC) Operations
In 1994, the squadron was reactivated and received the personnel and aircraft of the 7th Airborne Command and Control Squadron, which provided procedural air control via the Airborne Battlefield Command and Control Center (ABCCC). The 7th Squadron moved on paper to Offutt Air Force Base, Nebraska, and assumed a new mission, the Airborne Command Post "Looking Glass" mission in support of nuclear command and control for United States Strategic Command. As part of this mission, the 7th began flying Boeing EC-135 aircraft.
The EC-130E ABCCC fleet consisted of seven aircraft. The EC-130E was a modified C-130 Hercules designed to carry the AN/USC-48 Airborne Battlefield Command and Control Center capsule (ABCCC III). Functioning as a direct extension of ground-based command and control authorities, its primary mission was to provide flexibility in the overall control of tactical air resources. In addition to maintaining positive control of air operations, ABCCC provided communications to higher headquarters, including national command authorities, in both peacetime and wartime environments.
The ABCCC gave unified and theater commanders a capability for combat operations during war, contingencies, exercises, and other missions, supported by a highly trained force of mission-ready crew members and specially equipped EC-130E aircraft available for worldwide operations. Mission roles included serving as airborne extensions of the Air Operations Center and the Airborne Air Support Operations Center for command and control of offensive air support operations, and providing airborne on-scene command for special operations such as airdrops or evacuations.
In September 2002, following 2,186 days (5 years, 11 months, 25 days) of continuous deployed operations at Aviano Air Base (July 15, 1993 to July 10, 1999) supporting Operations Deny Flight, Deliberate Force, Decisive Edge, Deliberate Guard, Deliberate Forge, and Allied Force, the unit was inactivated. Some of the aircraft were retired, while others assumed new roles and missions for the U.S. Air Force. All of the capsules, save one, were scrapped.
Compass Call training
The squadron was reactivated in 2006 as part of the Global War on Terror to provide training for the EC-130H Compass Call squadrons at Davis-Monthan Air Force Base. Since 2006, the 42d has served as the "schoolhouse" squadron for the 41st and 43d Electronic Combat Squadrons.
Lineage
Constituted as the 42d Reconnaissance Squadron, Very Long Range, Photographic on 24 October 1945
Activated on 7 November 1945
Inactivated on 19 August 1946
Redesignated 42d Tactical Reconnaissance Squadron, Electronics and Weather on 11 December 1953
Activated on 18 March 1954
Redesignated 42d Tactical Reconnaissance Squadron, Electronic on 1 July 1965
Discontinued and inactivated on 22 August 1966
Redesignated 42d Tactical Electronic Warfare Squadron and activated on 15 December 1967 (not organized)
Organized on 1 January 1968
Inactivated on 15 March 1974
Redesignated 42d Electronic Combat Squadron on 23 May 1983
Activated on 1 July 1983
Inactivated on 1 July 1992
Redesignated 42d Airborne Command and Control Squadron on 24 June 1994
Activated on 1 July 1994
Inactivated on 30 September 2002
Redesignated 42d Electronic Combat Squadron on 9 March 2006
Activated on 10 March 2006
Assignments
VIII Bomber Command, 7 November 1945
17th Bombardment Operational Training Wing, c. 31 January 1946
Second Air Force, 18 February 1946
Fifteenth Air Force, 31 March-19 August 1946
10th Tactical Reconnaissance Group, 18 March 1954
10th Tactical Reconnaissance Wing, 8 December 1957
25th Tactical Reconnaissance Group, 1 July 1965
25th Tactical Reconnaissance Wing, 1 October 1965 – 22 August 1966
Pacific Air Forces, 15 December 1967 (not organized)
355th Tactical Fighter Wing, 1 January 1968
Attached to 388th Tactical Fighter Wing, c. 21 September – 14 October 1970
388th Tactical Fighter Wing, 15 October 1970 – 15 March 1974
20th Tactical Fighter Wing, 1 July 1983
66th Electronic Combat Wing
Attached to 20th Tactical Fighter Wing, 1 June 1985
20th Tactical Fighter Wing (later 20th Fighter Wing), 25 January 1991 – 1 July 1992
355th Operations Group, 1 July 1994 – 30 September 2002
55th Electronic Combat Group, 10 March 2006 – present
Stations
Smoky Hill Army Air Field, Kansas, 7 November 1945 – 19 August 1946
Spangdahlem Air Base, West Germany, 18 March 1954
RAF Chelveston, England, c. 20 August 1959
Toul-Rosières Air Base, France, 1 August 1962
Deployed at Chambley-Bussières Air Base, France, 15 May-9 October 1963
Chambley-Bussières Air Base, France, 1 July 1965 – 22 August 1966
Takhli Royal Thai Air Force Base, Thailand, 1 January 1968
Korat Royal Thai Air Force Base, Thailand, 22 September 1970 – 15 March 1974
RAF Upper Heyford, England, 1 July 1983 – 1 July 1992
Davis-Monthan Air Force Base, Arizona, 1 July 1994 – 30 September 2002
Davis-Monthan Air Force Base, Arizona, 10 March 2006 – present
Aircraft
Boeing B-29 Superfortress, 1945–1946
Douglas RB-26 Invader, 1954–1957
T/WT-33 Shooting Star 1955–1957
Douglas RB-66 Destroyer 1956–1965
Douglas WB-66 Destroyer 1957–1960
Douglas B-66 Destroyer 1960–1966
Douglas EB-66 Destroyer 1968–1974
General Dynamics EF-111A Raven 1984–1992; affectionately known as the 'Spark-Vark'
Lockheed EC-130E ABCCC 1994–2002
Lockheed EC-130H Compass Call 2006–present
3530180 | https://en.wikipedia.org/wiki/Objective%20Interface%20Systems | Objective Interface Systems | Objective Interface Systems, Inc. is a computer communications software and hardware company. The company's headquarters are in Herndon, Virginia, USA. OIS develops, manufactures, licenses, and supports software and hardware products that generally fit into one or more of the following markets:
Real-time communications middleware software and hardware
Embedded communications middleware software and hardware
High-performance communications middleware software and hardware
Secure communications software and hardware
A popular OIS product is the ORBexpress CORBA middleware. ORBexpress is most widely used in the real-time and embedded computing markets. OIS supports the software version of ORBexpress on more than 2,200 computing platforms (combinations of CPU families, operating systems, and language compilers). OIS also offers FPGA versions of ORBexpress that allow hardware blocks on an FPGA to interoperate with software.
OIS engineers invented a form of communications security called the Partitioning Communication System or PCS. The PCS is a technical architecture that protects multiple Information Flows from influencing each other when communicated on a single network wire. The PCS is best implemented on a software separation operating system such as SELinux or a separation kernel.
OIS's communications products are most frequently found in the enterprise, telecom/datacom, mil/aero, medical, robotics, process control and transportation industries. Objective Interface is a privately held company and has developed software products since 1989 and hardware products since 2001.
The company is actively involved with various standards groups, including:
Common Criteria
IEEE
Network Centric Operations Industry Consortium
Object Management Group (OMG)
The Open Group
Wireless Innovation Forum
Corporate Headquarters
OIS headquarters is located at 220 Spring Street, Suite 530, Herndon, VA, 20170-6201.
External links
Objective Interface Systems - Official website
Object Management Group (OMG)
The Open Group
Wireless Innovation Forum
Common Object Request Broker Architecture
Companies based in Fairfax County, Virginia
Software companies based in Virginia
Computer hardware companies
Software companies of the United States |
404298 | https://en.wikipedia.org/wiki/SLA%20Industries | SLA Industries | SLA Industries (pronounced "slay") is a role-playing game first published in 1993 by Nightfall Games in Glasgow, Scotland. The game is set in a dystopian far-flung future in which the majority of the known universe is either owned or indirectly controlled by the eponymous corporation "SLA Industries" and incorporates themes from the cyberpunk, horror, and conspiracy genres.
The game combined concepts inspired by a range of aesthetics and ideas. Elements include: song lyrics from David Bowie and the Industrial music scene, cyberpunk fiction (including Blade Runner and Max Headroom), anime / manga (including Akira, Appleseed, Bubblegum Crisis, and Trigun), and the growing cultural obsession with the media (including 24-hour news services and the Gladiator TV Show).
Setting
SLA Industries itself is a fictional corporation run by a mysterious and seemingly immortal creature called "Mr. Slayer", whose upper management team includes two other creatures like himself, "Intruder" and "Senti". The corporation is headquartered in "Mort City", a densely populated city-sprawl larger than Eurasia and surrounded by the urban ruins of the "Cannibal Sectors". It is all located on a vast planet (also called "Mort") that had been stripped of its natural resources to the point that the ecology had been utterly destroyed. SLA Industries controlled an undefined but vast number of planets, collectively referred to as the World of Progress, and governed them in accordance with Mr. Slayer's Big Picture. The setting is bleak and surreal, with much left deliberately ill-defined in the source material.
Players take the role of freelance employees of SLA Industries, called Operatives, living in Mort City and taking care of odd jobs assigned to them by the corporation. These jobs usually involve keeping the peace—chasing serial killers, hunting monsters in the sewers, quashing riots, foiling terrorist plots, and silencing dissidents are common themes. Appearance, style and branding are emphasized in the game world as much as combat ability, due to the omnipresence of television; for ambitious Operatives public persona and TV ratings are often as important as professional abilities. A supplement, the Contract Directory, also provides the option for players to play as celebrity gladiators called Contract Killers. As a role-playing experience, the game tends to be predisposed towards splatterpunk horror, noir, dark satire, and/or gunbunny high action. However, the complexity and Byzantine politics of the setting allow for slower-paced campaigns based around subversion, inter-departmental rivalry, and cut-throat power struggles within the company.
Along with humans, playable races include the drug-addicted mutant humans called "Frothers", the stealthy feline "Wraith Raiders", the formidably violent saurian "Shaktar", and the two 'Ebb' / pseudo-magic using races: the emotionally sensitive and charismatic Ebon, and their more sadistic and violent evolution, the "Brain Wasters". There are also a variety of biogenetic vat-grown warrior races called Stormers, produced by SLA to fight in their endless wars.
Publication prior to 2nd Edition
SLA Industries was first published independently in 1993. The game was bought by Wizards of the Coast late in 1994, after their success with Magic: The Gathering. It was later republished by Nightfall Games Ltd and distributed by Hogshead Publishing, until Hogshead Publishing closed down. Between 2003 and 2011 Cubicle 7 Entertainment produced new material, and in 2011 the license returned to Nightfall Games Ltd, who released supplements (known as Data Packets) as PDFs. In August 2016, the Kickstarter for the SLA Industries: Cannibal Sector 1 miniatures game was launched by Daruma Productions and Nightfall Games. On 22 June 2018, Nightfall Games announced that Daruma Productions was entering the liquidation process and Nightfall would be taking over completion of the Kickstarter. Cannibal Sector 1, the last 1st Edition product, was published in 2019.
Second Edition
A new Kickstarter campaign launched in September 2019 for SLA Industries 2nd Edition. It raised £90,444 from 1,281 backers. Additionally, a free Quick Start was released in August 2019 to introduce the new rules, ahead of the Kickstarter campaign. Pledge fulfilment started in December 2020, and the products went on general sale in January 2021.
A second Kickstarter, for the first sourcebooks for SLA Industries 2nd Edition, was announced in July 2021 and started on 24 August 2021. The first two books announced were Threat Analysis 1: Collateral and Species Guide 1: Shaktar/Wraithen.
SLA Industries 2nd Edition won a Judges' Spotlight award in the 2021 ENnies.
Product line
2nd Edition Publications (all published by Nightfall Games)
"SLA Industries 2nd Edition" Core Rulebook (Jan 2021) including 5 cover variants.
"SLA Industries 2nd Edition GM Screen Pack", includes a GM Screen and 16 page booklet (Jan 2021).
"SLA Industries 2nd Edition: Quick Start", a 48-page PDF introducing the 2nd edition ruleset. The printed version was published Jan 2021.
"SLA Industries 2nd Edition: COLLATERAL", a 240-page sourcebook focused on gangs, Manchines and other self-generated enemies. The first of the 4Cs book (will be published in 2022).
"SLA Industries 2nd Edition: Shaktar/Wraithen", a 80-page sourcebook focused on the names playable species. The first of the Species Guide books (will be published in 2022).
"SLA Industries 2nd Edition: Hunter Sheets Data Packet 1", a 48 page softback sourcebook (will be published in 2022).
“SLASHER magazines”. A series of in-setting small magazines presenting famous and wannabe criminals. At present there is 1 magazine (will be published in 2022)
“Progress Reports”. A series of PDF supplements released sporadically since June 2020. At present there are 4 documents (Progress Reports 0-3).
Conversion Documents for "Cannibal Sector One" and “Hunter Sheets 2”. PDFs to allow the use of the 1st Edition books with the 2nd edition ruleset (Jan 2021).
“Headshots” Image Packs. Each Headshots Image pack includes at least 10 high res JPEG images featuring character art by Dave Allsop. At present there are five packs.
1st Edition Publications
SLA Industries Main Rulebook, published by Nightfall Games
Karma Sourcebook, published by Nightfall Games (out of print)
GM Screen (out of print)
Hunter Sheets Issue 1 Supplement, published by Cubicle 7 (out of print)
Hunter Sheets Issue 2 Supplement, published by Daruma Productions and Nightfall Games
"Gator Stormer" Data Packet, a PDF only Supplement published by Nightfall Games (no longer available).
"Klicks End" Data Packet, a PDF only Supplement published by Nightfall Games
"Momic 0.1" Data Packet, a PDF only Supplement published by Nightfall Games (no longer available).
"Dream" Data Packet, a PDF only Supplement published by Nightfall Games (no longer available).
"Hominid" Data Packet, a PDF only Supplement published by Nightfall Games (no longer available).
"Hunters Sheets: Red Alert" Data Packet, a PDF only Supplement published by Nightfall Games (no longer available).
"Cannibal Sector One" Supplement (352-lage full colour hardback book or pdf published by Nightfall Games). This was the last publication for 1st Edition.
The following 1st Edition books and documents are no longer considered canon and are out of print:
"Mort" Sourcebook () published by Wizards of the Coast
"The Key of Delhyread" Scenario () published by Hogshead Publishing
"The Contract Directory" Sourcebook () published by Hogshead Publishing
"CS1" Sourcebook () published by Cubicle 7
"Ursa Carrien" Data Packet, a PDF only Supplement published by Nightfall Games
The Writers' Bible
Nightfall Games produced the SLA Industries Writers Bible, sometimes simply referred to as The Bible or The Truth, to allow freelance writers to grasp the complicated background of the game. The terms of the associated non-disclosure agreement required that the contents of the document remain secret. Following an extended hiatus in production of official SLA Industries material, editor Tim Dedopulos released the bible to the members of the SLA Industries email discussion list in 1998. The fans' reaction to The Truth was not entirely warm, and the remaining members of Nightfall Games made it clear that further redistribution of the bible was not permitted without their explicit permission. The writers have since made it clear that the bible was not intended for mass consumption - it had not been edited to the same standard as the published material, as it was an internal document used only to keep the work of disparate authors consistent with the intentions of Nightfall Games. It has also been explained that the process of revealing The Truth was originally to have happened over the course of several publications, each one containing more elements of an increasingly refined version of the backstory. In 2005 Cubicle 7 re-released the old Writers Bible, stating that this was no longer the Truth used internally for further development of material. It is no longer accessible and not considered canon by Nightfall Games.
Reviews
White Wolf #44 (June, 1994)
Backstab #24
Dosdediez (Número 3 - Mar/Abr 1994)
Rollespilsmagasinet Fønix (Danish) (Issue 6 - January/February 1995)
External links
Nightfall Games, creators of SLA Industries.
SLA Industries 2nd Edition: Quick Start on DriveThru RPG.
SLA Industries 2nd Edition product line on DriveThru RPG.
British role-playing games
Campaign settings
Cyberpunk role-playing games
Hogshead Publishing games
Horror role-playing games
Science fiction role-playing games
Wizards of the Coast games
Role-playing games introduced in 1993
ENnies winners |
18113363 | https://en.wikipedia.org/wiki/Datatel | Datatel | Datatel, Inc. was a private company that provided fully integrated software and professional services to build enterprise education platforms for higher education until it combined with its competitor SunGard Higher Education to form Ellucian in 2012. Datatel was headquartered in Fairfax, Virginia.
With 533 employees, Datatel had more than 799 client institutions located in the United States, Canada, Guam, Puerto Rico, American Samoa, and Bermuda.
History
Datatel was founded by Tom Davidson in 1968 as a data processing service bureau. The company began in the mezzanine of a suburban office supply store outside of Washington, D.C. A few years later, Datatel began performing data processing for banks. In 1973, the company grew substantially when Davidson merged Datatel with Data Technology Corporation, owned by Ken Kendrick. Datatel had focused on higher education since 1979.
Beginning in 1992, the company was led by Russ Griffith, who died on August 7, 2006. Under Griffith's leadership in 2005, Datatel's executive team, backed by Thoma Cressey Equity Partners and Trident Capital, signed an agreement to acquire the company from Davidson and Kendrick. Davidson went on to co-found the Balance Bar food company in 1992, and Kendrick became a managing partner of the Arizona Diamondbacks baseball team. In 2004, Datatel acquired the LiquidMatrix Corporation, a Buffalo-based provider of student recruiting and alumni outreach Web-based software.
Datatel was acquired in December 2009 by private equity company Hellman & Friedman.
In March 2012, Datatel merged with SunGard Higher Education to form Ellucian.
Timeline
1972: Released the Silent 700, a programmable data terminal
1975: Became the first East Coast Microdata dealer
1976: Began development of TOPS, the Total Office Product System, later to be known as ASSETS
1979: Became first PRIME information dealer and began selling solutions for colleges and universities
Willamette University became first Datatel client
Development of Colleague began
1986: Opened San Francisco office
1987: Moved to current headquarters in the Fair Lakes area of Fairfax, Virginia
1988: Sold hundredth Colleague client
1989: Changed to multiplatform UNIX
1990: Established Datatel Scholar's Foundation
1991: Announced products on IBM and Sequent platforms
Acquired first Canadian client, the University of Waterloo
1996: Introduced Client/Server
2002: Formed Datatel Center for Institutional Effectiveness (DCIE)
2004: Acquired LiquidMatrix Corporation
2005: Management buyout of founders
2006: Longtime president and CEO Russ Griffith died
2007: Reached 39th year of consecutive growth
Released ActiveCampus Portal, built on Microsoft SharePoint
2009: Acquired in December 2009 by private equity company Hellman & Friedman.
2010: Announced 2009 revenues of $138 million
2012: Merged with SunGard Higher Education to form Ellucian
Awards
Recipient of the U.S. Senate Productivity and Quality Award (SPQA) for Virginia in 1997
Selected as one of the Washingtonian magazine’s Great Places to Work in 2001, 2003, 2005, and 2007
Recipient of the 2003 CARE Award for mid-sized companies as presented by the Northern Virginia Family Services organization
Recipient of the 2003 National Capital Business Ethics Award from The National Capital Chapter of the Society of Financial Service Professional
Recipient of the University of Maryland’s Smith Digital Economy Award in 2008
Top finalist for Microsoft Partner of the Year Award in Information Worker Solutions, Portals, and Collaboration in 2008
Named Microsoft Public Sector Education Partner of the Year 2009, developing an enterprise CRM solution, expanding into the Teaching and Learning market segment, and introducing a mobile application for administrative and academic information on the go
Products
Datatel Colleague, an administrative software suite based on a single-source technology approach, coupled with data-driven, rules-based architecture. The five components of Colleague are:
CORE
ST (Student)
CF (Colleague Finance)
CA (Colleague Advancement)
HR (Human Resources)
Datatel Mobile Access (MOX), powered by DubLabs
Datatel Portal
Datatel Recruiter
Datatel WCMS
Datatel Intelligent Learning Platform (ILP)
Professional services
Administrative and application management
Consulting
Datatel Center for Institutional Effectiveness (DCIE) - provides strategic planning programs for administrators in higher education
Institutional effectiveness
Professional executive services
Strategic planning and project management
Support and self-service
Training and education
Web site design
Related information
Datatel Users' Group (DUG) is a professional association of members from Datatel's client institutions. DUG holds annual and regional conferences.
In 1990, Datatel formed the Datatel Scholars Foundation to award scholarships to outstanding undergraduate and graduate students attending Datatel client institutions. The Foundation has awarded over 3,700 scholarships totaling more than $5.5 million.
Additional References
Company website
"Datatel Offers Web-Enabled Management Software," Campus Technology, August 12, 2002
"Company Profile for Datatel, Inc." Reuters, 2008
"Great Places to Work: The List," Washingtonian magazine, 2005
"Microsoft Announces 2008 Partner of the Year Awards Finalists and Winners," 2008
"Great Places to Work: Where to Feel at Home," Washingtonian magazine, 2007
Software companies based in Virginia
Software companies of the United States
1968 establishments in Virginia |
36384439 | https://en.wikipedia.org/wiki/Economy%20of%20Lucknow | Economy of Lucknow |
Structure and growth in employment
The distribution of main workers in 2001 by category reveals that almost 21 percent were cultivators while another 6 percent were agricultural labourers. This is understandable in a district where the rural share of the population is as low as 37 percent. Notably, the rural share of the population declined from about 37 percent in 1991 to 36 percent in 2001. Over the same period, cultivators declined from 29.73 percent of main workers to 21.06 percent, and agricultural labourers from 8.70 percent to 6.50 percent. The agricultural share of the workforce is thus declining in favour of non-agricultural activities.
Business headquarters
The city is the headquarters of both:
The Small Industries Development Bank of India (SIDBI) and
The Pradeshiya Industrial and Investment Corporation of Uttar Pradesh (PICUP).
The Regional office of the Uttar Pradesh State Industries Development Corporation (UPSIDC) is also located here.
Another business-promoting institution with a presence in Lucknow is the Confederation of Indian Industry (CII).
Manufacturing and processing
Among the bigger manufacturing units, Lucknow has:
Hindustan Aeronautics Limited
Tata Motors
Eveready Industries
The city's small-scale and medium-scale industrial units are located in the industrial enclaves of
Chinhat
Aishbagh
Talkatora
Amausi
Mohanlalganj
Arjunganj
Real Estate
Real estate is one of the booming sectors of Lucknow's economy. Property rates in Lucknow are among the fastest growing in almost all areas, owing to the redevelopment of several localities (such as Gomtinagar and Alambagh) by the present government.
Traditional trade
Traditionally, Lucknow has been a mandi town for mangoes, melons, and grains grown in the surrounding areas. Sugarcane-growing plantations and sugar industries are also in close proximity. This attracted Edward Dyer to set up a unit based on molasses in the city. Dyer Breweries was incorporated in 1855 and was Asia's first commercial brewery. The company was renamed Mohan Meakin Breweries in 1967 (the word "Breweries" was dropped in the eighties as the company diversified into other industries).
Lucknow is famous for its small-scale industries based on unique styles of embroidery, namely Chikan and Lakhnawi Zardozi, both of which are significant foreign-exchange earners. Chikan has caught the fancy of fashion designers in Bollywood and abroad; it is very popular in Indian markets and is in very high demand.
During the period of the Nawabs, kite-making reached a high level of artistry, and is still a small-scale industry.
Lucknow has also been an industrial producer of tobacco products like 'Kivam', edible fragrances like 'attars' and handicrafts such as pottery, earthen toys, silver and gold foil work, and bone carving products. The Khadi products of the Gandhi Ashram are also quite popular among the population.
Emerging businesses
Lucknow, with its excellent education, commercial, banking and legal infrastructure, is witnessing rapid growth in information technology, banking, retailing, construction and other service sectors. Signs of this growth include:
Commercial property
SEZs
Business centres
Multiplexes
Clubs
Banks
Food courts
Entertainment centres
Finance institutions
The Ministry of Communications and Information Technology set up a Software Technology Parks of India (STPI) centre in the city in 2001. Currently, biotechnology and information technology are the two focus areas for promoting economic development in and around the city. The Ministry of Science and Technology is setting up a biotech park in the city. Lucknow is also one of the cities selected for the Smart City project of STPI.
6842051 | https://en.wikipedia.org/wiki/Scientific%20Research%20Institute%20of%20System%20Development | Scientific Research Institute of System Development | Scientific Research Institute of System Analysis (abbreviated SRISA or NIISI RAS) is a Russian state research and development institution in the field of complex applications, established on the initiative of the Russian Academy of Sciences. The mission of the institute is to resolve complex applied problems on the basis of fundamental and applied mathematics in combination with the methods of practical computing. It was founded by Decree No. 1174 of the Presidium of the USSR Academy of Sciences on October 1, 1986.
Research fields
Main lines of activities:
research in the field of theoretical and applied problems on information security,
research in the field of automation of programming,
research in the field of creating computer models of the objects with complex geometry and topology for the open scalable system of parallel information processing,
research in the field of applied informatics.
Practical results of the institute are embedded into the developed architectures and very-large-scale integration devices, operating systems, real-time operating systems and microelectronics components.
Development
Microprocessors
The SRISA has designed several MIPS compatible CPUs for general purpose calculations. These include:
KOMDIV-32, a family of 32-bit microprocessors implementing the MIPS I instruction set architecture
KOMDIV-64, a family of 64-bit microprocessors implementing the MIPS IV instruction set architecture
Operating systems
Since 1998 the SRISA department of System Programming has developed several successive UNIX-like real-time operating systems (RTOS), including:
A POSIX 1003.1-compatible RTOS, developed since January 1998. Its network sockets were borrowed from FreeBSD; it supported the TCP/IP protocol suite and the X Window System, and it runs on the MIPS-based CPUs mentioned above.
A POSIX 1003.1- and ARINC 653-compatible RTOS, first exhibited at SofTool-2008, -2009, and -2010 in Moscow. It was a joint project between the Alt Linux and SRISA teams.
Notable people
Vladimir Betelin, academician, Scientific Supervisor
Israel Gelfand, academician, Chief Science Officer of SRISA
Vladimir Platonov, academician, Chief Science Officer of SRISA
Maksim Moshkow, employee, creator of the largest and the oldest Russian electronic library "Lib.ru"
External links
Official site of NIISI RAS
Institutes of the Russian Academy of Sciences
Research institutes in the Soviet Union
Computing in the Soviet Union
Computer science institutes |
1037251 | https://en.wikipedia.org/wiki/Nippon%20Ichi%20Software | Nippon Ichi Software | Nippon Ichi Software, Inc. is a Japanese video game developer and publisher. The company has developed a number of traditional role-playing video games, but its latest releases have primarily been tactical RPGs. It has made games such as the Disgaea and Marl Kingdom tactical role-playing series, as well as La Pucelle: Tactics, Phantom Brave, Makai Kingdom: Chronicles of the Sacred Tome, and Soul Nomad & the World Eaters. The company has also published anime. Its corporate mascot is the Disgaea character Prinny.
A North American branch of the company, NIS America, was founded in 2003 in order to focus on publishing and localizing games in North America. Until 2007, Nippon Ichi Software did not have a branch for PAL regions, resulting in the company partnering with multiple third-party developers and publishers, such as Koei, Square Enix, and Ubisoft. NIS America also distributes anime programs.
History
Nippon Ichi Software was founded in September 1991 in Gifu Prefecture, Japan, as an entertainment software company. It was relocated and reincorporated on July 12, 1993. Since then Nippon Ichi Software, Inc. has grown into an international company focused on creating video game products across modern console platforms. NIS America, Inc., the North American branch of Nippon Ichi Software was established on December 24, 2003. Based in Santa Ana, California, NIS America, Inc. handles the localization, marketing, and publishing of Nippon Ichi and other company's games. The first game to be released in North America by NIS America was Phantom Brave.
NIS America was the result of the company's wish to focus on international publication of their works in the United States, given the increased popularity of their games in this territory. Prior to the creation of this subsidiary, NIS games that appeared in the U.S. were localized and distributed by outside publishers. Notable examples are Disgaea: Hour of Darkness, published in North America by Atlus and in Europe by Koei, La Pucelle: Tactics, published in North America by Mastiff, and Rhapsody: A Musical Adventure which was also published in North America by Atlus. Over the years, NIS America has become a well-known presence in the North American market. The publisher won RPGLand's "Stateside Publisher of the Year Award" in 2009.
During 2008 and 2009, Nippon Ichi's financial results saw a huge drop. The company's January 2009 financial report recorded a drop of more than 97 percent in income compared to the previous year. Kotaku editor Brian Ashcraft reported that after the release of the financial figures, "The company's closing price on the JASDAQ (Japanese NASDAQ) was ¥36,200 on Friday. When the market opened this morning, and investors began reacting to Nippon Ichi's financial statement, the stock price ultimately dropped ¥7,000 (US$78) as the stock was unloaded."
In 2012, the company was awarded a Guinness World Record for having released the most strategy RPGs of any company.
In September 2007, Nippon Ichi Software announced the acquisition of video game developer System Prisma Corporation. In April 2010, it announced the formation of Shishikui-ya, a wholly owned subsidiary of NIS America. In December 2011, it announced the formation of Dione Entertainment (now Entertainment Support). On November 5, 2012, it announced the formation of Nippon Ichi Software Asia Pte. Ltd., a wholly owned subsidiary in Singapore. On April 1, 2015, the company announced the formation of Nippon Ichi Software Vietnam Co., Ltd., a Vietnamese subsidiary of Nippon Ichi Software Asia Pte. Ltd. In March 2016, Shishikui-ya was merged into Nippon Ichi Software. On April 22, 2016, Nippon Ichi Software announced the acquisition of adventure game developer FOG Inc. On October 1, 2016, System Prisma Corporation was merged into Nippon Ichi Software.
Games
The company has developed a number of traditional role-playing video games, but their latest releases have primarily been tactical RPG games. Many characters from previous games make their way into later games as secret characters, even though their respective games may not be directly connected. The Rhapsody series, Disgaea series, Phantom Brave, Makai Kingdom, and Soul Nomad are the games that are usually connected with each other through secret characters.
Other games of separate genres, such as Cooking Fighter Hao, have been published by the company though they have never been released outside Japan. The most popular games are the Disgaea games. All of these games have connections to each other, whether it be the same world, dimensional travel, same creatures, or the same villain, Baal.
Games developed and/or published by Nippon Ichi Software
Nippon Ichi Indie Spirits
On December 28, 2016, Nippon Ichi Software announced , a label under which the company localises western indie games for the Japanese market. The program kicked off with the simultaneous release of Nidhogg, Back to Bed, and Emily Wants to Play on February 9, 2017. All games that are published through this program get released only digitally through the PlayStation Store.
Games published by NIS America
Cancelled titles
Makai Wars (developer) - PlayStation 3, PlayStation Portable
Other media
Anime
Makai Senki Disgaea — 12 episodes total. Studio: Oriental Light and Magic
Releases in North America (NIS America)
External links
(NIS America)
Amusement companies of Japan
Anime companies
Japanese companies established in 1991
Video game companies established in 1991
Home video companies of the United States
Japanese brands
Video game companies of Japan
Video game development companies
Video game publishers
Companies based in Gifu Prefecture
Kakamigahara, Gifu |
24845611 | https://en.wikipedia.org/wiki/Release%20early%2C%20release%20often | Release early, release often | Release early, release often (also known as Ship Early, Ship Often, time-based releases, and sometimes abbreviated RERO) is a software development philosophy that emphasizes the importance of early and frequent releases in creating a tight feedback loop between developers and testers or users, contrary to a feature-based release strategy. Advocates argue that this allows the software development to progress faster, enables the user to help define what the software will become, better conforms to the users' requirements for the software,
and ultimately results in higher quality software. The development philosophy attempts to eliminate the risk of creating software that no one will use.
This philosophy was popularized by Eric S. Raymond in his 1997 essay The Cathedral and the Bazaar, where Raymond stated "Release early. Release often. And listen to your customers".
This philosophy was originally applied to the development of the Linux kernel and other open-source software, but has also been applied to closed source, commercial software development.
The alternative to the release early, release often philosophy is aiming to provide only polished, bug-free releases. Advocates of RERO question whether this in fact results in higher-quality releases.
See also
Worse is Better
Programming paradigm
Software development process
Agile Software Development
Minimum viable product
Vote early and vote often
External links
Release Early, Release Often, chapter of The Cathedral and the Bazaar by Eric S. Raymond
Software development philosophies
Software release |
12324273 | https://en.wikipedia.org/wiki/FTA%20receiver | FTA receiver | A free-to-air (FTA) receiver is a satellite television receiver designed to receive unencrypted broadcasts. Modern decoders are typically compliant with the MPEG-2/DVB-S and, more recently, the MPEG-4/DVB-S2 standards for digital television, while older FTA receivers relied on analog satellite transmissions, which have declined rapidly in recent years.
Uses
Mainstream broadcast programming
In some countries, it is common for mainstream broadcasters to broadcast their channels over satellite as FTA. Most notably, in the German-speaking countries, most of the main terrestrial broadcasters, such as ARD Das Erste and ZDF, offer FTA satellite broadcasts, as do some of the more recent satellite rivals such as Sat.1 and RTL. The satellites on which these channels broadcast, at Astra's 19.2° east position, are receivable throughout most of Europe.
In the UK, all five of the original terrestrial broadcasters, BBC One, BBC Two, ITV, Channel 4, and Five, broadcast FTA on digital satellite in some form.
Ethnic and religious programming
FTA receivers are sold in the United States and Canada for the purpose of viewing unencrypted free-to-air satellite channels, the bulk of which are located on Galaxy 19 (97°W, Ku band). There is also a substantial amount of Christian-based programming available on several satellites over both North America and Europe, such as The God Channel, JCTV, EWTN, and 3ABN.
Educational programming
The PBS Satellite Service offers educational programming on Ku band DVB from the AMC-21 satellite (125°W). As there is no standard MPEG audio on many of these channels, the AC3-only feeds require a Dolby Digital-capable receiver. They are otherwise free. Channels include PBS-HD/PBS-X as well as various secondary programmes normally carried on digital subchannels of PBS terrestrial member stations.
The main PBS New York feed is absent from the free-to-air version of the PBS satellite service to afford local terrestrial member stations a chance to broadcast material before it becomes available on PBS-X or PBS-HD. Typically, PBS-X feeds carried programmes (except news) a day later than the main terrestrial PBS network.
US terrestrial broadcasters
Many of these channels carried programming from major network television affiliates, although these are disappearing, particularly on Ku band.
Equity Broadcasting used one Ku band (Galaxy 18, 123°W) and one C-band satellite feed as a key part of its Equity C.A.S.H. centralcasting operation; many small UHF local stations were fed from one central point in Little Rock, Arkansas via free-to-air satellite. Most were members of secondary terrestrial networks, including both US English- and Spanish-language broadcasters, and content from satellite broadcasts often fed over-the-air digital subchannels of terrestrial stations. Programming such as the Retro Television Network or Retro Jams had been provided at various times; music video broadcasters Mas Música and The Tube were formerly available at 123°W before being taken over (Mas Música is now MTV3) or ceasing operations.
Similarly, unencrypted Ku band satellite television was also used temporarily in the aftermath of 2005's Hurricane Katrina as a means to feed NBC programming into New Orleans from the studios of an out-of-state broadcaster; the feeds contained the content, branding and station identification of the damaged New Orleans station in a form suitable for direct feed to a transmitter (with no further studio processing) in the target market.
Paradoxically, many Equity-owned local UHF stations obtained solid national satellite coverage despite small terrestrial LPTV footprints that barely covered their nominal home communities. In many cases, this brought smaller networks and Spanish-language broadcasting to communities which otherwise would have no free access to this content.
As television market statistics for these stations from firms such as Nielsen Media Research are based on counting viewership within the footprint of the corresponding terrestrial signal, television ratings severely underestimated or failed to estimate the number of households receiving programming such as Univision from FTA satellite feeds. The liquidation of Equity Broadcasting's station group in mid-2009 greatly reduced the number of US terrestrial stations available from Ku band free-to-air satellite; while a very small handful of uplinked terrestrial stations remain free (mostly on C-band, which requires a much larger antenna) these are from other, independent sources.
Rural and hobby use
Over-the-air digital TV signals do not reach very far outside the city in which they are transmitted. FTA receivers can be used in rural locations as a fairly reliable source of television without subscribing to cable or a major satellite provider.
Terrestrial broadcasters use some of the nearly 30 North American satellites to transmit their feeds for internal purposes. These unencrypted feeds can then be received by anyone with the proper decoder. Satellite signals are normally receivable well beyond the terrestrial station's coverage area. Enthusiasts also use FTA receivers to watch the numerous wildfeeds that are present on many of those satellites.
In theory, a viewer in Glendive, Montana (the smallest North American TV market) could have received what little local CBS and NBC programming is available terrestrially, alongside a Ku band free-to-air dish for additional commercial networks (such as individual ABC and Fox TV affiliates from Equity Broadcasting, formerly at 123°W) and educational programming (PBS Satellite Service at 125°W). There is no assurance that any individual FTA broadcast will remain available or that those which do remain will continue broadcast in a compatible format – in this example, such a viewer would have lost ABC and Fox in mid-2009 due to Equity's bankruptcy.
Signal piracy
Free-To-Air receivers generally use the same technology standards (such as DVB-S, MPEG-2) as those used by pay-TV networks such as Echostar's Dish Network and BCE's Bell Satellite TV. FTA receivers, however, lack the smartcard readers or decryption modules designed for the reception of pay-TV programming, since the receivers are designed only for reception of unencrypted transmissions.
On occasion, where a pay-TV service's encryption system has been very seriously compromised, to the extent that it can be emulated in software and without the presence of a valid access card, hackers have been able to reverse-engineer an FTA receiver's software and add the necessary emulation to allow unauthorized reception of pay TV channels. Manufacturers, importers, and distributors of FTA receivers officially do not condone this practice and some will not sell to or support individuals who they believe will be using their products for this purpose, use of third-party software usually voiding any warranties.
Unlike traditional methods of pirate decryption that involve altered smart cards used with satellite receivers manufactured and distributed by the provider, piracy involving FTA receivers requires only an update to the receiver's firmware. Electronic countermeasures that disable access cards may have little or no effect on FTA receivers, because these receivers are not capable of being updated remotely: their firmware cannot be overwritten with malicious code via satellite, as provider-issued receivers' firmware can be.
FTA receivers also have the advantage of being able to receive programming from multiple providers plus legitimate free-to-air DVB broadcasts which are not part of any package, a valuable capability which is conspicuously absent from most "package receivers" sold by DBS providers. DVB-S is an international standard and thus the industry-imposed restriction that a Bell TV receiver is not interchangeable with a Dish Network receiver (the same box) and neither are interchangeable with a GlobeCast World TV receiver (also DVB) is an artificial one created by providers and not respected by either pirates or legitimate unencrypted FTA viewers.
Periodically, a provider will change the processes in which its encryption information is sent. When this happens, illegitimate viewing is disrupted. Third-party coders may release an updated altered version of the FTA receiver software on internet forums, sometimes hours to days after the countermeasure is implemented, although some countermeasures have allowed the encryption to remain secure for several months or longer. The receivers, meanwhile, remain able to receive unencrypted DVB-S broadcasts and (for some HDTV models) terrestrial ATSC programming. The same is not true of standard subscription TV receivers, whereby unsubscribing from a pay-TV package causes loss of all channels.
The use of renewable security allows providers to send new smart cards to all subscribers as existing compromised encryption schemes (such as Nagravision 1 and 2) are replaced with new schemes (currently Nagravision 3). This "card swap" process can provide pay-TV operators with more effective control over pirate decryption, but at the expense of replacing smart cards in all existing subscribed receivers. While this approach is used by most providers, deployments tend to be slowed due to cost.
While smart-card piracy often involves individuals who re-program access cards for others (usually for a price), piracy using FTA receivers involves third-party software that is relatively easy to upload to the receiver and can even be uploaded using a USB device, network, or serial link (a process called "flashing"). Most such firmware is distributed freely on the Internet. Websites that third-party coders use to share this software often have anywhere from 50,000 to over 200,000 registered users.
Another method of pirate decryption that has become popular recently is known as Internet Key Sharing (IKS). This is accomplished by an Ethernet cable hooked to the receiver that allows updated decryption keys to be fed to the unit directly from the internet. The DVB-S common scrambling system and the various conditional access systems are based on the use of a legitimately subscribed smart card which generates a continuous stream of cryptographic keys usable to decrypt one channel on a receiver. A key-sharing scheme operates by redistributing these keys in real time to multiple receivers in an unlimited number of locations, so that one valid smartcard may serve almost 10,000 viewers.
For a time, this was the only active pirate decryption system still in widespread use in North American satellite TV, owing to the shutdown of the compromised Nagravision 2 system by providers such as Dish Network and Bell TV.
However, key sharing is limited by the renewal interval of the key stream; the keys are known as control words (CWs). Usually the CWs are renewed about every 10 seconds, although some systems (e.g., NDS3) have CW intervals of 5 seconds or less. Each channel usually has a different set of CWs, so each box currently watching a specific channel must periodically request the current CWs for that channel from the server/smart-card. The sharing capacity of one card is therefore arguably not unlimited: it is constrained by the frequency of CW changes and by network latency. If the CWs do not arrive in time, the picture can freeze or break up.
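As a rough illustration of why the CW interval and network latency matter, the toy Python simulation below models a client that must fetch each new control word over a network before the current crypto-period expires. All names and the one-second request lead are hypothetical; only the roughly ten-second CW interval comes from the description above.

```python
import random

CW_INTERVAL_S = 10.0   # approximate control-word renewal interval (per the text)
REQUEST_LEAD_S = 1.0   # hypothetical: client requests the next CW 1 s early

def freeze_fraction(num_periods: int, mean_rtt_s: float) -> float:
    """Fraction of crypto-periods in which the new CW arrives too late,
    which a viewer would experience as a frozen or broken picture."""
    misses = 0
    for _ in range(num_periods):
        # Jittered round-trip time to the remote key server.
        rtt = max(0.0, random.gauss(mean_rtt_s, mean_rtt_s / 4))
        if rtt > REQUEST_LEAD_S:
            misses += 1
    return misses / num_periods

# One hour of viewing (360 ten-second periods) at two latency levels:
for rtt in (0.2, 1.2):
    print(f"mean RTT {rtt:.1f} s -> freeze fraction {freeze_fraction(360, rtt):.2f}")
```

In this toy model a fast link almost never misses a changeover, while a mean round-trip time beyond the request lead misses most of them, matching the freezes described above.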
There are, of course, other more costly possibilities, such as operating several legitimately subscribed cards, each handling a few channels, together with a CW caching server.
The dependence on an external server also compromises privacy for individual viewers, as well as rendering the system incompatible with many receiver models which lack the ability to connect to an outside network and/or lack the ability to set or modify the various keys or identifiers used in communication between the card and receiver.
Common features
Installation menu
This is the main control panel that allows the user to configure the receiver to interact with LNBs, switches, motors, and other equipment. The user selects the LNB type, local oscillator frequency, appropriate DiSEqC switch port, and motor configuration. If all the settings are correct for the appropriate equipment, a signal bar showing strength and quality will appear. At that point, the receiver can be used to scan the satellite to detect channels.
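As a sketch of what these settings amount to internally, the hypothetical Python fragment below models an LNB configuration and the frequency conversion it implies. The class and function names are invented for illustration, but the 9750/10600 MHz local-oscillator values and the 22 kHz band switch are the standard parameters of a European-style universal Ku-band LNB.

```python
from dataclasses import dataclass

@dataclass
class LnbConfig:
    lo_low_mhz: int = 9750     # local oscillator, low band
    lo_high_mhz: int = 10600   # local oscillator, high band (22 kHz tone on)
    diseqc_port: int = 0       # DiSEqC switch port this LNB sits behind
    motorized: bool = False    # whether a DiSEqC motor is configured

def intermediate_frequency(downlink_mhz: int, cfg: LnbConfig) -> int:
    """Map a Ku-band downlink frequency to the L-band IF the tuner sees."""
    high_band = downlink_mhz >= 11700  # high band begins around 11.7 GHz
    lo = cfg.lo_high_mhz if high_band else cfg.lo_low_mhz
    return downlink_mhz - lo

# A 12177 MHz transponder reaches the tuner as a 1577 MHz IF signal.
print(intermediate_frequency(12177, LnbConfig()))
```

If the local oscillator frequency is set wrong in the menu, this subtraction lands on the wrong intermediate frequency and no signal bar appears, which is why the LNB type is the first setting to check.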
Blind scan
There are 63 satellites in orbit over the Americas, 57 over Europe and a further 64 over Asia, a significant number of which will be receivable from any one location. Each of these has a different number of active transponders. Each transponder operates at a different frequency and symbol rate. Many FTA receivers are designed to detect any active transponders and any channels on those transponders. Because they are designed to do this without needing to be pre-programmed with the transponder information for each satellite, this process is referred to as a "blind" scan—as opposed to a satellite scan, which scans according to pre-set transponder values.
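Conceptually, a blind scan is a sweep over frequencies, polarizations, and candidate symbol rates, as in the simplified sketch below. Real receivers estimate the symbol rate from the signal itself rather than trying a fixed list, and the tuner object and its try_lock method here are purely hypothetical.

```python
# Hypothetical tuner API: try_lock() returns a dict describing the transponder
# (including "bandwidth_mhz") on a successful lock, or None otherwise.
CANDIDATE_SYMBOL_RATES = [22000, 27500, 30000]  # ksym/s; common values, not exhaustive

def blind_scan(tuner, start_mhz=10700, stop_mhz=12750, step_mhz=4):
    """Sweep the Ku band with no preset transponder list."""
    found = []
    for polarization in ("H", "V"):
        freq = start_mhz
        while freq <= stop_mhz:
            locked = None
            for sr in CANDIDATE_SYMBOL_RATES:
                locked = tuner.try_lock(freq, sr, polarization)
                if locked:
                    found.append(locked)
                    break
            # After a lock, jump past the transponder's bandwidth; otherwise step on.
            freq += locked["bandwidth_mhz"] if locked else step_mhz
    return found
```

A channel scan of each transponder in the returned list then recovers the individual services, which is the second half of what the receiver's blind-scan function does.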
Channel edit/sort
Once a scan is complete, the channels can often be sorted alphabetically, in satellite/transponder order, or in scrambled/unscrambled order. Additionally, third-party software often allows the option of sorting by the channel's Station Identification (SID) number. This is so that the individual channels can be numbered in a way that mimics the lineup of Dish Network or Bell TV. Channels can also be renamed or deleted, either in an on-screen menu or with external software.
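A minimal sketch of such channel-list handling, assuming a simple in-memory representation (the Channel type and the sample entries are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    sid: int        # Station Identification number carried in the stream
    scrambled: bool

channels = [
    Channel("News 24", 8204, False),
    Channel("Movie+", 110, True),
    Channel("Docs HD", 305, False),
]

by_name = sorted(channels, key=lambda c: c.name.lower())   # alphabetical
by_sid = sorted(channels, key=lambda c: c.sid)             # provider-style numbering
free_first = sorted(channels, key=lambda c: c.scrambled)   # unscrambled before scrambled

print([c.name for c in by_sid])  # -> ['Movie+', 'Docs HD', 'News 24']
```

Sorting by SID is what lets a third-party list mirror a pay-TV provider's channel numbering.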
The most popular software used to configure and sort channels was a database program called Channel Master, which allowed the user to name, number, sort, and delete channels, then save them in a format that could be written back to the receiver. The resulting file containing the channel information is called a channel list. This channel editor application is not affiliated with the similarly named antenna manufacturer and appears to have last been updated in 2008. Many older and discontinued receiver models are supported in Channel Master, though most newer and less popular ones are not.
User settings
Most FTA receivers give the user the option of configuring the language, aspect ratio, TV type (NTSC/PAL), and time settings.
Most FTA receivers can accept an MPEG-2 video stream in either PAL-compatible (540/704/720 x 576) or NTSC-compatible (640 x 480) image formats and convert it for display on either a PAL or NTSC monitor. There is some loss of image data due to NTSC's lower resolution. Some receivers also support output to SCART, S-Video, HDMI or component video.
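The loss can be quantified with a quick pixel count, assuming a full 720 x 576 PAL frame is converted to 640 x 480 for NTSC display:

```python
pal_w, pal_h = 720, 576     # full-resolution PAL frame
ntsc_w, ntsc_h = 640, 480   # NTSC-compatible frame

pal_pixels = pal_w * pal_h      # 414,720
ntsc_pixels = ntsc_w * ntsc_h   # 307,200
loss = 1 - ntsc_pixels / pal_pixels
print(f"{loss:.0%} of the pixel data is discarded")  # -> 26%
```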
Parental control
All FTA receivers contain a parental lock feature.
DiSEqC switch and motor control
Unlike package receivers promoted for use with a limited number of satellites controlled by an individual pay-TV provider, an FTA receiver is designed to receive any free signals from all satellites visible at a given location. To fully exploit this capability, most Ku-band FTA receivers can control a DiSEqC motor, which rotates a single dish to aim at any one of many satellites.
An alternative approach, pointing a fixed dish (or LNB) at each satellite to be received and feeding the individual signals into a remotely controlled switch, is compatible both with standard FTA receivers and with the more restricted pay-TV "package receiver". The most common standard for use with FTA receivers is the DiSEqC switch, which normally allows automatic selection of the signal from four satellites. A simpler two-position remote switch operated by a 22 kHz tone is also occasionally used for North American reception, but this configuration is not compatible with European-style universal LNBs, which use the tone internally for band switching.
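For a sense of what such switch control involves, the sketch below composes the four-byte DiSEqC 1.0 "committed switch" message a receiver sends to select one of the four ports. The byte values follow the published DiSEqC framing convention, but this is an illustration rather than driver code: a real receiver modulates the bytes onto the 22 kHz tone on the coaxial cable, which is out of scope here.

```python
def diseqc_committed(port: int, horizontal: bool, high_band: bool) -> bytes:
    """Compose a DiSEqC 1.0 'committed switch' message selecting one of
    four LNB ports."""
    assert 0 <= port <= 3
    framing = 0xE0  # command from master, first transmission, no reply wanted
    address = 0x10  # any LNB, switcher or SMATV device
    command = 0x38  # write to port group N0 (committed switches)
    data = 0xF0 | (port << 2) | (0x02 if horizontal else 0x00) | (0x01 if high_band else 0x00)
    return bytes([framing, address, command, data])

# Select port 2, vertical polarisation, low band:
print(diseqc_committed(2, horizontal=False, high_band=False).hex())  # -> 'e01038f8'
```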
A toroidal antenna may be used with multiple LNBs to receive multiple satellites across a 40° arc. Unlike the single parabola of a standard satellite dish antenna (which is best suited to focusing one target satellite onto a single point), the toroidal antenna uses a reflector pair to focus multiple signals onto a line.
Individual adjacent or near-adjacent pairs (such as Glorystar on 97°W and 101°W) may be received with two LNBs on what is otherwise geometrically a standard parabolic dish. The outputs from these individual LNBs may then be fed through a switch to a receiver, providing access to all signals on both satellites.
Electronic program guide
An on-screen program schedule can be accessed that also contains descriptive information about a selected program. The availability and quality of program guide information varies widely between broadcasters (some provide nothing), and the ability of receivers to collect and store guide listings from multiple sources is equally variable. Receivers with more memory (or storage on external devices such as hard drives) are often, but not always, better equipped to store and retrieve on-screen program listings. In some cases, a receiver with both satellite and terrestrial tuners will provide on-screen guide support for one mode of operation but not both.
PVR functions
A few high-end receivers can record programs and pause and review live TV. Often a hard drive is not included at purchase, which lets the user install a hard disk drive of their choice. Many newer units are equipped with a USB 2.0 port for connecting a portable hard drive; at least one unit (the Pansat 9200HD) uses external SATA as PVR media storage.
Some receivers, such as the TripleDragon or Dream Multimedia's Linux-based Dreambox series, provide local area network interfaces. This allows network-attached storage to provide PVR-like functions (some of these models also include internal hard drives or USB ports) and allows the unit to be controlled or updated over the network.
Desktop personal computer cards providing DVB-S or terrestrial digital television tuners allow the computer's hard drive and network storage to be used to archive electronic program guide information and recorded television programming. Most or all of the basic PVR functionality then becomes available by default at little or no added cost.
MPEG4 and 4:2:2
Most standard FTA receivers support DVB-S, MPEG-2, 480i or 576i SDTV received as unencrypted QPSK from Ku band satellites.
Rarely supported by stand-alone FTA receivers, but likely to be supported by FTA DVB-S tuners for personal computers, are MPEG-4 and MPEG-2 4:2:2, variants of the MPEG compression scheme that provide greater compression and greater colour resolution, respectively. As personal computers handle much of the video decompression in software, any codec can easily be substituted on the desktop.
High-definition television is also beginning to be supported by a limited number of high-end receivers; at least one high-end stand-alone receiver (the Quali-TV 1080IR) supports both 4:2:2 and HDTV.
4:2:2 is a version of MPEG-2 compression used in network feeds such as NBC on Ku band (103°W). Some broadcast networks use 4:2:2 encoding for otherwise-unencrypted transmission of sports events to local terrestrial stations, as it provides slightly better colour than the standard 4:2:0 compression.
In some cases support for additional standards (such as DVB-S2, MPEG-4 and 8PSK) will also become necessary to receive a viewable signal. The use of newer means of modulation and compression is likely to become more widespread for high-definition television feeds, to partially offset the larger amount of transponder space required to deliver high-definition video to television stations.
Terrestrial DTV
In countries using the DVB-T and DVB-C standards for terrestrial digital television and digital cable, a few higher-end receivers provide an option to install terrestrial DVB tuners either alongside or in place of the stock DVB-S tuner. Dream Multimedia's DreamBox series, for instance, supports this in a few selected models.
In countries using ATSC, inclusion of terrestrial tuners in DVB-S FTA receivers is rare, with one key exception. Some HDTV FTA receivers incorporate terrestrial ATSC tuners. These typically do not support ATSC's unique major.minor digital subchannel numbering scheme or the on-screen program guide but are capable of displaying (or timeshifting) local HDTV with no loss in detail. Channels from these receivers are numbered using FTA conventions, by which the first channel found is most often arbitrarily given channel 1 as its virtual channel number.
HDTV
A few high-end receivers feature HDTV. In North America, these often include an ATSC over-the-air digital television tuner and MPEG-4 support. A few HDTV units allow the addition of a UHF remote control module; an 8PSK module can be installed in its place instead, allowing the receiver to decode the format used for most Dish Network high-definition programming.
These units are superior to DVD recorders for time-shifting HDTV programming, as most DVD units down-convert OTA HDTV signals to standard-definition to match the limitations of the DVD standard. An HDTV FTA receiver with ATSC capability and USB storage can record one channel from a terrestrial or satellite DTV transport stream entirely losslessly, although the on-screen guide for terrestrial reception is often limited and viewing or storage of analog NTSC channels is not supported.
Controversy
Availability of free programming
While significant amounts of programming remain free, there is no assurance to viewers that any individual broadcast currently available free-to-air will remain so. Some will inevitably move to incompatible signal formats (such as MPEG 4:2:2, 8PSK, DVB-S2, or MPEG-4), change from free to encrypted, move to different satellite locations (often across bands, where C band reception requires much larger antennae) or shut down entirely.
Many of the signals are backhaul or "wildfeed" video destined for individual stations, or feeds to terrestrial transmitters programmed remotely. These were not intentionally created as direct satellite broadcasts to home viewers, but were often left unencrypted (in the clear) on the assumption that few people were watching. As free-to-air receivers became inexpensive and widely deployed in the 2000s, many of these feeds moved to C band (requiring a much larger dish), were encrypted, or changed to incompatible modulation or encoding standards requiring more advanced receivers, even though the corresponding terrestrial television broadcast may still be free-to-air in its home community.
The onus is on receiver vendors, whenever they use lists of currently available FTA programming for marketing purposes, to indicate that free channels may appear, move, and disappear, often permanently and without advance notice. One North American example was Equity Broadcasting, once a major source of small local terrestrial stations on free satellite television. Equity filed for Chapter 11 bankruptcy on 9 December 2008, and most of its terrestrial stations were sold at auction in mid-2009. As many of the stations (such as New York state's WNGS and WNYI) were sold to Daystar and now originate nothing, the corresponding unique free-to-air signals (Galaxy 18, 123°W) are no longer available. Even where a signal still exists, an incompatible signal format, such as that of the NBC feeds (AMC 1 at 103°W, which now require 8PSK, DVB-S2 and HDTV support), can remove a channel from virtually all standard FTA receivers.
Receiver obsolescence
Many receivers will provide options for hardware expansion (such as to add 8PSK reception or DVB Common Interface TV subscription cards) and firmware upgrade (either officially or from nominally third-party sources). Most often, once the individual receiver model is discontinued, this support and expandability rapidly disappears from all sources. The migration of existing feeds to formats such as MPEG-4, HDTV, or DVB-S2 (which many current receivers do not support) may also result in viewers losing existing free programming as equipment becomes rapidly obsolete. Unlike digital terrestrial set-top boxes, most standard-definition DVB-S receivers do not down-convert HD programming and thus produce no usable video for these signals.
There have also been incidents where existing receiver designs have been "cloned" or copied by competing manufacturers; a manufacturer will often reduce support for a widely copied receiver design. In some cases, malware has been released, ostensibly in the same format as existing third-party firmware, in an attempt to interfere with the further use of a widely cloned receiver's design.
Legal issues
FTA receivers are ostensibly designed for free-to-air use but can be adapted for other purposes. In some jurisdictions this dual-use nature causes problems, and combating piracy involving FTA receivers through legal means has accordingly been difficult.
Popular brands
Peripheral equipment
Satellite dish
LNB
DiSEqC
Universal Satellites Automatic Location System
See also
Free-to-air satellite television
PBS Satellite Service formerly on AMC-3 (87°W), now on AMC-21 (125°W)
GlobeCast World TV (former television service) and ethnic television on Galaxy 19 (97°W)
Glorystar and religious broadcasting on Galaxy 19 (97°W)
Equity Broadcasting formerly on Galaxy 18 (123°W), now defunct
Retro Television Network on AMC-9 (83°W, C-Band)
White Springs Television formerly on Galaxy 27 (129°W), no longer on satellite due to uplink failure
Bell Satellite TV, Dish Network and pirate decryption issues surrounding these systems
References
External links
Free to air TV channels
Free to air radio stations
Channel Master editor (archived, last update October 2008)
Channel finder for Astra satellites
Set-top box
Television technology |
12075392 | https://en.wikipedia.org/wiki/Computer%20compatibility | Computer compatibility | A family of computer models is said to be compatible if certain software that runs on one of the models can also be run on all other models of the family. The computer models may differ in performance, reliability or some other characteristic. These differences may affect the outcome of the running of the software.
Software compatibility
Software compatibility can refer to the compatibility that particular software has with a particular CPU architecture, such as Intel or PowerPC. It can also refer to the ability of the software to run on a particular operating system. Compiled software is rarely compatible with multiple different CPU architectures; normally, an application is compiled separately for each CPU architecture and operating system on which it is to run. Interpreted software, by contrast, can normally run on many different CPU architectures and operating systems, provided the interpreter is available for them. Software incompatibility often arises with software written for a newer version of an operating system, which may be incompatible with older versions that lack features and functionality the software depends on.
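As a small illustration of the interpreted case, the Python script below runs unchanged on any platform with an interpreter and simply reports, at run time, the CPU architecture and operating system it finds itself on; the config-directory paths are arbitrary examples of OS-specific details resolved at run time rather than at compile time.

```python
import platform
import sys

print(f"CPU architecture: {platform.machine()}")  # e.g. 'x86_64' or 'arm64'
print(f"Operating system: {sys.platform}")        # e.g. 'linux', 'darwin', 'win32'

# OS-specific details are resolved when the script runs, not when it is built.
if sys.platform.startswith("win"):
    config_dir = "%APPDATA%\\myapp"   # hypothetical Windows location
else:
    config_dir = "~/.config/myapp"    # hypothetical Unix-like location
print(f"Config would live in: {config_dir}")
```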
Hardware compatibility
Hardware compatibility can refer to the compatibility of computer hardware components with a particular CPU architecture, bus, motherboard or operating system. Hardware that is compatible may not always run at its highest stated performance, but it can nevertheless work with legacy components. An example is RAM chips, some of which can run at a lower (or sometimes higher) clock rate than rated. Hardware that was designed for one operating system may not work for another, if device or kernel drivers are unavailable. As an example, much of the hardware for macOS is proprietary hardware with drivers unavailable for use in operating systems such as Linux.
Free and open-source software
See also
Binary-code compatibility
Compatibility layer
Interchangeability
Forward compatibility
Backward compatibility
Cross-platform
Emulator
List of computer standards
Portability
Plug compatible
Hardware security
References
Interoperability
Computer hardware
Software |
24419 | https://en.wikipedia.org/wiki/Peter%20Carey%20%28novelist%29 | Peter Carey (novelist) | Peter Philip Carey AO (born 7 May 1943) is an Australian novelist. Carey has won the Miles Franklin Award three times and is frequently named as Australia's next contender for the Nobel Prize in Literature. Carey is one of only five writers to have won the Booker Prize twice—the others being J. G. Farrell, J. M. Coetzee, Hilary Mantel and Margaret Atwood. Carey won his first Booker Prize in 1988 for Oscar and Lucinda, and won for the second time in 2001 with True History of the Kelly Gang. In May 2008 he was nominated for the Best of the Booker Prize.
In addition to writing fiction, he collaborated on the screenplay of the film Until the End of the World with Wim Wenders and is executive director of the Master of Fine Arts in Creative Writing program at Hunter College, part of the City University of New York.
Early life and career: 1943–1970
Peter Carey was born in Bacchus Marsh, Victoria, in 1943. His parents ran a General Motors dealership, Carey Motors. He attended Bacchus Marsh State School from 1948 to 1953, then boarded at Geelong Grammar School between 1954 and 1960. In 1961, Carey enrolled in a science degree at the new Monash University in Melbourne, majoring in chemistry and zoology, but cut his studies short because of a car accident and a lack of interest. It was at university that he met his first wife, Leigh Weetman, who was studying German and philosophy, and who also dropped out.
In 1962, he began to work in advertising. He was employed by various Melbourne agencies between 1962 and 1967, including on campaigns for Volkswagen and Lindeman's Wine. His advertising work brought him into contact with older writers who introduced him to recent European and American fiction: "I didn't really start getting an education until I worked in advertising with people like Barry Oakley and Morris Lurie—and Bruce Petty had an office next door."
During this time, he read widely, particularly the works of Samuel Beckett, William Faulkner, James Joyce, Franz Kafka, and Gabriel García Márquez, and began writing on his own, receiving his first rejection slip in 1964, the same year he married Weetman. Over the next few years he wrote five novels—Contacts (1964–1965), Starts Here, Ends Here (1965–1967), The Futility Machine (1966–1967), Wog (1969), and Adventures on Board the Marie [sic] Celeste (1971). None of them were published. Sun Books accepted The Futility Machine but did not proceed with publication, and Adventures on Board the Marie Celeste was accepted by Outback Press before being withdrawn by Carey himself. These and other unpublished manuscripts from the period—including twenty-one short stories—are now held by the Fryer Library at the University of Queensland.
Carey's only publications during the 1960s were "Contacts" (a short extract from the unpublished novel of the same name, in Under Twenty-Five: An Anthology, 1966) and "She Wakes" (a short story, in Australian Letters, 1967). Towards the end of the decade, Carey and Weetman abandoned Australia with "a certain degree of self-hatred", travelling through Europe and Iran before settling in London in 1968, where Carey continued to write highly regarded advertising copy and unpublished fiction.
Middle career: 1970–1990
Returning to Australia in 1970, Carey once again did advertising work in Melbourne and Sydney. He also kept writing, and gradually broke through with editors, publishing short stories in magazines and newspapers such as Meanjin and Nation Review. Most of these were collected in his first book, The Fat Man in History, which appeared in 1974. In the same year Carey moved to Balmain in Sydney to work for Grey Advertising.
In 1976, Carey moved to Queensland and joined an alternative community named Starlight in Yandina, north of Brisbane, with his new partner, the painter Margot Hutcheson, with whom he lived in the 1970s and 1980s. He remained with Grey, writing in Yandina for three weeks, then spending the fourth week at the agency in Sydney. It was during this time that he produced most of the stories collected in War Crimes (1979), as well as Bliss (1981), his first published novel.
Carey started his own advertising agency in 1980, the Sydney-based McSpedden Carey Advertising Consultants, in partnership with Bani McSpedden. After many years of separation, Leigh Weetman asked for a divorce in 1980 so that she could remarry, and Peter agreed. In 1981, he moved to Bellingen in northern New South Wales. There he wrote Illywhacker, published in 1985. In the same year he married theatre director Alison Summers. Illusion, a stage musical Carey wrote with Mike Mullins and composer Martin Armiger, was performed at the 1986 Adelaide Festival of the Arts, and a studio cast recording of the musical was nominated for a 1987 ARIA Award, with Carey nominated as lyricist.
The decade, and the Australian phase of Carey's career, culminated with the publication of Oscar and Lucinda (1988), which won the Booker McConnell Prize (as it was then known) and brought the author international recognition. Carey explained that the novel was inspired, in part, by his time in Bellingen.
Move to New York: 1990–present
Carey sold his share of McSpedden Carey and in 1990 moved with Alison Summers and their son to New York, where he took a job teaching creative writing at New York University. He later said that New York would not have been his first choice of place to live, and that moving there was his wife's idea. Carey and Summers divorced in 2005 after a four-year separation. Carey is now married to the British-born publisher Frances Coady.
The Tax Inspector (1991), begun in Australia, was the first book he completed in the United States. It was followed by The Unusual Life of Tristan Smith (1994), a fable in which he explored the relationship between Australia and America, disguised in the novel as "Efica" and "Voorstand". This is a relationship that has preoccupied him throughout his career, going back to Bliss (1981), Illywhacker (1985), and the early short stories. Nevertheless, Carey continued to set his fiction primarily in Australia and remained diffident about writing explicitly on American themes, as Mel Gussow reported in a piece on True History of the Kelly Gang (2001).
It was only after nearly two decades in the United States that he embarked on Parrot and Olivier in America (2010), loosely based on events in the life of Alexis de Tocqueville. Carey says "Tocqueville opened a door I could enter. I saw the present in the past. It was accessible, imaginable." Carey continues to extend his canvas; in his novel, The Chemistry of Tears (2012), "contemporary London is brought intimately in touch with ... a 19th-century Germany redolent of the Brothers Grimm".
Controversies
In 1998, Carey was accused of snubbing Queen Elizabeth II by declining an invitation to meet her after winning the Commonwealth Writers Prize for Jack Maggs (1997). While Carey is a republican, in the Australian sense, he insisted that no offence was intended.
The meeting did eventually take place, with the Queen remarking, according to Carey, "I believe you had a little trouble getting here."
The unhappy circumstances of Carey's break-up with Alison Summers received publicity (largely in Australia) in 2006 when Theft: A Love Story appeared, depicting the toxic relationship between its protagonist, Butcher Bones, and his ex-wife, known only as "the Plaintiff".
In April 2015 he, alongside Michael Ondaatje, Francine Prose, Teju Cole, Rachel Kushner and Taiye Selasi, withdrew from the PEN American Center gala honouring the French satirical magazine Charlie Hebdo with its "Freedom of Expression Courage" award. He stated that one of his reasons for doing so was "PEN’s seeming blindness to the cultural arrogance of the French nation, which does not recognise its moral obligation to a large and disempowered segment of their population." In addition, 204 PEN members, including Teju Cole and Deborah Eisenberg, wrote to PEN, objecting to its decision to give the award to Charlie Hebdo.
Awards and distinctions
Carey has been awarded three honorary degrees. He has been elected a Fellow of the Royal Society of Literature (1989), an Honorary Fellow of the Australian Academy of the Humanities (2001), a Member of the American Academy of Arts and Sciences (2003), and a Member of the American Academy of Arts and Letters (2016), which has also awarded him its Harold D. Vursell Memorial Award (2012). In 2010, he appeared on two Australian postage stamps in a series dedicated to "Australian Legends". On 11 June 2012, Carey was named an Officer of the Order of Australia for "distinguished service to literature as a novelist, through international promotion of the Australian identity, as a mentor to emerging writers." In 2014, Carey was awarded an honorary Doctor of Letters (honoris causa) by Sydney University.
Carey has won numerous literary awards, including:
Bibliography
Novels
Bliss (1981)
Illywhacker (1985)
Oscar and Lucinda (1988)
The Tax Inspector (1991)
The Unusual Life of Tristan Smith (1994)
Jack Maggs (1997)
True History of the Kelly Gang (2000)
My Life as a Fake (2003)
Theft: A Love Story (2006)
His Illegal Self (2008)
Parrot and Olivier in America (2010)
The Chemistry of Tears (2012)
Amnesia (2014)
A Long Way From Home (2017)
Short story collections
The Fat Man in History (1974)
"Crabs"
"Peeling"
"She Wakes"
"Life and Death in the Southside Pavilion"
"Room No. 5 (Escribo)"
"Happy Story"
"A Windmill in the West"
"Withdrawal"
"Report on the Shadow Industry"
"Conversations with Unicorns"
"American Dreams"
"The Fat Man in History"
War Crimes (1979)
"The Journey of a Lifetime"
"Do You Love Me?"
"The Uses of Williamson Wood"
"The Last Days of a Famous Mime"
"A Schoolboy Prank"
"The Chance"
"Fragrance of Roses"
"The Puzzling Nature of Blue"
"Ultra-Violet Light"
"Kristu-Du"
"He Found Her in Late Summer"
"Exotic Pleasures"
"War Crimes"
Stories from Carey's first two collections have been repackaged in The Fat Man in History and Other Stories (1980), Exotic Pleasures (1990), and Collected Stories (1994); the last also includes three previously uncollected stories: "Joe" (Australian New Writing, 1973), "A Million Dollars Worth of Amphetamines" (Nation Review, 1975), and "Concerning the Greek Tyrant" (The Tabloid Story Pocket Book, 1978).
Uncollected short stories
"Contacts" (Under Twenty-Five: An Anthology, 1966)
"Eight Parts of a Whole" (Manic Magazine, 1970)
"Interview with Yourself" (Manic Magazine, 1970)
"Structure" (Manic Magazine, 1970)
"I Know You Can Talk" (Stand Magazine, 1975)
"The Mad Puzzle King" (Living Daylights, 1975)
"The Rose" (Nation Review, 1976)
"The Cosmic Pragmatist" (Nation Review, 1977)
"The Pleasure Bird" (Australian Playboy, 1979)
"An Abandoned Chapter" (Overland, 1997)
Contributed chapters
"A small memorial" In: Stories of Manhood: Journeys into the Hidden Hearts of Men edited by Steve Biddulph (2009)
Juvenile fiction
The Big Bazoohley: A Story for Children (1995)
Non-fiction
A Letter to Our Son (1994)
30 Days in Sydney: A Wildly Distorted Account (2001)
Letter from New York (2001)
Wrong about Japan (2005)
Screenplays
Bliss (1985, with Ray Lawrence)
Until the End of the World (1991, with Wim Wenders)
Stage
Illusion (1986, with Mike Mullins and Martin Armiger)
Adaptations
Dead End Drive-In (1986, adapted from his short story "Crabs" by Peter Smalley)
Oscar and Lucinda (1997, adapted from his novel by Laura Jones)
True History of the Kelly Gang (2019, adapted from his novel by Shaun Grant)
Notes
External links
MFA Creative Writing, Hunter College, City University of New York
Peter Carey at Random House Australia
Guide to the Papers of Peter Carey at the National Library of Australia
Guide to the Peter Carey Papers at Fryer Library, The University of Queensland
Peter Carey Website maintained by Rebecca J. Vaughan and hosted by Flinders University
The Literary Encyclopedia: Peter Carey
Internet book List: Peter Carey
Peter Carey on Bookworm Radio
Peter Carey interviewed in Melbourne about Parrot & Olivier
The Art of Fiction No. 188: Peter Carey, an interview in the Paris Review (Summer 2006).
Peter Carey's favourite books
1943 births
Living people
20th-century Australian male writers
20th-century Australian novelists
20th-century Australian short story writers
21st-century Australian male writers
21st-century Australian novelists
21st-century Australian short story writers
Advertising people
Australian expatriates in the United States
Australian male novelists
Australian male short story writers
Australian republicans
Booker Prize winners
Copywriters
Fellows of the Royal Society of Literature
Granta people
Hunter College faculty
Miles Franklin Award winners
Officers of the Order of Australia
People educated at Geelong Grammar School
People from Bacchus Marsh
Writers from Victoria (Australia) |
66209970 | https://en.wikipedia.org/wiki/Leopold%20Fleischhacker | Leopold Fleischhacker | Leopold Fleischhacker (13 May 1882, Felsberg - 11 September 1946, Uccle) was a German sculptor and medallist of Jewish ancestry.
Life and work
From 1897 to 1902, he attended the Kunstgewerbeschule Düsseldorf. In his spare time, he took lessons at stucco workshops and qualified as a journeyman plasterer. From 1903 to 1905, he attended the Prussian Academy of Arts, where his primary instructors were Ernst Herter and Peter Breuer. In 1905, he was awarded a scholarship by the Michael Beer Foundation that enabled him to spend eight months in Rome at the Villa Strohl-Fern.
In 1906, he returned to Düsseldorf, serving as an assistant in the studio of August Bauer. After 1909, he was a regular participant in the exhibitions of the artists' association in Barmen. In 1912, he opened his first studio in a room at the home of a decorative painter named August Blumenberg, where another sculptor also had a studio. In 1919, he had a showing at the first exhibition of a new artists' association called Young Rhineland and moved into his own private studio.
He presented entries in competitions for several war memorials, including Elberfeld (1921) and Düren (1928). For the southern wall of the synagogue in Düsseldorf, he designed a memorial in honor of its fallen community members. In 1926, he created reliefs for the "Hygiene of the Jews" pavilion at the "GeSoLei" trade fair. They were later displayed at the headquarters of the Jewish Community Center, but were destroyed in 1938.
Until 1931, he was a member of Malkasten, a progressive artists' association. In 1933, as a Jew, he was denied membership by the Reich Chamber of Culture, effectively banning him from his profession. From that point on, he was only able to work for Jewish customers, and he created mostly funerary art.
In 1938, during the pogrom known as "Kristallnacht", his studio in the Flingern District was destroyed. His wife, Lotte, fled to live with friends in Cologne. Eventually, they both managed to emigrate to Belgium and settled in Uccle, where he worked for a ceramics factory. In 1945, he was able to reopen a studio, in Brussels, but died the following year.
References
Further reading
"Leopold Fleischhacker 1882–1946". In: Augenblick, Berichte, Informationen und Dokumente der Mahn- und Gedenkstätte Düsseldorf. , #5, 1993, pp.1–7
Walther Killy (Ed.): Deutsche Biographische Enzyklopädie, Vol.3, K. G. Saur, München 1996, pg.342
Stephanie Kluth: "Leben und Werk des deutsch-jüdischen Künstlers Leopold Fleischhacker (1882–1946)". In: Neuer Anbruch. Zur deutsch-jüdischen Geschichte und Kultur, Metropol, Berlin 2001, , pg.311 ff.
External links
The Virtual Leopold Fleischhacker Museum @ MW2013
More works by Fleischhacker @ ArtNet
1882 births
1946 deaths
German sculptors
Jewish sculptors
Prussian Academy of Arts alumni
German emigrants to Belgium
People from Felsberg, Hesse |
20238168 | https://en.wikipedia.org/wiki/14th%20Dalai%20Lama | 14th Dalai Lama | The 14th Dalai Lama (spiritual name Jetsun Jamphel Ngawang Lobsang Yeshe Tenzin Gyatso, known as Tenzin Gyatso; né Lhamo Thondup), known as Gyalwa Rinpoche to the Tibetan people, is the current Dalai Lama, the highest spiritual leader and former head of state of Tibet. Born on 6 July 1935, or in the Tibetan calendar, in the Wood-Pig Year, 5th month, 5th day, he is considered a living Bodhisattva; specifically, an emanation of Avalokiteśvara. He is also the leader of the Gelug school, the newest school of Tibetan Buddhism, formally headed by the Ganden Tripa. The central government of Tibet, the Ganden Phodrang, invested the Dalai Lama with temporal duties until his exile in 1959. On 29 April 1959, the Dalai Lama established the independent Tibetan government in exile in the north Indian hill station of Mussoorie, which then moved in May 1960 to Dharamshala, where he resides. He retired as political head in 2011 to make way for a democratic government, the Central Tibetan Administration.
The 14th Dalai Lama was born to a farming family in Taktser (Hongya Village), in the traditional Tibetan region of Amdo (administratively Qinghai Province, Republic of China). He was selected as the tulku of the 13th Dalai Lama in 1937 and formally recognized as the 14th Dalai Lama in a public declaration near the town of Bumchen in 1939. As with the recognition process for his predecessor, a Golden Urn selection process was not used. His enthronement ceremony was held in Lhasa on 22 February 1940 and he eventually assumed full temporal (political) duties on 17 November 1950, at the age of 15, after the People's Republic of China's occupation of Tibet. The Tibetan government administered the historic Tibetan regions of Ü-Tsang, Kham and Amdo.
During the 1959 Tibetan uprising, the Dalai Lama escaped to India, where he currently lives in exile while remaining the most important spiritual leader of Tibet. The Dalai Lama advocates for the welfare of Tibetans while continuing to call for the Middle Way Approach to negotiations with China for the autonomy of the nation and the protection of its culture, including for the religious rights of Tibetans.
The Dalai Lama also meets with other world leaders, religious leaders, philosophers and scientists, and travels worldwide giving Tibetan Buddhist teachings. His work includes focus on the environment, economics, women's rights, nonviolence, interfaith dialogue, physics, astronomy, Buddhism and science, cognitive neuroscience, reproductive health and sexuality.
Along with his teachings on Tibetan Mahayana and Vajrayana Buddhism, the Dalai Lama's Kalachakra teachings and initiations are international events.
He is the chief Patron of the Maha Bodhi Society of India, conferred upon him at the 2008 Annual General Meeting of the Maha Bodhi Society of India.
The Dalai Lama was awarded the Nobel Peace Prize in 1989, and the US Congressional Gold Medal in 2006. Time magazine named the Dalai Lama one of the "Children of Mahatma Gandhi" and Gandhi's spiritual heir to nonviolence.
Early life and background
Lhamo Thondup was born on 6 July 1935 to a farming and horse-trading family in the small hamlet of Taktser, or Chija Tagtser, at the edge of the traditional Tibetan region of Amdo in Qinghai Province.
He was one of seven siblings to survive childhood and one of three reincarnated rinpoches in the same family. His eldest sister, Tsering Dolma, was sixteen years his senior and was midwife to his mother at his birth. She would later accompany him into exile and found the Tibetan Children's Villages. His eldest brother, Thupten Jigme Norbu, had been recognised at the age of three by the 13th Dalai Lama as the reincarnation of the high lama the 6th Taktser Rinpoche. His fifth brother, Tendzin Choegyal, had been recognised as the 16th Ngari Rinpoche. His sister, Jetsun Pema, spent most of her adult life on the Tibetan Children's Villages project. The Dalai Lama has said that his first language was "a broken Xining language which was (a dialect of) the Chinese language", a form of Central Plains Mandarin, and that his family speaks neither Amdo Tibetan nor Lhasa Tibetan.
After the demise of the 13th Dalai Lama in 1935, the Ordinance of Lama Temple Management was published by the Central Government. In 1936, the Method of Reincarnation of Lamas was published by the Mongolian and Tibetan Affairs Commission of the Central Government. Article 3 states that the deaths of lamas, including the Dalai Lama and Panchen Lama, should be reported to the Commission, that soul boys should be located and checked by the Commission, and that a lot-drawing ceremony using the Golden Urn system should be held. Article 6 states that local governments should invite officials from the Central Government to take care of the sitting-in-the-bed ceremony. Article 7 states that soul boys should not be sought from the families of current lamas, echoing the Qianlong Emperor's Discourse of Lama, which aimed to prevent greedy families from accumulating multiple reincarnated rinpoches. Based on custom and regulation, the regent was actively involved in the search for the reincarnation of the Dalai Lama.
Following reported signs and visions, three search teams were sent out to the north-east, the east, and the south-east to locate the new incarnation when the boy who was to become the 14th Dalai Lama was about two years old. Sir Basil Gould, British delegate to Lhasa in 1936, related his account of the north-eastern team to Sir Charles Alfred Bell, former British resident in Lhasa and friend of the 13th Dalai Lama. Amongst other omens, the head of the embalmed body of the thirteenth Dalai Lama, at first facing south-east, had turned to face the north-east, indicating, it was interpreted, the direction in which his successor would be found. The Regent, Reting Rinpoche, shortly afterwards had a vision at the sacred lake of Lhamo La-tso which he interpreted as Amdo being the region to search. This vision was also interpreted to refer to a large monastery with a gilded roof and turquoise tiles, and a twisting path from there to a hill to the east, opposite which stood a small house with distinctive eaves. The team, led by Kewtsang Rinpoche, went first to meet the Panchen Lama, who had been stuck in Jyekundo, in northern Kham. The Panchen Lama had been investigating births of unusual children in the area ever since the death of the 13th Dalai Lama. He gave Kewtsang the names of three boys whom he had discovered and identified as candidates. Within a year the Panchen Lama had died. Two of his three candidates were crossed off the list but the third, a "fearless" child, the most promising, was from Taktser village, which, as in the vision, was on a hill, at the end of a trail leading to Taktser from the great Kumbum Monastery with its gilded, turquoise roof. There they found a house, as interpreted from the vision—the house where Lhamo Dhondup lived.
The 14th Dalai Lama claims that at the time, the village of Taktser stood right on the "real border" between the region of Amdo and China. According to the search lore, when the team visited, posing as pilgrims, its leader, a Sera Lama, pretended to be the servant and sat separately in the kitchen. He held an old mala that had belonged to the 13th Dalai Lama, and the boy Lhamo Dhondup, aged two, approached and asked for it. The monk said "if you know who I am, you can have it." The child said "Sera Lama, Sera Lama" and spoke with him in a Lhasa accent, in a dialect the boy's mother could not understand. The next time the party returned to the house, they revealed their real purpose and asked permission to subject the boy to certain tests. One test consisted of showing him various pairs of objects, one of which had belonged to the 13th Dalai Lama and one which had not. In every case, he chose the Dalai Lama's own objects and rejected the others.
From 1936 the Hui 'Ma Clique' Muslim warlord Ma Bufang ruled Qinghai as its governor under the nominal authority of the Republic of China central government. According to an interview with the 14th Dalai Lama, in the 1930s, Ma Bufang had seized this north-east corner of Amdo in the name of Chiang Kai-shek's weak government and incorporated it into the Chinese province of Qinghai. Before going to Taktser, Kewtsang had gone to Ma Bufang to pay his respects. When Ma Bufang heard a candidate had been found in Taktser, he had the family brought to him in Xining. He first demanded proof that the boy was the Dalai Lama, but the Lhasa government, though informed by Kewtsang that this was the one, told Kewtsang to say he had to go to Lhasa for further tests with other candidates. They knew that if he was declared to be the Dalai Lama, the Chinese government would insist on sending a large army escort with him, which would then stay in Lhasa and refuse to budge. Ma Bufang, together with Kumbum Monastery, then refused to allow him to depart unless he was declared to be the Dalai Lama, but withdrew this demand in return for 100,000 Chinese dollars ransom in silver to be shared amongst them, to let them go to Lhasa. Kewtsang managed to raise this, but the family was only allowed to move from Xining to Kumbum when a further demand was made for another 330,000 dollars ransom: one hundred thousand each for government officials, the commander-in-chief, and the Kumbum Monastery; twenty thousand for the escort; and only ten thousand for Ma Bufang himself, he said.
Two years of diplomatic wrangling followed before it was accepted by Lhasa that the ransom had to be paid to avoid the Chinese getting involved and escorting him to Lhasa with a large army. Meanwhile, the boy was kept at Kumbum where two of his brothers were already studying as monks and recognised incarnate lamas. The payment of 300,000 silver dollars was then advanced by Muslim traders en route to Mecca in a large caravan via Lhasa. They paid Ma Bufang on behalf of the Tibetan government against promissory notes to be redeemed, with interest, in Lhasa. The 20,000-dollar fee for an escort was dropped, since the Muslim merchants invited them to join their caravan for protection; Ma Bufang sent 20 of his soldiers with them and was paid from both sides since the Chinese government granted him another 50,000 dollars for the expenses of the journey. Furthermore, the Indian government helped the Tibetans raise the ransom funds by affording them import concessions.
Released from Kumbum, on 21 July 1939 the party travelled across Tibet on a journey to Lhasa in the large Muslim caravan with Lhamo Dhondup, now 4 years old, riding with his brother Lobsang in a special palanquin carried by two mules, two years after being discovered. As soon as they were out of Ma Bufang's area, he was officially declared to be the 14th Dalai Lama by the Central Government of Tibet, and after ten weeks of travel he arrived in Lhasa on 8 October 1939. The ordination (pabbajja) and giving of the monastic name of Tenzin Gyatso were handled by Reting Rinpoche. There was very limited Chinese involvement at this time. The family of the 14th Dalai Lama was elevated to the highest stratum of the Tibetan aristocracy and acquired land and serf holdings, as with the families of previous Dalai Lamas.
Tibetan Buddhists normally refer to him as Yishin Norbu (Wish-Fulfilling Gem), Kyabgon (Saviour), or just Kundun (Presence). His devotees, as well as much of the Western world, often call him His Holiness the Dalai Lama, the style employed on the Dalai Lama's website. According to the Dalai Lama, he had a succession of tutors in Tibet including Reting Rinpoche, Tathag Rinpoche, Ling Rinpoche and lastly Trijang Rinpoche, who became junior tutor when he was nineteen. At the age of 11 he met the Austrian mountaineer Heinrich Harrer, who became his videographer and tutor about the world outside Lhasa. The two remained friends until Harrer's death in 2006.
In 1959, at the age of 23, he took his final examination at Lhasa's Jokhang Temple during the annual Monlam or Prayer Festival. He passed with honours and was awarded the Lharampa degree, the highest-level geshe degree, roughly equivalent to a doctorate in Buddhist philosophy.
Life as the Dalai Lama
Historically the Dalai Lamas or their regents held political and religious leadership over Tibet from Lhasa with varying degrees of influence depending on the regions of Tibet and periods of history. This began with the 5th Dalai Lama's rule in 1642 and lasted until the 1950s (except for 1705–1750), during which period the Dalai Lamas headed the Tibetan government or Ganden Phodrang. Until 1912 however, when the 13th Dalai Lama declared the complete independence of Tibet, their rule was generally subject to patronage and protection of firstly Mongol kings (1642–1720) and then the Manchu-led Qing dynasty (1720–1912).
The cultural anthropologist Goldstein has written in detail about the Dalai Lama's recognition process.
Afterwards, in 1939, at the age of four, the Dalai Lama was taken in a procession of lamas to Lhasa. Following the traditional Tibetan recognition process, the ceremony enthroning the 14th Dalai Lama was attended by observing foreign dignitaries. Sir Basil Gould, the British representative of the Government of India, left a highly detailed account of the ceremonies surrounding the enthronement in Chapter 16 of his memoir, The Jewel in the Lotus. Despite historical records and eyewitness accounts, China's Kuomintang government later falsely claimed to have ratified the Dalai Lama and that a Kuomintang representative, General Wu Zhongxin, had presided over the ceremony.
Gould disputed the Chinese claim to have presided over the ceremony and criticised the Chinese account.
The Tibetan scholar Nyima Gyaincain wrote that, in Tibetan tradition, there was no such thing as presiding over such an event, and noted that the word "主持" (preside over, or organize) was used in many places in communication documents, where its meaning differed from what we understand today. He added that Wu Zhongxin spent a great deal of time and energy on the event, so his role in presiding over or organizing it was very obvious.
After his enthronement, the Dalai Lama's childhood was then spent between the Potala Palace and Norbulingka, his summer residence, both of which are now UNESCO World Heritage sites.
Chiang Kai-shek ordered Ma Bufang to put his Muslim soldiers on alert for an invasion of Tibet in 1942. Ma Bufang complied and moved several thousand troops to the border with Tibet. Chiang also threatened the Tibetans with aerial bombardment if they worked with the Japanese. Ma Bufang had attacked the Tibetan Buddhist Tsang monastery in 1941, and he also repeatedly attacked the Labrang monastery.
In October 1950 the army of the People's Republic of China marched to the edge of the Dalai Lama's territory and sent a delegation after defeating a legion of the Tibetan army in warlord-controlled Kham. On 17 November 1950, at the age of 15, the 14th Dalai Lama assumed full temporal (political) power as ruler of Tibet.
Cooperation and conflicts with the People's Republic of China
The Panchen Lama and the Dalai Lama had many conflicts throughout Tibetan history. The 14th Dalai Lama's formal rule was brief. He sent a delegation to Beijing, which ratified the Seventeen Point Agreement for the Peaceful Liberation of Tibet; he would later claim that the delegation did so without his authorization. The Seventeen Point Agreement recognized Chinese sovereignty over Tibet, but China allowed the Dalai Lama to continue to rule Tibet internally, and it allowed the system of feudal peasantry to persist. The Dalai Lama worked with the Chinese government: in September 1954, together with the 10th Panchen Lama, he went to the Chinese capital to meet Mao Zedong and attend the first session of the National People's Congress as a delegate, primarily discussing China's constitution. On 27 September 1954, the Dalai Lama was selected as a Vice Chairman of the Standing Committee of the National People's Congress, a post he officially held until 1964.
In 1956, on a trip to India to celebrate the Buddha's Birthday, the Dalai Lama asked the Prime Minister of India, Jawaharlal Nehru, if he would allow him political asylum should he choose to stay. Nehru discouraged this as a provocation against peace, and reminded him of the Indian Government's non-interventionist stance agreed upon with its 1954 treaty with China.
Long called a "splitist" and "traitor" by China, the Dalai Lama has attempted formal talks over Tibet's status in China. In 2019, after the United States passed a law requiring the US to deny visas to Chinese officials in charge of implementing policies that restrict foreign access to Tibet, the US Ambassador to China "encouraged the Chinese government to engage in substantive dialogue with the Dalai Lama or his representatives, without preconditions, to seek a settlement that resolves differences".
The Chinese Foreign Ministry has warned the US and other countries to "shun" the Dalai Lama during visits and often uses trade negotiations and human rights talks as an incentive to do so. China sporadically bans images of the Dalai Lama and arrests citizens for owning photos of him in Tibet. Tibet Autonomous Region government job candidates must strongly denounce the Dalai Lama, as announced on the Tibet Autonomous Region government's online education platform, "Support the (Communist) Party’s leadership, resolutely implement the [Chinese Communist] Party’s line, line of approach, policies, and the guiding ideology of Tibet work in the new era; align ideologically, politically, and in action with the Party Central Committee; oppose any splittist tendencies; expose and criticize the Dalai Lama; safeguard the unity of the motherland and ethnic unity and take a firm stand on political issues, taking a clear and distinct stand".
The Dalai Lama is a target of Chinese state-sponsored hacking. Security experts claim that "targeting Tibetan activists is a strong indicator of official Chinese government involvement", since economic information is the primary goal of private Chinese hackers. In 2009 the personal office of the Dalai Lama asked researchers at the Munk Center for International Studies at the University of Toronto to check its computers for malicious software. This led to the uncovering of GhostNet, a large-scale cyber-spying operation that infiltrated at least 1,295 computers in 103 countries, including embassies, foreign ministries, other government offices, and organizations affiliated with the Dalai Lama in India, Brussels, London and New York, and was believed to be focusing on the governments of South and Southeast Asia. A second cyber-spy network, Shadow Network, was discovered by the same researchers in 2010. Stolen documents included a year's worth of the Dalai Lama's personal email and classified government material relating to India, West Africa, the Russian Federation, the Middle East, and NATO. "Sophisticated" hackers were linked to universities in China; Beijing again denied involvement. In 2019, Chinese hackers posing as reporters from The New York Times, Amnesty International and other organizations targeted the private office of the Dalai Lama, Tibetan Parliament members, and Tibetan nongovernmental organizations, among others.
Exile to India
At the outset of the 1959 Tibetan uprising, fearing for his life, the Dalai Lama and his retinue fled Tibet with the help of the CIA's Special Activities Division, crossing into India on 30 March 1959 and reaching Tezpur in Assam on 18 April. Some time later he set up the Government of Tibet in Exile in Dharamshala, India, which is often referred to as "Little Lhasa". After the founding of the government in exile, he resettled the approximately 80,000 Tibetan refugees who followed him into exile in agricultural settlements. He created a Tibetan educational system to teach Tibetan children their language, history, religion, and culture. The Tibetan Institute of Performing Arts was established in 1959, and the Central Institute of Higher Tibetan Studies became the primary university for Tibetans in India in 1967. He supported the refounding of 200 monasteries and nunneries in an attempt to preserve Tibetan Buddhist teachings and the Tibetan way of life.
The Dalai Lama appealed to the United Nations on the rights of Tibetans. This appeal resulted in three resolutions adopted by the General Assembly in 1959, 1961, and 1965, all before the People's Republic was allowed representation at the United Nations. The resolutions called on China to respect the human rights of Tibetans. In 1963, he promulgated a democratic constitution which is based upon the Universal Declaration of Human Rights, creating an elected parliament and an administration to champion his cause. In 1970, he opened the Library of Tibetan Works and Archives in Dharamshala which houses over 80,000 manuscripts and important knowledge resources related to Tibetan history, politics and culture. It is considered one of the most important institutions for Tibetology in the world.
In 2016, there were demands from Indian citizens and politicians of different political parties to confer the Dalai Lama the prestigious Bharat Ratna, the highest civilian honour of India, which has only been awarded to a non-Indian citizen twice in its history.
In 2021, it was revealed that the Dalai Lama’s inner circle were listed in the Pegasus project data as having been targeted with spyware on their phones. Analysis strongly indicates potential targets were selected by the Indian government.
International advocacy
At the Congressional Human Rights Caucus in 1987 in Washington, D.C., the Dalai Lama gave a speech outlining his ideas for the future status of Tibet. The plan called for Tibet to become a democratic "zone of peace" without nuclear weapons, and with support for human rights. The plan would come to be known as the "Strasbourg proposal", because the Dalai Lama expanded on the plan at Strasbourg on 15 June 1988. There, he proposed the creation of a self-governing Tibet "in association with the People's Republic of China." This would have been pursued by negotiations with the PRC government, but the plan was rejected by the Tibetan Government-in-Exile in 1991. The Dalai Lama has indicated that he wishes to return to Tibet only if the People's Republic of China agrees not to make any precondition for his return. In the 1970s, the Paramount leader Deng Xiaoping set China's sole return requirement to the Dalai Lama as that he "must [come back] as a Chinese citizen ... that is, patriotism".
The Dalai Lama celebrated his seventieth birthday on 6 July 2005. About 10,000 Tibetan refugees, monks and foreign tourists gathered outside his home. Patriarch Alexius II of the Russian Orthodox Church spoke of positive relations with Buddhists. However, later that year, the Russian state prevented the Dalai Lama from fulfilling an invitation to the traditionally Buddhist republic of Kalmykia. The President of the Republic of China (Taiwan), Chen Shui-bian, attended an evening celebrating the Dalai Lama's birthday at the Chiang Kai-shek Memorial Hall in Taipei. In October 2008 in Japan, the Dalai Lama addressed the 2008 Tibetan violence that had erupted and that the Chinese government had accused him of fomenting. He responded that he had "lost faith" in efforts to negotiate with the Chinese government, and that it was "up to the Tibetan people" to decide what to do.
Thirty Taiwanese indigenous peoples protested against the Dalai Lama during his visit to Taiwan after Typhoon Morakot and denounced it as politically motivated.
The Dalai Lama is an advocate for a world free of nuclear weapons, and currently serves on the Advisory Council of the Nuclear Age Peace Foundation.
The Dalai Lama has voiced his support for the Campaign for the Establishment of a United Nations Parliamentary Assembly, an organisation which campaigns for democratic reformation of the United Nations, and the creation of a more accountable international political system.
Teaching activities, public talks
Despite becoming 80 years old in 2015, he maintains a busy international lecture and teaching schedule. His public talks and teachings are usually webcast live in multiple languages, via an inviting organisation's website, or on the Dalai Lama's own website. Scores of his past teaching videos can be viewed there, as well as public talks, conferences, interviews, dialogues and panel discussions.
The Dalai Lama's best known teaching subject is the Kalachakra tantra which, as of 2014, he had conferred a total of 33 times, most often in India's upper Himalayan regions but also in the Western world. The Kalachakra (Wheel of Time) is one of the most complex teachings of Buddhism, sometimes taking two weeks to confer, and he often confers it on very large audiences, up to 200,000 students and disciples at a time.
The Dalai Lama is the author of numerous books on Buddhism, many of them on general Buddhist subjects but also including books on particular topics like Dzogchen, a Nyingma practice.
In his essay "The Ethic of Compassion" (1999), the Dalai Lama expresses his belief that if we reserve compassion only for those we love, we ignore the responsibility of extending respect and empathy to those with whom we have no relationship, which prevents us from "cultivating love". He elaborates that although it takes time to develop a higher level of compassion, we will eventually recognize that empathy has become a part of life, promoting our quality as humans and our inner strength.
He frequently accepts requests from students to visit countries worldwide in order to give teachings to large Buddhist audiences, teachings usually based on classical Buddhist texts and commentaries, most often those written by the 17 pandits or great masters of the Nalanda tradition, such as Nagarjuna, Kamalashila, Shantideva, Atisha and Aryadeva.
The Dalai Lama refers to himself as a follower of these Nalanda masters; in fact, he often asserts that 'Tibetan Buddhism' is based on the Buddhist tradition of Nalanda monastery in ancient India, since the texts written by those 17 Nalanda pandits or masters, to whom he has composed a poem of invocation, were brought to Tibet and translated into Tibetan when Buddhism was first established there, and they have remained central to the teachings of Tibetan Buddhism ever since.
As examples of other teachings, in London in 1984 he was invited to give teachings on the Twelve Links of Dependent Arising, and on Dzogchen, which he gave at Camden Town Hall; in 1988 he was in London once more to give a series of lectures on Tibetan Buddhism in general, called 'A Survey of the Paths of Tibetan Buddhism'. Again in London in 1996 he taught the Four Noble Truths, the basis and foundation of Buddhism accepted by all Buddhists, at the combined invitation of 27 different Buddhist organisations of all schools and traditions belonging to the Network of Buddhist Organisations UK.
In India, the Dalai Lama gives religious teachings and talks in Dharamsala and numerous other locations, including the monasteries in the Tibetan refugee settlements, in response to specific requests from Tibetan monastic institutions; Indian academic, religious and business associations; groups of students; and individual lay devotees. In India, no fees are charged to attend these teachings, since costs are covered by requesting sponsors. When he travels abroad to give teachings, there is usually a ticket fee calculated by the inviting organization to cover the costs involved, and any surplus is normally donated to recognised charities.
He has frequently visited and lectured at colleges and universities, some of which have conferred honorary degrees upon him.
Dozens of videos of recorded webcasts of the Dalai Lama's public talks on general subjects for non-Buddhists like peace, happiness and compassion, modern ethics, the environment, economic and social issues, gender, the empowerment of women and so forth can be viewed in his office's archive.
Interfaith dialogue
The Dalai Lama met Pope Paul VI at the Vatican in 1973. He met Pope John Paul II in 1980, 1982, 1986, 1988, 1990, and 2003. In 1990, he met a delegation of Jewish teachers in Dharamshala for an extensive interfaith dialogue. He has since visited Israel three times, and in 2006 met the Chief Rabbi of Israel. In 2006, he met Pope Benedict XVI privately. He has met the Archbishop of Canterbury, Dr. Robert Runcie, and other leaders of the Anglican Church in London, Gordon B. Hinckley, who at the time was the president of The Church of Jesus Christ of Latter-day Saints, as well as senior Eastern Orthodox Church, Muslim, Hindu, Jewish, and Sikh officials.
The Dalai Lama is also currently a member of the Board of World Religious Leaders as part of The Elijah Interfaith Institute and participated in the Third Meeting of the Board of World Religious Leaders in Amritsar, India, on 26 November 2007 to discuss the topic of Love and Forgiveness.
In 2009, the Dalai Lama inaugurated an interfaith "World Religions-Dialogue and Symphony" conference at Mahuva in Gujarat, convened by the Hindu preacher Morari Bapu.
In 2010, the Dalai Lama, joined by a panel of scholars, launched the Common Ground Project, in Bloomington, Indiana (USA), which was planned by himself and Prince Ghazi bin Muhammad of Jordan during several years of personal conversations. The project is based on the book Common Ground between Islam and Buddhism.
In 2019, the Dalai Lama fully sponsored the first-ever 'Celebrating Diversity in the Muslim World' conference in New Delhi on behalf of the Muslims of Ladakh.
Interest in science, and Mind and Life Institute
The Dalai Lama's lifelong interest in science and technology dates from his childhood in Lhasa, Tibet, when he was fascinated by mechanical objects like clocks, watches, telescopes, film projectors, clockwork soldiers and motor cars, and loved to repair, disassemble and reassemble them. Once, observing the Moon through a telescope as a child, he realised it was a crater-pocked lump of rock and not a heavenly body emitting its own light, as Tibetan cosmologists had taught him. He has also said that had he not been brought up as a monk he would probably have been an engineer. On his first trip to the West in 1973 he asked to visit Cambridge University's astrophysics department in the UK, and he sought out renowned scientists and philosophers such as Sir Karl Popper, David Bohm and Carl Friedrich von Weizsäcker, who taught him the basics of science.
The Dalai Lama sees important common ground between science and Buddhism in having the same approach to challenge dogma on the basis of empirical evidence that comes from observation and analysis of phenomena.
His growing wish to develop meaningful scientific dialogue to explore the interface between Buddhism and science led to invitations to relevant conferences on his visits to the West, including the Alpbach Symposia on Consciousness in 1983, where he met and had discussions with the late Chilean neuroscientist Francisco J. Varela. Also in 1983, the American social entrepreneur and innovator R. Adam Engle, who had become aware of the Dalai Lama's deep interest in science, was already considering the idea of facilitating a serious dialogue between him and a selection of appropriate scientists. In 1984 Engle formally offered to the Dalai Lama's office to organise a week-long, formal dialogue with a suitable team of scientists, provided that the Dalai Lama wished to participate fully. Within 48 hours the Dalai Lama confirmed to Engle that he was "truly interested in participating in something substantial about science", so Engle proceeded to launch the project. Francisco Varela, having heard about Engle's proposal, then called him to tell him of his earlier discussions with the Dalai Lama and to offer his scientific collaboration. Engle accepted, and Varela helped him assemble a team of six specialist scientists for the first 'Mind and Life' dialogue on the cognitive sciences, eventually held with the Dalai Lama at his residence in Dharamsala in 1987. This five-day event was so successful that at its end the Dalai Lama told Engle he would very much like to repeat it in the future. Engle then started work on arranging a second dialogue, this time with neuroscientists in California, and the discussions from the first event were edited and published as Mind and Life's first book, Gentle Bridges: Conversations with the Dalai Lama on the Sciences of Mind.
As the Mind and Life Institute's remit expanded, Engle formalised the organisation as a non-profit foundation after the third dialogue, held in 1990, which initiated neurobiological research programmes in the United States under scientific conditions. Over the following decades the series continued: by 2014 at least 28 dialogues between the Dalai Lama and panels of world-renowned scientists had been held in various countries, covering diverse themes, from the nature of consciousness to cosmology and from quantum mechanics to the neuroplasticity of the brain. Sponsors and partners in these dialogues have included the Massachusetts Institute of Technology, Johns Hopkins University, the Mayo Clinic, and Zurich University.
Apart from time spent teaching Buddhism and fulfilling responsibilities to his Tibetan followers, the Dalai Lama has probably spent, and continues to spend, more of his time and resources on investigating the interface between Buddhism and science, through the ongoing series of Mind and Life dialogues and its spin-offs, than on any other single activity. As the institute's co-founder and honorary chairman he has personally presided over and participated in all its dialogues, which continue to expand worldwide.
These activities have given rise to dozens of DVD sets of the dialogues and to books he has authored on them, such as Ethics for the New Millennium and The Universe in a Single Atom, as well as scientific papers and university research programmes. On the Tibetan and Buddhist side, science subjects have been added to the curriculum for Tibetan monastic educational institutions and scholarship. On the Western side, university and research programmes initiated by these dialogues and funded with millions of dollars in grants from the Dalai Lama Trust include the Emory-Tibet Partnership, the Stanford School of Medicine's Center for Compassion and Altruism Research and Education (CCARE) and the Center for Investigating Healthy Minds, amongst others.
In 2019, Emory University's Center for Contemplative Science and Compassion-Based Ethics, in partnership with The Dalai Lama Trust and the Vana Foundation of India, launched an international SEE Learning (Social, Emotional and Ethical Learning) program in New Delhi, India, a school curriculum for all classes from kindergarten to Std XII that builds on psychologist Daniel Goleman's work on emotional intelligence in the early 1990s. SEE Learning focuses on developing critical thinking, ethical reasoning and compassion, and stresses commonalities rather than differences.
In particular, the Mind and Life Education Humanities & Social Sciences initiatives have been instrumental in developing the emerging field of Contemplative Science, by researching, for example, the effects of contemplative practice on the human brain, behaviour and biology.
In his 2005 book The Universe in a Single Atom and elsewhere, and to mark his commitment to scientific truth and its ultimate ascendancy over religious belief, unusually for a major religious leader the Dalai Lama advises his Buddhist followers: "If scientific analysis were conclusively to demonstrate certain claims in Buddhism to be false, then we must accept the findings of science and abandon those claims." He has also cited examples of archaic Buddhist ideas he has abandoned himself on this basis.
These activities have even had an impact in the Chinese capital. In 2013 an 'academic dialogue' between a Chinese scientist, a Tibetan 'living Buddha' and a professor of religion took place in Beijing. Entitled "High-end dialogue: ancient Buddhism and modern science", it addressed the same considerations that interest the Dalai Lama, being described as 'discussing about the similarities between Buddhism and modern science'.
Personal meditation practice
The Dalai Lama uses various meditation techniques, including analytic meditation. He has said that the aim of meditation is "to maintain a very full state of alertness and mindfulness, and then try to see the natural state of your consciousness."
Social stances
Tibetan independence
The Dalai Lama advocated Tibetan independence from 1961 to 1974, but no longer supports it. Instead he advocates more meaningful autonomy for Tibetans within the People's Republic of China, an approach known as the "Middle Way". In a speech in Kolkata in 2017, the Dalai Lama stated that Tibetans wanted to stay with China and did not desire independence. He said he believed that China, after opening up, had changed 40 to 50 percent from what it was earlier, and that Tibetans wanted to obtain more development from China. In October 2020, the Dalai Lama stated that he did not support Tibetan independence and hoped to visit China as a Nobel Prize winner. He said: "I prefer the concept of a 'republic' in the People's Republic of China. In the concept of republic, ethnic minorities are like Tibetans, the Mongols, Manchus, and Xinjiang Uyghurs, we can live in harmony".
Abortion
The Dalai Lama has said that, from the perspective of the Buddhist precepts, abortion is an act of killing. He has also clarified that in certain cases abortion could be considered ethically acceptable "if the unborn child will be retarded or if the birth will create serious problems for the parent", which could only be determined on a case-by-case basis.
Death penalty
The Dalai Lama has repeatedly expressed his opposition to the death penalty, saying that it contradicts the Buddhist philosophy of non-violence and that it expresses anger, not compassion. During a 2005 visit to Japan, a country which has the death penalty, the Dalai Lama called for the abolition of the death penalty and said in his address, "Criminals, people who commit crimes, usually society rejects these people. They are also part of society. Give them some form of punishment to say they were wrong, but show them they are part of society and can change. Show them compassion." The Dalai Lama has also praised U.S. states that have abolished the death penalty.
Democracy, nonviolence, religious harmony, and Tibet's relationship with India
The Dalai Lama says that he is active in spreading India's message of nonviolence and religious harmony throughout the world. "I am the messenger of India's ancient thoughts the world over." He has said that democracy has deep roots in India. He says he considers India the master and Tibet its disciple, as great scholars went from India to Tibet to teach Buddhism. He has noted that millions of people lost their lives in violence and the economies of many countries were ruined due to conflicts in the 20th century. "Let the 21st century be a century of tolerance and dialogue."
The Dalai Lama has also critiqued proselytization and certain types of conversion, believing the practices to be contrary to the fundamental ideas of religious harmony and spiritual practice. He has stated that "It's very important that our religious traditions live in harmony with one another and I don't think proselytizing contributes to this. Just as fighting and killing in the name of religion are very sad, it's not appropriate to use religion as a ground or a means for defeating others." In particular, he has critiqued Christian approaches to conversion in Asia, stating that he has "come across situations where serving the people is a cover for proselytization." The Dalai Lama has labeled such practices counter to the "message of Christ" and has emphasized that such individuals "practice conversion like a kind of war against peoples and cultures." In a statement with Hindu religious leaders, he expressed that he opposes "conversions by any religious tradition using various methods of enticement."
In 1993, the Dalai Lama attended the World Conference on Human Rights and made a speech titled "Human Rights and Universal Responsibility".
In 2001, in response to a question from a Seattle schoolgirl, the Dalai Lama said that it is permissible to shoot someone in self-defense (if the person was "trying to kill you") and he emphasized that the shot should not be fatal.
In 2013, the Dalai Lama criticised Buddhist monks' attacks on Muslims in Myanmar and rejected violence by Buddhists, saying: "Buddha always teaches us about forgiveness, tolerance, compassion. If from one corner of your mind, some emotion makes you want to hit, or want to kill, then please remember Buddha's faith. ... All problems must be solved through dialogue, through talk. The use of violence is outdated, and never solves problems." In May 2013, he said "Really, killing people in the name of religion is unthinkable, very sad." In May 2015, the Dalai Lama called on Myanmar's Nobel Peace Prize winner Aung San Suu Kyi to do more to help the Rohingya Muslims in Myanmar, saying that he had urged Suu Kyi to address the Rohingyas' plight in two previous private meetings and had been rebuffed.
In 2017, after Chinese dissident and Nobel Peace Prize laureate Liu Xiaobo died of organ failure while in Chinese government custody, the Dalai Lama said he was "deeply saddened" and that he believed that Liu's "unceasing efforts in the cause of freedom will bear fruit before long."
Diet and animal welfare
The Dalai Lama advocates compassion for animals and frequently urges people to try vegetarianism or at least to reduce their consumption of meat. In Tibet, where meat was historically the most common food, most monks have been omnivores, including the Dalai Lamas. The Fourteenth Dalai Lama was raised in a meat-eating family but converted to vegetarianism after arriving in India, where vegetables are much more easily available and vegetarianism is widespread. He spent many years as a vegetarian, but after contracting hepatitis in India and suffering from weakness, his doctors told him to return to eating meat, which he now does twice a week. This attracted public attention when, during a visit to the White House, he was offered a vegetarian menu but declined by replying, as he is known to do on occasion when dining in the company of non-vegetarians, "I'm a Tibetan monk, not a vegetarian". His own home kitchen, however, is completely vegetarian.
In 2009, the English singer Paul McCartney wrote a letter to the Dalai Lama inquiring why he was not a vegetarian. As McCartney later told The Guardian, "He wrote back very kindly, saying, 'my doctors tell me that I must eat meat'. And I wrote back again, saying, you know, I don't think that's right. [...] I think now he's vegetarian most of the time. I think he's now being told, the more he meets doctors from the west, that he can get his protein somewhere else. [...] It just doesn't seem right – the Dalai Lama, on the one hand, saying, 'Hey guys, don't harm sentient beings... Oh, and by the way, I'm having a steak.'"
Economics and political stance
The Dalai Lama has referred to himself as a Marxist and has articulated criticisms of capitalism.
He reports hearing of communism when he was very young, but only in the context of the destruction of Communist Mongolia. It was only on his trip to Beijing that he learned about Marxist theory from his interpreter Baba Phuntsog Wangyal. At that time, he reports, "I was so attracted to Marxism, I even expressed my wish to become a Communist Party member", citing his favorite concepts of self-sufficiency and equal distribution of wealth. He does not believe that China implemented "true Marxist policy", and thinks the historical communist states such as the Soviet Union "were far more concerned with their narrow national interests than with the Workers' International". Moreover, he believes one flaw of historically "Marxist regimes" is that they placed too much emphasis on destroying the ruling class and not enough on compassion. He finds Marxism superior to capitalism, believing the latter is concerned only with "how to make profits", whereas the former has "moral ethics", a position he set out in a 1993 statement.
Environment
The Dalai Lama is outspoken in his concerns about environmental problems, frequently giving public talks on themes related to the environment. He has pointed out that many rivers in Asia originate in Tibet, and that the melting of Himalayan glaciers could affect the countries in which the rivers flow. He acknowledged official Chinese laws against deforestation in Tibet, but lamented they can be ignored due to possible corruption. He was quoted as saying "ecology should be part of our daily life"; personally, he takes showers instead of baths, and turns lights off when he leaves a room. Around 2005, he started campaigning for wildlife conservation, including by issuing a religious ruling against wearing tiger and leopard skins as garments. The Dalai Lama supports the anti-whaling position in the whaling controversy, but has criticized the activities of groups such as the Sea Shepherd Conservation Society (which carries out acts of what it calls aggressive nonviolence against property). Before the 2009 United Nations Climate Change Conference, he urged national leaders to put aside domestic concerns and take collective action against climate change.
Sexuality
The Dalai Lama's stances on topics of sexuality have changed over time.
A monk since childhood, the Dalai Lama has said that sex offers fleeting satisfaction and leads to trouble later, while chastity offers a better life and "more independence, more freedom". He has said that problems arising from conjugal life sometimes even lead to suicide or murder. He has asserted that all religions have the same view about adultery.
In his discussions of the traditional Buddhist view on appropriate sexual behavior, he explains the concept of "right organ in the right object at the right time", which historically has been interpreted as indicating that oral, manual and anal sex (both homosexual and heterosexual) are not appropriate in Buddhism or for Buddhists. However, he also says that in modern times all common, consensual sexual practices that do not cause harm to others are ethically acceptable and that society should accept and respect people who are gay or transgender from a secular point of view. In a 1994 interview with OUT Magazine, the Dalai Lama clarified his personal opinion on the matter by saying, "If someone comes to me and asks whether homosexuality is okay or not, I will ask 'What is your companion's opinion?' If you both agree, then I think I would say, 'If two males or two females voluntarily agree to have mutual satisfaction without further implication of harming others, then it is okay.'" However, when interviewed by Canadian TV news anchor Evan Solomon on CBC News: Sunday about whether homosexuality is acceptable in Buddhism, the Dalai Lama responded that "it is sexual misconduct".
In his 1996 book Beyond Dogma, he described a traditional Buddhist definition of an appropriate sexual act as follows: "A sexual act is deemed proper when the couples use the organs intended for sexual intercourse and nothing else ... Homosexuality, whether it is between men or between women, is not improper in itself. What is improper is the use of organs already defined as inappropriate for sexual contact." He elaborated in 1997, conceding that the basis of that teaching was unknown to him. He also conveyed his own "willingness to consider the possibility that some of the teachings may be specific to a particular cultural and historic context".
In 2006, the Dalai Lama expressed concern at "reports of violence and discrimination against" LGBT people and urged "respect, tolerance and the full recognition of human rights for all".
Women's rights
In 2007, he said that the next Dalai Lama could possibly be a woman: "If a woman reveals herself as more useful the lama could very well be reincarnated in this form."
In 2009, on gender equality and sexism, the Dalai Lama proclaimed at the National Civil Rights Museum in Memphis, Tennessee: "I call myself a feminist. Isn't that what you call someone who fights for women's rights?" He also said that by nature, women are more compassionate "based on their biology and ability to nurture and birth children". He called on women to "lead and create a more compassionate world", citing the good works of nurses and mothers.
At a 2014 appearance at the Tata Institute of Social Sciences in Mumbai, the Dalai Lama said, "Since women have been shown to be more sensitive to others' suffering, their leadership may be more effective."
In 2015, he said in a BBC interview that if a female succeeded him, "that female must be attractive, otherwise it is not much use," and when asked if he was joking, replied, "No. True!" He followed with a joke about his current success being due to his own appearance.
Health
In 2013, at the Culture of Compassion event in Derry, Northern Ireland, the Dalai Lama said that "Warm-heartedness is a key factor for healthy individuals, healthy families and healthy communities."
Response to COVID-19
In a 2020 statement in Time magazine on the COVID-19 pandemic, the Dalai Lama said that the pandemic must be combated with compassion, empirical science, prayer, and the courage of healthcare workers. He emphasized "emotional disarmament" (seeing things with a clear and realistic perspective, without fear or rage) and wrote: "The outbreak of this terrible coronavirus has shown that what happens to one person can soon affect every other being. But it also reminds us that a compassionate or constructive act – whether working in hospitals or just observing social distancing – has the potential to help many."
Immigration
In September 2018, speaking at a conference in Malmö, Sweden, home to a large immigrant population, the Dalai Lama said "I think Europe belongs to the Europeans", but also that Europe was "morally responsible" for helping "a refugee really facing danger against their life". He stated that Europe has a responsibility to refugees to "receive them, help them, educate them", but that they should aim to return to their places of origin and that "they ultimately should rebuild their own country".
Speaking to German reporters in 2016, the Dalai Lama said there are "too many" refugees in Europe, adding that "Europe, for example Germany, cannot become an Arab country." He also said that "Germany is Germany".
Retirement and succession plans
In May 2011, the Dalai Lama retired from the Central Tibetan Administration.
In September 2011, the Dalai Lama issued the following statement concerning his succession and reincarnation:
When I am about ninety I will consult the high Lamas of the Tibetan Buddhist traditions, the Tibetan public, and other concerned people who follow Tibetan Buddhism, and re-evaluate whether the institution of the Dalai Lama should continue or not. On that basis we will take a decision. If it is decided that the reincarnation of the Dalai Lama should continue and there is a need for the Fifteenth Dalai Lama to be recognized, responsibility for doing so will primarily rest on the concerned officers of the Dalai Lama's Gaden Phodrang Trust. They should consult the various heads of the Tibetan Buddhist traditions and the reliable oath-bound Dharma Protectors who are linked inseparably to the lineage of the Dalai Lamas. They should seek advice and direction from these concerned beings and carry out the procedures of search and recognition in accordance with past tradition. I shall leave clear written instructions about this. Bear in mind that, apart from the reincarnation recognized through such legitimate methods, no recognition or acceptance should be given to a candidate chosen for political ends by anyone, including those in the People's Republic of China.
In October 2011, the Dalai Lama repeated his statement in an interview with Canadian CTV News. He added that Chinese laws banning the selection of successors based on reincarnation would not affect his decisions. "Naturally my next life is entirely up to me. No one else. And also this is not a political matter", he said in the interview. The Dalai Lama also said that he had not decided whether he would reincarnate or be the last Dalai Lama.
In an interview with the German newspaper Welt am Sonntag published on 7 September 2014 the Dalai Lama stated "the institution of the Dalai Lama has served its purpose", and that "We had a Dalai Lama for almost five centuries. The 14th Dalai Lama now is very popular. Let us then finish with a popular Dalai Lama."
Gyatso has also expressed fear that the Chinese government would manipulate any reincarnation selection in order to choose a successor that would go along with their political goals. In response the Chinese government implied that it would select another Dalai Lama regardless of his decision.
CIA Tibetan program
In October 1998, the Dalai Lama's administration acknowledged that it received $1.7 million a year in the 1960s from the U.S. government through a Central Intelligence Agency program. When asked by CIA officer John Kenneth Knaus in 1995 to comment on the CIA Tibetan program, the Dalai Lama replied that though it helped the morale of those resisting the Chinese, "thousands of lives were lost in the resistance" and further, that "the U.S. Government had involved itself in his country's affairs not to help Tibet but only as a Cold War tactic to challenge the Chinese."
His administration's reception of CIA funding has become one of the grounds for some state-run Chinese newspapers to discredit him along with the Tibetan independence movement.
In his autobiography Freedom in Exile, the Dalai Lama criticized the CIA again for supporting the Tibetan independence movement "not because they (the CIA) cared about Tibetan independence, but as part of their worldwide efforts to destabilize all communist governments".
In 1999, the Dalai Lama said that the CIA Tibetan program had been harmful for Tibet because it was primarily aimed at serving American interests, and "once the American policy toward China changed, they stopped their help."
Criticism
Ties to India
The Chinese press has criticized the Dalai Lama for his close ties with India. His remarks at the 2010 International Buddhist Conference in Gujarat, where he said that he was "Tibetan in appearance, but an Indian in spirituality" and referred to himself as a "son of India", in particular led the People's Daily to opine, "Since the Dalai Lama deems himself an Indian rather than Chinese, then why is he entitled to represent the voice of the Tibetan people?" Dhundup Gyalpo of the Tibet Sun replied that Tibetan religion could be traced back to Nalanda in India, and that Tibetans have no connection to Chinese "apart ... from a handful of culinary dishes". The People's Daily stressed the links between Chinese Buddhism and Tibetan Buddhism and accused the Dalai Lama of "betraying southern Tibet to India". In 2008, the Dalai Lama said for the first time that the territory India claims and administers as part of Arunachal Pradesh is part of India, citing the disputed 1914 Simla Accord.
Shugden controversy
The Dorje Shugden controversy reappeared in the Gelug school with the publication of the Yellow Book in 1976, containing stories about wrathful acts of Dorje Shugden against Gelugpas who also practiced Nyingma teachings. In response, the 14th Dalai Lama, a Gelugpa himself and an advocate of an "inclusive" approach to the teachings of Tibetan Buddhism, began to speak out against the practice of Dorje Shugden in 1978.
The controversy has attracted attention in the West because of demonstrations held in 2008 and 2014 by Dorje Shugden practitioners. A 2015 Reuters investigation determined "that the religious sect behind the protests has the backing of the Communist Party" and that the "group has emerged as an instrument in Beijing's long campaign to undermine support for the Dalai Lama". After the Reuters investigation revealed that China backs it, the Shugden group halted operations and disbanded.
Gedhun Choekyi Nyima
In April 2018, the Dalai Lama confirmed the official Chinese claims about Gedhun Choekyi Nyima, saying that he knew from "reliable sources" that the Panchen Lama he had recognized was alive and receiving a normal education. He said he hoped that the Chinese-recognised Panchen Lama (Gyaincain Norbu) studied well under the guidance of a good teacher, adding that there were instances in the Tibetan Buddhist tradition of a reincarnated lama taking more than one manifestation.
Public image
The Dalai Lama places highly in global surveys of the world's most admired men, ranking alongside Pope Francis among the most admired religious leaders.
The Dalai Lama's appeal is variously ascribed to his charismatic personality, international fascination with Buddhism, his universalist values, and international sympathy for the Tibetans. In the 1990s, many films about Tibet were released by the American film industry, including biopics of the Dalai Lama. This is attributed both to the Dalai Lama's 1989 Nobel Peace Prize and to the euphoria following the fall of Communism. The most notable films, Kundun and Seven Years in Tibet (both released in 1997), portrayed "an idyllic pre-1950 Tibet, with a smiling, soft-spoken Dalai Lama at the helm – a Dalai Lama sworn to non-violence": portrayals the Chinese government decried as ahistorical.
The Dalai Lama has his own pages on Twitter, Facebook, and Instagram.
The Dalai Lama has tried to mobilize international support for Tibetan activities. He has been successful in gaining Western support for himself and the cause of greater Tibetan autonomy, including vocal support from numerous Hollywood celebrities, most notably the actors Richard Gere and Steven Seagal, as well as lawmakers from several major countries. Photos of the Dalai Lama were banned after the March 1959 Lhasa protests until after the end of the Cultural Revolution in 1976. In 1996 the Chinese Communist Party reinstated a total prohibition of any photo of the 14th Dalai Lama. According to the Tibet Information Network, "authorities in Tibet have begun banning photographs of the exiled Dalai Lama in monasteries and public places, according to reports from a monitoring group and a Tibetan newspaper. Plainclothes police went to hotels and restaurants in Lhasa, the Tibetan capital, on 22 and 23 April and ordered Tibetans to remove pictures of the Dalai Lama..." The ban continues in many locations throughout Tibet today.
In the media
The 14th Dalai Lama has appeared in several non-fiction films including:
10 Questions for the Dalai Lama (2006, documentary)
Dalai Lama Renaissance (2007, documentary)
The Sun Behind the Clouds (2010)
Bringing Tibet Home (2013)
Monk with a Camera (2014, documentary)
Dalai Lama Awakening (2014)
Compassion in Action (2014)
He has been depicted as a character in various other movies and television programs including:
Kundun, 1997 film directed by Martin Scorsese
Seven Years in Tibet, 1997 film starring Brad Pitt and David Thewlis
Klovn "Dalai Lama" Season 1, Episode 4 (2005)
Red Dwarf episode "Meltdown" (1991)
Song of Tibet, 2000 film directed by Xie Fei.
The Great Escape "14th Dalai Lama" (2018) on Epic
"Dalai Lama", episode of the Indian television series Mega Icons (2019–20) on National Geographic.
The Dalai Lama was featured in the 5 March 2017 episode of the HBO late-night talk show Last Week Tonight, in which host John Oliver conducted a comedic interview with him, focusing on the topics of Tibetan sovereignty, Tibetan self-immolations, and his succession plans.
A biographical graphic novel, Man of Peace, also envisaging the Dalai Lama's return to Tibet, was published by Tibet House US. The Extraordinary Life of His Holiness the Fourteenth Dalai Lama: An Illuminated Journey, illustrations and text by artist Rima Fujita, narrated by the Dalai Lama, was published by Simon and Schuster in 2021.
Awards and honours
The Dalai Lama has received numerous awards and honors worldwide over his spiritual and political career. For a more complete list see Awards and honors presented to the 14th Dalai Lama.
After the 1989 Tiananmen Square protests and massacre, the Norwegian Nobel Committee awarded him the 1989 Nobel Peace Prize. The Committee officially gave the prize to the Dalai Lama for "the struggle of the liberation of Tibet and the efforts for a peaceful resolution" and "in part a tribute to the memory of Mahatma Gandhi".
He has also been awarded the:
1959 Ramon Magsaysay Award for Community Leadership;
1994 Freedom Medal from the Roosevelt Institute;
2005 Christmas Humphreys Award from the Buddhist Society in the United Kingdom;
2006 Order of the White Lotus from the Republic of Kalmykia, for outstanding services and significant contribution to the spiritual revival and prosperity of the republic;
2007 Congressional Gold Medal, the highest civilian award bestowed by the American Congress and President; the Chinese government declared this would have "an extremely serious impact" on relations with the United States;
2007 Ahimsa Award from the Institute of Jainology, in recognition of individuals who embody and promote the principles of Ahimsa (non-violence);
2012 Order of the Republic of Tuva from the Tuvan Republic, in recognition of his contribution to fostering high spiritual and cultural tolerance and strengthening interreligious and interethnic harmony; and the
2012 Templeton Prize; he donated the prize money to the charity Save the Children.
In 2006, he became one of only six people ever to be granted Honorary Citizenship of Canada. In 2007 he was named Presidential Distinguished Professor at Emory University in Atlanta, Georgia, the first time he accepted a university appointment.
Publications
My Land and My People: The Autobiography of His Holiness the Dalai Lama. Ed. David Howarth. Weidenfeld and Nicolson, 1962.
Deity Yoga: In Action and Performance Tantras. Ed. and trans. Jeffrey Hopkins. Snow Lion, 1987.
Tantra in Tibet. Co-authored with Tsong-kha-pa, Jeffrey Hopkins. Snow Lion, 1987.
The Dalai Lama at Harvard. Ed. and trans. Jeffrey Hopkins. Snow Lion, 1988.
Freedom in Exile: The Autobiography of the Dalai Lama. London: Little, Brown and Co., 1990.
My Tibet. Co-authored with photographer Galen Rowell, 1990.
The Path to Enlightenment. Ed. and trans. Glenn H. Mullin. Snow Lion, 1994.
Essential Teachings. North Atlantic Books, 1995.
The World of Tibetan Buddhism. Translated by Geshe Thupten Jinpa, foreword by Richard Gere. Wisdom Publications, 1995.
Tibetan Portrait: The Power of Compassion. Photographs by Phil Borges with sayings by Tenzin Gyatso, 1996.
Healing Anger: The Power of Patience from a Buddhist Perspective. Trans. Thupten Jinpa. Ithaca, NY: Snow Lion, 1997.
The Gelug/Kagyü Tradition of Mahamudra. Co-authored with Alexander Berzin. Ithaca, NY: Snow Lion Publications, 1997.
The Art of Happiness. Co-authored with Howard C. Cutler, M.D. Riverhead Books, 1998.
The Good Heart: A Buddhist Perspective on the Teachings of Jesus. Translated by Geshe Thupten Jinpa. Wisdom Publications, 1998.
Kalachakra Tantra: Rite of Initiation. Edited by Jeffrey Hopkins. Wisdom Publications, 1999.
MindScience: An East–West Dialogue. With contributions by Herbert Benson, Daniel Goleman, Robert Thurman, and Howard Gardner. Wisdom Publications, 1999.
The Power of Buddhism. Co-authored with Jean-Claude Carrière, 1999.
Opening the Eye of New Awareness. Translated by Donald S. Lopez, Jr. Wisdom Publications, 1999.
Ethics for the New Millennium. Riverhead Books, 1999.
Consciousness at the Crossroads. Ed. Zara Houshmand, Robert B. Livingston, B. Alan Wallace. Trans. Thupten Jinpa, B. Alan Wallace. Snow Lion, 1999.
Ancient Wisdom, Modern World: Ethics for the New Millennium. Little, Brown/Abacus Press, 2000.
Dzogchen: Heart Essence of the Great Perfection. Translated by Geshe Thupten Jinpa and Richard Barron. Snow Lion Publications, 2000.
The Meaning of Life: Buddhist Perspectives on Cause and Effect. Translated by Jeffrey Hopkins. Wisdom Publications, 2000.
Answers: Discussions with Western Buddhists. Ed. and trans. Jose Cabezon. Snow Lion, 2001.
The Compassionate Life. Wisdom Publications, 2001.
Violence and Compassion: Dialogues on Life Today. With Jean-Claude Carrière. Doubleday, 2001.
Imagine All the People: A Conversation with the Dalai Lama on Money, Politics, and Life as it Could Be. Co-authored with Fabien Ouaki. Wisdom Publications, 2001.
An Open Heart. Edited by Nicholas Vreeland. Little, Brown, 2001.
The Heart of Compassion: A Practical Approach to a Meaningful Life. Twin Lakes, Wisconsin: Lotus Press, 2002.
Sleeping, Dreaming, and Dying. Edited by Francisco Varela. Wisdom Publications, 2002.
Essence of the Heart Sutra: The Dalai Lama's Heart of Wisdom Teachings. Edited by Geshe Thupten Jinpa. Wisdom Publications, 2002.
The Pocket Dalai Lama. Ed. Mary Craig. Shambhala Pocket Classics, 2002.
The Buddhism of Tibet. Ed. and trans. Jeffrey Hopkins, Anne C. Klein. Snow Lion, 2002.
The Art of Happiness at Work. Co-authored with Howard C. Cutler, M.D. Riverhead, 2003.
Stages of Meditation (commentary on the Bhāvanākrama). Trans. Ven. Geshe Lobsang Jordhen, Losang Choephel Ganchenpa, Jeremy Russell. Snow Lion, 2003.
Der Weg des Herzens. Gewaltlosigkeit und Dialog zwischen den Religionen (The Path of the Heart: Non-violence and the Dialogue among Religions). Co-authored with Eugen Drewermann, PhD. Patmos Verlag, 2003.
The Path to Bliss. Ed. and trans. Thupten Jinpa, Christine Cox. Snow Lion, 2003.
How to Practice: The Way to a Meaningful Life. Translated and edited by Jeffrey Hopkins, 2003.
The Wisdom of Forgiveness: Intimate Conversations and Journeys. Co-authored with Victor Chan. Riverhead Books, 2004.
The New Physics and Cosmology: Dialogues with the Dalai Lama. Edited by Arthur Zajonc, with contributions by David Finkelstein, George Greenstein, Piet Hut, Tu Wei-ming, Anton Zeilinger, B. Alan Wallace and Thupten Jinpa. Oxford University Press, 2004.
Dzogchen: The Heart Essence of the Great Perfection. Ed. Patrick Gaffney. Trans. Thupten Jinpa, Richard Barron (Chokyi Nyima). Snow Lion, 2004.
Practicing Wisdom: The Perfection of Shantideva's Bodhisattva Way. Translated by Geshe Thupten Jinpa. Wisdom Publications, 2004.
Lighting the Way. Snow Lion, 2005.
The Universe in a Single Atom: The Convergence of Science and Spirituality. Morgan Road Books, 2005.
How to Expand Love: Widening the Circle of Loving Relationships. Translated and edited by Jeffrey Hopkins. Atria Books, 2005.
Living Wisdom with His Holiness the Dalai Lama. With Don Farber. Sounds True, 2006.
Mind in Comfort and Ease: The Vision of Enlightenment in the Great Perfection. Ed. Patrick Gaffney. Trans. Matthieu Ricard, Richard Barron and Adam Pearcey. Wisdom Publications, 2007.
How to See Yourself as You Really Are. Translated and edited by Jeffrey Hopkins, 2007.
The Leader's Way. Co-authored with Laurens van den Muyzenberg. Nicholas Brealey Publishing, 2008.
My Spiritual Autobiography. Compiled from speeches and interviews of the 14th Dalai Lama, 2009.
Beyond Religion: Ethics for a Whole World. Mariner Books, 2012.
The Wisdom of Compassion: Stories of Remarkable Encounters and Timeless Insights. Co-authored with Victor Chan. Riverhead Books, 2012.
My Appeal to the World. Presented by Sofia Stril-Rever, translated from the French by Sebastian Houssiaux. Tibet House US, 2015.
The Book of Joy: Lasting Happiness in a Changing World. Co-authored with Archbishop Desmond Tutu, 2016.
Behind the Smile: The Hidden Side of the Dalai Lama. By Maxime Vivas, translated from the French book Not So Zen. Long River Press, 2013.
Discography
Inner World (2020)
See also
Awards and honors presented to the 14th Dalai Lama
List of organizations of Tibetans in exile
Chinese intelligence activity abroad#Modes of operation
Dalai Lama Center for Peace and Education
Foundation for Universal Responsibility of His Holiness the Dalai Lama
History of Tibet (1950–present)
Human rights in Tibet
Annexation of Tibet by the People's Republic of China
Protests and uprisings in Tibet since 1950
Sinicization of Tibet
List of overseas visits by Tenzin Gyatso the 14th Dalai Lama outside India
List of peace activists
List of Nobel laureates
List of refugees
Templeton Prize laureates
List of rulers of Tibet
Religious persecution
Freedom of religion in China#Buddhism
Tibet Fund
Tibet House
Tibet Religious Foundation of His Holiness the Dalai Lama
Tibetan art
Tibetan Centre for Human Rights and Democracy
Tibetan culture
Tibetan Institute of Performing Arts
Notes
References
Citations
Sources
Craig, Mary. Kundun: A Biography of the Family of the Dalai Lama (1997) Counterpoint, Calcutta.
Bell, Sir Charles (1946). Portrait of the Dalai Lama. Wm. Collins, London, 1st edition; (1987) Wisdom Publications, London.
Iyer, Pico. The Open Road: The Global Journey of the Fourteenth Dalai Lama (2008) Alfred A. Knopf, Inc.
Knaus, John Kenneth. Orphans of the Cold War: America and the Tibetan Struggle for Survival (1999) PublicAffairs.
Mullin, Glenn H. (2001). The Fourteen Dalai Lamas: A Sacred Legacy of Reincarnation, pp. 452–515. Clear Light Publishers, Santa Fe, New Mexico.
Richardson, Hugh E. (1984). Tibet & Its History. 1st edition 1962; 2nd edition, revised and updated. Shambhala Publications, Boston (pbk).
Shakya, Tsering. The Dragon in the Land of Snows (1999) Columbia University Press.
United States. Congressional-Executive Commission on China. The Dalai Lama: What He Means for Tibetans Today: Roundtable before the Congressional-Executive Commission on China, One Hundred Twelfth Congress, First Session, 13 July 2011. Washington, D.C.: U.S. G.P.O., 2012.
External links
Teachings by the Dalai Lama
Photographs of the Dalai Lama's visit to UC Santa Cruz, October 1979 from the UC Santa Cruz Library's Digital Collections
Quotes by the Dalai Lama
1935 births
20th-century lamas
20th-century philosophers
20th-century Tibetan people
21st-century philosophers
21st-century Tibetan people
Buddhist and Christian interfaith dialogue
Buddhist socialism
Buddhist feminists
Buddhist monks from Tibet
Buddhist pacifists
Scholars of Buddhism from Tibet
Civil rights activists
Dorje Shugden controversy
Humanitarians
Living people
Male feminists
Tibetan Marxists
Marxist feminists
Congressional Gold Medal recipients
Nautilus Book Award winners
Nobel Peace Prize laureates
Nobel laureates of the People's Republic of China
Vice Chairpersons of the National People's Congress
Nonviolence advocates
People from Kangra, Himachal Pradesh
People from Haidong
Ramon Magsaysay Award winners
Recipients of the Four Freedoms Award
Tibetan activists
Tibetan dissidents
Tibetan Buddhists from Tibet
Tibet freedom activists
Tibetan feminists
Tibetan pacifists
Templeton Prize laureates
People associated with animal welfare and rights
21st-century Buddhist monks
21st-century lamas
Tibetan refugees
Tibetan emigrants to India
World War II political leaders |
30801765 | https://en.wikipedia.org/wiki/Troy%20Trojans%20basketball | Troy Trojans basketball | Troy Trojans basketball may refer to either of the basketball teams that represent Troy University:
Troy Trojans men's basketball
Troy Trojans women's basketball |
10385693 | https://en.wikipedia.org/wiki/Squirrel%20Systems | Squirrel Systems | Squirrel Systems is a point of sale vendor specializing in hospitality management systems, based in Burnaby, British Columbia, Canada.
History
Squirrel Systems was founded in 1984, and released the first restaurant point of sale system to use an integrated diskless touchscreen terminal for order management. Squirrel was originally a wholly owned subsidiary of Sulcus Hospitality Technologies Corporation; in 1998, Sulcus merged with Eltrax Systems, Incorporated (Nasdaq SmallCap: ELTX). Squirrel is currently a wholly owned subsidiary of Marin Investments Ltd.
Squirrel Workstation
One of the unique characteristics of Squirrel's original product was the use of hardened LCD touchscreen terminals. Unlike other systems that used keyboards and CRT monitors, Squirrel terminals had no moving parts and were easily adapted to any operating environment. The original Squirrel terminals reached over 35,000 installed units worldwide, and were the first to integrate an LCD panel, credit card reader, employee ID reader, and CPU inside a single unit. Later units would incorporate IP connectivity, remote booting of a customized Linux operating system, and a Java virtual machine.
Squirrel Embedded Linux
In 1998 Squirrel Systems released Squirrel Embedded Linux (SEL), a customized distribution of Linux for a "thin client" terminal architecture. SEL had several characteristics that were unique at the time of its development, including primary support for diskless workstations, customized high-volume touchscreen drivers, an integrated Java virtual machine with hardware control, and two-stage booting from a Windows server.
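Squirrel's own two-stage boot mechanism was proprietary and is not documented here, but the general pattern it anticipated is now standard for diskless Linux clients: the terminal obtains an address and a first-stage boot loader over the network, and that loader then fetches the kernel and root image from a server. As a purely illustrative sketch using the modern dnsmasq tool (all addresses and paths below are invented for this example and do not come from Squirrel's product):

    # /etc/dnsmasq.conf -- illustrative netboot setup for diskless terminals
    dhcp-range=192.168.1.100,192.168.1.200,12h   # hand out addresses to the terminals
    dhcp-boot=pxelinux.0                         # first stage: boot loader sent to the client
    enable-tftp                                  # serve boot files over TFTP
    tftp-root=/srv/tftp                          # second stage: kernel and root image live here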
Industry awards
In 2010, O'Charley's named Squirrel as its Enterprise Support Partner of the Year at the annual Inukshuk Business Partner Awards.
Squirrel Systems was awarded the 2009 Epson Envision Award for Innovation for its Squirrel in a Box product.
Squirrel Systems was awarded the 1999 Independent Cash Register Dealers Association Silver Award for Outstanding Sponsor in Systems/Software.
In 1998, Squirrel was the third recipient of the Microsoft Retail Application Developer award at the HITEC Show in Los Angeles. Microsoft recognized SquirrelONE as the first application to integrate Java, Microsoft SQL Server, and Windows NT in the retail market.
References
Further reading
Shift4 and Squirrel Systems Partner to Offer Payment Solution to the Hospitality Industry. - Entertainment Close-up | HighBeam Research
Squirrel One and Merchant Link Integrate Solutions - Wireless News | HighBeam Research
Payment Software from Squirrel Systems Certified by NetSPI as Compliant with Latest PA-DSS Standard. - Information Technology Newsweekly | HighBeam Research
External links
Technology companies of Canada
Companies based in Burnaby
Computer hardware companies
Companies established in 1984
Diskless workstations
Point of sale companies
1984 establishments in British Columbia |
16370058 | https://en.wikipedia.org/wiki/Openmoko%20Linux | Openmoko Linux | Openmoko Linux is an operating system for smartphones developed by the Openmoko project. It is based on the Ångström distribution, comprising various pieces of free software.
The main targets of Openmoko Linux were the Openmoko Neo 1973 and the Neo FreeRunner. Furthermore, there were efforts to port the system to other mobile phones.
Openmoko Linux was developed from 2007 to 2009 by Openmoko Inc. The development was discontinued because of financial problems. Afterwards the development of software for the Openmoko phones was taken over by the community and continued in various projects, including SHR, QtMoko and Hackable1.
Components
Openmoko Linux uses the Linux kernel, GNU libc, the X.Org Server, plus its own graphical user environment built using the EFL toolkit, the GTK+ toolkit, the Qt toolkit, and the illume window manager (previously the Matchbox window manager). The OpenEmbedded build framework and the opkg package management system are used to create and maintain software packages.
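As an illustration, day-to-day package maintenance on such a device uses ordinary opkg commands from a terminal; the package name below is a placeholder, not a specific Openmoko package:

    opkg update              # refresh the package feeds
    opkg list | grep clock   # search the available packages
    opkg install somepkg     # install a package and its dependencies
    opkg upgrade             # upgrade all installed packages

This mirrors the apt or yum workflow of desktop Linux distributions, which is part of what made the platform familiar to Linux developers.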
This is a very different approach from that of Android, in which little besides Linux, WebKit, and the Java language is standard. Applications targeted at Android must be substantially rewritten and are largely not portable. Many existing Linux desktop applications can be easily ported to Openmoko. (However, the limited computational power and screen resolution require substantial reworking of existing applications in order to render them usable in a finger-oriented, small-screen environment; the sketch below illustrates the kind of adaptation involved.)
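The following is a minimal, hypothetical sketch of that reworking, written with PyGTK (the Python bindings for the GTK+ 2 stack of the era); the window size matches the Neo FreeRunner's 480x640 portrait screen, and the widget label is invented for illustration:

    # Illustrative sketch: one oversized, finger-friendly control, the kind of
    # adaptation a ported desktop application needed on a small touchscreen.
    import gtk  # PyGTK bindings for GTK+ 2

    win = gtk.Window()
    win.set_default_size(480, 640)    # Neo FreeRunner: VGA portrait
    win.connect("destroy", gtk.main_quit)

    button = gtk.Button("Dial")
    button.set_size_request(-1, 80)   # tall touch target; no precise pointing needed
    button.connect("clicked", lambda w: gtk.main_quit())

    win.add(button)
    win.show_all()
    gtk.main()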
See also
List of free and open source Android applications
References
External links
Smartphones
Mobile operating systems
Embedded Linux
Openmoko
Free mobile software
Linux distributions |
62323456 | https://en.wikipedia.org/wiki/Cybercrime%20Law%20No.%2063 | Cybercrime Law No. 63 | Law No. 63 of 2015 on Combating Information Technology Crimes, known as Cybercrime Law No. 63, is a law of Kuwait concerning various computer-related crimes. It came into effect on 12 January 2016.
The law was criticised by several international human rights organizations for its perceived limitations on freedom of expression and freedom of the press.
Background
Kuwait was once considered the most tolerant Gulf country with regard to freedom of speech, but amendments to national security laws since the 2011 protests have altered this view. Nonetheless, sections of the Kuwaiti constitution, such as Article 36, guarantee freedom of opinion and expression.
Implementation
On 16 June 2015, the law was approved by the National Assembly. It was published on 7 July 2015.
On 22 June 2015, Minister of Justice Yaqoub al-Sane stated the law's purpose was to "preserve social stability" and the law only intended to punish those who "publish pornography or offend others".
Law No. 63 of 2015 on Combating Information Technology Crimes
Law No. 63 contains 21 articles which set out the regulation of various online activities in Kuwait. Some of the articles include:
Article 4 – punishes with imprisonment or a fine anyone who "establishes a website or publishes or produces or prepares or creates or sends or stores information or data with a view to use, distribute or display to others via the Internet or an information technology device that would prejudice public morality or manages a place for this purpose".
Article 6 – punishes anyone that criticises the head of state, "shows contempt or disdain for the state constitution", "insults or demonstrates contempt for the judiciary or prejudices their integrity and impartiality", or "prejudices public morals, incites to breach public order or violate law even if a crime does not occur".
Article 7 – punishes, with up to ten years' imprisonment, a number of acts listed in Article 28 of the 2006 Press and Publications Law, including "the publication of incitement to overthrow the regime in the country".
Article 13 – allows the government to "close shops or locations for a period of one year from the date on which any of the acts listed in the cybercrime law are committed and confiscate devices and software".
International response
In January 2016, a joint statement was released by the Arabic Network for Human Rights Information, Article 19, the Gulf Center for Human Rights, the International Service for Human Rights, and Reporters Without Borders. The statement raised concerns over articles 4, 6, and 7, which the groups believed could be "used to limit freedom of expression on the Internet, as well as to target online activists" and "individual human rights defenders". They called on the Kuwaiti authorities to repeal the selected articles of Law No. 63, as well as the entirety of the Press and Publications Law.
Human Rights Watch argued that articles 6, 7, and 13 work as an "effective barrier to critical political speech over the Internet, contrary to essential safeguards that article 19 of the ICCPR provides".
References
External links
Cybercrime Law No. 63
Politics of Kuwait
Kuwaiti law |
54196474 | https://en.wikipedia.org/wiki/Informatics%20General | Informatics General | Informatics General Corporation, earlier Informatics, Inc., was an American computer software company in existence from 1962 through 1985 and based in Los Angeles, California. It made a variety of software products, and was especially known for its Mark IV file management and report generation product for IBM mainframes, which became the best-selling corporate packaged software product of its time. It also ran computer service bureaus and sold turnkey systems to specific industries. By the mid-1980s Informatics had revenues of near $200 million and over 2,500 employees.
Computer historian Martin Campbell-Kelly, in his 2003 volume From Airline Reservations to Sonic the Hedgehog: A History of the Software Industry, considers Informatics to be an exemplar of the independent, middle-sized software development firms of its era, and the Computer History Museum as well as the Charles Babbage Institute at the University of Minnesota have conducted a number of oral histories of the company's key figures. Historian Jeff Yost identifies Informatics as a pioneering "system integration" company, similar to System Development Corporation. The Chicago Tribune wrote that Informatics was "long a legend in software circles".
Informatics General was acquired by Sterling Software in 1985 in what was the first hostile takeover in the software industry. The acquisition immediately made Sterling Software one of the largest corporations in the software industry, with $200 million in revenue.
Background and founding
Walter F. Bauer (1924–2015), the main founder of Informatics, was from Michigan and earned a Ph.D. in mathematics from the University of Michigan in 1951. His early work was at the Michigan Aeronautical Research Center; the National Bureau of Standards, where he programmed the early digital SEAC computer; and for Boeing's BOMARC interceptor missile. He became a manager at the Ramo-Wooldridge Corporation in charge of a unit with 400 employees and two computers, an IBM 704 and a UNIVAC 1103A, and in 1958 joined the merged Thompson Ramo Wooldridge company. Bauer later said that he "was never a green eyeshade programmer" nor a "strong technologist", but being a systems person and a manager gave him a good grasp of computer systems and their capabilities.
Another key founder was Werner L. Frank (1929–), who during 1954–55 had done programming work on the ILLIAC I at the University of Illinois at Urbana–Champaign. He was then recruited by Bauer and joined Ramo-Wooldridge in 1955, where he did numerical analysis and programming in assembly language and FORTRAN. Working with pioneers of scientific computing such as David M. Young, Jr. and George Forsythe, Frank published several important articles on numerical analysis in the Journal of the ACM and other publications. By 1958, Ramo-Wooldridge had merged with Thompson Products, Inc., becoming the company later known as TRW Inc.; Frank then did early programming on several defense industry computers, including the AN/UYK-1, and spent long stretches of time in Washington, D.C.
The third founder was another TRW colleague, Richard H. Hill, who had been a professor at UCLA and an assistant director of a joint data center between that university and IBM.
In January 1962, Bauer approached Frank and Hill to start a new independent company that would provide software services. At the time, it was an unusual move since few people saw software as a viable business. "Primarily, we were going to develop systems for large-scale computer systems, probably of a military nature. That was our first objective," stated Bauer in a later interview. Despite a lack of any kind of business school training, Bauer put together a business plan for the new company.
Venture capital was hard to locate for such start-ups in that era and Bauer met with several rejections. He and the others then decided to join forces with Data Products Corporation, a newly formed manufacturer of computer peripheral equipment. The co-founder of Data Products, Erwin Tomash (1921–2012), was from Minnesota and had earlier worked at Engineering Research Associates, a pioneering computer firm from the 1950s. He had known Bauer and thought that the two new efforts being formed together would provide a hedge against either one of them encountering start-up difficulties.
Informatics was thus created as a wholly owned subsidiary of Data Products.
The new software firm was capitalized at all of $40,000, of which Data Products contributed $20,000, Bauer $10,000, and Frank and Hill $5,000 each.
The name
The company's name came from the founders' desire to base it on "-atics", a Greek suffix meaning "the science of". Their first thought was "Datamatics", but a form of that was already taken by an early computer from Honeywell/Raytheon; Bauer and the others settled on "Informatics", meaning "the science of information handling".
At the very same time, March 1962, French computer pioneer Philippe Dreyfus came up with the name Société d'Informatique Appliquée for a new firm of which he was co-founder, thus creating a French version of the same name. However, in France, the term "informatique" soon became a generic name, meaning the modern science of information handling, and would become accepted by the Académie française as an official French word. The term then came into common use in a number of other European countries, adapted slightly for each language.
In the United States, however, Informatics fought any such use as an infringement upon its legal rights to the name; this was partly out of fear that the term would become a generic trademark. Bauer later recalled that at one point the Association for Computing Machinery, the leading academic organization in computer software, wanted to change its name to the Society for Informatics, but the company refused to allow that use. Eventually the generic usage of the term around the world caused the company to reconsider and, according to Frank, was the reason for the 1982 name change to Informatics General.
Early history
Informatics, Inc. began operations on March 19, 1962, in Frank's empty house in Woodland Hills in the San Fernando Valley area of Los Angeles. In addition to the three founders, the fourth initial employee was a secretary, Marie Kirchner. An important early hire was Frank Wagner, a North American Aviation executive who was past president of the IBM user group SHARE and had many contacts among that community. Data Products, which served as the Informatics back office, was located in nearby Culver City at that time.
The company struggled at first, winning only a few small contracts, until it improved its presence in government circles and finally, in early 1963, won a $150,000 contract with the Rome Air Development Center. This was a forerunner of several large contracts it would have with that U.S. Air Force facility in years to come, and several other defense sector contracts soon followed. By its second year, Informatics was profitable and had 37 employees; by the third year it was growing well. Informatics was one of the major companies of the time involved in the software contracting business. An early description of the company used in press releases was "Informatics provides analysis, design and consulting services for users of digital processing equipment."
At the time Informatics did little marketing, mostly relying upon its network of personal contacts. The firm was one of forty or fifty software companies started in the early 1960s (many of which are little known to history). Two other prominent firms were Applied Data Research (ADR) and Advanced Computer Techniques (ACT). All three are credited by Campbell-Kelly as firms that succeeded because, and gained awareness due to, the personality of their principal founder; in this case it was Bauer who "succeeded in combining his entrepreneurial activities with his role as a leader in the technical computing community."
Meanwhile, Data Products, which had moved its office to Sherman Oaks, California in 1964 and renamed itself slightly to Dataproducts, was suffering from falling behind IBM on disk drive technology; its eventually successful printer business had not yet taken off. To placate the founders of its subsidiary, Dataproducts gave the three Informatics co-founders 7.5 percent of its stock in 1965. As Tomash later said, "To satisfy them, we deliberately took the step that we knew would separate us in the long run."
In May 1966 there was an IPO of Informatics stock, priced at $7.50 per share, that brought in $3.5 million. Informatics was only the third software company to go public, and its stock was listed on the New York-based over-the-counter market. However, 60 percent of its stock was still held by Dataproducts. At that time Informatics had revenues of $4.5 million and a net income of $171,000, and the number of employees was around 300. By 1967 Informatics possessed some 3% to 4% of the total market for custom-built software.
During the mid-1960s the U.S. stock market went through what was known as the "go-go market" boom, and computer companies became special darlings of traders. Informatics was no exception; its price–earnings ratio rose from 25 at the time of its IPO to 200 by mid-1968 and over 600 by early 1969, despite the company having only $40,000 in earnings for the previous year. Informatics used the proceeds from additional offerings during this period to fund development of its Mark IV product and to create a Data Services Division.
Dataproducts sold off the last of its Informatics stock in 1969, and in doing so Informatics thus became fully independent. For its initial investment of $20,000 in Informatics, Dataproducts had gained about $20 million in return. By 1969, Informatics had revenues of over $11 million with earnings of $561,000.
Origins of Mark IV and the software product business
The history of what became Mark IV goes back to 1960 when GIRLS (the Generalized Information Retrieval and Listing System) was developed for the IBM 704 by John A. Postley (1923–2004), an engineer who had worked for many years in the aerospace industry; the first customer for GIRLS was the Douglas Aircraft Company. Postley was working in the Advanced Information Systems subsidiary of Electrada Corporation along with Robert M. Hayes and others.
In April 1963, Advanced Information Systems was purchased from Electrada by Hughes Dynamics, an early 1960s subsidiary of the Hughes Tool Company that provided computerized management and information services.
Subsequent versions of GIRLS were called Mark I and Mark II; made for the IBM 1401, they were increasingly capable. Under Hughes, Mark III was in development, with key performance improvements.
Hughes Dynamics then decided it wanted to exit the activity of making software.
While accounts later told by some Informatics executives imply that Howard Hughes himself was aware of, or played a role in, what was going on, Hughes biographers suggest that in the secretive world of his empire, Hughes was never informed of the existence of Hughes Dynamics until a couple of years after its creation; once he found out about it, he had it shut down.
In any case, in May 1964, Informatics acquired Advanced Information Systems from Hughes Dynamics. For this it paid essentially nothing: Hughes actually paid Informatics $38,000 to take it, but in doing so Informatics assumed some existing customer obligations of about the same amount.
Within Informatics, Postley became the champion of making another version, Mark IV, aimed at the new IBM System/360 computer line.
Mark IV was not the first file management system/report generator; there had been several efforts in the late 1950s towards this end, including one from SHARE called 9PAC. Indeed, it is possible Bauer and Wagner, who were both active in SHARE (Wagner had been a chair of it), were influenced as to the value of such a product by their exposure to previous efforts in that users group.
But only Postley had the full vision of what a software product might be; Informatics as a whole was reluctant to finance the development cost, which Postley estimated to be half a million dollars. So Postley recruited five companies, each of which provided $100,000: Sun Oil, National Dairy Industries, Allen-Bradley, Getty Oil, and Prudential.
Existence of the new product was first announced in 1967.
Mark IV found quick success as a product: during 1968, its initial year of availability, it garnered orders for 117 installations and sales of nearly $2 million.
IBM then decided to unbundle software from its mainframes in 1969, which helped facilitate the growth of the commercial software industry in the 1970s and beyond. This accelerated sales of Mark IV to several times what Informatics had anticipated.
Equitable Life Assurance Society relationship
Beginning in 1970 the computer industry hit a downturn that lasted several years. Software houses of the time tended to suffer from unprofitable contracts, failed ventures, and slowing demand. Informatics' creation of a Data Services Division, and with it the acquisition of a number of computer service bureaus as a means of providing utility computing, did not go well. In May 1970 Informatics announced a $4.2 million loss, its first since 1963.
But in a time when many software firms did not survive, the more conservatively managed Informatics did.
By 1974, Informatics revenues were up to $33 million.
In 1974, Informatics became a subsidiary of The Equitable Life Assurance Society of the United States, with the goal of gaining the ability to grow organically and to acquire other businesses. This followed a joint venture, called Equimatics, that Informatics had formed with Equitable in 1971 to establish a data services business that would provide such services to Equitable and others in the insurance industry. But in many respects the choice to become a subsidiary was forced by the inability of Informatics, in the gloomy early 1970s, to find investment capital.
For the year 1976, Informatics had revenues of $58 million. It had some 1,800 employees at locations around the world. From around 1976 through to the end in 1985, Informatics corporate headquarters was located in an office along Ventura Boulevard in Woodland Hills.
Seeking to capitalize on the brand of its best-known product, Informatics named some other products with an "IV" in their titles, including "Production IV" for planning in manufacturing and "Accounting IV" for the financial sector. Additional products included Life-Comm and Issue-Comm for the insurance sector, Minicomm and Intercomm for teleprocessing and communications, and CSS, for corporate shareholder processing.
In addition to packaged software, Informatics continued to make custom software and engage in professional services contracts.
The relationship with Equitable did not work out well, and by the late 1970s Informatics sought to be an independent company again. It had a second IPO and starting in 1979 began trading as an over-the-counter stock with the symbol IMAT. Then on June 7, 1982, the recently renamed Informatics General Corporation began trading on the New York Stock Exchange under the symbol IG. It was only the second software company ever to be listed on the NYSE.
Products and divisions
Mark IV and Mark V
Mark IV was a batch processing, early fourth-generation programming language that combined file management and upkeep with report generation capabilities. One taxonomy of application generators published in a scholarly setting placed Mark IV in the category of "Generalized file-management systems and sophisticated report writers". Mark IV was originally designed to be usable by non-programmers, with simple interfaces given for report requests and data updates. This interface consisted of filling out one of several paper forms by hand and then having it keypunched into a machine-readable form that was then processed as a batch operation. To some extent the goal was reached and non-programmers were able to use it. However, experience showed that non-programmers had difficulty understanding the increasingly complex capabilities of the product and that only those with some data processing background were able to use those capabilities effectively.
Mark IV and Applied Data Research's Autoflow are generally considered to be the two most influential early software products.
At this time IBM mainframes dominated the computer landscape, but IBM failed to seize control of the database management or file management areas. Instead, Informatics built up a large sales force that was explicitly modeled after IBM's, with long sales cycles also a characteristic of their market space.
An independent users' group of Mark IV customers, named the IV League (a play on the Ivy League of universities), was created and had its first full meeting in 1969. By 1972 the group's meetings drew up to 750 attendees. Chapters of the group were established in different countries in Europe as well as Japan, and regional groups existed in the United States as well. Existence of the users' group, which tended to be populated by computer programmers, helped push Mark IV towards more sophisticated features with which intricate applications could be built, and further away from the model where non-programmers were intended users.
In the eight years between its introduction in 1968 and 1976, Mark IV was sold into some 1,100 installations around the world and had $50 million in sales.
At the start, and for a long time, the base price of Mark IV was $30,000. It later sold for up to over $100,000 depending upon mainframe size and features desired, and that higher price became a typical cost for customers.
By 1977, Informatics had created a Software Products Group to conduct the Mark IV business.
By 1984 it was still the best-selling software product targeted to corporations in the world, with some 3,000 installations.
At its peak, it was responsible for $30 million in revenues per year.
Over the three decades of the 1970s through 1990s it had some $300 million in sales.
Indeed, Mark IV was the first software product to have cumulative sales of $1 million, $10 million, and later $100 million. It is not only that, as computer historian Thomas Haigh has written, "Mark IV [was] the most successful product of the early independent software industry" – but that it remained the best-selling independent software product in the world for a 15-year stretch.
For a long time Mark IV had few effective rivals in its market niche; as Bauer later remembered, "We didn't have much competition with Mark IV for many, many years. It was just pure sailing for 10 or 15 years."
However, starting in 1980, the technological age of the product became apparent and sales of Mark IV leveled off, amassing only about 60 percent of what Informatics had planned for.
A successor product, Mark V, was released in 1981–82. In contrast to the batch-only features of Mark IV, the goal of Mark V was the generation of online applications, although initially this was still done through some batch-oriented development steps.
The same taxonomy of application generators mentioned earlier placed Mark V in the category of "Application Development Systems", as it covered more advanced capabilities such as generating online systems with screen dialogue and similar features. Mark V was made available for two IBM mainframe online transaction processing environments, IMS/DC and, beginning in 1983, CICS. Mark V never became a dominant force in the marketplace like Mark IV was. It had many competitors, including products from Applied Data Research, IBM, Cincom Systems, DMW Europe, and Pansophic Systems.
Following the acquisition by Sterling Software, Mark IV continued to be a significant product, but in 1994 it was renamed VISION:Builder. By one account, in the late 1990s the product still had close to $20 million in annual revenue. Ownership then passed again in 2000, when Sterling Software was sold to Computer Associates and the product remained under the name VISION:Builder.
Government services and online search
During the 1960s and 1970s Informatics played a key role in the development of online information services. One of these was RADCOL at Rome Air Development Center (site of some of Informatics's earliest contracts); this was short for RADC Automatic Document Classification On-Line, which ran from the late 1960s into the mid-1970s.
Informatics had several contracts with NASA. The earliest, in 1966 (and possibly earlier), was in support of NASA efforts at the Jet Propulsion Laboratory and the Ames Research Center. In conjunction with the contract, Informatics opened a branch office in Glendale, California. Work done there included software developed for the Surveyor, Mariner and Apollo programs with applications as diverse as satellite tracking, redesigning the Goldstone Observatory's antenna, and a database application for maintaining information about primates in use at various NASA laboratories. The program for redesign of the Goldstone antenna used what came to be called a hill climbing algorithm and was given special recognition by NASA in the form of a small monetary prize for its developers.
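Hill climbing, the technique named above, is a simple local-search method: from a current candidate solution, repeatedly try small perturbations and keep only those that improve an objective score. The following Python fragment is a minimal, generic sketch of that idea; the hill_climb function and the toy objective are illustrative assumptions, not a reconstruction of the actual NASA antenna program, whose code is not described in the sources.

```python
import random

def hill_climb(objective, start, step=0.1, iterations=1000):
    """Repeatedly try a small random perturbation of the current
    candidate and keep it only when it improves the objective."""
    best = list(start)
    best_score = objective(best)
    for _ in range(iterations):
        candidate = [x + random.uniform(-step, step) for x in best]
        score = objective(candidate)
        if score > best_score:  # accept only improving moves
            best, best_score = candidate, score
    return best, best_score

# Toy objective with a single peak at (1.0, 2.0); a real application
# would instead score antenna design parameters or the like.
objective = lambda p: -((p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2)
print(hill_climb(objective, [0.0, 0.0]))
```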
Later, Informatics had a long-running contract with NASA from 1968 to 1980.
This began with winning a contract worth over $4 million to operate the Scientific and Technical Information Facility at College Park, Maryland. There Informatics maintained NASA online bibliographic systems, including the pioneering RECON facility. These systems involved abstracts and indexes created against microfilm and other representations of documents on NASA-related subject areas.
Informatics made continual improvements to it, including reducing the response time for queries down to three seconds or less.
Using some of the technology in place at NASA, including the DIALOG system which had been placed in the public domain, Informatics developed online search services in other areas as well during the 1970s, including TOXLINE and CHEMLINE for the United States National Library of Medicine. At one point Informatics made an offer to DIALOG founder Roger K. Summit to join the company, and had he done so, it is possible that Informatics would have entered the commercial online services world with some form of what became DIALOG. Instead, Informatics focused on government and private information services that were developed and maintained on a contractual basis.
By the late 1970s into the 1980s, Geno P. Tolari was the head of Informatics' government and military services operations, which were based in San Francisco, California.
Following the Sterling Software takeover, Tolari stayed on as chief of what became known as the Federal Systems Group.
Data Services Division
Although Informatics was always best known as a software company, it maintained a continuing presence in the services arena, with service processing and facilities management often accounting for around a quarter of Informatics' revenue.
This activity was the responsibility of the Data Services Division, which was funded out of Informatics' stock offerings during the late 1960s. Informatics spent $3.6 million acquiring a number of existing computer service bureaus with the goal of providing utility computing. The timing was poor, as the boom in such services soon turned to bust, and the Data Services Division lost $100,000 a month during 1970.
Nevertheless, the division kept on going. Based in Fairfield, New Jersey, by the mid-1970s it offered a virtualized VM/370 platform, based on both IBM System/370 systems and Itel's IBM mainframe-compatible AS/5 and AS/6 systems.
The network access featured multiplexers located in various U.S. cities. Users could work in either OS/VS batch mode or VM/CMS interactive mode with a variety of programming language and program development tools available as well as access to an IMS database. The service offering also provided programs to optimize telecommunications usage and costs.
Typical customers of the Data Services Division during the 1970s included the General Services Administration for hosting a teleprocessing services program, the National Highway Traffic Safety Administration for hosting a reporting system, and Simplan Systems, Inc. for macroeconomic modeling.
Informatics still offered time-sharing services into the early 1980s. Then the Fairfield division, by that time known as the Data Services Operation, was sold to Mellonics Systems Development, a division of the Litton Industries conglomerate, in 1984.
Answer Division
During 1979 and 1980 Informatics tried to broaden its range of IBM mainframe-related products beyond just Mark IV. Database management systems were becoming increasingly popular, but Informatics decided not to create its own such system, instead making products that worked in conjunction with IBM's database and data communications products, such as IMS and CICS, respectively. The Answer Division was created to fulfill this goal, although at one point, the Mark IV product line itself was also moved into the division. The Answer Division was located in the Canoga Park area of Los Angeles.
Answer/2 was a product released in 1979 that was billed as a moderately priced report writer for files on IBM mainframe operating systems.
It was followed by Answer/DB, a product introduced in 1981, that allowed end users at terminals to make queries against various files and IMS databases on the same IBM mainframe operating systems.
Informatics then put out a series of products that linked specific popular PC-based applications to Answer/DB on the mainframe. Such linkages were a frequent aim of products being developed during this time.
For Informatics, these products were called and released as Visi/Answer in 1983, dBASE/Answer in 1984, and Lotus/Answer also in 1984, so named because they represented links for VisiCalc, dBASE, and Lotus 1-2-3. The products generally communicated to the mainframe over IRMA boards or the FORTE package. Another implementation of these products, for the IBM 3270 PC, was billed as Micro/Answer and released in early 1985.
Sales of Visi/Answer were much slower than Informatics had anticipated. Instead of seeing the sort of short sales cycle that one would anticipate with PC products, potential customers viewed the link as a strategic decision and Informatics saw the same kind of long sales cycles they were used to encountering with their mainframe products.
By 1985 the Answer product line was continuing to experience high costs and disappointing sales. In general, Informatics was one of a number of successful mainframe-based software companies that failed to do well in the microcomputer market, either because they did not see that market as being worth the effort or because the high-volume, low-price nature of that domain was the opposite of the low-volume, high-price environment they were used to.
Management Services Division and Ordernet
William D. Plumb was a pioneer of electronic data interchange who began thinking about it while at a Columbus, Ohio-based firm known as Management Horizons. The data processing part of this firm was spun off as a subsidiary, Management Horizons Data Systems (MHDS), which provided transaction-based computer services to wholesale distributors. MHDS was subsequently acquired by Citibank.
Informatics then bought the MHDS subsidiary from Citibank in 1974 or 1975 for $3.4 million. Plumb's vision of electronic data interchange was constructed as a service called Ordernet, which entered the market in 1978.
Ordernet was an early e-commerce initiative that provided electronic interchange of purchase orders and associated business documents between manufacturers and distributors.
In particular, it was set up as a service bureau that would provide a solution to distributors looking to handle business-to-business transactions. In 1975 Informatics had arranged with the National Wholesale Druggists’ Association to create a central clearinghouse for the processing of electronic purchase orders within the industry. In 1978 that association formally endorsed the use of Ordernet, which led Informatics to create an Ordernet Services Division. As a business unit within Informatics, this division was essentially a one-person effort at the beginning.
The electronic data interchange industry continued to grow in its adoption of standards and more agreements were made in regards to Ordernet.
By 1982 four trade associations had endorsed the use of Ordernet, the most recent being the National Association of Service Merchandising.
Informatics' Columbus operation, subsequently known as the Management Services Division, included more than just Ordernet and Warner Blow was the executive in charge of it.
Ordernet was one of the main prizes that Sterling Software sought by acquiring Informatics in 1985.
It was expanded greatly under Sterling Software as a series of e-commerce initiatives, so much so that it was later spun off as its own company, Sterling Commerce, in 1996. Warner Blow became the CEO of Sterling Commerce.
Frank would later say, "Little did we realize that this business would one day be a raging success that would bring its owner into the great New World of E-commerce and ultimately the Internet."
TAPS Division
The Terminal Application Processing System, known as TAPS, had been created by a Midtown Manhattan-based firm named Decision Strategy Corporation, which was founded by Michael J. Parrella. Intended to significantly reduce the development time for online, CRT terminal-based applications, TAPS had been around since 1974 and initially ran on IBM mainframes under the CICS teleprocessing monitor and the TCAM access method.
The core idea was to allow, by the creation of tables and other specifications, the user to create all of the functionality needed by an online application, without requiring user programming. TAPS was not only a development tool for making online applications but also a production environment to run them within, and as such provided essential capabilities including network security and control, screen mapping and data editing, menu processing, database maintenance and inquiry, concurrency protection, and network and database recovery.
During the late 1970s TAPS was ported to a number of minicomputer platforms, including the Digital Equipment Corporation PDP-11, the Hewlett-Packard HP 3000, Perkin Elmer's Interdata minicomputers, and the IBM Series/1, along with systems from Harris Corporation and Tandem Computers.
At this time some 70 percent of TAPS sales were to other companies doing software development, such as McCormack & Dodge and On-Line Systems, Inc., in what the firm said was a deliberate strategy to first market the product to customers who would be "the toughest test of all".
Over time Decision Strategy Corporation fell under financial stress and went through a significant downsizing. In October 1980, it was acquired by Informatics. Bauer stated that Informatics wanted an entrée into the minicomputer market and Frank had been looking for a while for a transaction- and terminal-based application building system. As part of the acquisition, Informatics created a TAPS Division in New York with Parrella as its head.
Freedom from vendor-specific databases and data communications was a desirable quality in application generators, and Informatics continued to stress the portability of TAPS across different hardware, operating systems, and terminal models.
Prime Computer became an important minicomputer platform for the product; also supported was the NCR 9300 under ITX.
Projects were undertaken to expand the number of IBM platforms that could host TAPS, to include not just System 370 OS-based ones such as OS/VS1 but also the DOS-based SSX/VSE for the IBM 4300, and even the relatively obscure IBM 8100 distributed processing engine.
The overall goal was a product that could span across mainframes, minicomputers, and microcomputers.
Applications could be built and tested in one environment, such as an IBM mainframe in a data center, and then run in another environment, such as a minicomputer located at a regional site or a microcomputer located in the field.
TAPS found its biggest market in the U.S. government, with its portability a big advantage for such customers, since they often possessed a disparate collection of computer systems brought about by lowest-bid government contracting requirements.
The U.S. Navy in particular was a major customer going back to the 1970s. By the mid-1980s, TAPS was heavily used inside the Navy's stock management and distribution system.
During the early-mid-1980s TAPS underwent an implementation change from TAPS I, which was written in less-portable languages, to TAPS II, which was written in an explicitly designed portable dialect of the Pascal programming language.
Although he was gone from Informatics by that time, Frank later wrote that "Unfortunately, TAPS did not become economically viable and was ultimately de-committed."
In any case, an early 1985 reorganization within Informatics saw a proposal that the TAPS Division be moved from New York to Rockville, Maryland. Instead, most of the division's employees left. Effective control of the TAPS product went to SOFT, Inc. (Source of Future Technology), a consulting company in New York City that had previously done work on the product and was known for being one of the few consulting firms that was owned by women.
SOFT did development work to keep TAPS going on the Tandem and especially IBM platforms, and TAPS remained in use by the Army and Navy for accounting, personnel, and distribution and supply applications into the 2000s, with license renewals and maintenance payments from the Defense Information Systems Agency of around $800,000 a year through at least 2009. It was not until 2015 that TAPS was finally retired from service by the U.S. military.
Life Insurance Systems Division
United Systems International was a Dallas, Texas-based company that was building an ambitious solution for automating the back-office functions for companies that offer life insurance. Informatics acquired it in 1971 as part of the Equimatics initiative. From this the Life-Comm solution emerged; the Life-Comm III version in particular became popular in the mid-1970s, quickly getting to the $1 million level in sales and growing to have several dozen customers among insurance companies. It eventually became the leading product in the field.
But then around 1984 it fell into difficulty and was responsible for some of Informatics' declining financial fortunes. In late 1984, the division was sold to The Continuum Company.
Legal software divisions
Informatics had two divisions that related to computer support for law firms.
One was the Legal Information Services Division, which was begun around 1974, was based in Rockville, Maryland, and provided a service bureau for litigation support services.
In particular it offered a legal support service that assisted law firms with large-scale document maintenance and retrieval functions in complex litigation efforts.
The basis for this service was online search work in the legal area that Informatics had done as part of its government services business.
This unit was also sometimes known as Legal Information Services Operations.
The other had its origins with Professional Software Systems, Inc., a Phoenix, Arizona-based firm that created law practice management software for U.S. law firms. Founded around 1976, it provided a turnkey solution that ran on the Wang VS minicomputer.
It was one of the first software companies to realize that law firms needed dedicated computer support for client billing operations, and from that need its Legal Time Management System product was created. By 1980 the firm had a customer base that included 75 major law firms and revenues of about $5 million per year.
In May 1981, Informatics acquired Professional Software Systems, and in so doing created the Professional Software Systems Division. Continuing to sell the Wang-based Legal Time Management System turnkey solution, the Phoenix division had yearly revenues on the order of $30 million by the mid-1980s. It would claim in advertisements in the ABA Journal to have 30 of the largest 100 law firms as customers and to be the top supplier of integrated legal word and data processing systems.
Following the Sterling Software acquisition, the Rockville operation was sold in 1987 to ATLIS. As an entity, ATLIS Legal Information Services persisted at least into the early 1990s. The Phoenix operation was sold several times, beginning in 1986, and also was still active into the early 1990s as owned by Wang Laboratories.
Professional services
Even with the success of Mark IV, contract programming services still remained the primary revenue generator for the company during much of its history.
The company was still engaged in professional services as of 1984.
Bauer later said that while Informatics had a good start on professional services, they never really grew that business and thus missed a major market opportunity.
Others
CPM Systems, Inc., a pioneer in the critical path method (CPM) and program evaluation and review technique (PERT) of project scheduling, had begun as part of Hughes Dynamics. In 1965 Informatics acquired it and formed the CPM Systems Division, led by Russell D. Archibald and located in Sherman Oaks, Los Angeles. Much of its focus was on the efficient planning and construction of tract housing, but the business dissipated during a housing downturn in the late 1960s.
During the 1970s Informatics brought out accounting software, but failed to compete effectively with that from Management Science America.
Business Management Systems was another division of Informatics in early 1985, located in Atlanta.
Final years and the Sterling Software takeover battle
Informatics continued to grow, both organically and via acquisition. Indeed, by the early-mid-1980s Informatics General had made more than thirty different acquisitions along the way.
Werner Frank had a parting of the ways with Informatics management and left the company at the end of 1982, with relations between him and Bauer turning acrimonious.
There were attempts to change the structure of Informatics' management, such that Bauer would be less involved in operations. Accordingly, in February 1983, Bruce T. Coleman was named president of the company. However, during a large-scale reorganization of the company in August 1984, which involved the selling off of some unprofitable businesses, Coleman departed and Bauer resumed being both chairman and president.
The company continued to have strong revenue growth, moving from $129 million in 1982 to $152 million in 1983 to $191 million in 1984. Profits followed the same path for most of the time, with seven straight years of increasing earnings through 1983, including moving from $5.4 million and $1.49 per share in 1982 to $8.5 million and $1.67 per share in 1983. But then in 1984 earnings declined to $4.7 million and 82 cents per share, with two of Informatics' ten divisions showing an outright loss. The performance of Informatics stock became erratic, as exemplified by a market close in December 1983, when the New York Times wrote that Informatics General was the "big loser" of the day after a poor earnings forecast was put out, or by a sharp one-day drop in July 1984 when a forecast for a break-even quarter was released.
By 1985, Informatics General had some 2,600 employees and offices in 30 cities in North America and in nine other locations around the world. It was the fourth largest independent software company in the world. Informatics had a solid cash position and almost no long-term debt.
However, the company and its stock were considered, in the words of the Los Angeles Times, a "chronic underachiever" and "a lackluster performer on Wall Street". Overall the stock had fallen from a one-time high of $34 per share to around $17, with a low point of $14. In the 1984 book The Coming Computer Industry Shakeout, writer Stephen T. McClellan had characterized Informatics General as "Doing too many things, none of them well." He criticized company management, further saying that "Bauer, the longtime chairman, is 60 years of age and has managed the firm too autocratically and too monotonously for too long." As a result, Wall Street analysts considered the company a prime target for acquisition, with the expectation that new management could make it better.
Sterling Software had been founded in 1981 by executive Sterling Williams and investor Sam Wyly and found growth via a series of acquisitions, becoming public in 1983. Wyly had a controversial background with both successes and failures, the latter including a $100 million loss in attempting to establish Datran, a U.S. nationwide digital network in direct competition with AT&T.
Werner Frank had begun consulting for Sterling Software almost as soon as he left Informatics and became an executive vice president of Sterling in October 1984.
Sterling Software saw Informatics General as a possible acquisition, but Informatics management decided it did not want to be acquired, and especially not by Sterling Software. On April 15, 1985, Sterling offered $25 per share for Informatics, then when that was rejected by the Informatics board, on April 22 increased the offer to $26 per share.
When that too was rejected, the acquisition attempt became an overt hostile takeover that was later described by one Informatics executive as "an all-out war", with both financial interests and pure ego driving it. Sterling decided to stage a proxy battle, taking out full-page advertisements in newspapers such as the Wall Street Journal and the Los Angeles Times to try to convince shareholders to elect Wyly and Williams to the Informatics board at an upcoming shareholders' meeting.
This was the first hostile takeover attempt that the software industry had ever seen. Until then, received opinion had been that it would be counterproductive, due to the rationale, as Wyly later said, that "nobody can do a hostile takeover of a software company because the talent will walk out the door." However, Wyly felt that in this case, the staff in question would view more competent management coming in "not as conquerors but as liberators." Financing for the takeover attempt came from Michael Milken and the "junk bonds" of Drexel Burnham Lambert. Bad feelings ensued all over, including a lawsuit by Informatics that in part charged that Sterling had benefited from confidential information from Frank, a charge that many people gave credence to but that he always strongly denied. (In Bauer's later rueful estimation, the main beneficiaries of the takeover struggle were lawyers and investment bankers, who received millions of dollars in fees no matter the outcome.)
On May 9, 1985, Informatics management won the proxy battle, by a 70-to-30-percent margin reelecting Bauer and another board member rather than electing Wyly and Williams. But Sterling also had a victory because some proposed enhanced anti-takeover measures were not approved. Furthermore, the fact that trading in the stock on Wall Street had become quite heavy, with some 70 percent of its issue changing hands during the battle, led Bauer to conclude that the company's shareholders actually did want to be acquired. Attempts by Informatics to find a white knight came up empty. A series of other possible proposals for Informatics soon emerged, however; these included two specific offers, one for a private leveraged buyout proposed by Bauer, the other from an unidentified third party. But these were seen as inferior.
So finally, on June 21, 1985, it was announced that the Informatics board of directors had agreed to the company's acquisition by Sterling for $27 per share, or $135 million in total. The acquisition was approved by Informatics shareholders in a process that ended on August 13, 1985. At that point, as the Chicago Tribune later wrote, "the Informatics name, long a legend in software circles, was gone."
Aftermath and legacy
Overnight, Sterling Software became a $200 million in revenue company, up from $20 million, and one of the biggest firms in the software industry. One Computerworld writer referred to the takeover as "the guppy swallowing the whale."
The entire staff of the Informatics corporate headquarters office in Woodland Hills was let go, including Bauer. Bauer had been CEO of Informatics for its entire 23-year history, in what he believed was a record at the time for the longest period that a founding CEO had lasted in that position in a company. Bauer also believed he was the longest-tenured CEO in the computer industry at that time.
In reflection on the hostile takeover process a couple of years later, he said, "I've been associated with a lot of firsts in the software industry. This was one I could have done without."
Sterling Software management insisted in the first years after the acquisition, and later in oral histories, that the transition had gone well, that layoffs other than at the corporate office had been minimal, and that they had brought about better performance than Informatics management had. Informatics employees sometimes had a different perspective, as some 40 percent of the staff at the Canoga Park facility were laid off in September 1985, on a day employees called Black Thursday.
Sterling sold off several Informatics divisions as part of paying off the takeover financing. Other units became part of the core of Sterling Software going forward.
The Ordernet business of Informatics was expanded greatly under Sterling Software as a series of e-commerce initiatives under the rubrics Electronic Document Interchange and Electronic Data Interchange, so much so that it was later spun off as its own company, Sterling Commerce, in 1996.
The Informatics brand name may have lasted longest in connection with one of its aforementioned legal software entities, the Professional Software Systems Division. Sterling Software renamed it as the Informatics Legal Systems division, then sold it in 1986 to Baron Data Systems, a company that made legal and medical systems. Advertisements from that entity stressed "Informatics" far more than "Baron Data". In 1987 Baron Data was acquired by Convergent Technologies, a computer maker; Informatics Legal Systems remained as the name of the subsidiary under Convergent. But the legal software still ran on Wang systems and thus was not a match with the parent, so in 1988 the Phoenix operation was acquired by Wang Laboratories itself. There it became known as the Wang Informatics Legal & Professional Systems, Inc. wholly owned subsidiary and was still based in Phoenix. Wang Informatics was still active in 1992 when Wang Laboratories itself went into bankruptcy.
In 2000, Sterling Software was sold to Computer Associates. That same year, Sterling Commerce was sold to SBC Communications; it later became part of IBM.
Relations between Bauer and Frank did not remain completely sundered, and in 1999 Frank attended, along with Wagner, Postley, and three other early executives, a private "Informatics Retrospective" hosted by Bauer, where they could, in Bauer's words, "discuss what happened, good and bad."
References
Bibliography
Chapter also appears beginning on p. 31 of pdf and cited page numbers are to those pages.
Further reading
Praised by Campbell-Kelly as a major corporate history but is a privately published typescript and thus hard to find – see WorldCat entry
External links
Informatics brochure for Mark IV, 1972
Software Memories entry – MHDS and its successors
Defunct software companies of the United States
International information technology consulting firms
Software companies based in California
Companies based in Los Angeles
Software companies established in 1962
Software companies disestablished in 1985
1962 establishments in California
1985 disestablishments in California |
457774 | https://en.wikipedia.org/wiki/Frank%20Lorenzo | Frank Lorenzo | Francisco Anthony "Frank" Lorenzo (born May 19, 1940) is an American businessman. He is well known for his management of Continental Airlines and Texas International Airlines between 1972 and 1990, through the era of airline deregulation. Lorenzo also led the creation and management of the holding company for the group, Texas Air Corporation, through which New York Air was formed in 1980 and Eastern Air Lines was acquired in 1986, along with Frontier Airlines and People Express Airlines.
As an airline manager, Lorenzo earned a reputation for low fares and cost cutting, as well as for being confrontational with unions, as he and other airline managers pursued an affordable business model for the newly deregulated aviation era. Lorenzo was held in high esteem by airline regulators as committed to low fares, while others in the industry referred to Lorenzo as 'the embodiment of deregulation.' As R.E.G. Davies, the former aerospace historian at the Smithsonian Museum, put it, Lorenzo "challenged the entire structure of cost-and-revenue relations that had long existed in a closely protected corporate environment and in doing so generated a cause célèbre of historic proportions."
Since 1990, Lorenzo has been chairman of Savoy Capital, Inc., an asset management and venture capital firm, and he has served as a trustee for The Hispanic Society of America since 2005.
Early life
Born to Spanish immigrants Olegario (died 1980) and Ana (née Mateos, died 1996), Lorenzo grew up in Queens, New York. His father was a long-time beauty salon proprietor in Manhattan; his mother worked as a hairdresser at their salon.
Public high school, Columbia and Harvard Business School
Lorenzo attended Forest Hills High School and then worked his way through Columbia University, holding several jobs, including at Macy's as a salesperson and Coca-Cola as a truck driver and member of the Teamsters Union. He graduated in 1961 with a B.A. degree in economics, followed by an MBA from Harvard Business School in 1963, as he was turning 23.
Jobs at TWA, Eastern, and the Army
Lorenzo's first professional jobs, from 1963 to 1966, were at Trans World Airlines, as a Senior Analyst, and then at Eastern Airlines, as Manager of Financial Analysis. He also spent six months in the Army reserve in 1964, and then returned to his job in New York.
Lorenzo, Carney & Co.
He formed Lorenzo & Carney, Inc., a financial advisory firm specializing in airlines, in 1966 with Robert Carney, who had also attended Harvard Business School. Their first offices were atop the Pan Am Building in New York. The firm participated in the establishment of a supplemental airline, Universal Airways, and also in the refinancing of British West Indian Airlines (BWIA).
Jet Capital
Lorenzo and Carney later formed Jet Capital Corporation in 1969, initially as an aircraft leasing company. Jet Capital raised $1.5 million in a public stock offering, and was later asked by Chase Manhattan Bank to attempt a refinancing plan for Texas International Airlines (TIA) as an alternative to bankruptcy, as the airline was in default on its aircraft bank loans. Jet Capital put together a refinancing and equity infusion for Texas International Airlines in 1971.
Early airline career
Texas International Airlines
As part of the refinancing plan for Texas International Airlines (TIA), Jet Capital acquired control of TIA in 1972, and received 26% of TIA's equity interest and 59% of its voting power for $1.15 million. Lorenzo became its president and Carney its executive vice president in August 1972. The deal was approved by the Civil Aeronautics Board (CAB), which regulated the airline industry in the United States at the time, after a contentious CAB approval process fought with Howard Hughes, who at the time controlled Hughes Airwest.
When Lorenzo took control of TIA in 1972, it was on the verge of financial collapse, having lost money since 1966. Two years later, Lorenzo's management was able to steer the company to break even due to significant streamlining of operations. Some years later an analyst from Oppenheimer & Company, citing Lorenzo and business-partner Carney's strategies of replacing obsolete planes with jets, eliminating unprofitable routes in exchange for destinations with higher demand, and instituting half-price "peanut-fares" for the first time in the aviation industry, noted TIA's ability to compete and win against much larger and financially stronger companies. TIA also had to compete with a new-at-the-time Texas intrastate airline, Southwest Airlines, which was union-free then and had much lower costs.
Lorenzo's management changed the character of the airline, having revamped its fleet, increased utilization, cut costs and dropped many losing routes, while adding flights on strong segments. The average trip distance per passenger boarded jumped 25% from 1972 to 1976. In 1977, the company earned $8 million, and in 1978, TIA's reported net income was up to $13.2 million and it was described as an "aggressive, innovative carrier".
In 1977, just before the advent of airline deregulation, TIA implemented the first unrestricted airline low fares, which were called "Peanut Fares." The success of this promotion drove major improvement in TIA's financial position. In his book on aviation history, author R.E.G. Davies refers to these fares as "...a watershed in airline passenger tariffs."
Airline takeover attempts
National Airlines
In 1978, the airline world was astonished to learn that Lorenzo (TIA) had moved to take over National Airlines, a domestic trunk airline three times its size, having purchased 9.2% of the stock of National Airlines.
For several weeks starting in June 1978, Texas International acquired shares of National Airlines, filing documents with the U.S. Securities and Exchange Commission when its stake reached just short of the 10% that would require prior approval of the Civil Aeronautics Board (CAB). TIA was reported at the time to be studying the "possibility of seeking control" of National. Several weeks later, TIA announced intentions to buy up to 25% of National's stock, and requested approval from the CAB to acquire the stock and direct control of National. TIA's attempted takeover of the much larger National "surprised a lot of people", because National was so much larger than TIA at the time and no airline had ever attempted an "unfriendly" takeover of another airline before. Lorenzo's management team viewed National's stock as significantly undervaluing the assets of the company, and the move was financial in addition to being strategic and aimed at combining the routes of the two airlines.
A competing offer for control of National was submitted to the CAB by Pan Am, and National ultimately agreed to be acquired by Pan Am; these actions drove up the price of National's stock. The CAB gave TIA and Pan Am each permission to acquire up to 25% of National's shares, and the two companies acquired 45% of the shares in total. The management and directors of National, as a group, owned less than 5% of the outstanding shares. In December, Eastern Airlines also joined the bidding for National. TIA and Pan Am considered the bid a "ploy to block their own chances" of completing the deal, but Eastern chairman Frank Borman called the offer serious.
In April 1979, the United States Department of Justice and the United States Department of Transportation both announced opposition to a merger of Eastern and National on the grounds that the merger would be anti-competitive. In July, the CAB said it "would not stand in the way of airline mergers that appear to benefit the public". National accepted the Pan Am offer and did not submit TIA's offer to its shareholders, which rendered the TIA offer "effectively dead". TIA agreed to sell its shares in National to Pan Am, earning TIA a net profit on the merger attempt of approximately $46 million.
TWA in 1980 and 1985
With a large amount of cash on hand, observers said that TIA was likely to begin another acquisition attempt quickly, to preclude becoming a takeover target itself. TIA did exactly that, accumulating shares of Trans World Airlines (TWA), an even bigger target than National. TWA had airline revenues about 14 times those of TIA, and the total revenue of TWA's parent corporation, Trans World Corp., was 20 times TIA's. TIA's plan was to merge TIA with the TWA subsidiary of Trans World Corp., providing it with smaller feed aircraft and a feed network.
TWA was opposed to the merger; its board "unanimously affirmed" publicly that TWA was not for sale. After accumulating more than 4% of Trans World's stock, TIA eventually backed out of the proposed deal, selling all of its holdings by January 1980.
In 1985, Lorenzo's team again sought TWA as a merger partner. This time they were approached by the management of TWA as a "white knight" against the threatened acquisition of TWA by Carl Icahn, who was feared by TWA management and employees. Lorenzo signed a formal contract for the acquisition of TWA in June 1985. However, Icahn pressed on with his acquisition attempt after he received the backing of TWA's pilot union who promised major cost cuts.
In August 1985, the TWA board voted to cede the company to Icahn, who didn't require any government approval, while Lorenzo's airline would, since an airline's acquisition of another airline still required DOT approval.
Formation of Texas Air as holding company of group
Lorenzo restructured TIA in 1980, forming a holding company called Texas Air Corp, which was controlled by Jet Capital, as TIA had been. Texas Air owned TIA and had cash holdings of about $100 million, which Lorenzo said would be used primarily for investments in the airline industry.
Startup of New York Air as separate sub in New York
Later in 1980, Texas Air formed a new airline, New York Air, with $25 million in capital. Lorenzo was the chairman of the new company, which was to fly shuttle flights between New York La Guardia, the airline's hub, and Boston Logan and Washington National airports. Plans included expansion to several cities in the Northeast.
New York Air planned to win customers by charging lower fares and providing better service and roomier seats than the dominant carrier between New York and Washington, Eastern Airlines, which responded by offering lower fares. Pan Am, which had announced competing service after New York Air's formation, also lowered its fares.
The formation of New York Air was opposed by the unions of Texas International Airlines, who were upset that employees were being hired with pay rates substantially lower than union rates at TIA; in the case of pilots, the rate was about half of the union rate. The unions felt the company was set up specifically to "sidestep longstanding labor contracts with organized employees" and that the jobs should have been offered to TIA employees. The pilots' union planned a $1 million protest campaign and a boycott of companies that did business with Texas Air, the parent company, but the campaign was unsuccessful and was called off because of the 1981 air traffic controllers' strike.
New York Air, managed independently of the other airline subsidiaries, expanded in the 1980s to new cities and was successful in building a strong presence in the profitable Northeast shuttle markets.
Continental Airlines
Purchase
Continental Airlines (CAL) had been on TIA's screen as a potential merger partner for quite a while. CAL had been losing money regularly and seemed not to have developed an effective strategy to deal with airline deregulation and the new competition that it brought. Its western-oriented route structure fit well with TIA's network. Lorenzo made two attempts to merge the companies on a friendly basis, but was rejected by CAL's long-time chairman, Bob Six.
In January 1981, Texas International Airlines announced an offer to buy Continental. It had already purchased just over 9% of Continental's shares, and offered to buy enough to bring its stake to between 35% and 49%. Continental already had an agreement to merge with Western Airlines, and Texas Air announced that it would seek to vote its shares to block that merger, which it felt was not in the interests of CAL shareholders. As with previous attempts, TIA's takeover target was a much larger company than itself.
There was opposition to TIA's planned takeover. In particular, unions feared that Lorenzo would lay off workers and that he was anti-union, as demonstrated, they argued, by the formation of New York Air as a non-union company; Lorenzo denied the charges, making clear that employees had the right to select union representation if they so desired. As TIA acquired 49% of Continental's shares, an employee group attempted to stop the plan through legal maneuvers designed to give employees control of the company rather than sell to TIA. Continental's chairman, Alvin Feldman, objected to the purchase, telling Lorenzo that he thought the combined company would be "very weak"; as the takeover battle continued, Feldman was described as being "bitterly opposed" to the plan.
The financing for the employee group's planned purchase fell through, and as Continental was preparing a public announcement, Feldman committed suicide in his Los Angeles office. TIA received approval for its purchase from the CAB, and President Ronald Reagan did not block the deal. The legal battles between the two companies ended in November, with Lorenzo and two associates elected to the board of directors of Continental.
1981 was a financially difficult year for Texas International, New York Air, and Continental. Lorenzo claimed, and some analysts agreed, that Continental's problems were worse than they had appeared before his purchase of the airline. Continental's losses for 1981 were over $100 million, and an economic recession further hurt Continental's finances. It attempted to renegotiate contracts with several unions.
A series of steps to combine TIA and Continental followed. Lorenzo consolidated operations at TIA's historic Houston base as part of a plan to combine the routes of TIA and Continental in a hub-and-spoke system centered on Houston and Denver. Continental moved its annual meeting from May to March 1982; at that meeting, Lorenzo was named chairman of Continental. In July, Continental and Texas Air approved a "financial merger" of the companies, although operations were still to remain separate. In September, the employees of both Texas International and Continental were informed that there would be "operational and management integration of the two companies effective Oct. 31 [1982]". Although Texas Air remained the parent company, the combined companies operated under the Continental name, brand, and aircraft livery.
Strike and bankruptcy
After 19 months of negotiations, Continental and its mechanics' union failed to reach a labor agreement, and the mechanics went on strike in August 1983. Continental continued to operate, partly by hiring replacement mechanics. Because the airline was still losing money heavily due to its cost structure, management gave a final proposal to its pilots that provided for ownership by the pilots and other employees of 35% of the company's stock. When that plan was rejected, the company filed for Chapter 11 bankruptcy protection from creditors on Saturday evening, September 24, 1983, laying off 65% of its employees. Following the bankruptcy filing, the pilots' union went on strike to protest the lower wages being offered by the company. The bankruptcy filing allowed Continental to void union contracts. The airline returned to operating profitability in 1984, but remained under bankruptcy protection until 1986, after working out plans with creditors to repay 100% of its debt over a period of 10 years.
During the bankruptcy, Continental claimed that its labor costs were too high and would force the company out of business completely if left unchanged. The unions claimed the bankruptcy was simply a legal maneuver intended to void contracts. The company trimmed operations and expenses during the reorganization, cutting one-third of its employees and introducing new labor contracts that voided the old union contracts. The average salary of Continental's pilots after the bankruptcy filing was 30% to 50% lower than before it. Senior management also reduced its salaries to match the pilots'.
Expansion after the New Continental is "born"
The "New Continental" began emerging while in bankruptcy. With its cost structure in line with any post deregulation upstart carrier, the airline began competing effectively with legacy carriers. By May 1984, Continental was operating 83 of its 105 aircraft, which had been mostly grounded upon filing for bankruptcy. Though operating near capacity, it was doing so with nearly half the 12,000 employees, on half the average wage cost, than prior to bankruptcy. By the summer of 1984, the Lorenzo team had filled many of the positions downsized during the strike, and employment had reached 9,000 with a route system that served 67 destinations. Planes were averaging a 67% load factor, amongst the highest in the industry. New Continental's low-cost carrier strategy was beginning to bear fruit.
In February 1984, Continental received a favorable ruling from Bankruptcy Judge R.F. Wheless Jr. who ruled that "Continental was justified in rejecting its [pilots’ union] contract it described as burdensome" and that it "had no choice but to file for reorganization." Continental's Lorenzo and his team had successfully argued that labor "costs would force the airline to liquidate."
In the second quarter of 1984, led by Lorenzo's team, Continental's restructuring showed more promise. The company reported a net profit of $10.4 million, which was a $36.9 million improvement from the $26.5 million loss during the same time period in 1983.
Further signs that the Lorenzo management team's new low-fare, high-frequency business model was working began to show. With all fares initially set at $49, the airline was able to compete with legacy carriers. It also put in place a stock ownership plan with a grant of 1 million shares, profit sharing and other new forms of compensation, including options to buy new stock to reward employees’ loyalty during and after the union strikes. Pioneering innovations in the industry, such as the self-service ticketing Flying Machines and Continental's partnership with Federal Express to provide overnight ticket delivery nationwide to customers, proved effective.
Emergence from Bankruptcy
By September 1984, Lorenzo's team had led the airline to surpass the number of seat miles flown prior to bankruptcy with 25% fewer employees. Though the airline was price-competitive with new entrant carriers, it still offered full services such as hot meals, checked baggage, a frequent flyer program and code-sharing with partner airlines. Record profits of $30.3 million were reported for the third quarter of 1984. Continental had successfully transitioned from "a high-cost airline losing its proverbial shirt to being a low-cost carrier making good profits." In September 1986, the "New Continental" successfully emerged from bankruptcy as a lean, sustainable and profitable airline. Notably, Lorenzo's management team had reached agreement with its creditors to repay them at 100 cents on the dollar with no equity dilution, a rarity in previous bankruptcies. Employees and management were issued equity and equity incentives.
Growth of Continental in the 1980s
Continental's rapid growth continued through the 1980s. During this period it began service to Europe and added service to Asia, in addition to growing its operations domestically. In February 1987, Continental merged with the operations of the recently acquired People Express and with New York Air, expanding its route system. Severe service disruptions were experienced during this merger.
In 1987, Continental also inaugurated a major new hub operation at Newark Airport's Terminal C, with rights acquired in the People Express acquisition. SAS moved its operations from JFK to Newark in a joint service arrangement in 1988. Continental also made a major image change and eliminated first class service, giving business class passengers the former first class seats; this service was later marketed as BusinessFirst.
By 1989, Continental served 133 airports around the world, carried approximately 9% of US commercial air traffic and offered more than 1,250 daily departures.
Eastern Air Lines
Background
In 1975, Frank Borman became president and CEO of Eastern Air Lines. It was ailing financially, and Borman negotiated concessions from its employees, but he also ordered expensive new airplanes. He also clashed frequently with the head of the machinists' union, Charlie Bryan. The atmosphere at Eastern was such that "[l]abor brawls struck Eastern with the regularity of tropical storms in Florida". (The company was later described as having been "anemic for most of the past 20 years" and as having "lurched from one financial crisis to another".)
Takeover
In December 1985, Borman met with Lorenzo to sound out possible interest in Eastern; CAL had been looking to gain more critical mass and strategic assets. Then, in January 1986, Eastern faced another financial crisis and sought more concessions from its unions. Borman said that Eastern needed to "either get long term economic relief from its labor unions or enter bankruptcy and impose that relief, just as now-thriving Continental did under Frank Lorenzo in September 1983", but Eastern's machinists' union refused to renegotiate its contract. After Texas Air made an offer to buy Eastern, despite the unions' generally negative opinion of Lorenzo's tactics, union leader Bryan said, "it might be surprising the relationship that could develop between Lorenzo and our organization". Eastern set a deadline for its three major unions to grant concessions or have the company sold to Texas Air; when the deadline passed without the concessions, the Eastern sale was signed early Monday morning on March 24, 1986. Although there was more litigation to stop the deal, it was approved by shareholders in November 1986.
Phil Bakes, who had been president of Continental, was named president and chief executive officer of Eastern in November 1986, and moved his family to Miami to assume direction of the airline. Lorenzo, appointed chairman, remained in Houston as Continental's CEO.
In December 1986, Texas Air also finalized a deal to acquire People Express Airlines, a low-cost airline that had been formed by eight executives who had left Texas Air in 1980. The deal included the assets of the bankrupt Frontier Airlines, which People Express had purchased in 1985. At that point, the combination of the airline companies controlled by Lorenzo through Texas Air accounted for 20% of the airline industry in the United States, and some estimates described it as the largest airline in the Western world, and second only to Aeroflot worldwide.
Asset transfers; Sale of shuttle to Donald Trump
In March 1987, Texas Air, realizing that Eastern's labor difficulties were hurting its computer system service sales to other airlines and also wanting to combine its other computer system assets, transferred Eastern's reservation system to a new subsidiary, System One. The transfer value was set at $100 million based on investment banking opinions, although it was substantially below some value estimates of $200 million to $400 million. Eastern then paid fees to lease back the system. The system was combined with Texas Air's other computer system assets and grew to have 5,100 travel agent customers. Texas Air later sold half of its interest in the system to Electronic Data Systems, based on an overall system value of $250 million.
In early 1988, with Eastern's cash being consumed by its losses, discussions began with Donald Trump and others about buying the Eastern Shuttle. Lorenzo had approached Trump as a logical buyer, one who, unlike the major airlines (which were not contacted), would not pose a competitive threat. The negotiations with Trump stretched on for several months, nearly all of them conducted between Trump and Lorenzo in the Edwardian Room of the Plaza Hotel in New York, which Trump had bought in early 1988. An agreement was finally reached in October 1988 for Trump to buy Eastern's shuttle service for $365 million and rebrand it as the Trump Shuttle. The deal closed in June 1989, and Trump went into the airline business.
Donald Trump had much to say about buying the shuttle and negotiating with Lorenzo in his 1990 book, Trump: Surviving at the Top, in which he discussed the union fights at Eastern and the making of Lorenzo into a symbol. He said, "through it all, Frank just kept quiet and took the heat. In life, most people are big talk and no action. Frank is that rare person who is all action and no talk." Trump also wrote that "the funny thing is, I found Frank, man-to-man to be a very different guy from the reclusive, evil maniac you read about in the business press. I met with him numerous times... and invariably found him relaxed, charming, and compassionate, despite his feelings about the union leaders [at Eastern]. Frank is also a devoted family man who cares very deeply about his wife, Sharon, and their children." Trump went on to lose the Trump Shuttle in September 1990, when difficult economic times and a large debt load caused a default on Trump's debt to a Citibank-led group. The banking group leased the airline to USAir in late 1990, which promptly dropped the Trump name.
Labor strike and resulting decline in service
Due to the ongoing battle between management and the unions over labor contracts, service at the airline declined, resulting in a series of increasingly contentious and politically charged events. In April 1988, the United States Department of Transportation began an inquiry into the financial fitness and safety of Texas Air and Eastern; the inquiry was announced at the same time as an $823,000 fine levied by the FAA for safety violations. Texas Air and Eastern's operations were vindicated when the inquiry found that Texas Air had passed its tests, though it added that labor disputes "could endanger safety at Eastern".
After negotiations with the three unions failed to produce any labor agreements, a mandated 30-day cooling off period began on 1 February 1989. If an agreement could not be reached, strikes would begin on 4 March 1989. During this period, Lorenzo and Phil Bakes, Eastern's CEO, met with several potential buyers of Eastern, including Carl Icahn, Jay Pritzker and Peter Ueberroth, but no deal materialized.
On 3 March 1989, President George H. W. Bush issued a statement outlining his decision not to act on a National Mediation Board recommendation to appoint a presidential emergency board to attempt to reach a labor agreement with the machinists, something that had proved impossible for many years before Texas Air's acquisition. After the IAM struck, the flight attendants' and pilots' unions honored the IAM picket lines; thousands of Eastern flights were canceled and thousands of employees were laid off.
Bankruptcy and shutdown
Eastern filed for bankruptcy protection on 9 March 1989. However, the law had been changed since Continental's bankruptcy in 1983 and it was much more difficult to void union contracts in a bankruptcy. Eastern continued to operate a reduced schedule, using pilots who crossed picket lines. By November, it claimed it was back on the road toward profitability and asked for more time to submit a reorganization plan to the bankruptcy court.
After Bush declined to appoint an emergency board to mediate the labor dispute, Congress passed a compromise bill designed to establish a bipartisan commission specifically to "investigate the labor dispute" at Eastern, but it was vetoed by Bush on 21 November 1989. The following day, the pilots' union voted to end its strike, after about 800 of its members had crossed picket lines and many others had taken early retirement or gone to work for other airlines. The flight attendants ended their strike a day later, but the machinists continued to strike.
Although Eastern was still operating, it continued to lose money; it proposed plans to its unsecured creditors that called for payment of 50% of the debt and, as conditions worsened, only 25%. Unable to reach agreement with Eastern, creditors requested that a trustee be appointed. The bankruptcy court agreed to the request and named a trustee, Martin Shugrue, to run Eastern in April 1990, with the goal of continuing to operate the company and finding a buyer. With the trustee unable to bring the airline to profitability, Eastern ceased operations on 18 January 1991 and its assets were liquidated.
Airline legacy
Lorenzo's Jet Capital Corporation agreed to sell its controlling stake in Continental Airlines’ holding company to Scandinavian Airlines Systems (SAS) on August 10, 1990, and the Jet Capital officers at Continental stepped down.
"Frank Lorenzo has etched an indelible mark on the U.S. airline business. Whether he is a union buster, an opportunist, or a keen businessman depends on one's perspective, but his mark has been deep and may be lasting," wrote James Ott on Lorenzo's departure from Continental.
"When Lorenzo took over the ailing Texas International in 1972, he became the youngest CEO of any airline of substance in the history of air transportation," according to R.E.G. Davies, who held the Lindbergh Chair of Aerospace History at the Smithsonian's National Air and Space Museum and is often considered the dean of aviation history. Davies went on to say that "[Lorenzo's] subsequent progress toward greater goals has been marked with no less success and has been just as systematic... Lorenzo became the most controversial single personality in the airline world... because he challenged the entire structure of cost-and-revenue relations that had long existed in a closely protected corporate environment and in doing so generated a cause célèbre of historic proportions."
Lorenzo has frequently been described as "anti-union", particularly after the bankruptcy at Continental, which ended with the company voiding its union contracts in a successful effort to save the airline from liquidation and permanent job losses. Earlier, at Texas International, the airline sought wage concessions in 1974 to return the company to viability in the face of competition from low-cost Southwest Airlines. After a union went on strike, TIA began hiring replacement workers; after four months the unions returned to work, and TIA went on to reach profitability in 1976. Even before Texas Air's takeover of Eastern was completed, Lorenzo's efforts to build a sustainable, profitable airline model were criticized, with opponents saying his companies' profits were earned "on the backs of broken labor contracts, extremely low wages and harsh work rules."
Investment career
Departure from Continental and establishment of Savoy Capital
Lorenzo had been CEO of Continental and Texas International for 18 years when, in 1990, he sold his controlling interest in Continental Airlines through Jet Capital to Scandinavian Airlines System (SAS) and stepped down from his CEO role to pursue other entrepreneurial and investment ventures. Lorenzo's departure from Continental was a basic part of the deal, required by both Lorenzo and SAS, although he remained a director of Continental for two years afterwards as part of the arrangements with SAS. During his eighteen-year tenure, his airline holdings grew from 15 jet aircraft (at Texas International) with revenues of $73 million to Continental's combined fleet of 350 jet aircraft and revenues of over $5 billion.
After the sale of his interest in Continental, Lorenzo founded Savoy Capital, Inc. in 1990 in Houston, Texas. Savoy is a private investment firm that invests largely for its own account, in both liquid securities and private ventures, but it has also invested on behalf of accredited outside investors.
ATX, Inc.
In 1992, a company in which Lorenzo was a significant shareholder, but not part of management, ATX Inc., attempted to start a low-cost airline serving the east coast of the United States. A regulatory application by ATX, strongly fought by organized labor, was rejected by the U.S. Department of Transportation in 1993; the agency said that under Lorenzo's supervision, Eastern Airlines and Texas Air had "experienced operational, maintenance and labor-related problems that were among the most serious in the history of U.S. aviation." Lorenzo had previously reduced his stake in the company to 24 percent and said that he would only serve on its board.
Philanthropy
Lorenzo is a long-time trustee of The Hispanic Society of America, an institution with a free-admission art museum located in New York City. It houses the largest collection of Spanish art outside Spain, with major paintings by Velázquez, Goya, Zurbarán, El Greco, and Sorolla. He is additionally a trustee emeritus of the Woodrow Wilson National Fellowship Foundation located in Princeton, New Jersey. Lorenzo and his wife are involved as directors, donors, or advisors to a number of other arts and charitable organizations.
In 1986, Lorenzo established the Olegario Lorenzo Memorial and Lorenzo Family Scholarship Funds at Columbia College.
Personal
Lorenzo married Sharon Neill (née Murray) in 1972, and has four children. His wife is an adjunct professor at the University of Pennsylvania Law School, where she teaches a course in art law.
References
Further reading
Columbia College (New York) alumni
Continental Airlines people
Eastern Air Lines
Harvard Business School alumni
American people of Spanish descent
1940 births
Living people
Forest Hills High School (New York) alumni |
24140889 | https://en.wikipedia.org/wiki/TOG%20%28hackerspace%29 | TOG (hackerspace) | TOG is a hackerspace in Dublin, Ireland. tóg is a word in the Irish language; one of its meanings is 'to build or construct'.
History
The project was started in January 2009 and the space opened in May 2009. In June 2010, TOG moved to a new, larger building on Chancery Lane (Unit 3), Dublin 8. In January 2015 it was confirmed that the lease on this building would not be extended beyond April 2015, and TOG announced its intention to move to a new premises in Dublin city centre. In the event the lease was extended through October and TOG moved to its new premises, the ground floor of 22 Blackpitts, in mid-October 2015.
Structure
TOG is a community-operated physical space run by its members to provide a shared workspace. The members have a place to be creative and work on their projects in an environment designed to be both inspiring and supportive of new and old technologies. The space is funded by its members and allows them 24-hour access to a place equipped with shared equipment supplied or donated by members or friends of TOG.
TOG gives a community that would usually interact through virtual media a physical meeting place for like-minded individuals. It allows people to collaborate on projects in a central place, bringing the skills of many people together to get a task or project done.
Events
National Events
TOG holds weekly group nights from 7 pm: electronics and microcontrollers on Mondays and coding on Wednesdays, when the space is available for people to work on their projects and get help with any problems they may be having. It also holds semi-regular craft nights.
Members of TOG have also spoken (or plan to speak) at technology-related conferences around Ireland, such as
OSSBarcamp, September 2009
PyCon Ireland (the Python Ireland annual conference)
Ubuntu Ireland and Skynet talks
TechWeek at DCU
TOG has also hosted events for Engineers Week and Science Week.
TOG, in association with the Irish Robotics Club, hosted an electronics workshop with Mitch Altman from San Francisco's hackerspace Noisebridge, and Jimmie P. Rodgers, in the Science Gallery, Dublin.
TOG has also held events as part of the Innovation Dublin Festival.
TOG has also taken part in Dublin Art and Technology Association's (DATA) 'DATA EVENT 37.0' at the Irish Museum of Contemporary Art (IMOCA).
TOG has also participated in the annual Dublin Maker event, which attracts over 10,000 attendees.
Global Events
Members of TOG have also taken part in several 'Synchronous Hackathons', live simultaneous hacking events with other hackerspaces.
TOG has also streamed talks and held events while the Chaos Communication Congress (CCC) takes place in Berlin, so that members in Dublin who are unable to travel can attend TOG and watch the talks. TOG members also meet up with members of other hackerspaces at events like CCC.
TOG also took part in the Global Hackerspace Cupcake Challenge, where participating hackerspaces send a cupcake to each other. The TOG cupcake was sent to Artifactory in Perth, Australia. TOG received a cupcake from Alpha One Labs in New York, USA.
References
Nearby hackerspaces
091 Labs in Galway, Ireland
MilkLabs in Limerick, Ireland
External links
TOG website
Hackerspaces.org page on TOG
Hackerspaces.org page on Dublin
Hacker groups
Computer clubs
Hackerspaces
2009 establishments in Ireland |
30279070 | https://en.wikipedia.org/wiki/Open%20Source%20Summit | Open Source Summit | Open Source Summit (formerly LinuxCon) is a name for a series of annual conventions organized each year since 2009 by the Linux Foundation. The first LinuxCon took place in North America. Linux Foundation started organizing similar events in Europe and Japan. The original LinuxCon was rebranded LinuxCon North America, adding to the list LinuxCon Europe and LinuxCon Japan.
Apart from keynotes given by some high-profile Linux people (such as Linus Torvalds or Greg Kroah-Hartman in 2009), Open Source Summit is increasingly used as a platform for open source thought leaders, influencers, hiring managers, professionals, and developers in information technology to talk about their plans for the technological landscape and announce major news. For example, Nokia confirmed in 2010 that it would deliver the first MeeGo device that same year, and Oracle Corporation explained in 2010 where it was heading with its Linux efforts after its acquisition of Sun Microsystems. Uber and Lyft announced a de facto collaboration on Cloud Native Computing Foundation projects at Open Source Summit North America in 2017.
Rebranding as Open Source Summit
At the end of the LinuxCon North America event in Toronto, Ontario, Canada in 2016, it was announced that the event would be rebranded in 2017 to better represent the organization's and event's more general open source audience. As such, LinuxCon was replaced in 2017 by an event called Open Source Summit in North America, Europe and Japan.
History
LinuxCon 2009
Held on 21–23 September 2009 in Portland.
LinuxCon North America 2010
Held on 10–12 August 2010 in Boston.
LinuxCon Brazil 2010
Held on 31 August – 1 September 2010 in São Paulo.
LinuxCon Japan 2010
Held on 27–29 September 2010 in Tokyo.
This was the first edition under the name "LinuxCon Japan"; it was effectively the second edition of the event formerly known as the "Japan Linux Symposium".
LinuxCon Japan 2011
Held on 1–3 June 2011 in Yokohama.
LinuxCon North America 2011
Held on 17–19 August 2011 in Vancouver, British Columbia.
LinuxCon Europe 2011
Held on 26–28 October 2011 in Prague.
This was the first edition of LinuxCon Europe.
LinuxCon Brazil 2011
Held on 17–18 November 2011 in São Paulo.
LinuxCon Japan 2012
Held on 6–8 June 2012 in Yokohama.
LinuxCon North America 2012
Held on 29–31 August 2012 in San Diego.
This was also the first edition of CloudOpen.
LinuxCon Europe 2012
Held on 5–7 November 2012 in Barcelona.
LinuxCon Japan 2013
Held on 29–31 May 2013 in Tokyo.
LinuxCon North America 2013
Held on 16–18 September 2013 in New Orleans.
LinuxCon Europe 2013
Held on 21–23 October 2013 in Edinburgh.
LinuxCon North America 2014
Held on 20–22 August 2014 in Chicago.
LinuxCon Europe 2014
Held on 13–15 October 2014 in Düsseldorf.
LinuxCon North America 2015
Held on 17–19 August 2015 in Seattle.
LinuxCon Europe 2015
Held on 5–7 October 2015 in Dublin.
LinuxCon North America 2016
Held on 22–24 August 2016 in Toronto, Ontario.
LinuxCon Europe 2016
Held on 4–6 October 2016 in Berlin.
Open Source Summit North America 2017
Held on 11–14 September 2017 in Los Angeles.
Open Source Summit Europe 2017
Held on 23–26 October 2017 in Prague.
Open Source Summit Japan 2018
Held on 20–22 June 2018 in Tokyo.
Open Source Summit North America 2018
Held on 29–31 August 2018 in Vancouver, British Columbia.
Open Source Summit Europe 2018
Held on 22–24 October 2018 in Edinburgh.
Open Source Summit Europe 2019
Held on 28–30 October 2019 in Lyon.
References
External links
LinuxCon website
Linux conferences
Linux Foundation |
370320 | https://en.wikipedia.org/wiki/PowerPC%20970 | PowerPC 970 | The PowerPC 970, PowerPC 970FX, and PowerPC 970MP are 64-bit PowerPC processors from IBM introduced in 2002. When used in PowerPC-based Macintosh computers, Apple referred to them as the PowerPC G5.
The 970 family was created through a collaboration between IBM and Apple. The project went under the codename GP-UL, for Giga Processor Ultra Light; Giga Processor was the codename of the POWER4, from which the core was derived. When Apple introduced the Power Mac G5, it described the chip as the result of a five-year collaborative effort with a multi-generation roadmap. The roadmap proved short-lived: Apple later had to retract its promise to deliver a 3 GHz processor within a year of the introduction, and IBM was also unable to reduce power consumption to levels necessary for laptop computers. Ultimately, Apple used only three variants of the processor.
IBM's JS20/JS21 blade modules and some low-end workstations and System p servers are based on the PowerPC 970. It is also used in some high-end embedded systems, such as Mercury's Momentum XSA-200. IBM also licensed the PowerPC 970 core for use in custom applications.
Design
The PowerPC 970 is a single-core derivative of the POWER4 and can process both 32-bit and 64-bit PowerPC instructions natively. It has a hardware prefetch unit and a three-way branch prediction unit.
Like the POWER4, the front end is nine stages long. The PowerPC 970 can fetch and decode up to eight instructions, dispatch up to five to the reservation stations, issue up to eight to the execution units and retire up to five per cycle. The execution pipelines were lengthened compared to the POWER4 to achieve higher clock speeds. It has eight execution units: two arithmetic logic units (ALUs), two double-precision floating-point units, two load/store units and two AltiVec units.
One of the AltiVec units executes integer and floating-point instructions, and the other only permute instructions. The former has three subunits for simple integer, complex integer and floating-point instructions. These units have pipelines of varying lengths: 10 stages for simple integer and permute instructions, 13 stages for complex integer instructions and 16 stages for floating-point instructions.
The processor has two unidirectional 32-bit double data rate (DDR) buses (one for reads, the other for writes) to the system controller chip (northbridge), clocked at one quarter of the processor core speed. The buses also carry addresses and control signals in addition to data, so only a fraction of the peak bandwidth can be realized (6.4 GB/s at 450 MHz). As the buses are unidirectional, each direction can realize only half of that aggregate bandwidth, or 3.2 GB/s.
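As a back-of-the-envelope check of these figures (a sketch assuming the 450 MHz bus clock quoted above, which corresponds to a 1.8 GHz part; the overhead fraction is implied by the quoted numbers rather than taken from a published specification):

    \text{raw, per direction} = 4\,\mathrm{B} \times 2\,\tfrac{\mathrm{transfers}}{\mathrm{cycle}} \times 450\,\mathrm{MHz} = 3.6\ \mathrm{GB/s}
    \text{raw, aggregate} = 2 \times 3.6\ \mathrm{GB/s} = 7.2\ \mathrm{GB/s}

Of this 7.2 GB/s raw aggregate, the quoted 6.4 GB/s (3.2 GB/s per direction) is the share left for data once addresses and control signals take their cut.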
Processors
PowerPC 970
The PowerPC 970 was announced by IBM in October 2002. It was released in Apple Computer's Power Mac G5 in June 2003. (In keeping with its previous naming conventions, Apple termed the PowerPC 970 based products G5, for the fifth generation of PowerPC microprocessors.) IBM released its first PowerPC 970 blade servers, the BladeCenter JS20, in November 2003.
The PowerPC 970 had 512 KB of full-speed L2 cache and clock speeds from 1.6 to 2.0 GHz. The front side bus ran at half the processor's clock speed; this is its effective DDR transfer rate, corresponding to the physical bus clock of one quarter of the core speed described above.
PowerPC 970FX
The PowerPC 970FX was manufactured on a 90 nm process and has a maximum power rating of 11 watts at 65 °C (149 °F) while clocked at 1 GHz, and a maximum of 48 watts at 2 GHz.
The pipeline was lengthened from the 9-stage front end to 16–21 stages overall for the PowerPC 970FX. It has 10 functional units: 2 fixed-point units, 2 load/store units, 2 floating-point units, 1 branch unit, 1 SIMD ALU unit, 1 SIMD permute unit, and 1 condition register unit. It supports up to 215 instructions in flight: 16 in the instruction fetch unit, 67 in the instruction decode unit, 100 in the functional units, and 32 in the store queue. It has 64 KB of direct-mapped instruction cache and 32 KB of data cache.
Apple released their 970FX-powered machines throughout 2004: the Xserve G5 in January, the Power Mac G5 in June, and the iMac G5 in August. The Power Mac introduced a top clock speed of 2.5 GHz while liquid-cooled (eventually reaching as high as 2.7 GHz in April 2005). The iMac ran the front side bus at a third of the clock speed.
Despite intense user demand for a faster laptop CPU than the G4, Apple never used a G5 series CPU in their PowerBook laptops. The original 970 used far too much power and was never seriously viewed as a candidate for a portable computer. The 970FX reduced thermal design power (TDP) to about 30 W at 1.5 GHz, a figure that led many users to believe a PowerBook G5 might be possible. However, several obstacles prevented even the 970FX from being used in this application. At 1.5 GHz, the G5 was not substantially faster than the 1.5 and 1.67 GHz G4 processors, which Apple used in PowerBooks instead. Furthermore, the northbridge chips available to interface the 970FX to memory and other devices were not designed for portable computers, and consumed too much power. Finally, the 970FX had inadequate power saving features for a portable CPU. Its minimum (idle) power was much too high, which would have led to poor battery life figures in a notebook computer.
PowerPC 970MP
IBM announced the PowerPC 970MP, code-named "Antares", on 7 July 2005 at the Power Everywhere forum in Tokyo. The 970MP is a dual-core derivative of the 970FX with clock speeds between 1.2 and 2.5 GHz, and a maximum power usage of 75 W at 1.8 GHz and 100 W at 2.0 GHz. Each core has 1 MB of L2 cache, twice that of the 970FX. Like the 970FX, this chip was produced on a 90 nm process. When one of the cores is idle, it can enter a "doze" state in which it is effectively shut down. The 970MP also includes partitioning and virtualization features.
The PowerPC 970MP replaced the PowerPC 970FX in Apple's high-end Power Mac G5 computers, while the iMac G5 and the legacy PCI-X Power Mac G5 continued to use the PowerPC 970FX processor. The PowerPC 970MP is used in IBM's JS21 blade modules, the IBM IntelliStation POWER 185 workstation and the YDL PowerStation by Fixstars Solutions (named for Yellow Dog Linux).
Due to high power requirements, IBM chose to discontinue parts running faster than 2.0 GHz.
Gallery
Illustrations of the different generations of 970 processors. All were manufactured in IBM's East Fishkill plant in New York on a white ceramic substrate that was typical for IBM's high end processors of the era.
Northbridges
There are two dedicated northbridges for PowerPC 970-based computers, both manufactured by IBM:
CPC925: Designed by Apple and called the U3, or U3H in the variant supporting ECC memory. It is capable of supporting up to two PowerPC 970s or PowerPC 970FXs and has two 550 MHz unidirectional processor buses, a 400 MHz DDR memory controller, x8 AGP and a 400 MHz 16-bit HyperTransport tunnel. It is fabricated on a 130 nm process. Additionally, an unreleased U3Lite northbridge was in development for the PowerBook G5, which never made it to market.
CPC945: Designed by IBM and called the U4 by Apple. It is capable of supporting two PowerPC 970MPs and has two 625 MHz unidirectional processor buses, two memory controllers that support up to 64 GB of 533 MHz DDR2 SDRAM with ECC capability, an x16 PCIe interface and an 800 MHz 16-bit HyperTransport tunnel. It is fabricated on a 90 nm process.
There was also a cancelled CPC965 northbridge. Slated for release in 2007, it was to be a uniprocessor-only northbridge featuring a 533 MHz DDR2 controller that supported up to 8 GB of ECC memory, an x8 PCIe bus, integrated four-port Gigabit Ethernet with IPv4 TCP/UDP offloading, USB 2.0 ports and a flash interface. The northbridge contained an integrated PowerPC 405 core to provide system management and configuration capabilities.
Buses
IBM uses its proprietary Elastic Interface (EI) bus in the 970 processor modules.
See also
POWER3, POWER4 and POWER5
PowerPC G4 and Altivec
Power Mac G5 and Xserve G5
Supercomputers using the PowerPC 970:
System X
All supercomputers in the Spanish Supercomputing Network (Magerit and MareNostrum are the two most powerful supercomputers of the network)
References
Further reading
"IBM's PPC970 Becomes Apple's G5". (7 July 2003). Microprocessor Report.
"IBM Takes the Lead". (9 February 2004). Microprocessor Report.
"IBM's Double-Shot of PowerPC". (7 November 2005). Microprocessor Report.
External links
Ars Technica article, part I
Ars Technica article, part II
IBM PowerPC 970FX RISC Microprocessor User's Manual
Understanding 64-bit PowerPC architecture
ISSCC 2006: IBM PowerPC 970MP
IBM Documentation: PowerPC 9XX Microprocessors
970
Power microprocessors
64-bit microprocessors |
1390936 | https://en.wikipedia.org/wiki/Digium | Digium | Digium, Inc. is a communications technology company based in Huntsville, Alabama, and since 2018, a subsidiary of Sangoma Technologies. In 1999, Digium's founder Mark Spencer created Asterisk, the open source software project that can be used to turn a personal computer into a communications server or Voice over IP (VoIP) phone system. Today, Digium's core business lines include Switchvox, the Asterisk-based VoIP business phone system, Digium IP phones, and Asterisk telephony software and hardware products. Digium continues to make Asterisk available to the global development community free.
Digium, Inc. sells all of its products and services directly to businesses and consumers. It also offers its business-to-business communications products, including Digium Cloud Services, through a worldwide network of resellers.
History
In 1999, while a Computer Engineering student at Auburn University, Mark Spencer founded Linux Support Services (LSS). The high cost of buying a business phone system for LSS spurred him to use Linux and his programming skills to develop his own solution. He named it Asterisk, after the programming wildcard symbol and the ”star” phone key. It enabled phone calls over the Internet, providing an alternative to hardware-dependent PBX systems at a cost as much as 80% less.
Forbes Magazine characterized Asterisk's impact as “wreaking havoc on the multibillion-dollar phone equipment business,” and “disrupting the $7 billion market for office telecom (PBX) switches” by enabling individuals and organizations to innovate in what was previously a vendor-locked marketplace. Asterisk-based business phone systems, like Digium's Switchvox, were the first of a new generation of VoIP-based communications solutions that would emerge to compete with the telecom giants like Cisco and Avaya. Reflecting the company's shift away from Linux support to applications and enhancements for Asterisk and the development of new communications solutions, Linux Support Services was renamed Digium in 2001.
Asterisk was released as open source by Spencer; today, the Asterisk open-source software (OSS), the Asterisk community and forum, as well as an annual conference and expo, are all sponsored and maintained by Digium. Developers around the world continue to use Asterisk to create custom IP PBX systems, VoIP gateways, conference servers and other solutions for small and large organizations. They have also continued to enhance it, adding hundreds of features. It is estimated that Asterisk is currently used by developers in 170 countries, runs on approximately one million servers, and is the foundation of communications systems for the majority of Fortune 500 companies.
Acquisition
Sangoma Technologies Corporation (TSX VENTURE: STC) announced on August 23, 2018 that it had entered into a definitive agreement to acquire Digium, with closing expected August 31, 2018. Sangoma expected to pay a total purchase price of US$28 million on a debt-free and cash-free basis, subject to customary working capital adjustments. The purchase price consisted of US$24.3 million in cash and 3,943,025 Sangoma common shares (representing US$3.7 million based on a ten-day volume-weighted average price of $1.2214 per common share). Digium generated approximately US$30 million in revenue in its fiscal year ended December 31, 2017, with consolidated assets (net of cash) of approximately US$11 million, no debt and a net loss of approximately US$4 million. The transaction closed and was finalized on September 5, 2018.
Products and Services
Digium products and services fall into four main categories:
Business Phone Systems
Digium offers Switchvox, a Unified Communications System powered by Asterisk, either as an on-premises solution or a cloud-based PBX solution.
Switchvox Mobile extends Switchvox services to employees’ mobile phones.
Digium Cloud Services include Switchvox Cloud service and SIP Trunking. SIP Trunking replaces a business’ traditional, or legacy, telco connection by providing voice connectivity to an ITSP via an existing data connection (internet).
References
Free software companies
Companies based in Huntsville, Alabama
Software companies established in 2002
Privately held companies based in Alabama
Asterisk (PBX)
FreePBX
2002 establishments in Alabama |
7820 | https://en.wikipedia.org/wiki/CCC | CCC | CCC may refer to:
Arts and entertainment
Canada's Capital Cappies, the Critics and Awards Program in Ottawa, Ontario, Canada
Capcom Classics Collection, a 2005 compilation of arcade games for the PlayStation 2 and Xbox
CCC, the production code for the 1970 Doctor Who serial The Ambassadors of Death
Music
Canadian Chamber Choir, a national choral ensemble for Canadian singers, conductors and composers
"Candy Cane Children", a single by The White Stripes
The Color Changin' Click, a rap group which was started by rapper Chamillionaire
Christianity
Calvinist Cadet Corps, Christian mentoring organization
Campus Crusade for Christ, the original name of the interdenominational Christian organization now known as Cru
Canadian Council of Churches, an ecumenical Christian forum of churches in Canada
Catechism of the Catholic Church, an official exposition (catechism) of the teachings of the Roman Catholic Church
Celestial Church of Christ, an independent African Church
Central Congregational Church (Providence, Rhode Island), a United Church of Christ congregation
China Christian Council, a government-approved Christian organization in the People's Republic of China
Christian City Churches, an evangelical, Pentecostal church movement founded by Pastors Phil Pringle and Chris Pringle
Christian Cultural Center, a New York City-based church pastored by Dr. A. R. Bernard
Church of Christ in China, one of the Chinese Independent Churches
Clearwater Christian College, a four-year non-denominational Christian College
Colorado Community Church, an interdenominational church in Denver, Colorado, U.S.
Community Christian College, a two-year college based in Redlands, California, U.S.
Companies
Canadian Commercial Corporation, a Canadian corporation responsible for facilitating international contracts
Canterbury of New Zealand, a New Zealand-based sports apparel company
CCC Film, a film production company in Germany, formally known as Central Cinema Compagnie-Film GmbH
Cloud Credential Council, a global provider of vendor-neutral certification programs for the information technology (IT) industry
Color Climax Corporation, a Danish pornography company
Comcast Cable Communications, a cable television, internet and telephone service provider in the United States
Commodity Credit Corporation, a United States government-owned corporation that funds USDA programs
Consolidated Contractors Company, a large Middle Eastern and International EPC Contractor
Cooper Cameron Corporation, now Cameron International Corporation
Copyright Clearance Center, a U.S. copyright collection company
Crane Carrier Company, a U.S. truck manufacturer
Cwmni Cyfyngedig Cyhoeddus, a Welsh form of public limited company
Education
In the U.S.
California Community Colleges System, combined districts of California's community colleges
Camden County College, Camden County, New Jersey
Cascadia Community College, Bothell, Washington, now known as Cascadia College
Cayuga Community College, 2-year SUNY college in Cayuga County, New York
Center for Computational Chemistry, research center in the department of Chemistry at the University of Georgia
Central Community College (Nebraska)
Chemeketa Community College, Salem, Oregon
City Colleges of Chicago, a system of seven community colleges for Chicago residents
Clackamas Community College, Oregon City, Oregon
Clatsop Community College, Clatsop County, Oregon
Clearwater Central Catholic High School, college preparatory school in Clearwater, Florida
Cleveland Chiropractic College, chiropractic school in Overland Park, Kansas
Cleveland Community College, Shelby, North Carolina
Clinton Community College (Iowa), Clinton, Iowa
Clovis Community College (New Mexico), Clovis, New Mexico
Club Coordination Council, University of Notre Dame, Notre Dame, Indiana
Coahoma Community College, unincorporated Coahoma County, Mississippi, near Clarksdale
Coconino County Community College, Flagstaff, Arizona
Compton Community College, near Los Angeles, California
Contra Costa College, San Pablo, California
Corning Community College, 2-year SUNY college in Corning, Steuben County, New York
Cross-cultural center, a department at various universities
Cumberland County College, Cumberland County, New Jersey
Cuyahoga Community College, Cuyahoga County, Ohio
In other places
Canadian Computing Competition, a national programming competition for secondary school students in Canada
Castleknock Community College, public secondary school in Carpenterstown, Dublin, Ireland
Cebu College of Commerce and Cebu Central Colleges, the former names of the University of Cebu in Cebu, Philippines
Central Coast Campuses, three education campuses on the Central Coast of New South Wales
Central Commerce Collegiate, high school in Toronto, Ontario, Canada
Centro de Capacitación Cinematográfica, film school in Mexico City
Chenab College, Chiniot, Institute in Pakistan
Chessington Community College, secondary school and sixth form college in the Royal Borough of Kingston upon Thames
City College of Calamba, public college in the Philippines
Cooloola Christian College, Gympie, Queensland, Australia
Corpus Christi College (disambiguation), the name of several colleges
Countesthorpe Community College, Leicestershire, Countesthorpe, Great Britain
Law
California Coastal Commission
Central Criminal Court (disambiguation)
China Compulsory Certificate, a compulsory safety mark for many products sold on the Chinese market
Citizens' Committee for Children
Civil Constitution of the Clergy, a law
Constitutio Criminalis Carolina, the first body of German criminal law
Convention on Cybercrime by the Council of Europe
Corruption and Crime Commission of Western Australia
Cox's Criminal Cases, a series of law reports
Crime and Corruption Commission, independent entity of Queensland, Australia, created to combat major crime
Civil authorities
Cambridge City Council (disambiguation)
Cardiff City Council, the governing body for Cardiff
Carmarthenshire County Council, the administrative authority for the county of Carmarthenshire, Wales
Casino control commission, a variation of a gaming control board in the U.S.
Central Communications Command, the command-and-control system for London's police services
Chittagong City Corporation, a governing organisation in southeastern Bangladesh
Christchurch City Council, New Zealand
Organizations and organizing
Conservation
2009 United Nations Climate Change Conference, held at the Bella Center in Copenhagen, Denmark
California Conservation Corps, a state agency modeled after the Civilian Conservation Corps of the 1930s
Cetacean Conservation Center, a Chilean organization dedicated to the conservation of cetaceans and other marine mammals
Civilian Conservation Corps, a major New Deal program in the U.S. for young men, 1933–42
Climate Change Committee, an independent non-departmental public body, formed in 2008 to advise UK Government
Politics
Center for Community Change, a progressive community organizing group in the United States
Citizens Coalition For Change, a Zimbabwean political party
Climate Change Coalition, Australian political party
Command for Hunting Communists, a Brazilian paramilitary terrorist group of the 1960s known as Comando de Caça aos Comunistas
Committee on Climate Change, an independent body established by the UK Government to advise on climate change policy
Communist Combatant Cells, a Belgian terrorist organization of the 1980s committed to a Communist ideology
Communist Committee of Cabinda, a separatist group in the Cabinda exclave of Angola
Council of Conservative Citizens, a United States paleoconservative white separatist political organization
Customs Cooperation Council, an intergovernmental organization that helps Members communicate and cooperate on customs issues
Science and technology
Comb ceramic culture
Climatic climax community, a biological community of plants and animals which has reached a steady state
Conformal cyclic cosmology, a cosmological model in which the universe undergoes a repeated cycle of death and rebirth
Countercurrent chromatography, a chromatographic separations-science technique
Cryogenic current comparator, electronic test equipment
CCC, a codon for the amino acid proline
Computing
Catalyst Control Center, control panel for AMD Catalyst drivers
Chaos Computer Club, one of the biggest and most influential hacker organisations
Chaos Communication Congress, an annual meeting of computer hackers organized by the Chaos Computer Club
Chaos Communication Camp, a quadrennial international meeting of hackers organized by the Chaos Computer Club
Citizen Cyberscience Centre, Switzerland-based volunteer computing organisation
Computational Complexity Conference, academic conference in the field of theoretical computer science
Corsham Computer Centre, an underground British government installation near RAF Corsham and RAF Rudloe Manor in the heavily tunneled Corsham area of Wiltshire
Cray Computer Corporation, a defunct computer company
Color Cell Compression, an algorithm developed to compress digital color images
Mathematics
Cartesian closed category, a concept in category theory
CCC, Roman numeral for 300
Countable chain condition, a condition in order theory and topology
Cube-connected cycles, a graph used as a communications network topology
Medicine
Capsulorhexis or capsulorrhexis, also known as continuous curvilinear capsulorhexis, a type of cataract surgery
Clinical Care Classification System, a nursing terminology consisting of nursing diagnoses, nursing interventions, and nursing action types that assist in documenting the nursing process
Convenient care clinic, a health care clinic located in neighborhoods
Sport
CAF Confederation Cup, the annual international football competition held in the CAF region
Cascade Cycling Classic, a competitive multi-stage bicycle road race, held every July in Bend, Oregon
CCC Pro Team, a UCI WorldTeam cycling team based in Poland
Central Connecticut Conference, an interscholastic athletic conference in greater Hartford, Connecticut, U.S.
Charlottetown Civic Centre, an indoor ice hockey venue in Canada
Colombo Cricket Club, a cricket club in Colombo, Sri Lanka
Commonwealth Coast Conference, an NCAA Division III conference in New England
Compton Cricket Club, a California exhibition cricket club
County cricket club, any of the clubs participating in the County Championship or the Minor Counties Championship
Cross Country Canada, the governing body for cross country skiing in Canada
Courmayeur–Champex–Chamonix, a 101 km running race along a portion of the Ultra-Trail du Mont-Blanc
Other uses
California Correctional Center, a state prison in the United States
Carly Colón (born 1979), also known as Carlito Caribbean Cool, a Puerto Rican professional wrestler
Cash Conversion Cycle, a cost accounting term that refers to the liquidity risk posed by growth
Certified Chef de Cuisine, a professional title
Chapman code or Chapman County Code, a set of 3-letter codes used in genealogy to identify administrative divisions in the UK, Ireland, the Isle of Man and the Channel Islands
Colorado Cryptologic Center, a U.S. National Security Agency facility
Command, control, and communications, a concept in military doctrine
Country calling code
Crescent City Connection, twin cantilever bridges
tripleC: Communication, Capitalism & Critique, a peer-reviewed academic journal of communication studies
Jardines del Rey Airport, Cuba, by IATA code
See also
Triple C's, popular nickname for the rap group Carol City Cartel
CC (disambiguation)
C3 (disambiguation)
CCCC (disambiguation)
300
Fate/Extra CCC |
18950011 | https://en.wikipedia.org/wiki/Extended%20ASCII | Extended ASCII | Extended ASCII (EASCII or high ASCII) character encodings are eight-bit or larger encodings that include the standard seven-bit ASCII characters, plus additional characters. Using the term "extended ASCII" on its own is sometimes criticized, because it can be mistakenly interpreted to mean that the ASCII standard has been updated to include more than 128 characters or that the term unambiguously identifies a single encoding, neither of which is the case.
There are many extended ASCII encodings (more than 220 DOS and Windows codepages). EBCDIC ("the other" major 8-bit character code) likewise developed many extended variants (more than 186 EBCDIC codepages) over the decades.
History
ASCII was designed in the 1960s for teleprinters and telegraphy, and some computing. Early teleprinters were electromechanical, having no microprocessor and just enough electromechanical memory to function. They fully processed one character at a time, returning to an idle state immediately afterward; this meant that any control sequences had to be only one character long, and thus a large number of codes needed to be reserved for such controls. They were typewriter-derived impact printers, and could only print a fixed set of glyphs, which were cast into a metal type element or elements; this also encouraged a minimum set of glyphs.
Seven-bit ASCII improved over prior five- and six-bit codes. Of the 2⁷ = 128 codes, 33 were used for controls, and 95 for carefully selected printable characters (94 glyphs and one space), which include the English alphabet (uppercase and lowercase), digits, and 31 punctuation marks and symbols: all of the symbols on a standard US typewriter plus a few selected for programming tasks. Some popular peripherals only implemented a 64-printing-character subset: the Teletype Model 33 could not transmit "a" through "z" or five less-common symbols ("`", "{", "|", "}", and "~"), and when it received such characters it instead printed "A" through "Z" (forcing all caps) and five other mostly similar symbols ("@", "[", "\", "]", and "^").
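That substitution falls out of ASCII's layout: each of the affected codes (0x60 through 0x7E) differs from its printed substitute only in bit 5 (value 0x20). A minimal C sketch of the fold, illustrating the code layout rather than any actual terminal circuitry:

    #include <stdio.h>

    int main(void)
    {
        const char *input = "Hello, {world}!";
        for (const char *p = input; *p; p++) {
            char c = *p;
            if (c >= 0x60 && c <= 0x7E)
                c &= ~0x20;   /* clear bit 5: 'a'..'z' -> 'A'..'Z', '{' -> '[', etc. */
            putchar(c);
        }
        putchar('\n');        /* prints "HELLO, [WORLD]!" */
        return 0;
    }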
The ASCII character set is barely large enough for US English use and lacks many glyphs common in typesetting, and far too small for universal use. Many more letters and symbols are desirable, useful, or required to directly represent letters of alphabets other than English, more kinds of punctuation and spacing, more mathematical operators and symbols (× ÷ ⋅ ≠ ≥ ≈ π etc.), some unique symbols used by some programming languages, ideograms, logograms, box-drawing characters, etc. For years, applications were designed around the 64-character set and/or the 95-character set, so several characters acquired new uses. For example, ASCII lacks "÷", so most programming languages use "/" to indicate division.
The biggest problem for computer users around the world was other alphabets. ASCII's English alphabet almost accommodates European languages, if accented letters are replaced by non-accented letters or two-character approximations. Modified variants of 7-bit ASCII appeared promptly, trading some lesser-used symbols for highly desired symbols or letters, such as replacing "#" with "£" on UK Teletypes, "\" with "¥" in Japan or "₩" in Korea, etc. At least 29 variant sets resulted. 12 code points were modified by at least one modified set, leaving only 82 "invariant" codes. Programming languages, however, had assigned meaning to many of the replaced characters, so workarounds were devised, such as the C three-character sequences ("trigraphs") "??<" and "??>" to represent "{" and "}". Languages with dissimilar basic alphabets could use transliteration, such as replacing all the Latin letters with the closest-matching Cyrillic letters (resulting in odd but somewhat readable text when English was printed in Cyrillic or vice versa). Schemes were also devised so that two letters could be overprinted (often with the backspace control between them) to produce accented letters. Users were not comfortable with any of these compromises, and they were often poorly supported.
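A minimal C sketch of trigraphs in use (trigraph translation must be enabled explicitly in modern compilers, for example with gcc's -trigraphs option, and the feature was removed in C23):

    ??=include <stdio.h>      /* ??= is the trigraph for '#' */

    int main(void)
    ??<                        /* ??< is the trigraph for '{' */
        printf("hello, world??/n");  /* ??/ is the trigraph for '\' */
        return 0;
    ??>                        /* ??> is the trigraph for '}' */

On a national 7-bit variant whose code points for "#", "{", "}" and "\" displayed as other glyphs, this source could still be typed and printed unambiguously.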
When computers and peripherals standardized on eight-bit bytes in the 1970s, it became obvious that computers and software could handle text that uses 256-character sets at almost no additional cost in programming, and no additional cost for storage (assuming that the unused eighth bit of each byte was not reused in some way, such as error checking, Boolean fields, or packing eight characters into seven bytes). This would allow ASCII to be used unchanged and provide 128 more characters. Many manufacturers devised 8-bit character sets consisting of ASCII plus up to 128 of the unused codes. Since Eastern Europe was politically separated from the West at the time, 8-bit encodings could be made that covered all the more widely used European (and Latin American) languages, such as Danish, Dutch, French, German, Portuguese, Spanish, Swedish and more; these were often called "Latin" or "Roman".
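For illustration, an 8-bit "Latin" set stores exactly one byte per character, just as ASCII does. This minimal Python sketch uses the latin-1 codec, which implements one such set (ISO 8859-1):

    text = "Füße"                   # German text needing non-ASCII letters
    data = text.encode("latin-1")   # one byte per character, ASCII-style
    print(len(text), len(data))     # 4 4
    print(data)                     # b'F\xfc\xdfe'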
128 additional characters are still not enough to cover all purposes, all languages, or even all European languages, so the emergence of many proprietary and national ASCII-derived 8-bit character sets was inevitable. Translating between these sets (transcoding) is complex, especially if a character is not in both sets, and was often not done, producing mojibake (semi-readable resulting text; users often learned how to decode it manually). There were eventually attempts at cooperation or coordination by national and international standards bodies in the late 1990s, but manufacturers' proprietary sets remained the most popular by far, primarily because the standards excluded many popular characters.
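Both failure modes are easy to reproduce. The following Python sketch (an illustration using two historical sets) shows mojibake from a wrong decode and a transcoding failure for a character missing from the target set:

    data = "São Paulo".encode("latin-1")   # b'S\xe3o Paulo'
    print(data.decode("cp437"))            # 'Sπo Paulo': mojibake
    text = data.decode("latin-1")          # decode with the correct table
    text.encode("cp437")                   # raises UnicodeEncodeError,
                                           # because cp437 has no 'ã'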
Proprietary extensions
Various proprietary modifications and extensions of ASCII appeared on non-EBCDIC mainframe computers and minicomputers, especially in universities.
Hewlett-Packard started to add European characters to their extended 7-bit / 8-bit ASCII character set HP Roman Extension around 1978/1979 for use with their workstations, terminals and printers. This later evolved into the widely used regular 8-bit character sets HP Roman-8 and HP Roman-9 (as well as a number of variants).
Atari and Commodore home computers added many graphic symbols to their non-standard ASCII variants (ATASCII and PETSCII respectively, both based on the original 1963 ASCII standard).
The TRS-80 character set for the TRS-80 home computer added 64 semigraphics characters (0x80 through 0xBF) that implemented low-resolution block graphics. (Each block-graphic character displayed as a 2×3 grid of pixels, with each block pixel effectively controlled by one of the lower six bits.)
IBM introduced eight-bit extended ASCII codes on the original IBM PC and later produced variations for different languages and cultures. IBM called such character sets code pages and assigned numbers both to those it invented itself and to many invented and used by other manufacturers. Accordingly, character sets are very often indicated by their IBM code page number. In ASCII-compatible code pages, the lower 128 characters maintained their standard US-ASCII values, and different pages (or sets of characters) could be made available in the upper 128 characters. DOS computers built for the North American market, for example, used code page 437, which included accented characters needed for French, German, and a few other European languages, as well as some graphical line-drawing characters. The larger character set made it possible to create documents in a combination of languages such as English and French (though French computers usually used code page 850), but not, for example, in English and Greek (which required code page 737).
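The same byte values therefore draw boxes on one machine and print letters on another. A short Python sketch (illustrative only) decodes identical bytes with two different tables:

    frame = bytes([0xC9, 0xCD, 0xCD, 0xBB])
    print(frame.decode("cp437"))     # ╔══╗  (double-line box drawing)
    print(frame.decode("latin-1"))   # ÉÍÍ»  (the same bytes elsewhere)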
Apple Computer introduced their own eight-bit extended ASCII codes in Mac OS, such as Mac OS Roman. The Apple LaserWriter also introduced the PostScript character set.
Digital Equipment Corporation (DEC) developed the Multinational Character Set, which had fewer characters but more letter and diacritic combinations. It was supported by the VT220 and later DEC computer terminals. This later became the basis for other character sets such as the Lotus International Character Set (LICS), ECMA-94 and ISO 8859-1.
ISO 8859 and proprietary adaptations
Eventually, ISO released ISO 8859, its own standard describing a set of eight-bit ASCII extensions. The most popular of these is ISO 8859-1, also called ISO Latin 1, which contains characters sufficient for the most common Western European languages.
Variations were standardized for other languages as well: ISO 8859-2 for Eastern European languages and ISO 8859-5 for Cyrillic languages, for example.
One notable way in which the ISO character sets differ from code pages is that the character positions 128 to 159, corresponding to ASCII control characters with the high-order bit set, are specifically unused and undefined in the ISO standards. Proprietary code pages, however, had often used these positions for printable characters, a breach of the ISO standards that was almost universal.
Microsoft later created code page 1252, a compatible superset of ISO 8859-1 with extra characters in the ISO-unused range.
Code page 1252 is the standard character encoding of western European language versions of Microsoft Windows, including English versions.
ISO 8859-1 is the common 8-bit character encoding used by the X Window System, and most Internet standards used it before Unicode.
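The practical effect of the 128 to 159 range is easy to demonstrate. In this Python sketch (an illustration), the same bytes yield curly quotation marks under code page 1252 but invisible C1 control characters under strict ISO 8859-1:

    data = b"\x93quoted\x94"
    print(data.decode("cp1252"))     # “quoted”  (typographic quotes)
    print(data.decode("latin-1"))    # the same bytes decode to the C1
                                     # controls U+0093 and U+0094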
Character set confusion
The meaning of each extended code point can be different in every encoding. To correctly interpret and display text data (sequences of characters) that includes extended codes, hardware and software that reads or receives the text must use the specific extended ASCII encoding that applies to it. Applying the wrong encoding causes seemingly irrational substitution of many or all extended characters in the text.
Software can use a fixed encoding, or it can select from a palette of encodings by defaulting, checking the computer's nation and language settings, reading a declaration in the text, analyzing the text, asking the user, letting the user select or override, or defaulting to the last selection. When text is transferred between computers that use different operating systems, software, and encodings, applying the wrong encoding can be commonplace.
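A minimal sketch of such a selection strategy, assuming a UTF-8-first fallback chain (real software typically also honours explicit declarations and locale settings):

    def guess_decode(data: bytes) -> str:
        # Try strict candidate encodings in order of likelihood.
        for enc in ("utf-8", "cp1252"):
            try:
                return data.decode(enc)
            except UnicodeDecodeError:
                continue
        # latin-1 never fails: every byte value 0-255 maps to a character.
        return data.decode("latin-1")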
Because the full English alphabet and the most-used characters in English are included in the seven-bit code points of ASCII, which are common to all encodings (even most proprietary encodings), English-language text is less damaged by interpreting it with the wrong encoding, but text in other languages can display as mojibake (complete nonsense). Because many Internet standards use ISO 8859-1, and because Microsoft Windows (using the code page 1252 superset of ISO 8859-1) is the dominant operating system for personal computers today, unannounced use of ISO 8859-1 is quite commonplace, and may generally be assumed unless there are indications otherwise.
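For example (a Python illustration), ASCII-only text survives a wrong decode unchanged, while Greek text decoded as ISO 8859-1 becomes nonsense:

    print("plain English".encode("cp437").decode("latin-1"))  # unharmed
    greek = "καλημέρα".encode("iso8859_7")    # "good morning" in Greek
    print(greek.decode("latin-1"))            # 'êáëçìÝñá': mojibake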
Many communications protocols, most importantly SMTP and HTTP, require the character encoding of content to be tagged with IANA-assigned character set identifiers.
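Python's standard library, for instance, exposes the tag a server declares (a small illustration; example.com stands in for any HTTP server):

    import urllib.request

    with urllib.request.urlopen("http://example.com/") as resp:
        # e.g. "Content-Type: text/html; charset=UTF-8"
        print(resp.headers.get_content_type())     # text/html
        print(resp.headers.get_content_charset())  # utf-8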
Multi-byte character encodings
Some multi-byte character encodings (character encodings that can handle more than 256 different characters) are also true extended ASCII: all ASCII characters are encoded with a single byte with the same value as in ASCII, and those values are not used anywhere else. Such encodings can be used in file formats where only ASCII bytes are used for keywords and file-format syntax, while bytes 0x80-0xFF may be used for free text. This includes most programming languages, where string constants, comments, and often variable names can contain non-ASCII characters, and it makes it much easier to introduce a multi-byte character set into existing systems.
UTF-8 is true extended ASCII, as are some Extended Unix Code encodings.
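The defining property is easy to check in Python (an illustration): ASCII characters keep their one-byte values under UTF-8, and those byte values never appear inside multi-byte sequences:

    print("price: 10".encode("utf-8"))   # b'price: 10', plain ASCII bytes
    euro = "€".encode("utf-8")           # b'\xe2\x82\xac'
    print(all(b >= 0x80 for b in euro))  # True: no ASCII byte reused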
ISO/IEC 6937 is not extended ASCII because its code point 0x24 corresponds to the general currency sign (¤) rather than to the dollar sign ($); otherwise it qualifies, if its accent-plus-letter pairs are regarded as an extended character followed by the ASCII one.
Shift JIS is not true extended ASCII. Besides replacing the backslash with the yen sign, its multi-byte characters can also include bytes in the ASCII range. It does avoid the use of ASCII delimiters and controls, however, so in many cases, such as HTML, it can work. UTF-16 is even further from extended ASCII, because ASCII characters are stored as two bytes, one of which is 0x00. Porting an existing system to support character sets such as Shift JIS or UTF-16 is complicated and bug-prone.
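The classic trap is the katakana "ソ", whose second Shift JIS byte equals the ASCII code for the backslash. A short Python sketch (illustrative) shows the collision and how UTF-8 avoids it:

    b = "ソ".encode("shift_jis")
    print(b)                      # b'\x83\\': the 0x5C byte is ASCII "\"
    print(0x5C in b)              # True, so naive escaping logic misfires
    print(all(x >= 0x80 for x in "ソ".encode("utf-8")))   # True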
Usage in computer-readable languages
For programming languages and document languages such as C and HTML, the principle of extended ASCII is important, since it enables many different encodings, and therefore many human languages, to be supported with little extra programming effort in the software that interprets the computer-readable language files (a sketch follows the list below).
The principle of Extended ASCII means that:
all ASCII bytes (0x00 to 0x7F) have the same meaning in all variants of extended ASCII,
bytes that are not ASCII bytes are used only for free text and not for tags, keywords, or other features that have special meaning to the interpreting software.
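A minimal Python sketch of this principle (the record format here is invented for illustration): the parser splits on ASCII delimiter bytes and never needs to know which extended ASCII variant the free text uses:

    record = "name=José;city=São Paulo".encode("latin-1")
    for field in record.split(b";"):           # b";" is pure ASCII
        key, _, value = field.partition(b"=")  # so is b"="
        print(key.decode("ascii"), value)      # values stay opaque bytes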
See also
ASCII
ASCII art
Digraphs and trigraphs
Input method
List of Unicode characters
UTF-8
KOI-8
References
External links
Roman Czyborra's Unicode and extended ASCII information pages
A short page on ASCII, with the OEM 8-bit chart and the ANSI 8-bit chart
Character sets
ASCII |
2252943 | https://en.wikipedia.org/wiki/Openwall%20Project | Openwall Project | The Openwall Project is a source for various software, including Openwall GNU/*/Linux (Owl), a security-enhanced Linux distribution designed for servers. Openwall patches and security extensions have been included into many major Linux distributions.
As the name implies, Openwall GNU/*/Linux draws source code and design concepts from numerous sources. Most important to the project are the Linux kernel and parts of the GNU userland; others include the BSDs, such as OpenBSD, which supplies the OpenSSH suite and inspired Openwall's own Blowfish-based crypt for password hashing, compatible with the OpenBSD implementation.
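The Blowfish-based scheme (widely known as bcrypt) is implemented well beyond Owl itself. As an illustration, the third-party Python bcrypt package (not an Openwall product) produces and verifies hashes in the OpenBSD-compatible format:

    import bcrypt  # third-party package: pip install bcrypt

    hashed = bcrypt.hashpw(b"correct horse", bcrypt.gensalt(rounds=12))
    print(hashed)                                    # b'$2b$12$...'
    print(bcrypt.checkpw(b"correct horse", hashed))  # True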
Public domain software
The Openwall Project also maintains a list of algorithms and source code that are public domain software.
Openwall GNU/*/Linux releases
LWN.net published a review of Openwall GNU/*/Linux (Owl) 3.0.
PoC||GTFO
Issues of the International Journal of Proof-of-Concept or Get The Fuck Out (PoC||GTFO) are mirrored by the Openwall Project under a samizdat licence. The first issue, #00, was published in 2013; issue #02 featured the Chaos Computer Club. Issue #07, published in 2015 as an homage to Dr. Dobb's Journal, could be rendered as .pdf, .zip, .bpg, or .html.
See also
Executable space protection
Comparison of Linux distributions
Security-focused operating system
John the Ripper
References
External links
Free software projects
Operating system security
Public-domain software with source code |
1497849 | https://en.wikipedia.org/wiki/List%20of%20Google%20products | List of Google products | The following is a list of products and services provided by Google LLC. Active, soon-to-be discontinued, and discontinued products, services, tools, hardware, and other applications are broken out into designated sections.
Web-based products
Search tools
Google Search – a web search engine and Google's core product.
Google Alerts – an email notification service that sends alerts based on chosen search terms whenever it finds new results. Alerts include web results, Google Groups results, news and videos.
Google Arts & Culture – an online platform to view artworks and cultural artifacts.
Google Assistant – a virtual assistant.
Google Books – a website that lists published books and hosts a large, searchable selection of scanned books.
Google Dataset Search – allows searching for datasets in data repositories and local and national government websites.
Google Finance – searchable US business news, opinion, and financial data.
Google Flights – a search engine for flight tickets.
Google Images – a search engine for images online.
Google News – automated news compilation service and search engine for news in more than 20 languages.
Google Patents – a search engine to search through millions of patents, each result with its own page, including drawings, claims and citations.
Google Scholar – a search engine for the full text of scholarly literature across an array of publishing formats and scholarly fields. Includes virtually all peer-reviewed journals.
Google Shopping – a search engine to search for products across online shops.
Google Travel – a trip planner service.
YouTube – a video hosting website.
Advertising services
Google Ads – an online advertising platform.
AdMob – a mobile advertising network.
Google AdSense – a contextual advertising program for web publishers that delivers text-based advertisements that are relevant to site content pages.
Google Ad Manager – an advertisement exchange platform.
Google Marketing Platform – an online advertising and analytics platform.
Google Tag Manager – a tag management system to manage JavaScript and HTML tags, including web beacons, for web tracking and analytics.
Communication and publishing tools
Blogger – a weblog publishing tool.
FeedBurner – a tool in news feed management services, including feed traffic analysis and advertising facilities.
Gmail – an email service.
Google Account – controls how a user appears and presents themselves on Google products.
Google Calendar – an online calendar with Gmail integration, calendar sharing and a "quick add" function to create events using natural language.
Google Chat – an instant messaging software with a capability of creating multi-user "rooms".
Google Charts – an interactive, web-based chart image generation from user-supplied JavaScript.
Google Collections – an app for saving links, images and videos and organizing them into shareable collections.
Google Classroom – a content management system for schools that aids in distribution and grading of assignments and providing in-class communication.
Google Currents – a digital bulletin board.
Google Docs Editors – a productivity office suite with document collaboration and publishing capabilities. Tightly integrated with Google Drive.
Google Docs – a document editing software.
Google Sheets – a spreadsheet editing software.
Google Slides – a presentation editing software.
Google Drawings – a diagramming software.
Google Forms – a survey software.
Google Sites – a webpage creation and publication tool.
Google Keep – a note-taking service.
Google Domains – a domain registration service, with website publishing partners.
Google Drive – a file hosting service with synchronisation option; tightly integrated with Google Docs Editors
Google Duo – a video chat mobile app.
Google Fonts – a webfont hosting service.
Google Groups – an online discussion service that also offers Usenet access.
Google Meet – a video conferencing platform.
Google Translate – a service that allows carrying out machine translation of any text or web page between pairs of languages.
Google Voice – a VoIP system that provides a phone number which can be forwarded to actual phone lines.
Map-related products
Google Maps – mapping service that indexes streets and displays satellite and street-level imagery, providing directions and local business search.
Google My Maps – a social custom map making tool based on Google Maps.
Google Maps Gallery – a collection of data and historic maps.
Google Mars – imagery of Mars using the Google Maps interface. Elevation, visible imagery and infrared imagery can be shown.
Google Moon – NASA imagery of the moon through the Google Maps interface.
Google Street View – provides interactive panoramas from positions along many streets in the world.
Google Sky – view planets, stars and galaxies.
Google Santa Tracker – simulates tracking Santa Claus on Christmas Eve.
Statistical tools
Google Analytics – a traffic statistics generator for defined websites, with Google Ads integration. Webmasters can optimize ad campaigns, based on the statistics. Analytics are based on the Urchin software.
Google Surveys – a market research tool.
Freebase – an open, Creative Commons attribution-licensed collection of structured data, and a platform for accessing and manipulating that data via the Freebase API.
Google Ngram Viewer – charts year-by-year frequencies of any set of comma-delimited strings in Google's text corpora.
Google Public Data Explorer – provides public data and forecasts from international organizations and academic institutions, including the World Bank, OECD, Eurostat and the University of Denver.
TensorFlow – an open-source machine learning library that simplifies the design and training of neural networks.
Google Trends – a graphing application for Web Search statistics, showing the popularity of particular search terms over time. Multiple terms can be shown at once. Results can be displayed by city, region or language. Related news stories are shown. Has "Google Trends for Websites" sub-section that shows popularity of websites over time.
Google Activity Report – a monthly report including statistics about a user's Google usage, such as sign-in, third party authentication changes, Gmail usage, calendar, search history and YouTube.
Google Data Studio – an online tool for converting data into customizable informative reports and dashboards.
Business-oriented products
Google Workspace – a suite of web applications for businesses, education providers and nonprofits that include customizable versions of several Google products accessible through a custom domain name. Services include, but are not limited to, Gmail, Google Contacts, Google Calendar, Google Docs Editors, Google Sites, Google Meet, Google Chat, Google Cloud Search, and more.
Google My Business
Google Tables (beta) – a business workflow automation tool.
Developer tools
Accelerated Mobile Pages (AMP) – an open-source project and service to accelerate content on mobile devices. AMP provides a JavaScript library for developers and restricts the use of third-party JS.
Google App Engine – write and run web applications.
Google Developers – open source code and lists of API services. Provided project hosting for free and open source software until 2016.
Dart – a structured web programming language.
Flutter – a mobile cross platform development tool for Android and iOS.
Go (programming language) – a compiled, concurrent programming language.
OpenSocial – APIs for building social applications on many websites.
Google PageSpeed Tools – optimize webpage performance.
Google Web Toolkit – an open source Java software development framework that allows web developers to create Ajax applications in Java.
Google Search Console Sitemap – submission and analysis for the Sitemaps protocol.
GN – meta-build system generating Ninja build configurations. Replaced GYP in Chromium.
Gerrit – a code collaboration tool.
Googletest – testing framework in C++.
Bazel – a build system.
FlatBuffers – a serialization library.
Protocol Buffers – a data serialization library similar in purpose to FlatBuffers.
Shaderc – tools and a library for compiling HLSL or GLSL into SPIR-V.
American fuzzy lop – a security-oriented fuzzer.
Google Guava – core libraries for Java.
Google Closure Tools – JavaScript tools.
Google Colaboratory – write Python code using a Jupyter notebook.
Security tools
reCAPTCHA – a user-dialogue system used to prevent bots from accessing websites.
Google Safe Browsing – a blacklist service for web resources that contain malware or phishing content.
Titan – a security hardware chip.
Titan Security Key – a U2F security token.
Titan M – used in Pixel smartphones starting with the Pixel 3.
Titan C – used in Google-made Chromebooks such as the Pixel Slate.
Operating systems
Android – a Linux-based operating system for mobile devices such as smartphones and tablet computers by Google and the Open Handset Alliance.
Wear OS – a version of Android designed for smartwatches and other wearable items.
Android Auto – a version of Android made by Google for automobiles.
Android TV – a version of Android made for smart TVs.
Cast OS – the embedded operating system that powers Chromecast and some Google Nest devices.
Chrome OS – a Linux-based operating system for web applications.
Glass OS – an operating system for Google Glass.
Fuchsia – an operating system based on the Zircon kernel.
Desktop applications
AdWords Editor – desktop application to manage a Google AdWords account; lets users make changes to their account and advertising campaigns before synchronizing with the online service.
Drive File Stream – file synchronisation software that works with the business edition of Google Drive.
Google Chrome – a web browser.
Google Earth – virtual 3D globe that uses satellite imagery, aerial photography, GIS from Google's repository.
Google IME – an input method editor that allows users to enter text in one of the supported languages using a Roman keyboard.
Google Japanese Input – Japanese input method editor.
Google Pinyin – input method editor that is used to convert Pinyin characters, which can be entered on Western-style keyboards, to Chinese characters.
Google Toolbar – web browser toolbar with features such as a Google Search box, pop-up blocker and ability for website owners to create buttons.
Android Studio – integrated development environment for Android.
Google Web Designer – WYSIWYG editor for making rich HTML5 pages and ads intended to run on multiple devices.
Backup and Sync – client software to synchronize files between the user's computer and Google Drive storage.
Tilt Brush – a room-scale 3D painting application for the HTC Vive and Oculus Rift.
Google Trends Screensaver – a screensaver showing the Google Trends in a customizable colorful grid for macOS.
Mobile applications
Hardware
Product families
Google Pixel – smartphones, tablets, laptops, earbuds, and other accessories.
Google Nest – smart home products including smart speakers, smart displays, digital media players, smart doorbells, smart thermostats, smoke detectors, and wireless routers.
Google Chromecast – digital media players.
Fitbit – activity trackers and smartwatches.
Google Glass – wearable computer with an optical head-mounted display and camera that allows the wearer to interact with various applications and the Internet via natural language voice commands.
Stadia Controller – game controller for Stadia.
Jamboard – 55" interactive whiteboard.
Models
Nexus One – 3.7" phone running Android 2.3 "Gingerbread"
Nexus S – 4" phone running Android 4.1 "Jelly Bean"
Nest Learning Thermostat (first generation) – smart thermostat
Galaxy Nexus – 4.7" phone running Android 4.3 "Jelly Bean"
Nexus Q – media streaming entertainment device in the Google Nexus product family
Nexus 7 (2012) – 7" tablet running Android 5.1 "Lollipop"
Nexus 10 – 10" tablet running Android 5.1 "Lollipop"
Nest Learning Thermostat (second generation) – smart thermostat
Nexus 4 – 4.7" phone running Android 5.1 "Lollipop"
Chromebook Pixel (2013) — laptop running Chrome OS
Nexus 7 (2013) – 7" tablet running Android 6.0 "Marshmallow"
Chromecast (first generation) – media streaming adapter
Nexus 5 – 4.95" phone running Android 6.0 "Marshmallow"
Nest Protect (first generation) – smoke alarm
Nexus 6 – 5.96" phone running Android 7.1.1 "Nougat"
Nexus 9 – 9" tablet running Android 7.1 "Nougat"
Nexus Player – streaming media player running Android 8.0 "Oreo"
Chromebook Pixel (2015) — laptop running Chrome OS
Nest Cam Indoor – security camera
Nest Protect (second generation) – smoke alarm
Nest Learning Thermostat (third generation) – smart thermostat
Nexus 5X – 5" phone running Android 8.1 "Oreo"
Nexus 6P – 5.7" phone running Android 8.1 "Oreo"
Pixel C – 10.2" convertible tablet running Android 8.1 "Oreo"
Chromecast (second generation) – digital media player
Chromecast Audio – audio streaming adapter
Nest Cam Outdoor – security camera
Pixel – 5" smartphone running Android 10
Pixel XL – 5.5" smartphone running Android 10
Daydream View (first generation) – virtual reality headset for smartphones
Google Home – smart speaker
Google Wifi – wireless router
Chromecast Ultra – 4K-capable media streaming adapter
Nest Cam IQ Indoor – security camera
Nest Thermostat E – smart thermostat
Nest Hello – smart video doorbell
Nest Cam IQ Outdoor – security camera
Nest × Yale – smart lock
Nest Secure – security system
Nest Guard
Nest Detect
Nest Tag
Pixel 2 – 5" smartphone running Android 11
Pixel 2 XL – 6" smartphone running Android 11
Daydream View (second generation) – virtual reality headset for smartphones
Home Mini – smart speaker
Home Max – smart speaker
Pixel Buds (first generation) – wireless earbuds
Pixelbook – laptop running Chrome OS
Pixel 3 – 5.5" smartphone running Android 11
Pixel 3 XL – 6.3" smartphone running Android 11
Pixel Slate – 2-in-1 PC running Chrome OS
Pixel Stand – wireless charger
Nest Hub – smart display
Chromecast (third generation) – media streaming adapter
Stadia Controller – gaming controller for Stadia
Pixel 3a – 5.6" smartphone running Android 11
Pixel 3a XL – 6" smartphone running Android 11
Nest Hub Max – smart display
Pixel 4 – 5.7" smartphone running Android 11
Pixel 4 XL – 6.3" smartphone running Android 11
Pixelbook Go – laptop running Chrome OS
Nest Mini – smart speaker
Nest Wifi – wireless router
Pixel Buds (second generation) – wireless earbuds
Pixel 4a – 5.8" smartphone running Android 11
Pixel 4a (5G) – 6.2" smartphone running Android 11
Pixel 5 – 6" smartphone running Android 11
Nest Audio – smart speaker
Chromecast with Google TV – media streaming adapter
Nest Thermostat – smart thermostat
Pixel Buds A-Series – wireless earbuds
Pixel 5a – 6.3" smartphone running Android 11
Pixel 6 – 6.4" smartphone running Android 12
Pixel 6 Pro – 6.7" smartphone running Android 12
Processors
Pixel Visual Core (2017, Pixel 2)
Titan M (2018, Pixel 3)
Pixel Neural Core (2019, Pixel 4)
Titan C (2019, Pixelbook Go)
Google Tensor (2021, Pixel 6)
Services
Google Cloud Platform – a modular cloud-based services for software development.
Google Crisis Response – a public project that covers disasters, turmoils and other emergencies and alerts.
Google Fi – an MVNO aimed at simple plans and pricing.
Google Get Your Business Online – a program to increase the web presence of small businesses and cities, offering advice on search engine optimization and helping business owners keep their business profiles up to date.
Google Public DNS – a publicly accessible DNS server (see the sketch after this list).
Google Person Finder – an open-source tool that helps people reconnect with others in the aftermath of a disaster.
Google Firebase – a real time database that provides an API that allows developers to store and sync data across multiple clients.
Google Cast – a protocol for displaying entertainment and apps from a phone, tablet or laptop on a TV or speakers.
Google Pay – a digital wallet platform and online payment system.
Stadia – a video game streaming service.
YouTube TV – an over-the-top internet television service that offers live TV.
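As an illustration of the Google Public DNS entry above, a resolver can be pointed at the service's well-known addresses. This sketch assumes the third-party dnspython package:

    import dns.resolver  # third-party package: pip install dnspython

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["8.8.8.8", "8.8.4.4"]  # Google Public DNS
    for record in resolver.resolve("example.com", "A"):
        print(record.address)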
Scheduled to be discontinued
Applications that are no longer in development, and scheduled to be discontinued in the future:
2022
Google Chrome Apps – applications that run on the Google web browser. Chrome Apps for Windows, Mac and Linux were phased out by early 2018, but apps on Chrome OS will remain supported and maintained through June 2022.
Google Hangouts – Instant Messaging service. Scheduled to be shut down in early 2022.
Discontinued products and services
Google has retired many offerings, either because of obsolescence, integration into other Google products, or lack of interest. Google's discontinued offerings are colloquially referred to as Google Graveyard.
2022
Android Things – A part of Google Internet of Things (IoT). Shut down on January 5, 2022.
2021
AngularJS – Open source web application framework. Shut down on December 31, 2021.
Google Clips – a miniature clip-on camera device. Pulled from Google Store on October 15, 2019. Discontinued on December 31, 2021.
My Maps – an Android app that enabled users to create custom maps for personal use or sharing on their mobile device. Shut down on October 15 and users were asked to migrate to the mobile web version of the app.
Google Bookmarks – an online bookmarking service. Discontinued on September 30, 2021.
Tour Builder – allowed users to create and share interactive tours inside Google Earth. Shut down in July 2021, replaced by new creation tools in Google Earth.
Poly – a service to browse, share and download 3D models. Shut down on June 30, 2021.
Google Expeditions - virtual reality (VR) platform designed for educational institutions. Discontinued on June 30, 2021. The majority of Expedition's tours were migrated to Google Arts & Cultures.
Tour Creator – allowed users to create immersive, 360° guided tours in the Expeditions app that could be viewed with VR devices. Shut down on June 30.
Timely – an Android app that provided alarm, stopwatch and timer functions with synchronization across devices. Timely servers were shut down on May 31.
Google Go Links – a URL shortening service that also supported custom domain for customers of Google Workspace. Discontinued on April 1.
Google Public Alerts – an online notification service that sent safety alerts to various countries. Shut down on March 31 and functions moved to Google Search and Google Maps.
Google Crisis Map – a service that visualized crisis and weather-related data. Shut down March 30. Improvements to Google Search and Maps rendered this service redundant.
Google App Maker – allowed users to develop apps for businesses. Shut down on January 19, 2021.
2020
Google Cloud Print – a cloud-based printing solution that has been in beta since 2010. Discontinued on December 31, 2020.
Google Play Music – Google's music streaming service. Discontinued on December 3 and replaced by YouTube Music and Google Podcasts.
Nest Guard – a security system with an alarm, keypad, and motion sensor with an embedded microphone. Discontinued and removed from Google Store in October 2020, though it will continue functioning.
Google Station – a service that provided free public Wi-Fi hotspots. Shut down on September 30.
Hire by Google – applicant tracking system & recruiting software. Shut down on September 1.
Password Checkup – an extension that warned of breached third-party logins. Shut down in July after it had been integrated with Chrome.
Google Photos Print – a subscription service that automatically selected the best ten photos from the last thirty days which were mailed to users' homes. Shut down in June.
Shoelace – an app used to find group activities with others who share your interests. Shut down in May.
Neighbourly – an experimental mobile app designed to help users learn about their neighborhood by asking other residents. Shut down on May 12.
Fabric – modular SDK platform launched by Crashlytics in 2014. Google acquired Crashlytics in 2017 and announced plans to migrate all of its features to Firebase. It was shut down on May 4, 2020.
Material Theme Editor – a plugin for the Sketch app that allowed designers to create a Material-based design system for their apps. Discontinued in March.
Fiber TV – an IPTV service bundled with Google Fiber. Discontinued on February 4, 2020.
One Today – an app that allowed users to donate $1 to different organizations and discover how their donation would be used. Discontinued in January.
Androidify – allowed users to create a custom Android avatar. Discontinued in January.
2019
Chromecast Audio – a variation of the second-generation Chromecast designed for use with audio streaming apps. Discontinued in January 2019.
YouTube Annotations – annotations that were displayed over videos on YouTube. On January 15, all existing annotations were removed from YouTube.
Mr. Jingles – Google's notifications widget. Discontinued on March 7.
Google Allo – Google's instant messaging app. Discontinued on March 12, 2019.
Google Image Charts – a chart-making service that provided images of rendered chart data, accessed with REST calls. The service was deprecated in 2012, temporarily disabled in February 2019 and discontinued on March 18, 2019.
goo.gl – a URL shortening service. Started to turn down support on March 30, 2018, and was discontinued on March 30, 2019.
Inbox by Gmail – an email application for Android, iOS, and web platform that organized and automated to-do lists using email content. As of April 2, 2019, accessing the Inbox subdomain redirects to Gmail proper.
Google+ – The consumer edition of Google's social media platform. As of April 2, 2019, users receive a message stating that "Google+ is no longer available for consumer (personal) and brand accounts".
Google Jump – cloud-based video stitching service. Discontinued June 28.
Works with Nest – the smart home platform of the Google brand Nest. Users were asked to migrate to the Google Assistant platform. Support ended on August 31, 2019.
YouTube for Nintendo 3DS – official app for Nintendo 3DS. Discontinued on September 3.
YouTube Messages – direct messages on YouTube – discontinued after September 18.
YouTube Leanback – a web application for control with a remote, intended for use with smart TVs and other similar devices. Discontinued on October 2, 2019.
Google Daydream View – Google's VR headset for smartphones (first generation in late 2016, second generation in late 2017). Discontinued just after the "Made by Google" event in October 2019; the Google Daydream platform itself was also retired.
Touring Bird – Travel website which facilitated booking tours, tickets and activities in top locations. The service was shut down on November 17, 2019.
Google Bulletin – "Hyperlocal" news service which allowed users to post news from their neighborhood. It was shut down on November 22, 2019.
Google Fusion Tables – A service for managing and visualizing data. The service was shut down on December 3, 2019.
Google Translator Toolkit – An online computer-assisted translation tool designed to allow translators to edit the translations that are automatically generated by Google Translate. It was shut down on December 4, 2019, citing declining usage and the availability of other similar tools.
Google Correlate – finds search patterns which correspond with real-world trends. It was shut down on December 15, 2019, as a result of low usage.
Google Search Appliance – a rack-mounted device used to index documents. Hardware sales ended in 2017 and an initial shutdown occurred in 2018; the service was ultimately shut down on December 31, 2019.
Google Native Client (NaCL/PNaCl) – sandboxing technology for running a subset of native code. It was discontinued on December 31, 2019.
Datally – Lets users save mobile data – Removed from Play Store in October 2019.
Build with Chrome – an initiative between Lego and Google to build the world using virtual Lego bricks. It was discontinued in March 2019.
2018
Blogger Web Comments (Firefox only) – displays related comments from other Blogger users.
Google Portfolios – Personal financial securities tracker. Deprecated in November 2017. Reached end of life as of January 2018.
City Tours – overlay to Maps that shows interesting tours within a city
Dashboard Widgets for Mac (Mac OS X Dashboard Widgets) – suite of mini-applications including Gmail, Blogger and Search History.
Joga Bonito – soccer community site.
Local – Local listings service, merged with Google Maps.
MK-14 – 4U rack-mounted server for Google Radio Automation system. Google sold its Google Radio Automation business to WideOrbit Inc.
Google Music Trends – music ranking of songs played with iTunes, Winamp, Windows Media Player and Yahoo Music. Trends were generated by Google Talk's "share your music status" feature.
Google Personalized Search – search results personalization, merged with Google Accounts and Web History.
Photos Screensaver – slideshow screensaver as part of Google Pack, which displays images sourced from a hard disk, or through RSS and Atom Web feeds.
Rebang (Google China) – search trend site, similar to Google Zeitgeist; formerly part of Google Labs.
Spreadsheets – spreadsheet management application, before it was integrated with Writely to form Google Docs & Spreadsheets.
University Search – search engine listing for university websites.
U.S. Government Search – search engine and personalized homepage that exclusively draws from sites with a .gov TLD. Discontinued June 2006.
Video Player – view videos from Google Video.
Voice Search – automated voice system for web search using the telephone. Became Google Voice Local Search and integrated on the Google Mobile web site.
Google X – redesigned Google search homepage. It appeared in Google Labs, but disappeared the following day for undisclosed reasons.
Accessible Search – search engine for the visually impaired.
Quick Search Box – search box, based on Quicksilver, easing access to installed applications and online searches.
Visigami – an image-search screensaver that displays images from Google Images, Picasa and Flickr.
Wireless access – a VPN client for Google WiFi users whose equipment did not support the WPA or 802.1X protocols.
Google Play Newsstand – News publication and magazine store. Replaced by Google News on May 15, removed from Google Play on November 5, and magazines were no longer available on Google News since January 2020.
Google News and Weather – a news publication app. Merged into Google News on May 15.
Google Global Market Finder
QPX Express API – flight search API
Google Contact Lens – was a smart contact lens project capable of monitoring the user's glucose level in tears. On November 16, 2018, Verily announced it has discontinued the project because of the lack of correlation between tear glucose and blood glucose.
2017
Google Maps Engine – develop geospatial applications. Discontinued February 1.
Google Swiffy – convert Adobe Flash files (SWF) into HTML5. Discontinued July 1.
Google Nexus – smartphone lineup; replaced by Google Pixel on October 4.
Free Search – embed site/web search into a user's website. Replaced by Google Custom Search.
Google Hands Free – retail checkout without taking out a phone or watch. A pilot started in the Bay Area in March 2016, but the service was discontinued on February 8.
Google Spaces – group discussions and messaging. Discontinued on April 17.
Google Map Maker – map editor with browser interface. Discontinued on April 1, replaced by Google Maps and Google Local Guides.
Trendalyzer – data trend viewing platform. Discontinued in September.
2016
Google Code – Open source code hosting. Discontinued on January 25 and renamed to Google Developers.
Picasa – photo organization and editing application. Closed March 15 and replaced by Google Photos.
Google Compare – comparison-shopping site for auto insurance, credit cards and mortgages
Google Showtimes – movie showtime search engine. Discontinued on November 1.
MyTracks – GPS logging. Shut down April 30.
Project Ara – an "initiative to build a phone with interchangeable modules for various components like cameras and batteries", suspended on September 2 to "streamline the company's seemingly disorganized product lineup".
Panoramio – geolocation-oriented photo sharing website. Discontinued on November 4. Google's Local Guides program as well as photo upload tools in Google Maps rendered Panoramio redundant.
Google Feed API – download public Atom or RSS feeds using JavaScript. Deactivated on December 15.
2015
Google Moderator – rank user-submitted questions, suggestions and ideas via crowdsourcing. Discontinued on June 30.
Wildfire by Google – social media marketing software
BebaPay – prepaid ticket payment system. Discontinued on March 15.
Google Helpouts – Hangout-based live video chat with experts. Discontinued on April 20.
Google Earth Enterprise – Google Earth for enterprise use. Discontinued on March 20.
Google Earth Plugin – customize Google Earth. Discontinued on December 15.
Speak To Tweet – telephone service created in 2011 in collaboration with Twitter and SayNow allowing users to phone a specific number and leave a voicemail; a tweet was automatically posted on Twitter. Discontinued sometime during 2015.
2014
Google Questions and Answers – community-driven knowledge market website. Discontinued on December 1.
Orkut – social networking website. Discontinued on September 30.
Google's "discussion search" option. Discontinued in July.
Quickoffice – productivity suite for mobile devices. Discontinued in June, merged into Google Drive.
Google TV – smart TV platform based on Android. Discontinued and replaced by Android TV in June.
Google Offers – service offering discounts and coupons. Shut down on March 31.
Google Chrome Frame – plugin for Internet Explorer that allowed web pages to be viewed using WebKit and the V8 JavaScript engine. Discontinued on February 25.
Google Schemer – social search to find local activities. Discontinued on February 7.
YouTube My Speed. Discontinued in January, replaced by Google Video Quality Report.
Google Notifier – alerted users to new messages in their Gmail account. Discontinued on January 31.
2013
My Maps – GIS tools for Google Maps.
Google Currents – Magazine app. Merged into Google Play Newsstand on November 20.
Google Checkout – online payment processing service, aimed at simplifying the process of paying for online purchases. Discontinued on November 20, merged into Google Wallet.
iGoogle – customisable homepage, which can contain web feeds and Google Gadgets. Discontinued on November 1.
Google Latitude – mobile geolocation tool that lets friends know where users are. Discontinued on August 9, with some functionality moved to Google+.
Google Reader – web-based news aggregator, capable of reading Atom and RSS feeds. Discontinued on July 1.
Meebo – a browser-based instant messaging and social networking website. Discontinued on June 6.
Google Building Maker – web-based building and editing tool to create 3D buildings for Google Earth. Discontinued on June 4.
Google Talk – instant messaging service that provided both text and voice communication. Replaced May 15 by Google Hangouts.
SMS Search – mobile phone short message service. Discontinued on May 10.
Google Cloud Connect – Microsoft Office plugin for automatically backing up Office documents upon saving onto Google Docs. Discontinued on April 30, in favor of Google Drive.
Picnik – online photo editor. Discontinued on April 19, 2013 and moved to Google+ photo manager.
Google Calendar Sync – sync Microsoft Outlook email and calendar with Gmail and Google Calendar. Synchronization for existing installations stopped on August 1, 2014. Replaced with Google Sync, which does not synchronize Outlook calendars, but can sync email using IMAP or POP3. Also, Google Apps for Business, Education, and Government customers can use Google Apps Sync for Microsoft Outlook.
2012
Picasa Web Albums Uploader – upload images to the "Picasa Web Albums" service. It consisted of an iPhoto plug-in and a stand-alone application.
Google Chart API – a Web-based chart image generator, deprecated in 2012 with a service commitment through 2015 and turned off in 2019. Google promotes the JavaScript-based Google Charts as a replacement, which is not backwards-compatible with the Google Chart API's HTTP methods.
Google Apps Standard Edition – Discontinued on December 6.
Nexus Q – digital media player. Discontinued in November.
Google Refine – data cleansing and processing. It was spun off from Google on October 2, becoming open source; it is now OpenRefine.
TV Ads – Method to place advertising on TV networks. Discontinued on August 30, with all remaining active campaigns ending December 16.
Knol – write authoritative articles related to various topics. Discontinued October 1.
Yinyue (Music) (Google China) – site linking to a large archive of Chinese pop music (principally Cantopop and Mandopop), including audio streaming over Google's own player, legal lyric downloads, and in most cases legal MP3 downloads. The archive was provided by Top100.cn (i.e., the service did not search the whole Internet) and was available in mainland China only. Discontinued in September; users were given the option to download playlists until October 19.
Google Insights for Search – insights into Google search term usage. Discontinued September 27, merged in Google Trends.
Listen – subscribe to and stream podcasts and Web audio. Discontinued in August.
BumpTop – physics-based desktop application. Discontinued in August.
Google Video – a free video hosting service. Shut down and migrated to YouTube on August 20.
Google Notebook – online note-taking and web-clipping application. Discontinued in July.
Google Website Optimizer – testing and optimization tool. Discontinued on August 1.
Google Mini – reduced capacity, lower-cost version of the Google Search Appliance. Discontinued on July 31.
Google Wave – online communication and collaborative real-time editor tool that bridge email and chat. Support ended on April 30, 2012.
Slide.com – Discontinued on March 6.
Google Friend Connect – add social features to websites. Discontinued on March 1, replaced by Google+'s pages and off-site Page badges.
Jaiku – social networking, microblogging and lifestreaming service comparable to Twitter. Shut down January 15.
Google Code Search – software search engine. Discontinued on January 15.
Google Health – store, manage, and share personal health information in one place. Development ceased June 24, 2011; accessible until January 1, 2012; data available for download until January 1, 2013.
2011
Google Buzz – social networking service integrated with Gmail allowing users to share content immediately and make conversations. Discontinued in December and replaced by Google+.
Google Sidewiki – browser sidebar and service that allowed contributing and reading helpful information alongside any web page. Discontinued in December.
Gears – web browser features, enabling some new web applications. Removed from all platforms by November.
Squared – creates tables of information about a subject from unstructured data. Discontinued in September.
Aardvark – social search utility that allowed people to ask and answer questions within their social networks. It used people's claimed expertise to match 'askers' with good 'answerers'. Discontinued on September 30.
Google PowerMeter – view building energy consumption. Discontinued on September 16.
Desktop – desktop search application that indexed emails, documents, music, photos, chats, Web history and other files. Discontinued on September 14.
Google Fast Flip – online news aggregator. Discontinued September 6.
Google Pack – application suite. Discontinued on September 2.
Google Directory – collection of links arranged into hierarchical subcategories. The links and their categorization were from the Open Directory Project, sorted using PageRank. Discontinued on July 20.
Google Blog Search – weblog search engine. Discontinued in July.
Google Labs – test and demonstrate new Google products. Discontinued in July.
Image Swirl – an enhancement for an image-search tool in Google Labs. It was built on top of image search by grouping images with similar visual and semantic qualities. Shut down in July due to discontinuation of Google Labs.
Google Sets – generates a list of items when users enter a few examples. For example, entering "Green, Purple, Red" emits the list "Green, Purple, Red, Blue, Black, White, Yellow, Orange, Brown". Discontinued mid-year.
Directory – navigation directory, specifically for Chinese users.
Hotpot – local recommendation engine that allowed people to rate restaurants, hotels etc. and share them with friends. Moved to Google Places service in April 2011.
Real Estate – place real estate listings in Google Maps. Discontinued February 10.
2010
Marratech e-Meeting – web conferencing software, used internally by Google's employees. Discontinued on February 19.
Google SearchWiki – annotate and re-order search results. Discontinued March 3, replaced by Google Stars.
GOOG-411 (also known as Voice Local Search) – directory assistance service. Discontinued on November 12.
Google Base – submission database that enabled content owners to submit content, have it hosted and made searchable. Information was organized using attributes. Discontinued on December 17, replaced with Google Shopping APIs.
2009
Audio Ads – radio advertising program for US businesses. Discontinued on February 12.
Catalogs – search engine for over 6,600 print catalogs, acquired through optical character recognition. Discontinued in January.
Dodgeball – social networking service. Users could text their location to the service, which would then notify them of nearby people or events of interest. Replaced by Google Latitude.
Living Stories – collaboration with The New York Times and The Washington Post for presenting news. Discontinued in February 2010.
Google Mashup Editor – web mashup creation with publishing, syntax highlighting, debugging. Discontinued in July; migrated to Google App Engine.
Google Ride Finder – taxi and shuttle search service, using real time position of vehicles in 14 U.S. cities. Used the Google Maps interface and cooperated with any car service that wished to participate. Discontinued in October.
Shared Stuff – web page sharing system, incorporating a bookmarklet to share pages, and a page to view the most popular shared items. Pages could be shared through third-party applications such as Delicious or Facebook. Discontinued on March 30.
Google Page Creator – webpage publishing program that could be used to create pages and to host them on Google servers. Discontinued, with all existing content transferred to Google Sites.
2008
Google Browser Sync (Mozilla Firefox) – allowed Firefox users to synchronize settings across multiple computers. Discontinued in June.
Google Lively – 3D animated chat. Discontinued December 31.
Hello – send images across the Internet and publish them to blogs. Discontinued on May 15.
SearchMash – search engine to "test innovative user interfaces". Discontinued on November 24.
Send to Phone – send links and other information from Firefox to their phone by text message. Discontinued on August 28, replaced by Google Chrome to Phone.
Web Accelerator – increased load speed of web pages. No longer available from, or supported by, Google as of January 20.
2007
Google Click-to-Call – allowed a user to speak directly over the phone without charge to businesses found on Google search results pages.
Google Video Player – a video player that played back files in Google's own .gvi format and supported playlists in .gvp format. Shut down on August 17 due to Google's acquisition of YouTube.
Related Links – links to information related to a website's content. Discontinued on April 30.
Public Service Search – non-commercial organization service, which included Google Site Search, traffic reports and unlimited search queries. Discontinued in February, replaced by Google Custom Search.
Google Video Marketplace – discontinued in August
2006
Google Answers – online knowledge market that allowed users to post bounties for well-researched answers to their queries. Discontinued on November 28; still accessible (read-only).
Google Deskbar – desktop bar with a built-in mini browser. Replaced by a similar feature in Google Desktop. Discontinued May 8.
Writely – web-based word processor. On October 10, Writely was merged into Google Docs & Spreadsheets.
See also
Outline of Google
History of Google
List of acquisitions by Google
Google's hoaxes
X (company)
Google.org
References
External links
List of products on the Google corporate site
List of products on Google Developers
Mobile software
Google
Computing-related lists
Google products
Products
Google
Google |
3732824 | https://en.wikipedia.org/wiki/Ntdetect.com | Ntdetect.com | ntdetect.com is a component of Microsoft Windows NT-based operating systems that operate on the x86 architecture. It is used during the Windows NT startup process, and is responsible for detecting basic hardware that will be required to start the operating system.
Overview
The bootstrap loader takes control of the boot process and loads NTLDR.
Ntdetect.com is invoked by NTLDR, and returns the information it gathers to NTLDR when finished, so that it can then be passed on to ntoskrnl.exe, the Windows NT kernel.
Ntdetect.com is used on computers that use BIOS firmware. Computers with Extensible Firmware Interface, such as IA-64, use a method of device-detection that is not tied to the operating system.
Hardware detection operates somewhat differently depending on whether Advanced Configuration and Power Interface (ACPI) is supported by the hardware. Ntdetect.com passes the hardware details gathered from the BIOS on to the operating system. If ACPI is supported, the list of found devices is handed to the kernel, and Windows takes responsibility for assigning resources to each device. On older hardware, where ACPI is not supported, the BIOS rather than the operating system takes responsibility for assigning resources, and this information is passed to the kernel as well.
In addition, ntdetect.com will make a determination as to which hardware profile to use. Windows supports multiple distinct hardware profiles, which allows a single copy of Windows to work well in situations where the hardware changes between specific layouts on a regular basis. This is common with portable computers that connect to a docking station.
In Windows Vista and later Windows operating systems, the HAL only supports ACPI, and ntdetect.com has been replaced by winload.exe, so that Windows will be able to control hardware resource allocation on every machine in the same way. Hardware profiles are also no longer supported in Windows Vista.
The information gathered by ntdetect.com is stored in the HKLM\HARDWARE\DESCRIPTION key in the Windows Registry at a later stage in the boot process.
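On a running Windows system, that registry tree can be inspected programmatically. A minimal Python sketch using the standard winreg module (the values present vary by machine):

    import winreg  # standard library, Windows only

    key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                         r"HARDWARE\DESCRIPTION\System")
    index = 0
    while True:
        try:  # enumerate what boot-time hardware detection recorded
            name, value, _ = winreg.EnumValue(key, index)
        except OSError:  # no more values
            break
        print(name, value)
        index += 1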
Classes of hardware detected
Hardware identification
Hardware date & time
Bus and adapter types
SCSI adapters
Video adapters
Keyboard
Serial and parallel communication ports
Hard drives
Floppy disks
Mouse
Floating-point coprocessor
Industry Standard Architecture-based devices
Troubleshooting
To aid in troubleshooting, Microsoft has made available a "debug" version of ntdetect.com, called ntdetect.chk, which displays detailed information about the hardware that was detected; it is included in the Windows Support Tools.
Notes
References
Windows XP Resource Kit - Troubleshooting the Startup Process
Windows 2000 Resource Kit - Starting Windows 2000 - Detecting Hardware
Windows NT Workstation Resource Kit - Troubleshooting Startup and Disk Problems
External links
Download of ntdetect.chk for Windows 2000
Windows XP SP2 Support Tools includes ntdetect.chk for Windows XP.
Windows components
Windows files |
22558431 | https://en.wikipedia.org/wiki/Juniper%20E-Series | Juniper E-Series | Juniper E-Series is a series of broadband services routers or edge routers manufactured by Juniper Networks. The E series was originally developed by Unisphere Networks, which Juniper acquired in 2002. These routers provide multiple services, including broadband remote access server, broadband video services, dedicated access, 802.11 wireless subscriber management, VoIP, internet access, security services and network address translation (NAT), on a single platform. The carrier-class architecture of the E-series routers makes it possible to combine Broadband Remote Access Server (B-RAS) and dedicated access capabilities (T1/E1 and above) on a single, integrated platform. E-series routers run the JUNOSe operating system, unlike Juniper's other router series, which run JUNOS.
Models and Platforms
The Juniper E-series includes six different models designed to address a variety of service provider requirements. The specific models include the high-capacity E320 BSR and ERX-1440 platforms, the mid-range ERX-1410 platform, the compact ERX-710 and ERX-705 platforms, and the highly compact ERX-310. All E-series platforms use a single version of the JUNOSe operating system and support a full suite of Internet routing protocols, including BGP-4, IS-IS, OSPF, and RIP.
E120
The Juniper E120 router is a high-performance router used primarily for small to medium-sized points of presence (PoPs) and central offices. The E120 has a 120 Gbit/s switch fabric and hosts up to six line modules that support OC3/STM1 through OC48c/STM16 and 10 Gigabit Ethernet rates.
E320
The Juniper E320 router is a high-performance router used primarily for large points of presence (PoPs). The box supports a 100 Gbit/s or a 320 Gbit/s switch fabric and hosts up to 12 line modules that support OC3/STM1 through OC48c/STM16 and 10 Gigabit Ethernet interfaces, with the ability to support 96,000 subscribers. The E320 was designed with video in mind, adding the quality of service (QoS) and high availability that carriers want for IPTV, as well as a large increase in density. Cisco Systems has two boxes selling into this space: the 10000 series, considered Cisco's primary B-RAS entry, and the 7600 line of edge routers (of which the 7613 is the largest), which include some B-RAS capabilities. The E320's B-RAS backplane switching capacity of up to 320 Gbit/s compares with 256 Gbit/s for the Cisco 7613.
ERX310
The Juniper ERX310 is a compact but high-performance router with a 10 Gbit/s switch fabric that supports interfaces up to OC12c/STM4 and Gigabit Ethernet. The 3-slot chassis contains a 10 Gbit/s switch fabric/route processor (SRP) module, with the remaining two slots dedicated to line modules.
ERX705
The Juniper ERX705 is a compact router used for small and medium-sized circuit aggregation applications. This 7-slot router contains either a 5 Gbit/s or a 10 Gbit/s switch fabric/route processor (SRP), with optional SRP redundancy for high availability, and five slots dedicated to line modules; it supports interfaces up to OC12c/STM4 and Gigabit Ethernet.
ERX710
The Juniper ERX710 is mainly used for medium-sized and large circuit aggregation applications. It has a 5 Gbit/s switch fabric with optional redundancy and five slots for line modules, and supports interfaces up to OC12c/STM4 and Gigabit Ethernet. The ERX-705 and ERX-710 routers use the same line modules and I/O modules used across the entire E-series product line.
ERX1410
The Juniper ERX1410 is an edge router used for large circuit aggregation applications. It has a 10 Gbit/s switch fabric with optional redundancy and 12 line module slots, and supports interfaces up to OC12c/STM4 and Gigabit Ethernet.
ERX1440
The Juniper ERX1440 is a high-performance router used for small to medium-sized points of presence (PoPs). The ERX1440 has a 40 Gbit/s switch fabric with optional redundancy and 12 line module slots, and supports interfaces up to OC48c/STM16 and Gigabit Ethernet.
Features
The ERX system uses a modular, carrier-class design with a passive midplane, active front-insert line modules, and high-reliability, rear-insert input/output (I/O) modules. All chassis types use the same line modules and I/O modules. The 7-slot and 14-slot systems support full redundancy and line module hot-swapping to optimize network uptime.
The E-series JUNOSe system software and applications support stateful Switch Route Processor (SRP) switchover. This high availability for subscriber management applications, including all Point-to-Point Protocol and Dynamic Host Configuration Protocol access options, means that all subscriber sessions and services remain active during a failover.
These routers maintain an entire routing table per port, which removes the route processor from the forwarding path and provides wire-speed performance for IP traffic streams, a design aided by Juniper-developed ASIC technology.
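For illustration, the core operation such a per-port forwarding table performs is a longest-prefix-match lookup for each packet. The sketch below shows that lookup in Python under a hypothetical three-entry table; real E-series hardware performs the equivalent in ASICs at wire speed, not in software:

import ipaddress

# Hypothetical forwarding table: destination prefix -> outgoing interface.
FORWARDING_TABLE = {
    ipaddress.ip_network("10.0.0.0/8"): "ge-0/0/1",
    ipaddress.ip_network("10.1.0.0/16"): "ge-0/0/2",
    ipaddress.ip_network("0.0.0.0/0"): "uplink",  # default route
}

def lookup(destination: str) -> str:
    """Return the interface for the most specific (longest) matching prefix."""
    addr = ipaddress.ip_address(destination)
    best = max((net for net in FORWARDING_TABLE if addr in net),
               key=lambda net: net.prefixlen)
    return FORWARDING_TABLE[best]

print(lookup("10.1.2.3"))  # -> "ge-0/0/2": the /16 wins over the /8 and /0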
The E-series supports subscriber management features that effectively aggregate traffic from access multiplexers, terminate Point-to-Point Protocol (PPP) sessions, and enforce QoS policies on a per-flow and per-subscriber basis. Features include support for DHCP, PPPoE, PPPoA, PAP, and CHAP, domain parsing based on destination domain, IP address pooling, L2TP, LAC, LNS, RADIUS-initiated disconnect, RADIUS server support, auto-detection, zero-touch configuration, and TACACS+.
Supported E-series modules include Channelized T3, Channelized OC3/STM-1 and OC12/STM-4 line modules, Fast Ethernet/Gigabit Ethernet line modules, OC-3/STM-1 and OC-12/STM-4 ATM line modules, OC3/STM-1, OC12/STM-4 and OC48/STM-16 Packet over SONET line modules, service modules, and an IPSec service module.
References
External links
http://www.juniper.net/us/en/products-services/routing/e-series/
Juniper Networks
Routers (computing) |
59803487 | https://en.wikipedia.org/wiki/Swift%20Playgrounds | Swift Playgrounds | Swift Playgrounds is an educational tool and development environment for the Swift programming language developed by Apple Inc., initially announced at the WWDC 2016 conference. It was introduced as an iPad application alongside iOS 10, with a macOS version introduced in February 2020. It is available for free via Apple's App Store for iPadOS and Mac App Store for macOS.
In addition to publishing the Swift Playgrounds application itself, Apple also produces a series of educational lessons teaching programming and debugging skills. The application can also subscribe to lessons and other content published by third parties, including lessons allowing users to control educational toys such as Lego Mindstorms EV3 and Sphero robots. Apple publishes a curriculum guide for educators wishing to incorporate Swift Playgrounds into their teaching.
Features
Swift Playgrounds was designed to be a development environment and an education tool simultaneously. The app allows users to download lessons and challenges. Once stored on the iPad, these can be copied and modified without the need for an active internet connection.
Apple's initial lessons, available for all Swift Playgrounds users to download, introduce three characters: Byte, Blu, and Hopper. In each challenge, young coders are asked to help these characters achieve simple goals by coding simple instructions. As challenges become more difficult, more complex algorithms are required to solve them and new concepts are introduced. Advanced lessons in Playgrounds introduce users to more complex features such as Apple's Bluetooth APIs and its augmented reality development platform, ARKit.
In addition to Apple's own educational content, Swift Playgrounds can download third-party lessons through its subscriptions feature. Some third-party lessons allow the app to control robots (such as Lego Mindstorms EV3 and Sphero educational toys) and drones (such as the Parrot). Apple also offers coding classes using Swift Playgrounds at Apple Stores.
Swift Playgrounds was designed to be fully accessible to users with disabilities. It supports Apple's VoiceOver screen reader technology, and at WWDC 2020 Apple introduced a series of lessons called "Swan's Quest" which use accessibility features to help students solve puzzles.
History
The Swift Playgrounds application was announced on June 13, 2016 at WWDC 2016 as an iPad exclusive app to help people learning to code with Apple's Swift programming language. A beta version for Apple developers was released on the same date, followed by a public beta version in the following month. The app was presented as a teaching tool for students, introducing the core concepts of coding using an interactive environment designed for touch. The application's name is an apparent reference to Xcode's earlier Playgrounds feature, introduced in 2014.
Along with iOS 10, the app was officially released on September 13, 2016. Apple also published a curriculum guide, recommending the iPad app for middle school students and up. In January 2018, Apple introduced subscriptions to the iPad application, allowing users to subscribe to playgrounds developed by third parties.
On February 11, 2020, Apple released a macOS version of Swift Playgrounds on the Mac App Store, built using Apple's Catalyst technology. Subsequent versions of the application have supported both iPadOS and macOS, with most of Apple's curriculum available on both platforms. For WWDC 2020, Apple published a session instructing third-party developers on how to support both platforms in their subscriptions.
Development
The iPad version of Swift Playgrounds (1.0) was released on September 13, 2016. Chris Lattner was one of the few core people who drove Swift Playgrounds for iPad, from conception and design through implementation and iteration. Simultaneously with its release, Apple published guides on the iBookStore to teach users how to navigate and use the application. The launch coincided with a large Silicon Valley campaign to press public schools to teach coding and was followed by Apple's announcement of the "Everyone Can Code" initiative, a program that provides computer science curriculum to help kids learn how to code. Swift Playgrounds is included in this program as free coding curriculum, and Apple provides detailed guides to walk teachers through teaching Swift. Apple also released "App Development with Swift", a year-long curriculum for teaching Swift software development, and later introduced a Swift certification program to validate coding skills for students.
In May 2018, Apple announced the extension of "Everyone Can Code" initiative to US schools serving blind and deaf students. In January 2017, Apple partnered with RNIB (Royal National Institute of Blind People) to provide braille versions of the Swift Playgrounds graphics used in its coding course.
Version history
Reception
Upon release, Swift Playgrounds reached first place among free iPad education apps in nearly 100 countries. The app received generally positive reviews from users (a 4/5 rating score on the App Store) and from the press. Reviewers praised the app's ability to make serious coding accessible to young students, as well as the fact that it was not excessively focused on Swift itself but rather on teaching good coding practices. Common Sense Media rates Swift Playgrounds with a 5/5 ranking score.
References
External links
Apple Inc. software
Educational software
Integrated development environments
IOS software
MacOS programming tools
Freeware |
23101430 | https://en.wikipedia.org/wiki/Anwar%20Ali%20%28physicist%29 | Anwar Ali (physicist) | Anwar Ali (born 1943) is a Pakistani physicist and computer programmer who served as Chairman of the Pakistan Atomic Energy Commission (PAEC) from 2006 to 2009. He spent his scientific career at the Pakistan Atomic Energy Commission as a computational physicist and played a key scientific role in his nation's secret nuclear deterrent program.
Biography
Early life
Born in Hoshiarpur in India, Ali attended Government College University in Lahore, where he graduated with a Bachelor of Science (BSc) in physics, then entered the master's program in physics at Punjab University, where he graduated with a Master of Science (MSc). He later attended the University of Birmingham, where he graduated with an MSc in nuclear physics.
In 1972, he began doctoral studies in physics at the University of Birmingham but abandoned his dissertation when he learned of India's nuclear test, 'Smiling Buddha', of 18 May 1974.
Pakistan Atomic Energy Commission
After his graduation from Punjab University, Ali found employment with the Pakistan Atomic Energy Commission (PAEC) in 1967 as a scientific officer at the Atomic Energy Center in Lahore. He joined the Nuclear Engineering Division alongside Bashiruddin Mahmood, an engineer, and in May 1974 aided in investigations into uranium enrichment under Mahmood. Ali was an original member of the uranium investigation team working with Mahmood as principal investigator, under the advice of Abdul Qadeer Khan. Ali was posted back to the Engineering Division of the PAEC after Abdul Qadeer Khan took over the program and moved it to the Khan Research Laboratories in Kahuta.
In 1978, Ali joined the Airport Development Works, a separate facility working on electromagnetic separation of uranium isotopes independently of the Khan Research Laboratories, functioning under its director, Dr. G.D. Alam, at the Chaklala Air Force Base. He assisted Alam in the design of the centrifuge and in the computer programming used to control its rotation. There, Ali learned computer programming from Alam, who also taught him about automatic controls after Ali was posted to the Khan Research Laboratories in 1981. Eventually, Ali lost interest in nuclear physics and instead found himself working in computer coding and researching topics and applications involving fluid dynamics in computational physics.
In 1982, Ali came into brief conflict with Abdul Qadeer Khan when the latter confided in him about selling the centrifuge technology to an unnamed Arab country; Ali may have been the one who alerted the Zia administration to Khan's motive. Following this incident, Ali was transferred to the Directorate of Industrial Liaison until 1985, when he was posted to the Directorate of Technical Development (DTD), where he worked as a computer programmer and helped develop computer codes for the launch sequences of missiles. In 1993, Ali accepted employment with the National Defence Complex, a rocket engine contractor, and played a key role in designing the satellite-based inertial guidance and control system of the Shaheen-I missile system. In 1996, Ali rejoined the DTD and helped write the computer codes for the nuclear devices; he was part of the computer programming team that designed the codes for the devices tested under the codename Chagai-I.
In 1998, Ali was Director of Computation at the Atomic Energy Commission and witnessed the nuclear testing at the Chagai Hills, reportedly stating: "Pakistan has successfully detonated the device and we have made a series of experiments and they have been very successful and the results were as we were expecting and in this series there were six experiments." In 1999, Ali was honored with the Hilal-e-Imtiaz (trans. Star of Excellence) by the President of Pakistan, and remained associated with his nation's strategic program, namely the Shaheen missile system.
In 2002, Ali was appointed chief technical officer, with the local corporate designation Member (Technical), at the PAEC, and was rumored to be in line to become director of the Khan Research Laboratories (KRL), a move rebuffed by the Government in 2004. Ali, who had been the most senior scientist at the PAEC, was confirmed for the chairmanship of the Atomic Energy Commission when the Government of Pakistan announced it to the news media on 28 March 2006. Upon taking over the chairmanship, Ali announced that the PAEC had also promoted the utilization of nuclear technologies in other areas, such as agricultural production and medical diagnosis and therapy. As chairman of the PAEC, Ali provided strong advocacy to the Government of Pakistan for the expansion of commercial nuclear plants.
On 27 January 2006, Anwar Ali received an invitation letter to visit CERN, a particle physics facility in Switzerland, where Ali and Robert Aymar negotiated an agreement between the PAEC and CERN to provide financial, technical, and scientific support in the fields of novel accelerator, collimator, particle detector and information technologies, as well as through the training and education of scientists and engineers. Under this agreement, Pakistan upgraded the Compact Muon Solenoid and the Large Hadron Collider by designing and developing particle accelerator components as a lead contractor at CERN.
On 5 April 2009, Ali, who had been rumored to be in line for an extension of his three-year chairmanship, was instead confirmed for retirement from the PAEC; he was succeeded by Dr. Ansar Pervaiz on 7 April 2009.
Controversy in Canada
In 1985, the Canadian government launched an investigation of Ali regarding his visit to the country in July 1980, as reported by Indian news outlets in 2006. In 1980, Ali, then serving as Director of the Directorate of Industrial Liaison (DIL), contracted with Canadian GE, alongside American firms such as Westinghouse Electric, RCA Corp., and Motorola, to provide power inverters, capacitors, resistors, krytrons, and other electronic machine parts.
Canadian prosecutors did not identify Ali as a suspect but filed a criminal complaint against three expatriate individuals in a Canadian court. A Wall Street Journal investigation in 1984, based on court documents and interviews with Canadian officials, stated:
"In July 1980, two members of [the] Pakistan Atomic Energy Commission came here in secret with a high-priority shopping list: vital parts [for] Pakistan’s fledgling effort to become a nuclear power...... According to court records here, the two Pakistani officials/scientists who came to Canada– Anwar Ali and I.A. Bhatty – brought with them a list of parts needed for a key item embargoed by the U.S. and others, a high-frequency inverter. This exotic electrical device is used to spin a gas centrifuge at extremely high speeds to enrich uranium. The parts were bought from manufacturers, including General Electric Co., Westinghouse Electric Corp., RCA Corp., and Motorola, Inc., by two small electrical equipment stores in Montreal, according to the court papers. Then they were repackaged and shipped on to Pakistan".
The investigation fell apart when Canadian authorities were unable to establish a proliferation link among the suspected individuals; the court exonerated the accused of any wrongdoing, with only minor court fees and charges.
Awards and honors
Hilal-i-Imtiaz (1999)
Pride of Performance (2003)
Nishan-i-Imtiaz (2015)
See also
Computational physics
References
Sources
https://web.archive.org/web/20131003001554/http://cns.miis.edu/other/wmdi060504b.htm
http://www.paec.gov.pk/news-archive-pakatom/na-p-mj07.htm
http://www.wmdinsights.org/I5/SA1_NewHeadOfPakistan.htm
Collaboration
Anwar Ali
Living people
1943 births
Punjabi people
Government College University, Lahore alumni
University of the Punjab alumni
Pakistani expatriates in the United Kingdom
Pakistani expatriates in England
Alumni of the University of Birmingham
Punjabi academics
Pakistani nuclear physicists
Project-706 people
Pakistani computer programmers
Pakistani diplomats
Pakistani expatriates in Canada
Computational physicists
People associated with CERN
Scientists from Lahore
Chairpersons of the Pakistan Atomic Energy Commission |
22280100 | https://en.wikipedia.org/wiki/List%20of%20DVD%20authoring%20software | List of DVD authoring software | The following applications can be used to create playable DVDs.
Free software
Free software implementations often lack features such as encryption and region coding due to licensing restrictions and, depending on the demands of the DVD producer, may not be considered suitable for mass-market use.
DeVeDe (Linux)
DVD Flick (Windows only)
DVDStyler (Windows, Mac OS X, and Linux using wxWidgets. Recent versions are bundled with potentially unwanted programs that may accidentally be installed unless care is taken during installation.)
Professional studio software
MAGIX Vegas DVD Architect previously known as Sony Creative Software's DVD Architect Pro (discontinued)
Apple DVD Studio Pro (Mac) (discontinued)
Sonic DVDit Pro (formerly DVD Producer) (discontinued)
Adobe Encore (EOL / discontinued)
Sonic DVD Creator (discontinued)
Professional corporate software
MAGIX Vegas DVD Architect (previously known as Sony Creative Software's DVD Architect Pro) (discontinued)
Adobe Encore (Last version is CS6, bundled with Adobe Premiere Pro CS6 / EOL) (discontinued)
Sonic Scenarist SD/BD/UHD
MediaChance DVD-lab (discontinued)
Home
Apple iDVD (Mac)
CyberLink Media Suite
Nero Vision
Pinnacle Studio
Roxio Easy Media Creator
Roxio Toast (for Mac OS)
Sonic MyDVD
TMPGEnc DVD Author
Ulead DVD MovieFactory
Windows DVD Maker (discontinued)
WinDVD Creator
Ashampoo Burning Studio
See also
DVD-Video
DVD authoring
DVD ripper
References
List
DVD |
49843313 | https://en.wikipedia.org/wiki/Artificial%20economics | Artificial economics | Artificial Economics can be defined as ″a research field that aims at improving our understanding of socioeconomic processes with the help of computer simulation″.
Like in Theoretical Economics, the approach followed in Artificial Economics to gain understanding of socioeconomic processes involves building and analysing formal models. However, in contrast with Theoretical Economics, models in Artificial Economics are implemented in a programming language so that computers can be employed to analyse them. Concretely, the method followed in Artificial Economics to analyse formal models most often comprises two stages: 1) deductive generation of samples, and 2) inductive inference of general patterns.
The deductive generation of samples consists of running the model many times for different particularisations of the variables that the model contains. Specifically, if the model is stochastic, then each computer simulation run is conducted with a specific realisation of each and every random variable in the model. The result of this first stage is a set of inputs and their corresponding outputs, which have been derived by the computer using pure deduction, i.e. applying to the inputs the inference rules that define the model.
Once a sufficient number of samples have been obtained, an inductive approach is then employed to infer general patterns about the behaviour of the model. This inductive process can only lead to probable, rather than necessarily true, conclusions (unless all possible particular instances are explored), since it tries to infer general properties out of particular instances.
Thus, using this computer simulation approach, the data is produced by the computer using strict deduction, but the general patterns about how the rules of the model transform the inputs into the outputs are inferred using generalisation by induction.
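To make the two stages concrete, the sketch below runs a deliberately toy stochastic market model many times (stage 1, deduction) and then infers a summary pattern from the resulting samples (stage 2, induction). The model dynamics here are hypothetical and chosen only for brevity:

import random
import statistics

def run_market_model(num_traders: int, noise: float, rng: random.Random) -> float:
    """One simulation run: return the final price after 100 trading rounds."""
    price = 100.0
    for _ in range(100):
        # Each trader nudges the price up or down by a random amount.
        pressure = sum(rng.uniform(-noise, noise) for _ in range(num_traders))
        price = max(0.0, price + pressure / num_traders)
    return price

# Stage 1: deductive generation of samples, one run per realisation
# of the random variables contained in the model.
rng = random.Random(42)
samples = [run_market_model(num_traders=50, noise=1.0, rng=rng)
           for _ in range(1000)]

# Stage 2: inductive inference of a general (probable, not certain)
# pattern from the particular instances generated above.
print(f"mean final price: {statistics.mean(samples):.2f}")
print(f"std of final price: {statistics.stdev(samples):.2f}")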
The benefit of using the computer simulation approach described above (vs. pure logical deduction only) is that it enables the exploration of (formal) models that are currently intractable using the most advanced mathematical techniques. This is so because the set of assumptions that can be investigated using computer simulation is not limited by the strong restrictions that mathematical tractability imposes. This point is particularly important in the study of socioeconomic processes, which, due to their complex nature, are oftentimes difficult or impossible to address adequately using a purely deductive approach only. The strictly deductive approach often requires so many simplifications to ensure mathematical tractability that the correspondence between the real world and the model assumptions turns out disappointingly weak. Some of these simplifications have been outlined in the left column of the table below, together with some of the features that can be explored using the Artificial Economics approach (right column).
The differences in the type of assumptions investigated using the strictly deductive approach only and those investigated in Artificial Economics are so fundamental that some scholars see these differences as the defining features of Artificial Economics. Other scholars find that the distinctive characteristic of Artificial Economics is methodological, i.e. the use of the computer simulation approach. The fact that models in Artificial Economics are implemented in a programming language (rather than expressed as a set of equations) is not considered substantial since any model implemented in computer code can be expressed as a well-defined mathematical function.
Artificial Economics Conference Series
One of the aims of these conferences is to favour the meeting of people and ideas coming from two communities of scientists, computer science and economics, in order to construct a more structured multi-disciplinary approach. Proceedings of every conference in the series have been published as a volume in Springer's Lecture Notes in Economics and Mathematical Systems series.
References
Economic methods
Computational economics |
54637068 | https://en.wikipedia.org/wiki/Chief%20Ministership%20of%20Shehbaz%20Sharif | Chief Ministership of Shehbaz Sharif | Shehbaz Sharif served the longest tenure in history as chief minister of Punjab, spanning over 11 years of rule. His tenures coincided with the 1997 Nawaz government, the 2008 Pakistan Peoples Party regime and the 2013 PML-N rule. His prominent contributions involve infrastructure development, transit projects and power plants; he has developed flyovers, road infrastructure, transit services and power projects. He is best known for his dedication, working speed, ability to execute mega projects and quick actions. However, opposition parties have also criticised him over a lack of priorities, spending development funds mostly in Lahore and other large cities, concentrating major powers and assignments in his own hands, nepotism, conflicts of interest, and misuse of authority.
His transit initiatives include the Lahore Metro, the Multan Metrobus, the Rawalpindi-Islamabad Metrobus and the Orange Line (Lahore Metro), as well as several flyover and road projects, mostly in larger cities, especially Lahore. His main work in education includes the Danish Schools, the Punjab Educational Endowment Fund and the Chief Minister Laptop Scheme. His electric power initiatives include the Nandipur Power Project, the Quaid-e-Azam Solar Park and other projects. To maintain security and enforce law and order, he started the Dolphin Force and the Safe Cities Project. He was widely credited for his work during the dengue outbreak in Punjab, which was brought under control within a short period of time. However, Shehbaz Sharif also remained surrounded by several controversies, including the Model Town incident, alleged scams, and corruption probes over development projects.
Tenure of Chief Ministership
The chief ministership of Shehbaz Sharif began on 20 February 1997, with the formation of a majority provincial government in Punjab, Pakistan. This followed the dismissal of the previous government by then-President Leghari over alleged corruption and abuse of power, two years short of its five-year tenure. After the 1997 Pakistani general election, Sharif became the 9th Chief Minister of Punjab. His first tenure started on 20 February 1997 and ended on 12 October 1999 with the 1999 Pakistani coup d'état.
Sharif again became chief minister of Punjab on 8 June 2008, after his fourth win from the Bhakkar constituency in by-polls held in June 2008; he was subsequently elected chief minister unopposed after securing 265 votes in the 371-member provincial assembly. After the 2013 general election, Sharif remained Chief Minister of Punjab after his party secured 214 seats out of 372.
First term (1997–99)
During his first tenure as chief minister of Punjab, he focused on health, education, agriculture and industrial sectors and was praised for his good governance.
Sabzazar case
In 2003, an anti-terrorism court issued an arrest warrant for Sharif in a 1998 extrajudicial killings case. Sharif was accused of ordering the extrajudicial killings of five people in a fake police encounter in 1998, during his first tenure as Chief Minister of Punjab. Sharif attempted to return to Pakistan in 2004 to appear before the court, but was forcibly deported back to Saudi Arabia. In August 2007, the Supreme Court of Pakistan gave a verdict that allowed Sharif to return to Pakistan. In September 2007, a court in Pakistan ordered police to arrest Sharif "at whichever airport he lands at" on the 2003 arrest warrant. Sharif denied ordering the alleged killings and said the charges against him were politically motivated. He further said that "in 2004 he landed at the Lahore Airport and wanted to appear before the court but the government in a deceitful mode sent him back to Saudi Arabia in sheer violation of the orders of the Supreme Court."
Second term (2008–13)
Some of the major projects initiated by Shehbaz Sharif include:
Punjab Education Foundation, a foundation to promote educational investment in Punjab through the private sector
e-Rozgaar Program for freelancers
Violence Against Women Centers The Violence Against Women Center (VAWC) was established for the defence of women against violence. Through this entity, victims are able to report violence through its helpline and get speedy justice. It also provides victims with facilities to lodge an FIR, along with forensic help, medical examination, prosecution and post-trauma rehabilitation under one roof. The first Violence Against Women Center was inaugurated in Multan; it provides medical and legal aid, protection and relief for women victimized by violence, with services including FIR registration and psychological, forensic, legal, medicolegal, investigation, rehabilitation and examination facilities under one roof.
Quaid-e-Azam Solar Park (QASP) The Quaid-e-Azam Solar Power Park was a planned 1000 MW solar park in Bahawalpur, Punjab, Pakistan. However, after the installation of 100 MW by May 2015, further development of the project was stopped due to the dusty environment and hot weather. In June 2017 it was decided to privatise the solar park, as the installed 100 MW was yielding only 18 MW owing to high maintenance and production costs.
Lahore Orange Line Metro The Orange Line of the Lahore Metro is an under-construction line of the Lahore Metro network in Lahore. It is part of the mass transit programme launched by Chief Minister Shehbaz Sharif. The EXIM Bank of China has granted a loan of Rs. 160 billion for the Orange Line Metro; repayment of this loan is spread over 20 years at Rs. 8 billion per year, to be paid from the transport budget of the Punjab government. Construction work for the Lahore Orange Line Metro Train project has started in the provincial metropolis. Labourers, workers, students, doctors, lawyers, farmers, traders and people from all walks of life will travel by metro train; approximately 250,000 people will benefit from this project daily, and capacity will increase up to 500,000 by 2025. The Lahore Orange Line Metro has a 27 km long route: a 24.3 km section will be elevated six metres above the ground, while a 1.7 km section will be underground, with a total of 26 stations. The metro train will connect all the main areas of Lahore, starting from Ali Town, Thokar Niaz Baig, Wahdat Road, Aiwan Town, Sabzazaar, Samanabad, Gulshan Ravi, Chuburji, GPO, Lakshami, Railway Station, UET, Shalamar Bagh and Mehmood Boti to Dera Gujjaran. It is expected to bring a revolution in the transport sector, and work is being carried out round the clock on the project. The project is expected to achieve a saving of Rs. 14.9 billion in its first five years of operation in terms of travel time and vehicle operating costs. In addition to a reduction in congestion, the end-to-end travel time will be 45 minutes, compared to the 2.5 hours currently spent travelling this route.
Multan Metrobus The Multan metro project was started in May 2015 and began operations in January 2017. The track, with 21 stations, cost Rs. 29 billion. However, the project remained mired in several controversies, including a lack of public interest, the purchase of 200 feeder buses to support the Multan metro, and a $17.5 million corruption scam taken up by the Senate finance committee and later by NAB. Chief Minister Punjab Shehbaz Sharif denied any corruption charges or misappropriation in the project.
The Dolphin Force It is a new elite security unit, inspired by a law enforcement unit in Turkey. This unit patrols the streets of Lahore to curb street crime. The unit has so far received 35 Honda CB motorbikes out of an estimated final strength of 300 to aid in pursuit and apprehension. However, despite spending billions, the project failed to control crime in Lahore.
50 thousand low-cost houses in Punjab An agreement was signed between the Punjab government and a renowned housing-sector institution of Turkey, the Housing Development Administration of Turkey (TOKI), for developing 50,000 low-cost houses in Punjab. TOKI will provide technical assistance for the construction of these houses: 25 thousand houses were to be built in 2016 and a further 25 thousand in 2017.
Land Records and Management and Information System (LRMIS) The Punjab government has formally inaugurated the Land Records Management and Information System in all 36 districts of Punjab province. The manual records of 22,000 rural estates belonging to more than 55.5 million landowners have been digitized so far. State-of-the-art Arazi Centres have been established in all 143 tehsils of the province of Punjab to deliver services. The system has fixed 500 million discrepancies, and service centres are issuing around 150,000 fards per month; in addition, some 150,000 land transfer deals are approved per month. By 2016, land records had been computerized in 30 districts, with 92 service centres linked online. In addition, Google mapping of Lodhran, Lahore and Hafizabad has been completed. Land searches can be done online in Lodhran, Lahore, Okara, Hafizabad, Bahawalnagar, Jhang, Bahawalpur, Jhelum, Rajanpur, Khosab and Khanewal. A customer can get a ‘fard’ in 30 minutes after going through basic minimum formalities.
Children Hospital Faisalabad It is reported to be the second-largest children's hospital in the world and in Asia. A 1000-bed children's hospital with a state-of-the-art, centrally air-conditioned emergency and OPD block, it was established on the special directive of Chief Minister Punjab Shehbaz Sharif.
Safe Cities Project Chief Minister Punjab Shehbaz Sharif inaugurated the Safe Cities Project in Lahore, with a plan to extend the system throughout Punjab. The basic purpose of this system is to protect citizens and their property from harm by nefarious elements. The Safe Cities Project in Lahore has Asia's largest surveillance screen installed for monitoring the entire city, and 10,000 cameras have been installed in Lahore for comprehensive coverage of public institutions, key infrastructure, public roads, prime hotspots and public places. The Safe City project will be implemented in six more cities of Punjab by 2017: Multan, Rawalpindi, Bahawalpur, Faisalabad, Gujranwala and Sargodha. The system will ensure improved service delivery through the provision of integrated emergency services such as Rescue 1122, firefighting and disaster relief to the police and public. The public will be informed of traffic density and alternate routes through the Variable Messaging System and broadcasts. Police respondents will be available for real-time monitoring of processions and law-and-order situations. Through this system, public satisfaction is expected to increase, as real-time comparison of suspected persons with criminal and other allied departmental databases will be conducted.
Punjab Educational Endowment Fund a scholarship fund for students
Lahore Metro Bus System The project was initiated by the Pervez Elahi government in 2005, but only a feasibility study was completed during his tenure. The Shehbaz Sharif government delayed the project and, instead of all four routes for which feasibility had been completed, went ahead with only one route: a 27 km Bus Rapid Transit (BRT) system from Gajumata to Shahdara.
Danish Schools is a free education project to provide top-class education to the less fortunate in Pakistan
Apna Rozgar Scheme is an initiative taken by the Punjab government to provide 50,000 yellow taxis yearly
Khud Rozgar Scheme is a loan scheme in which interest-free loans are provided to poor people to establish small shops or other locally operated businesses.
Punjab Mass Transit Authority to construct, maintain rapid bus transit systems in Punjab
Punjab Food Authority to control food hygiene in Punjab
Punjab Information Technology Board to promote IT field in Punjab
Punjab Land Development Company to make affordable housing schemes in Punjab
Information Technology University a world class university in field of IT in Lahore
Ashiana Housing Scheme Affordable Housing for low income families with modern facilities in Lahore, Faisalabad, Rawalpindi, Sargodha, Gujranwala & other cities of Punjab.
e-Youth Initiative to provide free laptops to 100 thousand brilliant students studying at public sector colleges & universities and high achievers of the Boards of Punjab in Matriculation Examinations
Punjab Forensic Science Agency at Lahore Establishment of the PFSA and a state-of-the-art forensic lab
Punjab Youth Internship Program A program to equip 50,000 unemployed youth with productive skills in Punjab. A monthly stipend of Rs 10,000 will be paid to the interns.
Lahore Waste Management Company An Integrated Solid Waste Management project with the collaboration of Turkish contractors for the city of Lahore to serve as a model for other areas in Punjab.
The Punjab Model of Proactive Governance A system to proactively seek feedback from consumers of government services using call centres, text messages and phone calls. The project has been supported and funded by the World Bank.
Sasti Roti Subsidised white flour bags and ovens provided to special shops, aimed at providing cheap bread to the common man.
Rawalpindi Institute of Cardiology Following reforms in the health sector, Punjab Chief Minister Shehbaz Sharif inaugurated the completed Rawalpindi Institute of Cardiology. The hospital was completed at a cost of Rs. 3.6 billion. The 272-bed hospital is equipped with modern medical facilities to provide complete health services.
Third term (2013–2018)
Punjab Budget 2016-2017
The Punjab budget 2016-17 was presented on 13 June by Punjab Finance Minister Ayesha Ghaus Pasha. A total budget of Rs. 1,681 billion was presented. The budget allocation for agriculture was increased by 47%, for law and order by 48%, for health by 62%, for education by 47%, for potable water by 88%, for school education by 71%, and for other developmental projects by 37.5%. This budget is primarily focused on social development rather than infrastructure. From providing education to every child in Punjab to women's empowerment, it is a socially directed budget with unique initiatives. Some of the highlights of this budget include:
A total of Rs. 256 billion has been allocated to education in the Punjab budget 2016-2017. A total of Rs. 43.83 billion has been allocated to health in the Punjab budget 2016-17. A major part of this budget will be spent on revamping DHQ and THQ hospitals throughout Punjab. The Punjab government has already announced a historic package of Rs. 100 billion for agriculture. Out of this package, Rs. 50 billion will be allocated to agriculture in 2016-17. Women Development & Empowerment has been given a substantial share of the budget including initiatives such as Help Desk in police stations, 40% quota for women in self-employment scheme and establishment of Women Protection Center in Multan. For empowering and supporting youth through different initiatives, the Punjab government has allocated Rs. 23.3 billion in 2016-17 budget. For establishing law and order throughout the province, the Punjab government has allocated Rs. 145 billion.
Chinese investment tour
Chief Minister Shehbaz Sharif went on an elaborate tour of China from 24–29 July. The tour was focused not only on bringing more investment to Punjab by presenting a favourable side of the province but also on celebrating 67 years of Pak-China friendship. The Chief Minister travelled from Beijing to Jimo and met senior authorities across developmental sectors including health, energy and infrastructure. The term 'Punjab Speed' is used in China with regard to the rapid execution of projects, said the Chairman of the EXIM Bank in a meeting with Shehbaz Sharif. In his six-day tour of four cities, Chief Minister Shehbaz Sharif and the Chinese authorities signed 20 agreements and 25 memorandums of understanding for health, energy, infrastructure and other sectors. In addition, the chief minister told the Chinese authorities that work had started on the implementation of a $46 billion investment package by China. Chief Minister Shehbaz Sharif met the top management of the Huawei company; in the meeting, it was decided that Huawei would incorporate its healthcare solution in one of the DHQ hospitals in Punjab free of cost, and if the incorporation brings the desired results, the solution will be expanded to all 36 districts of Punjab. The Chief Minister also visited Beijing Friendship Hospital and met its top management; in a fruitful meeting, it was proposed that the hospital's team of experts would visit the Pakistan Kidney & Liver Institute to assess training needs and train Pakistani doctors accordingly. At a business seminar in Shandong attended by 60 key Chinese companies, the Vice Governor of Shandong said that Shehbaz Sharif had furthered the Pak-China relationship in a real sense. The chief minister assured the Chinese investors that the best facilities would be provided to them and their teams in Punjab.
Turkey visit
Chief Minister Shehbaz Sharif went on a three-day tour of Turkey from 25–27 July. The most prominent highlight of this tour was Shehbaz Sharif's meeting with Turkish Health Minister Dr Recep Akdag. In this meeting, a memorandum of understanding was signed under which Turkey will extend its cooperation to uplift the healthcare system in Punjab; the cooperation will include the safe disposal of hospital waste as well as disinfection and infection control in health facilities. Shehbaz Sharif also attended the opening ceremony of the Yavuz Sultan Selim Bridge on the personal invitation of Turkish President Erdogan, where he was among the leading world figures present at the inauguration. Later, the chief minister also had a one-on-one meeting with President Erdogan.
CM's Special Monitoring Unit
Special Monitoring Unit (SMU) is an initiative of Mr Shehbaz Sharif. CM's Special Monitoring Unit (SMU) Law and Order is an entity of the Government of Punjab and an essential part of the Chief Minister's Office. The unit is responsible for monitoring and implementation in various projects across Punjab. The unit is directly accountable to the CM and the citizens of the province of Punjab.
Since its inception in August 2014, SMU has already introduced groundbreaking reforms in Law & Order wing that are currently in the implementation stage. These include but are not limited to:
Violence Against Women Centres (VAWC) – Provision of all facilities under one roof (including First Aid, FIR lodging, Prosecutor, Medical Examination, Forensics, Post Trauma Rehabilitation, Psychologist Consulting, Dar-ul-Amaans and Placement into Society)
Dealer Vehicle Registration System (D.V.R.S) – Citizen convenience by vehicle registration and license plate issuance at dealerships.
Token Tax Identification Stickers – to eliminate inconvenient checkpoint stops
Speciality plates – Customized license plates in different colours and with options for different images (relating to Culture, Disability, Personalities, Punjab Landmarks and in support of causes such as Breast Cancer, Terror Victims etc.)
Introduction of camera-integrated traffic fines with zero discrimination towards violators. Vehicle owners are traced through registration plate data, and warnings are sent to the owner's house.
Model Graveyards to facilitate grieving citizens
Traffic Reforms – including engineering, education and enforcement
Restaurant Grading System – giving citizens the decision-making power to choose restaurants based on quality and hygiene as opposed to ambiance alone.
Shahbaz Sharif is acknowledged even by his opponents as an outstanding administrator for his role in making Punjab a model of good governance.
Punjab Protection of Women Against Violence Act 2015
The Punjab Assembly passed the long-awaited and landmark Punjab Protection of Women Against Violence Act 2015. The act has 31 clauses and provides a complete system for complaint registration, along with penalties for offenders. It contains remedies for victims of violence and criminalises all forms of violence against women, while providing for special centres offering services and protection to women against violence and abuse. Centres are being established for reconciliation and resolution of disputes. In addition, shelter homes will be built to protect aggrieved women, providing lodging and boarding facilities to victims and their children.
Punjab Agriculture Conference 2016
Chief Minister Shehbaz Sharif announced a package of Rs 100 billion for the development of the agriculture sector at the Punjab Agriculture Conference 2016. Representatives of farmers, agriculture experts and government gathered on one platform to deliberate in working groups and formulate concrete recommendations and proposals. Through this conference, a solid policy for agriculture will be formulated. The Rs 100 billion budget will be spent on the prosperity of agriculture and farmers during the next two years, with a major chunk allocated to research and development departments for the improvement of agriculture. In addition, a Kissan Commission will be set up to resolve farmers' problems, develop agriculture and increase per-acre yield.
Prohibition of Child Labour in Brick Kilns Ordinance 2016
The Punjab government, under the supervision of Chief Minister Punjab Shehbaz Sharif, passed the Prohibition of Child Labour in Brick Kilns Ordinance 2016 to eliminate child labour throughout the province of Punjab. The Government of the Punjab launched a mega campaign to eliminate child labour, and the ordinance offers the following package to children currently working at brick kilns in Punjab:
• Monthly scholarship of Rs. 1000/- to every child going to school.
• Government of Punjab to bear all educational expenses.
• Free uniform, textbooks and stationery.
• Transport facility to be extended to the children of far-flung areas.
• Sum of two thousand rupees to be given to the parents of such children going to school.
Fight against dengue
In 2011 there was a dengue fever outbreak in Punjab. The total number of reported patients was 17,352, with a loss of 252 lives in district Lahore alone. The Punjab government under Chief Minister Sharif took radical and drastic measures to bring the menace under control: in 2012, only 182 confirmed cases were reported in district Lahore, with zero mortality. The achievement was recognised and appreciated by Sri Lankan and other foreign experts.
Special investment initiatives
In November 2015, Chief Minister Punjab Shehbaz Sharif decided to sharpen the investment pitch for Punjab by organizing a successful two-day International Seminar on Investment Opportunities in Punjab. The basic purpose of the seminar was to assure investors of the sanctity of the information and of facilitation for investors, in order to help them make up their minds when entering a new emerging market. The seminar allowed investors to point out their expectations and highlight possible gaps. Punjab honed its investment promotion skills through this seminar and invited some of its most trusted business partners, chiefly Chinese and Turkish. The Chief Minister of Punjab personally received the investment delegates from China and Turkey and presented the investment opportunities available in 48 different sectors in Punjab. It was a successful event which resulted in the signing of over 150 MoUs and agreements with investors.
Shehbaz Sharif went on a five-day tour to the United Kingdom in November 2015 to enhance cooperation in health, education, skill development and other sectors in Punjab. On this occasion, he met British Secretary of State for Foreign and Commonwealth Affairs Philip Hammond and UK Home Secretary Theresa May. He also addressed Chatham House, a think tank of international repute operating under the aegis of the Royal Institute of International Affairs in London, speaking to a large number of professors, intellectuals, politicians and experts. Addressing sessions at Chatham House is a prestigious tradition, as the institute is about 100 years old and only prominent personalities or politicians of international repute are invited to speak.
Major Initiatives
Khadim-e-Punjab Saaf Dehat Program A major initiative launched in November 2017 to provide sanitation facilities to rural areas; it is the first initiative of its kind in the history of Pakistan taken for the rural community. The program ensures the cleaning of drains, street sweeping and the collection of solid waste from village streets. The disposal of solid waste from the villages and the elimination of stagnant water will improve the general environment and result in healthier conditions.
Pakistan Kidney and Liver Institute
The Chief Minister of Punjab initiated the Hepatitis Prevention and Treatment Program, under which the 300-bed first phase of the Pakistan Kidney and Liver Institute and Research Centre was launched on 25 December 2017 at a cost of Rs20bn. It is the first state-of-the-art kidney and liver centre in South Asia that deals specifically with kidney- and liver-related illnesses. Liver-related diseases are the 11th leading cause of death in Pakistan, with kidney diseases at number 12. According to the World Health Organization (WHO), approximately 40% of all diseases in Pakistan have a kidney, bladder or liver-related origin. Treatment of these diseases was either too expensive or simply unavailable in the country, which left people with no choice but to seek it in neighbouring countries or lose precious lives. The Pakistan Kidney and Liver Institute deals in nephrology, urology, hepatobiliary surgery, and kidney and liver transplants, and aims to cover other specialized medical fields in the future. The institute has a network of filter clinics in all 36 districts of Punjab province for the timely provision of treatment, and it specifically targets the low-income group, who cannot otherwise afford treatment.
CM Punjab’s historic participation at OIC summit 2017
In the aftermath of President Trump's decision to recognise Jerusalem, Turkish President Recep Tayyip Erdoğan took the timely decision of convening a meeting of the OIC, which was fully endorsed by the Chief Minister of Punjab, Mian Shehbaz Sharif.
On 12 December 2017, he [Shehbaz Sharif] left for Turkey along with PM Abbasi to attend the OIC Extraordinary Summit on Jerusalem.
Punjab Chief Minister Mian Shehbaz Sharif received prominent coverage in national as well as international media during his participation at the OIC Extraordinary Summit in Istanbul, where he was the only provincial chief executive invited by the Turkish government along with PM Shahid Khaqan Abbasi, a personal honour for him and an acknowledgement of his leadership qualities at the global level.
Zewar-e-Taleem
Under the Khadim-e-Punjab Zewar-e-Taleem program, financial assistance is being provided to low-literacy districts. A stipend of Rs 1000 per month has been awarded to 495,000 female students so far, in order to improve the enrolment and retention of female students.
Lahore Knowledge park
Lahore Knowledge Park is being developed as the country's first technopolis, with a capital outlay of PKR 107 billion (US$1 billion). The Punjab government has dedicated an area of 852 acres of land to the project. Lahore Knowledge Park embodies the vision of the provincial Chief Minister to make Punjab, and by extension Pakistan, South Asia's premier destination for the global knowledge economy.
NexGen
Early childhood education (ECE) centres have been set up in 11,000 of 62,000 schools so far, and almost 4,000 primary school teachers have already been trained for the project. It is the largest project of its kind among SAARC countries.
Distribution of Laptops
426,000 laptops have been distributed among deserving students who could not afford laptops and showed good performance throughout their educational careers. The Shahbaz Sharif Merit Scholarship (SSMS) Program offers fully funded PhD programs at world-renowned universities. A total of 351,424 scholarships have been awarded to competitive students in the region.
New Campuses
1167 College Teacher Interns (CTIs) have been recruited to meet the shortage of college staff. New universities and sub-campuses have been established in Sahiwal, Okara, Jhang, T.T. Singh, D.G. Khan, Chakwal, Jhelum, Narowal and P.D. Khan (PKR 700 million), in order to improve enrolment in line with the growing population.
I. Sub-Campus of GC University Faisalabad at T.T. Singh
II. Establishment of University of Jhang
III. Establishment of University of Sahiwal and Okara
IV. Sub Campus of BZU at Vehari
V. Government College Women University Sialkot
VI. Sub Campus of UET Taxila Pind Dadan Khan Jhelum
VII. Engineering College of University of Sargodha
VIII. Sub-Campus of Govt. College University, Faisalabad
IX. Establishment of University of Gujrat
X. UOG Sub Campus Narowal
XI. Sub Campus Govt College University at Kala Shah Kaku
XII. Sub Campus of King Edward Narowal-Sheikhupura
XIII. Khawaja Farid University of Engineering & IT at Rahim Yar Khan (PKR.587 million)
XIV. Information Technology University (ITU) at Lahore.
XV. Cadet College Khanpur, Okara and Fort Manro has been established.
Energy Projects
• The 1,223 MW power project at Balloki, Kasur has been completed. Groundbreaking was performed by Nawaz Sharif on 11 November 2015. The project will utilize regasified liquefied natural gas (RLNG) for fuel, with diesel as a backup. It is currently producing 380 MW of its 1,223 MW capacity.
• The 1,230 MW Haveli Bahadur Shah project in Jhang, fueled by environment-friendly liquefied natural gas (LNG), was inaugurated by Nawaz Sharif in 2017. The project is environmentally friendly, with minimal impact on climate change due to the use of an efficient technology that guarantees plant productivity and regulation at 62.44 per cent on LNG. The project will add 760 megawatts to the national grid.
• The first plant of the Sahiwal Coal Power Project was inaugurated in 2017 by Nawaz Sharif. It produces 1,320 megawatts of energy. The project has been completed and is functional using imported coal, in order to change the energy mix toward inexpensive power generation.
• The 1,180 MW RLNG-fuelled Bhikki Power Plant, financed by the Punjab government, is the first project to be commissioned, with 717 MW added to the national grid in the first phase. The project took 18 months to become functional, which may be a world record for a mega project of this magnitude, and was built at half the cost of similar projects.
• Quaid-e-Azam Solar Park in Bahawalpur is the first energy project under CPEC, producing 400 MW out of a planned 1000 MW. With some 400,000 solar panels spread over 200 hectares of flat desert in the Cholistan Desert, Punjab, it is the largest photovoltaic power station in the world.
• The provincial government of Punjab and a Chinese company have signed an agreement to install an LNG-fueled power plant in Jhang, named the Punjab power plant. The project is scheduled to be completed in 26 months and will start generating 810 MW within 14 months.
• Huaneng Shandong Power, a subsidiary of China Huaneng Group signed an agreement with the Punjab government to build the Rahim Yar Khan plant under China Pakistan Economic Corridor. 1,320MW Rahim Yar Khan coal power project is under feasibility study.
Kissan package
Small farmers are to be provided interest-free loans of Rs80 billion throughout the province under the Kissan Package, from which 600,000 farmers will benefit. It is the first project of its kind in the country and is expected to boost agricultural production in the region. Kissan Cards are issued to farmers benefiting from various government programs to consolidate recipients' information (demographics, contact details, crops and landholding patterns, livestock, irrigation systems and mechanization) for the transparent and efficient implementation of these interventions.
Transport Projects
Following rapid growth in population and density, the provincial government of Punjab has been among the pioneers in introducing the Orange Line Metro Train in the city. The 27-kilometre Orange Line Metro Train project is the country's first mass rapid transit train project, with an estimated cost of over $1.65 billion, and will connect Raiwind, Multan Road, Mcleod Road, Lahore Junction Railway Station and the Grand Trunk Road.
The country's first bus rapid transit system was introduced by the government of Punjab in 2013 with the Lahore Green Line BRT. It has 27 stations along a 27 km corridor that stretches from Shahdara to Gajumata, and total ridership in 2017 hit 54 million. The Multan metro project includes 21 bus stations spread over an 18.5 km route and facilitates 97,000 commuters daily. A 23-kilometre corridor of the Rawalpindi Metrobus, including elevated and underground sections, has been facilitating thousands of commuters from the twin cities. Almost 200 air-conditioned Lahore feeder buses feed the metro bus service, extending the service to virtually the entire city and covering 130 km on 14 routes. Up to 100 buses run on 11 feeder routes of the Multan Metro Bus System with an e-ticketing system, and the same service is underway from Bahawalpur to Lodhran.
Environment
The government has introduced environment-friendly zig-zag firing brick kiln technology; its design manual is the first in South Asia to incorporate the practical knowledge and engineering design of the kilns. Most brick kilns located in or around Pakistan's cities emit thick black smoke containing several air pollutants (carbon monoxide, carbon dioxide, sulphur dioxide, nitrogen oxides) and fine particulate matter, which pose serious health hazards, causing human illness and destroying animal and plant life in surrounding communities.
References
Shehbaz Sharif
Government of Punjab, Pakistan
Chief Ministerships of Pakistani provinces
History of Punjab, Pakistan (1947–present)
1990s in Pakistan
2000s in Pakistan
2010s in Pakistan |
2907075 | https://en.wikipedia.org/wiki/Spoofed%20URL | Spoofed URL | A spoofed URL is the address of a website that poses as another website. Spoofing sometimes exploits bugs in web browser technology to mount a malicious attack; such attacks are most effective against computers that lack recent security patches. Other spoofed URLs are designed purely as parody.
During such an attack, a computer user innocently visits a website and sees a familiar URL in the address bar such as http://www.wikipedia.org but is, in reality, sending information to an entirely different location that would typically be monitored by an information thief. When sensitive information is requested by a fraudulent website, it is called phishing.
The user is typically enticed to the false website from an email or a hyperlink from another website.
In another variation, a website may look like the original, but is in fact a parody of it. These are mostly harmless, and are more noticeably different from the original, as they usually do not exploit bugs in web browser technology.
Redirects can also take place through a hosts file, mapping the hostname of a legitimate site to a different IP address, that of the spoofed site.
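As a hypothetical illustration (the address and domain below are documentation placeholders, not real entries), a single added hosts-file line is enough to point a legitimate hostname at an attacker's server while the browser still displays the familiar URL:

    # Entry appended to the hosts file (/etc/hosts on Unix-like systems,
    # C:\Windows\System32\drivers\etc\hosts on Windows).
    # The machine will now resolve the bank's hostname to the attacker's address.
    203.0.113.5    www.example-bank.com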
Cyber security
Spoofing is the act of deception or hoaxing. A URL is the address of a resource (such as a document or website) on the Internet; it consists of a communications protocol followed by the name or address of a computer on the network, and often includes additional locating information (such as directory and file names). Put simply, a spoofed URL is a web address that appears to belong to an original site while actually leading elsewhere. To prevent users from falling victim to the prevalent scams built on spoofed URLs, major software companies have published techniques for detecting and preventing them.
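That structure can be seen by parsing a URL programmatically. A minimal sketch using Python's standard urllib.parse module (the URL shown is an arbitrary example):

    from urllib.parse import urlparse

    # Split a URL into the components described above: the communications
    # protocol (scheme), the computer on the network (netloc), and the
    # additional locating information (path).
    parts = urlparse("https://www.example.com/docs/page.html")
    print(parts.scheme)   # https
    print(parts.netloc)   # www.example.com
    print(parts.path)     # /docs/page.html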
Detection
In order to prevent criminals from accessing personal information, such as credit card details, bank account and routing numbers, telephone numbers, and home addresses, it is important to understand how spoofed URLs can be detected. First, verify the name of the site on its digital certificate, presented over SSL/TLS. Always try to identify the actual URL of the web page you are on, and make sure you can see the full URL of any hyperlink so that you can examine the address. Some characters commonly found in spoofed URLs are %00, %01, and @, and sometimes a spoofed URL differs from the genuine one by only a single letter or number. In addition, set your Internet security level to high to help protect your computer from attacks by spoofed sites. In general, only enter personal information on a website whose name has been verified on the digital certificate, and if you have any concern about the confidentiality of a website, leave the page immediately.
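A minimal sketch of the checks just described, in Python; the token list and the expected hostname are illustrative assumptions, and a real defence would also verify the site's TLS certificate:

    from urllib.parse import urlparse

    SUSPICIOUS_TOKENS = ("%00", "%01", "@")  # characters commonly found in spoofed URLs

    def check_url(url, expected_host):
        """Return True if the URL passes these basic checks (not a complete defence)."""
        # Reject URLs containing characters often used to disguise the real destination.
        if any(token in url for token in SUSPICIOUS_TOKENS):
            return False
        # Verify the hostname is exactly the one expected, not a lookalike.
        return urlparse(url).hostname == expected_host

    print(check_url("https://www.example.com/login", "www.example.com"))         # True
    print(check_url("https://www.example.com@203.0.113.5/", "www.example.com"))  # False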
Prevention
Spoofed URLs, a universal identifying feature of phishing scams, pose a serious threat to end-users and commercial institutions. Email remains the favorite vehicle for such scams, mainly because of its widespread use combined with how easily it can be spoofed. Several approaches, both generic and specialized, have been proposed to address this problem, but phishing techniques, growing in ingenuity as well as sophistication, render many of these solutions weak. Numerous tips have been published to help users identify a spoof and avoid future victimization. The most common are: using authentication based on key exchange between the machines on the network; using an access control list to deny private IP addresses on the downstream interface; filtering both inbound and outbound traffic; configuring routers and switches, where they support such configuration, to reject packets originating from outside the local network that claim to originate from within (as sketched below); and enabling encrypted sessions on the router so that trusted hosts outside the network can communicate securely with local hosts. Ultimately, protection comes down to the individual user: keeping up with new spoofing techniques and scams makes it much easier to recognize one and, most importantly, to keep information secure and personal.
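As an illustration of the ingress-filtering rule mentioned above (rejecting packets from outside the local network that claim to originate from within), here is a rough Python sketch; the address range and the interface flag are assumptions made for the example:

    import ipaddress

    LOCAL_NET = ipaddress.ip_network("192.168.0.0/16")  # assumed local address range

    def should_reject(src_ip, arrived_on_external_interface):
        """Drop packets arriving from outside that claim an inside source address."""
        claims_internal = ipaddress.ip_address(src_ip) in LOCAL_NET
        return arrived_on_external_interface and claims_internal

    # A packet from the Internet claiming a local source address is spoofed:
    print(should_reject("192.168.1.7", True))   # True  (reject)
    print(should_reject("198.51.100.9", True))  # False (genuinely external source)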
Susceptible targets
PayPal, an e-commerce business that allows money transactions to be made through the Internet, is a common target for spoofed URLs. Forgeries of the legitimate PayPal website allow attackers to gain personal and financial information and thus steal money through fraud. Along with spoofed emails, which typically feature generic greetings, misspellings, and a false sense of urgency, spoofed URLs are an easy way for attackers to violate PayPal users' privacy. For example, www.paypalsecure.com includes the name, but is a spoofed URL designed to deceive. Always log into PayPal through a new browser window and never log in through email. If you receive a suspected spoofed URL, forward the entire email to [email protected] to help prevent the URL from tricking other PayPal users.
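The www.paypalsecure.com example can be caught with an exact-domain check rather than a substring match. A sketch in Python; this is illustrative only, since the genuine service uses more hostnames than the two rules shown here:

    from urllib.parse import urlparse

    def is_genuine_paypal(url):
        """True only for paypal.com itself or its subdomains, never for lookalikes."""
        host = urlparse(url).hostname or ""
        return host == "paypal.com" or host.endswith(".paypal.com")

    print(is_genuine_paypal("https://www.paypal.com/signin"))        # True
    print(is_genuine_paypal("https://www.paypalsecure.com/signin"))  # False: contains
    # the name "paypal" but is not the PayPal domain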
Common crimes
A major crime associated with spoofed URLs is identity theft. The thief creates a website very similar in appearance to that of a popular site, and when a user accesses the spoofed URL, they can inadvertently hand over their credit card and personal details. Spoofed URLs often advertise "too good to be true" prices to lure shoppers looking for a good deal. Crimes like these happen quite often, most frequently during the festive holidays and other heavy online shopping periods of the year.
Another crime associated with spoofed URLs is setting up fake anti-malware software. An example is ransomware, often disguised as anti-malware software, which locks up important files the computer needs to run and forces the user to pay a ransom to get the files back. If the user refuses to pay within a certain period, the ransomware deletes the files, essentially making the computer unusable. Ads for these programs usually appear on popular websites, such as dating sites or social media sites like Facebook and Twitter, and can also arrive as attachments to emails.
Phishing scams are also another major way that users can get tricked into scams (see below).
Phishing
Phishing is a scam by which an e-mail user is duped into revealing personal or confidential information that the scammer can use illicitly: fraudsters send email to individuals, falsely asserting to be a reputable legal business, hoping to harvest private information for identity theft. Phishing is performed through emails containing a spoofed URL that links to the fraudulent website, so it is crucial not to rely on the address in the "from" field alone. Computer users should also look out for spelling mistakes within a website's URL, another common sign of a phishing email. The website whose URL appears in the e-mail asks individuals to enter personal information so the "business" can update it in its system; this information often includes passwords, credit card numbers, social security numbers, and bank account numbers. In turn, the email recipients hand these fake businesses information the real businesses already have.
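Spelling mistakes and single-character substitutions in a hostname can also be caught mechanically. A rough sketch using Python's standard difflib module; the allow-list and the similarity threshold are assumptions chosen for the example:

    import difflib

    TRUSTED_HOSTS = ["paypal.com", "example.com"]  # illustrative allow-list

    def likely_typosquat(host):
        """Return the trusted host that `host` closely imitates, if any."""
        for trusted in TRUSTED_HOSTS:
            # A ratio just below 1.0 indicates a near-miss such as a single
            # substituted letter or digit ("paypa1.com" vs "paypal.com").
            ratio = difflib.SequenceMatcher(None, host, trusted).ratio()
            if 0.8 <= ratio < 1.0:
                return trusted
        return None

    print(likely_typosquat("paypa1.com"))  # paypal.com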
See also
Computer insecurity
Hosts File
IDN homograph attack
Internet fraud prevention
Social engineering (computer security)
Spoofing attack
References
URL
Web security exploits |
4516078 | https://en.wikipedia.org/wiki/Pat%20Howell | Pat Howell | Patrick Gerrard Howell (born March 12, 1957) is a former American college and professional football player who was an offensive lineman in the National Football League (NFL) for seven seasons during the late 1970s and 1980s. Howell played college football at the University of Southern California, and was recognized as an All-American. A second-round pick in the 1979 NFL Draft, he played professionally for the Atlanta Falcons and Houston Oilers of the NFL.
Early years
Howell was born in Fresno, California. He was a Parade magazine high school All-American for the Fresno High School football team in 1974, and the Central California Valley discus throw champion in track and field.
College career
Howell attended the University of Southern California, where he played for the USC Trojans football team. As a senior in 1978, he received consensus first-team All-American honors.
Professional career
The Houston Oilers chose Howell in the second round (forty-ninth pick overall) of the 1979 NFL Draft; he played for the Oilers before joining the Atlanta Falcons in 1983.
Football family
His son, Nick Howell, was an offensive lineman for the USC Trojans, after playing football at Bullard High School in Fresno, California.
1957 births
Living people
All-American college football players
American football offensive linemen
Atlanta Falcons players
Houston Oilers players
Sportspeople from Fresno, California
USC Trojans football players |
53017 | https://en.wikipedia.org/wiki/Special%20effect | Special effect | Special effects (often abbreviated as SFX, SPFX, F/X or simply FX) are illusions or visual tricks used in the theatre, film, television, video game, and simulator industries to simulate the imagined events in a story or virtual world.
Special effects are traditionally divided into the categories of mechanical effects and optical effects. With the emergence of digital film-making a distinction between special effects and visual effects has grown, with the latter referring to digital post-production and optical effects, while "special effects" refers to mechanical effects.
Mechanical effects (also called practical or physical effects) are usually accomplished during the live-action shooting. This includes the use of mechanized props, scenery, scale models, animatronics, pyrotechnics and atmospheric effects: creating physical wind, rain, fog, snow, clouds, making a car appear to drive by itself and blowing up a building, etc. Mechanical effects are also often incorporated into set design and makeup. For example, prosthetic makeup can be used to make an actor look like a non-human creature.
Optical effects (also called photographic effects) are techniques in which images or film frames are created photographically, either "in-camera" using multiple exposure, mattes or the Schüfftan process or in post-production using an optical printer. An optical effect might be used to place actors or sets against a different background.
Since the 1990s, computer-generated imagery (CGI) has come to the forefront of special effects technologies. It gives filmmakers greater control, and allows many effects to be accomplished more safely and convincingly and—as technology improves—at lower costs. As a result, many optical and mechanical effects techniques have been superseded by CGI.
Developmental history
Early development
In 1857, Oscar Rejlander created the world's first "special effects" image by combining different sections of 32 negatives into a single image, making a montaged combination print. In 1895, Alfred Clark created what is commonly accepted as the first-ever motion picture special effect. While filming a reenactment of the beheading of Mary, Queen of Scots, Clark instructed an actor to step up to the block in Mary's costume. As the executioner brought the axe above his head, Clark stopped the camera, had all of the actors freeze, and had the person playing Mary step off the set. He placed a Mary dummy in the actor's place, restarted filming, and allowed the executioner to bring the axe down, severing the dummy's head. Techniques like these would dominate the production of special effects for a century.
This was not only the first use of trickery in cinema; it was also the first type of photographic trickery possible only in a motion picture, referred to as the "stop trick". Georges Méliès, an early motion picture pioneer, accidentally discovered the same "stop trick." According to Méliès, his camera jammed while filming a street scene in Paris. When he screened the film, he found that the "stop trick" had caused a truck to turn into a hearse, pedestrians to change direction, and men to turn into women. Méliès, the stage manager at the Theatre Robert-Houdin, was inspired to develop a series of more than 500 short films in the years up to 1914, in the process developing or inventing such techniques as multiple exposures, time-lapse photography, dissolves, and hand-painted color. Because of his ability to seemingly manipulate and transform reality with the cinematograph, the prolific Méliès is sometimes referred to as the "Cinemagician." His most famous film, Le Voyage dans la lune (1902), a whimsical parody of Jules Verne's From the Earth to the Moon, featured a combination of live action and animation, and also incorporated extensive miniature and matte painting work.
From 1910 to 1920, the main innovations in special effects were Norman Dawn's improvements to the matte shot. With the original matte shot, pieces of cardboard were placed to block part of the film's exposure, and the blocked area would be exposed later. Dawn combined this technique with the "glass shot": rather than using cardboard to block certain areas of the film exposure, he simply painted certain areas black to prevent any light from exposing the film. From the partially exposed film, a single frame was then projected onto an easel, where the matte was drawn. Because the matte was created from an image taken directly from the film, it became far easier to paint an image with proper respect to scale and perspective (the main flaw of the glass shot). Dawn's technique became the textbook method for matte shots due to the natural images it created.
During the 1920s and 1930s, special effects techniques were improved and refined by the motion picture industry. Many techniques—such as the Schüfftan process—were modifications of illusions from the theater (such as pepper's ghost) and still photography (such as double exposure and matte compositing). Rear projection was a refinement of the use of painted backgrounds in the theater, substituting moving pictures to create moving backgrounds. Lifecasting of faces was imported from traditional maskmaking. Along with makeup advances, fantastic masks could be created which fit the actor perfectly. As material science advanced, horror film maskmaking followed closely.
Many studios established in-house "special effects" departments, which were responsible for nearly all optical and mechanical aspects of motion-picture trickery. Also, the challenge of simulating spectacle in motion encouraged the development of the use of miniatures. Animation, creating the illusion of motion, was accomplished with drawings (most notably by Winsor McCay in Gertie the Dinosaur) and with three-dimensional models (most notably by Willis O'Brien in The Lost World and King Kong). Naval battles could be depicted with models in studio. Tanks and airplanes could be flown (and crashed) without risk of life and limb. Most impressively, miniatures and matte paintings could be used to depict worlds that never existed. Fritz Lang's film Metropolis was an early special effects spectacular, with innovative use of miniatures, matte paintings, the Schüfftan process, and complex compositing.
An important innovation in special-effects photography was the development of the optical printer. Essentially, an optical printer is a projector aiming into a camera lens, and it was developed to make copies of films for distribution. Until Linwood G. Dunn refined the design and use of the optical printer, effects shots were accomplished as in-camera effects. Dunn demonstrated that it could be used to combine images in novel ways and create new illusions. One early showcase for Dunn was Orson Welles' Citizen Kane, where such locations as Xanadu (and some of Gregg Toland's famous 'deep focus' shots) were essentially created by Dunn's optical printer.
Color era
The development of color photography required greater refinement of effects techniques. Color enabled the development of such travelling matte techniques as bluescreen and the sodium vapour process. Many films became landmarks in special-effects accomplishments: Forbidden Planet used matte paintings, animation, and miniature work to create spectacular alien environments. In The Ten Commandments, Paramount's John P. Fulton, A.S.C., multiplied the crowds of extras in the Exodus scenes with careful compositing, depicted the massive constructions of Rameses with models, and split the Red Sea in a still-impressive combination of travelling mattes and water tanks. Ray Harryhausen extended the art of stop-motion animation with his special techniques of compositing to create spectacular fantasy adventures such as Jason and the Argonauts (whose climax, a sword battle with seven animated skeletons, is considered a landmark in special effects).
The science fiction boom
During the 1950s and 1960s numerous new special effects were developed which would dramatically increase the level of realism achievable in science fiction films. Sci-fi special effects milestones in the 1950s included the Godzilla films, The Day the Earth Stood Still (featuring Klaatu), and 3-D films.
The tokusatsu genre of Japanese science fiction film and television, which includes the kaiju sub-genre of monster films, rose to prominence in the 1950s. The special-effects artist Eiji Tsuburaya and the director Ishirō Honda became the driving forces behind the original Godzilla (1954). Taking inspiration from King Kong (1933), Tsuburaya formulated many of the techniques that would become staples of the tokusatsu genre, such as so-called suitmation, the use of a human actor in a costume to play a giant monster, combined with the use of miniatures and scaled-down city sets. Godzilla changed the landscape of Japanese cinema, science fiction, and fantasy, and kickstarted the kaiju boom in Japan known as the "Monster Boom", which remained extremely popular for several decades, with characters such as Godzilla, Gamera and King Ghidorah leading the market. Tokusatsu films, notably Warning from Space (1956), sparked Stanley Kubrick's interest in science fiction films; according to his biographer John Baxter, despite their "clumsy model sequences, the films were often well-photographed in colour ... and their dismal dialogue was delivered in well-designed and well-lit sets."
If one film could be said to have established a new benchmark for special effects, it would be 1968's 2001: A Space Odyssey, directed by Stanley Kubrick, who assembled his own effects team (Douglas Trumbull, Tom Howard, Con Pederson and Wally Veevers) rather than use an in-house effects unit. In this film, the spaceship miniatures were highly detailed and carefully photographed for a realistic depth of field. The shots of spaceships were combined through hand-drawn rotoscoping and careful motion-control work, ensuring that the elements were precisely combined in the camera—a surprising throwback to the silent era, but with spectacular results. Backgrounds of the African vistas in the "Dawn of Man" sequence were combined with soundstage photography via the then-new front projection technique. Scenes set in zero-gravity environments were staged with hidden wires, mirror shots, and large-scale rotating sets. The finale, a voyage through hallucinogenic scenery, was created by Douglas Trumbull using a new technique termed slit-scan.
The 1970s provided two profound changes in the special effects trade. The first was economic: during the industry's recession in the late 1960s and early 1970s, many studios closed down their in-house effects houses. Technicians became freelancers or founded their own effects companies, sometimes specializing on particular techniques (opticals, animation, etc.).
The second was precipitated by the blockbuster success of two science-fiction and fantasy films in 1977. George Lucas's Star Wars ushered in an era of science-fiction films with expensive and impressive special effects. Effects supervisor John Dykstra, A.S.C. and crew developed many improvements in existing effects technology. They created a computer-controlled camera rig called the "Dykstraflex" that allowed precise repetition of camera motion, greatly facilitating travelling-matte compositing. Degradation of film images during compositing was minimized by other innovations: the Dykstraflex used VistaVision cameras that photographed widescreen images horizontally along the film stock, using far more of the film per frame, and thinner-emulsion filmstocks were used in the compositing process. The effects crew assembled by Lucas and Dykstra was dubbed Industrial Light & Magic, and since 1977 has spearheaded many effects innovations.
That same year, Steven Spielberg's film Close Encounters of the Third Kind boasted a finale with impressive special effects by 2001 veteran Douglas Trumbull. In addition to developing his own motion-control system, Trumbull also developed techniques for creating intentional "lens flare" (the shapes created by light reflecting in camera lenses) to provide the film's undefinable shapes of flying saucers.
The success of these films, and others since, has prompted massive studio investment in effects-heavy science-fiction films. This has fueled the establishment of many independent effects houses, a tremendous degree of refinement of existing techniques, and the development of new techniques such as computer-generated imagery (CGI). It has also encouraged within the industry a greater distinction between special effects and visual effects; the latter is used to characterize post-production and optical work, while "special effects" refers more often to on-set and mechanical effects.
Introduction of computer generated imagery (CGI)
The use of computer animation in film dates back to the early 1980s, with the films Tron (1982) and Golgo 13: The Professional (1983). Since the 1990s, a profound innovation in special effects has been the development of computer generated imagery (CGI), which has changed nearly every aspect of motion picture special effects. Digital compositing allows far more control and creative freedom than optical compositing, and does not degrade the image as with analog (optical) processes. Digital imagery has enabled technicians to create detailed models, matte "paintings," and even fully realized characters with the malleability of computer software.
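At its core, digital compositing is per-pixel arithmetic. A minimal sketch of the "over" operator from Porter and Duff's 1984 paper (listed in the references below), assuming colour components normalized to the 0-1 range and straight (non-premultiplied) alpha:

    def over(fg, bg):
        """Composite one RGBA pixel over another (the Porter-Duff "over" operator).

        fg and bg are (r, g, b, a) tuples with components in [0.0, 1.0].
        """
        fa, ba = fg[3], bg[3]
        # Resulting coverage: the foreground plus whatever background shows through.
        oa = fa + ba * (1.0 - fa)
        if oa == 0.0:
            return (0.0, 0.0, 0.0, 0.0)
        # Each colour channel is a coverage-weighted blend of the two inputs.
        rgb = tuple((fg[i] * fa + bg[i] * ba * (1.0 - fa)) / oa for i in range(3))
        return rgb + (oa,)

    # A 50%-opaque red pixel composited over an opaque blue pixel:
    print(over((1.0, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0)))  # (0.5, 0.0, 0.5, 1.0)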
Arguably the biggest and most "spectacular" use of CGI is in the creation of photo-realistic images of science-fiction/fantasy characters, settings and objects. Images can be created in a computer using the techniques of animated cartoons and model animation. The Last Starfighter (1984) used computer generated spaceships instead of physical scale models. In 1993, stop-motion animators working on the realistic dinosaurs of Steven Spielberg's Jurassic Park were retrained in the use of computer input devices. By 1995, films such as Toy Story underscored the fact that the distinction between live-action films and animated films was no longer clear. Other landmark examples include a character made up of broken pieces of a stained-glass window in Young Sherlock Holmes, a shape-shifting character in Willow, a tentacle formed from water in The Abyss, the T-1000 Terminator in Terminator 2: Judgment Day, hordes and armies of robots and fantastic creatures in the Star Wars (prequel) and The Lord of the Rings trilogies, and the planet, Pandora, in Avatar.
Planning and use
Although most visual effects work is completed during post-production, it must be carefully planned and choreographed in pre-production and production. A visual effects supervisor is usually involved with the production from an early stage to work closely with the Director and all related personnel to achieve the desired effects.
Practical effects also require significant pre-planning and co-ordination with performers and production teams. The live nature of the effects can result in situations where resetting due to an error, mistake, or safety concern incurs significant expense, or is impossible due to the destructive nature of the effect.
Live special effects
Live special effects are effects that are used in front of a live audience, such as in theatre, sporting events, concerts and corporate shows. Types of effects that are commonly used include: flying effects, laser lighting, theatrical smoke and fog, CO2 effects, and pyrotechnics. Other atmospheric effects can include flame, confetti, bubbles, and snow.
Mechanical effects
Mechanical effects rely on mechanical engineering to a greater degree. Cars being flipped or hauled over buildings are usually achieved with specialized rigs and gimbals, as in films like Unknown. Usually a team of engineers or a freelance film company provides these services to movie producers. As the action is recorded against a green screen, camera operators, stunt artists or doubles, directors, and the engineers who conceptualize and build these monumental mechanics all collaborate to capture the angles and shots that provide the entertainment audiences enjoy. The footage is then edited and reviewed before final release to the public.
Visual special effects techniques
Bullet hit squibs
Bullet time
Computer-generated imagery (often using Shaders)
Digital compositing
Dolly zoom
In-camera effects
Match moving
Matte (filmmaking) and Matte painting
Video matting
Miniature effects
Morphing
Motion control photography
Optical effects
Optical printing
Practical effects
Prosthetic makeup effects
Rotoscoping
Reverse motion
Stop motion
Go motion
Schüfftan process
Traveling matte
Virtual cinematography
Wire removal
Noted special effects companies
Adobe Systems Incorporated (San Jose, USA)
Animal Logic (Sydney, Australia; Venice, USA)
Bird Studios (London, UK)
BUF Compagnie (Paris, France)
CA Scanline (Munich, Germany)
Cinema Research Corporation, defunct (Hollywood, USA)
Cinesite (London, UK; Hollywood, USA)
Creature Effects, Inc. (Los Angeles, USA)
Digital Domain (Playa Vista, USA)
Double Negative (VFX) (London, UK)
DreamWorks (Los Angeles, USA)
Flash Film Works (Los Angeles, USA)
Framestore (London, UK)
Hydraulx (Santa Monica, USA)
Image Engine (Vancouver, Canada)
Industrial Light & Magic (San Francisco, USA), founded by George Lucas
Intelligent Creatures (Toronto, Canada)
Jim Henson's Creature Shop, (Los Angeles; Hollywood; Camden Town, London)
Legacy Effects, (Los Angeles, USA)
Look Effects, (Culver City, USA)
M5 Industries (San Francisco, USA) home of MythBusters
Mac Guff (Los Angeles, USA; Paris, France)
Machine Shop (London, UK)
Makuta VFX (Universal City, USA; Hyderabad, India)
Matte World Digital (Novato, USA)
Method Studios (Los Angeles, USA; New York, USA; Vancouver, Canada)
The Mill (London, UK; New York, USA; Los Angeles, USA)
Modus FX (Montreal, Canada)
Moving Picture Company (Soho, London, UK)
Pixomondo (Frankfurt; Munich; Stuttgart; Los Angeles; Beijing; Toronto; Baton Rouge, LA)
Rainmaker Digital Effects (Vancouver, Canada)
Rhythm and Hues Studios (Los Angeles, USA)
Rise FX (Berlin, Germany)
Rising Sun Pictures (Adelaide, Australia)
Rodeo FX (Montreal; Munich; Los Angeles)
Snowmasters (Lexington, AL, USA)
Sony Pictures Imageworks (Culver City, USA)
Strictly FX, live special effects company
Surreal World (Melbourne, Australia)
Super FX, Special Effects Company (Italy)
Tippett Studio (Berkeley, USA)
Tsuburaya Productions (Hachimanyama, Setagaya, Tokyo)
Vision Crew Unlimited, defunct (Los Angeles, USA)
Weta Digital, (Wellington, New Zealand)
Zoic Studios (Culver City, USA)
ZFX Inc a flying effects company (Louisville, USA; Utrecht, the Netherlands)
Noted special effects directors
Eiji Tsuburaya, the "Father of Tokusatsu"
Tom Howard
John P. Fulton
Douglas Trumbull
Ken Pepiot
Stan Winston
John Dykstra
Koichi Kawakita
Teruyoshi Nakano
Yonesaburo Tsukiji
Yoshiyuki Kuroda
Yoshihiro Nishimura
Akira Watanabe
Notes
References
Cinefex magazine
American Cinematographer magazine
Richard Rickitt: Special Effects: The History and Technique, Billboard Books; 2nd edition, 2007;
Movie Magic: The History of Special Effects in the Cinema by John Brosnan (1974)
Techniques of Special Effects Cinematography by Raymond Fielding (For many years, the standard technical reference. Current edition 1985)
Special Effects: Titanic and Beyond The online companion site to the NOVA documentary (See especially the timeline and glossary)
T. Porter and T. Duff, "Compositing Digital Images", Proceedings of SIGGRAPH '84, 18 (1984).
The Art and Science of Digital Compositing ()
Mark Cotta Vaz; Craig Barron: The Invisible Art: The Legends of Movie Matte Painting, Chronicle Books, 2004;
Larry Nile Baker, A History of Special Effects Cinematography in the United States, 1895–1914'', Larry Nile Baker, 1969.
External links
Video (03:04) – Evolution of Special Effects since 1878.
How to Add Special Effects to Your Videos.
Chinese Film Classics: Special effects in early Chinese cinema: clips and discussion of special effects in Chinese films from before 1949, on scholarly website chinesefilmclassics.org
Audiovisual introductions in 1857
Animation techniques
Stagecraft
Articles containing video clips |
486709 | https://en.wikipedia.org/wiki/Browser%20wars | Browser wars | A browser war is competition for dominance in the usage share of web browsers. The "First Browser War" during the late 1990s pitted Microsoft's Internet Explorer against Netscape's Navigator. Browser wars continued with the decline of Internet Explorer's market share and the popularity of other browsers including Firefox, Google Chrome (and other Chromium-based browsers), Safari, Microsoft Edge and Opera.
With the introduction of HTML5 and CSS3, a new generation of browser wars began, this time with extensive client-side scripting being added to the World Wide Web and with more widespread use of smartphones and other mobile devices for browsing the web. These newcomers have ensured that browser battles continue among enthusiasts, while the average web user is less affected.
Background
The World Wide Web is an Internet-based hypertext system invented in the late 1980s and early 1990s by Tim Berners-Lee. He wrote the first web browser WorldWideWeb, later renamed Nexus, and released it for the NeXTstep platform in 1991.
By the end of 1992 other browsers had appeared, many of them based on the libwww library. These included Unix browsers such as Line Mode Browser, ViolaWWW, Erwise, and MidasWWW, and MacWWW/Samba for the Mac. Even though these browsers tended to be simple HTML viewers, relying on external helper applications to view multimedia content, they provided choice to users both in browsers and platforms.
Mosaic Wars
Further browsers were released in 1993, including Cello, Arena, Lynx, tkWWW, and Mosaic. The most influential of these was Mosaic, a multiplatform browser developed at National Center for Supercomputing Applications (NCSA). By October 1994, Mosaic was "well on its way to becoming the world's standard interface", according to Gary Wolfe of Wired.
Several companies licensed Mosaic to create their own commercial browsers, such as AirMosaic, Quarterdeck Mosaic, and Spyglass Mosaic. One of the Mosaic developers, Marc Andreessen, co-founded the Mosaic Communications Corporation and created a new web browser named Mosaic Netscape.
To resolve legal issues with NCSA, the company was renamed Netscape Communications Corporation and the browser Netscape Navigator. The Netscape browser improved on Mosaic's usability and reliability and was able to display pages as they loaded. By 1995, helped by the fact that it was free for non-commercial use, the browser dominated the emerging World Wide Web.
Other browsers launched during 1994 included IBM Web Explorer, Navipress, SlipKnot, MacWeb, and Browse.
In 1995, Netscape faced new competition from OmniWeb, WebRouser, UdiWWW, and Microsoft's Internet Explorer 1.0, but continued to dominate the market.
First Browser War (1995–2001)
By mid-1995, the World Wide Web had received a great deal of attention in popular culture and the mass media. Netscape Navigator was the most widely used web browser and Microsoft had licensed Mosaic to create Internet Explorer 1.0, which it had released as part of the Microsoft Windows 95 Plus! pack in August.
Internet Explorer 2.0 was released as a free download three months later. Unlike Netscape Navigator, it was available free of charge to all Windows users, including commercial companies. Other companies later followed suit and released their browsers free of charge. Both Netscape Navigator and competitor products such as InternetWorks, Quarterdeck Browser, InterAp, and WinTapestry were bundled with other applications into full Internet suites. New versions of Internet Explorer and Netscape (branded as Netscape Communicator) were released at a rapid pace over the following few years.
Development was rapid and new features were routinely added, including Netscape's JavaScript (subsequently replicated by Microsoft as JScript) and proprietary HTML tags such as <blink> (Navigator) and <marquee> (Internet Explorer).
Internet Explorer began to approach feature parity with Netscape with version 3.0 (1996), which offered scripting support and the market's first commercial Cascading Style Sheets (CSS) implementation.
In October 1997, Internet Explorer 4.0 was released. The release party in San Francisco featured a ten-foot-tall letter "e" logo. Netscape employees showing up to work the following morning found the logo on their front lawn, with a sign attached that read "From the IE team ...We Love You." The Netscape employees promptly knocked it over and set a giant figure of their Mozilla dinosaur mascot atop it, holding a sign reading "Netscape 72, Microsoft 18" representing the market distribution.
Internet Explorer 4 turned the tide of the browser war. It was integrated into Microsoft Windows, which gave it a large installation base.
During these releases, it was common for web designers to display 'best viewed in Netscape' or 'best viewed in Internet Explorer' logos. These images often identified a specific browser version and were commonly linked to a source from which the stated browser could be downloaded. These logos generally recognized the divergence between the standards supported by the browsers and signified which browser was used for testing the pages. In response, supporters of the principle that web sites should be compliant with World Wide Web Consortium standards and hence viewable with any browser started the "Viewable With Any Browser" campaign, which employed its own logo similar to the partisan ones. Most mainstream web sites, however, specified one of Netscape or IE as their preferred browser while making some attempt to support minimal functionality on the other.
While Netscape began with about 80% market share and a good deal of public goodwill, as a relatively small company deriving the great bulk of its income from what was essentially a single product (Navigator and its derivatives), it was financially vulnerable; its total revenue never exceeded the interest income generated by Microsoft's readily available cash reserves. Microsoft's resources allowed it to make Internet Explorer available without charge, as the revenues from Windows funded its development and marketing. As a result, Internet Explorer was provided free to all Windows and Macintosh users, unlike Netscape, which was free for home and educational use but required a paid license for business use. As Jim Barksdale, President and CEO of Netscape Communications, put it: "Very few times in warfare have smaller forces overtaken bigger forces...".
Microsoft bundled Internet Explorer with every copy of Windows, which had an over 90% share of the desktop operating system market, allowing the company to obtain market share more easily than Netscape, as customers already had Internet Explorer installed as the default browser. In this time period, many new computer purchasers had never extensively used a web browser before. Consequently, the buyer did not have anything else to compare with and little motivation to consider alternatives; the set of abilities they had gained with access to the Internet and the World Wide Web made any difference in browser features or ergonomics pale in comparison.
During the United States Microsoft antitrust case in 1998, government witness and Intel vice president Steven McGeady testified that a senior executive at Microsoft told him in 1995 of his company's intention to "cut off Netscape's air supply", although a Microsoft attorney rejected McGeady's testimony as not credible. That same year, Netscape, the company, was acquired by America Online for US$4.2 billion. Internet Explorer became the new dominant browser, attaining a peak of about 96% of the web browser usage share during 2001.
The first browser war ended with Internet Explorer having no remaining serious competition for its market share. This also brought an end to rapid innovation in web browsers; between the release of version 6.0 in 2001 and 2006 there was only one new version of Internet Explorer, version 6.0 Service Pack 1, which was developed as part of Windows XP Service Pack 1 and also integrated into Windows Server 2003. Further enhancements were made to Internet Explorer in Windows XP Service Pack 2 (released in 2004), including a pop-up blocker and stronger default security settings regarding the installation of ActiveX controls.
Second Browser War (2004–2017)
At the start of Netscape Navigator's decline, Netscape open-sourced their browser code, and later entrusted it to the newly formed non-profit Mozilla Foundation—a primarily community-driven project to create a successor to Netscape. Development continued for several years with little widespread adoption until a stripped-down browser-only version of the full suite, which included new features such as a separate search bar (which had previously only appeared in the Opera browser), was created. The browser-only version was initially named Phoenix, but because of trademark issues that name was changed, first to Firebird, then to Firefox. This browser became the focus of the Mozilla Foundation's development efforts and Mozilla Firefox 1.0 was released on November 9, 2004. It then continued to gain an increasing share of the browser market until a peak in 2010.
In 2003, Microsoft announced that Internet Explorer 6 Service Pack 1 would be the last standalone version of its browser. Future enhancements would be dependent on Windows Vista, which would include new tools such as the WPF and XAML to enable developers to build web applications.
In response, in April 2004, the Mozilla Foundation and Opera Software joined efforts to develop new open-technology standards which add more capability while remaining backward-compatible with existing technologies. The result of this collaboration was the WHATWG, a working group devoted to the fast creation of new standard definitions that would be submitted to the W3C for approval.
The growing number of device/browser combinations in use, legally-mandated web accessibility, as well as the expansion of expected web functionality to essentially require DOM and scripting abilities, including AJAX, made web standards of increasing importance during this era. Instead of touting their proprietary extensions, browser developers began to market their software based on how closely it adhered to the behavior as specified by the standard.
Updated browsers and rise of mobile browsers
On February 15, 2005, Microsoft announced that Internet Explorer 7 would be available for Windows XP SP2 and later versions of Windows by mid-2005. The announcement introduced the new version of the browser as a major upgrade over Internet Explorer 6 SP1.
Opera had been a long-time small player in the browser wars, known for introducing innovative features such as tabbed browsing and mouse gestures, as well as being lightweight but feature-rich. The software, however, was commercial, which hampered its adoption compared to its free rivals until 2005, when the browser became freeware. On June 20, 2006, Opera Software released Opera 9 including an integrated source viewer, a BitTorrent client implementation, and widgets. It was the first Windows browser to pass the Acid2 test. Opera Mini, a mobile browser, has significant mobile market share. Editions of Opera are also available for the Nintendo DS and the Wii.
Microsoft released Internet Explorer 7 on October 18, 2006. It included tabbed browsing, a search bar, a phishing filter, and improved support for web standards (including full support for PNG), all features already long familiar to Opera and Firefox users. Microsoft distributed Internet Explorer 7 as a high-priority update through Windows Update, initially only to users whose copies of Windows passed Windows Genuine Advantage (WGA) validation. Market share analysis showed only a slow uptake of Internet Explorer 7, and Microsoft dropped the WGA requirement, making Internet Explorer 7 available to all Windows users in October 2007. Over the following two years, Microsoft worked on Internet Explorer 8. On December 19, 2007, the company announced that an internal build of that version had passed the Acid2 CSS test in "IE8 standards mode", the last of the major browsers to do so. Internet Explorer 8 was released on March 19, 2009. New features included accelerators, improved privacy protection, a compatibility mode for pages designed for older browsers, and improved support for various web standards. It was the last version of Internet Explorer to be released for Windows XP. Internet Explorer 8 scored 20/100 on the Acid3 test, much worse than all major competitors at the time.
On October 24, 2006, Mozilla released Mozilla Firefox 2.0. It included the ability to reopen recently closed tabs, a session restore feature to resume work where it had been left after a crash, a phishing filter, and a spell-checker for text fields. Mozilla released Firefox 3.0 on June 17, 2008, with performance improvements and other new features. Firefox 3.5 followed on June 30, 2009, with further performance improvements, native integration of audio and video, and more privacy features.
Apple created forks of the open-source KHTML and KJS layout and JavaScript engines from the KDE Konqueror browser in 2002, explaining that they provided a basis for easier development than other technologies by virtue of being small (fewer than 140,000 lines of code), cleanly designed, and standards-compliant. The resulting layout engine became known as WebKit, and it was incorporated into the Safari browser that first shipped with Mac OS X v10.3. On June 13, 2003, Microsoft said it was discontinuing Internet Explorer on the Mac platform, and on June 6, 2007, Apple released a beta version of Safari for Microsoft Windows. On April 29, 2010, Steve Jobs published an open letter, "Thoughts on Flash", on the place Adobe Flash would hold on Apple's iOS devices and in its web browsers. Web developers were left to update their web sites to be mobile-friendly, and while many disagreed with Jobs's assessment of Flash, its notably poor performance on Android devices would soon bear his point out. HTML4 and CSS2 were the standard in most browsers in 2006, but by 2010 new features from the HTML5 and CSS3 specifications were quickly making their mark, especially in the emerging mobile browser market, where new ways of animating and rendering for various screen sizes became the norm. Accessibility also became a key concern for the mobile web.
On December 28, 2007, Netscape announced that support for its Mozilla-derived Netscape Navigator would be discontinued on February 1, 2008, suggesting its users migrate to Mozilla Firefox.
However, on January 28, 2008, Netscape announced that support would be extended to March 1, 2008, and mentioned Flock alongside Firefox as alternatives to its users.
Chrome's Entry
Google released the Chrome browser for Microsoft Windows on December 11, 2008, using the same WebKit rendering engine as Safari and a faster JavaScript engine called V8. Shortly after, an open-sourced version for the Windows, Mac OS X, and Linux platforms was released under the name Chromium. According to Net Applications, Chrome had gained a 3.6% usage share by October 2009. After the release of the beta for Mac OS X and Linux, the market share had increased rapidly.
During December 2009 and January 2010, StatCounter reported that its statistics indicated that Firefox 3.5 was the most popular browser when counting individual browser versions, passing Internet Explorer 7 and 8 by a small margin. This was the first time a global statistic had reported that a non-Internet Explorer browser version exceeded the top Internet Explorer version in usage share since the fall of Netscape Navigator. However, this feat, which GeekSmack called the "dethroning of Microsoft and its Internet Explorer 7 browser," could largely be attributed to the fact that it came at a time when version 8 was replacing version 7 as the dominant Internet Explorer version; no more than two months later Internet Explorer 8 had established itself as the most popular browser and version. Other major statistics, such as Net Applications, never reported any non-Internet Explorer browser having a higher usage share than Internet Explorer when each version of each browser was counted individually: for example, Firefox 3.5 was reported as the third most popular browser version from December 2009 to February 2010, succeeded by Firefox 3.6 from April 2010, each ahead of Internet Explorer 7 but behind Internet Explorer 6 and 8.
Chrome's dominance, Presto engine deprecation, and HTML5 standardized
On January 21, 2010, Mozilla released Mozilla Firefox 3.6, which allows support for a new type of theme display, 'Personas', which allows users to change Firefox's appearance with a single click. Version 3.6 also improved JavaScript performance, overall browser responsiveness, and startup times.
In October 2010, StatCounter reported that Internet Explorer had for the first time dropped below 50% market share to 49.87% in their figures. Also, StatCounter reported Internet Explorer 8's first drop in usage share in the same month.
Google released Google Chrome 9 on February 3, 2011. New features introduced included support for WebGL, Chrome Instant, and the Chrome Web Store. The company created another seven versions of Chrome that year, finishing with Chrome 16 on December 15, 2011. Google Chrome 17 was released on February 15, 2012. In April 2012, Google browsers (Chrome and Android) became the most used browsers on Wikimedia Foundation sites. By May 21, 2012, StatCounter reported Chrome narrowly overtaking Internet Explorer as the most used browser in the world.
However, troughs and peaks in the market share between Internet Explorer and Chrome meant that Internet Explorer was slightly ahead of Chrome on weekdays up until July 4. At the same time, Net Applications reported Internet Explorer firmly in first place, with Google Chrome almost overtaking Firefox as the second. In 2012, responding to Chrome's popularity, Apple discontinued Safari for Windows.
Microsoft released Internet Explorer 9 on March 14, 2011. It featured a revamped interface, support for the basic SVG feature set, and partial HTML5 video support, among other new features. It only runs on Windows Vista, Windows 7, and Windows Phone 7. The company later released Internet Explorer 10 along with Windows 8 and Windows Phone 8 in 2012, and an update compatible with Windows 7 followed in 2013. This version drops Vista and Phone 7 support. The release preview of Internet Explorer 11 was released on September 17, 2013. It supports the same desktops as its predecessor.
The concept of rapid releases established by Google Chrome prompted Mozilla to do the same for its Firefox browser. On June 21, 2011, Firefox 5.0 was the first rapid release for this browser, finished a mere six weeks after the previous edition. Mozilla created four more whole-number versions throughout the year, finishing with Firefox 9 on December 20, 2011. For those desiring long-term support, Mozilla made an Extended Support Release (ESR) version of Firefox 10 on January 31, 2012. Contrary to the regular version, a Firefox ESR receives regular security updates plus occasional new features and performance updates for approximately one year, after which a 12-week grace period is given before discontinuing that version number. Those who continued to use the rapid releases with an active Internet connection were automatically updated to Firefox 11 on March 15, 2012. By the end of 2011, however, Chrome overtook Firefox to become world's most used browser, and the competition between Chrome and Firefox intensified.
During this era, all major web browsers implemented support for HTML5 video. Supported codecs, however, varied from browser to browser. Current versions of Android, Chrome, and Firefox support Theora, H.264, and the VP8 version of WebM. Older versions of Firefox omitted H.264 due to it being a proprietary codec, but it was made available beginning in version 17 for Android and version 20 for Windows. Internet Explorer and Safari support H.264 exclusively, but the Theora and VP8 codecs can be manually installed on the desktop versions. Given the popularity of WebKit for mobile browsers, Opera Software discontinued its own Presto engine in February 2013. The Opera 12 series of browsers were the last to use Presto with its successors using WebKit instead. In 2015, Microsoft discontinued production of newer versions of Internet Explorer. By this point Chrome overtook all other browsers as the browser with the highest usage share.
Starting in 2015 with the release of Windows 10, Microsoft shifted from Internet Explorer to Microsoft Edge. However, the new browser has failed to capture much popularity as of 2018.
By 2017 usage shares of Opera, Firefox and Internet Explorer fell well below 5% each, while Google Chrome had expanded to over 60% worldwide. In May 2017, Andreas Gal, former Mozilla CTO, publicly stated that Google Chrome won the Second Browser War.
Aftermath
Though Internet Explorer is no longer the default browser as of Windows 10, it continues to operate due to organizations needing it for legacy applications, and – despite its overall decline – has a higher usage share than its successor, Microsoft Edge. Due to Google Chrome's success, in December 2018 Microsoft announced that they would be building a new version of Edge based on Chromium and powered by Google's rendering engine, Blink, rather than their own rendering engine, EdgeHTML. The new Microsoft Edge browser was released on January 15, 2020. Though Firefox showed a slight increase in usage share as of February 2019, it continues to struggle with less than 10% usage share worldwide. By April 2019, worldwide Google Chrome usage share crossed 70% across personal computers and remained over 60% combining all devices. In June 2021, Microsoft permanently discontinued Internet Explorer in favor of Microsoft Edge as their sole browser.
See also
After the Software Wars
Comparison of web browsers
Usage share of web browsers
References
Bibliography
DOJ/Antitrust: U.S. Department of Justice Antitrust Division. Civil Action No. 98-1232 (Antitrust) Complaint, United States of America v. Microsoft Corporation. May 18, 1998. Press release: Justice Department Files Antitrust Suit Against Microsoft for Unlawfully Monopolizing Computer Software Markets
External links
A March 1998 Interview with Marc Andreessen about Microsoft antitrust litigation and browser wars
The Roads and Crossroads of Internet History: Chapter 4. Birth of the World Wide Web by Gregory R. Gromov
Browser Statistics – Month by month comparison spanning from 2002 and onward displaying the usage share of browsers among web developers
Browser Stats – Chuck Upsdell's Browser Statistics
Browser Stats – Net Applications' Browser Statistics
StatCounter Global Stats – tracks the market share of browsers including mobile from over 4 billion monthly page views
Browser war, RIA and future of web development
Browser Wars II: The Saga Continues – an article about the development of the browser wars
Web Browsers' War – 2012 – An article about web browsers' war in 2012
Thomas Haigh, "Protocols for Profit: Web and Email Technologies as Product and Infrastructure" in The Internet & American Business, eds. Ceruzzi & Aspray, MIT Press, 2008– business & technological history of web browsers, online preprint
Browser Market Share – current market share of browsers and their versions, desktop and mobile
Web browsers
Software wars |
1113784 | https://en.wikipedia.org/wiki/Open-design%20movement | Open-design movement | The open-design movement involves the development of physical products, machines and systems through use of publicly shared design information. This includes the making of both free and open-source software (FOSS) as well as open-source hardware. The process is generally facilitated by the Internet and often performed without monetary compensation. The goals and philosophy of the movement are identical to that of the open-source movement, but are implemented for the development of physical products rather than software. Open design is a form of co-creation, where the final product is designed by the users, rather than an external stakeholder such as a private company.
Origin
Sharing of manufacturing information can be traced back to the 18th and 19th centuries. Aggressive patenting put an end to that period of extensive knowledge sharing.
More recently, principles of open design have been related to the free and open-source software movements. In 1997 Eric S. Raymond, Tim O'Reilly and Larry Augustin established "open source" as an alternative expression to "free software", and Bruce Perens published The Open Source Definition. In late 1998, Dr. Sepehr Kiani (who holds a PhD in mechanical engineering from MIT) realized that designers could benefit from open source policies, and in early 1999 he convinced Dr. Ryan Vallance and Dr. Samir Nayfeh of the potential benefits of open design in machine design applications. Together they established the Open Design Foundation (ODF) as a non-profit corporation, and set out to develop an Open Design Definition.
The idea of open design was taken up, either simultaneously or subsequently, by several other groups and individuals. The principles of open design are closely similar to those of open-source hardware design, which emerged in March 1998 when Reinoud Lamberts of the Delft University of Technology proposed on his "Open Design Circuits" website the creation of a hardware design community in the spirit of free software.
Ronen Kadushin coined the title "Open Design" in his 2004 Master's thesis, and the term was later formalized in the 2010 Open Design Manifesto.
Current directions
The open-design movement currently unites two trends. On one hand, people apply their skills and time to projects for the common good, perhaps where funding or commercial interest is lacking, for developing countries or to help spread ecological or cheaper technologies. On the other hand, open design may provide a framework for developing advanced projects and technologies that might be beyond the resources of any single company or country and involve people who, without the copyleft mechanism, might not collaborate otherwise. A third trend has now emerged in which these two approaches come together, using high-tech open-source tools (e.g. 3D printing) to produce customized local solutions for sustainable development. Open design holds great potential in driving future innovation, as recent research has shown that stakeholder users working together produce more innovative designs than designers consulting users through more traditional means.
Open machine design as compared to open-source software
The open-design movement is currently fairly nascent but holds great potential for the future. In some respects design and engineering are even more suited to open collaborative development than the increasingly common open-source software projects, because with 3D models and photographs the concept can often be understood visually. It is not even necessary that the project members speak the same languages to usefully collaborate.
However, there are certain barriers to overcome for open design when compared to software development where there are mature and widely used tools available and the duplication and distribution of code cost next to nothing. Creating, testing and modifying physical designs is not quite so straightforward because of the effort, time and cost required to create the physical artefact; although with access to emerging flexible computer-controlled manufacturing techniques the complexity and effort of construction can be significantly reduced (see tools mentioned in the fab lab article).
Organizations
Open design is currently a fledgling movement consisting of several unrelated or loosely related initiatives. Many of these organizations are single, funded projects, while a few organizations are focusing on an area needing development. In some cases (e.g. Thingiverse for 3D printable designs or Appropedia for open source appropriate technology) organizations are making an effort to create a centralized open source design repository as this enables innovation. Notable organizations include:
AguaClara, an open-source engineering group at Cornell University publishing a design tool and CAD designs for water treatment plants
Arduino, an open-source electronics hardware platform, community and company
Elektor: electronics
Instructables: miscellaneous
Local Motors: methods of transport, vehicles
LittleBits
One Laptop Per Child, a project to give every child in developing territories a laptop computer with open hardware and software
OpenCores: digital electronic hardware
Open Architecture Network: places to shelter
Open Hardware and Design Alliance (OHANDA)
OpenStructures (OSP), a modular construction model where everyone designs on the basis of one shared geometrical grid.
Open Source Ecology, including solar cells
Sensorica, a collaborative open-source hardware Open Value Network (virtual organisation)
Thingiverse: miscellaneous
VOICED
VIA OpenBook netbook has CAD files for the design licensed under the Creative Commons Attribution Share Alike 3.0 Unported License
Wikispeed, open-source modular vehicles
Zoetrope, an open-design, low-cost wind turbine
See also
3D printing services
Commons-based peer production
Co-creation
Knowledge commons
Modular design
OpenBTS
Open manufacturing
Open-source appropriate technology
Open-source architecture
Open-source hardware
Open Source Initiative
Open-source software
Open standard and Open standardization
Open Design Alliance
References
External links
Episodes of Collective Invention (Peter B. Meyer, August 2003) An article on several historical examples of what could be called "open design"
"Lawrence Lessig and the Creative Commons Developing Nations License" (Alex Steffen, November 2006) An interview with Lawrence Lessig on the use of the Developing Nations License by Architecture for Humanity to create a global open design network
"In the Next Industrial Revolution, Atoms Are the New Bits" (Chris Anderson, Wired February 2010)
Design
Open-source hardware
Free culture movement
Open design
Articles containing video clips |
1729790 | https://en.wikipedia.org/wiki/Write%20once%2C%20compile%20anywhere | Write once, compile anywhere | Write once, compile anywhere (WOCA) is a philosophy taken by a compiler and its associated software libraries, or by a software library or software framework, referring to the capability of writing a computer program that can be compiled on all platforms without the need to modify its source code. As opposed to Sun's write once, run anywhere slogan, cross-platform compatibility is implemented only at the source code level, rather than also at the compiled binary code level.
Introduction
There are many languages that follow the WOCA philosophy, such as C++, Pascal (see Free Pascal), Ada, Cobol, or C, on condition that they don't use functions beyond those provided by the standard library. Languages like Go go even further: nothing system-specific is used directly, code is expected to just work, and system-specific elements are isolated in a system of platform-specific files. A computer program may also use cross-platform libraries, which provide an abstraction layer hiding the differences between various platforms for things like sockets and GUIs, ensuring the portability of the written source code. This is, for example, supported by Qt (C++) or the Lazarus (Pascal) IDE via its LCL and corresponding widgetsets.
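As a minimal sketch of the idea — assuming nothing beyond the C standard library, with the platform-specific branch shown only as an illustration of how unavoidable differences are isolated — a WOCA-style program looks like this:

 /* Uses only the C standard library, so any conforming compiler
    on any platform can build it without source changes. */
 #include <stdio.h>
 #include <time.h>

 int main(void) {
     time_t now = time(NULL);              /* portable: standard library */
     printf("Compiled anywhere: %s", ctime(&now));
 #if defined(_WIN32)
     /* When system-specific code is unavoidable, it is isolated behind
        preprocessor guards (or, as in Go, in per-platform files). */
     printf("Windows-specific branch\n");
 #else
     printf("Non-Windows branch\n");
 #endif
     return 0;
 }

The same source compiles with, for example, GCC on Linux or MSVC on Windows; only the guarded branch differs.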
Today, we have very powerful desktop computers as well as computers in our phones, which often run sophisticated applications such as word processors, database management systems, and spreadsheets that allow people with no programming experience to sort, extract, and manipulate their data and to create documents (such as PDF files) showing their newly organized information, or to print it out. Before 2000, some of these were not available, and prior to 1980, almost none of them were.
From the start of computer automation in the early 1960s, if you wanted a report from data you had, or needed to print invoices, payroll checks, purchase orders, and the other paperwork that businesses, schools and governments generated, you typed them up on a physical typewriter, possibly using pre-printed forms. If you did have information stored in a computer and wanted it sorted, manipulated, or printed, someone had to write a program to do so. In some cases, professional programmers could not understand what was wanted well enough to provide such a program, could not produce it in a reasonable amount of time, or simply were not available; this led some non-programming professionals to learn enough programming to manipulate and print the information they needed from their data.
Whether the work was done by a professional programmer or by an end user writing a program for their own use, the process was the same: write a program, submit it to a compiler (another program that converts written programs into instructions the computer can understand), fix any errors, then repeat until the program worked.
While this helped fix part of the problem, it created a new one. People who wrote programs, or hired someone to write them (purchasing software was not a thing until the 1970s or 1980s), discovered that when their employer or school bought a new computer, their programs no longer worked.
History
Originally, programming was done by writing programs directly in assembly language. The problems with this approach were that:
It is the ultimate in vendor lock-in as your programs are not only tied to that vendor's hardware, but to that particular model of that manufacturer's hardware.
If you want to move to a more capable or more powerful machine, your code might not work, requiring everything to be done over.
If your vendor goes out of business, or does not provide a better model, when you want to upgrade you have to move to another vendor's equipment, and your entire investment is lost, for the same reason as the previous example.
Assembly language is slow to develop in. A program in a high-level language today (like C++, Python, PHP, or Go) can have one line of code that would require dozens of assembly-language statements (see the sketch after this list).
Many people who want to develop solutions for a particular problem, and want to create a program to do so, may not want to take the time to learn the machine language of a computer, but would be willing to learn to write programs to solve their problems using something less complicated.
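To make the compression from high-level code to machine instructions concrete, here is a minimal C sketch; the instruction counts mentioned in the comment are indicative only, and vary by architecture and compiler:

 #include <stdio.h>
 #include <string.h>

 int main(void) {
     char name[32];
     /* One high-level line: copying a string. Compiled to assembly,
        this becomes a loop of load/compare/store/branch instructions,
        plus the surrounding address setup. */
     strcpy(name, "payroll");
     printf("%s\n", name);
     return 0;
 }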
Development of high-level programming languages
To combat these problems, various high-level languages were developed that were usable for general purpose application program development, but could be used to provide reports and information for people with specialized requirements.
These include:
BASIC, a simplified programming language which could be used to build "real world" complex applications, as well as simple programs to solve a problem, and was very accessible for amateur programmers. Its ease of use made BASIC well suited to introductory computer programming classes in many colleges.
C, a general-purpose programming language often suitable for writing programs which would otherwise require assembly language.
Cobol, a general-purpose programming language which tended to be used for developing business applications.
Fortran, a general-purpose programming language which is well suited for scientific and engineering problems.
Pascal, a general-purpose programming language, which tended to be targeted toward use in schools as an introductory class to teach programming, as an alternative to BASIC. Pascal became commonly used by hobbyist programmers in the 1970s and 1980s because implementations were part of an integrated development environment allowing a person to edit, compile, and run their programs with immediate results. Programs such as UCSD Pascal and Turbo Pascal opened up the world of programming to millions of people.
RPG, a domain-specific programming language designed to take the contents of data files, manipulate data and do calculations, then produce printed reports, similar to the way a person can use a spreadsheet to organize data and create PDF files today.
While compilers and interpreters for all of these languages, and dozens of others, were available for different machines from different vendors, each manufacturer would often develop proprietary enhancements which made programming on that machine easier but, again, made programs difficult to port (move to a different type of computer or a different vendor's computers) and increased vendor lock-in. Something had to change.
Standardization
Starting in the late 1960s and early 1970s, efforts were made to create standards and specifications for how machine-independent programs could be written using compilers from any vendor. Standards-making organizations such as the International Organization for Standardization (ISO) and ANSI, among others, worked in cooperation with large users of computers and software (such as governments, financial institutions and manufacturers) and with computer manufacturers to create standardized specifications describing how each specific language should be implemented.
Computer manufacturers could still offer proprietary extensions to a programming language, but to claim compliance with the standard they had to identify in their reference manuals what was different, so that a program written according to the standard would compile and operate on their machine and would also operate, without further change, on a different manufacturer's computer whose compiler followed the same standard. The standards were enforced by large software buyers, such as military, government, and manufacturing organizations, which refused to buy computing equipment whose compilers for the languages they used were not compliant with the standard.
Results
Currently, there are more than a dozen programming languages that have standards describing how programs in the language are supposed to be written, including Ada, APL, BASIC, C++, COBOL, ECMAScript (the generic name for JavaScript), Forth, FORTRAN, Pascal, PL/I, Ruby and SQL. Many of these are still in use, in some cases because the standardization of programming languages allowed customers to take their source code to a different manufacturer's computer, where it was recompiled, often without change.
While the standards helped, the WOCA philosophy works only when the makers of compilers ensure that they follow the standard.
See also
Pivot language
Software engineering folklore
Computer programming folklore
Programming language folklore
Interoperability |
34361187 | https://en.wikipedia.org/wiki/Hypsenor | Hypsenor | In Greek mythology, the name Hypsenor (Ancient Greek: Ὑψήνωρ) may refer to:
Hypsenor, a Trojan priest, son of Dolopion, killed by Eurypylus.
Hypsenor, son of Hippasus, fought under Antilochus and was killed by Deiphobus.
Hypsenor, a son of Neleus and Chloris.
Notes
References
Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. . Online version at the Perseus Digital Library.
Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. . Greek text available at the Perseus Digital Library.
Trojans
Achaeans (Homer)
People of the Trojan War
Characters in Greek mythology |
2650529 | https://en.wikipedia.org/wiki/Integrity%20%28disambiguation%29 | Integrity (disambiguation) | Integrity is the ethical concept of basing one's actions on a consistent framework of principles.
Integrity may also refer to:
Technology
Data integrity, a concept from information and telecommunications technology in general, and cryptography in particular
System integrity, a telecommunications concept regarding the operation of a system
Integrity (operating system), a real-time operating system produced and marketed by Green Hills Software
HPE Integrity Servers, a server line from Hewlett Packard Enterprise based on the Itanium processor
Integrity by Tandem Computers, a fault-tolerant server line and Unix-based operating system whose trademark passed to HP
PTC Integrity, a software system lifecycle management and application lifecycle management platform
Arts and media
Music
Integrity (band), an American punk rock band formed in 1988
Integrity 2000, a 1999 album by American punk band Integrity
Integrity (album), 2015 album by British grime artist Jme
Integrity Blues, a 2016 album by American rock band Jimmy Eat World
TV and films
Anti-Corruption (film) (translated as "Storm of Integrity"), a 1975 Hong Kong crime film
"Integrity" (Modern Family), a 2015 episode from the TV series Modern Family
A Man of Integrity, a 2017 Iranian drama film
Integrity (film), a 2019 Hong Kong crime film
Media companies
Integrity Media, a media communications company that publishes and distributes Christian music, films and related materials
Integrity Records, a British independent record label
Legal
Bodily integrity, the principle of a human right to personal autonomy
Territorial integrity, a principle under international law
Integrity Staffing Solutions, Inc. v. Busk, a 2014 US Supreme Court decision
Nautical
HMCS Integrity (1804), a cutter that disappeared in 1805 while en route from New South Wales to Chile
HMS Integrity, a name used by the Royal Navy for several ships
Integrity (1824 ship), an English merchant ship
MV American Integrity, an American ship built in 1978
Other uses
IntegrityBC, a Canadian non-profit organization focusing on political reform
Integrity Toys, an American toy company
Integrity USA, an American non-profit organization of lesbian, gay, bisexual, and transgender Episcopalians and straight friends
Global Integrity, a nonprofit organization based in Washington, DC which monitors governmental corruption around the world |
346781 | https://en.wikipedia.org/wiki/Modeling%20language | Modeling language | A modeling language is any artificial language that can be used to express information or knowledge or systems in a structure that is defined by a consistent set of rules. The rules are used for interpretation of the meaning of components in the structure.
Overview
A modeling language can be graphical or textual.
Graphical modeling languages use a diagram technique with named symbols that represent concepts and lines that connect the symbols and represent relationships and various other graphical notation to represent constraints.
Textual modeling languages may use standardized keywords accompanied by parameters or natural language terms and phrases to make computer-interpretable expressions.
An example of a graphical modeling language and a corresponding textual modeling language is EXPRESS.
Not all modeling languages are executable, and for those that are, the use of them doesn't necessarily mean that programmers are no longer required. On the contrary, executable modeling languages are intended to amplify the productivity of skilled programmers, so that they can address more challenging problems, such as parallel computing and distributed systems.
A large number of modeling languages appear in the literature.
Types of modeling languages
Graphical types
Examples of graphical modeling languages in the field of computer science, project management and systems engineering include:
Behavior Trees are a formal, graphical modeling language used primarily in systems and software engineering. They are commonly used to unambiguously represent the hundreds or even thousands of natural language requirements that are typically used to express the stakeholder needs for a large-scale software-integrated system.
Business Process Modeling Notation (BPMN, and the XML form BPML) is an example of a Process Modeling language.
C-K theory consists of a modeling language for design processes.
DRAKON is a general-purpose algorithmic modeling language for specifying software-intensive systems, a schematic representation of an algorithm or a stepwise process, and a family of programming languages.
EXPRESS and EXPRESS-G (ISO 10303-11) is an international standard general-purpose data modeling language.
Extended Enterprise Modeling Language (EEML) is commonly used for business process modeling across a number of layers.
Flowchart is a schematic representation of an algorithm or a stepwise process.
Fundamental Modeling Concepts (FMC) modeling language for software-intensive systems.
IDEF is a family of modeling languages, which include IDEF0 for functional modeling, IDEF1X for information modeling, IDEF3 for business process modeling, IDEF4 for Object-Oriented Design and IDEF5 for modeling ontologies.
Jackson Structured Programming (JSP) is a method for structured programming based on correspondences between data stream structure and program structure.
LePUS3 is an object-oriented visual Design Description Language and a formal specification language that is suitable primarily for modeling large object-oriented (Java, C++, C#) programs and design patterns.
Lifecycle Modeling Language is an open-standard language for systems engineering that supports the full system lifecycle: conceptual, utilization, support and retirement stages.
Object-Role Modeling (ORM) in the field of software engineering is a method for conceptual modeling, and can be used as a tool for information and rules analysis.
Petri nets use variations on exactly one diagramming technique and topology, namely the bipartite graph. The simplicity of its basic user interface easily enabled extensive tool support over the years, particularly in the areas of model checking, graphically oriented simulation, and software verification.
Southbeach Notation is a visual modeling language used to describe situations in terms of agents that are considered useful or harmful from the modeler's perspective. The notation shows how the agents interact with each other and whether this interaction improves or worsens the situation.
Specification and Description Language (SDL) is a specification language targeted at the unambiguous specification and description of the behavior of reactive and distributed systems.
SysML is a Domain-Specific Modeling language for systems engineering that is defined as a UML profile (customization).
Unified Modeling Language (UML) is a general-purpose modeling language that is an industry standard for specifying software-intensive systems. UML 2.0, the current version, supports thirteen different diagram techniques, and has widespread tool support.
Service-oriented modeling framework (SOMF) is a holistic language for designing enterprise and application level architecture models in the space of enterprise architecture, virtualization, service-oriented architecture (SOA), cloud computing, and more.
Architecture description language (ADL) is a language used to describe and represent the systems architecture of a system.
AADL (AADL) is a modeling language that supports early and repeated analyses of a system's architecture with respect to performance-critical properties through an extendable notation, a tool framework, and precisely defined semantics.
Examples of graphical modeling languages in other fields of science include:
EAST-ADL is a Domain-Specific Modeling language dedicated to automotive system design.
Energy Systems Language (ESL), a language that aims to model ecological energetics & global economics.
IEC 61499 defines a domain-specific modeling language dedicated to distributed industrial process measurement and control systems.
Textual types
Information models can also be expressed in formalized natural languages, such as Gellish. Gellish has natural language variants such as Gellish Formal English and Gellish Formal Dutch (Gellish Formeel Nederlands), etc. Gellish Formal English is an information representation language or semantic modeling language that is defined in the Gellish English Dictionary-Taxonomy, which has the form of a Taxonomy-Ontology (similarly for Dutch). Gellish Formal English is not only suitable to express knowledge, requirements and dictionaries, taxonomies and ontologies, but also information about individual things. All that information is expressed in one language and therefore it can all be integrated, regardless of whether it is stored in central, distributed or federated databases. Information models in Gellish Formal English consist of collections of Gellish Formal English expressions that use natural language terms and formalized phrases. For example, a geographic information model might consist of a number of Gellish Formal English expressions, such as:
- the Eiffel tower <is located in> Paris
- Paris <is classified as a> city
whereas information requirements and knowledge can be expressed for example as follows:
- tower <shall be located in a> geographical area
- city <is a kind of> geographical area
Such Gellish Formal English expressions use names of concepts (such as "city") and phrases that represent relation types (such as <is located in> and <is classified as a>) that should be selected from the Gellish English Dictionary-Taxonomy (or from your own domain dictionary). The Gellish English Dictionary-Taxonomy enables the creation of semantically rich information models, because the dictionary contains more than 600 standard relation types and contains definitions of more than 40000 concepts. An information model in Gellish can express facts or make statements, queries and answers.
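Purely as an illustration — the structure and names below are hypothetical and not part of any Gellish tooling — such expressions can be pictured programmatically as (left object, relation type, right object) triples:

 #include <stdio.h>

 /* Hypothetical sketch: a Gellish-style expression as a triple. */
 struct expression {
     const char *left;      /* e.g. "the Eiffel tower" */
     const char *relation;  /* e.g. "is located in"    */
     const char *right;     /* e.g. "Paris"            */
 };

 int main(void) {
     struct expression model[] = {
         { "the Eiffel tower", "is located in",      "Paris" },
         { "Paris",            "is classified as a", "city"  },
     };
     for (int i = 0; i < 2; i++)
         printf("%s <%s> %s\n", model[i].left,
                model[i].relation, model[i].right);
     return 0;
 }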
More specific types
In the field of computer science recently more specific types of modeling languages have emerged.
Algebraic
Algebraic Modeling Languages (AML) are high-level programming languages for describing and solving high complexity problems for large scale mathematical computation (i.e. large scale optimization type problems). One particular advantage of AMLs like AIMMS, AMPL, GAMS, Mosel, OPL and OptimJ is the similarity of their syntax to the mathematical notation of optimization problems. This allows for a very concise and readable definition of problems in the domain of optimization, which is supported by certain language elements like sets, indices, algebraic expressions, powerful sparse index and data handling, variables, and constraints with arbitrary names. The algebraic formulation of a model does not contain any hints as to how to process it.
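For instance, the mathematical notation that an AML's syntax mirrors is the standard statement of an optimization problem, such as this generic textbook form of a linear program (not tied to any particular AML):

 \begin{aligned}
 \min_{x \in \mathbb{R}^n} \quad & c^{\top} x \\
 \text{subject to} \quad & A x \le b, \\
 & x \ge 0.
 \end{aligned}

An AML lets the modeler write the sets, indices, variables and constraints of such a problem almost verbatim, leaving the choice of solution algorithm to a separate solver.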
Behavioral
Behavioral languages are designed to describe the observable behavior of complex systems consisting of components that execute concurrently. These languages focus on the description of key concepts such as concurrency, nondeterminism, synchronization, and communication. The semantic foundations of behavioral languages are process calculus or process algebra.
Discipline-specific
A discipline-specific modeling (DspM) language is focused on deliverables affiliated with a specific software development life cycle stage. Therefore, such a language offers a distinct vocabulary, syntax, and notation for each stage, such as discovery, analysis, design, architecture, construction, etc. For example, for the analysis phase of a project, the modeler employs specific analysis notation to deliver an analysis proposition diagram. During the design phase, however, logical design notation is used to depict relationships between software entities. In addition, discipline-specific modeling language best practices do not preclude practitioners from combining the various notations in a single diagram.
Domain-specific
Domain-specific modeling (DSM) is a software engineering methodology for designing and developing systems, most often IT systems such as computer software. It involves systematic use of a graphical domain-specific language (DSL) to represent the various facets of a system. DSM languages tend to support higher-level abstractions than General-purpose modeling languages, so they require less effort and fewer low-level details to specify a given system.
Framework-specific
A framework-specific modeling language (FSML) is a kind of domain-specific modeling language which is designed for an object-oriented application framework. FSMLs define framework-provided abstractions as FSML concepts and decompose the abstractions into features. The features represent implementation steps or choices.
A FSML concept can be configured by selecting features and providing values for features. Such a concept configuration represents how the concept should be implemented in the code. In other words, concept configuration describes how the framework should be completed in order to create the implementation of the concept.
Information and knowledge modeling
Linked data and ontology engineering require 'host languages' to represent entities and the relations between them, constraints between the properties of entities and relations, and metadata attributes. JSON-LD and RDF are two major (and semantically almost equivalent) languages in this context, primarily because they support statement reification and contextualisation which are essential properties to support the higher-order logic needed to reason about models. Model transformation is a common example of such reasoning.
Object-oriented
Object modeling languages are modeling languages based on a standardized set of symbols and ways of arranging them to model (part of) an object oriented software design or system design.
Some organizations use them extensively in combination with a software development methodology to progress from initial specification to an implementation plan and to communicate that plan to an entire team of developers and stakeholders. Because a modeling language is visual and at a higher-level of abstraction than code, using models encourages the generation of a shared vision that may prevent problems of differing interpretation later in development. Often software modeling tools are used to construct these models, which may then be capable of automatic translation to code.
Virtual reality
Virtual Reality Modeling Language (VRML), before 1995 known as the Virtual Reality Markup Language is a standard file format for representing 3-dimensional (3D) interactive vector graphics, designed particularly with the World Wide Web in mind.
Others
Architecture Description Language
Face Modeling Language
Generative Modelling Language
Java Modeling Language
Promela
Rebeca Modeling Language
Service Modeling Language
Web Services Modeling Language
X3D
Applications
Various kinds of modeling languages are applied in different disciplines, including computer science, information management, business process modeling, software engineering, and systems engineering. Modeling languages can be used to specify:
system requirements,
structures and
behaviors.
Modeling languages are intended to be used to precisely specify systems so that stakeholders (e.g., customers, operators, analysts, designers) can better understand the system being modeled.
The more mature modeling languages are precise, consistent and executable. Informal diagramming techniques applied with drawing tools are expected to produce useful pictorial representations of system requirements, structures and behaviors, which can be useful for communication, design, and problem solving but cannot be used programmatically. Executable modeling languages applied with proper tool support, however, are expected to automate system verification and validation, simulation and code generation from the same representations.
Quality
A review of modelling languages is essential in order to determine which languages are appropriate for different modelling settings. By the term settings we include stakeholders, the domain and the knowledge connected to them. Assessing language quality is a means that aims to achieve better models.
Framework for evaluation
Here language quality is stated in accordance with the SEQUAL framework for quality of models developed by Krogstie, Sindre and Lindland (2003), since this is a framework that connects the language quality to a framework for general model quality. Five areas are used in this framework to describe language quality, and these are supposed to express both the conceptual as well as the visual notation of the language. We will not go into a thorough explanation of the underlying quality framework of models but concentrate on the areas used to explain the language quality framework.
Domain appropriateness
The framework states the ability to represent the domain as domain appropriateness. The term appropriateness can be a bit vague, but in this particular context it means the ability to express. You should ideally only be able to express things that are in the domain, but the language should be powerful enough to include everything that is in the domain. This requirement might seem a bit strict, but the aim is to get a visually expressed model which includes everything relevant to the domain and excludes everything not appropriate for it. To achieve this, the language has to distinguish clearly which notations and syntaxes are advantageous to present.
Participant appropriateness
To evaluate the participant appropriateness we try to identify how well the language expresses the knowledge held by the stakeholders. This involves challenges since a stakeholder's knowledge is subjective. The knowledge of the stakeholder is both tacit and explicit. Both types of knowledge are of dynamic character. In this framework only the explicit type of knowledge is taken into account. The language should to a large extent express all the explicit knowledge of the stakeholders relevant to the domain.
Modeller appropriateness
The last paragraph stated that the knowledge of the stakeholders should be presented in a good way. In addition, it is imperative that the language should be able to express all possible explicit knowledge of the stakeholders. No knowledge should be left unexpressed due to deficiencies in the language.
Comprehensibility appropriateness
Comprehensibility appropriateness makes sure that the social actors understand the model, due to a consistent use of the language. To achieve this, the framework includes a set of criteria. The general point these express is that the language should be flexible, easy to organize, and easy to distinguish in its different parts, internally as well as from other languages. In addition to this, the language should be as simple as possible, and each symbol in it should have a unique representation.
This is also connected to the structure of the development requirements.
Tool appropriateness
To ensure that the domain actually modelled is usable for analysis and further processing, the language has to make it possible to reason in an automatic way. To achieve this, it has to include formal syntax and semantics. Another advantage of formalizing is the ability to discover errors at an early stage. The language best fitted for the technical actors is not always the same as the one best fitted for the social actors.
Organizational appropriateness
The language used is appropriate for the organizational context, e.g. that the language is standardized within the organization, or that it is supported by tools that are chosen as standard in the organization.
See also
Analogical models
Anthropomorphism
Metamodeling
Model-based testing (MBT)
Model-driven architecture
Model-driven engineering (MDE)
Modeling perspective
Ontology language
Scientific modelling
Service-oriented modeling
Visual modeling
Visual programming language
References
Further reading
John Krogstie (2003) "Evaluating UML using a generic quality framework" . SINTEF Telecom and Informatics and IDI, NTNU, Norway
Krogstie and Sølvsberg (2003). Information Systems Engineering: Conceptual Modeling in a Quality Perspective. Institute of computer and information sciences.
Anna Gunhild Nysetvold and John Krogstie (2005). "Assessing business processing modeling languages using a generic quality framework". Institute of computer and information sciences.
External links
Fundamental Modeling Concepts
Software Modeling Languages Portal
BIP -- Incremental Component-based Construction of Real-time Systems
Gellish Formal English
Specification languages |
23069169 | https://en.wikipedia.org/wiki/PlayStation%20Move | PlayStation Move | PlayStation Move is a motion game controller developed by Sony Computer Entertainment. Initially released in 2010 for use with the PlayStation 3 video game console, its compatibility was later expanded to its successor, the PlayStation 4, in 2013, its PlayStation VR platform in 2016 and the PlayStation 5 in 2020. Conceptually similar to Nintendo's Wii Remote and Microsoft's Kinect, its function is based around controller input in games stemming from the actual physical movement of the player. The Move uses inertial sensors in the wand to detect motion, while the wand's position is tracked using a PlayStation Eye or PlayStation Camera. The device was generally well received by critics, but has not quite met Sony's goals for integration into the market.
Hardware
As with the standard PlayStation 3 wireless controllers (Sixaxis, DualShock 3), both the main PlayStation Move motion controller and the PlayStation Move navigation controller use Bluetooth 2.0 and an internal lithium-ion battery charged via a USB Mini-B port on the controller. On the PlayStation 3, up to four Move controllers can be used at once (four Move motion controllers, or two Move motion controllers and two Move navigation controllers).
Motion controller
The primary component of PlayStation Move, the PlayStation Move motion controller, is a wand controller which allows the user to interact with the console through motion and position in front of a PlayStation camera. It functions similarly to the Wii Remote.
The PlayStation Move motion controller features an orb at the head which can glow in any of a full range of colors using RGB light-emitting diodes (LEDs). Based on the colors in the user environment captured by the camera, the system dynamically selects an orb color that can be distinguished from the rest of the scene. The colored light serves as an active marker, the position of which can be tracked along the image plane by the camera. The uniform spherical shape and known size of the light also allows the system to simply determine the controller's distance from the camera through the light's image size, thus enabling the controller's position to be tracked in three dimensions with high precision and accuracy. The simple sphere-based distance calculation allows the controller to operate with minimal processing latency, as opposed to other camera-based control techniques on the PlayStation 3.
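A simplified sketch of the distance-from-image-size idea follows; it assumes an ideal pinhole camera, and every constant in it is a hypothetical placeholder rather than a value used by Sony:

 #include <stdio.h>

 /* Pinhole-camera model: a sphere of known physical diameter appears
    smaller the farther it is from the camera, so
        distance = focal_length_px * real_diameter / image_diameter_px */
 static double sphere_distance(double focal_px, double real_diam_m,
                               double image_diam_px) {
     return focal_px * real_diam_m / image_diam_px;
 }

 int main(void) {
     double f  = 540.0;   /* assumed focal length, pixels            */
     double d  = 0.044;   /* assumed orb diameter, metres            */
     double px = 24.0;    /* measured orb diameter in the image, px  */
     printf("estimated distance: %.2f m\n", sphere_distance(f, d, px));
     return 0;
 }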
A pair of inertial sensors inside the controller, a three-axis linear accelerometer and a three-axis angular rate sensor, are used to track rotation as well as overall motion. An internal magnetometer is also used for calibrating the controller's orientation against the Earth's magnetic field to help correct against cumulative error (drift) by the inertial sensors. In addition, an internal temperature sensor is used to adjust the inertial sensor readings against temperature effects. The inertial sensors can be used for dead reckoning in cases which the camera tracking is insufficient, such as when the controller is obscured behind the player's back.
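As an illustration of how an absolute reference such as a magnetometer heading can rein in gyroscope drift, here is a generic one-axis complementary filter — a textbook technique sketched under assumed values, not Sony's actual sensor-fusion code:

 #include <stdio.h>

 /* Integrate the gyro rate (dead reckoning), then nudge the result
    toward the absolute magnetometer heading so integration error
    cannot accumulate without bound. */
 static double fuse_yaw(double yaw, double gyro_rate, double dt,
                        double mag_heading, double alpha) {
     double integrated = yaw + gyro_rate * dt;
     return alpha * integrated + (1.0 - alpha) * mag_heading;
 }

 int main(void) {
     double yaw = 0.0;
     for (int i = 0; i < 100; i++)        /* 100 steps of 10 ms each */
         yaw = fuse_yaw(yaw, 0.5, 0.01,   /* biased gyro rate, rad/s */
                        0.4, 0.98);       /* magnetometer heading    */
     printf("fused yaw: %.3f rad\n", yaw);
     return 0;
 }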
The controller face features a large oblong primary button (Move), surrounded by small action buttons (Triangle, Circle, Cross, Square), and with a regular-sized PS button beneath, arranged in a similar configuration to that on the Blu-ray Disc Remote Control. On the left and right sides of the controller are a Select and a Start button, respectively. On the underside is an analog trigger (T). On the tail end of the controller are the wrist strap, USB port, and extension port.
The motion controller features vibration-based haptic technology. In addition to providing a tracking reference, the controller's orb light can be used to provide visual feedback, simulating aesthetic effects such as the muzzle flash of a gun or the paint on a brush.
Using different orb colors for each controller, up to four motion controllers can be tracked at once on the PlayStation 3. Demonstrations for the controller have featured activities using a single motion controller, as well as those in which the user wields two motion controllers, with one in each hand. To minimize the cost of entry, Sony stated that all launch titles for PlayStation Move would be playable with one motion controller, with enhanced options available for multiple motion controllers.
On the PlayStation 3, image processing for PlayStation Move is performed in the console's Cell microprocessor. According to Sony, use of the motion-tracking library entails some Synergistic Processing Unit (SPU) overhead as well an impact on memory, though the company states that the effects will be minimized. According to Move motion controller co-designer Anton Mikhailov, the library uses 1-2 megabytes of system memory.
Navigation controller
The PlayStation Move navigation controller is a one-handed supplementary controller designed for use in conjunction with the PlayStation Move motion controller for certain types of gameplay, similar to the Nintendo Wii Nunchuk, although it lacks motion-sensing technology; dual-wield, independent two-handed motion control is instead implemented with the use of a second motion controller. Replicating the major functionality of the left side of a standard PlayStation 3 gamepad, the PlayStation Move navigation controller features a left analog stick (with L3 button function), a D-pad, an L1 button and an L2 analog trigger. The navigation controller also features Cross and Circle action buttons, as well as a PS button. Since all controls correspond to those of a standard PlayStation 3 gamepad, a Sixaxis or DualShock 3 controller can be used in place of the navigation controller in PlayStation Move applications.
Accessories
A number of additional accessories have also been released for use in conjunction with the PlayStation Move controllers.
The PlayStation Move charging station can charge any combination of two Move controllers.
PlayStation Move shooting attachment adapts the Move controllers into a handgun form. The motion controller is fitted into the gun barrel so that the motion controller's T trigger is interlocked with the trigger on the gun attachment, while leaving all the topmost buttons accessible through an opening in the top. Its functions are similar to the Wii Zapper.
PlayStation Move Sharp Shooter attachment adapts both Move controllers into a submachine gun form, which features an adjustable shoulder support. The motion controller is fitted into the gun barrel, while the navigation controller is clipped into a holder below this gun barrel. This accessory also adds several buttons to the controller. Several games, including Killzone 3, Dead Space: Extraction, SOCOM 4: U.S. Navy SEALs and Resistance 3, officially support this peripheral.
The PlayStation Move Racing Wheel allows players to simulate driving in video games. The accessory features vibration feedback, paddle-style gear shifters and twist throttle controls. It supports games like LittleBigPlanet Karting, Gran Turismo 5, Need for Speed: Hot Pursuit, Burnout Paradise, and MotorStorm: Apocalypse.
Bundle packages
In addition to selling the controllers individually, Sony also provides several different bundle options for PlayStation Move hardware such as: software/camera bundles with a PlayStation Eye, a Move motion controller and motion-control enabled software; console bundles which include a PS3 console, DualShock 3 controller, PlayStation Eye, and Move motion controller; and bundles with a Move motion controller with select games.
Though the games with the bundles vary in each region, most bundles come with the PlayStation Move Demo Disc, which contains demos for eleven different games. The bundles in Europe and Oceania, however, come with the PlayStation Move Starter Disc with a setup tutorial and nine demos (the same as the Demo Disc, except without Kung Fu Rider and Time Crisis). The demos included are for the games Beat Sketcher, Echochrome II, EyePet, Kung Fu Rider, Sports Champions, Start the Party!, The Shoot, Tiger Woods PGA Tour 11, Time Crisis: Razing Storm, Tumble, and TV Superstars.
In North America, bundles are available with the game Sports Champions or the PlayStation Move edition of EyePet. In Japan, bundles with Beat Sketch!, Biohazard 5 Alternative Edition, or Big 3 Gun Shooting are available. All bundles, as well as the stand-alone controller will also include the demo disk for a limited time. In Europe, a bundle will be released with a demo disc. In Asian countries outside Japan such as Singapore, the bundles are available with the games Sports Champions, Start the Party!, and Kung Fu Rider.
History
PlayStation Move stems from early work on the EyeToy, a webcam-based controller for the PlayStation 2 conceived in 1999 and released in 2003. Early in the EyeToy's inception, developers experimented with color-based 3D wand tracking, including prototypes using spheres. By the time it was released, the EyeToy's focus was on hands-free applications. With the emergence of affordable inertial sensors and the success of the Wii Remote motion controller wand, in 2008 Sony began work on productizing its own motion controller wand, revisiting the sphere-tracking concept for use with the PlayStation Eye, integrating inertial sensors, and refining the device from an engineering and a design perspective. The Move's internal development codename was the "Y-con", so called because three groups — the hardware team in SCEI in Japan, the software engineering team at SCEA, and Sony's Worldwide Studios — worked together to develop the hardware, with the three points of a "Y" indicating the three teams coming together. This was the start of a new form of development at Sony, where hardware had previously been developed separately from software teams and later delivered along with technical documentation for software teams to get to grips with themselves; this move was continued with the development of the PlayStation Vita and PlayStation 4.
The motion controller was revealed at Sony's E3 2009 press conference on 2 June 2009, with a live demonstration using an engineering prototype. Tentatively referred to as the PlayStation Motion Controller, the device was originally stated to be available in Q1/Q2 2010. As of August 2009, the controller features and design had not been finalized.
Soon after revealing the motion controller to developers, Sony indicated that it was exploring the possibility of using the motion controller in combination with a standard PlayStation 3 gamepad, such as having the player use "the motion controller as a sword and use DualShock 3 as a shield." One combination control scheme was demonstrated in September 2009 at the Tokyo Game Show for Biohazard 5: Alternative Edition, making particular use of the DualShock 3's analog stick. Although users found the setup to work well, some found holding a DualShock with one hand to be somewhat awkward. At the time Sony was already rumored to be in the design phase of a supplementary controller akin to that of the Nunchuk controller for the Wii Remote.
In January 2010, Sony announced a revised release target, stating instead that the motion controller would launch in Q3/Q4 of 2010. On March 10, Sony revealed the official name and logo at the Game Developers Conference, showcasing the final motion controller design, and unveiling the PlayStation Move navigation controller (then referred to as the PlayStation Move sub-controller), to be launched concurrently with the motion controller. The logo is a colored squiggle-like shape, representing a light trail from the sphere of a PlayStation Move motion controller being waved. Newly revealed in the final motion controller was the inclusion of an internal magnetometer.
Name
Prior to the Game Developers Conference 2010, the PlayStation Move motion controller was known by several names. Initially given little guidance on what to call the device when it was unveiled in June 2009, many in the video game press informally referred to the controller as the "magic wand", or simply "Wand" due to the controller's wand design and glowing orb. Sony had used the term "PlayStation Motion Controller" from the motion controller's introduction, but many perceived it to be used merely as a description. Sony gradually clarified "Motion Controller" as the tentative name, but by this time media attention had shifted to rumored final names. In September 2009, statements in two unconnected interviews at the Tokyo Game Show led to speculation that the controller may be referred to by developers as the "Sphere". In December, a brief reference to the motion controller as "Gem" by Electronic Arts CEO John Riccitiello during a media industry conference presentation prompted an admission by Sony that "Gem" was an early code name for the controller.
In January 2010, video game blog VG247 reported that Sony had named its PS3 motion control platform "Arc". The name was observed to liken the controller's glowing orb to the charged sphere of a Tesla coil or a plasma globe electrode. The report was supported by evidence emerging in the following weeks, including a registration of the playstationarc.com domain name to SCE dated October 2009 (shortly after the Tokyo Game Show) and numerous references to "Arc" by president Brian Farrell of video game publisher THQ during the company's February earnings conference call. Responding to speculation that Farrell's statements effectively confirmed the name, SCEA senior director of corporate communications Patrick Seybold stated that they did not, and that Farrell was referring to "Arc" as a "rumored code name."
On March 1, it was reported that Sony had submitted a Japanese trademark application filing for "PlayStation Arc". A week later on March 8, Sony was reportedly considering a hasty renaming due to a trademark held by competitor Microsoft for its Arc-brand PC accessories, which could present trademark conflicts. On March 9, Sony submitted a European trademark filing for "PlayStation Move", which was announced as the official name the next day at Sony's press conference at the Game Developers Conference. Video gaming blog Joystiq reported several anonymous Sony sources claiming that the PlayStation Move logo presented at the conference resembles a letter "A" because it is the same design from when the name was "PlayStation Arc", in which the "A" would stand for "Arc".
Promotion
As part of the promotional marketing for Sorcery, the PlayStation Move controller was inducted into The Magic Circle museum by Vice President Scott Penrose.
Recreational therapy
PlayStation Move is now being actively used in recreational therapy for children who suffer from obesity. The Move creates a challenging physical environment for the children. In one study, recreational therapists utilized PlayStation Move Fitness. The children chose which activity to do and then engaged in it for approximately 30 minutes. The recreational therapists monitored the children.
These programs have been shown to:
Lower BMI and weight.
Improve cardiovascular endurance.
Lead to healthier choices in recreation activities.
Improve self-esteem.
Identify the importance of exercise.
Software
Alongside SIE Worldwide Studios and its second-party partners, a total of 36 third-party game development companies had confirmed that they would support the PlayStation Move by the time the finalized controller was announced in March 2010. On the box art of PlayStation Move games, underneath the PlayStation 3 logo banner, a blue bar with white letters indicates when a game supports the PlayStation Move. When a game can only be played with the PlayStation Move, the box art carries a "PlayStation Move Required" label. When a game supports traditional Sixaxis/DualShock 3 controls and PlayStation Move controls it carries a "PlayStation Move Features" (or "PlayStation Move Compatible") label.
Reception
Critical reception
The PlayStation Move has been generally well received. Game Informer gave it an 8 out of 10, saying, "The PlayStation Eye and motion controller are a killer combination for accurate and highly responsive motion-based gameplay, and we applaud Sony for getting the hardware right the first time." Kotaku praised its accuracy, design, use of augmented reality and said, "The Playstation Move is a intuitive, natural feeling way to play games and it brings with it not only a sense of increased immersion to already graphically immersive games, but a new way to play with your reality and a refreshing form of colorful feedback." IGN gave the Move an 8.5 out of 10, noting that the launch line-up of games for the controller was insufficient though it summarized by saying, "At the end of the day, the PlayStation Move has the potential to be the best motion control system on the current crop of consoles." Joystiq praised the Move, saying, "The hardware's great, and I can see it being used in a multitude of really cool ways, but of course it's only as cool as the games that use it" and that the launch line-up was not worth the purchase, though it believed that the Move would be worth the purchase in early 2011 due to a stronger line-up of games such as SOCOM 4: U.S. Navy SEALs and Killzone 3. CVG gave the Move an extremely positive review and awarded it 9 out of 10, saying, "Sony's motion control gets beyond being a gimmick. We found ourselves constantly itching for 'one more go'." The Guardian strongly criticized the Move's launch line-up, though it noted that the hardware was strong and that after playing with the Move it was "very hard to go back to the relative inaccuracy of the Wii".
The PlayStation Move won the 2010 Popular Science award for the "Most immersive game controller".
Sales
In October 2010, the PlayStation Move had shipped around 1.5 million units in Europe and 1 million units in North America during its first month of release, figures that Sony felt were selling "extremely well" at the time. On 30 November 2010, it was announced that 4.1 million units had been shipped worldwide in the first two months since its release. By June 2011, at E3 2011, Sony announced that the PlayStation Move had sold 8.8 million units. By November 2012, this figure had grown to 15 million.
In March 2012, Fergal Gara of Sony UK spoke to Official PlayStation Magazine UK to acknowledge that the device had not lived up to their expectations, either in its target audience or in the software support that had been provided for it.
See also
Wonderbook
Games that are compatible with PlayStation Move
Games that require PlayStation Move
Footnotes
References
External links
European PlayStation Move page
North American PlayStation Move page
PlayStation Move Sharp Shooter
PlayStation 3 accessories
Video game controllers
Pointing devices
Products introduced in 2010 |
8157631 | https://en.wikipedia.org/wiki/List%20of%20Nintendo%20DS%20and%203DS%20flash%20cartridges | List of Nintendo DS and 3DS flash cartridges | Nintendo DS and 3DS storage devices are used to store a licensed developer's work-in-progress images, homebrew video games, and downloaded commercial games (since the Nintendo DS is not sold with a rewritable storage medium). Licensed developers, however, can use a blue Intelligent Systems Nitro Emulator box to flash cards. These devices are also known as "flashcarts" or "flashcards". There are two main classes of flashcarts: older devices which fit in Slot-2 (the Game Boy Advance Game Pak slot) and newer devices that fit in Slot-1 (the DS Game Card slot). Slot-2 (or first-generation) devices have been historically cheaper due to economies of scale (inherited from their use with Game Boy Advance), but require a booting tool in Slot-1 in order to use the touch screen and other DS features. Second-generation devices (those which only use Slot-1) do not work with GBA homebrew, but became less expensive and easier to use, rivaling many Slot-2 devices in price.
First-generation devices include GBA flash cartridges, GBAMP CF, SuperCard, and M3. Second-generation devices include R4 Revolution, CycloDS, G6 Real and DS-X. Many of these devices also work with the Nintendo 3DS as DS flash cartridges, allowing DS ROM backups to run in game consoles of the Nintendo 3DS line. There are also flash cartridges made exclusively for the Nintendo 3DS, such as the Gateway 3DS and Sky3DS, which allow 3DS ROM backups to run in these systems. The storage device either contains flash memory or a slot for a memory card to store homebrew. Storage devices with a memory-card slot usually have more storage capacity than flash-memory devices. Although flash-memory capacity is usually measured in megabits (Mb), memory-card capacity is usually measured in megabytes (MB) (where 8 Mb is 1 MB).
Storage-device brands differ in their support for homebrew; DS, 3DS and Game Boy Advance ROMs; special features (such as the ability to play media files); physical size; and cost. Strictly speaking, a storage device is not necessary for a DS with FlashMe installed, because homebrew can be sent to the DS using Wireless Multiboot (WMB). However, this is not an easily portable method, because the DS needs to be within range of a suitable Wi-Fi card.
First generation
GBA flash cartridge
The first method of storing homebrew applications for the Nintendo DS was the use of flash cartridges designed for the Game Boy Advance. These were effective in finding exploits, since they are a 32 MiB block of rewritable flash memory directly accessible by both CPUs of the Nintendo DS. Users of GBA homebrew tended to use GBA methods for DS homebrew as well; however, the limited storage space, variety and price of GBA flash cartridges make them unsuitable for new users. Since there were many types of flash cartridges (each with its own method for writing to the flash ROM), most homebrew programs only supported saving to the included 64 KiB of SRAM intended for game-saving.
After the creation of DLDI this was no longer a problem, and any program from 2007 or later works with any flashcart. This method of storage does not work with the Nintendo DSi, since it does not have a GBA slot.
GBA Movie Player
The GBA Movie Player (GBAMP) is a CompactFlash adapter for the Game Boy Advance. It supports playing music and movies, NES and Game Boy games (under 200 kB), and Game Boy Advance games (under 256 kB) from the CF card, as well as reading text files. Its low price and simple design make it suitable for DS homebrew. Hacked firmware is available, adding the ability to run DS homebrew while maintaining the GBA features. This is the most widely supported homebrew device; nearly every homebrew which writes to the storage medium supports it. There are three versions of the GBAMP: a large pass-through device (version 1), a small white-and-red cartridge (version 2) and a slightly smaller SD card version (version 2 SD). Version 2 has the most support.
SuperCard and M3 Perfect
Supercard and M3 Perfect are similar devices to the GBA Movie Player, with more features. M3 is made by the developers of GBAMP. Both contain a CompactFlash, Secure Digital, miniSD, or Transflash slot, and 32 MB of built-in RAM. They offer GBAMP's features, in addition to the ability to play all GBA games and homebrew with the built-in RAM as a flash cartridge. They can also play DS backups by using sequential reading (SuperCard's built-in RAM, however, is too slow to play some GBA games accurately). Some DS homebrew also uses this additional RAM; however, addressing issues and speed make it less useful than the DS' built-in memory.
SuperCard and M3 Perfect are more expensive than GBAMP (M3 more so than SuperCard), and not as well supported by homebrew. Most homebrew encounters difficulty writing to SD cards; the more complex and proprietary protocols used with SD were reverse engineered later than the well-known CF protocol. The SuperCard has full support for booting DS game backups (no problems with saving or booting, a cheat system and rumble support); however, it only has partial support for DS Download Play (some games freeze on loading). Variations on these devices have been released, such as SuperCard Rumble and M3 Pro. Some feature a smaller size, which better fits the DS Lite, achieved by reducing the built-in RAM.
MAX Media Player
The MAX Media Player (not to be confused with Max Media Launcher, a NoPass device) is similar to the GBA Movie Player, but does not function in GBA mode. Although it is the easiest device to find (the only DS homebrew device sold in major retail stores, such as Wal-Mart), its price, power consumption and poor homebrew support make it unsuitable. Its primary attraction is the ability to play DS game backups (through homebrew known as Maximum Overload, although all DS storage devices may be used for this purpose). A version of Maximum Overload has been released which allows the playing of commercial ROMs and has full Nintendo DS ROM compatibility. A DLDI driver has been written (with 99-percent homebrew compatibility) which is compatible with Pokémon Black and White games.
Second generation
Once the DS-card encryption was broken, it became possible to design cards which boot and read directly from SLOT-1. Pass-through devices and flash firmware are no longer needed to read from external storage, which has caused a proliferation of all-in-one plug and play cards.
One drawback of second-generation storage devices is that the new cards have poor compatibility with a significant portion of homebrew applications requiring filesystem I/O. DLDI (Dynamically Linked Device Interface for libfat) patching, however, has solved this problem. All second-generation cards with a DLDI driver written for them are able to run most homebrew requiring filesystem I/O. Also, GBA games generally cannot be played on these cards unless a separate Slot-2 flash cartridge is used.
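From the homebrew side, DLDI support is usually reached through the libfat library. The following is a minimal sketch assuming the devkitPro toolchain (the file name is arbitrary); the resulting binary is then DLDI-patched with the driver for the specific card before use:

```c
/* Minimal sketch of DS homebrew using filesystem I/O via libfat
 * (devkitPro toolchain assumed); the compiled binary is DLDI-patched
 * with the driver for the target storage device. */
#include <fat.h>
#include <stdio.h>

int main(void)
{
    if (!fatInitDefault()) {        /* binds libfat to the DLDI driver */
        return 1;                   /* no usable driver: no file I/O */
    }
    FILE *f = fopen("/hello.txt", "w");  /* arbitrary example file */
    if (f) {
        fputs("written through DLDI\n", f);
        fclose(f);
    }
    return 0;
}
```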
Acekard 2/2i
The Acekard 2 earned a 90-percent rating on DS-Scene.net and a Golden Award on GBAtemp. It features 99-percent game compatibility, skinnable download-play support, micro-SD/SDHC memory card support, quick loading and Action Replay cheat support. Users reported freezing problems with the first batch of cards, but the Acekard 2.1 revision (released in December 2008) addressed these issues. The newer cards have 2.1 on their stickers, and may also be identified by the lack of a microchip bulge under the sticker.
In addition to official firmware, a closed-source firmware known as AKAIO is under development. It is at version 1.9.0, with support for the EZ-Flash 3-in-1 and 3-in-1+ expansion packs. With the release of the Nintendo DSi, Acekard became the first team to release a card compatible with it: the Acekard 2i. Other than its support for the DSi, it is identical to the Acekard 2. When Nintendo released the newest version of the DSi firmware (1.4.1U in the United States), Acekard was the second manufacturer to release a patch making the card compatible. The Acekard 2i can be used on a 3DS in DS mode, up to version 4.3 of the firmware. The team has not released an update for the 1.4.5 (DSi) and 4.4 (3DS) firmware releases and has not issued any updates since early December 2012. It is possible that this card can no longer be updated and may be discontinued.
CycloDS Evolution
The CycloDS Evolution features DLDI auto-patching and full DS download play support. It earned a 96-percent rating on DS-Scene.net and a gold star on GBAtemp.net.
Like most other slot-1 devices, the CycloDS Evolution uses micro SD cards for storage; media are played with a modified version of the MoonShell player. It was one of the first to use SDHC cards, ranging from 4 to 32 GB. Its enhanced mode provides features activated through an in-game menu (which other cards do not support), including slow-motion, soft reset to CycloDS GUI, built-in cheat device, real-time save feature, an in-game DS Lite LCD backlight-brightness-change option and an in-game text reader for game walk-throughs.
The card also acts as a PassMe and Slot-2 card; the 3-in-1 extension or G6 Lite may be recognized as expansion packs for the CycloDS Evolution. This permits memory expansion for DSLinux, playing GBA games launched from the Slot-1 device, and using the Opera browser (which normally requires a separate RAM-expansion pack). The CycloDS Evolution boots and patches GBA games directly from the CycloDS menu, instead of requiring a separate homebrew program.
DS-Xtreme
The DS-Xtreme flash card is a one-card solution with 4 Gbit (512 MiB)—or a newer 16 Gbit model (2 GiB)—of internal flash memory, with no option to add external memory. The DS-Xtreme hardware supports generic USB mass-storage specifications, functioning as a drag-and-drop USB drive with no additional drivers needed. It has two color-adjustable LEDs. After connecting the DS-Xtreme to a PC via the included USB cable, files may be transferred to the DS-Xtreme memory and immediately used on the DS. The DS-Xtreme is not fully compatible; many games that load will have errors in a number of areas, particularly download play and Nintendo Wi-Fi Connection. These problems were corrected by installing the newest version of the firmware (as with any flashcart); however, support has been discontinued and many games require patching (or other advanced fixes) to operate. It is highly homebrew-compatible, and does not require extra fixes (due to the auto-patching software included). As of 2009, the price of the 16 Gbit model had dropped to around $50; however, its developers abandoned the project in 2007. DS-Scene.net gave the 4 Gbit DS-Xtreme a rating of 91 percent in its review.
DSTT, DSTTi and 3DSTT
The DSTT is a popular card, compatible with SDHC cards. Like the R4, it has been cloned. Some clones change the name (e.g. DSTT-ADV or CN-DSTT), but most label their product as DSTT even though they are not affiliated with the DSTT team. The DSTT website has a guide to identifying counterfeits and clones. The DSTT and DSTTi are versatile, incorporating cheat support in a variety of games. A patch exists for the DSTTi to allow it to work on 3DS firmware 2.1.0-4. The DSTT/DSTTi TTMenu kernel was updated to version 1.18 on December 15, 2010. A closed-source alternative firmware, YSMenu, was programmed by Yasu Software; version 6.72 was released on November 26, 2011.
EZFlash V
The EZFlash V uses microSD cards for storage. An advantage is its hybrid mode, which allows for speedy performance and less lag when using slower memory cards. The EZFlash V Plus supports microSD cards over 2 GB; the original model does not. The EZFlash Vi is an upgraded version of the EZFlash V Plus, made for the Nintendo DSi. A three-in-one expansion pack (supporting rumble, cache and GBA ROM) is also available. The EZFlash Vi card works with the Nintendo 3DS console. DS-Scene.net gave the EZFlash V a rating of 94 percent in its review.
EDGE DS
The EDGE DS card is physically identical to a Nintendo DS Lite cartridge, and has been compared to the R4 card (R4v2/R4v3/R4v5), DSTT card and SuperCard DSONE. The EDGE is a budget-priced variety of the full-featured CycloDS, using essentially the same system. It was sold in a distinct, triangular swivel-pack. The EDGE DS card ceased production in October 2009, and has been replaced by the iEDGE DS.
M3i Zero
The M3i Zero, released in July 2009, is an updated M3 Real card designed for compatibility with the Nintendo DSi. There is no GBA compatibility with the M3i Zero running on the Nintendo DSi, because the DSi does not have a GBA expansion slot. However, the M3i Zero will run GBA games on a DS or DS Lite with a GBA expansion card. The M3i Zero is the first card to use specialist hardware to upgrade its firmware; it is supplied with a firmware cable, allowing it to bypass Nintendo's DSi update system.
N-Card (NAND Card)
The N-Card was available in 128 MB, 512 MB, 1 GB and 2 GB sizes; only the 1 GB and 2 GB versions remain, the smaller cards having been discontinued. The N-Card does not require FlashMe, PassMe, or any other device or modification. It is compatible with all commercial games, without patching. No additional software is required to transfer games from PC to the card. Firmware 1.45 supports download play; Wi-Fi play has always been supported. The N-Card uses its internal memory, making its read speed faster than microSD cards. To read the card on a computer (to add or remove files), an adapter and USB cable are supplied.
The N-Card had at least six different clones: DS Fire Card, K6, MK5, Ultra N-Card, DS Linker and F-Card. All clones can run the original N-Card firmware.
NinjaPass
Like most cards, NinjaPass uses microSD cards for storage. Compatibility is not total, and is highly dependent on the microSD-card brand and model. The card works with most homebrew (including—but not limited to—DSAIM, DSLinux, MoonShell, NesDS, LemmingsDS and DSOrganize). It is compatible with microSD cards up to 4 GB, and works with cards of all speeds (if the speed is adjusted in the main boot menu before loading any applications). The product website provides users with all necessary startup software for download. DS-Scene.net gave the NinjaPass Evolution X9 a score of 82 percent in its review.
YushenDS Card, R4DS, M3 DS Simply, and their clones
R4DS (Revolution for DS), YushenDS Card (YDC), and M3DS Simply have essentially the same hardware. The same method is used to distinguish between Chinese, English, and Japanese (and German for the YDC) versions of the cards. The firmware for the various brand and language versions can be readily patched to work on other language (or brand) versions of the hardware.
The original R4 card was updated in early 2007 to the R4 version 2, or R4v2. In late 2007 the R4v2 was revised, eliminating the spring mechanism for inserting and releasing the microSD card. Instead, it had a slot in the back into which a user could insert a microSD card. This eliminated the problem in the original R4 Revolution DS Card where the spring mechanism malfunctioned after prolonged use. DS-Scene.net rated the original R4 at 95 percent in its review.
Further confusion has been added by poor-quality clones of the YDC R4 and M3 hardware, selling under brands including N5, E7, ND1, NPlayer, U2DS, MARS and variations of the R4 name (such as R4DS Upgrade-II, New R4, R4 Deluxe, R4 Advance, R4 DS III, R4 SDHC, R4 gold Pro and R4 Ultra). The firmware for genuine YDC R4 cards is encrypted; however, the encryption was broken in 2007 and several utilities exist for encrypting, modifying, and decrypting YDC R4 firmware. The N5 (and most other clones) use a decrypted version of the firmware; decrypted YDC R4 firmware can be used on the N5 (and some other clone cards), and encrypted clone firmware can be used on the R4. Some clone manufacturers have released modified versions of the firmware to support additional games; others have eliminated R4 firmware, replacing it with homebrew loaders (such as YSMenu).
All of these are one-card (slot-1) solutions using microSD cards for storage; all final firmware versions include Action Replay cheats, auto-DLDI patching and support for Nintendo Wi-Fi connection and download play. They also include a hardware-specific version of the MoonShell media player, selection of which is integrated with the main menu.
The R4 has been discontinued. The last official firmware was version 1.18 (released on April 23, 2008), but clone manufacturer R4Li continued updates for the original R4. In addition to the official firmware, a closed-source firmware, Wood R4, is under development. Wood R4, written by Yellow Wood Goblin, was updated to version 1.30 on May 19, 2011. A Wood R4 DS patch was released on December 24, 2010; R4 DS users can run this patch to update the Wood R4 kernel automatically online.
The R4 has been banned from sale in Japan due to its promotion of software piracy. Nintendo won a lawsuit against an Australia-based distributor over selling the R4 card, but the technology itself remains unbanned. These cards have also been banned from sale (and import) in the UK following a high-court ruling. In November 2012 the Japanese Ministry of Economy, Trade and Industry made importing the R4 illegal.
Unlike newer cards, the R4 cannot read SDHC (Secure Digital High Capacity) cards due to hardware limitations (although certain clones add this functionality). Several clones of the R4 card are capable of using SDHC cards, but reports suggest that certain types of these cards have high failure rates. One clone, the R4 SDHC, is reportedly reliable. The R4DS is considered outdated, as other flashcarts (Acekard 2, M3 Real, Supercard One and Edge) are regarded as better and cheaper; genuine R4DS cards are rare.
SuperCard DSONE
The SuperCard DSONE is a slot-1 solution using microSD cards for storage. It is made by the SuperCard team. The earlier version requires a high-speed microSD card to fully utilize its gameplay functions. With SDHC-compatible models, games are playable with slower SD cards, solving the most significant problem with the original. The firmware includes functions such as RTS (real-time save), real-time cheats and real-time game guides. The SuperCard DSONE also exists in a version compatible with the Nintendo DSi, the DSONEi. The DSONEi comes with a firmware writer for future firmware updates, in case Nintendo releases a DSi firmware update to block flash cards. DS-Scene.net rated the SuperCard DSONE v1 at 89 percent and the SuperCard DSONE v2 at 83 percent in its reviews.
R4i 3DS and R4i SDHC
The R4i products from the similarly named teams R4iSDHC.com, R4i-SDHC.HK, and R4i-SDHC.com are often confused with one another.
The first R4i-SDHC team (from r4i-sdhc.com, which released the R4i-SDHC RTS 3DS card) developed the first "original" card compatible with micro-SDHC memory cards. The first cards released were the R4-SDHC and R4i-SDHC. When Nintendo released its V1.4 update, these cards were blocked. The R4i-SDHC team responded with a new card, the R4i-SDHC V1.4, the first of a new generation of cards that could upgrade their core firmware using a flash-memory patch. Another firmware update (DSi V1.4.1) revealed that there was more than one hardware version of the card, some of which could not be patched. For each subsequent Nintendo update, the R4i-SDHC team released a patch, as well as a new card version labeled with the update number. The team also released a 3DS version, which likewise received regular patches. When the 1.45 (DSi and DSi XL) and 4.5 (3DS and 3DS XL) console software was released by Nintendo in early December 2012, software patches were not released to upgrade the older cards; new R4i-SDHC cards were subsequently released that are anticipated to be less affected by future Nintendo updates. The current cards are named V1.45, V4.5 and V4.5 RTS.
These R4i cards are clones of the Gateway 3DS, the first 3DS flashcard for the Nintendo 3DS.
Third generation
The third generation began with the release of the SuperCard DSTWO, with emulation, drag-and-drop video playback, on-the-fly anti-anti-piracy advancements, an in-game menu system, slow motion, soft reset, a real-time cheat editor, eReader support and a hex editor. The biggest change in these flashcarts from the second generation was countermeasures that bypass games' anti-piracy checks without patching the ROM or requiring a firmware update. These flashcarts have built-in RAM and powerful CPUs, allowing the use of powerful homebrew programs. Flashcarts offering these features are the SuperCard DSTWO, EX4i, iSmart Premium, iSmart Multimedia and CycloDS iEvolution.
SuperCard DSTWO and DSTWO Plus
The SuperCard DSTWO was released on November 20, 2009. Its features include real-time functions (saving and cheats), multiple save files, bypassing piracy blocks without patches, microSDHC support, multiple languages (English, French, Italian, Japanese, Korean, simplified Chinese, Spanish and traditional Chinese), a built-in GBA/SNES emulator, four levels of slow motion, a file-management system (to rename, copy, etc.), an iReader supporting .bmp, .jpeg, .jpg, .png, .tif, .gif, .txt and .pdf files, and support for several PC video formats (including DivX/Xvid). It is capable of functions not found in a standard slot-1 flashcart (such as GBA emulation), since the card has 32 MB of built-in RAM and an extra coprocessor. Because of this, the built-in CPUs on the card drain the battery significantly from startup. The card features rewritable firmware, like the DSONEi. The DSTWO also supports the EZ Flash three-in-one expansion and the SuperCard expansion brands. The SuperCard team released the SDK for the DSTWO flashcart to homebrew developers, but only when contacted by email. Their goal for the release of the SDK was for small developers to release powerful programs and help with debugging the EOS, GBA and SNES systems on the flashcart. The DSTWO is compatible with the DS, DS Lite and DSi up to the 1.4.5 firmware, and the 3DS up to the 6.3.0-12 firmware. Nintendo released 1.4.3 for North America, Europe and Japan and 1.4.4 in China to block current flashcarts; less than 24 hours after the release of these updates, the SuperCard team released a bypass patch allowing the DSTWO to run again on the latest DSi firmware.
The SuperCard team is efficient at updating the SuperCard DSTWO; it found a way to let the DSTWO work on the 3DS, as it did for the DSi 1.4.4–1.4.5 CHN fix. The team has activated a hidden forum for those with permission to use the SCDSTWO SDK (software development kit). Homebrew and emulator developer Alekmaul received the SDK and a flashcart sample, and began porting his Dingoo emulators to the SuperCard DSTWO. Alekmaul later released his MAME emulator for the DS2. The emulator supports the MAME 0.37b5 ROM set. Based on the included documentation, this is a port of his Dingoo MAME4ALL build (which leaves open the possibility of future Dingoo emulator ports to the SCDSTWO). The DSTWO will run nearly all game systems (including DS, GBA and SNES games, movies, music, pictures, MAME, Neo Geo, NES, Apple II, Game Boy Color, Game Boy and Atari). DS-Scene.net praised the DSTWO in its review.
In October 2015, the SuperCard team released the SuperCard DSTWO Plus, which contains all features of the DSTWO as well as a Gateway 3DS emulation plugin to support 3DS games. It is advertised as a 4-in-1 card, meaning that it supports DS, GBA, SNES and 3DS games in a single flashcart.
iSmart Premium
The iSmart Premium (the third third-generation flashcart) is a DSi-compatible competitor to the SuperCard DSTWO. The iSmartDS team's first offering has been compared to the Acekard series (in price and open-source firmware options) and the EZ Vi (on which the iSmart Premium is based). It includes an open-source SDK, an in-game menu (save states, in-game guide and soft reset), customizable skins, multilingual support, an Action Replay cheat engine, hardware-based anti-piracy circumvention and three-in-one expansion support. DS-Scene.net rated the iSmart Premium at 87 percent in its review.
3DS-exclusive storage devices
Unlike DS flashcards, which may also work in the Nintendo 3DS and allow DS ROM backups to run on these consoles, there are also 3DS-exclusive storage devices, which allow 3DS ROM backups (and, in some cases, 3DS homebrew) to run on these consoles. The Gateway 3DS was the first 3DS flashcard for the Nintendo 3DS. Its clones include the 3DS Link, MT Card, R4i 3DS, and more. They only work on firmware versions 4.1–9.2. The Gateway 3DS emulation plugin of the SuperCard DSTWO Plus may also be seen as belonging to this category. There is also the Sky3DS, which works with all versions up to and including the latest 11.13.0-45 update, and with all consoles in the Nintendo 3DS line. Its clones include the R5SDHC 3DS, QQ3DS, and more. However, several 3DS firmware releases block flashcard usage; custom firmwares such as Luma3DS bypass this issue.
See also
Game backup device
Nintendo DS accessories
Nintendo DS homebrew
Custom firmware
References
External links
R4 3DS Official Website With New Firmware Updates
Supercard Official Website For Firmware Updates
M3 DS Official Website
Computer storage devices
Nintendo DS
Nintendo 3DS |
3501526 | https://en.wikipedia.org/wiki/X%20Window%20System%20core%20protocol | X Window System core protocol | The X Window System core protocol is the base protocol of the X Window System, which is a networked windowing system for bitmap displays used to build graphical user interfaces on Unix, Unix-like, and other operating systems. The X Window System is based on a client–server model: a single server controls the input/output hardware, such as the screen, the keyboard, and the mouse; all application programs act as clients, interacting with the user and with the other clients via the server. This interaction is regulated by the X Window System core protocol. Other protocols related to the X Window System exist, both built at the top of the X Window System core protocol or as separate protocols.
In the X Window System core protocol, only four kinds of packets are sent, asynchronously, over the network: requests, replies, events, and errors. Requests are sent by a client to the server to ask it to perform some operation (for example, create a new window) and to send back data it holds. Replies are sent by the server to provide such data. Events are sent by the server to notify clients of user activity or other occurrences they are interested in. Errors are packets sent by the server to notify a client of errors occurred during processing of its requests. Requests may generate replies, events, and errors; other than this, the protocol does not mandate a specific order in which packets are sent over the network. Some extensions to the core protocol exist, each one having its own requests, replies, events, and errors.
X originated at MIT in 1984 (its release X11 appeared in September 1987). Its designers Bob Scheifler and Jim Gettys set as an early principle that its core protocol was to "create mechanism, not policy". As a result, the core protocol does not specify the interaction between clients and between a client and the user. These interactions are the subject of separate specifications, such as the ICCCM and the freedesktop.org specifications, and are typically enforced automatically by using a given widget set.
Overview
Communication between server and clients is done by exchanging packets over a channel. The connection is established by the client (how the client is started is not specified in the protocol). The client also sends the first packet, containing the byte order to be used and information about the version of the protocol and the kind of authentication the client expects the server to use. The server answers by sending back a packet stating the acceptance or refusal of the connection, or with a request for further authentication. If the connection is accepted, the acceptance packet contains data for the client to use in the subsequent interaction with the server.
After connection is established, four types of packets are exchanged between client and server over the channel:
Request: The client requests information from the server or requests it to perform an action.
Reply: The server responds to a request. Not all requests generate replies.
Event: The server informs the client of an event, such as keyboard or mouse input, a window being moved, resized or exposed, etc.
Error: The server sends an error packet if a request is invalid. Since requests are queued, error packets generated by a request may not be sent immediately.
Request and reply packets have varying length, while event and error packets have a fixed length of 32 bytes.
Request packets are numbered sequentially by the server as soon as it receives them: the first request from a client is numbered 1, the second 2, etc. The least significant 16 bits of the sequential number of a request are included in the reply and error packets generated by the request, if any. They are also included in event packets to indicate the sequential number of the request that the server is currently processing or has just finished processing.
Windows
What is usually called a window in most graphical user interfaces is called a top-level window in the X Window System. The term window is also used to denote windows that lie within another window, that is, the subwindows of a parent window. Graphical elements such as buttons, menus, icons, etc. can be realized using subwindows.
A client can request the creation of a window. More precisely, it can request the creation of a subwindow of an existing window. As a result, the windows created by clients are arranged in a tree (a hierarchy). The root of this tree is the root window, which is a special window created automatically by the server at startup. All other windows are directly or indirectly subwindows of the root window. The top-level windows are the direct subwindows of the root window. Visibly, the root window is as large as the virtual desktop, and lies behind all other windows.
The content of a window is not always guaranteed to be preserved over time. In particular, the window content may be destroyed when the window is moved, resized, covered by other windows, or in general made totally or partly non-visible. Content is lost if the X server is not maintaining a backing store of the window content. The client can request backing store for a window to be maintained, but there is no obligation for the server to do so. Therefore, clients cannot assume that backing store is maintained. If a visible part of a window has an unspecified content, an event is sent to notify the client that the window content has to be drawn again.
Every window has an associated set of attributes, such as the geometry of the window (size and position), the background image, whether backing store has been requested for it, etc. The protocol includes requests for a client to inspect and change the attributes of a window.
Windows can be InputOutput or InputOnly. InputOutput windows can be shown on the screen and are used for drawing. InputOnly windows are never shown on the screen and are used only to receive input.
The decorative frame and title bar (possibly including buttons) that is usually seen around windows are created by the window manager, not by the client that creates the window. The window manager also handles input related to these elements, such as resizing the window when the user clicks and drags the window frame. Clients usually operate on the windows they created, disregarding the changes made by the window manager. One change they must take into account is that re-parenting window managers (which almost all modern window managers are) change the parent of top-level windows to a window that is not the root. From the point of view of the core protocol, the window manager is a client, no different from the other applications.
Data about a window can be obtained by running the xwininfo program. Passing it the -tree command-line argument, this program shows the tree of subwindows of a window, along with their identifiers and geometry data.
Pixmaps and drawables
A pixmap is a region of memory that can be used for drawing. Unlike windows, pixmaps are not automatically shown on the screen. However, the content of a pixmap (or a part of it) can be transferred to a window and vice versa. This allows for techniques such as double buffering. Most of the graphical operations that can be done on windows can also be done on pixmaps.
Windows and pixmaps are collectively named drawables, and their content data resides on the server. A client can however request the content of a drawable to be transferred from the server to the client or vice versa.
Graphic contexts and fonts
The client can request a number of graphic operations, such as clearing an area, copying an area into another, and drawing points, lines, rectangles, and text. Besides clearing, all operations are possible on all drawables, both windows and pixmaps.
Most requests for graphic operations include a graphic context, which is a structure that contains the parameters of the graphic operations. A graphic context includes the foreground color, the background color, the font of text, and other graphic parameters. When requesting a graphic operation, the client includes a graphic context. Not all parameters of the graphic context affect the operation: for example, the font does not affect drawing a line.
The core protocol specifies the use of server-side fonts. Such fonts are stored as files, and the server accesses them either directly via the local filesystem or via the network from another program called font server. Clients can request the list of fonts available to the server and can request a font to be loaded (if not already) or unloaded (if not used by other clients) by the server. A client can request general information about a font (for example, the font ascent) and the space a specific string takes when drawn with a specific font.
The names of the fonts are arbitrary strings at the level of the X Window core protocol. The X logical font description conventions specify how fonts should be named according to their attributes. These conventions also specify the values of optional properties that can be attached to fonts.
The xlsfonts program prints the list of fonts stored in the server. The xfontsel program shows the glyphs of fonts, and allows the user to select the name of a font for pasting it in another window.
The use of server-side fonts is currently considered deprecated in favour of client-side fonts. Such fonts are rendered by the client, not by the server, with the support of the Xft or cairo libraries and the XRender extension. No specification on client-side fonts is given in the core protocol.
Resources and identifiers
All data about windows, pixmaps, fonts, etc. are stored in the server. The client knows identifiers of these objects—integers it uses as names for them when interacting with the server. For example, if a client wishes a window to be created, it requests the server to create a window with a given identifier. The identifier can be later used by the client to request, for example, a string to be drawn in the window. The following objects reside in the server and are known by the client via a numerical identifier:
Window
Pixmap
Font
Colormap (a table of colors, described below)
Graphic context
These objects are called resources. When a client requests the creation of one such resource, it also specifies an identifier for it. For example, for creating a new window, the client specifies both the attributes of the window (parent, width, height, etc.) and the identifier to associate with the window.
Identifiers are 32-bit integers with their three most significant bits equal to zero. Every client has its own set of identifiers it can use for creating new resources. This set is specified by the server as two integers included in the acceptance packet (the packet it sends to the client to inform it that the connection is accepted). Clients choose identifiers that are in this set in such a way they do not clash: two objects among windows, pixmaps, fonts, colormaps, and graphic contexts cannot have the same identifier.
Once a resource has been created, its identifier is used by the client to request operations about it to the server. Some operations affect the given resource (for example, requests to move windows); others ask for resource data stored from the server (for example, requests for the attributes of windows).
Identifiers are unique to the server, not only to the client; for example, no two windows have the same identifier, even if created by two different clients. A client can access any object given its identifier. In particular, it can also access resources created by any other client, even if their identifiers are outside the set of identifiers it can create.
As a result, two clients connected to the same server can use the same identifier to refer to the same resource. For example, if a client creates a window of identifier 0x1e00021 and passes this number 0x1e00021 to another application (via any available means, for example by storing this number in a file that is also accessible to the other application), this other application is able to operate on the very same window. This possibility is for example exploited by the X Window version of Ghostview: this program creates a subwindow, storing its identifier in an environment variable, and calls Ghostscript; this program draws the content of the PostScript file to show in this window.
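A sketch of this kind of sharing, in C with the Xlib library (discussed later in this article): one client exports a window identifier through an environment variable, and a second client draws into that window. The variable name HOST_WINDOW_ID is hypothetical; it is not the variable Ghostview actually uses.

```c
/* Sketch: operating on a window created by another client, given its
 * identifier. The environment variable name HOST_WINDOW_ID is
 * hypothetical; the first client is assumed to have stored the
 * identifier there. Build with: cc share.c -lX11 */
#include <X11/Xlib.h>
#include <stdlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    const char *idstr = getenv("HOST_WINDOW_ID");
    if (dpy == NULL || idstr == NULL)
        return EXIT_FAILURE;

    /* identifiers are server-wide: this client can draw into the
       window even though another client created it */
    Window win = (Window)strtoul(idstr, NULL, 0);
    GC gc = XCreateGC(dpy, win, 0, NULL);
    XFillRectangle(dpy, win, gc, 10, 10, 50, 50);

    XFreeGC(dpy, gc);
    XCloseDisplay(dpy);    /* flushes the pending requests */
    return EXIT_SUCCESS;
}
```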
Resources are normally destroyed when the client that created them closes the connection with the server. However, before closing connection, a client can request the server not to destroy them.
Events
Events are packets sent by the server to a client to communicate that something the client may be interested in has happened. For example, an event is sent when the user presses a key or clicks a mouse button. Events are not only used for input: for example, events are sent to indicate the creation of new subwindows of a given window.
Every event is relative to a window. For example, if the user clicks when the pointer is in a window, the event will be relative to that window. The event packet contains the identifier of that window.
A client can request the server to send an event to another client; this is used for communication between clients. Such an event is for example generated when a client requests the text that is currently selected: this event is sent to the client that is currently handling the window that holds the selection.
The Expose event is sent when an area of a window whose content had been destroyed is made visible. The content of a window may be destroyed in some conditions, for example, if the window is covered and the server is not maintaining a backing store. The server generates an Expose event to notify the client that a part of the window has to be drawn.
Most kinds of events are sent only if the client previously stated an interest in them. This is because clients may only be interested in some kind of events. For example, a client may be interested in keyboard-related events but not in mouse-related events. Some kinds of events are however sent to clients even if they have not specifically requested them.
Clients specify which kinds of events they want to be sent by setting an attribute of a window. For example, in order to redraw a window when its content has been destroyed, a client must receive the Expose events, which inform it that the window needs to be drawn again. The client will however be sent Expose events only if the client has previously stated its interest in these events, which is done by appropriately setting the event mask attribute of the window.
Different clients can request events on the same window. They can even set different event masks on the same window. For example, a client may request only keyboard events on a window while another client requests only mouse events on the same window. This is possible because the server, for each window, maintains a separate event mask for each client. However, there are some kinds of events that can only be selected by one client at time for each window. In particular, these events report mouse button clicks and some changes related to window management.
The xev program shows the events relative to a window. In particular, xev -id WID requests all possible events relative to the window of identifier WID and prints them.
Example
The following is a possible example of interaction between a server and a program that creates a window with a black box in it and exits on a keypress. In this example, the server does not send any reply because the client requests do not generate replies. These requests could generate errors.
The client opens the connection with the server and sends the initial packet specifying the byte order it is using.
The server accepts the connection (no authorization is involved in this example) by sending an appropriate packet, which contains other information such as the identifier of the root window (e.g., 0x0000002b) and which identifiers the client can create.
The client requests the creation of a default graphic context with identifier 0x00200000 (this request, like the other requests of this example, does not generate replies from the server)
The client requests the server to create a top-level window (that is, it specifies the parent to be the root window 0x0000002b) with identifier 0x00200001, size 200x200, position (10,10), etc.
The client requests a change in the attributes of the window 0x00200001, specifying it is interested in receiving Expose and KeyPress events.
The client requests the window 0x00200001 to be mapped (shown on the screen)
When the window is made visible and its content has to be drawn, the server sends the client an Expose event
In response to this event, the client requests a box to be drawn by sending a PolyFillRectangle request with window 0x00200001 and graphic context 0x00200000
If the window is covered by another window and uncovered again, assuming that backing store is not maintained:
The server sends another Expose event to tell the client that the window has to be drawn again
The client redraws the window by sending a PolyFillRectangle request
If a key is pressed:
The server sends a KeyPress event to the client to notify it that the user has pressed a key
The client reacts appropriately (in this case, it terminates)
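The walkthrough above can be sketched as a C program using the Xlib library (discussed later in this article), which issues these requests on the client's behalf; the actual identifiers are chosen by Xlib and the server rather than hard-coded as in the walkthrough:

```c
/* Minimal Xlib sketch of the walkthrough above: create a 200x200
 * top-level window, draw a black box on Expose, exit on a key press.
 * Build with: cc example.c -lX11 */
#include <X11/Xlib.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);      /* connect to the server */
    if (dpy == NULL) {
        fprintf(stderr, "cannot open display\n");
        return EXIT_FAILURE;
    }
    int scr = DefaultScreen(dpy);
    /* create a top-level window: its parent is the root window */
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                     10, 10, 200, 200, 1,
                                     BlackPixel(dpy, scr),
                                     WhitePixel(dpy, scr));
    /* state interest in Expose and KeyPress events */
    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    /* create a graphic context with default values */
    GC gc = XCreateGC(dpy, win, 0, NULL);
    XMapWindow(dpy, win);                   /* map (show) the window */

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);               /* flushes, then waits */
        if (ev.type == Expose)              /* (re)draw the black box */
            XFillRectangle(dpy, win, gc, 50, 50, 100, 100);
        else if (ev.type == KeyPress)
            break;                          /* terminate on a key press */
    }
    XCloseDisplay(dpy);
    return EXIT_SUCCESS;
}
```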
Colors
At the protocol level, a color is represented by a 32-bit unsigned integer, called a pixelvalue. The following elements affect the representation of colors:
the color depth
the colormap, which is a table containing red, green, and blue intensity values
the visual type, which specifies how the table is used to represent colors
In the easiest case, the colormap is a table containing an RGB triple in each row. A pixelvalue x represents the color contained in the x-th row of the table. If the client can change the entries in the colormap, this representation is identified by the PseudoColor visual class. The visual class StaticColor is similar, but the client cannot change the entries in the colormap.
There are a total of six possible visual classes, each one identifying a different way for representing an RGB triple with a pixelvalue. PseudoColor and StaticColor are two. Another two are GrayScale and StaticGray, which differ in that they only display shades of grey.
The two remaining visual classes differ from the ones above because they break pixelvalues in three parts and use three separate tables for the red, green, and blue intensity. According to this color representation, a pixelvalue is converted into an RGB triple as follows:
the pixelvalue is seen as a sequence of bits
this sequence is broken in three parts
each of these three chunks of bits is seen as an integer and used as an index to find a value in each of three separate tables
This mechanism requires the colormap to be composed of three separate tables, one for each primary color. The result of the conversion is still a triple of intensity values. The visual classes using this representation are the DirectColor and TrueColor ones, differing on whether the client can change colormaps or not.
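As an illustration of this decomposition for TrueColor visuals, the following C sketch extracts the three chunks of bits from a pixelvalue using the red, green and blue masks that the server reports for the visual (here read from Xlib's XVisualInfo structure; the helper function is illustrative, not part of the protocol):

```c
/* Illustrative decomposition of a TrueColor pixelvalue into its red,
 * green and blue chunks, using the masks the server reports for the
 * visual (exposed by Xlib in the XVisualInfo structure). */
#include <X11/Xlib.h>
#include <X11/Xutil.h>

static unsigned long extract(unsigned long pixel, unsigned long mask)
{
    if (mask == 0)
        return 0;
    while ((mask & 1) == 0) {   /* shift the masked bits down to bit 0 */
        pixel >>= 1;
        mask >>= 1;
    }
    return pixel & mask;        /* the chunk used as a table index */
}

void pixel_to_rgb(const XVisualInfo *vi, unsigned long pixel,
                  unsigned long *r, unsigned long *g, unsigned long *b)
{
    *r = extract(pixel, vi->red_mask);
    *g = extract(pixel, vi->green_mask);
    *b = extract(pixel, vi->blue_mask);
}
```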
These six mechanisms for representing colors with pixelvalues all require some additional parameters to work. These parameters are collected into a visual type, which contains a visual class and other parameters of the representation of colors. Each server has a fixed set of visual types, each one associated with a numerical identifier. These identifiers are 32-bit unsigned integers, but are not necessarily different from identifiers of resources or atoms.
When the connection from a client is accepted, the acceptance packet sent by the server contains a sequence of blocks, each one containing information about a single screen. For each screen, the relative block contains a list of other blocks, each one relative to a specific color depth that is supported by the screen. For each supported depth, this list contains a list of visual types. As a result, each screen is associated with a number of possible depths, and each depth of each screen is associated with a number of possible visual types. A given visual type can be used by multiple screens and at different depths.
For each visual type, the acceptance packet contains both its identifier and the actual parameters it contains (visual class, etc.) The client stores this information, as it cannot request it afterwards. Moreover, clients cannot change or create new visual types. Requests for creation of a new window include the depth and the identifier of the visual type to use for representing colors of this window.
Colormaps are used regardless of whether the hardware controlling the screen (e.g., a graphic card) uses a palette, which is a table that is also used for representing colors. Servers use colormaps even if the hardware is not using a palette. Whenever the hardware uses palettes, only a limited number of colormaps can be installed. In particular, a colormap is installed when the hardware shows colors according to it. A client can request the server to install a colormap. However, this may require the uninstalling of another colormap: the effect is that windows using the uninstalled colormap are not shown with the correct color, an effect dubbed color flashing or technicolor. This problem can be solved using standard colormaps, which are colormaps with a predictable association between pixelvalues and colors. Thanks to this property, standard colormaps can be used by different applications.
The creation of colormaps is regulated by the ICCCM convention. Standard colormaps are regulated by the ICCCM and by the Xlib specification.
Part of the X color system is the X Color Management System (Xcms). This system was introduced with X11 Release 5 (X11R5) in 1991. It consists of several additional features in Xlib, found in the Xcms* series of functions. The system defines device-independent color schemes which can be converted into device-dependent RGB systems. It comprises the Xlib Xcms* functions as well as the X Device Color Characterization Convention (XDCCC), which describes how to convert the various device-independent color systems into device-dependent RGB color systems. The system supports the CIEXYZ, xyY, CIELUV and CIELAB color systems, as well as TekHVC.
Atoms
Atoms are 32-bit integers representing strings. The protocol designers introduced atoms because they represent strings in a short and fixed size: while a string may be arbitrarily long, an atom is always a 32-bit integer. Atom brevity was exploited by mandating their use in the kinds of packets that are likely to be sent many times with the same strings; this results in a more efficient use of the network. The fixed size of atoms was exploited by specifying a fixed size for events, namely 32 bytes: fixed-size packets can contain atoms, while they cannot contain long strings.
Precisely, atoms are identifiers of strings stored in the server. They are similar to the identifiers of resources (Windows, Pixmaps, etc.) but differ from them in two ways. First, the identifiers of atoms are chosen by the server, not by the client. In other words, when a client requests the creation of a new atom, it only sends the server the string to be stored, not its identifier; this identifier is chosen by the server and sent back as a reply to the client. The second important difference between resources and atoms is that atoms are not associated with clients. Once created, an atom survives until the server quits or resets (this is not the default behavior of resources).
Atoms are identifiers and are therefore unique. However, an atom and a resource identifier can coincide. The string associated with an atom is called the atom name. The name of an atom cannot be changed after creation, and no two atoms can have the same name. As a result, the name of an atom is commonly used to indicate the atom: “the atom ABCD” means, more precisely, “the atom whose associated string is ABCD.” or “the atom whose name is ABCD.” A client can request the creation of a new atom and can request for the atom (the identifier) of a given string. Some atoms are predefined (created by the server with given identifier and string).
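In terms of the Xlib library (discussed later in this article), interning an atom and recovering its name correspond to one request each; a minimal sketch, assuming an already-open connection dpy:

```c
/* Sketch: interning an atom and reading back its name with Xlib.
 * Assumes an already-open connection `dpy`. */
#include <X11/Xlib.h>
#include <stdio.h>

void atom_demo(Display *dpy)
{
    /* ask the server for the atom of a string, creating the atom
       if it does not exist yet (only_if_exists = False) */
    Atom a = XInternAtom(dpy, "WM_NAME", False);

    /* ask the server for the string associated with an atom */
    char *name = XGetAtomName(dpy, a);
    if (name != NULL) {
        printf("atom %lu has name %s\n", (unsigned long)a, name);
        XFree(name);            /* the returned string must be freed */
    }
}
```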
Atoms are used for a number of purposes, mostly related to communication between different clients connected to the same server. In particular, they are used in association with the properties of windows, which are described below.
The list of all atoms residing in a server can be printed out using the program xlsatoms. In particular, this program prints each atom (the identifier, that is, a number) with its name (its associated string).
Properties
Every window has a predefined set of attributes and a set of properties, all stored in the server and accessible to the clients via appropriate requests. Attributes are data about the window, such as its size, position, background color, etc. Properties are arbitrary pieces of data attached to a window. Unlike attributes, properties have no meaning at the level of the X Window core protocol. A client can store arbitrary data in a property of a window.
A property is characterized by a name, a type, and a value. Properties are similar to variables in imperative programming languages, in that a client can create a new property with a given name and type and store a value in it. Properties are associated to windows: two properties with the same name can exist on two different windows while having different types and values.
The name, type, and value of a property are strings; more precisely, they are atoms, that is, strings stored in the server and accessible to the clients via identifiers. A client application can access a given property by using the identifier of the atom containing the name of the property.
Properties are mostly used for inter-client communication. For example, the property named WM_NAME (the property named by the atom whose associated string is "WM_NAME") is used for storing the name of windows. Window managers typically read this property to display the name of windows in their title bar.
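For example, a client can store a string in the WM_NAME property of one of its windows with a ChangeProperty request; a minimal Xlib sketch, assuming an open connection dpy and a window win:

```c
/* Sketch: storing a string in the WM_NAME property of a window.
 * Assumes an open connection `dpy` and a window `win`. */
#include <X11/Xlib.h>
#include <X11/Xatom.h>
#include <string.h>

void set_window_name(Display *dpy, Window win, const char *title)
{
    /* XA_WM_NAME and XA_STRING are predefined atoms naming the
       property and its type; 8 is the format (bits per element) */
    XChangeProperty(dpy, win, XA_WM_NAME, XA_STRING, 8,
                    PropModeReplace,
                    (const unsigned char *)title, (int)strlen(title));
}
```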
Some types of inter-client communication use properties of the root window. For example, according to the freedesktop window manager specification, window managers should store the identifier of the currently active window in the property named _NET_ACTIVE_WINDOW of the root window. The X resources, which contain parameters of programs, are also stored in properties of the root window; this way, all clients can access them, even if running on different computers.
The xprop program prints the properties of a given window; xprop -root prints the name, type, and value of each property of the root window.
Mappings
In the X Window System, every individual physical key is associated with a number in the range 8–255, called its keycode. A keycode only identifies a key, not a particular character or term (e.g., "Page Up") among the ones that may be printed on the key. Each one of these characters or terms is instead identified by a keysym. While a keycode only depends on the actual key that is pressed, a keysym may depend, for example, on whether the Shift key or another modifier was also pressed.
When a key is pressed or released, the server sends events of type KeyPress or KeyRelease to the appropriate clients. These events contain:
the keycode of the pressed key
the current state of the modifiers (Shift, Control, etc.) and mouse buttons
The server therefore sends the keycode and the modifier state without attempting to translate them into a specific character. It is a responsibility of the client to do this conversion. For example, a client may receive an event stating that a given key has been pressed while the Shift modifier was down. If this key would normally generate the character "a", the client (and not the server) associates this event to the character "A".
While the translation from keycodes to keysyms is done by the client, the table that represents this association is maintained by the server. Storing this table in a centralized place makes it accessible to all clients. Typical clients only request this mapping and use it for decoding the keycode and modifiers field of a key event into a keysym. However, clients can also change this mapping at will.
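In Xlib, this client-side conversion is commonly performed by XLookupString, which combines the keycode and modifier state carried in a key event with the mapping fetched from the server; a minimal sketch:

```c
/* Sketch: translating a KeyPress event into a keysym (and, when
 * applicable, a character) on the client side with Xlib. */
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <stdio.h>

void handle_key(XKeyEvent *kev)
{
    char buf[32];
    KeySym keysym;
    /* uses the keycode and modifier state in the event, plus the
       keyboard mapping fetched from the server, to pick a keysym */
    int len = XLookupString(kev, buf, sizeof buf, &keysym, NULL);
    printf("keycode %u -> keysym %lu (%.*s)\n",
           kev->keycode, (unsigned long)keysym, len, buf);
}
```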
A modifier is a key that, when pressed, changes the interpretation of other keys. A common modifier is the Shift key: when the key that normally produces a lowercase "a" is pressed together with Shift, it produces an uppercase "A". Other common modifiers are "Control", "Alt", and "Meta".
The X server works with at most eight modifiers. However, each modifier can be associated with more than one key. This is necessary because many keyboards have duplicated keys for some modifiers. For example, many keyboards have two "Shift" keys (one on the left and one on the right). These two keys produce two different keycodes when pressed, but the X server associates both with the "Shift" modifier.
For each of the eight modifiers, the X server maintains a list of the keycodes that it considers to be that modifier. As an example, if the list of the first modifier (the "Shift" modifier) contains the keycode 0x37, then the key that produces the keycode 0x37 is considered a shift key by the X server.
The lists of modifier mappings are maintained by the X server but can be changed by every client. For example, a client can request the "F1" key to be added to the list of "Shift" modifiers. From this point on, this key behaves like another shift modifier. However, the keycode corresponding to F1 is still generated when this key is pressed. As a result, F1 operates as it did before (for example, a help window may be opened when it is pressed), but also operates like the shift key (pressing "a" in a text editor while F1 is down adds "A" to the current text).
The X server also maintains and uses a mapping for the mouse buttons. However, the buttons can only be permuted. This is mostly useful for exchanging the leftmost and rightmost buttons for left-handed users.
The xmodmap program shows and changes the key, modifier, and mouse button mappings.
Grabs
A grab is a condition in which all keyboard or mouse events are sent to a single client. A client can request a grab of the keyboard, the mouse, or both: if the request is fulfilled by the server, all keyboard/mouse events are sent to the grabbing client until the grab is released. The other clients will not receive these events.
When requesting a grab, a client specifies a grab window: all events are sent to the grabbing client as if they were relative to the grab window. However, the other clients do not receive events even if they have selected them in the grab window. There are two kinds of grabs:
active: the grab takes place immediately
passive: the grab takes place only when a previously specified key or mouse button is pressed and terminates when it is released
A client can establish a grab over the keyboard, the pointer, or both. A request for grabbing can include a request for freezing the keyboard or the pointer. The difference between grabbing and freezing is that grabbing changes the recipient of events, while freezing stops their delivery altogether. When a device is frozen, the events it generates are stored in a queue to be delivered as usual when the freeze is over.
For pointer events, an additional parameter affects the delivery of events: an event mask, which specifies which types of events are to be delivered and which ones are to be discarded.
The requests for grabbing include a field for specifying what happens to events that would be sent to the grabbing client even if it had not established the grab. In particular, the client can request them to be sent as usual or according to the grab. These two conditions are not the same, although they may appear to be. For example, a client that would normally receive the keyboard events on a first window may request the keyboard to be grabbed by a second window. Events that would normally be sent to the first window may or may not be redirected to the grab window, depending on the parameter in the grab request.
A client can also request the grab of the entire server. In this case, no request will be processed by the server except the ones coming from the grabbing client.
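As a sketch in Xlib terms, an active keyboard grab could be requested as follows (assuming an open connection dpy and a grab window win); passing GrabModeAsync for both devices asks for neither to be frozen:

```c
/* Sketch: an active keyboard grab. All keyboard events are sent to
 * this client, relative to `win`, until the grab is released.
 * Assumes an open connection `dpy` and a window `win`. */
#include <X11/Xlib.h>

void grab_keyboard_briefly(Display *dpy, Window win)
{
    /* owner_events = True: events this client would receive anyway
       are reported as usual; GrabModeAsync: do not freeze devices */
    int status = XGrabKeyboard(dpy, win, True,
                               GrabModeAsync, GrabModeAsync, CurrentTime);
    if (status != GrabSuccess)
        return;                         /* the server refused the grab */

    /* ... process events; other clients receive no keyboard events ... */

    XUngrabKeyboard(dpy, CurrentTime);  /* release the grab */
}
```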
Other
Other requests and events in the core protocol exist. The first kind of requests is relative to the parent relationship between windows: a client can request to change the parent of a window, or can request information about the parenthood of windows. Other requests are relative to the selection, which is however mostly governed by other protocols. Other requests are about the input focus and the shape of the pointer. A client can also request the owner of a resource (window, pixmap, etc.) to be killed, which causes the server to terminate the connection with it. Finally, a client can send a no-operation request to the server.
Extensions
The X Window core protocol was designed to be extensible. The core protocol specifies a mechanism for querying the available extensions and how extension request, event, and error packets are made.
In particular, a client can request the list of all available extensions and data relative to a specific extension. The packets of extensions are similar to the packets of the core protocol. The core protocol specifies that request, event, and error packets contain an integer indicating their type (for example, the request for creating a new window is numbered 1). A range of these integers is reserved for extensions.
Authorization
When the client initially establishes a connection with the server, the server can reply by either accepting the connection, refusing it, or requesting authentication. An authentication request contains the name of the authentication method to use. The core protocol does not specify the authentication process, which depends on the kind of authentication used, other than that it ends with the server sending either an acceptance or a refusal packet.
During the regular interaction between a client and a server, the only requests related to authentication are about the host-based access method. In particular, a client can request this method to be enabled and can request reading and changing the list of hosts (clients) that are authorized to connect. Typical applications do not use these requests; they are used by the xhost program to give a user or a script access to the host access list. The host-based access method is considered insecure.
Xlib and other client libraries
Most client programs communicate with the server via the Xlib client library. In particular, most clients use libraries such as Xaw, Motif, GTK+, or Qt which in turn use Xlib for interacting with the server. The use of Xlib has the following effects:
Xlib makes the client synchronous with respect to replies and events:
the Xlib functions that send requests block until the appropriate replies, if any are expected, are received; in other words, an X Window client not using Xlib can send a request to the server and then do other operations while waiting for the reply, but a client using Xlib can only call an Xlib function that sends the request and wait for the reply, thus blocking the client while waiting for the reply (unless the client starts a new thread before calling the function);
while the server sends events asynchronously, Xlib stores events received by the client in a queue; the client program can only access them by explicitly calling functions of the X11 library; in other words, the client is forced to block or busy-wait if expecting an event.
Xlib does not send requests to the server immediately, but stores them in a queue, called the output buffer (a minimal sketch follows this list); the requests in the output buffer are actually sent when:
the program explicitly requests so by calling a library function such as XFlush;
the program calls a function whose result involves a reply from the server, such as XGetWindowAttributes;
the program asks for an event in the event queue (for example, by calling XNextEvent) and the call blocks (for example, XNextEvent blocks if the queue is empty).
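A minimal sketch of the buffering behaviour (assuming an open connection dpy, a window win and a graphic context gc): the drawing request may sit in the output buffer until the explicit XFlush, or until one of the other conditions above occurs.

```c
/* Sketch: Xlib buffers requests in its output buffer; XFlush sends
 * them explicitly. Assumes an open connection `dpy`, a window `win`
 * and a graphic context `gc`. */
#include <X11/Xlib.h>

void draw_now(Display *dpy, Window win, GC gc)
{
    /* queued in the output buffer, not yet sent to the server */
    XDrawLine(dpy, win, gc, 0, 0, 100, 100);

    /* forces the buffered requests onto the network */
    XFlush(dpy);
}
```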
Higher-level libraries such as Xt (which is in turn used by Xaw and Motif) allow the client program to specify the callback functions associated with some events; the library takes care of polling the event queue and calling the appropriate function when required; some events such as those indicating the need of redrawing a window are handled internally by Xt.
Lower-level libraries, such as XCB, provide asynchronous access to the protocol, allowing better latency hiding.
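As a hedged contrast with the blocking Xlib behaviour described above, the sketch below shows XCB's cookie-based split between sending a request and reading its reply (xcb_intern_atom and xcb_intern_atom_reply are standard XCB calls; the program around them is illustrative; compile with -lxcb):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <xcb/xcb.h>
int main(void)
{
    xcb_connection_t *conn = xcb_connect(NULL, NULL);
    if (xcb_connection_has_error(conn))
        return 1;
    const char *name = "WM_NAME";
    /* The request is queued and a cookie is returned immediately,
       without waiting for the server. */
    xcb_intern_atom_cookie_t cookie =
        xcb_intern_atom(conn, 0, strlen(name), name);
    /* ... the client can issue further requests or do other work here,
       hiding the round-trip latency ... */
    /* Only this call waits for the server's reply. */
    xcb_intern_atom_reply_t *reply =
        xcb_intern_atom_reply(conn, cookie, NULL);
    if (reply) {
        printf("atom id: %u\n", reply->atom);
        free(reply);
    }
    xcb_disconnect(conn);
    return 0;
}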
Unspecified parts
The X Window System core protocol does not mandate inter-client communication and does not specify how windows are used to form the visual elements that are common in graphical user interfaces (buttons, menus, etc.). Graphical user interface elements are defined by client libraries implementing widget toolkits. Inter-client communication is covered by other standards such as the ICCCM and freedesktop specifications.
Inter-client communication is relevant to selections, cut buffers, and drag-and-drop, which are the methods used by a user to transfer data from one window to another. Since the windows may be controlled by different programs, a protocol for exchanging this data is necessary. Inter-client communication is also relevant to X window managers, which are programs that control the appearance of the windows and the general look-and-feel of the graphical user interface.
Session management
Yet another issue where inter-client communication is to some extent relevant is that of session management.
How a user session starts is another issue that is not covered by the core protocol. Usually, this is done automatically by the X display manager. The user can, however, also start a session manually by running the xinit or startx programs.
See also
X Window System protocols and architecture
Xlib
Intrinsics
Xnee can be used to sniff the X Window System protocol
References
External links
X.Org Foundation (official home page) - Mirror with the domain name 'freedesktop.org'.
X Window System Internals
Kenton Lee's pages on X Window and Motif
X Window System Protocol, Version 11 (current Release)
core protocol |
18412924 | https://en.wikipedia.org/wiki/List%20of%20free-content%20licences | List of free-content licences | This is a list of free content licences not specifically intended for software. For information on software-related licences, see Comparison of free and open-source software licences.
A variety of free content licences exist, some of them tailored to a specific purpose. Also listed are open hardware licences, which may be used on design documents of and custom-made software for open-source hardware.
List
For documents and text
FreeBSD Documentation License
GNU Free Documentation License, or GFDL
GNU Simpler Free Documentation License, or GSFDL
Open Content License, obsolete
Open Publication License, obsolete
For any type of content
Against DRM license
Creative Commons licenses which are considered free:
Creative Commons Attribution, or CC BY
Creative Commons Attribution-ShareAlike, or CC BY-SA
Creative Commons Zero, or CC0
Creative Archive Licence, discontinued licence of the BBC Archive
Design Science License
Free Art License
Korean Open Access License
MirOS Licence
WTFPL
For fonts
Apache License
SIL Open Font License
Ubuntu Font License
See also: GPL font exception
For hardware
CERN Open Hardware License
Simputer General Public License
TAPR Open Hardware License
For tabletop games
Dominion Rules Licence, used for tabletop games
Open Game License, used for tabletop games
Other content
Open Audio License, for audio
Open Database License, or ODbL, for databases
Using software licences for other content
Some free software licences, including the GNU General Public License (GPL) and European Union Public Licence, can also be used to license content other than software. However, the FSF has recommended against using the GPL for educational works.
Notes |
701756 | https://en.wikipedia.org/wiki/Salt%20%28cryptography%29 | Salt (cryptography) | In cryptography, a salt is random data that is used as an additional input to a one-way function that hashes data, a password or passphrase. Salts are used to safeguard passwords in storage. Historically, only a cryptographic hash function of the password was stored on a system, but over time, additional safeguards were developed to protect against duplicate or common passwords being identifiable (as their hashes are identical). Salting is one such protection.
A new salt is randomly generated for each password. Typically, the salt and the password (or its version after key stretching) are concatenated and fed to a cryptographic hash function, and the output hash value (but not the original password) is stored with the salt in a database. Hashing allows later authentication without keeping and therefore risking exposure of the plaintext password if the authentication data store is compromised. Note that due to this, salts don't need to be encrypted or stored separately from the hashed password itself, because even if an attacker has access to the database with the hash values and the salts, the correct use of said salts will hinder common attacks.
Salts defend against attacks that use precomputed tables (e.g. rainbow tables), as they can make the size of the table needed for a successful attack prohibitively large without burdening users. Since salts differ from one another, they also protect redundant (e.g. commonly used, re-used) passwords, as different salted hashes are created for different instances of the same password.
Cryptographic salts are broadly used in many modern computer systems, from Unix system credentials to Internet security.
Salts are closely related to the concept of a cryptographic nonce.
Example usage
Here is an incomplete example of a salt value for storing passwords. This first table has two username and password combinations. The password is not stored.
The salt value is generated at random and can be any length; in this case the salt value is 8 bytes long. The salt value is appended to the plaintext password and then the result is hashed, which is referred to as the hashed value. Both the salt value and hashed value are stored.
As the table above illustrates, different salt values will create completely different hashed values, even when the plaintext passwords are exactly the same. Additionally, dictionary attacks are mitigated to a degree as an attacker cannot practically precompute the hashes. However, a salt cannot protect common or easily guessed passwords.
Without a salt, the hashed value is the same for all users that have a given password, making it easier for hackers to guess the password from the hashed value.
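A minimal sketch of the salt-then-hash process described above, assuming OpenSSL's libcrypto for randomness and hashing (RAND_bytes and SHA256 are standard OpenSSL calls; the password, the 8-byte salt length and the single-pass SHA-256 construction are illustrative assumptions; real systems use a deliberately slow password-hashing function such as bcrypt, scrypt or Argon2; compile with -lcrypto):
#include <stdio.h>
#include <string.h>
#include <openssl/rand.h>
#include <openssl/sha.h>
#define SALT_LEN 8   /* 8-byte salt, matching the example above */
int main(void)
{
    const char *password = "hunter2";   /* hypothetical password */
    unsigned char salt[SALT_LEN];
    unsigned char buf[256 + SALT_LEN];
    unsigned char digest[SHA256_DIGEST_LENGTH];
    size_t plen = strlen(password);     /* assumed < 256 for this sketch */
    /* Generate a fresh random salt for this password. */
    if (RAND_bytes(salt, sizeof salt) != 1)
        return 1;
    /* Hash the concatenation password || salt. */
    memcpy(buf, password, plen);
    memcpy(buf + plen, salt, sizeof salt);
    SHA256(buf, plen + sizeof salt, digest);
    /* Store (here, print) the salt and the hash together; the plaintext
       password itself is never stored. */
    for (size_t i = 0; i < sizeof salt; i++) printf("%02x", salt[i]);
    printf(":");
    for (size_t i = 0; i < sizeof digest; i++) printf("%02x", digest[i]);
    printf("\n");
    return 0;
}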
Common mistakes
Salt re-use
Using the same salt for all passwords is dangerous because a precomputed table which simply accounts for the salt will render the salt useless.
Generation of precomputed tables for databases with unique salts for every password is not viable because of the computational cost of doing so. But if a common salt is used for all the entries, creating such a table (one that accounts for the salt) then becomes a viable and possibly successful attack.
Because salt re-use can cause users with the same password to have the same hash, cracking a single hash can result in other passwords being compromised too.
Short salt
If a salt is too short, an attacker may precompute a table of every possible salt appended to every likely password. Using a long salt ensures such a table would be prohibitively large.
Benefits
To understand the difference between cracking a single password and a set of them, consider a file with users and their hashed passwords. Say the file is unsalted. Then an attacker could pick a string, call it attempt[0], and then compute hash(attempt[0]). A user whose hash stored in the file is hash(attempt[0]) may or may not have password attempt[0]. However, even if attempt[0] is not the user's actual password, it will be accepted as if it were, because the system can only check passwords by computing the hash of the password entered and comparing it to the hash stored in the file. Thus, each match cracks a user password, and the chance of a match rises with the number of passwords in the file. In contrast, if salts are used, the attacker would have to compute hash(attempt[0] || salt[a]), compare against entry A, then hash(attempt[0] || salt[b]), compare against entry B, and so on. This prevents any one attempt from cracking multiple passwords, given that salt re-use is avoided.
Salts also combat the use of precomputed tables for cracking passwords. Such a table might simply map common passwords to their hashes, or it might do something more complex, like store the start and end points of a set of precomputed hash chains. In either case, salting can defend against the use of precomputed tables by lengthening hashes and having them draw from larger character sets, making it less likely that the table covers the resulting hashes. In particular, a precomputed table would need to cover the string [salt + hash] rather than simply [hash].
The modern shadow password system, in which password hashes and other security data are stored in a non-public file, somewhat mitigates these concerns. However, they remain relevant in multi-server installations which use centralized password management systems to push passwords or password hashes to multiple systems. In such installations, the root account on each individual system may be treated as less trusted than the administrators of the centralized password system, so it remains worthwhile to ensure that the security of the password hashing algorithm, including the generation of unique salt values, is adequate.
Another (lesser) benefit of a salt is as follows: two users might choose the same string as their password, or the same user might choose to use the same password on two machines. Without a salt, this password would be stored as the same hash string in the password file. This would disclose the fact that the two accounts have the same password, allowing anyone who knows one of the account's passwords to access the other account. By salting the passwords with two random characters, even if two accounts use the same password, no one can discover this just by reading hashes.
Unix implementations
1970s–1980s
Earlier versions of Unix used a password file /etc/passwd to store the hashes of salted passwords (passwords prefixed with two-character random salts). In these older versions of Unix, the salt was also stored in the passwd file (as cleartext) together with the hash of the salted password. The password file was publicly readable for all users of the system. This was necessary so that user-privileged software tools could find user names and other information. The security of passwords is therefore protected only by the one-way functions (enciphering or hashing) used for the purpose. Early Unix implementations limited passwords to eight characters and used a 12-bit salt, which allowed for 4,096 possible salt values. This was an appropriate balance for 1970s computational and storage costs.
1980s–
The shadow password system is used to limit access to hashes and salt. The salt is eight characters, the hash is 86 characters, and the password length is unlimited.
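A hedged sketch of the traditional Unix interface for this, the crypt(3) library call (crypt is a standard C library function; the "$6$" prefix selects the SHA-512 scheme used by modern shadow password systems, and the salt and password shown are hypothetical; on glibc systems, link with -lcrypt):
#define _GNU_SOURCE
#include <stdio.h>
#include <crypt.h>
int main(void)
{
    /* The setting string encodes the scheme id ("$6$" = SHA-512 crypt)
       followed by the salt characters. */
    const char *setting = "$6$8charSLT$";       /* hypothetical 8-char salt */
    char *hash = crypt("hunter2", setting);     /* hypothetical password */
    if (hash)
        printf("%s\n", hash);  /* scheme, salt and hash in one string,
                                  as stored in /etc/shadow */
    return 0;
}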
Web-application implementations
It is common for a web application to store in a database the hash value of a user's password. Without a salt, a successful SQL injection attack may yield easily crackable passwords. Because many users re-use passwords for multiple sites, the use of a salt is an important component of overall web application security. Some additional references for using a salt to secure password hashes in specific languages or libraries (PHP, the .NET libraries, etc.) can be found in the external links section below.
See also
Password cracking
Cryptographic nonce
Initialization vector
Padding
"Spice" in the Hasty Pudding cipher
Rainbow tables
Pepper (cryptography)
References
External links
OWASP Cryptographic Cheat Sheet
how to encrypt user passwords
Cryptography
Password authentication |
49373665 | https://en.wikipedia.org/wiki/Firewalld | Firewalld | firewalld is a firewall management tool for Linux operating systems. It provides firewall features by acting as a front-end for the Linux kernel's netfilter framework. firewalld's current default backend is nftables. Prior to v0.6.0, iptables was the default backend. Through its abstractions, firewalld acts as an alternative to nft and iptables command line programs. The name firewalld adheres to the Unix convention of naming system daemons by appending the letter "d".
firewalld is written in Python. It was intended to be ported to C++, but the porting project was abandoned in January 2015.
Features
firewalld supports both IPv4 and IPv6 networks and can administer separate firewall zones with varying degrees of trust as defined in zone profiles. Administrators can configure Network Manager to automatically switch zone profiles based on known Wi-Fi (wireless) and Ethernet (wired) networks, but firewalld cannot do this on its own.
Services and applications can use the D-Bus interface to query and configure the firewall. firewalld supports timed rules, meaning the number of connections (or "hits") to a service can be limited globally. There is no support for hit-counting and subsequent connection rejection per source IP, a common technique deployed to limit the impact of brute-force hacking and distributed denial-of-service attacks.
firewalld's command syntax is similar to but more verbose than other iptables front-ends like Ubuntu's Uncomplicated Firewall (ufw). The command-line interface allows managing firewall rulesets for protocol, ports, source and destination; or predefined services by name.
Services are defined as XML files containing port and protocol mappings, and optionally extra information such as specifying subnets and listing required kernel helper modules.
The syntax resembles that of systemd's service files. A simple service file for a web server listening on TCP port 443 might look like this:
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>Web Server</short>
  <description>Public web host over HTTPS.</description>
  <port port="443" protocol="tcp" />
</service>
Forward and output filtering
firewalld v0.9.0 added native support for forward and output filtering via policy objects. This allows filtering traffic flowing between zones. Policies support most firewalld primitives available to zones: services, ports, forward-ports, masquerade, rich rules, etc.
Limitations
By default, firewalld does not block outbound traffic, although blocking it is required by standards such as NIST 800-171 and 800-53. However, an outbound block can be added with a policy.
Graphical front-ends (GUIs)
firewall-config is a graphical front-end that is optionally included with firewalld, with support for most of its features.
firewall-applet is a small status indicator utility that is optionally included with firewalld. It can provide firewall event log notifications as well as a quick way to open firewall-config. firewall-applet was ported from the GTK+ to the Qt framework in the summer of 2015 following the GNOME Desktop’s deprecation of system tray icons.
Adoption
firewalld ships by default on the following Linux distributions:
CentOS 7 and newer
Fedora 18 and newer
OpenSUSE Leap 15 and newer
Red Hat Enterprise Linux 7 and newer
SUSE Linux Enterprise 15 and newer
firewalld is enabled by default in all of these distributions. firewalld is also available as one of many firewall options in the package repository of many other popular distributions such as Debian or Ubuntu.
References
Firewall software
Linux security software
Command-line software
Linux kernel features
Linux-only free software
Red Hat software |
1425689 | https://en.wikipedia.org/wiki/Ticket%20to%20Ride%20%28board%20game%29 | Ticket to Ride (board game) | Ticket to Ride is a railway-themed German-style board game designed by Alan R. Moon. It was illustrated by Julien Delval and Cyrille Daujean and published in 2004 by Days of Wonder. The game is also known as Zug um Zug (German), Les Aventuriers du Rail (French), Aventureros al Tren (Spanish), Wsiąść do pociągu (Polish), and Menolippu (Finnish).
The original version of the game is played on a board depicting a railway map of the United States and southern Canada. Localized editions have subsequently been published depicting maps of other countries, cities and regions. Players collect and play train car cards to claim train routes across the map. Points are earned based on the length of the claimed routes, on completing the longest continuous railway, and on connecting distant cities determined by drawing ticket cards.
The game won the 2004 Spiel des Jahres, the Origins Award for Best Board Game of 2004, the 2005 Diana Jones award, the 2005 As d'Or Jeu de l'année, and placed second in the Schweizer Spielepreis for Family Games. Ticket to Ride: Europe won the 2005 International Gamers Award. As of August 2008, over 750,000 copies of the game had been sold according to the publisher. As of October 2014, over three million copies were reported sold, with retail sales of over $150 million.
Gameplay
At the beginning of the main game, players are dealt four train car cards as their playing hand. They are also dealt three Destination Ticket cards, each of which shows a pair of cities on a map of the United States and southern Canada. These become goals, representing two end-points which players are secretly attempting to connect. The player must keep at least two of these destination cards and discard unwanted tickets, if any, to the bottom of the stack. Once kept, a destination ticket may not be discarded for the rest of the game. Each player also selects a group of 45 colored train pieces with a matching scoring marker.
Each turn, the player has to choose from one of three options:
draw two railway car cards in various colours from the draw piles (with the restriction that drawing a wild Locomotive card face up forfeits drawing another card), or
draw three additional destination ticket cards and keep at least one (replacing undesired tickets at the bottom of the stack), or
play their collected railway car cards from their hand to claim a route on the board and place the corresponding number of train pieces from their store on the claimed route, thereby earning points.
The routes are of varying lengths (requiring varying numbers of matching coloured cards), and each discrete route marked on the board can be claimed by only a single player. Some cities are connected by two parallel routes that can each be claimed by a different player (unless the game is played by three or fewer players, in which case only one of the routes can be claimed). The same player may not claim both parallel routes between two adjacent cities. Longer routes are worth progressively more points than shorter routes, e.g., a route of length four is worth more than two routes of length two.
On their turn, a player can claim any route anywhere on the board that has not already been claimed, regardless of whether the route helps to complete their destination tickets. The routes score points by themselves, as mentioned above, but routes not connected to a player's destination do not help them in reaching the destination or completing their destination ticket.
The game ends when one player has two or fewer coloured train pieces left in their supply. When this occurs, every player plays one additional turn, after which they each reveal their previously hidden destination tickets. Additional points are awarded for having successfully connected the destinations on the cards, whereas points are subtracted for any incomplete tickets. A ten-point bonus is awarded to the player who has the longest continuously connected set of routes.
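An illustrative sketch of route scoring in C, assuming the point values from the published base-game rules, which are not stated in the text above (routes of length 1 to 6 score 1, 2, 4, 7, 10 and 15 points respectively):
#include <stdio.h>
/* Points per route length, assuming the published base-game table. */
static const int route_points[] = {0, 1, 2, 4, 7, 10, 15};
int score_route(int length)
{
    if (length < 1 || length > 6)
        return 0;   /* the base map has no longer routes */
    return route_points[length];
}
int main(void)
{
    /* One length-4 route out-scores two length-2 routes, as noted above. */
    printf("length 4: %d points; two of length 2: %d points\n",
           score_route(4), 2 * score_route(2));
    return 0;
}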
Since the game's release in 2004, Days of Wonder has released additional stand-alone board games, expansion maps (which require a base game to play), a card game, and multiple electronic game versions.
Board games
The original game features "railway routes connecting cities throughout North America", and was released in 2004. In 2008, Days of Wonder released USA 1910, a card expansion which contains additional destination tickets and a full-size deck for both routes and railway cards to replace the much smaller ones included in the original game.
Ticket to Ride: 10th Anniversary
In 2014, the company released Ticket to Ride: 10th Anniversary, which has a larger map of the original game (USA) and metal boxes for the trains. There are no rule changes to the game, but the map and cars are larger and have been redesigned. The USA 1910 expansion is also included in this version.
Ticket to Ride: 15th Anniversary
A 15th Anniversary edition was released in August 2019 with translucent train cars and a special booklet detailing the history of the game. The card borders and backs were updated and are clearer and more vibrant than earlier editions. There are no rule changes to the game and no expansions are included in this edition.
Europe
A Europe version was released in 2005. Ticket to Ride: Europe takes place on a map of Europe as it was at the turn of the 20th century. Two new types of routes were introduced: ferry routes, which require locomotive cards to be played when claiming them, and tunnel routes, which add the risk that additional train cards may be necessary to complete the route. The game includes "stations" which allow the player to use a route owned by another player and thus complete their destination ticket. In 2009, Days of Wonder released Europe 1912, a card expansion for the European game. It contains additional destination tickets and an additional play mechanic, Warehouses.
Märklin
A Germany version, Ticket to Ride: Märklin, was released in 2006. Märklin is a German toy company, best known for model railways and technical toys. Whereas the railway car cards of each type in the previous games were identical, the cards in Ticket to Ride: Märklin each show a different image of Märklin rolling stock. This edition features a passenger mechanic, where a passenger token is placed on the board and can claim point tokens by traveling along a player's route.
Nordic countries
Released in late October 2007, Ticket to Ride: Nordic Countries is based on a map of the Nordic countries as the name suggests, also including parts of Russia and Estonia. This version also incorporates the ferry routes and tunnels from Ticket to Ride: Europe. Players only receive a set of 40 colored train pieces for this version, and it is playable by 2 or 3 players only. Following its success in the eponymous countries, this version also became available in the English, French and German languages.
Germany
In September 2012, Zug um Zug: Deutschland ("Ticket to Ride: Germany") was released by Asmodee GmbH, Days of Wonder's German distributor. It was developed and produced specifically at Asmodee GmbH's request (as Ticket to Ride: Märklin had gone out of print) and was only available in Germany and Austria. It is an adaptation of the same map and routes in Märklin, set in turn-of-the-20th-century Germany. In 2015, Asmodee GmbH released Deutschland 1902, a card expansion for the German map that contains additional destination tickets. In 2017, Ticket to Ride: Germany was released in the US. It is a combination of Zug um Zug: Deutschland together with Deutschland 1902.
Rails & Sails
Released in September 2016, Ticket to Ride: Rails & Sails is a standalone game with a double-sided board, the largest used in any Ticket to Ride game. One side is "The World" map and the other is "The Great Lakes" map. This version contains train pieces and ship pieces to be played on land routes and water routes respectively. The travel cards include a modified train card deck and a new "ship deck" used to claim land or water routes respectively. Also included are 3 harbor tokens for each player.
First Journey
Released as an exclusive item in Target stores in 2016, First Journey is designed for children 6 and up. This version is for 2 to 4 players. The board is smaller than the base version, connections are shorter, and game time is quicker. There are no points in this game, but players race to complete six destination tickets. In 2017, Days of Wonder would also create a version for Europe, with the same rules but a European map.
Ticket to Ride: Cities Collection
Maps in the Cities Collection feature the same gameplay from the Ticket to Ride game series — collect cards, claim routes, draw tickets — on a scaled-down map that allows for a shorter game time.
New York (2018). Instead of trains, the tokens are taxis.
London (2019). Instead of trains, the tokens are buses.
Amsterdam (2020). Instead of trains, the tokens are horse-drawn carts.
Ticket to Ride: Map Collections
Starting in 2011, Days of Wonder began releasing expansions consisting of new maps. Each game introduces new rules specific to that version, and requires pieces from either Ticket to Ride: USA, Europe, Germany, or Nordic Countries to play. The following map collections have been released to date:
Volume 1: Asia - Team Asia and Legendary Asia (2011). Team Asia allows 6 player games, on teams of two, while Legendary Asia introduces Mountain Routes.
Volume 2: India + Switzerland (2011). India has a bonus for connecting your destinations in a mandala (circle); it is for 2–4 players. Switzerland introduced city-to-country and country-to-country cards; it is for 2 or 3 players only.
Volume 3: The Heart of Africa (2012). The Heart of Africa introduces Terrain cards, which give a bonus when claiming a route if you also have the terrain cards to match it.
Volume 4: Nederland (2013). This is the first Ticket to Ride game with a currency system, used to pay bridge tolls for your routes.
Volume 5: United Kingdom + Pennsylvania (2015). United Kingdom introduces a technology system, requiring upgrades in order to complete certain routes. Pennsylvania has a stock market system.
Volume 6: France + Old West (2017). The France board is mostly blank, requiring you to lay tracks to decide the color of the route before you can claim that route. Old West allows for 6 individual players. It introduces the city system, and breaks from Ticket to Ride tradition by only allowing you to build routes that connect to your initial city.
Volume 6 1/2: Poland (2019). This was published by Rebel under the Polish name Wsiąść do Pociągu - Polska. It was planned as a Polish exclusive, but by early December was available from online merchants in the rest of Europe. It is the smallest expansion map board in the series, having only 4 sections as opposed to the standard 6. Game play has an emphasis on connections to neighboring countries.
Volume 7: Japan + Italy (2019). Japan features bullet trains, which are collaboratively built and accessible by anyone. Italy features a new ferry mechanic.
Stay at Home (2020). Released to mark the COVID-19 pandemic, it is a free print-at-home expansion that allows players to control family members as they navigate domestic life and complete daily tasks.
Card game
Ticket to Ride: The Card Game
The card game was released in the summer of 2008 and includes a similar artistic style and theme, and general game mechanism of set collection. The card game is playable in 30–45 minutes and supports 2-4 players. Players start with 1 locomotive card and 7 other random train cards in their hand. Players are also dealt 6 destination tickets of which they must keep at least 1. The destination tickets have 1 to 5 colored dots which match the colors of the train cards. In order to complete a destination ticket, players must move cards from their hand, to their rail yard (playing area directly in front of the player), and finally to their on-the-track stack (scoring area). During play, players can "train rob" another player, by playing more of a specific color than their opponent has in their rail yard. When the train card draw piles are exhausted, the players use the train cards in their on-the-track stack to complete their destination tickets, by matching the colored train cards with the colored dots on the destination tickets. Completed tickets are added to the players score, while uncompleted tickets are subtracted. Additional bonus points are awarded to players who complete the most tickets to the six big cities, Chicago, Dallas, Los Angeles, Miami, New York, and Seattle.
Computer games
Ticket to Ride: Online
Ticket to Ride and most expansions can be played online at Days of Wonder's website. A four-game free trial subscription is available. Due to the major browsers and Adobe discontinuing support for Flash, the future of this version of the game is uncertain.
Ticket to Ride: The Computer Game
Days of Wonder also released a computer game for Windows, OS X, and Linux that allows players to play the original game. The Ticket to Ride: Europe, Ticket to Ride: Switzerland and Ticket to Ride: USA 1910 expansions are available as purchasable enhancements to the game. In February 2017, Days of Wonder abandoned Linux support; however, the current version available on Steam is playable under Linux.
The game has received generally favorable reviews.
Ticket to Ride: Xbox Live Arcade
The Xbox Live Arcade version was released on June 25, 2008, and supports play with up to five people on Xbox Live or four people on the same console, and can utilize the Xbox Live Vision cam.
Ticket to Ride: iPad
The iPad version was released on May 18, 2011, and supports play with up to five people using Game Center or Days of Wonder's own servers. Its offline mode originally only supported a single player with up to four computer players; however, pass-and-play support was added later.
This version was released with three additional extensions available for purchase and download: Ticket to Ride: Europe; Ticket to Ride: Switzerland; and Ticket to Ride: USA 1910, which itself includes three separate game modes. In August 2012, Ticket to Ride: Legendary Asia was added, and in April 2016, Ticket to Ride: Nordic Countries became available.
The iPad version of Ticket to Ride was named the 2011 Digital Game of the Year by the Danish Guldbrikken (The Golden Pawn) Awards, which referred to the game as "the exemplar of how a board game makes the leap to the digital world without compromise. The iPad version dazzles with its superb finish, easy availability and unparalleled expandability, as well as the ability to play on just the iPad or over the Internet."
Ticket to Ride Pocket (iPhone and iPod Touch)
The iPhone version was released on November 16, 2011, which is a simplified version of the iPad game. Online play was added as an update on February 2, 2012, and users can also play a multi-player game on a local network via WiFi or Bluetooth. The company released a redesigned version of the digital game in November 2015.
Ticket to Ride with Alexa Digital Assistant
Amazon's Alexa Digital Assistant can now play Ticket to Ride and Ticket to Ride: Europe, provided the players have a physical copy of the game.
Reception
The game won numerous awards after being released, including the 2004 Spiel des Jahres (game of the year).
Mike Fitzgerald calls Ticket to Ride "a game that I never tire of, one that lends itself well to the many expansions that Days of Wonder have released. The design principles it uses are all simple and have been done before, but they have never been put together in a game as compelling as Ticket to Ride." Board Game Quest mentioned that it is "one of the greatest gateway games ever made", and Board Games Land described it as "one of the best family board games ever made". Many of its expansions have also been positively received by critics.
Reviews
Pyramid
Awards and honours
See also
Railway Rivals (a similar train board game)
References
External links
Days of Wonder's Ticket to Ride site
Railroad board games
Connection games
Spiel des Jahres winners
Origins Award winners
Alan R. Moon games
Days of Wonder games
Board games introduced in 2004 |
33759724 | https://en.wikipedia.org/wiki/Computer%20Automation | Computer Automation | Computer Automation Inc. was a computer manufacturer founded by David H. Methvin in 1968, based originally in Newport Beach, California, United States. It opened a sales, support and repair arm in the UK in 1972, based at Hertford House, Maple Cross, Rickmansworth, Hertfordshire. Later relocated to Suite 2 Milfield House, Croxley Centre, Croxley Green, Watford, Hertfordshire.
In 1981 they moved the corporate offices to Boulder, Colorado, manufacturing and sales remained in California. In 1985 the offices moved to Irvine, California.
Finally in 1990 they moved to Richardson, Texas. They had previously opened a manufacturing and engineering development facility there in 1978 as a way to escape high California tax and labor rates.
The first products were the Computer Automation PDC 404 and PDC 808 "Programmed Digital Controllers". The PDC 808 announced circa July 1969 was designed for control, monitoring and/or data logging applications. It featured 4K 8-bit core memory expandable to 16K with DTL logic circuits.
In 1969 CA announced full production of the Model 816, a 16-bit general-purpose computer using TTL integrated circuits for logic and a 3D core memory.
In 1971 CA introduced the Alpha 8, an eight-bit machine, and the Alpha 16, which merely doubled up this concept to make a 16-bit machine. Both were built using DTL and TTL devices. The processor for the Alpha 8 and Alpha 16 each comprised three full-sized circuit boards about 18 inches square; then there were the memory options: 4k, 8k and, rarely, 16k magnetic core full cards. There were a number of options for data input, such as paper tape via a board called the utility controller, which could also be used to drive other devices such as printers. There was a magnetic tape controller, which was a full card, and a Winchester interface disk controller, which was two full cards with a circuit board jumper interlinking the two cards. The programmer's console had a row of toggle switches for data entry of bootstrap routines, etc. Two chassis were available, standard and jumbo, with separate power supplies. There were a variety of other cards available for various forms of input/output and process control: relay cards, dual teletype cards, etc.
In 1973 the LSI-1 was announced, a single board low-cost 16-bit computer. To achieve the ambitious goals, the company ventured into development of full-custom LSI chips: a 4-bit slice arithmetic logic unit and 3-chip control unit. The control unit was based on programmable logic arrays (PLA). The control unit PLA transformed the machine instructions and events into series of microinstructions to operate the ALU and related functions. The concept was conceived by Ken Gorman and was designed by Gorman and Roy Blacksher. Although the design was proven in the lab using first iteration chips, a disastrous processing error by chip foundry National Semiconductor during a bug-fix iteration caused a six-month schedule slip from which the project could not recover. Therefore, the LSI-1 never entered the marketplace. Gorman subsequently became manager of the Processor Development Department and oversaw computer processor engineering through 1975. For one project, Gorman worked with AMD in the conceptualization of the Am2900 4-bit slice chip that was employed in Computer Automation's high-end processors and gained widespread acceptance in the marketplace.
The LSI-1 was superseded by the LSI-2, which implemented the same architecture using standard TTL logic. The LSI-2 comprised a single full card with two piggyback half cards: on the left-hand side, viewed from the back, was the card containing the microcode in fuse-link bipolar PROMs; on the right, the options card with the bootstrap PROMs and the serial interface for a current-loop teletype or RS-232 device. There were two processor speed options available, the 2/10 with a 10 MHz clock and the 2/20 with a 20 MHz clock. Another option was the 2/60, which used different microcode on the half card to support an enhanced instruction set used uniquely by Computer Automation's SyFa (Systems for Access) data processing systems. There were two types of console available: the operator's console, which merely had enough functionality to enable an operator to boot the system, and the programmer's console, which would enable data entry for bootstrap routines, etc. Memory options included magnetic core of between 4 and 16k and later semiconductor memory of up to 32k in a variety of formats, full card and half card. Memory banking, where blocks of memory could be switched in or out, to a degree bypassed the restrictions of the 16-bit address bus. The format for input/output devices remained the same as the Alpha 8 and Alpha 16 products, therefore many of the I/O devices for the earlier products could still be used. However, the LSI-2 had a different dual-card hard disk controller and a number of different options for half-card floppy controllers. The chassis available had five slots with an internal PSU or nine slots with an external PSU.
Both the Alpha systems and LSI systems were tested using a programme called QCD (quality-control diagnostic). There were a number of different versions of this: for the Alpha machines on paper tape, hard disk or magnetic tape, and for the LSI systems on paper tape, floppy disk, hard disk and magnetic tape. There were also other diagnostics for every product, many I/O devices requiring a wired loopback header connecting output to input in a particular pattern so that the device could test itself.
Another product of the mid-1970s was a cut-down and cut-price half-card processor, the 3/05. This had its own unique half-card chassis and power supply, plus its own console.
In the late 1970s a major redesign of the LSI-2 took place to integrate the two piggyback cards into the main full card; this became the 2/40 and 2/120. Another enhancement to speed operation was the introduction of cache in the form of another full card, plus an expansion of memory, with 64k semiconductor modules in the form of a full card becoming available. Again, the restrictions of the 16-bit address bus still meant memory banking was a necessity for memory-greedy applications. The increased speed of the "Super 2", as the systems were called, required a new revision of motherboard, but this was backwards compatible with the earlier systems.
Another product range emerged in the late 1970s, the Naked Mini 4 range of systems. These were still implemented in TTL but used a different and enhanced instruction set. They ranged from the 4/10, which was a half card replacement for the 3/05, through the full card 4/30 to the 4/95. Although there was some compatibility with a few of the I/O cards from the LSI-2, everything about the NM4 series was generally speaking unique. Naked Mini products saw extensive use in early computer controlled typesetting machines and automatic teller machines.
The 4/10 processor was based on a pair of custom LSI integrated circuits, the DATA chip and the CONTROL chip. The custom chips were fabricated by Western Digital, and another California-based company was a second source. The microcode for controlling these chips was stored in four 8-bit-wide bipolar PROMs. In 1978, the Richardson, Texas manufacturing facility added a small engineering development group headed by Frank J. Marshall that was tasked with building a small, low-cost 16-bit mini-computer product line using the LSI 4/10 custom chips. The resulting product line was the 4/04, also known as the SCOUT (Small Computer Optimized for Use by the Thousands) or Naked Milli. The 4/04 system used small (around 6 x 9 inches) circuit boards and made heavy use of PAL logic chips. The boards plugged into a chassis that had 4 to 12 slots for cards. One side of the chassis was the system power supply, which was 5-volt only. Boards that needed other voltages generated them with small DC-DC converters. The SCOUT had many advanced features for its time, including built-in self-test diagnostics, a plug-and-play driver and bootloader facility, and automatic memory address allocation for memory boards.
As Computer Automation moved into the 1980s it became apparent that the concept of the minicomputer was getting a little long in the tooth. Microprocessors such as the 8080, Z80 and 6502 could be incorporated into much process control equipment. The marketing and engineering groups at Computer Automation realized this and proposed a new product line and direction for the company to be called "Triad". This was to be based on Motorola microprocessors on the VME or Versa bus and running a Unix-based operating system. Dave Methvin, the founder and president of the company was adamantly opposed to non-proprietary systems and architectures and killed the project.
Computer Automation consisted of three divisions:
Naked Mini, which sold minicomputers to OEMs, where they were used in process control.
IPD (Industrial Products Division) manufactured automatic test equipment. Computer Automation had designed an ATE to production-test its own products in-house. CA decided this was a marketable product, which was dubbed "Capable". The first Capable testers used an Alpha 16; later models used the LSI-2. These were functional ATE, which ran a program against the UUT (Unit Under Test) to exercise all logic functions. A later development was the Marathon in-circuit tester, which, as the name suggests, measured the viability of components in-circuit.
SyFa (Systems for Access) manufactured programmable distributed data processing systems using the LSI 2/60 and later the 2/120 as the core. These were used by many companies to perform jobs such as stock control, order processing, etc. Originally the systems were manufactured and assembled in the States and shipped to the UK for commissioning, but by the late 1970s a production facility was in place at a separate unit at Maple Cross near Rickmansworth in England.
In 1979 a production facility opened up at Clonshaugh in Dublin, taking advantage of tax concessions introduced by the Irish Government.
The company last filed a financial statement in 1992.
References
1968 establishments in Texas
1992 disestablishments in Texas
American companies established in 1968
American companies disestablished in 1992
Companies based in Boulder, Colorado
Computer companies established in 1968
Computer companies disestablished in 1992
Defunct companies based in Texas
Defunct computer companies of the United States
Defunct computer hardware companies
History of Irvine, California
Manufacturing companies based in Texas
Richardson, Texas |
71107 | https://en.wikipedia.org/wiki/Palm%2C%20Inc. | Palm, Inc. | Palm, Inc. was an American company that specialized in manufacturing personal digital assistants (PDAs) and various other electronics. They were the designer of the PalmPilot, the first PDA successfully marketed worldwide, as well as the Treo 600, one of the first smartphones. Palm developed several versions of Palm OS for PDAs and smartphones. The company was also responsible for the first versions of webOS, the first multitasking operating system for smartphones, and enyo.js, a framework for HTML5 apps.
In July 2010, Palm was purchased by Hewlett-Packard (HP) and in 2011 announced a new range of webOS products. However, after poor sales, HP CEO Léo Apotheker announced in August 2011 that it would end production and support of Palm and webOS devices, marking the end of the Palm brand after 19 years.
In October 2014, HP sold the Palm trademark to a shelf corporation tied to the Chinese electronics firm TCL Corporation. Shortly afterward, TCL confirmed its plans to revive the Palm brand on future, crowdsourced smartphones.
History
Palm, headquartered in Sunnyvale, California, was responsible for numerous products including the Pre and Pixi as well as the Treo and Centro smartphones. Previous product lines include the Pilot 1000, Palm Pilot Pro, Palm III, Palm V, Palm VII, Zire and Tungsten. While their older devices run Palm OS Garnet, four editions of the Treo run Windows Mobile.
Founding and acquisition
Palm Computing, Inc. was founded in 1992 by Jeff Hawkins, who later hired Donna Dubinsky and Ed Colligan, all of whom guided Palm to the invention of Palm Pilot. The company was originally started to write software for the Zoomer, a consumer PDA manufactured by Casio for Tandy. The Zoomer devices were also distributed by Casio and GRiD, while Palm provided the PIM software. The PEN/GEOS operating system was provided by Geoworks.
The Zoomer failed commercially, but Palm continued generating revenue by selling synchronization software for HP devices, and the Graffiti handwriting recognition software for the Apple Newton MessagePad.
The company was acquired by U.S. Robotics Corp. in 1995. In June 1997, U.S. Robotics was acquired by 3Com and Palm became a 3Com subsidiary. In June 1998, the founders became unhappy with the direction in which 3Com was taking the company, and left to found Handspring.
Stock offering and split into PalmSource and PalmOne
3Com made the Palm subsidiary an independent, publicly traded company on March 1, 2000, and it traded on the NASDAQ under the ticker symbol PALM. Palm Inc had its IPO during the dot-com bubble and in its first day of trading the shares of the new company hit an all-time high of US$95.06. But competition and the end of the tech bubble caused Palm's shares to lose 90% of their value in just over a year. By June 2001 the company's shares were trading at US$6.50, making it the worst performing PDA manufacturer on the NASDAQ index at the time.
In January 2002, Palm set up a wholly owned subsidiary to develop and license Palm OS, which was named PalmSource in February. PalmSource was then spun off from Palm as an independent company. In October 2003, the hardware division of the company merged with Handspring, was renamed to palmOne, Inc. and traded under the ticker symbol PLMO. The Palm trademark was held by a jointly owned holding company.
United as a single company
In May 2005, palmOne purchased PalmSource's share in the 'Palm' trademark for US$30 million. In July 2005, palmOne launched its new name and brand, reverting to Palm, Inc. and trading under the ticker symbol PALM once again.
In late 2005, ACCESS, which specializes in mobile and embedded web browser technologies, acquired PalmSource for US$324 million.
On January 4, 2006, Palm released the Palm Treo 700w, the first Windows Mobile-powered Treo, in a partnership with Verizon Wireless and Microsoft.
In December 2006, Palm, Inc. paid US$44 million to ACCESS for an irrevocable license to use and modify the source code for Palm OS Garnet as well as ship Palm OS Garnet in any Palm product without paying royalties; with this arrangement, the Palm company could once again develop both its hardware and software.
In June 2007, Palm formed a strategic relationship with the private-equity firm Elevation Partners, who purchased a 25% equity stake of the company for US$325 million – an investment that came after months of rumours about a possible Palm sale. Palm CEO Ed Colligan acknowledged that "We were approached by larger parties over the last six months," and "the reality is that we thought this was the best outcome for our business and our investors."
On December 18, 2008, Palm CEO Ed Colligan announced that the company would no longer develop any new handheld PDAs. Palm announced the webOS operating system and Palm Pre smartphone at the Consumer Electronics Show on January 8, 2009, and released on June 6, 2009 with Sprint. The design team was led by Matias Duarte, Mike Bell, Peter Skillman and Michael Abbott.
In early 2009, the hype over webOS sent Palm's stock from US$3 to a high of about US$18. While reviews of the Palm Pre were positive, launching with only one U.S. carrier (Sprint, which was also a distant third in the market) proved to be a crucial mistake that limited sales, even though the Pre became Sprint's flagship phone. The Pre was often described as Palm's swan song, as it was too late to keep the company – with only $250 million in cash and short-term investments at the beginning of 2009 – independent for long. By 2010 the share price of Palm had dropped to below US$4.
Acquisition by HP and demise
On April 28, 2010, Hewlett-Packard announced it would purchase Palm at $5.70 a share for $1.2 billion in an all-cash deal. The acquisition was completed on July 1, 2010.
The Palm global business unit was to be responsible for webOS software development and webOS-based hardware products, from a robust smartphone roadmap to future slate PCs and netbooks.
In February 2011, HP unveiled a new line of WebOS products, including the Pre 3, Veer, and TouchPad; however, these products were branded under HP's name and not with the Palm name. In July 2011, as part of a reorganization, WebOS head Jon Rubinstein was demoted from senior vice president to a "product innovation role", and replaced by Steven DeWitt, head of HP's North American consumer PC unit. At the same time, Palm was renamed the "webOS global business unit", effectively ending the use of the Palm brand.
The launch of the TouchPad was met with extremely poor sales; on August 18, 2011, HP announced that it would immediately end the production and support of all Palm and webOS devices, and would be "exploring options to optimize the value of webOS software going forward", including a potential sale of the division to another company. HP also cancelled the U.S. release of the Pre 3 and Veer and, infamously, held a fire sale on remaining TouchPad stock, lowering prices for the tablet to as low as US$99 (which, however, led to a major spike in demand for the device). The decision, made by HP's then-CEO Léo Apotheker, along with the $11.7 billion acquisition of Autonomy and threats to spin off HP's consumer business, led to a major decline in HP's market performance, with its shares falling in value by 45.4%.
Following the resignation of Apotheker and his replacement by Meg Whitman, it was announced in December 2011 that an open source version of much of WebOS would be created. Shortly afterward, Jon Rubinstein, along with a number of other senior Palm staff members, began to leave HP. On August 15, 2012, it was revealed that HP had re-organized the remaining WebOS team as a unit known as "Gram", made up of the remaining components of Palm. In February 2013, HP announced that it had sold the WebOS team, along with a license to the WebOS source code, documentation, and underlying patents, to LG Electronics. LG planned to primarily utilize the WebOS platform for its smart TV products, rather than on mobile devices, but did not rule out the possibility.
TCL ownership, new device
On December 30, 2014, it was reported that in October 2014, HP had sold the Palm trademark and related intellectual properties to Wide Progress Global Limited, a shelf company controlled by Nicolas Zibell — a regional president of TCL Corporation, which markets Android smartphones under the Alcatel brand. At the same time, it was discovered that the former Palm.com now redirected to MyNewPalm.com; the site displayed a "coming soon" page with the previous orange Palm logo, and the slogan "Smart move", which is also the slogan used by Alcatel OneTouch.
TCL publicly confirmed its acquisition of the Palm brand on January 6, 2015, stating that it planned to "re-create" the company with a new team based in Silicon Valley, and incorporate crowdsourcing into its product development.
On October 15, 2018, a new Palm companion device was unveiled, which is manufactured by a new Palm-branded startup company from California that is financially backed by TCL and basketball player Stephen Curry. It is an "ultra-mobile", Android-based device designed to serve as a smaller, simplified companion to a larger smartphone. The new device was announced as being exclusive to Verizon Wireless, only available as an add-on to an existing or new device plan.
See also
List of Palm OS devices
Palm (PDA)
Palm Desktop
Palm Foleo
PalmSource, Inc.
References
External links
1992 establishments in California
2011 disestablishments in California
American companies established in 1992
Companies based in Silicon Valley
Companies based in Sunnyvale, California
Computer companies established in 1992
Computer companies disestablished in 2011
Defunct companies based in the San Francisco Bay Area
Defunct computer companies based in California
Defunct mobile phone manufacturers
Electronics companies established in 1992
Electronics companies disestablished in 2011
Hewlett-Packard acquisitions
Manufacturing companies based in California
Software companies established in 1992
Software companies disestablished in 2011
Technology companies based in the San Francisco Bay Area
2000 initial public offerings
2010 mergers and acquisitions |
2008633 | https://en.wikipedia.org/wiki/Ansys | Ansys | Ansys is an American company based in Canonsburg, Pennsylvania. It develops and markets CAE/multiphysics engineering simulation software for product design, testing and operation and offers its products and services to customers worldwide.
Ansys was founded in 1970 by John Swanson, who sold his interest in the company to venture capitalists in 1993. Ansys went public on NASDAQ in 1996. In the 2000s, the company acquired numerous other engineering design companies, obtaining additional technology for fluid dynamics, electronics design, and physics analysis. Ansys became a component of the NASDAQ-100 index on December 23, 2019.
Corporate history
Origins
The idea for Ansys was first conceived by John Swanson while working at the Westinghouse Astronuclear Laboratory in the 1960s. At the time, engineers performed finite element analysis (FEA) by hand. Westinghouse rejected Swanson's idea to automate FEA by developing general purpose engineering software, so Swanson left the company in 1969 to develop the software on his own. He founded Ansys under the name Swanson Analysis Systems Inc. (SASI) the next year, working out of his farmhouse in Pittsburgh.
Swanson developed the initial Ansys software on punch-cards and used a mainframe computer that was rented by the hour. Westinghouse hired Swanson as a consultant, under the condition that any code he developed for Westinghouse could also be included in the Ansys product line. Westinghouse also became the first Ansys user.
History
By 1991 SASI had 153 employees and $29 million in annual revenue, controlling 10 percent of the market for finite element analysis software. According to The Engineering Design Revolution, the company became "well-respected" among engineering circles, but remained small.
In 1992, SASI acquired Compuflo, which marketed and developed fluid dynamics analysis software.
In 1993, Swanson sold his majority interest in the company to venture capitalist firm TA Associates. Peter Smith was appointed CEO and SASI was renamed after the software, Ansys, the following year.
Ansys went public in 1996, raising about $46 million in an initial public offering.
By 1997, Ansys had grown to $50.5 million in annual revenue.
In the late 1990s, Ansys shifted its business model away from software licenses, and the corresponding revenue declined. However, revenue from services increased more strongly.
From 1996 to 2000, profits at Ansys grew an average of 160 percent per year.
In February 2000, Jim Cashman was appointed CEO. Current CEO Ajei S. Gopal was appointed in early 2017.
Acquisitions
Engineering simulation software
Ansys develops and markets engineering simulation software for use across the product life cycle. Ansys Mechanical finite element analysis software is used to simulate computer models of structures, electronics, or machine components for analyzing strength, toughness, elasticity, temperature distribution, electromagnetism, fluid flow, and other attributes. Ansys is used to determine how a product will function with different specifications, without building test products or conducting crash tests. For example, Ansys software may simulate how a bridge will hold up after years of traffic, how to best process salmon in a cannery to reduce waste, or how to design a slide that uses less material without sacrificing safety.
Most Ansys simulations are performed using the Ansys Workbench system, which is one of the company's main products. Typically Ansys users break down larger structures into small components that are each modeled and tested individually. A user may start by defining the dimensions of an object, and then adding weight, pressure, temperature and other physical properties. Finally, the Ansys software simulates and analyzes movement, fatigue, fractures, fluid flow, temperature distribution, electromagnetic efficiency and other effects over time.
Ansys also develops software for data management and backup, academic research and teaching. Ansys software is sold on an annual subscription basis.
History
The first commercial version of Ansys software was labeled version 2.0 and released in 1971. At the time, the software was made up of boxes of punch cards, and the program was typically run overnight to get results the following morning. In 1975, non-linear and thermo-electric features were added. The software was exclusively used on mainframes, until version 3.0 (the second release) was introduced for the VAXstation in 1979. Version 3 had a command line interface like DOS.
With the spread of personal computers such as the Apple II, Ansys converted to a graphical user interface in version 4, released in 1980. Version 4 of the Ansys software was easier to use and added features to simulate electromagnetism. In 1989, Ansys began working with Compuflo. Compuflo's Flotran fluid dynamics software was integrated into Ansys by version 5, which was released in 1993. Performance improvements in version 5.1 shortened processing time two- to four-fold, and were followed by a series of further improvements to keep pace with advances in computing. Ansys also began integrating its software with CAD packages such as Autodesk's AutoCAD.
In 1996, Ansys released the DesignSpace structural analysis software, the LS-DYNA crash and drop test simulation product, and the Ansys Computational Fluid Dynamics (CFD) simulator. Ansys also added parallel processing support for PCs with multiple processors. The educational product Ansys/ed was introduced in 1998. Version 6.0 of the main Ansys product was released in December 2001. Version 6.0 made large-scale modeling practical for the first time, but many users were frustrated by a new blue user interface. The interface was redone a few months later in 6.1. Version 8.0 introduced the Ansys multi-field solver, which allows users to simulate how multiple physics problems would interact with one another.
Version 8.0 was published in 2005 and introduced Ansys' fluid–structure interaction software, which simulates the effect structures and fluids have on one another. Ansys also released its Probabilistic Design System and DesignXplorer software products, which both deal with probabilities and randomness of physical elements. In 2009 version 12 was released with an overhauled second version of Workbench. Ansys also began increasingly consolidating features into the Workbench software.
Version 15 of Ansys was released in 2014. It added new features for composites, bolted connections, and better mesh tools. In February 2015, version 16 introduced the AIM physics engine and Electronics Desktop, which is for semiconductor design. The following year, version 17 introduced a new user interface and performance improvements for computational fluid dynamics problems. In January 2017, Ansys released version 18. Version 18 allowed users to collect real-world data from products and then incorporate that data into future simulations. The Ansys Application Builder, which allows engineers to build, use, and sell custom engineering tools, was also introduced with version 18.
Released in January 2020, Ansys R1 2020 updates Ansys’ simulation process and data management (SPDM), materials information and electromagnetics product offerings. In early 2020, the Ansys Academic Program surpassed one million student downloads.
In November 2020, South China Morning Post reported that Ansys software had been used for Chinese military research in the development of hypersonic missile technology.
References
External links
Official website
Engineering software companies
Software companies based in Pennsylvania
Software companies established in 1970
Computational fluid dynamics
Finite element software for Linux
1970 establishments in Pennsylvania
Mesh generators
Companies based in Washington County, Pennsylvania
Canonsburg, Pennsylvania
1996 initial public offerings
Software companies of the United States |
4159149 | https://en.wikipedia.org/wiki/Hostapd | Hostapd | hostapd (host access point daemon) is a user space daemon software enabling a network interface card to act as an access point and authentication server. There are three implementations: Jouni Malinen's hostapd, OpenBSD's hostapd and Devicescape's hostapd.
Jouni Malinen's hostapd
Jouni Malinen's hostapd is a user space daemon for access point and authentication servers. It can be used to create a wireless hotspot using a Linux computer. It implements IEEE 802.11 access point management, IEEE 802.1X/WPA/WPA2/EAP Authenticators, RADIUS client, EAP server, and RADIUS authentication server. The current version supports Linux (Host AP, MadWifi, Prism54 and some of the drivers which use the kernel's mac80211 subsystem), QNX, FreeBSD (net80211), and DragonFlyBSD.
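For illustration, a minimal hostapd configuration for a WPA2-PSK access point might look like the sketch below. The interface name, SSID, and passphrase are placeholder values, and the available options vary with the driver and build:

# minimal illustrative hostapd.conf for a WPA2-PSK access point
interface=wlan0
driver=nl80211
ssid=ExampleNet
hw_mode=g
channel=6
# WPA2 only, with CCMP (AES) as the pairwise cipher
wpa=2
wpa_key_mgmt=WPA-PSK
rsn_pairwise=CCMP
wpa_passphrase=ChangeMe123

The daemon is then started against the file, e.g. hostapd /etc/hostapd/hostapd.conf.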
OpenBSD's hostapd
OpenBSD's hostapd is a user space daemon that helps to improve roaming and monitoring of OpenBSD-based wireless networks. It implements Inter Access Point Protocol (IAPP) for exchanging station association information between access points. It can trigger a set of actions like frame injection or logging when receiving specified IEEE 802.11 frames.
Devicescape's hostapd
The Open Wireless Linux version of hostapd. It is kept as close as possible to the original open source release, but with OWL-specific packaging and defaults.
The website appears to be dead (as of April 2013), and the project itself is probably defunct as well.
See also
HostAP
References
External links
DragonFlyBSD commit
Undeadly Article
Wi-Fi
OpenBSD |
53773268 | https://en.wikipedia.org/wiki/Bayya%20Yegnanarayana | Bayya Yegnanarayana | Bayya Yegnanarayana is an INSA Senior Scientist at International Institute of Information Technology (IIIT) Hyderabad, Hyderabad, AP, India. He is an eminent professor and is known for his contributions in Speech Signal Processing, Artificial Neural Networks and related areas. He has guided about 30 PhD theses, 42 MS theses and 65 MTech projects, and currently guiding about 10 PhD students. He is the General Chair for the international conference, INTERSPEECH 2018, held at Hyderabad.
Early life
Yegnanarayana was born on January 9, 1944, in Rajahmundry, Andhra Pradesh, India. He obtained his BSc degree from Andhra University, Waltair, in 1961, and then his BE (1964), ME (1966), and PhD (1974) degrees from the Indian Institute of Science (IISc), Bangalore.
Awards/Honours Received
1. Honorary Fellow of AP Akademi of Sciences, 2016
2. Fellow of International Speech Communication Association (ISCA), 2014
3. INSA The Syed Hussain Zaheer Medal, 2014
4. Life Fellow of Institute of Electrical and Electronics Engineers (IEEE)(USA), 2013
5. IISc Distinguished Alumnus Award, 2013
6. INAE Professor S. N. Mitra Memorial Award, 2006
7. Fellow of Indian Academy of Sciences, 2004
8. Fellow of Indian National Science Academy, 1999
9. 3rd IETE Professor S. V. C. Aiya Memorial Award, 1996
10. Fellow of Indian National Academy of Engineering, 1995
Work in India
• INSA Senior Scientist, International Institute of Information Technology Hyderabad, 2017–present
• Emeritus Professor, BITS-Pilani Hyderabad campus, 2016-2017
• Institute Professor, International Institute of Information Technology Hyderabad, 2012-2016
• Professor and Microsoft Chair, International Institute of Information Technology Hyderabad, 2006-2012
• Head, Dept. of CS & E, Indian Institute of Technology Madras, Feb. 1985 - Nov. 1989
• Professor, Indian Institute of Technology Madras, 1980-2006
• Visiting Scientist, ISRO, Bangalore, July-Dec. 1980
• Assistant Professor, Indian Institute of Science, Bangalore, 1974-1978
• Lecturer, Indian Institute of Science, Bangalore, 1967-1974
• Sr. Research Assistant, Indian Institute of Science, Bangalore, 1966-1967
Work Abroad
• Center for Speech Technology Research, CSTR, Edinburgh, UK, May 2011
• Center for Speech Technology Research, CSTR, Edinburgh, UK, Sep. 2009
• Center for Speech Technology Research, CSTR, Edinburgh, UK, June 2008
• IDIAP research institute, Switzerland, June–July 2007
• University of Maryland, College Park, USA, Visiting Professor, May–July 2004
• Carnegie Mellon University, Pittsburgh, Visiting Professor, May–July 2003
• University of Maryland, College Park, USA, Visiting Professor, May–July 2002
• University of Maryland, College Park, USA, Visiting Professor, May–June 2001
• Oregon Graduate Institute, Portland, USA, Visiting Professor, May–July 1999
• Oregon Graduate Institute, Portland, USA, Visiting Professor, May–July 1998
• Oregon Graduate Institute, Portland, USA, Visiting Professor, May–July 1997
• Oregon Graduate Institute, Portland, USA, Visiting Professor, May–July 1996
• Eindhoven Tech. Univ., The Netherlands, Visiting Professor, July 1994 - Jan. 1995
• Rutgers University, USA, Visiting Professor, June 1994
• LIMSI, University of Paris, Orsay, France, Visiting Professor, May 1994
• Eindhoven Tech. Univ., The Netherlands, Visiting Professor, May–June 1992
• US West, Boulder, CO, USA, Visiting Scientist, May–June 1991
• University of Bremen, Germany, Visiting Scientist, May 1990
• Center for Speech Technology Research (CSTR), Edinburgh, UK, Visiting Professor, Sept. 1989
• University of Florida, Gainesville, USA, Visiting Professor, May–July 1984
• University of Florida, Gainesville, USA, Visiting Professor, May–July 1983
• CS Dept., Carnegie-Mellon University, Pittsburgh, USA, Visiting Associate Professor, 1977-1980
Professional Societies Involved
1. Guest Editor, Sadhana, Academy Proceedings in Engineering Sciences, Special issue on Speech Communication and Signal Processing, Indian Academy of Sciences, Springer (ISSN 0256-2499), Vol.36, Number 5, October 2011.
2. International Speech Communication Association (ISCA)
3. IEEE (USA)
4. IETE (India)
5. Acoustical Society of India (India)
6. Associate Editor, IEEE Transactions on Speech & Audio Processing (2003-2006)
Teaching Interests
• Speech Technology
• Soft Computing
• Computing with ANN
• Digital Signal Processing
• Signals and Systems
• Principles of Communication
Research Interests
• Digital Signal Processing
• Speech
• Computer Vision
• Artificial Neural Networks
Text Books, Monographs, Conference Proceedings - Authored/Edited
1. PR Reddy, K Sri Rama Murty and B. Yegnanarayana, Representation Learning for Spoken Term Detection, in Amita Pal and Sankar K Pal (Eds.), Pattern Recognition and Big Data, World Scientific Publishing, Oct. 2016
2. P. Gangamohan, Sudarsana Reddy Kadiri, and B. Yegnanarayana, Analysis of Emotional Speech - A Review, in Anna Esposito and Lakhmi C. Jain (Eds.), Toward Robotic Socially Believable Behaving Systems, Vol. I, Chap. 11, Springer International Publishing, pp. 205–238, Mar. 2016.
3. B. Yegnanarayana (Ed.), Speech Communication and Signal Processing, Special issue of Sadhana, Academy Proceedings in Engineering Sciences, Indian Academy of Sciences and Springer, Vol. 36, No. 5, Oct. 2011.
4. L. Rabiner, B-H. Juang, and B. Yegnanarayana, Fundamentals of speech recognition, Pearson Education Inc., Delhi (Indian Subcontinent Adaptation), 2009.
5. B. Yegnanarayana, C. Chandra Sekhar, S. Palanivel, V. Suresh, Anil Kumar Sao and Dhananjaya N., Issues in the Development of an Audio-Video based Person Authentication System, in Jana Dittmann, Claus Vielhauer and Jan Hansen (Eds.), New Advances in Multimedia Security, Biometrics, Watermarking and Cultural Aspects, pp. 53–76, Logos Verlag Berlin, Berlin, 2006.
6. S. V. Gangashetty, C. Chandra Sekhar, and B. Yegnanarayana, Spotting Multilingual Consonant-Vowel Units of Speech using Neural Network Models, in Marcos Fundez-Zanuy, ... (Eds.), Nonlinear Analyses and Algorithms for Speech Processing (Revised Selected Papers from International Conference on Non-linear Speech Processing, NOLISP 2005, Barcelona, Spain, April 19–22) (LNAI Vol. 3817), pp. 303–317, Springer Lecture Notes in Artificial Intelligence, 2005.
7. B. Yegnanarayana, Suryakanth V. Gangashetty, and S. Palanivel, Autoassociative Neural Network Models for Pattern Recognition Tasks in Speech and Image, in Ashish Ghosh and Sankar K. Pal (Eds.), Soft Computing Approach to Pattern Recognition and Image Processing, World Scientific Publishing Co. Pte. Ltd., Singapore, 2002.
8. B. Yegnanarayana and C. Chandra Sekhar, Pattern Recognition Issues in Speech Processing, in Sankar K. Pal, and Amita Pal (Eds.), Pattern Recognition from Classical to Modern Approaches, World Scientific Publishing Co. Pte. Ltd., Singapore, 2001.
9. B. Yegnanarayana and S. Ramesh, Solution Manual to Artificial Neural Networks by B. Yegnanarayana, New Delhi, Prentice-Hall of India, 2001.
10. B. Yegnanarayana and M. Narendranath, Neural Networks, Voice Conversion, in S. Katagiri (Ed.), Handbook of Neural Networks for Speech Processing, Artech House Inc., Massachusetts, 2000.
11. Manish Sarkar and B. Yegnanarayana, Application of Fuzzy-Rough Sets in Modular Neural Networks, in S. K. Pal and A. Skowron (Eds.), Rough Fuzzy Hybridization: A New Trend in Decision Making, Springer-Verlag Singapore Pte. Ltd., 1999.
12. [Book] B. Yegnanarayana, Artificial Neural Networks, New Delhi, Prentice-Hall of India, 1999.
13. B. Yegnanarayana, M. Narendranath and S. Rajendran, Voice Conversion, Indian Institute of Technology Madras, September 1995.
14. B. Yegnanarayana, N. Sudha, P.P. Raghu and C. Chandra Sekhar, Artificial Neural Networks for Classification of Sonar Targets, Indian Institute of Technology Madras, August 1995.
15. P. P. Raghu, R. Poongodi and B. Yegnanarayana, Application of Artificial Neural Networks in Remote Sensing, Indian Institute of Technology Madras, November 1995.
16. B. Yegnanarayana and V. R. Ramachandran, "Group Delay Processing of Speech Signals", in Martin Cooke, Steve Beet and Malcolm Crawford (Eds.), Visual Representations of Speech Signals, John Wiley and Sons Ltd., 1993.
17. B. Yegnanarayana et al., Proceedings of Workshop on Speech Technology, Indian Institute of Technology Madras, December 1992.
18. B. Yegnanarayana et al., Tutorial on Speech Technology, Indian Institute of Technology Madras, December 1992.
19. B. Yegnanarayana, R. Ramaseshan and A. Ravichandran, Studies in Sensor Array Imaging, Indian Institute of Technology Madras, November 1992.
20. V. V. S. Sarma, N. Viswanadham, B. Yegnanarayana and B. L. Deekshatulu, Artificial Intelligence and Expert System Technologies in the Indian Context, vol.1 & 2, Tata McGraw-Hill, 1991.
21. B. Yegnanarayana and P. V. S. Rao (Eds.), Special issue on Speech Processing, JIETE, vol.34, no.1, Jan-Feb 1988.
External links
Microsoft Academic
DBLP
Google scholar
Semantic scholar
Researchgate
Neurotree
Fellow Members of the IEEE
Living people
Year of birth missing (living people) |
69827607 | https://en.wikipedia.org/wiki/JumpStart%20Toddlers | JumpStart Toddlers | JumpStart Toddlers is a 1996 educational video game, the fourth within the JumpStart franchise. An enhanced version was released in 2000.
While the game itself received generally positive reviews, much of the commentary surrounding this title treated it as a key example of a burgeoning and controversial "lapware" video gaming market targeting children aged 5 and under.
Development and release
The game was designed by Nicole Hardt Wrubel, who spent four years as a lead designer at Knowledge Adventure. JumpStart Toddlers was inspired by her then two-and-a-half-year-old son, for whom she could not find age-appropriate software.
To introduce young players to the concept of interacting with hotspots, the game has players immediately meet Giggles the Gopher, who welcomes them and points out seven hotspots around the screen that lead to activity areas. The game was designed to be part of a full curriculum of games for children to graduate into as they aged; Discover educational toys for children noted that Knowledge Adventure was "one of the few companies that offer a title for toddlers ages 18 months to 3 years".
An enhanced version was released in 2000, published by Havas after a series of mergers and acquisitions involving Knowledge Adventure.
Critical reception
Referring to this title, U.S. News & World Report's article False Promise commented, "parents have been told that it's their responsibility to prepare children for a multi-tasking, technology-driven future, so they "JumpStart" their babies". eMERGing Literacy and Technology wrote the title was "appealing to the very young child or a child with limited motor control". Young kids and computers saw it as an example of a software program specifically designed to teach mouse skills, along with Broderbund's My Very First Software. MacUser gave the title 4 out of 5 stars, though noted it had the fewest activities in the series.
Speaking of the game's suitability for those who may not have the precision required to click narrow targets, Fine motor skills in children with Down syndrome author Maryanne Bruni cites JumpStart Toddlers and Reader Rabbit Toddler as preschool programs that reward any mouse movement or keyboard press with an audiovisual cue. Newsweek praised the game (and JumpStart 2nd Grade) as superior to previous entries in the series, noting they "successfully balance letting youngsters explore" and "using animated hosts to spur them on", adding that they "both understand the concept of reward".
Citing JumpStart Toddlers as a key example, Macs for teachers praised the title for its "neat activities...accessible from a classroom setting" and "activities that teach essential early learning skills". A 2000 Wired reviewer noted that after their daughter experienced the game's music, she "walks away from Barney tapes in the VCR and demands Toddlers". SuperKids wrote the game was an "excellent starter program for very young computer novices".
Parenting magazine deemed the 2000 version among the "best software of the year", noting that the title had been "overhauled, with splendid results". The magazine gave the title a B+. The Discovery School site (quoted in Cyber Rules) recommended JumpStart Toddlers 2000, "best for older toddlers (or those who have graduated from their baby and other toddler software titles)", and gave it a rating of 4-and-a-half out of 5 stars.
Early childhood education today notes, "although the graphics in JumpStart Toddlers can be moved for an activity, they cannot be controlled or manipulated into any other form."
Commercial performance and awards
The game consistently placed on PC Data's list of best-selling education software for MS-DOS/Windows throughout 1996 and 1997 alongside other JumpStart titles, peaking at #8. Throughout 1997, JumpStart Toddlers took in somewhere between $4 million and $5.16 million. It became one of many preschool titles to join the best-seller list, alongside Sesame Street Elmo's Preschool (1996), Ready to Read With Pooh (1997), and JumpStart Preschool (1995).
The game won numerous awards, including a 1996 Parent's Choice Gold Medal for best new children's software.
JumpStart Toddlers 2000 would also reach at least 7th on the PC Data best-selling chart.
Legacy
According to the Chicago Sun-Times, Knowledge Adventure "started the baby trend when it introduced JumpStart Toddlers in 1996". Knowledge Adventure's Jennifer Johnson created the term lap-ware to describe software "intended to be used with the child on the parent's lap...So parents are encouraged to play a really active role in their child's first computer experience". The term was picked up by MacUser that July to describe the new software category Jumpstart had helped create. The New York Times later asserted that Knowledge Adventure was the "vanguard of a trend of introducing younger and younger children -- some as young as 6 months old -- to computers".
While children's software had previously been dominated by titles for older children like Mavis Beacon Teaches Typing and Where in the World Is Carmen Sandiego?, by 1997 sales of software for children under 5 had risen from 179,000 units two years prior to 1.5 million. By September 1997, toddler and pre-school titles dominated PC Data's top-10 selling education list. That November, U.S. News & World Report noted that JumpStart Toddlers, Davidson & Associates' Fisher-Price Ready for Learning: Toddler (1997) and The Learning Company's Reader Rabbit Toddler (1997) were all vying for the lucrative 18-month-and-under demographic over the holiday season.
In 1998, Wired's Susan Kuchinskas similarly applied the neologism to categorise JumpStart Toddlers, JumpStart Baby, and BowWow House's BabyWow! as contemporaries in a burgeoning "lapware" market, which catered to overworked parents who wished to give their new-borns a competitive edge through technology. That year, PC Magazine's Terri Robinson (quoted in the 2002 paper Engineering The Entrepreneurial Infant) commented that "software developers are keen to the burgeoning ‘baby skills’ market" to "capture the imaginations of your 1- and 2-year olds and provide them with solid educational building blocks". Stanford University professor Clifford Nass found this trend "disturbing", commenting that "the social context and the tactile experience [of real objects] are crucial to early development." Similarly, pediatrician T. Berry Brazelton considered lapware like JumpStart Toddlers "an assault on a young child's developing mind".
By this time, just two years after the game's release, JumpStart Toddlers was competing in a crowded lapware market with contemporaries in the edutainment industry. In 2000, a U.S. News & World Report article featured a 13-month-old who was already familiar with several games designed for players under two years old, including JumpStart Toddlers.
References
1996 video games
Classic Mac OS games
Children's educational video games
Video games developed in the United States
Windows games |
40027345 | https://en.wikipedia.org/wiki/Embroidermodder | Embroidermodder | Embroidermodder is a free machine embroidery software tool that supports a variety of formats and allows the user to add custom modifications to their embroidery designs.
History
Embroidermodder 1 was started by Mark Pontius in 2004, while he was staying up all night with his son during the boy's first couple of months. When Mark returned to his day job, he lacked the time to continue the project. He decided to focus on his family and work, and in 2005 gave full control of the project to Josh Varga so that Embroidermodder could continue its growth.
Embroidermodder 2 was conceived in mid-2011 when Jonathan Greig and Josh Varga discussed the possibility of making a cross-platform version. It runs on Linux, macOS, and Microsoft Windows.
The Embroidermodder website and downloads are hosted on SourceForge. On July 18, 2013, the Embroidermodder 2 source code was moved to GitHub.
Embroidermodder 1 Features
Runs under Microsoft Windows.
Source code (Visual C++) available.
Reads/Writes the Tajima .dst file format, which is compatible with nearly all commercial and vendor software available.
Also reads/writes Excel .csv file format for hand editing using Excel or a text editor.
Allows scaling designs to any size (not just +/- 20% like some software).
Shows the design on-screen, with unlimited zoom to get up close and personal with any stitch. Zoom to actual size, fit to screen, selection, or just in/out. Scrollbars allow panning around the design.
Multiple documents can be open, and each document may have multiple view windows, each at different zooms to allow fine detail editing, while still getting the big picture.
Print design at actual size.
Displays statistics like max/min/average stitch length, number of colors, etc.
Select, move, insert, or delete either stitches, lines, or selection.
Double click in select mode to select a region (stitches between Jumps or color changes).
Cut/Copy/Paste selection.
Cursor left/right steps selection point though individual stitches.
Add text using any Windows font.
Toggle display of Jump stitches (as black dash-dot lines).
Display updates are very fast using a combination of direct screen draw with efficient clipping and background rendering.
Optional Debug mode (compile time option) with additional display and break capability, such as monitoring the background render progress in the status bar.
Embroidermodder 2 Features
Runs under Linux, macOS, Microsoft Windows and Raspberry Pi.
CAD/CAM Graphical User Interface.
Source code (Qt4/Qt5 C++) available.
Undo/Redo functionality.
Reads over 45 different embroidery formats.
Also reads/writes Excel .csv file format for hand editing using Excel or a text editor or generating the .csv data from an external program such as Mathematica.
Cut/Copy/Paste selection between multiple documents.
Scripting API.
Add text using any installed system font.
Customizable icon themes.
libembroidery
One of the byproducts of Embroidermodder 2 was the creation of the libembroidery library. libembroidery is written in C. It supports reading and writing of a variety of embroidery formats, as well as several vector formats that are not commonly used in embroidery.
A table of supported formats (last updated January 2017) classifies each format's support as stable, supported but possibly unstable, or not supported.
References
External links
Embroidery
Free computer-aided design software
Computer-aided design software for Linux
MacOS graphics software
Windows graphics-related software
Computer-aided design software for Windows
MacOS computer-aided design software
Cross-platform free software
Cross-platform software
Graphics software that uses Qt
Free software programmed in C
Free software programmed in C++
Free software projects
Free graphics software
Software using the zlib license
Vector graphics editors
Vector graphics editors for Linux |
49002717 | https://en.wikipedia.org/wiki/ProntoForms | ProntoForms | ProntoForms (TSXV: PFM) is a Canadian software and mobile app developer providing a low-code development platform to build apps. The application digitizes paperwork to complete forms on mobile devices, collect data, and send it. The company went public in 2005 and trades on the TSX Venture Exchange under the stock ticker symbol, PFM. In 2015, Frost & Sullivan honored ProntoForms with the "North American Mobile Forms Competitive Strategy Innovation and Leadership" Award.
History
ProntoForms was founded in 2001 in Ottawa, Canada as TrueContext Mobile Solutions by CEO Alvaro Pombo. Pombo had immigrated to Canada from Colombia to work in the oil industry and worked for Palm before starting TrueContext. In 2002, the company received $5.4 million in seed funding from Skypoint Capital and Venture Coaches.
In 2007, the company launched their Pronto app in the United States for BlackBerry and Windows Mobile devices. Later, it included support for Android and iOS devices. The firm also entered into a strategic partnership with AT&T that made the telecommunications company a distributor of the ProntoForms software. TrueContext had around 1,800 business clients by the end of 2012. Other strategic partnerships that ProntoForms has formed include those with Nextel, Rogers, and Bell.
In 2013, the company officially changed its name to ProntoForms while also selling off several patents. In 2014, it raised $1 million in funding through private placement. By the end of 2014, it had around 2,500 business clients. In 2015, it established a partnership with Apple, allowing the latter to distribute the ProntoForms technology. In September 2015, it raised an additional $3 million through private placement.
In 2019, ProntoForms was recognized in the 2019 Gartner "Magic Quadrant for Enterprise Low-Code Application Platforms" as one of 18 vendors of low-code application platform (LCAP) software. During this time, its customers included ABB, Daimler, Halliburton, Johnson Controls, Rio Tinto, Pacific Gas & Electric, and WorleyParsons. In 2019, ProntoForms announced a partnership with ServicePower, a field service management (FSM) software company.
Products
ProntoForms' flagship product is a low-code mobile app and software platform for field workers and technicians to collect, send, and receive data using mobile devices. The app is available for iOS, Android, and Windows mobile devices.
The platform consists of an app and a drag-and-drop form builder. Integrations that transmit data to back-end and cloud systems and existing field service management software are also connected via a drag-and-drop interface. Popular out-of-the-box systems of record include Zendesk, Quickbooks, and GeoTab.
Apps made with this platform are mainly used for field service. Dispatching and automatic notifications can also be included. Collected data can be exported to formats like PDF, CSV, Excel, and raw data files.
After data is gathered, real-time dashboards and reports can be generated.
Common use cases for ProntoForms include environmental, health and safety inspections, asset management, audits, delivery dispatch coordination, and tracking time on service calls, among others.
References
Companies listed on the TSX Venture Exchange
Companies established in 2001
Companies based in Ottawa
Software companies of Canada |
54071691 | https://en.wikipedia.org/wiki/Thomson%20computers | Thomson computers | In the 1980s the French Thomson company produced a range of 8-bit computers based on the 6809E CPU.
They were released in several variations (mostly concerning the keyboard or color of the casing) covering the MO and TO series from late 1982 to 1989.
While MO and TO models are incompatible in software, most of the peripherals and hardware were compatible.
These machines were common in France due to the 1980s governmental educational program Computing for All (Informatique pour Tous). Around 100,000 MO5 and TO7/70 computers were ordered and installed in schools.
Export attempts to Germany, Italy, Algeria, the USSR, India, Argentina and Spain were unsuccessful. By 1988, Thomson had sold only 60,000 of a predicted 150,000 computers, and it abandoned computer development the following year.
First generation
MICROMEGA 32: released in 1983, it was a Motorola 68000-based machine running Unix v7.
Thomson MO5: released in 1984 in order to honor the "Plan Informatique pour Tous". Supplied with 48K RAM (32K available to the user in Basic 1.0) and first released with a rubber keyboard; later models featured a mechanical keyboard. It was also released in a limited edition with a white casing, named "MO5 Michel Platini".
Thomson TO7: produced from 1982 to 1984. Supplied with 24K RAM (16K used by the video) and upgradable to 48K. 8-color display.
Thomson TO7/70: 1984 version with more RAM (64K, upgradable to 128K) and a 16-color display.
Thomson MO5E: 1985 export version, with a different casing featuring a mechanical keyboard, a parallel port, two joystick ports, an internal PAL modulator and an integrated power supply.
Second generation
Thomson TO9: released in late 1985. Separate keyboard and central unit, 128K RAM and a 3½-inch floppy disk drive.
Thomson MO5NR: released in 1985-1986. This is a MO6 in a MO5E casing, with an integrated network interface controller for the nanoréseau (nano network), which was used in French schools.
Thomson TO16 (prototype): a prototype developed in 1985-1988.
Thomson MO6: released in 1986. 128K RAM and a built-in tape recorder.
Olivetti Prodest PC128: the Thomson MO6 was sold in Italy as the Olivetti Prodest PC128.
Thomson TO8: released in late 1986. 256K RAM, 80K ROM with Microsoft Basic 512, extra video modes.
Thomson TO9+: released in late 1986. Separate keyboard and central unit, 512K RAM with a built-in modem and a 3½-inch floppy disk drive.
Thomson TO8D: released in late 1987, it was a TO8 with a 3½-inch floppy disk drive.
PC compatible
Thomson TO16: released in September 1987. An 8088-based, IBM-compatible PC.
See also
Microsoft Basic 1.0 - A version of BASIC used on Thomson computers
Thomson EF936x - graphic chip used on Thomson computers
Computing for All, a French government plan to introduce computers to the country's pupils
References
External links
Theodore, multi-platform Thomson 8-bit emulator
Lists of computer hardware
6809-based home computers
Computer companies of France
Thomson computers |
211226 | https://en.wikipedia.org/wiki/Winsock | Winsock | In computing, the Windows Sockets API (WSA), later shortened to Winsock, is an application programming interface (API) that defines how Windows network application software should access network services, especially TCP/IP. It defines a standard interface between a Windows TCP/IP client application (such as an FTP client or a web browser) and the underlying TCP/IP protocol stack. The nomenclature is based on the Berkeley sockets API used in BSD for communications between programs.
Background
Early Microsoft operating systems, both MS-DOS and Microsoft Windows, offered limited networking capability, chiefly based on NetBIOS. In particular, Microsoft did not offer support for the TCP/IP protocol stack at that time. A number of university groups and commercial vendors, including the PC/IP group at MIT, FTP Software, Sun Microsystems, Ungermann-Bass, and Excelan, introduced TCP/IP products for MS-DOS, often as part of a hardware/software bundle. When Windows 2.0 was released, these vendors were joined by others such as Distinct and NetManage in offering TCP/IP for Windows.
The drawback faced by all of these vendors was that each of them used their own API (Application Programming Interface). Without a single standard programming model, it was difficult to persuade independent software developers to create networking applications which would work with any vendor's underlying TCP/IP implementation. Add to this the fact that end users were wary of getting locked into a single vendor and it became clear that some standardization was needed.
The Windows Sockets project had its origins in a Birds of a Feather session held at Interop '91 in San Jose on October 10, 1991. It is based on socket specifications created by NetManage, which the company placed in the public domain at that meeting. At the time, the NetManage socket library was the only 100% DLL-based, multi-threaded product for Windows 3.0 available. The first edition of the specification was authored by Martin Hall, Mark Towfiq of Microdyne (later Sun Microsystems), Geoff Arnold of Sun Microsystems, and Henry Sanders and J Allard of Microsoft, with assistance from many others. There was some discussion about how best to address the copyright, intellectual property, and potential anti-trust issues, and consideration was given to working through the IETF or establishing a non-profit foundation. In the end, it was decided that the specification would simply be copyrighted by the five authors as (unaffiliated) individuals.
All the participating developers resisted the shortening of the name to simply Winsock for a long time, since there was much confusion among users between the API and the DLL library file (winsock.dll), which only exposed the common WSA interfaces to applications above it. Users would commonly believe that merely ensuring the DLL file was present on a system would provide full TCP/IP protocol support.
Technology
The Windows Sockets API specification defines two interfaces: the API used by application developers, and the SPI, which provides a means for network software developers to add new protocol modules to the system. Each interface represents a contract. The API guarantees that a conforming application will function correctly with a conforming protocol implementation from any network software vendor. The SPI contract guarantees that a conforming protocol module may be added to Windows and will thereby be usable by an API-compliant application. Although these contracts were important when Windows Sockets was first released, since network environments of the time required multi-protocol support (see above), they are now of only academic interest. Included in the Windows Sockets API version 2.0 are functions to use IPX/SPX, although the protocol was already all but obsolete by the time WSA 2.0 shipped. Microsoft has shipped the TCP/IP protocol stack with all recent versions of Windows, and there are no significant independent alternatives. Nor has there been significant interest in implementing protocols other than TCP/IP.
Windows Sockets code and design are based on BSD sockets, but provide additional functionality to allow the API to comply with the regular Windows programming model. The Windows Sockets API covered almost all the features of the BSD sockets API, but there were some unavoidable obstacles which mostly arose out of fundamental differences between Windows and Unix (though Windows Sockets differed less from BSD sockets than the latter did from STREAMS). All function calls in the API begin with the moniker WSA, e.g. WSASend() for sending data on a connected socket.
However, it was a design goal of Windows Sockets that it should be relatively easy for developers to port socket-based applications from Unix to Windows. It was not considered sufficient to create an API which was only useful for newly written Windows programs. For this reason, Windows Sockets included a number of elements which were designed to facilitate porting. For example, Unix applications were able to use the same errno variable to record both networking errors and errors detected within standard C library functions. Since this was not possible in Windows, Windows Sockets introduced a dedicated function, WSAGetLastError(), to retrieve error information. Such mechanisms were helpful, but application porting remained extremely complex. Many original TCP/IP applications had been implemented by using system features specific to Unix, such as pseudo terminals and the fork system call, and reproducing such functionality in Windows was problematic. Within a relatively short time, porting gave way to the development of dedicated Windows applications.
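The following C sketch illustrates these differences in miniature: the explicit WSAStartup()/WSACleanup() bracketing and the WSAGetLastError() error reporting that a program ported from BSD sockets must adopt. It assumes a Windows toolchain linking against ws2_32.lib, and the host name and port are placeholders:

/* Minimal Winsock 2 TCP client sketch. Windows only; link with ws2_32.lib.
   "example.com" and port "80" are placeholder values. */
#include <winsock2.h>
#include <ws2tcpip.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    WSADATA wsa;
    /* Unlike BSD sockets, the library must be initialized explicitly. */
    int err = WSAStartup(MAKEWORD(2, 2), &wsa);
    if (err != 0) {
        printf("WSAStartup failed: %d\n", err);
        return 1;
    }

    struct addrinfo hints, *res = NULL;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_protocol = IPPROTO_TCP;
    if (getaddrinfo("example.com", "80", &hints, &res) != 0) {
        WSACleanup();
        return 1;
    }

    SOCKET s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (s == INVALID_SOCKET ||
        connect(s, res->ai_addr, (int)res->ai_addrlen) == SOCKET_ERROR) {
        /* Errors come from WSAGetLastError(), not the Unix errno variable. */
        printf("connect failed: %d\n", WSAGetLastError());
    } else {
        const char *msg = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n";
        send(s, msg, (int)strlen(msg), 0);
    }

    if (s != INVALID_SOCKET)
        closesocket(s);
    freeaddrinfo(res);
    WSACleanup();  /* pairs with the initial WSAStartup() call */
    return 0;
}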
Specifications
Version 1.0 (June 1992) defined the basic operation of Winsock. It was kept very close to the existing interface of Berkeley sockets to simplify porting of existing applications. A few Windows-specific extensions were added, mainly for asynchronous operations with message-based notifications.
Although the document didn't limit support to TCP/IP, TCP and UDP were the only protocols explicitly mentioned. Most vendors only delivered TCP/IP support, although Winsock from DEC included DECnet support as well.
Version 1.1 (January 1993) made many minor corrections and clarifications of the specification. The most significant change was the inclusion of the gethostname() function.
Winsock 2 was a backwards-compatible extension of Winsock 1.1. It added support for protocol-independent name resolution, asynchronous operations with event-based notifications and completion routines, layered protocol implementations, multicasting, and quality of service. It also formalized support for multiple protocols, including IPX/SPX and DECnet. The new specification allowed sockets to be optionally shared between processes, incoming connection requests to be conditionally accepted, and certain operations to be performed on socket groups rather than individual sockets. Although the new specification differed substantially from Winsock 1, it provided source- and binary-level compatibility with the Winsock 1.1 API. One of the lesser known additions was the Service Provider Interface (SPI) API and Layered Service Providers.
Versions 2.0.x (May 1994 onwards) had internal draft status, and were not announced as public standards.
Version 2.1.0 (January 1996) was the first public release of the Winsock 2 specification.
Version 2.2.0 (May 1996) included many minor corrections, clarifications, and usage recommendations. It was also the first version to remove support for 16-bit Windows applications.
Version 2.2.1 (May 1997) and Version 2.2.2 (August 1997) introduced minor functionality enhancements. Mechanisms were added for querying and receiving notification of changes in network and system configuration.
The IPv6 Technical Preview for Windows 2000 (December 2000) saw the first implementation of RFC 2553 (March 1999, later obsoleted by RFC 3493), a protocol-independent API for name resolution, which would become part of Winsock in Windows XP.
Updates in Windows 8
Windows 8 includes the "RIO" (Registered IO) extensions for Winsock.
These extensions are designed to reduce the overhead of the user-to-kernel-mode transition for the network data path and the notification path, but use the rest of the regular Windows TCP and UDP stack (and use existing network cards). The setup path (for example, the "connect" function) is unchanged from the regular Winsock path.
Implementations
Microsoft implementations
Microsoft did not supply an implementation of Winsock 1.0.
Version 1.1 of Winsock was supplied in an add-on package (called Wolverine) for Windows for Workgroups (codenamed Snowball). It was an integral component of Windows 95 and of Windows NT from version 3.5 onwards (the initial commercially available version of Windows NT, version 3.1, included only a proprietary and quite incomplete implementation of TCP/IP based on the AT&T UNIX System V "STREAMS" API).
Version 2.1 of Winsock was supplied in an add-on package for Windows 95. It was an integral component of Windows 98, Windows NT 4.0, and all subsequent Windows releases. (Microsoft did not supply implementations of Winsock 2 for Windows 3.x or Windows NT 3.x.)
Recent versions of Winsock 2.x have been delivered with new Windows releases or as part of service packs.
Winsock 2 is extensible by a mechanism known as a Layered Service Provider (LSP). Winsock LSPs are available for a wide range of useful purposes, including Internet parental controls, web content filtering, QoS etc. The layering order of all providers is kept in the Winsock Catalog. In previous versions of Windows, removing a buggy LSP could result in corruption of the Winsock catalog in the registry, potentially resulting in a loss of all network connectivity. Winsock in Windows XP Service Pack 2, Windows Server 2003 Service Pack 1 and all later Windows operating systems has the ability to self-heal after a user uninstalls such an LSP.
Other implementations
Among the other vendors offering Winsock-compliant TCP/IP and UDP/IP stacks were (alphabetically) 3Com, Beame & Whiteside, DEC, Distinct, FTP Software, Frontier, IBM, Microdyne, NetManage, Novell, Sun Microsystems and Trumpet Software International.
Trumpet Winsock by Peter Tattam was one of the few Winsock 1.0 implementations that could be installed under Windows 3.0, which had no built-in support for Winsock. Trumpet was also the most popular shareware implementation of Winsock for Windows 3.x. Trumpet Winsock 5.0 is available for Windows 95/98 and Windows NT and includes a Winsock 1.1 compliant IPv6 stack for these operating systems.
The Wine project contains a source and binary compatible reimplementation of Winsock on top of the BSD sockets API.
See also
Berkeley sockets
Layered Service Provider (Winsock LSP)
References
External links
MSDN - Winsock2 Reference
MSDN - Winsock2 Home
Sockets FAQ - Windows Sockets FAQ
Client / Server Programming with TCP/IP Sockets - Winsock C++ Programming
Porting Berkley Socket programs to Winsock
Windows Network Development blog — Microsoft developer blog covering Winsock, WSK, WinINet, Http.sys, WinHttp, QoS and System.Net, with a focus on features being introduced in Windows Vista
Brief History of Microsoft on the Web
WinSock Development Information
Winsock Programmer's FAQ
Network socket
History of the Internet
1992 software
Windows communication and services
Microsoft application programming interfaces |
10402302 | https://en.wikipedia.org/wiki/TC%20Electronic | TC Electronic | TC Electronic is a Danish audio equipment company that designs and imports guitar effects, bass amplification, computer audio interfaces, audio plug-in software, live sound equalisers, studio and post production equipment, studio effect processors, and broadcast loudness processors and meters. In August 2015, the company was purchased by Music Group, a holding company chaired by Uli Behringer.
Company history
TC Electronic was founded by two musician brothers, Kim and John Rishøj. Their SCF ("Stereo Chorus + Pitch Modulator & Flanger") was a successful early product. After initial success with guitar effect pedals, they developed 19" rack-mounted processors, including the TC2290 delay released in 1985.
TC Electronic formed the current TC Group in 2002 after acquiring TGI plc. TGI consisted of Tannoy, GLL Goodman, Lab.Gruppen and Martin Audio. Martin Audio was sold quickly after the merger and Goodman Loudspeakers closed. TC Works (software development), based in Germany, was once part of the TC group, but reintegrated with TC Electronic in 2005.
The current TC Group is a holding company of five individual brand companies consisting of Tannoy, (speakers and studio monitors), Lab.Gruppen (amplifiers), TC Electronic, TC-Helicon (harmonisers and vocal processors) and TC|Applied Technologies (digital-audio semiconductor development) and the sales-companies TC Group|International, TC Group|Americas, TC Group|Japan, TC Group|China, and TC Group|Middle East.
TC Electronic also co-develops products with Dynaudio Acoustics (studio monitors) and their European sales arm TC Group|International distributes Blue Microphones outside of the US.
In August 2015, the company was purchased by Music Group, a holding company chaired by Uli Behringer. The parent company has since been rebranded as Music Tribe.
Products
System 6000
System 6000 is an audio mastering system built around the Mainframe 6000. It houses DSP-based electronics for processing and provides audio inputs and outputs. The Mainframe 6000 connects via Ethernet to either the Remote CPU 6000 and the TC ICON hardware controller, or to a computer running TC ICON emulation software.
The System 6000 comes in two versions—Reverb 6000 and Mastering 6000. The two differ in included processing algorithms:
Reverb 6000 comes with reverb and delay effects for stereo and multi-channel purposes, while Mastering 6000 comes with algorithms suited for stereo and multi-channel audio mastering, and includes pitch-shifting tools. Several optional algorithms are available separately. The System 6000 series was upgraded to MK2 in 2010.
Finalizer
The Finalizer is an audio mastering tool available in two versions: Finalizer 96K and Finalizer Express. Both Finalizers have a compressor, limiter, equaliser and a Gain Maximiser. The Finalizer Express is a reduced version of the Finalizer 96K that does not perform 96 kHz processing.
Guitar products
Since the company began with guitar effects, the current line of TC Electronic products remain an important aspect of the company's business. There are several product categories:
Nova - Includes a line of pedals (Delay, Reverb, Modulator, Drive) and the Nova System Multi Effects processor.
G-Series - Includes the top of the range G System (floor processor and switching unit), the dedicated acoustic processor G Natural and rack processors G Major 2 and G-Force.
Tuners - PolyTune (introduced 2010) is the world's first polyphonic guitar tuner.
Compact Pedals - Includes the original SCF (Stereo Chorus Flanger), Ditto mini, HOF mini, VPD1, MojoMojo Overdrive and Dark Matter Distortion.
TonePrint Pedals - Released at the NAMM'10 show, these digital pedals can load TonePrints (custom presets created by well-known guitar and bass players), which are downloaded free from the TC website and uploaded to the pedal via a mini-USB socket. Includes the Hall of Fame and Flashback II.
Bass amplification products
In 2009, TC Electronic released a class D bass amplifier, the RH450 (originally called RebelHead 450, a name that was later dropped), along with the complementary RC 4 foot switch. In addition, TC Electronic developed a range of speaker cabinets for the RH450 and other bass amplifiers: the RS112 (1x12" speakers), RS210 (2x10" speakers), RS212 (2x12" speakers) and RS410 (4x10" speakers).
Also, shortly after presenting the RH450 at NAMM'09, TC Electronic ran a pilot program, selecting 24 bassists worldwide to test the amplifier at concerts and rehearsals.
PowerCore
PowerCore is a series of computer hardware and software, somewhat similar in concept to the DSP card in Digidesign's Pro Tools. At the base is a PowerCore unit that runs PowerCore VST or AU plug-ins. TC Electronic discontinued the PowerCore line in early 2011.
Studio equipment
TC Electronic produces equalizers, microphone preamplifiers, compressors, level maximizers, and multi-effect units (such as FireworX).
FireworX
FireworX is a multi-effects processor. Producers and DJs, such as Sasha, have used FireworX in conjunction with CD players for live performances. FireworX won best Hardware Signal Processing Technology at the 1998 TEC Awards.
The unit itself incorporates effects such as distortion, compression, noise and curve generators, ring modulation, vocoding, parametric equalizers, tremolo, reverb and delay. The effects can be routed in a large variety of ways, and it is possible to modify different parameters.
PA, installation and broadcasting products
TC Electronic also produces equalizers for live use, gain maximizers, loudness controllers, and several products that reduce loudness variations during broadcasting. TC Electronic has also partnered on loudness standardisation work with international standards bodies such as the ITU, ATSC and EBU.
Software
TC Electronic develops a range of plug-ins for its PowerCore system. They are all in the VST or AU formats, but cannot run without the PowerCore hardware. Some of these plug-ins are included with all of the PowerCore systems, except PowerCore Unplugged.
The plugins include equalizers, noise removal plugins, reverbs, analog filter banks emulators, compressors and psychoacoustic effects.
TC Electronic also authors plug-ins for the HD version of Digidesign's Pro Tools and Soundscape software. These plug-ins are also available in a PowerCore version (some of them are ported from the System 6000). They include vocal harmonisers, mastering plugins, reverbs and compressors.
Spark audio editor
TC Electronic also developed a two-track digital audio editor, which featured mastering tools and different effects.
References
External links
TC Electronic
TC Helicon
Dynaudio Acoustics
Audio equipment manufacturers of Denmark
Manufacturers of professional audio equipment
Music equipment manufacturers
Guitar effects manufacturing companies
Danish brands |
20952693 | https://en.wikipedia.org/wiki/IOS%20jailbreaking | IOS jailbreaking | On Apple devices running iOS and iPadOS-based operating systems, jailbreaking is a privilege escalation exploit executed to remove software restrictions imposed by the manufacturer. Typically it is done through a series of kernel patches. A jailbroken device permits root access within the operating system and provides the right to install software not available through the App Store. Different devices and versions are exploited with a variety of tools. Apple views jailbreaking as a violation of the end-user license agreement, and strongly cautions device owners not to try to achieve root access through the exploitation of vulnerabilities.
While sometimes compared to rooting an Android device, jailbreaking is the bypassing of several types of Apple prohibitions for the end-user. Since it includes modifying the operating system (enforced by a "locked bootloader"), installing non-officially approved (not available on the App Store) applications via sideloading, and granting the user elevated administration-level privileges (rooting), the concepts of iOS jailbreaking are therefore technically different from Android device rooting.
Motivation
One of the reasons for jailbreaking is to expand the feature set limited by Apple and its App Store. Apple checks apps for compliance with its iOS Developer Program License Agreement before accepting them for distribution in the App Store. However, its reasons for banning apps are not limited to safety and security and may be regarded as arbitrary and capricious. In one case, Apple mistakenly banned an app by a Pulitzer Prize-winning cartoonist because it violated its developer license agreement, which specifically bans apps that "contain content that ridicules public figures." To access banned apps, users rely on jailbreaking to circumvent Apple's censorship of content and features. Jailbreaking permits the downloading of programs not approved by Apple, such as user interface customization and tweaks.
Device customization
Since software programs available through APT and/or Installer.app (legacy) are not required to adhere to App Store guidelines, many of them are not typical self-contained apps but instead are extensions and customization options for iOS and its features and other apps (commonly called tweaks). Users install these programs for purposes including personalization and customization of the interface using tweaks developed by third-party developers and designers, adding desired features such as access to the root file system, fixing annoyances, and making development work on the device easier by providing access to the filesystem and command-line tools. Many Chinese iOS device owners also jailbreak their phones to install third-party Chinese character input systems because they are easier to use than Apple's.
In some cases, jailbreak features are adopted by Apple and used as inspiration for features that are incorporated into iOS and iPadOS.
Carrier unlocking
Jailbreaking also opens the possibility of using software to unofficially unlock carrier-locked iPhones so they can be used with other carriers. Software-based unlocks have been available since September 2007, with each tool applying to a specific iPhone model and baseband version (or multiple models and versions). This includes the iPhone 4S, iPhone 4, iPhone 3GS, and iPhone 3G models. An example of unlocking an iPhone through a jailbreak utility is Redsn0w. Through this software, iPhone users can create a custom IPSW and unlock their device. Moreover, during the unlocking process, there are options to install Cydia and the iPad baseband.
Installation of malware
Computer criminals may jailbreak an iPhone to install malware or target jailbroken iPhones on which malware can be installed more easily. The Italian cybersecurity company Hacking Team, which sells hacking software to law enforcement agencies, advised police to jailbreak iPhones to allow tracking software to be installed on them.
Software piracy
On iOS devices, the installation of consumer software is generally restricted to installation through the App Store. Jailbreaking, therefore, allows the installation of pirated applications. It has been suggested that a major motivation for Apple to prevent jailbreaking is to protect the income of its App Store, including that of third-party developers, and to allow the buildup of a sustainable market for third-party software. However, the installation of pirated applications is also possible without jailbreaking, taking advantage of enterprise certificates to facilitate the distribution of modified or pirated releases of popular applications.
Package managers
A package manager or package-management system is a collection of software tools that automates the process of installing, upgrading, configuring, and removing computer programs. For jailbreaks, this is essential for the installation of third-party content. There are a few package managers specifically for jailbroken iOS devices, of which the most popular are Cydia, Sileo, Zebra and Installer 5.
Security of the device
Upon jailbreaking the device, a lot of the built-in security is lost because of the large number of kernel patches involved in building the jailbreak tool. Security structures like Apple Mobile File Integrity, Sandbox, the read-only root file system and trusted apps get disabled or otherwise tampered with in order to achieve the goals of the jailbreaking tool. This, in turn, creates potential security issues for the jailbreak user.
Jailbreak users are also often forced to stay on an older iOS version that is no longer supported by Apple, because newer versions usually cannot be jailbroken right away. This has the potential to introduce security issues, because these older versions have known security vulnerabilities, exploits, and published proofs of concept.
In March 2021, jailbreak developer GeoSn0w released a tweak called iSecureOS which can alert users to security issues found on their devices. The application works like antivirus software, in that it scans the files on the user's device and checks them against a database of known malware or unsafe repositories.
In June 2021, ESET Research confirmed that malware existed on one of the piracy repositories in the jailbreak community. The malware actively targeted iSecureOS to try to bypass the detection, but updates to the security app were quickly released and mitigated the malware.
Comparison to Android rooting
Jailbreaking of iOS devices has sometimes been compared to "rooting" of Android devices. Although both concepts involve privilege escalation, they do differ in scope.
Android rooting and jailbreaking are similar in that both are used to grant the owner of the device superuser system-level privileges, which may be transferred to one or more apps. However, unlike iOS phones and tablets, nearly all Android devices already offer an option to allow the user to sideload third-party apps onto the device without having to install from an official source such as the Google Play store. Many Android devices also provide owners the capability to modify or even replace the full operating system after unlocking the bootloader.
In contrast, iOS devices are engineered with security features including a "locked bootloader" which cannot be unlocked by the owner to modify the operating system without violating Apple's end-user license agreement. On iOS, until 2015, while corporations could install private applications on corporate phones, sideloading unsanctioned third-party apps from sources other than the App Store was prohibited for most individual users unless they purchased a developer membership. Since 2015, anyone can install third-party apps free of charge, although doing so requires a basic understanding of Xcode and of compiling iOS apps.
Jailbreaking an iOS device to defeat all these security restrictions presents a significant technical challenge. Similar to Android, alternative iOS app stores utilizing enterprise certificates are available, offering modified or pirated releases of popular applications and video games, some of which were either previously released through Cydia or are unavailable on the App Store due to these apps not complying with Apple developer guidelines.
Jailbreaking tools
Types of jailbreaking tools
Many different types of jailbreaks have come out over the years, differing in how and when the exploit is applied.
Untethered jailbreak
When a jailbroken device is booting, it initially loads Apple's own kernel. The device is then exploited and the kernel patched every time it is turned on. An untethered jailbreak is one that does not require any assistance when the device reboots: the kernel is patched without the help of a computer or an application. These jailbreaks are uncommon and take a significant amount of reverse engineering to create. For this reason, untethered jailbreaks have become much rarer, and Fugu14 is currently the only untethered jailbreak that supports recent iOS versions.
Tethered jailbreak
A tethered jailbreak is the opposite of an untethered jailbreak, in the sense that a computer is required to boot the device. Without a computer running the jailbreaking software, the iOS device will not be able to boot at all. While using a tethered jailbreak, the user can still restart or kill the device's SpringBoard process without needing to reboot. Many early jailbreaks were initially offered as tethered jailbreaks.
Semi-tethered jailbreak
This type of jailbreak allows a user to reboot the phone normally, but upon doing so the jailbreak and any modified code are effectively disabled, because the kernel is left unpatched. Any functionality independent of the jailbreak will still run as normal, such as making a phone call, texting, or using App Store applications. To run modified code with a patched kernel again, the device must be booted using a computer.
Semi-untethered jailbreak
This type of jailbreak is like a semi-tethered jailbreak in that the kernel is no longer patched after the device reboots, but the key difference is that the kernel can be re-patched without using a computer, usually by running an application installed on the device. This type of jailbreak has become increasingly popular, with most recent jailbreaks classified as semi-untethered.
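The four types described in this section differ along two axes: whether the device can boot unaided, and how the kernel patch is restored after a reboot. The following sketch (illustrative only; the field and type names are not taken from any real tool) summarizes the taxonomy:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class JailbreakType:
    name: str
    boots_unaided: bool         # can the device boot at all without a computer?
    patched_after_reboot: bool  # does the kernel stay patched across a reboot?
    repatch: str                # how the patch is restored after a reboot

TYPES = [
    JailbreakType("untethered",      True,  True,  "not needed"),
    JailbreakType("tethered",        False, False, "computer"),
    JailbreakType("semi-tethered",   True,  False, "computer"),
    JailbreakType("semi-untethered", True,  False, "on-device app"),
]

for t in TYPES:
    print(f"{t.name:16} boots unaided: {t.boots_unaided!s:5} "
          f"patched after reboot: {t.patched_after_reboot!s:5} "
          f"re-patch via: {t.repatch}")
```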
History of tools
JailbreakMe and AppSnapp
A few days after the original iPhone became available in July 2007, developers released the first jailbreaking tool for it, and soon a jailbreak-only game app became available. In October 2007, JailbreakMe 1.0 (also called "AppSnapp") allowed people to jailbreak iPhone OS 1.1.1 on both the iPhone and iPod touch, and it included Installer.app as a way to get software for the jailbroken device.
ZiPhone
In February 2008, Zibri released ZiPhone, a tool for jailbreaking iPhone OS 1.1.3 and iPhone OS 1.1.4.
PwnageTool
The iPhone Dev Team, which is not affiliated with Apple, released a series of free desktop-based jailbreaking tools. In July 2008 it released a version of PwnageTool to jailbreak the then-new iPhone 3G on iPhone OS 2.0 as well as the iPod touch, newly including Cydia as the primary third-party installer for jailbroken software. PwnageTool continued to be updated for untethered jailbreaks of newer iOS versions.
QuickPwn
In November 2008 the iPhone Dev Team released QuickPwn to jailbreak iPhone OS 2.2 on iPhone and iPod touch, with options to enable past functionality that Apple had disabled on certain devices.
redsn0w
After Apple released iPhone OS 3.0 in June 2009, the Dev Team published redsn0w as a simple jailbreaking tool for Mac and Windows, and also updated PwnageTool, which was primarily intended for expert users making custom firmware and was available only for Mac. The team continued to maintain redsn0w for jailbreaking most versions of iOS 4 and iOS 5 on most devices.
purplera1n & blackra1n
George Hotz developed the first iPhone unlock. In 2009, he released a jailbreaking tool for the iPhone 3GS on iPhone OS 3.0 called purplera1n, and blackra1n for iPhone OS version 3.1.2 on the 3rd generation iPod touch and other devices.
limera1n
In October 2010, George Hotz released limera1n, a low-level boot ROM exploit that permanently works to jailbreak the iPhone 4 and has been used as part of tools including redsn0w.
Spirit and JailbreakMe
Nicholas Allegra (better known as "comex") released a program called Spirit in May 2010. Spirit jailbroke devices including iPhones running iPhone OS 3.1.2 and 3.1.3 and the iPad running iPhone OS 3.2. In August 2010, comex released JailbreakMe 2.0, the first web-based tool to jailbreak the iPhone 4 (on iOS 4.0.1).
In July 2011, he released JailbreakMe 3.0, a web-based tool for jailbreaking all devices on certain versions of iOS 4.3, including the iPad 2 for the first time (on iOS 4.3.3). It used a flaw in PDF file rendering in mobile Safari.
Greenpois0n
Chronic Dev Team initially released Greenpois0n in October 2010, a desktop-based tool for jailbreaking iOS 4.1 and later iOS 4.2.1 on most devices including the Apple TV, as well as iOS 4.2.6 on CDMA (Verizon) iPhones.
redsn0w updates
As of December 2011, redsn0w included the "Corona" untether by pod2g for iOS 5.0.1 for iPhone 3GS, iPhone 4, iPad (1st generation), and iPod touch (3rd and 4th generation). As of June 2012, redsn0w also includes the "Rocky Racoon" untether by pod2g for iOS 5.1.1 on all iPhone, iPad, and iPod touch models that support iOS 5.1.1.
Absinthe
The iPhone Dev Team, Chronic Dev Team, and pod2g collaborated to release Absinthe in January 2012, a desktop-based tool to jailbreak the iPhone 4S for the first time and the iPad 2 for the second time, on iOS 5.0.1 for both devices and also iOS 5.0 for iPhone 4S. In May 2012 it released Absinthe 2.0, which can jailbreak iOS 5.1.1 untethered on all iPhone, iPad, and iPod touch models that support iOS 5.1.1, including jailbreaking the third-generation iPad for the first time.
evasi0n
The hackers known collectively as the evad3rs released an iOS 6.x jailbreak tool called "evasi0n", available for Linux, OS X, and Windows, on Monday, February 4, 2013, at noon Eastern Standard Time. Due to the high volume of interest in downloading the jailbreak utility, the site initially gave users download errors. When Apple upgraded its software to iOS 6.1.3, it permanently patched out the evasi0n jailbreak.
Sn0wbreeze
In April 2013, the latest version of Sn0wbreeze was released, adding support for tethered jailbreaking on A4 devices (i.e., devices no newer than the iPhone 4, iPad (1st generation), or iPod touch (4th generation)).
evasi0n7 and p0sixspwn
On December 22, 2013, the evad3rs released a new version of evasi0n that supports jailbreaking iOS 7.0.x, known as evasi0n7. On December 30, 2013, winocm, ih8sn0w and SquiffyPwn released p0sixspwn for untethering devices on iOS 6.1.3–6.1.5. Initially, it was necessary to jailbreak tethered using redsn0w and install p0sixspwn via Cydia. A few days later, on January 4, 2014, the same team released a version of p0sixspwn for jailbreaking using a computer.
iOS 7.1 patched the exploits used by evasi0n7, and on June 23, 2014, Pangu, a Chinese untethered jailbreak, was released for iOS 7.1.
Pangu8
On October 22, 2014, Pangu Team released Pangu8 to jailbreak all devices running iOS 8–8.1. The first versions did not bundle Cydia, nor was there an iOS 8 compatible version of Cydia at the time.
TaiG
On November 29, 2014, TaiG team released their jailbreak tool called "TaiG" for devices running iOS 8.0–8.1.1. On December 10, 2014, the app was updated to include support for iOS 8.1.2. On July 3, 2015, TaiG 2.3.0 was released, which includes support for iOS 8.0–8.4.
iH8sn0w
On September 10, 2015, six days before iOS 9 was released, iH8sn0w demonstrated a working exploit on his Twitter page, linking to a YouTube video.
Pangu9
On October 14, 2015, Pangu Team released Pangu9, their jailbreak tool for iOS 9.0 through 9.0.2. On March 11, 2016, Pangu Team updated their tool to support iOS 9.1 for 64-bit devices.
Pangu93
On July 17, 2016, Pangu Team released Pangu93, a semi-untethered jailbreak tool for iOS 9.2–9.3.3. It was the first semi-untethered jailbreak, delivered as a sideloaded app, and supported only 64-bit devices.
Home Depot
In mid-March 2017, jk9357 (aka @REALKJCMEMBER), part of the KJC (Kim Jong Cracks) hacking team, released the first semi-untethered jailbreak for 32-bit devices on 9.1–9.3.4, known as Home Depot. This jailbreak is based on the open-source Trident exploit released on GitHub by Benjamin Randazzo, which makes use of some of the Pegasus vulnerabilities.
yalu+mach_portal and extra_recipe
On December 21, 2016, well-known hacker Luca Todesco released a beta semi-untethered jailbreak tool for iOS 10.1 known as yalu+mach_portal for select 64-bit iOS devices. The jailbreak made use of the mach_portal exploit by Ian Beer of Google Project Zero. This version was extremely unstable and was only meant for developers. On January 26, 2017, with help from Marco Grassi, a more stable version for iOS 10.2 was released, eventually supporting all 64-bit iOS devices except the iPhone 7 and 7 Plus, which were only supported by the older, more unstable version for 10.1.1. This jailbreak made use of Ian Beer's extra_recipe exploit. Both jailbreaks are installed through a computer application known as Cydia Impactor, which allows signing of apps not in the App Store. Todesco said that the newer version would eventually be updated to support the iPhone 7 and 7 Plus up to iOS 10.1.1; however, he left the jailbreaking scene on March 28, abandoning both jailbreaks before that support was released.
Phoenix
On August 6, 2017, Siguza and tihmstar released the semi-untethered Phoenix jailbreak for 32-bit devices on iOS 9.3.5.
EtasonJB
On September 19, 2017, tihmstar released EtasonJB, a jailbreak for all 32-bit devices on iOS 8.4.1. An exploit was found so that users of the 9.3.5 Phoenix jailbreak could downgrade to 8.4.1 without the use of SHSH blobs, even though 8.4.1 is an unsigned version of iOS.
Saïgon
On October 15, 2017, the hacker Abraham Masri (known on Twitter as @cheesecakeufo) released the Saïgon jailbreak for all 64-bit devices on iOS 10.2.1.
Houdini
On December 6, 2017, Abraham Masri released Houdini, the first semi-jailbreak for iOS 10, compatible with iOS 10.x up to 10.3.2 for all 64-bit devices. The application allows installation of themes, resolution changes, and similar modifications from basic Cydia repositories. It survives a reboot, but the exploit must be run again to install content. The tool was updated to beta revision 1, adding compatibility with iOS 11 up to 11.1.2.
async_wake
On December 11, 2017, Ian Beer released async_wake, an exploit for iOS 11.0 through 11.1.2 that provides tfp0 (access to the kernel task port). The exploit has been updated and has enabled jailbreak-style apps on jailed devices, such as FilzaJailed and FilzaEscaped (which allow root access on a jailed device), MTerminal-Jailed by Ayden Panhuyzen (which allows the execution of commands on a jailed device) and Torngat by 1GamerDev (a tweak manager that can set a boot logo, change resolution, disable OTA updates, and more on a jailed device).
to.panga
On December 23, 2017, Abraham Masri released to.panga, a work-in-progress iOS 11 (up to 11.1.2) jailbreak for developers based on the async_wake exploit by Ian Beer. It included Cydia, but since the jailbreak used a KPPless exploit, Cydia and Substrate had to be rewritten. Abraham Masri has since stated that the project would not be updated due to other tools with the same goal.
h3lix
On December 24, 2017, tihmstar released h3lix, a semi-untethered jailbreak for all 32-bit devices, compatible with all iOS 10 versions.
doubleH3lix
Shortly after the release of h3lix, tihmstar and Siguza released doubleH3lix, a semi-untethered jailbreak for 64-bit devices (excluding the iPhone 7) on iOS 10.x.
LiberiOS
On December 26, 2017, the hacker Jonathan Levin (known on Twitter as @Morpheus) released LiberiOS, a WIP developer-only jailbreak based on the async_wake exploit by Ian Beer for iOS 11.0 to iOS 11.1.2. Jonathan Levin later stated that LiberiOS would not be officially released with Cydia.
Zimperium
On January 29, 2018, Zimperium, whose goal is to assess issues and locate a possible iOS sandbox escape, disclosed two crucial bugs found in iOS 11.2 up to 11.2.2. One is a major vulnerability that leads to arbitrary code execution in several crucial daemons; the second gives full relative control of the stack in CoreBluetooth (an ASLR bypass), leading to memory corruption (also on 11.2–11.2.2).
Electra
On February 26, 2018, developer CoolStar (known on Twitter as @CStar_OW) released a public version of the Electra jailbreak for iOS devices on 11.0–11.1.2. This jailbreak uses a KPPless exploit. As Cydia Substrate had not been updated for iOS 11, Electra used a Substrate alternative known as Substitute.
Houdini b3
On June 6, 2018, Abraham Masri released an update to Houdini, the first semi-jailbreak. The tool was updated to beta 3 revision 1, adding compatibility with iOS 11 up to 11.3.1.
Electra update
On July 6, 2018, developer CoolStar updated the Electra jailbreak for iOS devices on 11.0–11.3.1 (it previously only supported up to 11.1.2). It continued to use a KPPless exploit and Substitute in place of Cydia Substrate.
Meridian
On August 17, 2018, developer Ben Sparkes released a public version of Meridian, a jailbreak for iOS versions 10.x on all 64-bit devices.
Totally Not Spyware and JakeBlair420
On September 3, 2018, a web-based jailbreak called "Totally Not Spyware" was released by an unknown user named "JakeBlair420". It eventually came out that it was developed by FoxletFox, Ben Sparkes, littlelailo, siguza, stek29, and was based on the works of 5aelo, Ian Beer, Niklas B, Jamie Bishop, Luca Todesco, REALKJCMEMBER, Sam Bingner, sticktron, theninjaprawn, tihmstar, and xerub. It used a WebKit-based exploit to bootstrap either Meridian or doubleh3lix.
unc0ver iOS 11
On October 14, 2018, former Electra Team member and developer Pwn20wnd (known on Twitter as @Pwn20wnd) released a public version of the unc0ver jailbreak for iOS devices on 11.0–11.4.1. This jailbreak uses the three KPPless exploits provided by Ian Beer for iOS 11 (async_wake, empty_list and multi_path). Sam Bingner, Pwn20wnd's associate, also worked with Jay Freeman, the original developer of Cydia, to get Cydia Substrate updated for iOS 11; the update was released on December 17, 2018.
Osiris Jailbreak for iOS 12
On February 1, 2019, developer GeoSn0w released the first proof of concept for a jailbreak compatible with iOS 12.0 to iOS 12.1.2, called Osiris Jailbreak (OsirisJailbreak12). It was a developer-only jailbreak meant to demonstrate how a basic jailbreak could be achieved on iOS 12 using the new tfp0 kernel exploit released at the time by Google Project Zero security researcher Brandon Azad. OsirisJailbreak12 was later released as open source on GitHub but remained developer-only and thus incomplete.
unc0ver iOS 12
On February 22, 2019, developer Pwn20wnd released a public update for the unc0ver jailbreak for iOS devices on 12.0–12.1.2. This version of unc0ver uses exploits provided by tihmstar and SorryMyBad for iOS 12 (v1ntex, v3ntex, and voucher_swap). Pwn20wnd once again worked with Jay Freeman to get Cydia and Cydia Substrate updated to work on iOS 12.
Chimera
On April 29, 2019, the Electra team released Chimera, a public jailbreak for iOS 12.0–12.1.2, which included support for all devices, including devices with A12 chips, which had major security reinforcements compared to other devices. This jailbreak uses Substitute, an alternative to Cydia Substrate, to allow tweaks to run.
unc0ver updates
On June 12, 2019, developer Pwn20wnd released a public update to the unc0ver jailbreak, adding support for iOS 12.1.2–12.2. The jailbreak did not support A8 and A7 devices at first, but support for those devices on iOS 12.1.2–12.2 was later added.
On August 18, 2019, developer Pwn20wnd released a public update to the unc0ver jailbreak, adding support for iOS 12.4 following a mistake by Apple that reintroduced the vulnerabilities patched in 12.3. This version of unc0ver used the sock_puppet exploit, developed by Ned Williamson, and the sock_port exploit, developed by Jake James.
checkm8 and checkra1n
On September 27, 2019, security researcher axi0mX released an exploit for some iPhone boot ROMs, dubbed checkm8 (read as "checkmate"), that executes when the device is started up and cannot be patched by Apple. The exploit affects A5–A11 devices (iPhone 4s through iPhone X) and, due to its nature, supports any iOS version running on those devices. According to the developer, an exploit this powerful had not been released since 2010. Although checkm8 is purely an exploit, not a jailbreak tool, a semi-tethered jailbreak was achieved as of September 27, 2019, and a beta version of checkra1n, a jailbreak tool that uses checkm8, has been released for macOS and Linux, with a Windows version in the works.
unc0ver updates
On December 9, 2019, developer Pwn20wnd released a public update to the unc0ver jailbreak, adding support for iOS 12.4.1. This version of unc0ver used the AppleAVE2Driver exploit, developed by 08Tc3wBB, who submitted/sold the exploit to ZecOps for the Task-For-Pwn 0 Bounty.
EtasonATV
On January 22, 2020, tihmstar released EtasonATV, a jailbreak for Apple TV (3rd generation Rev A) on iOS 8.4.1–8.4.4.
On January 28, 2020, tihmstar released a public update to the EtasonATV jailbreak, to add support for Apple TV (3rd generation).
unc0ver iOS 13
On February 15, 2020, developer Pwn20wnd released a public update to the unc0ver jailbreak, adding support for iOS 13.0–13.3 for A12 and A13 devices. The jailbreak did not support A11 and older devices at first, but has since been updated to do so.
On May 20, 2020, developer Pwn20wnd released a public update to the unc0ver jailbreak, adding support for iOS 13.3.1–13.5 for all devices that support that version range.
On June 9, 2020, developer Pwn20wnd released a public update to the unc0ver jailbreak, adding support for iOS 13.5.5 beta 1.
Odyssey Jailbreak for iOS 13
On November 20, 2020, Odyssey Team released Odyssey, a public jailbreak for iOS 13.0–13.7 using the FreeTheSandbox exploit, developed by 08Tc3wBB, who submitted the exploit to ZecOps for the ZecOps Reverse Bounty. Exploit reliability on iOS 13.0–13.7 was improved for all A10 and newer devices; for A8(X) and A9(X) devices, Odyssey used the tardy0n exploit.
unc0ver iOS 14
On February 27, 2021, developer Pwn20wnd released a public update to the unc0ver jailbreak, adding support for iOS 14.0–14.3 for all devices that support that version range, alongside iOS 12.4.9–12.5.1 and 13.5.1–13.7.
On October 24, 2021, Pwn20wnd released a public update to the unc0ver jailbreak, adding support for iOS 14.4–14.5.1 for all arm64e devices (iPhone XS and later) that support that version range, using Fugu14 as a base.
On December 29, 2021, Pwn20wnd released a public update to the unc0ver jailbreak, adding support for iOS 14.6–14.8 for A12–A13 iPhones (XS through 11 Pro Max) only, with no iPad support, using an as-yet-unnamed exploit developed by security researcher pattern_f_.
Taurine
On April 1, 2021, Odyssey Team released Taurine, a public jailbreak for iOS 14.0-14.3 for all devices that support that version range via the cicuta_virosa exploit, developed by ModernPwner.
Fugu14
On October 24, 2021, Linus Henze released a public untethered jailbreak called Fugu14, which supports iOS 14.2-14.5.1 on A12 and later processors.
History of exploit-disabling patch releases
Apple has released various updates to iOS that patch exploits used by jailbreak utilities; these include a patch in iOS 6.1.3 for the software exploits used by the original evasi0n iOS 6–6.1.2 jailbreak, and a patch in iOS 7.1 for the exploits used by evasi0n7 on iOS 7.0–7.1 beta 3. Boot ROM exploits (exploits found in the hardware of the device) cannot be patched by Apple software updates, but can be fixed in hardware revisions such as new chips or entirely new hardware, as occurred with the iPhone 3GS in 2009.
On July 15, 2011, Apple released a new iOS version that closed the exploit used in JailbreakMe 3.0. The German Federal Office for Information Security had reported that JailbreakMe uncovered a "critical weakness" by which information could be stolen or malware unwittingly downloaded by iOS users clicking on maliciously crafted PDF files. Before Apple released a fix for this security hole, jailbreak users had access to a fix published by the developer of JailbreakMe.
On January 27, 2015, Apple released iOS 8.1.3, which patched the exploits used in the jailbreak for iOS 8.0–8.1.2. Jailbreaking was not possible again until the iOS 8.3 update.
On August 13, 2015, Apple updated iOS to 8.4.1, patching the TaiG exploit. The Pangu and TaiG teams both said they were working on exploiting iOS 8.4.1, and Pangu demonstrated such a possibility at WWDC 2015.
On September 16, 2015, iOS 9 was announced and made available; it was released with a new "Rootless" security system, dubbed a "heavy blow" to the jailbreaking community.
On October 21, 2015, seven days after the Pangu iOS 9.0–9.0.2 Jailbreak release, Apple pushed the iOS 9.1 update, which contained a patch that rendered it nonfunctional.
On August 4, 2016, Apple released iOS version 9.3.4 to patch the semi-untethered jailbreak for 64-bit devices.
On August 25, 2016, Apple released iOS version 9.3.5 to patch three critical vulnerabilities used by the NSO Group's Pegasus malware to jailbreak devices covertly.
On January 23, 2017, Apple released iOS 10.2.1 to patch jailbreak exploits released by Google for the Yalu iOS 10 jailbreak created by Luca Todesco.
On December 2, 2017, iOS 11.2 was released and included a patch to the async_wake exploit by Ian Beer of Google Project Zero.
On May 29, 2018, iOS 11.4 was released, including a patch to 2 exploits (empty_list and multi_path) by Ian Beer of Google Project Zero.
On January 22, 2019, iOS 12.1.3 was released, including a patch to an exploit (voucher_swap), found independently by both Brandon Azad of Google Project Zero, and Zhuo Liang of Qihoo 360 Nirvan Team.
On May 13, 2019, iOS 12.3 was released, including a patch to an exploit (sock_puppet) by Ned Williamson of Google Project Zero. However, on August 18, 2019, it became known that Apple had accidentally reintroduced these vulnerabilities in iOS 12.4, making it possible to jailbreak 12.4 with the same exploits as pre-12.3 versions.
On August 26, 2019, Apple released the 12.4.1 update, re-patching the exploit (sock_puppet) by Ned Williamson of Google Project Zero.
On November 10, 2019, a group of developers released a jailbreak named checkra1n, which exploits the boot ROM of devices with A5 through A11 chips. It was the first jailbreak to feature a boot ROM exploit in a decade. The tool utilizes the checkm8 exploit developed by axi0mX.
On December 10, 2019, Apple used DMCA takedown requests to remove a post from Twitter containing an encryption key that could potentially be used to reverse engineer the iPhone's Secure Enclave. Apple later retracted the claim, and the tweet was reinstated.
On June 1, 2020, Apple released the 13.5.1 update, patching the zero-day exploit used by the unc0ver jailbreak.
Legality
The legal status of jailbreaking is affected by laws regarding circumvention of digital locks, such as laws protecting digital rights management (DRM) mechanisms. Many countries do not have such laws, and some countries have laws including exceptions for jailbreaking.
International treaties have influenced the development of laws affecting jailbreaking. The 1996 World Intellectual Property Organization (WIPO) Copyright Treaty requires nations party to the treaty to enact laws against DRM circumvention. The American implementation is the Digital Millennium Copyright Act (DMCA), which includes a process for establishing exemptions for non-copyright-infringing purposes such as jailbreaking. The 2001 European Copyright Directive implemented the treaty in Europe, requiring member states of the European Union to implement legal protections for technological protection measures. The Copyright Directive includes exceptions allowing those measures to be broken for non-copyright-infringing purposes, such as jailbreaking to run alternative software, but member states vary in their implementation of the directive.
While Apple regards jailbreaking as a violation of its EULA and does not support it, jailbreaking communities have generally not been legally threatened by Apple. At least two prominent jailbreakers have been given positions at Apple, albeit in at least one case a temporary one. Apple has also regularly (though possibly somewhat jokingly) thanked jailbreak communities for detecting security holes in iOS release notes.
Apple's support article concerning jailbreaking claims that the company "may deny service for an iPhone, iPad, or iPod touch that has installed any unauthorized software." Therefore, owners of jailbroken devices, whether in or out of warranty, may be denied support from Apple.
Australia
In 2010, Electronic Frontiers Australia said that it is unclear whether jailbreaking is legal in Australia, and that anti-circumvention laws may apply. These laws had been strengthened by the Copyright Amendment Act 2006.
Canada
In November 2012, Canada amended its Copyright Act with new provisions prohibiting tampering with DRM protection, with exceptions including software interoperability. Jailbreaking a device to run alternative software is a form of circumventing digital locks for the purpose of software interoperability.
There had been several efforts from 2008–2011 to amend the Copyright Act (Bill C-60, Bill C-61, and Bill C-32) to prohibit tampering with digital locks, along with initial proposals for C-11 that were more restrictive, but those bills were set aside. In 2011, Michael Geist, a Canadian copyright scholar, cited iPhone jailbreaking as a non-copyright-related activity that overly-broad Copyright Act amendments could prohibit.
India
India's copyright law permits circumventing DRM for non-copyright-infringing purposes. Parliament introduced a bill including this DRM provision in 2010 and passed it in 2012 as the Copyright (Amendment) Act, 2012. India is not a signatory to the WIPO Copyright Treaty that requires laws against DRM circumvention, but being listed on the US Special 301 Report "Priority Watch List" applied pressure to develop stricter copyright laws in line with the WIPO treaty.
New Zealand
New Zealand's copyright law allows the use of technological protection measure (TPM) circumvention methods as long as the use is for legal, non-copyright-infringing purposes. This law was added to the Copyright Act 1994 as part of the Copyright (New Technologies) Amendment Act 2008.
Singapore
Jailbreaking might be legal in Singapore if done to provide interoperability and not circumvent copyright, but that has not been tested in court.
United Kingdom
The Copyright and Related Rights Regulations 2003 make circumventing DRM protection measures legal for the purpose of interoperability, but not for copyright infringement. Jailbreaking may be a form of circumvention covered by that law, but this has not been tested in court. Competition laws may also be relevant.
United States
The main law affecting the legality of iOS jailbreaking in the United States is the 1998 Digital Millennium Copyright Act (DMCA), which says that "no person shall circumvent a technological measure that effectively controls access to a work protected under" the DMCA; this language may apply to jailbreaking. Every three years, the law allows the public to propose exemptions for legitimate circumvention, which last three years if approved. In 2010 and 2012, the U.S. Copyright Office approved exemptions that allowed smartphone users to jailbreak their devices legally, and in 2015 the Copyright Office approved an expanded exemption that also covers other all-purpose mobile computing devices, such as tablets. It is still possible Apple may employ technical countermeasures to prevent jailbreaking or prevent jailbroken phones from functioning. It is unclear whether it is legal to traffic in the tools used to make jailbreaking easy.
In 2010, Apple announced that jailbreaking "can violate the warranty". This may be affected by the Magnuson–Moss Warranty Act.
Digital Millennium Copyright Act exemptions
In 2007, Tim Wu, a professor at Columbia Law School, argued that jailbreaking "Apple's superphone is legal, ethical, and just plain fun." Wu cited an explicit exemption issued by the Library of Congress in 2006 for personal carrier unlocking, which notes that locks "are used by wireless carriers to limit the ability of subscribers to switch to other carriers, a business decision that has nothing whatsoever to do with the interests protected by copyright" and thus do not implicate the DMCA. Wu did not claim that this exemption applies to those who help others unlock a device or "traffic" in software to do so.
In 2010, in response to a request by the Electronic Frontier Foundation, the U.S. Copyright Office explicitly recognized an exemption to the DMCA to permit jailbreaking in order to allow iPhone owners to use their phones with applications that are not available from Apple's store, and to unlock their iPhones for use with unapproved carriers. Apple had previously filed comments opposing this exemption and indicated that it had considered jailbreaking to be a violation of copyright (and by implication prosecutable under the DMCA). Apple's request to define copyright law to include jailbreaking as a violation was denied as part of the 2009 DMCA rulemaking. In their ruling, the Library of Congress affirmed on July 26, 2010, that jailbreaking is exempt from DMCA rules with respect to circumventing digital locks. DMCA exemptions must be reviewed and renewed every three years or else they expire.
On October 28, 2012, the US Copyright Office released a new exemption ruling. The jailbreaking of smartphones continued to be legal "where circumvention is accomplished for the sole purpose of enabling interoperability of [lawfully obtained software] applications with computer programs on the telephone handset." However, the U.S. Copyright office refused to extend this exemption to tablets, such as iPads, arguing that the term "tablets" is broad and ill-defined, and an exemption to this class of devices could have unintended side effects. The Copyright Office also renewed the 2010 exemption for unofficially unlocking phones to use them on unapproved carriers, but restricted this exemption to phones purchased before January 26, 2013. In 2015, these exemptions were extended to include other devices, including tablets.
BigBoss
BigBoss (previously TheBigBoss.org) is a software repository and blog for jailbreaking. It publishes news on the topic and allows developers to host tweaks and other extensions. It was one of the first repositories to exist for the Cydia package manager, launched in April 2008 by Sleepers Repository. It comes preinstalled with all copies of Cydia.
While the majority of its packages target older iOS versions, the repository still receives a steady flow of new packages. Originally founded by its pseudonymous developer and namesake, BigBoss, it is currently run by 0ptimo, who maintains it and ensures its reliability.
Software
In addition to the news and software hosting website, BigBoss also distributed many utilities that ran on jailbroken devices. These included SBSettings, BossPrefs, BossPaper, Categories, and Flashlight.
Blog history
On October 18, 2008, the site rebranded from Sleepers Repository to TheBigBoss.
On March 8, 2009, Cydia Store launched, allowing users to purchase software products from the repository. Previously, it had only consisted of free software.
In July 2014, the website was hacked, leaking the entire database and collection of packages (including paid products).
In November 2017, other significant early Cydia Store repositories, ModMy and ZodTTD, shut down, making BigBoss one of the last original repositories standing.
In December 2018, the Cydia Store shut down, meaning that no new packages could be purchased. Previously purchased packages, as well as free packages, can still be installed. Newer repositories have since created and used their own payment systems.
Risks
Security, privacy and stability
The first iPhone worm, iKee, appeared in early November 2009, created by a 21-year-old Australian student in the town of Wollongong. He told Australian media that he created the worm to raise awareness of security issues: jailbreaking allows users to install an SSH service, which those users can leave in the default insecure state. In the same month, F-Secure reported on a new malicious worm compromising bank transactions from jailbroken phones in the Netherlands, similarly affecting devices where the owner had installed SSH without changing the default password.
In 2010, blogger John Gruber, who is close to Apple, said that users misunderstood some jailbreak exploits and that they were more serious than they appeared. He commented that "it's odd how the press is mostly covering this as 'jailbreaking now more convenient' rather than 'remote code exploit now in the wild'", pointing out that the exploit allowed the creator of a malicious website to take control of iPhones accessing it. Restoring a device with iTunes removes a jailbreak; however, doing so generally updates the device to the latest, and possibly non-jailbreakable, version, due to Apple's use of SHSH blobs. Several applications aim to prevent this by restoring the device to the version it is currently running while removing the jailbreak; examples are Succession, Semi-Restore, and Cydia Eraser.
In 2012, Forbes staff analyzed a UCSB study of 1,407 free programs available from Apple and a third-party source. Of the 1,407 free apps investigated, 825 were downloaded from Apple's App Store using the website App Tracker, and 526 from BigBoss (Cydia's default repository). 21% of official apps tested leaked the device ID and 4% leaked location; unofficial apps leaked these in 4% and 0.2% of cases, respectively. 0.2% of apps from Cydia leaked photos and browsing history, while apps from the App Store leaked none. Unauthorized apps tended to respect privacy better than official ones. In addition, a program available in Cydia called PrivaCy allows users to control the upload of usage statistics to remote servers.
In August 2015, the KeyRaider malware was discovered, affecting only jailbroken iPhones.
Fake/scam jailbreaks
In recent years, due to the technical complexity and rarity of legitimate jailbreaking software, especially untethered jailbreaks, there has been an increase in fake websites that falsely claim to jailbreak iOS devices. These websites often ask for payment or make heavy use of advertising, but have no jailbreak to offer. Others install a fake, lookalike version of the Cydia package manager. In some cases, users have been asked to download free-to-play apps or fill out surveys to complete a (non-existent) jailbreak.
See also
Hacking of consumer electronics
iOS version history
Copyright Alert System

Copyright Alert System (CAS) was a voluntary industry effort to educate and penalize internet users who engaged in the unauthorized and unlawful distribution of copyrighted works via peer-to-peer file sharing services. The program was operated by the Center for Copyright Information, a consortium consisting of the Recording Industry Association of America (RIAA), the Motion Picture Association of America (MPAA), and the internet service providers AT&T, Cablevision, Comcast, Time Warner Cable, and Verizon.
The CAS used a graduated response system (also known as the "six strikes program"), in which participating ISPs would send warnings notifying subscribers of alleged copyright infringement, as reported by a monitoring service working on behalf of participating copyright owners. The system specified a six-step progression, from advice messages, to warnings that must be acknowledged by the user. After a fifth warning, ISPs were allowed to implement "mitigation measures", which could include penalties such as bandwidth throttling or preventing web access until customers "discuss the matter" with their ISP.
The CAS framework was established on July 7, 2011, after three years in the making. After multiple delays, ISPs began implementing it in late February 2013.
Because it did not substantially decrease piracy, the program was discontinued on January 30, 2017.
Overview
The Center for Copyright Information employed the services of MarkMonitor (often doing business as DtecNet) to detect and monitor suspected copyright infringement activity. Although MarkMonitor monitored many infringement venues, in early 2013 the Electronic Frontier Foundation (EFF) said it appeared that, for purposes of the CAS, the company would only monitor peer-to-peer traffic from public BitTorrent trackers. The Copyright Alert System did not use deep packet inspection.
A review of MarkMonitor's system for the CAS stated that only uploads were monitored, with infringement suspected when MarkMonitor's BitTorrent client successfully obtained pieces of known-infringing content from a peer in the swarm. The pieces were compared to pieces from an already-downloaded copy of the content, which had previously been matched to content samples supplied by the copyright owners.
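The piece-verification step described in this review can be sketched as follows. This is an illustrative reconstruction based only on the public description, not MarkMonitor's actual implementation; it relies on the fact that BitTorrent content is split into fixed-size pieces whose SHA-1 digests can be checked against a reference copy.

```python
import hashlib

def piece_hashes(data: bytes, piece_length: int) -> list:
    """SHA-1 digest of each fixed-size piece, as listed in a torrent's metainfo."""
    return [
        hashlib.sha1(data[i:i + piece_length]).digest()
        for i in range(0, len(data), piece_length)
    ]

def pieces_match_reference(received: dict, reference: bytes,
                           piece_length: int) -> bool:
    """True if every piece obtained from a peer (index -> bytes) matches the
    reference copy previously matched to a copyright owner's content sample."""
    expected = piece_hashes(reference, piece_length)
    return all(
        index < len(expected)
        and hashlib.sha1(chunk).digest() == expected[index]
        for index, chunk in received.items()
    )

# Example: a peer served piece 0 of a 32-byte "work" split into 16-byte pieces.
work = b"known infringing content sample!"
assert pieces_match_reference({0: work[:16]}, work, 16)
```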
When suspected infringement was detected, the ISP for the IP address associated with the suspicious activity was notified. The ISP, in turn, notified the subscriber to whom the IP address was assigned at the time of the alleged infringement, informing the subscriber that their account was suspected of being used to infringe copyright and warning of potential consequences. Users who continued to receive warnings could be issued up to six alerts, in the form of pop-up messages and e-mails, before stronger measures were taken by the ISP.
Intent
The Copyright Alert System was intended to be
The Center for Copyright Information believed that the Copyright Alert System would be effective in reducing both intentional and unintentional copyright infringement. By providing willing infringers with information regarding the serious consequences of copyright infringement, the CCI hoped to steer would-be infringers to legal online content providers. The CCI hoped that unwitting infringers, such as those whose home networks had been hijacked by unauthorized users or the parents of children who were infringing, would use the knowledge that their account was being used for copyright infringement to force users of the account to curtail these activities. The CCI also hoped to increase parents' involvement in what their children do online by alerting them to misconduct and abuse of their accounts.
The EFF claimed the materials used to "educate consumers" were "more like propaganda", and that the framework violated the principle of presumption of innocence.
Alerts
The system of alerts was as follows:
The first and second alerts notified ISP subscribers that their Internet account had allegedly been used for copyright infringement, provided an explanation of how to avoid future offenses, and directed users to lawful content sites.
If the suspicious behavior persisted, additional alerts were sent. These alerts asked the subscriber to acknowledge receipt of the messages by clicking a notice.
After a fifth alert, ISPs were allowed to take "mitigation measures" to prevent future infringement.
If the ISP did not institute a mitigation measure following the fifth alert, it had to enact one after the sixth alert.
Mitigation measures included "temporary reductions of Internet speeds, redirection to a landing page until the subscriber contacts the ISP to discuss the matter or reviews and responds to some educational information about copyright, or other measures (as specified in published policies) that the ISP may deem necessary to help resolve the matter". Mitigation measures varied by company. Verizon was the first to reveal its mitigation policy, which specified that violators with more than three alerts would be directed to online copyright law information; upon a fifth or sixth alert, the user would experience a temporary speed reduction to 256 kbit/s (a small fraction of typical broadband speeds). According to a leaked internal document, AT&T decided that after a sixth alert, a user's "access to many of the most frequently visited websites [would be] restricted" until they completed an "online educational tutorial on copyright". However, AT&T later explained that it would not cut users off from the internet even though such a measure appeared in the copyright warning letter. Its reasoning was that the six-strikes warnings were only allegations; if the alleged user was still suspected of illegal downloading after completing the tutorial and receiving all six warnings, nothing further would happen unless the copyright owner took legal action. Time Warner Cable stated that it would not discontinue customers' service, but would instead redirect users to educational pages on copyright law upon multiple violations.
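Read as a whole, the alert progression and mitigation rules above describe a simple per-subscriber state machine. The sketch below is a schematic reading of the published rules, not any ISP's actual system; the names and thresholds merely mirror the description above.

```python
from dataclasses import dataclass

MITIGATION_ALLOWED_AT = 5   # the ISP may mitigate after the fifth alert...
MITIGATION_REQUIRED_AT = 6  # ...and must mitigate after the sixth

@dataclass
class Subscriber:
    account_id: str
    alerts: int = 0
    mitigated: bool = False

def record_alert(sub: Subscriber) -> str:
    """Advance one step in the graduated response for a new infringement notice."""
    sub.alerts += 1
    if sub.alerts <= 2:
        return "educational notice sent"
    if sub.alerts < MITIGATION_ALLOWED_AT:
        return "acknowledgement-required warning sent"
    if sub.alerts >= MITIGATION_REQUIRED_AT and not sub.mitigated:
        sub.mitigated = True
        return "mitigation measure enacted (e.g. throttling or a landing page)"
    return "mitigation measure may be applied at the ISP's discretion"

sub = Subscriber("example-account")
for _ in range(6):
    print(sub.alerts + 1, record_alert(sub))
```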
The CCI said that at no point in the process would the ISPs reveal customer information to the CCI, the monitoring service, or the copyright owners.
The EFF said that because the system "will not identify those who take steps to anonymize their Internet traffic using a VPN or Tor" it would not stop dedicated infringers.
The EFF also argued that the mitigation measures to be undertaken by ISPs in response to alleged copyright infringement were overly harsh, as they could be used to deny consumers access to the Internet, which EFF maintained is as fundamental a right in the digital era as access to other utilities such as electricity. Annemarie Bridy wrote that although temporary suspension was a possible mitigation measure, market pressure would discourage ISPs from using it, instead relying on more educational measures. Mueller, Kuehn, and Santoso attributed the ISPs' adoption of the Copyright Alert System to its voluntary nature and the fact the ISPs were not required to terminate accounts under the Copyright Alert System.
Appeal process
The CAS process did not allow alerts to be challenged until a mitigation measure was about to be imposed. At that time, the subscriber could request, through a special link provided by their ISP, a special arbitration proceeding. The proceeding would be administered by the American Arbitration Association (AAA), and the arbitrator would be selected by the AAA. The process was "automated to the maximum extent practicable", with the intent of resolving the dispute within thirty days of filing.
The appeal had to be requested within 14 days of the alert triggering the mitigation measure. The subscriber had to pay a $35 filing fee, unless the fee was waived by the AAA. According to CCI Executive Director Jill Lesser, the $35 fee "defray[ed] a small part of the cost of the appeal and [was] intended to discourage frivolous appeals". In order to prevail, the subscriber had to successfully challenge a specified number of alerts (possibly more than one) to block the mitigation measure and receive a refund of the filing fee.
Subscribers could only challenge alerts on one or more of the following pre-defined grounds:
Misidentification of Account – The subscriber's account was incorrectly identified as a source of copyright infringement.
Unauthorized Use of Account – The alleged infringement was the result of unauthorized, unpreventable use of the subscriber's account by others, without the subscriber's knowledge.
Authorization – The subscriber's use of the work identified in the alert was specifically authorized in writing by the work's copyright owner or someone with a license to reproduce the work.
Fair Use – The subscriber's use of the work identified in the alert meets the U.S. legal definition of fair use.
Misidentification of File – The file "did not consist primarily of the alleged copyrighted work at issue but rather contained other non-infringing material".
Work Published Before 1923 – The work identified in the alert was published prior to 1923 and therefore is in the public domain in the United States.
In an appeal, the burden of proof was on the subscriber to demonstrate the elements of one of the above grounds for challenging an alert.
The choice of defenses allowed in an appeal was criticized. The defense of unauthorized use of account was not allowed if the user was a member of the subscriber's household or an invitee, and after the first assertion of this defense, the subscriber had to demonstrate by "clear and convincing evidence" that the subscriber "took reasonable steps to secure the account following the first occurrence of unauthorized use". Bridy wrote that this creates an obligation for subscribers to secure their Internet connection even when they had no legal or contractual obligation to do so. In an Ars Technica article, Cyrus Farivar said that the responsibility placed on the subscriber by the CAS for copyright infringement that occurred using the subscriber's connection would harm the availability of open Wi-Fi that small businesses make available to their customers. In a CCI blog post, Jill Lesser responded that many businesses with open Wi-Fi had business accounts, which were not subject to the CAS. Lesser also said that businesses offering open Wi-Fi through residential accounts would be subject to the CAS, but those accounts, like others subject to the CAS, would not be subject to termination. The defense of publication before 1923 was the only allowable challenge to the copyright status of a work, even though a work may have been out of copyright protection for other reasons. LaFrance criticized the presence of the "work published before 1923" defense but not other factual challenges to the copyright status of the work that she said could just as easily be resolved through the appeal process.
The subscriber's personal information was not shared with the copyright holder in the appeal process unless the subscriber asserted the defense of authorization and the arbitrator deemed disclosure necessary for the copyright holder to respond to the defense. Bridy favorably compared the privacy of an appeal proceeding to the previous system of "mass John Doe litigation". The appeals process involved no hearings and no discovery, although the arbitrator could demand additional information from either party.
The CCI said that subscribers may also challenge alerts or mitigation measures in court; Bridy presumed that such a challenge could be made in the form of a request for declaratory judgment that the subscriber did not infringe.
Mary LaFrance wrote that the system prevented any party from introducing a decision from the appeal process into any judicial proceeding. According to the CCI, the failure of a subscriber to appeal a copyright alert under the Copyright Alert System "will not be construed as an admission or waiver in any other proceeding".
Bridy wrote that the copyright law applied in independent reviews would be determined by an expert commissioned by the American Arbitration Association and approved by the CCI, who was required to apply "prevailing legal principles" of copyright law as viewed by copyright owners. Bridy suggested that the legal standards in an independent review would call into question the fairness of the proceeding, citing what she said were inaccurate statements from the RIAA on copyright. Bridy also said that the lack of written decisions in the independent review process would make it difficult to determine what the rules are for independent review or whether they are being followed.
The EFF said that the mitigation measures could be imposed without any due process, placing the burden of proof on consumers to show that allegations of copyright infringement are unfounded. The organization recommended that, in the absence of copyright infringement detection systems which have been demonstrated to be entirely accurate, the burden of proof be placed on the content providers to show copyright infringement prior to the mitigation measures. Bridy wrote that IP address collection and content identification methods should be held to a much higher standard (including review by an independent expert with no contractual obligations to the CCI nor non-disclosure obligations), especially since an alert itself could trigger a sanction.
The EFF argued that the procedure for objecting to an alert was unfairly burdensome to the consumer and biased towards the media companies. Bridy said that because the Copyright Alert System arose from a mass consumer contract, the potential for the repeat-player effect to jeopardize the neutrality of the arbitrators was present. However, Bridy wrote that the independent review process was optional, and users were free to go to court to remove alerts or challenge mitigation measures, unlike in other consumer contracts where arbitration was mandatory. Although Bridy said that few subscribers would go to court owing to the possibility of a large damages award, this alternative allowed for full due process safeguards.
According to a May 2014 report by the Center for Copyright Information, from the implementation of the Copyright Alert System through the end of 2013, there were 265 appeals, representing 0.27% of the alerts triggering a mitigation measure for which the appeal process was available. Of the appeals, 47 (17.7%) were successful, the "vast majority" of which involved the "unauthorized use of account" defense. In none of the appeals was it determined that an account or copyrighted work was misidentified.
History
The Copyright Alert System framework was devised by the following companies and organizations:
MPAA members Walt Disney Studios Motion Pictures, Paramount Pictures, Sony Pictures Entertainment, Twentieth Century Fox Film Corporation, Universal Studios, and Warner Brothers Entertainment.
IFTA
RIAA members Universal Music Group, Warner Music Group, Sony Music Entertainment, and EMI Music.
A2IM
The ISPs AT&T, Cablevision, Comcast, Time Warner Cable, and Verizon.
New York Governor Andrew Cuomo is credited with helping the parties negotiate and create the Copyright Alert System. In April 2013, IFPI CEO Frances Moore indicated that the role of the United States Government in bringing content owners and ISPs together to fight copyright infringement was greater than previously acknowledged. Specifically, she claimed Vice President Joe Biden and "Copyright Czar" Victoria Espinel were critical players in "pushing the parties to come to some type of agreement".
The negotiating parties completed a Memorandum of Understanding in July 2011 and amended it in August 2011 and October 2012. The agreement establishes the framework for the Copyright Alert System, establishing the roles and expectations of the ISPs, the copyright owners, and the monitoring services.
The Copyright Alert System launched in late February 2013.
The EFF cited a number of problems with the Memorandum of Understanding. Firstly, the EFF criticized the lack of subscriber representation in the CCI, noting that the establishment of the CCI involved only large media corporations and ISPs, but left out representatives of the subscribers who purchase these companies' products. Further, the CCI only allowed for a three-person advisory board from consumer interest groups, while the executive board is composed of six representatives from the media companies and ISPs, an arrangement which the EFF believed to be inadequate for consumer representation.
Technology review
The CCI contracted with Stroz Friedberg, a global digital risk management and investigations firm, to conduct an analysis of MarkMonitor's system for monitoring, verifying, and enforcing online copyright infringement on P2P file sharing networks. Stroz Friedberg assessed the efficacy of MarkMonitor's methodologies to monitor, identify, collect evidence, and generate notices to P2P infringers. The contractor published the findings in a confidential report to the CCI in November 2012 (i.e., before the system launched), concluding that the methodologies were "well developed and robust", though not without room for improvement.
The publicly released version of the report contained a number of redactions of various details. Among the report's conclusions were the following claims:
"MarkMonitor's methodologies effectively identify P2P online copyright infringers".
"MarkMonitor's evidence collection in connection with P2P infringement is robust, defensible, and will withstand adverse party scrutiny or evidentiary challenges".
"The methodologies include appropriate checks and balances at key points in the workflow to ensure accuracy".
Although both the CCI and Stroz Friedberg claimed the review was independent, the CCI did not release the Stroz Friedberg report until after media reports surfaced that Stroz Friedberg had been an RIAA lobbying organization from 2004 to 2009. The firm's sole lobbyist for the RIAA (and, briefly, Universal Music Group) was Beryl A. Howell, the company's executive vice president, executive managing director, and general counsel. Howell was previously involved in crafting numerous pieces of legislation for the investigation and prosecution of computer crime and copyright infringement.
In response to the "questions" raised, the CCI issued a statement reaffirming confidence in Stroz Friedberg's propriety and announcing the public release of the report, but also conceding,
The CCI also hired Stroz Friedberg to review the accuracy of the system by which ISPs match the IP addresses of suspected infringers to the ISPs' subscribers. The results of that review have not yet been publicly released.
In August 2013, the CCI announced it had hired Avi Rubin's high-tech litigation consulting firm Harbor Labs to conduct a follow-up review. In March 2014, Harbor Labs completed a limited review of MarkMonitor's system, concluding that the technical design was generally sound, but that the company needed to improve its safeguards against human error and internal malfeasance. The existence of the report was publicly disclosed when the CCI posted the report's executive summary eight months later. The report itself will not be released because it contains confidential information.
Reception and controversy
The White House supported the Memorandum of Understanding, calling it "a positive step and consistent with our strategy of encouraging voluntary efforts to strengthen online intellectual property enforcement and with our broader Internet policy principles, emphasizing privacy, free speech, competition and due process".
However, the CAS was criticized for a perceived lack of public input, perverse incentives between media corporations and ISPs, harsh mitigation measures, and reversal of burden of proof standards. The EFF made particularly detailed criticisms.
In March 2013, a Freedom of Information Act request related to the White House's role in the policy was denied by a Washington, D.C. federal court. Chris Soghoian, a policy analyst at the American Civil Liberties Union (ACLU), had submitted the request to obtain information about the interaction in creating the policy. It was denied in a summary judgment by Judge Royce C. Lamberth. Lamberth's judgment rested on an exemption for drafts and "...reducing the possibility of misleading the public by disclosing documents that suggest certain reasons for a future decision that do not ultimately bear upon that decision".
In 2012, the CEO of the California ISP Sonic.net told TorrentFreak that his ISP would not be participating in the Copyright Alert System, saying that ISPs are not equipped to police the actions of individuals, and that the MPAA and RIAA had not invited small, independent ISPs to participate. Similarly, a representative of Cox Communications told TorrentFreak that they "have decided not to participate for internal reasons". Before the CAS's launch, concern existed that data collected for the CAS would be used for more than just the stated "educational" purposes. The first draft of the CCI's Memorandum of Understanding stated that an ISP would be responsible for providing information collected under the CAS program to a Content Owner "if it elects to initiate a legal action against the subscriber". The final agreed Memorandum of Understanding replaced the term "legal action" with "copyright infringement action". [footnote 12 – MOU Art 4 g iv.] Two months after the CAS's implementation, Verizon was subpoenaed to provide a third-party plaintiff with information on its users which it had recorded for CAS purposes. The plaintiff, Malibu Media, argued that "DMCA notices and six strike alerts are relevant because these notices may prove a pattern of infringement and/or notice that infringement is occurring could be used to show a pattern of infringing behaviour". The Northern District Court of Texas agreed and compelled Verizon to give Malibu Media this information. With this information, Malibu Media successfully litigated the first BitTorrent copyright infringement case in American legal history to conclude with a verdict.
Open wireless networks
Tech law professor Derek Bambauer of the University of Arizona told Ars Technica that the Copyright Alert System "is fundamentally flawed". Bambauer says that injustice could result where a user legally downloads an entire work under the doctrine of fair use, but the CAS still treats the user as an infringer.
The EFF said that the CAS will hinder the Open Wireless Movement's goals. Jill Lesser, Executive Director of the Center for Copyright Information, said that the CAS will affect small businesses that use a residential connection for open Wi-Fi: "Depending on the type of Internet service they subscribe to, very small businesses like a home-office or a local real estate office may have an Internet connection that is similar from a network perspective to a residential connection... The practical result is that if an employee of the small business, or someone using an open Wi-Fi connection at the business, engages in infringing activity the primary account owner would receive alerts".
Effects since implementation
According to Jill Lesser, the Executive Director of the CCI, the CAS's first year was successful because the CAS is "intended to be education based". Lesser indicated that changes for 2014 might include expanding the program to more Internet service providers and beginning to include other forms of copyrighted media, such as software. In the first year of the CAS's existence, US traffic to The Pirate Bay increased by 31 percent. In June 2013, Warner Brothers Studios began sending warnings to users whose internet service was provided by an entity other than one of the five ISP partners of the CCI. In August, Variety revealed that engineers at Comcast and NBC Universal were developing technology designed to recognize the illegal downloading of copyrighted material by its users and launch a pop-up window with links to legal access to such content. This was not a CCI-backed initiative, but rather was viewed as a potential supplement to the CAS, as Comcast had begun approaching other content owners and ISPs to explore the concept with them. In the CAS's first year, Comcast, the nation's largest ISP, sent out 625,000 warnings to its users. In September 2013, in what was largely seen as another supplement to the CAS, AT&T applied for a patent for technology that would allow the company to "prevent bandwidth abuse" by tracking the content its users share via BitTorrent and other P2P networks in order to charge its users for excessive bandwidth use.
See also
Chilling effect (law)
References
External links
Richwine, Lisa, "Internet providers to act against online pirates", Reuters, July 7, 2011. Retrieved November 6, 2011.
Challis, Ben, "US content industry and ISPs to inform and alert", The 1709 Blog, July 8, 2011. Retrieved November 6, 2011.
Suarez, Paul, "Copyright Cops Team with ISPs to Crack Down on Music, Movie Pirates", PCWorld.com, July 7, 2011. Retrieved November 6, 2011.
Kravets, David, "ISPs to Disrupt Internet Access of Copyright Scofflaws", Wired, July 7, 2011. Retrieved November 6, 2011.
Computer law
Intellectual property law
Peer-to-peer file sharing
Internet terminology |
37822732 | https://en.wikipedia.org/wiki/History%20of%20network%20traffic%20models | History of network traffic models | Design of robust and reliable networks and network services relies on an understanding of the traffic characteristics of the network. Throughout history, different models of network traffic have been developed and used for evaluating existing and proposed networks and services.
Demands on computer networks are not entirely predictable. Performance modeling is necessary for deciding the quality of service (QoS) level. Performance models in turn, require accurate traffic models that have the ability to capture the statistical characteristics of the actual traffic on the network. Many traffic models have been developed based on traffic measurement data. If the underlying traffic models do not efficiently capture the characteristics of the actual traffic, the result may be the under-estimation or over-estimation of the performance of the network. This impairs the design of the network. Traffic models are hence, a core component of any performance evaluation of networks and they need to be very accurate.
“Teletraffic theory is the application of mathematics to the measurement, modeling, and control of traffic in telecommunications networks. The aim of traffic modeling is to find stochastic processes to represent the behavior of traffic. Working at the Copenhagen Telephone Company in the 1910s, A. K. Erlang famously characterized telephone traffic at the call level by certain probability distributions for arrivals of new calls and their holding times. Erlang applied the traffic models to estimate the telephone switch capacity needed to achieve a given call blocking probability. The Erlang blocking formulas had tremendous practical interest for public carriers because telephone facilities (switching and transmission) involved considerable investments. Over several decades, Erlang’s work stimulated the use of queuing theory, and applied probability in general, to engineer the public switched telephone network. Teletraffic theory for packet networks has seen considerable progress in recent decades. Significant advances have been made in long-range dependence, wavelet, and multifractal approaches. At the same time, traffic modeling continues to be challenged by evolving network technologies and new multimedia applications. For example, wireless technologies allow greater mobility of users. Mobility must be an additional consideration for modeling traffic in wireless networks. Traffic modeling is an ongoing process without a real end. Traffic models represent our best current understanding of traffic behavior, but our understanding will change and grow over time.”
Network traffic models usage
Measurements are useful and necessary for verifying actual network performance. However, measurements do not have the level of abstraction that makes traffic models useful. Traffic models can be used for hypothetical problem solving, whereas traffic measurements only reflect current reality. In probabilistic terms, a traffic trace is a realization of a random process, whereas a traffic model is a random process. Thus, traffic models have universality: a traffic trace gives insight about a particular traffic source, but a traffic model gives insight about all traffic sources of that type.
Traffic models have three major uses. The first is to properly dimension network resources for a target level of QoS. As mentioned earlier, Erlang developed models of voice calls to estimate the telephone switch capacity needed to achieve a target call blocking probability. Similarly, models of packet traffic are needed to estimate the bandwidth and buffer resources required to provide acceptable packet delays and packet loss probability. Knowledge of the average traffic rate is not sufficient: it is known from queuing theory that queue lengths increase with the variability of traffic. Hence, an understanding of traffic burstiness or variability is needed to determine sufficient buffer sizes at nodes and link capacities.
The second major use of traffic models is to verify network performance under specific traffic controls. For example, given a packet scheduling algorithm, it is possible to evaluate the network performance resulting from different traffic scenarios. For another example, a popular area of research is improvements to the TCP congestion avoidance algorithm. It is critical that any such algorithm be stable and allow multiple hosts to share bandwidth fairly, while sustaining a high throughput. Effective evaluation of the stability, fairness, and throughput of new algorithms would not be possible without realistic source models.
The third major use of traffic models is admission control. In particular, connection-oriented networks such as ATM depend on admission control to block new connections in order to maintain QoS guarantees. A simple admission strategy could be based on the peak rate of a new connection: a new connection is admitted if the available bandwidth is greater than the peak rate. However, that strategy would be overly conservative, because a variable bit-rate connection may need significantly less bandwidth than its peak rate. A more sophisticated admission strategy is based on effective bandwidths: the source traffic behavior is translated into an effective bandwidth between the peak rate and average rate, which is the specific amount of bandwidth required to meet a given QoS constraint. The effective bandwidth depends on the variability of the source.
Network traffic models steps
Traffic modeling consists of three steps:
(i) selection of one or more models that may provide a good description of the traffic type
(ii) estimation of parameters for the selected models
(iii) statistical testing for selection of one of the considered models and analysis of its suitability to describe the traffic type under analysis.
Parameter estimation is based on a set of statistics (e.g. mean, variance, density function or auto covariance function, multifractal characteristics) that are measured or calculated from observed data. The set of statistics used in the inference process depends on the impact they may have in the main performance metrics of interest.
Network traffic models parameter
In recent years several types of traffic behavior, that can have significant impact on network performance, were discovered: long-range dependence, self-similarity and, more recently, multifractality.
There are two major parameters generated by network traffic models: packet length distributions and packet inter-arrival distributions. Other parameters, such as routes and the distribution of destinations, are of less importance. Simulations that use traces generated by network traffic models usually examine a single node in the network, such as a router or switch; factors that depend on specific network topologies or routing information are specific to those topologies and simulations. The problem of packet size distribution is fairly well understood today, and existing models of packet sizes have proven to be valid and simple. Most packet size models do not consider the problem of order in packet sizes. For example, a TCP segment in one direction is likely to be followed by a tiny ACK in the other direction about half of one round-trip time (RTT) later. The problem of packet inter-arrival distribution is much more difficult. Understanding of network traffic has evolved significantly over the years, leading to a series of evolutions in network traffic models.
Self-similar traffic models
One of the earliest objections to self-similar traffic models was the difficulty in mathematical analysis. Existing self-similar models could not be used in conventional queuing models. This limitation was rapidly overturned and workable models were constructed. Once basic self-similar models became feasible, the traffic modeling community settled into the “detail” concerns. TCP’s congestion control algorithm complicated the matter of modeling traffic, so solutions needed to be created. Parameter estimation of self-similar models was always difficult, and recent research addresses ways to model network traffic without fully understanding it.
Fractional Brownian motion:
When self-similar traffic models were first introduced, there were no efficient, analytically tractable processes to generate the models. Ilkka Norros devised a stochastic process for a storage model with self-similar input and constant bit-rate output. While this initial model was continuous rather than discrete, the model was effective, simple, and attractive.
SWING:
All self-similar traffic models suffer from one significant drawback: estimating the self-similarity parameters from real network traffic requires huge amounts of data and takes extended computation. The most modern method, wavelet multi-resolution analysis, is more efficient, but still very costly. This is undesirable in a traffic model. SWING uses a surprisingly simple model for the network traffic analysis and generation. The model examines characteristics of users, Request-Response Exchanges (RREs), connections, individual packets, and the overall network. No attempt is made to analyze self-similarity characteristics; any self-similarity in the generated traffic comes naturally from the aggregation of many ON/OFF sources.
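The following is a minimal sketch of this idea in Python, assuming NumPy is available; the Pareto shape, source rate, and source count are illustrative choices, not values taken from SWING itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def on_off_source(horizon, alpha=1.4, rate=1.0):
    """Return an activity vector for one ON/OFF source.

    ON and OFF period lengths are drawn from a heavy-tailed
    Pareto distribution with shape alpha (illustrative values).
    """
    activity = np.zeros(horizon)
    t, on = 0, rng.random() < 0.5
    while t < horizon:
        period = int(np.ceil(rng.pareto(alpha) + 1.0))
        if on:
            activity[t:t + period] = rate
        t += period
        on = not on
    return activity

# Superpose many independent sources; burstiness survives aggregation,
# unlike the smoothing predicted by classical Poisson-based models.
aggregate = sum(on_off_source(10_000) for _ in range(50))
```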
Pareto distribution process:
The Pareto distribution process produces independent and identically distributed (IID) inter-arrival times. In general, if X is a random variable with a Pareto distribution, then the probability that X is greater than some number x is given by P(X > x) = (x/x_m)^(-k) for all x ≥ x_m, where k is a positive parameter and x_m is the minimum possible value of X. The probability distribution and density functions are represented as:
F(t) = 1 – (α/t)^β where α, β ≥ 0 and t ≥ α
f(t) = βα^β t^(-β-1)
The parameters β and α are the shape and location parameters, respectively. The Pareto distribution is applied to model self-similar arrivals in packet traffic. It is also referred to as the double exponential or power-law distribution. Other important characteristics of the model are that the Pareto distribution has infinite variance when β ≤ 2, and infinite mean when β ≤ 1.
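The inverse-transform method gives a simple way to sample inter-arrival times from the distribution function above. A minimal sketch, assuming NumPy; the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def pareto_interarrivals(n, alpha=0.01, beta=1.2):
    """Draw n IID Pareto inter-arrival times by inverse transform.

    F(t) = 1 - (alpha/t)**beta for t >= alpha, so
    t = alpha * (1 - u)**(-1/beta) for uniform u in [0, 1).
    """
    u = rng.random(n)
    return alpha * (1.0 - u) ** (-1.0 / beta)

gaps = pareto_interarrivals(100_000)
arrival_times = np.cumsum(gaps)
# With 1 < beta <= 2 the mean is finite but the variance is infinite,
# which is what makes the aggregate traffic bursty at all time scales.
```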
Weibull distribution process:
The Weibull distributed process is heavy-tailed and can model the fixed rate in ON period and ON/OFF period lengths, when producing self-similar traffic by multiplexing ON/OFF sources. The distribution function in this case is given by:
F(t) = 1 – e^(-(t/β)^α), t > 0
and the density function of the Weibull distribution is given as:
f(t) = αβ^(-α) t^(α-1) e^(-(t/β)^α), t > 0
where the parameters β > 0 and α > 0 are the scale and shape parameters, respectively.
For certain shape values the Weibull distribution is close to a normal distribution. For α ≤ 1 the density function of the distribution is L-shaped, and for values of α > 1 it is bell-shaped. The distribution gives a failure rate that decreases with time for α < 1; at α = 1 the failure rate is constant and the lifetimes are exponentially distributed; for α > 1 the failure rate increases with time.
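A similar inverse-transform sketch can draw Weibull-distributed ON/OFF period lengths from the distribution function above (Python with NumPy; parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def weibull_periods(n, alpha=0.6, beta=10.0):
    """Draw n Weibull period lengths by inverse transform.

    F(t) = 1 - exp(-(t/beta)**alpha), so
    t = beta * (-log(1 - u))**(1/alpha) for uniform u in [0, 1).
    With shape alpha < 1 the distribution is heavy-tailed, suitable
    for ON/OFF period lengths when building self-similar traffic.
    """
    u = rng.random(n)
    return beta * (-np.log(1.0 - u)) ** (1.0 / alpha)

on_periods = weibull_periods(1_000)
off_periods = weibull_periods(1_000)
```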
Autoregressive models:
The autoregressive model is one of a group of linear prediction formulas that attempt to predict an output y_n of a system based on a set of previous outputs {y_k}, where k < n, and on inputs x_n and {x_k}, where k < n. Several variations of the model exist, differing in the way the prediction is computed. A model that depends only on the previous outputs of the system is referred to as an autoregressive model. A model that depends only on the inputs to the system is referred to as a Moving Average Model (MAM). Finally, autoregressive moving average models are those that depend on both the inputs and the outputs for prediction of the current output. The autoregressive model of order p, denoted AR(p), has the following form:
X_t = R_1 X_(t-1) + R_2 X_(t-2) + ... + R_p X_(t-p) + W_t
where W_t is white noise, the R_i are real numbers, and the X_t are prescribed correlated random variables. The auto-correlation function of the AR(p) process consists of damped sine waves, depending on whether the roots (solutions) of the model are real or imaginary. The discrete autoregressive model of order p, denoted DAR(p), generates a stationary sequence of discrete random variables with a given probability distribution and with an auto-correlation structure similar to that of the autoregressive model of order p.[3]
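A minimal simulation of the AR(p) recurrence above, assuming NumPy and Gaussian white noise; the coefficients are illustrative and chosen to keep the process stable:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_ar(coeffs, n, sigma=1.0):
    """Simulate an AR(p) process X_t = sum_i R_i * X_(t-i) + W_t.

    coeffs are the R_i; W_t is Gaussian white noise with std sigma.
    For stability, the roots of the characteristic polynomial should
    lie inside the unit circle (true for the example below).
    """
    p = len(coeffs)
    x = np.zeros(n + p)
    for t in range(p, n + p):
        # x[t-p:t][::-1] is [x[t-1], x[t-2], ..., x[t-p]]
        x[t] = np.dot(coeffs, x[t - p:t][::-1]) + rng.normal(0.0, sigma)
    return x[p:]

# An AR(2) trace that could serve as a correlated traffic-rate model.
trace = simulate_ar([0.6, 0.3], 5_000)
```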
Regression models:
Regression models explicitly define the next random variable in the sequence in terms of previous ones within a specified time window, plus a moving average of white noise.[5]
TES models :
Transform-expand-sample (TES) models are non-linear regression models with modulo-1 arithmetic. They aim to capture both auto-correlation and marginal distribution of empirical data. TES models consist of two major TES processes: TES+ and TES–. TES+ produces a sequence which has positive correlation at lag 1, while TES– produces a negative correlation at lag 1.
Non-self-similar traffic models
Early traffic models were derived from telecommunications models and focused on simplicity of analysis. They generally operated under the assumption that aggregating traffic from a large number of sources tended to smooth out bursts; that burstiness decreased as the number of traffic sources increased.
Poisson distribution model:
One of the most widely used and oldest traffic models is the Poisson Model. The memoryless Poisson distribution is the predominant model used for analyzing traffic in traditional telephony networks. The Poisson process is characterized as a renewal process. In a Poisson process the inter-arrival times are exponentially distributed with a rate parameter λ: P{A_n ≤ t} = 1 – exp(-λt). The Poisson distribution is appropriate if the arrivals are from a large number of independent sources, referred to as Poisson sources. The distribution has a mean and variance equal to the parameter λ.
The Poisson distribution can be visualized as a limiting form of the binomial distribution, and is also used widely in queuing models. There are a number of interesting mathematical properties exhibited by Poisson processes. Primarily, superposition of independent Poisson processes results in a new Poisson process whose rate is the sum of the rates of the independent Poisson processes. Further, the independent increment property renders a Poisson process memoryless. Poisson processes are common in traffic application scenarios that consist of a large number of independent traffic streams. The usage stems from Palm's theorem, which states that under suitable conditions, a large number of independent multiplexed streams approaches a Poisson process as the number of processes grows, while the individual rates decrease in order to keep the aggregate rate constant. Traffic aggregation need not always result in a Poisson process, however. The two primary assumptions that the Poisson model makes are:
1. The number of sources is infinite
2. The traffic arrival pattern is random.
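A minimal sketch of the model, assuming NumPy: inter-arrival times are drawn from the exponential distribution above, and the superposition property can be checked by merging two independent streams (the rates are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def poisson_arrivals(rate, horizon):
    """Generate Poisson arrival times on [0, horizon).

    Inter-arrival times A_n are IID exponential with parameter rate,
    i.e. P{A_n <= t} = 1 - exp(-rate * t).
    """
    times = []
    t = rng.exponential(1.0 / rate)
    while t < horizon:
        times.append(t)
        t += rng.exponential(1.0 / rate)
    return np.array(times)

# Superposition of independent Poisson streams is again Poisson,
# with rate equal to the sum of the component rates.
merged = np.sort(np.concatenate([poisson_arrivals(2.0, 1_000),
                                 poisson_arrivals(3.0, 1_000)]))
```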
Compound Poisson traffic models:
In the compound Poisson model, the base Poisson model is extended to deliver batches of packets at once. The inter-batch arrival times are exponentially distributed, while the batch size is geometric. Mathematically, this model has two parameters: λ, the arrival rate, and ρ in (0,1), the batch parameter. The mean number of packets in a batch is 1/ρ, while the mean inter-batch arrival time is 1/λ. The mean number of packet arrivals over a time period t is tλ/ρ.
The compound Poisson model shares some of the analytical benefits of the pure Poisson model: the model is still memoryless, aggregation of streams is still (compound) Poisson, and the steady-state equation is still reasonably simple to calculate, although varying batch parameters for differing flows would complicate the derivation.
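A minimal sketch of the batch-arrival process, assuming NumPy; the values of λ and ρ are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def compound_poisson_batches(lam, rho, horizon):
    """Batch arrivals: exponential inter-batch gaps (rate lam) and
    geometric batch sizes with parameter rho (mean size 1/rho).

    Returns (arrival_time, batch_size) pairs; the long-run packet
    rate is lam / rho, matching the mean t*lam/rho over a period t.
    """
    batches = []
    t = rng.exponential(1.0 / lam)
    while t < horizon:
        batches.append((t, rng.geometric(rho)))
        t += rng.exponential(1.0 / lam)
    return batches

traffic = compound_poisson_batches(lam=1.5, rho=0.25, horizon=1_000)
```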
Markov and Embedded Markov Models:
Markov models attempt to model the activities of a traffic source on a network by a finite number of states. The accuracy of the model increases with the number of states used; however, the complexity of the model also increases proportionally with the number of states. An important aspect of the Markov model, the Markov property, states that the next (future) state depends only on the current state. In other words, the probability of the next state, denoted by some random variable X_(n+1), depends only on the current state, indicated by X_n, and not on any earlier state X_i, where i < n. The set of random variables referring to different states {X_n} is referred to as a discrete Markov chain.
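A minimal sketch of a discrete Markov chain simulation, assuming NumPy; the two-state ON/OFF transition matrix is an illustrative example, not a fitted model:

```python
import numpy as np

rng = np.random.default_rng(6)

def markov_chain(P, n, start=0):
    """Simulate n steps of a discrete Markov chain.

    P is the transition matrix; the next state depends only on the
    current one (the Markov property). A 2-state chain is a common
    minimal ON/OFF traffic-source model.
    """
    states = np.empty(n, dtype=int)
    s = start
    for i in range(n):
        states[i] = s
        s = rng.choice(len(P), p=P[s])
    return states

# State 1 = ON (transmitting), state 0 = OFF; entries are illustrative.
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])
activity = markov_chain(P, 10_000)
```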
Packet trains:
Another attempt at providing a bursty traffic model is found in Jain and Routhier's packet trains model. This model was principally designed to recognize that address locality applies to routing decisions; that is, packets that arrive near each other in time are frequently going to the same destination. In generating a traffic model that allows for easier analysis of locality, the authors created the notion of packet trains: a sequence of packets from the same source, traveling to the same destination (with replies in the opposite direction). Packet trains are optionally sub-divided into tandem trailers. Traffic between a source and a destination usually consists of a series of messages back and forth: a series of packets goes one direction, followed by one or more reply packets, followed by a new series in the initial direction. Network traffic is then a superposition of packet trains, which generates substantially bursty behavior. This refines the general conception of the compound Poisson model, which recognized that packets arrive in groups, by analyzing why they arrive in groups and better characterizing the attributes of the group. Finally, the authors demonstrated that packet arrival times are not Poisson distributed, which led to a model that departs from variations on the Poisson theme. The packet train model is characterized by the following parameters and their associated probability distributions:
mean inter-train arrival time
mean inter-car arrival time
mean truck size (in the tandem trailer model)
mean train size.
The train model is designed for analyzing and categorizing real traffic, not for generating synthetic loads for simulation. Thus, little claim has been made about the feasibility of packet trains for generating synthetic traffic. Given accurate parameters and distributions, generation should be straightforward, as sketched below, but derivation of these parameters is not addressed.
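A hypothetical generation sketch, assuming NumPy; the distribution choices here (exponential gaps, geometric train sizes) are assumptions for illustration, since the model itself only specifies the means:

```python
import numpy as np

rng = np.random.default_rng(7)

def packet_trains(n_trains, mean_inter_train, mean_inter_car,
                  mean_train_size):
    """Generate packet timestamps as a sequence of trains.

    Each train is a burst of "cars" (packets) separated by small
    inter-car gaps; trains themselves are separated by larger gaps.
    Distributions are hypothetical; only the means come from the model.
    """
    times, t = [], 0.0
    for _ in range(n_trains):
        t += rng.exponential(mean_inter_train)   # gap before the train
        cars = rng.geometric(1.0 / mean_train_size)
        car_t = t
        for _ in range(cars):
            times.append(car_t)
            car_t += rng.exponential(mean_inter_car)
    return np.array(times)

timestamps = packet_trains(200, mean_inter_train=50.0,
                           mean_inter_car=0.5, mean_train_size=8)
```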
Traffic models today
NS-2 is a popular network simulator; PackMimeHTTP, published in 2004, is a web traffic generator for NS-2. It takes long-range dependence into account and uses the Weibull distribution, relying on heavy tails to emulate true self-similarity. Over most time scales the effort is a success; only a long-running simulation would allow a distinction to be drawn. This follows the suggestion that self-similar processes can be represented as a superposition of many sources, each individually modeled with a heavy-tailed distribution. It is clear that self-similar traffic models are in the mainstream.
See also
Traffic generation model
Traffic model
Network traffic simulation
References
History of telecommunications
Network theory
Mathematical modeling |
17708308 | https://en.wikipedia.org/wiki/HyperSpace%20%28software%29 | HyperSpace (software) | HyperSpace is an instant-on Linux-based operating system that has been developed by Phoenix Technologies. It is an application environment that can run either independently or side-by-side with a traditional operating system such as Red Hat Linux. Users are able to boot their personal computers in seconds and immediately browse the Internet and perform other Web-based activities. HyperSpace's connectivity features automatically jump on the best network and are said to be automatically configured. Phoenix executives say this allows mobile PCs to work more like smartphones.
In January 2009, Asus announced HyperSpace would be incorporated into its next-generation notebooks. Hyperspace was re-launched at the 2009 Consumer Electronics Show. The company later announced that HyperSpace has been optimized for the Cortex-A8 ARM processor architecture.
In June 2010, it was announced that HP has arranged to purchase HyperSpace from Phoenix Technologies.
Overview
HyperSpace includes instant-on/instant-off access to applications, a power savings management system, and a secure environment less prone to viruses, malware and other external attacks that generally target Windows. Users still have access to their most-used applications, such as a Web browser and e-mail, even when Windows or another OS is booting, shutting down, on standby, or has crashed. It can also be used to avoid the security problems associated with using the Windows operating system, or for troubleshooting Windows.
HyperSpace can also extend a notebook's battery life by 25%. Potential applications for HyperSpace include Web browsers, instant-on multimedia players, IP soft phones, e-mail, instant messaging, VoIP, remote system maintenance and repair, and embedded security. The operating system is read-only memory-based and as such may only apply to motherboards which have the operating system embedded.
It runs on top of the HyperCore hypervisor.
Hybrid versus Dual
HyperSpace Hybrid allows a platform to run multiple operating systems (OS) and applications as independent, side-by-side environments. This means one computer system can concurrently run both a Windows and a HyperSpace environment. Users have the ability to switch between the two environments in real time with one simple push of the 'F4' key.
HyperSpace Hybrid runs on PCs with Intel Vanderpool Technology (VT). For those without this technology, HyperSpace Dual allows the HyperSpace environment to coexist with another operating system, but the two may not run concurrently.
Partnerships
CyberLink
See also
Splashtop
Latitude ON
Coreboot
References
External links
HyperSpace
Embedded Linux distributions
HP software
Linux distributions |
35221697 | https://en.wikipedia.org/wiki/Dalian%20Hi-tech%20Zone | Dalian Hi-tech Zone | Dalian Hi-tech Zone ( or ) or DHZ (variant officially used spellings include Dalian High-tech Zone) is an industrial district in the western suburbs of Dalian City in Liaoning Province, China. It extends about 30 kilometres along Lushun South Road and Guoshui Highway in Shahekou District and Lushunkou District, where many of the world's high technology companies and organizations have operations.
History
The construction of DHZ started in 1991. Dalian Software Park was added in 1998. The second phase of Dalian Software Park kicked off in 2003 at the site of Dalian Ascendas IT Park, which officially opened in 2007. DHZ is often called "Lushun South Road Software Industry Belt".
IT parks in Dalian Hi-tech Zone
The whole area of Dalian Hi-tech Zone is under the oversight of the Dalian Hi-tech Industrial Zone Administrative Committee, but all parks therein are managed by private enterprises, except Dalian Hi-tech Zone and Animation Industry Base. From east to west, they are:
Shahekou District
Along Lushun South Road:
Dalian Software Park
Name: Dalian Software Park ()
Established: 1998
Managed by Dalian Software Park Co., Ltd., a subsidiary of Yida Group
Address: 1 Digital Square, Dalian, Liaoning, China ()
Area: 3 km²
500 companies, including: Accenture, Genpact, HP, IBM (moving to Dalian Tiandi Software Park), Sony, Panasonic, NEC, and Softbank.
Lingshui Bay General Headquarters
Now being reclaimed from the Yellow Sea
Qixianling Modern Service Industry Area
Name: Qixianling Modern Service Industry Nucleus Function Area ()
This area is often called "Dalian Hi-tech Zone" in its narrow sense.
Established: 1991
Managed by: Dalian Hi-tech Industrial Zone Administrative Committee
Address: 1 Gaoxin Street, Qixianling Industrial Base, Dalian, Liaoning, China ()
Area: 2 km²
800 companies, including Dalian Hi-Think Computer, Citibank, Dell, HiSoft Technology International, NHN Corporation.
Animation Industry Base
Managed by: Dalian Hi-tech Industrial Zone Digital Entertainment Administrative Office
110 companies, including Crystal Digital Technology ().
Hekou International Software Park
Managed by Hekou Village
Companies: Yidatec
Dalian Ascendas IT Park
Name: Dalian Ascendas IT Park ()
Established: 2007
Managed by DLSP Ascendas Co., Ltd., a joint venture of Ascendas Corp. of Singapore and Dalian Software Park Co., Ltd.
Address: 1 Huixianyuan, Dalian Hi-tech Zone, Dalian, Liaoning, China ()
Area: 200,000 m² (100,000 m² in No. 1 Building and 100,000 m² in No. 2 Building)
59 companies, including CitiBank, Infosys Technologies, Konica Minolta, Omron.
Neusoft Group International Software Park
Neusoft Group
Lushunkou District
Dalian Tiandi Software Park
Name: Dalian Tiandi Software Park ()
Established: 2010
Area: Jinhuai Building (30,000 m²)
Managed by DLSP Shui On Development Co., Ltd., a joint venture of Shui On Land and Dalian Software Park Co., Ltd.
Address: No. 33, Hongchuan East Rd., Dalian Hi-tech Zone, Dalian, Liaoning, China ()
13 companies, including IBM (3,000 employees have already moved), Kingsoft, Mitsui Real Estate.
Yingge Software Park
Under construction
Along GuoShui Highway (Guojia - Shuishiying Villages):
Longtou Technology Park
Under construction
Transportation
Roads: China National Highway 201 (also called Lushun South Road). 30 minutes by taxi from the hotels on Renmin Road, downtown Dalian, to the Dalian Hi-tech Zone Administrative Committee.
City bus: Nos. 3, 10, 28, 531, 802 and other lines
Dalian-Lushun bus: From the north and south exits of Dalian Railway Station
Tramway: No. 202 line (Xinggong Street to Xiaopingdaoqian). Being extended via Huangnichuan and Longwangtang to Lüshun New Port.
Dalian Metro: Line 1, Line 12
Railway: 30 minutes by taxi from Dalian Railway Station
Air: 30 minutes by taxi from Dalian Zhoushuizi International Airport
Sea: 40 minutes by taxi from Dalian Port. Yacht harbors in Xinghai Square, in front of Dalian Maritime University, and the Xiaopingdao reclaimed area.
Education
There are universities and research centers in this area, where about half of all the universities of Dalian are located. From east to west are:
Dalian Institute of Chemical Physics
Dalian Ocean University
Dongbei University of Finance and Economics
Dongbei University of Finance and Economics Press
Neusoft Institute of Information
Dalian University of Technology
Dalian University of Technology Press
Dalian Maritime University
Software and IT Service Training Base of Ambow Education Group (), Beijing
Dalian University of Foreign Languages
Dalian Medical University
Accommodation
Downtown and central business district on Renmin Road
Shangri-La, Furama, New World, Ramada, Nikko, Kempinski, Dalian Civil Aviation (Daiwa Royal) and other hotels
Dalian World Financial Center at Xinghai Square
Grand Hyatt hotel
Dalian Hi-tech Zone
Heyi Hotel ()
Local environment
Restaurants
Many Chinese restaurants (including Maizi Dawang on Shuma Road)
Western restaurants (including the West Coast coffee shop; there are also KFC, McDonald's and Pizza Hut fast-food restaurants)
Japanese restaurants (including Momiji and Kappo Shimizu)
Korean restaurants (including Kaicheng and Shengdao)
Cafeterias
Cafeterias in major buildings, run by local Dalianite, British and French catering companies
Apartments
More expensive: Residencies at Shangri-La, Furama, Ramada, Nikko, Kempinski and other hotels
Less expensive: International New Village in Dalian Software Park, Guigu Jiari (Silicon Valley Holiday) on Lushun South Road, etc.
Shopping
Hypermarkets: Dashang Group's Newmart in Heishijiao, Walmart on Shuma Road, and Carrefour and Mykal in the Xi'an Road Commercial Zone and at Qingniwaqiao.
Schools
In Dalian City, there are:
Kindergartens
Municipal Bilingual School (Chinese/English)
Dalian Maple Leaf International School (English), Japanese School (at Fujiazhuang) and Korean School (in Dalian Development Area)
Medical facilities
No. 2 Hospital, attached to Dalian Medical University
Recreational areas
Xinghai Square, Xinghai Park, Longwangtang Cherry Blossom Park.
Hi-tech parks in other areas of Dalian
There are other parks, often called the "hi-tech zones" of Dalian.
Ganjingzi District
Dalian BEST City () is located along Lushun North Road.
Jinzhou District
New Jinzhou District including Dalian Development Area has DD Port and other hi-tech areas.
See also
List of technology centers
Dalian Software Park
References
External links
Official site
Dalian
High-technology business districts in China |
297265 | https://en.wikipedia.org/wiki/Sales%20force%20management%20system | Sales force management system | Salesforce management systems (also sales force automation systems (SFA)) are information systems used in customer relationship management (CRM) marketing and management that help automate some sales and sales force management functions. They are often combined with a marketing information system, in which case they are often called CRM systems.
An SFA, typically a part of a company's CRM system, is a system that automatically records all the stages in a sales process. SFA includes a contact management system which tracks all contact that has been made with a given customer, the purpose of the contact, and any follow up that may be needed. This ensures that sales efforts are not duplicated, reducing the risk of irritating customers. SFA also includes a sales lead tracking system, which lists potential customers through paid phone lists, or customers of related products. Other elements of an SFA system can include sales forecasting, order management and product knowledge. More developed SFA systems have features where customers can actually model the product to meet their needs through online product building systems. This is becoming popular in the automobile industry, where patrons can customize various features such as color and interior features such as leather vs. upholstered seats.
An integral part of any SFA system is company-wide integration among different departments. If SFA systems aren't adopted and properly integrated to all departments, there might be a lack of communication which could result in different departments contacting the same customer for the same purpose. In order to mitigate this risk, SFA must be fully integrated in all departments that deal with customer service management.
A dynamic sales force links strategy and operational actions that can take place within a department. The SFA relies on objectives, plans, budgets, and control indicators under specific conditions. In order to meet the objectives correctly, specific procedures must be implemented:
Identifiable sales force management processes
Setting targets and objectives based on inputs (usually via a command center)
Assigning factors responsible for achieving objectives
Control processes for ensuring objectives are being achieved within
a given time frame
a given constrained context (customers and/or markets)
System management to handle uncertain environments
The process usually starts with specific sales targets. The command center analyzes the inputs and outputs established from a modeled control process and the sales force. The control process enables the sales force to establish performance standards, measure actual performance, compare measured performance against established standards, and take corrective action. The sales managers adjust their actions based on the overall process.
Aside from the control process, the following metrics are implemented:
Time management – Accurately measures the tasks and the fraction of time needed for each task.
Call management – Plan for customer interaction accounts for the fraction of command center reps that comply with the process and have successful calls.
Opportunity management – If the process is followed correctly then a sales opportunity exists. The fraction of command center reps that use the tools, comply with the objective are all measured.
Account management – For multiple opportunities with a customer the account is measured by the tools, process, and objectives.
Territory management – For monitoring the account, the territory is measured by the number of account reps and prospective versus active customers.
Sales force management – Process includes training, IT systems, control, coaching, and is shared across several people and departments.
Five major activities are involved in staffing a sales force, divided into related steps. The first step is to plan the recruiting and selection process. The responsibilities associated with this step are generally assigned to top sales executives, the field sales manager or the human resources manager. The company determines the number and type of people needed, which involves analyzing the market and the job and preparing a written job description. The qualifications for the job must be established in order to fill it. Second, the recruiting phase includes identifying sources of recruits that are consistent with the type of person desired, selecting the source to be used, and contacting the recruits. The options must be weighed, evaluating potential effectiveness against cost. Third, the most qualified applicants are selected. The selection phase has three steps: designing a system for measuring the recruits against the standards specified in the planning phase, putting the system into effect with the new applicants, and making the actual selection. The fourth activity is to hire those people who have been selected. Making an offer does not mean the job is done; a recruit must be convinced that the job offers everything they need and want before they will join the company, or at least consider it. The fifth activity is to assimilate the new hires into the company. This is done by placing them under the direction of an employee in the firm, possibly giving them a mentor to help them feel comfortable working in the firm, and putting them through training programs.
Components of sales-force automation systems
Sales-force automation systems vary in their capabilities. They can vary depending on what information an organization needs. The application also has implications based on an organization's size, organization rollup, demand of new system, sales processes, and number of users.
Depending on requirements, services can fall into one of two categories:
on-premises software
on-demand (hosted) software
With on-premises software, the customer purchases and manages the application. On-premises software has some advantages and disadvantages. The disadvantages are the higher cost of the software, along with maintenance; customization is also needed for those who use processes beyond the normal out-of-the-box solution. Time is also a factor: many on-premises implementations take longer and require numerous testing and training sessions. The overall advantage of on-premises software relates to return on investment: using the application for three to five years becomes more cost-effective. Another advantage may depend on the amount of data. With on-demand software, certain volume restrictions hold, but with on-premises software, data restrictions are based on the storage size of local hardware.
CRM is a mechanism which manages all the data of a company's customers, clients and other business partners in a single container. CRM with cloud computing allows businesses to keep track of their customers from any location.
Several tools can aid in automating sales activities. The largest vendors are Salesforce.com, Microsoft Dynamics CRM, SAP AG and Oracle.
Mobile sales force automation application
Many sales managers are always on the go. The growth of smartphones has reignited the creation of mobile sales force automation systems. Most companies' IT departments are aware that adopting new abilities requires extensive testing; despite the time needed to test such a new product, it will pay off in the future for the sales department. Smartphones appeal to salespeople because they are easy to carry and easy to use, and offer an appealing interface design, touchscreens and fast wireless network abilities. More than 55% of Global 2000 organizations were expected to deploy mobile SFA projects by 2011, and newer smartphone platforms, such as Apple's iOS and Google's Android, point to a future of increasing diversity in device selection and support for the sales force. When implementing a mobile sales force automation application, or during the first stage of the systems development life cycle, project teams will need to evaluate how prospective solutions comprising mobile devices, software, support infrastructure and carrier services are packaged to deliver optimal system usability, manageability and integrative abilities, as well as scalability, reliability and performance.
Encouraging use
Many organisations have found it difficult to persuade sales people to enter data into the system. For this reason many have questioned the value of the investment. Recent developments have embedded sales process systems that give something back to the seller within the CRM screens. Because these systems help the sales person plan and structure their selling in the most effective way, increasing productivity, they give a reason to use the CRM.
See also
:Category:Customer relationship management software
Information technology management
Predictive analytics
Sales Management Systems by Microsoft
References
Sources
Customer relationship management
Information systems
Personal selling |
13420020 | https://en.wikipedia.org/wiki/MultiCharts | MultiCharts | MultiCharts is a Windows-based application which is designed, sold and distributed by MultiCharts, LLC. The company is based in Columbus, Ohio, in the United States. MultiCharts is an electronic trading platform and technical analysis software for analyzing the financial markets and performing trade execution. It uses a proprietary programming language called PowerLanguage.
MultiCharts Analysis and Trading Platform
MultiCharts is a professional electronic trading platform for individual and corporate traders. The platform provides the means to receive market data, perform technical analysis, and send and manage orders to a broker, both manually and automatically. MultiCharts supports multiple data feeds and brokers, and provides the ability to receive market data from a variety of data providers and brokers (i.e. the broker does not also have to be a data provider). The platform comes with a number of publicly available technical indicators and trading systems, which are written in PowerLanguage.
PowerLanguage
PowerLanguage is very similar to EasyLanguage, which was created and produced by TradeStation. The platform is capable of “understanding” EasyLanguage files, meaning that the files do not need to be converted to another format prior to being used in the platform. Users may write their own indicators and trading systems, copy and paste EasyLanguage script from another source, or import EasyLanguage files (supported file extensions are ELD, ELA and ELS). Protected EasyLanguage files cannot be imported, unless they are unlocked first.
QuoteManager
The platform includes an organizational tool for data management called the QuoteManager, which saves any loaded data to the local hard drive. This allows the user to examine the data, edit it, delete it, or add to it from some other data source. The database files can be copied to another drive for backing up, or other analytic purposes.
Strategy Backtesting
The platform supports different modes of backtesting, which is a method used for testing theories about market behavior by applying them to historical data. It is possible to apply a technical indicator or a trading strategy to only one instrument or to a portfolio of instruments at the same time. For portfolio analysis users must use a separate module called the Portfolio Backtester, which comes with the platform.
Automated Trading
The platform is capable of algorithmic trading, which means that program generated orders are automatically sent to a broker after being triggered by a PowerLanguage script.
Discretionary Trading
The platform allows the user to generate an order manually (outside of writing, compiling and applying a script to an instrument). Orders can be generated via a vertical ladder style Depth of Market interface, or directly from a market data chart. Discretionary order execution features were introduced in version 7.4 Beta and higher.
Market Scanner
A Market Scanner is a feature also known as a "screener", "radar screen", or “quote board” in other trading software. A scanner is an Excel-like table consisting of many cells that simultaneously display real-time streaming quotes, prices and other market information.
MultiCharts describes its Real-Time Market Scanner as being useful for monitoring, researching, and organizing the stock symbols the user is interested in. Where a single chart gives the user details about one stock symbol, the scanner enables the user to view opportunities among many symbols.
Third Party Add-ons
A large number of third-party developers sell add-on extensions for MultiCharts. Since MultiCharts is a development platform, a custom script can be written, called a trading system or trading strategy. If a trader has an idea that needs to be developed, he/she can either write his/her own strategy in PowerLanguage or have his trading system developed by third-party developers.
Versions
The latest version of MultiCharts is 14.0.21357; the latest version of MultiCharts .NET is 14.0.21360.
Release History
MultiCharts 14 (October 12, 2020)
MultiCharts 12 (July 8, 2018)
MultiCharts 11 (August 10, 2017)
MultiCharts 10 (November 22, 2016)
MultiCharts 9 (October 2014)
MultiCharts64 8.7 (July 1, 2013)
MultiCharts 8.7 (July 1, 2013)
MultiCharts64 8.5 (February 28, 2013)
MultiCharts 8.5 (February 28, 2013)
MultiCharts64 8.0 (June 19, 2012)
MultiCharts 8.0 (June 19, 2012)
MultiCharts 7.4 (January 11, 2012)
MultiCharts 6.01 (August 9, 2010)
MultiCharts 5.5 (September 10, 2009)
MultiCharts 5.0 Gold (May 23, 2009)
MultiCharts 4.0 (October 22, 2008)
MultiCharts 3.1 (June 25, 2008)
MultiCharts 3.0 (April 15, 2008)
MultiCharts 2.1 (September 18, 2007)
MultiCharts 2.0 (February 18, 2007)
MultiCharts 1.9 (March 11, 2006)
MultiCharts 1.8 (November 15, 2005)
MultiCharts 1.7 (September 16, 2005)
MultiCharts 1.6 build 50328 (June 28, 2005)
MultiCharts 1.5 build 50309 (May 9, 2005)
MultiCharts 1.1 build 40713 (July 13, 2004)
A complete release history, along with details for each release, is available on the MultiCharts website.
Recognition, Reviews & Awards
The MultiCharts platform has received recognition from the following third party industry participants:
Trade2Win Members Choice Awards
In the period spanning from 2008 to 2011 the program received several awards from the Trade2Win online discussion forum community, including "Best Professional Trading Platform" and "Best Software for US Intra-Day Traders".
Technical Analysis of Stocks & Commodities
Independent reviews of the MultiCharts platform were included in Technical Analysis of Stocks & Commodities magazine in the following editions:
September 2010 citation
"MultiCharts is highly flexible trading software designed to let you have choices. This is without doubt the most robust software when it comes to interfacing with datafeeds. MultiCharts is price competitive and definitely one you ought to consider when looking at trading platforms."
May 2008 citation
"My experiences with MultiCharts have been very favorable, and it's no different with this new and improved version. After seeing the portfolio backtesting features at work, I have to reiterate that the developers put a lot of thought into this product."
Traders Magazine
An independent review of the MultiCharts platform was included in Traders' magazine in the following edition:
"In the modern age of electronic trading, every person who wants to become a trader needs appropriate tools to collect market data, analyse that data and place orders. A new trader has a difficult choice – there are literally hundreds of trading schools, third-party indicators and trading platforms on the market. This article will focus on MultiCharts, a well-established trading platform, but one which remains relatively unknown to mainstream traders."
References
External links
1. MultiCharts Official Website
2. LMAX MultiCharts Page
3. Market Profile® and Volume Profile trading indicators designed for MultiCharts
4. Free MultiCharts indicators and coding tips, tricks and examples
Technical analysis software |
39711324 | https://en.wikipedia.org/wiki/Panzura | Panzura | Panzura is a privately owned American software company based in San Jose, California, that provides hybrid-cloud data management software and services for the enterprise software market. Its software helps users access, manage, analyze, and store unstructured data using techniques in distributed data consolidation, artificial intelligence, and network load balancing.
The company's core offering is a global file system promoted as a unified data engine. It supports large-scale multi-site data workflows and applications in the cloud, and has data protection and disaster recovery capabilities derived from its immutable object architecture. Panzura licenses software-as-a-service data management and data analytics services for observability, search and auditing over data and storage infrastructure.
History
Panzura was founded in 2008 by Randy Chou and John Taylor, who were both members of the founding software team at Aruba Networks. The company was initially funded by Matrix Partners, Khosla Ventures, and Chevron Technology Ventures, the corporate venture arm of Chevron Corporation. Panzura received $15 million in funding in 2012, followed by a $25 million funding round in 2013 which included Meritech Capital Partners. SanDisk became a strategic investor in 2013, with Alex Lam joining as a board observer.
Panzura was acquired outright in 2020 by Chicago-based private equity firm Profile Capital Management.
After most executives were replaced, it reorganized under new management, and in 2021 the company announced a new brand image.
Products
Panzura CloudFS is a global file system that uses cloud object storage. It overcomes latency to provide users with simultaneous, real-time access to petabyte-size files from any widely distributed location. It is used to migrate or re-platform data, workloads and applications to the cloud, and to consolidate data across multiple on-premises servers and the cloud, without having to refresh existing IT systems. CloudFS integrates with public and private cloud object storage platforms including Amazon Web Services, Microsoft Azure, IBM Cloud Object Storage, EMC Atmos, Google Cloud Platform, Virtustream, Scality, Cloudian and Wasabi Technologies.
Panzura Data Services is a multi-cloud management and data analysis overlay to CloudFS. It offers a unified view and management of unstructured data, whether it is stored in a cloud, on premises in a data center, or at the edge. It also offers search and audit capabilities.
Panzura Mobile adds enterprise file sync and share capabilities to CloudFS, using the same authoritative data source.
As of November 2021, Panzura holds 34 patents, more than half of which are among the top 10% most influential in the data space according to LexisNexis PatentSight.
Panzura has FIPS 140-2 security certification from the National Institute of Standards and Technology (NIST) for use by U.S. government agencies and contractors.
Panzura provides secure-erase capabilities for data destruction in compliance with guidelines for media sanitization as set out in NIST Special Publication 800-88.
References
Cloud storage gateways
Cloud storage
Network-attached storage
Cloud computing providers |
62331228 | https://en.wikipedia.org/wiki/Thackeray%20family | Thackeray family | The Thackeray family () is one of the most well-known and powerful political families in the Indian state of Maharashtra.
Keshav Sitaram Thackeray was one of the prominent leaders of the Samyukta Maharashtra Movement.
Family tree
This is the family tree of the Thackeray family.
See also
Nehru–Gandhi family
References
Marathi families
Hindu families
Political families of India
Shiv Sena |
14095021 | https://en.wikipedia.org/wiki/Adam%20Kolawa | Adam Kolawa | Adam Kazimierz Kolawa (June 25, 1957 – April 26, 2011) was CEO and co-founder of Parasoft, a software company in Monrovia, CA that makes software development tools.
History
Kolawa received a M.Sc. in Electrical Engineering from the AGH University of Science and Technology in 1981 and a M.Sc. in Physics from Jagiellonian University in 1982. After Kolawa emigrated from Poland to the United States, he earned a Ph.D in Theoretical Physics from the California Institute of Technology. While at Caltech, he worked with Geoffrey Fox and helped design and implement the Intel hypercube parallel computer known as the Cosmic Cube.
In 1987, he founded Parasoft with four friends from Caltech. Initially, the company focused on parallel processing technologies. Kolawa co-authored two books on development testing and software testing. In 2001, Kolawa was awarded Ernst & Young's Los Angeles Entrepreneur of the Year Award in the Software category.
Kolawa was granted 20 patents for software technologies he invented. His patents include runtime memory error detection technology (granted in 1998), statically analyzing source code quality using rules (granted in 1999), and automated unit test case generation technology (granted in 1998).
Kolawa died suddenly on April 26, 2011.
Publications
Books
References
External links
Parasoft Executive Biographies
Exclusive JDJ Industry Profile: Dr Adam Kolawa, CEO, Parasoft
Unlocking the Power of Automation: Exclusive Q&A with Dr Adam Kolawa, Co-founder & CEO of Parasoft
Businesspeople in software
American computer businesspeople
American technology chief executives
California Institute of Technology alumni
American technology writers
1957 births
2011 deaths |
2760924 | https://en.wikipedia.org/wiki/AppImage | AppImage | AppImage is a format for distributing portable software on Linux without needing superuser permissions to install the application. It tries also to allow Linux distribution-agnostic binary software deployment for application developers, also called upstream packaging. Released first in 2004 under the name klik, it was continuously developed, then renamed in 2011 to PortableLinuxApps and later in 2013 to AppImage.
Description
Objectives
AppImage aims to be an application deployment system for Linux with the following objectives: simplicity, binary compatibility, distro agnosticism, no installation, no root permission, being portable, and keeping the underlying operating system untouched.
Properties
AppImage does not install the application in the traditional Linux sense. Instead of putting the application's various files in the distro's appropriate places in the file system, the AppImage file is just the application's compressed image. When it runs, the file is mounted with FUSE. This behavior is consistent with AppImage's predecessors, klik and PortableLinuxApps.
It uses one file per application. Each file is self-contained: it includes all libraries the application depends on that are not already part of the targeted base-system. An AppImage of version 1.0 is an ISO 9660 Rock Ridge file (which can be optionally zisofs compressed) containing a minimal AppDir and a tiny runtime. (Version 2 may use other file system image formats like SquashFS). An AppImage application can be added to a live CD by adding only one file to the live CD.
Using an AppImage is simpler than installing an application in the traditional way. No extraction tools are needed, nor is it necessary to modify the operating system or user environment. Regular users on the common Linux distributions can download an AppImage, make it executable, and run it.
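A minimal sketch of those steps in Python follows; the file name MyApp.AppImage is hypothetical, and any downloaded AppImage behaves the same way:

import os
import stat
import subprocess

appimage = "MyApp.AppImage"  # hypothetical file name for a downloaded AppImage

# Make the file executable for the current user (the equivalent of `chmod u+x`).
os.chmod(appimage, os.stat(appimage).st_mode | stat.S_IXUSR)

# Run the application directly; no installation or root privileges are needed.
subprocess.run([os.path.abspath(appimage)], check=True)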
AppImage allows the generation of an embedded digital signature, which needs to be verified with an external utility. The format offers no form of self-check with package authenticity verification, nor runtime confinement by sandboxing.
History
klik
AppImage's predecessor klik was designed in 2004 by Simon Peter. The client-side software is GPL-licensed. klik integrated with web browsers on the user's computer. Users downloaded and installed software by typing a URL beginning with klik://. This downloaded a klik "recipe" file, which was used to generate a .cmg file. For main ingredients, usually pre-built .deb packages from Debian Stable repositories were fed into the recipe's .cmg generation process. In this way, one recipe could be used to supply packages to a wide variety of platforms. With klik, only eight programs could be run at once because of the limitation of mounting compressed images with the Linux kernel, unless FUSE was used. The file was remounted each time the program was run, meaning the user could remove the program by simply deleting the .cmg file. A successor version, klik2, was in development and would have natively incorporated the FUSE kernel module, but it never progressed past the beta stage. Around 2011, the klik project went dormant and its homepage went offline for some time.
PortableLinuxApps
Simon Peter started a successor project named PortableLinuxApps with similar goals around that time. The technology was adopted, for instance, by the "portablelinuxgames.org" repository, which provided hundreds of mostly open-source video games.
AppImage
Around 2013, the software was renamed again, from PortableLinuxApps to AppImage, and the license became the MIT license. AppImage is the format and AppImageKit is a concrete open-source implementation. Development takes place in a GitHub repository.
Reception and usage
Klik was in 2007 the inspiration for Alexander Larsson's glick project, the precursor of Flatpak which was released in 2016.
Linus Torvalds' dive log application Subsurface started to use AppImage around 2015. Subsurface's traditional packaging was changed to a portable, self-contained, distro-agnostic AppImage after binary software deployment to users across the various Linux distributions had proven problematic.
MuseScore started in April 2016 to use AppImage builds for software deployment for all Linux flavors.
Krita, a digital painting free and open-source software application, is also deployed using AppImage from version 3.0 in May 2016.
digiKam, an open source photo management application, also includes an AppImage bundle from version 5.3.0, released in November 2016.
On March 31, 2017, Network World's Bryan Lunduke wrote positively about the Linux distribution-agnostic packaging of OpenShot with AppImage.
LMMS, an open-source digital audio workstation, has distributed its Linux builds as an AppImage since version 1.2.0-RC5.
KeePassXC is also using AppImage for Linux binary distributions.
Neo4j uses AppImage for the Linux version of their "Neo4j Desktop" program.
RPCS3, the most widely used PlayStation 3 emulator, distributes its Linux version through AppImage.
The community-maintained Linux version of the GitHub Desktop client app is distributed with an AppImage option.
Iris, the computer-monitor eye-strain reduction and management software, offers a paid AppImage option.
FreeCAD, an open-source parametric 3D modeler, uses AppImage as the primary Linux binary download.
OpenSCAD, another open-source parametric 3D modeler, also uses AppImage as a Linux binary download option.
See also
Portable application creators
Autopackage
Snap (package manager)
Flatpak
Zero Install, another similar project
ROX uses directories (AppDirs) as application bundles.
References
External links
AppImages for various applications by the AppImage developer
Easily Run And Integrate AppImage Files With AppImageLauncher
portablelinuxapps.org project page on SourceForge (accessed August 2, 2011)
klik.atekon.de Official klik site (archived)
klik client repository page at Google Code
Linux installation software
Linux package management-related software
Software distribution
Software using the MIT license |
58034239 | https://en.wikipedia.org/wiki/2018%20Dollar%20General%20Bowl | 2018 Dollar General Bowl | The 2018 Dollar General Bowl was a college football bowl game played on December 22, 2018. It was the 20th edition of the Dollar General Bowl, and one of the 2018–19 bowl games concluding the 2018 FBS football season. The game was sponsored by the Dollar General chain of variety stores.
Teams
The game was played between teams from the Mid-American Conference (MAC) and the Sun Belt Conference.
Troy Trojans
On November 29, college football news organizations reported that Troy would play in the Dollar General Bowl, which was confirmed via an official announcement on December 2. The Trojans entered the bowl with a 10–3 record (7–1 in conference). This was Troy's third consecutive bowl appearance, following victories in the 2016 Dollar General Bowl and 2017 New Orleans Bowl.
Buffalo Bulls
Buffalo received and accepted a bid to the Dollar General Bowl on December 2. The Bulls had a 10–3 record on the year (7–1 in conference), losing to Northern Illinois in the MAC Championship Game. This was Buffalo's third bowl appearance, after losses in the 2009 International Bowl and 2013 Famous Idaho Potato Bowl.
Game summary
Scoring summary
Statistics
Notes
References
External links
Box score at ESPN
2018 Dollar General Bowl Highlights via YouTube
Dollar General Bowl
LendingTree Bowl
Buffalo Bulls football bowl games
Troy Trojans football bowl games |
1939833 | https://en.wikipedia.org/wiki/Nokia%20770%20Internet%20Tablet | Nokia 770 Internet Tablet | The Nokia 770 Internet Tablet is a wireless Internet appliance from Nokia, originally announced at the LinuxWorld Summit in New York City on 25 May 2005. It is designed for wireless Internet browsing and email functions and includes software such as Internet radio, an RSS news reader, ebook reader, image viewer and media players for selected types of media.
The device went on sale in Europe on 3 November 2005, at a suggested retail price of €349 to €369 (£245 in the United Kingdom). In the United States, the device became available for purchase through Nokia USA's web site on 14 November 2005 for $359.99. On 8 January 2007, Nokia announced the Nokia N800, the successor to the 770. In July 2007, the price for the Nokia 770 fell to under US$150 / 150 EUR / 100 GBP.
Specifications
Dimensions: 141×79×19 mm (5.5×3.1×0.7 in)
Weight: 230 g (8.1 oz) with protective cover or 185 g (6.5 oz) without.
Processor: Texas Instruments OMAP 1710 CPU running at 252 MHz. It combines the ARM architecture of the ARM926TEJ core subsystem with a Texas Instruments TMS320C55x digital signal processor.
Memory: 64 MB (64 × 2^20 bytes) of DDR RAM, and 128 MB of internal flash memory, of which about 64 MB should be available to the user. Option for extended virtual memory (RS-MMC up to 1 GB, or 2 GB after a flash upgrade).
Display and resolution: 4.1 inches, 800×480 pixels at 225 pixels per inch with up to 65,536 colors
Connectivity: WLAN (IEEE 802.11b/g), Bluetooth 1.2, dial-up access, USB (both user-mode, and non-powered host-mode)
Expansion: RS-MMC (both RS-MMC and DV-RS-MMC cards are supported).
Audio: speaker and a microphone
The device was manufactured in Estonia and Germany.
Maemo
The 770, like all Nokia Internet Tablets, runs Maemo, which is similar to many handheld operating systems and provides a "Home" screen—the central point from which all applications and settings are accessed. The home screen is divided into areas for launching applications, a menu bar, and a large customisable area that can display information such as an RSS reader, Internet radio player, and Google search box, for example. Maemo is a modified version of Debian.
The 770 is bundled with applications including the Opera web browser, Macromedia Flash and Gizmo.
A critical bug has been identified that causes memory corruption when using the WLAN connection. This could result in system instability and data corruption. Owners of the 770 are encouraged to apply the bugfix, preferably before using the WLAN connection for the first time.
Versatility
Because of the Linux-based operating system and the open-source contributions from Nokia, the Nokia 770 Internet Tablet has great appeal to the hacker and DIY markets. Programmers have ported applications to the Maemo platform, allowing its application catalog to grow much more rapidly than those of other mobile platforms. The inclusion of Wi-Fi, Bluetooth, and USB host functionality (through a hack) permits enthusiasts to expand their tablets with USB mass storage, Bluetooth GPS receivers, a normal USB keyboard, or other devices.
Criticism
The Nokia 770 has received criticism from some technology reviewers. The most common complaint was about the overall speed of the system, due to the relatively slow CPU and the size of the on-board memory (64 MiB). Short battery life (less than 4 hours in the case of continuous Wi-Fi usage) was also a concern. Some reviews suggested problems with the handwriting recognition, and some said tapping the on-screen keyboard was too slow.
Another common complaint was that it lacked the functions of a mobile PDA, although there are now several PIM options created by the community. Also, for Internet access away from Wi-Fi hotspots, the Nokia 770 relies upon a Bluetooth 1.2 phone acting as a modem, and not all Bluetooth phones will work with the tablet. Additionally, some complained that the device used Reduced-Size MMC (RS-MMC or Micro-MMC) cards that were originally difficult to find. However, the format has since been used in other products and has become widely available. The device originally could only use cards up to 1 GB, but 2 GB cards are supported with the current version of the operating system.
See also
Nokia N800, the successor to the Nokia 770
Nokia N810
Nokia N900
Nokia N950
Nokia N9
References
External links
Applications for the Nokia 770.
Ari Jaaksi's Blog - Former director of the open source software operations of Nokia.
- Tutorials for new 770, N800, N810 users
Internet Tablet
Mobile computers
Linux-based devices
Computer-related introductions in 2005
Embedded Linux |
17942372 | https://en.wikipedia.org/wiki/United%20States%20v.%20Arnold | United States v. Arnold | United States v. Arnold, 533 F.3d 1003 (9th Cir. 2008), is a United States court case in which the United States Court of Appeals for the Ninth Circuit held that the Fourth Amendment to the United States Constitution does not require government agents to have reasonable suspicion before searching laptops or other digital devices at the border, including international airports.
This decision has caused worry and some controversy among Fourth Amendment advocates, such as the Electronic Frontier Foundation. Especially troubled are those who legitimately store sensitive business, legal, or customer data, and who worry that federal agents might leak information found in laptop searches. Several legislators have discussed and introduced measures to counter the ruling and provide more protection to travelers, but none have become law.
Background to the case
On July 17, 2005, Michael Arnold arrived at Los Angeles International Airport (LAX) after spending a three-week vacation in the Philippines. After retrieving his luggage, Arnold proceeded through the checkpoint at customs. Customs and Border Protection (CBP) Officer Laura Peng saw Arnold waiting in line and selected him for secondary questioning.
Peng questioned Arnold, and began inspecting his luggage, noting a laptop computer and several computer accessories. Peng instructed Arnold to turn on the computer so she could see if it was functioning. CBP Officer John Roberts was called to assist with the inspection while the computer was booting up.
Among the icons displayed on the computer's desktop, two were named "Kodak Pictures" and one was named "Kodak Memories". Peng and Roberts opened these folders, began viewing the photos, and noted one that depicted two nude women. At that point special agents from U.S. Department of Homeland Security, Immigration and Customs Enforcement (ICE) were called. Arnold was detained and questioned by the ICE agents for several hours. They further examined the computer and equipment, and found numerous images of what they believed to be child pornography.
Arnold was released, but his computer was seized by the agents. Federal agents obtained a warrant two weeks later. Arnold was charged with (1) knowingly transporting child pornography, (2) knowingly possessing a computer hard drive and compact discs which both contained more than one image of child pornography, and (3) knowingly and intentionally traveling in foreign commerce and attempting to engage in illicit sexual conduct with a person under 18 years of age.
Arnold filed a motion to suppress, arguing that the government required reasonable suspicion to conduct the search. This motion was granted by the District Court, ruling in Arnold's favor. The Government promptly appealed the case, stating that the border doctrine took precedence over the Fourth Amendment's protection from unreasonable search. The Ninth Circuit Court of Appeals overturned the lower court's ruling, holding in favor of the Government. Arnold was represented in the District Court and before the Ninth Circuit by Marilyn E. Bednarski and Kevin J. LaHue, of Kaye, McLane, Bednarski, & Litt.
Arguments
Reasonable search
The crux of Arnold's argument is that a laptop is very similar to a person's home and the human mind, much more so than an ordinary container for data. His argument is based on the notion that a laptop has the ability to store a greater amount of data and personal documents, much like those stored in one's home. He continued to argue that because a laptop is able to record ideas, e-mail, internet chats, and web-surfing habits, it is very similar to the "human mind." Under these arguments he sought the protection of the Fourth Amendment, which states that "the right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated ... ".
Arnold also raised the exception that the government's searches are limited in cases where they are of a "particularly offensive manner," and argued that such was the case when CBP officers seized and searched his laptop.
The Electronic Frontier Foundation filed an amicus brief in support of Arnold. The amici included the EFF and the Association of Corporate Travel Executives (ACTE). The brief argued that the government's position and current practice subjects travelers to unconstitutional invasive searches of laptops and other devices. Their argument is similar to that already made by Arnold, and they contended mainly that a computer is different from a gas tank, suitcase, or other closed container because laptops routinely contain some of the most personal information about a person's life. Moreover, the amici were concerned that a search would reveal information that is already protected under other statutes, including privileged legal communications, reporters' notes from confidential sources, and trade secrets.
In summary, the main points of the brief are that: (1) people have a reasonable expectation of privacy in the information stored on their laptop computers; (2) searches of personal electronic information devices ... are particularly invasive of personal privacy; (3) the volume of information stored on computers means that the privacy invasion of a laptop border search is enormous; (4) personal computers often contain information that the individual does not know about, or has even sought to erase; (5) laptop computer searches are indistinguishable from "general searches;" (6) there is a real risk of unconstrained "pretext" searches; and (7) the first amendment protects many of the contents on laptop computers. The final point raised the worry that indiscriminate searches of information stored on laptops will discourage people from storing sensitive data, thus causing a chilling effect on speech.
A personal computer is among a person's most private belongings. Laptop computers are virtual extensions of the mind, used to record and share our thoughts, feelings, and activities; indeed, "they are postal services, playgrounds, jukeboxes, dating services, movie theaters, daily planners, shopping malls, personal secretaries, virtual diaries, and more." ... [a]s a result, our laptop computers contain as much information about us as our homes contain - perhaps more.
Overall, the amici were concerned that unchecked compliance with the border doctrine would infringe too far on a citizen's First and Fourth Amendment rights. In their view, the doctrine did not provide the protection that it was meant to enact, as technology now provides ways to maneuver around it. " ... a smuggler's 'container of choice' for electronic contraband is the internet. They may simply email it to themselves or post it online to avoid customs searches. A ruling that overturns the decision would not undermine the government's ability to protect its borders."
Border doctrine
In response to Arnold's motion to suppress, the government argued two points. Firstly, the Fourth Amendment did not require reasonable suspicion because of the border doctrine. Secondly, even if reasonable suspicion were required, it was present.
The government argued that the United States has a duty to take measures to ensure the safety of its interior. Past cases have stated that "it is axiomatic that the United States, as sovereign, has the inherent authority to protect, and a paramount interest in protecting, its territorial integrity." In United States v. Ramsey, the court stated that "searches made at the border ... are reasonable simply by virtue of the fact that they occur at the border ... " In the case of Arnold, the point that linked these two conclusions together was the notion that international American airports are the functional equivalent of a border, as stated in Almeida-Sanchez v. United States.
Reference was also made to a history of searches of closed containers such as briefcases, purses, wallets, pockets, pictures, film, and other graphic material. These items are the equivalent of a closed container, and have traditionally been searched at the border without particularized suspicion. It is acknowledged that some limits have been made, specifically a traveler's alimentary canal. Such limitations are made when searching a person, not the objects in their possession, in the interest of human dignity and privacy, which the Fourth Amendment protects.
Judgment
Reasonable search
The Court addressed the argument that the search of Arnold's laptop was carried out in a particularly offensive manner, and was so destructive that it required particularized suspicion. The court acknowledged that there was no precedent to declare when a border search should be deemed unreasonable. The Supreme Court has left open the question of the circumstances that make a search particularly offensive, and thus unreasonable. However, the Court pointed out that Arnold never claimed that his laptop was damaged in any way during the government's search.
Even without a clear scale to determine how offensive a search may be, the Court dismissed the reasoning of the District Court that particularized suspicion was needed in the case of Arnold. The District Court had based this decision on previous cases that related to searches of the human body, not of property.
Border doctrine
The Court distinguished searches at American airports because they are considered border searches, and occur at the "functional equivalent of a border". Previously, the Supreme Court had stated that:
To address Arnold's claim that a laptop is like a home, and therefore requires Fourth Amendment protection, the court dismissed his argument as without merit. Its decision is based on previous Supreme Court judgments that have denied Fourth Amendment protections to property which is also "capable of functioning as a home."
Lastly, the Court addressed a comparison to United States v. Ickes, 393 F.3d 501 (4th Cir. 2005). In that case, a man's van was stopped and searched as he was driving from Canada into the United States. The court upheld the border search doctrine, and the Ninth Circuit acknowledged the concerns and followed the reasoning that carving out a First Amendment exception to the border doctrine would (1) protect terrorist communications "which are inherently 'expressive'"; (2) create an unworkable standard for government agents who "would have to decide - on their feet - which expressive material is covered by the First Amendment"; and (3) contravene the weight of Supreme Court precedent refusing to subject government action to greater scrutiny with respect to the Fourth Amendment when an alleged First Amendment interest is also at stake.
By this reasoning, the Court justified the government's interest in preventing unwanted persons and effects as greater than a person's desire to conceal the contents of their baggage. The decision of the District Court was overturned, ruling in favor of the government.
Subsequent developments
Death of Arnold
On February 23, 2009, the Supreme Court of the United States notified Arnold's counsel that it had denied Arnold's petition for writ of certiorari. Attorneys for Arnold filed a motion to de-publish the Ninth Circuit opinion (which was denied). In their motion, Arnold's counsel explained that two days after being informed of the Supreme Court's refusal to hear his appeal, Arnold committed suicide.
Controversy
This ruling has caused much controversy and discussion among First and Fourth Amendment rights activists, including the American Civil Liberties Union and the Electronic Frontier Foundation. One of the primary points of controversy is the court's ruling that a laptop is no different from any other type of container. Just as Arnold has argued, critics argue that a laptop contains far more sensitive and personal data than any ordinary container. They continue to state that a laptop search is so revealing and invasive that the Fourth Amendment requires agents to have some reasonable suspicion to justify the intrusion.
The Electronic Frontier Foundation expressed their disappointment in its statement:
The opinion is almost certainly wrong to classify laptop searches as no different from other property searches. Fourth Amendment law constrains police from conducting arbitrary searches, implements respect for social privacy norms, and seeks to maintain traditional privacy rights in the face of technological changes. This Arnold opinion fails to protect travelers in these traditional Fourth Amendment ways.
Other critics have cited that there are some documents and data that need to be legitimately kept secret or discreet, and are worried that there is nothing to prevent the officials involved in a search from disclosing this sensitive data. They give examples of trade secrets, acquisition plans, plans for a new product, security data about private customers. The concern is that officials in the process of searching may copy and leak this information without consent. Most are unwilling to trust government officials and employees to not misuse what they seize or negligently disclose confidential information.
Department of Homeland Security policies after Arnold
On August 1, 2008, the Washington Post reported that Department of Homeland Security policies allow federal agents to "take a traveler's laptop computer or other electronic device to an off-site location for an unspecified period of time without any suspicion of wrongdoing." Further, "officials may share copies of the laptop's contents with other agencies and private entities for language translation, data decryption or other reasons." Senator Russell Feingold called these policies "truly alarming" and said that he intends to introduce legislation soon that would require reasonable suspicion for border searches, as well as prohibit profiling on race, religion, or national origin. Meanwhile, Ryan Singel of Wired.com recommended placing one's electronics and papers "in a first class U.S. mail envelope and stamp it—or even better mail it to yourself before the trip," since 'officers may not read or permit others to read correspondence contained in sealed letter class mail (the international equivalent of First Class) without an appropriate search warrant or consent'. However, this only applies to articles in the postal system, not to letters carried by individuals or private carriers such as DHL, UPS, or FedEx.
Travelers Privacy Protection Act
Senator Feingold, fellow Democratic Senator Maria Cantwell, and Democratic Representative Adam Smith announced on September 26, 2008, that they had proposed a law to limit the searches of laptops or other electronic devices to cases where United States Customs and Border Protection officials have reasonable suspicion of illegal activity. The so-called Travelers' Privacy Protection Act would allow border agents to search electronic devices only if they had reasonable suspicions of wrongdoing. In addition, the legislation would limit the length of time that a device could be out of its owner's possession to 24 hours, after which the search becomes a seizure, requiring probable cause. The bill was introduced but never became law, lapsing with the start of the new 111th session of Congress.
See also
Border search exception
Computer forensics
Carroll v. United States (1925)
Olmstead v. United States (1928)
United States v. Ramsey (1977)
California v. Acevedo (1991)
United States v. Flores-Montano (2004)
United States v. Cotterman (2013)
References
External links
InformationWeek: Business, Cyber Liberties Groups Fight Laptop Searches
United States Fourth Amendment case law
2008 in United States case law
Digital forensics
United States Court of Appeals for the Ninth Circuit cases |
11843832 | https://en.wikipedia.org/wiki/MonoDevelop | MonoDevelop | MonoDevelop (also known as Xamarin Studio) was an open-source integrated development environment for Linux, macOS, and Windows. Its primary focus was the development of projects that use Mono and the .NET Framework. MonoDevelop integrated features similar to those of NetBeans and Microsoft Visual Studio, such as automatic code completion, source control, a graphical user interface (GUI) designer and a Web designer. MonoDevelop integrated a Gtk# GUI designer called Stetic. It supported
Boo,
C,
C++,
C#,
CIL,
D,
F#,
Java,
Oxygene,
Vala, JavaScript, TypeScript
and Visual Basic.NET.
MonoDevelop can be used on Windows, macOS and Linux. Officially supported Linux distributions include CentOS, Debian, Fedora, openSUSE, SUSE Linux Enterprise, Red Hat Enterprise Linux and Ubuntu, with many other distributions providing their own unofficial builds of MonoDevelop in their repositories. macOS and Windows have been officially supported since version 2.2.
MonoDevelop included a C# compiler (an alternative to MSBuild and CSC) from its earliest versions. Its final releases include a compiler that supports C# 1.0 through C# 6.0.
A customized version of MonoDevelop formerly shipped with the Windows and Mac versions of Unity, the game engine by Unity Technologies. It enabled advanced C# scripting, which was used to compile cross-platform video games by the Unity compiler. It has since been replaced by Visual Studio Community, except in the Linux versions of Unity.
History
In late 2003, a group of developers from the Mono community began migrating SharpDevelop, a successful .NET open source IDE from Windows Forms on Windows to the GNOME toolkit (Gtk#) on Linux. The fork was also to target the Mono framework instead of the Microsoft .NET Framework implementation. Being an early fork of SharpDevelop, MonoDevelop architecturally differs from recent SharpDevelop releases.
Over time, the MonoDevelop project was absorbed into the rest of the Mono project and as of 2016, is actively maintained by Xamarin and the Mono community. Since Mono 1.0 Beta 2, MonoDevelop is bundled with Mono releases.
Starting with version 4.x, Xamarin rebranded MonoDevelop as Xamarin Studio, but only for the Windows version of the IDE. As of 2016, Xamarin Studio also runs on macOS.
In October 2021, it was announced in the project's issue tracker that the repository would be archived because it was no longer maintained.
Features
MonoDevelop was an IDE for the .NET platform with features comparable to those of Microsoft Visual Studio. Highlights included:
Multi-platform IDE and user projects (Linux, Windows and macOS)
Multi-language (C#, F#, Visual Basic .NET, C/C++, Vala, JavaScript, TypeScript)
Project templates for C#, Visual Basic, Boo, Java (IKVM) and C/C++
Code completion support for C#, code templates, code folding
Customizable window layouts, user defined key bindings, external tools
Integrated Debugger for debugging Mono and native applications
Integrated Compiler (supports up to C# 6.0)
GTK# Visual Designer to graphically build GTK# user interfaces
ASP.NET web projects with code completion support and testing on XSP (Mono web server).
Source control, makefile integration, unit testing, packaging and deployment, localization
UI Builder
MonoDevelop has included a GTK# GUI designer called Stetic since version 0.1, used to develop GTK+ user interfaces in C#. Stetic is very similar to Glade Interface Designer but is integrated into MonoDevelop with features such as drag and drop. It has been criticized for being more difficult to work with than the likes of Qt Designer and the Microsoft Visual Studio Windows Forms editor when the programmer does not yet have a concrete layout in mind.
Xamarin Studio
Xamarin offers a rebranded version of MonoDevelop 4.0 as Xamarin Studio which now uses platform-specific code in various places to enhance the look and feel. While Mono provides a package for Solaris 10 running on SPARC, MonoDevelop packages for OpenSolaris are only provided by groups from the OpenSolaris community. MonoDevelop on FreeBSD is likewise supported only by the FreeBSD community.
Visual Studio for Mac
Another rebranded version of MonoDevelop is Visual Studio for Mac. Visual Studio for Mac employs many of the same tools as its Windows counterpart: for example, the Roslyn .NET Compiler Platform is used for refactoring and IntelliSense. Its project system and build engine use MSBuild; and its source editor supports TextMate bundles. It uses the same debugger engines for Xamarin and .NET Core apps, and the same designers for Xamarin.iOS and Xamarin.Android.
See also
Mono (software)
Comparison of integrated development environments
Glade Interface Designer
Qt Creator
Xamarin
SharpDevelop
Visual Studio
Visual Studio Code
References
External links
Xamarin Studio
Visual Studio for Mac
.NET programming tools
Free and open-source software
Free integrated development environments
Free software programmed in C Sharp
Linux integrated development environments
Microsoft free software
Software that uses Mono (software)
Software using the LGPL license
2003 software |
1442152 | https://en.wikipedia.org/wiki/Atari%20TT030 | Atari TT030 | The Atari TT030 is a member of the Atari ST family, released in 1990. It was originally intended to be a high-end Unix workstation, but Atari took two years to release a port of Unix SVR4 for the TT, which prevented the TT from ever being seriously considered in its intended market.
In 1992, the TT was replaced by the Atari Falcon, a low-cost consumer-oriented machine with greatly improved graphics and sound capability but a slower and severely bottlenecked CPU; the Falcon possessed only a fraction of the TT's raw CPU performance. Though well priced for a workstation, the TT remained too expensive for most of the existing Atari ST market until after it was discontinued and sold at a discount.
The nascent open source movement eventually filled the void. Thanks to open hardware documentation, the Atari TT, the Amiga, and the Atari Falcon were the first non-Intel machines to have Linux ported to them, though this work did not stabilize until after the TT had already been discontinued by Atari. By 1995, NetBSD had also been ported to the Atari TT.
History
Atari Corporation realized that to remain competitive as a computer manufacturer, it needed to begin exploiting the power offered by more advanced processors in the Motorola 68000 series. At that time, the highest-performance member was the 68020, the first true "thirty-two bit bus/thirty-two bit instruction" chip from Motorola. Unlike the 68000 used in the original STs, the 68020 was capable of fetching a 32-bit value in one memory cycle, where the older STs took two.
The TT was initially designed around the 68020 CPU, but as the project progressed, Atari Corp. realized that the 68020 was not the best option for the TT: it still lacked certain important features offered by its successor in the 68000 line, the 68030. The 68030 featured a full 32-bit address/data bus and internal registers; separate Supervisor, User, Program, and Data virtual memory spaces; built-in memory-management hardware; and 256-byte on-chip instruction and data caches.
When the decision was made to switch from a 68020 to a 68030 CPU, it presented a whole new set of problems. The original specification for the TT's clock speed was 16 MHz, which was selected to maintain backward compatibility. The existing ST chips used in the TT (the DMA and video chips, for example) could not handle anything over 16 MHz, and some software also had problems running at faster speeds. To make the system work with a 32 MHz 68030, Atari Corp. had to scale back its plans somewhat and add a large amount of cache to the system. As a result, the processor runs at 32 MHz while the system bus runs at 16 MHz. This is similar to the tactic employed by Apple with the ill-fated Macintosh IIvx, and later by makers of PCs with an Intel 80486DX2 CPU, which ran at double the system bus speed.
TOS 3.01 was the operating system that came with the Atari TT. It was a 512 kB ROM specifically designed for the TT. However, it did not feature pre-emptive multitasking. Another variant, known as TT/X, used Unix System V R4 and WISH (Motif extension).
The TT030 was first introduced at CeBIT in Hannover, Germany and launched in 1990. It retailed for $2995 with 2 MB RAM and a 50 MB hard drive. The US release came the following year. In 1993, Atari Corp.'s exit from the computer business marked the end of the TT. A number of TT machines were built as developer systems for the Jaguar.
Details
The TT featured a number of devices that had previously been unavailable for Atari Corp. systems. For example, an AppleTalk network port (there never was a driver for it, possibly due to license problems), VME expansion bus, new VGA video graphics modes, and a true SCSI port. Existing ST features such as MIDI ports, a cartridge port, and the ASCI/DMA port were retained in this system.
One device that was left out was the BLiTTER graphics chip, which had first appeared in the Atari Mega ST systems four to five years earlier. Using the existing 8 MHz chip would only have served to bottleneck the TT's performance. To be useful, a new 32 MHz blitter chip would have had to be designed for the TT; however, Atari chose not to do so.
An Atari Corp. version of Unix System V was not released until mid-1992. By the end of that year, Atari Corp. dropped all Unix development.
A special version of the TT, the TT/X, was supplied with UNIX System V R4 and WISH (an extension of OSF Motif), as well as a collection of free software utilities including GCC. In the boot screen for "Atari System V", as the manual called it, the operating system's kernel identified itself as "UniSoft UNIX (R) System V Release 4.0".
Technical specifications
All TTs were made up of both custom and commercial chips:
Custom chips
TT Shifter "TT Video shift register chip" — Enabled bitmap graphics. Featured a 64-bit wide bus with interleaved access to ("dual purpose") system memory and on-chip buffers for high bandwidths. Contiguous 32 KB memory for ST modes, 154 KB for TT modes.
TT GLU "Generalized Logic Unit" — Control logic for the system used to connect the STs chips. Not part of the data path, but needed to bridge chips with each other. Used in TT and MEGA STE.
DMA "Direct Memory Access" — Three independent channels, one for floppy and hard drive data transfers, one for the SCSI port and one for 85C30 SCC network port. Direct access to ("dual purpose") system memory in the ST. 2 chips used.
MCU "Memory Control Unit" — For system RAM.
Support chips
MC6850P ACIA "Asynchronous Common Interface Adapter" — Enabled the ST to directly communicate with MIDI devices and keyboard (2 chips used). 31.25 kilobaud for MIDI, 7812.5 bit/s for keyboard.
MC68901 MFP "Multi Function Peripheral" — Used as an interrupt controller, timers and RS232C ports (2 chips used).
NCR 5380 "SCSI Controller" — 8-bit asynchronous transfers up to 4 MB/s.
WD-1772-PH "Western Digital Floppy Disk Controller" — Floppy controller chip.
Zilog 85C30 SCC "Zilog Serial Communications Controller" — Two high-speed SDLC serial ports.
YM2149F PSG "Programmable Sound Generator" — Provided 3-voice sound synthesis, also used for floppy signalling and printer port.
HD6301V1 "Hitachi keyboard processor" — Used for keyboard scanning and mouse/joystick ports.
MC146818A "Motorola Real Time Clock"
CPU: Motorola 68030 @ 32 MHz (system bus @ 16 MHz)
FPU: Motorola 68882 @ 32 MHz
RAM:
System RAM ("dual purpose") 2 MB ST RAM expandable to 10 MB
TT RAM ("single purpose") expandable to 256 MB TT RAM on daughter board using either 30-pin or 72-pin SIMMs
Sound: Yamaha YM2149 + Stereo DMA 8-bit PCM, same as in the STe
Drive: 1.44 MB (later version) or 720 KB (first TT version) 3½" floppy disk drive
Ports:
MIDI In/Out
3 x RS-232
Serial LAN RS-422
Printer
VGA Monitor (analog RGB and Mono)
Extra Disk drive port
ACSI and SCSI port
VMEbus inside case
cartridge (128 KB)
keyboard (detachable)
Joystick and Mouse ports (on keyboard)
Operating System:
Atari's TOS with the Graphics Environment Manager (GEM) graphical user interface (GUI) TOS versions: 3.01, 3.05 or 3.06 in ROM. Four socketed 1 Mb ROMs providing 512 KB of ROM space.
MiNT
MagiC
Atari System V
NetBSD
Display modes:
Color: 320×200 (16 color), 320×480 (256 colors or 256 greyscales), 640×200 (4 colors), 640×480 (16 colors), palette of 4096 colors
Duochrome: 640×400 (2 colors)
Monochrome: 1280×960 mono TT high with ECL 19 in (483 mm) TTM195 monitor
Character set: Atari ST character set, based on codepage 437.
Case: Two-piece desktop-style.
Release Date: 1990-1991
The (at least) two versions of the TT can be distinguished by:
Internal sheet plate (old) or coating (new) for electromagnetic compatibility
CPU and FPU on daughter board (old) or directly on main board (new)
1.44 MB HD floppy drive (720 KB DD floppy drive on older models)
Emulation
Hatari is able to emulate an Atari TT on a variety of operating systems using the SDL library.
Atari Coldfire Project - Atari computer clone
References
External links
Guillaume Tello's WEB page What to do with a TT? Some hardware expansions detailed
Download page Programs for Atari, mostly for the TT.
ASV Archive page Atari System V unofficial webpage.
68000-based home computers
Atari ST
Computer workstations |
4716957 | https://en.wikipedia.org/wiki/Numerical%20Electromagnetics%20Code | Numerical Electromagnetics Code | The Numerical Electromagnetics Code, or NEC, is a popular antenna modeling system for wire and surface antennas. It was originally written in FORTRAN during the 1970s by Gerald Burke and Andrew Poggio of the Lawrence Livermore National Laboratory. The code was made publicly available for general use and has subsequently been distributed for many computer platforms from mainframes to PCs.
NEC is widely used for modeling antenna designs, particularly for common designs like television and radio antennas, shortwave and ham radio, and similar examples. Examples of practically any common antenna type can be found in NEC format on the internet. While highly adaptable, NEC has its limits, and other systems are commonly used for very large or complex antennas or special cases like microwave antennas.
By far the most common version is NEC-2, the last to be released in fully public form. There is a wide and varied market of applications that embed the NEC-2 code within frameworks to simplify or automate common tasks. Later versions, NEC-3 and NEC-4, are available after signing a license agreement. These have not been nearly as popular. Versions using the same underlying methods but based on entirely new code are also available, including MININEC.
History
NEC traces its history to an earlier program, BRACT, which was used to analyze antennas consisting of many thin wires in free space. It was useful for modeling certain common types of antennas used on aircraft or spacecraft, or other examples where the ground was far enough away that it did not affect the signals. BRACT was developed in the early 1970s by MBAssociates for the US Air Force's Space and Missile Systems Center. MBAssociates, named after founding partners Bob Mainhardt and Art Biehl, is better known for the development of the Gyrojet rocket gun.
BRACT's success led to a second contract with MBAssociates, this time by the Naval Research Laboratory and USAF Rome Air Development Center, to adapt the BRACT code to consider the effect of the ground. This produced the Antenna Modeling Program, or AMP, which was extensively modified to support disk-based files, simplify the input and output to make it easier to use, and extensively documented. A follow-up, AMP2, added calculations for extended surfaces like reflectors.
NEC is an advanced version of AMP2, with more options and features. It was written by programmers at Lawrence Livermore National Laboratory (LLNL) under contract to the Naval Ocean Systems Center and the Air Force Weapons Laboratory. The original NEC added a more accurate system for calculating currents along the wires, and at junctions between them, as well as an option that increased the accuracy when the wire was thick, with a low aspect ratio compared to its length. NEC-2 added two major features to the original NEC, a numerical Green's function for working with large planes, and an expansion of the ground-plane code to deal with partially lossy materials that are more realistic for antennas near the ground. With the release of NEC-2, the original became known as NEC-1.
All of these programs originated in the mainframe era, originally running on Control Data machines. The code was written in FORTRAN and designed to take input from punch card stacks in column-delimited format and then print the results on a line printer. These early versions were widely ported to a number of other big-iron platforms. AMP added support for disk-based files by emulating the original system, writing out the data from a single punch card to an 80-column line in a text file, with the file as a whole representing a deck of cards. With the move from punch card input to the use of text files, a profusion of slightly different file formats appeared, which was later described as "close to free format".
Versions were introduced on the MS-DOS platform in the late 1980s, mostly using FORTRAN compilers capable of compiling the original code. Later versions converted the FORTRAN to the C programming language, either by hand or using automated tools. These versions were often limited by the platform's resources. Modern versions run on a wide variety of platforms. Modern programs usually have a separate graphical user interface (GUI) that allows the user to draw and edit the antenna. When that is complete, the GUI converts the design into the NEC-2 deck file format and runs NEC-2. The GUI then parses NEC-2's output and graphically displays the results.
Development of the original NEC codes continued at LLNL, producing NEC-3 which added the ability to model elements buried in or projecting out of the ground, and NEC-4, which included a wide variety of updates. NEC-4 formalized what was already widely the case, taking input from a specified file, sending output to another file, and allowing comments to be added to any line using the ! character. NEC-4 also introduced a new licensing system, and is not available as open source.
How it works
The code is based on the method of moments solution of the electric field integral equation (EFIE) for thin wires and the magnetic field integral equation (MFIE) for closed, conducting surfaces. It uses an iterative method to calculate the currents in a set of wires, and the fields that result.
The calculation starts by calculating the electrical field in space for a radio signal of a given frequency, normally traveling along the X axis in three-dimensional space. This field is uniform in Y and Z, but varies along the X axis; the signal's magnitude at any point along X is defined by the phase at that instant. Antennas work because the field changes over time as the wavefront moves past the antenna. This changing field induces current in conductors, the voltage being defined by the magnitude of the field at that instant. An antenna consists of extended but finite length conductors, so the pattern of the field results in different voltages at different points around the antenna. In antenna terms, each of the conductors making up the antenna is known as an element.
To calculate the net result, NEC breaks the antenna's elements into a number of sampled points, called segments. It uses simple calculations based on the diameter of the conductor and the wavelength of the signal to determine the induced voltage and currents at each of these segments. Depending on the arrangement of the wires, the induced currents in some segments will reinforce or resist the currents in others. NEC sums all of these to determine the net current in each of the conductors.
When alternating current flows in a conductor, it radiates an electromagnetic wave (radio wave). In multi-element antennas, the fields due to currents in one element induce currents in the other elements. Antennas are self-interacting in this respect; the waves reradiated by the elements superimpose on the original radio signal being studied. NEC calculates the field resulting from these contributions, adds it to the original radio signal, and then runs the entire calculation again with this modified field. Because the reradiated signal is normally small compared to the original signal, it only produces a small change, or perturbation, in the resulting element currents. The program then repeats the calculation with the new element currents, getting new radiation fields. This process is repeated until the resulting values converge.
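The toy Python sketch below illustrates this perturbation loop; the impedance values are invented for demonstration and are not NEC's actual thin-wire kernel:

import numpy as np

N = 8                                  # number of wire segments
V = np.ones(N, dtype=complex)          # voltage induced on each segment by the incident field
Z_self = 50.0 + 25.0j                  # assumed self-impedance of a segment
idx = np.arange(N)
# Assumed mutual coupling that falls off with segment separation.
Z_mut = 5.0j / (1.0 + np.abs(idx[:, None] - idx[None, :]))
np.fill_diagonal(Z_mut, 0.0)

I = np.zeros(N, dtype=complex)         # segment currents, initially zero
for step in range(100):
    # Each segment responds to the incident field plus the fields
    # reradiated by all the other segments (the perturbation term).
    I_new = (V - Z_mut @ I) / Z_self
    if np.max(np.abs(I_new - I)) < 1e-9:
        break
    I = I_new

print(f"converged after {step} iterations")
print(np.round(I, 5))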
NEC uses a separate method to calculate the contribution of extended planes of material, like a wire mesh reflector. In this case, the plane is considered as a unit and the magnetic contribution is calculated directly and fed back into the calculation once the contributions from the individual wires are considered. Similar integral solutions are used to calculate the effects of the ground plane. Similarly, inductive and capacitive loads, insulated transmission wires above and buried in the ground and other common parts of an extended antenna system are also modeled using simpler numeric methods.
The calculations normally converge rapidly. The output is then sampled at a user-defined point, the load. In a real antenna, this is normally where the wire attaches for connection to the transmitter or receiver. The result is a value that indicates the energy delivered to the load on reception, or the amount of energy absorbed by the antenna during transmission.
NEC then repeats this entire series of calculations while changing the signal so it approaches the antenna from different angles along the X and Y axes, storing the results for each combination of angles. The results are then normalized to the strongest signal received (almost always at X and Y = 0, or "head on") to produce a 3D pattern illustrating the relative gain for every angle. The gain relative to an isotropic antenna (dBi), front-to-back ratio, standing wave ratio and the general reception pattern are all evident from these numbers. Programs often process this into more common forms like Smith charts.
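The normalization step itself is simple; the sketch below uses made-up sampled powers to show the conversion to relative gain in decibels:

import numpy as np

angles = np.array([0, 15, 30, 45, 60, 75, 90])                # degrees off head-on (hypothetical)
power = np.array([1.00, 0.91, 0.55, 0.22, 0.08, 0.02, 0.01])  # power delivered to the load, watts

# Normalize to the strongest response and convert to decibels.
gain_db = 10.0 * np.log10(power / power.max())
for a, g in zip(angles, gain_db):
    print(f"{a:3d} deg: {g:6.2f} dB relative to peak")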
The algorithm has no theoretical size limit and can be applied to very large arrays or for detailed modeling of very small antenna systems. The algorithm has proven reliable (likely to converge to a solution) and accurate (likely to produce results comparable to measured performance) at modeling thin-element structures like Yagi antennas and radiating towers. The NEC engine provides support for modeling patch antennas as well. It can be used for, but is not well suited to, slotted waveguide antennas, fractal antennas or similar designs in which the component conductive elements are not rod-like.
The method of moments algorithm has practical limitations as well; the number of calculations required to model a three-dimensional structure of N radiating elements is roughly proportional to the cube of N. Modeling an antenna with 100 wire segments requires 100^3 = 1 million calculations. Increasing the number of elements by a factor of 10 requires 1,000^3 = 1 billion calculations, increasing the computing time by a factor of 1,000, assuming the simulation completes at all within memory limitations. Consequently, other approaches, such as geometric optics, are preferred for modeling large structures.
Most programs using NEC include features that run batches of NEC calculations to produce a composite output. A common example is to run the entire calculation suite at different input frequencies and then plot the samples on a single chart. One might use this to sweep through the UHF television frequencies, for instance, producing a diagram that illustrates the gain across the band. Another common feature is an iterative solver that adjusts a given parameter between runs, say the spacing between elements, in order to maximize performance. These runs are independent of one another and can be trivially parallelized on modern machines.
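A driver for such a sweep might look like the following sketch; the nec2 executable name, the antenna_template.deck file and its {freq} placeholder are assumptions, as real front ends handle this differently:

import subprocess
from concurrent.futures import ProcessPoolExecutor

# Assumed deck template with a {freq} placeholder on its FR card.
TEMPLATE = open("antenna_template.deck").read()

def run_at(freq_mhz):
    deck_name = f"run_{freq_mhz}.deck"
    with open(deck_name, "w") as f:
        f.write(TEMPLATE.format(freq=freq_mhz))
    # Each run is independent of the others, so the sweep parallelizes trivially.
    subprocess.run(["nec2", deck_name, f"run_{freq_mhz}.out"], check=True)
    return freq_mhz

if __name__ == "__main__":
    channels = range(470, 700, 8)      # sample across the UHF television band
    with ProcessPoolExecutor() as pool:
        for f in pool.map(run_at, channels):
            print("finished", f, "MHz")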
Example
The NEC input file is a sequence of lines; the input file is known as a "deck" (from "card deck", referring to the original punch card formats) and uses a .deck or .nec file extension. Each line of text, or "card", starts with one of several dozen identifiers that indicate how the line should be interpreted. One of the most common identifiers found in NEC decks is GW, which defines a single wire (element) in the antenna. Its definition is:
GW ITG NS XW1 YW1 ZW1 XW2 YW2 ZW2 RAD
The string literal GW identifies this as a line describing straight-wire geometry. The parameter ITG, short for "integer tag", is a user-provided number used to identify ("tag") this element. The NS parameter defines the number of segments the wire should be divided into during the calculation; using more segments breaks the wire into smaller parts and may produce more accurate results at the cost of increased calculation time. The next six parameters are real numbers that define the X, Y and Z locations of the wire's two endpoints. Finally, the RAD parameter is the radius of the wire. If this is set to zero, the next line must be a GC line that carries additional information defining a tapered wire.
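Such a card is easy to pick apart programmatically. The minimal Python parser below assumes the whitespace-separated free format described earlier; column-delimited decks would need fixed-position slicing instead:

def parse_gw(card):
    # Split a free-format GW card into named fields per the definition above.
    f = card.split()
    assert f[0] == "GW", "not a straight-wire geometry card"
    return {
        "tag": int(f[1]),                          # ITG
        "segments": int(f[2]),                     # NS
        "end1": tuple(float(v) for v in f[3:6]),   # XW1, YW1, ZW1
        "end2": tuple(float(v) for v in f[6:9]),   # XW2, YW2, ZW2
        "radius": float(f[9]),                     # RAD (zero means a GC card follows)
    }

print(parse_gw("GW 1 5 0.0000 -1.0000 0.0 0.0 1.0000 0.0 .00667"))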
The following example of a complete input deck models a log-periodic antenna, like those used for VHF television reception:
CM TESTEX5
CM 12 ELEMENT LOG PERIODIC ANTENNA IN FREE SPACE
CM 78 SEGMENTS. SIGMA=O/L RECEIVING AND TRANS. PATTERNS.
CM DIPOLE LENGTH TO DIAMETER RATIO=150.
CE TAU=0.93. SIGMA=0.70. BOOM IMPEDANCE=50. OHMS.
GW 1 5 0.0000 -1.0000 0.0000000 0.00000 1.0000 0.000 .00667
GW 2 5 -.7527 -1.0753 0. -.7527 1.0753 0. .00717
GW 3 5 -1.562 -1.1562 0. -1.562 1.1562 0. .00771
GW 4 5 -2.4323 -1.2432 0. -2.4323 1.2432 0. .00829
GW 5 5 -3.368 -1.3368 0. -3.368 1.3368 0. .00891
GW 6 7 -4.3742 -1.4374 0. -4.3742 1.4374 0. .00958
GW 7 7 -5.4562 -1.5456 0. -5.4562 1.5456 0. .0103
GW 8 7 -6.6195 -1.6619 0. -6.6195 1.6619 0. .01108
GW 9 7 -7.8705 -1.787 0. -7.8705 1.787 0. .01191
GW 10 7 -9.2156 -1.9215 0. -9.2156 1.9215 0. .01281
GW 11 9 -10.6619 -2.0662 0. -10.6619 2.0662 0. .01377
GW 12 9 -12.2171 -2.2217 0. -12.2171 2.2217 0. .01481
GE
FR 0 0 0 0 46.29 0.
TL 1 3 2 3 -50.
TL 2 3 3 3 -50.
TL 3 3 4 3 -50.
TL 4 3 5 3 -50.
TL 5 3 6 4 -50.
TL 6 4 7 4 -50.
TL 7 4 8 4 -50.
TL 8 4 9 4 -50.
TL 9 4 10 4 -50.
TL 10 4 11 5 -50.
TL 11 5 12 5 -50. ,0.,0.,0.,.02
EX 0 1 3 10 1
RP 0 37 1 1110 90. 0. -5. 0.
EN
The example starts with several CM (comment) lines, followed by the last comment on a CE (comment end) line. The CE line must be followed by geometry lines (lines whose commands start with the letter G).
In this case, the geometry section consists of twelve GW elements making up the antenna. Each element is longer than the last, and to maintain accuracy, the later elements are divided into more segments. All measurements in NEC use metres, so the first element is 2 metres wide, running from -1 to 1. The GE line indicates the end of the geometry section. At this point, NEC scans the geometry for overlapping endpoints, which it then connects together to make a single longer conductor. The GE line also has a single input that indicates whether a ground plane is present; in this example it is not specified, so the antenna is modeled in free space, as the comment lines state.
The FR line then sets the test frequency to 46.29 MHz. FR lines can optionally define the number and size of frequency steps if the system is being used to analyze performance across a range of frequencies, but that is not done in this case. The TL (transmission line) lines connect the various elements together. These can be seen on most log-periodic designs in the form of two thin rods running down the boom between the main antenna elements, although some designs use the boom itself, or hide the wires within the boom. The EX (excitation) line indicates the location of the energy supplied to the design, in this case an incoming plane wave being captured on segment 10, while the RP (radiation pattern) line requests the pattern calculation and defines the range of angles to be swept.
Finally the EN line (end of input) indicates the deck is complete, at which point the NEC code starts the simulation and generates reports. The reports start by reprinting much of the input, which allows the user to check for errors. It then includes lengthy sections showing how the system broke the antenna down into segments. Finally, it begins to list calculated values, in tabular format. A small sample of the output from the sample above includes:
- - - RADIATION PATTERNS - - -
- - ANGLES - - - DIRECTIVE GAINS - - - POLARIZATION - - - - E(THETA) - - - - - E(PHI) - - -
THETA PHI VERT. HOR. TOTAL AXIAL TILT SENSE MAGNITUDE PHASE MAGNITUDE PHASE
DEGREES DEGREES DB DB DB RATIO DEG. VOLTS/M DEGREES VOLTS/M DEGREES
90.00 .00 -999.99 9.75 9.75 .00000 90.00 LINEAR 0.00000E+00 .00 2.46922E+00 -66.00
85.00 .00 -999.99 9.70 9.70 .00000 90.00 LINEAR 0.00000E+00 .00 2.45352E+00 -65.20
[many lines removed]
30.00 .00 -999.99 2.10 2.10 .00000 90.00 LINEAR 0.00000E+00 .00 1.02313E+00 38.02
25.00 .00 -999.99 -.14 -.14 .00000 90.00 LINEAR 0.00000E+00 .00 7.90310E-01 59.26
[more lines removed]
The output indicates that the antenna has a maximum gain of 9.75 dBi, a little over three times the field strength (roughly nine times the power) of an isotropic antenna. However, as the signal moves even five degrees to the side, this drops to 9.70 dBi, and by about 65 degrees off the front the gain becomes negative. This indicates that the antenna is fairly directional, and one would expect it to have a high front-to-back ratio.
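A minimal sketch of such post-processing, assuming a report file named run.out laid out like the sample above:

def pattern_gains(lines):
    # Extract (theta, total gain in dB) pairs from the RADIATION PATTERNS
    # table; total gain is the fifth column in the sample output above.
    rows = []
    for line in lines:
        f = line.split()
        if len(f) < 5:
            continue
        try:
            rows.append((float(f[0]), float(f[4])))
        except ValueError:
            continue                   # header or other non-numeric line
    return rows

rows = pattern_gains(open("run.out"))
print("peak gain:", max(db for _, db in rows), "dBi")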
NEC versions
BRACT
BRACT was a pure method of moments implementation, suitable for use on antennas consisting of uniform diameter conductors arranged in free space and connected to each other at their ends (if at all). It did not model the contributions of the ground (or water) and was primarily useful for aircraft and spacecraft type applications.
AMP
AMP modified BRACT by adding a system for calculating the effects of ground planes.
AMP2
AMP2 added the ability to model extended closed surfaces.
NEC-1
The original NEC, later known as NEC-1 when NEC-2 was introduced, was a modification of the earlier AMP2, adding a more accurate current expansion along wires and at multiple wire junctions, and an option in the wire modeling for far greater accuracy on thick wires. A new model for a voltage source was added, and several other modifications were made for increased accuracy.
NEC-2
NEC-2 is the most recent version of the code in the public domain; it may be used without a license. It cannot model buried radials or ground stakes.
NEC-3
NEC-3 modified NEC-2 to include a Sommerfeld model to properly model wires buried in or close to the ground.
NEC-4
NEC-4 modified NEC-3 to better model very small antennas, like those on cell phones and WiFi routers. The most recent version, 4.2, includes a better version of the Sommerfeld model used in NEC-3 for in- and near-ground wires, added current sources instead of just voltage sources as in earlier models, and used a new memory management system that allows arbitrarily large designs.
NEC-4 remains the property of the Lawrence Livermore National Laboratory and the University of California. NEC-4 requires a license.
NEC-5
NEC-5 solves the Electric-field integral equation for wires and surfaces using the newer mixed potential method developed by Rao, Wilton and Glisson.
NEC-5 remains the property of the Lawrence Livermore National Laboratory and the University of California. NEC-5 requires a license.
MININEC
MININEC is an independent implementation of the concepts in NEC. It uses the same method of moments algorithm to calculate the outcomes, but using entirely original code. The first versions were written in 1980 in BASIC for 32 kB Apple II computers, and after following some advice from Professor Wilton at the University of Mississippi, the first public release was made in 1982 for 64 kB machines. An improved version, MININEC2, was released in 1984, followed by a port to the IBM PC as MININEC3 in 1986. Like the original NEC, MININEC now runs on many platforms, although its popularity has fallen with the more widespread availability of the original NEC codes in C form.
MININEC suffers from some known flaws compared to NEC, the best known being that resonant frequencies may be slightly in error. However, MININEC handles different wire diameters better than NEC-2 and probably NEC-4; this includes parallel wires of different diameters, wires of different diameters joined at an angle, and tapered-diameter antenna elements. Placing sources at an intersection of two wires is a problem for NEC-2 but not MININEC. On the other hand, MININEC converges more slowly (requires more segments) when wires join at an angle or when adjacent wire segments differ significantly in length, and it has a weaker ground model.
References
External links
Cost-free resources
nec2++ - an extensive rewrite of NEC-2 in C++ licensed under GPLv2, with a C/C++ interface and python bindings. It can easily be incorporated into automatic optimizers.
4nec2 - A free NEC2/NEC4 implementation for Microsoft Windows. It is a tool for designing 2D and 3D antennas and modeling their near-field/far-field radiation patterns.
Numerical Electromagnetics Code NEC2 unofficial home page - NEC2 documentation and code examples
MMANA-GAL basic - A free antenna modeling program based on MININEC. Opens .MAA files. (MMANA-GAL also works on Linux using Wine, or on Raspberry Pi using Wine inside ExaGear).
xnec2c - A translation of NEC2 into C with multithreading and accelerated math libraries; uses GTK3 to provide 3D radiation pattern and 2D graphs for impedance, gain, directionality and a Smith chart. Opens .NEC files, supports external Simplex optimization, and much more. (official github repo)
xnec2c-optimize - An optimizer that works with xnec2c to tune antenna geometries (i.e., for high gain or low VSWR) using the Simplex optimization algorithm.
NEC Lab - A tool that uses the Numerical Electromagnetics Code (NEC2) and artificial intelligence (AI) to design antennas.
CocoaNEC - Open source GUI front-end for Apple Mac OS X. Includes NEC2 and supports NEC4 with separate license.
Commercial resources
AN-SOF - A Windows simulation software for antennas in free space and above a lossy ground, microstrip patch antennas and printed circuit boards (PCBs). A radial wire ground screen is included and connections to imperfect ground are allowed. Not based on NEC.
EZnec - A well-known commercial antenna modeling package based on NEC3 and NEC4. The ARRL's "Antenna Book" makes extensive use of EZnec and includes many example files (in .EZ format) to model amateur radio antennas. Opens .EZ files. (EZnec also works on Linux using Wine, or on Raspberry Pi using Wine inside ExaGear).
AutoEZ - An Excel application that works in conjunction with EZNEC v5.0 and v6.0, running multiple EZNEC test cases while automatically changing one or more variables between runs.
NEC4WIN/VM - A Windows XP/Vista simulation program based on MININEC 3.
AC6LA antenna utilities - A collection of commercial antenna utilities
Nec-Win plus - A commercial modeling package.
GAL-ANA - A commercial antenna modeling package based on NEC2 and MININEC.
GNEC - A commercial NEC package with a graphical user interface.
MMANA-GAL PRO - A commercial modeling package supporting up to 45,000 segments.
Example NEC files (for amateur radio antennas)
DF9CY - EZNEC Simulation files collection - Antenna modeling files for EZnec and 4nec2.
ARRL antenna modeling files - .NEC, .YAG, and .EZ files contributed by various sources.
KK4OBI - Bent and other dipole models - A resource aimed primarily at amateur radio operators who want to see what happens when a half-wave dipole is bent. (.NEC files for dipoles)
L.B. Cebik's Basic & Intermediate Antenna Modeling - Includes exercise models for EZnec (also see Cebik's Antenna Modeling Notes - 7 Volumes including the models).
Simulating the end-fed half-wave (EFHW) dipole antenna
VK3IL - Multiband end-fed 80-10m antenna - NEC2 model file of a "MyAntennas EFHW-8010" multi-band antenna.
Other retail books (such as The ARRL Antenna Book, Marcel De Canck's Advanced Antenna Modeling, and others) also include antenna model files.
Most free or retail NEC software packages include an 'example' folder containing antenna model files.
NEC antenna modeling tutorials
Archive of L. B. Cebik's papers on antennas and antenna modeling (mirror) - Larry Wolfgang, WR1B, QEX Editor, called Cebik "probably the most widely published and often read author of Amateur Radio antenna articles ever to write on the subject."
How to Start Modeling Antennas using EZNEC - Antenna Modeling for Beginners (W8WWV, May 2011)
Computer Antenna Modeling Simplified - Slide show compiled from the Internet for the AARA Ham Radio Club (KE5KJD, 2010)
Maximum-Gain Radial Ground Systems for Vertical Antennas - Analysis of how many ground radials is theoretically optimal for different vertical antennas (modeled with EZNEC4)
Numerical Electromagnetics Code - Older website which gathered some NEC2 documentation from a group of leading scientists and engineers
The ARRL's antenna modeling page - Includes some links to QST magazine publications of LB Cebik's NEC tutorials (for paid QST members only - though the original source articles are also available for free distribution at other websites).
The ARRL's supplemental information page for their Antenna Modeling for Beginners book - Includes some presentations, links to tutorials, and references.
Installing and getting oriented with xnec2c in Debian Linux
YouTube Tutorials
David Casler (KEØOG)'s Antenna Modeling Videos
How High Should a Dipole Be? A Look at Antenna Modeling - Intro to EZnec
Decoding Antenna Modeling Charts
Modeling Common Dipole Variations
Modeling the Simple Inverted Vee Antenna
Antenna Modeling a Modified Dipole with EZNEC 6+
20/40 Two Band Dipole Modeled
What I discovered by modeling the MFJ-1846 Hex Beam
Callum (M0MCX)'s Antenna Software Modeling videos - YouTube playlist
Karl Schneider (KE0JWK)'s Antenna Modeling with 4NEC2 - YouTube Playlist
Other NEC software lists
The unofficial Numerical Electromagnetic Code (NEC) Archives
The W8IO Antenna Site - NEC and Mininec
Electromagnetic simulation software
Free simulation software
Amateur radio software |
45295258 | https://en.wikipedia.org/wiki/Packet%20Digital | Packet Digital | Packet Digital LLC, headquartered in Fargo, North Dakota, designs power management integrated circuits (PMICs) to reduce power consumed and heat produced by computer servers, mobile devices and unmanned aircraft systems (UAS).
Unmanned Aircraft Systems (UAS)
Packet Digital is designing circuitry for solar-powered UAS to more efficiently use electrical output from solar panels on the aircraft wings and fuselage to power avionics and charge on-board batteries. By reducing wasteful power dissipation, the company aims to increase flight times.
Packet Digital is designing power electronics to be used in conjunction with advanced photovoltaics being developed by the United States Naval Research Laboratory, Washington, D.C. Ultimately, the two organizations want to help the Navy achieve unlimited flight times for its solar-powered UAS. The circuitry will implement high-frequency power tracking algorithms to accommodate dynamic movements of the unmanned aerial vehicle (UAV) or clouds that could change how much sunlight is absorbed and converted by the solar cells.
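Packet Digital's tracking algorithms are proprietary, but the general idea behind power-point tracking can be illustrated with the classic perturb-and-observe method, sketched below in Python: the controller repeatedly nudges the panel's operating voltage and keeps whichever direction increased the extracted power. The panel curve, step size and voltages here are hypothetical stand-ins, not figures from Packet Digital or NRL.

# Sketch: perturb-and-observe maximum power point tracking (MPPT).
# solar_power() is a made-up stand-in for a real panel measurement.

def solar_power(v):
    # Hypothetical panel: roughly 5 A short-circuit, 21 V open-circuit.
    i = max(0.0, 5.0 * (1.0 - (v / 21.0) ** 8))
    return v * i

def perturb_and_observe(v=12.0, step=0.1, iterations=200):
    p_last = solar_power(v)
    direction = 1.0
    for _ in range(iterations):
        v += direction * step          # perturb the operating voltage
        p = solar_power(v)
        if p < p_last:                 # power fell, so reverse direction
            direction = -direction
        p_last = p
    return v, p_last

v_mp, p_mp = perturb_and_observe()
print(f"settled near {v_mp:.2f} V, extracting about {p_mp:.1f} W")

A real tracker runs a loop like this at high frequency so the operating point can follow rapid changes in illumination as the aircraft manoeuvres or clouds pass.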
Packet Digital announced in June 2015 that it would create Botlink LLC as a joint venture with drone app developer Aerobotic Innovations LLC. Botlink will develop and market a hardware-software platform combining Packet Digital's power management circuits for improved drone endurance and Aerobotic Innovation's cloud-based operations platform for safety, communications, data processing and control of a drone from a drag-and-drop app on a tablet or smartphone—a feature known as "Drag. Drop. Drone."
Leadership
In February 2015, it was announced Peter Lindgren, president and CEO of Otter Products, and Linda Pancratz, retired chairman of the board and CEO of TDL Infomedia Limited in the United Kingdom, joined the Packet Digital board of directors.
Accolades
Packet Digital was named the Electronics and Overall Innovation Challenge Winner at Aviation Week 2012 in Washington, D.C. Gartner named Packet Digital in its 2014 Cool Vendors in Semiconductors report. Inc. listed the company number 468 among its 500 Fastest Growing Companies in 2008.
References
Companies based in Fargo–Moorhead
Semiconductor companies of the United States
Technology companies established in 2003 |
39352 | https://en.wikipedia.org/wiki/List%20of%20telecommunications%20encryption%20terms | List of telecommunications encryption terms |
This is a list of telecommunications encryption terms. This list is derived in part from the Glossary of Telecommunication Terms published as Federal Standard 1037C.
A5/1 - a stream cipher used to provide over-the-air communication privacy in the GSM cellular telephone standard.
Bulk encryption
Cellular Message Encryption Algorithm - a block cipher which was used for securing mobile phones in the United States.
Cipher
Cipher system
Cipher text
Ciphony
Civision
Codress message
COMSEC equipment
Cryptanalysis
Cryptographic key
CRYPTO (International Cryptology Conference)
Crypto phone
Crypto-shredding
Data Encryption Standard (DES)
Decipher
Decode
Decrypt
DECT Standard Cipher
Descrambler
Encipher
Encode
Encoding law
Encrypt
End-to-end encryption
group
IMSI-catcher - an eavesdropping device used for interception of cellular phones, usually undetectable by users of mobile phones.
Key distribution center (KDC)
Key management
Key stream
KSD-64
Link encryption
MISTY1
Multiplex link encryption
Net control station (NCS)
Null cipher
One-time pad
Over the Air Rekeying (OTAR)
Plaintext
PPPoX
Protected distribution system (PDS)
Protection interval (PI)
Pseudorandom number generator
Public-key cryptography
RED/BLACK concept
RED signal
Remote rekeying
Security management
Spoofing
Squirt - to load or transfer code key from an electronic key storage device. See Over the Air Rekeying.
STU-III - a family of secure telephones introduced in 1987 by the National Security Agency for use by the United States government, its contractors, and its allies.
Superencryption
Synchronous crypto-operation
Transmission security key (TSK)
Trunk encryption device (TED)
Type 1 encryption
Type 2 encryption
Type 3 encryption
Type 4 encryption
Unique key
VoIP VPN - combines voice over IP and virtual private network technologies to offer a method for delivering secure voice.
ZRTP - a cryptographic key-agreement protocol used in Voice over Internet Protocol (VoIP) telephony.
See also
Communications security
CONDOR secure cell phone
Cryptography standards
Secure communication
Secure telephone
Telecommunication
References
Further reading
Rutenbeck, Jeff (2006). Tech terms: what every telecommunications and digital media person should know. Elsevier, Inc.
Kissel, Richard (editor). (February 2011). Glossary of Key Information Security Terms (NIST IR 7298 Revision 1). National Institute of Standards and Technology.
External links
"Federal Standard 1037C."Telecommunications: Glossary of Telecommunication Terms
Embedding Security into Handsets Would Boost Data Usage - Report (2005) from Cellular-news.com
Wireless, Telecom and Computer Glossary from Cellular Network Perspectives
Telecommunications encryption terms |
40345 | https://en.wikipedia.org/wiki/MOSFET | MOSFET | The metal–oxide–semiconductor field-effect transistor (MOSFET, MOS-FET, or MOS FET), also known as the metal–oxide–silicon transistor (MOS transistor, or MOS), is a type of insulated-gate field-effect transistor that is fabricated by the controlled oxidation of a semiconductor, typically silicon. The voltage of the gate terminal determines the electrical conductivity of the device; this ability to change conductivity with the amount of applied voltage can be used for amplifying or switching electronic signals.
The MOSFET was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959, and first presented in 1960. It is the basic building block of modern electronics, and the most frequently manufactured device in history, with an estimated total of 13 sextillion (1.3 × 10²²) MOSFETs manufactured between 1960 and 2018. It is the dominant semiconductor device in digital and analog integrated circuits (ICs), and the most common power device. It is a compact transistor that has been miniaturised and mass-produced for a wide range of applications, revolutionizing the electronics industry and the world economy, and being central to the digital revolution, silicon age and information age. MOSFET scaling and miniaturization has been driving the rapid exponential growth of electronic semiconductor technology since the 1960s, and enables high-density ICs such as memory chips and microprocessors. The MOSFET is considered the "workhorse" of the electronics industry.
A key advantage of a MOSFET is that it requires almost no input current to control the load current, when compared with bipolar junction transistors (BJTs). In an enhancement mode MOSFET, voltage applied to the gate terminal can increase the conductivity from the "normally off" state. In a depletion mode MOSFET, voltage applied at the gate can reduce the conductivity from the "normally on" state. MOSFETs are also capable of high scalability, with increasing miniaturization, and can be easily scaled down to smaller dimensions. They also have faster switching speed (ideal for digital signals), much smaller size, consume significantly less power, and allow much higher density (ideal for large-scale integration), compared to BJTs. MOSFETs are also cheaper and have relatively simple processing steps, resulting in high manufacturing yield.
MOSFETs can either be manufactured as part of MOS integrated circuit chips or as discrete MOSFET devices (such as a power MOSFET), and can take the form of single-gate or multi-gate transistors. Since MOSFETs can be made with either p-type or n-type semiconductors (PMOS or NMOS logic, respectively), complementary pairs of MOSFETs can be used to make switching circuits with very low power consumption: CMOS (Complementary MOS) logic.
The name "metal–oxide–semiconductor" (MOS) typically refers to a metal gate, oxide insulation, and semiconductor (typically silicon). However, the "metal" in the name MOSFET is sometimes a misnomer, because the gate material can also be a layer of polysilicon (polycrystalline silicon). Along with oxide, different dielectric materials can also be used with the aim of obtaining strong channels with smaller applied voltages. The MOS capacitor is also part of the MOSFET structure.
Early history
Background
The basic principle of the field-effect transistor (FET) was first proposed by Austrian physicist Julius Edgar Lilienfeld in 1926, when he filed the first patent for an insulated-gate field-effect transistor. Over the course of the next two years he described various FET structures. In his configuration, aluminum formed the metal and aluminum oxide the oxide, while copper sulfide was used as a semiconductor. However, he was unable to build a practical working device. The FET concept was later also theorized by German engineer Oskar Heil in the 1930s and American physicist William Shockley in the 1940s. There was no working practical FET built at the time, and none of these early FET proposals involved thermally oxidized silicon.
Semiconductor companies initially focused on bipolar junction transistors (BJTs) in the early years of the semiconductor industry. However, the junction transistor was a relatively bulky device that was difficult to manufacture on a mass-production basis, which limited it to a number of specialised applications. FETs were theorized as potential alternatives to junction transistors, but researchers were unable to build practical FETs, largely due to the troublesome surface state barrier that prevented the external electric field from penetrating into the material. In the 1950s, researchers had largely given up on the FET concept, and instead focused on BJT technology.
In 1955, Carl Frosch and Lincoln Derrick accidentally covered the surface of a silicon wafer with a layer of silicon dioxide. In their subsequent experimental characterisation of this oxide layer they found that it blocked entry of particular dopants to the silicon wafer (despite allowing others) and thereby discovered the passivating effect of a surface oxidation layer on this semiconductor. Their further work demonstrated the etching of small openings into the oxide layer to diffuse dopants into precisely controlled areas of a silicon wafer. In 1957, they published a research paper summarizing their work and patented their technique. The technique they developed is known as oxide diffusion masking, which would later be used in the fabrication of MOSFET devices. At Bell Labs, the importance of Frosch's technique was immediately realized since silicon oxides are much more stable than germanium oxides, have better dielectric properties and at the same time could be used as a diffusion mask. Results of their work circulated around Bell Labs in the form of BTL memos before being published in 1957. At Shockley Semiconductor, Shockley had circulated the preprint of their article in December 1956 to all his senior staff, including Jean Hoerni.
Invention
Mohamed M. Atalla at Bell Labs was dealing with the problem of surface states in the late 1950s. He picked up Frosch's work on oxidation, attempting to passivate the surface of silicon through the formation of an oxide layer over it. He thought that growing a very thin, high-quality thermally grown oxide layer on top of a clean silicon wafer would neutralize surface states enough to make a practical working field-effect transistor. He wrote his findings in his BTL memos in 1957, before presenting his work at an Electrochemical Society meeting in 1958. This was an important development that enabled MOS technology and silicon integrated circuit (IC) chips. The following year, John L. Moll described the MOS capacitor at Stanford University. Atalla's co-workers J.R. Ligenza and W.G. Spitzer, who studied the mechanism of thermally grown oxides, managed to fabricate a high quality Si/SiO2 stack, with Atalla and Kahng making use of their findings.
The MOSFET was invented when Mohamed Atalla and Dawon Kahng successfully fabricated the first working MOSFET device in November 1959. The device is covered by two patents, each filed separately by Atalla and Kahng in March 1960. They published their results in June 1960, at the Solid-State Device Conference held at Carnegie Mellon University. The same year, Atalla proposed the use of MOSFETs to build MOS integrated circuit (MOS IC) chips, noting the MOSFET's ease of fabrication.
Commercialization
The advantage of the MOSFET was that it was relatively compact and easy to mass-produce compared to the competing planar junction transistor, but the MOSFET represented a radically new technology, the adoption of which would have required spurning the progress that Bell had made with the bipolar junction transistor (BJT). The MOSFET was also initially slower and less reliable than the BJT.
In the early 1960s, MOS technology research programs were established by Fairchild Semiconductor, RCA Laboratories, General Microelectronics (led by former Fairchild engineer Frank Wanlass) and IBM. In 1962, Steve R. Hofstein and Fred P. Heiman at RCA built the first MOS integrated circuit chip. The following year, they collected all previous works on FETs and gave a theory of operation of the MOSFET. CMOS was developed by Chih-Tang Sah and Frank Wanlass at Fairchild in 1963. The first CMOS integrated circuit was later built in 1968 by Albert Medwin.
The first formal public announcement of the MOSFET's existence as a potential technology was made in 1963. It was then first commercialized by General Microelectronics in May 1964, followed by Fairchild in October 1964. GMe's first MOS contract was with NASA, which used MOSFETs for spacecraft and satellites in the Interplanetary Monitoring Platform (IMP) program and Explorers Program. The early MOSFETs commercialized by General Microelectronics and Fairchild were p-channel (PMOS) devices for logic and switching applications. By the mid-1960s, RCA was using MOSFETs in its consumer products, including FM radios, televisions and amplifiers. In 1967, Bell Labs researchers Robert Kerwin, Donald Klein and John Sarace developed the self-aligned gate (silicon-gate) MOS transistor, which Fairchild researchers Federico Faggin and Tom Klein adapted for integrated circuits in 1968.
MOS revolution
The development of the MOSFET led to a revolution in electronics technology, called the MOS revolution or MOSFET revolution, fuelling the technological and economic growth of the early semiconductor industry.
The impact of the MOSFET became commercially significant from the late 1960s onwards. This led to a revolution in the electronics industry, which has since impacted daily life in almost every way. The invention of the MOSFET has been cited as the birth of modern electronics and was central to the microcomputer revolution.
Importance
The MOSFET forms the basis of modern electronics, and is the basic element in most modern electronic equipment. It is the most common transistor in electronics, and the most widely used semiconductor device in the world. It has been described as the "workhorse of the electronics industry" and "the base technology" of the late 20th to early 21st centuries. MOSFET scaling and miniaturization (see List of semiconductor scale examples) have been the primary factors behind the rapid exponential growth of electronic semiconductor technology since the 1960s, as the rapid miniaturization of MOSFETs has been largely responsible for the increasing transistor density, increasing performance and decreasing power consumption of integrated circuit chips and electronic devices since the 1960s.
MOSFETs are capable of high scalability (Moore's law and Dennard scaling), with increasing miniaturization, and can be easily scaled down to smaller dimensions. They consume significantly less power, and allow much higher density, than bipolar transistors. MOSFETs can be much smaller than BJTs, about one-twentieth of the size by the early 1990s. MOSFETs also have faster switching speed, with rapid on–off electronic switching that makes them ideal for generating pulse trains, the basis for digital signals, in contrast to BJTs, which more slowly generate analog signals resembling sine waves. MOSFETs are also cheaper and have relatively simple processing steps, resulting in higher manufacturing yield. MOSFETs thus enable large-scale integration (LSI), and are ideal for digital circuits, as well as linear analog circuits.
The MOSFET has been variously described as the most important transistor, the most important device in the electronics industry, arguably the most important device in the computing industry, one of the most important developments in semiconductor technology, and possibly the most important invention in electronics. The MOSFET has been the fundamental building block of modern digital electronics, during the digital revolution, information revolution, information age, and silicon age. MOSFETs have been the driving force behind the computer revolution, and the technologies enabled by it. The rapid progress of the electronics industry during the late 20th to early 21st centuries was achieved by rapid MOSFET scaling (Dennard scaling and Moore's law), down to the level of nanoelectronics in the early 21st century. The MOSFET revolutionized the world during the information age, with its high density enabling a computer to exist on a few small IC chips rather than filling a room, and later making possible digital communications technology such as smartphones.
The MOSFET is the most widely manufactured device in history, generating annual sales in the tens of billions of US dollars as of 2015. Between 1960 and 2018, an estimated total of 13 sextillion MOS transistors have been manufactured, accounting for at least 99.9% of all transistors. Digital integrated circuits such as microprocessors and memory devices contain thousands to billions of integrated MOSFETs on each device, providing the basic switching functions required to implement logic gates and data storage. There are also memory devices which contain at least a trillion MOS transistors, such as a 256 GB microSD memory card, more than the number of stars in the Milky Way galaxy. As of 2010, the operating principles of modern MOSFETs have remained largely the same as the original MOSFET first demonstrated by Mohamed Atalla and Dawon Kahng in 1960.
The US Patent and Trademark Office calls the MOSFET a "groundbreaking invention that transformed life and culture around the world" and the Computer History Museum credits it with "irrevocably changing the human experience." The MOSFET was also the basis for Nobel Prize winning breakthroughs such as the quantum Hall effect and the charge-coupled device (CCD), though there was never any Nobel Prize given for the MOSFET itself. In a 2018 note on Jack Kilby's Nobel Prize for Physics for his part in the invention of the integrated circuit, the Royal Swedish Academy of Sciences specifically mentioned the MOSFET and the microprocessor as other important inventions in the evolution of microelectronics. The MOSFET is also included on the list of IEEE milestones in electronics, and its inventors Mohamed Atalla and Dawon Kahng entered the National Inventors Hall of Fame in 2009.
Composition
Usually the semiconductor of choice is silicon. Recently, some chip manufacturers, most notably IBM and Intel, have started using a chemical compound of silicon and germanium (SiGe) in MOSFET channels. Unfortunately, many semiconductors with better electrical properties than silicon, such as gallium arsenide, do not form good semiconductor-to-insulator interfaces, and thus are not suitable for MOSFETs. Research continues on creating insulators with acceptable electrical characteristics on other semiconductor materials.
To overcome the increase in power consumption due to gate current leakage, a high-κ dielectric is used instead of silicon dioxide for the gate insulator, while polysilicon is replaced by metal gates (e.g. Intel, 2009).
The gate is separated from the channel by a thin insulating layer, traditionally of silicon dioxide and later of silicon oxynitride. Some companies have started to introduce a high-κ dielectric and metal gate combination in the 45 nanometer node.
When a voltage is applied between the gate and body terminals, the electric field generated penetrates through the oxide and repels holes from the semiconductor-insulator interface, leaving behind a layer of immobile, negatively charged acceptor ions. With sufficient gate voltage, electrons accumulate at the interface and form an inversion layer, or channel, in which that part of the p-type body behaves as if it were n-type. The inversion layer provides a channel through which current can pass between source and drain terminals. Varying the voltage between the gate and body modulates the conductivity of this layer and thereby controls the current flow between drain and source. This is known as enhancement mode.
Operation
Metal–oxide–semiconductor structure
The traditional metal–oxide–semiconductor (MOS) structure is obtained by growing a layer of silicon dioxide () on top of a silicon substrate, commonly by thermal oxidation and depositing a layer of metal or polycrystalline silicon (the latter is commonly used). As the silicon dioxide is a dielectric material, its structure is equivalent to a planar capacitor, with one of the electrodes replaced by a semiconductor.
When a voltage is applied across a MOS structure, it modifies the distribution of charges in the semiconductor. If we consider a p-type semiconductor (with NA the density of acceptors and p the density of holes; p = NA in neutral bulk), a positive voltage, VGB, from gate to body (see figure) creates a depletion layer by forcing the positively charged holes away from the gate-insulator/semiconductor interface, leaving exposed a carrier-free region of immobile, negatively charged acceptor ions (see doping (semiconductor)). If VGB is high enough, a high concentration of negative charge carriers forms in an inversion layer located in a thin layer next to the interface between the semiconductor and the insulator.
Conventionally, the gate voltage at which the volume density of electrons in the inversion layer is the same as the volume density of holes in the body is called the threshold voltage. When the voltage between transistor gate and source (VGS) exceeds the threshold voltage (Vth), the difference is known as overdrive voltage.
This structure with p-type body is the basis of the n-type MOSFET, which requires the addition of n-type source and drain regions.
MOS capacitors and band diagrams
The MOS capacitor structure is the heart of the MOSFET. Consider a MOS capacitor where the silicon base is of p-type. If a positive voltage is applied at the gate, holes which are at the surface of the p-type substrate will be repelled by the electric field generated by the voltage applied. At first, the holes will simply be repelled and what will remain on the surface will be immobile (negative) atoms of the acceptor type, which creates a depletion region on the surface. Recall that a hole is created by an acceptor atom, e.g. boron, which has one less electron than silicon. One might ask how holes can be repelled if they are actually non-entities. The answer is that what really happens is not that a hole is repelled, but that electrons are attracted by the positive field and fill these holes, creating a depletion region where no charge carriers exist, because the electrons are now fixed onto the atoms and immobile.
As the voltage at the gate increases, there will be a point at which the surface above the depletion region will be converted from p-type into n-type, as electrons from the bulk area will start to get attracted by the larger electric field. This is known as inversion. The threshold voltage at which this conversion happens is one of the most important parameters in a MOSFET.
In the case of a p-type bulk, inversion happens when the intrinsic energy level at the surface becomes smaller than the Fermi level at the surface. One can see this from a band diagram. Remember that the Fermi level defines the type of semiconductor in discussion. If the Fermi level is equal to the Intrinsic level, the semiconductor is of intrinsic, or pure type. If the Fermi level lies closer to the conduction band (valence band) then the semiconductor type will be of n-type (p-type). Therefore, when the gate voltage is increased in a positive sense (for the given example), this will "bend" the intrinsic energy level band so that it will curve downwards towards the valence band. If the Fermi level lies closer to the valence band (for p-type), there will be a point when the Intrinsic level will start to cross the Fermi level and when the voltage reaches the threshold voltage, the intrinsic level does cross the Fermi level, and that is what is known as inversion. At that point, the surface of the semiconductor is inverted from p-type into n-type. Remember that as said above, if the Fermi level lies above the Intrinsic level, the semiconductor is of n-type, therefore at Inversion, when the Intrinsic level reaches and crosses the Fermi level (which lies closer to the valence band), the semiconductor type changes at the surface as dictated by the relative positions of the Fermi and Intrinsic energy levels.
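For a uniformly doped p-type body, this band-bending picture leads to the standard textbook expression for the threshold voltage (quoted here as a standard result rather than derived in this article), taking inversion to occur when the surface potential reaches twice the bulk potential:

$$V_{th} = V_{FB} + 2\varphi_B + \frac{\sqrt{2 q \varepsilon_{Si} N_A (2\varphi_B)}}{C_{ox}}$$

where VFB is the flat-band voltage, φB = VT ln(NA/ni) is the bulk potential, NA is the acceptor doping density, and Cox is the gate-oxide capacitance per unit area.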
Structure and channel formation
A MOSFET is based on the modulation of charge concentration by a MOS capacitance between a body electrode and a gate electrode located above the body and insulated from all other device regions by a gate dielectric layer. If dielectrics other than an oxide are employed, the device may be referred to as a metal-insulator-semiconductor FET (MISFET). Compared to the MOS capacitor, the MOSFET includes two additional terminals (source and drain), each connected to individual highly doped regions that are separated by the body region. These regions can be either p or n type, but they must both be of the same type, and of opposite type to the body region. The source and drain (unlike the body) are highly doped as signified by a "+" sign after the type of doping.
If the MOSFET is an n-channel or nMOS FET, then the source and drain are n+ regions and the body is a p region. If the MOSFET is a p-channel or pMOS FET, then the source and drain are p+ regions and the body is an n region. The source is so named because it is the source of the charge carriers (electrons for n-channel, holes for p-channel) that flow through the channel; similarly, the drain is where the charge carriers leave the channel.
The occupancy of the energy bands in a semiconductor is set by the position of the Fermi level relative to the semiconductor energy-band edges.
With sufficient gate voltage, the valence band edge is driven far from the Fermi level, and holes from the body are driven away from the gate.
At larger gate bias still, near the semiconductor surface the conduction band edge is brought close to the Fermi level, populating the surface with electrons in an inversion layer or n-channel at the interface between the p region and the oxide. This conducting channel extends between the source and the drain, and current is conducted through it when a voltage is applied between the two electrodes. Increasing the voltage on the gate leads to a higher electron density in the inversion layer and therefore increases the current flow between the source and drain. For gate voltages below the threshold value, the channel is lightly populated, and only a very small subthreshold leakage current can flow between the source and the drain.
When a negative gate–source voltage is applied, it creates a p-channel at the surface of the n region, analogous to the n-channel case, but with opposite polarities of charges and voltages. When a voltage less negative than the threshold value (a negative voltage for the p-channel) is applied between gate and source, the channel disappears and only a very small subthreshold current can flow between the source and the drain. The device may comprise a silicon on insulator device in which a buried oxide is formed below a thin semiconductor layer. If the channel region between the gate dielectric and the buried oxide region is very thin, the channel is referred to as an ultrathin channel region with the source and drain regions formed on either side in or above the thin semiconductor layer. Other semiconductor materials may be employed. When the source and drain regions are formed above the channel in whole or in part, they are referred to as raised source/drain regions.
Modes of operation
The operation of a MOSFET can be separated into three different modes, depending on the voltages at the terminals. In the following discussion, a simplified algebraic model is used. Modern MOSFET characteristics are more complex than the algebraic model presented here.
For an enhancement-mode, n-channel MOSFET, the three operational modes are:
Cutoff, subthreshold, and weak-inversion mode (n-channel MOSFET)
When VGS < Vth:
where VGS is the gate-to-source bias and Vth is the threshold voltage of the device.
According to the basic threshold model, the transistor is turned off, and there is no conduction between drain and source. A more accurate model considers the effect of thermal energy on the Fermi–Dirac distribution of electron energies which allow some of the more energetic electrons at the source to enter the channel and flow to the drain. This results in a subthreshold current that is an exponential function of gate–source voltage. While the current between drain and source should ideally be zero when the transistor is being used as a turned-off switch, there is a weak-inversion current, sometimes called subthreshold leakage.
In weak inversion where the source is tied to bulk, the current varies exponentially with VGS as given approximately by:

$$I_D \approx I_{D0} \, e^{\frac{V_{GS}-V_{th}}{n V_T}}$$

where ID0 is the current at VGS = Vth, VT = kT/q is the thermal voltage, and the slope factor n is given by:

$$n = 1 + \frac{C_{dep}}{C_{ox}}$$

with Cdep = capacitance of the depletion layer and Cox = capacitance of the oxide layer. This equation is generally used, but is only an adequate approximation for the source tied to the bulk. For the source not tied to the bulk, the subthreshold equation for drain current in saturation is

$$I_D \approx I_{D0} \, e^{\frac{\kappa V_G - V_S}{V_T}}$$

where κ is the channel divider, given by:

$$\kappa = \frac{C_{ox}}{C_{ox} + C_{dep}}$$

with Cdep = capacitance of the depletion layer and Cox = capacitance of the oxide layer. In a long-channel device, there is no drain voltage dependence of the current once VDS ≫ VT, but as channel length is reduced drain-induced barrier lowering introduces drain voltage dependence that depends in a complex way upon the device geometry (for example, the channel doping, the junction doping and so on). Frequently, threshold voltage Vth for this mode is defined as the gate voltage at which a selected value of current ID0 occurs, for example, ID0 = 1 μA, which may not be the same Vth-value used in the equations for the following modes.
Some micropower analog circuits are designed to take advantage of subthreshold conduction. By working in the weak-inversion region, the MOSFETs in these circuits deliver the highest possible transconductance-to-current ratio, namely gm/ID = 1/(nVT), almost that of a bipolar transistor.
The subthreshold I–V curve depends exponentially upon threshold voltage, introducing a strong dependence on any manufacturing variation that affects threshold voltage; for example: variations in oxide thickness, junction depth, or body doping that change the degree of drain-induced barrier lowering. The resulting sensitivity to fabricational variations complicates optimization for leakage and performance.
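As a numerical illustration of the exponential subthreshold law above (for the source tied to the bulk), the short Python sketch below evaluates ID for a few gate voltages; the threshold voltage, ID0 and Cdep/Cox ratio are arbitrary example values, not data for any real device.

# Sketch: subthreshold drain current, source tied to bulk.
# I_D = I_D0 * exp((V_GS - V_th) / (n * V_T)), with n = 1 + C_dep/C_ox.
import math

V_T = 0.0259          # thermal voltage kT/q at 300 K, in volts
I_D0 = 1e-6           # current at V_GS = V_th (example value)
n = 1 + 0.3           # slope factor for an assumed C_dep/C_ox = 0.3

def subthreshold_current(v_gs, v_th=0.5):
    return I_D0 * math.exp((v_gs - v_th) / (n * V_T))

for v_gs in (0.3, 0.4, 0.5):
    print(f"V_GS = {v_gs:.1f} V -> I_D = {subthreshold_current(v_gs):.3e} A")

Each 0.1 V drop in VGS reduces the current by roughly three orders of magnitude here, which is why subthreshold leakage is so sensitive to threshold-voltage variation.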
Triode mode or linear region, also known as the ohmic mode (n-channel MOSFET)
When VGS > Vth and VDS < VGS − Vth:
The transistor is turned on, and a channel has been created which allows current between the drain and the source. The MOSFET operates like a resistor, controlled by the gate voltage relative to both the source and drain voltages. The current from drain to source is modeled as:

$$I_D = \mu_n C_{ox} \frac{W}{L} \left[ \left(V_{GS}-V_{th}\right)V_{DS} - \frac{V_{DS}^2}{2} \right]$$

where μn is the charge-carrier effective mobility, W is the gate width, L is the gate length and Cox is the gate oxide capacitance per unit area. The transition from the exponential subthreshold region to the triode region is not as sharp as the equations suggest.
Saturation or active mode (n-channel MOSFET)
When VGS > Vth and VDS ≥ (VGS – Vth):
The switch is turned on, and a channel has been created, which allows current between the drain and source. Since the drain voltage is higher than the source voltage, the electrons spread out, and conduction is not through a narrow channel but through a broader, two- or three-dimensional current distribution extending away from the interface and deeper in the substrate. The onset of this region is also known as pinch-off to indicate the lack of channel region near the drain. Although the channel does not extend the full length of the device, the electric field between the drain and the channel is very high, and conduction continues. The drain current is now weakly dependent upon drain voltage and controlled primarily by the gate–source voltage, and modeled approximately as:

$$I_D = \frac{\mu_n C_{ox}}{2} \frac{W}{L} \left(V_{GS}-V_{th}\right)^2 \left[ 1 + \lambda \left(V_{DS}-V_{DSsat}\right) \right]$$
The additional factor involving λ, the channel-length modulation parameter, models current dependence on drain voltage due to the channel length modulation, effectively similar to the Early effect seen in bipolar devices. According to this equation, a key design parameter, the MOSFET transconductance, is:

$$g_m = \frac{2 I_D}{V_{GS} - V_{th}} = \frac{2 I_D}{V_{ov}}$$
where the combination Vov = VGS − Vth is called the overdrive voltage, and where VDSsat = VGS − Vth accounts for a small discontinuity in ID which would otherwise appear at the transition between the triode and saturation regions.
Another key design parameter is the MOSFET output resistance, given by:

$$r_{out} = \frac{1}{\lambda I_D}$$

rout is the inverse of gDS, where gDS = ∂ID/∂VDS, with ID given by the saturation-region expression above.
If λ is taken as zero, the resulting infinite output resistance can simplify circuit analysis; however, this may lead to unrealistic circuit predictions, particularly in analog circuits.
As the channel length becomes very short, these equations become quite inaccurate. New physical effects arise. For example, carrier transport in the active mode may become limited by velocity saturation. When velocity saturation dominates, the saturation drain current is more nearly linear than quadratic in VGS. At even shorter lengths, carriers transport with near zero scattering, known as quasi-ballistic transport. In the ballistic regime, the carriers travel at an injection velocity that may exceed the saturation velocity and approaches the Fermi velocity at high inversion charge density. In addition, drain-induced barrier lowering increases off-state (cutoff) current and requires an increase in threshold voltage to compensate, which in turn reduces the saturation current.
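The three regions above combine into the simple piecewise ("level 1") model sketched below in Python; as the preceding paragraph notes, real short-channel devices require far more elaborate models, and the parameter values used here are illustrative assumptions only.

# Sketch: piecewise "level 1" drain-current model for an n-channel MOSFET.
# k = mu_n * C_ox * W / L lumps the process and geometry factors together.

def drain_current(v_gs, v_ds, v_th=0.7, k=2e-4, lam=0.02):
    v_ov = v_gs - v_th                      # overdrive voltage
    if v_ov <= 0:
        return 0.0                          # cutoff (subthreshold ignored)
    if v_ds < v_ov:                         # triode / ohmic region
        return k * (v_ov * v_ds - v_ds ** 2 / 2)
    # saturation, with channel-length modulation via lambda
    return (k / 2) * v_ov ** 2 * (1 + lam * (v_ds - v_ov))

print(drain_current(1.5, 0.1))   # triode: behaves like a resistor
print(drain_current(1.5, 2.0))   # saturation: nearly flat in V_DS

Note that the two branches meet at VDS = Vov, so the modeled current is continuous across the triode/saturation boundary, consistent with the role of VDSsat above.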
Body effect
The occupancy of the energy bands in a semiconductor is set by the position of the Fermi level relative to the semiconductor energy-band edges. Application of a source-to-substrate reverse bias of the source-body pn-junction introduces a split between the Fermi levels for electrons and holes, moving the Fermi level for the channel further from the band edge, lowering the occupancy of the channel. The effect is to increase the gate voltage necessary to establish the channel, as seen in the figure. This change in channel strength by application of reverse bias is called the 'body effect'.
Simply put, using an nMOS example, the gate-to-body bias VGB positions the conduction-band energy levels, while the source-to-body bias VSB positions the electron Fermi level near the interface, deciding occupancy of these levels near the interface, and hence the strength of the inversion layer or channel.
The body effect upon the channel can be described using a modification of the threshold voltage, approximated by the following equation:

$$V_{TB} = V_{T0} + \gamma \left( \sqrt{V_{SB} + 2\varphi_B} - \sqrt{2\varphi_B} \right)$$

so that VTB = VT0 when VSB = 0, i.e. when the source and body terminals are shorted. Here VTB is the threshold voltage with substrate bias present, VT0 is the zero-VSB value of the threshold voltage, γ is the body effect parameter, and 2φB is the approximate potential drop between surface and bulk across the depletion layer when VSB = 0 and the gate bias is sufficient to ensure that a channel is present. As this equation shows, a reverse bias causes an increase in the threshold voltage VTB and therefore demands a larger gate voltage before the channel populates.
The body can be operated as a second gate, and is sometimes referred to as the "back gate"; the body effect is sometimes called the "back-gate effect".
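A direct numerical reading of the threshold-shift formula above is sketched below; VT0, γ and 2φB are arbitrary example values chosen only to show the trend.

# Sketch: body-effect shift of the threshold voltage.
# V_TB = V_T0 + gamma * (sqrt(V_SB + 2*phi_B) - sqrt(2*phi_B))
import math

def threshold_with_body_bias(v_sb, v_t0=0.7, gamma=0.4, two_phi_b=0.6):
    return v_t0 + gamma * (math.sqrt(v_sb + two_phi_b) - math.sqrt(two_phi_b))

for v_sb in (0.0, 1.0, 2.0):
    print(f"V_SB = {v_sb:.1f} V -> V_TB = {threshold_with_body_bias(v_sb):.3f} V")

With these example values the threshold rises from 0.7 V at zero bias to roughly 0.9 V at VSB = 1 V, illustrating why reverse body bias demands a larger gate voltage.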
Circuit symbols
A variety of symbols are used for the MOSFET. The basic design is generally a line for the channel with the source and drain leaving it at right angles and then bending back at right angles into the same direction as the channel. Sometimes three line segments are used for enhancement mode and a solid line for depletion mode (see depletion and enhancement modes). Another line is drawn parallel to the channel for the gate.
The bulk or body connection, if shown, is shown connected to the back of the channel with an arrow indicating pMOS or nMOS. Arrows always point from P to N, so an NMOS (N-channel in P-well or P-substrate) has the arrow pointing in (from the bulk to the channel). If the bulk is connected to the source (as is generally the case with discrete devices) it is sometimes angled to meet up with the source leaving the transistor. If the bulk is not shown (as is often the case in IC design as they are generally common bulk) an inversion symbol is sometimes used to indicate PMOS, alternatively an arrow on the source may be used in the same way as for bipolar transistors (out for nMOS, in for pMOS).
A comparison of enhancement-mode and depletion-mode MOSFET symbols, along with JFET symbols, is available in the table in this section. The orientation of the symbols, most significantly the position of source relative to drain, is such that more positive voltages appear higher on the schematic page than less positive voltages, implying current flowing "down" the page.
In schematics where G, S and D are not labeled, the detailed features of the symbol indicate which terminal is source and which is drain. For enhancement-mode and depletion-mode MOSFET symbols (in columns two and five), the source terminal is the one connected to the arrowhead. Additionally, in this diagram, the gate is shown as an "L" shape, whose input leg is closer to S than D, also indicating which is which. However, these symbols are often drawn with a "T" shaped gate (as elsewhere on this page), so it is the arrowhead which must be relied upon to indicate the source terminal.
For the symbols in which the bulk, or body, terminal is shown, it is here shown internally connected to the source (i.e., the black arrowhead in the diagrams in columns 2 and 5). This is a typical configuration, but by no means the only important configuration. In general, the MOSFET is a four-terminal device, and in integrated circuits many of the MOSFETs share a body connection, not necessarily connected to the source terminals of all the transistors.
Types of MOSFET
PMOS and NMOS logic
P-channel MOS (PMOS) logic uses p-channel MOSFETs to implement logic gates and other digital circuits. N-channel MOS (NMOS) logic uses n-channel MOSFETs to implement logic gates and other digital circuits.
For devices of equal current driving capability, n-channel MOSFETs can be made smaller than p-channel MOSFETs, due to p-channel charge carriers (holes) having lower mobility than do n-channel charge carriers (electrons), and producing only one type of MOSFET on a silicon substrate is cheaper and technically simpler. These were the driving principles in the design of NMOS logic which uses n-channel MOSFETs exclusively. However, unlike CMOS logic (neglecting leakage current), NMOS logic consumes power even when no switching is taking place.
Mohamed Atalla and Dawon Kahng originally demonstrated both pMOS and nMOS devices with 20 µm and then 10 µm gate lengths in 1960. Their original MOSFET devices also had a gate oxide thickness of 100 nm. However, the nMOS devices were impractical, and only the pMOS type were practical working devices. A more practical NMOS process was developed several years later. NMOS was initially faster than CMOS, thus NMOS was more widely used for computers in the 1970s. With advances in technology, CMOS logic displaced NMOS logic in the mid-1980s to become the preferred process for digital chips.
Complementary MOS (CMOS)
The MOSFET is used in digital complementary metal–oxide–semiconductor (CMOS) logic, which uses p- and n-channel MOSFETs as building blocks. Overheating is a major concern in integrated circuits, since ever more transistors are packed into ever smaller chips. CMOS logic reduces power consumption because no current flows (ideally), and thus no power is consumed, except when the inputs to logic gates are being switched. CMOS accomplishes this current reduction by complementing every nMOSFET with a pMOSFET and connecting both gates and both drains together. A high voltage on the gates causes the nMOSFET to conduct and the pMOSFET not to conduct, while a low voltage on the gates causes the reverse. During the switching time, as the voltage goes from one state to another, both MOSFETs will conduct briefly. This arrangement greatly reduces power consumption and heat generation.
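At the switch level, the complementary arrangement just described behaves as sketched below: exactly one of the two transistors conducts in each steady state, so ideally no static current flows. This Python fragment illustrates only the logic behaviour, not an electrical simulation.

# Switch-level sketch of a CMOS inverter: the nMOS pulls the output low
# when the input is high; the pMOS pulls it high when the input is low.

def cmos_inverter(input_high: bool) -> bool:
    nmos_on = input_high          # nMOS conducts with a high gate voltage
    pmos_on = not input_high      # pMOS conducts with a low gate voltage
    assert nmos_on != pmos_on     # never both on in steady state: no static current
    return pmos_on                # output is high only when the pMOS connects it to VDD

for a in (False, True):
    print(f"in={a} -> out={cmos_inverter(a)}")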
CMOS was developed by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor in 1963. CMOS had lower power consumption, but was initially slower than NMOS, which was more widely used for computers in the 1970s. In 1978, Hitachi introduced the twin-well CMOS process, which allowed CMOS to match the performance of NMOS with less power consumption. The twin-well CMOS process eventually overtook NMOS as the most common semiconductor manufacturing process for computers in the 1980s. By the 1970s–1980s, CMOS logic consumed many times less power than NMOS logic, and about 100,000 times less power than bipolar transistor–transistor logic (TTL).
Depletion-mode
There are depletion-mode MOSFET devices, which are less commonly used than the standard enhancement-mode devices already described. These are MOSFET devices that are doped so that a channel exists even with zero voltage from gate to source. To control the channel, a negative voltage is applied to the gate (for an n-channel device), depleting the channel, which reduces the current flow through the device. In essence, the depletion-mode device is equivalent to a normally closed (on) switch, while the enhancement-mode device is equivalent to a normally open (off) switch.
Due to their low noise figure in the RF region, and better gain, these devices are often preferred to bipolars in RF front-ends such as in TV sets.
Depletion-mode MOSFET families include BF960 by Siemens and Telefunken, and the BF980 in the 1980s by Philips (later to become NXP Semiconductors), whose derivatives are still used in AGC and RF mixer front-ends.
Metal–insulator–semiconductor field-effect transistor (MISFET)
Metal–insulator–semiconductor field-effect-transistor, or MISFET, is a more general term than MOSFET and a synonym to insulated-gate field-effect transistor (IGFET). All MOSFETs are MISFETs, but not all MISFETs are MOSFETs.
In a MOSFET the gate dielectric insulator is silicon dioxide, but in the more general MISFET other materials can also be employed. The gate dielectric lies directly below the gate electrode and above the channel of the MISFET. The term metal is historically used for the gate material, even though now it is usually highly doped polysilicon or some other non-metal.
Insulator types may be:
Silicon dioxide, in MOSFETs
Organic insulators (e.g., undoped trans-polyacetylene; cyanoethyl pullulan, CEP), for organic-based FETs.
Floating-gate MOSFET (FGMOS)
The floating-gate MOSFET (FGMOS) is a type of MOSFET where the gate is electrically isolated, creating a floating node in DC and a number of secondary gates or inputs are deposited above the floating gate (FG) and are electrically isolated from it. The first report of a floating-gate MOSFET (FGMOS) was made by Dawon Kahng (co-inventor of the original MOSFET) and Simon Min Sze in 1967.
The FGMOS is commonly used as a floating-gate memory cell, the digital storage element in EPROM, EEPROM and flash memories. Other uses of the FGMOS include a neuronal computational element in neural networks, analog storage element, digital potentiometers and single-transistor DACs.
Power MOSFET
Power MOSFETs have a different structure. As with most power devices, the structure is vertical and not planar. Using a vertical structure, it is possible for the transistor to sustain both high blocking voltage and high current. The voltage rating of the transistor is a function of the doping and thickness of the N-epitaxial layer (see cross section), while the current rating is a function of the channel width (the wider the channel, the higher the current). In a planar structure, the current and breakdown voltage ratings are both a function of the channel dimensions (respectively width and length of the channel), resulting in inefficient use of the "silicon estate". With the vertical structure, the component area is roughly proportional to the current it can sustain, and the component thickness (actually the N-epitaxial layer thickness) is proportional to the breakdown voltage.
Power MOSFETs with lateral structure are mainly used in high-end audio amplifiers and high-power PA systems. Their advantage is a better behaviour in the saturated region (corresponding to the linear region of a bipolar transistor) than the vertical MOSFETs. Vertical MOSFETs are designed for switching applications.
The power MOSFET, which is commonly used in power electronics, was developed in the early 1970s. The power MOSFET enables low gate drive power, fast switching speed, and advanced paralleling capability.
Double-diffused metal–oxide–semiconductor (DMOS)
There are VDMOS (vertical double-diffused metal oxide semiconductor) and LDMOS (lateral double-diffused metal oxide semiconductor). Most power MOSFETs are made using this technology.
MOS capacitor
The MOS capacitor is part of the MOSFET structure, where the MOS capacitor is flanked by two p–n junctions. The MOS capacitor is widely used as a storage capacitor in memory chips, and as the basic building block of the charge-coupled device (CCD) in image sensor technology. In DRAM (dynamic random-access memory), each memory cell typically consists of a MOSFET and MOS capacitor.
Thin-film transistor (TFT)
The thin-film transistor (TFT) is a type of MOSFET distinct from the standard bulk MOSFET. The first TFT was invented by Paul K. Weimer at RCA in 1962, building on the earlier work of Atalla and Kahng on MOSFETs.
The idea of a TFT-based liquid-crystal display (LCD) was conceived by Bernard Lechner of RCA Laboratories in 1968. Lechner, F. J. Marlowe, E. O. Nester and J. Tults demonstrated the concept in 1968 with an 18×2 matrix dynamic scattering LCD that used standard discrete MOSFETs, as TFT performance was not adequate at the time.
Bipolar–MOS transistors
BiCMOS is an integrated circuit that combines BJT and CMOS transistors on a single chip.
The insulated-gate bipolar transistor (IGBT) is a power transistor with characteristics of both a MOSFET and bipolar junction transistor (BJT).
MOS sensors
A number of MOSFET sensors have been developed, for measuring physical, chemical, biological and environmental parameters. The earliest MOSFET sensors include the open-gate FET (OGFET) introduced by Johannessen in 1970, the ion-sensitive field-effect transistor (ISFET) invented by Piet Bergveld in 1970, the adsorption FET (ADFET) patented by P.F. Cox in 1974, and a hydrogen-sensitive MOSFET demonstrated by I. Lundstrom, M.S. Shivaraman, C.S. Svenson and L. Lundkvist in 1975. The ISFET is a special type of MOSFET with a gate at a certain distance, and where the metal gate is replaced by an ion-sensitive membrane, electrolyte solution and reference electrode.
By the mid-1980s, numerous other MOSFET sensors had been developed, including the gas sensor FET (GASFET), surface accessible FET (SAFET), charge flow transistor (CFT), pressure sensor FET (PRESSFET), chemical field-effect transistor (ChemFET), reference ISFET (REFET), biosensor FET (BioFET), enzyme-modified FET (ENFET) and immunologically modified FET (IMFET). By the early 2000s, BioFET types such as the DNA field-effect transistor (DNAFET), gene-modified FET (GenFET) and cell-potential BioFET (CPFET) had been developed.
The two main types of image sensors used in digital imaging technology are the charge-coupled device (CCD) and the active-pixel sensor (CMOS sensor). Both CCD and CMOS sensors are based on MOS technology, with the CCD based on MOS capacitors and the CMOS sensor based on MOS transistors.
Multi-gate field-effect transistor (MuGFET)
The dual-gate MOSFET (DGMOS) has a tetrode configuration, where both gates control the current in the device. It is commonly used for small-signal devices in radio frequency applications where biasing the drain-side gate at constant potential reduces the gain loss caused by Miller effect, replacing two separate transistors in cascode configuration. Other common uses in RF circuits include gain control and mixing (frequency conversion). The tetrode description, though accurate, does not replicate the vacuum-tube tetrode. Vacuum-tube tetrodes, using a screen grid, exhibit much lower grid-plate capacitance and much higher output impedance and voltage gains than triode vacuum tubes. These improvements are commonly an order of magnitude (10 times) or considerably more. Tetrode transistors (whether bipolar junction or field-effect) do not exhibit improvements of such a great degree.
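The Miller-effect benefit mentioned above is easy to quantify: in a common-source stage, the gate–drain capacitance appears at the input multiplied by (1 + |Av|), and holding the drain-side gate (or the cascode device) at a fixed potential keeps the bottom transistor's voltage gain near unity, collapsing that multiplication. The capacitance and gain values below are hypothetical.

```python
def miller_input_cap(c_gs, c_gd, voltage_gain):
    """Effective input capacitance of a common-source stage."""
    return c_gs + c_gd * (1 + voltage_gain)

c_gs, c_gd = 1e-12, 0.2e-12  # 1 pF and 0.2 pF, hypothetical device values
single = miller_input_cap(c_gs, c_gd, voltage_gain=40)    # plain stage
cascoded = miller_input_cap(c_gs, c_gd, voltage_gain=1)   # bottom of cascode
print(f"single stage: {single * 1e12:.1f} pF, cascoded: {cascoded * 1e12:.1f} pF")
```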
The FinFET is a double-gate silicon-on-insulator device, one of a number of geometries being introduced to mitigate the effects of short channels and reduce drain-induced barrier lowering. The fin refers to the narrow channel between source and drain. A thin insulating oxide layer on either side of the fin separates it from the gate. SOI FinFETs with a thick oxide on top of the fin are called double-gate and those with a thin oxide on top as well as on the sides are called triple-gate FinFETs.
A double-gate MOSFET transistor was first demonstrated in 1984 by Electrotechnical Laboratory researchers Toshihiro Sekigawa and Yutaka Hayashi. A GAAFET (gate-all-around MOSFET), a type of multi-gate non-planar 3D transistor, was first demonstrated in 1988 by a Toshiba research team including Fujio Masuoka, H. Takato and K. Sunouchi. The FinFET (fin field-effect transistor), a type of 3D non-planar double-gate MOSFET, originated from the research of Digh Hisamoto and his team at Hitachi Central Research Laboratory in 1989. The development of nanowire multi-gate MOSFETs have since become fundamental to nanoelectronics.
Quantum field-effect transistor (QFET)
A quantum field-effect transistor (QFET) or quantum well field-effect transistor (QWFET) is a type of MOSFET that takes advantage of quantum tunneling to greatly increase the speed of transistor operation.
Radiation-hardened-by-design (RHBD)
Sub-micrometer and nanometer semiconductor circuits must keep operating within normal tolerances in harsh radiation environments such as outer space. One design approach for making a radiation-hardened-by-design (RHBD) device is the enclosed-layout transistor (ELT). Normally, the gate of the MOSFET surrounds the drain, which is placed in the center of the ELT, and the source of the MOSFET surrounds the gate. Another RHBD MOSFET layout is called the H-Gate. Both of these transistors have very low leakage current with respect to radiation, but they are large in size and take more space on silicon than a standard MOSFET. In older STI (shallow trench isolation) designs, radiation strikes near the silicon oxide region cause channel inversion at the corners of the standard MOSFET due to accumulation of radiation-induced trapped charges. If the charges are large enough, they affect the STI surface edges along the channel near the gate interface of the standard MOSFET; channel inversion then occurs along the channel edges, the device develops an off-state leakage path and turns on, and the reliability of circuits degrades severely. The ELT offers many advantages, including improved reliability by reducing this unwanted surface inversion at the gate edges. Since the gate edges are enclosed in an ELT, there is no gate oxide edge (STI at the gate interface), and thus the transistor's off-state leakage is reduced considerably. Low-power microelectronic circuits, including the computers, communication devices and monitoring systems used in the space shuttle and satellites, are very different from what is used on Earth: they require circuits tolerant of radiation such as high-speed atomic particles (protons and neutrons), magnetic energy dissipated by solar flares in near-Earth space, and energetic cosmic rays and photons (X-rays, gamma rays). These special electronics are designed by applying different techniques using RHBD MOSFETs to ensure safer journeys and space-walks for astronauts.
Applications
The MOSFET forms the basis of modern electronics, as the dominant transistor in digital circuits as well as analog integrated circuits, and underlies numerous modern technologies across a wide range of applications. According to Jean-Pierre Colinge, technologies such as the modern computer industry, digital telecommunication systems, video games, pocket calculators, and digital wristwatches would not exist without the MOSFET.
Discrete MOSFET devices are widely used in applications such as switch mode power supplies, variable-frequency drives and other power electronics applications where each device may be switching thousands of watts. Radio-frequency amplifiers up to the UHF spectrum use MOSFET transistors as analog signal and power amplifiers. Radio systems also use MOSFETs as oscillators, or mixers to convert frequencies. MOSFET devices are also applied in audio-frequency power amplifiers for public address systems, sound reinforcement and home and automobile sound systems.
MOSFETs in integrated circuits are the primary elements of computer processors, semiconductor memory, image sensors, and most other types of integrated circuits.
MOS integrated circuit (MOS IC)
The MOSFET is the most widely used type of transistor and the most critical device component in integrated circuit (IC) chips. The monolithic integrated circuit chip was enabled by the surface passivation process, which electrically stabilized silicon surfaces via thermal oxidation, making it possible to fabricate monolithic chips in silicon. The surface passivation process was developed by Mohamed M. Atalla at Bell Labs in 1957. This was the basis for the planar process, developed by Jean Hoerni at Fairchild Semiconductor in early 1959, which was critical to the invention of the monolithic integrated circuit chip by Robert Noyce later in 1959. The same year, Atalla used his surface passivation process to invent the MOSFET with Dawon Kahng at Bell Labs. This was followed by the development of clean rooms to reduce contamination to levels never before thought necessary, and coincided with the development of photolithography which, along with surface passivation and the planar process, allowed circuits to be made in a small number of steps.
Mohamed Atalla first proposed the concept of the MOS integrated circuit (MOS IC) chip in 1960, noting that the MOSFET's ease of fabrication made it useful for integrated circuits. In contrast to bipolar transistors, which required a number of steps for the p–n junction isolation of transistors on a chip, MOSFETs required no such steps and could be easily isolated from each other. This advantage for integrated circuits was reiterated by Dawon Kahng in 1961. The Si–SiO2 system possessed the technical attractions of low cost of production (on a per-circuit basis) and ease of integration. These two factors, along with its rapidly scaling miniaturization and low energy consumption, led to the MOSFET becoming the most widely used type of transistor in IC chips.
The earliest experimental MOS IC to be demonstrated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS integrated circuits in 1964, consisting of 120 p-channel transistors. It was a 20-bit shift register, developed by Robert Norman and Frank Wanlass. In 1968, Fairchild Semiconductor researchers Federico Faggin and Tom Klein developed the first silicon-gate MOS IC.
MOS large-scale integration (MOS LSI)
With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density IC chips. By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of MOSFETs on a chip by the late 1960s. MOS technology enabled the integration of more than 10,000 transistors on a single LSI chip by the early 1970s, before later enabling very large-scale integration (VLSI).
Microprocessors
The MOSFET is the basis of every microprocessor, and was responsible for the invention of the microprocessor. The origins of both the microprocessor and the microcontroller can be traced back to the invention and development of MOS technology. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on a single MOS LSI chip.
The earliest microprocessors were all MOS chips, built with MOS LSI circuits. The first multi-chip microprocessors, the Four-Phase Systems AL1 in 1969 and the Garrett AiResearch MP944 in 1970, were developed with multiple MOS LSI chips. The first commercial single-chip microprocessor, the Intel 4004, was developed by Federico Faggin, using his silicon-gate MOS IC technology, with Intel engineers Marcian Hoff and Stan Mazor, and Busicom engineer Masatoshi Shima. With the arrival of CMOS microprocessors in 1975, the term "MOS microprocessors" began to refer to chips fabricated entirely from PMOS logic or fabricated entirely from NMOS logic, contrasted with "CMOS microprocessors" and "bipolar bit-slice processors".
CMOS circuits
Digital
The growth of digital technologies like the microprocessor has provided the motivation to advance MOSFET technology faster than any other type of silicon-based transistor. A big advantage of MOSFETs for digital switching is that the oxide layer between the gate and the channel prevents DC current from flowing through the gate, reducing power consumption and giving a very large input impedance. The insulating oxide between the gate and channel effectively isolates a MOSFET in one logic stage from earlier and later stages, which allows a single MOSFET output to drive a considerable number of MOSFET inputs. Bipolar transistor-based logic (such as TTL) does not have such a high fanout capacity. This isolation also makes it easier for designers to ignore, to some extent, loading effects between logic stages. That extent is defined by the operating frequency: as frequencies increase, the input impedance of the MOSFETs decreases.
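A first-order way to see that frequency dependence is to treat the driving stage as a resistance charging the gate capacitance of every input it fans out to: no DC gate current flows, yet delay still grows with load. The resistance and capacitance below are hypothetical round numbers.

```python
R_DRIVE = 5e3    # effective on-resistance of the driving stage, ohms (assumed)
C_GATE = 2e-15   # input capacitance of one driven gate, farads (assumed)

def stage_delay(fanout, r=R_DRIVE, c=C_GATE):
    """First-order RC estimate: 0.69*R*C is the 50%-swing point."""
    return 0.69 * r * c * fanout

for n in (1, 4, 16):
    print(f"fanout {n:2d}: ~{stage_delay(n) * 1e12:5.1f} ps")
```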
Analog
The MOSFET's advantages in digital circuits do not translate into supremacy in all analog circuits. The two types of circuit draw upon different features of transistor behavior. Digital circuits switch, spending most of their time either fully on or fully off. The transition from one to the other is only of concern with regards to speed and charge required. Analog circuits depend on operation in the transition region where small changes to VGS can modulate the output (drain) current. The JFET and bipolar junction transistor (BJT) are preferred for accurate matching (of adjacent devices in integrated circuits), higher transconductance and certain temperature characteristics which simplify keeping performance predictable as circuit temperature varies.
Nevertheless, MOSFETs are widely used in many types of analog circuits because of their own advantages (zero gate current, high and adjustable output impedance, and improved robustness versus BJTs, which can be permanently degraded by even lightly breaking down the emitter–base junction). The characteristics and performance of many analog circuits can be scaled up or down by changing the sizes (length and width) of the MOSFETs used. By comparison, in bipolar transistors the size of the device does not significantly affect its performance. MOSFETs' ideal characteristics regarding gate current (zero) and drain–source offset voltage (zero) also make them nearly ideal switch elements, and make switched-capacitor analog circuits practical. In their linear region, MOSFETs can be used as precision resistors, which can have a much higher controlled resistance than BJTs. In high-power circuits, MOSFETs sometimes have the advantage of not suffering from thermal runaway as BJTs do. Also, MOSFETs can be configured to perform as capacitors and gyrator circuits which allow op-amps made from them to appear as inductors, thereby allowing all of the normal analog devices on a chip (except for diodes, which can be made smaller than a MOSFET anyway) to be built entirely out of MOSFETs. This means that complete analog circuits can be made on a silicon chip in a much smaller space and with simpler fabrication techniques. MOSFETs are also ideally suited to switching inductive loads because of their tolerance to inductive kickback.
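The precision-resistor point deserves a quick numeric sketch: deep in the linear (triode) region, with VDS much smaller than the gate overdrive, the channel behaves as a resistor of value 1/(μCox·(W/L)·(VGS − VT)), settable by the gate voltage. The process and device parameters below are hypothetical.

```python
def triode_resistance(mu_cox, w_over_l, v_gs, v_t):
    """Channel resistance for V_DS << V_GS - V_T (deep triode region)."""
    overdrive = v_gs - v_t
    if overdrive <= 0:
        raise ValueError("device is off: V_GS must exceed V_T")
    return 1.0 / (mu_cox * w_over_l * overdrive)

MU_COX = 200e-6  # process transconductance parameter, A/V^2 (assumed)
for v_gs in (1.0, 1.5, 2.5):
    r = triode_resistance(MU_COX, w_over_l=10, v_gs=v_gs, v_t=0.7)
    print(f"V_GS = {v_gs} V -> R_on ~ {r:7.0f} ohms")
```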
Some ICs combine analog and digital MOSFET circuitry on a single mixed-signal integrated circuit, making the needed board space even smaller. This creates a need to isolate the analog circuits from the digital circuits on a chip level, leading to the use of isolation rings and silicon on insulator (SOI). Since MOSFETs require more space to handle a given amount of power than a BJT, fabrication processes can incorporate BJTs and MOSFETs into a single device. Mixed-transistor devices are called bi-FETs (bipolar FETs) if they contain just one BJT-FET and BiCMOS (bipolar-CMOS) if they contain complementary BJT-FETs. Such devices have the advantages of both insulated gates and higher current density.
In the late 1980s, Asad Abidi pioneered RF CMOS technology, which uses MOS VLSI circuits, while working at UCLA. This changed the way in which RF circuits were designed, away from discrete bipolar transistors and towards CMOS integrated circuits. As of 2008, the radio transceivers in all wireless networking devices and modern mobile phones are mass-produced as RF CMOS devices. RF CMOS is also used in nearly all modern Bluetooth and wireless LAN (WLAN) devices.
MOS memory
The advent of the MOSFET enabled the practical use of MOS transistors as memory cell storage elements, a function previously served by magnetic cores in computer memory. The first modern computer memory was introduced in 1965, when John Schmidt at Fairchild Semiconductor designed the first MOS semiconductor memory, a 64-bit MOS SRAM (static random-access memory). SRAM became an alternative to magnetic-core memory, but required six MOS transistors for each bit of data.
MOS technology is the basis for DRAM (dynamic random-access memory). In 1966, Robert H. Dennard at the IBM Thomas J. Watson Research Center was working on MOS memory. While examining the characteristics of MOS technology, he found that it was capable of building capacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, while the MOS transistor could control writing the charge to the capacitor. This led to his development of a single-transistor DRAM memory cell. In 1967, Dennard filed a patent under IBM for a single-transistor DRAM (dynamic random-access memory) memory cell, based on MOS technology. MOS memory enabled higher performance, was cheaper, and consumed less power than magnetic-core memory, leading to MOS memory overtaking magnetic-core memory as the dominant computer memory technology by the early 1970s.
Frank Wanlass, while studying MOSFET structures in 1963, noted the movement of charge through oxide onto a gate. While he did not pursue it, this idea would later become the basis for EPROM (erasable programmable read-only memory) technology. In 1967, Dawon Kahng and Simon Min Sze proposed that floating-gate memory cells, consisting of floating-gate MOSFETs (FGMOS), could be used to produce reprogrammable ROM (read-only memory). Floating-gate memory cells later became the basis for non-volatile memory (NVM) technologies including EPROM, EEPROM (electrically erasable programmable ROM) and flash memory.
Consumer electronics
MOSFETs are widely used in consumer electronics. One of the earliest influential consumer electronic products enabled by MOS LSI circuits was the electronic pocket calculator, as MOS LSI technology enabled large amounts of computational capability in small packages. In 1965, the Victor 3900 desktop calculator was the first MOS calculator, with 29 MOS chips. In 1967, the Texas Instruments Cal-Tech was the first prototype electronic handheld calculator, with three MOS LSI chips, and it was later released as the Canon Pocketronic in 1970. The Sharp QT-8D desktop calculator was the first mass-produced LSI MOS calculator in 1969, and the Sharp EL-8 which used four MOS LSI chips was the first commercial electronic handheld calculator in 1970. The first true electronic pocket calculator was the Busicom LE-120A HANDY LE, which used a single MOS LSI calculator-on-a-chip from Mostek, and was released in 1971. By 1972, MOS LSI circuits were commercialized for numerous other applications.
MOSFETs are fundamental to information and communications technology (ICT), including modern computers, telecommunications, the communications infrastructure, the Internet, digital telephony, wireless telecommunications, and mobile networks. According to Colinge, the modern computer industry and digital telecommunication systems would not exist without the MOSFET. Advances in MOS technology have been the most important contributing factor in the rapid rise of network bandwidth in telecommunication networks, with bandwidth doubling every 18 months, from bits per second to terabits per second (Edholm's law).
MOS sensors
MOS sensors, also known as MOSFET sensors, are widely used to measure physical, chemical, biological and environmental parameters. The ion-sensitive field-effect transistor (ISFET), for example, is widely used in biomedical applications. MOS chemiresistors and MOSFETs have also been extensively shown to have promising applications in gas sensing, either as single sensor devices or as components in chemical sensor arrays.
MOSFETs are also widely used in microelectromechanical systems (MEMS), as silicon MOSFETs could interact and communicate with the surroundings and process things such as chemicals, motions and light. An early example of a MEMS device is the resonant-gate transistor, an adaptation of the MOSFET, developed by Harvey C. Nathanson in 1965.
MOS technology is the basis for modern image sensors, including the charge-coupled device (CCD) and the CMOS active-pixel sensor (CMOS sensor), used in digital imaging and digital cameras. Willard Boyle and George E. Smith developed the CCD in 1969. While researching the MOS process, they realized that an electric charge was the analogy of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. The CCD is a semiconductor circuit that was later used in the first digital video cameras for television broadcasting.
The MOS active-pixel sensor (APS) was developed by Tsutomu Nakamura at Olympus in 1985. The CMOS active-pixel sensor was later developed by Eric Fossum and his team at NASA's Jet Propulsion Laboratory in the early 1990s.
MOS image sensors are widely used in optical mouse technology. The first optical mouse, invented by Richard F. Lyon at Xerox in 1980, used a 5 µm NMOS sensor chip. Since the first commercial optical mouse, the IntelliMouse introduced in 1999, most optical mouse devices use CMOS sensors.
Power MOSFETs
The power MOSFET is the most widely used power device in the world. Advantages over bipolar junction transistors in power electronics include MOSFETs not requiring a continuous flow of drive current to remain in the ON state, offering higher switching speeds, lower switching power losses, lower on-resistances, and reduced susceptibility to thermal runaway. The power MOSFET had an impact on power supplies, enabling higher operating frequencies, size and weight reduction, and increased volume production.
Switching power supplies are the most common applications for power MOSFETs. They are also widely used for MOS RF power amplifiers, which enabled the transition of mobile networks from analog to digital in the 1990s. This led to the wide proliferation of wireless mobile networks, which revolutionised telecommunication systems. The LDMOS in particular is the most widely used power amplifier in mobile networks, such as 2G, 3G, 4G, and 5G.
Over 50 billion discrete power MOSFETs are shipped annually, as of 2018. They are widely used for automotive, industrial and communications systems in particular. Power MOSFETs are commonly used in automotive electronics, particularly as switching devices in electronic control units, and as power converters in modern electric vehicles. The insulated-gate bipolar transistor (IGBT), a hybrid MOS-bipolar transistor, is also used for a wide variety of applications.
Construction
Gate material
The primary criterion for the gate material is that it is a good conductor. Highly doped polycrystalline silicon is an acceptable but certainly not ideal conductor, and it also suffers from further technical deficiencies in its role as the standard gate material. Nevertheless, there are several reasons favoring the use of polysilicon:
The threshold voltage (and consequently the drain-to-source on-current) is modified by the work function difference between the gate material and channel material. Because polysilicon is a semiconductor, its work function can be modulated by adjusting the type and level of doping. Furthermore, because polysilicon has the same bandgap as the underlying silicon channel, it is quite straightforward to tune the work function to achieve low threshold voltages for both NMOS and PMOS devices. By contrast, the work functions of metals are not easily modulated, so tuning the work function to obtain low threshold voltages (LVT) becomes a significant challenge. Additionally, obtaining low-threshold devices on both PMOS and NMOS devices sometimes requires the use of different metals for each device type. While bimetallic integrated circuits (i.e., one type of metal for the gate electrodes of NFETs and a second type of metal for the gate electrodes of PFETs) are not common, they are known in the patent literature and provide some benefit in tuning circuits' overall electrical performance.
The silicon-SiO2 interface has been well studied and is known to have relatively few defects. By contrast many metal-insulator interfaces contain significant levels of defects which can lead to Fermi level pinning, charging, or other phenomena that ultimately degrade device performance.
In the MOSFET IC fabrication process, it is preferable to deposit the gate material prior to certain high-temperature steps in order to make better-performing transistors. Such high temperature steps would melt some metals, limiting the types of metal that can be used in a metal-gate-based process.
While polysilicon gates have been the de facto standard for the last twenty years, they do have some disadvantages which have led to their likely future replacement by metal gates. These disadvantages include:
Polysilicon is not a great conductor (approximately 1000 times more resistive than metals), which reduces the signal propagation speed through the material. The resistivity can be lowered by increasing the level of doping, but even highly doped polysilicon is not as conductive as most metals. To improve conductivity further, sometimes a high-temperature metal such as tungsten, titanium, cobalt, or more recently nickel is alloyed with the top layers of the polysilicon. Such a blended material is called silicide. The silicide–polysilicon combination has better electrical properties than polysilicon alone and still does not melt in subsequent processing. Also, the threshold voltage is not significantly higher than with polysilicon alone, because the silicide material is not near the channel. The process in which silicide is formed on both the gate electrode and the source and drain regions is sometimes called salicide (self-aligned silicide).
When the transistors are extremely scaled down, it is necessary to make the gate dielectric layer very thin, around 1 nm in state-of-the-art technologies. A phenomenon observed here is so-called poly depletion, where a depletion layer is formed in the gate polysilicon layer next to the gate dielectric when the transistor is in inversion. To avoid this problem, a metal gate is desired. A variety of metal gates such as tantalum, tungsten, tantalum nitride, and titanium nitride are used, usually in conjunction with high-κ dielectrics. An alternative is to use fully silicided polysilicon gates, a process known as FUSI.
Present high performance CPUs use metal gate technology, together with high-κ dielectrics, a combination known as high-κ, metal gate (HKMG). The disadvantages of metal gates are overcome by a few techniques:
The threshold voltage is tuned by including a thin "work function metal" layer between the high-κ dielectric and the main metal. This layer is thin enough that the total work function of the gate is influenced by both the main metal and thin metal work functions (either due to alloying during annealing, or simply due to incomplete screening by the thin metal). The threshold voltage can thus be tuned by the thickness of the thin metal layer (a crude numerical sketch of this follows the list).
High-κ dielectrics are now well studied, and their defects are understood.
HKMG processes exist that do not require the metals to experience high temperature anneals; other processes select metals that can survive the annealing step.
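A crude numerical picture of the work-function-metal trick from the list above: as the thin metal layer thickens, the effective gate work function slides from the main metal's value toward the thin metal's, carrying the threshold voltage with it. The exponential blending model and every number below are assumptions for illustration, not a physical device model.

```python
import math

def effective_work_function(t_thin_nm, wf_thin_ev, wf_main_ev, screen_nm=2.0):
    """Crude screening blend: 0 nm -> main metal value, thick -> thin metal."""
    w = 1.0 - math.exp(-t_thin_nm / screen_nm)   # assumed blending weight
    return w * wf_thin_ev + (1.0 - w) * wf_main_ev

for t in (0.5, 1.0, 3.0):
    wf = effective_work_function(t, wf_thin_ev=4.1, wf_main_ev=4.9)
    print(f"{t:.1f} nm thin layer -> effective work function ~ {wf:.2f} eV")
```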
Insulator
As devices are made smaller, insulating layers are made thinner, often through steps of thermal oxidation or localised oxidation of silicon (LOCOS). For nano-scaled devices, at some point tunneling of carriers through the insulator from the channel to the gate electrode takes place. To reduce the resulting leakage current, the insulator can be made thicker by choosing a material with a higher dielectric constant. To see how thickness and dielectric constant are related, note that Gauss's law connects field to charge as:

Q = κ ε0 E,

with Q = charge density, κ = dielectric constant, ε0 = permittivity of empty space and E = electric field. From this law it appears the same charge can be maintained in the channel at a lower field provided κ is increased. The voltage on the gate is given by:

VG = Vch + E tins = Vch + Q tins / (κ ε0),

with VG = gate voltage, Vch = voltage at channel side of insulator, and tins = insulator thickness. This equation shows the gate voltage will not increase when the insulator thickness increases, provided κ increases to keep tins / κ constant (see the article on high-κ dielectrics for more detail, and the section in this article on gate-oxide leakage).
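Putting numbers on the tins/κ trade-off just derived: since the capacitance per unit area is C = κ ε0 / tins, raising κ lets the insulator be physically thicker at the same capacitance. The material constants below are approximate textbook values (κ ≈ 3.9 for SiO2, κ ≈ 25 for hafnium oxide), used purely for illustration.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cap_per_area(kappa, t_ins_nm):
    """Gate capacitance per unit area, C = kappa * eps0 / t_ins."""
    return kappa * EPS0 / (t_ins_nm * 1e-9)

c_ref = cap_per_area(3.9, 1.2)        # 1.2 nm SiO2 reference stack
t_hfo2 = 25 * EPS0 / c_ref * 1e9      # HfO2 thickness giving the same C, nm
print(f"HfO2 thickness for equal capacitance: {t_hfo2:.1f} nm")  # ~7.7 nm
```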
The insulator in a MOSFET is a dielectric, which can be silicon oxide formed by LOCOS, but many other dielectric materials are employed. The generic term for the dielectric is gate dielectric, since the dielectric lies directly below the gate electrode and above the channel of the MOSFET.
Junction design
The source-to-body and drain-to-body junctions are the object of much attention because of three major factors: their design affects the current–voltage (I–V) characteristics of the device by lowering output resistance; it affects the speed of the device through the loading effect of the junction capacitances; and it determines the component of stand-by power dissipation due to junction leakage.
The drain-induced barrier lowering of the threshold voltage and channel-length modulation effects upon I–V curves are reduced by using shallow junction extensions. In addition, halo doping can be used, that is, the addition of very thin, heavily doped regions of the same doping type as the body, tight against the junction walls, to limit the extent of depletion regions.
The capacitive effects are limited by using raised source and drain geometries that make most of the contact area border thick dielectric instead of silicon.
Scaling
Over the past decades, the MOSFET (as used for digital logic) has continually been scaled down in size; typical MOSFET channel lengths were once several micrometres, but modern integrated circuits incorporate MOSFETs with channel lengths of tens of nanometers. Robert Dennard's work on scaling theory was pivotal in recognising that this ongoing reduction was possible. The semiconductor industry maintains a "roadmap", the International Technology Roadmap for Semiconductors (ITRS), which sets the pace for MOSFET development. Historically, the difficulties with decreasing the size of the MOSFET have been associated with the semiconductor device fabrication process, the need to use very low voltages, and with poorer electrical performance necessitating circuit redesign and innovation (small MOSFETs exhibit higher leakage currents and lower output resistance). As of 2019, the smallest MOSFETs in production are 5 nm FinFET semiconductor nodes, manufactured by Samsung Electronics and TSMC.
Smaller MOSFETs are desirable for several reasons. The main reason to make transistors smaller is to pack more and more devices in a given chip area. This results in a chip with the same functionality in a smaller area, or chips with more functionality in the same area. Since fabrication costs for a semiconductor wafer are relatively fixed, the cost per integrated circuit is mainly related to the number of chips that can be produced per wafer. Hence, smaller ICs allow more chips per wafer, reducing the price per chip. In fact, over the past 30 years the number of transistors per chip has doubled every 2–3 years as new technology nodes are introduced. For example, the number of MOSFETs in a microprocessor fabricated in a 45 nm technology can well be twice as many as in a 65 nm chip. This doubling of transistor density was first observed by Gordon Moore in 1965 and is commonly referred to as Moore's law. It is also expected that smaller transistors switch faster. For example, one approach to size reduction is a scaling of the MOSFET that requires all device dimensions to reduce proportionally. The main device dimensions are the channel length, channel width, and oxide thickness. When they are scaled down by equal factors, the transistor channel resistance does not change, while gate capacitance is cut by that factor. Hence, the RC delay of the transistor scales with a similar factor. While this has traditionally been the case for the older technologies, for state-of-the-art MOSFETs reduction of the transistor dimensions does not necessarily translate to higher chip speed, because the delay due to interconnections is more significant.
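The constant-factor scaling argument above can be checked with a few lines of arithmetic: shrinking channel length, width and oxide thickness by the same factor k leaves the channel resistance unchanged while the gate capacitance, and hence the RC delay, drops by k. The starting dimensions are hypothetical.

```python
def scale(dims, k):
    """Shrink (channel length, width, oxide thickness) by a factor k > 1."""
    return tuple(d / k for d in dims)

L, W, T_OX = 65e-9, 130e-9, 1.8e-9        # hypothetical 65 nm-class device
for k in (1.0, 65 / 45, 65 / 32):
    l, w, t = scale((L, W, T_OX), k)
    c_rel = (w * l / t) / (W * L / T_OX)  # gate capacitance ~ W * L / t_ox
    print(f"k = {k:.2f}: relative gate capacitance and RC delay ~ {c_rel:.2f}")
```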
Producing MOSFETs with channel lengths much smaller than a micrometre is a challenge, and the difficulties of semiconductor device fabrication are always a limiting factor in advancing integrated circuit technology. Though processes such as atomic layer deposition (ALD) have improved fabrication for small components, the small size of the MOSFET (less than a few tens of nanometers) has created operational problems:
Higher subthreshold conduction: As MOSFET geometries shrink, the voltage that can be applied to the gate must be reduced to maintain reliability. To maintain performance, the threshold voltage of the MOSFET has to be reduced as well. As threshold voltage is reduced, the transistor cannot be switched from complete turn-off to complete turn-on with the limited voltage swing available; the circuit design is a compromise between strong current in the on case and low current in the off case, and the application determines whether to favor one over the other. Subthreshold leakage (including subthreshold conduction, gate-oxide leakage and reverse-biased junction leakage), which was ignored in the past, now can consume upwards of half of the total power consumption of modern high-performance VLSI chips.
Increased gate-oxide leakage: The gate oxide, which serves as insulator between the gate and channel, should be made as thin as possible to increase the channel conductivity and performance when the transistor is on and to reduce subthreshold leakage when the transistor is off. However, with current gate oxides with a thickness of around 1.2 nm (which in silicon is ~5 atoms thick), the quantum mechanical phenomenon of electron tunneling occurs between the gate and channel, leading to increased power consumption. Silicon dioxide has traditionally been used as the gate insulator, but it has a modest dielectric constant. Increasing the dielectric constant of the gate dielectric allows a thicker layer while maintaining a high capacitance (capacitance is proportional to dielectric constant and inversely proportional to dielectric thickness). All else equal, a higher dielectric thickness reduces the quantum tunneling current through the dielectric between the gate and the channel. Insulators that have a larger dielectric constant than silicon dioxide (referred to as high-κ dielectrics), such as group IVb metal silicates (e.g. hafnium and zirconium silicates and oxides), are being used to reduce the gate leakage from the 45 nanometer technology node onwards. On the other hand, the barrier height of the new gate insulator is an important consideration; the difference in conduction band energy between the semiconductor and the dielectric (and the corresponding difference in valence band energy) also affects the leakage current level. For the traditional gate oxide, silicon dioxide, the former barrier is approximately 8 eV. For many alternative dielectrics the value is significantly lower, tending to increase the tunneling current and somewhat negating the advantage of the higher dielectric constant. The maximum gate–source voltage is determined by the strength of the electric field able to be sustained by the gate dielectric before significant leakage occurs. As the insulating dielectric is made thinner, the electric field strength within it goes up for a fixed voltage. This necessitates using lower voltages with the thinner dielectric.
Increased junction leakage: To make devices smaller, junction design has become more complex, leading to higher doping levels, shallower junctions, "halo" doping and so forth, all to decrease drain-induced barrier lowering (see the section on junction design). To keep these complex junctions in place, the annealing steps formerly used to remove damage and electrically active defects must be curtailed, increasing junction leakage. Heavier doping is also associated with thinner depletion layers and more recombination centers that result in increased leakage current, even without lattice damage.
Drain-induced barrier lowering (DIBL) and VT roll-off: Because of the short-channel effect, channel formation is not entirely done by the gate; the drain and source also affect it. As the channel length decreases, the depletion regions of the source and drain come closer together and make the threshold voltage (VT) a function of the length of the channel. This is called VT roll-off. VT also becomes a function of the drain-to-source voltage VDS. As VDS increases, the depletion regions increase in size, and a considerable amount of charge is depleted by VDS. The gate voltage required to form the channel is then lowered, and thus VT decreases with an increase in VDS. This effect is called drain-induced barrier lowering (DIBL).
Lower output resistance: For analog operation, good gain requires a high MOSFET output impedance, which is to say, the MOSFET current should vary only slightly with the applied drain-to-source voltage. As devices are made smaller, the influence of the drain competes more successfully with that of the gate due to the growing proximity of these two electrodes, increasing the sensitivity of the MOSFET current to the drain voltage. To counteract the resulting decrease in output resistance, circuits are made more complex, either by requiring more devices (for example, the cascode and cascade amplifiers) or by feedback circuitry using operational amplifiers.
Lower transconductance: The transconductance of the MOSFET determines its gain and is proportional to hole or electron mobility (depending on device type), at least for low drain voltages. As MOSFET size is reduced, the fields in the channel increase and the dopant impurity levels increase. Both changes reduce the carrier mobility, and hence the transconductance. As channel lengths are reduced without proportional reduction in drain voltage, raising the electric field in the channel, the result is velocity saturation of the carriers, limiting the current and the transconductance.
Interconnect capacitance: Traditionally, switching time was roughly proportional to the gate capacitance. However, with transistors becoming smaller and more transistors being placed on the chip, interconnect capacitance (the capacitance of the metal-layer connections between different parts of the chip) is becoming a large percentage of the total capacitance. Signals have to travel through the interconnect, which leads to increased delay and lower performance.
Heat production: The ever-increasing density of MOSFETs on an integrated circuit creates problems of substantial localized heat generation that can impair circuit operation. Circuits operate more slowly at high temperatures, and have reduced reliability and shorter lifetimes. Heat sinks and other cooling devices and methods are now required for many integrated circuits, including microprocessors. Power MOSFETs are at risk of thermal runaway: as their on-state resistance rises with temperature, if the load is approximately a constant-current load then the power loss rises correspondingly, generating further heat (a toy numerical model of this feedback loop follows the list). When the heatsink is not able to keep the temperature low enough, the junction temperature may rise quickly and uncontrollably, resulting in destruction of the device.
Process variations: With MOSFETs becoming smaller, the number of atoms in the silicon that produce many of the transistor's properties is becoming fewer, with the result that control of dopant numbers and placement is more erratic. During chip manufacturing, random process variations affect all transistor dimensions: length, width, junction depths, oxide thickness etc., and become a greater percentage of overall transistor size as the transistor shrinks. The transistor characteristics become less certain, more statistical. The random nature of manufacturing means designers do not know which particular MOSFETs will end up in a particular instance of the circuit. This uncertainty forces a less optimal design because the design must work for a great variety of possible component MOSFETs. See process variation, design for manufacturability, reliability engineering, and statistical process control.
Modeling challenges: Modern ICs are computer-simulated with the goal of obtaining working circuits from the very first manufactured lot. As devices are miniaturized, the complexity of the processing makes it difficult to predict exactly what the final devices will look like, and modeling of physical processes becomes more challenging as well. In addition, microscopic variations in structure due simply to the probabilistic nature of atomic processes require statistical (not just deterministic) predictions. These factors combine to make adequate simulation and "right the first time" manufacture difficult.
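As promised under the heat-production item above, here is a toy electro-thermal model of power-MOSFET thermal runaway: on-state resistance rises with temperature, a constant-current load then dissipates more power, and the loop either settles to a stable junction temperature or diverges. The temperature coefficient, thermal resistance and currents are all hypothetical.

```python
def settle(i_load, r25=0.05, tc=0.007, r_th=10.0, t_amb=25.0, steps=200):
    """Iterate P = I^2 * R(T) and T = T_amb + P * R_th; return the settled
    junction temperature, or None if it runs past the rated maximum."""
    t = t_amb
    for _ in range(steps):
        r_on = r25 * (1.0 + tc * (t - 25.0))  # R_on rises ~0.7%/K (assumed)
        t_new = t_amb + i_load ** 2 * r_on * r_th
        if t_new > 175.0:      # beyond a typical max junction temperature
            return None        # no stable operating point: thermal runaway
        if abs(t_new - t) < 1e-6:
            return t_new
        t = t_new
    return t

print(settle(5.0))    # settles around 39 degC: stable
print(settle(18.0))   # None: runaway
```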
A related scaling rule is Edholm's law. In 2004, Phil Edholm observed that the bandwidth of telecommunication networks (including the Internet) is doubling every 18 months. Over the course of several decades, the bandwidths of communication networks have risen from bits per second to terabits per second. The rapid rise in telecommunication bandwidth is largely due to the same MOSFET scaling that enables Moore's law, as telecommunication networks are built from MOSFETs.
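The Edholm's-law figures can be sanity-checked with simple arithmetic: one doubling every 18 months compounds to about ten orders of magnitude over 50 years and twelve orders over 60, consistent with the climb from bits per second to terabits per second.

```python
def doublings(years, period_months=18):
    """Doublings accumulated over a span at one doubling per period."""
    return years * 12 / period_months

for years in (25, 50, 60):
    n = doublings(years)
    print(f"{years} years -> {n:4.1f} doublings -> growth x{2 ** n:.2e}")
```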
Timeline
See also
BSIM
ggNMOS
High electron mobility transistor
Polysilicon depletion effect
Quantum Hall effect
Transistor model
Intrinsic diode
External links
How Semiconductors and Transistors Work (MOSFETs) WeCanFigureThisOut.org
A Flash slide showing the fabricating process of a MOSFET in detail
3682342 | https://en.wikipedia.org/wiki/List%20of%20Star%20Trek%20games | List of Star Trek games | The enduring popularity of the Star Trek science fiction franchise has led to numerous games in many different formats, beginning in 1967 with a board game based on The Original Series and continuing through the present with online and DVD games.
Board games
Space Checkers, a variant of Tri-dimensional chess, produced by Pacific Game Co (1965)
Star Trek Game, the only game based on the original series to be released during the show's run, produced by Ideal Toys (1967)
Star Trek game, produced by Hasbro (1974)
Star Trek game, produced in UK by Palitoy (1975)
Star Trek game, produced by Milton Bradley, based on Star Trek: The Motion Picture (1979)
Star Trek: Starfleet Game, a promotional game released by McDonald's to coincide with the first movie (1979)
Struggle for the Throne, produced by FASA (1984). Players control factions fighting in a succession crisis in the Klingon Empire.
Star Trek: The Adventure Game, produced by West End Games (1985)
Star Trek: The Enterprise 4 Encounter, produced by West End Games (1985)
Golden Trivia Game: Star Trek Edition (1985), Golden Books
Golden Trivia Cards: Star Trek Edition (1985)
Star Trek: The Game, produced by Classic Games (1992)
Star Trek: The Final Frontier, produced by Toys & Games Limited (1992)
The Star Trek Trivia Game, a board game based on the original series, created by Terry W. Hill of Ogden, Utah; custom-made, privately produced limited edition (1992–present)
How to Host a Mystery - Star Trek: The Next Generation, produced by Decipher in the U.S. and by Canada Games in Canada (1992)
Star Trek: The Next Generation Interactive VCR Board Game – A Klingon Challenge, the only Star Trek: The Next Generation video board game, although a second game involving Q and the Borg was planned; produced by Decipher (1993)
Star Trek: The Next Generation: Romulan Challenge, produced by MMG LTD (1994)
Trivial Pursuit: Star Trek Edition VCR Game, a "Sci-Fi Master Game" supplement for the main game, made by TelStar Video Entertainment (1995)
Monopoly, produced by Hasbro licensee USAopoly in three versions; one representing the original Star Trek series (2000), another featuring Star Trek: The Next Generation (1998), and Star Trek: Continuum Edition Monopoly, covering all five series (2009)
All About Trivia: Star Trek, released by Fundex Games. A trivia game with material specifically based on The Original Series and the first six feature films (2009)
Scene It? Star Trek, developed by Screenlife Games and Mattel. Contains Star Trek TV and movie clips from all 5 live action series and the first 10 movies (2009)
Star Trek: Expeditions, developed by Reiner Knizia and WizKids, taking place in the new continuity established by the latest movie (2011)
Star Trek: Fleet Captains, developed by WizKids, a tactical game where players create fleets out of a selection of Federation and Klingon ships and battle to control hex based sectors (2011)
Star Trek Catan, created by Mayfair Games, is a TOS themed version of the board game The Settlers of Catan (2012)
Star Trek: Ascendancy, published by GaleForce 9, is a 4X strategy game, (2016)
Star Trek Panic, published by USAopoly (with Fireside Games) in 2016. A cooperative game based on Castle Panic that uses Star Trek themed enemies and player characters to complete missions before the U.S.S. Enterprise is destroyed.
Tabletop wargames
Star Trek Battle Manual, designed and published by Lou Zocchi in 1972. Unlike most other games in this category, it was played without a board. Ships maneuvered on the tabletop using rulers to determine range and protractor-like angle measures on the large counters to determine facing. The game was produced without authorization from Paramount Pictures, leading to its reissue in 1973 as the Alien Space Battle Manual with all Star Trek references removed, followed by a further release in 1977 as the Star Fleet Battle Manual after Zocchi obtained a license from Franz Joseph Designs for material in the Star Fleet Technical Manual.
Star Fleet Battles, the seminal tactical tabletop wargame created by Stephen V. Cole and produced by Task Force Games in 1979. Like Zocchi's Star Fleet Battle Manual, it also made use of a license from Franz Joseph Designs. It has had four major editions and is currently published by Amarillo Design Bureau, Inc.
Federation Commander, a more streamlined tactical tabletop wargame in the Star Fleet Universe produced by Amarillo Design Bureau, Inc. in 2005.
A Call To Arms: Star Fleet is another hexless game (using miniatures) set in the Star Fleet Universe. Published in 2011, it is a collaboration between Amarillo Design Bureau and Mongoose Publishing, using a variant of the system seen in Babylon 5: A Call To Arms.
Federation Space, a strategic companion to Star Fleet Battles produced by Task Force Games in 1981.
Federation and Empire, the second and more elaborate strategic game set in the Star Fleet Universe, first published by Task Force Games in 1986 and presently by Amarillo Design Bureau, Inc.
The Star Trek II: Starship Combat Simulator, Star Trek III Starship Combat Roleplaying Game, and Star Trek: Starship Tactical Combat Simulator, all published by FASA Corporation in the 1980s, based on the combat system from the Star Trek: The Role Playing Game
Star Trek: Attack Wing, published by Wizkids in 2013, and based on the "FlightPath maneuver system" from the Star Wars: X-Wing Miniatures Game under license from Fantasy Flight Games.
Card games
Star Trek Customizable Card Game, produced by Decipher.
Star Trek: The Card Game, produced by Fleer.
Star Trek: Deck Building Game, produced by Bandai. It consists of three stand-alone, integrable editions: Star Trek Deck Building Game: The Next Generation (2011), Star Trek Deck Building Game: The Next Generation – Next Phase (2012) and Star Trek Deck Building Game: The Original Series (2012)
Star Fleet Battle Force, produced by Amarillo Design Bureau, Inc.
Star Trek Adversaries, produced by Puppetmaster Games
Star Trek Fluxx, published in 2018 by Looney Labs with Gale Force 9
Star Trek TNG Fluxx, published in 2018 by Looney Labs with Gale Force 9
Star Trek Deep Space Nine Fluxx, to be published in 2019 by Looney Labs with Gale Force 9
ChronoTrek, a time travel game similar to Chrononauts set in the Star Trek universe, published in 2019 by Looney Labs
Role-playing games
Official game titles include the following:
Star Trek: Adventure Gaming in the Final Frontier, produced by Heritage Models (1978)
Starfleet Voyages, produced by Terra Games Company (1982)
Star Trek: The Role Playing Game, the original Star Trek RPG produced by FASA (1982)
Enterprise: Role Play Game in Star Trek, released in Japan (only) by Tsukuda Hobby (1983)
Prime Directive, designed by Amarillo Design Bureau, Inc. and published by Task Force Games (1993). Later editions were produced for the GURPS 3rd edition (2002), 4th edition (2005), d20 (2005) and d20 Modern (2008).
Star Trek: The Next Generation Role-playing Game, produced by Last Unicorn Games (1998) and derived in two other standalone games:
Star Trek: Deep Space Nine Role-playing Game (1999)
Star Trek: The Original Series Role-playing Game (1999)
Star Trek Roleplaying Game, produced by Decipher, Inc. (2002)
Star Trek Adventures, produced by Modiphius Entertainment (2017)
Star Trek Simulation Forum has served as the chat-based role-playing game of the official Star Trek website since October 2002, and remains the only simming organization currently recognized by the site (2002)
Starship simulator games
Starship simulator games create the experience of commanding and operating a starship, and usually allow the player to handle a variety of functions and to allocate resources such as ship power and systems. Some early Star Trek games in this category have had a huge effect on subsequent games in their genre, often leading to new levels of depth and complexity in programming and/or gameplay.
This game category includes both computer games and non-computer board games: the Star Fleet Battles game series provides a starship simulation yet is wholly a tabletop board wargame, and the Star Trek RPG by FASA allowed players to take charge of specific areas of a ship's functions (such as the engineer allocating power) during combat.
Star Fleet Battles is different from most other wargames, which usually indicate unit strengths with simple numerical ratings. SFB players are able to deploy and manage power for a variety of ship weapons and resources. This is done via an elaborate Energy Allocation mechanism where even partial points of energy can be allocated to a number of different systems. Federation Commander is the continued development of this system in a faster-paced version. Instead of the Energy Allocation system, it uses an innovative tick-sheet system, which manages power use for each ship and also tracks which weapons and systems are in use. The Star Trek: Starfleet Command computer game is based upon Star Fleet Battles.
In Star Trek: The Role Playing Game, produced by FASA, players actually had individual bridge functions during combat. This at one point became a separate game known as Starship Tactical Combat Simulator. The Captain determined the strategy, the Engineer was responsible for power management and allocation to different systems such as weapons and shields, the Helmsman for firing weapons, the Navigator for managing deflector shields, the Communications Officer for damage control and so on.
Starship simulator computer games set in the Star Trek universe play a large role in the history of computer games. Some of the earliest and most influential space simulator video games were Star Trek simulations designed to run on mainframes.
David H. Ahl played such games in the late 1960s at Carnegie Mellon University and the University of California, Berkeley. He stated that they were much less sophisticated than Mike Mayfield's Star Trek text game, which originated as a BASIC program on an SDS Sigma 7 mainframe system in 1971 and was ported to many different systems. Ahl published source code for this game in his best-selling BASIC Computer Games, and variants of the game spread widely to personal computer systems.
Decwar in 1978 was also a groundbreaking game. Another is Super Star Trek, an early text-based, DOS-based game. This game created an impressive starship experience using only text-based commands and graphics. The game Begin is considered notable for having a convincing model of game dynamics, as it has very few random elements, and is highly mathematical. In 1986, the game Multi-Trek (MTrek) was brought online at the University of California, Santa Cruz. Written in C for a PDP mainframe, and also available via dialup and later TELNET, MTrek was arguably the first ever game to combine a persistent world, online multiplayer environment with a real-time, true 3-dimensional game engine and versions of the game still have an active player base.
Netrek was released in 1988, and was probably the first game to use both the TCP and UDP protocols, the first Internet-aware team game, the first Internet game to use metaservers to locate open game servers, and the first to have persistent user information. Netrek should not be confused with NET TREK, a 1984 Macintosh game unofficially based on Star Trek.
In later years, fewer games were produced within this genre, and more games were produced in the adventure game genre. The first of the newer games was Starfleet Academy, which incorporated many Star Trek elements but was criticized for depicting starship operation as more akin to fighter planes than capital ships. Its sequel, Klingon Academy, was quite different, and was one of the first games to depict starship operation with an appropriate amount of complexity.
The Starfleet Command game series released by Interplay was based largely on the tabletop game Star Fleet Battles, and comprised Starfleet Command, Starfleet Command II: Empires at War, and Starfleet Command III. With its more naval flavor and wide array of depicted ship systems and Star Trek storylines, it remains one of the most definitive games in the genre. The series spawned a very large multiplayer ladder competition, first with the "Starlance" system and later on the "GamerZone" ladder. The main multiplayer setting is the "Dynaverse", which began as an official server hosted by Taldren and has continued as a private effort (an earlier, unauthorized adaptation of Star Fleet Battles as a computer game was SSI's The Warp Factor in 1982).
Star Trek: Bridge Commander was another addition to this genre, reflecting the more deliberative, command aspects of this experience.
In late 2006, Bethesda Softworks released several console games which carry on the tradition of classic Star Trek ship simulator/combat games, Star Trek: Legacy for the PC and Xbox 360, Star Trek: Encounters for the PlayStation 2, Star Trek: Tactical Assault for the Nintendo DS and the PlayStation Portable and Star Trek: Conquest for the Wii and PlayStation 2.
Several online games have appeared on the Internet. Vega Trek is a game mod which is planned to eventually become active as a multiplayer game. Flashtrek: Broken Mirror, first created by Vex Xiang, is an entirely browser-based online Star Trek game. It has spawned several sequels, one created by Vex Xiang and multiple others created by fans. A newer game titled Star Trek: Broken Mirror was being developed for several years by a developer known as Darkwing, but was apparently abandoned in 2014.
Star Trek: Bridge Crew is one of the newest additions to this genre, and continues the historical pattern of Star Trek-themed simulators breaking new ground. This cross-platform game takes place in a virtual reality environment in which four players occupy the bridge of the USS Aegis, the Enterprise-D (through downloadable content) or the original Enterprise. Players see each other in real time, and interact during the game to operate the ship and work together to handle various game scenarios.
Pinball games
Four pinball games have been based on the Star Trek series:
Star Trek, released by Bally in 1979, designed by Gary Gayton with artwork by Kevin O'Connor.
Star Trek: 25th Anniversary, released by Data East Pinball (now Stern Pinball) in 1991.
Star Trek: The Next Generation, created by legendary pinball designer Steve Ritchie and released by Williams Electronics in November 1993 as part of Williams' SuperPin series.
Star Trek, created by Steve Ritchie and released by Stern in winter 2013.
Video games
Arcade
Computer
The history of the Star Trek personal computer game franchise began as early as 1971, with a Star Trek text-only computer game written in BASIC. Many PC titles have since been published, and the franchise was one of the first based on a TV program to break into the young PC gamer market in the 1990s. Activision and Viacom signed an agreement to develop games based on the Star Trek property in September 1998.
Interplay, Simon & Schuster, MicroProse and Activision released most of the best-known Star Trek games between 2000 and 2003. Titles like Star Trek: Armada, Star Trek: Elite Force and Star Trek: Bridge Commander were all published during this period, as were over half of all the other major Star Trek PC games. The absence of new titles after 2003 was due in large measure to a split and subsequent lawsuit between Activision and Viacom which ended in 2004.
With the departure of Activision in 2003, the franchise under Paramount's tenure effectively came to a close. Since the end of 2005, CBS has assumed most franchise management, including games and other products. Even with no new licensed titles released during 2003–2006, older games like Armada and Elite Force retained an avid fan base that keeps the small community going. Development of the new Star Trek Online title was completed, and the game was made available for sale on February 2, 2010.
Star Trek: Alien Domain is a 2015 flash-based Star Trek multiplayer strategy game developed by GameSamba in conjunction with CBS Interactive.
Commercial games
Console
Mobile
Electronic and casino games
Star Trek Super Phaser 2 Target Game is similar to Laser Tag (1976)
Star Trek Phaser Battle Game is similar to a tabletop arcade game
Star Trek is a casino slot machine game designed and marketed by WMS Industries since 2008
Star Trek: Deep Space Nine – Red Alert is a video game gambling machine
Handheld electronic games
Numerous stand-alone electronic handheld and tabletop games have been produced by manufacturers like Bandai, Coleco, Konami, and others. Pair Match, manufactured by Bandai in 1984, appeared in several Star Trek: The Next Generation episodes.
See also
Netrek
List of games in Star Trek
The Warp Factor
References
Further reading
Hailing Frequency Star Trek Gaming News - A community website with news and interviews about Star Trek gaming, featuring a bi-weekly podcast, a live radio show, and forums with over 5,000 members.
TrekCore Gaming Museum - Information on all Star Trek games, organized by platform, including current online games.
External links
Current online games, at trekcore.com.
20Q Star Trek - An online artificial intelligence game that covers all classic Star Trek TV shows and movies, plus characters, gadgets and locations
Video game lists by franchise |
33476963 | https://en.wikipedia.org/wiki/Comparison%20of%20iOS%20e-reader%20software | Comparison of iOS e-reader software | The following tables compare general and technical features of a number of iOS e-book reader applications. Each section corresponds to a major area of functionality in e-book reader software. The comparisons are based on the latest released versions.
Navigation features
Display features
Edit/tool features
Book source management features
Supported File Formats
See Comparison of e-book formats for details on the file formats.
License
Special features
Discontinued e-book reader software
The Readmill app, introduced in February 2011, read numerous formats on Android and iOS devices but shut down on July 1, 2014.
The Blio, DL Reader and Stanza apps are also no longer available.
See also
Comparison of Android e-book reader software
Comparison of e-book formats - includes both device and software formats
Comparison of e-book readers - includes hardware e-book readers
References
iOS e-book reader software
IOS software
E-books |
54531032 | https://en.wikipedia.org/wiki/Marwadi%20University | Marwadi University | Marwadi University (MU) is a private university located in Rajkot, Gujarat, India. It was established on 9 May 2016 by the Marwadi Education Foundation through The Gujarat Private Universities (Amendment) Act, 2016. It offers 54 different courses and is graded A+ by NAAC.
The university operates under the division of Marwadi Education Foundation's Group of Institutions (MEFGI). MEFGI commenced operations in 2008 and was established as a primary unit of the Marwadi Education Foundation under the Bombay Public Trust Act, 1950. Marwadi University is aided by Marwadi Shares and Finance Limited, a major stockbroking company in India, and Chandarana Intermediaries Brokers Pvt. Ltd. (CIBPL), a firm dealing in technical and arbitrage trading.
Campus
The campus is located on 32 acres of land, about 40 minutes from the railway station and the airport. The university comprises eight multi-storey buildings. Laboratories, research facilities, student clubs, a sports club and a college cafeteria are available.
There are two libraries with RFID technology, more than 60 computer systems and more than 50,000 books. The campus also includes banking and ATM facilities. More than 70 buses run every day at regular intervals for students and staff. The hostels offer rooms with internet access, laundry, dance rooms and libraries, and can accommodate over 2,000 students.
Academics
Academic programmes
Marwadi University provides Undergraduate, Postgraduate, Diploma and Doctoral degrees. The university offers all these degrees in different faculties like Engineering, Science, Architecture, Business Management, Computer Applications, Commerce, Law, Pharmacy and Physiotherapy.
Ranking
Faculty of Law, Marwadi University, has been listed as one of the top law schools in India in Forbes India's Legal Powerlist 2020.
Param Shavak
To enhance the country's educational infrastructure, the Gujarat Council on Science and Technology (GUJCOST) of the Gujarat Government has granted Marwadi University a 'Param Shavak' supercomputer. The Param Shavak supercomputing system will aid high-end computation and research work in the fields of AI, CE/IT, ICT and ML.
Organisation and administration
Faculties
Faculty of Engineering
The faculty offers undergraduate, postgraduate and diploma degrees.
The undergraduate programmes offered are Bachelors in Computer Engineering, Information Technology, Electrical Engineering, Automobile Engineering, Chemical Engineering, Civil Engineering, Mechanical Engineering, Computer Engineering (Big Data & Analytics), Information and Communication Technology, and Computer Engineering (Artificial Intelligence).
The postgraduate programmes offered are Masters in Chemical Engineering, Cyber Security, Environmental Engineering, Computer Engineering, Geotechnical Engineering, Electric Vehicle, CAD/CAM Engineering, Communication System Engineering, Structural Engineering, Thermal Engineering, Transportation Engineering and VLSI.
The diploma courses offered are Diploma in Computer Engineering, Mechanical Engineering, Chemical Engineering, Electrical Engineering, Civil Engineering, Automobile Engineering and Information and Communication Technology.
Faculty of Science
The faculty offers undergraduate and postgraduate degrees.
The undergraduate programmes offered are Bachelor of Science (B.Sc.) in Chemistry, Microbiology, Physics and Mathematics.
The postgraduate programmes offered are Master of Science (M.Sc.) programmes in Chemistry, Microbiology, Mathematics, Bio-Technology, Physics and Environmental Science.
Faculty of Architecture
The faculty provides Bachelor of Architecture, a 5-year undergraduate programme, which focuses on educating students to develop innovative ideas and skills in the architectural discipline.
Faculty of Business management
The faculty offers undergraduate and postgraduate degrees in business management.
The undergraduate programmes offered are Bachelor of Business Administration, Bachelor of Business Administration (Honors) and Bachelor of Business Administration (Financial Markets).
The postgraduate programmes offered are Master of Business Administration, Master of Business Administration in Business Analytics and an executive Master of Business Administration programme.
Faculty of Computer Applications
The undergraduate programme offered is Bachelor of Computer Applications. The postgraduate programme offered is Master of Computer Application.
Faculty of Commerce
The faculty offers undergraduate programmes in Bachelor of Commerce and Bachelor of Commerce (Honors).
Faculty of Law
The faculty of law offers undergraduate degrees and postgraduate degrees. The undergraduate programmes offered are BA LLB and B.Com LLB. The postgraduate programme offered is LLM.
Faculty of Pharmacy
The faculty offers a 4-year undergraduate program, Bachelor of Pharmacy, which trains students with in-depth knowledge in pharmaceutical sciences.
Faculty of Physiotherapy
The faculty offers a 3-year undergraduate programme, Bachelor of Physiotherapy.
Faculty of Computer Science
The faculty offers a postgraduate programme, the Master of Computer Science in Cyber Security & Law.
Centre
Entrepreneurship Development Cell
The university aims to help young people set up their own businesses. To aid this initiative, Marwadi University has developed the Entrepreneurship Development Cell, which has implemented various policies on the campus, such as:
- Student Startup and Innovation Policy of Government of Gujarat
- New Generation Innovation And Entrepreneurship Development Centre (NewGen-IEDC)
The cell provides support services for setting up businesses, including pre-incubation support, guidance on subsidies, preparation of project reports, registrations and management consultancy.
Intellectual Property Rights Cell
To create awareness about intellectual property rights, the university conducts various workshops, seminars, and training courses on IPR.
Energy Swaraj Ashram
Marwadi University announced the inauguration of its solar incubation centre, 'Energy Swaraj Ashram'. The centre was inaugurated by Dr Chetan Singh Solanki, known as the 'Solar Man of India'. At the centre, students can conduct advanced research and development, prototyping and analytics in the area of solar energy.
Innovation and Incubation Research (MUIIR) Center
The Marwadi University Innovation and Incubation Research (MUIIR) Centre signs strategic MOUs to assist entrepreneurs focused on energy conservation, AI and big data analytics, sustainable development and Industry 4.0. More than 32 start-ups from engineering, paramedical, management and science disciplines are supported, and more than 50 lakh rupees has been raised with the help of MUIIR.
Student life
Clubs
Students clubs at Marwadi University include Aero modeling and Robotics Club, Fitness Club, Team-building Club, Green Club, Computer-Aided Manufacturing Club, Cultural Club, and Music Club.
References
External links
Education in Rajkot
Universities in Gujarat
Educational institutions established in 2016
2016 establishments in Gujarat
Private universities in India |
313043 | https://en.wikipedia.org/wiki/Team%20management | Team management | Team management is the ability of an individual or an organization to administer and coordinate a group of individuals to perform a task. Team management involves teamwork, communication, objective setting and performance appraisals. Moreover, team management is the capability to identify problems and resolve conflicts within a team. There are various methods and leadership styles a team manager can adopt to increase personnel productivity and build an effective team. In the workplace, teams come in many shapes and sizes, and their members work together, depend on one another, communicate and strive to accomplish a specific goal. Management teams are a type of team that performs duties such as managing and advising the other employees and teams that work with them. Whereas work, parallel, and project teams hold responsibility for the direct accomplishment of a goal, management teams are responsible for providing general direction and assistance to those teams.
Elements of a healthy and successful team
Cohesive leadership
In any functional team, cohesion amongst team leaders and decision makers is vital. Cohesive leadership means that team leaders act together as a unit and make decisions as a team instead of each branching off into their own work and operating individually. It ensures that the team will be steered in one direction instead of multiple directions due to team leaders not being concise and consistent with their instructions. Cohesive leadership will require team leaders to have strong communication skills.
Effective communication
There must be an effective channel of communication from the top to the bottom of the chain of command and vice versa. An effective channel of communication allows messages to be transferred accurately and without delay to the intended recipient, which speeds up decision-making processes and the operations of the team. Furthermore, effective communication increases the flexibility of an organization and makes it less susceptible to changes in the external environment, as a faster decision-making process gives organizations a longer time period to adapt to changes and execute contingency plans. The use of social media at work also positively influences three team processes: effective communication, knowledge sharing and coordination.
Common goal
When team members first come together they will each bring different ideas; however, the key to a successful team is the alignment of its objectives. It is essential that the team leader sets a common goal the entire team is willing to pursue. This way, all of the team members will put in effort in order to attain the goal. If there is not a common goal, team members who disagree with the objective in hand will feel reluctant to utilise their full effort, leading to failure to achieve the goal. In other cases, team members might divert themselves to other tasks due to a lack of belief or interest in the goal.
Defined team roles and responsibilities
Poorly defined roles are often the biggest obstacle to a successful team. If team members are unclear what their role is, their contributions will be minimal, therefore it is the team leader's duty to outline the roles and responsibilities of each individual within the team and ensure that they work together as an integral unit.
In a successful team, a leader will first evaluate the team's mission to understand what is needed to accomplish the task. Then, they will identify the strengths and weaknesses of the team members and assign roles accordingly. Lastly, they must ensure that all team members know what each other's responsibilities are to avoid confusion and to create an effective channel of communication.
Individuals in a team can take on different roles that have their own unique responsibilities. A task-oriented role occurs when the individual offers new ideas, coordinates activities, or tries to find new information to share with the team. A social-oriented role occurs when an individual encourages the members of the team to be united, and encourages participation and communication. An individual role occurs when an individual blocks the team's activities; such individuals tend to call attention to themselves and avoid interaction with others. Another occurrence is role conflict, a situation in which an individual faces divergent role expectations, meaning they are pulled in various directions while holding different roles simultaneously.
Methods of team management
Command and Control
The "command and control” method as an approach to team management is based on the concept of military management. It was a commonly used system in the private sector during the 21st century. In this method, the team leader instructs their team members to complete a task and if they refuse, they will punish employees until they comply. The team leader has absolute authority and utilises an autocratic leadership style. There are considerable drawbacks to this team management method. First, morale is lowered due to team members being belittled for the slightest mistakes; punishments lead to a lack of confidence resulting in poor performance. Second, in modern organisations roles are often specialised, therefore managers will require the expertise of the employee, elevating the value of the employee. Implementing this team management method leads to a high rate of employee turnover. In addition, in large organisations managers don't have the time to provide instructions to all employees and continuously monitor them; this will impede an organisation's performance as managers are not spending time on their core responsibilities.
Engage and Create
Due to the limitations and ineffectiveness of the "command and control" method, managers developed an alternative management strategy known as "engage and create". In this method, team members are encouraged to participate in discussions and contribute, and are advised to engage with other team members to build a stronger sense of teamwork and unity. This leads to increased productivity and accountability, driving the team towards success.
Econ 101
In the “econ 101” method of team management, the team leader makes the baseline assumption that all team members are motivated by reward in the form of money, and that the best way to manage the team is to provide financial rewards for performance and issue punishments for failure. This method of team management uses material gains in the place of intrinsic motivation to drive team members. This is similar to Frederick Taylor's theory of scientific management which claims the main form of motivation for employees is money. The main drawback of this method is that it does not take into account other forms of motivation besides money such as personal satisfaction and ambition. Moreover, using reward and punishment as a method of team management can cause demotivation as everyone is motivated by different factors and there is no one way to satisfy all team members; the negative effect is further compounded by punishment leading to demoralisation and loss of confidence.
Problems in team management
Absence of trust
In Patrick Lencioni's book The Five Dysfunctions of a Team, the absence of vulnerability-based trust – trust in which team members are comfortable being vulnerable with each other, rely on each other when asking for guidance, and are willing to admit their mistakes – is identified as detrimental to a team. Team leaders have to assist each other when they are vulnerable and also allow team members to see their vulnerable side, which is contrary to orthodox belief. If a team lacks vulnerability-based trust, team members will not be willing to share ideas or acknowledge their faults for fear of being exposed as incompetent, leading to a lack of communication and hindering the team.
Fear of conflict
Contrary to general belief, conflict is a positive element in a team as it drives discussion. The fear of conflict is the fear of team members to argue with one another or disagree with the team leader. If team members hold back and are afraid of confronting their leader or teammates, then the concept of a team is non-existent because there is only one person who contributes and no new ideas are generated from discussions.
The fear of conflict in a team stems from an absence of trust, more specifically vulnerability-based trust. If team members are afraid to be vulnerable in front of one another, disputes can be manipulative and a means to overthrow and shame the other team member. However, if team members trust each other and are comfortable being vulnerable in front of one another, then debates can be a pursuit of a better and more effective method to achieve a task.
Lack of commitment
When team members do not provide input on a decision, it shows that they do not agree with or approve of the decision, leading to a halt in team activity and progress. Furthermore, when team members do not express their opinions, views and potential ideas are lost, hurting the project and the company.
Avoidance of accountability
The avoidance of accountability in a team is the failure of team members to be accountable for the consequences of their actions. When team members do not commit to a decision, they will be unwilling to take responsibility for the outcomes of the decision.
In addition, if a lack of trust exists within the team then there will be an absence of peer to peer accountability; team members will not feel accountable towards their team members and hence will not put effort into their tasks. The team must trust and hold each other responsible so that the intention will always be for the benefit of the team and for the team to succeed.
Team leaders who are afraid of confrontation might avoid holding team members accountable when in fact they have made a mistake. Team leaders must develop the confidence to hold team members accountable so that they will feel the sense of responsibility and entitlement to the team, and learn from their mistakes. If not, then errors will not be corrected and might lead to worse problems, causing a defective team.
Inattention to results
If team leaders and team members do not hold each other accountable then they will not be concerned about the outcome of the team and whether they have achieved their goal, as they do not have a drive to obtain great results. Inattention to results causes a loss of purpose and brings into question the existence of the team.
Resolving problems through team management
Building trust
An approach to resolving fundamental trust problems within teams is to build trust amongst team members. A team leader can build trust by persuading team members to ask questions and seek guidance from other team members so that they are more familiar and comfortable in being vulnerable with one another. This may include questions such as “Could you teach me how to do this?” or statements like “You are better than me at this”. However, in order to achieve vulnerability-based trust within the team, the team leader must be vulnerable first. If the team leader is unwilling to be vulnerable, the rest of the team will be unwilling to follow.
Appraisals
Appraisals can be a way for team members to provide feedback to one another or for team members to provide advice to the leader. They allow individual members to reflect on their performance and aim to do better by amending their mistakes; furthermore, appraisals create an environment in which the chain of command is non-existent and team members can be honest with one another. This is effective in that the team can provide progressive feedback to its members and advise the leader on how to improve his or her leadership. After each member reads their appraisal, they will understand how they can strive to improve, benefitting the team in reaching its objectives. The commonly used forms of appraisal are performance appraisals, peer appraisals and 360-degree feedback.
Team building activities
Team-building activities are a series of simple exercises involving teamwork and communication. The main objectives of team-building activities are to increase trust amongst team members and allow team members to better understand one another. When choosing or designing team-building activities, it is best to determine whether the team needs an event or an experience: an event is generally fun, quick and easily run by non-professionals, while team-building experiences provide richer, more meaningful results. Experiences should be facilitated by a professional on an annual basis for teams that are growing or changing.
What makes teams effective
Team effectiveness occurs when the team has appropriate goals to complete and the confidence to accomplish those goals. Communication is also a large part of effectiveness in a team, because in order to accomplish tasks the members must negotiate ideas and information. Another aspect of effectiveness is reliability and trust. When overcoming the "storming" phase of Bruce Tuckman's stages of group development, trust is established, leading to higher levels of team cohesion and effectiveness. If there is a conflict, effectiveness allows cohesion and the ability to overcome it. Specifically for management teams, more weight falls on their shoulders because they have to direct and lead other teams. Being effective is a main priority for the team or teams involved. Unlike non-managerial teams, in which the focus is on a set of team tasks, management teams are effective only insofar as they accomplish a high level of performance by a significant business unit or an entire firm. Having support from higher-level leaders can give teams insight on how to act and make decisions, improving their effectiveness as well.
See also
Economics
Leadership
Leadership style
Management
Socionics
References
Management by type
Group processes |
57031693 | https://en.wikipedia.org/wiki/PageCloud | PageCloud | PageCloud is a software as a service, cloud-based web development platform and drag and drop website builder. The platform allows individuals and businesses to create and maintain websites. PageCloud was founded by Chief executive officer Craig Fitzpatrick and is headquartered in Ottawa, Canada.
In 2015, PageCloud was a finalist in the TechCrunch Disrupt Startup Battlefield.
History
PageCloud was founded in 2014 and officially launched in May 2015 by Craig Fitzpatrick at TechCrunch Disrupt. For seed funding, he relied on investments from other software entrepreneurs, including Shopify CEO Tobias Lütke. Fitzpatrick created the initial software after the sale of his previous startup, Source Metrics, having become frustrated with how difficult it was to build and run a website.
Finances
During its pre-sale period, the company surpassed $1 million in pre-orders, selling 10,000 licenses before the platform was made publicly available. In November 2016, PageCloud closed a $4 million Series A round, after initially projecting it would do a $10 million Series A.
Product
PageCloud is designed for companies who need websites with less than 30 pages. It provides customizable templates and a drag and drop HTML website builder that includes apps, graphics, image galleries, fonts, and more.
Leadership
Mike Grouchy - Chief Executive Officer
Mark Murray - Chief Financial Officer
Mark Stephenson - Chief of Design
References
External links
Official Website
Software companies of Canada
Computer companies of Canada
Web development software
Canadian companies established in 2015
2015 establishments in Ontario
Software companies established in 2015 |
83581 | https://en.wikipedia.org/wiki/Back%20Orifice | Back Orifice | Back Orifice (often shortened to BO) is a computer program designed for remote system administration. It enables a user to control a computer running the Microsoft Windows operating system from a remote location. The name is a play on words on Microsoft BackOffice Server software. It can also control multiple computers at the same time using imaging.
Back Orifice has a client–server architecture. A small and unobtrusive server program is on one machine, which is remotely manipulated by a client program with a graphical user interface on another computer system. The two components communicate with one another using the TCP and/or UDP network protocols. In reference to the Leet phenomenon, this program commonly runs on port 31337.
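The client–server pattern described above can be illustrated with a minimal sketch. The following Python example is not Back Orifice's actual code; the "ping" command, the function names, and the use of the loopback address are purely illustrative, with only the port number taken from the text:

    # A minimal sketch of a client-server remote-administration pattern
    # (illustrative only; not Back Orifice code).
    import socket
    import threading
    import time

    def run_server(host="127.0.0.1", port=31337):
        # Server component: wait for one client and answer a simple text command.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((host, port))
            srv.listen(1)
            conn, _addr = srv.accept()
            with conn:
                command = conn.recv(1024).decode().strip()
                conn.sendall(b"pong" if command == "ping" else b"unknown command")

    def run_client(host="127.0.0.1", port=31337):
        # Client component: send a command and print the server's reply.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((host, port))
            cli.sendall(b"ping")
            print(cli.recv(1024).decode())  # prints "pong"

    if __name__ == "__main__":
        threading.Thread(target=run_server, daemon=True).start()
        time.sleep(0.2)  # give the server a moment to start listening
        run_client()

A real remote-administration tool differs mainly in the breadth of commands the server accepts and, in Back Orifice's case, in the server's effort to remain small and unobtrusive.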
The program debuted at DEF CON 6 on August 1, 1998 and was the brainchild of Sir Dystic, a member of the U.S. hacker organization Cult of the Dead Cow. According to the group, its purpose was to demonstrate the lack of security in Microsoft's Windows 9x series of operating systems.
Although Back Orifice has legitimate purposes, such as remote administration, other factors make it suitable for illicit uses. The server can hide itself from cursory inspection by users of the system. Since the server can be installed without user interaction, it can be distributed as the payload of a Trojan horse.
For those and other reasons, the antivirus industry immediately categorized the tool as malware and appended Back Orifice to their quarantine lists. Despite this fact, it was widely used by script kiddies because of its simple GUI and ease of installation.
Two sequel applications followed it, Back Orifice 2000, released in 1999, and Deep Back Orifice by French Canadian hacking group QHA.
See also
Back Orifice 2000
Sub7
Trojan horse (computing)
Malware
Backdoor (computing)
Rootkit
MiniPanzer and MegaPanzer
File binder
References
External links
Common trojan horse payloads
Windows remote administration software
Cult of the Dead Cow software
Remote administration software |
5222906 | https://en.wikipedia.org/wiki/Computerra | Computerra | Computerra (Компьютерра) was a Russian weekly computer magazine. The first edition was released on December 21, 1992, and it was published by C&C Computer Publishing Limited (Computerra Publishing House). It later received an online counterpart at www.computerra.ru, which supplemented the contents of the print publication; due to financial problems and a lack of advertising, issue 811–812 of December 15, 2009, was announced as the last issue to be published in print, with only the online version remaining active. The cover of the last issue lacked the usual cover image, showing instead only a black rectangle and words roughly translatable as "now you can shut down your computerra", a pun on the shutdown screen of Windows 95.
The founder was Dmitriy Mendrelyuk, and the magazine was headquartered in Moscow. Mendrelyuk also founded other magazines, such as Business-Journal.
The typical audience of Computerra consisted of working men aged 25–34 with high social status and a high or medium income who used computers.
Unlike most other computer magazines, Computerra wrote not only about computer hardware and software but also published philosophical reflections on life, above all the life of "computer people".
The Alexa.com traffic rank for the official Computerra.ru website was 36,816.
History
Editors-in-chief:
Georgiy Kuznetsov (1995–1998)
Eugene Kozlovsky (1998–2004)
Sergey Leonov (2004–2006)
Dmitriy Mendrelyuk (temporary in 2006)
Vladimir Guriev (2007–2008)
Vladislav Biryukov
References
External links
Computerra website
old Computerra website
archive
Computerra Inside – editorial blog
1992 establishments in Russia
2009 disestablishments in Russia
Computer magazines published in Russia
Defunct computer magazines
Defunct magazines published in Russia
Magazines established in 1992
Magazines disestablished in 2009
Magazines published in Moscow
Online magazines with defunct print editions
Russian-language magazines
Weekly magazines published in Russia |
3359586 | https://en.wikipedia.org/wiki/Virtual%20instrumentation | Virtual instrumentation | Virtual instrumentation is the use of customizable software and modular measurement hardware to create user-defined measurement systems, called virtual instruments.
Traditional hardware instrumentation systems are made up of fixed hardware components, such as digital multimeters and oscilloscopes, that are completely specific to their stimulus, analysis, or measurement function. Because of their hard-coded function, these systems are more limited in their versatility than virtual instrumentation systems. The primary difference between hardware instrumentation and virtual instrumentation is that software is used to replace a large amount of hardware. The software enables complex and expensive dedicated hardware to be replaced by already purchased computer hardware; e.g., an analog-to-digital converter can act as the hardware complement of a virtual oscilloscope, and a potentiostat enables frequency-response acquisition and analysis in electrochemical impedance spectroscopy with virtual instrumentation.
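As a rough illustration of this division of labor, the following Python sketch shows measurement-specific work done in software on top of generic acquisition hardware. The read_samples() function is a hypothetical stand-in for an ADC driver call (here it synthesizes a test signal), and the measurements a dedicated oscilloscope would implement in hardware are plain code:

    # A minimal "virtual oscilloscope" sketch; all names are illustrative,
    # not a real driver API.
    import math

    def read_samples(n, sample_rate_hz):
        # Hypothetical ADC driver call; here it synthesizes a 50 Hz sine wave.
        return [math.sin(2 * math.pi * 50 * i / sample_rate_hz) for i in range(n)]

    def measure(samples, sample_rate_hz):
        # Software replaces dedicated hardware: peak-to-peak and RMS amplitude.
        vpp = max(samples) - min(samples)
        rms = math.sqrt(sum(s * s for s in samples) / len(samples))
        return {"peak_to_peak": vpp, "rms": rms,
                "duration_s": len(samples) / sample_rate_hz}

    rate = 10_000
    print(measure(read_samples(1000, rate), rate))

Swapping the placeholder for a real driver call would turn the same measurement code into a different instrument, which is the core idea behind a virtual instrument.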
The concept of a synthetic instrument is a subset of the virtual instrument concept. A synthetic instrument is a kind of virtual instrument that is purely software defined. A synthetic instrument performs a specific synthesis, analysis, or measurement function on completely generic, measurement agnostic hardware. Virtual instruments can still have measurement specific hardware, and tend to emphasize modular hardware approaches that facilitate this specificity. Hardware supporting synthetic instruments is by definition not specific to the measurement, nor is it necessarily (or usually) modular.
Leveraging commercially available technologies, such as the PC and the analog-to-digital converter, virtual instrumentation has grown significantly since its inception in the late 1970s. Additionally, software packages like National Instruments' LabVIEW and other graphical programming languages helped grow adoption by making it easier for non-programmers to develop systems.
Some companies have developed a newer technology called "hard virtual instrumentation". With this technology, the execution of the software is said to be performed by the hardware itself, which can help with fast real-time processing.
See also
Measuring instrument
Electronic engineering
Measurement
Measuring instruments |
12102506 | https://en.wikipedia.org/wiki/Tamil%20Nadu%20College%20of%20Engineering | Tamil Nadu College of Engineering | Tamil Nadu College of Engineering also known as TCE is situated at Karumathampatti, Coimbatore, Tamil Nadu, India. The college was established in 1984. The college is affiliated to Anna University. This college is managed by the Tamil Nadu Technical Education Foundation.
The institution was founded by the philanthropists Lion T.N. Palanisame and Dr. P.V. Ravi.
Location
The college is situated near NH 47 in Karumathampatti, 30 km from the industrial city of Coimbatore, on the way to Avinashi. The TCE campus is at Chenniyandavar Kovil, 4 km from Karumathampatti along NH 47. Karumathampatti is a suburb of Coimbatore, under the Coimbatore corporation.
Courses
Undergraduate courses
BE Automobile Engineering (60 seats)
BE Civil Engineering (30 seats)
BE Computer Science and Engineering (120 seats)
BE Electrical and Electronics Engineering (60 seats)
BE Electronics & Communication Engineering (120 seats)
BTech Information Technology (120 seats)
BE Instrumentation and Control Engineering (30 seats)
BE Mechanical Engineering (120 seats)
Postgraduate courses
ME Structural Engineering (18 seats)
ME Computer and Communication (18 seats)
MBA (Master of Business Administration) (60 seats)
MCA (Master of Computer Application) (30 seats)
Ph.D
Computer Science
Facilities
Library: The college library is the main source of books, periodicals and other study materials for students during their courses.
Computing facilities: The college has two computer laboratories equipped with around 80 computers.
On-campus student residence: Halls of residence are available separately for men and women students. The men's residence can accommodate 300 students and the women's residence 100 students. There is a separate block for first-year students.
Techno park
Auditorium: The fully air-conditioned auditorium, with a seating capacity of 1,200 and a large stage, is used to conduct cultural festivals and functions.
Seminar halls: There are four conference halls with ample seating capacity to conduct seminars, symposiums, conferences and meetings, with full-fledged computing facilities.
Other amenities
Canteen with a seating capacity of 100 members at a time.
Water treatment plant in the hostel (RO system)
ATM facility of Syndicate Bank
College buses are available for the transport of day-scholar students and staff.
Reverse osmosis plants
Field of study
Department of Civil Engineering
The department was established in 2006. The department offers electives in subjects such as GIS, FEM, Repair and Rehabilitation of Structure and Pre-fabricated Structures. It has facilities for tests on building materials, and soil and water samples. The department offers consultancy services like topographical survey including contouring, analysis and design of concrete and steel structures, water sample analysis and building materials testing for many private, state and central government organizations.
Department of Mechanical Engineering
The department was established in 2006.
The department provides value-added design courses to enhance the employability of its students, and conducts seminars and workshops each semester for students of all years.
Department of Electronics and Communication Engineering
The department started in 2006; its ME programme in Computer and Communication started in 2005. M. Karthikeyan is the head of the department.
Department of Computer Science and Engineering
The department started in 2006. It offers a bachelor's degree in Computer Science and Engineering.
Department of Information Technology
The department started in 2006.
Department of Electrical and Electronics Engineering
The department started in 2006.
Student bodies
Department Associations
Association of Civil Engineering
Association of Mechanical Engineering
Association of Electronics & Communication Engineering
Association of Electrical & Electronics Engineering
Association of Computer Science and Engineering
Association of Information Technology
Every academic year, the departments organize national level technical symposiums.
Extra-curricular activities
Voluntary Blood donation Club
Tamil Mandram
Literary Association
Sports
The Department of Physical Education is in charge of the sports activities of the college. The college has outdoor and indoor games facilities, available in a total playground area comprising:
Concrete basketball court
Volleyball court
Football field
Ball badminton court
Cricket ground
Notable alumni
Tovino Thomas, Malayalam Film Actor
Placement and training
The Placement and Training Cell:
advises students on career options with cost,
provides information on training and employment opportunities with cost,
provides guidance and counselling services,
arranges for placement with cost,
arranges courses on personality development with cost,
conducts seminars, lectures, career guidance programmes and entrepreneurship development programmes.
Events
Tamil Nadu College of Engineering has hosted events such as Microsoft DreamSpark Yatra, TEDxCoimbatore for TEDxChange, and an IEEE conference.
Engineering colleges in Coimbatore
Educational institutions established in 1984
1984 establishments in Tamil Nadu |
43480787 | https://en.wikipedia.org/wiki/Rosslyn%20Analytics | Rosslyn Analytics | Rosslyn Data Technologies (also known as Rosslyn Analytics) is a software company providing procurement and master data management solutions. Its procurement portfolio includes software for spend analytics, supplier information management, supplier performance management and supplier contract management. Its master data management solutions include data extraction, data cleansing and data enrichment, all available via a cloud-based data platform.
The company is listed on AIM, a sub-market of the London Stock Exchange, and is headquartered in London with offices in New York City and Chicago, Illinois.
History
Rosslyn Analytics was founded in 2007 by Charles Clark and Hugh Cox. The company was named after Rosslyn Chapel in Midlothian, Scotland, which is famous for its mysterious carvings, believed to contain a message or even music that has never been verifiably decoded. In 2009, the company launched its first cloud-based business intelligence product - a self-service spend analytics app for its RAPid data platform. The RAPid platform won several awards for innovation and excellence, including the Technology Excellence Award for “Best New Product”.
In April 2011, as a public service to help the UK Government reduce the deficit by getting better control of its purchasing, Rosslyn Analytics offered to provide free spend analytics to the government in just 30 days. The government refused the offer.
In January 2012, Rosslyn Analytics entered the United States, establishing offices in New York City and Denver, Colorado.
Rosslyn Analytics was the first technology company to bring QlikView into the cloud in 2012.
In 2013, Rosslyn Analytics was featured in Gartner’s Magic Quadrant for Strategic Sourcing Application Suites.
In February 2014, the company filed its first patent with the UK Intellectual Property Office on machine-learning. In April, Rosslyn Analytics went public on AIM under the name of Rosslyn Data Technologies Group plc. In May 2014, Rosslyn Analytics joined the UK government’s G-Cloud 5 Framework for Software-as-a-Service (SaaS) providers.
Recognition
In January 2011, JMP Securities recognized Rosslyn Analytics as one of its “Hot 100” software companies to watch.
In March 2013, Bessemer named Rosslyn Analytics as one of the top Business Intelligence / Analytics cloud companies.
In December 2015, Rosslyn Analytics was named a strong performer in the Forrester Research report, “The Forrester Wave™: Cloud Business Intelligence Platforms, Q4 2015.”
In May 2016, Rosslyn Analytics was recognized as a finalist for the 2016 Microsoft Data Platform Partner of the Year award.
In March 2017, Rosslyn Data Technologies was shortlisted for analytics product of the year in the UK Cloud Awards.
Products
The company’s flagship product is its cloud-based data platform called RAPid. RAPid provides self-service data integration, cleansing, enrichment, analysis and visualization capabilities via a single platform. Business users access the benefits of the platform via automated and manual self-service tools. The RAPid platform comes with a number of pre-built analytical apps sold as modular solutions such as spend analytics, people analytics, forensics analytics, expense analytics and supplier performance management.
Founders
Rosslyn Analytics was founded by Charles Clark and Hugh Cox, two former British Army officers. Clark has been recognized for his leadership by Supply & Demand Chain Executive, which named him a "Pro to Know" from 2010 to 2015. Cox is the company's Chief Data Officer.
References
British companies established in 2007
Business intelligence companies
Business software companies
Companies listed on the Alternative Investment Market
Software companies based in London
Software companies established in 2007 |
62331725 | https://en.wikipedia.org/wiki/Linux%20kernel%20version%20history | Linux kernel version history | The Linux kernel is a free and open-source, monolithic, Unix-like operating system kernel. It was conceived and created in 1991 by Linus Torvalds.
Linux kernels have different support levels depending on the version. Version 4.4, released in January 2016, was declared to have Long-Term Support (LTS), giving it six years of support; it was also designated for Super Long-Term Support (SLTS), meaning the Civil Infrastructure Platform will provide support (for 32-bit ARM and x86-64 only) until at least 2026, and possibly until 2036. It is currently the oldest supported version.
Releases 5.x.y
Releases 4.x.y
Releases 3.x.y
The jump from 2.6.x to 3.x was not caused by a breaking change; it was simply the first release under a new versioning scheme introduced as a more convenient system.
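In practical terms, the field carrying the feature-release number moved one position to the left. The following Python sketch (illustrative only) shows how release strings from the two schemes decompose:

    # Illustrative decomposition of the two kernel versioning schemes.
    def parse_release(release):
        parts = [int(p) for p in release.split(".")]
        if parts[0] == 2:  # old scheme: 2.6.FEATURE[.PATCH]
            return {"series": "2.6", "feature": parts[2],
                    "patch": parts[3] if len(parts) > 3 else 0}
        # new scheme: MAJOR.FEATURE[.PATCH], e.g. 3.x.y, 4.x.y, 5.x.y
        return {"series": str(parts[0]), "feature": parts[1],
                "patch": parts[2] if len(parts) > 2 else 0}

    print(parse_release("2.6.39.4"))  # {'series': '2.6', 'feature': 39, 'patch': 4}
    print(parse_release("3.0.4"))     # {'series': '3', 'feature': 0, 'patch': 4}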
Releases 2.6.x.y
Versions 2.6.16 and 2.6.27 of the Linux kernel were unofficially supported in a long-term support (LTS) fashion, before a 2011 working group in the Linux Foundation started a formal long-term support initiative.
Releases before 2.6.0
See also
Linux adoption
Linux kernel
History of Linux
Timeline of free and open-source software
References
External links
Official Linux kernel website
Active kernel releases on the official Linux kernel website
Linux versions in Linux Kernel Newbies
Linux kernel
Software version histories |
21939 | https://en.wikipedia.org/wiki/National%20Security%20Agency | National Security Agency | The National Security Agency (NSA) is a national-level intelligence agency of the United States Department of Defense, under the authority of the Director of National Intelligence (DNI). The NSA is responsible for global monitoring, collection, and processing of information and data for foreign and domestic intelligence and counterintelligence purposes, specializing in a discipline known as signals intelligence (SIGINT). The NSA is also tasked with the protection of U.S. communications networks and information systems. The NSA relies on a variety of measures to accomplish its mission, the majority of which are clandestine. The existence of the NSA was not revealed until 1975.
Originating as a unit to decipher coded communications in World War II, it was officially formed as the NSA by President Harry S. Truman in 1952. Between then and the end of the Cold War, it became the largest of the U.S. intelligence organizations in terms of personnel and budget, but information available as of 2013 indicates that the CIA pulled ahead in this regard, with a budget of $14.7 billion. The NSA currently conducts worldwide mass data collection and has been known to physically bug electronic systems as one method to this end. The NSA is also alleged to have been behind such attack software as Stuxnet, which severely damaged Iran's nuclear program. The NSA, alongside the Central Intelligence Agency (CIA), maintains a physical presence in many countries across the globe; the CIA/NSA joint Special Collection Service (a highly classified intelligence team) inserts eavesdropping devices in high value targets (such as presidential palaces or embassies). SCS collection tactics allegedly encompass "close surveillance, burglary, wiretapping, [and] breaking and entering".
Unlike the CIA and the Defense Intelligence Agency (DIA), both of which specialize primarily in foreign human espionage, the NSA does not publicly conduct human-source intelligence gathering. The NSA is entrusted with providing assistance to, and the coordination of, SIGINT elements for other government organizations – which are prevented by law from engaging in such activities on their own. As part of these responsibilities, the agency has a co-located organization called the Central Security Service (CSS), which facilitates cooperation between the NSA and other U.S. defense cryptanalysis components. To further ensure streamlined communication between the signals intelligence community divisions, the NSA Director simultaneously serves as the Commander of the United States Cyber Command and as Chief of the Central Security Service.
The NSA's actions have been a matter of political controversy on several occasions, including its spying on anti–Vietnam War leaders and the agency's participation in economic espionage. In 2013, the NSA had many of its secret surveillance programs revealed to the public by Edward Snowden, a former NSA contractor. According to the leaked documents, the NSA intercepts and stores the communications of over a billion people worldwide, including United States citizens. The documents also revealed the NSA tracks hundreds of millions of people's movements using cellphones' metadata. Internationally, research has pointed to the NSA's ability to surveil the domestic Internet traffic of foreign countries through "boomerang routing".
History
Formation
The origins of the National Security Agency can be traced back to April 28, 1917, three weeks after the U.S. Congress declared war on Germany in World War I. A code and cipher decryption unit was established as the Cable and Telegraph Section which was also known as the Cipher Bureau. It was headquartered in Washington, D.C. and was part of the war effort under the executive branch without direct Congressional authorization. During the course of the war, it was relocated in the army's organizational chart several times. On July 5, 1917, Herbert O. Yardley was assigned to head the unit. At that point, the unit consisted of Yardley and two civilian clerks. It absorbed the Navy's cryptanalysis functions in July 1918. World War I ended on November 11, 1918, and the army cryptographic section of Military Intelligence (MI-8) moved to New York City on May 20, 1919, where it continued intelligence activities as the Code Compilation Company under the direction of Yardley.
The Black Chamber
After the disbandment of the U.S. Army cryptographic section of military intelligence, known as MI-8, in 1919, the U.S. government created the Cipher Bureau, also known as Black Chamber. The Black Chamber was the United States' first peacetime cryptanalytic organization. Jointly funded by the Army and the State Department, the Cipher Bureau was disguised as a New York City commercial code company; it actually produced and sold such codes for business use. Its true mission, however, was to break the communications (chiefly diplomatic) of other nations. At the Washington Naval Conference, it aided American negotiators by providing them with the decrypted traffic of many of the conference delegations, including the Japanese. The Black Chamber successfully persuaded Western Union, the largest U.S. telegram company at the time, as well as several other communications companies to illegally give the Black Chamber access to cable traffic of foreign embassies and consulates. Soon, these companies publicly discontinued their collaboration.
Despite the Chamber's initial successes, it was shut down in 1929 by U.S. Secretary of State Henry L. Stimson, who defended his decision by stating, "Gentlemen do not read each other's mail."
World War II and its aftermath
During World War II, the Signal Intelligence Service (SIS) was created to intercept and decipher the communications of the Axis powers. When the war ended, the SIS was reorganized as the Army Security Agency (ASA), and it was placed under the leadership of the Director of Military Intelligence.
On May 20, 1949, all cryptologic activities were centralized under a national organization called the Armed Forces Security Agency (AFSA). This organization was originally established within the U.S. Department of Defense under the command of the Joint Chiefs of Staff. The AFSA was tasked to direct Department of Defense communications and electronic intelligence activities, except those of U.S. military intelligence units. However, the AFSA was unable to centralize communications intelligence and failed to coordinate with civilian agencies that shared its interests such as the Department of State, Central Intelligence Agency (CIA) and the Federal Bureau of Investigation (FBI). In December 1951, President Harry S. Truman ordered a panel to investigate how AFSA had failed to achieve its goals. The results of the investigation led to improvements and its redesignation as the National Security Agency.
The National Security Council issued a memorandum of October 24, 1952, that revised National Security Council Intelligence Directive (NSCID) 9. On the same day, Truman issued a second memorandum that called for the establishment of the NSA. The actual establishment of the NSA was done by a November 4 memo by Robert A. Lovett, the Secretary of Defense, changing the name of the AFSA to the NSA, and making the new agency responsible for all communications intelligence. Since President Truman's memo was a classified document, the existence of the NSA was not known to the public at that time. Due to its ultra-secrecy the U.S. intelligence community referred to the NSA as "No Such Agency".
Vietnam War
In the 1960s, the NSA played a key role in expanding U.S. commitment to the Vietnam War by providing evidence of a North Vietnamese attack on the American destroyer during the Gulf of Tonkin incident.
A secret operation, code-named "MINARET", was set up by the NSA to monitor the phone communications of Senators Frank Church and Howard Baker, as well as key leaders of the civil rights movement, including Martin Luther King Jr., and prominent U.S. journalists and athletes who criticized the Vietnam War. However, the project turned out to be controversial, and an internal review by the NSA concluded that its Minaret program was "disreputable if not outright illegal".
The NSA mounted a major effort to secure tactical communications among U.S. forces during the war with mixed success. The NESTOR family of compatible secure voice systems it developed was widely deployed during the Vietnam War, with about 30,000 NESTOR sets produced. However, a variety of technical and operational problems limited their use, allowing the North Vietnamese to exploit and intercept U.S. communications.
Church Committee hearings
In the aftermath of the Watergate scandal, a congressional hearing in 1975 led by Senator Frank Church revealed that the NSA, in collaboration with Britain's SIGINT intelligence agency Government Communications Headquarters (GCHQ), had routinely intercepted the international communications of prominent anti-Vietnam war leaders such as Jane Fonda and Dr. Benjamin Spock. The Agency tracked these individuals in a secret filing system that was destroyed in 1974. Following the resignation of President Richard Nixon, there were several investigations of suspected misuse of FBI, CIA and NSA facilities. Senator Frank Church uncovered previously unknown activity, such as a CIA plot (ordered by the administration of President John F. Kennedy) to assassinate Fidel Castro. The investigation also uncovered NSA's wiretaps on targeted U.S. citizens.
After the Church Committee hearings, the Foreign Intelligence Surveillance Act of 1978 was passed. This was designed to limit the practice of mass surveillance in the United States.
From 1980s to 1990s
In 1986, the NSA intercepted the communications of the Libyan government during the immediate aftermath of the Berlin discotheque bombing. The White House asserted that the NSA interception had provided "irrefutable" evidence that Libya was behind the bombing, which U.S. President Ronald Reagan cited as a justification for the 1986 United States bombing of Libya.
In 1999, a multi-year investigation by the European Parliament highlighted the NSA's role in economic espionage in a report entitled 'Development of Surveillance Technology and Risk of Abuse of Economic Information'. That year, the NSA founded the NSA Hall of Honor, a memorial at the National Cryptologic Museum in Fort Meade, Maryland. The memorial is a "tribute to the pioneers and heroes who have made significant and long-lasting contributions to American cryptology". NSA employees must be retired for more than fifteen years to qualify for the memorial.
NSA's infrastructure deteriorated in the 1990s as defense budget cuts resulted in maintenance deferrals. On January 24, 2000, NSA headquarters suffered a total network outage for three days caused by an overloaded network. Incoming traffic was successfully stored on agency servers, but it could not be directed and processed. The agency carried out emergency repairs at a cost of $3 million to get the system running again. (Some incoming traffic was also directed instead to Britain's GCHQ for the time being.) Director Michael Hayden called the outage a "wake-up call" for the need to invest in the agency's infrastructure.
In the 1990s the defensive arm of the NSA—the Information Assurance Directorate (IAD)—started working more openly; the first public technical talk by an NSA scientist at a major cryptography conference was J. Solinas' presentation on efficient Elliptic Curve Cryptography algorithms at Crypto 1997. The IAD's cooperative approach to academia and industry culminated in its support for a transparent process for replacing the outdated Data Encryption Standard (DES) by an Advanced Encryption Standard (AES). Cybersecurity policy expert Susan Landau attributes the NSA's harmonious collaboration with industry and academia in the selection of the AES in 2000—and the Agency's support for the choice of a strong encryption algorithm designed by Europeans rather than by Americans—to Brian Snow, who was the Technical Director of IAD and represented the NSA as cochairman of the Technical Working Group for the AES competition, and Michael Jacobs, who headed IAD at the time.
After the terrorist attacks of September 11, 2001, the NSA believed that it had public support for a dramatic expansion of its surveillance activities. According to Neal Koblitz and Alfred Menezes, the period when the NSA was a trusted partner with academia and industry in the development of cryptographic standards started to come to an end when, as part of the change in the NSA in the post-September 11 era, Snow was replaced as Technical Director, Jacobs retired, and IAD could no longer effectively oppose proposed actions by the offensive arm of the NSA.
War on Terror
In the aftermath of the September 11 attacks, the NSA created new IT systems to deal with the flood of information from new technologies like the Internet and cellphones. ThinThread contained advanced data mining capabilities. It also had a "privacy mechanism"; surveillance was stored encrypted; decryption required a warrant. The research done under this program may have contributed to the technology used in later systems. ThinThread was cancelled when Michael Hayden chose Trailblazer, which did not include ThinThread's privacy system.
Trailblazer Project ramped up in 2002 and was worked on by Science Applications International Corporation (SAIC), Boeing, Computer Sciences Corporation, IBM, and Litton Industries. Some NSA whistleblowers complained internally about major problems surrounding Trailblazer. This led to investigations by Congress and the NSA and DoD Inspectors General. The project was cancelled in early 2004.
Turbulence started in 2005. It was developed in small, inexpensive "test" pieces, rather than as one grand plan like Trailblazer. It also included offensive cyber-warfare capabilities, like injecting malware into remote computers. Congress criticized Turbulence in 2007 for having bureaucratic problems similar to Trailblazer's. It was intended to realize information processing at higher speeds in cyberspace.
Global surveillance disclosures
The massive extent of the NSA's spying, both foreign and domestic, was revealed to the public in a series of detailed disclosures of internal NSA documents beginning in June 2013. Most of the disclosures were leaked by former NSA contractor Edward Snowden. On 4 September 2020, the NSA's surveillance program was ruled unlawful by a US federal appeals court, which added that the US intelligence leaders who had publicly defended it were not telling the truth.
Mission
NSA's eavesdropping mission includes radio broadcasts from various organizations and individuals, the Internet, telephone calls, and other intercepted forms of communication. Its secure communications mission includes military, diplomatic, and all other sensitive, confidential or secret government communications.
According to a 2010 article in The Washington Post, "[e]very day, collection systems at the National Security Agency intercept and store 1.7 billion e-mails, phone calls and other types of communications. The NSA sorts a fraction of those into 70 separate databases."
Because of its listening task, NSA/CSS has been heavily involved in cryptanalytic research, continuing the work of predecessor agencies which had broken many World War II codes and ciphers (see, for instance, Purple, Venona project, and JN-25).
In 2004, NSA Central Security Service and the National Cyber Security Division of the Department of Homeland Security (DHS) agreed to expand the NSA Centers of Academic Excellence in Information Assurance Education Program.
As part of the National Security Presidential Directive 54/Homeland Security Presidential Directive 23 (NSPD 54), signed on January 8, 2008, by President Bush, the NSA became the lead agency to monitor and protect all of the federal government's computer networks from cyber-terrorism.
A part of NSA's mission is to serve as a combat support agency for the Department of Defense.
Operations
Operations by the National Security Agency can be divided into three types:
Collection overseas, which falls under the responsibility of the Global Access Operations (GAO) division.
Domestic collection, which falls under the responsibility of the Special Source Operations (SSO) division.
Hacking operations, which fall under the responsibility of the Tailored Access Operations (TAO) division.
Collection overseas
Echelon
"Echelon" was created in the incubator of the Cold War. Today it is a legacy system, and several NSA stations are closing.
NSA/CSS, in combination with the equivalent agencies in the United Kingdom (Government Communications Headquarters), Canada (Communications Security Establishment), Australia (Australian Signals Directorate), and New Zealand (Government Communications Security Bureau), otherwise known as the UKUSA group, was reported to be in command of the operation of the so-called ECHELON system. Its capabilities were suspected to include the ability to monitor a large proportion of the world's transmitted civilian telephone, fax and data traffic.
During the early 1970s, the first of what became more than eight large satellite communications dishes were installed at Menwith Hill. Investigative journalist Duncan Campbell reported in 1988 on the "ECHELON" surveillance program, an extension of the UKUSA Agreement on global signals intelligence (SIGINT), and detailed how the eavesdropping operations worked. On November 3, 1999, the BBC reported that they had confirmation from the Australian Government of the existence of a powerful "global spying network" code-named Echelon, that could "eavesdrop on every single phone call, fax or e-mail, anywhere on the planet" with Britain and the United States as the chief protagonists. They confirmed that Menwith Hill was "linked directly to the headquarters of the US National Security Agency (NSA) at Fort Meade in Maryland".
NSA's United States Signals Intelligence Directive 18 (USSID 18) strictly prohibited the interception or collection of information about "... U.S. persons, entities, corporations or organizations...." without explicit written legal permission from the United States Attorney General when the subject is located abroad, or the Foreign Intelligence Surveillance Court when within U.S. borders. Alleged Echelon-related activities, including its use for motives other than national security, including political and industrial espionage, received criticism from countries outside the UKUSA alliance.
Other SIGINT operations overseas
The NSA was also involved in planning to blackmail people with "SEXINT", intelligence gained about a potential target's sexual activity and preferences. Those targeted had not committed any apparent crime nor were they charged with one.
In order to support its facial recognition program, the NSA is intercepting "millions of images per day".
The Real Time Regional Gateway is a data collection program introduced in 2005 in Iraq by the NSA during the Iraq War that consisted of gathering all electronic communication, storing it, then searching and otherwise analyzing it. It was effective in providing information about Iraqi insurgents who had eluded less comprehensive techniques. This "collect it all" strategy, introduced by NSA director Keith B. Alexander, is believed by Glenn Greenwald of The Guardian to be the model for the comprehensive worldwide mass archiving of communications in which the NSA was engaged as of 2013.
A dedicated unit of the NSA locates targets for the CIA for extrajudicial assassination in the Middle East. The NSA has also spied extensively on the European Union, the United Nations and numerous governments including allies and trading partners in Europe, South America and Asia.
In June 2015, WikiLeaks published documents showing that NSA spied on French companies.
In July 2015, WikiLeaks published documents showing that the NSA had spied on federal German ministries since the 1990s. Even the cellphones of Germany's Chancellor Angela Merkel and her predecessors had been intercepted.
Boundless Informant
Edward Snowden revealed in June 2013 that between February 8 and March 8, 2013, the NSA collected about 124.8 billion telephone data items and 97.1 billion computer data items throughout the world, as was displayed in charts from an internal NSA tool codenamed Boundless Informant. Initially, it was reported that some of these data reflected eavesdropping on citizens in countries like Germany, Spain and France, but later on, it became clear that those data were collected by European agencies during military missions abroad and were subsequently shared with NSA.
Bypassing encryption
In 2013, reporters uncovered a secret memo claiming that in 2006 the NSA had created the Dual EC DRBG encryption standard, which contained built-in vulnerabilities, and had pushed for its adoption by the United States National Institute of Standards and Technology (NIST) and the International Organization for Standardization (ISO). This memo appears to give credence to previous speculation by cryptographers at Microsoft Research. Edward Snowden claims that the NSA often bypasses encryption altogether by lifting information before it is encrypted or after it is decrypted.
XKeyscore rules (as specified in a file xkeyscorerules100.txt, sourced by German TV stations NDR and WDR, who claim to have excerpts from its source code) reveal that the NSA tracks users of privacy-enhancing software tools, including Tor; an anonymous email service provided by the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) in Cambridge, Massachusetts; and readers of the Linux Journal.
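As the reporting above describes, the leaked rules amount to fingerprint definitions matched against observed traffic. The Python sketch below mimics only that general shape; the fingerprint names and patterns are invented for illustration and are not the leaked rule syntax.

    import re

    # Invented fingerprints in the spirit of the leaked rules; not the real syntax.
    FINGERPRINTS = {
        "anonymizer/tor": re.compile(r"torproject\.org|tor-browser"),
        "linux_journal":  re.compile(r"linuxjournal\.com"),
    }

    def tag(url):
        # Return the names of all fingerprints the observed URL matches.
        return [name for name, pattern in FINGERPRINTS.items() if pattern.search(url)]

    print(tag("https://www.torproject.org/download/"))   # ['anonymizer/tor']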
Software backdoors
Linus Torvalds, the founder of the Linux kernel, joked during a LinuxCon keynote on September 18, 2013, that the NSA, the founder of SELinux, wanted a backdoor in the kernel. However, Linus's father, a Member of the European Parliament (MEP), later revealed that the NSA had actually made such a request.
IBM Notes was the first widely adopted software product to use public key cryptography for client–server and server–server authentication and for encryption of data. Until US laws regulating encryption were changed in 2000, IBM and Lotus were prohibited from exporting versions of Notes that supported symmetric encryption keys that were longer than 40 bits. In 1997, Lotus negotiated an agreement with the NSA that allowed the export of a version that supported stronger keys with 64 bits, but 24 of the bits were encrypted with a special key and included in the message to provide a "workload reduction factor" for the NSA. This strengthened the protection for users of Notes outside the US against private-sector industrial espionage, but not against spying by the US government.
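The arithmetic behind the "workload reduction factor" is straightforward, as the back-of-the-envelope Python lines below show: with 24 of the 64 key bits escrowed, the NSA's residual brute-force search was the same 2^40 it faced for export-grade keys, while other attackers faced the full 2^64. Figures follow the arrangement described above and are purely illustrative.

    full_keyspace = 2 ** 64          # brute-force work facing everyone else
    nsa_keyspace  = 2 ** (64 - 24)   # residual work for NSA after reading the 24 escrowed bits
    # nsa_keyspace == 2**40, the same search NSA faced for export-grade 40-bit keys.
    print(f"workload reduction factor: 2^24 = {full_keyspace // nsa_keyspace:,}")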
Boomerang routing
While it is assumed that foreign transmissions terminating in the U.S. (such as a non-U.S. citizen accessing a U.S. website) subject non-U.S. citizens to NSA surveillance, recent research into boomerang routing has raised new concerns about the NSA's ability to surveil the domestic Internet traffic of foreign countries. Boomerang routing occurs when an Internet transmission that originates and terminates in a single country transits another. Research at the University of Toronto has suggested that approximately 25% of Canadian domestic traffic may be subject to NSA surveillance activities as a result of the boomerang routing of Canadian Internet service providers.
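The boomerang test itself is simple, as this minimal Python sketch shows: a route whose endpoints share a country but whose intermediate hops do not. The hop-to-country mapping is assumed to come from a GeoIP lookup, and the example paths are invented.

    def is_boomerang(hop_countries):
        # True if a path starting and ending in one country transits another.
        if hop_countries[0] != hop_countries[-1]:
            return False              # endpoints differ: not a domestic route
        home = hop_countries[0]
        return any(c != home for c in hop_countries[1:-1])

    # A Toronto-to-Vancouver path detouring through U.S. exchange points:
    print(is_boomerang(["CA", "CA", "US", "US", "CA"]))   # True
    print(is_boomerang(["CA", "CA", "CA"]))               # False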
Hardware implanting
A document included in NSA files released with Glenn Greenwald's book No Place to Hide details how the agency's Tailored Access Operations (TAO) and other NSA units gain access to hardware. They intercept routers, servers and other network hardware being shipped to organizations targeted for surveillance and install covert implant firmware onto them before they are delivered. This was described by an NSA manager as "some of the most productive operations in TAO because they preposition access points into hard target networks around the world."
Computers seized by the NSA due to interdiction are often modified with a physical device known as Cottonmouth. Cottonmouth is a device that can be inserted in the USB port of a computer in order to establish remote access to the targeted machine. According to NSA's Tailored Access Operations (TAO) group implant catalog, after implanting Cottonmouth, the NSA can establish a network bridge "that allows the NSA to load exploit software onto modified computers as well as allowing the NSA to relay commands and data between hardware and software implants."
Domestic collection
NSA's mission, as set forth in Executive Order 12333 in 1981, is to collect information that constitutes "foreign intelligence or counterintelligence" while not "acquiring information concerning the domestic activities of United States persons". NSA has declared that it relies on the FBI to collect information on foreign intelligence activities within the borders of the United States, while confining its own activities within the United States to the embassies and missions of foreign nations.
A purported "Domestic Surveillance Directorate" of the NSA that surfaced in 2013 was soon exposed as a hoax.
NSA's domestic surveillance activities are limited by the requirements imposed by the Fourth Amendment to the U.S. Constitution. The Foreign Intelligence Surveillance Court for example held in October 2011, citing multiple Supreme Court precedents, that the Fourth Amendment prohibitions against unreasonable searches and seizures apply to the contents of all communications, whatever the means, because "a person's private communications are akin to personal papers." However, these protections do not apply to non-U.S. persons located outside of U.S. borders, so the NSA's foreign surveillance efforts are subject to far fewer limitations under U.S. law. The specific requirements for domestic surveillance operations are contained in the Foreign Intelligence Surveillance Act of 1978 (FISA), which does not extend protection to non-U.S. citizens located outside of U.S. territory.
President's Surveillance Program
George W. Bush, president during the 9/11 terrorist attacks, approved the Patriot Act shortly after the attacks to take anti-terrorist security measures. Titles 1, 2, and 9 specifically authorized measures that would be taken by the NSA. These titles granted enhanced domestic security against terrorism, surveillance procedures, and improved intelligence, respectively. On March 10, 2004, a debate took place between President Bush and White House Counsel Alberto Gonzales on one side and Attorney General John Ashcroft and Acting Attorney General James Comey on the other. The attorneys general were unsure whether the NSA's programs could be considered constitutional and threatened to resign over the matter, but ultimately the NSA's programs continued. On March 11, 2004, President Bush signed a new authorization for mass surveillance of Internet records, in addition to the surveillance of phone records. This allowed the president to override laws such as the Foreign Intelligence Surveillance Act, which protected civilians from mass surveillance. In addition, President Bush signed a statement that the mass surveillance measures were retroactively in place.
One such surveillance program, authorized under U.S. Signals Intelligence Directive 18 during the presidency of George W. Bush, was the Highlander Project undertaken for the National Security Agency by the U.S. Army 513th Military Intelligence Brigade. NSA relayed telephone (including cell phone) conversations obtained from ground, airborne, and satellite monitoring stations to various U.S. Army Signal Intelligence Officers, including the 201st Military Intelligence Battalion. Conversations of citizens of the U.S. were intercepted, along with those of other nations.
Proponents of the surveillance program claim that the President has executive authority to order such action, arguing that laws such as FISA are overridden by the President's Constitutional powers. In addition, some argued that FISA was implicitly overridden by a subsequent statute, the Authorization for Use of Military Force, although the Supreme Court's ruling in Hamdan v. Rumsfeld deprecates this view.
The PRISM program
Under the PRISM program, which started in 2007, NSA gathers Internet communications from foreign targets from nine major U.S. Internet-based communication service providers: Microsoft, Yahoo, Google, Facebook, PalTalk, AOL, Skype, YouTube and Apple. Data gathered include email, videos, photos, VoIP chats such as Skype, and file transfers.
Former NSA director General Keith Alexander claimed that in September 2009 the NSA prevented Najibullah Zazi and his friends from carrying out a terrorist attack. However, this claim has been debunked and no evidence has been presented demonstrating that the NSA has ever been instrumental in preventing a terrorist attack.
Hacking operations
Besides the more traditional ways of eavesdropping in order to collect signals intelligence, NSA is also engaged in hacking computers, smartphones and their networks. A division which conducts such operations is the Tailored Access Operations (TAO) division, which has been active since at least 1998.
According to Foreign Policy magazine, "... the Office of Tailored Access Operations, or TAO, has successfully penetrated Chinese computer and telecommunications systems for almost 15 years, generating some of the best and most reliable intelligence information about what is going on inside the People's Republic of China."
In an interview with Wired magazine, Edward Snowden said the Tailored Access Operations division accidentally caused Syria's internet blackout in 2012.
Organizational structure
The NSA is led by the Director of the National Security Agency (DIRNSA), who also serves as Chief of the Central Security Service (CHCSS) and Commander of the United States Cyber Command (USCYBERCOM) and is the highest-ranking military official of these organizations. He is assisted by a Deputy Director, who is the highest-ranking civilian within the NSA/CSS.
NSA also has an Inspector General, head of the Office of the Inspector General (OIG), a General Counsel, head of the Office of the General Counsel (OGC) and a Director of Compliance, who is head of the Office of the Director of Compliance (ODOC).
Unlike other intelligence organizations such as the CIA or DIA, NSA has always been particularly reticent concerning its internal organizational structure.
As of the mid-1990s, the National Security Agency was organized into five Directorates:
The Operations Directorate, which was responsible for SIGINT collection and processing.
The Technology and Systems Directorate, which developed new technologies for SIGINT collection and processing.
The Information Systems Security Directorate, which was responsible for NSA's communications and information security missions.
The Plans, Policy and Programs Directorate, which provided staff support and general direction for the Agency.
The Support Services Directorate, which provided logistical and administrative support activities.
Each of these directorates consisted of several groups or elements, designated by a letter. There were, for example, the A Group, which was responsible for all SIGINT operations against the Soviet Union and Eastern Europe, and the G Group, which was responsible for SIGINT related to all non-communist countries. These groups were divided into units designated by an additional number, like unit A5 for breaking Soviet codes, and G6, the office for the Middle East, North Africa, Cuba, and Central and South America.
Directorates
NSA has about a dozen directorates, which are designated by a letter, although not all of them are publicly known.
In the year 2000, a leadership team was formed, consisting of the Director, the Deputy Director and the directors of the Signals Intelligence Directorate (SID), the Information Assurance Directorate (IAD) and the Technical Directorate (TD). The chiefs of other main NSA divisions became associate directors of the senior leadership team.
After president George W. Bush initiated the President's Surveillance Program (PSP) in 2001, the NSA created a 24-hour Metadata Analysis Center (MAC), followed in 2004 by the Advanced Analysis Division (AAD), with the mission of analyzing content, Internet metadata and telephone metadata. Both units were part of the Signals Intelligence Directorate.
A 2016 proposal combined the Signals Intelligence Directorate with the Information Assurance Directorate into a Directorate of Operations.
NSANet
NSANet stands for National Security Agency Network and is the official NSA intranet. It is a classified network carrying information up to the level of TS/SCI, supporting the use and sharing of intelligence data between NSA and the signals intelligence agencies of the four other nations of the Five Eyes partnership. Management of NSANet has been delegated to the Central Security Service Texas (CSSTEXAS).
NSANet is a highly secured computer network consisting of fiber-optic and satellite communication channels which are almost completely separated from the public Internet. The network allows NSA personnel and civilian and military intelligence analysts anywhere in the world to have access to the agency's systems and databases. This access is tightly controlled and monitored. For example, every keystroke is logged, activities are audited at random and downloading and printing of documents from NSANet are recorded.
In 1998, NSANet, along with NIPRNET and SIPRNET, had "significant problems with poor search capabilities, unorganized data and old information". In 2004, the network was reported to have used over twenty commercial off-the-shelf operating systems. Some universities that do highly sensitive research are allowed to connect to it.
The thousands of Top Secret internal NSA documents that were taken by Edward Snowden in 2013 were stored in "a file-sharing location on the NSA's intranet site", where they could easily be read online by NSA personnel. Everyone with a TS/SCI clearance had access to these documents. As a system administrator, Snowden was responsible for moving accidentally misplaced highly sensitive documents to safer storage locations.
Watch centers
The NSA maintains at least two watch centers:
National Security Operations Center (NSOC), which is the NSA's current operations center and focal point for time-sensitive SIGINT reporting for the United States SIGINT System (USSS). This center was established in 1968 as the National SIGINT Watch Center (NSWC) and renamed the National SIGINT Operations Center (NSOC) in 1973. This "nerve center of the NSA" got its current name in 1996.
NSA/CSS Threat Operations Center (NTOC), which is the primary NSA/CSS partner for Department of Homeland Security response to cyber incidents. The NTOC establishes real-time network awareness and threat characterization capabilities to forecast, alert, and attribute malicious activity and enable the coordination of Computer Network Operations. The NTOC was established in 2004 as a joint Information Assurance and Signals Intelligence project.
NSA Police
The NSA has its own police force, known as NSA Police (and formerly as NSA Security Protective Force) which provides law enforcement services, emergency response and physical security to the NSA's people and property.
NSA Police are armed federal officers. NSA Police operate a K9 division, which generally conducts explosive detection screening of mail, vehicles and cargo entering NSA grounds.
NSA Police use marked vehicles to carry out patrols.
Employees
The number of NSA employees is officially classified but there are several sources providing estimates.
In 1961, NSA had 59,000 military and civilian employees, which grew to 93,067 in 1969, of which 19,300 worked at the headquarters at Fort Meade. In the early 1980s, NSA had roughly 50,000 military and civilian personnel. By 1989 this number had grown again to 75,000, of which 25,000 worked at the NSA headquarters. Between 1990 and 1995 the NSA's budget and workforce were cut by one third, which led to a substantial loss of experience.
In 2012, the NSA said more than 30,000 employees worked at Fort Meade and other facilities. In 2012, John C. Inglis, the deputy director, said that the total number of NSA employees is "somewhere between 37,000 and one billion" as a joke, and stated that the agency is "probably the biggest employer of introverts." In 2013 Der Spiegel stated that the NSA had 40,000 employees. More widely, it has been described as the world's largest single employer of mathematicians. Some NSA employees form part of the workforce of the National Reconnaissance Office (NRO), the agency that provides the NSA with satellite signals intelligence.
As of 2013 about 1,000 system administrators work for the NSA.
Personnel security
The NSA received criticism early on in 1960 after two agents defected to the Soviet Union. Investigations by the House Un-American Activities Committee and a special subcommittee of the United States House Committee on Armed Services revealed severe disregard of personnel security regulations, prompting the former personnel director and the director of security to step down and leading to the adoption of stricter security practices. Nonetheless, security breaches recurred only a year later when, in an issue of Izvestia of July 23, 1963, a former NSA employee published several cryptologic secrets.
The very same day, an NSA clerk-messenger committed suicide as ongoing investigations disclosed that he had sold secret information to the Soviets on a regular basis. The reluctance of Congressional houses to look into these affairs had prompted a journalist to write, "If a similar series of tragic blunders occurred in any ordinary agency of Government an aroused public would insist that those responsible be officially censured, demoted, or fired." David Kahn criticized the NSA's tactics of concealing its doings as smug, and Congress's blind faith in the agency's good conduct as shortsighted, and pointed out the necessity of congressional oversight to prevent abuse of power.
Edward Snowden's leaking of the existence of PRISM in 2013 caused the NSA to institute a "two-man rule", where two system administrators are required to be present when one accesses certain sensitive information. Snowden claims he suggested such a rule in 2009.
Polygraphing
The NSA conducts polygraph tests of employees. For new employees, the tests are meant to discover enemy spies who are applying to the NSA and to uncover any information that could make an applicant pliant to coercion. As part of the latter, historically EPQs or "embarrassing personal questions" about sexual behavior had been included in the NSA polygraph. The NSA also conducts five-year periodic reinvestigation polygraphs of employees, focusing on counterintelligence programs. In addition the NSA conducts periodic polygraph investigations in order to find spies and leakers; those who refuse to take them may receive "termination of employment", according to a 1982 memorandum from the director of the NSA.
There are also "special access examination" polygraphs for employees who wish to work in highly sensitive areas, and those polygraphs cover counterintelligence questions and some questions about behavior. NSA's brochure states that the average test length is between two and four hours. A 1983 report of the Office of Technology Assessment stated that "It appears that the NSA [National Security Agency] (and possibly CIA) use the polygraph not to determine deception or truthfulness per se, but as a technique of interrogation to encourage admissions." Sometimes applicants in the polygraph process confess to committing felonies such as murder, rape, and selling of illegal drugs. Between 1974 and 1979, of the 20,511 job applicants who took polygraph tests, 695 (3.4%) confessed to previous felony crimes; almost all of those crimes had been undetected.
In 2010 the NSA produced a video explaining its polygraph process. The video, ten minutes long, is titled "The Truth About the Polygraph" and was posted to the Web site of the Defense Security Service. Jeff Stein of The Washington Post said that the video portrays "various applicants, or actors playing them—it's not clear—describing everything bad they had heard about the test, the implication being that none of it is true." AntiPolygraph.org argues that the NSA-produced video omits some information about the polygraph process; it produced a video responding to the NSA video. George Maschke, the founder of the Web site, accused the NSA polygraph video of being "Orwellian".
A 2013 article indicated that after Edward Snowden revealed his identity in 2013, the NSA began requiring polygraphing of employees once per quarter.
Arbitrary firing
The number of exemptions from legal requirements has been criticized. When in 1964 Congress was hearing a bill giving the director of the NSA the power to fire at will any employee, The Washington Post wrote: "This is the very definition of arbitrariness. It means that an employee could be discharged and disgraced on the basis of anonymous allegations without the slightest opportunity to defend himself." Yet, the bill was accepted by an overwhelming majority. Also, every person hired to a job in the US after 2007, at any private organization, state or federal government agency, must be reported to the New Hire Registry, ostensibly to look for child support evaders, except that employees of an intelligence agency may be excluded from reporting if the director deems it necessary for national security reasons.
Facilities
Headquarters
History of headquarters
When the agency was first established, its headquarters and cryptographic center were in the Naval Security Station in Washington, D.C. The COMINT functions were located in Arlington Hall in Northern Virginia, which served as the headquarters of the U.S. Army's cryptographic operations. Because the Soviet Union had detonated a nuclear bomb and because the facilities were crowded, the federal government wanted to move several agencies, including the AFSA/NSA. A planning committee considered Fort Knox, but Fort Meade, Maryland, was ultimately chosen as NSA headquarters because it was far enough away from Washington, D.C. in case of a nuclear strike and was close enough so its employees would not have to move their families.
Construction of additional buildings began after the agency occupied buildings at Fort Meade in the late 1950s, which they soon outgrew. In 1963 the new headquarters building, nine stories tall, opened. NSA workers referred to the building as the "Headquarters Building" and since the NSA management occupied the top floor, workers used "Ninth Floor" to refer to their leaders. COMSEC remained in Washington, D.C., until its new building was completed in 1968. In September 1986, the Operations 2A and 2B buildings, both copper-shielded to prevent eavesdropping, opened with a dedication by President Ronald Reagan. The four NSA buildings became known as the "Big Four." The NSA director moved to 2B when it opened.
Headquarters for the National Security Agency is located at Fort George G. Meade, Maryland, although it is separate from other compounds and agencies that are based within this same military installation. Fort Meade lies southwest of Baltimore and northeast of Washington, D.C. The NSA has two dedicated exits off the Baltimore–Washington Parkway. The eastbound exit from the Parkway (heading toward Baltimore) is open to the public and provides employee access to its main campus and public access to the National Cryptologic Museum. The westbound exit (heading toward Washington) is labeled "NSA Employees Only". The exit may only be used by people with the proper clearances, and security vehicles parked along the road guard the entrance.
NSA is the largest employer in the state of Maryland, and two-thirds of its personnel work at Fort Meade. Built on a portion of Fort Meade's grounds, the site has 1,300 buildings and an estimated 18,000 parking spaces.
The main NSA headquarters and operations building is what James Bamford, author of Body of Secrets, describes as "a modern boxy structure" that appears similar to "any stylish office building." The building is covered with one-way dark glass, which is lined with copper shielding in order to prevent espionage by trapping in signals and sounds. Its floor space is vast; Bamford said that the U.S. Capitol "could easily fit inside it four times over."
The facility has over 100 watchposts, one of them being the visitor control center, a two-story area that serves as the entrance. At the entrance, a white pentagonal structure, visitor badges are issued to visitors and security clearances of employees are checked. The visitor center includes a painting of the NSA seal.
The OPS2A building, the tallest building in the NSA complex and the location of much of the agency's operations directorate, is accessible from the visitor center. Bamford described it as a "dark glass Rubik's Cube". The facility's "red corridor" houses non-security operations such as concessions and the drug store. The name refers to the "red badge" which is worn by someone without a security clearance. The NSA headquarters includes a cafeteria, a credit union, ticket counters for airlines and entertainment, a barbershop, and a bank. NSA headquarters has its own post office, fire department, and police force.
The employees at the NSA headquarters reside in various places in the Baltimore-Washington area, including Annapolis, Baltimore, and Columbia in Maryland and the District of Columbia, including the Georgetown community. The NSA maintains a shuttle service from the Odenton station of MARC to its Visitor Control Center and has done so since 2005.
Power consumption
Following a major power outage in 2000, The Baltimore Sun reported in 2003, and in follow-ups through 2007, that the NSA was at risk of electrical overload because of insufficient internal electrical infrastructure at Fort Meade to support the amount of equipment being installed. This problem was apparently recognized in the 1990s but not made a priority, and "now the agency's ability to keep its operations going is threatened."
On August 6, 2006, The Baltimore Sun reported that the NSA had completely maxed out the grid, and that Baltimore Gas & Electric (BGE, now Constellation Energy) was unable to sell them any more power. NSA decided to move some of its operations to a new satellite facility.
BGE provided NSA with 65 to 75 megawatts at Fort Meade in 2007, and expected that an increase of 10 to 15 megawatts would be needed later that year. In 2011, the NSA was Maryland's largest consumer of power. In 2007, as BGE's largest customer, NSA bought as much electricity as Annapolis, the capital city of Maryland.
One estimate put the potential cost of power consumption by the new Utah Data Center at US$40 million per year.
Computing assets
In 1995, The Baltimore Sun reported that the NSA is the owner of the single largest group of supercomputers.
NSA held a groundbreaking ceremony at Fort Meade in May 2013 for its High Performance Computing Center 2, expected to open in 2016. Called Site M, the center has a 150-megawatt power substation, 14 administrative buildings and 10 parking garages. It cost $3.2 billion and initially draws 60 megawatts of electricity.
Increments II and III are expected to be completed by 2030 and would quadruple the space, with 60 buildings and 40 parking garages. Defense contractors are also establishing or expanding cybersecurity facilities near the NSA and around the Washington metropolitan area.
National Computer Security Center
The DoD Computer Security Center was founded in 1981 and renamed the National Computer Security Center (NCSC) in 1985. NCSC was responsible for computer security throughout the federal government. NCSC was part of NSA, and during the late 1980s and the 1990s, NSA and NCSC published Trusted Computer System Evaluation Criteria in a six-foot-high Rainbow Series of books that detailed trusted computing and network platform specifications. The Rainbow books were, however, replaced by the Common Criteria in the early 2000s.
Other U.S. facilities
As of 2012, NSA collected intelligence from four geostationary satellites. Satellite receivers were at Roaring Creek Station in Catawissa, Pennsylvania and Salt Creek Station in Arbuckle, California. It operated ten to twenty taps on U.S. telecom switches. NSA had installations in several U.S. states and from them observed intercepts from Europe, the Middle East, North Africa, Latin America, and Asia.
NSA had facilities at Friendship Annex (FANX) in Linthicum, Maryland, which is a 20 to 25-minute drive from Fort Meade; the Aerospace Data Facility at Buckley Space Force Base in Aurora, Colorado; NSA Texas in the Texas Cryptology Center at Lackland Air Force Base in San Antonio, Texas; NSA Georgia, Georgia Cryptologic Center, Fort Gordon, Augusta, Georgia; NSA Hawaii, Hawaii Cryptologic Center in Honolulu; the Multiprogram Research Facility in Oak Ridge, Tennessee, and elsewhere.
On January 6, 2011, a groundbreaking ceremony was held to begin construction on NSA's first Comprehensive National Cybersecurity Initiative (CNCI) data center, known as the "Utah Data Center" for short. The $1.5 billion data center was built at Camp Williams, Utah, south of Salt Lake City, to help support the agency's national cybersecurity initiative. It was expected to be operational by September 2013; construction finished in May 2019.
In 2009, to protect its assets and access more electricity, NSA sought to decentralize and expand its existing facilities in Fort Meade and Menwith Hill, the latter expansion expected to be completed by 2015.
The Yakima Herald-Republic cited Bamford, saying that many of NSA's bases for its Echelon program were a legacy system, using outdated, 1990s technology. In 2004, NSA closed its operations at Bad Aibling Station (Field Station 81) in Bad Aibling, Germany. In 2012, NSA began to move some of its operations at Yakima Research Station, Yakima Training Center, in Washington state to Colorado, planning to leave Yakima closed. As of 2013, NSA also intended to close operations at Sugar Grove, West Virginia.
International stations
Following the signing in 1946–1956 of the UKUSA Agreement between the United States, United Kingdom, Canada, Australia and New Zealand, who then cooperated on signals intelligence and ECHELON, NSA stations were built at GCHQ Bude in Morwenstow, United Kingdom; Geraldton, Pine Gap and Shoal Bay, Australia; Leitrim and Ottawa, Ontario, Canada; Misawa, Japan; and Waihopai and Tangimoana, New Zealand.
NSA operates RAF Menwith Hill in North Yorkshire, United Kingdom, which was, according to BBC News in 2007, the largest electronic monitoring station in the world. Planned in 1954, the base opened in 1960.
The agency's European Cryptologic Center (ECC), with 240 employees in 2011, is headquartered at a US military compound in Griesheim, near Frankfurt in Germany. A 2011 NSA report indicates that the ECC is responsible for the "largest analysis and productivity in Europe" and focuses on various priorities, including Africa, Europe, the Middle East and counterterrorism operations.
As of 2013, a new Consolidated Intelligence Center, also to be used by NSA, was being built at the headquarters of the United States Army Europe in Wiesbaden, Germany. NSA's partnership with Bundesnachrichtendienst (BND), the German foreign intelligence service, was confirmed by BND president Gerhard Schindler.
Thailand
Thailand is a "3rd party partner" of the NSA along with nine other nations. These are non-English-speaking countries that have made security agreements for the exchange of SIGINT raw material and end product reports.
Thailand is the site of at least two US SIGINT collection stations. One is at the US Embassy in Bangkok, a joint NSA-CIA Special Collection Service (SCS) unit. It presumably eavesdrops on foreign embassies, governmental communications, and other targets of opportunity.
The second installation is a FORNSAT (foreign satellite interception) station in the Thai city of Khon Kaen. It is codenamed INDRA, but has also been referred to as LEMONWOOD. The station consists of a large 3,700–4,600 m2 (40,000–50,000 sq ft) operations building on the west side of the ops compound and four radome-enclosed parabolic antennas. Possibly two of the radome-enclosed antennas are used for SATCOM intercept and two for relaying the intercepted material back to NSA. There is also a PUSHER-type circularly disposed antenna array (CDAA) just north of the ops compound.
NSA activated Khon Kaen in October 1979. Its mission was to eavesdrop on the radio traffic of Chinese army and air force units in southern China, especially in and around the city of Kunming in Yunnan Province. In the late 1970s, the base consisted only of a small CDAA antenna array that was remote-controlled via satellite from the NSA listening post at Kunia, Hawaii, and a small force of civilian contractors from Bendix Field Engineering Corp. whose job it was to keep the antenna array and satellite relay facilities up and running 24/7.
According to the papers of the late General William Odom, the INDRA facility was upgraded in 1986 with a new British-made PUSHER CDAA antenna as part of an overall upgrade of NSA and Thai SIGINT facilities whose objective was to spy on the neighboring communist nations of Vietnam, Laos, and Cambodia.
The base apparently fell into disrepair in the 1990s as China and Vietnam became more friendly towards the US, and by 2002 archived satellite imagery showed that the PUSHER CDAA antenna had been torn down, perhaps indicating that the base had been closed. At some point in the period since 9/11, the Khon Kaen base was reactivated and expanded to include a sizeable SATCOM intercept mission. It is likely that the NSA presence at Khon Kaen is relatively small, and that most of the work is done by civilian contractors.
Research and development
NSA has been involved in debates about public policy, both indirectly as a behind-the-scenes adviser to other departments, and directly during and after Vice Admiral Bobby Ray Inman's directorship. NSA was a major player in the debates of the 1990s regarding the export of cryptography in the United States. Restrictions on export were reduced but not eliminated in 1996.
Its secure government communications work has involved the NSA in numerous technology areas, including the design of specialized communications hardware and software, production of dedicated semiconductors (at the Ft. Meade chip fabrication plant), and advanced cryptography research. For 50 years, NSA designed and built most of its computer equipment in-house, but from the 1990s until about 2003 (when the U.S. Congress curtailed the practice), the agency contracted with the private sector in the fields of research and equipment.
Data Encryption Standard
NSA was embroiled in some controversy concerning its involvement in the creation of the Data Encryption Standard (DES), a standard and public block cipher algorithm used by the U.S. government and banking community. During the development of DES by IBM in the 1970s, NSA recommended changes to some details of the design. There was suspicion that these changes had weakened the algorithm sufficiently to enable the agency to eavesdrop if required, including speculation that a critical component—the so-called S-boxes—had been altered to insert a "backdoor" and that the reduction in key length might have made it feasible for NSA to discover DES keys using massive computing power. It has since been observed that the S-boxes in DES are particularly resilient against differential cryptanalysis, a technique which was not publicly discovered until the late 1980s but known to the IBM DES team.
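Resistance to differential cryptanalysis can be quantified: the attack tracks how an input XOR difference maps to an output difference, and a strong S-box keeps every nonzero differential rare. The Python sketch below tabulates this for the first row of DES's S1, treated here as a standalone 4-bit S-box for illustration (the real DES S-boxes map 6 bits to 4, as specified in FIPS 46).

    # First row of DES S-box S1, used here as a toy 4-bit S-box.
    SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
            0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

    def difference_distribution_table(sbox):
        # ddt[dx][dy] counts inputs x with sbox[x] ^ sbox[x ^ dx] == dy.
        n = len(sbox)
        ddt = [[0] * n for _ in range(n)]
        for x in range(n):
            for dx in range(n):
                ddt[dx][sbox[x] ^ sbox[x ^ dx]] += 1
        return ddt

    ddt = difference_distribution_table(SBOX)
    # A design resists the attack when no entry for a nonzero dx is too large.
    worst = max(v for dx, row in enumerate(ddt) if dx for v in row)
    print(f"largest count for a nonzero input difference: {worst} of 16")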
Advanced Encryption Standard
The involvement of NSA in selecting a successor to Data Encryption Standard (DES), the Advanced Encryption Standard (AES), was limited to hardware performance testing (see AES competition). NSA has subsequently certified AES for protection of classified information when used in NSA-approved systems.
NSA encryption systems
The NSA is responsible for the encryption-related components in these legacy systems:
FNBDT Future Narrow Band Digital Terminal
KL-7 ADONIS off-line rotor encryption machine (post-WWII – 1980s)
KW-26 ROMULUS electronic in-line teletypewriter encryptor (1960s–1980s)
KW-37 JASON fleet broadcast encryptor (1960s–1990s)
KY-57 VINSON tactical radio voice encryptor
KG-84 Dedicated Data Encryption/Decryption
STU-III secure telephone unit, phased out by the STE
The NSA oversees encryption in the following systems that are in use today:
EKMS Electronic Key Management System
Fortezza encryption based on portable crypto token in PC Card format
SINCGARS tactical radio with cryptographically controlled frequency hopping
STE secure terminal equipment
TACLANE product line by General Dynamics C4 Systems
The NSA has specified Suite A and Suite B cryptographic algorithm suites to be used in U.S. government systems; the Suite B algorithms are a subset of those previously specified by NIST and are expected to serve for most information protection purposes, while the Suite A algorithms are secret and are intended for especially high levels of protection.
SHA
The widely used SHA-1 and SHA-2 hash functions were designed by NSA. SHA-1 is a slight modification of the weaker SHA-0 algorithm, also designed by NSA in 1993. This small modification was suggested by NSA two years later, with no justification other than the fact that it provides additional security. An attack for SHA-0 that does not apply to the revised algorithm was indeed found between 1998 and 2005 by academic cryptographers. Because of weaknesses and key length restrictions in SHA-1, NIST deprecates its use for digital signatures, and approves only the newer SHA-2 algorithms for such applications from 2013 on.
A new hash standard, SHA-3, was selected through a competition that concluded on October 2, 2012, with the selection of Keccak as the algorithm. The process to select SHA-3 was similar to the one held in choosing the AES, but some doubts have been cast over it, since fundamental modifications were made to Keccak in order to turn it into a standard. These changes potentially undermine the cryptanalysis performed during the competition and reduce the security levels of the algorithm.
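All three hash families discussed here ship in Python's standard library; the snippet below simply computes each digest of the same message, illustrating that SHA-3 is a separate design rather than a SHA-2 variant.

    import hashlib

    msg = b"abc"
    print(hashlib.sha1(msg).hexdigest())      # SHA-1: 160-bit digest, deprecated for signatures
    print(hashlib.sha256(msg).hexdigest())    # SHA-2 family member (SHA-256), NIST-approved
    print(hashlib.sha3_256(msg).hexdigest())  # SHA-3, standardized from Keccak after the 2012 competition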
Clipper chip
Because of concerns that widespread use of strong cryptography would hamper government use of wiretaps, NSA proposed the concept of key escrow in 1993 and introduced the Clipper chip that would offer stronger protection than DES but would allow access to encrypted data by authorized law enforcement officials. The proposal was strongly opposed and key escrow requirements ultimately went nowhere. However, NSA's Fortezza hardware-based encryption cards, created for the Clipper project, are still used within government, and NSA ultimately declassified and published the design of the Skipjack cipher used on the cards.
Dual EC DRBG random number generator cryptotrojan
NSA promoted the inclusion of a random number generator called Dual EC DRBG in the U.S. National Institute of Standards and Technology's 2007 guidelines. This led to speculation of a backdoor which would allow NSA access to data encrypted by systems using that pseudorandom number generator (PRNG).
This is now deemed plausible given that the output of subsequent iterations of the PRNG can provably be determined if the relation between two internal elliptic curve points is known. Both NIST and RSA now officially recommend against the use of this PRNG.
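The predictability argument can be demonstrated in miniature. The Python sketch below implements a simplified, untruncated Dual EC-style generator over a toy 19-point curve rather than the standard's NIST P-256 parameters; following the published Shumow–Ferguson observation, an attacker who knows the scalar d relating the two public points (P = dQ) recovers the internal state from one output and predicts all later ones. Every constant here is illustrative.

    p, a, b = 17, 2, 2            # toy curve y^2 = x^3 + 2x + 2 over GF(17), group order 19
    O = None                      # point at infinity

    def ec_add(P1, P2):
        if P1 is O: return P2
        if P2 is O: return P1
        (x1, y1), (x2, y2) = P1, P2
        if x1 == x2 and (y1 + y2) % p == 0:
            return O
        if P1 == P2:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    def ec_mul(k, P1):            # double-and-add scalar multiplication
        R = O
        while k:
            if k & 1: R = ec_add(R, P1)
            P1 = ec_add(P1, P1)
            k >>= 1
        return R

    Q = (5, 1)                    # public point Q (a generator)
    d = 7                         # the secret trapdoor: P = d*Q
    P = ec_mul(d, Q)

    def dual_ec(seed, n):
        # Simplified Dual EC: state s <- x(s*P); output r = x(s*Q), untruncated.
        s, out = seed, []
        for _ in range(n):
            s = ec_mul(s, P)[0]
            out.append(ec_mul(s, Q)[0])
        return out

    def lift(x):
        # All curve points with the given x-coordinate (at most two).
        return [(x, y) for y in range(p) if (y * y - (x ** 3 + a * x + b)) % p == 0]

    def attack(r1, r2):
        # Knowing d, recover the state from outputs r1, r2 and predict the third.
        for A in lift(r1):            # A is a candidate for the point s*Q
            S = ec_mul(d, A)          # d*(s*Q) = s*(d*Q) = s*P -> next state's x-coordinate
            if S is O:
                continue
            s2 = S[0]
            if ec_mul(s2, Q)[0] == r2:    # candidate consistent with observed r2
                s3 = ec_mul(s2, P)[0]
                return ec_mul(s3, Q)[0]

    r = dual_ec(seed=11, n=3)
    assert attack(r[0], r[1]) == r[2]   # the trapdoor predicts future output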
Perfect Citizen
Perfect Citizen is a program to perform vulnerability assessment by the NSA on U.S. critical infrastructure. It was originally reported to be a program to develop a system of sensors to detect cyber attacks on critical infrastructure computer networks in both the private and public sector through a network monitoring system named Einstein. It is funded by the Comprehensive National Cybersecurity Initiative and thus far Raytheon has received a contract for up to $100 million for the initial stage.
Academic research
NSA has invested many millions of dollars in academic research under grant code prefix MDA904, resulting in over 3,000 papers. NSA/CSS has, at times, attempted to restrict the publication of academic research into cryptography; for example, the Khufu and Khafre block ciphers were voluntarily withheld in response to an NSA request to do so. In response to a FOIA lawsuit, in 2013 the NSA released the 643-page research paper titled "Untangling the Web: A Guide to Internet Research," written and compiled by NSA employees to assist other NSA workers in searching for information of interest to the agency on the public Internet.
Patents
NSA has the ability to file for a patent from the U.S. Patent and Trademark Office under gag order. Unlike normal patents, these are not revealed to the public and do not expire. However, if the Patent Office receives an application for an identical patent from a third party, they will reveal NSA's patent and officially grant it to NSA for the full term on that date.
One of NSA's published patents describes a method of geographically locating an individual computer site in an Internet-like network, based on the latency of multiple network connections. Although no public patent exists, NSA is reported to have used a similar locating technology called trilateralization that allows real-time tracking of an individual's location, including altitude from ground level, using data obtained from cellphone towers.
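The reported technique resembles multilateration: round-trip times from probes at known positions are converted to distance bounds, and the intersection of the resulting circles estimates the target's position. The Python sketch below solves the linearized system by least squares; all coordinates and distances are invented, and this is an illustration of the general method, not the NSA's implementation.

    import numpy as np

    landmarks = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])  # probe sites (known)
    distances = np.array([50.0, 80.6, 67.1])    # distances inferred from round-trip times

    # Subtracting the first circle equation from the others linearizes the system.
    A = 2 * (landmarks[1:] - landmarks[0])
    b = (distances[0] ** 2 - distances[1:] ** 2
         + np.sum(landmarks[1:] ** 2, axis=1) - np.sum(landmarks[0] ** 2))
    estimate, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(estimate)   # approximately [30. 40.]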
Insignia and memorials
The heraldic insignia of NSA consists of an eagle inside a circle, grasping a key in its talons. The eagle represents the agency's national mission. Its breast features a shield with bands of red and white, taken from the Great Seal of the United States and representing Congress. The key is taken from the emblem of Saint Peter and represents security.
When the NSA was created, the agency had no emblem and used that of the Department of Defense. The agency adopted its first of two emblems in 1963. The current NSA insignia has been in use since 1965, when then-Director, LTG Marshall S. Carter (USA) ordered the creation of a device to represent the agency.
The NSA's flag consists of the agency's seal on a light blue background.
Crews associated with NSA missions have been involved in a number of dangerous and deadly situations. The USS Liberty incident in 1967 and USS Pueblo incident in 1968 are examples of the losses endured during the Cold War.
The National Security Agency/Central Security Service Cryptologic Memorial honors and remembers the fallen personnel, both military and civilian, of these intelligence missions. It is made of black granite and has 171 names carved into it. It is located at NSA headquarters. A tradition of declassifying the stories of the fallen was begun in 2001.
Constitutionality, legality and privacy questions regarding operations
In the United States, at least since 2001, there has been legal controversy over what signals intelligence can be used for and how much freedom the National Security Agency has to use signals intelligence. In 2015, the government made slight changes in how it uses and collects certain types of data, specifically phone records. The government was not analyzing the phone records as of early 2019. The surveillance programs were deemed unlawful in September 2020 in a court of appeals case.
Warrantless wiretaps
On December 16, 2005, The New York Times reported that, under White House pressure and with an executive order from President George W. Bush, the National Security Agency, in an attempt to thwart terrorism, had been tapping phone calls made to persons outside the country, without obtaining warrants from the United States Foreign Intelligence Surveillance Court, a secret court created for that purpose under the Foreign Intelligence Surveillance Act (FISA).
Edward Snowden
Edward Snowden was an American intelligence contractor who, in 2013, revealed the existence of secret wide-ranging information-gathering programs conducted by the National Security Agency (NSA). More specifically, Snowden released information that demonstrated how the United States government was gathering immense amounts of personal communications, emails, phone locations, web histories and more of American citizens without their knowledge. One of Snowden's primary motivators for releasing this information was fear of a surveillance state developing as a result of the infrastructure being created by the NSA. As Snowden recounts, "I believe that, at this point in history, the greatest danger to our freedom and way of life comes from the reasonable fear of omniscient State powers kept in check by nothing more than policy documents... It is not that I do not value intelligence, but that I oppose... omniscient, automatic, mass surveillance... That seems to me a greater threat to the institutions of free society than missed intelligence reports, and unworthy of the costs."
In March 2014, Army General Martin Dempsey, Chairman of the Joint Chiefs of Staff, told the House Armed Services Committee, "The vast majority of the documents that Snowden ... exfiltrated from our highest levels of security ... had nothing to do with exposing government oversight of domestic activities. The vast majority of those were related to our military capabilities, operations, tactics, techniques, and procedures." When asked in a May 2014 interview to quantify the number of documents Snowden stole, retired NSA director Keith Alexander said there was no accurate way of counting what he took, but Snowden may have downloaded more than a million documents.
Other surveillance
On January 17, 2006, the Center for Constitutional Rights filed a lawsuit, CCR v. Bush, against the George W. Bush Presidency. The lawsuit challenged the National Security Agency's (NSA's) surveillance of people within the U.S., including the interception of CCR emails without securing a warrant first.
In the August 2006 case ACLU v. NSA, U.S. District Court Judge Anna Diggs Taylor concluded that NSA's warrantless surveillance program was both illegal and unconstitutional. On July 6, 2007, the 6th Circuit Court of Appeals vacated the decision on the grounds that the ACLU lacked standing to bring the suit.
In September 2008, the Electronic Frontier Foundation (EFF) filed a class action lawsuit against the NSA and several high-ranking officials of the Bush administration, charging an "illegal and unconstitutional program of dragnet communications surveillance," based on documentation provided by former AT&T technician Mark Klein.
As a result of the USA Freedom Act passed by Congress in June 2015, the NSA had to shut down its bulk phone surveillance program on November 29 of the same year. The USA Freedom Act forbids the NSA from collecting metadata and content of phone calls unless it has a warrant for a terrorism investigation. In that case, the agency must ask the telecom companies for the records, which will only be kept for six months. The NSA's use of large telecom companies to assist its surveillance efforts has caused several privacy concerns.
AT&T Internet monitoring
In May 2008, Mark Klein, a former AT&T employee, alleged that his company had cooperated with NSA in installing Narus hardware to replace the FBI Carnivore program, to monitor network communications including traffic between U.S. citizens.
Data mining
NSA was reported in 2008 to use its computing capability to analyze "transactional" data that it regularly acquires from other government agencies, which gather it under their own jurisdictional authorities. As part of this effort, NSA now monitors huge volumes of records of domestic email data, web addresses from Internet searches, bank transfers, credit-card transactions, travel records, and telephone data, according to current and former intelligence officials interviewed by The Wall Street Journal. The sender, recipient, and subject line of emails can be included, but the content of the messages or of phone calls is not.
A 2013 advisory group for the Obama administration, seeking to reform NSA spying programs following the revelations of documents released by Edward J. Snowden, mentioned in 'Recommendation 30' on page 37 "...that the National Security Council staff should manage an interagency process to review on a regular basis the activities of the US Government regarding attacks that exploit a previously unknown vulnerability in a computer application." Retired cybersecurity expert Richard A. Clarke was a group member and stated on April 11, 2014, that NSA had no advance knowledge of Heartbleed.
Illegally obtained evidence
In August 2013 it was revealed that a 2005 IRS training document showed that NSA intelligence intercepts and wiretaps, both foreign and domestic, were being supplied to the Drug Enforcement Administration (DEA) and Internal Revenue Service (IRS) and were illegally used to launch criminal investigations of US citizens. Law enforcement agents were directed to conceal how the investigations began and recreate an apparently legal investigative trail by re-obtaining the same evidence by other means.
Barack Obama administration
In the months leading to April 2009, the NSA intercepted the communications of U.S. citizens, including a Congressman, although the Justice Department believed that the interception was unintentional. The Justice Department then took action to correct the issues and bring the program into compliance with existing laws. United States Attorney General Eric Holder resumed the program according to his understanding of the Foreign Intelligence Surveillance Act amendment of 2008, without explaining what had occurred.
Polls conducted in June 2013 found divided results among Americans regarding NSA's secret data collection. Rasmussen Reports found that 59% of Americans disapprove, Gallup found that 53% disapprove, and Pew found that 56% are in favor of NSA data collection.
Section 215 metadata collection
On April 25, 2013, the NSA obtained a court order requiring Verizon's Business Network Services to provide metadata on all calls in its system to the NSA "on an ongoing daily basis" for a three-month period, as reported by The Guardian on June 6, 2013. This information includes "the numbers of both parties on a call ... location data, call duration, unique identifiers, and the time and duration of all calls" but not "[t]he contents of the conversation itself". The order relies on the so-called "business records" provision of the Patriot Act.
In August 2013, following the Snowden leaks, new details about the NSA's data mining activity were revealed. Reportedly, the majority of emails into or out of the United States are captured at "selected communications links" and automatically analyzed for keywords or other "selectors". Emails that do not match are deleted.
The utility of such a massive metadata collection in preventing terrorist attacks is disputed, and many studies have found the dragnet-like system to be ineffective. One such report, released by the New America Foundation, concluded after an analysis of 225 terrorism cases that the NSA "had no discernible impact on preventing acts of terrorism."
Defenders of the program said that while metadata alone cannot provide all the information necessary to prevent an attack, it assures the ability to "connect the dots" between suspect foreign numbers and domestic numbers with a speed only the NSA's software is capable of. One benefit of this is quickly being able to distinguish between suspicious activity and real threats. As an example, NSA director General Keith B. Alexander mentioned at the annual Cybersecurity Summit in 2013 that metadata analysis of domestic phone call records after the Boston Marathon bombing helped determine that rumors of a follow-up attack in New York were baseless.
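As a rough illustration of this "connect the dots" argument, the following toy sketch performs contact chaining over call-detail records: starting from one number, it finds every number reachable within a fixed number of hops. The records and phone numbers are invented for the example; this is not any agency's actual tooling:

```python
from collections import deque

# Toy call-detail records: (caller, callee) pairs only - no content.
call_records = [
    ("+1-555-0100", "+1-555-0101"),
    ("+1-555-0101", "+1-555-0102"),
    ("+1-555-0102", "+1-555-0199"),
    ("+1-555-0150", "+1-555-0100"),
]

def contact_chain(seed: str, hops: int) -> set[str]:
    """Return every number reachable from `seed` within `hops` calls."""
    graph: dict[str, set[str]] = {}
    for a, b in call_records:          # a call links both parties
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)

    reached, frontier = {seed}, deque([(seed, 0)])
    while frontier:
        number, depth = frontier.popleft()
        if depth == hops:
            continue
        for contact in graph.get(number, ()):
            if contact not in reached:
                reached.add(contact)
                frontier.append((contact, depth + 1))
    return reached - {seed}

# Two "hops" from one suspect number already sweep in most of this toy network.
print(contact_chain("+1-555-0100", hops=2))
```

Even this toy version shows why metadata is analytically powerful: the breadth of a chain grows quickly with each hop, without any call content being examined.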
In addition to doubts about its effectiveness, many people argue that the collection of metadata is an unconstitutional invasion of privacy. At the time, the collection process remained legal and grounded in the ruling from Smith v. Maryland (1979). A prominent opponent of the data collection and its legality is U.S. District Judge Richard J. Leon, who issued a ruling in 2013 in which he stated: "I cannot imagine a more 'indiscriminate' and 'arbitrary invasion' than this systematic and high tech collection and retention of personal data on virtually every single citizen for purposes of querying and analyzing it without prior judicial approval...Surely, such a program infringes on 'that degree of privacy' that the founders enshrined in the Fourth Amendment".
On May 7, 2015, the United States Court of Appeals for the Second Circuit ruled that the interpretation of Section 215 of the Patriot Act had been wrong and that the NSA program that had been collecting Americans' phone records in bulk was illegal. It stated that Section 215 cannot be clearly interpreted to allow the government to collect national phone data, and the provision expired on June 1, 2015. This ruling "is the first time a higher-level court in the regular judicial system has reviewed the NSA phone records program." The replacement law, the USA Freedom Act, enables the NSA to continue to have bulk access to citizens' metadata, with the stipulation that the data will now be stored by the companies themselves. This change has no effect on other agency procedures, outside of metadata collection, which have purportedly challenged Americans' Fourth Amendment rights, including Upstream collection, a set of techniques used by the agency to collect and store Americans' data and communications directly from the Internet backbone.
Under the Upstream collection program, the NSA paid telecommunications companies hundreds of millions of dollars in order to collect data from them. While companies such as Google and Yahoo! claim that they do not provide "direct access" from their servers to the NSA unless under a court order, the NSA had access to users' emails, phone calls, and cellular data. Under the new ruling, telecommunications companies maintain bulk user metadata on their servers for at least 18 months, to be provided upon request to the NSA. The ruling made the mass storage of specific phone records at NSA datacenters illegal, but it did not rule on Section 215's constitutionality.
Fourth Amendment encroachment
In a declassified document it was revealed that 17,835 phone lines were on an improperly permitted "alert list" from 2006 to 2009 in breach of compliance, which tagged these phone lines for daily monitoring. Eleven percent of these monitored phone lines met the agency's legal standard for "reasonably articulable suspicion" (RAS).
The NSA tracks the locations of hundreds of millions of cellphones per day, allowing it to map people's movements and relationships in detail. The NSA has been reported to have access to all communications made via Google, Microsoft, Facebook, Yahoo, YouTube, AOL, Skype, Apple and Paltalk, and collects hundreds of millions of contact lists from personal email and instant messaging accounts each year. It has also managed to weaken much of the encryption used on the Internet (by collaborating with, coercing or otherwise infiltrating numerous technology companies to leave "backdoors" into their systems), so that the majority of encryption is inadvertently vulnerable to different forms of attack.
Domestically, the NSA has been proven to collect and store metadata records of phone calls, including over 120 million US Verizon subscribers, as well as intercept vast amounts of communications via the internet (Upstream). The government's legal standing had been to rely on a secret interpretation of the Patriot Act whereby the entirety of US communications may be considered "relevant" to a terrorism investigation if it is expected that even a tiny minority may relate to terrorism. The NSA also supplies foreign intercepts to the DEA, IRS and other law enforcement agencies, who use these to initiate criminal investigations. Federal agents are then instructed to "recreate" the investigative trail via parallel construction.
The NSA also spies on influential Muslims to obtain information that could be used to discredit them, such as their use of pornography. The targets, both domestic and abroad, are not suspected of any crime but hold religious or political views deemed "radical" by the NSA.
According to a report in The Washington Post in July 2014, relying on information provided by Snowden, 90% of those placed under surveillance in the U.S. are ordinary Americans and are not the intended targets. The newspaper said it had examined documents including emails, text messages, and online accounts that support the claim.
Congressional oversight
The Intelligence Committees of the US House and Senate exercise primary oversight over the NSA; other members of Congress have been denied access to materials and information regarding the agency and its activities. The United States Foreign Intelligence Surveillance Court, the secret court charged with regulating the NSA's activities, is, according to its chief judge, incapable of investigating or verifying how often the NSA breaks even its own secret rules. It has since been reported that the NSA violated its own rules on data access thousands of times a year, many of these violations involving large-scale data interceptions. NSA officers have even used data intercepts to spy on love interests; "most of the NSA violations were self-reported, and each instance resulted in administrative action of termination."
The NSA has "generally disregarded the special rules for disseminating United States person information" by illegally sharing its intercepts with other law enforcement agencies. A March 2009 FISA Court opinion, which the court released, states that protocols restricting data queries had been "so frequently and systemically violated that it can be fairly said that this critical element of the overall ... regime has never functioned effectively." In 2011 the same court noted that the "volume and nature" of the NSA's bulk foreign Internet intercepts was "fundamentally different from what the court had been led to believe". Email contact lists (including those of US citizens) are collected at numerous foreign locations to work around the illegality of doing so on US soil.
Legal opinions on the NSA's bulk collection program have differed. In mid-December 2013, U.S. District Judge Richard Leon ruled that the "almost-Orwellian" program likely violates the Constitution, and wrote, "I cannot imagine a more 'indiscriminate' and 'arbitrary invasion' than this systematic and high-tech collection and retention of personal data on virtually every single citizen for purposes of querying and analyzing it without prior judicial approval. Surely, such a program infringes on 'that degree of privacy' that the Founders enshrined in the Fourth Amendment. Indeed, I have little doubt that the author of our Constitution, James Madison, who cautioned us to beware 'the abridgement of freedom of the people by gradual and silent encroachments by those in power,' would be aghast."
Later that month, U.S. District Judge William Pauley ruled that the NSA's collection of telephone records is legal and valuable in the fight against terrorism. In his opinion, he wrote, "a bulk telephony metadata collection program [is] a wide net that could find and isolate gossamer contacts among suspected terrorists in an ocean of seemingly disconnected data" and noted that a similar collection of data prior to 9/11 might have prevented the attack.
Official responses
At a March 2013 Senate Intelligence Committee hearing, Senator Ron Wyden asked Director of National Intelligence James Clapper, "does the NSA collect any type of data at all on millions or hundreds of millions of Americans?" Clapper replied "No, sir. ... Not wittingly. There are cases where they could inadvertently perhaps collect, but not wittingly." This statement came under scrutiny months later, when in June 2013 details of the PRISM surveillance program were published, showing that "the NSA apparently can gain access to the servers of nine Internet companies for a wide range of digital data." Wyden said that Clapper had failed to give a "straight answer" in his testimony. Clapper, in response to criticism, said, "I responded in what I thought was the most truthful, or least untruthful manner." Clapper added, "There are honest differences on the semantics of what – when someone says 'collection' to me, that has a specific meaning, which may have a different meaning to him."
NSA whistle-blower Edward Snowden additionally revealed the existence of XKeyscore, a top secret NSA program that allows the agency to search vast databases of "the metadata as well as the content of emails and other internet activity, such as browser history," with capability to search by "name, telephone number, IP address, keywords, the language in which the internet activity was conducted or the type of browser used." XKeyscore "provides the technological capability, if not the legal authority, to target even US persons for extensive electronic surveillance without a warrant provided that some identifying information, such as their email or IP address, is known to the analyst."
Regarding the necessity of these NSA programs, Alexander stated on June 27, 2013, that the NSA's bulk phone and Internet intercepts had been instrumental in preventing 54 terrorist "events", including 13 in the US, and in all but one of these cases had provided the initial tip to "unravel the threat stream". On July 31 NSA Deputy Director John Inglis conceded to the Senate that these intercepts had not been vital in stopping any terrorist attacks, but were "close" to vital in identifying and convicting four San Diego men for sending US$8,930 to Al-Shabaab, a militia that conducts terrorism in Somalia.
The U.S. government has aggressively sought to dismiss and challenge Fourth Amendment cases raised against it, and has granted retroactive immunity to ISPs and telecoms participating in domestic surveillance.
The U.S. military has acknowledged blocking access to parts of The Guardian website for thousands of defense personnel across the country, and blocking the entire Guardian website for personnel stationed throughout Afghanistan, the Middle East, and South Asia.
An October 2014 United Nations report condemned mass surveillance by the United States and other countries as violating multiple international treaties and conventions that guarantee core privacy rights.
Responsibility for international ransomware attack
An exploit dubbed EternalBlue, created by the NSA, was used in the unprecedented worldwide WannaCry ransomware attack in May 2017. The exploit had been leaked online by a hacking group, The Shadow Brokers, nearly a month prior to the attack. A number of experts have pointed the finger at the NSA's non-disclosure of the underlying vulnerability, and their loss of control over the EternalBlue attack tool that exploited it. Edward Snowden said that if the NSA had "privately disclosed the flaw used to attack hospitals when they found it, not when they lost it, [the attack] might not have happened". Wikipedia co-founder, Jimmy Wales, stated that he joined "with Microsoft and the other leaders of the industry in saying this is a huge screw-up by the government ... the moment the NSA found it, they should have notified Microsoft so they could quietly issue a patch and really chivvy people along, long before it became a huge problem."
Activities of previous employees
Former employee David Evenden, who had left the NSA to work for the US defense contractor CyberPoint at a position in the United Arab Emirates, was tasked in 2015 with hacking the UAE's neighbor Qatar to determine whether it was funding the Muslim Brotherhood, which the UAE designates a terrorist group. He quit the company after learning his team had hacked Qatari Sheikha Moza bint Nasser's email exchanges with Michelle Obama, just prior to her visit to Doha. Upon Evenden's return to the US, he reported his experiences to the FBI. The incident highlights a growing trend of former NSA employees and contractors leaving the agency to start their own firms, and then hiring themselves out to countries like Turkey, Sudan and even Russia, a country involved in numerous cyberattacks against the US.
2021 Denmark-NSA collaborative surveillance
In May 2021, it was reported that the Danish Defence Intelligence Service had collaborated with the NSA to wiretap fellow EU members and leaders, leading to wide backlash among EU countries and demands for explanations from the Danish and American governments.
See also
Notes
References
Bamford, James. Body of Secrets: Anatomy of the Ultra-Secret National Security Agency, Random House Digital, Inc., December 18, 2007. Previously published: Doubleday, 2001.
Bauer, Craig P. Secret History: The Story of Cryptology (Volume 76 of Discrete Mathematics and Its Applications). CRC Press, 2013.
Weiland, Matt and Sean Wilsey. State by State. HarperCollins, October 19, 2010.
Further reading
Adams, Sam, War of Numbers: An Intelligence Memoir, Steerforth; new edition (June 1, 1998).
Aid, Matthew, The Secret Sentry: The Untold History of the National Security Agency, 432 pages, Bloomsbury Press (June 9, 2009).
Mandatory Declassification Review – Interagency Security Classification Appeals Panel
Bamford, James, The Puzzle Palace, Penguin Books.
Bamford, James, The New York Times, December 25, 2005; The Agency That Could Be Big Brother.
Bamford, James, The Shadow Factory, Anchor Books, 2009.
Radden Keefe, Patrick, Chatter: Dispatches from the Secret World of Global Eavesdropping, Random House.
Kent, Sherman, Strategic Intelligence for American Public Policy.
Kahn, David, The Codebreakers, 1181 pp. Look for the 1967 rather than the 1996 edition.
Laqueur, Walter, A World of Secrets.
Liston, Robert A., The Pueblo Surrender: A Covert Action by the National Security Agency.
Levy, Steven, Crypto: How the Code Rebels Beat the Government—Saving Privacy in the Digital Age, Penguin Books.
Prados, John, The Soviet Estimate: U.S. Intelligence Analysis & Russian Military Strength, hardcover, 367 pages, Dial Press (1982).
Perro, Ralph J. "Interviewing With An Intelligence Agency (or, A Funny Thing Happened On The Way To Fort Meade)." (Archive) Federation of American Scientists. November 2003, updated January 2004. About the experience of a candidate for an NSA job in pre-employment screening. "Ralph J. Perro" is a pseudonym that is a reference to Ralph J. Canine (perro is Spanish for "dog", and a dog is a type of canine).
Shaker, Richard J. "The Agency That Came in from the Cold." (Archive) Notices. American Mathematical Society. May/June 1992, pp. 408–411.
Tully, Andrew, The Super Spies: More Secret, More Powerful than the CIA, 1969, LC 71080912.
Church Committee, Intelligence Activities and the Rights of Americans: 1976 US Senate Report on Illegal Wiretaps and Domestic Spying by the FBI, CIA and NSA, Red and Black Publishers (May 1, 2008).
"Just what is the NSA?" (video) CNN. June 7, 2013.
"National Security Agency Releases History of Cold War Intelligence Activities." George Washington University. National Security Archive Electronic Briefing Book No. 260. Posted November 14, 2008.
External links
National Security Agency – 60 Years of Defending Our Nation
Records of the National Security Agency/Central Security Service
The National Security Archive at George Washington University
National Security Agency (NSA) Archive on the Internet Archive
1952 establishments in the United States
Articles containing video clips
Computer security organizations
Government agencies established in 1952
Mass surveillance
Signals intelligence agencies
Supercomputer sites
United States Department of Defense agencies
United States government secrecy
Intelligence analysis agencies |
1933589 | https://en.wikipedia.org/wiki/Scramdisk | Scramdisk | Scramdisk is a free on-the-fly encryption program for Windows 95, Windows 98, and Windows Me. A non-free version was also available for Windows NT.
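On-the-fly encryption means data is encrypted and decrypted sector by sector as it is written to and read from the virtual disk, so plaintext never resides on the physical medium. The following Python sketch shows a generic per-sector construction using the pycryptodome library; it is illustrative only and does not reproduce Scramdisk's actual container format or cipher set (Scramdisk offered ciphers such as MISTY1, discussed below), and the key and sector size are chosen purely for the example:

```python
# Schematic per-sector encryption in the spirit of on-the-fly disk
# encryption. Generic construction for illustration only; it does NOT
# reproduce Scramdisk's on-disk format. Requires pycryptodome.
from hashlib import sha256
from Crypto.Cipher import AES

SECTOR_SIZE = 512
MASTER_KEY = sha256(b"passphrase for demo only").digest()  # 32-byte AES key

def sector_iv(sector_number: int) -> bytes:
    # Derive a per-sector IV so identical plaintext sectors encrypt differently.
    return sha256(sector_number.to_bytes(8, "little")).digest()[:16]

def encrypt_sector(sector_number: int, plaintext: bytes) -> bytes:
    assert len(plaintext) == SECTOR_SIZE
    cipher = AES.new(MASTER_KEY, AES.MODE_CBC, iv=sector_iv(sector_number))
    return cipher.encrypt(plaintext)

def decrypt_sector(sector_number: int, ciphertext: bytes) -> bytes:
    cipher = AES.new(MASTER_KEY, AES.MODE_CBC, iv=sector_iv(sector_number))
    return cipher.decrypt(ciphertext)

sector = b"secret".ljust(SECTOR_SIZE, b"\x00")
assert decrypt_sector(7, encrypt_sector(7, sector)) == sector
```

Because each sector can be decrypted independently, the driver only needs to transform the sectors the operating system actually touches, which is what makes the scheme "on the fly".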
The original Scramdisk is no longer maintained; its author, Shaun Hollingworth, joined Paul Le Roux (the author of E4M) to produce Scramdisk's commercial successor, DriveCrypt. The author of Scramdisk provided a driver for Windows 9x, and the author of E4M provided a driver for Windows NT, enabling cross-platform versions of both programs.
A newer project, ScramDisk 4 Linux, provides access to Scramdisk and TrueCrypt containers. Older versions of TrueCrypt included support for ScramDisk.
Licensing
Although Scramdisk's source code is still available, it was released and licensed only for private study, not for further development.
However, because it contains an implementation of the MISTY1 encryption algorithm (by Hironobu Suzuki, a.k.a. H2NP) licensed under the GNU GPL version 2, this study-only licensing puts it in violation of the GPL.
See also
Disk encryption
Disk encryption software
Comparison of disk encryption software
References
External links
Scramdisk @ SamSimpson.com - Original ScramDisk web site from the Internet Archive
Official WWW site – though ScramDisk is no longer available there
ScramDisk 4 Linux
Cryptographic software
Disk encryption
Windows security software |
7163193 | https://en.wikipedia.org/wiki/Copyright%20and%20Related%20Rights%20Regulations%202003 | Copyright and Related Rights Regulations 2003 | The Copyright and Related Rights Regulations 2003 transpose the Information Society Directive "(Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society)", (known popularly at the time as the EU copyright directive), into United Kingdom law. As such, its main effect is to modify the Copyright, Designs and Patents Act 1988 c. 48 ("the 1988 Act"), with minor consequential modifications to other Acts and secondary legislation.
The regulations modify the concept of broadcast and broadcasting to take account of developments on the internet; restrict the acts which are permitted without infringing copyright (in particular acts which could be performed commercially); and provide new measures for the protection and enforcement of copyright and performers' rights.
Only two EU member states met the 22 December 2002 deadline for implementation. The UK implemented the directive in 2003, but in November 2004 the European Court of Justice ruled against it for non-implementation of the directive in the territory of Gibraltar.
Broadcasts
Regulation 4 provides a new definition of "broadcast" in section 6 of the 1988 Act as
an electronic transmission of visual images, sounds or other information which—
(a) is transmitted for simultaneous reception by members of the public and is capable of being lawfully received by them, or
(b) is transmitted at a time determined solely by the person making the transmission for presentation to members of the public,
Internet transmissions are excepted from the definition of a broadcast unless they are transmitted simultaneously with a broadcast by other means, simultaneously with a live event or form part of a service in which programmes are transmitted at specific times determined solely by the person providing the service.
The new definition covers the old definitions of broadcast and of "cable programme" (s. 7 of the 1988 Act). However, the separate definition of cable programme is kept, as there is a distinction in the dates at which copyright becomes available: 1 January 1995 for cable programmes, as opposed to 1 June 1956 for broadcasts under the previous definition.
Communication to the public
The previous "infringement by broadcasting or inclusion in a cable programme" (s. 20 of the 1988 Act) is replaced by a
new "infringement by communication to the public" (reg. 6). This includes both broadcasting (under the new definition)
and making a work available to the public by electronic transmission. A performer's rights are also infringed if his
performance is made available to the public without his consent (reg; 7; new s. 182CA of the 1988 Act).
Permitted acts and copyright exceptions
Neither copyright (except in a computer program or a database) nor performer's rights are infringed by the simple act of transmitting the work between third parties over a network, even if the process of transmission involves making temporary copies (reg. 8).
The existing fair dealing for the purposes of research or private study (s. 29 of the 1988 Act) was limited so that it is only permitted for a non-commercial purpose (reg. 9). A similar restriction was imposed on permitted copying by librarians (ss. 38, 39, 43 of the 1988 Act; reg. 14) or archivists of folksongs (s. 61 of the 1988 Act; reg. 16) for third parties. The observation or study of the functioning of a computer program was removed from the remit of fair dealing (reg. 9) and replaced by a statutory permission to study the functioning of the program while legally performing any of the acts of loading, displaying, running, transmitting or storing the program (reg. 15; new s. 50BA of the 1988 Act). It was clarified that fair dealing for the purposes of criticism, review or news reporting is only allowed for published works (reg. 10).
The permitted use without a licence of copyright material for educational use (ss. 32, 35, 36 of the 1988 Act) was restricted to non-commercial purposes (regs. 11–13). Regulation 18 removes the permission to use third parties (e.g. outside DJs) to play sound recordings for the purposes of a non-commercial club or society (s. 67 of the 1988 Act). The exception for the public showing or playing of broadcasts of music (s. 72 of the 1988 Act) was also amended, and the Secretary of State was enabled to propose a licensing scheme covering such public showing or playing, which may be compulsory (reg. 21).
Technical measures
New s. 296 of the 1988 Act created new rights in respect of copyright works to which copy protection measures have been applied. This right is held concurrently by:
any person issuing copies of the work to the public or communicating the work to the public; and
the owner of the copyright in the work, or his exclusive licensee; and
the owner of any intellectual property right in the technical device or measure, or his exclusive licensee.
These parties have the same rights against an infringement of this right as the owner of copyright has against infringement of copyright, including seizure.
The right is infringed:
with respect to protected computer programs, by a person who "manufactures for sale or hire, imports, distributes, sells or lets for hire, offers or exposes for sale or hire, advertises for sale or hire or has in his possession for commercial purposes" any means "the sole intended purpose of which is to facilitate the unauthorised removal or circumvention of the technical device" or who publishes information intended to enable or assist others in removing or circumventing the technical device (new s. 296 of the 1988 Act);
with respect to other protected works, by a person who knowingly circumvents the technical measures (new s. 296ZA of the 1988 Act);
by a person who manufactures, imports, distributes, sells or advertises any device or product which, or provides services which:
are promoted or marketed for the purpose of circumventing such measures; or
have only limited commercial significance beyond circumventing such measures; or
are primarily designed, produced, adapted or performed for the purpose of enabling or facilitating the circumvention of such measures (new s. 296ZD of the 1988 Act).
The new section 296ZB established the criminal offences of:
manufacturing for sale or hire, importation, sale or distribution of devices or products which are primarily designed or adapted for the purpose of the circumvention of technological measures;
providing, promoting, advertising or marketing a service the purpose of which is to enable or facilitate the circumvention of technical measures.
The new section 296ZC allows the use of search warrants and forfeiture with respect to these offences. The new section 296ZE created a remedy via complaint to the Secretary of State if a technical device or measure prevents a person or group of people from carrying out a permitted act with relation to the work. The Secretary of State may issue a direction to the owner of the copyright to take such measures as are necessary to enable the permitted act to be carried out. The breach of such a direction is actionable as a breach of statutory duty.
Rights management information
New s. 296ZG of the 1988 Act created new rights in respect of electronic rights management information metadata. The right is infringed by:
the person who knowingly removes electronic copyright management information which is associated with a copy of a copyright work, or appears in connection with the communication to the public of a copyright work;
the person who knowingly distributes or communicates to the public copies of a work from which electronic rights management information has been removed.
Enforcement of copyright and performer's rights
The infringement of copyright or performer's rights by making a work available to the public in the course of a business, or to an extent which prejudicially affects the copyright owner, becomes a criminal offence (reg. 26; new s. 107(2A) of the 1988 Act).
A copyright holder may obtain an injunction (Scots law: interdict) against an Internet service provider (ISP) who has "actual knowledge" of another person using their service to infringe copyright or a performer's right. In determining whether the ISP has actual knowledge of the infringing use, the High Court (or Court of Session in Scotland) shall take into account all matters which appear to be relevant, in particular whether the ISP has received notification under regulation 6(1)(c) of the Electronic Commerce (EC Directive) Regulations 2002 No. 2013 (reg. 27; new ss. 97A, 191JA of the 1988 Act).
Regulation 28 extended the right to bring action for infringement of copyright to non-exclusive licensees (it was previously limited to copyright owners and exclusive licensees) when the infringement is directly connected to a prior licensed act by the licensee and the licence expressly grants a right of action (new s. 101A of the 1988 Act).
The non-exclusive licensee shall have the same rights and remedies as the copyright owner would have in any action.
The right of the non-exclusive licensee to bring action is concomitant with that of the copyright owner.
Duration of protection
Regulation 29 amends section 13A of the 1988 Act to take account of the new definition of "communication to the public", so that the copyright in sound recordings expires as follows (a sketch of the calculation follows this list):
at the end of the period of fifty years from the end of the calendar year in which the recording is made, or
if during that period the recording is published, fifty years from the end of the calendar year in which it is first published, or
if during that period the recording is not published but is made available to the public by being played in public or communicated to the public, fifty years from the end of the calendar year in which it is first so made available.
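These three alternatives amount to a simple calendar calculation. A minimal Python sketch of the rule as stated above; it is illustrative only, not legal advice, and the function name and parameters are the editor's own:

```python
def sound_recording_expiry(made: int, published: int | None = None,
                           made_available: int | None = None) -> int:
    """Last calendar year of copyright in a sound recording under s. 13A
    as amended; protection runs to the end of the returned year.
    Illustrative only.
    """
    # Publication or making available only counts if it happened within
    # fifty years of the end of the year the recording was made.
    if published is not None and published <= made + 50:
        return published + 50
    if made_available is not None and made_available <= made + 50:
        return made_available + 50
    return made + 50

# A recording made in 1960 and first published in 1965 is protected until
# the end of 2015; left unpublished, it would expire at the end of 2010.
assert sound_recording_expiry(1960, published=1965) == 2015
assert sound_recording_expiry(1960) == 2010
```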
See also
Directive (European Union)
Copyright law of the United Kingdom
Copyright law of the European Union
Digital Millennium Copyright Act
Software cracking
Peer-to-peer
References
External links
Electronic Commerce (EC Directive) Regulations 2002 No. 2013.
Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society
Commission of the European Communities v United Kingdom of Great Britain and Northern Ireland (Case C-88/04), OJ no. C045 of 19 February 2005, p. 11.
United Kingdom copyright law
2003 in British law
Statutory Instruments of the United Kingdom
Copyright legislation |
5851855 | https://en.wikipedia.org/wiki/ND-NOTIS | ND-NOTIS | ND-NOTIS was a tightly integrated yet modular office automation suite by Norsk Data, introduced in the early 1980s and running on the SINTRAN III platform on both ND-100 and ND-500 architectures. It was also available on Microsoft Windows running in networks of Norsk Data servers.
Overview
ND-NOTIS was very successful and was the main product line of the company for quite a while, cementing its position in the Norwegian government office automation market. It was also very popular in Germany and in the UK (local municipalities, the DHSS, etc.).
The NOTIS family of products was presented to the British Computer Society by Jeremy Salter, Roger Tagg et al. (BCS, End User SG, 1985) and endorsed as the BCS model for user interfaces. The same endorsement was awarded to NOTIS-IR as a model for information storage and retrieval. In 1985 the European Commission published NOTIS-IR as a reference model for document and information search and retrieval.
Where it was offered it had no real competitors, and it still has features and functionality that no other system supports, such as multilingual input and search.
Norsk Data also sold custom-made Tandberg Data TDV-2200 terminals as "NOTIS terminals" with special keys for text editing. Other terminals were "endorsed", given new keycaps, and branded as "NOTIS terminals", including the Facit "Twist", which could display a full page in portrait orientation.
Components
Components included:
NOTIS-WP
NOTIS Word Processor, a full text editing environment optimized for word processing.
NOTIS-DS
NOTIS Document Store, a database of documents based on the SIBAS database.
NOTIS-BS
NOTIS Backup System, an advanced system used for automated and incremental backups of a DS document store.
NOTIS-ID
The NOTIS mail system, a proprietary e-mail system. It was later interfaced to NOTIS-Mail (see below).
NOTIS-Mail
NOTIS full X.400 e-mail system, including an X.500-based directory service (implemented using SIBAS) and TCP/IP-based SMTP mail. It provided HTML-formatted messages from 1985.
NOTIS-TF
NOTIS Text Formatter, a text formatting system.
NOTIS-RG
NOTIS Report Generator, a powerful data extraction and modification system, often used together with database systems like SIBAS, Mimer or Oracle.
NOTIS-RP
NOTIS Report Producer, closely integrated with NOTIS-RG.
NOTIS-CALC
NOTIS spreadsheet program, similar to VisiCalc and later successors like Microsoft Excel.
NOTIS-ENCRYPT
NOTIS Encryption software.
NOTIS-IR
NOTIS Information Retrieval, a document database with free text search allowing full multi-site search.
NOTIS-QL
NOTIS database query and application generation program, also called "Access-1", a predecessor to MS Access.
General
NOTIS was unique then and still is. It captured the notion of different user interfaces, or terminals, and managed a common user interface for all applications that used the platform, so a key on the keyboard would "mean" the same thing in every application.
It relied on an interface system, "User Environment", to hold all user profile data and preferences in one place: everything from log-in name and password, language preference and application skills to user rights to see, edit and change documents – or data in applications. It came as a full document management package, with full support for workflow, which was used by third-party application software.
Another first was multilingual support, also part of the user interface. Regardless of where you logged on, the system would know your preferences and allow you to resume your last task. The system also supported full right-to-left editing. All deliveries to the Norwegian public sector required the capability to use three languages, and in the same office all three languages would be used, even in the same document; so to sell in its main market, it needed multilingual support. That included all messages, error messages and user interaction. The error messages could also be adapted from "novice" to "expert".
An important "first" was the full support for SGML - or "S-code". This allow the text editor to be used to edit and view the first HTML documents created - on hardware running NOTIS. The alternate character set - "T-code" was the CCITT, now ITU T.56 standard character set - used in all television sets to show teletext/"Text TV". So, the systems had full support for semi-graphical input and display, but just a few terminals supported this. Beware that an important customer was CERN, where they at the end of the 1980s worked on what you are now using - the WWW. ND-NOTIS had full support for HTML at that time.
The main "first" - WYSIWYG - What you see is what you get. In 1983, most text editors used "codes" that had to be inserted in the text, like "^p" meaning "paragraph.Even in the early days of text editors on Windows, most text systems were like that. However, NOTIS-WP, would show where the line ended and a new page started for a long time. The "Symbolic Graphic Markup Language" was developed as an international standard, and version 2 of this is the "Extended SGML" or XML that is approved by ODMA.
The "back-end" to all these modules were also flexible. You had direct file system exposure - that included network mounted files. However, with NOTIS-DS it included "Document Storage" and management - a full Electronic Document Management System (as found in systems such as Documentum and OpenText's Livelink today). The EDMS was based on a generic software interface, but only SIBAS was used commercially. This allowed fully localisation transparent document storage and retrieval. That of course demanded NOTIS-IR to search in all the documents. Software today used to power both Google and Altavista.
NOTIS-ID was a "special" NOTIS-DS, with restricted functionality, in that this would interface to "mail" exchanges only. So if you stored a document to an email-recipient, you sent an email. Likewise, if you received an email, this would appear in your mail-count, and you could read it in NOTIS-WP like any other document.
Norsk Data needed NOTIS to avoid duplicating applications, so the software was used in professional text production systems for newspaper and magazine production, by "NORTEXT". It was reviewed a number of times in the trade press and found to be "best of breed" by, for example, the Seybold Report on newspaper systems. Demanding users were thus close to the developers, which may explain the success.
Applications besides Text and Document Management
The list here could become endless, since NOTIS was linked to three application generators besides the query part. All of these were fully capable of building large application systems that could also update databases. The most successful link was to "Unique", an application package developed outside Norsk Data to support SIBAS but later enhanced to interface to a number of RDBMSs. Unique was successful in the UK, implementing systems for the DHSS and local municipalities. The other platforms were "BIM" (Business Information Systems) and "ABM" (Application Building and Maintenance). A full interface was made to "Systemator" to provide full support for newspaper systems generated by it. Norsk Data marketed and sold the system as integrated with its offerings for the medical sector ("Infomedica") and hospital systems; for local communities in Scandinavia and the UK based on Unique (i.e. DIALOGUE-1); for engineering documentation in Europe (CAD/CAM as Technovision); and even for the F-16 flight simulator.
NOTIS-QL – also marketed by Norsk Data as "Access-1" (and presented to Microsoft under this name, so you wonder with very good reason) – is still commercially available as "QBEvision". It has also been licensed under several names, with the full product sold by CA. The NOTIS family was fully ported to Microsoft Windows, but was incredibly difficult to move with its huge customer base. For a time there were plans to include "Ami" in the family to gain some market momentum, but it is very difficult to move when your users do not want to make the change and are so happy with the way things are.
The NOTIS family was also ported to Norsk Data's NDIX Unix line, but suffered there because Unix had problems coping with the key sequences required (terminfo/termcap was incomplete compared to the proprietary "VTM"). The T-code character set mentioned above is still used in telecommunications: all SMS messages use such a character set.
External
The NOTIS way of sorting was included in the first version of Sybase, which Microsoft acquired as DS1. This taught Microsoft to arrange sort sequences in Windows according to national character sets ("codepages").
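Collation of this kind is what locale-aware sorting does today. A small Python sketch; the Norwegian locale name is an assumption and must be installed on the system for the call to succeed:

```python
# Locale-aware collation: Norwegian sorts ø and å after z, unlike a
# naive Unicode code-point sort. Assumes the nb_NO.UTF-8 locale is
# installed (true on most Linux distributions).
import locale

locale.setlocale(locale.LC_COLLATE, "nb_NO.UTF-8")
words = ["Ål", "Andes", "Øst", "Zebra"]
print(sorted(words))                      # code-point order: Andes, Zebra, Ål, Øst
print(sorted(words, key=locale.strxfrm))  # Norwegian order: Andes, Zebra, Øst, Ål
```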
NOTIS-WP was the testbed for SGML and HTML. A very visible remnant of NOTIS-WP is the font size parameter in HTML: 1 for tiny and all the way up to 5 for huge.
There is still no other system available that will give you "only one user interface" and make everything into one system. You have to fill in the expense report in Oracle HR and type the letter summarising the reason for the expense in some other text editor. With NOTIS, you clicked on a field to provide the information, and WP would fire up, allowing you to write the letter – not as in Wikipedia, where you have to supply your own mark-up, but with the document template ready. The complete document would then be stored in the application database, with the expense report data.
See also
History of Norsk Data
References
Business software
Norsk Data software |
65741492 | https://en.wikipedia.org/wiki/College%20of%20Science%2C%20University%20of%20Tehran | College of Science, University of Tehran | College of Science is one of the University of Tehran's colleges and one of the oldest postgraduate centers in Iran, established in 1934 under the name College of Science. It is also called the Science Campus. At present, the Science Campus is the largest university unit among all Iranian colleges where basic science research is conducted. The College of Science has five schools, and students study at the undergraduate, graduate and doctoral levels there. The main College of Science building is located in the central campus of the University of Tehran, Tehran, Iran. The college has 2,600 students and 130 faculty members.
History
The establishment of the College of Science, or Science Campus, dates back to the Dar ul-Funun, about 150 years ago, when instruction was at the high-school level. Courses taught at the Dar ul-Funun included physics, chemistry, natural sciences, pharmacy, and mining, to which arithmetic, geometry, geography, and painting were gradually added.
In 1928, plans were laid for an institution called the Higher Teachers' College (Higher Education University), comprising faculties of sciences and literature. The Faculty of Science included physics-chemistry, mathematics and natural sciences. The college had two Iranian professors, seven French professors, and a total of 100 students. Mahmoud Hessabi and Gholam Hossein Rahnama were the two Iranian professors.
In 1934, the proposal to establish the University of Tehran, consisting of six faculties (medicine, law, natural sciences, technology, theology and a teachers' college), was given to the then minister Ali-Asghar Hekmat by Dr. Hessabi, Dr. Sediq and Monsieur Batliani and was approved by the Islamic Consultative Assembly in the same year. After the establishment of the University of Tehran in 1934, the literary department of the Higher Teachers' College became the Faculty of Literature and the scientific department became the Faculty of Science, part of the University of Tehran. In 1955, however, the Higher Teachers' College was separated from the faculties of literature and science and began another period of its activity.
From 1934 to 1958, the Faculty of Science accepted students for a three-year bachelor's degree through entrance exams held internally by the faculty itself.
The building of the Faculty of Science was inaugurated in 1951, and it was decided that instead of having separate science departments in each faculty, all basic science disciplines would be located in the Faculty of Science. In 1952, the field of physics and chemistry was divided into the two independent fields of physics and chemistry, and the field of natural sciences into the two fields of biology and geology; in 1959 biology and geology became independent departments. In 1963, to align the faculty's programs with those of foreign faculties, it was decided to lengthen the bachelor's course to four years and to recognize its degree, previously called a diploma, as a bachelor's degree, thereby gaining international recognition. From that year, students entered the university through the national entrance exam. Postgraduate courses were also expanded, and courses were organized on a unit-by-unit basis. In 1975, the name of the Mathematics Department was changed to the Department of Mathematics and Computer Science, and in 2001 to the Department of Mathematics, Statistics and Computer Science.
Since 2005, along with organizational changes at the University of Tehran, the Faculty of Science has been renamed the College of Science, or Science Campus, and its schools have been given independent educational structures.
Organizational structure
Organizational structure of the Science Campus is as follows:
Schools
The College of Science has five schools and a department:
School of Mathematics, Statistics and Computer Science
Department of Mathematics: Pure Mathematics (Bachelor, Master, PhD)
Department of Applied Mathematics: Applied Mathematics (Bachelor, Master, PhD)
Department of Statistics: Statistics (Bachelor, Master, PhD)
Department of Computer Science: Computer Science (Bachelor, Master, PhD)
School of Physics
Atomic and Molecular Physics Department: Atomic and Molecular Physics (Bachelor, Master, PhD)
Elementary Particle Physics Department: Elementary Particle Physics (Bachelor, Master, PhD)
Gravitation and Astrophysics Department: Gravitation and Astrophysics (Bachelor, Master, PhD)
Nuclear Physics Department: Nuclear Physics (Bachelor, Master, PhD)
Nano-Physics Department: Nano-Physics (Master, PhD)
School of Chemistry
Applied Chemistry Department: Applied Chemistry (Bachelor, Master)
Pure Chemistry Department: Pure Chemistry (Bachelor)
Physical Chemistry Department: Physical Chemistry (Master, PhD)
Nano Chemistry Department: Nano Chemistry (Master, PhD)
Analytical Chemistry Department: Analytical Chemistry (Master, PhD)
Polymer Chemistry Department: Polymer Chemistry (Master, PhD)
Organic Chemistry Department: Organic Chemistry (Master, PhD)
Inorganic Chemistry Department: Inorganic Chemistry (Master, PhD)
School of Geology
Department of Soft Rocks: Sedimentology, Stratigraphy, and Petroleum Geology (Master, PhD)
Department of Hard Rocks: Petrology and Economic Geology (Master, PhD)
Department of Engineering Geology and Tectonics: Engineering Geology, Structural Geology and Tectonics (Master, PhD)
School of Biology
Department of Animal Biology: Animal Sciences (Bachelor); Animal Sciences - Biosystematics, Physiology, and Developmental Biology (Master); Animal Biosystematics, Animal Physiology, and Animal Developmental Biology (PhD)
Department of Cell and Molecular Biology: Cell and Molecular Biology (Bachelor, Master, PhD); Biotechnology (Bachelor)
Department of Microbiology: Microbiology (Bachelor, Master, PhD); Microbial Biotechnology (Master, PhD)
Department of Plant Biology: Plant Sciences (Bachelor); Plant Sciences - Systematics and Ecology (Master); Plant Sciences - Physiology (Master); Plant Systematics (PhD); Plant Physiology (PhD)
Department of Biotechnology: Biotechnology (PhD)
Library
The library complex of the Science Campus started its work in 1934. Since then, many changes have taken place in the library, the most significant dating to 1955, when the current building of the Science Campus was inaugurated in the Central Campus of the University of Tehran and the library was moved to its third floor. The library's collection of information resources now includes over 100,000 items, including printed and electronic books, journals, dissertations, and CDs. The College of Science's library has the following holdings:
Books: 27,312 Farsi volumes, 42,230 Latin volumes, and 21,761 e-books
Journals: 2,920 Farsi copies and 18,123 Latin copies
Audio and visual materials: 200 Farsi volumes and 459 Latin volumes
Theses: 4,661 Farsi volumes and 59 Latin volumes
Information databases: 20
Central Laboratory
The Central Laboratory comprises a set of important equipment for the qualitative and quantitative examination of materials, established to provide research services to students and faculty members of the University of Tehran and to other scientific, research and industrial centers of the country. The Central Laboratory has advanced and precise instruments for spectroscopic analysis, elemental analysis, thermal analysis and chromatography. The objectives of the Central Laboratory are listed below:
Improving the use of research facilities and equipment
Creating the necessary platform for conducting basic and applied research
Promoting the research capacity of faculty members, graduate students and all researchers and scholars of other universities and centers
Communicating with other laboratories inside and outside the University of Tehran, and with scientific, research and industrial institutions and companies, in order to provide mutual services
Purchasing and equipping new devices according to the requests of various academic and research fields
Providing test results in the shortest possible time
Scientific Journals
The Journal of Science of the University of Tehran was established in 1968, and its first issue was published the same year. By 2009, 33 volumes, each comprising four issues, had been published. The Journal of Science is a scientific-research journal and publishes articles in both English and Persian in the fields of basic sciences, including mathematics, statistics and computer science, geology, biology, chemistry and physics. The journal is currently published quarterly, with four issues per year, and is indexed in Mathematical Reviews and Chemical Abstracts. In 2009, based on the approval of the Science Campus Council and in accordance with higher education policies, the Journal of Science was split into two independent specialized journals under the following titles:
1. The English-language journal Geopersia: published twice a year by the School of Geology, Campus of Sciences, University of Tehran, with scientific-research accreditation.
2. Journal of Progress in Biological Sciences: This journal is published by the School of Biology, Campus of Sciences, University of Tehran.
Museums
The College of Science has three museums:
1. Museum of Zoology:
The Museum of Zoology, or Museum of Natural History, was established in 1954 and houses a precious and unique collection of invertebrates and vertebrates. Using museum preservation methods such as taxidermy, fixation in chemical solutions, etching, resin storage and skeleton preparation, a collection of Iranian animals was assembled, leading to the creation of the first zoological museum in Iran. The museum is located in the School of Biology of the Science Campus.
2. Herbarium:
The most complete and reliable reference for the flora of Iran. There are 167 families of vascular plants in Iran, comprising 1,215 genera; some have only one species and some up to about 800 species. The total number of taxa in Iran is about 8,000, including about 6,417 species, 611 subspecies, 465 varieties, and 83 hybrids. Of these, about 1,810 are endemic to Iran. The current statistics reflect studies conducted up to 2000 in the central herbarium of the University of Tehran. The herbarium is located in the School of Biology of the Science Campus.
3. Microbial Collection:
The Microbial Collection contains a diverse set of microorganisms and provides research and executive services. The collection is located in the School of Biology of the Science Campus.
See also
Faculty of Theology and Islamic Studies of the University of Tehran
Faculty of Letters and Humanities of the University of Tehran
Tehran School of Political Science
Institute of Biochemistry and Biophysics
References
External links
Admissions: University of Tehran
Colleges: University of Tehran
JOURNAL OF SCIENCE (UNIVERSITY OF TEHRAN) on SID
University of Tehran
Educational institutions established in 1934
1934 establishments in Iran
University of Tehran Campuses
University of Tehran faculties |
6837647 | https://en.wikipedia.org/wiki/Eastgate%20Systems | Eastgate Systems | Eastgate Systems is a publisher and software company headquartered in Watertown, Massachusetts, which publishes hypertext.
Eastgate is a pioneer in hypertext publishing and electronic literature and one of the best known publishers of hypertext fiction. It publishes fiction, non-fiction, and poetry hypertexts by established authors with careers in print, as well as new authors. Its software tools include Storyspace, a hypertext system created by Jay David Bolter, Michael Joyce, and John B. Smith in which much early hypertext fiction was written, and Tinderbox, a tool for managing notes and information. Storyspace was used in a project in Michigan to put judicial "bench books" into electronic form. Eastgate's chief scientist, Mark Bernstein, is a well-known figure in hypertext research, and has improved and extended Storyspace as well as developing new hypertext software.
Product list
Tinderbox, a content assistant for managing, analyzing and mapping notes in a hypertextual environment.
Storyspace, a hypertext writing environment.
Works published by Eastgate
Michael Joyce: afternoon, a story (1987, 1990)
Sarah Smith: The King of Space (1991)
Stuart Moulthrop: Victory Garden (1992)
Kathryn Cramer: In Small & Large Pieces (1994)
Shelley Jackson: Patchwork Girl (1995)
Bill Bly: We Descend, Volume One (1997)
Richard Holeton: Figurski at Findhorn on Acid (2001)
Notes
See also
Electronic literature
References
Miller, Laura (March 15, 1998). "Bookend; www.claptrap.com". The New York Times. Retrieved on August 13, 2007.
Guernsey, Lisa (April 15, 1999). "New York Times: New Kind of Convergence: Writers and Programmers". The New York Times. Retrieved on August 13, 2007.
External links
Macintosh software companies
Software companies based in Massachusetts
Software companies of the United States
Electronic literature
Publishing companies of the United States
CD-ROM publishing companies
Interactive narrative
Publishing companies established in 1982
Companies based in Watertown, Massachusetts
1982 establishments in Massachusetts |
4346419 | https://en.wikipedia.org/wiki/Miro%20%28video%20software%29 | Miro (video software) | Miro (formerly named Democracy Player or DTV) was an audio and video player and Internet television application developed by the Participatory Culture Foundation. It ran on Microsoft Windows, macOS, FreeBSD and Linux and supported most common video file formats, offering both audio and video, some in HD quality.
The Participatory Culture Foundation no longer develops Miro. The last version (6.0) was released in 2013 and is no longer functioning correctly because of changes to the YouTube API.
Miro is free software, released under the terms of the GPL-2.0-or-later.
Features
Miro can automatically download videos from RSS-based "channels", manage them and play them. The application is designed to mesh with other Participatory Culture Foundation (PCF) products such as Video Bomb, a social tagging video website, and the Channel Channel, a TV guide for Internet television.
Miro integrates an RSS news aggregator and podcatcher, a BitTorrent client (based on libtorrent), and a media player (VLC media player under Windows, QuickTime under macOS, and xine media player or GStreamer under Linux and FreeBSD). Since 2.0, Miro supports the adding of website bookmarks under the "Sites" category; by default, ClearBits.net is preloaded in Miro as a bookmark.
As a video player it supports formats such as QuickTime, Windows Media Video (WMV), MPEG, Audio Video Interleave (AVI), and Xvid. It also supports RSS feeds with BitTorrent enclosures; when a new video is available, the program notifies the user and downloads it if possible.
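The channel mechanism described above boils down to polling a feed and fetching enclosures not seen before. A minimal sketch using the feedparser library follows; the feed URL and download directory are hypothetical, and a real client would persist the seen set and throttle its polling:

```python
# Minimal sketch of a Miro-style "channel": poll an RSS feed and fetch
# any video enclosures not seen before. Feed URL and paths are
# hypothetical. Requires the `feedparser` package.
import os
import urllib.request

import feedparser

FEED_URL = "https://example.org/videocast.rss"   # hypothetical channel
DOWNLOAD_DIR = "downloads"
seen: set[str] = set()                           # persisted in a real app

def poll_channel() -> None:
    feed = feedparser.parse(FEED_URL)
    os.makedirs(DOWNLOAD_DIR, exist_ok=True)
    for entry in feed.entries:
        for enclosure in entry.get("enclosures", []):
            url = enclosure.get("href", "")
            if not url or url in seen:
                continue
            seen.add(url)
            target = os.path.join(DOWNLOAD_DIR, os.path.basename(url))
            print(f"New video: {entry.get('title', url)}")   # notify
            urllib.request.urlretrieve(url, target)          # download

poll_channel()
```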
The Miro Video Converter converts video formats. It is based on FFmpeg with profiles for the Theora (.ogv), .mp4, and WebM video formats supported by various devices.
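Internally, a converter of this kind shells out to FFmpeg with a per-device profile. The sketch below is a rough equivalent of a WebM profile; the flags are standard FFmpeg options, not Miro's exact invocation, and the file names are made up for the example:

```python
# Rough equivalent of one Miro Video Converter profile: transcode an
# input file to WebM by shelling out to FFmpeg. These are standard
# FFmpeg options, not the exact flags Miro used.
import subprocess

def convert_to_webm(src: str, dst: str) -> None:
    subprocess.run(
        [
            "ffmpeg",
            "-i", src,            # input file
            "-c:v", "libvpx",     # VP8 video, as used in WebM
            "-b:v", "1M",         # target video bitrate
            "-c:a", "libvorbis",  # Vorbis audio
            dst,
        ],
        check=True,
    )

convert_to_webm("input.avi", "output.webm")
```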
A developer of Miro wrote that the Windows installer installs proprietary commercial software such as browser add-ons, also known as crapware, stating "This is one of the primary ways we fund continued Miro development."
History
The application was first launched in 2005 as Democracy Player (sometimes abbreviated as DTV) and later on as Miro in 2007. Video searching of web-based video archives was included in 2007, with access to various archives changing over time.
Miro is mostly written in Python, although it links to various libraries written in a variety of languages. Versions through 2.x had an almost entirely HTML/CSS based UI. Miro uses embedded WebKit in a GTK window on Unix/Linux (Mozilla Gecko/XUL until 3.0.2), WebKit in a Cocoa window on macOS, and Mozilla in a XUL window on Windows. Since version 3.0, the macOS port uses Cocoa and others use GTK. The embedded web browser is used only for web pages.
Reception
Miro received a favorable review from Josh Quittner who wrote "I have seen the future of television and it’s an application called Miro." In May 2011, Seth Rosenblatt of CNET wrote, "Providing one-stop shopping for all your video and audio management desires, open-source and cross-platform Miro deserves much of the praise that's been heaped upon it." The Softonic review gave the software a score of 9/10, and described the software as "a perfect example of how video content from different sources can be integrated into one single application and served directly to your PC in a fast, easy and elegant way."
See also
Broadcatching
Peercasting
Popcorn Time, a similar media player
Open Media Network
References
External links
Democracy Player is dead, long live Miro - last100, 17 July 2007
CNET 5 Star Review 2009
CNET Blog article on how to use Miro to automatically download the latest TV shows of your choice
CNET Video Review of Democracy Player - Dated
Groklaw interview with Nicholas Reville, February 2008
Internet television software
Free media players
Windows media players
Linux media players
MacOS media players
Free BitTorrent clients
Free software programmed in Python
2006 software
Video software that uses GTK
File sharing software that uses GTK
BitTorrent clients for Linux |
6591195 | https://en.wikipedia.org/wiki/Andrew%20Donald%20Booth | Andrew Donald Booth | Andrew Donald Booth (11 February 1918 – 29 November 2009) was a British electrical engineer, physicist and computer scientist, who was an early developer of the magnetic drum memory for computers. He is known for Booth's multiplication algorithm. In his later career in Canada he became president of Lakehead University.
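For illustration (a minimal sketch, not taken from the article), Booth's multiplication algorithm multiplies two's-complement integers by scanning adjacent bit pairs of the multiplier and adding, subtracting or skipping the multiplicand accordingly; in Python, with the word width as a parameter:

    def booth_multiply(m, r, bits):
        """Multiply signed integers m and r of the given bit width."""
        total = 2 * bits + 1
        mask = (1 << total) - 1
        A = ((m & ((1 << bits) - 1)) << (bits + 1)) & mask      # multiplicand
        S = (((-m) & ((1 << bits) - 1)) << (bits + 1)) & mask   # its negation
        P = ((r & ((1 << bits) - 1)) << 1) & mask               # multiplier
        for _ in range(bits):
            pair = P & 0b11
            if pair == 0b01:
                P = (P + A) & mask
            elif pair == 0b10:
                P = (P + S) & mask
            # arithmetic right shift by one, preserving the sign bit
            P = (P >> 1) | (P & (1 << (total - 1)))
        result = P >> 1
        if result >> (2 * bits - 1):   # reinterpret as a signed product
            result -= 1 << (2 * bits)
        return result

    assert booth_multiply(3, -4, 4) == -12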
Early life
The son of Sidney Booth (died 1955), a cousin of Sir Felix Booth, he was raised in Weybridge, Surrey, and educated at Haberdashers' Aske's Boys' School. In 1937, he won a scholarship to read mathematics at Jesus College, Cambridge, but left without taking a degree, having become disaffected with pure mathematics as a subject. He instead took an external degree from the University of London, graduating with first-class honours.
From 1943 to 1945, Booth worked as a mathematical physicist in the X-ray team at the British Rubber Producers' Research Association (BRPRA) in Welwyn Garden City, Hertfordshire, gaining his PhD in crystallography from the University of Birmingham in 1944. In 1945, he moved to Birkbeck College, University of London, where his work in the crystallography group led him to build some of the first electronic computers in the United Kingdom, including the All Purpose Electronic Computer, first installed at the British Rayon Research Association. Booth founded Birkbeck's department of numerical automation and was later named a fellow of the college. He also did early pioneering work in machine translation.
Booth served as president of Lakehead University from 1972 to 1978.
Personal life
Booth married the mathematician and computer engineer Kathleen Britten in 1950; they had two children, Amanda and Ian. Between 1947 and 1953 the couple together produced three computing machines.
See also
Booth's multiplication algorithm
Bibliography
Booth, A.D. and Britten, K.H.V. (1947) Coding for A.R.C., Institute for Advanced Study, Princeton
Booth, A.D. and Britten, K.H.V. (1947) General considerations in the design of an all-purpose electronic digital computer, Institute for Advanced Study, Princeton
Booth, A.D. and Britten, K.H.V. (1948) The accuracy of atomic co-ordinates derived from Fourier series in X-ray crystallography, Part V, Proc. Roy. Soc. A, Vol. 193, pp. 305–310
Booth, A.D. (1948) The Electronic Principles of Digital Computers, Electronics Forum
Booth, A.D. (1949) A Magnetic Digital Storage System, Electronic Engineering
Booth, A.D. (1950) The Physical Realization of An Electronic Digital Computer, Electronic Engineering
Booth, A.D. (1952) On Optimum Relations Between Circuit Elements and Logical Symbols in the Design of Electronic Calculators, Journal of British Institution of Radio Engineers
Booth, A.D. and Booth, K.H.V. (1953) Automatic Digital Calculators, Butterworth-Heinemann (Academic Press), London
References
External links
The APEXC driver page
Principles and Progress in the Construction of High-Speed Digital Computers
Andrew Booth Collection, University of Manchester Library.
1918 births
2009 deaths
People educated at Haberdashers' Aske's Boys' School
Academics of Birkbeck, University of London
Alumni of Jesus College, Cambridge
Alumni of the University of Birmingham
Alumni of the University of London
British electrical engineers
British computer scientists
Computer designers
History of computing in the United Kingdom
Lakehead University faculty
British expatriate academics in Canada
Canadian university and college chief executives |