Minister for Home Affairs (Australia)
https://en.wikipedia.org/wiki/Minister%20for%20Home%20Affairs%20%28Australia%29
The Minister for Home Affairs in the Government of Australia is the minister responsible for the Department of Home Affairs, the country's interior ministry. The current minister is Karen Andrews of the Liberal Party, who has held the position since March 2021 in the Morrison Government.
The current Department of Home Affairs was created in December 2017. The first department with that name was created in 1901, as one of the original six departments created at Federation, and was responsible for a wide range of areas not captured by the other departments. Similar departments have existed in almost all subsequent governments, under several different names. The specific title "Minister for Home Affairs" has been created six times – in 1901, 1929, 1977, 1987, 2007 and 2017.
History
The Minister for Home Affairs was a ministerial portfolio that existed continuously from 1901 to 12 April 1932, when Archdale Parkhill became Minister for the Interior in the first Lyons Ministry—subsuming his portfolios of Home Affairs and Transport.
The Home Affairs or Interior portfolio was responsible for various internal matters not handled by other ministries. In due course, other portfolios were established that took over functions from it, including:
Transport from 1928 to 1932 and continuously since 1941
Immigration since 1945
Agriculture since 1942
Industry from 1928 to 1945 and since 1963
The Minister for the Interior existed from 1932 to 1972. The Territories of Australia portfolio has been the responsibility of the Minister for Territories, under varying titles.
The Home Affairs Ministry was re-established in 2007, assuming the responsibilities of the Minister for Justice and Customs within the Attorney-General's Department with policy responsibilities for criminal justice, law enforcement, border control and national security and with oversight responsibilities of the Australian Customs Service and the Border Protection Command, the Australian Federal Police, the Australian Crime Commission, and the Office of Film and Literature Classification.
From September 2010 to September 2013, the Minister for Home Affairs also held the position of Minister for Justice. In September 2013, with the change of government, the position of Minister for Home Affairs was abolished; its border-control responsibilities were assumed by the newly created Minister for Immigration and Border Protection and its law-enforcement responsibilities by the Minister for Justice.
On 18 July 2017, Prime Minister Malcolm Turnbull announced the creation of a new home affairs department to be headed by Immigration Minister Peter Dutton, with responsibility for immigration, border control, domestic security, and law enforcement.
On 20 December 2017, Governor-General Peter Cosgrove swore Dutton into the position of Minister for Home Affairs. The Home Affairs portfolio was formed by way of an Administrative Arrangements Order issued on 20 December 2017 with responsibilities for national security including cybersecurity and counterterrorism, law enforcement, emergency management, transport security, immigration, citizenship, border control, and multicultural affairs.
The Minister for Home Affairs is currently assisted by the Minister for Immigration, Citizenship, Migrant Services and Multicultural Affairs.
List of Ministers for Home Affairs
The following individuals have been appointed as Minister for Home Affairs, or any of its related titles:
List of Ministers for Emergency Management
The following individuals have been appointed as Minister for Emergency Management, or any of its related titles:
List of Assistant Ministers for Customs, Community Safety and Multicultural Affairs
The following individuals have been appointed as Assistant Minister for Customs, Community Safety and Multicultural Affairs, or any of its related titles:
Former ministerial titles
List of ministers for Citizenship and Multicultural Affairs
The following individuals have been appointed as Minister for Citizenship and Multicultural Affairs, or any of its related titles:
List of ministers for Law Enforcement and Cybersecurity
The following individuals have been appointed as Minister for Law Enforcement and Cybersecurity, or any of its related titles:
List of Assistant Ministers for Home Affairs
The following individuals have been appointed as Assistant Minister for Home Affairs, or any of its related titles:
See also
Department of Home Affairs (1901–16)
Department of Home and Territories (1916–1928)
Department of Home Affairs (1928–32)
Department of the Interior (1932–39)
Department of the Interior (1939–72)
Department of Home Affairs (1977–80)
Department of Home Affairs and Environment (1980–84)
Department of Home Affairs (2017–Present)
References
Home Affairs
Tala, Kenya
https://en.wikipedia.org/wiki/Tala%2C%20Kenya
Tala is a town in Machakos County, located in the lower eastern region of Kenya and about 56 kilometres east of the Kenyan capital, Nairobi. It is usually classified as being one town with Kangundo, due to their close proximity. It lies 3,000 ft above sea level. Tala is a location of Matungulu division and is also part of Matungulu Constituency.
People/Languages
The main language spoken is Kikamba although the people who live there understand both Swahili and English.
Kangundo-Tala
Officially Kangundo is located in Nairobi Metro and its population when combined with Tala is the 8th largest of any urban area in Kenya. Tala is part of Kangundo town council. The CDF office of Matungulu Constituency is also located in the town.
Economy
Many of its residents are Kambas who practice subsistence farming on rural farms. Land holdings are relatively small and population density is high. Open-air markets are located in downtown Tala; the main market days are Tuesday and Friday, when farmers come to sell wares including paw paws, bananas, arrow roots, cowpea leaves (a vegetable delicacy in the area), maize and beans. Livestock trading is also a major enterprise on market days. The crops grown are mostly maize, beans, sorghum, millet, sweet potatoes, onions, bananas and others that can cope with the tropical climate of the area. Apart from these crops, farmers also grow coffee as a cash crop. Initially the returns were good, but farmers have complained of low prices, which has led to neglect of the crop. However, farmers who mill their coffee and take it directly to KPCU make good profits.
Climate
There are two rainy seasons during the year from November–January and again from March–April. February and May are the main harvesting periods and June–August are the colder months.
Education
Several schools exist in the town, including Tala High School, Mackenzie Education Centre – Tala, Tala Girls' High School, Kwatombe Primary School, Tala Boys' Primary School and Children's Home, Tala Academy and Holy Rosary College (formerly, Tala Secretarial College). A police post is also located in the town.
In October 2007, a number of students from King George V School in Hong Kong took part in a charity and service trip to Tala Boys' Primary School and Children's Home, where they spent days interacting with the children, renovating the classrooms and building pavements for those in need.
Holy Rosary College is located roughly 1.5 kilometers west of downtown. It is an all-girls college (roughly 70–90 students) and classes run year-round. Several programs are offered, including information technology (IT), business administration and secretarial studies. It is accredited to the Jomo Kenyatta University of Agriculture and Technology. The courses offered in conjunction with JKUAT include Business Information Technology, Business Administration, Public Relations, Information Technology and a Bridging Certificate Course in Maths. It is also a Cisco Local Academy. At present the college has about 200 students.
Nairobi Metro
Machakos County is within Greater Nairobi, which consists of 4 of Kenya's 47 counties but generates about 60% of the nation's wealth. The counties are:
Source: NairobiMetro/ Kenya Census
See also
Kangundo
Machakos County
References
Related links
Tala in Google Maps
Populated places in Eastern Province (Kenya)
Machakos County
Vectorworks
https://en.wikipedia.org/wiki/Vectorworks
Vectorworks, Inc. is a U.S.-based software development company that focuses on CAD and BIM software for the architecture, engineering, and construction (AEC), landscape, and entertainment industries. Vectorworks is owned by the Nemetschek Group, a multinational company, and operates as an independent subsidiary of the group.
History
The company was founded in 1985 as Graphsoft by Richard Diehl. Diehl later changed the name to Diehl Graphsoft. The first version of the company’s software was named MiniCAD, designed for Apple Macintosh. Version 2.0, MiniCAD Plus was released in 1989 and allowed architects to model custom details similar to hand-drawn work. The software was subsequently rebranded as Vectorworks in the late 1990s.
In 2000, the company was acquired by the Nemetschek Group and became Nemetschek Vectorworks, Inc. It rebranded again in 2016 adopting its current name, Vectorworks. In 2015, Vectorworks announced it was acquiring the ESP Vision product line, to further enhance its stage lighting and pre-visualization software.
In April 2016, Biplab Sarkar was named the company’s third CEO. Sarkar took over the position from Sean Flaherty, who assumed the role of Chief Strategy Officer on Nemetschek's supervisory board. That year, Vectorworks’ annual software release for its CAD and BIM programs included updates and new features that were 70% based on customer feedback. That year, the company also introduced its Algorithms Aided Design (AAD) tool, Marionette.
In 2017, Vectorworks acquired Design Software Solutions Limited and opened a new office in the UK. In 2019, Vectorworks shared that its 2020 release would include live data visualization as well as games-style Level of Detail (LoD) control.
In April 2019, Vectorworks became the first architectural software developer to receive the IFC4 Export Certification from buildingSMART International (bSI). With the certification, the developer meets export criteria for International Standard ISO-16739. Vectorworks further expanded its platforms in August 2019 by acquiring the ConnectCAD plug-in, for use with its AV installation software.
In 2020, Vectorworks released support for multi-core processing. The core processing for Vectorworks is handled by the Vectorworks Graphic Module (VGM), while the processing for external technologies connected to the company’s software, like Enscape and Lumion, uses Vectorworks Graphic Sync (VGS).
Vectorworks hosts an annual Design Summit to allow users of its software to provide feedback, meet the company’s developers, and learn about new developments and updates. In addition to in-person feedback, users can also opt in to sharing their usage data with the company. Around one billion user interactions are logged daily by Vectorworks.
Vectorworks also licenses Pixar’s OpenSubdiv library for 3D mesh modeling.
Software
Vectorworks supports tight integration of all building elements, providing architects with Building Information Modeling (BIM). Vectorworks software utilizes Siemens’ Parasolid modelling kernel to underpin its geometry creation. Users can choose from eight different options of the software: Landmark for landscape architecture and design, and site analysis; Spotlight for stage and lighting design; Fundamentals for basic 2D/3D modeling and documentation; Architect for conceptual design, construction documents, and fully coordinated BIM models; Braceworks for rigging analysis of temporary structures; Vision for lighting previsualization; ConnectCAD for signal flow designing; and Designer, which provides Architect, Landmark, and Spotlight in one package. Vectorworks’ software also includes the ability for customization using Python, VectorScript, or the included algorithms-aided design feature called Marionette.
See also
Architectural drawing
References
External links
Vectorworks website
Building information modeling
Computer-aided design
Red Dorman
https://en.wikipedia.org/wiki/Red%20Dorman
Charles Dwight "Red" Dorman (October 3, 1900 – December 7, 1974) was a Major League Baseball outfielder who played one season, appearing in 28 games for the Cleveland Indians in 1928.
Biography
Dorman began his professional baseball career with the Tyler Trojans of the D-Class Lone Star League. With the Trojans, he had a batting average of .408, 39 home runs, and 20 stolen bases. He led the league in home runs, batting average, and doubles, and as a result the Cleveland Indians brought him onto their major league roster. In his major league debut on August 21, he faced the 46-year-old Jack Quinn, and doubled in his first at-bat, drawing praise from Indians manager Roger Peckinpaugh. He spent the last month of the season as the team's center fielder, and finished the year with a .363 batting average.
In 1929, Dorman failed to make the team out of spring training, and was assigned to the New Orleans Pelicans of the Southern Association. On the season, he had a .301 batting average in 102 games for the Pelicans. He joined the Indians in mid-September along with Zeke Bonura and Mike Powers, but none of the three played a game for Cleveland that season.
In February 1930, Dorman's wife of 15 months died. He spent the season with the Indianapolis Indians and the Kansas City Blues, playing in a combined 105 games for the two teams. In 1931, Dorman spent the season with the Terre Haute Tots, finishing the season with a .283 batting average. He suffered an accident in 1931, and after the season never again played professionally.
References
External links
1900 births
1974 deaths
Cleveland Indians players
Major League Baseball outfielders
Baseball players from Illinois
Tyler Trojans players
Cog (software)
https://en.wikipedia.org/wiki/Cog%20%28software%29
Cog is an open source audio player for macOS. The basic layout is a single-paned playlist interface with two retractable drawers, one for navigating the user's music folders and another for viewing audio file properties, like bitrate. Along with supporting most audio formats compatible with macOS's Core Audio API, Cog supports a wide array of other audio formats, along with their metadata, which are otherwise unsupported on macOS.
In April 2006, Cog joined other Mac OS X audio software Tag and Max in an effort by the respective authors to consolidate Mac OS X open source audio software on the internet. Subsequently, the Cog website was redesigned to Tag and Max's website design, and its forums were also moved to the Tag and Max Forums. In July 2007, Cog moved to its own separate forums shortly before the release of version 0.06. The last build was created in 2013.
As the original project appears to be abandoned, with the website last updated in 2008, there are now several forks of the project maintained by others. In 2013, Christopher Snowhill started a fork and continues to maintain and develop it as of 2020. In 2015, MacRumors user Vivo made a new audio player, Phonix, which is based on the original Cog code.
Features
General
Last.fm support
Growl support
Global hotkeys
File drawer
Info drawer
Smart shuffle
Seeking
Feedback form
Automatic Updates (choice of Stable, Nightly, or Unstable)
Audio formats
AIFF
Apple Lossless
Free Lossless Audio Codec (FLAC)
Monkey's Audio
MP3
Musepack
Ogg Vorbis
Shorten
WavPack
WAV
Video game music formats (NSF, GBS, GYM, SPC, VGM, HES, etc.)
tracker formats (IT, S3M, XM, MOD, etc.)
Cue sheet
Playlist formats
M3U
PLS
Metadata formats
Vorbis comments
ID3 v1.0, 1.1, 2.3+
FLAC tags
APEv1 and APEv2 tags
Languages/Localizations
English
French
German
Greek
Hebrew
Swedish
Catalan
Dutch
Russian
Spanish
Chinese
Known Issues for Mac OS X v10.5 Leopard
Playlist issues including "invisible playlist"
Unattended cross-fade at random times
See also
Comparison of audio player software
References
External links
Cog home page
Cog forums
macOS media players
Free audio software
Free software programmed in Objective-C
MacOS-only free software
Inversion of control
https://en.wikipedia.org/wiki/Inversion%20of%20control
In software engineering, inversion of control (IoC) is a programming principle. IoC inverts the flow of control as compared to traditional control flow. In IoC, custom-written portions of a computer program receive the flow of control from a generic framework. A software architecture with this design inverts control as compared to traditional procedural programming: in traditional programming, the custom code that expresses the purpose of the program calls into reusable libraries to take care of generic tasks, but with inversion of control, it is the framework that calls into the custom, or task-specific, code.
Inversion of control is used to increase modularity of the program and make it extensible, and has applications in object-oriented programming and other programming paradigms. The term was used by Michael Mattsson in a thesis, taken from there by Stefano Mazzocchi and popularized by him in 1999 in a defunct Apache Software Foundation project, Avalon, then further popularized in 2004 by Robert C. Martin and Martin Fowler.
The term is related to, but different from, the dependency inversion principle, which concerns itself with decoupling dependencies between high-level and low-level layers through shared abstractions. The general concept is also related to event-driven programming in that it is often implemented using IoC so that the custom code is commonly only concerned with the handling of events, whereas the event loop and dispatch of events/messages is handled by the framework or the runtime environment.
Overview
As an example, with traditional programming, the main function of an application might make function calls into a menu library to display a list of available commands and query the user to select one. The library thus would return the chosen option as the value of the function call, and the main function uses this value to execute the associated command. This style was common in text based interfaces. For example, an email client may show a screen with commands to load new mail, answer the current mail, create new mail, etc., and the program execution would block until the user presses a key to select a command.
With inversion of control, on the other hand, the program would be written using a software framework that knows common behavioral and graphical elements, such as windowing systems, menus, controlling the mouse, and so on. The custom code "fills in the blanks" for the framework, such as supplying a table of menu items and registering a code subroutine for each item, but it is the framework that monitors the user's actions and invokes the subroutine when a menu item is selected. In the mail client example, the framework could follow both the keyboard and mouse inputs and call the command invoked by the user by either means, and at the same time monitor the network interface to find out if new messages arrive and refresh the screen when some network activity is detected. The same framework could be used as the skeleton for a spreadsheet program or a text editor. Conversely, the framework knows nothing about Web browsers, spreadsheets or text editors; implementing their functionality takes custom code.
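The contrast can be sketched in Java. The small "framework" below owns the input loop and dispatches to handlers supplied by the application; the names here (MenuFramework, the handler map) are hypothetical illustrations, not the API of any particular library:

```java
import java.util.Map;
import java.util.Scanner;

// A tiny "framework" that owns the event loop and calls back into
// custom code, inverting the traditional arrangement in which the
// application calls into a library and waits for the result.
class MenuFramework {
    private final Map<String, Runnable> handlers;

    MenuFramework(Map<String, Runnable> handlers) {
        this.handlers = handlers; // custom code "fills in the blanks"
    }

    // The framework, not the application, monitors input and dispatches.
    void run(Scanner input) {
        while (input.hasNextLine()) {
            Runnable handler = handlers.get(input.nextLine().trim());
            if (handler != null) {
                handler.run(); // control flows from framework to custom code
            }
        }
    }
}
```

An application would register its commands, e.g. `new MenuFramework(Map.of("load", () -> loadMail()))`, and then hand control to `run`; from that point on, the framework decides when the custom code executes.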
Inversion of control carries the strong connotation that the reusable code and the problem-specific code are developed independently even though they operate together in an application. Callbacks, schedulers, event loops, dependency injection, and the template method are examples of design patterns that follow the inversion of control principle, although the term is most commonly used in the context of object-oriented programming.
Inversion of control serves the following design purposes:
To decouple the execution of a task from implementation.
To focus a module on the task it is designed for.
To free modules from assumptions about how other systems do what they do and instead rely on contracts.
To prevent side effects when replacing a module.
Inversion of control is sometimes facetiously referred to as the "Hollywood Principle: Don't call us, we'll call you".
Background
Inversion of control is not a new term in computer science. Martin Fowler traces the etymology of the phrase back to 1988, but it is closely related to the concept of program inversion described by Michael Jackson in his Jackson Structured Programming methodology in the 1970s. A bottom-up parser can be seen as an inversion of a top-down parser: in the one case, the control lies with the parser, while in the other case, it lies with the receiving application.
Dependency injection is a specific type of IoC. A service locator such as the Java Naming and Directory Interface (JNDI) is similar. In an article by Loek Bergman, it is presented as an architectural principle.
In an article by Robert C. Martin, the dependency inversion principle and abstraction by layering come together. His reason to use the term "inversion" is in comparison with traditional software development methods. He describes the uncoupling of services by the abstraction of layers when he is talking about dependency inversion. The principle is used to find out where system borders are in the design of the abstraction layers.
Description
In traditional programming, the flow of the business logic is determined by objects that are statically bound to one another. With inversion of control, the flow depends on the object graph that is built up during program execution. Such a dynamic flow is made possible by object interactions that are defined through abstractions. This run-time binding is achieved by mechanisms such as dependency injection or a service locator. In IoC, the code could also be linked statically during compilation, with the code to execute found by reading its description from external configuration instead of with a direct reference in the code itself.
In dependency injection, a dependent object or module is coupled to the object it needs at run time. Which particular object will satisfy the dependency during program execution typically cannot be known at compile time using static analysis. While described in terms of object interaction here, the principle can apply to other programming methodologies besides object-oriented programming.
In order for the running program to bind objects to one another, the objects must possess compatible interfaces. For example, class A may delegate behavior to interface I which is implemented by class B; the program instantiates A and B, and then injects B into A.
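That interaction can be written out directly, using the placeholder names A, B, and I from the sentence above; constructor injection is used here for illustration:

```java
// Interface I defines the behavior that class A depends on.
interface I {
    String doWork();
}

// Class B is one concrete implementation of I.
class B implements I {
    public String doWork() {
        return "work done by B";
    }
}

// Class A delegates to whatever I it is given; it never names B,
// so another implementation of I can be substituted without changing A.
class A {
    private final I delegate;

    // Constructor injection: the dependency is supplied from outside.
    A(I delegate) {
        this.delegate = delegate;
    }

    String perform() {
        return delegate.doWork();
    }
}
```

The program (or a container) then wires the graph at run time: `new A(new B()).perform()`.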
Implementation techniques
In object-oriented programming, there are several basic techniques to implement inversion of control. These are:
Using a service locator pattern
Using dependency injection, for example
Constructor injection
Parameter injection
Setter injection
Interface injection
Using a contextualized lookup
Using the template method design pattern
Using the strategy design pattern
In an original article by Martin Fowler, the first three different techniques are discussed. In a description about inversion of control types, the last one is mentioned. Often, the contextualized lookup is accomplished using a service locator.
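As a minimal sketch of such a service locator (the registry, names, and lookup API here are hypothetical, not those of JNDI or any real container):

```java
import java.util.HashMap;
import java.util.Map;

// A minimal service locator: components look services up by name at
// run time instead of referencing concrete implementations directly.
class ServiceLocator {
    private static final Map<String, Object> services = new HashMap<>();

    static void register(String name, Object service) {
        services.put(name, service);
    }

    @SuppressWarnings("unchecked")
    static <T> T lookup(String name) {
        // The caller knows only the name and the expected type.
        return (T) services.get(name);
    }
}
```

Unlike dependency injection, the dependent code actively asks the locator for its collaborators, so it depends on the locator itself; which implementation it receives is still decided elsewhere.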
Examples
Most frameworks such as .NET or Enterprise Java display this pattern:
public class ServerFacade {
    // Validates the request in the business layer, fetches the matching
    // data through the data access object (DAO), and converts it before
    // returning it to the caller.
    public <K, V> V respondToRequest(K request) {
        if (businessLayer.validateRequest(request)) {
            Data data = DAO.getData(request);
            return Aspect.convertData(data);
        }
        return null;
    }
}
This basic outline in Java gives an example of code following the IoC methodology. It is important, however, that in the ServerFacade a lot of assumptions are made about the data returned by the data access object (DAO).
Although all these assumptions might be valid at some time, they couple the implementation of the ServerFacade to the DAO implementation. Designing the application in the manner of inversion of control would hand over the control completely to the DAO object. The code would then become
public class ServerFacade {
    public <K, V> V respondToRequest(K request, DAO dao) {
        return dao.getData(request);
    }
}
The example shows that the way the method is constructed determines if IoC is used. It is the way that parameters are used that defines IoC. This resembles the message-passing style that some object-oriented programming languages use.
See also
Abstraction layer
Archetype pattern
Asynchronous I/O
Aspect-oriented programming
Callback (computer science)
Closure (computer science)
Continuation
Delegate (CLI)
Dependency inversion principle
Flow-based programming
Implicit invocation
Interrupt handler
Message Passing
Monad (functional programming)
Observer pattern
Publish/subscribe
Service locator pattern
Signal (computing)
Software framework
Strategy pattern
User exit
Visitor pattern
XSLT
References
External links
Inversion of Control explanation and implementation example
Software architecture
Architectural pattern (computer science)
Java (programming language)
Programming principles
Component-based software engineering
Software design patterns
Full stop
https://en.wikipedia.org/wiki/Full%20stop
The full stop (Commonwealth English), period (North American English) or full point is a punctuation mark. It is used for several purposes, most often to mark the end of a declarative sentence (as distinguished from a question or exclamation). This sentence-ending use, alone, defines the strictest sense of full stop. Although full stop technically applies only when the mark is used to end a sentence, the distinction – drawn since at least 1897 – is not maintained by all modern style guides and dictionaries.
The mark is also used, singly, to indicate omitted characters or, in a series, as an ellipsis (…), to indicate omitted words. It may be placed after an initial letter used to stand for a name or after each individual letter in an initialism or acronym (e.g., "U.S.A."). However, the use of full stops after letters in an initialism or acronym is declining, and many of these without punctuation have become accepted norms (e.g., "UK" and "NATO"). This trend has progressed somewhat more slowly in the United States than in other English-language dialects.
A full stop is frequently used at the end of word abbreviations – in British usage, primarily truncations like Rev., but not after contractions like Revd (in American English it is used in both cases).
In the English-speaking world, a punctuation mark identical to the full stop is used as the decimal separator and for other purposes, and may be called a point. In computing, it is called a dot. It is sometimes called a baseline dot to distinguish it from the interpunct (or middle dot).
History
Ancient Greek origin
The full stop symbol derives from the Greek punctuation introduced by Aristophanes of Byzantium in the 3rd century BC. In his system, there was a series of dots whose placement determined their meaning.
stigmḕ teleía, stigmḕ mésē and hypostigmḕ
The full stop at the end of a completed thought or expression was marked by a high dot ⟨˙⟩, called the stigmḕ teleía or "terminal dot". The "middle dot" ⟨·⟩, the stigmḕ mésē, marked a division in a thought occasioning a longer breath (essentially a semicolon), while the low dot ⟨.⟩, called the hypostigmḕ or "underdot", marked a division in a thought occasioning a shorter breath (essentially a comma).
Medieval simplification
In practice, scribes mostly employed the terminal dot; the others fell out of use and were later replaced by other symbols. From the 9th century onwards, the full stop began appearing as a low mark (instead of a high one), and by the time printing began in Western Europe, the lower dot was regular and then universal.
Medieval Latin and modern English period
The name period is first attested, as a Latin loanword, in Ælfric of Eynsham's Old English treatise on grammar. There, it is distinguished from the full stop, and continues the Greek underdot's earlier function as a comma between phrases. It shifted its meaning, to a dot marking a full stop, in the works of the 16th-century grammarians. In 19th-century texts, both British English and American English were consistent in their usage of the terms period and full stop. The word period was used as a name for what printers often called the "full point", the punctuation mark that was a dot on the baseline and used in several situations. The phrase full stop was only used to refer to the punctuation mark when it was used to terminate a sentence. This terminological distinction seems to be eroding. For example, the 1998 edition of Fowler's Modern English Usage used full point for the mark used after an abbreviation, but full stop or full point when it was employed at the end of a sentence; the 2015 edition, however, treats them as synonymous (and prefers full stop), and New Hart's Rules does likewise (but prefers full point). The last edition (1989) of the original Hart's Rules (before it became The Oxford Guide to Style in 2002) exclusively used full point.
Usage
Full stops are one of the most commonly used punctuation marks; analyses of texts indicate that approximately half of all punctuation marks used are full stops.
Ending sentences
Full stops indicate the end of sentences that are not questions or exclamations.
After initials
It is usual in North American English to use full stops after initials; e.g. A. A. Milne, George W. Bush. British usage is less strict. A few style guides discourage full stops after initials. However, there is a general trend, along with various initiatives, to spell out names in full instead of abbreviating them, in order to avoid ambiguity.
Abbreviations
A full stop is used after some abbreviations. If the abbreviation ends a declarative sentence there is no additional period immediately following the full stop that ends the abbreviation (e.g. "My name is Gabriel Gama, Jr."). Though two full stops (one for the abbreviation, one for the sentence ending) might be expected, conventionally only one is written. This is an intentional omission, and thus not haplography, which is unintentional omission of a duplicate. In the case of an interrogative or exclamatory sentence ending with an abbreviation, a question or exclamation mark can still be added (e.g. "Are you Gabriel Gama Jr.?").
Abbreviations and personal titles of address
According to the Oxford A–Z of Grammar and Punctuation, "If the abbreviation includes both the first and last letter of the abbreviated word, as in 'Mister' ['Mr'] and 'Doctor' ['Dr'], a full stop is not used." This does not include, for example, the standard abbreviations for titles such as Professor ("Prof.") or Reverend ("Rev."), because they do not end with the last letter of the word they are abbreviating.
In American English, the common convention is to include the period after all such abbreviations.
Acronyms and initialisms
In acronyms and initialisms, the modern style is generally to not use full points after each initial (e.g.: DNA, UK, USSR). The punctuation is somewhat more often used in American English, most commonly with U.S. and U.S.A. in particular. However, this depends much upon the house style of a particular writer or publisher. As some examples from American style guides, The Chicago Manual of Style (primarily for book and academic-journal publishing) deprecates the use of full points in acronyms, including U.S., while The Associated Press Stylebook (primarily for journalism) dispenses with full points in acronyms except for certain two-letter cases, including U.S., U.K., and U.N., but not EU.
Mathematics
The period glyph is used in the presentation of numbers, in one of two mutually exclusive styles, depending on regional convention.
In the more prevalent usage in English-speaking countries, the point represents a decimal separator, visually dividing whole numbers from fractional (decimal) parts. The comma is then used to separate the whole-number parts into groups of three digits each, when numbers are sufficiently large.
1.007 (one and seven thousandths)
1,002.007 (one thousand two and seven thousandths)
1,002,003.007 (one million two thousand three and seven thousandths)
The usage more prevalent in much of Europe, southern Africa, and Latin America (with the exception of Mexico, due to the influence of the United States) reverses the roles of the comma and point, but sometimes substitutes a (thin) space for the point.
1,007 (one and seven thousandths)
1.002,007 or 1 002,007 (one thousand two and seven thousandths)
1.002.003,007 or 1 002 003,007 (one million two thousand three and seven thousandths)
(To avoid problems with spaces, another convention sometimes used is to write apostrophes (') as group separators instead.)
India, Bangladesh, Nepal, and Pakistan follow the Indian numbering system, which uses commas and decimal points much like the aforementioned system popular in most English-speaking countries, but separates values of one hundred thousand and above differently, into divisions of lakh and crore:
1.007 (one and seven thousandths)
1,002.007 (one thousand two and seven thousandths)
10,02,003.007 (one million two thousand three and seven thousandths or ten lakh two thousand three and seven thousandths)
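Both grouping conventions can be reproduced in Python. The format mini-language handles the Western comma grouping directly, while the Indian lakh/crore grouping is sketched here with a small helper (the helper's name and structure are illustrative, not a standard API):

```python
def indian_group(value, decimals=3):
    """Format a number with Indian digit grouping (lakh/crore).

    Illustrative helper: keeps the last three integer digits together,
    then groups the rest in pairs, as in 10,02,003.007.
    """
    whole, frac = f"{value:.{decimals}f}".split(".")
    if len(whole) > 3:
        head, tail = whole[:-3], whole[-3:]
        parts = [tail]
        while len(head) > 2:
            parts.insert(0, head[-2:])
            head = head[:-2]
        parts.insert(0, head)
        whole = ",".join(parts)
    return f"{whole}.{frac}"

# Western English grouping via the format mini-language:
print(f"{1002003.007:,.3f}")        # 1,002,003.007
# Indian grouping via the helper:
print(indian_group(1002003.007))    # 10,02,003.007
```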
In countries that use the comma as a decimal separator, the point is sometimes found as a multiplication sign; for example, 5,2 . 2 = 10,4; this usage is impractical in cases where the point is used as a decimal separator, hence the use of the interpunct: 5.2 · 2 = 10.4. This notation is also seen when multiplying units in science; for example, 50 km/h could be written as 50 km·h−1. However, the point is used in all countries to indicate a dot product, i.e. the scalar product of two vectors.
Logic
In older literature on mathematical logic, the period glyph was used to indicate how expressions should be bracketed (see Glossary of Principia Mathematica).
Computing
In computing, the full point, usually called a dot in this context, is often used as a delimiter, such as in DNS lookups, Web addresses, file names and software release versions:
www.wikipedia.org
document.txt
192.168.0.1
Chrome 92.0.4515.130
It is used in many programming languages as an important part of the syntax. C uses it as a means of accessing a member of a struct, and this syntax was inherited by C++ as a means of accessing a member of a class or object. Java and Python also follow this convention. Pascal uses it as a means of accessing a member of a record (the equivalent of a struct in C) or of an object, and also after the end construct that closes the body of the program. In APL it is also used for generalised inner product and outer product. In Erlang, Prolog, and Smalltalk, it marks the end of a statement ("sentence"). In a regular expression, it represents a match of any character. In Perl and PHP, the dot is the string concatenation operator. In the Haskell standard library, it is the function composition operator. In COBOL a full stop ends a statement.
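Two of these roles can be seen side by side in Python, which uses the dot both for member access and, in regular expressions, as the any-character wildcard (the class below is purely illustrative):

```python
import re

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(3, 4)
print(p.x)  # the dot accesses a member of the object: 3

# In a regular expression the unescaped dot matches any character,
# so a literal full stop must be escaped:
print(bool(re.fullmatch(r"a.c", "abc")))      # True
print(bool(re.fullmatch(r"3\.14", "3.14")))   # True
print(bool(re.fullmatch(r"3\.14", "3X14")))   # False
```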
In file systems, the dot is commonly used to separate the extension of a file name from the name of the file. RISC OS uses dots to separate levels of the hierarchical file system when writing path names—similar to / (forward-slash) in Unix-based systems and \ (back-slash) in MS-DOS-based systems and the Windows NT systems that succeeded them.
In Unix-like operating systems, some applications treat files or directories that start with a dot as hidden. This means that they are not displayed or listed to the user by default.
In Unix-like systems and Microsoft Windows, the dot character represents the working directory of the file system. Two dots (..) represent the parent directory of the working directory.
Bourne shell-derived command-line interpreters, such as sh, ksh, and bash, use the dot as a command to read a file and execute its content in the running interpreter. (Some of these also offer source as a synonym, based on that usage in the C-shell.)
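These conventions can be observed directly in a POSIX shell session; the file names below are just illustrations:

```shell
cd /tmp

# A leading dot hides a file from a plain `ls`:
touch .hidden_example
ls -a                      # lists it, along with . and ..

# `.` is the working directory and `..` its parent:
( cd .. && pwd )           # prints /

# The dot command executes a file in the *current* shell,
# so variables it sets remain visible afterwards:
echo 'GREETING=hello' > vars_example.sh
. ./vars_example.sh
echo "$GREETING"           # prints hello
```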
Versions of software are often denoted with the style x.y.z (or more), where x is a major release, y is a mid-cycle enhancement release and z is a patch level designation, but actual usage is entirely vendor specific.
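Because the components are dot-separated, such version strings are easy to split programmatically; here, using the Chrome release listed earlier:

```python
version = "92.0.4515.130"   # the Chrome release from the example above
major, minor, build, patch = (int(part) for part in version.split("."))
print(major, patch)   # 92 130

# Version components compare numerically, not lexically:
assert (2, 10, 0) > (2, 9, 5)   # tuple comparison gets this right
assert "2.10.0" < "2.9.5"       # naive string comparison does not
```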
Telegraphy
The term STOP was used in telegrams in place of the full stop. The end of a sentence would be marked by STOP; its use "in telegraphic communications was greatly increased during the World War, when the Government employed it widely as a precaution against having messages garbled or misunderstood, as a result of the misplacement or omission of the tiny dot or period."
In conversation
In British English, the words "full stop" at the end of an utterance strengthen it, signalling that it admits of no discussion: "I'm not going with you, full stop." In American English, the word "period" serves this function.
A related use in African-American Vernacular English is the phrase "And that's on period", which expresses the strength of the speaker's previous statement, usually to emphasise an opinion.
Linguistics
The International Phonetic Alphabet uses the full stop to signify a syllable break.
Time
In British English, whether for the 12 or 24-hour clock, some style guides recommend the full stop when telling time, including those from the BBC and other public broadcasters in the UK, the academic manual published by Oxford University Press under various titles, as well as the internal house style book for the University of Oxford, and that of The Guardian and The Times newspapers. American and Canadian English mostly prefers and uses colons (:) (i.e., 11:15 PM/pm/p.m. or 23:15 for AmE/CanE and 11.15 pm or 23.15 for BrE). The full stop as a time separator is also used in Irish English, particularly by the Raidió Teilifís Éireann (RTÉ), and to a lesser extent in Australian, Cypriot, Maltese, New Zealand, South African and other Commonwealth English varieties outside Canada.
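In code, the separator is simply part of the format string, so either convention is easy to produce; for instance, in Python (output strings follow the styles described above):

```python
from datetime import time

t = time(23, 15)
print(t.strftime("%H:%M"))   # 23:15  (colon separator, AmE/CanE)
print(t.strftime("%H.%M"))   # 23.15  (full-stop separator, BrE)
print(t.strftime("%I.%M"))   # 11.15  (12-hour clock; append am/pm as needed)
```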
Punctuation styles when quoting
The practice in the United States and Canada is to place full stops and commas inside quotation marks in most styles. In the British system, which is also called "logical quotation", full stops and commas are placed according to grammatical sense: This means that when they are part of the quoted material, they should be placed inside, and otherwise should be outside. For example, they are placed outside in the cases of words-as-words, titles of short-form works, and quoted sentence fragments.
Bruce Springsteen, nicknamed "the Boss," performed "American Skin." (closed or American style)
Bruce Springsteen, nicknamed "the Boss", performed "American Skin". (logical or British style)
He said, "I love music." (both)
There is some national crossover. American style is common in British fiction writing. British style is sometimes used in American English. For example, the Chicago Manual of Style recommends it for fields where comma placement could affect the meaning of the quoted material, such as linguistics and textual criticism.
Use of placement according to logical or grammatical sense, or "logical convention", now the more common practice in regions other than North America, was advocated in the influential book The King's English by Fowler and Fowler, published in 1906. Prior to the influence of this work, the typesetter's or printer's style, or "closed convention", now also called American style, was common throughout the world.
Spacing after a full stop
There have been a number of practices relating to the spacing after a full stop. Some examples are listed below:
One word space ("French spacing"). This is the current convention in most countries that use the ISO basic Latin alphabet for published and final written work, as well as digital media.
Two word spaces ("English spacing"). It is sometimes claimed that the two-space convention stems from the use of monospaced fonts on typewriters, but in fact it replicates much earlier typography, in which the intent was to provide a clear break between sentences. This spacing method was gradually replaced by the single-space convention in published print, where space is at a premium, though it continues in much digital media.
One widened space (such as an em space). This spacing was seen in historical typesetting practices (until the early 20th century). It has also been used in other typesetting systems such as the Linotype machine and the TeX system. Modern computer-based digital fonts can adjust the spacing after terminal punctuation as well, creating a space slightly wider than a standard word space.
Full stops in other scripts
Greek
Although the present Greek full stop (, teleía) is romanized as a Latin full stop and encoded identically with the full stop in Unicode, the historic full stop in Greek was a high dot and the low dot functioned as a kind of comma, as noted above. The low dot was increasingly but irregularly used to mark full stops after the 9th century and was fully adapted after the advent of print. The teleia should also be distinguished from the ano teleia mark, which is named "high stop" but looks like an interpunct (a middle dot) and principally functions as the Greek semicolon.
Armenian
The Armenian script uses the ։ (, ). It looks similar to the colon (:).
Chinese and Japanese
In Simplified Chinese and Japanese, a small circle is used instead of a solid dot: "。︀" (U+3002 "Ideographic Full Stop"). Traditional Chinese uses the same symbol centered in the line rather than aligned to the baseline.
Korean
Korean uses the Latin full stop along with its native script, while Vietnamese uses both the Latin alphabet and punctuation.
Devanagari
In the Devanagari script, used to write Hindi and Sanskrit among other Indian languages, a vertical line ("।") (U+0964 "Devanagari Danda") is used to mark the end of a sentence. It is known as (full stop) in Hindi and in Bengali. Some Indian languages also use the full stop, such as Marathi. In Tamil, it is known as , which means end dot.
Sinhala
In Sinhala, it is known as kundaliya: "෴" (U+0DF4 "Sinhala Punctuation Kunddaliya"). Periods were later introduced into Sinhala script after the introduction of paper due to the influence of Western languages. See also Sinhala numerals.
Urdu
A (, ) is used as a full stop at the end of sentences and in abbreviations. It looks similar to a lowered dash .
Thai
In Thai, no symbol corresponding to the full stop is used as terminal punctuation. A sentence is written without spaces and a space is typically used to mark the end of a clause or sentence.
Ge'ez
In the Ge'ez script used to write Amharic and several other Ethiopian and Eritrean languages, the equivalent of the full stop following a sentence is the "።"—which means four dots. The two dots on the right are slightly ascending from the two on the left, with space in between.
Encodings
The character is encoded by Unicode at .
There is also , used in several shorthand (stenography) systems.
The character is full-width encoded at . This form is used alongside CJK characters.
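The relevant codepoints can be inspected from Python's standard `unicodedata` module (character names as given in the Unicode standard):

```python
import unicodedata

print(hex(ord(".")))           # 0x2e
print(unicodedata.name("."))   # FULL STOP
print(hex(ord("．")))          # 0xff0e  (FULLWIDTH FULL STOP, used with CJK text)
print(hex(ord("。")))          # 0x3002  (IDEOGRAPHIC FULL STOP, see above)
```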
In text messages
Researchers from Binghamton University performed a small study, published in 2016, on young adults and found that text messages that included sentences ended with full stops—as opposed to those with no terminal punctuation—were perceived as insincere, though they stipulated that their results apply only to this particular medium of communication: "Our sense was, is that because [text messages] were informal and had a chatty kind of feeling to them, that a period may have seemed stuffy, too formal, in that context," said head researcher Cecelia Klin. The study did not find handwritten notes to be affected.
A 2016 story by Jeff Guo in The Washington Post stated that the line break had become the default method of punctuation in texting, comparable to the use of line breaks in poetry, and that a period at the end of a sentence causes the tone of the message to be perceived as cold, angry or passive-aggressive.
According to Gretchen McCulloch, an internet linguist, using a full stop to end messages is seen as "rude" by more and more people. She said this can be attributed to the way we text and use instant messaging apps like WhatsApp and Facebook Messenger. She added that the default way to break up one's thoughts is to send each thought as an individual message.
See also
References
Chumby
The Chumby is a consumer electronics product formerly made by Chumby Industries, Inc. It is an embedded computer which provides Internet and LAN access via a Wi-Fi connection. Through this connection, the Chumby runs various software widgets. In 2010 Sony introduced a single product based on an offshoot version of Chumby, the Sony Dash.
Devices
Roughly resembling a small clock radio, the original Chumby features a small resistive touch-screen housed in a leather and plastic exterior offered in six color options. Power to the original Chumby and the Chumby 8 is supplied through an AC adapter. A later model, the Chumby One, also offered the option of a 9 V backup battery. Related devices, the Infocast 3.5 and Infocast 8, manufactured by Best Buy and based on the Chumby software, are also AC-powered only. The device is designed to be customizable by users: after agreeing to the Chumby HDK License, users may download schematics and other hardware information. Wired magazine named Chumby one of its top gadgets for 2008. Its software is mostly open source, running on Linux.
In 2012, Chumby ceased operation and was liquidated, with the assets being purchased by Duane Maxwell, the former Chief Technology Officer of Chumby Industries, who formed Blue Octy, LLC. The servers needed to keep the devices running were kept online as a full service by Blue Octy, LLC until March 2013. At that point, the servers went offline and all devices displayed only a single widget, referred to as the "Space Clock". Blue Octy, LLC relaunched the full Chumby service on July 1, 2014 as a paid subscription service, currently charging US$3 per month. An open-source firmware is available that gives existing devices some of the functionality of the paid service at no cost. Devices without a subscription still receive the Space Clock widget.
History
Andrew "bunnie" Huang was the lead hardware engineer at Chumby.
The Chumby premiered on August 25, 2006 at Foo Camp and was released to around 100 alpha release testers at the event.
Shortly after Foo Camp, Chumby announced a free offer, where applicants would receive the same alpha-level Chumby as those previously given away. Applicants submitted ideas for software applications or hardware modifications. One of the goals for the free offer was to have Chumbys in the hands of developers who were willing to begin building applications.
In July 2007, a First 50 was released to 50 random applicants, who received the next generation of Chumbys. This was followed, in September, with an Insiders Release. Interested parties could send e-mail to Chumby requesting release information, and were given the opportunity to join in the Insiders Release. Finally, in February 2008, the commercial release was made public on the Chumby Store. In May 2008, the price was $179.95 for any one of three colors, latte, basic black, and pearl. In Japan, Chumby was available through Zyyx, Inc. as www.chumby.jp since October 23, 2008. In Australia, the Chumby was available through ISP Internode.
In November 2009 the Chumby One was released: a similar, all-plastic version of the original in white with blue trim. The major difference was the hard plastic case replacing the soft leather. Other changes include a slightly faster processor, only one USB port on the rear of the device, and inclusion of an FM tuner and physical volume knob. The hard plastic case allowed Chumby Industries to offer the Chumby One at a reduced price of $119.95.
In April 2012, Chumby announced the cessation of hardware sales, having ceased manufacture of their own hardware the previous year and exhausted their inventory. On April 20 it was confirmed that the company itself was being broken up. Dedicated fans managed to keep the service running for a period following the company's demise, but on 20 February 2013 Chumby shut down its servers, leaving users with a simple clock that shows time, calendar, and date. A brief message appears on the Chumby Web site, explaining the suspension of service. Alternative open source firmware prepared for such an eventuality became available at this point.
Blue Octy subsequently began reviving the Chumby technology, with one of the original Chumby developers working on the project; details were posted at www.chumby.com.
Towards the end of March 2014, Blue Octy began beta testing the soon to be revived chumby service.
On July 1, 2014, Blue Octy relaunched the Chumby service as a sustainable, subscription-based platform.
In July 2017, Blue Octy and Chumby undertook an effort to rescue the Sony Dash after Sony discontinued support for it.
In August 2017, Blue Octy and Chumby released a patch for the Sony Dash HID-C10 models to allow them to connect to the Chumby servers, thus extending their useful life.
Features
Hardware
The Chumby is designed to be modified by users, with schematics, printed circuit board layouts and packaging/outerware designs available. Hardware specifications are as follows:
The Original Chumby
350 MHz ARM9-based Freescale i.MX21 controller
64 MB of SDRAM
64 MB of NAND flash ROM
320×240 3.5 inch touchscreen TFT LCD running at 12 frames per second
stereo 2-watt speakers, an audio output, an integrated microphone
two USB 2.0 ports
integrated Wi-Fi
a bend sensor for squeeze-based user interface features
motion sensor (accelerometer).
The Chumby One
Freescale iMX233 454 MHz ARM926EJ-S processor
64 MB DDR SDRAM
2 GB internal microSD card (capacity depends on production date)
320x240 3.5" TFT color touchscreen
2W mono speaker
Wi-Fi connectivity (802.11 b/g)
FM radio tuner
Uses rechargeable lithium ion battery (not included); about one hour on a full charge
4" wide x 4" tall x 3.5" deep
1 USB 2.0 high-speed port
Stereo headphone output
Volume knob
Accelerometer (motion sensor)
ABS plastic housing
AC adapter included
USB Ethernet compatible
Dimmable backlight
Comparison Table
Hacks
Hacking the Chumby hardware was encouraged by the manufacturer. Schematics and other hardware information may be downloaded after the user agrees to the Chumby HDK License. For example, users on the Chumby Forums have experimented with and documented some battery hacks, allowing the Chumby to be operated without AC power for short periods of time.
There also exists a Chumby Hacker Board that mostly resembles a Chumby One motherboard. There are some differences to hardware connectivity. Chumby Industries did not officially support the board.
Software
Chumby units run a modified Linux kernel. The software originally installed on the device was designed to play a set of user-customizable widgets, small Adobe Flash animations that deliver real-time information. This is possible because an embedded version of Adobe Flash Player is installed. The animations have the ability to control and interact with the low-level hardware, thereby enabling functionality such as smart alarm clocks that bring the hardware out of sleep, a Web-based picture viewer, a Web-based camera, online RSS feeds, and physical user interface features, such as gesture recognition by squeezing the soft housing.
The software for the Chumby automatically updated when something new became available. The updates came from the free access to the Chumby network, and a modified BitTorrent client was used to upgrade the open-source portions of its firmware.
Multimedia limitations
Although the prototypes did not support video playback, all versions since May 2007 use Flash Lite 3 which allows for Sorenson, FLV, H.264, VP6 and On2 video playback.
See also
Amazon Echo Show
JooJoo
Sony Dash
Personal Information Display
Notes
References
External links
www.chumby.com — official Chumby site
Chumby at WikiSpecs
Summary of the product from O'Reilly
Chumby Review at Broadcasting World
Firebird (database server)
Firebird is an open-source SQL relational database management system that "runs on Linux, Microsoft Windows, macOS and several Unix platforms". The database forked from Borland's open-source edition of InterBase in 2000, but the code has been largely rewritten since Firebird 1.5.
History
Within a week of the InterBase 6.0 source being released by Borland on 25 July 2000, the Firebird project was created on SourceForge. Firebird 1.0 was released for Linux, Microsoft Windows and Mac OS X on 11 March 2002, with ports to Solaris, FreeBSD 4 and HP-UX following over the next two months.
Work on porting the codebase from C to C++ began in 2000. On 23 February 2004, Firebird 1.5 was released, which was the first stable release of the new codebase. Version 1.5 featured an improved query optimizer, SQL-92 conditional expressions, SQL:1999 savepoints and support for explicit locking. Firebird 2.0 was released on 12 November 2006, adding support for 64-bit architectures, tables nested in FROM clauses, and programmable lock timeouts in blocking transactions.
The previous stable release was version 2.1.6, which added new features including procedural triggers, recursive queries, and support for SQL:2003 MERGE statements.
Firebird 2.5 introduced new features like improved multithreading, regular expression syntax and the ability to query remote databases.
The most recent stable version is Firebird 3.0, released 19 April 2016, with a focus on performance and security. A major re-architecture of the code allowed full support for SMP machines when using the SuperServer version.
Through the Google Summer of Code 2013, work began on integrating Firebird as a replacement for HSQLDB in LibreOffice Base.
Mozilla Firefox name conflict
In April 2003, the Mozilla Organization announced a rename of its web browser from Phoenix to Firebird after a trademark dispute with Phoenix Technologies.
This decision caused concern within the Firebird database project due to the assumption that users and Internet search engines would be confused by a database and a web browser both using the name Firebird. The Mozilla developers issued a statement, making clear that their software package was called "Mozilla Firebird", not "Firebird". The statement also said that the Mozilla Firebird name was a project codename.
The dispute was resolved on 9 February 2004, when Mozilla changed the name of its browser to Mozilla Firefox, thus ending the conflict.
Main features
Full support for stored procedures and triggers
Full ACID compliant transactions
Referential integrity
Multi Generational Architecture (sometimes called MVCC)
Support for External Functions (UDFs)
SQL activity can send asynchronous notification events to clients
Third-party tools, including GUI administrative tools and replication tools
Careful writes - fast recovery, no need for transaction logs
Many access methods: native/API, dbExpress/FireDAC drivers, ODBC, OLE DB, .NET provider, JDBC native type 4 driver, Python module, PHP, Perl
Incremental backups
Full cursor implementation in PSQL
Storage and index technology
The Multi-Generational Architecture (MGA)
Firebird inherited the storage architecture of InterBase. To ensure the ACID properties of transactions, the database engine keeps different versions of each record changed by active users in the database. When a transaction is committed, the last version of every changed record is marked as definitive. If a transaction is rolled back, the database engine keeps the mark on the original record versions, leaving them untouched. As a result, Firebird's disk writes are greatly reduced compared to databases that use a traditional transaction-log architecture. Writing transactions do not prevent reading and vice versa, because each transaction sees its own version of the database. The tradeoff is that some maintenance ("sweeping") is required from time to time to clean up old record versions and free disk space.
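The behaviour described above can be sketched with a toy version chain; this is a conceptual illustration only, not Firebird's actual on-disk format:

```python
class VersionedRecord:
    """Toy multi-generational record: updates append new versions rather
    than overwriting, and rollback discards a transaction's versions,
    leaving the earlier ones untouched."""

    def __init__(self, value):
        self.versions = [(None, value)]   # list of (txn_id, value)

    def update(self, txn_id, value):
        self.versions.append((txn_id, value))

    def rollback(self, txn_id):
        self.versions = [v for v in self.versions if v[0] != txn_id]

    def current(self):
        return self.versions[-1][1]

rec = VersionedRecord("original")
rec.update(txn_id=1, value="changed")
rec.rollback(txn_id=1)     # rollback leaves the original version untouched
print(rec.current())       # original
rec.update(txn_id=2, value="committed")
print(rec.current())       # committed
```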
The multi-generational architecture ensures that OLTP and DSS/OLAP operations can be run simultaneously without the delays caused by locking mechanisms found in other products.
Indexes
Firebird makes all indices of the database behave like well-tuned "clustered indexes" used by other architectures. Firebird index buckets aren't subject to two-phase locking, and boolean "and" and "or" operations can be performed on intermediate bitmaps at a negligible cost, eliminating the need for the optimizer to choose between alternative indexes.
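The bitmap combination referred to above is cheap because it is plain bitwise arithmetic over sets of row ids; a minimal illustration (the bit patterns are invented):

```python
# Each index scan produces a bitmap of matching row ids:
matches_a = 0b10110   # rows satisfying predicate A
matches_b = 0b01110   # rows satisfying predicate B

both   = matches_a & matches_b   # rows satisfying A AND B
either = matches_a | matches_b   # rows satisfying A OR B

print(bin(both))     # 0b110
print(bin(either))   # 0b11110
```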
Variants
Firebird SuperServer has a single daemon/server for all client connections, multithreaded with shared cache
Firebird SuperClassic also has a single daemon/server for all client connections, multithreaded with separate caches
Firebird Classic uses inetd to run one copy of the server per client connection, recommended for SMP systems but might have event-notification issues if access is via a firewall
Firebird Embedded for creating CD-ROM catalogs, single user or evaluation versions of applications
Licensing
The Firebird database engine and its modules are released under an open-source license, the Initial Developer's Public License (IDPL), a variant of the Mozilla Public License (MPL) version 1.1. It does not require developers to open-source the products that use Firebird, or even custom derivatives made from its source code, but if a developer chooses to do so, some terms and conditions must be honored. The IDPL allows developers to make proprietary, closed-source applications that use Firebird or are based on it.
Connectivity APIs
Low-level Firebird Native API, Services API and embedded SQL
The Firebird native API is used directly or indirectly by applications or middleware that connect to a Firebird database. It is implemented in the client library, fbclient.dll, on Windows systems, and in libfbclient.so on Unix ones.
The Services API is a special function set for accessing and controlling service administration tasks such as user management, backup/restore and statistics gathering.
Embedded SQL is a technique that simplifies the development of C/C++ and COBOL Firebird applications, by using a preprocessor called gpre, which allows the embedding of SQL statements directly into the source code of the host language.
Awards
2007. SourceForge Community Choice Award: Best Project for enterprise, Best user support.
2009. SourceForge Community Choice Award: Best Project for enterprise. Finalist on Best Project and Best Project for Government.
See also
Comparison of relational database management systems
List of relational database management systems
Multiversion concurrency control
SQL compliance
References
External links
Firebird - Developer portal
Firebird collations, charts.
Firebird documentation by IBProvider
FirebirdFAQ
FirebirdNews - News about Firebird and related projects
Migration Guide to Firebird 3 - eBook
Firebird Ole Db Driver (alternative for ODBC)
JayBird – JDBC driver for Firebird
OS9
OS9, OS-9, or OS 9 may refer to:
Mac OS 9, an operating system for the Apple Macintosh
iOS 9, the ninth version of the iOS operating system
OS-9, a Unix-like real time operating system
OS/9, an operating system for the UNIVAC
OS9 (gene), which encodes protein OS-9 in humans |
Test effort
In software development, test effort refers to the expenses for tests that are still to come. It is related to test costs and failure costs (direct costs, indirect costs, and the costs of fault correction). Some factors which influence test effort are: maturity of the software development process, quality and testability of the test object, test infrastructure, skills of staff members, quality goals and test strategy.
Methods for estimation of the test effort
Analysing all factors is difficult, because most of the factors influence each other. The following approaches can be used for the estimation: top-down estimation and bottom-up estimation. Top-down techniques are formula-based and relative to the expenses for development: Function Point Analysis (FPA) and Test Point Analysis (TPA), among others. Bottom-up techniques are based on detailed information and often involve experts. The following techniques belong here: Work Breakdown Structure (WBS) and Wide Band Delphi (WBD).
We can also use the following techniques for estimating the test effort:
Conversion of software size into person-hours of effort directly, using a conversion factor. For example, assigning 2 person-hours of testing effort per Function Point of software size, 4 person-hours per use case point, or 3 person-hours per Software Size Unit
Conversion of software size into a testing project size, such as Test Points or Software Test Units, using a conversion factor, and then converting the testing project size into effort
Computing testing project size in Test Points or Software Test Units. The methodology for deriving testing project size in Test Points is not well documented; however, a methodology for deriving Software Test Units is defined in a paper by Murali
We can also derive software testing project size and effort using the Delphi technique or analogy-based estimation.
Test efforts from literature
In the literature, test efforts relative to total costs range between 20% and 70%. These values depend, among other things, on project-specific conditions. Looking at the effort in the individual phases of the test process, it is distributed unevenly: about 40% each for test specification and test execution.
References
Andreas Spillner, Tilo Linz, Hans Schäfer. (2006). Software Testing Foundations - A Study Guide for the Certified Tester Exam - Foundation Level - ISTQB compliant, 1st print. dpunkt.verlag GmbH, Heidelberg, Germany. .
Erik van Veenendaal (Hrsg. und Mitautor): The Testing Practitioner. 3. Auflage. UTN Publishers, CN Den Bosch, Niederlande 2005, .
Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham, Klaus Olsen, Maaret Pyhäjärvi, Geoff Thompson and Erik van Veendendal. (2005). Certified Tester - Foundation Level Syllabus - Version 2005, International Software Testing Qualifications Board (ISTQB), Möhrendorf, Germany. (PDF; 0,424 MB).
Andreas Spillner, Tilo Linz, Thomas Roßner, Mario Winter: Praxiswissen Softwaretest - Testmanagement: Aus- und Weiterbildung zum Certified Tester: Advanced Level nach ISTQB-Standard. 1. Auflage. dpunkt.verlag GmbH, Heidelberg 2006, .
External links
Wide Band Delphi
Test Effort Estimation
Timeline of United States inventions (1946–1991)
A timeline of United States inventions (1946–1991) encompasses the ingenuity and innovative advancements of the United States within a historical context, dating from the era of the Cold War, which have been achieved by inventors who are either native-born or naturalized citizens of the United States. Patent protection secures a person's right to his or her first-to-invent claim of the original invention in question, highlighted in Article I, Section 8, Clause 8 of the United States Constitution, which gives the following enumerated power to the United States Congress:
In 1641, the first patent in North America was issued to Samuel Winslow by the General Court of Massachusetts for a new method of making salt. On April 10, 1790, President George Washington signed the Patent Act of 1790 (1 Stat. 109) into law which proclaimed that patents were to be authorized for "any useful art, manufacture, engine, machine, or device, or any improvement therein not before known or used." On July 31, 1790, Samuel Hopkins of Pittsford, Vermont became the first person in the United States to file and to be granted a patent for an improved method of "Making Pot and Pearl Ashes." The Patent Act of 1836 (Ch. 357, 5 Stat. 117) further clarified United States patent law to the extent of establishing a patent office where patent applications are filed, processed, and granted, contingent upon the language and scope of the claimant's invention, for a patent term of 14 years with an extension of up to an additional 7 years. However, the Uruguay Round Agreements Act of 1994 (URAA) changed the patent term in the United States to a total of 20 years, effective for patent applications filed on or after June 8, 1995, thus bringing United States patent law further into conformity with international patent law. The modern-day provisions of the law applied to inventions are laid out in Title 35 of the United States Code (Ch. 950, sec. 1, 66 Stat. 792).
From 1836 to 2011, the United States Patent and Trademark Office (USPTO) granted a total of 7,861,317 patents relating to several well-known inventions appearing throughout the timeline below. Some examples of patented inventions between the years 1946 and 1991 include William Shockley's transistor (1947), John Blankenbaker's personal computer (1971), Vinton Cerf's and Robert Kahn's Internet protocol/TCP (1973), and Martin Cooper's mobile phone (1973).
Cold War (1946–1991)
Post-war and the late 1940s (1946–1949)
1946 Space observatory
A space observatory is any instrument, such as a telescope, in outer space which is used for observation of distant planets, galaxies, and other outer space objects. In 1946, American theoretical astrophysicist Lyman Spitzer proposed the idea of a telescope in outer space, a decade before the Soviet Union launched the first artificial satellite, Sputnik, into orbit. However, German scientist Hermann Oberth had first conceived the idea of a space-based telescope. Spitzer's proposal called for a large telescope that would not be hindered by Earth's atmosphere. After lobbying in the 1960s and 1970s for such a system to be built, Spitzer's vision ultimately materialized into the world's first space-based optical telescope, the Hubble Space Telescope, which was launched on April 24, 1990 by the Space Shuttle Discovery (STS-31).
1946 Blowout preventer (annular)
An annular blowout preventer is a large valve that uses a wedge to seal off a wellhead. It has a donut-like rubber seal, known as an elastomeric packing unit, reinforced with steel ribs. During drilling or well interventions, the valve may be closed if overpressure from an underground zone causes formation fluids such as oil or natural gas to enter the wellbore and threaten the rig. The annular blowout preventer was invented by Granville Sloan Knox in 1946 who received a patent on September 9, 1952.
1946 Tupperware
Tupperware is a range of airtight plastic containers used for the preparation, storage, containment, and serving of perishable food in the kitchen and home. Tupperware was invented in 1946 by American chemist Earl Silas Tupper, who devised a method of purifying black polyethylene slag, a waste product of oil refining, into a molded substance that was flexible, tough, non-porous, non-greasy and translucent. Available in many colors, the plastic containers with their "burp seal" did not become a commercial success until Brownie Wise, a Florida housewife, began throwing Tupperware parties in 1951 in order to demonstrate the product and explain its features.
1946 Spoonplug
A spoonplug is a form of fishing lure. The spoonplug was invented by Elwood L. "Buck" Perry, then a physics and math teacher in Hickory, North Carolina. Elwood Perry combined science with a logical approach to fishing to create a "total fishing system." He is credited as being the father of structure fishing and was later inducted into the National Freshwater Fishing Hall of Fame.
1946 Chipper teeth
Chipper teeth are a variant of the saw chain used on a chainsaw. Using a tooth that is curled over the top of the chain, alternate teeth point left and right. In 1946, American logger Joseph Buford Cox of Portland, Oregon invented the chipper tooth, which is still widely used today and represents one of the biggest influences in the history of timber harvesting.
1946 Filament tape
Filament tape or strapping tape is a pressure-sensitive tape used for several packaging functions such as closing corrugated fiberboard boxes, reinforcing packages, bundling items, pallet unitizing, etc. It consists of a pressure-sensitive adhesive coated onto a backing material, usually a polypropylene or polyester film, with embedded fiberglass filaments that add high tensile strength. Filament tape was invented in 1946 by Cyrus Woodrow Bemmels. In 1949, it was placed on the market and was an immediate success.
1946 Credit card
A credit card is part of a system of payments named after the small plastic card issued to users of the system. The issuer of the card grants a line of credit to the consumer from which the user can borrow money for payment to a merchant or as a cash advance to the user. In 1946, American banker John C. Biggins of the Flatbush National Bank of Brooklyn invented the first bank-issued credit card.
1946 Diaper (waterproof)
A diaper or nappy is an absorbent garment for incontinent people. The dampless or waterproof diaper was invented in 1946 when Marion Donovan used a shower curtain from her bathroom to create the "Boater", the first re-usable and leak-proof diaper that contained plastic-lined cloth. Donovan's other innovation was replacing safety pins with plastic snaps on the sides of diapers. First sold in 1949 at Saks Fifth Avenue's flagship store in New York City, the design earned Donovan patents in 1951; she later sold the rights to the waterproof diaper for $1 million.
1947 Transistor
In electronics, a transistor is a semiconductor device commonly used to amplify or switch electronic signals. Because the controlled output power can be much larger than the controlling input power, the transistor provides amplification of a signal. The transistor is the fundamental building block of all modern electronic devices, and is used in radio, telephone, computer, and other electronic systems. From November 17, 1947 to December 23, 1947, John Bardeen and Walter Brattain at AT&T Bell Labs carried out experiments and observed that when two gold point contacts were applied to a crystal of germanium, a signal was produced whereby the output power was larger than the input; their device became the first point-contact transistor. The manager of the Bell Labs semiconductor research group, William Shockley, saw the potential in this and over the next few months greatly expanded the knowledge of semiconductors, going on to invent the junction transistor; he is considered by many to be the "father" of the transistor. Hence, the transistor is widely, though not universally, acknowledged as the most important invention of the entire 20th century, since it forms the building block of the processors found in almost every modern computing and electronics device. In recognition of their invention of the transistor, Shockley, Bardeen and Brattain were jointly awarded the 1956 Nobel Prize in Physics.
1947 Defibrillator
Defibrillation is the definitive treatment for the life-threatening cardiac arrhythmias, ventricular fibrillation and ventricular tachycardia. Defibrillation consists of delivering a therapeutic dose of electrical energy to the affected heart. Dr. Claude Beck invented the defibrillator in 1947.
1947 Supersonic aircraft
In aerodynamics, the sound barrier usually refers to the point at which an aircraft moves from transonic to supersonic speed. On October 14, 1947, just under a month after the United States Air Force had been created as a separate service, tests culminated in the first manned supersonic flight where the sound barrier was broken, piloted by Air Force Captain Chuck Yeager in the Bell X-1.
1947 Acrylic paint
Acrylic paint is fast-drying paint containing pigment suspended in an acrylic polymer emulsion. The first acrylic paint was invented by Leonard Bocour and Sam Golden in 1947 under the brand Magna paint.
1947 Magnetic particle clutch
A magnetic particle clutch is a special type of electromagnetic clutch which does not use friction plates. Instead, it uses a fine powder of magnetically susceptible material (typically stainless steel) to mechanically link an otherwise free wheeling disc attached to one shaft, to a rotor attached to the other shaft. The magnetic particle clutch was invented in 1947 by Ukrainian-American Jacob Rabinow.
1947 Instant camera
In 1947, Edwin H. Land invented the instant camera, with self-developing combined film and print that produced photographic images in 60 seconds. A color model followed in the 1960s, and Land eventually received more than 500 patents for his innovations in light and plastic technologies.
1948 Windsurfing
Windsurfing, or sailboarding, is a surface water sport using a windsurf board, also commonly called a sailboard, usually two to five meters long and powered by wind pushing a sail. In 1948, 20-year-old Newman Darby was the first to conceive the idea of using a handheld sail and rig mounted on a universal joint so that he could control his small catamaran—the first rudderless sailboard ever built that allowed a person to steer by shifting his or her weight in order to tilt the sail fore and aft. Darby did not file for a patent for his invention. However, he is widely recognized as the inventor of the first sailboard.
1948 Hair spray
Hair spray is a cosmetic aqueous solution used to keep hair stiff or hold it in a certain style. Weaker than hair gel, hair wax, or glue, it is sprayed to hold styles for a long period. Using a pump or aerosol spray nozzle, it sprays evenly over the hair. Hair spray was first invented and manufactured in 1948 by Chase Products Company, based in Broadview, Illinois.
1948 Cat litter
Cat litter is one of any of a number of materials used in litter boxes to absorb moisture from cat feces and urine, which reduces foul odors such as ammonia and renders them more tolerable within the home. The first commercially available cat litter was Kitty Litter, available in 1948 and invented by Ed Lowe.
1948 Halligan bar
A Halligan bar is a special forcible entry tool commonly used by firefighters and law enforcement. It was designed by and named after Hugh Halligan, a First Deputy Fire Chief in the New York City Fire Department, in 1948. While the tool was developed by a Deputy Chief of the New York City Fire Department, the department did not initially purchase it because of a perceived conflict of interest in buying from a member of the department.
1948 Hand dryer
A hand dryer is an electric device found in public restrooms and used to dry hands. It may operate with a button or, more recently, automatically using an infrared sensor. The hand dryer was invented in 1948 by George Clemens.
1948 Rogallo wing
The Rogallo wing is a flexible type of airfoil composed of two partial conic surfaces with both cones pointing forward. Though neither a kite, a glider, nor a type of aircraft in itself, the Rogallo wing is most often seen in toy kites, but has been used to construct spacecraft parachutes during preliminary testing for NASA's Gemini program in the early 1960s, dirigible parachutes, ultralight powered aircraft like the trike, as well as hang gliders. Before the end of 1948, American aeronautical engineer Francis Rogallo had succeeded in inventing the first fully successful flexible-wing kite, which he called the 'Flexi-Kite'. A patent was applied for in 1948 and granted in 1951. His wife, Gertrude Rogallo, also made a significant impact upon the invention, having sewed the fabric into the required dimensions using household items like kitchen curtains. Rogallo believed that flexible wings provided more stability than fixed surfaces, leading to an elimination of rigid spars during flight. Because of this, Rogallo's concepts are seen as classic examples of purity and efficiency in aviation.
1948 Cable television
Cable television provides television to consumers via radio frequency signals transmitted to televisions through fixed optical fibers or coaxial cables as opposed to the over-the-air method used in traditional television broadcasting. First known as Community Antenna Television or CATV, cable television was born in the mountains of Pennsylvania in 1948 by John Walson and Margaret Walson.
1948 Flying disc
Flying discs are disc-shaped objects thrown and caught for recreation, which are generally plastic and roughly 20 to 25 centimeters (8–10 inches) in diameter, with a lip. The shape of the disc, an airfoil in cross-section, allows it to fly by generating lift as it moves through the air while rotating. First known as the "Whirlo-Way", the flying disc was invented in 1949 by Walter Frederick Morrison who combined his fascination with invention and his interest in flight. Carved from a solid block of a plastic compound known as "Tenite," Morrison sold his flying disc invention to WHAM–O, which introduced it in 1957 as the "Pluto Platter." In 1958, WHAM–O modified the "Pluto Platter" and rebranded it as a Frisbee flying disc to the world. It became an instant sensation.
1948 Video game
A video game is an electronic game that involves interaction with a user interface to generate visual feedback on a video device. In 1948, ten years before William Higinbotham's Tennis for Two was developed, Thomas T. Goldsmith Jr. and Estle R. Mann co-patented the "Cathode-Ray Tube Amusement Device," making it the earliest documented video game. Primitive by modern video gaming standards, the amusement device required players to overlay pictures or illustrations of targets, such as airplanes, in front of the screen to represent the game's action.
1949 Radiocarbon dating
Radiocarbon dating is a dating method that uses the naturally occurring radioisotope carbon-14 (14C) to determine the age of carbonaceous materials up to about 60,000 years. In 1949, Willard F. Libby invented the procedure for carbon-14 dating.
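The decay arithmetic behind the method can be sketched in a few lines. The function below is an illustrative simplification: it ignores calibration against tree-ring records and assumes the modern 5,730-year half-life rather than Libby's original 5,568-year figure.

```python
import math

def radiocarbon_age(fraction_remaining, half_life=5730.0):
    """Estimate a sample's age in years from the fraction of C-14 remaining.

    Solves the exponential decay law N/N0 = 2^(-t / half_life) for t.
    """
    if not 0 < fraction_remaining <= 1:
        raise ValueError("fraction must be in (0, 1]")
    return -half_life * math.log2(fraction_remaining)

# A sample retaining half its original C-14 is exactly one half-life old.
print(round(radiocarbon_age(0.5)))   # 5730
print(round(radiocarbon_age(0.25)))  # 11460
```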
1949 Airsickness bag
An airsickness bag, also known as a barf bag, airsick bag, sick bag, or motion sickness bag, is a small bag commonly provided to passengers on board airplanes and boats to collect and contain vomit in the event of motion sickness. The airsickness bag was invented by Gilmore Schjeldahl in 1949 for Northwest Orient Airlines.
1949 Ice resurfacer
An ice resurfacer is a truck-like vehicle used to clean and smooth the surface of an ice rink. Frank J. Zamboni of Paramount, California invented the first ice resurfacer, which he called a Zamboni, in 1949.
1949 Atomic clock
An atomic clock uses an atomic resonance frequency standard as its timekeeping element. The first atomic clock was an ammonia maser device built in 1949 at the United States National Bureau of Standards.
1949 Holter monitor
A Holter monitor is a portable device for continuously monitoring the electrical activity of the heart for 24 hours or more. Sticky patches (electrodes) on the chest are connected to wires from the Holter monitor. The functions of a Holter monitor captures and records information such as heart rates during day and night, abnormal heart beats, and normal and abnormal heart rhythms. The Holter monitor was invented by Norman Holter.
1949 Crash test dummy
A crash test dummy is a full-scale anthropomorphic test device that simulates the dimensions, weight proportions and articulation of the human body, and is usually instrumented to record data about the dynamic behavior of the ATD in simulated vehicle impacts. Using human and animal cadaver research from earlier studies, the first artificial crash test dummy was an anthropomorphic dummy named "Sierra Sam". It was invented in 1949 by Samuel W. Alderson at his Alderson Research Labs (ARL) and Sierra Engineering Co. for the United States Air Force while conducting tests on aircraft ejection seats, pilot restraint harnesses, and aviation helmets. Alderson's early dummies and those of his competitors were fairly primitive, with no pelvic structure and little spinal articulation. With American automakers interested in durable crash test dummies that could be tested and retested while yielding back a broad spectrum of data during simulated automobile crashes, the first crash test dummy used for automotive testing was again invented by Samuel Alderson in 1968. It was called the V.I.P. (Very Important Person) and it was built with dimensions of an average adult man coupled with a steel rib cage, articulated joints, a flexible neck, and a lumbar spine.
1949 Compiler
A compiler is a computer program or set of programs that transforms source code written in a computerized source language into another computer language, often having a binary form known as object code. The most common reason for wanting to transform source code is to create an executable program. The first compiler, written for the A-0 programming language, is attributed to its inventor, Grace Hopper, in 1949.
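As an illustration of what a compiler does, rather than of Hopper's A-0 specifically, the toy sketch below translates an arithmetic expression into instructions for a simple stack machine and then executes them:

```python
# Toy illustration only: a tiny "compiler" for expressions like "3 + 4 * 2"
# (no relation to A-0's actual design, which translated symbolic subroutine calls).
import ast

def compile_expr(source):
    """Compile an arithmetic expression to stack-machine instructions."""
    ops = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL", ast.Div: "DIV"}
    code = []
    def emit(node):
        if isinstance(node, ast.Constant):
            code.append(("PUSH", node.value))
        elif isinstance(node, ast.BinOp):
            emit(node.left)
            emit(node.right)
            code.append((ops[type(node.op)], None))
        else:
            raise SyntaxError("unsupported construct")
        return code
    return emit(ast.parse(source, mode="eval").body)

def run(code):
    """Execute the compiled instructions on a simple operand stack."""
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append({"ADD": a + b, "SUB": a - b,
                          "MUL": a * b, "DIV": a / b}[op])
    return stack.pop()

print(run(compile_expr("3 + 4 * 2")))  # 11
```

Note the separation of phases: translation happens once, producing a reusable instruction list, and execution is a separate step, which is the essential distinction between compiling and interpreting.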
1949 Aerosol paint
Aerosol paint, also called spray paint, is a type of paint that comes in a sealed pressurized container and is released in a fine spray mist when depressing a valve button. A form of spray painting, aerosol paint leaves a smooth, evenly coated surface, unlike many rolled or brushed paints. In 1949, Ed Seymour of Sycamore, Illinois invented aerosol paint, which he based on the same principle as spray deodorizers and insecticides. The conveyance featured a small can of paint packaged with an aerosol propellant and fitted with a spray head.
1950s
1950 Artificial snowmaking
Snowmaking is the artificial production of snow by forcing water and pressurized air through a "snow gun" or "snow cannon", on ski slopes. Snowmaking is mainly used at ski resorts to supplement natural snow. This allows ski resorts to improve the reliability of their snow cover and to extend their ski seasons. The costly production of snowmaking requires low temperatures. The threshold temperature for snowmaking decreases as humidity decreases. Machine-made snow was first co-invented by three engineers—Art Hunt, Dave Richey and Wayne Pierce of Milford, Connecticut on March 14, 1950. Their patented invention of the first "snow cannon" used a garden hose, a 10-horsepower compressor, and a spray-gun nozzle, which produced about 20 inches of snow.
1950 Hamming code
In telecommunication, a Hamming code is a linear error-correcting code. Hamming codes can detect up to two simultaneous bit errors, and correct single-bit errors; thus, reliable communication is possible when the Hamming distance between the transmitted and received bit patterns is less than or equal to one. By contrast, the simple parity code cannot correct errors, and can only detect an odd number of errors. Hamming codes are of fundamental importance in coding theory and remain of practical use in modern computer design. Hamming codes were invented in 1950 by Richard Hamming at Bell Labs.
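The classic Hamming(7,4) scheme makes the idea concrete: three parity bits protect four data bits, and the syndrome computed at the receiver points directly at any single flipped bit. The following is a minimal illustrative sketch:

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword.

    Parity bits p1, p2, p3 sit at positions 1, 2, 4 (1-indexed);
    each covers the codeword positions whose index has that bit set.
    """
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct at most one flipped bit and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-indexed error position, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[2] ^= 1                          # flip one bit "in transit"
assert hamming74_decode(code) == word  # the receiver recovers the data
```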
1950 Teleprompter
A teleprompter is a display device that prompts the person speaking with an electronic visual text of a speech or script. Using a teleprompter is similar to the practice of using cue cards. The screen is in front of and usually below the lens of the camera, and the words on the screen are reflected to the eyes of the performer using a sheet of clear glass or specially prepared beam splitter. The teleprompter was invented in 1950 by Hubert Schlafly, who was working at 20th Century Fox film studios in Los Angeles.
1950 Sengstaken-Blakemore tube
A Sengstaken-Blakemore tube is an oro or nasogastric tube used occasionally in the management of upper gastrointestinal hemorrhage due to bleeding from esophageal varices which are distended veins in the esophageal wall, usually as a result of cirrhosis. It consists of a gastric balloon, an esophageal balloon, and a gastric suction port. The Sengstaken-Blakemore tube was invented by Dr. Robert W. Sengstaken and Dr. Arthur H. Blakemore in 1950.
1951 Stellarator
A stellarator is a device used to confine a hot plasma with magnetic fields in order to sustain a controlled nuclear fusion reaction. It is the earliest controlled fusion device. In 1951, American astrophysicist Lyman Spitzer recommended that the United States Atomic Energy Commission commence containing and harnessing nuclear fusion of hydrogen at temperatures exceeding those at the Sun's surface. To do this, Spitzer invented a plasma confinement configuration device called the stellarator.
1951 Cooler
A cool box, cooler, portable ice chest, chilly bin, or esky most commonly is an insulated box used to keep perishable food or beverages cool. Ice cubes, which are very cold, are most commonly placed in it to make the things inside stay cool. Ice packs are sometimes used, as they either contain the melting water inside, or have a gel sealed inside that also stays cold longer than plain water. The cooler was invented in 1951 by Richard C. Laramy of Joliet, Illinois. Laramy filed a patent for the cooler on February 24, 1951, and was issued U.S. patent #2,663,157 on December 22, 1953.
1951 Wetsuit
A wetsuit is a garment, usually made of foamed neoprene, which is worn by divers, windsurfers, canoeists, and others engaged in water sports, providing thermal insulation, abrasion resistance and buoyancy. The insulation properties depend on bubbles of gas enclosed within the material, which reduce its ability to conduct heat. The bubbles also give the wetsuit a low density, providing buoyancy in water. The wetsuit was invented in 1951 by the University of California at Berkeley physicist Hugh Bradner.
1951 Correction fluid
Correction fluid is an opaque, white fluid applied to paper to mask errors in text. It was very important when material was typed with a typewriter, but has become less so since the advent of the word processor. Correction fluid was invented by Bette Nesmith Graham in 1951. Originally called by the brand name "Mistake Out", Graham began selling correction fluid in 1956.
1951 Well counter
A well counter is a device used for measuring radioactivity in small samples. It usually employs a sodium iodide crystal detector. It was invented in 1951 by American electrical engineer and biophysicist Hal Anger. Anger filed U.S. patent #2,779,876 on March 3, 1953 for his "Radio-Activity Distribution Detector" which was later issued on January 29, 1957.
1952 Airbag
An air bag is a safety feature designed to protect automobile passengers in a head-on collision. Most cars today have driver's side airbags and many have one on the passenger side as well. Located in the steering wheel assembly on the driver's side and in the dashboard on the passenger side, the air bag device responds within milliseconds of a crash. The original safety cushion was first created by John W. Hetrick in 1952. After a car accident that his family was involved in, Hetrick drew sketches of compressed air stored in a container. When a spring-loaded weight senses the car decelerating at a rapid enough rate, it opens a valve that allows the pressure in the container to fill a bag. With this knowledge, he developed his design until he was able to obtain a patent on the device on August 5, 1952. Later, in 1967, Dr. Allen S. Breed invented and developed a key component for automotive use, the ball-in-tube inertial sensor for crash detection. Breed Corporation then marketed this innovation to Chrysler.
1952 Bread clip
A bread clip is a device used to hold plastic bags, such as the ones pre-sliced bread is commonly packaged in, closed. They are also commonly called bread tags, bread tabs, bread ties, bread crimps, or bread-bag clips. By sealing a bag more securely than tying or folding over its open end, the clip or tie may preserve its contents longer. The bread clip was invented in 1952 by Floyd Paxton of Yakima, Washington. Paxton never patented the device.
1952 Barcode
A barcode is an optical machine-readable representation of data, which shows certain data on certain products. Originally, barcodes represented data in the widths (lines) and the spacings of parallel lines, and may be referred to as linear or one-dimensional barcodes or symbologies. They also come in patterns of squares, dots, hexagons and other geometric patterns within images termed two-dimensional matrix codes or symbologies. Norman Joseph Woodland is best known for inventing the barcode for which he received a patent in October 1952.
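Linear barcode symbologies typically pair the printed symbol with a simple arithmetic safeguard. As one concrete example (taken from the later UPC-A symbology of the 1970s, not from Woodland's 1952 patent), a check digit is computed over the first eleven digits so that scanners can reject misreads:

```python
def upc_check_digit(digits11):
    """Compute the UPC-A check digit for the first 11 digits.

    Odd positions (1st, 3rd, ...) are weighted 3, even positions 1;
    the check digit brings the weighted sum to a multiple of 10.
    """
    total = sum(d * (3 if i % 2 == 0 else 1) for i, d in enumerate(digits11))
    return (10 - total % 10) % 10

# 036000291452 is the famous first UPC ever scanned at a checkout
# (a pack of Wrigley's gum, 1974); the final 2 is its check digit.
assert upc_check_digit([0, 3, 6, 0, 0, 0, 2, 9, 1, 4, 5]) == 2
```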
1952 Artificial heart
An artificial heart is implanted into the body to replace the biological heart. On July 3, 1952, 41-year-old Henry Opitek suffering from shortness of breath made medical history at Harper University Hospital at Wayne State University in Michigan. The Dodrill-GMR heart, considered to be the first operational mechanical heart, was invented by Dr. Forest Dewey Dodrill and successfully inserted into Henry Opitek while performing open heart surgery. In 1981, Dr. Robert Jarvik implanted the world's first permanent artificial heart, the Jarvik 7, into Dr. Barney Clark. The heart, powered by an external compressor, kept Clark alive for 112 days. The Jarvik heart was not banned for permanent use. Since 1982, more than 350 people have received the Jarvik heart as a bridge to transplantation.
1953 Heart-lung machine
Dr. John Heysham Gibbon performed the first successful cardiopulmonary bypass surgery in which the blood was artificially circulated and oxygenated by using his invention, a pump known as the heart-lung machine. This new medical technology, which allowed the surgeon to operate on a dry and motionless heart by maintaining the circulation of blood and the oxygen content of the body, greatly increased surgical treatment options for heart defects and disease.
1953 Voltmeter (digital)
A voltmeter is an instrument used for measuring electrical potential difference between two points in an electric circuit. Analog voltmeters move a pointer across a scale in proportion to the voltage of the circuit; digital voltmeters give a numerical display of voltage by use of an analog-to-digital converter. The digital voltmeter was invented in 1953 by Andrew Kay, founder of Kaypro.
1953 Marker pen
A marker pen, marking pen, felt-tip pen, or marker, is a pen which has its own colored ink-source, and usually a tip made of a porous material, such as felt or nylon. Sidney Rosenthal, from Richmond Hill, New York, is credited with inventing the marker in 1953.
1953 WD-40
WD-40 is a widely available water-displacing spray that is useful in both home and commercial fields; lubricating and loosening joints and hinges, removing dirt and residue, and extricating stuck screws and bolts are common usages. The product also may be useful in displacing moisture, as this is its original purpose and design intent. WD-40 was invented in 1953 by Norm Larsen and two other employees at the Rocket Chemical Company in San Diego, California.
1953 Apgar scale
The Apgar scale is used to determine the physical status of an infant at birth. The Apgar scale is administered to a newborn at one minute after birth and five minutes after birth. It scores the baby's heart rate, respiration, muscle tone, reflex response, and color. This test quickly alerts medical personnel that the newborn needs assistance. This simple, easy-to-perform test was invented in 1953 by Dr. Virginia Apgar, a professor of anesthesia at the New York Columbia-Presbyterian Medical Center.
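The score itself is simple addition over the five components, each rated 0, 1, or 2, for a total of 0 to 10. The sketch below is a simplified illustration of that arithmetic, not a clinical tool; the component names follow the description above:

```python
def apgar_score(heart_rate, respiration, muscle_tone, reflex, color):
    """Sum the five Apgar components, each scored 0, 1, or 2."""
    components = (heart_rate, respiration, muscle_tone, reflex, color)
    if any(c not in (0, 1, 2) for c in components):
        raise ValueError("each component is scored 0, 1, or 2")
    return sum(components)

# A total of 7-10 is generally considered reassuring at one minute.
print(apgar_score(2, 2, 2, 1, 1))  # 8
```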
1953 Gilhoolie
A gilhoolie is a kitchen appliance that opens jars and bottles. It was invented by Dr. C. W. Fuller in 1953.
1953 Wheel clamp
A wheel clamp, also known as a Denver boot or wheel boot, is a device that is designed to prevent vehicles from moving. In its most common form, it consists of a clamp which surrounds a vehicle wheel, designed to prevent removal of both itself and the wheel. Wheel clamps are used in order to enforce laws against unauthorized or illegal parking, in lieu of towing the offending vehicle, and for security purposes such as a deterrent against stolen vehicles by thieves. Originally known as the auto immobilizer, the wheel clamp or Denver boot was invented in 1953 by Frank Marugg of Denver, Colorado. A patent was filed on May 7, 1955, and issued three years later on July 28, 1958.
1953 Wiffle ball
Wiffleball is a variation of the sport of baseball designed for indoor or outdoor play in confined areas. The game is played using a perforated, light-weight, hollow, rubbery plastic ball and a long, hollow plastic bat, typically yellow. The Wiffle ball was invented by David N. Mullany of Fairfield, Connecticut, in 1953 when he designed a ball that curved easily for his 12-year-old son. It was named when his son and his friends would refer to a strikeout as a "whiff".
1953 MASER
A maser is a device that produces coherent electromagnetic waves through amplification due to stimulated emission. Historically the term came from the acronym "Microwave Amplification by Stimulated Emission of Radiation". Charles H. Townes, James P. Gordon, and Herbert J. Zeiger built the first maser at Columbia University in 1953.
1953 Carbonless copy paper
Carbonless copy paper is an alternative to carbon paper, used to make a copy of an original, handwritten document without the use of any electronics. Carbonless copy paper was invented by chemists Lowell Schleicher and Barry Green, working for the NCR Corporation, as a biodegradable, stain-free alternative to carbon paper.
1953 Crossed-field amplifier
A crossed-field amplifier (CFA) is a specialized vacuum tube frequently used as a microwave amplifier in very-high-power transmitters. A CFA has lower gain and bandwidth than other microwave amplifier tubes, but it is more efficient and capable of much higher output power. William C. Brown is considered to have invented the first crossed-field amplifier in 1953 which he called an Amplitron.
1954 Zipper storage bag
A zipper storage bag is a plastic bag with a sealed or zipped opening that allows for transparent viewing of stored items inside the bag. Better known under the brand name and genericized trademark Ziploc, zipper storage bags are commonly used to hold perishable foods and snacks. Zipper storage bags were patented by Robert W. Vergobbi on May 18, 1954. However, they did not reach consumers until 1968, when Dow Chemical introduced the Ziploc bag.
1954 TV dinner
A TV dinner is a prepackaged, frozen or chilled meal generally in an individual package. It requires little preparation, oven baked or microwaveable, and contains all the elements for a single-serving meal in a tray with compartments for the food. Carl A. Swanson of C.A. Swanson & Sons is generally credited for inventing the TV dinner. Retired Swanson executive Gerry Thomas said he conceived the idea after the company found itself with a huge surplus of frozen turkeys because of poor Thanksgiving sales.
1954 Acoustic suspension loudspeaker
The acoustic suspension woofer is a type of loudspeaker that reduces bass distortion caused by non-linear, stiff mechanical suspensions in conventional loudspeakers. The acoustic suspension loudspeaker was invented in 1954 by Edgar Villchur, and brought to commercial production by Villchur and Henry Kloss with the founding of Acoustic Research in Cambridge, Massachusetts.
1954 Model rocketry
A model rocket is a small rocket that is commonly advertised as being able to be launched by anybody, to generally low altitudes, usually to around 300–1500 feet, and recovered by a variety of means. Popular among children and amateurs, model rocketry is considered a hobby. In 1954, licensed pyrotechnics expert Orville Carlisle along with his brother Robert, designed the first model rocket and model rocket motor.
1954 Door (automatic sliding)
Automatic sliding doors are opened and closed by power or a spring, typically triggered by a sensor. This eliminates the need for a person to open or close a door by turning a doorknob or pressing against a bar on the door itself. Automatic sliding doors are commonly found at the entrances and exits of supermarkets, department stores, and airport terminals. In 1954, Dee Horton and Lew Hewitt co-invented the automatic sliding door.
1954 Mogen clamp
The Mogen clamp is a surgical tool used to circumcise a human male's penis. The device is designed to remove the foreskin, while protecting the glans. The Mogen clamp was invented in 1954 by Rabbi Harry Bronstein, a Brooklyn, New York mohel. For many years it was used only in Jewish ritual circumcision in a ceremony called a bris. In more recent years, however, American physicians have been using the clamp more frequently in medical settings for newborn circumcision.
1954 Cardiopulmonary resuscitation
Cardiopulmonary resuscitation is an important life-saving first aid skill, practiced throughout the world. It is the only known effective method of keeping someone who has suffered cardiac arrest alive long enough for definitive treatment to be delivered. In 1954, James Elam was the first to demonstrate experimentally that cardiopulmonary resuscitation (CPR) was a sound technique, and together with Dr. Peter Safar he demonstrated its superiority to previous methods.
1954 Active noise control
Active noise control, also known as noise cancellation, is a method for reducing unwanted sound through the addition of a second sound specifically designed to cancel the first. Active noise cancelling headphones were invented by Lawrence J. Fogel, an aerospace engineer working to improve communication in helicopter cockpits, with a patent filed April 2, 1954. His research led to the first five patents in noise cancellation for headphones between 1954 and 1961.
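The cancellation principle described above can be sketched in a few lines of Python (an illustrative example, not drawn from any particular implementation): a waveform summed with its phase-inverted copy cancels exactly.

```python
import math

def sine(freq_hz, t):
    """A pure tone sampled at time t (seconds)."""
    return math.sin(2 * math.pi * freq_hz * t)

# 1000 samples of a 440 Hz "noise" tone at a 44.1 kHz sample rate
noise = [sine(440.0, n / 44100.0) for n in range(1000)]

# The "anti-noise" is the same waveform with inverted phase
anti_noise = [-s for s in noise]

# Superposing the two cancels the sound completely
residual = [a + b for a, b in zip(noise, anti_noise)]
peak = max(abs(r) for r in residual)  # 0.0: perfect cancellation
```

Real systems must estimate the incoming sound and emit the inverted signal with near-zero latency; any phase or amplitude error leaves residual noise.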
1954 Synthetic diamond
Synthetic diamonds are diamonds produced in a technological process as opposed to natural diamonds, which are created in geological processes. Synthetic diamonds are also widely known as HPHT diamonds or CVD diamonds, HPHT and CVD being the production methods, high-pressure high-temperature synthesis and chemical vapor deposition, respectively. Although the concept of producing high quality artificial diamonds is an old one, the reproducible synthesis of diamonds is not. In 1954, Howard Tracy Hall at the GE Research Laboratory invented the belt press, in which a doughnut-shaped belt confined the sample chamber while two curved, tapered pistons applied pressure to it, producing the first commercially successful and reproducible synthesis of diamond.
1954 Radar gun
A radar gun or speed gun is a small Doppler radar used to detect the speed of objects. It relies on the Doppler effect applied to a radar beam to measure the speed of the objects at which it is pointed. Radar guns may be hand-held or vehicle-mounted. Bryce K. Brown invented the radar gun in March 1954.
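The Doppler relationship behind such a gun is simple enough to state directly: for a beam reflected off a moving target, the round-trip frequency shift is approximately 2·v·f_transmit/c, so the speed follows from the measured shift. A minimal sketch (the transmit frequency below is just a typical X-band value, not a figure from the source):

```python
def doppler_speed(f_transmit_hz, f_shift_hz, c=299_792_458.0):
    """Target speed (m/s) from the round-trip Doppler shift of a radar beam.

    The reflected beam is shifted by roughly 2 * v * f_transmit / c,
    so v = f_shift * c / (2 * f_transmit).
    """
    return f_shift_hz * c / (2.0 * f_transmit_hz)

# A 10.525 GHz beam seeing a 2105 Hz shift implies a target moving
# at roughly 30 m/s (about 67 mph)
speed = doppler_speed(10.525e9, 2105.0)
```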
1955 Sling lift
A sling lift is an assistive device that allows patients in hospitals and nursing homes and those receiving home health care to be transferred between a bed and a chair or other similar resting places, using hydraulic power. Sling lifts are used for patients whose mobility is limited. The sling lift was patented on April 12, 1955 by Ronald R. Stratton in what he called a "floor crane with adjustable legs".
1955 Crosby-Kugler capsule
A Crosby-Kugler capsule is a device used for obtaining biopsies of small bowel mucosa, necessary for the diagnosis of various small bowel diseases. It was invented by Dr. William Holmes Crosby, Jr. in 1955.
1955 Nuclear submarine
The USS Nautilus, the world's first nuclear submarine, revolutionized naval warfare. Conventional submarines need two engines: a diesel engine to travel on the surface and an electric engine to travel submerged, where oxygen for a diesel engine is not available. By relying on nuclear power, the USS Nautilus could travel uninterrupted for thousands of miles below the surface on a single fuel charge. Admiral Hyman Rickover, who beginning in 1951 led and oversaw a group of scientists and engineers at the Naval Reactors Branch of the Atomic Energy Commission, can be credited with the design of the world's first nuclear submarine. After sea trials were conducted and testing was completed, the USS Nautilus became fully operational in January 1955.
1955 Hard disk drive
A hard disk drive, also known as a hard drive, hard disk, or fixed disk drive, is a non-volatile storage device that stores digitally encoded data on rapidly rotating platters with magnetic surfaces. The hard disk drive was invented by Reynold Johnson and commercially introduced in 1956 with the IBM 305 RAMAC computer.
1955 Harmonic drive
A harmonic drive is a special type of mechanical gear system that can improve certain characteristics compared to traditional gearing systems. The harmonic drive was invented in 1955 by Walton Musser. U.S. patent #2,906,143 was filed on March 21, 1955 and issued to Musser on September 29, 1959.
1955 Vibrating sample magnetometer
A vibrating sample magnetometer (VSM) is a scientific instrument that measures magnetic properties by vibrating the sample sinusoidally, typically through the use of a piezoelectric material. It was invented in 1955 by American physicist Simon Foner at the MIT Lincoln Laboratory in Cambridge, Massachusetts. Foner filed U.S. patent #2,946,948 on June 20, 1957. It was issued on July 26, 1960.
1956 Lint roller
A lint roller or lint remover is a roll of one-sided adhesive paper on a cardboard or plastic barrel that is mounted on a central spindle, with an attached handle. The device facilitates the removal of lint or other small fibers from most materials such as clothing, upholstery and linen. The lint roller was co-invented in 1956 by American electrical engineer Nicholas McKay and his wife Helen.
1956 Kart racing
Kart racing or karting is a variant of an open-wheel motor sport with simple, small four-wheeled vehicles called karts, go-karts, or gearbox karts depending on the design. Karts vary widely in speed and some can reach speeds exceeding 160 mph, while go-karts intended for the general public in amusement parks may be limited to speeds of no more than 15 mph. In the summer of 1956, hot rod veteran Art Ingels built the first go-kart out of old car frame tubing, welding beads, and a lawnmower motor, not realizing that he had invented a new sport and form of auto racing.
1956 Industrial robot
An industrial robot is an automatically controlled, re-programmable, multipurpose manipulator programmable in three or more axes. The first industrial robot was invented by George Devol and Joseph F. Engelberger.
1956 Operating system (batch processing)
An operating system (OS) is software (programs and data) that runs on computers, manages the computer hardware, and provides common services for the efficient execution of various application software. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between application programs and the computer hardware; application code is usually executed directly by the hardware but frequently calls the OS or is interrupted by it. Operating systems are found on almost any device that contains a computer, from cellular phones and video game consoles to supercomputers and web servers. The GM-NAA I/O, created by Owen Mock and Bob Patrick of General Motors Research Laboratories in early 1956 (or late 1955) for their IBM 701 mainframe computer, is generally considered to be the first "batch processing" operating system and possibly the first "real" operating system. Rudimentary forms of operating systems existed before batch processing, the Input/Output Control System (IOCS) being one example. What specifically made the GM-NAA I/O the first of its kind, however, was that instead of requiring a human operator to manually load each program, as earlier systems did, its software automatically handled the scheduling, sequencing, and management of all jobs run on the computer.
1956 Fortran
Fortran is a general-purpose, procedural, and imperative programming language that is especially suited to numeric computation and scientific computing. Fortran came to dominate this area of programming early on and has been in continual use for over half a century in computationally intensive areas such as numerical weather prediction, finite element analysis, computational fluid dynamics (CFD), computational physics, and computational chemistry. It is one of the most popular languages in the area of high-performance computing, and programs used to benchmark and rank the world's fastest supercomputers are written in Fortran. In 1956, John Backus and a team of researchers at IBM invented the Fortran programming language for the IBM 704 mainframe computer.
1956 Videotape
Videotape is a means of recording images and sound onto magnetic tape as opposed to movie film. The first practical professional videotape machines were the Quadruplex videotape machines introduced by Ampex on April 14, 1956. Invented by Charles Ginsburg and Ray Dolby, Quad employed a transverse four-head system on a two-inch (5.08 cm) tape, and linear heads for the soundtrack.
1956 Particle storage ring
A storage ring is a type of circular particle accelerator in which a continuous or pulsed particle beam may be kept circulating for a long period of time, up to many hours. Gerard K. O'Neill invented the first particle storage ring in 1956.
1957 Skid-steer loader
A skid loader or skid steer loader is a small rigid-frame, engine-powered machine with lift arms used to attach a wide variety of labor-saving tools or attachments. Though sometimes equipped with tracks, skid-steer loaders are typically four-wheel drive vehicles that can push material from one location to another, carry material in their buckets, or load material into a truck or trailer. Brothers Louis and Cyrill Keller co-invented the first skid-steer loader, which was based around a three-wheeled loader they developed in 1957 for a turkey farmer near Rothsay, Minnesota. In September 1958, they were hired by the Melroe brothers at Melroe Manufacturing Company in Gwinner, North Dakota, which was later to become Bobcat Company. Using the brothers' design, Melroe introduced the M60 Self-Propelled Loader and, in 1960, Louis added a rear drive axle, resulting in the M400 model, the world's first true skid-steer loader.
1957 Laser
A laser is a device that emits electromagnetic radiation through a process called stimulated emission. Laser light is usually spatially coherent, which means that the light either is emitted in a narrow, low-divergence beam, or can be converted into one with the help of optical components such as lenses. Lasers are used to read compact discs and bar codes, guide missiles, remove ulcers, fabricate steel, precisely measure the distance from Earth to the Moon, record ultradefined images of brain tissue, entertain people in light shows and do thousands of other things. In 1957, American physicist Gordon Gould first theorized the idea and use of laser technology. Despite a 20-year battle with the United States Patent and Trademark Office, Gould is now widely credited as the original inventor of the laser. In addition, Charles H. Townes and Arthur L. Schawlow, scientists at Bell Laboratories, wrote a 1958 paper, Infrared and Optical Masers, that was enormously influential on the theory of lasers. Ironically, none of Gould, Townes, or Schawlow built the first working laser. On July 7, 1960, American physicist Theodore H. Maiman created and built the first laser. The core of his laser consisted of a man-made ruby as the active medium, a material that had been judged unsuitable by other scientists who rejected crystal cores in favor of various gases.
1957 Confocal microscopy
Confocal microscopy is an optical imaging technique used to increase micrograph contrast and to reconstruct three-dimensional images by using a spatial pinhole to eliminate out-of-focus light or flare in specimens that are thicker than the focal plane. This technique has gained popularity in the scientific and industrial communities. Typical applications include life sciences and semiconductor inspection. The principle of confocal imaging was invented and patented by Marvin Minsky in 1957.
1957 Sugar packet
A sugar packet is a delivery method for one 'serving' of sugar. Sugar packets are commonly supplied in restaurants and coffee bars in preference to sugar bowls or sugar dispensers for reasons of neatness, spill control, and to some extent portion control. In 1957, the sugar packet containing a granulated low-calorie sugar substitute was invented by Benjamin Eisenstadt, the founder of Cumberland Packing, better known today as the Sweet 'N Low company.
1957 Air-bubble packing
Better known by the brand name of Bubble Wrap, air-bubble packing is a pliable transparent plastic material commonly used for the cushioning of fragile, breakable items in order to absorb or minimize shock and vibration. The regularly spaced, protruding air-filled hemispheres, known as "bubbles", range from 1/4 inch (6 millimeters) in diameter to as large as an inch (26 millimeters) or more. Air-bubble packing was co-invented by Alfred Fielding and Marc Chavannes in 1957.
1957 Borazon
Borazon, a boron nitride allotrope, is the fourth hardest substance, after aggregated diamond nanorods, ultrahard fullerite, and diamond, and the third hardest artificial material. Borazon is a crystal created by heating equal quantities of boron and nitrogen at temperatures greater than 1800 °C (3300 °F) and pressures of 7 gigapascals (about 1 million pounds-force per square inch). Borazon was invented in 1957 by Robert H. Wentorf, Jr., a physical chemist working for the General Electric Company. In 1969, General Electric adopted the name Borazon as its trademark for the crystal.
1957 Gamma camera
A gamma camera is a device used to image gamma radiation emitting radioisotopes, a technique known as scintigraphy. The applications of scintigraphy include early drug development and nuclear medical imaging, used to view and analyse the distribution within the human body of medically injected, inhaled, or ingested radionuclides that emit gamma rays. The gamma camera was invented by Hal Anger in 1957.
1957 Cryotron
The cryotron is a switch that operates using superconductivity. The cryotron works on the principle that magnetic fields destroy superconductivity. The cryotron was invented by Dudley Allen Buck in 1957.
1958 Doppler fetal monitor
A heartbeat doppler, also called a doppler fetal monitor or doppler fetal heartbeat monitor, is a handheld device which uses ultrasound to identify fetal heartbeat as part of the prenatal health care measures. The doppler fetal monitor was invented in 1958 by American obstetrician Dr. Edward H. Hon.
1958 Cable tie
A cable tie, also known as a zip tie or tie-wrap, is a type of fastener, especially for binding several electronic cables or wires together and to organize cables and wires. They have also been commonly used as makeshift handcuffs, particularly in the United States, the United Kingdom, and in Panama. The cable tie, originally known as the Ty-Rap, was invented in 1958 by Maurus C. Logan, who worked for many years at Thomas & Betts. Logan filed U.S. patent #3,022,557 on June 24, 1958 which was issued to him on February 27, 1962.
1958 Lisp programming language
Lisp is a family of computer programming languages with a long history and a distinctive, fully parenthesized syntax. Originally specified in 1958, Lisp is the second-oldest high-level programming language in widespread use today; only Fortran is older. It was invented by John McCarthy in 1958.
1958 Carbon fiber
Carbon fiber is a material consisting of extremely thin fibers about 0.005–0.010 mm in diameter and composed mostly of carbon atoms. In 1958, Dr. Roger Bacon invented the first high-performance carbon fibers at the Union Carbide Parma Technical Center, located outside of Cleveland, Ohio.
1958 Integrated circuit
An integrated circuit is a miniaturized electronic circuit that has been manufactured in the surface of a thin substrate of semiconductor material. Integrated circuits are used in almost all electronic equipment in use today and have revolutionized the world of electronics. The integration of large numbers of tiny transistors into a small chip was an enormous improvement over the manual assembly of circuits using discrete electronic components. On September 12, 1958, Jack Kilby demonstrated a working integrated circuit built on a piece of germanium, with an oscilloscope attached. When Kilby pressed a switch, the oscilloscope showed a continuous sine wave, proving that his integrated circuit worked. A patent for a "Solid Circuit made of Germanium", the first integrated circuit, was filed by its inventor, Jack Kilby, on February 6, 1959.
1958 Video game
The first video game, a simple tennis game known as Tennis for Two, was invented in 1958 by American physicist William Higinbotham.
1959 Fusor
The fusor is an apparatus invented by Philo T. Farnsworth in 1959 to create nuclear fusion. Unlike most controlled fusion systems, which slowly heat a magnetically confined plasma, the fusor injects "high temperature" ions directly into a reaction chamber, thereby avoiding a considerable amount of complexity. The approach is known as inertial electrostatic confinement.
1959 Weather satellite
A weather satellite is a type of satellite that is primarily used to monitor the weather and climate of the Earth. The first weather satellite, Vanguard 2, was launched on February 17, 1959, although the first weather satellite to be considered a success was TIROS-1, launched by NASA on April 1, 1960.
1959 Spandex
Spandex is a synthetic fiber known for its exceptional elasticity that is typically worn as apparel for exercising and in gymnastics. Spandex is stronger and more durable than rubber, its major non-synthetic competitor. Spandex was invented in 1959 by DuPont chemist Joseph Shivers.
1960s
1960 Child safety seat
A child safety seat (sometimes referred to as an infant safety seat, a child restraint system, a restraint car seat, or ambiguously as a car seat) is a seat designed specifically to protect children from injury or death during collisions. Child safety seats are commonly used by children when riding in a vehicle. In 1960, Leonard Rivkin of Denver, Colorado invented the first child safety seat for use in vehicles equipped with bucket seats. A patent was filed on March 5, 1962 and was issued on October 22, 1963.
1960 Artificial turf
Artificial turf, or synthetic turf, is a man-made surface made to look like natural grass. It is most often used in arenas for sports that were originally or are normally played on grass. David Chaney has long been credited with inventing the first generation of artificial grass turf in 1960. Artificial turf then had its commercial birth in 1965, when it was installed at the Astrodome, a stadium in Houston, Texas.
1960 Magnetic stripe card
A magnetic stripe card is a type of card capable of storing data by modifying the magnetism of tiny iron-based magnetic particles on a band of magnetic material on the card. The magnetic stripe, sometimes called a magstripe, is read by physical contact and swiping past a reading head. Magnetic stripe cards are commonly used in credit cards, identity cards such as a driver's license, and transportation tickets. The magnetic stripe card was invented in 1960 by IBM engineer Forrest Parry, who conceived the idea of incorporating a piece of magnetic tape in order to store secured information and data to a plastic card base.
1960 Global navigation satellite system
A global navigation satellite system (GNSS) provides autonomous geo-spatial positioning with global coverage. A GNSS allows small electronic receivers to determine their location (longitude, latitude, and altitude) to within a few meters using time signals transmitted along a line of sight by radio from satellites in outer space. Receivers on the ground with a fixed position can also be used to calculate the precise time as a reference for scientific experiments. The first such system was Transit, developed by the Johns Hopkins University Applied Physics Laboratory under the leadership of Richard Kershner. Development of the system for the United States Navy began in 1958, and a prototype satellite, Transit 1A, was launched in September 1959. That satellite failed to reach orbit. A second satellite, Transit 1B, was successfully launched April 13, 1960 by a Thor-Ablestar rocket. The last Transit satellite launch was in August 1988.
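Transit itself fixed positions by measuring the Doppler shift of the satellite's signal, but the general GNSS idea of recovering a position from range measurements to transmitters at known locations can be illustrated with a simplified two-dimensional sketch (the function name and coordinates here are purely illustrative):

```python
import math

def locate_2d(anchors, distances):
    """Recover (x, y) from distances to three beacons at known positions.

    Subtracting the first circle equation from the other two turns the
    quadratic system into a 2x2 linear one, solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

# A receiver at (3, 4), measured against three beacons:
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
distances = [math.hypot(3 - ax, 4 - ay) for ax, ay in anchors]
x, y = locate_2d(anchors, distances)  # recovers approximately (3.0, 4.0)
```

Real GNSS receivers work in three dimensions and must also solve for their own clock error, which is why signals from at least four satellites are needed.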
1960 Combined oral contraceptive pill
The combined oral contraceptive pill, or birth-control pill, or simply "the Pill", is a combination of an estrogen and a progestin taken orally to inhibit normal female fertility. On May 9, 1960, the FDA announced it would approve Enovid 10 mg for contraceptive use. By the time Enovid 10 mg had been in general use for three years, at least half a million women had used it. Dr. Gregory Pincus, who began his research into the feasibility of controlling women's fertility in 1950, invented the combined oral contraceptive pill in 1960.
1960 Obsidian hydration dating
Obsidian hydration dating is a geochemical method of determining age in either absolute or relative terms of an artifact made of obsidian. Obsidian hydration dating was introduced in 1960 by Irving Friedman and Robert Smith of the United States Geological Survey.
1960 Gas laser
A gas laser is a laser in which an electric current is discharged through a gas to produce light. The first gas laser, the helium–neon laser, was invented by William R. Bennett, Don Herriott, and Ali Javan in 1960. The first continuous visible gas laser, operating at 632.8 nm in the red, was invented by A. D. White and J. D. Rigden in 1962.
1961 Spreadsheet (electronic)
An electronic spreadsheet organizes data into computerized, software-defined columns and rows. Primarily used for business and accounting purposes, the data can then be "added up" by a formula to give a total or sum. The spreadsheet program summarizes information from many paper sources in one place and presents the information in a format to help a decision maker see the financial "big picture" of a company. Spreadsheets in paper format have been used by accountants for hundreds of years. However, computerized, electronic spreadsheets are of much more recent origin. In 1961, Richard Mattessich, a professor at the University of California at Berkeley, pioneered the concept of electronic spreadsheets for use in business accounting. In the autumn of 1978, Harvard Business School student Dan Bricklin came up with the idea for an interactive visible calculator. Bricklin and Bob Frankston then co-invented the software program VisiCalc, the world's first "killer application" and the first electronic spreadsheet for use on personal computers.
1961 Wearable computer
Wearable computers are computers which can be worn on the body. Wearable computers are especially useful for applications that require computational support while the user's hands, voice, eyes or attention are actively engaged with the physical environment. The wearable computer was first conceived by American mathematician Edward O. Thorp in 1955 and co-invented with American electronic engineer Claude Shannon.
1961 Frozen carbonated beverage
A frozen carbonated beverage is a mixture of flavored sugar syrup, carbon dioxide, and water that is frozen by a custom machine creating a drink consisting of a fine slush of suspended ice crystals, with very little liquid. In 1961, Omar Knedlik of Coffeyville, Kansas invented the first frozen carbonated drink machine and is thus recognized as the inventor of the frozen carbonated beverage. In 1965, 7-Eleven licensed the machine, and began selling Knedlik's invention by the brand name popularly known as Slurpee.
1961 Biofeedback
Biofeedback is a form of alternative medicine that involves measuring a subject's quantifiable bodily functions such as blood pressure, heart rate, skin temperature, sweat gland activity, and muscle tension, conveying the information to the patient in real-time. This raises the patient's awareness and conscious control of his or her unconscious physiological activities. Neal Miller is generally considered the father of modern-day biofeedback. Miller theorized the basic principles of biofeedback by applying his theory that classical and operant conditioning were both the result of a common learning principle in 1961. Miller hypothesized that any measurable physiological behavior within the human body would respond in some way to voluntary control.
1962 Communications satellite
A communications satellite is an artificial satellite stationed in space for the purposes of telecommunications. Modern communications satellites use a variety of orbits. For fixed point-to-point services, communications satellites provide a microwave radio relay technology complementary to that of submarine communication cables. In 1962, NASA launched Telstar, the world's first active communications satellite and the first satellite designed to transmit telephone and high-speed data communications, invented by the American engineer John Robinson Pierce. Its name is still used to this day for a number of television broadcasting satellites.
1962 Chimney starter
A chimney starter, also called a charcoal chimney, is a device that is used to start either lump charcoal or stacked charcoal briquettes on a grate. Although the chimney starter is now sometimes considered a "traditional" method of starting charcoal, a basic device used for barbecue grills was co-invented in 1962 by Hugh King, Lavaughn Johnson, and Garner Byars of Corinth, Mississippi and marketed under the "Auto Fire" label. A patent for the chimney starter was filed by its inventors on July 6, 1962 and issued in January 1965.
1962 Light-emitting diode
A light-emitting-diode (LED) is a semiconductor diode that emits light when an electric current is applied in the forward direction of the device, as in the simple LED circuit. The effect is a form of electroluminescence where incoherent and narrow-spectrum light is emitted from the p-n junction in a solid state material. The first practical visible-spectrum LED was invented in 1962 by Nick Holonyak Jr.
1962 Electret microphone
An electret microphone is a type of condenser microphone, which eliminates the need for a power supply by using a permanently charged material. Electret materials have been known since the 1920s, and were proposed as condenser microphone elements several times, but were considered impractical until the foil electret type was invented at Bell Laboratories in 1962 by Jim West, using a thin metallized Teflon foil. This became the most common type, used in many applications from high-quality recording and lavalier use to built-in microphones in small sound recording devices and telephones.
1962 Jet injector
A jet injector is a type of medical injecting syringe that uses a high-pressure narrow jet of the injection liquid instead of a hypodermic needle to penetrate the epidermis. The jet injector was invented by Aaron Ismach in 1962.
1962 Laser diode
A laser diode is a laser where the active medium is a semiconductor similar to that found in a light-emitting diode. The most common and practical type of laser diode is formed from a p-n junction and powered by injected electric current. These devices are sometimes referred to as injection laser diodes to distinguish them from optically pumped laser diodes, which are more easily manufactured in the laboratory. The laser diode was invented in 1962 by Robert N. Hall.
1962 Glucose meter
A glucose meter is a medical device for determining the approximate concentration of glucose in the blood. The first glucose meter, initially known as a glucose enzyme electrode, was invented by Leland Clark and Ann Lyons at the Cincinnati Children's Hospital. The sensor worked by measuring the amount of oxygen consumed by the enzyme.
1963 Kicktail
Kicktails are the upwards bent tips of a skateboard deck, today considered vital to a skateboard. The front kicktail is usually called the nose while the back kicktail is referred to as the tail. The kicktail was invented in 1963 by Larry Stevenson. U.S. patent #3,565,454 was filed on June 12, 1969 and issued to Stevenson on February 2, 1971.
1963 Computer mouse
In computing, a mouse is a pointing device that functions by detecting two-dimensional motion relative to its supporting surface. The mouse's motion typically translates into the motion of a pointer on a display, which allows for fine control of a graphical user interface. Douglas Engelbart invented the computer mouse in 1963 at the Augmentation Research Center, funded by the Department of Defense's Advanced Research Projects Agency (now DARPA). The first mouse was carved from wood and tracked motion via two wheels mounted on the bottom. Later designs employed a ball instead of two wheels, and the concept was eventually superseded by the more technologically advanced optical mouse.
1963 BASIC
In computer programming, BASIC is a family of high-level programming languages. The original BASIC was invented in 1963 by John George Kemeny and Thomas Eugene Kurtz at Dartmouth College in New Hampshire to provide computer access to non-science students. At the time, nearly all use of computers required writing custom software, which was something only scientists and mathematicians tended to be able to do. The language and its variants became widespread on microcomputers in the late 1970s and 1980s.
1963 Balloon catheter
A balloon catheter is a type of "soft" catheter with an inflatable "balloon" at its tip which is used during a catheterization procedure to enlarge a narrow opening or passage within the body. The deflated balloon catheter is positioned, then inflated to perform the necessary procedure, and deflated again in order to be removed. A common use includes angioplasty. In 1963, Dr. Thomas Fogarty invented and patented the balloon catheter.
1963 Geosynchronous satellite
A geosynchronous satellite is a satellite whose orbital track repeats regularly over the same points on the Earth over time. The world's first geosynchronous satellite, Syncom II, launched by NASA on a Delta rocket in 1963, was invented by Harold Rosen.
1964 Buffalo wings
A Buffalo wing, hot wing or wing is a chicken wing section (drumette or flat) that is traditionally fried unbreaded and then coated in sauce. Classic Buffalo-style chicken wing sauce is composed of a vinegar-based cayenne pepper hot sauce and butter. They are traditionally served with celery sticks and blue cheese dressing. Buffalo wings get their name from where they were invented, at the Anchor Bar in Buffalo, New York. In 1964, Teresa Bellissimo at the family-owned Anchor Bar covered chicken wings in her own special sauce and served them with a side of blue cheese and celery. In 1980, Frank Bellissimo, the husband of Teresa, told The New Yorker that the wings had been invented out of necessity: the restaurant had received an overstock of chicken wings instead of other chicken parts, and the couple did not know what else to do with them. On the other hand, Dominic Bellissimo, the son of Frank and Teresa, disputed this story, claiming that the wings were an impromptu midnight snack that his mother created at his request while he was drinking with friends. Whatever the story, all of the Bellissimos have since died, so there is no way to verify how Buffalo wings were invented.
1964 Plasma display
A plasma display panel is a flat panel display common to large TV displays. Many tiny cells between two panels of glass hold an inert mixture of noble gases. The gas in the cells is electrically turned into a plasma which then excites phosphors to emit light. The monochrome plasma video display was co-invented in July 1964 at the University of Illinois at Urbana–Champaign by Donald Bitzer, H. Gene Slottow, and graduate student Robert Willson for the PLATO Computer System.
1964 Moog synthesizer
The Moog synthesizer is an analog synthesizer that generates sound electronically using analog circuits and analog computer techniques rather than vacuum tubes. In 1964, Robert Moog invented the Moog synthesizer, which has since been used by recording artists such as Mick Jagger, The Beatles, The Monkees, and Stevie Wonder.
1964 8-track cartridge
Stereo 8, commonly known as the eight-track cartridge or eight-track, is a magnetic tape sound recording technology. In 1964, William Lear invented the eight-track, which went on to become the most popular musical medium from the mid-1960s to the early 1980s.
1964 Permanent press
Permanent press is a characteristic of fabric that has been chemically processed to resist wrinkles and hold its shape. This treatment has a lasting effect on the fabric, namely in shirts, trousers, and slacks. Permanent press was invented in 1964 by Ruth Rogan Benerito, research leader of the Physical Chemistry Research Group of the Cotton Chemical Reactions Laboratory.
1964 Carbon dioxide laser
The carbon dioxide laser was one of the earliest gas lasers to be developed and is still one of the most useful. The carbon dioxide laser was invented by C. Kumar N. Patel of Bell Labs in 1964.
1964 Liquid crystal display (dynamic scattering mode)
A liquid crystal display (LCD) is an electronically modulated optical device shaped into a thin, flat panel made up of any number of color or monochrome pixels filled with liquid crystals and arrayed in front of a light source or reflector. In 1964, George H. Heilmeier invented the dynamic scattering mode found in liquid crystal displays, wherein an electrical charge is applied which rearranges the molecules so that they scatter light.
1964 SQUID
Superconducting Quantum Interference Devices are very sensitive magnetometers used to measure extremely small magnetic fields based on superconducting loops containing Josephson junctions. The DC SQUID was invented in 1964 by Arnold Silver, Robert Jaklevic, John Lambe, and James Mercereau of Ford Research Labs.
1964 Argon laser
The argon laser is one of a family of ion lasers that use a noble gas as the active medium. The argon laser was invented by William Bridges in 1964.
1965 Adaptive equalizer (automatic)
An automatic adaptive equalizer corrects distorted signals, greatly improving data performance and speed. All computer modems use equalizers. The automatic adaptive equalizer was invented in 1965 by Bell Laboratories electrical engineer Robert Lucky.
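The adaptive principle behind such an equalizer can be sketched with a least-mean-squares (LMS) tap update, a closely related later algorithm rather than Lucky's exact zero-forcing design; the channel model, tap count, and step size below are illustrative assumptions, not details of the 1965 device.

```python
import random

def lms_equalizer(received, desired, num_taps=3, mu=0.05):
    """Adapt FIR filter taps so the filtered signal tracks known
    training symbols, using the least-mean-squares update rule."""
    w = [0.0] * num_taps
    for n in range(num_taps - 1, len(received)):
        x = received[n - num_taps + 1:n + 1][::-1]  # newest sample first
        y = sum(wi * xi for wi, xi in zip(w, x))    # equalizer output
        e = desired[n] - y                          # error vs. training symbol
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
    return w

def residual_mse(received, desired, w):
    """Mean squared error of the equalized signal with fixed taps."""
    errs = [(desired[n] - sum(wi * xi for wi, xi in zip(
        w, received[n - len(w) + 1:n + 1][::-1]))) ** 2
        for n in range(len(w) - 1, len(received))]
    return sum(errs) / len(errs)

random.seed(1)
symbols = [random.choice((-1.0, 1.0)) for _ in range(2000)]
# Mild intersymbol interference: each symbol leaks into the next sample.
received = [symbols[n] + 0.4 * symbols[n - 1] for n in range(1, len(symbols))]
taps = lms_equalizer(received, symbols[1:])
```

With this 0.4-leak channel the adapted taps approximate the inverse filter [1, -0.4, 0.16], cutting the residual error well below the raw distortion.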
1965 Snowboarding
Snowboarding is a sport that involves descending a slope that is either partially or fully covered with snow on a snowboard attached to a rider's feet using a special boot set into a mounted binding. The development of snowboarding was inspired by skateboarding, surfing and skiing. The first snowboard, the Snurfer, was invented by Sherman Poppen in 1965. Snowboarding became a Winter Olympic Sport in 1998.
1965 Kevlar
Kevlar is the registered trademark for a light, strong para-aramid synthetic fiber. Typically it is spun into ropes or fabric sheets that can be used as such or as an ingredient in composite material components. Currently, Kevlar has many applications, ranging from bicycle tires and racing sails to body armor because of its high strength-to-weight ratio. Invented at DuPont in 1965 by Stephanie Kwolek, Kevlar was first commercially used in the early 1970s as a replacement for steel in racing tires.
1965 Hypertext
Hypertext most often refers to text on a computer that will lead the user to other, related information on demand. It is a relatively recent innovation to user interfaces, which overcomes some of the limitations of written text. Rather than remaining static like traditional text, hypertext makes possible a dynamic organization of information through links and connections called hyperlinks. Ted Nelson coined the words "hypertext" and "hypermedia" in 1965 and invented the Hypertext Editing System in 1968 at Brown University.
1965 Cordless telephone
A cordless telephone is a telephone with a wireless handset that communicates via radio waves with a base station connected to a fixed telephone line, usually within a limited range of its base station. The base station is on the subscriber premises, and attaches to the telephone network the same way a corded telephone does. In 1965, an American woman named Teri Pall invented the cordless telephone. Due to marketing difficulties, Pall never patented her invention. George Sweigert of Euclid, Ohio had more success, receiving a patent for the cordless telephone in 1969.
1965 Space pen
The Space Pen, also known as the Zero Gravity Pen, is a pen that uses pressurized ink cartridges and is claimed to write in zero gravity, upside down, underwater, over wet and greasy paper, at any angle, and in extreme temperature ranges. The ballpoint is made from tungsten carbide and is precisely fitted in order to avoid leaks. A sliding float separates the ink from the pressurized gas. The thixotropic ink in the hermetically sealed and pressurized reservoir is claimed to write for three times longer than a standard ballpoint pen. In 1965, the space pen was invented and patented by Paul C. Fisher. After two years of testing at NASA, the space pen was first used during the Apollo 7 mission in 1968.
1965 Minicomputer
A minicomputer is a class of multi-user computers that lies in the middle range of the computing spectrum, in between the largest multi-user systems and the smallest single-user systems. Wesley A. Clark and Charles Molnar co-invented the PDP-8 in 1965, the world's first minicomputer, using integrated circuit technology. Because of its relatively small size and its $18,000 price tag, Digital Equipment only sold several hundred units.
1965 Compact Disc
The Compact Disc, or CD, is an optical disc used to store digital data, originally developed for storing digital audio. In 1965, James Russell acted upon his idea that the music industry needed a new medium in which a gramophone record and the needle of a phonograph would no longer come into contact with one another. With an interest in lasers, Russell began research into an optical system that would replace the phonograph's needle with a laser reading encoded data in order to record and play back sound. By 1970, Russell had successfully invented and built the world's first compact disc, 12 inches (30 cm) in diameter, containing digitized codes etched onto the disc that could be read by a laser. After partnering with Digital Recording, which was later acquired by Optical Recording Corporation, Russell and the parent company he worked for found it increasingly difficult to enforce and protect his patents from infringement by competitors such as Sony, Philips, and Time Warner, all of whom profited from Russell's invention. The belief that Dutch and Japanese scientists "invented" the compact disc is a misconception in the sense that Philips and Sony used Russell's underlying technology to develop a smaller, more refined, practical and sophisticated disc. In 1982, Sony and Philips commercially introduced the compact disc, twelve years after Russell had created his working prototype. By 1986, Optical Recording decided to act legally by suing Sony, Philips, and Time Warner. Two years later, the company reached a licensing settlement with Sony, and agreements with Philips and others soon followed, including a June 1992 court ruling that required Time Warner to pay Optical Recording $30 million for patent infringement.
1965 Chemical laser
A chemical laser is a laser that obtains its energy from a chemical reaction. Chemical lasers can achieve continuous wave output with power reaching to megawatt levels. They are used in industry for cutting and drilling, and in military as directed-energy weapons. The first chemical laser was co-invented by Jerome V. V. Kasper and George C. Pimentel in 1965.
1966 Dynamic random access memory
Dynamic random access memory is a type of random access memory that stores each bit of data in a separate capacitor within an integrated circuit. Since real capacitors leak charge, the information eventually fades unless the capacitor charge is refreshed periodically. Because of this refresh requirement, it is a dynamic memory as opposed to static random access memory and other static memory. In 1966 DRAM was invented by Robert Dennard at the IBM Thomas J. Watson Research Center.
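The refresh requirement can be illustrated with a toy model of a single cell; the leak rate, sense threshold, and refresh interval below are arbitrary illustrative numbers, not parameters of Dennard's actual cell.

```python
def read_cell(refresh_every=None, ticks=20, leak=0.9, threshold=0.5):
    """Model one DRAM cell: the capacitor loses a fraction of its charge
    each tick; an optional periodic refresh senses and rewrites the bit."""
    charge = 1.0                               # a stored '1' bit
    for t in range(1, ticks + 1):
        charge *= leak                         # capacitor leakage
        if refresh_every and t % refresh_every == 0 and charge > threshold:
            charge = 1.0                       # sense amplifier rewrites full charge
    return 1 if charge > threshold else 0

print(read_cell())                 # without refresh the stored '1' fades away
print(read_cell(refresh_every=5))  # periodic refresh preserves the bit
```

Static RAM, by contrast, holds its state in a bistable circuit and needs no such refresh loop.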
1966 Thermosonic bonding
Thermosonic bonding is the most widely used wire bonding method to electrically connect silicon integrated circuits. It was introduced by Alexander Coucoulas in 1966. Owing to the reliability of a thermosonic bond, it is extensively used to connect the all-important central processing unit (CPU), an encapsulated integrated circuit that serves as the mainstay and "brains" of the computer.
1967 Backpack (internal frame)
The internal frame backpack consists of strips of either metal or plastic that mold to one's back to provide a good fit, sometimes with additional metal stays to reinforce the frame. Usually a complex series of straps works with the frame to distribute the weight and hold it in place. The close fitting of the back section to the wearer's back allows the pack to be closely attached to the body, and gives a predictable movement of the load. The internal frame backpack was invented in 1967 by Greg Lowe, the founder of Lowepro.
1967 Light beer
Light beer is beer that is reduced in alcohol content or in calories compared to regular beer. It was invented in 1967 by biochemist Joseph L. Owades.
1967 Calculator (hand-held)
Invented by Jack Kilby in 1967, the hand-held calculator is a device for performing mathematical calculations, distinguished from a computer by having a limited problem solving ability and an interface optimized for interactive calculation rather than programming. Calculators can be hardware or software, and mechanical or electronic, and are often built into devices such as PDAs or mobile phones.
1968 Racquetball
Racquetball is a racquet sport played with a hollow rubber ball in an indoor or outdoor court. Joseph Sobek is credited with inventing the sport of racquetball in the Greenwich YMCA, though not with naming it. A professional tennis player and handball player, Sobek sought a fast-paced sport that was easy to learn and play. He designed the first strung paddle, devised a set of codified rules, and named his game "paddle rackets."
1968 Virtual reality
Virtual reality (VR) is a technology which allows a user to interact with a computer-simulated environment. Most current virtual reality environments are primarily visual experiences, displayed either on a computer screen or through special stereoscopic displays, but some simulations include additional sensory information, such as sound through speakers or headphones. In 1968, Ivan Sutherland, with the help of his student Bob Sproull, invented what is widely considered to be the first virtual reality and augmented reality (AR) head-mounted display (HMD) system. It was primitive both in terms of user interface and realism: the HMD worn by the user was so heavy that it had to be suspended from the ceiling, and the graphics comprising the virtual environment were simple wireframe model rooms. In 1989, Jaron Lanier, the founder of VPL Research, popularized the concept of virtual reality with his "goggles n' gloves" system.
1968 Turtle Excluder Device
A turtle excluder device is a specialized device that allows a captured sea turtle to escape from a fisherman's net. They are fitted to the nets used in bottom trawling by the commercial shrimp fishing industry. The first turtle excluder device was called the Georgia Jumper. It was invented in 1968 by American fisherman Sinkey Boone.
1968 Zipper (ride)
Not to be confused with the 1893 invention with the same name, the "Zipper" is an amusement-thrill ride popular at carnivals and amusement parks in the United States, Canada, Australia, and New Zealand. It features strong vertical G-forces, numerous spins, and a noted sense of unpredictability. The ride's basic format is a long, rotating, oval boom with a cable around its edge that pulls 12 cars around the ride. The Zipper is designed to be transportable and assembled from site to site. The Zipper was invented in 1968 by Joseph Brown of Chance Morgan. Since this time, more than 200 rides have been built and distributed all over the world, making it one of the most mass-produced and modern-day rides of all time.
1969 Lunar Module
The Lunar Module was the lander portion of the spacecraft built for the Apollo program by Grumman to achieve the transit from lunar orbit to the surface and back. The module was also known as the LM from the manufacturer's designation. NASA achieved the first test flight on January 22, 1968 using a Saturn IB rocket. Six successful missions carried twelve astronauts, the first being Neil Armstrong and Buzz Aldrin on July 20, 1969, to the surface of the Moon and safely back to Earth. Tom Kelly, as project engineer at Grumman, invented and successfully designed the Lunar Module.
1969 Electromagnetic lock
An electromagnetic lock is a simple locking device that consists of an electromagnet and armature plate. By attaching the electromagnet to the door frame and the armature plate to the door, a current passing through the electromagnet attracts the armature plate holding the door shut. The first modern direct-pull electromagnetic lock was designed by Sumner "Irving" Saphirstein in 1969.
1969 Laser printer
A laser printer is a common type of computer printer that rapidly produces high quality text and graphics on plain paper. The laser printer was invented at Xerox in 1969 by researcher Gary Starkweather, who had an improved printer working by 1971 and incorporated into a fully functional networked printer system by about a year later.
1969 Bioactive glass
Bioactive glasses are a group of surface reactive glass-ceramics. The biocompatibility of these glasses has led them to be investigated extensively for use as implant materials in the human body to repair and replace diseased or damaged bone. Bioactive glass was invented in 1969 by Larry Hench and his colleagues at the University of Florida.
1969 Wide-body aircraft
A wide-body aircraft is a large airliner with two passenger aisles, also known as a twin-aisle aircraft. As the world's first wide-body aircraft, the Boeing 747, also referred to as a jumbo jet, revolutionized international travel around the globe by making non-stop and long distance travel accessible for all. Joe Sutter, the chief engineer of the jumbo jet program at The Boeing Company designed the world's first wide-body aircraft, the Boeing 747, with its first test flight on February 9, 1969.
1969 Taser
A Taser is an electroshock weapon that uses Electro-Muscular Disruption (EMD) technology to cause neuromuscular incapacitation (NMI) and strong muscle contractions through the involuntary stimulation of both the sensory nerves and the motor nerves. The Taser is not dependent on pain compliance, making it highly effective on subjects with high pain tolerance. For this reason it is preferred by law enforcement over traditional stun guns and other electronic control weapons. Jack Cover, a NASA researcher, invented the Taser in 1969.
1969 Charge coupled device
A charge-coupled device (CCD) is a device for the movement of electrical charge, usually from within the device to an area where the charge can be manipulated. This is achieved by "shifting" the signals between stages within the device one at a time. CCDs move charge between capacitive bins in the device, with the shift allowing for the transfer of charge between bins. Often the device is integrated with an image sensor, such as a photoelectric device to produce the charge that is being read, thus making the CCD a major technology for digital imaging. First conceived as a form of computer memory, the charge-coupled device was co-invented in 1969 by American physicist George E. Smith and Canadian physicist Willard Boyle at AT&T Bell Laboratories.
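The shift-register readout described above can be sketched as a toy "bucket brigade" simulation; the well contents are invented example values, and real devices shift analog charge packets rather than numbers.

```python
def ccd_readout(wells):
    """Shift charge packets one well at a time toward the output node,
    measuring whichever packet reaches the end on each clock cycle."""
    wells = list(wells)
    readout = []
    for _ in range(len(wells)):
        readout.append(wells[-1])   # measure the packet at the output node
        wells = [0] + wells[:-1]    # every packet advances one well
    return readout

# The well nearest the output is read first, so readout order is reversed.
print(ccd_readout([5, 3, 8]))
```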
1969 Mousepad
A mousepad is a surface, typically a rubberized square mat, for enhancing the usability of a computer mouse. Jack Kelley invented the mousepad in 1969.
1969 Chapman Stick
A polyphonic member of the guitar family, the Chapman Stick is an electric musical instrument used for music recordings to play various parts such as bass, lead, chords, and textures. The Chapman Stick looks like a wide version of the fretboard of an electric guitar, but having 8, 10 or 12 strings. The player will use both hands to sound notes by striking the strings against the fingerboard just behind the appropriate frets for the desired notes. The Chapman Stick was invented in 1969 by American jazz musician Emmett Chapman.
1969 Markup language
A markup language is a modern system for annotating a text in a way that is syntactically distinguishable from that text. The idea and terminology evolved from the "marking up" of manuscripts, for example the revision instructions traditionally written by editors with a blue pencil on authors' manuscripts. A well-known example of a markup language in widespread use today is HyperText Markup Language (HTML), one of the key document formats of the World Wide Web. The origins of markup languages can be traced to a formatting language called RUNOFF, developed in the 1960s by Jerome H. Saltzer at the Massachusetts Institute of Technology. However, the first true markup language was the Generalized Markup Language (GML), co-invented by IBM engineers Charles Goldfarb, Ed Mosher, and Ray Lorie.
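Because the markup is syntactically distinguishable from the text it annotates, a parser can separate the two mechanically. A minimal sketch using Python's standard html.parser module (the HTML fragment is an invented example):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect only the character data, discarding the markup tags."""
    def __init__(self):
        super().__init__()
        self.text = []

    def handle_data(self, data):
        self.text.append(data)

parser = TextExtractor()
parser.feed("<p>Markup is <b>distinguishable</b> from text.</p>")
print("".join(parser.text))
```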
1970s
1970 Wireless local area network
A wireless local area network is the linking of two or more computers or devices using spread-spectrum or OFDM modulation technology to enable communication between devices in a limited area. In 1970, the University of Hawaii, under the leadership of Norman Abramson, invented the world's first computer communication network using low-cost ham-like radios, named ALOHAnet. The bidirectional star topology of the system included seven computers deployed over four islands to communicate with the central computer on the island of Oahu without using phone lines.
1970 Surf leash
A surfboard leash or leg rope is the cord that attaches a surfboard to the surfer. It prevents the surfboard from being swept away by waves and prevents a runaway surfboard from hitting other surfers and swimmers. Modern leashes comprise a urethane cord; one end has a band with a velcro strap attached to the surfer's trailing foot, and the opposite end has a velcro strap attached to the tail end of the surfboard. The surfboard leash was invented in 1970 by Santa Cruz, California resident Pat O'Neill, son of wetsuit innovator Jack O'Neill, who fastened surgical tubing to the nose of his surfboard with a suction cup looped to the end of his wrist in order to leverage turns and cutbacks in the water. However, modifications by O'Neill in 1971 attached the surf leash to the ankle and to the surfboard's tail, a practice still in use today.
1971 Uno (card game)
Uno is a card game played with a specially printed deck of colored cards. Playing the legal card with the highest point value is a simple way to minimize points held in the hand at the end of the round, though it fails to account for the utility of holding wilds and draw fours near the end of the game. Uno was co-invented by the father-son duo of Merle and Ray Robbins in 1971 as a twist on the card game Crazy Eights. The name of the game, "Uno", Spanish for one, was thought up by Merle's son Ray.
1971 Personal computer
The personal computer (PC) is any computer whose original sales price, size, and capabilities make it useful for individuals, and which is intended to be operated directly by an end user, with no intervening computer operator. The Kenbak-1 is officially credited by the Computer History Museum to be the world's first personal computer which was invented in 1971 by John Blankenbaker. With a price tag of $750 and after selling only 40 machines, Kenbak Corporation closed its doors in 1973.
1971 Fuzzball router
Fuzzball routers were the first modern routers on the Internet. They were DEC LSI-11 computers loaded with router software. First conceptualized by its inventor, David L. Mills, fuzzball routers evolved as a virtual machine supporting the DEC RT-11 operating system and early developmental versions of the TCP/IP protocol and applications suite. Prototype versions of popular Internet tools, including Telnet, FTP, DNS, EGP and SMTP were first implemented and tested on fuzzball routers.
1971 Supercritical airfoil
A supercritical airfoil is an airfoil designed, primarily, to delay the onset of wave drag on aircraft in the transonic speed range. Supercritical airfoils are characterized by their flattened upper surface, highly cambered aft section, and greater leading edge radius as compared to traditional airfoil shapes. The supercritical airfoil was invented and designed by NASA aeronautical engineer Richard Whitcomb in the 1960s. Following successful wind tunnel results, flight testing commenced in 1971 on a modified United States Navy Vought F-8 fighter.
1971 Microprocessor
The microprocessor is a computer chip that processes instructions and communicates with outside devices, controlling most of the operations of a computer through the central processing unit on a single integrated circuit. The first commercially available microprocessor was a silicon-based chip, the Intel 4004, co-invented in 1971 by Ted Hoff, Federico Faggin, and Stanley Mazor for a calculator company named Busicom, and produced by Intel.
1971 Floppy disk
A floppy disk is a data storage medium that is composed of a disk of thin, flexible "floppy" magnetic storage medium encased in a square or rectangular plastic shell. In 1971 while working at IBM, David L. Noble invented the 8-inch floppy disk. Floppy disks in 8-inch, 5¼-inch, and 3½-inch formats enjoyed many years as a popular and ubiquitous form of data storage and exchange, from the mid-1970s to the late 1990s.
1971 String trimmer
A string trimmer is a powered handheld device that uses a flexible monofilament line instead of a blade for cutting grass and trimming other plants near objects. It consists of a cutting head at the end of a long shaft with a handle or handles and sometimes a shoulder strap. String trimmers powered by an internal combustion engine have the engine on the opposite end of the shaft from the cutting head while electric string trimmers typically have an electric motor in the cutting head. Used frequently in lawn and garden care, the string trimmer is more popularly known by the brandnames Weedeater or Weedwhacker. The string trimmer was invented in 1971 by George Ballas of Houston, Texas.
1971 Memristor
A memristor is a passive two-terminal electronic device that is built to express only the property of memristance. However, in practice it may be difficult to build a 'pure memristor,' since a real device may also have a small amount of some other property, such as capacitance. In 1971, American engineer and computer scientist Leon Chua first postulated the memristor that could be used to implement computer memory. Almost four decades after Chua's research, a team of engineers at Hewlett Packard under the direction of R. Stanley Williams constructed a working memristor using a thin film of titanium dioxide in April 2008.
1971 E-mail
Electronic mail, often shortened to e-mail, is a method of creating, transmitting, or storing primarily text-based human communications with digital communications systems. Ray Tomlinson, a programmer working on the United States Department of Defense's ARPANET, invented and sent the first networked electronic mail on a time-sharing computer in 1971. Previously, e-mail could only be sent to users on the same computer. Tomlinson is regarded as having sent the first e-mail over a network and as having established the "@" sign as the standard separator between user and host names in e-mail addresses.
1972 C (programming language)
C is a general-purpose computer programming language originally invented in 1972 by Dennis Ritchie at the Bell Telephone Laboratories in order to implement the Unix operating system. Although C was designed for writing architecturally independent system software, it is also widely used for developing application software.
1972 Video game console
A video game console is an interactive entertainment computer or electronic device that produces a video display signal which can be used with a display device such as a television to display a video game. A joystick or control pad is often used to simulate and play the video game. It was not until 1972 that Magnavox released the first home video game console, the Magnavox Odyssey, invented by Ralph H. Baer.
1972 Global Positioning System
The Global Positioning System (GPS) is a space-based global navigation satellite system that provides reliable, three-dimensional positioning, navigation, and timing services to worldwide users on a continuous basis in all weather, day and night, anywhere on or near the Earth. 24 satellites orbit the Earth twice a day, transmitting signals to GPS receivers that use trilateration to calculate the user's exact location. Ultimately, the GPS is the descendant of the United States Navy's Timation satellite program and the United States Air Force's 621-B satellite program. The invention of GPS was a collaborative and team effort. The basic architecture of GPS was devised in less than a month in 1972 by Colonel Bradford Parkinson, Mel Birnbaum, Bob Rennard, and Jim Spilker. However, Richard Easton, a son of Roger Easton who was the head of the U.S. Navy's Timation program, claims that his father invented GPS and filed U.S. patent #3,789,409 in 1974. Other names listed by Richard Easton are James Buisson, Thomas McCaskill, Don Lynch, Charles Bartholomew, Randolph Zwirn and, "an important outsider," Robert Kern. Ivan Getting, while working at Raytheon, envisioned a satellite system similar to MOSAIC, a railroad mobile ballistic missile guidance system, but working more like LORAN. The GPS program was approved in December 1973, the first GPS satellite was launched in 1978, and by August 1993, 24 GPS satellites were in orbit. Initial operational capability was established in December of that same year, and in February 1994 the Federal Aviation Administration (FAA) declared GPS ready for use.
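The range-based position fix the entry describes (often loosely called triangulation, strictly trilateration) can be sketched in two dimensions; the station coordinates and ranges below are invented example numbers, and real GPS works in three dimensions and must also solve for the receiver's clock error.

```python
import math

def trilaterate_2d(p1, p2, p3, d1, d2, d3):
    """Locate a point from exact distances to three known points by
    subtracting the circle equations to obtain a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + (x2**2 + y2**2) - (x1**2 + y1**2)
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + (x3**2 + y3**2) - (x1**2 + y1**2)
    det = a1 * b2 - a2 * b1           # zero if the stations are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
truth = (3.0, 4.0)
ranges = [math.dist(truth, s) for s in stations]
fix = trilaterate_2d(*stations, *ranges)
print(fix)
```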
1972 PET scanner
A PET scanner is a commonly used medical device which scans the whole human body for detecting diseases such as cancer. The PET scanner was invented in 1972 by Edward J. Hoffman and fellow scientist Michael Phelps.
1972 Magnetic resonance imaging
Magnetic resonance imaging (MRI), or nuclear magnetic resonance imaging (NMRI), is primarily a medical imaging technique most commonly used in radiology to visualize the structure and function of the body. Raymond Damadian, an Armenian-American physician and scientist, created the world's first magnetic resonance imaging machine in 1972 while researching the analytical properties of magnetic resonance. Damadian filed the first patent for an MRI machine, U.S. patent #3,789,832, on March 17, 1972, which was issued to him on February 5, 1974. Damadian, along with Larry Minkoff and Michael Goldsmith, subsequently went on to perform the first MRI body scan of a human being on July 3, 1977. Reflecting the fundamental importance and applicability of MRI in medicine, Paul Lauterbur of the University of Illinois at Urbana–Champaign and Sir Peter Mansfield of the University of Nottingham were awarded the 2003 Nobel Prize in Physiology or Medicine for their "discoveries concerning magnetic resonance imaging."
1973 Personal watercraft
A personal watercraft (PWC) is a recreational watercraft that the rider sits or stands on, rather than inside of, as in a boat. Models have an inboard engine driving a pump jet that has a screw-shaped impeller to create thrust for propulsion and steering. Clayton Jacobson II is credited with inventing the personal watercraft, including both the sit-down and stand-up models in 1973.
1973 E-paper
Electronic paper, also called e-paper, is a display technology designed to mimic the appearance of ordinary ink on paper. Electronic paper reflects light like ordinary paper and is capable of holding text and images indefinitely without drawing electricity, while allowing the image to be changed later. Applications of e-paper technology include e-book readers capable of displaying digital versions of books, magazines and newspapers, electronic pricing labels in retail shops, time tables at bus stations, and electronic billboards. Electronic paper was invented in 1973 by Nick Sheridon at Xerox's Palo Alto Research Center. The first electronic paper, called Gyricon, consisted of polyethylene spheres between 75 and 106 micrometres across.
1973 Recombinant DNA
Recombinant DNA is a form of synthetic DNA that is engineered through the combination or insertion of one or more DNA strands, thereby combining DNA sequences that would not normally occur together. The Recombinant DNA technique was engineered by Stanley Norman Cohen and Herbert Boyer in 1973. They published their findings in a 1974 paper entitled "Construction of Biologically Functional Bacterial Plasmids in vitro", which described a technique to isolate and amplify genes or DNA segments and insert them into another cell with precision, creating a transgenic bacterium.
1973 Catalytic converter (three-way)
A catalytic converter provides an environment for a chemical reaction wherein toxic combustion by-products are converted to less-toxic substances. First used on cars in 1975 to meet stricter emission standards, catalytic converters are also used on generator sets, forklifts, mining equipment, trucks, buses, trains, and other engine-equipped machines. The three-way catalytic converter was co-invented by John J. Mooney and Carl D. Keith at the Engelhard Corporation in 1973.
1973 Mobile phone
A mobile phone, or cell phone, is a long-range, electronic device used for mobile voice or data communication over a network of specialized base stations known as cell sites. Early mobile FM radio telephones were in use for many years, but since the number of radio frequencies was very limited in any area, the number of phone calls was also very limited. To solve this problem, coverage could be divided into many small areas called cells which share the same frequencies. When users moved from one area to another while calling, the call would have to be switched over automatically without losing the call. In this system, a small number of radio frequencies could accommodate a huge number of calls. The first mobile call was made from a car phone in St. Louis, Missouri on June 17, 1946, but the system was far removed from what is considered a portable handset today. The equipment weighed 80 lbs, and the AT&T service, basically a massive party line, cost $30 per month plus 30 to 40 cents per local call. The basic network and supporting infrastructure of hexagonal cells used to support a mobile telephony system while remaining on the same channel were devised by Douglas H. Ring and W. Rae Young at AT&T Bell Labs in 1947. Finally in 1973, Martin Cooper invented the first handheld cellular/mobile phone. His first mobile phone call was made to Joel S. Engel in April 1973.
1973 Voicemail
Voicemail is the managing of telephone messages from a centralized data storing system. Voicemail is stored on hard disk drives, media generally used by computers to store other forms of data. Messages are recorded in digitized natural human voice similar to how music is stored on a compact disc. To retrieve and play back messages, a user calls the system from any phone, and his or her messages can be retrieved immediately. The first voicemail system, known as the Speech Filing System (SFS), was invented by Stephen J. Boies in 1973. It started as a research project at the IBM Thomas J. Watson Research Center, and the first working prototype became available to telephone users in 1975.
1974 Heimlich maneuver
Performing abdominal thrusts, better known as the Heimlich Maneuver, involves a rescuer standing behind a patient and using their hands to exert pressure on the bottom of the diaphragm. This compresses the lungs and exerts pressure on any object lodged in the trachea, hopefully expelling it. This amounts to an artificial cough. Henry Heimlich, the inventor of the abdominal thrust technique, first published his findings about the maneuver in a June 1974 informal article in Emergency Medicine entitled "Pop Goes the Cafe Coronary". On June 19, 1974, the Seattle Post-Intelligencer reported that retired restaurant-owner Isaac Piha used the procedure to rescue choking victim Irene Bogachus in Bellevue, Washington.
1974 Post-it note
The Post-it note is a piece of stationery with a re-adherable strip of adhesive on the back, designed for temporarily attaching notes to documents and to other surfaces such as walls, desks and table-tops, computer displays, and so forth. Post-it notes were co-invented by 3M employees Arthur Fry and Spencer Silver in 1974.
1974 Scanning acoustic microscope
A Scanning Acoustic Microscope (SAM) is a device which uses focused sound to investigate, measure, or image an object. It is commonly used in failure analysis and non-destructive evaluation. The first scanning acoustic microscope was co-invented in 1974 by R. A. Lemons and C. F. Quate at the Microwave Laboratory of Stanford University.
1974 Quantum well laser
A quantum well laser is a laser diode in which the active region of the device is so narrow that quantum confinement occurs. The wavelength of the light emitted by a quantum well laser is determined by the width of the active region rather than just the bandgap of the material from which it is constructed. The quantum well laser was invented in 1974 by Charles H. Henry, a physicist at Bell Labs, who was granted a patent for it in 1976.
1974 Universal Product Code
The Universal Product Code (UPC) is a barcode symbology that encodes a 12-digit number, allowing trade items to be tracked and information such as pricing to be looked up for a product on a store's shelf. The Universal Product Code, invented by George Laurer at IBM, was first used to scan a marked item at a retail checkout, at Marsh's supermarket in Troy, Ohio, at 8:01 a.m. on June 26, 1974.
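The twelfth digit of a UPC-A code is a check digit derived from the first eleven. A minimal sketch of the standard mod-10 calculation (the sample number below is only an illustration):

```python
def upc_check_digit(first_eleven: str) -> int:
    """Compute the 12th (check) digit of a UPC-A code.

    Digits in odd positions (1st, 3rd, ...) are weighted 3, digits in
    even positions are weighted 1; the check digit brings the weighted
    sum up to a multiple of 10.
    """
    assert len(first_eleven) == 11 and first_eleven.isdigit()
    total = sum((3 if i % 2 == 0 else 1) * int(d)
                for i, d in enumerate(first_eleven))
    return (10 - total % 10) % 10

print(upc_check_digit("03600029145"))  # → 2
```

A scanner recomputes this digit and rejects the scan if it does not match, catching most single-digit misreads.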
1975 Digital camera
The digital camera is a camera that takes video or still photographs digitally, by recording images via an electronic image sensor. Steven Sasson, an engineer at Eastman Kodak, invented and built the first digital camera, using a CCD image sensor, in 1975.
1975 Ethernet
Ethernet is a family of frame-based computer networking technologies for local area networks (LANs). The name comes from the physical concept of the ether. It defines a number of wiring and signaling standards for the Physical Layer of the OSI networking model, a means of network access at the Media Access Control (MAC)/Data Link Layer, and a common addressing format. Robert Metcalfe, while at Xerox, invented Ethernet in 1975.
1975 Breakaway rim
A breakaway rim is a basketball hoop that can bend slightly when a player dunks a basketball, and then instantly snap back into its original shape when the player releases it. It allows players to dunk the ball without shattering the backboard, and it reduces the possibility of wrist injuries. According to the Lemelson Center, an affiliate of the Smithsonian Institution in Washington, D.C., the breakaway rim was invented by Arthur Ehrat. After a six-year process, from July 1976 to December 1982, Ehrat received a patent (U.S. Patent No. 4,365,802). His application was rejected twice, with patent examiner Paul Shapiro noting that Frederick C. Tyner held a patent for a similar device (U.S. Patent No. 4,111,420). However, a court appeal finally ruled in favor of Ehrat, as he proved, through notarized copies of canceled checks and a rough sketch of his invention, that he had been working on his breakaway basketball goal in 1975, before Frederick Tyner conceived of his.
1976 Gore-Tex
Gore-Tex is a waterproof, breathable fabric made using an emulsion polymerization process with the fluorosurfactant perfluorooctanoic acid. Gore-Tex was co-invented by Wilbert L. Gore, Rowena Taylor, and Gore's son, Robert W. Gore, for use in space. Robert Gore was granted a patent on April 27, 1976, for a porous form of polytetrafluoroethylene with a micro-structure characterized by nodes interconnected by fibrils. Robert Gore, Rowena Taylor, and Samuel Allen were granted a patent on March 18, 1980 for a "waterproof laminate."
1977 Human-powered aircraft
A human-powered aircraft (HPA) is an aircraft powered by direct human energy and the force of gravity; the thrust provided by the human may be the only source. A hang glider that is partially powered by its pilot also counts as a human-powered aircraft, since the flight path can be extended beyond what the unassisted glider could achieve. Invented by designer Paul MacCready and constructed of mylar, polystyrene, and carbon-fiber rods, the Gossamer Condor was the world's first practical and successful human-powered aircraft, staying in the air for 7.5 uninterrupted minutes. By 1979, cyclist Bryan Allen piloted MacCready's successor design, the Gossamer Albatross, across the 22-mile English Channel, winning British industrialist Henry Kremer's prize of $214,000.
1977 Chemical oxygen iodine laser
A chemical oxygen iodine laser is an infrared chemical laser. The chemical oxygen iodine laser was invented by the United States Air Force's Phillips Laboratory in 1977 for military purposes. Its properties make it useful for industrial processing as well; the beam is focusable and can be transferred by an optical fiber, as its wavelength is not absorbed much by fused silica but is very well absorbed by metals, making it suitable for laser cutting and drilling. COIL is the main weapon laser for the military's Airborne Laser and Advanced Tactical Laser programs.
1978 Slide Away Bed
A Slide Away Bed is a type of sofa bed that slides toward the wall to form a sofa. The mattress is hinged to form a seating surface and back support. The bed frame support is a telescoping frame that allows the bed platform to recess below the seating cushion. An early version of the slide away bed was co-invented by Manning Lane, Warren J. Hauck and Roy O. Sweeney of Cincinnati, Ohio. U.S. patent #4,204,287 was filed on September 5, 1978, and issued on May 27, 1980.
1978 Popcorn bag
A popcorn bag is a specially designed, microwaveable bag that contains popcorn, along with oil, spices and seasoning. The bag is typically partially folded when it is placed in a microwave oven, and inflates as a result of steam pressure from the heated kernels. The earliest patent for the popcorn bag, U.S. patent #4,267,420 was filed on October 12, 1978 by William A. Brastad of Minneapolis and issued on May 12, 1981.
1978 Bulletin board system
A Bulletin Board System, or BBS, is a computer system running software that allows users to connect and log into the system using a terminal program. Once logged in, a user can perform functions such as uploading and downloading software and data, reading news and bulletins, and exchanging messages with other users, either through electronic mail or in public message boards. Many BBSes also offer on-line games, in which users can compete with each other, and BBSes with multiple phone lines often provide chat rooms, allowing users to interact with each other. CBBS, the first Bulletin Board System, was invented by Ward Christensen and Randy Suess in Chicago, becoming fully operational on February 16, 1978.
1979 Winglets
Wingtip devices or winglets are usually intended to improve the efficiency of fixed-wing aircraft. The concept of winglets originated in the late 19th century, but the idea remained on the drawing board. During the 1970s, when the price of aviation fuel began spiraling upward, NASA aeronautical engineer Richard Whitcomb investigated the feasibility of winglets for improving overall aerodynamics and reducing drag on aircraft. Whitcomb's tests culminated in the first successful test flight of his winglets, attached to a KC-135 Stratotanker, on July 24, 1979.
1979 Polar fleece
Polar fleece, or "fleece", is a soft, napped, insulating synthetic fabric made from polyethylene terephthalate or other synthetic fibers. Found in jackets, hoodies, and casual wear, fleece has some of wool's finest qualities but weighs a fraction of the lightest available woolens. The first form of polar fleece, a new, light, and strong pile fabric meant to mimic and in some ways surpass wool, was invented in 1979 by Malden Mills, now Polartec LLC.
1980s and the early 1990s (1980–1991)
1981 Stealth aircraft
The Lockheed F-117 Nighthawk was the world's first operational aircraft to be designed around stealth technology. Its maiden flight took place in 1981, and the aircraft achieved initial operating capability status in 1983.
1981 Control-Alt-Delete
Control-Alt-Delete, often abbreviated as Ctrl-Alt-Del, is a computer keyboard command on PC compatible systems that can be used to reboot a computer or summon the task manager. It is invoked by pressing the Delete key while holding the Control and Alt keys: Ctrl+Alt+Delete. Depending on the system, it forces a soft reboot, brings up the task manager (on Windows and BeOS), or jumps to the ROM monitor. Control-Alt-Delete was invented in 1981 by David Bradley while working at IBM.
1981 Total internal reflection fluorescence microscope
A total internal reflection fluorescence microscope is a type of microscope with which a thin region of a specimen, usually less than 200 nm, can be observed. It can also be used to observe the fluorescence of a single molecule, making it an important tool of biophysics and quantitative biology. Daniel Axelrod invented the first total internal reflection fluorescence microscope in 1981.
1981 Space shuttle
The Space Shuttle, part of the Space Transportation System (STS), is a spacecraft operated by NASA for orbital human spaceflight missions. It carries payloads to low Earth orbit, provides crew rotation for the International Space Station (ISS), and performs servicing missions. The orbiter can also recover satellites and other payloads from orbit and return them to Earth. In 1981, NASA successfully launched its reusable spacecraft called the Space Shuttle. George Mueller, an American from St. Louis, Missouri, is widely credited with jump-starting, designing, and overseeing the Space Shuttle program after the demise of the Apollo program in 1972.
1981 Paintball
Paintball is a game in which players eliminate opponents by hitting them with paint-filled pellets, usually shot from a paintball gun powered by carbon dioxide or compressed gas (HPA or N2O). The idea of the game was first conceived and co-invented in 1976 by Hayes Noel, Bob Gurnsey, and Charles Gaines. However, the first game of paintball was not played until June 27, 1981.
1981 Graphical User Interface
Short for Graphical User Interface, the GUI uses windows, icons, and menus to carry out commands such as opening, deleting, and moving files. Although many GUI operating systems are operated using a mouse, the keyboard can also be used, via keyboard shortcuts or arrow keys. The GUI was co-invented at Xerox PARC by Alan Kay and Douglas Engelbart in 1981.
1983 Internet
Not to be confused with the World Wide Web, a separate application invented much later, in the early 1990s (see the article on the English inventor Tim Berners-Lee), the Internet is the global system of interconnected computer networks that use the standardized Internet Protocol Suite (TCP/IP) to serve billions of users worldwide. It is a network of networks that consists of millions of private and public, academic, business, and government networks of local to global scope, linked by copper wires, fiber-optic cables, wireless connections, and other technologies. The concept of packet switching was first explored by Paul Baran in the early 1960s, and the mathematical formulations behind packet switching were later devised by Leonard Kleinrock. On October 29, 1969, the first link of the world's first electronic computer network, the ARPANET, was established between nodes at Leonard Kleinrock's lab at UCLA and Douglas Engelbart's lab at the Stanford Research Institute (now SRI International). Another milestone occurred in 1973, when Bob Kahn and Vinton Cerf co-invented the Transmission Control Protocol and the Internet Protocol while working on ARPANET at the United States Department of Defense. The first TCP/IP-based wide area network became operational on January 1, 1983, when the United States' National Science Foundation (NSF) constructed the university network backbone that would later become the NSFNET. This date is held as the "birth" of the Internet.
1983 Blind signature
In cryptography, a blind signature, as invented by David Chaum in 1983, is a form of digital signature in which the content of a message is disguised before it is signed. The resulting blind signature can be publicly verified against the original, unblinded message in the manner of a regular digital signature. Blind signatures are typically employed in privacy-related protocols where the signer and message author are different parties. Examples include cryptographic election systems and digital cash schemes.
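The blinding step can be illustrated with Chaum's RSA-based construction. The sketch below uses textbook-sized parameters with no padding; the key and message values are purely illustrative, not from Chaum's paper:

```python
# Toy RSA blind signature (after Chaum, 1983). Real deployments need
# large keys and proper message encoding; this only shows the algebra.
p, q = 61, 53
n = p * q              # modulus, 3233
e, d = 17, 2753        # public / private exponents, e*d = 1 mod lcm(p-1, q-1)

m = 1234               # message (in practice, a hash of the message)
r = 71                 # blinding factor chosen by the author, coprime to n

blinded = (m * pow(r, e, n)) % n        # author blinds the message
s_blinded = pow(blinded, d, n)          # signer signs without seeing m
s = (s_blinded * pow(r, -1, n)) % n     # author strips the blinding factor

assert pow(s, e, n) == m                # anyone can verify s against m
```

Because the signer only ever sees the blinded value, the finished signature cannot later be linked to the signing request, which is the property election and digital-cash schemes rely on.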
1983 Laser turntable
A laser turntable is a phonograph that plays gramophone records using a laser beam as the pickup instead of a conventional diamond-tipped stylus. This playback system has the unique advantage of avoiding physical contact with the record during playback; instead, a focused beam of light traces the signal undulations in the vinyl, with no friction, contact mass, or record wear. The laser turntable was first conceived by Robert S. Reis while working as a consultant on analog signal processing for the United States Air Force and the United States Department of Defense.
1984 LCD projector
An LCD projector is a type of video projector for displaying video, images or computer data on a screen or other flat surface. It is a modern equivalent of the slide projector or overhead projector. To display images, LCD (liquid-crystal display) projectors typically send light from a metal-halide lamp through a prism or series of dichroic filters that separates light to three polysilicon panels – one each for the red, green and blue components of the video signal. The LCD projector was invented in 1984 by Gene Dolgoff.
1984 Pointing stick
The pointing stick is an isometric joystick operated by applied force and is used as a pointing device on laptop computers. It takes the form of a rubber cap located on top of the keyboard embedded between the 'G', 'H' and 'B' keys. The pointing stick was invented by American computer scientist Ted Selker in 1984.
1984 Polymerase chain reaction
The polymerase chain reaction (PCR) is a technique widely used in molecular biology. It derives its name from one of its key components, a DNA polymerase used to amplify a piece of DNA by in vitro enzymatic DNA replication. As PCR progresses, the DNA generated is used as a template for replication. The polymerase chain reaction was invented in 1984 by Kary Mullis.
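Because each cycle's products serve as templates for the next, amplification is exponential: n ideal cycles turn one template molecule into 2^n copies. A small sketch of this arithmetic (the efficiency parameter is an illustrative simplification of real-world, sub-perfect doubling):

```python
def pcr_copies(initial: int, cycles: int, efficiency: float = 1.0) -> float:
    """Ideal PCR yield: each cycle multiplies the template count by
    (1 + efficiency); efficiency = 1.0 means perfect doubling."""
    return initial * (1 + efficiency) ** cycles

print(pcr_copies(1, 30))  # → 1073741824.0 (one molecule -> ~10^9 in 30 cycles)
```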
1986 Atomic force microscope
An atomic force microscope is a type of microscope that is used for imaging, measuring, and manipulating matter at the nanoscale. The information is gathered by "feeling" the surface with a mechanical probe. Piezoelectric elements that facilitate tiny but accurate and precise movements on (electronic) command enable the very precise scanning. The atomic force microscope was co-invented in 1986 by Christoph Gerber, Gerd Binnig, and Calvin Quate. On April 20, 1987, Gerber, Binnig, and Quate filed U.S. patent #4,762,996 for the device, which was later issued to them on August 9, 1988.
1986 Stereolithography
Stereolithography is a common rapid manufacturing and rapid prototyping technology for producing parts with high accuracy and good surface finish by utilizing a vat of liquid UV-curable photopolymer "resin" and a UV laser to build parts a layer at a time. Stereolithography was invented by Chuck Hull in 1986.
1987 Digital Micromirror Device
The Digital Micromirror Device (DMD) is a silicon chip of up to 2 million hinged microscopic aluminum mirrors, all under digital control, that tilt thousands of times per second in order to create an image by directing digital pulses through a projection lens and onto a television or movie theatre screen. The Digital Micromirror Device was invented by Larry Hornbeck of Texas Instruments, who holds several patents relating to DMD technology.
1987 Perl
Perl is a high-level, general-purpose, interpreted, dynamic programming language. It was originally invented in 1987 by Larry Wall, a linguist working as a systems administrator for NASA, as a general-purpose Unix scripting language to make report processing easier. Perl is also used for text processing, system administration, web application development, bioinformatics, network programming, applications that require database access, and graphics programming.
1988 Luggage (tilt-and-roll)
Tilt-and-roll luggage, or wheeled luggage, is a variant of luggage for travelers which typically contains two fixed wheels on one end and a telescoping handle on the opposite end for vertical movement. Tilt-and-roll luggage is pulled, and thus frees a traveler from directly carrying his or her luggage. In 1988, Northwest Airlines pilot Robert Plath invented tilt-and-roll luggage; before then, travelers had to carry suitcases in their hands, toss garment bags over their shoulders, or strap luggage on top of metal carts.
1988 Fused deposition modeling
Fused deposition modeling, which is often referred to by its initials FDM, is a type of additive fabrication or technology commonly used within engineering design. FDM works on an "additive" principle by laying down material in layers. Fused deposition modeling was invented by S. Scott Crump in 1988.
1988 Tcl
Tcl, known as "Tool Command Language", is a scripting language most commonly used for rapid prototyping, scripted applications, GUIs and testing. Tcl is used extensively on embedded systems platforms, both in its full form and in several small-footprint versions. Tcl is also used for CGI scripting. Tcl was invented in the spring of 1988 by John Ousterhout while working at the University of California, Berkeley.
1988 Ballistic electron emission microscopy
Ballistic electron emission microscopy or BEEM is a technique for studying ballistic electron transport through a variety of materials and material interfaces. BEEM is a three-terminal scanning tunneling microscopy (STM) technique that was co-invented in 1988 at the Jet Propulsion Laboratory in Pasadena, California, by L. Douglas Bell and William Kaiser.
1988 Electron beam ion trap
An electron beam ion trap is an electromagnetic bottle, used in physics, that produces and confines highly charged ions. The electron beam ion trap was co-invented by M. Levine and R. Marrs in 1988.
1988 Nicotine patch
A nicotine patch is a transdermal patch that releases nicotine into the body through the skin. It is usually used as a method to quit smoking. The nicotine patch was invented in 1988 by Murray Jarvik, Jed Rose and Daniel Rose.
1988 Firewall
A firewall is an integrated collection of security measures designed to prevent unauthorized electronic access to a networked computer system. At AT&T Bell Labs, William Cheswick and Steven M. Bellovin continued their research in packet filtering and developed a working model for their own company, based upon the original first-generation packet-filter architecture.
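A first-generation (stateless) packet filter of this kind can be sketched as an ordered rule list with a default-deny fallback; the rules and fields below are illustrative, not taken from the Bell Labs design:

```python
# Each packet is checked against an ordered rule list; the first
# matching rule decides its fate, and anything unmatched is denied.
RULES = [
    {"proto": "tcp", "dst_port": 25, "action": "allow"},  # permit inbound mail
    {"proto": "tcp", "dst_port": 23, "action": "deny"},   # block telnet
]

def filter_packet(proto: str, dst_port: int) -> str:
    for rule in RULES:
        if rule["proto"] == proto and rule["dst_port"] == dst_port:
            return rule["action"]
    return "deny"  # default deny: anything not explicitly allowed is dropped

assert filter_packet("tcp", 25) == "allow"
assert filter_packet("tcp", 23) == "deny"
assert filter_packet("udp", 53) == "deny"  # unmatched -> default deny
```

Stateless filters like this inspect each packet in isolation; later "stateful" firewalls added connection tracking on top of the same rule-matching idea.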
1988 Resin identification code
The SPI resin identification coding system is a set of symbols placed on plastics to identify the polymer type. The resin identification code was developed by the Society of the Plastics Industry (SPI) in 1988.
1989 ZIP file format
The ZIP file format is a data compression and file archive format. A ZIP file contains one or more files that have been compressed to reduce file size, or stored as-is. The ZIP file format was originally invented in 1989 by Phil Katz for PKZIP, and evolved from the earlier ARC compression format by Thom Henderson.
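The format is now supported directly by many standard libraries. For example, Python's zipfile module can create and inspect ZIP archives, writing members either deflate-compressed or stored as-is (the file and member names here are only examples):

```python
import zipfile

# Create an archive with one compressed member and one stored member.
with zipfile.ZipFile("demo.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("hello.txt", "hello, zip" * 100)  # deflate-compressed
    zf.writestr("raw.bin", b"tiny", compress_type=zipfile.ZIP_STORED)  # as-is

# List each member's uncompressed and compressed sizes.
with zipfile.ZipFile("demo.zip") as zf:
    for info in zf.infolist():
        print(info.filename, info.file_size, info.compress_size)
```

Highly repetitive data like the text member above shrinks dramatically under deflate, while the stored member keeps its original size.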
1989 Selective laser sintering
Selective laser sintering is an additive rapid manufacturing technique that uses a high power laser to fuse small particles of plastic, metal, ceramic, or glass powders into a mass representing a desired 3-dimensional object. The laser selectively fuses powdered material by scanning cross-sections generated from a 3-D digital description of the part on the surface of a powder bed. Selective laser sintering was invented and patented by Dr. Carl Deckard at the University of Texas at Austin in 1989.
1990 Sulfur lamp
The sulfur lamp is a highly efficient full-spectrum electrodeless lighting system whose light is generated by sulfur plasma that has been excited by microwave radiation. The sulfur lamp consists of a golf ball-sized (30 mm) fused-quartz bulb containing several milligrams of sulfur powder and argon gas at the end of a thin glass spindle. The bulb is enclosed in a microwave-resonant wire-mesh cage. The technology was conceived by engineer Michael Ury, physicist Charles Wood and their colleagues in 1990. With support from the United States Department of Energy, it was further developed in 1994 by Fusion Lighting of Rockville, Maryland, a spinoff of the Fusion UV division of Fusion Systems Corporation.
1991 Ant robotics
Ant robotics is a special case of swarm robotics. Swarm robots are simple and cheap robots with limited sensing and computational capabilities. This makes it feasible to deploy teams of swarm robots and take advantage of the resulting fault tolerance and parallelism. In 1991, American electrical engineer James McLurkin was the first to conceptualize the idea of "robot ants" while working at the MIT Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. The robots consisted of sensors, infrared emitters, and communication systems capable of detecting objects in their path. McLurkin developed his invention by studying the behavior of real ants in ant colonies and keeping ant farms as a basis for his programming. Through this examination, he could better understand how insects structure their workloads, which allowed him to produce a viable, working prototype of robotic ants.
See also
Timelines of United States inventions
Timeline of United States inventions (before 1890)
Timeline of United States inventions (1890–1945)
Timeline of United States inventions (after 1991)
Related topics
History of United States patent law
Lemelson Foundation
Lemelson–MIT Prize
List of African American inventors and scientists
List of Puerto Ricans
List of inventors
List of inventors killed by their own inventions
List of prolific inventors
List of Puerto Ricans in the United States Space Program
Military invention
NASA spinoff
National Inventors Hall of Fame
Native American contributions
Science and technology in the United States
Technological and industrial history of the United States
Timeline of United States discoveries
United States Patent and Trademark Office
United States patent law
Yankee ingenuity
Footnotes
Further reading
Deitch, Joanne Weisman, "A Nation of Inventors", Carlisle, Massachusetts: Discovery Enterprises Limited, 2001
Haven, Kendall, "100 Greatest Science Inventions of All Time", Westport, Connecticut: Libraries Unlimited, 2006
Hopping-Egan, Lorraine, "Inventors and Inventions", New York City, New York: Scholastic, Incorporated, 1997
Ngeow, Evelyn, "Inventors and Inventions", New York City, New York: Marshall Cavendish Corporation, 2008
Philbin, Tom, "The 100 Greatest Inventions of All Time", New York City, New York: Kensington Publishing Corporation, 2003
External links
American Inventors
Google: U.S. Patents Search
PBS: They Made America
MIT: Invention Dimension
NASA: Scientific and Technical Information: NASA Spinoff
National Inventors Hall of Fame Foundation
The Great Idea Finder
United States Patent and Trademark Office
List of United States inventions and discoveries
United States
United States inventions
Inventions 1946 1991 |
13048207 | https://en.wikipedia.org/wiki/Tizen%20Association | Tizen Association | The Tizen Association, formerly the LiMo Foundation (short for Linux Mobile), is a non-profit consortium which develops and maintains the Tizen mobile operating system. Tizen is a Linux-based operating system for smartphones and other mobile devices. The founding members were Motorola, NEC, NTT DoCoMo, Panasonic Mobile Communications, Samsung Electronics, and Vodafone. The consortium's work resulted in the LiMo Platform—which was integrated into mobile phone products from NEC, Panasonic and Samsung—and later became the Tizen platform.
Members
Members of the Tizen Association are:
Phones
Phones using LiMo include:
LiMo & Tizen
At the end of September 2011, the Linux Foundation announced that MeeGo would be entirely replaced by the Tizen mobile operating system project during 2012. Tizen would be a new free and open-source Linux-based operating system, which itself would not be released until the first quarter of 2012. Intel and Samsung, in collaboration with the LiMo Foundation and assisting MeeGo developers, were designated to lead the development of the new software platform, using third-party developer frameworks built primarily on HTML5 and other web standards. As of October 2012, LiMo website traffic is redirected to tizen.org.
See also
Linux Phone Standards Forum
Android (operating system) from Google
MeeGo Operating System from Nokia and Intel (former Maemo and Moblin)
Openmoko
Symbian Foundation
Open Handset Alliance
References
https://archive.today/20130127214730/http://www.linuxdevices.com/news/NS8711151732.html
External links
Coming Battle Over Open Source Phones
Linux organizations
Mobile Linux
Organizations established in 2007 |
40974572 | https://en.wikipedia.org/wiki/Apache%20Allura | Apache Allura | Apache Allura is an open-source forge software for managing source code repositories, bug reports, discussions, wiki pages, blogs and more for any number of individual projects. Allura graduated from incubation with the Apache Software Foundation in March 2013.
Features
Allura can manage any number of projects, including groups of projects known as Neighborhoods, as well as sub-projects under individual projects. Allura also has a modular design to support tools attached to neighborhoods or individual projects. Allura comes packaged with many tools, and additional external and third-party tools can be installed. There are tools to manage version control for source code repositories, ticket tracking, discussions, wiki pages, blogs and more.
Allura can also export project data, as well as import data from a variety of sources, such as Trac, Google Code, GitHub, and, of course, Allura itself.
Features common to most tools
Most tools support Markdown formatting, threaded comments with integrated and configurable spam prevention, group or individual artifact level subscriptions via email or RSS, and powerful searching using Solr. Additionally, the Markdown syntax supports cross-linking, such that a commit can refer to a specific ticket, a comment on a discussion thread can easily link to a commit, or a wiki page can even link directly to a specific comment in a discussion thread. Allura also has a powerful permissions system that gives fine-grained control over who has access to do what.
Version control
Allura comes packaged with tools for managing Git and SVN repositories. There is also a tool for managing Mercurial repositories, which is packaged separately for license reasons.
Version control integration includes:
Browser-based file and commit browsing
Color-coded unified or side-by-side diff viewing
Syntax highlighting
Forking and merge / pull requests
Commit history graph view
Ticket / bug tracking
Multiple trackers per project
File attachments
Milestones, labels, and custom fields
Saved searches for frequent use
Bulk editing of tickets
Threaded discussion forums
Moderation
Reply-by-email
Spam prevention
Wiki
Attachments
Syntax highlighting for code snippets
Browsing pages by name or tags
Custom macros for things like project listings, blog post listings, and adding a Gittip button
Blogs
Pre-publish drafts
External feed integration
Optional discussion comments with spam prevention
History
Allura began in October 2009 as an open-source reimplementation in Python of the developer tools for SourceForge (previously written in PHP), and was first announced in March 2011. Allura became the default platform for new projects on SourceForge in July 2011.
In June 2012, Allura was submitted to the Apache Software Foundation (ASF) and began incubation to become an Apache project. Allura was moved to the ASF to encourage community engagement and to ensure an open and community oriented development process. Allura graduated to a top-level Apache project in March 2013.
Notable installations
Apache Allura
SourceForge.net
Open Source Projects Europe
DLR German Aerospace Center
DARPA's VehicleForge
See also
Comparison of project management software
Bloodhound
Kallithea
Trac
References
External links
Allura on Apache
Bug and issue tracking software
Free software
Free software programmed in Python
Free project management software
Free wiki software
Project management software
Version control |
11161932 | https://en.wikipedia.org/wiki/Saints%20Row%202 | Saints Row 2 | Saints Row 2 is a 2008 action-adventure game developed by Volition and published by THQ. It is the sequel to 2006's Saints Row and the second installment in the Saints Row series. The game was released in October 2008 for the PlayStation 3 and Xbox 360, January 2009 for Microsoft Windows, and April 2016 for Linux. A mobile tie-in game was developed by G5 Entertainment and also released in October 2008. Saints Row 2 directly follows from the events of the first game, as the player's custom character awakens from a coma after five years to find that their the gang, the 3rd Street Saints, has been disbanded, and their former territories taken over by newly-formed criminal syndicates and a corrupt corporation. With the help of new and old allies, the player attempts to rebuild the Saints and take back Stilwater from their rivals.
Story missions are unlocked by trading in "Respect" points, currency earned by completing minigames and side-missions. Outside of the main story, players can freely roam Stilwater, which has been expanded with new locations and consists of two main islands. The game is played from a third-person perspective and its world is navigated on-foot or by vehicle. Players can fight enemies using a variety of firearms, and call in non-playable gang members to assist them. An online multiplayer mode allows two players to explore Stilwater together while completing missions and activities, or multiple players to engage in a variety of cooperative and competitive game modes.
Saints Row 2's developers opted for a more comedic tone to set the game apart from the Grand Theft Auto series, with which the original game was compared by most reviewers for their similar premise and gameplay elements. The game's promotional effort included various public showings, special editions and downloadable content including the Ultor Exposed and Corporate Warfare mission packages. Reviews were largely favorable, praising the action and straightforward gameplay, while only criticizing technical issues. The Windows port in particular was heavily criticized for technical issues not present in any of the other versions. The game had sold around 400,000 units by November 2008, and 3.4 million units by September 2010. A sequel, Saints Row: The Third, was released in November 2011.
Gameplay
Saints Row 2 is an action-adventure video game set in an open world, giving the player a large environment in which to move freely. The player's character can maneuver through the environment, use weaponry and engage in melee combat. After successfully completing the first game mission, the player meets the Third Street Saints and begins scheming with the gang. Missions are unlocked by earning 'Respect' points from minigames and side-missions, and although completing missions is necessary for game progression, players can complete them at their own leisure. The player is granted the option of instantly retrying a mission should they fail an attempt. Numerous checkpoints save progress in each mission, and missions can be replayed from locations within the environment. Aside from attempting missions, the player can explore the environment, purchase items at shops and participate in the aforementioned minigames and side-missions. The player can also wreak havoc upon the city of Stilwater, which can provoke potentially fatal attention from law enforcement. The player can recruit members from a friendly gang and use a mobile phone to contact friends and businesses, as well as to input cheat codes. Entering cheats disables Xbox 360 achievements.
Players drive vehicles that are stolen, bought or unlocked. Aside from automobiles, players can use boats and water craft, helicopters, fixed-wing aircraft, and motorcycles. A cruise control system can be activated while using land or sea vehicles. Waypoints can be placed on the pause-screen map, leaving a GPS route between the player character's location and the set destination. Players can hail taxicab services and pay a fee to quickly navigate the city. By taking land vehicles to Mechanics, players can apply paint schemes, body mods, hydraulics and nitrous oxide.
Players create their own character through a system that allows them to modify gender, ethnicity, fitness, voice, face and hairstyle. Walk and fighting styles, and personality traits can be assigned. Players purchase clothes, tattoos and piercings, and set outfits can be bought or created and saved to the player character's wardrobe. "Cribs" (safe houses) allow players to withdraw earnings, change outfits, replay missions and save the game. Cribs can be customized by applying themes and purchasing objects like TVs and pool tables. Boats and fixed-wing aircraft can be stored at purchased docks and hangars. Players select the outfits, vehicles, gestures and graffiti styles used by street members of the Third Street Saints.
The combat systems from Saints Row have been updated but many of the basics remain unchanged. While engaging in melee-based combat, the player character will perform combos and charge up attacks, and can execute a finishing move if three consecutive hits are dealt. With a gun equipped, the player can perform a groin attack, and can zoom in for a finer aim reticle. The player can also use human shields, and can use makeshift weapons pulled from the environment, such as fire hydrants and cement blocks. Should the player either commit illegal activities or incite rival gang members, they will provoke potentially fatal attention from law enforcement or rival gangs. The notoriety bar is a visual representation of how aggressively these opposing forces respond, and continually provoking them will bring about a more powerful response, such as SWAT teams from the police. The player will continue to be chased by these groups until captured or killed, and must reduce the notoriety bar either by hiding from the police or gang and waiting for it to "cool off", or by seeking out a drive-through confessional, which clears the notoriety bar for a small fee. Should the player character be apprehended or killed, a small percentage of the player's earnings will be removed and the player will respawn at a law or healthcare institution. The game contains over forty different weapons, many of which have been carried over from Saints Row. New weapon types include satchel charges, laser-guided rocket launchers and chainsaws. The player can dual wield handguns and submachine guns. Weaponry can be purchased from specific stores or unlocked through in-game progress.
Open world
Players navigate the open world, fictional city of Stilwater. The city consists of forty-five neighborhoods divided between twenty districts. It is expanded from Saints Row's version of Stilwater, roughly one-and-a-half times as big and featuring new districts such as the prison, nuclear power plant, and an expanded airport, among others. The developers stated that the city has been substantially redeveloped and that each individual neighborhood has been touched up in one way or another. According to the storyline, the in-game corporation Ultor spent more than 300 million dollars redeveloping the city, funding the police force and, as it states, turning the "once crime-ridden third-tier city" into an "urban utopia". The Saint's Row district is one of the more notable changes within the city, having been completely redesigned; it now serves as the location of Ultor's headquarters, a towering skyscraper referred to as the Phillips Building. Many old districts from the earlier version of Stilwater have also seen changes. Examples include the Suburbs district, which is double the size of its depiction in Saints Row, and the Museum district, which features the Eramenos Ancient Greek museum exhibit, complete with models of the Acropolis of Athens and the Theatre of Dionysus. There are also several completely new districts, such as the Marina and the University.
From the beginning of the game, the map of Stilwater is fully visible, though shops and activities are displayed only as question marks until the player discovers them. By completing missions and wiping out enemy strongholds, the player gains control of the various neighborhoods the city is split into. There are over 130 interiors within the city, some of which trigger hidden events, and over ninety shops that can be purchased once the player controls each shop's associated territory. The player gets a 10% discount at owned stores, and buying an entire chain of stores puts the protagonist's face on in-game billboard advertisements. The game shares technology with Red Faction: Guerrilla, another Volition-developed game, and so certain elements of the environment are fully destructible. A number of Easter eggs are placed within the sandbox, including the pop-out Easter Bunny, which won "Top Easter Egg of 2008".
Respect system
The Respect system is a scoring system where the player earns Respect points to unlock missions and progress through the storyline. The player can partake in storyline and stronghold missions only after filling up at least one bar of Respect, and Respect points are removed when the player starts a mission. The 'style rank' is a modifier of how much Respect the player can earn; this is increased by purchasing items for the player character. Respect points can be earned in two ways; by progressing through side-missions called Activities and by completing mini-games and stunts called Diversions.
There is a broad range of Activities and Diversions available for the player to progress through. Many of the game's Activities first appeared in Saints Row, and a variety of new Activities have also been introduced, including an underground fight club and FUZZ, a parody of the Cops television show. Each Activity can be initiated from various locations and plays out over six levels of increasing difficulty. The Activities have been designed to suit solo and co-operative play. Most Diversions do not have specific start points. There are numerous Diversions playable in the game, such as acts of indecent exposure, combat and driving stunts, car surfing and a survival horror minigame called Zombie Uprising.
Multiplayer
Saints Row 2 has various multiplayer components implemented throughout the game. Through an online network or through System Linking, the player can progress through the game with a cooperative partner. While in co-op mode, both players can explore the city and progress through the game's storyline missions and minigames. Both players are rewarded and credited for completion of such activities, and can furthermore play against each other in competitive metagames. The co-op mode is "drop-in/drop-out", and there is no limit on how far apart the players can be.
The game contains competitive multiplayer modes, supporting between four and twelve players in a match. There are two standard deathmatch modes; the free-for-all "Gangsta Brawl" mode and its team-based variant, "Team Gangsta Brawl". Another mode, known as "Strong Arm", puts two teams against each other fighting for control over the neighborhood. In "Strong Arm", the first team to earn $100,000,000 wins; money is earned by competing head-to-head in activities, controlling "tag spots" which serve as bonus modifiers, or by eliminating members of the opposing team. While in a party, players are free to roam around a lobby. The player can rank up in multiplayer and displays this by earning various "badges" which are displayed next to the player's name. By ranking up, the player can unlock more expensive clothing for their multiplayer character.
The multiplayer mode on PC and PlayStation 3 was discontinued in May 2014 when GameSpy's servers were shut down. However, the PC version's multiplayer can still be played via LAN tunneling software such as Evolve or Tunngle and the PS3 version's multiplayer can still be played using XLink Kai. The Xbox 360 versions of the game were unaffected as they use Xbox Live for matchmaking.
Plot
Five years after the explosion on Richard Hughes' yacht, the player character (Charles Shaughnessy, Kenn Michael, Alex Mendoza, Katie Semine, G.K. Bowes, or Rebecca Sanabria), who was the sole survivor, awakens from a coma within the infirmary of Stilwater's maximum security prison after undergoing extensive plastic surgery. They escape to Stilwater with the help of Carlos Mendoza (Joe Camareno), the brother of a former 3rd Street Saints member, who explains that the gang has since disbanded. Most members were arrested by Troy Bradshaw (Michael Rapaport), an undercover cop who had infiltrated the gang as a lieutenant, and who has since become Chief of Police, using his influence to protect the imprisoned Saints. In the gang's absence, their former base of operations, the Saint's Row district, has been redeveloped into a pristine commercial and residential area by the Ultor Corporation, who have further plans for Stilwater.
The player works to rebuild the Saints by rescuing their former lieutenant Johnny Gat (Daniel Dae Kim) from his trial; recruiting new members, including Carlos and Gat's acquaintances Pierce Washington (Arif S. Kinchen) and Shaundi (Eliza Dushku), whom they quickly promote to lieutenants; and setting up a new headquarters in a hotel destroyed by an earthquake. Eventually, the player steps up as the Saints' new leader, earning the moniker of "The Boss", and declares war on the new gangs that have taken over the city during their absence, assigning their lieutenants to discover more about each:
• Gat and Pierce focus on the Ronin, a Japanese bosozoku gang who conduct gambling and porn operations, led by Shogo Akuji (Yuri Lowenthal) and his second-in-command, Jyunichi (Brian Tee) – with Shogo's father, Kazuo, leading international operations.
• Carlos researches the Brotherhood, an outlaw gang who conduct gun-running operations, led by Maero (Michael Dorn), his girlfriend Jessica (Jaime Pressly), and tattoo artist Matt (Anthony Pulcini).
• Shaundi investigates the Sons of Samedi, a Haitian voodoo gang who run drugs operations, led by "The General" (Greg Eagles) and his right-hand, Mr. Sunshine (Phil LaMarr). One of Shaundi's exes, DJ Veteran Child (Neil Patrick Harris), is a high-ranking lieutenant.
Each gang grows increasingly hostile toward the Boss for interfering in their businesses, and seeks revenge. The Ronin attempt to ambush the Boss and Gat at the home of the latter's girlfriend, Aisha (Sy Smith), killing her when she tries to warn them, and wounding Gat. After saving Gat from an attempt on his life while hospitalized, the Boss works with him to eliminate the gang's leaders as revenge. Meanwhile, the Brotherhood capture and torture Carlos to death; in response, the Boss tricks Maero into murdering Jessica, and steals the Brotherhood's latest weapon shipment, which the Saints use to attack their headquarters. Although Maero manages to escape the attack, the Boss ultimately kills him in a demolition derby. Concurrently, the Sons of Samedi take revenge on the Saints for stealing their customers by kidnapping Shaundi and attacking their hideout. The Boss rescues Shaundi, murdering Veteran Child in the process, and thwarts the attack, before tracking down and killing Mr. Sunshine and the General to finish off the gang.
With Stilwater back under their control, the Saints find themselves targeted by Ultor's power-hungry CEO, Dane Vogel (Jay Mohr), who seeks to redevelop various parts of the city, and who ordered a hit on the Saints as revenge for eliminating the other gangs, which he had been using for his own ends. After fending off several attacks by Ultor's private security forces, the Saints retaliate by destroying one of the company's labs and killing its board of directors. Taking advantage of this to assume full control of Ultor, Vogel decides to personally deal with the Saints, but before he can do so, he finds himself targeted by them at a press conference. Escaping back to the Ultor Building, Vogel is pursued by the Boss, who fights their way to Vogel's office and kills him. Afterward, the Saints return to ruling over Stilwater undisputed.
At any point during the game, the Boss can listen to wiretap conversations between Troy and former Saints leader Julius Little (Keith David) at the police station, which reveal that the former asked Julius to disband the gang in exchange for not arresting certain members. Knowing that the player would disagree, Julius attempted to kill them on Hughes' yacht, then retired. The Boss calls former Saints lieutenant Dex Jackson, who left the gang to work for Ultor, to discuss their findings, and agrees to meet him in person. When they arrive, however, the Boss is met by Julius instead, and realizes Dex lured them both into a trap. After surviving an attack by Ultor's security forces, the Boss and Julius argue over what the Saints have become, before the former executes the latter in revenge.
Development
Volition began work on Saints Row 2 in 2005, about a year before Saints Row was released. The sequel was first announced by THQ's CEO Brian Farrell in a February 2007 conference call, alongside another six franchise continuations for the 2008 fiscal year. Game details began to surface in May 2008 after the first teaser trailer was released and sites like IGN and GameSpot reported on an early version of the game.
One of the development team's core goals was to develop an identity for the Saints Row franchise within the open world genre. The series was known as a "Grand Theft Auto clone" based on its first release's similarities to the open world sandbox game Grand Theft Auto III. Accordingly, Saints Row 2 was compared to the Grand Theft Auto series' own upcoming sequel, Grand Theft Auto IV. When questioned about the two sequels' close release dates, lead producer Greg Donovan responded that they thought their game could compete, that he saw the Grand Theft Auto sequel moving "in a more realistic direction", and that there was "room for more than just one game" in the open world genre as "a very different experience than what ... other games are looking to do". The team took an "over-the-top" design approach, with cartoonish pastiche and "wild and outrageous" gameplay. Many early game elements were considered too crass to be included in the final release.
Many of the original Saints Row developers continued on to the sequel's team. Thus, the team worked from their lessons learned rather than starting anew. The two games were consequently similar in design. They overhauled the game engine to enhance the sequel's graphics, and added contrast and higher-quality textures to make the city setting of Stilwater more realistic. Visual enhancements were also applied to people, cars, explosions, lighting, shadows, and the weather system. A central design goal was to "create a world that exists independent of the player", featuring more realistic non-player characters (NPCs) that would smoke cigarettes, use cellphones, drink coffee, open umbrellas when it rained, and physically interact. The original game's engine could not support proximity NPC interaction like sitting together or cuddling. As the city design was finalized late in development, the team laboriously hand-placed 20,000 nodes throughout the game world that trigger NPC actions.
The Saints Row series narrative was conceived as a three-part story, with Saints Row 2 as the second of the three. While the developers continued the story of the original game, they sought to accommodate newcomers to the series. The final script had roughly 80,000 lines of dialogue, twice that of Saints Row's. The story drew strong cinematic influence from the Quentin Tarantino films Pulp Fiction and Kill Bill. While the script was written to follow "a path of betrayal, revenge and redemption", the game retains Saints Row's light humor, with the "over-the-top, socially distorted" narrative juxtaposed with dark, gritty moments. Saints Row's silent protagonist speaks in Saints Row 2, granting the protagonist more personality and improving the storytelling, according to James Tsai, one of the lead designers. They sought to heavily stylize the game's characters and assign them unique personality traits. The basic character designs followed naturally from the story Volition wanted to tell, but the characters' personalities and mannerisms were mainly a product of the voice acting performances, where the actors had freedom to interpret and develop their characters. The game's voice actors include film and television stars such as Neil Patrick Harris, Michael Dorn, Mila Kunis, Jay Mohr, Keith David, and Eliza Dushku.
While the first game was released as an Xbox 360 exclusive, Volition expanded the sequel's initial development to the PlayStation 3 platform. The platform was successful in Europe, where Volition wished to expand. The port was developed in-house by a team that previously worked on Xbox 360 development. They struggled with the PlayStation 3's Cell architecture. The game was particularly unstable during development, and would crash after several hours of play. Lead producer Greg Donovan blamed their "failing to take systems and features to completion" as programmers fought last-minute bugs, artists lacked time to finalize designs, and consequently, playtest versions were not ready until late in the development cycle.
Soundtrack and audio
The game's soundtrack features about 170 licensed tracks accessible across twelve in-game radio stations while driving or at home. Station genres include alternative rock, reggae, hip hop, heavy metal, funk, R&B and 1980s pop, with artists such as As I Lay Dying, Opeth, Duran Duran, Lamb of God, The Used, My Chemical Romance, Avenged Sevenfold, Paramore, Panic! at the Disco and Run-D.M.C. The player can create a custom playlist of the licensed tracks to play on a separate station. Lead audio designer Frank Petreikis's budget for licensed music was double that of the previous game so as to secure more prominent tracks.
Volition extended the game's over-the-top atmosphere and humor into the radio stations via commercials. For example, commercials that promoted Ultor Corporation products served to enhance the player's sense of the corporation's omnipresence in Stilwater. The radio commercials were recorded with voice actors in the same room, rather than apart, so as to maximize their group dynamism. Many of the in-game commercials went through several drafts and the developers found this writing process to be challenging.
Marketing
Prior to its release, Saints Row 2 was heavily marketed and promoted through Internet and television trailers. Volition also ran several fan contests with series-related paraphernalia as prizes. The game's original release date was delayed for marketing considerations. The game's first trailer, in March 2008, was presented as a tourism promotion about the Ultor Corporation's role in rebuilding Stilwater. A full marketing campaign featuring American film actor Gary Busey began the next month. The Street Lessons with Uncle Gary video series demonstrated particular aspects of gameplay. Subsequent trailers over the next several months also highlighted gameplay elements, but one made light of Grand Theft Auto IV's lack of replay value, and another demonstrated the cooperative mode using characters modeled on the candidates from the 2008 United States presidential election. A redesigned official website and community network was launched in July 2008, and American pornographic actress Tera Patrick was featured in her own marketing campaign for the game.
Promotional contests throughout mid-2008 included "Pimp Your Crib" and "Summer of Bling". Another competition from THQ and WorthPlaying gave the winner a trip to a San Francisco Saints Row 2 multiplayer event and published their thoughts online. British fashion label Joystick Junkies ran a T-shirt design competition in September 2008, and the top entry was featured in the game's first downloadable content pack. Another round of "Summer of Bling" awarded the shirts as prizes. The "Trick Your Pack" tool launched in September let players create their own game box art. There were also other promotions and giveaways. At conventions, Saints Row 2 appeared at the 2008 E3, THQ Gamer's Day, Comic Con, PAX, GameStop Expo, and Leipzig Games Conventions. The game also promoted itself in the Myspace Music Tour and Movember 2008, an Australian fundraiser for men's health. In November, THQ signed a deal with Massive Incorporated to include in-game advertisements on their Xbox 360 and PlayStation 3 products. In-game and online, players can also find movie posters throughout the city that promote upcoming releases.
Release
The game was originally scheduled for release in North America on August 26, 2008, but was delayed to October 14 both to add final touches and to launch in a more advantageous release window. The game was released in three different "Collectors Editions", each with a copy of the game, a poster, a limited edition art book, and several extras. The Saints Row 2 "Initiation Pack", exclusive to Australia and New Zealand, included promotional items such as a pizza box and a bullet-shaped USB memory stick. The Russia-exclusive "Gift - Buka Edition" also included the bullet-shaped USB stick, and the "Gun Pack" included a gun-shaped USB stick. A month before the game's release, Saints Row 2 producer Dan Sutton stated in an interview that they "definitely" planned to make downloadable content.
PC releases
In June 2008, THQ confirmed that a Microsoft Windows port of the game was in development. It was developed by the localization team at CD Projekt, the CD Projekt Localisation Centre. The studio later became known as Porting House, and has been referred to by Volition as "CD Projekt Black" (in parallel to CD Projekt Red). The port was released in North America on January 5, 2009, in Europe on January 23, and in Australia on February 5. In April 2016, Volition released a Linux port of the Windows version.
In the aftermath of the auctioning of THQ's assets following its bankruptcy in 2013, the source code for the PC port of Saints Row 2 was believed to be lost. In the interim, parts of the game became unplayable, with no multiplayer support following the shutdown of the GameSpy service. In October 2019, Volition announced it had found the source code and that it would begin work to rebuild the game for modern systems, including replacing GameSpy with Steam matchmaking support, along with allowing for user mods. Additionally, when the update is released it will contain the two DLCs, "Ultor Exposed" and "Corporate Warfare", which had not been previously released for personal computers. The community manager who had led the effort, Mike Watson (also known as "IdolNinja"), died from cancer on August 5, 2021, but he was aware his condition had been deteriorating in the prior year and ensured that the work was moved to a small team, with Volition's and Deep Silver's support, to continue without his lead.
Downloadable content
Saints Row 2 received several downloadable content (DLC) releases, including two episodic expansion packs: Ultor Exposed and Corporate Warfare.
Ultor Exposed adds character customization and vehicle options, including Red Faction: Guerrilla-themed content. In its missions, the Saints attempt to destroy Ultor with the help of an Ultor worker, Tera, who exposes the corporation's darker side. It also adds multiplayer content, including four online multiplayer maps and a cooperative mode metagame wherein players compete for a cash bonus during story missions by accumulating points from special kill bonuses and property damage. Tera is played by American pornographic actress Tera Patrick, appearing as a whistleblower and former microbiologist for the Ultor Corporation. Originally slated for release on April 16, 2009, the pack was delayed a week to April 23 so it could be released alongside the demo for the Volition-developed game Red Faction: Guerrilla. IGN praised the pack's new co-op metagame but criticized its relatively short missions. Eurogamer gave a negative review and criticized its value proposition.
Corporate Warfare focuses on the struggle between the 3rd Street Saints and the Ultor Corporation. The pack adds character costume, facial hair, and vehicle options. It also includes three multiplayer maps and another cooperative mode metagame wherein players compete in ranking by performing vehicle stunts. Corporate Warfare was released via digital download on May 28, 2009.
Reception
The PlayStation 3 and Xbox 360 releases of the game received "generally favorable" reviews, according to video game review score aggregator Metacritic, and the Windows release received "mixed or average" reviews.
Ben "Yahtzee" Croshaw of The Escapist's Zero Punctuation named Saints Row 2 his 2008 game of the year. It was a runner-up for GameShark's overall and Xbox 360 games of the year. Game Developer named Volition in their top 50 developers of the year for their work on the game, and THQ in their top 20 publishers.
The console version of Saints Row 2 garnered generally positive reviews. The PC version was relatively less well received due to frame rate issues and visual pop-in. 1UP.com gave the game a B, stating that it "relishes the hedonistic aspects of the open-world genre", that it has "plenty of innovation" and that the "excellence in the presentation makes the world of Saints [Row] 2 a great introduction for newcomers to open world games".
Eurogamer gave the game a 9/10, stating that it "is one of the most ridiculous and enjoyable games of the year". Game Informer gave the game an 8.75/10, stating that "in its own silly, b-movie way, it's a damn fun game" and a "profanely good time". GameSpot gave the game an 8.0/10, stating that "from beginning to end, this is one of the most fun urban chaos games out there" and that it will "keep you happily creating havoc for a long time". GameSpy gave the game four and a half stars out of five, stating that it "offers up a shooting and driving experience that is plenty of fun" and that it is "self-consciously funny in its irreverence" and "will definitely appeal to much of its audience".
IGN U.S. gave the game an 8.2/10, stating that "the core gameplay experience is extremely enjoyable". IGN AU gave the game an 8/10, stating that it is "big, dumb fun". IGN UK gave the game a 7.5/10, stating that it "demonstrates that there is still plenty of mileage to be eked out of open-world games" and that "there's certainly enough here to keep any fans of sandbox violence entertained".
Despite the general acclaim, some publications gave the game negative reviews. UK magazine Edge gave the game a 5/10, stating that "few of the game's details will stick in your mind for long, and its pranky focus means it rarely gives you anything interesting to do with the toys on offer".
Entertainment Weekly flagged the game as "racist, misogynistic, crude, cynical, humorless and stupid" and labelled it the worst game of 2008, despite previously giving the game a B and calling it "a larcenous good time".
The game did not gain a favorable response from New York City officials and police. City spokespersons requested that the game be pulled from shelves upon its release; NYPD union boss Patrick Lynch criticized the game, stating that "these horrible and violent video games desensitize young people to violence while encouraging depravity, immorality while glorifying criminal behavior".
Jack Thompson, a former lawyer and longtime critic of violent video games, called Saints Row 2 a "Grand Theft Auto ripoff", and said that "as is true with pornography, as is true with violence, the subsequent products tend to push the envelope even more". On Tuesday, October 14, 2008, the game's US release date, candidate Leslie Crocker Snyder and others spoke out against the game, surrounded by police union members who supported her bid.
Governor David Paterson signed a bill in July 2008 requiring prominent display of age ratings on video games and mandating parental control on game consoles by 2010.
Sales
Saints Row 2 sold approximately 365,000 copies in its first month, outselling Dead Space, which was released the same day. The Xbox 360 version comprised the majority of these sales. The game shipped over two million units by the end of 2008. Still, analyst Doug Creutz reported that the game's sales to this point were well below expectations. Following Saints Row 2's January 2009 Windows release, the game had shipped over 2.6 million copies by the next month. In May 2009, THQ reported a $431 million loss in revenue, but Saints Row 2 sales totaled 2.8 million. Combined with the original release, the series had worldwide sales in excess of six million, making it one of the best-selling video game franchises.
As of September 2010, Saints Row 2 has sold over 3.4 million units worldwide. The game's success led THQ to shift its focus to large franchises.
References
External links
2008 video games
Action-adventure games
Cancelled Xbox One games
Games for Windows certified games
Lua (programming language)-scripted video games
Open-world video games
Video games set in 2008
Organized crime video games
PlayStation 3 games
Saints Row
THQ games
Video game sequels
Video games developed in the United States
Video games featuring protagonists of selectable gender
Video games with downloadable content
Video games with expansion packs
Video games set in the United States
Windows games
Linux games
Xbox 360 games
Multiplayer and single-player video games
CD Projekt games
Deep Silver games
Video games using Havok
Works about the Yakuza
de:Saints Row#Saints Row 2 |
742265 | https://en.wikipedia.org/wiki/Free%20as%20in%20Freedom | Free as in Freedom | Free as in Freedom: Richard Stallman's Crusade for Free Software () is a free book licensed under the GNU Free Documentation License about the life of Richard Stallman, written by Sam Williams and published by O'Reilly Media on March 1, 2002.
Williams conducted several interviews with Stallman during the writing of the book, as well as with classmates, colleagues of Stallman, and his mother. The book has received positive reviews.
Structure
The book is divided into a preface, thirteen chapters, an epilogue, three appendices and an index. A copy of the GNU Free Documentation License (GFDL) is included as Appendix C.
License
Free as in Freedom was published under the GNU Free Documentation License version 1.1, which allows modification and redistribution of the text and the photographs contained therein, as well as of the cover: its text, photograph and design elements.
Writing
Williams has written an article about the process of writing FaiF, recording the license negotiations that led to this book being published under a free license. OnLamp also interviewed Williams in 2002 about the writing process.
Standing on the shoulders of giants
In the book, Bob Young of Red Hat supports the free software movement by saying that it enables people to stand on the shoulders of giants. He also says that standing on the shoulders of giants is the opposite of reinventing the wheel.
An excerpt from the book:
"In the western scientific tradition we stand on the shoulders of giants," says Young, echoing both Torvalds and Sir Isaac Newton before him. "In business, this translates to not having to reinvent wheels as we go along. The beauty of [the GPL] model is you put your code into the public domain. If you're an independent software vendor and you're trying to build some application and you need a modem-dialer, well, why reinvent modem dialers? You can just steal PPP off of Red Hat Linux and use that as the core of your modem-dialing tool. If you need a graphic tool set, you don't have to write your own graphic library. Just download GTK. Suddenly you have the ability to reuse the best of what went before. And suddenly your focus as an application vendor is less on software management and more on writing the applications specific to your customer's needs."
Another excerpt from the book:
Integrating GCC improved the performance of Linux. It also raised issues. Although the GPL's "viral" powers didn't apply to the Linux kernel, Torvalds's willingness to borrow GCC for the purposes of his own free software operating system indicated a certain obligation to let other users borrow back. As Torvalds would later put it: "I had hoisted myself up on the shoulders of giants." Not surprisingly, he began to think about what would happen when other people looked to him for similar support.
Reception
Andrew Leonard in Salon complimented the amount of new information Williams reveals about Stallman, given the amount of material already published. He described the book as a "nuanced, detailed picture of Stallman". In Computer User, Jende Huang referred to the book as "straightforward" and wrote, "the juxtaposition of Stallman's public and private personae is the key to the book's appeal." He summarized that the book is "a worthwhile read for its chronicle of an important part of the free software movement, as well as its insight into Stallman as a person." In the Italian publication VITA, Bernardo Parrella described its "greatest merit" to be its "new perspective" on the issues at stake for free software and the computer industry as a whole, and its interweaving of Stallman's personal life and complex technical developments to be "gripping". He noted that the book is an important "real time" biography, full of references to other books, publications and web links, about a man who is misunderstood and underestimated. In a review for Sys-Con, Mike McCallister described the book as an "easy introduction to Stallman's career and ideas, but at this length cannot go into great depth." He called one section "very funny", but found coverage of another topic "all too-brief", and of one (GNOME) absent altogether.
Free as in Freedom 2.0
After reading Free as in Freedom in 2009, Richard Stallman made extensive revisions and annotations to the original text. As the book was published under the GFDL, Stallman was able to address factual errors and clarify some of Williams's mistaken or incoherent statements, bringing in his first-hand experiences and technical expertise where appropriate. This revised edition, Free as in Freedom 2.0, was published by GNU Press in October 2010 and is available at the FSF online shop and as a free PDF download. Sam Williams wrote a new foreword for the revised edition.
See also
Free software
Free Software, Free Society, selected essays by Stallman
References
External links
Free as in Freedom listing at O'Reilly Media
2002 non-fiction books
American biographies
Books about free software
Copyleft media
O'Reilly Media books
Books about computer hacking
Works about computer hacking
24011991 | https://en.wikipedia.org/wiki/Mainframe%20%28G.I.%20Joe%29 | Mainframe (G.I. Joe) | Mainframe is a fictional character from the G.I. Joe: A Real American Hero toyline, comic books and animated series. He is the G.I. Joe Team's communications expert and debuted in 1986.
Profile
Born in Phoenix, Arizona, Mainframe's real name is Blaine L. Parker, and his rank is sergeant E-5.
Mainframe was both an athlete and a scholar as a child, though as a self-confessed nerd, he would much rather learn about computers than do anything else. He graduated high school at the age of seventeen, and immediately enlisted in the Army airborne. He soon headed into battle overseas, receiving his Combat Infantryman Badge, and later left the army to get his degree from MIT on the G.I. Bill. Mainframe then did a stint developing computer software in Silicon Valley, making big bucks and fighting boredom with a stick. Luckily, the Marines were looking for a few good men with just his qualifications, and Mainframe was soon back in uniform. He even served at the Pentagon for a time, before joining the G.I. Joe Team as a computer specialist.
The world's ever-increasing reliance upon technology makes him a valued member of the G.I. Joe Team, and his ability to design computer viruses makes him a nuisance to Cobra Command.
Toys
A Real American Hero
Mainframe was first released as an action figure in 1986. He was also available in 1987, and was discontinued in 1988.
A re-colored version of Mainframe was also released in 1986, as an exclusive in a special set from Toys R Us named "Special Mission: Brazil". The boxed set also included Claymore, and re-colored versions of Dial Tone, Leatherneck, and Wet Suit. The set included a cassette tape that detailed the secret mission.
25th Anniversary
In 2008, a new version of Mainframe was released, but renamed "Dataframe". A Comic Pack with Beach Head & Dataframe also has been released.
Comics
Marvel Comics
In the Marvel Comics G.I. Joe series, he first appeared in issue #58. In that issue, Mainframe and Dusty are sent on a mission into a Middle Eastern nation torn apart by the war between the Royalist rebels and the forces of dictator Colonel Sharif. Mainframe and Dusty are sent in to locate a Cobra Terror Drome launch base hidden in the country after spy satellites detect the base's infrared signature. In exchange for helping them ambush one of Sharif's weapons convoys, the Royalists give the Joes a guide to lead them through the desert, a local teen named Rashid. Much of the story deals with Rashid's disrespect for Mainframe for not being a 'real' soldier. However, after watching Mainframe reprogram a Firebat to strafe enemy troops and hearing about his past as a frontline soldier from Dusty, Rashid changes his mind about Mainframe and even becomes an expert in computers in order to honor him. Later, after detecting a shuttle launched from Cobra Island, Mainframe is part of a team that heads into space on board the space shuttle Defiant to defend U.S. satellites against a Cobra attack (G.I. Joe #65, November 1987). Some time later, Mainframe works on the USS Flagg as part of Hawk's operations team during the Joes' involvement in the Cobra civil war. Mainframe serves in a similar capacity on the Flagg later during the Battle of Benzheen. He later aids the G.I. Joe Ninja Force in their efforts to help Destro remove a bounty on his head.
Action Force
Mainframe also appears in the British Action Force continuity. In one incident he's part of a Joe team taken prisoner. They have access to vital technology that would allow Cobra to more easily attack European interests. Mainframe's team and a secondary Joe squad cause enough chaos that Cobra's plans are stopped.
Transformers
In the original, out-of-continuity G.I. Joe/Transformers crossover, Mainframe is essential in rebuilding the severely damaged Bumblebee.
Devil's Due
Mainframe appears many times when Devil's Due takes over the Joe license. He is one of the first ones recruited back into active duty when gathered intelligence indicates Cobra is a threat yet again. He works closely with Lifeline to neutralize the threat of microscopic nanites which are causing various forms of deadly havoc. A couple of ideas work out and the nanites are defeated. At a later point, Mainframe's long-term efforts to uncover white-collar Cobra crime results in Roadblock fighting Dreadnoks on live television; this just increases Roadblock's financial well-being. Mainframe's work at the computer causes him to admit he needs to get out more.
G.I. Joe comes into conflict with Serpentor and his new independent army, the "Coil", which has taken over Cobra Island. Mainframe teams with Flash for a sabotage mission against EMP generators. Coil troops trap the two with a bomb and they perish when it explodes. The generators are also destroyed. Mainframe's name is part of a memorial in Arlington dedicated to all Joes who have lost their lives in the line of duty. Mainframe's protégé, Firewall, later aids his teammates in finishing an old mission involving the drug dealer Headman. Rashid also assists in this mission.
Mainframe is part of the alternate reality crossover G.I. Joe Vs. Transformers 2. In the climactic battle he assists Doctor Mindbender and Wheeljack in saving the Earth from complete destruction. However, it doesn't go quite as planned, with the after-effects of the life-saving computer hacks accidentally incinerating his Joe teammate Mercer.
IDW
Mainframe is the Joe soldier who discovers that Cobra exists as an organization in the first place; his theory that they are a group willing to profit in the long term pans out. In order to chase the proof, he goes off base, with Snake Eyes the only Joe who believes he is not a traitor. As he is Joe-trained, the other Joes chasing him use lethal force.
He later returns to the team as a trusted field commander. At one point, he leads operations in Europe defending the life of an injured diplomat, shot in a Cobra attack. The diplomat is trusted by both sides in a bitter conflict. Mainframe is later called in to assist in tracking down Destro's possible involvement in a fatal conflict in Rio de Janeiro.
Animated series
Sunbow
Mainframe appeared in the original G.I. Joe animated series, voiced by Patrick Pinney. He was often partnered with Dial Tone, and played video games with him in his spare time. In the 5-part mini series "Arise, Serpentor, Arise!", Mainframe and his teammate Beach Head take shelter in the actual coffin of Vlad Tepes from his crumbling castle. In the same series, Mainframe mentions a posting he had monitoring base security in Vietnam. Mainframe was also apparently married with children at one point; while driving through Transylvania, he remarks that the country reminds him of when he would take his kids trick or treating.
In the episode "Computer Complications", he mentions having an ex-wife. In that same episode, Zarana goes undercover at a Joe base as "Carol Weidler". During that time, Mainframe and Zarana develop romantic feelings for each other. Mainframe also programs robot subs to respond to G.I. Joe control, but Zarana reverses the programming, though she saves his life when Zartan leaves an explosive on him. "Carol" then reveals her true identity to Mainframe, who lets her escape. Mainframe is later treated for "neurological shock" and blames himself for the loss of the USS Flagg and of an anti-matter powered space probe that G.I. Joe and Cobra were both after, but Duke says otherwise. The episode ends showing both Mainframe and Zarana, miles apart, looking at the moon, perhaps pondering the same thoughts.
In "Cobrathon", Duke, Beach Head, Mainframe, Sci-Fi and Lifeline infiltrate a Cobra software development site, where Mainframe steals a decoder box, concluding that Cobra has gone into pay television business. Mainframe and the other Joes escape with it, albeit without Sci-Fi and Lifeline, who are captured. They soon discover that Cobra is holding a telethon to raise money for a Cobra computer virus designed to destroy the computers in every Interpol agency in the world. Using a computer code sheet which Beach Head stole from the software site, Mainframe deciphers the code and finds out the computer housing the virus is located in an abandoned Anasazi city. There, Mainframe destroys the Cobra supercomputer by entering his own computer virus into the infected system, causing the Cobra virus to overload his computer instead.
In "Grey Hairs and Growing Pains", Mainframe is one of the Joes investigating Cobra's theft of a special youth formula. He and Dial-Tone interview football star Brett Tinker regarding his commercial for an "ageless care process", only to be assaulted in response. For further investigation, the Joes visit a Cobra-owned Ageless Care Spa, where they are led to the "steam and sun rooms". In one room, Lady Jaye, Mainframe and Dial-Tone turn into children, while in the other room, Flint, Gung-Ho and Sci-Fi turn into old men. With help from an actress associated with the youth formula, they infiltrate a Cobra desert factory to find a cure. Mainframe again meets Zarana, who gives him the information on reversing the aging/rejuvenating process. He reprograms the factory's prototype sun and steam rooms, restoring the Joes to their normal ages, and bluffs the Cobras into retreat.
In "Joe's Night Out", Mainframe assists Dr. Mullaney in his project to build a nitrogen fuel turbine. In the same episode, Serpentor sends a nightclub called Club Open Air into space, with Leatherneck, Wet-Suit, Dial-Tone and civilians inside, and demands that Dr. Mullaney surrender to Cobra in return for the lives of those in the club to be spared. Mainframe sends the three Joes Mullaney's formula, which they use to fuel the rockets with the air within the club, and prevents Cobra from obtaining it by inserting the computer virus which Cobra developed for their Cobrathon into the disks containing the information.
G.I. Joe: The Movie
Mainframe also appeared briefly in the 1987 animated film G.I. Joe: The Movie. He is present during the test of the Broadcast Energy Transmitter (B.E.T.), and is seen in battle as well.
Books
Mainframe is a supporting character in the novel Divide and Conquer. Without permission, he risks damage to Joe Headquarters to gain information from Cobra computers.
Mainframe is featured in the younger-children storybook "Operation Starfight" drawn by Earl Norem. He is wounded in the arm during the story.
Video games
Mainframe appears as a non-playable supporting character named "Data Frame" in the video game G.I. Joe: The Rise of Cobra, voiced by Wally Wingert.
Other works
Mainframe's figure is briefly featured in the novel 6 Sick Hipsters. In the story, the character Paul Achting spent four years collecting G.I. Joe figures to set up a battle scene between the Joes and Cobra. As he imagined the characters in his head, he described three of the Joes hanging back from the front lines: Lifeline, Mainframe, and Iceberg. Beside Iceberg, "Mainframe, clad in his distinctive gray short-sleeved uniform, manned the battlefield computer. They did not speak to each other. Only waited and watched."
References
External links
Mainframe at JMM's G.I. Joe Comics Home Page
Mainframe at YOJOE.com
Fictional characters from Arizona
Fictional United States Marine Corps personnel
Fictional Vietnam War veterans
Fictional military sergeants
Fictional staff sergeants
G.I. Joe soldiers
Male characters in animated series
Male characters in comics
Television characters introduced in 1986
Fictional Massachusetts Institute of Technology people
937103 | https://en.wikipedia.org/wiki/University%20Visvesvaraya%20College%20of%20Engineering | University Visvesvaraya College of Engineering | University Visvesvaraya College of Engineering (UVCE) was established in 1917, under the name Government Engineering College, by Bharat Ratna Sir M. Visvesvaraya. It is the 5th engineering college to be established in the country and 1st in Karnataka. UVCE is one of the few technical institutions in the country that is vested with the status of a university and autonomy on the lines of the IITs. It is one of the oldest technical institutions in the country, imparting technical education leading to B.Tech., B. Arch, M.Tech., M. Arch and PhD degrees in the various disciplines of Engineering and Architecture. The college is approved by the AICTE and the Government of Karnataka. UVCE has secured an NAAC accreditation score of 3.17 (A Grade). The college receives financial aid under the TEQIP program from the World Bank.
A committee has been formed by the Government of Karnataka, headed by S. Sadagopan as the chairman to explore the procedure of granting autonomy to the institute.
History
In 1917, the then Diwan of Mysore, Sir M. Visvesvaraya, felt the need to have an engineering college in the state as the College of Engineering, Guindy and College of Engineering, Pune were unable to accommodate enough students from Mysore State.
He started the college in 1917 in Bangalore as a School of Engineering with 20 students in Civil and Mechanical engineering branches in the PWD building. S.V. Setty was a founding professor. It was the 5th engineering college to be started in India and the 1st one in the Mysore State. In 1965, the name of the college was changed to University Visvesvaraya College of Engineering (UVCE), after the founder.
Campus
UVCE is situated at K.R. Circle and in the neighborhood of Vidhana Soudha, Government of Karnataka, with a campus area spreading over 15 acres, housing the Departments of Mechanical Engineering, Electrical and Electronics Engineering, Electronics and Communication Engineering and Computer Science Engineering. The institute runs the Civil and Architecture courses at the Jnana Bharathi Campus, occupying an area of about 50 acres.
The college provides hostel facilities for boys and girls separately, both in the Jnana Bharathi and K.R. Circle campus.
Admissions
Candidates who qualify in the Karnataka Common Entrance Test (KCET) / Aptitude test conducted by the Government of Karnataka are eligible for admission to the Under Graduate Engineering / Architecture courses. Candidates belonging to Union Territories / other states are also admitted to the college under the Central Government quota. Candidates who qualify through GATE and PGCET are eligible to the Post Graduate programmes. Candidates who qualify in the test conducted by Bangalore University are eligible for Research programmes leading to M.Tech. in Engineering and Doctoral programmes. Admission to UVCE is strictly done on the basis of merit.
Academics
The institution offers 7 undergraduate (B.Tech. / B. Arch) and 24 postgraduate (M.Tech. / M. Arch.) programmes.
Undergraduate programmes
Regular B.Tech. / B. Arch.
Electronics and Communication Engineering
Computer Science and Engineering
Information Science and Engineering
Machine Learning and Artificial Intelligence
Mechanical Engineering
Electrical and Electronics Engineering
Civil Engineering
Architecture
Part-time B.Tech.
The part-time B.Tech. programme is no longer offered.
Postgraduate programmes
M.Tech.
Computer Science and Engineering
Computer Science
Information Technology
Computer Networks
Web Technologies
Software Engineering
Bioinformatics
Electronics and Communication
Electronics and Communication Engineering
Electrical Engineering
Power & Energy Systems
Power Electronics
Control & Instrumentation
Civil Engineering
Construction Technology
Geo-Technical Engineering
Structural Engineering
Highway Engineering
Pre-stressed Concrete Engineering
Water Resource Engineering
Environmental Engineering
Earthquake Engineering
Mechanical Engineering
Machine Design
Manufacturing Science Engineering
Thermal Science Engineering
Advanced Material Technology
Architecture
Construction and Project Management
Landscape Architecture
PhD programmes
The PhD degree is awarded by Bangalore University. UVCE serves as a research centre for the award of PhD degree in several research areas, under following branches:
Civil Engineering
Mechanical Engineering
Computer Science and Engineering (Software Engineering, Computer Networks, Wireless Sensor Networks, Distributed Computing, Web Mining, Data Mining, Semantic Web, Image Processing, Bio-metrics, Bioinformatics, Signal Processing, Cloud Computing, Big Data and Analytics, Data Auditing, Internet of Things, Social Networks, Network Security and many other Computer Science related research areas)
Electrical and Electronics Engineering
Electronics and Communication Engineering
Architecture
Research
Most of the faculty members are actively involved in research along with regular teaching. There are several ongoing sponsored R&D projects in which UG and PG students are actively involved. An average of 120–150 technical papers per year are published in various national and international conferences and journals. The college annually conducts workshops, seminars, webinars and conferences on the latest technological trends, including soft-skill enhancement training programs for the students. UVCE undertakes about 200 consultancy projects per year for government and private organizations.
Student life
Cultural events
Milagro
Milagro is the annual inter-collegiate cultural fest organized in UVCE. The event draws crowds of up to 2,000 and is held during March–April.
Inspiron
Since its inception in 2009, Inspiron has been hosted by The Training and Placement Office, UVCE.
Impetus
IMPETUS is a national-level technical extravaganza organised by IEEE UVCE which was started in the year 2001.
Companies such as IBM, Yahoo!, NIIT, Motorola, Cognizant, Infosys and Dell, and government organisations such as BWSSB and BESCOM, sponsor the fest. The event hosts both technical events like Circuit Debugging, OSP, Web Designing and Technical Quiz, and non-technical events like Mad Ads, Ad-Venture, a team-building event, a marketing event and Mock Stock.
Kagada
Kagada is a National Level Annual Technical Paper Presentation Contest organized by IEEE UVCE.
Convocation
Every year convocation is held in June, where degrees are awarded to graduating students.
Student organizations
Training and Placement Office UVCE
The Training and Placement Office has been functional in the institute since 1989, founded by Dr. M Channa Reddy and further developed by Dr. Sathya Narayana Makam, Dr. Paul S Vizhian, Dr. P Deepa Shenoy and Dr. K B Raja. The current placement officer is Dr. B M Rajaprakash, Professor, Department of Mechanical Engineering.
IEEE UVCE
IEEE UVCE is an IEEE student branch under the aegis of the IEEE Bangalore Section. The branch was revived in 2001. Symposiums, guest lectures, KAGADA (the National Level Paper Presentation Contest), IMPETUS (the National Level Technical Fest) and collegiate activities (RIPPLES) are some of the events held throughout the year.
IEEE UVCE has 13 Special Interest Groups.
Notable alumni
Venkatesh K. R. Kodur, University professor at Michigan State University and a pioneer in structural fire design.
Professor S.S. Iyengar, Ryder professor and director of Computer Science at Florida International University, Miami, Florida, USA
Dr. Vasudev Kalkunte Aatre, scientist, former head, Defence Research and Development Organisation (DRDO)
N. Ahmed, Professor Emeritus, Electrical and Computer and Engineering, University of New Mexico
Dr. Narasimhiah Seshagiri, computer scientist, writer and a former director-general of the National Informatics Centre, an apex organization of the Government of India
Mano Murthy, entrepreneur and composer
Roddam Narasimha, eminent scientist
Lakshmi Narayanan, Ex-CEO at Cognizant
Katepalli R. Sreenivasan, Former chairman, Mechanical Engineering, Yale University
H. G. Dattatreya, actor
Ramesh Arvind, actor
Vijaya Bhaskar, composer
Arvind Bhat, badminton player
Prakash Belawadi, actor
Prahlada, missile scientist and former Vice Chancellor of DIAT
G Guruswamy, Principal Aerospace Scientist who pioneered computational aeroelasticity in 1978.
Rajkumar Buyya, Redmond Barry Distinguished Professor and Director of the Cloud Computing and Distributed Systems (CLOUDS) Laboratory at the University of Melbourne
V. K. Aatre
UVCE IEEE Fellow awardees
M A L Thathachar
Dr. S. S. Iyengar
Viktor Prasanna
V. Prasad Kodali
Dr. Vasudev Kalkunte Aatre
Rajkumar Buyya
Venugopal K R
N. Ahmed, 1985 "for his contributions to engineering education and digital signal processing."
Alumni associations
UVCE has three active alumni associations that work to bring about effective change on campus:
Vision UVCE
UVCE Foundation
UVCE Graduates Association
References
External links
1917 establishments in India
2506812 | https://en.wikipedia.org/wiki/USS%20Gettysburg%20%281858%29 | USS Gettysburg (1858) | The first USS Gettysburg was a steamer in the Union Navy. The ship was built in Glasgow, Scotland in 1858, named Douglas, and operated for the Isle of Man Steam Packet Company between Liverpool, United Kingdom and Douglas on the Isle of Man until November 1862. She was then sold to the Confederacy, renamed Margaret and Jessie, and operated as a blockade runner until her capture by the Union on 5 November 1863. The ship was renamed Gettysburg, and commissioned into the Union Navy on 2 May 1864.
During her military service, Gettysburg operated with the North Atlantic Blockading Squadron, was involved in both the first and second attacks on Fort Fisher, helped lay telegraph cables between Key West and Havana and undertook navigational surveys of the Caribbean and the Mediterranean.
Gettysburg was decommissioned on 6 May 1879 and sold two days later.
Construction and dimensions
Douglas was built by Robert Napier & Co. in Glasgow in 1858. Napier's also supplied her engines and boilers. Her purchase cost was £17,500, plus an allowance from Napier's of £5,000 for the King Orry.
Douglas had a length of 205 feet, a beam of 26 feet, a depth of 14 feet and a service speed of 17 knots.
She was the first Steam Packet steamer with a straight stern, no fiddle bow and no figurehead. She was launched at 13:30 on Wednesday 28 April 1858, the christening of the ship being performed by the wife of John Napier. Also in attendance on behalf of the Isle of Man Steam Packet was Captain Edward Quayle, the Commodore of the Company.
The launch had been delayed for a short period as the berth where her fitting out was to take place was occupied by another vessel. Following her launching she was towed to Launcefield Dock in order to receive her engines. Douglas was fitted with a full complement of lifeboats, these being fitted with Clifford's patent lowering apparatus, which enabled the boats to be lowered safely whilst the vessel was under steam.
Below the main saloon and ladies cabin, in one of the watertight compartments was the cargo hold which also contained two fresh water storage tanks each capable of accommodating 500 gallons of water. Moving aft, the cargo hold was followed by the fuel bunkers, furnaces and engines.
Pre-Civil War
Isle of Man Steam Packet Company, 1858–1862
PS (RMS) Douglas (I) No. 20683, the first ship in the line's history to be so named and the ninth to be ordered by the Company, was an iron paddle steamer which served with the Isle of Man Steam Packet Company until she was sold in 1862. She was sold to Cunard, Wilson & Co., acting on behalf of the Confederate agents Fraser, Trenholm & Co. She was renamed Margaret and Jessie and sailed in grey livery for the Confederate States.
Due to increasing passenger traffic between the Isle of Man and England, it was decided in 1858 that a larger, faster ship would be ordered for the packet fleet. During trials she was declared the fastest Channel steamer then in existence.
Appearance and furnishings
Douglas was considered an elegant ship. She had a raised quarterdeck, below which was situated a spacious and beautifully fitted-up saloon, ladies' cabin and sleeping cabins, with accommodation for 100 first-class passengers. These cabins ran the full length of the quarterdeck and were lit by large, wide skylights, two large deck windows and sixteen side windows. The chief saloon was well decorated, with three large marble tables with mahogany tops, the seats to which had movable backs to allow the occupier to sit either facing toward or away from the table. On the wall at the aft end of the saloon hung two pictures, one of Douglas Harbour and the other of Liverpool. Seventy people could be accommodated to dine at any given time.
Her deck was said to be clear and roomy with a hurricane deck situated between the paddle boxes which was 50ft long and roofed over on all sides.
The steerage passengers were accommodated in the forward part of the ship and, as was the case with first class, a ladies' lounge was provided. Beyond this, in the immediate vicinity of the forecastle, were situated the crew's quarters.
Mail and cargo
Douglas was designed to carry a mixture of passengers and cargo.
Her designation as a Royal Mail Ship (RMS) indicated that she carried mail under contract with the Royal Mail. A specified area was allocated for the storage of letters, parcels and specie (bullion, coins and other valuables). This was situated in the forecastle beneath the crew's quarters and was accessible only by a designated ship's officer.
In addition, there was a considerable quantity of regular cargo, ranging from furniture to foodstuffs.
Maiden voyage
Douglas made her maiden voyage from Glasgow to Douglas on Saturday 3 July 1858. Having made passage from the Clyde in a time of 8 hours 30 minutes her arrival off Onchan Head was heralded by cannon fire from the Conister Rock, Fort Anne and the Castle Mona. As she entered Douglas Bay she stopped and embarked several directors of the Steam Packet Company before sailing across the bay several times among a large number of small boats which had put to sea for the occasion. A large crowd had gathered on the Red Pier in order to welcome her and after she had secured alongside many of these people were able to view the interior of the ship with further public viewings made available over the following two days.
Douglas was claimed to be the fastest steamer then afloat. She attracted wide attention, and her speed made her a strong candidate for more adventurous service; she was acclaimed as "comme le premier" ("the foremost") among cross-channel steamers.
Service life
Certified by the Board of Trade to carry between 800 and 900 passengers, and longer and faster than her forerunners, Douglas was built to help meet the steady increase in passenger traffic to and from the Isle of Man. Under the command of Captain Quayle she made her inaugural crossing from Douglas to Liverpool on Tuesday 6 July 1858.
On her inaugural trip Douglas immediately broke the record for the crossing between Douglas and Liverpool with a time of 4 hours 20 minutes, beating the previous record of 4 hours 50 minutes, which had belonged to the Mona's Queen and had stood since September 1856. On the home run Douglas would routinely record a passage time of 4 hours 40 minutes.
While in the Steam Packet's colours, the only event of interest – apart from the way she broke the record for the home run – was her collision with the brig Dido, which cost the Company £400 in damages.
One accident befell a passenger on Monday 16 April 1860, while the ship was making passage to Liverpool. A sailor, home on leave from the Royal Navy and serving on HMS Majestic, scaled the foremast head while intoxicated and got onto the fore stay. In attempting to come down, however, he fell some 30 feet to the deck. He was treated by a doctor who happened to be on board and, on arrival in Liverpool, was transferred to the Northern Hospital.
Ancillary work consisted of charter sailings and day excursions, one of which was on 14 October 1859, when Douglas took an excursion to Holyhead to see the ocean liner Great Eastern. At one time she was chartered to Henderson's of Belfast for three weeks for the then notable fee of £200 per week.
SS Margaret and Jessie
After only four years in Steam Packet ownership, Douglas was sold for £24,000 to Cunard, Wilson and Co., who were in fact acting as brokers for the Confederate agents Fraser, Trenholm and Co. Douglas departed her home port for the last time on Sunday 16 November 1862.
While in Liverpool, Douglas was painted light grey and, loaded with cargo, departed for Nassau under the command of Captain Corbett on Tuesday 2 December 1862, arriving in Nassau in late January 1863.
Douglas made an ideal blockade runner in the American Civil War. She was then owned by the Charleston Import and Export Company. She successfully ran the blockade and reached Charleston on 31 January, bringing in much-needed supplies, and subsequently departed for Nassau with a cargo of cotton. She ran the blockade successfully on numerous other occasions.
Douglas was subsequently renamed Margaret and Jessie in honour of the daughters of her new owner, both of whom were on board her during her initial blockade run outbound from Charleston.
On 1 June 1863, off Nassau, Margaret and Jessie was fired upon and driven ashore by a Union gunboat. Although damaged, she escaped a few days later, returned to blockade running, and was later captured. Some records maintain that after she was driven ashore and had escaped to Nassau, she took no further part in the American Civil War, and her engines were said to be seen rusting on the Nassau beach as late as 1926.
The official history of the ship in the library of the Department of the Navy, Washington D.C., clarifies the conflicting reports.
1863 capture
Margaret and Jessie was captured as a blockade runner on 5 November 1863 by the Army transport Fulton and two other vessels off Wilmington, North Carolina. She was purchased from the New York Prize Court by the Navy and commissioned as Gettysburg at the New York Navy Yard on 2 May 1864, Lieutenant Roswell Lamson commanding.
Refitting
She had been armed with a 30-pounder Parrott gun, two 12-pounders, and four 24-pounder howitzers. Her tonnage was now given as 950, and she was apparently lengthened by 16 feet to 221 feet. When commissioned, she had a ship's company of 96. She joined the North Atlantic Blockading Squadron, and captured several ships which were running supplies to the South.
Civil War
North Atlantic Blockading Squadron
A fast, strong steamer, Gettysburg was assigned blockading duty with the North Atlantic Blockading Squadron, and departed New York on 7 May. She arrived at Beaufort, North Carolina on 14 May and from there took station at the entrance to the Cape Fear River.
For the next seven months, Gettysburg was engaged in the vital business of capturing blockade runners carrying supplies to the strangling South. She captured several ships, and occasionally performed other duties. On 8 October, for instance, she rescued six survivors from schooner Home, which had capsized in a squall.
Battle of Fort Fisher
Gettysburg took part in the attack on Fort Fisher on 24–25 December 1864. Gettysburg assisted with the devastating bombardment prior to the landings by Army troops, and during the actual landings stood in close to shore to furnish cover for the assault. Gettysburg's boats were used to help transport troops to the beaches.
With the failure of the first attack on the Confederate works, plans were laid for another assault, this time including a landing force of sailors and marines to assault the sea face of the fort. In this second attack on 15 January 1865, Gettysburg again engaged the fort in the preliminary bombardment, and furnished a detachment of sailors under Lt. Roswell Lamson and other officers in an assault, which was stopped under the ramparts of Fort Fisher. Lamson and a group of officers and men were forced to spend the night in a ditch under Confederate guns before they could escape. Though failing to take the sea face of Fort Fisher, the attack by the Navy diverted enough of the defenders to make the Army assault successful. Gettysburg suffered two men killed and six wounded in the assault.
Gettysburg spent the remaining months of the war on blockade duty off Wilmington, North Carolina, and operated from April–June between Boston, Massachusetts and Norfolk, Virginia carrying freight and passengers. She was decommissioned on 23 June at New York Navy Yard.
Post-war
Caribbean, 1866–1875
Recommissioning on 3 December 1866, Gettysburg made a cruise to the Caribbean Sea, returning to Washington on 18 February, and decommissioning again on 1 March 1867.
Gettysburg went back into commission on 3 March 1868 at Norfolk and put to sea on 28 March on special service in the Caribbean. Until July 1868, she visited various ports in the area protecting American interests, among them Kingston, Jamaica, Havana, Cuba, and ports of Haiti. From 3 July to 13 August, Gettysburg assisted in the laying of a telegraph cable from Key West to Havana, and joined with scientists from the Hydrographic Office in a cruise to determine the longitudes of West Indian points using the electric telegraph. From 13 August 1868 to 1 October 1869, she cruised between various Haitian ports and Key West. Gettysburg arrived at the New York Navy Yard on 8 October, decommissioned the same day, and entered the Yard for repairs.
Gettysburg was laid up in ordinary until 6 November 1873, when she again commissioned at Washington Navy Yard. She spent several months transporting men and supplies to the various Navy Yards on the Atlantic coast, and on 25 February 1874 anchored in Pensacola harbor to embark members of the survey team seeking routes for an inter-oceanic canal in Nicaragua. Gettysburg transported the engineers to Aspinwall, Panama and Greytown, Nicaragua, and returned them to Norfolk on 10 May 1874. After several more trips on the Atlantic coast with passengers and supplies, the ship again decommissioned on 9 April 1875 at Washington Navy Yard.
Recommissioned on 21 September, Gettysburg departed Washington for Norfolk, where she arrived on 14 October. Assigned to assist in another of the important Hydrographic Office expeditions in the Caribbean, she departed Norfolk on 7 November. During the next few months she contributed to safe navigation in the West Indies in surveys that led to precise charts. She returned to Washington with the scientific team on 14 June, decommissioning on 26 June.
Mediterranean, 1876–1879
Gettysburg recommissioned on 20 September 1876, for special duty to the Mediterranean, where she was to obtain navigational information about the coasts and islands of the area. Gettysburg departed Norfolk on 17 October for Europe. During the next two years, she visited nearly every port in the Mediterranean, taking soundings and making observations on the southern coast of France, the entire coastline of Italy, and the Adriatic Islands. Gettysburg continued to the coast of Turkey, and from there made soundings on the coast of Egypt and other North African points, Sicily and Sardinia. On 1 October 1878, while the ship was off the coast of Algeria, Landsman Walter Elmore rescued a fellow sailor from drowning, for which he was awarded the Medal of Honor.
While visiting Genoa on 22 April 1879, Gettysburg rescued the crew of a small vessel which had run upon the rocks outside the breakwater.
Decommissioning and fate
Her iron plates corroded from years of almost uninterrupted service and her machinery weakened, Gettysburg was decommissioned on 6 May 1879 and sold two days later.
References
Bibliography
Lamson of the Gettysburg: The Civil War Letters of Lieutenant Roswell H. Lamson, U.S. Navy, James M. and Patricia R. McPherson, eds. (Oxford Univ. Press 1999)
Chappell, Connery (1980). Island Lifeline. T. Stephenson & Sons Ltd.
External links
A drawing by USS Gettysburg Ensign Francis P. B. Sands of the aftermath of the Battle of Fort Fisher
Ferries of the Isle of Man
Steamships of the United States Navy
Ships of the Union Navy
American Civil War patrol vessels of the United States
Ships built on the River Clyde
1858 ships

Tony Hey

Professor Anthony John Grenville Hey (born 17 August 1946) was Vice-President of Microsoft Research Connections, a division of Microsoft Research, until his departure in 2014.
Education
Hey was educated at King Edward's School, Birmingham and the University of Oxford. He graduated with a Bachelor of Arts degree in physics in 1967, and a Doctor of Philosophy in theoretical physics in 1970 supervised by P. K. Kabir. He was a student of Worcester College, Oxford and St John's College, Oxford.
Career and research
From 1970 through 1972 Hey was a postdoctoral fellow at California Institute of Technology (Caltech).
Moving to Pasadena, California, he worked with Richard Feynman and Murray Gell-Mann, both winners of the Nobel Prize in Physics.
He then moved to Geneva, Switzerland, and worked as a fellow at CERN (the European Organization for Nuclear Research) for two years.
Hey worked for about thirty years as an academic at the University of Southampton, starting in 1974 as a particle physicist.
He spent 1978 as a visiting fellow at Massachusetts Institute of Technology.
In 1981 he returned to Caltech as a visiting research professor. There he learned of Carver Mead's work on very-large-scale integration and became interested in applying parallel computing techniques to large-scale scientific simulations.
Hey worked with British semiconductor company Inmos on the Transputer project in the 1980s.
He switched to computer science in 1985, and in 1986 became professor of computation in the Department of Electronics and Computer Science at Southampton. While there, he was promoted to Head of the School of Electronics and Computer Science in 1994 and Dean of Engineering and Applied Science in 1999.
Among his work was "doing research on Unix with tools like LaTeX."
In 1990 he was a visiting fellow at the Thomas J. Watson Research Center of IBM Research.
He then worked with Jack Dongarra, Rolf Hempel and David Walker, to define the Message Passing Interface (MPI) which became a de facto open standard for parallel scientific computing.
In 1998 he was a visiting research fellow at Los Alamos National Laboratory in the USA.
Hey led the UK's e-Science Programme from March 2001 to June 2005.
He was appointed corporate vice-president of technical computing at Microsoft on 27 June 2005.
Later he became corporate vice-president of external research, and in 2011 corporate vice-president of Microsoft Research Connections until his departure in 2014.
Since 2015, he has been a Senior Data Science Fellow at the University of Washington eScience Institute.
Hey is the editor of the journal Concurrency and Computation: Practice and Experience. Among other scientific advisory boards in Europe and the United States, he is a member of the Global Grid Forum (GGF) Advisory Committee.
Publications
Hey has authored or co-authored a number of books including The Fourth Paradigm: Data-Intensive Scientific Discovery, The Quantum Universe and The New Quantum Universe, The Feynman Lectures on Computation and Einstein's Mirror. Hey has also authored numerous peer-reviewed journal papers.
His latest book is a popular book on computer science called The Computing Universe: A Journey through a Revolution.
Awards and honours
Hey held an open scholarship to Worcester College, Oxford, from 1963 to 1967, won the Scott Prize for Physics in 1967, held a senior scholarship to St John's College, Oxford, in 1968, and was a Harkness Fellow from 1970 through 1972.
Hey was made a Commander of the Order of the British Empire (CBE) in 2005. He was elected a Fellow of the British Computer Society (FBCS) in 1996, the Institute of Physics (FInstP) and the Institution of Electrical Engineers in 1996 and the Royal Academy of Engineering (FREng) in 2001.
In 2006 he presented the prestigious IET Pinkerton Lecture. In 2007 he was awarded an honorary Doctor of Civil Law degree from Newcastle University. In 2017 he was elected a Fellow of the Association for Computing Machinery (ACM).
References
E-Science
English physicists
English science writers
Living people
Harkness Fellows
Fellows of the British Computer Society
Fellows of the Institute of Physics
Fellows of the Institution of Engineering and Technology
Fellows of the Royal Academy of Engineering
Fellows of the Association for Computing Machinery
1946 births
Commanders of the Order of the British Empire
Academics of the University of Southampton
Alumni of Worcester College, Oxford
People associated with CERN

Vaporware

In the computer industry, vaporware (or vapourware) is a product, typically computer hardware or software, that is announced to the general public but is late or never actually manufactured, and is never officially cancelled. Use of the word has broadened to include products such as automobiles.
Vaporware is often announced months or years before its purported release, with few details about its development being released. Developers have been accused of intentionally promoting vaporware to keep customers from switching to competing products that offer more features. Network World magazine called vaporware an "epidemic" in 1989 and blamed the press for not investigating if developers' claims were true. Seven major companies issued a report in 1990 saying that they felt vaporware had hurt the industry's credibility. The United States accused several companies of announcing vaporware early enough to violate antitrust laws, but few have been found guilty. InfoWorld magazine wrote that the word is overused and places an unfair stigma on developers.
"Vaporware" was coined by a Microsoft engineer in 1982 to describe the company's Xenix operating system and first appeared in print in a newsletter by entrepreneur Esther Dyson in 1983. It became popular among writers in the industry as a way to describe products they felt took too long to be released. InfoWorld magazine editor Stewart Alsop helped popularize it by lampooning Bill Gates with a Golden Vaporware award for the late release of his company's first version of Windows in 1985.
Vaporware first implied intentional fraud when it was applied to the Ovation office suite in 1983; the suite's demonstration was well received by the press, but the product was never released.
Etymology
"Vaporware", sometimes synonymous with "vaportalk" in the 1980s, has no single definition. It is generally used to describe a hardware or software product that has been announced, but that the developer has no intention of releasing any time soon, if ever.
The first reported use of the word was in 1982 by an engineer at the computer software company Microsoft. Ann Winblad, president of Open Systems Accounting Software, wanted to know if Microsoft planned to stop developing its Xenix operating system as some of Open System's products depended on it. She asked two Microsoft software engineers, John Ulett and Mark Ursino, who confirmed that development of Xenix had stopped. "One of them told me, 'Basically, it's vaporware'," she later said. Winblad compared the word to the idea of "selling smoke", implying Microsoft was selling a product it would soon not support.
Winblad described the word to influential computer expert Esther Dyson, who published it for the first time in her monthly newsletter RELease 1.0. In an article titled "Vaporware" in the November 1983 issue of RELease 1.0, Dyson defined the word as "good ideas incompletely implemented". She described three software products shown at COMDEX in Las Vegas that year with bombastic advertisements. She stated that demonstrations of the "purported revolutions, breakthroughs and new generations" at the exhibition did not meet those claims.
The practice existed before Winblad's account. In a January 1982 review of the new IBM Personal Computer, BYTE favorably noted that IBM "refused to acknowledge the existence of any product that is not ready to be put on dealers' shelves tomorrow. Although this is frustrating at times, it is a refreshing change from some companies' practice of announcing a product even before its design is finished". When discussing Coleco's delay in releasing the Adam, Creative Computing in March 1984 stated that the company "did not invent the common practice of debuting products before they actually exist. In microcomputers, to do so otherwise would be to break with a veritable tradition". After Dyson's article, the word "vaporware" became popular among writers in the personal computer software industry as a way to describe products they believed took too long to be released after their first announcement. InfoWorld magazine editor Stewart Alsop helped popularize its use by presenting Bill Gates, CEO of Microsoft, with a Golden Vaporware award for Microsoft's release of Windows in 1985, 18 months late. Alsop presented it to Gates at a celebration for the release while the song "The Impossible Dream" played in the background.
"Vaporware" took another meaning when it was used to describe a product that did not exist. A new company named Ovation Technologies announced its office suite Ovation in 1983. The company invested in an advertising campaign that promoted Ovation as a "great innovation", and showed a demonstration of the program at computer trade shows. The demonstration was well received by writers in the press, was featured in a cover story for an industry magazine, and reportedly created anticipation among potential customers. Executives later revealed that Ovation never existed. The company created the fake demonstration in an unsuccessful attempt to raise money to finish their product, and is "widely considered the mother of all vaporware," according to Laurie Flynn of The New York Times.
Use of the term spread beyond the computer industry. Newsweek magazine's Allan Sloan described the manipulation of stocks by Yahoo! and Amazon.com as "financial vaporware" in 1997. Popular Science magazine uses a scale ranging from "vaporware" to "bet on it" to describe release dates of new consumer electronics. Car manufacturer General Motors' plans to develop and sell an electric car were called vaporware by an advocacy group in 2008 and Car and Driver magazine retroactively described the Vector W8 supercar as vaporware in 2017.
Causes and use
Late release
A product can miss its announced release date, and be labeled vaporware by the press, simply because its development takes longer than planned. Most software products are not released on time, according to researchers in 2001 who studied the causes and effects of vaporware; "I hate to say yes, but yes", a Microsoft product manager stated in 1984, adding that "the problem isn't just at Microsoft". The phenomenon is so common that Lotus' release of 1-2-3 on time in January 1983, three months after announcing it, amazed many.
Software development is a complex process, and developers are often uncertain how long it will take to complete any given project. Fixing errors in software, for example, can make up a significant portion of its development time, and developers are motivated not to release software with errors because it could damage their reputation with customers. Last-minute design changes are also common. Large organizations seem to have more late projects than smaller ones, and may benefit from hiring individual programmers on contract to write software rather than using in-house development teams. Adding people to a late software project does not help; according to Brooks' Law, doing so increases the delay.
Not all delays in software are the developers' fault. In 1986, the American National Standards Institute adopted SQL as the standard database manipulation language. Software company Ashton-Tate was ready to release dBase IV, but pushed the release date back to add support for SQL. The company believed that the product would not be competitive without it. As the word became more commonly used by writers in the mid-1980s, InfoWorld magazine editor James Fawcette wrote that its negative connotations were unfair to developers because of these types of circumstances.
Vaporware also includes announced products that are never released because of financial problems, or because the industry changes during its development. When 3D Realms first announced Duke Nukem Forever in 1997, the video game was early in its development. The company's previous game released in 1996, Duke Nukem 3D, was a critical and financial success, and customer anticipation for its sequel was high. As personal computer hardware speeds improved at a rapid pace in the late 1990s, it created an "arms race" between companies in the video game industry, according to Wired News. 3D Realms repeatedly moved the release date back over the next 12 years to add new, more advanced features. By the time 3D Realms went out of business in 2009 with the game still unreleased, Duke Nukem Forever had become synonymous with the word "vaporware" among industry writers. The game was revived and released in 2011. However, due to a 13-year period of fan anticipation and design changes in the industry, the game received a mostly negative reception from critics and fans.
A company notorious for vaporware can improve its reputation. In the 1980s, video game maker Westwood Studios was known for shipping products late, but by 1993 it had so improved that, Computer Gaming World reported, "many publishers would assure [us] that a project was going to be completed on time because Westwood was doing it".
Early announcement
Announcing products early, months or years before their release date (also called "preannouncing"), has been an effective way for some developers to make their products successful. It can be seen as a legitimate part of their marketing strategy, but is generally not popular with industry press. The first company to release a product in a given market often gains an advantage. It can set the standard for similar future products, attract a large number of customers, and establish its brand before competitors' products are released. Public relations firm Coakley-Heagerty used an early announcement in 1984 to build interest among potential customers. Its client was Nolan Bushnell, formerly of Atari Inc., who wanted to promote the new Sente Technologies, but his contract with Atari prohibited doing so until a later date. The firm created an advertising campaign—including brochures and a shopping-mall appearance—around a large ambiguous box covered in brown paper to increase curiosity until Sente could be announced.
Early announcements send signals not only to customers and the media, but also to providers of support products, regulatory agencies, financial analysts, investors, and other parties. For example, an early announcement can relay information to vendors, letting them know to prepare marketing and shelf space. It can signal third-party developers to begin work on their own products, and it can be used to persuade a company's investors that they are actively developing new, profitable ideas. When IBM announced its Professional Workstation computer in 1986, they noted the lack of third-party programs written for it at the time, signaling those developers to start preparing. Microsoft usually announces information about its operating systems early because third-party developers are dependent on that information to develop their own products.
A developer can strategically announce a product that is in the early stages of development, or before development begins, to gain competitive advantage over other developers. In addition to the "vaporware" label, this is also called "ambush marketing", and "fear, uncertainty and doubt" (FUD) by the press. If the announcing developer is a large company, this may be done to influence smaller companies to stop development of similar products. The smaller company might decide their product will not be able to compete, and that it is not worth the development costs. It can also be done in response to a competitor's already released product. The goal is to make potential customers believe a second, better product will be released soon. The customer might reconsider buying from the competitor, and wait. In 1994, as customer anticipation increased for Microsoft's new version of Windows (codenamed "Chicago"), Apple announced a set of upgrades to its own System 7 operating system that were not due to be released until two years later. The Wall Street Journal wrote that Apple did this to "blunt Chicago's momentum".
A premature announcement can cause others to respond with their own. When VisiCorp announced Visi On in November 1982, it promised to ship the product by spring 1983. The news forced Quarterdeck Office Systems to announce in April 1983 that its DESQ would ship in November 1983. Microsoft responded by announcing Windows 1.0 in fall 1983, and Ovation Technologies followed by announcing Ovation in November. InfoWorld noted in May 1984 that of the four products only Visi On had shipped, albeit more than a year late and with only two supported applications.
Industry publications widely accused companies of using early announcements intentionally to gain competitive advantage over others. In his 1989 Network World article, Joe Mohen wrote the practice had become a "vaporware epidemic", and blamed the press for not investigating claims by developers. "If the pharmaceutical industry were this careless, I could announce a cure for cancer today – to a believing press." In 1985 Stewart Alsop began publishing his influential monthly Vaporlist, a list of companies he felt announced their products too early, hoping to dissuade them from the practice; among the entries in January 1988 were a Verbatim Corp. optical drive that was 30 months late, WordPerfect for Macintosh (12 months), IBM OS/2 1.1 (nine months), and Lotus 1-2-3 for OS/2 and Macintosh (nine and three months late, respectively). Wired Magazine began publishing a similar list in 1997. Seven major software developers—including Ashton-Tate, Hewlett-Packard and Sybase—formed a council in 1990, and issued a report condemning the "vacuous product announcement dubbed vaporware and other misrepresentations of product availability" because they felt it had hurt the industry's credibility.
Antitrust allegations
In the United States, announcing a product that does not exist to gain a competitive advantage is illegal via Section 2 of the Sherman Antitrust Act of 1890, but few hardware or software developers have been found guilty of it. The section requires proof that the announcement is both provably false, and has actual or likely market impact.
False or misleading announcements designed to influence stock prices are illegal under United States securities fraud laws. The complex and changing nature of the computer industry, marketing techniques, and lack of precedent for applying these laws to the industry can mean developers are not aware their actions are illegal. The U.S. Securities and Exchange Commission issued a statement in 1984 with the goal of reminding companies that securities fraud also applies to "statements that can reasonably be expected to reach investors and the trading markets".
Several companies have been accused in court of using knowingly false announcements to gain market advantage. In 1969, the United States Justice Department accused IBM of doing this in the case United States v. IBM. After IBM's competitor, Control Data Corporation (CDC), released a computer, IBM announced the System/360 Model 91. The announcement resulted in a significant reduction in sales of CDC's product. The Justice Department accused IBM of doing this intentionally because the System/360 Model 91 was not released until three years later. IBM avoided preannouncing products during the antitrust case, but after the case ended it resumed the practice. The company likely announced its PCjr in November 1983—four months before general availability in March 1984—to hurt sales of rival home computers during the important Christmas sales season. In 1985 The New York Times wrote
The practice was not called "vaporware" at the time, but publications have since used the word to refer specifically to it. Similar cases have been filed against Kodak, AT&T, and Xerox.
US District Judge Stanley Sporkin was a vocal opponent of the practice during his review of the settlement resulting from United States v. Microsoft Corp. in 1994. "Vaporware is a practice that is deceitful on its face and everybody in the business community knows it," said Sporkin. One of the accusations made during the trial was that Microsoft had illegally used early announcements. The review began when three anonymous companies protested the settlement, claiming the government did not thoroughly investigate Microsoft's use of the practice. Specifically, they claimed Microsoft announced its Quick Basic 3 program to slow sales of its competitor Borland's recently released Turbo Basic program. The review was dismissed for lack of explicit proof.
See also
List of vaporware
List of commercial failures in video gaming
Technology demonstration
Osborne effect
Development hell
Notes
References
External links
Community Memory postings from 1996 on the term's origins crediting Ann Winblad and Stewart Alsop.
RELease 1.0 November 1983 — a scanned copy of Esther Dyson's original article
Wired Magazine Vaporware Awards
Vaporware 1997: We Hardly Knew Ye
Vaporware 1998: Windows NT Wins
Vaporware 1999: The 'Winners'
Vaporware 2000: Missing Inaction
Vaporware 2001: Empty Promises
Vaporware 2002: Tech up in Smoke?
Vaporware 2003: Nuke 'Em if Ya Got 'Em
Vaporware 2004: Phantom Haunts Us All
Vaporware 2005: Better Late Than Never
Vaporware 2006: Return of the King
Vaporware 2007: Long Live the King
Vaporware 2008: Crushing Disappointments, False Promises and Plain Old BS
Vaporware 2009: Inhale the Fail
Vaporware 2010: The Great White Duke
Software release

Android KitKat

Android KitKat is the codename for the eleventh Android mobile operating system, representing release version 4.4. Unveiled on September 3, 2013, KitKat focused primarily on optimizing the operating system for improved performance on entry-level devices with limited resources.
1.47% of Android devices run KitKat.
History
Android 4.4 "KitKat" was officially announced on September 3, 2013. The release was internally codenamed "Key Lime Pie"; John Lagerling, director of Android global partnerships, and his team decided to drop the name, arguing that "very few people actually know the taste of a key lime pie". Aiming for a codename that was "fun and unexpected", his team pursued the possibility of naming the release "KitKat" instead. Lagerling phoned a representative of Nestlé, which owns the Kit Kat brand and produces the confectionery outside the United States (where it is produced by The Hershey Company under license), and quickly reached a preliminary deal for a promotional collaboration between the two companies, later finalized in a meeting at Mobile World Congress in February 2013. The partnership was not revealed publicly, or even to other Google employees and Android developers (who otherwise continued to internally refer to the OS as "KLP"), until its official announcement in September.
As part of the promotional efforts, Kit Kat bars in the shape of the Android robot logo were produced, while Hershey ran a contest in the United States with prizes of Nexus 7 tablets and Google Play Store credit.
The Nexus 5, developed by LG Electronics, was unveiled on September 30, 2013, as the launch device for KitKat.
Google continued to support the Android 4.4 source code with security patches until October 2017.
Development
Continuing on from the focus on improving visual performance and responsiveness on Android 4.1 "Jelly Bean", the main objective of Android 4.4 was to optimize the platform for better performance on low-end devices, without compromising its overall capabilities and functionality. The initiative was codenamed "Project Svelte", which Android head of engineering Dave Burke joked was a weight loss plan after Jelly Bean's "Project Butter" added "weight" to the OS. To simulate lower-spec devices, Android developers used Nexus 4 devices underclocked to run at a reduced CPU speed with only a single core active, 512 MB memory, and at 960×540 display resolution—specifications meant to represent a common low-end Android device.
A development tool known as ProcStats was developed in order to analyze the memory usage of apps over time, especially those that run background services. This data was used to optimize and decouple Google apps and services found to be inefficient, thus helping to reduce the overall memory usage of Android. Additionally, 4.4 was designed to be more aggressive in managing memory, helping to guard against apps wasting too much memory.
Features
User experience
The overall interface of KitKat further downplays the "Holo" appearance introduced in Android 4.0, replacing the remaining blue accents with grey and white (such as the status bar icons) and removing the Wi-Fi upstream and downstream traffic indicators (triangles pointing up and down), though these can still be seen in the quick settings menu.
When the device is connected to an access point that has no Internet access, the Wi-Fi icon is now colored orange instead of grey.
The appearance may deviate in custom vendor distributions such as TouchWiz.
Apps may trigger a translucent status and navigation bar appearance, or trigger a full screen mode ("Immersive mode") to hide them entirely. The launcher also received a refreshed appearance, with the implementation of the translucent navigation bars, and the replacement of the black backdrop in the application drawer with a translucent backdrop. Additionally, action overflow menu buttons in apps are always visible, even on devices with the deprecated "Menu" navigation key. In the Settings menu, users can now specify a default Home (launcher) and text messaging app.
On stock devices, the Messaging and Movie Studio apps were removed; the former was replaced by Google Hangouts, which supported SMS. The AOSP Gallery app was also deprecated in favor of Google+ Photos.
Platform
A new runtime environment known as the Android Runtime (ART), intended to replace the Dalvik virtual machine, was introduced as a technology preview in KitKat. ART is a cross-platform runtime which supports the x86, ARM, and MIPS architectures in both 32-bit and 64-bit environments. Unlike Dalvik, which uses just-in-time compilation (JIT), ART compiles apps upon installation, which are then run exclusively from the compiled version from then on. This technique removes the processing overhead associated with the JIT process, improving system performance.
Devices with 512 MB of RAM or less report as "low RAM" devices. Using an API, apps may detect low RAM devices and modify their functionality accordingly. KitKat also supports zram. WebView components were updated to utilize a version of the Google Chrome rendering engine. A new Storage Access Framework API allows apps to retrieve files in a consistent manner; as part of the framework, a new system file picker (branded as "Documents") allows users to access files from various sources (including those exposed by apps, such as online storage services).
A public API was introduced for creating and managing text messaging clients. Sensor batching, step detection and counter APIs were also added. KitKat supports host card emulation for near-field communications, which allows apps to emulate a smart card for activities such as mobile payments.
Criticism
Memory card writing disabled
Write access to MicroSD memory cards for non-system (user-installed) software was disabled in this Android version, with no official option to grant selected applications write access manually.
In response, many users rooted their devices to circumvent the restriction.
The restriction was officially lifted in Android 5.0 Lollipop, albeit only for applications targeting an updated API level (21 or higher), limiting backwards compatibility.
Write access to the internal storage and to USB On-The-Go storage was not affected by the restriction.
Notes
See also
Android version history
iOS 7
Windows Phone 8
Windows 8
OS X Mavericks
References
External links
Android (operating system)
2013 software |
57381799 | https://en.wikipedia.org/wiki/Dynamic%20causal%20modeling | Dynamic causal modeling | Dynamic causal modeling (DCM) is a framework for specifying models, fitting them to data and comparing their evidence using Bayesian model comparison. It uses nonlinear state-space models in continuous time, specified using stochastic or ordinary differential equations. DCM was initially developed for testing hypotheses about neural dynamics. In this setting, differential equations describe the interaction of neural populations, which directly or indirectly give rise to functional neuroimaging data e.g., functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG) or electroencephalography (EEG). Parameters in these models quantify the directed influences or effective connectivity among neuronal populations, which are estimated from the data using Bayesian statistical methods.
Procedure
DCM is typically used to estimate the coupling among brain regions and the changes in coupling due to experimental changes (e.g., time or context). A model of interacting neural populations is specified, with a level of biological detail dependent on the hypotheses and available data. This is coupled with a forward model describing how neural activity gives rise to measured responses. Estimating the generative model identifies the parameters (e.g. connection strengths) from the observed data. Bayesian model comparison is used to compare models based on their evidence, which can then be characterised in terms of parameters.
DCM studies typically involve the following stages:
Experimental design. Specific hypotheses are formulated and an experiment is conducted.
Data preparation. The acquired data are pre-processed (e.g., to select relevant data features and remove confounds).
Model specification. One or more forward models (DCMs) are specified for each dataset.
Model estimation. The model(s) are fitted to the data to determine their evidence and parameters.
Model comparison. The evidence for each model is used for Bayesian Model Comparison (at the single-subject level or at the group level) to select the best model(s). Bayesian model averaging (BMA) is used to compute a weighted average of parameter estimates over different models.
The key stages are briefly reviewed below.
Experimental design
Functional neuroimaging experiments are typically either task-based or examine brain activity at rest (resting state). In task-based experiments, brain responses are evoked by known deterministic inputs (experimentally controlled stimuli). These experimental variables can change neural activity through direct influences on specific brain regions, such as evoked potentials in the early visual cortex, or via a modulation of coupling among neural populations; for example, the influence of attention. These two types of input - driving and modulatory - are parameterized separately in DCM. To enable efficient estimation of driving and modulatory effects, a 2x2 factorial experimental design is often used - with one factor serving as the driving input and the other as the modulatory input.
Resting state experiments have no experimental manipulations within the period of the neuroimaging recording. Instead, hypotheses are tested about the coupling of endogenous fluctuations in neuronal activity, or in the differences in connectivity between sessions or subjects. The DCM framework includes models and procedures for analysing resting state data, described in the next section.
Model specification
All models in DCM have the following basic form:

dx/dt = f(x, u, θ)
y = g(x, φ) + ε

The first equality describes the change in neural activity x with respect to time (i.e. dx/dt), which cannot be directly observed using non-invasive functional imaging modalities. The evolution of neural activity over time is controlled by a neural function f with parameters θ and experimental inputs u. The neural activity x in turn causes the timeseries y (second equality), which is generated via an observation function g with parameters φ. Additive observation noise ε completes the observation model. Usually, the neural parameters θ are of key interest; these represent, for example, connection strengths that may change under different experimental conditions.
Specifying a DCM requires selecting a neural model and observation model and setting appropriate priors over the parameters; e.g. selecting which connections should be switched on or off.
Functional MRI
The neural model in DCM for fMRI is a Taylor approximation that captures the gross causal influences between brain regions and their change due to experimental inputs (see picture). This is coupled with a detailed biophysical model of the generation of the BOLD response and the MRI signal, based on the Balloon model of Buxton et al., which was supplemented with a model of neurovascular coupling. Additions to the neural model have included interactions between excitatory and inhibitory neural populations and non-linear influences of neural populations on the coupling between other populations.
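The Taylor approximation referred to above is the bilinear neural state equation dx/dt = (A + Σⱼ uⱼ Bⱼ)x + C u, where A holds fixed connection strengths, each Bⱼ encodes how experimental input j modulates those connections, and C holds driving influences. The following is a minimal illustrative sketch (not SPM's actual implementation); the two-region network and all parameter values are invented for the example:

```python
import numpy as np

def simulate_bilinear_dcm(A, B, C, u, dt=0.01):
    """Euler-integrate the bilinear neural state equation
    dx/dt = (A + sum_j u_j(t) * B_j) x + C u(t)."""
    n, T = A.shape[0], u.shape[1]
    x = np.zeros((n, T))
    for t in range(1, T):
        # Effective connectivity at this instant: fixed plus modulated part
        J = A + sum(u[j, t - 1] * B[j] for j in range(u.shape[0]))
        dx = J @ x[:, t - 1] + C @ u[:, t - 1]
        x[:, t] = x[:, t - 1] + dt * dx
    return x

# Hypothetical two-region network: input drives region 0 and
# simultaneously strengthens the 0 -> 1 connection.
A = np.array([[-1.0, 0.0],
              [ 0.4, -1.0]])          # intrinsic (fixed) connectivity
B = np.array([[[0.0, 0.0],
               [0.3, 0.0]]])          # input modulates the 0 -> 1 connection
C = np.array([[1.0], [0.0]])          # input drives region 0 only
u = np.zeros((1, 2000))
u[0, 100:600] = 1.0                   # boxcar stimulus
x = simulate_bilinear_dcm(A, B, C, u)
```

In a real DCM analysis A, B and C are estimated from data rather than specified; the simulation only illustrates the role of the forward model.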
DCM for resting state studies was first introduced in Stochastic DCM, which estimates both neural fluctuations and connectivity parameters in the time domain, using Generalized Filtering. A more efficient scheme for resting state data was subsequently introduced which operates in the frequency domain, called DCM for Cross-Spectral Density (CSD). Both of these can be applied to large-scale brain networks by constraining the connectivity parameters based on the functional connectivity. Another recent development for resting state analysis is Regression DCM, implemented in the Tapas software collection (see Software implementations). Regression DCM operates in the frequency domain, but linearizes the model under certain simplifications, such as having a fixed (canonical) haemodynamic response function. This enables rapid estimation of large-scale brain networks.
EEG / MEG
DCM for EEG and MEG data uses more biologically detailed neural models than DCM for fMRI, due to the higher temporal resolution of these measurement techniques. These can be classed into physiological models, which recapitulate neural circuitry, and phenomenological models, which focus on reproducing particular data features. The physiological models can be further subdivided into two classes. Conductance-based models derive from the equivalent circuit representation of the cell membrane developed by Hodgkin and Huxley in the 1950s. Convolution models were introduced by Wilson & Cowan and Freeman in the 1970s and involve a convolution of pre-synaptic input by a synaptic kernel function. Some of the specific models used in DCM are as follows:
Physiological models:
Convolution models:
DCM for evoked responses (DCM for ERP). This is a biologically plausible neural mass model, extending earlier work by Jansen and Rit. It emulates the activity of a cortical area using three neuronal sub-populations (see picture), each of which rests on two operators. The first operator transforms the pre-synaptic firing rate into a Post-Synaptic Potential (PSP), by convolving pre-synaptic input with a synaptic response function (kernel). The second operator, a sigmoid function, transforms the membrane potential into a firing rate of action potentials.
DCM for LFP (Local Field Potentials). Extends DCM for ERP by adding the effects of specific ion channels on spike generation.
Canonical Microcircuit (CMC). Used to address hypotheses about laminar-specific ascending and descending connections in the brain, which underpin the predictive coding account of functional brain architectures. The single pyramidal cell population from DCM for ERP is split into deep and superficial populations (see picture). A version of the CMC has been applied to model multi-modal MEG and fMRI data.
Neural Field Model (NFM). Extends the models above into the spatial domain, modelling continuous changes in current across the cortical sheet.
Conductance models:
Neural Mass Model (NMM) and Mean-field model (MFM). These have the same arrangement of neural populations as DCM for ERP, above, but are based on the Morris-Lecar model of the barnacle muscle fibre, which in turn derives from the Hodgkin and Huxley model of the squid giant axon. They enable inference about ligand-gated excitatory (Na+) and inhibitory (Cl-) ion flow, mediated through fast glutamatergic and GABAergic receptors. Whereas DCM for fMRI and the convolution models represent the activity of each neural population by a single number - its mean activity - the conductance models include the full density (probability distribution) of activity within the population. The 'mean-field assumption' used in the MFM version of the model assumes that the density of one population's activity depends only on the mean of another. A subsequent extension added voltage-gated NMDA ion channels.
Phenomenological models:
DCM for phase coupling. Models the interaction of brain regions as Weakly Coupled Oscillators (WCOs), in which the rate of change of phase of one oscillator is related to the phase differences between itself and other oscillators.
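The weakly coupled oscillator idea can be illustrated with a Kuramoto-style toy model (a simplification, not the exact DCM for phase coupling formulation): the rate of change of each phase depends on sinusoids of the phase differences. The frequencies and coupling strengths below are invented:

```python
import numpy as np

def weakly_coupled_oscillators(omega, K, dt=0.001, steps=20000):
    """Integrate d(phi_i)/dt = omega_i + sum_j K[i,j] * sin(phi_j - phi_i)."""
    phi = np.array([0.0, 2.0])              # start out of phase
    for _ in range(steps):
        dphi = omega + (K * np.sin(phi[None, :] - phi[:, None])).sum(axis=1)
        phi = phi + dt * dphi
    return phi

omega = np.array([10.0, 10.0])              # identical intrinsic frequencies
K = np.array([[0.0, 5.0],
              [5.0, 0.0]])                  # symmetric coupling
phi = weakly_coupled_oscillators(omega, K)
# With sufficient coupling, the phase difference shrinks toward zero.
diff = np.angle(np.exp(1j * (phi[0] - phi[1])))
```

With the coupling set to zero instead, the initial phase difference would persist indefinitely, since both oscillators share the same intrinsic frequency.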
Model estimation
Model inversion or estimation is implemented in DCM using variational Bayes under the Laplace assumption. This provides two useful quantities. The log marginal likelihood or model evidence is the probability of observing the data under a given model; generally, it cannot be calculated explicitly and is approximated by a quantity called the negative variational free energy F, referred to in machine learning as the Evidence Lower Bound (ELBO). Hypotheses are tested by comparing the evidence for different models based on their free energy, a procedure called Bayesian model comparison.
Model estimation also provides estimates of the parameters (for example, connection strengths) which maximise the free energy. Where models differ only in their priors, Bayesian Model Reduction can be used to derive the evidence and parameters of nested or reduced models analytically and efficiently.
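To make the notion of model evidence concrete: for a linear-Gaussian model the log marginal likelihood can be computed exactly, which mimics how DCM scores competing models by their (approximated) evidence. This sketch is purely illustrative and unrelated to SPM's variational scheme; the data, design matrices, and hyperparameters are invented:

```python
import numpy as np

def log_evidence(y, X, alpha=1.0, sigma2=1.0):
    """Log marginal likelihood of a linear-Gaussian model:
    y = X w + e,  w ~ N(0, alpha^-1 I),  e ~ N(0, sigma2 I).
    Marginalizing out w gives y ~ N(0, sigma2*I + alpha^-1 * X X^T)."""
    n = len(y)
    S = sigma2 * np.eye(n) + (1.0 / alpha) * X @ X.T
    _, logdet = np.linalg.slogdet(S)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(S, y))

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = 2.0 * x + 0.1 * rng.standard_normal(50)     # data generated with a slope

X_full = np.column_stack([x, np.ones(50)])      # slope + intercept model
X_null = np.ones((50, 1))                       # intercept-only model
le_full = log_evidence(y, X_full, sigma2=0.01)
le_null = log_evidence(y, X_null, sigma2=0.01)
# The model containing the true slope regressor has higher evidence.
```

Comparing `le_full` and `le_null` is the same logic as Bayesian model comparison in DCM, where the free energy stands in for the (intractable) log evidence.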
Model comparison
Neuroimaging studies typically investigate effects that are conserved at the group level, or which differ between subjects. There are two predominant approaches for group-level analysis: random effects Bayesian Model Selection (BMS) and Parametric Empirical Bayes (PEB). Random Effects BMS posits that subjects differ in terms of which model generated their data - e.g. drawing a random subject from the population, there might be a 25% chance that their brain is structured like model 1 and a 75% chance that it is structured like model 2. The analysis pipeline for the BMS approach follows a series of steps:
Specify and estimate multiple DCMs per subject, where each DCM (or set of DCMs) embodies a hypothesis.
Perform Random Effects BMS to estimate the proportion of subjects whose data were generated by each model.
Calculate the average connectivity parameters across models using Bayesian Model Averaging. This average is weighted by the posterior probability for each model, meaning that models with greater probability contribute more to the average than models with lower probability.
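Steps 2-3 above can be sketched numerically: under flat model priors, posterior model probabilities follow from a softmax of the (log) evidences, and the Bayesian model average is the probability-weighted average of parameter estimates. The log evidences and parameter values below are hypothetical:

```python
import numpy as np

# Hypothetical log evidences (free energies) of three models for one subject
log_ev = np.array([-120.0, -118.0, -130.0])

# Posterior model probabilities under flat priors: softmax of log evidence
p = np.exp(log_ev - log_ev.max())
p /= p.sum()

# Hypothetical posterior mean of one connection strength under each model
theta = np.array([0.45, 0.60, 0.10])

# Bayesian model average: estimates weighted by model probability, so
# more probable models contribute more to the average
theta_bma = p @ theta
```

Note how the third model, with much lower evidence, contributes almost nothing to the average.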
Alternatively, Parametric Empirical Bayes (PEB) can be used, which specifies a hierarchical model over parameters (e.g., connection strengths). It eschews the notion of different models at the level of individual subjects, and assumes that people differ in the (parametric) strength of connections. The PEB approach models distinct sources of variability in connection strengths across subjects using fixed effects and between-subject variability (random effects). The PEB procedure is as follows:
Specify a single 'full' DCM per subject, which contains all the parameters of interest.
Specify a Bayesian General Linear Model (GLM) to model the parameters (the full posterior density) from all subjects at the group level.
Test hypotheses by comparing the full group-level model to reduced group-level models where certain combinations of connections have been switched off.
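The spirit of the group-level GLM in step 2 can be illustrated with an ordinary least-squares analogue. Real PEB is Bayesian and propagates full posterior densities with their uncertainties; this sketch only uses invented subject-level point estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub = 20
group = np.repeat([0.0, 1.0], n_sub // 2)     # e.g. controls vs patients
# Hypothetical subject-level estimates of one connection strength:
# group mean 0.3, group difference 0.2, between-subject noise
theta = 0.3 + 0.2 * group + 0.05 * rng.standard_normal(n_sub)

# Group-level GLM: theta = X beta + noise
# (column 1 models the commonalities, column 2 the group difference)
X = np.column_stack([np.ones(n_sub), group])
beta, *_ = np.linalg.lstsq(X, theta, rcond=None)
```

Here `beta[0]` recovers the group-average connection strength and `beta[1]` the between-group difference, which is what the group-level design matrix in PEB encodes.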
Validation
Developments in DCM have been validated using different approaches:
Face validity establishes whether the parameters of a model can be recovered from simulated data. This is usually performed alongside the development of each new model.
Construct validity assesses consistency with other analytical methods. For example, DCM has been compared with Structural Equation Modelling and other neurobiological computational models.
Predictive validity assesses the ability to predict known or expected effects. This has included testing against iEEG / EEG / stimulation and against known pharmacological treatments.
Limitations / drawbacks
DCM is a hypothesis-driven approach for investigating the interactions among pre-defined regions of interest. It is not ideally suited for exploratory analyses. Although methods have been implemented for automatically searching over reduced models (Bayesian Model Reduction) and for modelling large-scale brain networks, these methods require an explicit specification of model space. In neuroimaging, approaches such as psychophysiological interaction (PPI) analysis may be more appropriate for exploratory use; especially for discovering key nodes for subsequent DCM analysis.
The variational Bayesian methods used for model estimation in DCM are based on the Laplace assumption, which treats the posterior over parameters as Gaussian. This approximation can fail in the context of highly non-linear models, where local minima may preclude the free energy from serving as a tight bound on log model evidence. Sampling approaches provide the gold standard; however, they are time consuming and have typically been used to validate the variational approximations in DCM.
Software implementations
DCM is implemented in the Statistical Parametric Mapping software package, which serves as the canonical or reference implementation (http://www.fil.ion.ucl.ac.uk/spm/software/spm12/). It has been re-implemented and developed in the Tapas software collection (https://www.tnu.ethz.ch/en/software/tapas.html) and the VBA toolbox (https://mbb-team.github.io/VBA-toolbox/).
References
Further reading
Dynamic Causal Modelling on Scholarpedia
Understanding DCM: ten simple rules for the clinician
Neural masses and fields in dynamic causal modeling
Neuroimaging |
10376337 | https://en.wikipedia.org/wiki/NEi%20Nastran | NEi Nastran | NEi Nastran was an engineering analysis and simulation software product of NEi Software (formerly known as Noran Engineering, Inc.). Based on NASA's Structural Analysis program NASTRAN, the software is a finite element analysis (FEA) solver used to generate solutions for linear and nonlinear stress, dynamics, and heat transfer characteristics of structures and mechanical components. NEi Nastran software is used with all major industry pre- and post-processors, including Femap, a product of Siemens PLM Software, and the in-house brands NEi Nastran in-CAD, NEi Fusion, and NEi Works for SolidWorks.
This software was acquired by Autodesk in May 2014.
History
The original NASTRAN program came out of NASA’s need to develop a common generic structural analysis program that would be used by all of the centers supporting the space program. A specification was written and a contract was awarded to Computer Sciences Corporation for development of NAsa STRuctural ANalysis (NASTRAN) software. NASTRAN was released to NASA in 1968.
Improvements
In the late 1960s, Finite Element Analysis software was confined to run on expensive mainframe computers and highly trained specialists were needed to apply the program. In this environment, the aerospace industry was the typical user because they had critical projects which could justify the resources FEA demanded. With improvements to the software and wider use of mainframes, FEA technology gradually spread to large corporations that could afford funding the huge investment in hardware, software, and a dedicated FEA staff. Usage spread from primarily aerospace and military applications to the automobile and maritime industries.
The microprocessor revolution and the advent of Personal Computers (PCs) in the 1980s brought tremendous improvements in computing power, significant reductions in computing costs, and the steady development of numerical methods and algorithms. In the mid 1980s, Noran Engineering recognized the long term advantages and impact that the PC hardware revolution could have on the engineering analysis field and embarked on a project to significantly enhance and modernize the original NASTRAN code and port it to PCs.
The first commercial version of NEi Nastran for use on PCs was released in 1990. The new code had a number of changes in architecture and programming language compared to legacy Nastran, written originally for mainframes. These differences were intended to take advantage of the dramatic changes in computer hardware taking place and to give the code key strategic advantages on the new PC platform. For example, since the cost of memory had dropped dramatically, it became feasible to perform quickly in memory many operations that previously could only be done on disk.
Present day
NEi Nastran V10.0 was released in May 2010. It incorporates over 85 customer driven enhancements including the following additions: nonlinear composite Progressive Ply Failure Analysis (PPFA), concrete material model, direct enforced motion, bolt preload, enhanced rigid element features, visualization support for various entities, automatic dynamic plots during nonlinear analysis, transparent max/min, and a new look and feel for its Editor tool.
Since August 2014, the NEi Nastran technology is included in "Autodesk Nastran 2015", "Autodesk Nastran In-CAD 2015" and "Autodesk Simulation Mechanical 2015 R1".
External links
NEi Software
NEi Nastran on Eng-Tips forum
NASA/ NEi Nastran
References
Finite element software
Simulation software
Finite element software for Linux |
250466 | https://en.wikipedia.org/wiki/Generative%20art | Generative art | Generative art refers to art that in whole or in part has been created with the use of an autonomous system. An autonomous system in this context is generally one that is non-human and can independently determine features of an artwork that would otherwise require decisions made directly by the artist. In some cases the human creator may claim that the generative system represents their own artistic idea, and in others that the system takes on the role of the creator.
"Generative art" often refers to algorithmic art (algorithmically determined computer generated artwork) and synthetic media (general term for any algorithmically-generated media), but artists can also make it using systems of chemistry, biology, mechanics and robotics, smart materials, manual randomization, mathematics, data mapping, symmetry, tiling, and more.
History
The use of the word "generative" in the discussion of art has developed over time. The use of "Artificial DNA" defines a generative approach to art focused on the construction of a system able to generate unpredictable events, all with a recognizable common character. The use of autonomous systems, required by some contemporary definitions, focuses a generative approach where the controls are strongly reduced. This approach is also named "emergent". Margaret Boden and Ernest Edmonds have noted the use of the term "generative art" in the broad context of automated computer graphics in the 1960s, beginning with artwork exhibited by Georg Nees and Frieder Nake in 1965:
The first such exhibition showed the work of Nees in February 1965, which some claim was titled "Generative Computergrafik". While Nees does not himself remember, this was the title of his doctoral thesis published a few years later. The correct title of the first exhibition and catalog was "computer-grafik". "Generative art" and related terms were in common use by several other early computer artists around this time, including Manfred Mohr. Vera Molnár (born 1924) is a French media artist of Hungarian origin. Molnár is widely considered a pioneer of generative art, and is also one of the first women to use computers in her art practice. The term "Generative Art", in the sense of dynamic artwork-systems able to generate multiple artwork-events, was first used explicitly for the "Generative Art" conference in Milan in 1998.
The term has also been used to describe geometric abstract art where simple elements are repeated, transformed, or varied to generate more complex forms. Thus defined, generative art was practised by the Argentinian artists Eduardo McEntyre and Miguel Ángel Vidal in the late 1960s. In 1972 the Romanian-born Paul Neagu created the Generative Art Group in Britain. It was populated exclusively by Neagu using aliases such as "Hunsy Belmood" and "Edward Larsocchi". In 1972 Neagu gave a lecture titled "Generative Art Forms" at the Queen's University, Belfast Festival.
In 1970 the School of the Art Institute of Chicago created a department called Generative Systems. As described by Sonia Landy Sheridan the focus was on art practices using the then new technologies for the capture, inter-machine transfer, printing and transmission of images, as well as the exploration of the aspect of time in the transformation of image information. Also noteworthy is John Dunn, first a student and then a collaborator of Sheridan.
In 1988 Clauser identified the aspect of systemic autonomy as a critical element in generative art:
In 1989 Celestino Soddu defined the Generative Design approach to Architecture and Town Design in his book Citta' Aleatorie.
In 1989 Franke referred to "generative mathematics" as "the study of mathematical operations suitable for generating artistic images."
From the mid-1990s Brian Eno popularized the terms generative music and generative systems, making a connection with earlier experimental music by Terry Riley, Steve Reich and Philip Glass.
From the end of the 20th century, communities of generative artists, designers, musicians and theoreticians began to meet, forming cross-disciplinary perspectives.
The first meeting about generative art was in 1998, at the inaugural International Generative Art conference at Politecnico di Milano University, Italy.
In Australia, the Iterate conference on generative systems in the electronic arts followed in 1999.
On-line discussion has centred around the eu-gene mailing list, which began late 1999, and has hosted much of the debate which has defined the field. These activities have more recently been joined by the Generator.x conference in Berlin starting in 2005.
In 2012 the new journal GASATHJ, Generative Art Science and Technology Hard Journal was founded by Celestino Soddu and Enrica Colabella jointing several generative artists and scientists in the Editorial Board.
Some have argued that as a result of this engagement across disciplinary boundaries, the community has converged on a shared meaning of the term. As Boden and Edmonds put it in 2011:
In the call of the Generative Art conferences in Milan (annually starting from 1998), the definition of Generative Art by Celestino Soddu:
Discussion on the eu-gene mailing list was framed by the following definition by Adrian Ward from 1999:
A similar definition is provided by Philip Galanter:
Types
Music
Johann Philipp Kirnberger's "Musikalisches Würfelspiel" (Musical Dice Game) of 1757 is considered an early example of a generative system based on randomness. Dice were used to select musical sequences from a numbered pool of previously composed phrases. The system balanced order and disorder: the pre-composed phrases supplied the order, while the dice rolls introduced the disorder.
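A toy sketch of such a dice game — random rolls selecting one pre-composed phrase per bar from a numbered pool. The phrase labels, bar count, and lookup scheme are invented for illustration, not Kirnberger's actual tables:

```python
import random

# A numbered pool of pre-composed phrases: one candidate phrase for
# each (bar, dice total) combination. These labels are placeholders.
phrases = {
    (bar, roll): f"phrase-{bar}-{roll}"
    for bar in range(8)           # 8 bars in the piece
    for roll in range(2, 13)      # possible totals of two six-sided dice
}

def roll_piece(rng):
    """Select one phrase per bar by rolling two dice (the disorder),
    drawing from the fixed phrase pool (the order)."""
    return [phrases[(bar, rng.randint(1, 6) + rng.randint(1, 6))]
            for bar in range(8)]

rng = random.Random(0)
piece = roll_piece(rng)   # one of many possible realizations
```

Re-seeding the generator reproduces the same realization, while different seeds yield different pieces from the same fixed pool.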
The fugues of J.S. Bach could be considered generative, in that there is a strict underlying process that is followed by the composer. Similarly, serialism follows strict procedures which, in some cases, can be set up to generate entire compositions with limited human intervention.
Composers such as John Cage, Farmers Manual, and Brian Eno have used generative systems in their works.
Visual art
The artist Ellsworth Kelly created paintings by using chance operations to assign colors in a grid. He also created works on paper that he then cut into strips or squares and reassembled using chance operations to determine placement.
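Kelly-style chance operations for assigning colors in a grid can be sketched as follows; the palette and grid size are arbitrary choices for illustration, not taken from any specific work:

```python
import random

def chance_grid(rows, cols, colors, seed=None):
    """Assign a color to each cell of a grid purely by chance operations."""
    rng = random.Random(seed)
    return [[rng.choice(colors) for _ in range(cols)] for _ in range(rows)]

palette = ["red", "yellow", "blue", "green", "white"]
grid = chance_grid(8, 8, palette, seed=42)
```

The artist's decision lies in choosing the palette, grid, and procedure; the chance operation then determines each placement.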
Artists such as Hans Haacke have explored processes of physical and social systems in artistic context.
François Morellet has used both highly ordered and highly disordered systems in his artwork. Some of his paintings feature regular systems of radial or parallel lines to create Moiré Patterns. In other works he has used chance operations to determine the coloration of grids. Sol LeWitt created generative art in the form of systems expressed in natural language and systems of geometric permutation. Harold Cohen's AARON system is a longstanding project combining software artificial intelligence with robotic painting devices to create physical artifacts.
Steina and Woody Vasulka are video art pioneers who used analog video feedback to create generative art. Video feedback is now cited as an example of deterministic chaos, and the early explorations by the Vasulkas anticipated contemporary science by many years.
Software systems exploiting evolutionary computing to create visual form include those created by Scott Draves and Karl Sims.
The digital artist Joseph Nechvatal has exploited models of viral contagion.
Autopoiesis by Ken Rinaldo includes fifteen musical and robotic sculptures that interact with the public and modify their behaviors based on both the presence of the participants and each other.
Jean-Pierre Hebert and Roman Verostko are founding members of the Algorists, a group of artists who create their own algorithms to create art.
A. Michael Noll, of Bell Telephone Laboratories, Incorporated, programmed computer art using mathematical equations and programmed randomness, starting in 1962.
The French artist Jean-Max Albert, beside environmental sculptures like Iapetus and O=C=O, developed a project dedicated to vegetation itself, in terms of its biological activity. The Calmoduline Monument project is based on the property of a protein, calmodulin, to bind selectively to calcium. External physical constraints (wind, rain, etc.) modify the electric potential of a plant's cellular membranes and, consequently, the flux of calcium; the calcium in turn controls the expression of the calmodulin gene. The plant can thus, when stimulated, modify its "typical" growth pattern. The basic principle of this monumental sculpture is that, to the extent that these signals could be picked up and transported, they could be enlarged, translated into colors and shapes, and show the plant's "decisions", suggesting a level of fundamental biological activity.
Maurizio Bolognini works with generative machines to address conceptual and social concerns.
Mark Napier is a pioneer in data mapping, creating works based on the streams of zeros and ones in ethernet traffic, as part of the "Carnivore" project. Martin Wattenberg pushed this theme further, transforming "data sets" as diverse as musical scores (in "Shape of Song", 2001) and Wikipedia edits (History Flow, 2003, with Fernanda Viegas) into dramatic visual compositions.
The Canadian artist San Base developed a "Dynamic Painting" algorithm in 2002. Using computer algorithms as "brush strokes," Base creates sophisticated imagery that evolves over time to produce a fluid, never-repeating artwork.
Since 1996 there have been ambigram generators that automatically generate ambigrams.
The Italian composer Pietro Grossi, a pioneer of computer music, extended his experiments to images, applying the same procedures used in his musical work to computer graphics: he wrote programs with specific autonomous decisions and developed the concept of HomeArt, presented for the first time in the exhibition New Atlantis: The Continent of Electronic Music, organized by the Venice Biennale in 1986.
Software art
For some artists, graphic user interfaces and computer code have become an independent art form in themselves. Adrian Ward created Auto-Illustrator as a commentary on software and generative methods applied to art and design.
Architecture
In 1987 Celestino Soddu created the artificial DNA of Italian medieval towns, capable of generating endless 3D models of cities identifiable as belonging to the same idea.
In 2010, Michael Hansmeyer generated architectural columns in a project called "Subdivided Columns – A New Order (2010)". The piece explored how the simple process of repeated subdivision can create elaborate architectural patterns. Rather than designing any columns directly, Hansmeyer designed a process that produced columns automatically. The process could be run again and again with different parameters to create endless permutations. Endless permutations could be considered a hallmark of generative design.
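The generative character of such a process is easy to see in miniature. The sketch below is not Hansmeyer's actual algorithm, just a hypothetical one-dimensional analogue: each pass inserts a new vertex between every neighbouring pair, so a designed process (rather than a designed object) produces the result, and varying a single parameter yields endless permutations.

```python
def subdivide(points, ratio, depth):
    """Repeatedly insert one new vertex between each pair of neighbours.

    The designer specifies only the process; changing `ratio` or `depth`
    reshapes the entire output, giving endless permutations.
    """
    for _ in range(depth):
        refined = [points[0]]
        for a, b in zip(points, points[1:]):
            refined.append(a + (b - a) * ratio)  # new vertex splits the segment
            refined.append(b)
        points = refined
    return points

# A 2-point segment grows to 2**depth + 1 vertices after `depth` passes.
profile = subdivide([0.0, 1.0], ratio=0.38, depth=6)
print(len(profile))  # → 65
```

Running the same process with a different `ratio` produces a structurally similar but geometrically distinct profile, which is the sense in which repeated subdivision generates a family of designs rather than a single one.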
Literature
Writers such as Tristan Tzara, Brion Gysin, and William Burroughs used the cut-up technique to introduce randomization to literature as a generative system. Jackson Mac Low produced computer-assisted poetry and used algorithms to generate texts; Philip M. Parker has written software to automatically generate entire books. Jason Nelson used generative methods with speech-to-text software to create a series of digital poems from movies, television and other audio sources.
Live coding
Generative systems may be modified while they operate, for example by using interactive programming environments such as SuperCollider, Fluxus and TidalCycles, or patching environments such as Max/MSP, Pure Data and vvvv. This is a standard approach to programming by artists, but may also be used to create live music or video by manipulating generative systems on stage, a performance practice that has become known as live coding. As with many examples of software art, because live coding emphasises human authorship rather than autonomy, it may be considered in opposition to generative art.
Theories
Philip Galanter
In the most widely cited theory of generative art, Philip Galanter (2003) describes generative art systems in the context of complexity theory. In particular, the notion of effective complexity, due to Murray Gell-Mann and Seth Lloyd, is cited. In this view, both highly ordered and highly disordered generative art can be viewed as simple: highly ordered generative art minimizes entropy and allows maximal data compression, while highly disordered generative art maximizes entropy and disallows significant data compression. Maximally complex generative art blends order and disorder in a manner similar to biological life, and indeed biologically inspired methods are most frequently used to create complex generative art. This view is at odds with the earlier, information-theory-influenced views of Max Bense and Abraham Moles, in which complexity in art increases with disorder.
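The order/disorder distinction can be illustrated, very roughly, with data compression as a crude stand-in for information content: a highly ordered signal compresses almost completely, a maximally disordered one barely at all, and a blend of order and disorder sits in between. The sketch below is a hypothetical illustration (not from Galanter's paper) using Python's `zlib`:

```python
import random
import zlib

def compressed_ratio(data: bytes) -> float:
    """Compressed size relative to original: low for order, high for disorder."""
    return len(zlib.compress(data)) / len(data)

random.seed(0)
n = 10_000
ordered = b"AB" * (n // 2)                                    # minimal entropy
disordered = bytes(random.randrange(256) for _ in range(n))   # maximal entropy
mixed = bytes(random.choice(b"AB") for _ in range(n))         # order blended with chance

for name, data in [("ordered", ordered), ("mixed", mixed), ("disordered", disordered)]:
    print(name, round(compressed_ratio(data), 3))
```

In effective-complexity terms, the two extremes are both "simple" (one is trivially describable, the other is pure noise); only the mixed signal carries structure worth describing, which is the regime Galanter associates with complex generative art.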
Galanter notes further that, given the use of visual symmetry, pattern, and repetition by the most ancient known cultures, generative art is as old as art itself. He also addresses the mistaken assumption by some that rule-based art is synonymous with generative art. For example, some art is based on constraint rules that disallow the use of certain colors or shapes. Such art is not generative, because constraint rules are not constructive; by themselves they do not assert what is to be done, only what cannot be done.
Margaret Boden and Ernest Edmonds
In their 2009 article, Margaret Boden and Ernest Edmonds agree that generative art need not be restricted to that done using computers, and that some rule-based art is not generative. They develop a technical vocabulary that includes Ele-art (electronic art), C-art (computer art), D-art (digital art), CA-art (computer assisted art), G-art (generative art), CG-art (computer based generative art), Evo-art (evolutionary based art), R-art (robotic art), I-art (interactive art), CI-art (computer based interactive art), and VR-art (virtual reality art).
Questions
The discourse around generative art can be characterised by the theoretical questions which motivate its development. McCormack et al. propose the following questions, shown with paraphrased summaries, as the most important:
Can a machine originate anything? This relates to machine intelligence: can a machine generate something new, meaningful, surprising and of value (a poem, an artwork, a useful idea, a solution to a long-standing problem)?
What is it like to be a computer that makes art? If a computer could originate art, what would it be like from the computer's perspective?
Can human aesthetics be formalised?
What new kinds of art does the computer enable? Many generative artworks do not involve digital computers, but what does generative computer art bring that is new?
In what sense is generative art representational, and what is it representing?
What is the role of randomness in generative art? For example, what does the use of randomness say about the place of intentionality in the making of art?
What can computational generative art tell us about creativity? How could generative art give rise to artefacts and ideas that are new, surprising and valuable?
What characterises good generative art? How can we form a more critical understanding of generative art?
What can we learn about art from generative art? For example, can the art world be considered a complex generative system involving many processes outside the direct control of artists, who are agents of production within a stratified global art market?
What future developments would force us to rethink our answers?
A further question concerns postmodernism: are generative art systems the ultimate expression of the postmodern condition, or do they point to a new synthesis based on a complexity-inspired world-view?
See also
Artmedia
Conway's Game of Life
Digital morphogenesis
Evolutionary art
Generative music
Interactive art
New media art
Non-fungible token
Post-conceptualism
Synthetic media
Systems art
Virtual art
References
Further reading
Grau, Oliver (2003). Virtual Art: From Illusion to Immersion (MIT Press/Leonardo Book Series). Cambridge, Massachusetts: The MIT Press.
Wands, Bruce (2006). Art of the Digital Age. London: Thames & Hudson.
Pearson, Matt (2011). Generative Art: A Practical Guide Using Processing. Manning.
Playing with Time A conversation between Will Wright and Brian Eno on generative creation.
Off Book: Generative Art - Computers, Data, and Humanity Documentary produced by Off Book (web series)
Thomas Dreher: History of Computer Art, chap.III.2, IV.3, VIII.1
"Epigenetic Painting: Software as Genotype", Roman Verostko (International Symposium on Electronic Art, Utrecht, 1988); Leonardo, 23:1, 1990, pp. 17–23
Visual arts media
Computer art
Digital art
New media
Electronic music
Visual arts genres
Art movements
Painting techniques
Conceptual art

HICIT in Shorouk Academy

The Higher Institute of Computer Science & Information Technology in El Shorouk City is officially licensed by the Ministry of Higher Education and Scientific Research of the Arab Republic of Egypt. The Higher Institute of Computer & Information Technology in El Shorouk was granted its original accreditation according to the decree of the Supreme Council of Universities No. 79 on June 5, 2004.
Academic study
Academic fields
Data analytics
Data warehousing
Data mining
Software engineering
Cloud computing
System analysis
System design
Management information systems
Programming and SE development
Academic degrees
The Higher Institute of Computer Science provides the Egyptian bachelor's degree in computer science, which is equivalent to the bachelor's degree in computer science awarded by Egyptian universities.
Testing centers
Prometric testing center
In March 2008 an agreement was signed between the Institute and the Prometric company. This agreement gives the Institute the right to run its own testing center, providing all Microsoft exams as well as Sun and Apple exams.
ICDL testing center
In January 2010 another agreement was signed, this time between the Institute and UNESCO. This agreement gives the Institute the right to run its own testing center, providing the international ICDL and the Egyptian version of the ICDL.
Student activities
Microsoft student partners
2011 MSPs
GDG Shorouk
GDG Shorouk, formerly known as Shorouk GTUG (Google Technology User Group), is the Google Developers Group at Shorouk Academy, founded in 2011 by the following students of HICIT:
Ahmed Mahmoud
Ahmed Farid
Ashraf Hesham
Nasser Ali
References
External links
Shorouk GTUG Official Facebook Page
Meahope Official Facebook Page
Universities in Egypt
Education in Cairo
Educational institutions established in 2004
2004 establishments in Egypt

Darmstadt

Darmstadt is a city in the state of Hesse in Germany, located in the southern part of the Rhine-Main-Area (Frankfurt Metropolitan Region). Darmstadt has around 160,000 inhabitants, making it the fourth largest city in the state of Hesse after Frankfurt am Main, Wiesbaden, and Kassel. Other major and local cities in the surrounding area are Frankfurt am Main and Offenbach, Wiesbaden and Mainz, Mannheim and Heidelberg.
Darmstadt holds the official title "City of Science" as it is a major centre of scientific institutions, universities, and high-technology companies. The European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) and the European Space Operations Centre (ESOC) are located in Darmstadt, as is the GSI Centre for Heavy Ion Research, where several chemical elements such as bohrium (1981), meitnerium (1982), hassium (1984), darmstadtium (1994), roentgenium (1994), and copernicium (1996) were discovered. The existence of the following elements was also confirmed at the GSI Centre for Heavy Ion Research: nihonium (2012), flerovium (2009), moscovium (2012), livermorium (2010), and tennessine (2012). The Facility for Antiproton and Ion Research (FAIR) is an international accelerator facility under construction. Darmstadt is also the seat of the world's oldest pharmaceutical company, Merck, which is the city's largest employer.
The Mathildenhöhe, including the Darmstadt artists' colony, a major centre of the Jugendstil artistic movement, referring both to the group of artists active in the city in the late 19th and early 20th century, as well as the buildings which they designed, together with the Russian Chapel in Darmstadt, was recognized as a World Heritage Site by UNESCO in 2021.
Darmstadt was formerly the capital of a sovereign country, the Grand Duchy of Hesse and its successor, the People's State of Hesse, a federal state of Germany. As the capital of an increasingly prosperous duchy, the city gained some international prominence and remains one of the wealthiest cities in Europe. In the 20th century, industry (especially chemicals), as well as large science and electronics (and later, information technology) sectors became increasingly important, and are still a major part of the city's economy. It is also home to the football club SV Darmstadt 98. Alexandra Feodorovna (Alix of Hesse), the wife of Nicholas II of Russia, as well as Maria Alexandrovna (Marie of Hesse), the wife of Alexander II of Russia, who were related, were born in this city.
History
Origins
The name Darmstadt first appears towards the end of the 11th century, then as Darmundestat. Its origins are unknown. 'Dar-mund' in Middle Low German is translated as "Boggy Headlands", but it could be a misspelling in local dialect of another name. It is sometimes stated that the name derives from the 'Darmbach' (a small stream formerly running through the city). In fact, the stream received its current name much later, after the city, not vice versa.
Darmstadt was chartered as a city by the Holy Roman Emperor Ludwig the Bavarian in 1330, at which time it belonged to the counts of Katzenelnbogen. The city, then called Darmstait, became a secondary residence for the counts, with a small castle established at the site of the current, much larger edifice.
When the house of Katzenelnbogen became extinct in 1479, the city was passed to the Landgraviate of Hesse, and was seat of the ruling landgraves (1567–1806) and thereafter (to 1918) of the grand dukes of Hesse.
Industrial age
The city grew in population during the 19th century from little over 10,000 to 72,000 inhabitants. A polytechnical school, which later became a Technical University now known as TU Darmstadt, was established in 1877.
In the beginning of the 20th century, Darmstadt was an important centre for the art movement of Jugendstil, the German variant of Art Nouveau. Annual architectural competitions led to the building of many architectural treasures of this period. Also during this period, in 1912 the chemist Anton Kollisch, working for the pharmaceutical company Merck, first synthesised the chemical MDMA (ecstasy) in Darmstadt. Darmstadt's municipal area was extended in 1937 to include the neighbouring localities of Arheilgen and Eberstadt, and in 1938 the city was separated administratively from the surrounding district (Kreis).
Nazi Germany
Darmstadt was the first city in Germany to force Jewish shops to close in early 1933, shortly after the Nazis took power in Germany. The shops were only closed for one day, for "endangering communal order and tranquility". In 1942, over 3,000 Jews from Darmstadt were first forced into a collection camp located in the Liebigschule, and later deported to concentration camps where most eventually died.
Several prominent members of the German resistance movement against the Nazis were citizens of Darmstadt, including Wilhelm Leuschner and Theodor Haubach, both executed for their opposition to Hitler's regime.
Darmstadt was first bombed on 30 July 1940, and 34 other air raids would follow before the war's end. The old city centre was largely destroyed in a British bombing raid on 11 September 1944. This attack was an example of "area bombing" using high explosive and incendiary bombs, which combined in that attack to create a firestorm, a self-sustaining combustion process in which winds generated by the fire ensure it continues to burn until everything possible has been consumed. During this attack an estimated 11,000 to 12,500 of the inhabitants were killed, and 66,000 to 70,000 were left homeless. Over three-quarters of Darmstadt's inner city was destroyed. The city was captured by the American 4th Armored Division on 20 March 1945. Post-war rebuilding was done in a relatively plain architectural style, although a number of the historic buildings were rebuilt to their original appearance.
Post–World War II
Throughout the 19th and 20th centuries, Darmstadt became home to many technology companies and research institutes, and has been promoting itself as a "city of science" since 1997. It is well known as a high-tech centre in the vicinity of Frankfurt Airport, with important activities in spacecraft operations (the European Space Operations Centre, European Organisation for the Exploitation of Meteorological Satellites), chemistry, pharmacy, information technology, biotechnology, telecommunications (substantial Deutsche Telekom presence) and mechatronics. In 2000, its region also scored Rank 3 amongst 97 German regions in the WirtschaftsWoche test ranking Germany's high-tech regions.
The roots of Darmstadt University of Applied Sciences go back to 1876, along with those of Technische Universität Darmstadt (famous for its first chair of electrical engineering and its inventions), when both universities were a single integrated institution. The need for a separate, industry-oriented research and teaching institution was felt in the early 1930s; the University of Applied Sciences finally emerged as a separate institution in 1971 and is the largest university of applied sciences in Hesse (German: Hessen), with about 11,000 students.
The TU Darmstadt is one of the important technical institutes in Germany and is well known for its research and teaching in the Electrical, Mechanical and Civil Engineering disciplines. Together with other tertiary institutions, the TU is responsible for the large student population of the city, which stood at 33,547 in 2004.
Boroughs
Darmstadt has nine official 'Stadtteile' (boroughs). These are:
Darmstadt-Arheilgen
Darmstadt-Bessungen
Darmstadt-Eberstadt
Darmstadt-Kranichstein
Darmstadt-Mitte ("Central Darmstadt")
Darmstadt-Nord ("North")
Darmstadt-Ost ("East")
Darmstadt-West
Darmstadt-Wixhausen
Population development
Politics
Mayor
The current mayor of Darmstadt is Jochen Partsch of Alliance 90/The Greens, who was elected in 2011 and re-elected in 2017.
The most recent mayoral election was held on 19 March 2017, and the results were as follows:
Candidate | Party | Votes | %
Jochen Partsch | Alliance 90/The Greens | 25,291 | 50.4
Michael Siebel | Social Democratic Party | 8,364 | 16.7
Kerstin Lau | UFFBASSE | 6,235 | 12.4
Christoph Hentzen | Free Democratic Party | 2,801 | 5.6
Uli Franke | The Left | 2,145 | 4.3
Helmut Klett | UWiGA | 2,094 | 4.2
Hans Mohrmann | Alternative for Germany | 2,031 | 4.0
Achim Pfeffer | Independent | 973 | 1.9
Thorsten Przygoda | Independent | 293 | 0.6
Valid votes | | 50,227 | 99.2
Invalid votes | | 388 | 0.8
Total | | 50,615 | 100.0
Electorate/voter turnout | | 115,316 | 43.9
Source: City of Darmstadt
The following is a list of mayors since 1945:
City council
The Darmstadt city council (Stadtverordnetenversammlung) governs the city alongside the Mayor. The most recent city council election was held on 14 March 2021, and the results were as follows:
Party | Lead candidate | Votes | % | +/- | Seats | +/-
Alliance 90/The Greens (Grüne) | Hildegard Förster-Heldmann | 1,151,498 | 27.4 | 2.3 | 20 | 1
Social Democratic Party (SPD) | Tim Huß | 703,686 | 16.7 | 0.5 | 12 | ±0
Christian Democratic Union (CDU) | Paul Georg Wandrey | 654,797 | 15.6 | 2.6 | 11 | 2
The Left (Die Linke) | Karl-Heinz Böck | 310,074 | 7.4 | 0.6 | 5 | ±0
Volt Germany (Volt) | Nicolas Kämmerer | 289,023 | 6.9 | New | 5 | New
UFFBASSE | Kerstin Lau | 269,301 | 6.4 | 1.3 | 5 | ±0
Free Democratic Party (FDP) | Leif Blum | 234,121 | 5.6 | 0.3 | 4 | ±0
Alternative for Germany (AfD) | Günter Zabel | 191,982 | 4.6 | 4.6 | 3 | 4
UWiGA | Erich Bauer | 130,867 | 3.1 | 0.6 | 2 | 1
Die PARTEI (PARTEI) | Holger Eisenblätter | 90,254 | 2.1 | 1.8 | 2 | 2
Voters' Association of Darmstadt (WGD) | Falk Neumann | 85,320 | 2.0 | New | 1 | New
Free Voters (FW) | Harald Uhl | 79,293 | 1.9 | New | 1 | New
Take Part in Darmstadt | Dorothea Mondry | 13,680 | 0.3 | New | 0 | New
Valid votes | | 60,815 | 96.6 | | |
Invalid votes | | 2,141 | 3.4 | | |
Total | | 62,956 | 100.0 | | 71 | ±0
Electorate/voter turnout | | 115,119 | 54.7 | 6.9 | |
Source: Statistics Hesse
(Under the Hessian local electoral system each voter may cast as many votes as there are seats, which is why party vote totals exceed the number of valid ballots.)
Transport
Darmstadt is well connected to all major means of transportation, including the Autobahn network, the Intercity-Express network and a major international airport.
Roads
Darmstadt is connected to a number of major roads, including two Autobahnen (Bundesautobahn 5 and Bundesautobahn 67). The main road passing west–east is the Bundesstraße 26, the Bundesstraße 3 runs north–south. The rural areas east of the city in the Odenwald are accessed by several secondary roads.
Public transport in Darmstadt
The extensive public transport system of Darmstadt is integrated in the RMV (the transportation authority of the Frankfurt Metropolitan Area). The backbone of public transport in Darmstadt is its modern tram system with nine lines, complemented by a local bus service serving all parts of the city. Darmstadt is furthermore connected to the Frankfurt S-Bahn system and is served by regional bus lines. In addition, regional rail lines (R64, R65, R66) connect six secondary railway stations within the city.
Regional rail links
Darmstadt is connected to the Frankfurt rapid transit network by S-Bahn line S3. Besides that, a number of regional trains connect secondary railway stations within Darmstadt and the region with Darmstadt Hauptbahnhof (main station), offering a net of inner city and regional train links.
National rail links
By its main railway station "Darmstadt Hauptbahnhof", which is located in the western part of the central city, Darmstadt is connected to the rest of Germany and Europe by the Intercity-Express network and other long-distance trains. Darmstadt Hauptbahnhof is a busy station with 12 platforms which serves as a transportation hub for the southern Hesse/Odenwald region.
Airports
The historically important local airfield is closed to aviation at large, being reserved for the use of the Technische Universität Darmstadt.
Frankfurt International Airport
Darmstadt can be easily accessed from around the world via Frankfurt Airport (Flughafen Frankfurt am Main) which is located north of central Darmstadt and connected to it via Autobahn 5, S-Bahn, several bus lines and a direct express bus-link ("Airliner"). The airport ranks among the world's busiest airports by passenger traffic and is the second-busiest airport by cargo traffic in Europe. The airport also serves as the main hub for German flag carrier Lufthansa.
Frankfurt Egelsbach Airport
Frankfurt Egelsbach Airport (Flugplatz Frankfurt-Egelsbach) is a busy general aviation airport located 5 km north of Darmstadt, near the town of Egelsbach.
Frankfurt Hahn Airport
Despite the name, Frankfurt Hahn Airport (Flughafen Frankfurt-Hahn) is located far outside the Frankfurt Metro Area, approximately to the west in Lautzenhausen (Rhineland-Palatinate). Hahn Airport is a major base for low-cost carrier Ryanair. This airport can only be reached by car or bus.
National coach services
Darmstadt is served by several national and European bus links which connect Darmstadt with other German and European cities.
Parks, architecture, and attractions
Castles and historical buildings
Darmstadt was the capital of an independent country (the Grand Duchy of Hesse) until 1871 and the capital of the German state of Hesse until 1945. It is due to its past as a capital city that it has many architectural testimonies of this period. Many of its major architectural landmarks were created by Georg Moller, who was appointed the court master builder of the Grand Duchy of Hesse. Because the last ruling Grand Duke of Hesse, Ernst Ludwig, was a grandson of Queen Victoria and brother to Empress Alexandra of Russia, the architecture of Darmstadt has been influenced by British and Russian imperial architecture, with many examples still existing, such as the Luisenplatz with its grand-ducal column, the old Hessian State Theatre (at Karolinenplatz) and the Russian Chapel by Leon Benois. The Russian church, St. Mary Magdalene Chapel, is named in honor of the patron saint of Tsar Nicholas' mother and was built of Russian stone on Russian soil brought to Darmstadt by train. It was used by the Russian imperial family and court during regular visits to the Tsarina's brother and family in Darmstadt. The grand-ducal palace of Darmstadt is located in the city centre. It was the residence of the counts of Hesse-Darmstadt, later Grand Dukes of Hesse by the grace of Napoleon. The rulers of Hesse also owned Jagdschloss Kranichstein, a hunting lodge in Kranichstein which is nowadays used as a five-star hotel. The most famous castle in the Darmstadt region is Frankenstein Castle, due to claims that the real castle may have had an influence on Mary Shelley's decision to choose the name Frankenstein for her monster-creating scientist. This castle dates back to the 13th century, but it was acquired by the counts of Hesse-Darmstadt in 1662.
Modern architecture
Darmstadt has a rich tradition in modern architecture. After 1945 several "Meisterbauten" (masterful architectonic creations) were built that set standards for modern architecture. These buildings still exist and are used for various public and private purposes. In the late 1990s the Waldspirale ('Forest Spiral') was built, a residential complex by the Austrian Friedensreich Hundertwasser. An almost surreal building, it is internationally famous for its near-total rejection of rectangular forms, down to every window having a different shape, a style that is a trademark of Hundertwasser's work. Hundertwasser died before the Waldspirale was finished.
Art Nouveau
Darmstadt was a centre of the Art Nouveau movement. Surviving examples of the Jugendstil period include the Rosenhöhe, a landscaped English-style rose garden from the 19th century, recently renovated and replanted, the UNESCO World Heritage Site Mathildenhöhe, with the Hochzeitsturm ('Wedding tower', also commonly known as the 'Five-Finger-Tower') by Joseph Maria Olbrich, the Russian Chapel in Darmstadt and large exhibition halls as well as many private villas built by Jugendstil architects who had settled in Darmstadt. German Art Nouveau is commonly known by its German name, Jugendstil. The name is taken from the artistic journal, Die Jugend, which was published in Munich and which espoused the new artistic movement. It was founded in 1896 by Georg Hirth (Hirth remained editor until his death in 1916, and the magazine continued to be published until 1940). The magazine was instrumental in promoting the style in Germany. As a result, its name was adopted as the most common German-language term for the style: Jugendstil ("young style"). Although, during the early 20th century, the word was applied to only two-dimensional examples of the graphic arts, especially the forms of organic typography and graphic design found in and influenced by German magazines like Jugend, Pan, and Simplicissimus, it is now applied to more general manifestations of Art Nouveau visual arts in Germany, the Netherlands, the Baltic states, and Nordic countries. The two main centres for Jugendstil art in Germany were Munich and Darmstadt.
Squares
The Luisenplatz, the central square of the city, forms the centre of the city and is the main public transport hub. In 1844 the Ludwigsäule (called Langer Lui, meaning Long Ludwig), a 33-metre (108 ft) column commemorating Ludwig I, first Grand Duke of Hesse, was placed in the middle of the square. While the column still stands, the square is today surrounded by mostly modern buildings. Other important squares are the Marktplatz (see image) near the old city hall and the Sabaisplatz at the Mathildenhöhe.
Parks
The city has a high density of parks. Among the most important parks are the English style Herrngarten in central Darmstadt. In former times it was part of the Royal Gardens used exclusively by the dukes of Darmstadt. Today it is a public park, heavily used in every season of the year. Other important parks are the French style parks Prinz-Georgs-Garten and Orangerie, the modern style Bürgerpark ("People's Park") in northern Darmstadt and the mystical Park Rosenhöhe, ("Rose Heights") which also serves as the cemetery for the dukes and their immediate family, with two impressive mausoleum buildings (Altes Mausoleum and Neues Mausoleum) in its remote parts. The Botanischer Garten in eastern Darmstadt is a botanical garden maintained by the Technische Universität Darmstadt with a fine collection of rare plants and trees.
Churches
The Protestant Stadtkirche Darmstadt, built in 1369, is in the pedestrian zone of the downtown city center, next to the historic Hotel Bockshaut. The church has Gothic elements along with Renaissance and Baroque ones, and it houses the royal crypt. Hotel Bockshaut was built in 1580 as a church presbytery. The most important Catholic church is St. Ludwig in central Darmstadt. The Russian Chapel in Darmstadt is a Russian Orthodox church which is still in use. It was built and used as a private chapel by the last Tsar of Russia, Nicholas II, whose wife Alexandra was born in Darmstadt. Although Russian Orthodox churches also exist in other cities outside Russia, the Russian Chapel in Darmstadt was the only official Russian church used by the Tsar outside the Russian Empire. It is said that the chapel was built on Russian soil that was brought to Darmstadt exclusively for the purpose of building the Tsar's private chapel on it.
Festivals
Every year on the first weekend of July the Heinerfest festival is held in the streets surrounding the old ducal palace. It is a traditional German festival with music acts, beer halls, amusement rides and booths selling trinkets and food. The similar 'Schloßgrabenfest', which is more live music-oriented, is held in the same location every year in May. These two festivals attract 700,000 and 400,000 visitors respectively.
Culture
Darmstadt has a rich cultural heritage. The Staatstheater Darmstadt (State Theatre Darmstadt) dates back to the year 1711. The present building has been in use since 1972 and has three halls which can be used independently. The "Grand Hall" (Großes Haus) provides seats for 956 people and serves as Darmstadt's opera house. The "Small Hall" (Kleines Haus) is mostly used for plays and dance and has 482 seats. A separate small hall (Kammerspiele) with 120 seats is used for chamber plays.
Among the museums in Darmstadt the most important are the Hessisches Landesmuseum (Hessian State Museum), the Porcelain Museum (exhibition of the ducal porcelain), the Schlossmuseum (exhibition of the ducal residence and possessions), the Kunsthalle Darmstadt (exhibitions of modern art), the exhibition centre Mathildenhöhe and the Museum Künstlerkolonie (Art Nouveau museum).
The Jazz-Institut Darmstadt is Germany's largest publicly accessible jazz archive.
The Internationales Musikinstitut Darmstadt, harboring one of the world's largest collections of post-war sheet music, also hosts the biennial Internationale Ferienkurse für Neue Musik, a summer school in contemporary classical music founded by Wolfgang Steinecke. A large number of avant-garde composers have attended and given lectures there, including Olivier Messiaen, Luciano Berio, Milton Babbitt, Pierre Boulez, Luigi Nono, John Cage, György Ligeti, Iannis Xenakis, Karlheinz Stockhausen, Mauricio Kagel, and Helmut Lachenmann.
The Deutsche Akademie für Sprache und Dichtung provides writers and scholars with a place to research the German language. The Academy's annual Georg Büchner Prize, named in memory of Georg Büchner, is considered the most prestigious literary award for writers of German language.
Geography
Darmstadt is located in the Upper Rhine Plain (German: Oberrheinische Tiefebene), a major rift, about 350 km (217 mi) long and on average 50 km (31 mi) wide, between the cities of Frankfurt in the north and Basel in the south. Darmstadt's southeastern boroughs are located in the spurs of the Odenwald, a low mountain range in Southern Hesse between the Main and Neckar rivers.
Climate
Southern Hesse is well known for its mild climate, which allows winegrowing on a large scale in the region south of Darmstadt. The weather is often volatile: summers are warm and humid with frequent thunderstorms, while winters are mostly relatively mild with frequent periods of high fog. Snowfall is most likely in January and February, but mild winters without considerable snowfall can occur.
Education
Schools
The City of Darmstadt offers students a broad variety of public primary, secondary and tertiary schools. Besides these, private schools exist, e.g. the Catholic secondary school Edith-Stein-Schule, the Adventists' Schulzentrum Marienhöhe, an anthroposophic Waldorf school, a Comenius school and other faith-based private schools.
Universities
Darmstadt University of Applied Sciences (German: Hochschule Darmstadt) has the highest number of industrial linkage programs of all the universities of applied sciences. Its roots date back to 1876; however, it did not emerge as a separate institution until 1971. Today (2017) it is the largest university of applied sciences in the State of Hesse, with about 16,000 students, offering courses in architecture, chemical engineering, materials science, civil engineering, computer science, design, economics, electrical engineering and information technology, mathematics and science, mechanical engineering, media (including information science and engineering), plastics engineering, social and cultural studies, and several social sciences.
Technical University of Darmstadt (German: Technische Universität Darmstadt), commonly referred to as TU Darmstadt, is a prestigious research university in Germany. It was founded in 1877 and received the right to award doctorates in 1899. In 1882 it was the first university in the world to set up a chair in electrical engineering, and in 1883 the first faculty for electrical engineering was founded there. The University is organized in 13 departments and 5 fields of study, which all together offer about 100 courses of studies. The fields of study offer interdisciplinary degree courses in which students take lectures in multiple departments. The University, as its title suggests, offers degree courses in the fields of electrical, mechanical and civil engineering, architecture, computer science, mathematics and the natural sciences. It also offers courses in economics, law, history, politics, sociology, psychology, sport science and linguistics, as well as degree courses for teaching positions at German vocational schools and Gymnasiums.
The Protestant University of Applied Sciences Darmstadt (EHD) is an officially recognised and Church-sponsored University. The sponsors are the Protestant Church in Hesse and Nassau, the Protestant Church of Kurhesse-Waldeck and the social welfare organisation of both Hessian Protestant Churches, the Diakonie Hesse. The EHD has approximately 1,700 students, 40 professors and 10 scientific employees and about 100 visiting lecturers every semester.
Institutions
Technology
Darmstadt is home to many research institutions such as the Fraunhofer Society (Fraunhofer IGD, Fraunhofer LBF, Fraunhofer SIT) and the Gesellschaft für Schwerionenforschung (GSI, "Society for Heavy Ion Research"), which operates a particle accelerator in northern Darmstadt. Among other elements, the GSI discovered the chemical element darmstadtium (atomic number: 110), named after the city in 2003. This makes Darmstadt one of only eight settlements with elements named after them (the others being Ytterby in Sweden (four elements); Stockholm in Sweden (holmium); Strontian in Scotland; Copenhagen in Denmark (whose Latin name gives hafnium); Paris (whose Latin name gives lutetium); Berkeley, California; and Dubna in Russia). Various other elements, including meitnerium (atomic number: 109) (1982), hassium (atomic number: 108) (1984), roentgenium (atomic number: 111) (1994) and copernicium (atomic number: 112) (1996), were also synthesized in the Darmstadt facility.
The European Space Operations Centre (ESOC) of the European Space Agency is located in Darmstadt. From here, various deep-space exploration spacecraft and Earth-orbiting satellites are operated for the purposes of scientific research, and technology development and demonstration.
EUMETSAT, the European Organisation for the Exploitation of Meteorological Satellites, operates the principal European meteorological satellites from its headquarters, including the first and second generations of Meteosat geostationary satellites, and the polar-orbiting Metop series.
Darmstadt is a centre for the pharmaceutical and chemical industry, with Merck, Röhm and Schenck RoTec (part of The Dürr Group) having their main plants and centres here.
United States military presence
U.S. forces entered the city of Darmstadt on 25 March 1945. At the end of World War II, Darmstadt was among the 112 communities where U.S. forces were stationed. Early units stationed here included elements of the U.S. Constabulary, Air Force units and a Quartermaster School.
Over the years, the U.S. military community Darmstadt – under a variety of designations – served as home for thousands of American soldiers and their families. It included six principal installations in Darmstadt and nearby Babenhausen, Griesheim and Münster, plus several housing areas, an airfield and a large number of smaller facilities as far away as Bensheim and Aschaffenburg. The military newspaper European Stars and Stripes also had its headquarters there. As of 1993, the Darmstadt military community also assumed responsibility for the remaining U.S. Army facilities in the Frankfurt area.
As part of the U.S. Army's ongoing transformation in Germany, the Darmstadt military community, by then designated U.S. Army Garrison Darmstadt, inactivated on 30 September 2008. Even after the garrison inactivation, however, one unit remained active in Darmstadt: the 66th Military Intelligence Group at the Dagger Complex on Eberstädter Weg, which draws its support from the nearby U.S. Army Garrison Wiesbaden. The website of the 66th Military Intelligence Brigade claims the unit moved out in 2008, but Google Maps and Bing satellite imagery still show full and quarter-full parking lots, respectively, and the U.S. Army Garrison Wiesbaden's website mentions the unit still being active in Darmstadt, with a Marine Corps company stationed there as well. With the exception of the Dagger Complex, all remaining US installations are now empty and closed to the public, pending property disposal by the German authorities.
Tourist sights in Darmstadt
City
Mathildenhöhe with the Art Nouveau Museum
Wedding Tower (Hochzeitsturm) at Sabaisplatz
The former private chapel of the last Tsar of Russia
State Theatre and Opera House
Waldspirale Hundertwasser Building
City Center with Luisenplatz, the Castle and the Market Square
Hauptbahnhof – Central Train Station (Art Nouveau style)
Parks
Herrngarten Park
Botanical Garden (Botanischer Garten)
Vortex Garden
Park Rosenhöhe (Rose Heights Park) with the Ducal Cemetery
Porcelain Museum at Schlossgartenplatz
St. Ludwig Church
State Museum (Landesmuseum)
State Archive/Old Theatre
Train Museum Kranichstein
Region
Odenwald
Bergstrasse
Vineyards at Zwingenberg
Frankenstein Castle
Messel Pit Fossil Site
Melibokus
Notable people
Christoph Graupner (1683–1760), composer and Hofkapellmeister (chapel master) at the court of Hesse-Darmstadt from 1711 to 1754
Justus Freiherr von Liebig (1803–1873), chemist who made major contributions to agricultural and biological chemistry, and was considered the founder of organic chemistry
Friedrich von Flotow (1812–1883), opera composer, died in Darmstadt.
Georg Büchner (1813–1837), dramatist, poet and revolutionist
Carl Amand Mangold (1813–1889), composer and conductor
Friedrich August Kekulé (1829–1896), prominent organic chemist and the principal founder of the theory of chemical structure
Eugen Bracht (1842–1921), landscape painter
Georg von Hertling (1843–1919), politician
Karl Muck (1859–1940), conductor
Ernest Louis, Grand Duke of Hesse (1868–1937), last Grand Duke of Hesse and by Rhine
Karl Wolfskehl (1869–1948), poet, editor and translator
Alexandra Feodorovna (1872–1918), Russian Empress, born as Alix of Hesse, married Tsar Nicholas II of Russia
Christian Stock (1884–1967), politician
Anton Köllisch (1888–1916), chemist who first synthesized MDMA (known as "ecstasy")
Beno Gutenberg (1889–1960), German-American seismologist
Karl Plagge (1897–1957), Wehrmacht officer, saved Lithuanian Jews from extermination during The Holocaust, Righteous among the Nations
Karl-Otto Koch (1897–1945), commandant of the Nazi concentration camps at Buchenwald and Sachsenhausen
Josef Ganz (1898–1967), automotive engineer and pioneer, studied at the Technical University of Darmstadt
Heinrich von Brentano (1904–1964), Federal Minister for Foreign Affairs from 1955 to 1961
Hans Möser (1906–1948), Nazi SS concentration camp officer executed for war crimes
Walter Schmiele (1909–1998), author and translator
Arno Schmidt (1914–1979), author and translator
Hans Stark (1921–1991), head of the admissions detail at Auschwitz-II Birkenau of Auschwitz concentration camp
Georg Stern (1921–1980), operatic singer
Karlheinz Stockhausen (1928–2007), leading 20th-century electronic composer
Günter Strack (1929–1999), actor
Maciej Łukaszczyk (1934–2014), Polish pianist at the Staatstheater Darmstadt, founder and president of the Chopin organisation, recipient of Polish and German orders
Helmut Markwort (born 1936), journalist
Annegret Soltau (born 1946), artist
Cord Meijering (born 1955), Dutch composer
Christoph Lanz (born 1959), journalist
Volker Weidermann (born 1969), writer and journalist
Florika Fink-Hooijer (born 1962), prominent European civil servant
Markus Rühl (born 1972), bodybuilder
Karola Obermüller (born 1977), composer
Björn Bürger (born 1985), operatic baritone
Andrea Petkovic (born 1987), tennis player
Nina Gerhard (born 1974), singer
Zinaida Petrovna Ziberova (born 1909), composer
Twin towns – sister cities
Darmstadt is twinned with:
Alkmaar, Netherlands (1958)
Brescia, Italy (1991)
Bursa, Turkey (1971)
Chesterfield, England, UK (1959)
Freiberg, Germany (1990)
Graz, Austria (1968)
Gstaad (Saanen), Switzerland (1991)
Gyönk, Hungary (1990)
Liepāja, Latvia (1993)
Logroño, Spain (2002)
Płock, Poland (1988)
San Antonio, United States (2017)
Szeged, Hungary (1990)
Trondheim, Norway (1968)
Troyes, France (1958)
Uzhhorod, Ukraine (1992)
See also
Darmstadt University of Applied Sciences
Technical University of Darmstadt
Rhein-Main-Area
References
External links
Discover Darmstadt – City Tourist Website
Germany Travel – Darmstadt
Capitals of former nations
Grand Duchy of Hesse
Merck Group
Odenwald
Holocaust locations in Germany
TUTOR
TUTOR, also known as PLATO Author Language, is a programming language developed for use on the PLATO system at the University of Illinois at Urbana-Champaign beginning in roughly 1965. TUTOR was initially designed by Paul Tenczar for use in computer assisted instruction (CAI) and computer managed instruction (CMI) (in computer programs called "lessons") and has many features for that purpose. For example, TUTOR has powerful answer-parsing and answer-judging commands, graphics, and features to simplify handling student records and statistics by instructors. TUTOR's flexibility, in combination with PLATO's computational power (running on what was considered a supercomputer in 1972), also made it suitable for the creation of games, including flight simulators, war games, dungeon style multiplayer role-playing games, card games, word games, and medical lesson games such as Bugs and Drugs (BND). TUTOR lives on today as the programming language for the Cyber1 PLATO System, which runs most of the source code from 1980s PLATO and has roughly 5000 users as of June 2020.
Origins and development
TUTOR was originally developed as a special purpose authoring language for designing instructional lessons, and its evolution into a general purpose programming language was unplanned.
The name TUTOR was first applied to the authoring language of the PLATO system in the later days of PLATO III.
The first documentation of the language, under this name, appears to have been The TUTOR Manual, CERL Report X-4, by R. A. Avner and P. Tenczar, Jan. 1969.
The article Teaching the Translation of Russian by Computer gives a snapshot of TUTOR from shortly before PLATO IV was operational. Core elements of the language were present, but commands were given in upper case, and instead of using a general mechanism, support for alternative character sets was through special command names such as WRUSS for "write using the Russian character set."
Through the 1970s, the developers of TUTOR took advantage of the fact that the entire corpus of TUTOR programs were stored on-line on the same computer system. Whenever they felt a need to change the language, they ran conversion software over the corpus of TUTOR code to revise all existing code so that it conformed with the changes they had made.
As a result, once new versions of TUTOR were developed, maintaining compatibility with the PLATO version could be very difficult.
Control Data Corporation (CDC), by 1981, had largely expunged the name TUTOR from their PLATO documentation. They referred to the language itself as the PLATO Author Language. The phrase TUTOR file or even TUTOR lesson file survived, however, as the name of the type of file used to store text written in the PLATO Author Language.
Structure of a TUTOR lesson
A TUTOR lesson consists of a sequence of units where each unit begins with the presentation of information and progress from one unit to the next is contingent on correctly answering one or more questions. As with COBOL paragraphs, control may enter a TUTOR unit from the preceding unit and exit into the next, but units are also callable as subroutines using the do or join commands.
Here is an example unit from page 5 of the TUTOR User's Memo, March 1973 (Computer-based Education Research Laboratory, University of Illinois at Urbana-Champaign):
unit math
at 205
write Answer these problems
3 + 3 =
4 × 3 =
arrow 413
answer 6
arrow 613
answer 12
Several things should be immediately apparent from this example.
First, TUTOR is a fixed-format language. Each line begins with a command name, which must fit within a fixed 8-character field; the arguments to that command (the tag) begin at the 9th character. Although the tab key could be used to get to the 9th column, it generated spaces, as PLATO had no tab character.
In some cases, such as the write command above, the tag may consist of multiple lines. Continuation lines are either blank or have a leading tab.
Screen coordinates are presented as single numbers, so 205 refers to line 2 column 5, and 413 refers to line 4 column 13.
What may not be apparent is the control structure implicit in this unit. The arrow command marks the entrance to a judging block. This control structure is one of TUTOR's unique features.
Unique features
TUTOR contained a number of unique features. The following list is not intended as a substitute for a TUTOR manual, but merely highlights the most interesting, innovative, and sometimes confusing features of the language.
Answer judging
A judging block in TUTOR is a control structure that begins with an arrow command and ends with the next arrow, endarrow or unit command. The arrow command also prompts for input, with the special arrow character (resembling "▷") displayed as a prompt at the indicated screen coordinates. In effect, a judging block can be thought of as a backtracking control structure where the student may make multiple attempts to answer a question until a correct answer allows forward progress.
Judging pattern matching
Each judging block consists of a sequence of pattern matching commands, each of which introduces a (possibly empty) block of commands to be executed if that pattern matches. The two most common pattern matching commands were answer and wrong. These had identical pattern matching semantics except that answer judged a student response to be correct if it matched, while wrong judged a student response to be incorrect.
The tag fields on the answer and wrong commands consisted of lists of optional, required and alternative words. Consider this example from exercise 4-1 in the 1973 TUTOR User's Memo:
answer <it, is,a, it's, figure,
polygon>
(right, rt) (triangle, triangular)
This would match answers such as "it is a right triangle" or "it's a triangular figure" or just "rt triangle". It would not match "sort of triangular", because the words "sort of" are not listed as ignored, and it would not match "triangle, right?", because the order is wrong.
The pattern matching subsystem recognized spelling errors, so the words "triangel" or "triangl" would match the example pattern. The lesson author could use the specs command to set how pedantic the system was about spelling errors.
The pattern matching algorithms used by various TUTOR implementations varied in detail, but typically, each word in the input text and each word in the pattern were converted to bit vectors. To see whether a word of student input matched a word of the pattern, the Hamming distance between the two bit vectors was used as a measure of the degree of difference between the words. Bit vectors were typically 60 or 64 bits long, with fields for letter presence, letter pair presence, and the first letter. As a result, the number of one bits in the exclusive or of two such bit vectors approximated the extent of the phonetic difference between the corresponding words.
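As a rough illustration, the bit-vector scheme can be sketched in Python. The field layout and the hash function here are invented for the example; actual PLATO encodings assigned specific bit fields rather than hashing:

```python
# Hypothetical sketch of bit-vector word encoding for fuzzy matching.
# BITS, _h, word_vector and distance are invented names, not PLATO internals.

BITS = 60

def _h(s: str) -> int:
    # small deterministic string hash (stand-in for fixed field assignments)
    v = 0
    for ch in s:
        v = (v * 131 + ord(ch)) & 0xFFFFFFFF
    return v

def word_vector(word: str) -> int:
    vec = 1 << (_h('F' + word[0]) % BITS)      # first-letter field
    for ch in word:                            # letter-presence bits
        vec |= 1 << (_h('L' + ch) % BITS)
    for a, b in zip(word, word[1:]):           # letter-pair-presence bits
        vec |= 1 << (_h('P' + a + b) % BITS)
    return vec

def distance(w1: str, w2: str) -> int:
    # Hamming distance: population count of the exclusive or
    return bin(word_vector(w1) ^ word_vector(w2)).count('1')
```

A small distance ("triangle" vs. "triangel") then reads as a likely misspelling, while an unrelated word ("square") lands much farther away.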
Judging control structures
All early presentations of the control structure of a TUTOR judging block were confusing. In modern terms, however, a judging block can be described as an iterative control structure that exits when the student input is judged correct. The body of this control structure consists of a series of cases, each introduced by a pattern matching command such as answer or wrong. All output produced by the body of the judging loop in the previous cycle is erased from the screen prior to the next cycle.
Consider this example, from exercise 4-1 of the 1973 TUTOR User's Memo:
wrong <it, is,a> square
at 1501
write A square has four
sides.
In the event that the student inputs "square" or "a square", the answer is judged to be incorrect, and the text "A square has four sides." is output starting at line 15 column 1 on the screen. This output remains on the screen until the student begins to enter a new answer, at which point it is erased so that the response to the new answer can be computed. The mechanism by which the display screen rolls back to its previous state varies from implementation to implementation. Early implementations operated by switching the terminal into erase mode and re-executing the entire case that had matched. Some later implementations buffered the output produced during judging so that this output could be erased.
The join command was a unique form of subroutine call. It was defined as being equivalent to textual substitution of the body of the joined unit in place of the join command itself (page 21, 1973 TUTOR User's Memo). As such, a joined unit could contain part of a judging block. Thus, while the judging block is conceptually an iterator enclosing a series of cases, this block may be arbitrarily broken into subroutines. (An alternative subroutine call, the do command, conformed to the usual semantics associated with subroutine calls in other programming languages.)
Graphic and display commands
The PLATO IV student terminal had a 512 by 512 pixel plasma display panel, with hardware support for point plotting, line drawing, and text display. Each pixel on the PLATO IV terminal was either orange or black. The CDC PLATO V terminal used a monochrome black and white CRT to emulate the plasma panel. The built-in character set had 4 sets of 63 characters, each 8 by 16 pixels; half of these were fixed, half were programmable. The TUTOR language provided complete support for this terminal.
There were two coordinate systems (see page II-1 of The TUTOR Language by Bruce Sherwood):
Coarse coordinates were specified in terms of the rows and columns of text. The coarse coordinate 1501, for example, was a reference to line 15 character 1, where the upper left character on the screen was at location 101 and the lower right character was at 3264.
Fine coordinates were specified as X and Y coordinates relative to the lower left corner of the screen. The fine coordinate 0,511 specified the upper left corner of the screen, while 0,496 was equivalent to the coarse 101, allowing for the 16 pixel height of a character and the fact that characters were plotted relative to their lower left corner.
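The relationship between the two systems can be sketched as follows, assuming the standard 8 by 16 pixel character cell on the 512 by 512 screen (coarse_to_fine is an invented name for illustration):

```python
# Sketch of the coarse-to-fine coordinate conversion described above.

def coarse_to_fine(coarse: int) -> tuple:
    row, col = divmod(coarse, 100)   # e.g. 1501 -> row 15, column 1
    x = (col - 1) * 8                # columns are 8 pixels wide
    y = 512 - row * 16               # rows are 16 pixels tall, numbered top-down
    return x, y                      # lower-left corner of the character cell
```

This reproduces the equivalences given above: coarse 101 maps to fine 0,496 and coarse 3264 (the lower right character) maps to fine 504,0.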
Drawing commands
The following example illustrates some of TUTOR's drawing commands.
draw 1812;1852;skip;1844;1544
circle 16,344,288
draw 1837;1537;1535;1633;1833
Note the use of semicolons to separate successive coordinates on the draw command. This allows unambiguous use of comma-separated fine coordinates. Normally, the draw command connects consecutive points with line segments, but by putting skip in the tag, the draw command could be made to conceptually lift its pen.
The tags on the circle command give the radius and fine coordinates of the center. Additional tags could specify starting and ending angles for partial circles.
Hand composing draw commands is difficult, so a picture editor was included in the PLATO system by 1974 to automate this work. This could only deal with drawing commands with constant coordinates.
Text rendering commands
The following example illustrates some of the text rendering tools of TUTOR.
unit title
size 9.5 $$ text 9.5 times normal size
rotate 45 $$ text rotated 45 degrees
at 2519
write Latin
size 0 $$ return to normal writing
rotate 0
at 3125
write Lessons on Verbs
Text rendered in size zero rotation zero used the built-in character rendering hardware of the PLATO terminal, while rendering with nonzero size and rotation was done with line segments and therefore significantly slower due to the speed of the communication link to the terminal.
Control structures
Aside from its unique answer judging mechanisms, TUTOR's original set of control structures was rather sparse. In the mid-1970s, this shortcoming was addressed by introducing if, endif blocks with optional elseif and else sections. The semantics of these control structures was routine, but the syntax inherited the mandatory indentation of the TUTOR language, presaging that of Python, and added a unique nonblank indent character to distinguish indenting from continuation lines.
This is illustrated in the following example, from page S5 of the Summary of TUTOR Commands and System Variables (10th ed) by Elaine Avner, 1981:
if n8<4
. write first branch
. calc n9⇐34
elseif n8=4
. write second branch
. do someunit
else
. write default branch
. if n8>6
. . write special branch
. endif
endif
(The assignment arrow in the calc statement is not rendered correctly in some browsers. It appears similar to <= but as one character. It had a dedicated key on the PLATO IV keyboard.)
The same syntax was used for loop, endloop blocks with semantics comparable to while loops in conventional programming languages. This is illustrated in the following example, from page S6 of the Summary of TUTOR Commands and System Variables (10th ed) by Elaine Avner, 1981:
loop n8<10
. write within loop
. sub1 n8
reloop n8≥5
. write still within loop
. do someunit
outloop n8<3
. write still within loop
endloop
write outside of loop
Note that the reloop and outloop commands are somewhat analogous to the continue and break statements of languages based on C, except that they must sit at the indenting level of the loop they modify, and they have a condition tag that indicates when the indicated control transfer is to take place. This makes the construct more powerful than in other languages, because any line of the inner loop could terminate or reloop several outer loops with one statement.
Expression syntax
TUTOR's expression syntax did not look back to the syntax of FORTRAN, nor was it limited by poorly designed character sets of the era. For example, the PLATO IV character set included control characters for subscript and superscript, and TUTOR used these for exponentiation. Consider this command (from page IV-1 of The TUTOR Language, Sherwood, 1974):
circle (41²+72.6²)¹⁄²,100,200
The character set also included the conventional symbols for multiplication and division, × and ÷, but in a more radical departure from the conventions established by FORTRAN, it allowed implicit multiplication, so the expressions (4+7)(3+6) and 3.4+5(2³-3)/2 were valid, with the values 99 and 15.9, respectively (op cit). This feature was seen as essential. When students typed in a numeric answer to a question, they could use operators and variables and standard algebraic notation, and the program would use the TUTOR "compute" command to compile and run the formula and check that it was numerically equivalent (or within the floating point roundoff error) to the correct answer.
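A toy sketch of implicit multiplication, not TUTOR's actual compiler: insert an explicit * wherever a number or closing parenthesis abuts an opening parenthesis (or a closing parenthesis abuts a number), then evaluate. Exponents are spelled 2**3 here, since PLATO's superscript characters are unavailable:

```python
import re

# Hypothetical sketch of implicit-multiplication handling (tutor_eval is
# an invented name); real TUTOR compiled expressions itself.

def tutor_eval(expr: str) -> float:
    expr = re.sub(r'(\d|\))\s*\(', r'\1*(', expr)   # 5( or )( -> 5*( , )*(
    expr = re.sub(r'\)\s*(\d)', r')*\1', expr)      # )3 -> )*3
    return eval(expr)
```

With this, (4+7)(3+6) evaluates to 99 and 3.4+5(2**3-3)/2 to 15.9, matching the values given above.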
The language included a pre-defined constant named with the Greek letter pi (π), with the appropriate value, which could be used in calculations. Thus, the expression πr² could be used to calculate the area of a circle, using the built-in π constant, implicit multiplication and exponentiation indicated by a superscript.
In TUTOR, the floating-point comparison x=y was defined as being true if x and y were approximately equal (see page C5 of PLATO User's Memo, Number One by Avner, 1975). This simplified life for mathematically naïve developers of instructional lessons, but it occasionally caused headaches for developers of numerically sophisticated code because it was possible that both x<y and x≥y could be true at the same time.
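The hazard described can be sketched in Python. The tolerance PLATO actually used is not given here, so a small relative epsilon is assumed, and tutor_eq/tutor_ge are invented names:

```python
# Sketch of TUTOR-style approximate floating-point comparison.

EPS = 1e-9  # assumed tolerance, not PLATO's documented value

def tutor_eq(x: float, y: float) -> bool:
    # "x = y" judged true when x and y are approximately equal
    return abs(x - y) <= EPS * max(1.0, abs(x), abs(y))

def tutor_ge(x: float, y: float) -> bool:
    # "x >= y" judged true when x > y or x is approximately equal to y
    return x > y or tutor_eq(x, y)
```

With x = 1.0 and y = 1.0 + 1e-12, both x < y (exact comparison) and tutor_ge(x, y) hold at once, which is exactly the headache described above.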
Memory management
As an authoring language, TUTOR began with only minimal memory resources and only the crudest tools for manipulating them. Each user process had a private data segment of 150 variables, and shared common blocks could be attached, allowing inter-user communication through shared memory.
On the PLATO IV system, words were 60 bits, in keeping with the CDC 6600 family of computers. Some later implementations changed this to 64 bits.
Basic memory resources
The private memory region of each process consisted of 150 words each, referred to as student variables; the values of these variables were persistent, following the individual user from session to session. These were addressed as n1 through n150 when used to hold integer values, or as v1 through v150 when used to hold floating point values.
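The aliasing of n-names and v-names onto the same words can be sketched as follows. The class and method names are invented, and 64-bit words are assumed (as in later implementations) rather than the CDC 6600's 60-bit words:

```python
import struct

# Sketch of n1..n150 and v1..v150 as two views of the same raw words.

class StudentVars:
    def __init__(self):
        self.words = bytearray(150 * 8)               # 150 raw 64-bit words

    def set_n(self, i, value):                        # n-variables: integers
        struct.pack_into('>q', self.words, (i - 1) * 8, value)

    def get_n(self, i):
        return struct.unpack_from('>q', self.words, (i - 1) * 8)[0]

    def set_v(self, i, value):                        # v-variables: floats
        struct.pack_into('>d', self.words, (i - 1) * 8, value)

    def get_v(self, i):
        return struct.unpack_from('>d', self.words, (i - 1) * 8)[0]
```

Storing 3.5 through v1 and then reading n1 yields the raw bit pattern of the float, not 3: the two names address the same storage without any conversion.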
A TUTOR lesson could attach a single region of up to 1500 words of shared memory using the common command. Each lesson could have an unnamed temporary common block containing variables shared by all users of that lesson. Such blocks were created when a lesson came into use and deallocated when the lesson became inactive. In contrast, named common blocks were associated with a block of a lesson (a disk file). Shared memory was addressed as nc1 through nc1500 (for integers) or vc1 through vc1500 (for floating point numbers).
Where 150 student variables was insufficient, a lesson could use the storage command to create an additional private memory segment of up to 1000 words. This segment existed in swap space only, but it could be mapped to student variables or common variables. For example (from page X-11 of The TUTOR Language, Sherwood, 1974):
common 1000
storage 75
stoload vc1001,1,75
This example defines nc1 to nc1000 as a shared unnamed common block, while nc1001 to nc1075 are private storage.
Defining symbolic names
The TUTOR define command was very similar to the C #define preprocessor directive. This was the only way to associate mnemonic names with variables. It was up to the programmer to statically allocate memory and assign names to variables. Consider this example from page 17 of the TUTOR User's Memo -- Introduction to TUTOR, 1973:
define mynames
first=v1, second =v2
result=v3
This creates a set of definitions named mynames defining three floating point variables. Users were advised that "there should not be any v3's or v26's anywhere in your lesson except in the define statement itself. Put all your definitions at the very beginning of the lesson where you will have ready reference to which variables you are using." (underlining from the original, page IV-5 of The TUTOR Language, Sherwood, 1974.)
Functions could be defined, with macro-substitution semantics, as in this illustration from page IX-2 of The TUTOR Language, Sherwood, 1974:
define cotan(a)=cos(a)/sin(a)
Unlike C, the original scope rules of TUTOR were pure "definition before use" with no provisions for local definitions. Thus, the formal parameter a used above must not have any previous definition.
Later in the development of TUTOR, with the introduction of multiple named sets of definitions, the programmer was given explicit control over which sets of definitions were currently in force. For example, define purge, setname would discard all definitions in the named set.
Arrays, packed arrays, and text manipulation
The original TUTOR tools for text manipulation were based on commands for specific text operations, for example, pack to place a packed character string into consecutive variables in memory, search to search for one string within another, and move to move a string from memory to memory. By 1975, more general tools for arrays of integers and packed arrays were added. Page 14 of PLATO User's Memo -- Summary of TUTOR Commands and System Variables, Avner, 1975, gives the following:
define segment, name=starting var, num bits per byte, s
array, name(size)=starting var
array, name (num rows, num columns)=starting var
Segmented arrays, defined with the keyword segment, were comparable to packed arrays in Pascal. The byte size and whether or not the array elements were to be treated as signed or unsigned were entirely under user control. Arbitrary text manipulation could be done by setting the byte size to the machine byte size, 6 bits on implementations using display code, 8 bits on some later ASCII and extended ASCII implementations. Note the lack of any specification of array dimensionality for segmented arrays.
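The packing scheme can be sketched as follows, assuming unsigned bytes of a chosen width packed into 60-bit words with the first byte at the high-order end, as on the CDC hardware (seg_get/seg_set are invented names):

```python
# Sketch of a segmented (packed) array over 60-bit words.

WORD_BITS = 60

def seg_get(words, index, byte_bits):
    per_word = WORD_BITS // byte_bits          # bytes per word, e.g. 10 for 6-bit
    w, slot = divmod(index, per_word)
    shift = WORD_BITS - (slot + 1) * byte_bits # first byte at the high end
    return (words[w] >> shift) & ((1 << byte_bits) - 1)

def seg_set(words, index, byte_bits, value):
    per_word = WORD_BITS // byte_bits
    w, slot = divmod(index, per_word)
    shift = WORD_BITS - (slot + 1) * byte_bits
    mask = ((1 << byte_bits) - 1) << shift
    words[w] = (words[w] & ~mask) | ((value << shift) & mask)
```

With 6-bit bytes, ten display-code characters fit in each 60-bit word, which is how text manipulation was done on the original hardware.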
Parameter passing
A general parameter passing mechanism was added to TUTOR early in the PLATO IV era. Page IV-10 of The TUTOR Language by Sherwood, 1974 gives the following example:
define radius=v1,x=v2,y=v3
unit vary
do halfcirc(100,150,300)
do halfcirc(50)
*
unit halfcirc(radius, x,y)
circle radius, x,y,0,180
draw x-radius, y;x+radius, y
Notice that the formal parameters listed in the argument list to the unit command are simply the defined names for statically allocated global variables. The semantics of parameter passing was given as being equivalent to assignment at the time of the control transfer to the destination unit, and if actual parameters were omitted, as in the second do command above, the effect was to leave the prior values of the corresponding formal parameters unchanged.
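The second do above relies on exactly this behavior; it can be mimicked in Python like so (the variables dict and do_unit are invented stand-ins for TUTOR's statically allocated globals):

```python
# Sketch of pass-by-assignment into statically allocated globals;
# omitted actual parameters leave the prior values untouched.

variables = {'radius': 0.0, 'x': 0.0, 'y': 0.0}

def do_unit(formals, actuals):
    # zip stops at the shorter list, so omitted actuals change nothing
    for name, value in zip(formals, actuals):
        variables[name] = value

do_unit(['radius', 'x', 'y'], [100, 150, 300])   # like: do halfcirc(100,150,300)
do_unit(['radius', 'x', 'y'], [50])              # like: do halfcirc(50)
```

After the second call, radius is 50 but x and y still hold 150 and 300 from the first call.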
Local variables
Local variables were added to TUTOR some time around 1980. Lesson authors wishing to use local variables were required to use the lvars command to declare the size of the buffer used for local variables, up to 128 words. Having done so, a unit using local variables could begin as follows (from Page C2 of Summary of TUTOR Commands and System Variables, Avner, 1981):
unit someu
NAME1,NAME2,NAME3(SIZE)
NAME4=CONSTANT
floating:NAME5,NAME6,NAME7(SIZE)
integer, NUM BITS:NAME8,NAME9
integer, NUM BITS,signed:NAME10
integer:NAME11
The continuation lines of the unit command given above are taken to be lines of an implicit define command with local scope. Conventional definitions in terms of student variables such as n150 could be used in such a local define, but the forms illustrated here all automatically bind names to locations in the block of memory allocated by the lvars command. The available TUTOR documentation does not discuss how local variables are allocated.
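Since the available documentation leaves the allocation of local variables unspecified, the following Python sketch shows one plausible scheme: binding names to sequential word offsets within the fixed buffer declared by lvars (up to 128 words). This layout is an assumption for illustration, not TUTOR's documented behavior.

```python
# Hypothetical layout of TUTOR local variables: names bound to word
# offsets in a fixed-size buffer, capped at the 128-word lvars limit.

LVARS_WORDS = 128

def allocate_locals(decls):
    """decls: list of (name, words) pairs. Returns name -> word offset."""
    table, offset = {}, 0
    for name, words in decls:
        if offset + words > LVARS_WORDS:
            raise MemoryError("lvars buffer exhausted")
        table[name] = offset
        offset += words
    return table

# NAME3 is a 5-word array, echoing the NAME3(SIZE) declaration above.
layout = allocate_locals([("NAME1", 1), ("NAME2", 1), ("NAME3", 5)])
print(layout)
```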
Other implementations
There has been a sizable family of TUTOR-related languages, each similar to the original TUTOR language but with differences. In particular, TUTOR was a component of a system (the PLATO computer-based education system) that ran on particular CDC mainframe hardware. For efficiency, there were some hardware-specific elements in TUTOR (e.g. variables that were 60-bit words that could be used as arrays of 60 bits or as 10 six-bit characters, etc.). Also, TUTOR was designed before the advent of the windows-oriented graphical user interface (GUI).
The microTutor language was developed in the PLATO project at UIUC to permit portions of a lesson to run in terminals that contained microcomputers, with connections to TUTOR code running on the mainframe. The microTutor dialect was also the programming language of the Cluster system developed at UIUC and licensed to TDK in Japan; the Cluster system consisted of a small group of terminals attached to a minicomputer which provided storage and compilation. The Tencore Language Authoring System is a TUTOR derivative developed by Paul Tenczar for PCs and sold by Computer Teaching Corporation. cT was a derivative of TUTOR and microTutor developed at Carnegie Mellon which allowed programs to run without change in windowed GUI environments on Windows, Mac, and Unix/Linux systems.
References
External links
PLATO User's Guide, CDC Corporation, Revised April, 1981.
TUTOR User's Memo. Introduction to TUTOR, Computer-Based Education Research Laboratory, University of Illinois at Urbana Champaign, March 1973.
PLATO User's Memo, Number One: Summary of TUTOR Commands and System Variables. Third Edition, by Elaine Avner, Computer-Based Education Research Laboratory, University of Illinois at Urbana Champaign, November, 1975.
Summary of TUTOR Commands and System Variables (10th edition), by Elaine Avner, Computer-Based Education Research Laboratory, University of Illinois at Urbana Champaign, November, 1981.
A personal evaluation of the PLATO system by Stewart A. Denenberg, ACM SIGCUE Outlook, 12, 2 (April 1978), pages 3–10.
Run Time Support for the TUTOR Language on a Small Computer System, by Douglas W. Jones, 1976.
The TUTOR Language, by Bruce Sherwood, Computer-Based Education Research Laboratory, University of Illinois at Urbana Champaign, June 1974.
The TUTOR Language, by Bruce Sherwood, Control Data Education Company, 1977.
The Plato IV Student Terminal, by Jack Stifle
The cT Programming Language (derived from TUTOR) from Center for Design of Educational Computing at Carnegie Mellon University, by David Andersen, Bruce Sherwood, Judith Sherwood, and Kevin Whitley (no longer supported as of 2002).
Computer-based Education Research Laboratory
Educational programming languages |
307669 | https://en.wikipedia.org/wiki/Raj%20Reddy | Raj Reddy | Dabbala Rajagopal "Raj" Reddy (born 13 June 1937) is an Indian-American computer scientist and a winner of the Turing Award. He is one of the early pioneers of artificial intelligence and has served on the faculty of Stanford and Carnegie Mellon for over 50 years. He was the founding director of the Robotics Institute at Carnegie Mellon University. He was instrumental in helping to create Rajiv Gandhi University of Knowledge Technologies in India, to cater to the educational needs of low-income, gifted, rural youth. He is the chairman of the International Institute of Information Technology, Hyderabad. In 1994 he became the first person of Asian origin to receive the Turing Award, often called the Nobel Prize of computer science, for his work in the field of artificial intelligence.
Early life and education
Raj Reddy was born in a Telugu family in Katur village of Chittoor district of present-day Andhra Pradesh, India. His father, Sreenivasulu Reddy, was a farmer, and his mother, Pitchamma, was a homemaker. He was the first member of his family to attend college.
He received his bachelor's degree in civil engineering from the College of Engineering, Guindy, then affiliated to the University of Madras (now to Anna University, Chennai), India, in 1958, and an MEng degree from the University of New South Wales, Australia, in 1960. He received his PhD degree in Computer Science from Stanford University in 1966.
Career
Reddy is the University Professor of Computer Science and Robotics and Moza Bint Nasser Chair at the School of Computer Science at Carnegie Mellon University. From 1960 he worked for IBM in Australia. He was an Assistant Professor of Computer Science at Stanford University from 1966 to 1969. He joined the Carnegie Mellon faculty as an associate professor of Computer Science in 1969, became a full professor in 1973, and was named a university professor in 1984.
He was the founding director of the Robotics Institute from 1979 to 1991 and the Dean of the School of Computer Science from 1991 to 1999. As dean of SCS, he helped create the Language Technologies Institute, the Human-Computer Interaction Institute, the Center for Automated Learning and Discovery (since renamed the Machine Learning Department), and the Institute for Software Research. He is the chairman of the Governing Council of IIIT Hyderabad.
Reddy was a co-chair of the President's Information Technology Advisory Committee (PITAC) from 1999 to 2001. He was one of the founders of the American Association for Artificial Intelligence and was its president from 1987 to 1989. He served on the International board of governors of Peres Center for Peace in Israel. He served as a member of the governing councils of EMRI and HMRI which use technology-enabled solutions to provide cost-effective health care coverage to rural population in India.
AI Research
Reddy's early research was conducted at the AI labs at Stanford, first as a graduate student and later as an assistant professor, and at CMU since 1969. His AI research concentrated on perceptual and motor aspects of intelligence such as speech, language, vision and robotics. Over a span of five decades, Reddy and his colleagues created several historic demonstrations of spoken language systems, e.g., voice control of a robot, large vocabulary connected speech recognition, speaker independent speech recognition, and unrestricted vocabulary dictation. Reddy and his colleagues have made seminal contributions to Task Oriented Computer Architectures, Analysis of Natural Scenes, Universal Access to Information, and Autonomous Robotic Systems. Hearsay I was one of the first systems capable of continuous speech recognition. Subsequent systems like Hearsay II, Dragon, Harpy, and Sphinx I/II developed many of the ideas underlying modern commercial speech recognition technology, as summarized in his historical review of speech recognition with Xuedong Huang and James K. Baker. Some of these ideas—most notably the "blackboard model" for coordinating multiple knowledge sources—have been adopted across the spectrum of applied artificial intelligence.
Technology in Service of Society
Reddy's other major research interest has been in exploring the role of "Technology in Service of Society". One of the early efforts, Centre Mondial Informatique et Ressource Humaine was founded by Jean-Jacques Servan-Schreiber in France in 1981 with a technical team consisting of Nicholas Negroponte, Alan Kay, Seymour Papert, Raj Reddy, and Terry Winograd. Reddy served as the Chief Scientist for the center. The centre had as its objective the Development of Human Resource in Third World Countries using Information Technology. Several seminal experiments in providing computerized classrooms and rural medical delivery were attempted. In 1984, President Mitterrand decorated Reddy with the Légion d'Honneur medal.
The Universal Digital Library Project was started by Raj Reddy, Robert Thibadeau, Jaime Carbonell, Michael Shamos, and Gloriana S. Clair in the 1990s, to scan books and other media such as music, videos, paintings, and newspapers and to provide online access to all creative works to anyone, anywhere, at any time. The larger Million Book Project was started in 2001 as a collaborative effort with China (Professors Pan Yunhe, Yuting Zhuang, Gao Wen) and India (Prof. N. Balakrishnan).
A student's marks are the result of several factors, such as the quality of the teachers, the education level of the parents, the ability to pay for coaching classes, and the time spent on the task of learning the subject. Rural students tend to be at a serious disadvantage along each of these dimensions. Rajiv Gandhi University of Knowledge Technologies (RGUKT) was created in 2008 to educate gifted rural youth in Andhra Pradesh, by Drs. Y. S. Rajasekhara Reddy, K. C. Reddy, and Raj Reddy, based on the premise that nationwide merit-based admissions tests, such as the SAT, are flawed and do not provide a level playing field for gifted youth from rural areas.
Reddy proposed that a fully connected population makes it possible to think of a KG-to-PG online college in every village providing personalized instruction. Assuming that all students are given digital literacy and learning-to-learn training as part of primary education before they drop out, anyone can learn any subject at any age even if there are no qualified teachers on the subject.
AI can be used to empower people at the bottom of the pyramid, who have not benefited from the IT revolution so far. Reddy proposed that recent technological advances in AI will ultimately enable anyone to watch any movie, read any textbook, and talk to anyone, independent of the language of the producer or consumer. He also proposed that smart sensor watches could be used to eliminate COVID lockdowns by monitoring sensor data to identify and isolate people with symptoms.
Awards and honors
He is a fellow of the AAAI, ACM, Acoustical Society of America, IEEE and Computer History Museum. Reddy is a member of the United States National Academy of Engineering, American Academy of Arts and Sciences, Chinese Academy of Engineering, Indian National Science Academy, and Indian National Academy of Engineering.
He has been awarded honorary doctorates (Doctor Honoris Causa) from SV University, Universite Henri-Poincare, University of New South Wales, Jawaharlal Nehru Technological University, University of Massachusetts, University of Warwick, Anna University, IIIT (Allahabad), Andhra University, IIT Kharagpur and Hong Kong University of Science and Technology.
In 1994 he and Edward Feigenbaum received the Turing Award, "for pioneering the design and construction of large scale artificial intelligence systems, demonstrating the practical importance and potential commercial impact of artificial intelligence technology." In 1984, Reddy was awarded the French Legion of Honour by French President François Mitterrand. Reddy also received Padma Bhushan, from the President of India in 2001, the Okawa Prize in 2004, the Honda Prize in 2005, and the Vannevar Bush Award in 2006.
Contributions
Machine Intelligence and Robotics: Report of the NASA Study Group – Executive Summary, Final Report, Carl Sagan (chair), Raj Reddy (vice chair) and others, NASA JPL, September 1979.
Foundations and Grand Challenges of Artificial Intelligence, AAAI Presidential Address, 1988.
Miscellaneous
Kai-Fu Lee's 2018 bestseller 'AI Superpowers: China, Silicon Valley, and the New World Order' is dedicated "To Raj Reddy, my mentor in AI and in life"
References
External links
Indian computer scientists
American computer scientists
Artificial intelligence researchers
Carnegie Mellon University faculty
Indian emigrants to the United States
Stanford University School of Engineering alumni
Living people
Recipients of the Padma Bhushan in science & engineering
Stanford University School of Engineering faculty
Turing Award laureates
Fellows of the Association for the Advancement of Artificial Intelligence
Fellow Members of the IEEE
American people of Telugu descent
University of New South Wales alumni
College of Engineering, Guindy alumni
Recipients of the Legion of Honour
Members of the United States National Academy of Engineering
Foreign Fellows of the Indian National Science Academy
Scientists from Andhra Pradesh
University of Madras alumni
People from Chittoor district
1937 births
Speech processing researchers
Fellows of the Acoustical Society of America
Presidents of the Association for the Advancement of Artificial Intelligence
Foreign members of the Chinese Academy of Engineering
Roboticists
Telugu people |
955206 | https://en.wikipedia.org/wiki/In-Q-Tel | In-Q-Tel | In-Q-Tel (IQT), formerly Peleus and In-Q-It, is an American not-for-profit venture capital firm based in Arlington, Virginia. It invests in high-tech companies to keep the Central Intelligence Agency, and other intelligence agencies, equipped with the latest in information technology in support of United States intelligence capability. The name "In-Q-Tel" is an intentional reference to Q, the fictional inventor who supplies technology to James Bond.
The firm is seen as a trend-setter in the information technology industry, with the average dollar invested by In-Q-Tel in 2016 attracting fifteen dollars from other investors.
History
Originally named Peleus and known as In-Q-It, In-Q-Tel was founded by Norm Augustine, a former CEO of Lockheed Martin and by Gilman Louie, who was In-Q-Tel's first CEO. In-Q-Tel's mission is to identify and invest in companies developing cutting-edge technologies that serve United States national security interests. Origins of the corporation can be traced to Ruth A. David, who headed the Central Intelligence Agency Directorate of Science & Technology in the 1990s and promoted the importance of rapidly advancing information technology for the CIA. In-Q-Tel now engages with entrepreneurs, growth companies, researchers, and venture capitalists to deliver technologies that provide superior capabilities for the CIA, DIA, NGA, and the wider intelligence community. In-Q-Tel concentrates on three broad commercial technology areas: software, infrastructure and materials sciences.
Former CIA director George Tenet says,
We [the CIA] decided to use our limited dollars to leverage technology developed elsewhere. In 1999 we chartered ... In-Q-Tel. ... While we pay the bills, In-Q-Tel is independent of CIA. CIA identifies pressing problems, and In-Q-Tel provides the technology to address them. The In-Q-Tel alliance has put the Agency back at the leading edge of technology ... This ... collaboration ... enabled CIA to take advantage of the technology that Las Vegas uses to identify corrupt card players and apply it to link analysis for terrorists [cf. the parallel data-mining effort by the SOCOM-DIA operation Able Danger], and to adapt the technology that online booksellers use and convert it to scour millions of pages of documents looking for unexpected results.
In-Q-Tel sold 5,636 shares of Google, worth over $2.2 million, on November 15, 2005. The shares were a result of Google's acquisition of Keyhole, Inc, the CIA-funded satellite mapping software now known as Google Earth.
In August 2006, In-Q-Tel had reviewed more than 5,800 business plans and invested approximately $150 million in more than 90 companies. In 2016, it was funded with at least $120 million per year primarily from the CIA, as well as the NSA, FBI, and US Defense Department.
Governance
In-Q-Tel is a Virginia-registered corporation, legally independent of the CIA or any other government agency. The corporation is bound by its Charter agreement and annual contract with the CIA, which set out the relationship between the two organizations. In-Q-Tel's mission to support the Intelligence Community's technical needs is promoted by the In-Q-Tel Interface Center (QIC), an office within the CIA that facilitates communication and relationships between In-Q-Tel and government intelligence organizations. While In-Q-Tel is a nonprofit corporation, it differs from IARPA and other models in that its employees and trustees can profit from its investments. A Wall Street Journal investigation found that in 2016, nearly half of In-Q-Tel's trustees had a financial connection with a company the corporation had funded.
In-Q-Tel's current president and CEO is Christopher A. R. Darby. The chairman of the board is Michael M. Crow.
Investments
The company lists the majority of its investments on its website page.
In-Q-Tel functions partially in public; however, what products it has and how they are used is strictly secret. According to The Washington Post, "virtually any U.S. entrepreneur, inventor or research scientist working on ways to analyze data has probably received a phone call from In-Q-Tel or at least been Googled by its staff of technology-watchers."
Software
MemSQL – Distributed, in-memory, SQL database management system for real-time analytics
Keyhole, Inc – Geospatial visualization application (Acquired by Google in 2004 and would go on to become Google Earth in 2005)
Boundless Spatial – geospatial software
Huddle – cloud-based content collaboration software
Oculis Labs – visual cyber security solutions
Destineer – games FPS training simulation
GeoIQ FortiusOne – visualization on maps
Forterra – virtual worlds for training
Quantum4D – visualization technology
Visual Sciences – real-time visual analysis
Spotfire – visualisation data analytics
Algorithmia – infrastructure for deploying and scaling AI/ML models
Palantir Technologies – data integration, search and discovery, knowledge management, and secure collaboration
PiXlogic – visual search
Agent Logic – event detection and response software (Webspector webpage change software)
ArcSight – secure software
Zaplet – email
Authentica – secure messaging and secure document sharing
Teradici Corporation – desktop virtualization
Connectify – Wifi & VPN
SafeWeb PrivacyMatrix – browsing (closed in Nov. 2001)
Visible Technologies – social media monitoring
Silver Tail Systems – website fraud prevention
InnoCentive – crowdsourcing websites
Fetch Technologies – Internet data management, bots & RSS
SRA OrionMagic – cms software
Recorded Future – web intelligence and predictive analytics
Traction Software – web 2.0
Internet Evidence Finder – Digital forensic tool
Basis Technology – multilingual text analytics and cyber forensics
Language Weaver – automatic language translation
Lingotek – translation services
Cassatt – desktop software
Tacit Knowledge Systems – internal software
FMS – analysis, visualization, and knowledgebase to the Federal Intelligence Community
Initiate Systems – real-time multiple database software
TerraGo – location intelligence applications and software GeoPDF
Geosemble – unstructured data analytics and geospatial software
NovoDynamics – Arabic character recognition
Adapx – Microsoft Office & GIS
Digital Reasoning – Synthesys v3.0 – review facts and associations at a glance
CallMiner – phone speech analytics software
Carnegie Speech – speech recognition
AzTE PRISM – handwriting recognition
A4Vision – 3D facial imaging
SRD – identity resolution software
Inktomi Corp – network infrastructure software
Mohomine mohoClassifier – organises mass data
Stratify – organizes mass data
Endeca – search data repositories
Inxight – search engine
Convera RetrievalWare – search engine
MetaCarta – search engine
Attensity – search engine
Platfora – big data analytics and visualization
Intelliseek – search engine
FireEye – malware protection
ReversingLabs – malware detection and analysis
zSpace (company) – 3-Dimensional holographic imaging displays
Socrata – Open Data Solutions for Government Innovation
Interset – Security Analytics/User Behavior Analytics
Nozomi Networks – OT and IoT security and visibility
D2iQ (formerly Mesosphere) – Apache Mesos and Kubernetes consulting firm
Fuel3D – 3D scanning
TRX Systems – 3D mapping
Wickr – encrypted messaging application
Material science
Biotech
Biomatrica – biolab tech anhydrobiosis storage
SpectraFluidics – detection of trace airborne chemicals
Arcxis Biotechnologies – sample processing and pathogen detection
febit group – DNA
Boreal Genomics – DNA fingerprints
T2 Biosystems – medical diagnostic devices, miniaturized magnetic resonance (MR)
OpGen – microbial genome analysis
Infobionics – biotech cellular database
Microchip Biotechnologies – analysis instrumentation for biodefense
Cambrios Technologies – biomaterials for solid-state electronic devices
Seahawk Biosystems – diagnosis biosensor products
Sionex – chemical and biological sensors
Polychromix – material analysis and chemical sensing
IatroQuest – detect biological and chemical agents
IntegenX – NanoBioProcessor & molecular diagnostics
Seventh Sense Biosystems – health monitoring and medical diagnostics
Sonitus Medical – transmits sound via the teeth
MedShape – orthopedic devices from shape memory materials
Electricity
Electro Energy – nickel-metal hydride batteries for satellites & aircraft
Qynergy Corporation – long-lived batteries, Micro-Electro-Mechanical Systems
Infinite Power Solutions – micro-batteries
Skybuilt Power – solar, wind, fuel cells, batteries, fossil fuels, telecommunications – Mobile Power Station (MPS), 3.5 kW to 150 kW
Semprius – solar energy
AdaptivEnergy – miniature piezo generators
Power Assure – managing power consumption
MiserWare – reduces energy
Electronics
Nanosys – nanotech components
Alfalight – high-power lasers & torches
IDELIX Software – pliable display technology
Perceptive Pixel – multi-touch displays
WiSpry – radio components
Nextreme Thermal Solutions – circuit-board thermoelectric components
Digital Solid State Propulsion – electronic controls for solid rocket motors
Infinite Z – virtual-holographic monitors
Voxel8 – 3D printed electronics
Video
3VR Security – DVR archiving
MotionDSP – digital video
Pixim – video cameras
COPAN – data storage
iMove – immersive video
Pelican Imaging – better camera phones
LensVector – optical autofocus
InView Technology Corporation – cameras and hyper-spectral imagers
Rhevision – tunable camera lens
Signal Innovations Group – signal, image, and video analytics
Elemental Technologies – video processing
KZO Innovations – streaming video software
VSee – video conferencing
Infrastructure
Hardware
Xanadu Quantum Technologies – photonic quantum computers
Tyfone – digital security for mobility, cloud, and IoT
Genia Photonics – fiber-optics products
Advanced Photonix, Inc. – fiber optics
SitScape – Command & Control room hardware
SpotterRF – micro surveillance radar
QD Vision – monitors, displays and lighting
GATR Technologies – inflatable satellite dishes
CoreStreet – door access control systems
Redlen Technologies – CZT x-ray & gamma ray detectors
Etherstack – radios
Paratek microwave – smart scanning antennas
D-Wave Systems – quantum computers
Sensor networks
ThingMagic – RFID
Dust Networks – low-power wireless mesh networking systems
Ember Corporation – ZigBee – wireless semiconductor
Gainspan – low power Wi-Fi
Tendril Networks – software for wireless sensor and control networks
TenXsys – telemetry systems for remote monitoring, NASA
StreamBase – real-time data in government/military, RFID/sensor networks
Thetus – software for remote sensing instruments
Soflinx defender – a Wireless Sensor Network for fences
PlateScan – automatic license plate recognition (ALPR) sensor network
Data centers
Bay Microsystems – packet processing and data traffic
Cleversafe – data storage clouds and massive digital archives
Cloudera – data storage and analysis
Asankya – Hypermesh data streams
CopperEye – data retention
Systems Research and Development – real-time data warehousing
Network Appliance – Decru (networked data storage)
Security testing
Network Chemistry – RFprotect, WiFi security
Veracode – application security testing
Related personnel
Numerous noteworthy business and intelligence community professionals have been involved with In-Q-Tel at various times, including the following:
Dan Geer (2008–present) Chief Information Security Officer
Michael D. Griffin – former president; later administrator of NASA.
Norman R. Augustine
Gilman Louie – former CEO
Paul G. Kaminski – former director
Amit Yoran – former CEO
John Seely Brown
Stephen Friedman
William J. Perry
Alex J. Mandl
Rebecca Bace
Luciana Borio
Peter Barris
Anita K. Jones
Jami Miscik
Jeong H. Kim
References
External links
Official website
White Paper on the In-Q-Tel concept from the CIA's website
In-Q-Tel from Federal Computer Week
In-Q-Tel from govexec.com
The Report of the Independent Panel on the CIA In-Q-Tel Venture from Business Executives for National Security (bens.org)
Press releases
Lerner, Josh, G. Felda Hardymon, Kevin Book, and Ann Leamon. "In-Q-Tel." Harvard Business School Case 804-146, February 2004. (Revised May 2005.)
Venture Funds and Other Advanced Technologies for National Intelligence Services (September 5, 2012).
Central Intelligence Agency
Venture capital firms of the United States
Mass surveillance
Privately held companies based in Virginia
Financial services companies established in 1999
1999 establishments in Virginia |
69107611 | https://en.wikipedia.org/wiki/NodeOS | NodeOS | NodeOS is an operating system based on Linux (a Linux distribution) that is bundled with a NodeJS installation. It uses Npm as the default package manager.
References
External links
Linux distributions |
38358886 | https://en.wikipedia.org/wiki/Computational%20law | Computational law | Computational Law is the branch of legal informatics concerned with the automation of legal reasoning. What distinguishes Computational Law systems from other instances of legal technology is their autonomy, i.e. the ability to answer legal questions without additional input from human legal experts.
While there are many possible applications of Computational Law, the primary focus of work in the field today is compliance management, i.e. the development and deployment of computer systems capable of assessing, facilitating, or enforcing compliance with rules and regulations. Some systems of this sort already exist. TurboTax is a good example. And the potential is particularly significant now due to recent technological advances – including the prevalence of the Internet in human interaction and the proliferation of embedded computer systems (such as smart phones, self-driving cars, and robots).
There are also applications that do not involve governmental laws. The regulations can just as well be the terms of contracts (e.g. delivery schedules, insurance covenants, real estate transactions, financial agreements). They can be the policies of corporations (e.g. constraints on travel, expenditure reporting, pricing rules). They can even be the rules of games (embodied in computer game playing systems).
History
Speculation about the potential benefits to legal practice of applying methods from computational science and AI research to automate parts of the law dates back at least to the mid-1940s. Further, AI and law and computational law do not seem easily separable, as most AI research focusing on the law and its automation appears to utilize computational methods. The forms that this speculation took were multiple and not always closely related to one another. This history sketches them as they were, attempting to show relationships where they existed.
By 1949, a minor academic field aiming to incorporate electronic and computational methods to legal problems had been founded by American legal scholars, called jurimetrics. Though broadly said to be concerned with the application of the "methods of science" to the law, these methods were actually of a quite specifically defined scope. Jurimetrics was to be "concerned with such matters as the quantitative analysis of judicial behavior, the application of communication and information theory to legal expression, the use of mathematical logic in law, the retrieval of legal data by electronic and mechanical means, and the formulation of a calculus of legal predictability". These interests led in 1959 to the founding of a journal, Modern Uses of Logic in Law, as a forum wherein articles would be published about the applications of techniques such as mathematical logic, engineering, statistics, etc. to legal study and development. In 1966, this journal was renamed Jurimetrics. Today, however, the journal and the meaning of jurimetrics seem to have broadened far beyond what would fit under the areas of applications of computers and computational methods to law. Today the journal not only publishes articles on such practices as found in computational law, but has broadened jurimetrical concerns to mean also things like the use of social science in law or the "policy implications [of] and legislative and administrative control of science".
Independently in 1958, at the Conference for the Mechanization of Thought held at the National Physical Laboratory in Teddington, Middlesex, UK, the French jurist Lucien Mehl presented a paper both on the benefits of using computational methods for law and on the potential means to use such methods to automate law, for a discussion that included AI luminaries like Marvin Minsky. Mehl believed that the law could be automated by two basically distinct, though not wholly separable, types of machine. These were the "documentary or information machine", which would provide the legal researcher quick access to relevant case precedents and legal scholarship, and the "consultation machine", which would be "capable of answering any question put to it over a vast field of law". The latter type of machine would be able to do much of a lawyer's job by simply giving the "exact answer to a [legal] problem put to it".
By 1970, Mehl's first type of machine, one that would be able to retrieve information, had been accomplished but there seems to have been little consideration of further fruitful intersections between AI and legal research. There were, however, still hopes that computers could model the lawyer's thought processes through computational methods and then apply that capacity to solve legal problems, thus automating and improving legal services via increased efficiency as well as shedding light on the nature of legal reasoning. By the late 1970s, computer science and the affordability of computer technology had progressed enough that the retrieval of "legal data by electronic and mechanical means" had been achieved by machines fitting Mehl's first type and were in common use in American law firms. During this time, research focused on improving the goals of the early 1970s occurred, with programs like Taxman being worked on in order to both bring useful computer technology into the law as practical aids and to help specify the exact nature of legal concepts.
Nonetheless, progress on the second type of machine, one that would more fully automate the law, remained relatively inert. Research into machines that could answer questions in the way that Mehl's consultation machine would picked up somewhat in the late 1970s and 1980s. A 1979 convention in Swansea, Wales marked the first international effort solely to focus upon applying artificial intelligence research to legal problems in order to "consider how computers can be used to discover and apply the legal norms embedded within the written sources of the law". That said, little substantial progress seems to have been made in the following decade of the 1980s. In a 1988 review of Anne Gardner's book An Artificial Intelligence Approach to Legal Reasoning (1987), the Harvard academic legal scholar and computer scientist Edwina Rissland wrote that "She plays, in part, the role of pioneer; artificial intelligence ("AI") techniques have not yet been widely applied to perform legal tasks. Therefore, Gardner, and this review, first describe and define the field, then demonstrate a working model in the domain of contract offer and acceptance." Eight years after the Swansea conference had passed, and still AI and law researchers merely trying to delineate the field could be described by their own kind as "pioneer[s]".
In the 1990s and early 2000s more progress occurred, and computational research generated insights for law. The First International Conference on AI and Law was held in 1987, but it was in the 1990s and 2000s that the biennial conference began to build up steam and to delve more deeply into the issues arising where computational methods, AI, and law intersect. Classes on the use of computational methods for automating, understanding, and complying with the law began to be taught to undergraduates. Further, by 2005, a team largely composed of Stanford computer scientists from the Stanford Logic Group had devoted themselves to studying the uses of computational techniques in the law. Computational methods in fact advanced enough that members of the legal profession began in the 2000s to analyze, predict, and worry about the potential future of computational law, and a new academic field of computational legal studies now seems well established. As insight into what such scholars see in the law's future due in part to computational law, here is a quote from a recent conference about the "New Normal" for the legal profession:
"Over the last 5 years, in the fallout of the Great Recession, the legal profession has entered the era of the New Normal. Notably, a series of forces related to technological change, globalization, and the pressure to do more with less (in both corporate America and law firms) has changed permanently the legal services industry. As one article put it, firms are cutting back on hiring "in order to increase efficiency, improve profit margins, and reduce client costs." Indeed, in its recently noted cutbacks, Weil Gotshal's leaders remarked that it had initially expected old work to return, but came "around to the view that this is the ‘new normal.’"

The New Normal provides lawyers with an opportunity to rethink—and reimagine—the role of lawyers in our economy and society. To the extent that law firms enjoyed, or still enjoy, the ability to bundle work together, that era is coming to an end, as clients unbundle legal services and tasks. Moreover, in other cases, automation and technology can change the roles of lawyers, both requiring them to oversee processes and use technology more aggressively as well as doing less of the work that is increasingly managed by computers (think: electronic discovery). The upside is not only greater efficiencies for society, but new possibilities for legal craftsmanship. The emerging craft of lawyering in the New Normal is likely to require lawyers to be both entrepreneurial and fluent with a range of competencies that will enable them to add value for clients. Apropos of the trends noted above, there are emerging opportunities for "legal entrepreneurs" in a range of roles from legal process management to developing technologies to manage legal operations (such as overseeing automated processes) to supporting online dispute resolution processes. In other cases, effective legal training as well as domain specific knowledge (finance, sales, IT, entrepreneurship, human resources, etc.)
can form a powerful combination that prepares law school grads for a range of opportunities (business development roles, financial operations roles, HR roles, etc.). In both cases, traditional legal skills alone will not be enough to prepare law students for these roles. But the proper training, which builds on the traditional law school curriculum and goes well beyond it including practical skills, relevant domain knowledge (e.g., accounting), and professional skills (e.g., working in teams), will provide law school students a huge advantage over those with a one-dimensional skill set."
Many see benefits in the coming changes brought about by the computational automation of law. For one thing, legal experts have predicted that it will aid legal self-help, especially in the areas of contract formation, enterprise planning, and the prediction of rule changes. For another, those with knowledge of computers see the full bloom of computational law as imminent. In this vein, it seems that machines like Mehl's second type may come into existence. Stephen Wolfram has said that:
"So we're slowly moving toward people being educated in the kind of computational paradigm. Which is good, because the way I see it, computation is going to become central to almost every field. Let's talk about two examples—classic professions: law and medicine. It's funny, when Leibniz was first thinking about computation at the end of the 1600s, the thing he wanted to do was to build a machine that would effectively answer legal questions. It was too early then. But now we’re almost ready, I think, for computational law. Where for example contracts become computational. They explicitly become algorithms that decide what's possible and what's not.

You know, some pieces of this have already happened. Like with financial derivatives, like options and futures. In the past these used to just be natural language contracts. But then they got codified and parametrized. So they’re really just algorithms, which of course one can do meta-computations on, which is what has launched a thousand hedge funds, and so on. Well, eventually one's going to be able to make computational all sorts of legal things, from mortgages to tax codes to perhaps even patents. Now to actually achieve that, one has to have ways to represent many aspects of the real world, in all its messiness. Which is what the whole knowledge-based computing of Wolfram|Alpha is about."
Approaches
Algorithmic law
There have also been many attempts to create a machine readable or machine executable legal code. A machine readable code would simplify the analysis of legal code, allowing the rapid construction and analysis of databases, without the need for advanced text processing techniques. A machine executable format would allow the specifics of a case to be input, and would return the decision based on the case.
Machine readable legal code is already quite common. METAlex, an XML-based standard proposed and developed by the Leibniz Center for Law of the University of Amsterdam, is used by the governments of both the United Kingdom and the Netherlands to encode their laws. In the United States, an executive order issued by President Barack Obama in May 2013 mandated that all public government documentation be released in a machine readable format by default, although no specific format was mentioned.
Machine executable legal code is much less common. As of 2020, numerous projects are working on systems for producing machine executable legal code, sometimes through natural language, constrained language, or a connection between natural language and executable code similar to Ricardian contracts.
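The distinction between the two formats can be made concrete. As a minimal sketch (the rule, names, and thresholds below are invented for illustration and drawn from no actual statute), a machine-executable encoding of a legal rule is simply a function that takes the specifics of a case as input and returns the decision:

```python
# Hypothetical "machine executable legal code": an invented dependent-status
# rule expressed as an ordinary function. A real system would derive such
# rules from statute text or a constrained legal language.
def qualifies_as_dependent(age: int, is_student: bool, support_fraction: float) -> bool:
    """Apply the (invented) rule to one case and return the decision."""
    under_age_limit = age < 19 or (is_student and age < 24)  # invented thresholds
    majority_support = support_fraction > 0.5  # claimant provides over half the support
    return under_age_limit and majority_support
```

Once a rule is in this form, the specifics of any case can be fed in mechanically, which is exactly the property that separates machine-executable code from merely machine-readable markup.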
Empirical analysis
Many current efforts in computational law are focused on the empirical analysis of legal decisions, and their relation to legislation. These efforts usually make use of citation analysis, which examines patterns in citations between works. Due to the widespread practice of legal citation, it is possible to construct citation indices and large graphs of legal precedent, called citation networks. Citation networks allow the use of graph traversal algorithms in order to relate cases to one another, as well as the use of various distance metrics to find mathematical relationships between them. These analyses can reveal important overarching patterns and trends in judicial proceedings and the way law is used.
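As a minimal, self-contained sketch of these ideas (the case names and citations are invented), a citation network can be stored as an edge list, "importance" crudely approximated by how often a case is cited, and the distance between two cases computed by breadth-first graph traversal:

```python
from collections import Counter, deque

# Hypothetical precedent data: an edge (a, b) means opinion a cites opinion b.
citations = [("E", "C"), ("E", "D"), ("D", "B"), ("C", "A"), ("B", "A")]

# A crude importance signal: in-degree, i.e. how often each case is cited.
times_cited = Counter(cited for _, cited in citations)

# Build an undirected adjacency map so traversal can follow citations either way.
adjacency = {}
for a, b in citations:
    adjacency.setdefault(a, set()).add(b)
    adjacency.setdefault(b, set()).add(a)

def citation_distance(start, goal):
    """Breadth-first search: number of citation hops separating two cases."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in adjacency[node] - seen:
            seen.add(nxt)
            queue.append((nxt, dist + 1))
    return None  # the cases are not connected

print(times_cited["A"])             # A is cited twice
print(citation_distance("E", "A"))  # two hops: E -> C -> A
```

Real studies use far richer measures (PageRank-style authority scores, hub scores, temporal weighting), but they reduce to operations of this kind on the citation graph.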
There have been several breakthroughs in the analysis of judicial rulings in recent research on legal citation networks. These analyses have made use of citations in Supreme Court majority opinions to build citation networks, and analyzed the patterns in these networks to identify meta-information about individual decisions, such as the importance of the decision, as well as general trends in judicial proceedings, such as the role of precedent over time. These analyses have been used to predict which cases the Supreme Court will choose to consider.
Another effort has examined United States Tax Court decisions, compiling a publicly available database of Tax Court decisions, opinions, and citations between the years of 1990 and 2008, and constructing a citation network from this database. Analysis of this network revealed that large sections of the tax code were rarely, if ever, cited, and that other sections of code, such as those that dealt with "divorce, dependents, nonprofits, hobby and business expenses and losses, and general definition of income," were involved in the vast majority of disputes.
Some research has also been focused on hierarchical networks, in combination with citation networks, and the analysis of United States Code. This research has been used to analyze various aspects of the Code, including its size, the density of citations within and between sections of the Code, the type of language used in the Code, and how these features vary over time. This research has been used to provide commentary on the nature of the Code's change over time, which is characterized by an increase in size and in interdependence between sections.
Visualization
Visualization of legal code, and of the relationships between various laws and decisions, is also an active area of computational law. Visualizations allow both professionals and laypeople to see large-scale relationships and patterns, which may be difficult to see using standard legal analysis or empirical analysis.
Legal citation networks lend themselves to visualization, and many citation networks which are analyzed empirically also have sub-sections of the network that are represented visually as a result. However, there are still many technical problems in network visualization. The density of connections between nodes, and the sheer number of nodes in some cases, can make the visualization incomprehensible to humans. There are a variety of methods that can be used to reduce the complexity of the displayed information, for example by defining semantic sub-groups within the network, and then representing relationships between these semantic groups, rather than between every node. This allows the visualization to be human readable, but the reduction in complexity can obscure relationships. Despite this limitation, visualization of legal citation networks remains a popular field and practice.
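The semantic sub-group technique mentioned above, collapsing node-level edges into edges between groups, can be sketched as follows (all case names and group assignments are invented for illustration):

```python
from collections import Counter

# Hypothetical network: each opinion is pre-assigned to a semantic sub-group.
group_of = {"c1": "privacy", "c2": "privacy", "c3": "privacy",
            "c4": "commerce", "c5": "commerce", "c6": "speech"}
edges = [("c1", "c2"), ("c2", "c3"), ("c3", "c4"),
         ("c4", "c5"), ("c5", "c6"), ("c1", "c4")]

# Collapse node-level edges into weighted group-level edges. Intra-group
# edges are dropped entirely, which is precisely the information loss this
# complexity-reduction step trades away for readability.
group_edges = Counter(
    tuple(sorted((group_of[a], group_of[b])))
    for a, b in edges
    if group_of[a] != group_of[b]
)

print(dict(group_edges))
# {('commerce', 'privacy'): 2, ('commerce', 'speech'): 1}
```

Here a six-node graph becomes a three-node graph whose edge weights summarize cross-group citation traffic; the same aggregation stays legible even when the underlying network has thousands of nodes.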
Examples of tools
OASIS LegalXML, UNDESA Akoma Ntoso, and CEN MetaLex, standards created by legal and technical experts for the electronic exchange of legal data.
Creative Commons, which provides custom-generated copyright licenses for internet content.
Legal Analytics, which combines big data, critical expertise, and intuitive tools to deliver business intelligence and benchmarking solutions.
Legal visualizations. Examples include Katz's map of Supreme Court decisions, Starger's Opinion Lines for the Commerce Clause and stare decisis, and Surden's visualizations of copyright law.
Online legal resources and databases
PACER is an online repository of judicial rulings, maintained by the Federal Judiciary.
The Law Library of Congress maintains a comprehensive online repository of legal information, including legislation at the international, national, and state levels.
The Supreme Court Database contains detailed information about decisions made by the Supreme Court from 1946 to the present.
The United States Reports contains detailed information about every Supreme Court decision from 1791 to the near-present.
See also
Artificial intelligence and law
Jurimetrics
Lawbot
Legal informatics
Legal expert systems
References
External links
CodeX Techindex, Stanford Law School Legal Tech List
LawSites List of Legal Tech Startups
Argument technology
Computer law
Computational fields of study
18364946 | https://en.wikipedia.org/wiki/Daniela%20L.%20Rus | Daniela L. Rus | Daniela L. Rus is a Romanian-American roboticist, the Director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and Andrew and Erna Viterbi Professor in the Department of Electrical Engineering and Computer Science (EECS) at the Massachusetts Institute of Technology.
Biography
Daniela L. Rus was born in 1963 in Cluj-Napoca, Romania. She came to the United States together with her parents in 1982. Her father, Teodor Rus, is an emeritus professor of computer science at the University of Iowa.
In 1993 Rus received her Ph.D. from Cornell University under the supervision of John Hopcroft. She began her academic career in the Computer Science Department at Dartmouth College, where she rose from assistant to associate to full professor, before moving to MIT.
At Dartmouth College's Computer Science Department, Rus founded and directed the Dartmouth Robotics Laboratory. She is currently the Director of MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and head of its Distributed Robotics Lab.
Rus is a member of the National Academy of Engineering (NAE), and a fellow of AAAI and IEEE. She was also the recipient of an NSF Career award and an Alfred P. Sloan Foundation fellowship, and was a recipient of the 2002 MacArthur Fellowship.
Work
Rus's research interests include robotics, mobile computing and programmable matter. She is known for her work on self-reconfiguring robots, shape-shifting machines that have the ability to adapt to different environments by altering their internal geometric structure. They do this on their own, without remote control, for locomotion, manipulation, or sensing purposes. She has shown that these self-reconfigurable machines could be used in many situations where the possible obstacles and constraints on movement could never be fully anticipated in preprogrammed control software (e.g., deep sea or planetary exploration).
Networked, distributed, collaborative robotics
Rus's research is focused on developing the science of networked/distributed/collaborative robotics. Simply put, her research answers the question: how can many machines collaborate to achieve a common goal? Distributed networked robot systems consist of multiple robots that are connected by communication. In these systems the robots interact locally with the environment.
The objective is for the system as a whole to have guaranteed global behavior. Distributed robotics is an important area of robotics as it addresses how collections of robots can collaborate to achieve a larger task than each individual robot is capable of doing. Rus was the first to put this area on a firm algorithmic footing.
Development of algorithms
Rus's research addresses the development of algorithms that
enable collaboration;
couple tightly communication, control, and perception;
are scalable and generally independent of the number of units in the system;
have provable guarantees.

An important theme in this work is self-organization: the study of computational processes that interact with each other and with the physical world by means of perception, communication, and change to achieve system reconfiguration in response to the task requirements and the environment.
Distributed Robotics Lab
At MIT CSAIL, Rus heads the Distributed Robotics Lab which conducts research on modular and self-reconfiguring robots, distributed algorithms and systems of self-organizing robots, networks of robots and sensors for first-responders, mobile sensor networks, cooperative underwater robotics, and desktop robotics.
Her lab has created robots that can mimic a variety of human-like actions like tending a garden, dancing, baking cookies from scratch, cutting birthday cakes, among other behaviors. They can also fly in swarms without human aid to perform surveillance functions. She has led numerous groundbreaking research projects in the areas of multi-robot systems and in applications to transportation, security, environmental modeling and monitoring, underwater exploration, and agriculture.
Awards
In 2017, Rus was included in Forbes "Incredible Women Advancing A.I. Research" list.
Rus was elected a member of the National Academy of Engineering in 2015 for contributions to distributed robotic systems.
References
External links
Daniela Rus Home page at MIT
Daniela Rus CSAIL home page
Daniela Rus MacArthur Fellows Page
MIT Distributed Robotics Lab Homepage
Living people
American computer scientists
American roboticists
Women roboticists
Control theorists
American women computer scientists
Cornell University alumni
Dartmouth College faculty
MIT School of Engineering faculty
MacArthur Fellows
Romanian emigrants to the United States
Scientists from Cluj-Napoca
Fellows of the Association for the Advancement of Artificial Intelligence
Year of birth missing (living people)
Fellows of the American Academy of Arts and Sciences
American women academics
MIT Computer Science and Artificial Intelligence Laboratory people
21st-century American women
4629897 | https://en.wikipedia.org/wiki/Orange%20Micro | Orange Micro | Orange Micro Inc. was an American computer hardware company that made products for use with Apple computers. The company made a variety of products for many machines, ranging from the Apple II series to the Macintosh line. The company went out of business in 2004.
Products
Products for the Apple II series
Orange Micro entered the market for third-party Apple hardware by developing peripherals for the Apple II series. Notably, the company developed the famed Grappler+ card, which provided an easy way to print graphics on dot matrix printers, and later a parallel port adapter for the Apple IIc.
PC compatibility
In the early 1990s, Orange Micro introduced what were described as "DOS compatibility cards". This was a concept first introduced in the Mac286 by AST Research, for which Orange Micro had purchased the rights. These cards essentially consisted of an entire PC on a NuBus or PCI card. They contained enough hardware to run PC software such as DOS and Windows at native hardware speeds: notably, an Intel-compatible CPU, RAM, sound cards, and video chipsets supporting CGA or VGA. Some hardware, such as disks, printers, modems and network interface cards, was emulated in software.
While Orange Micro sold their compatibility card under the Mac286 name for a time, later offerings were based on the 80386, 80486, and Pentium lines. Additional cards offered support for AMD, Cyrix, and IDT processors, offering a lower cost.
An example of such a PC compatibility card was the OrangePC Model 220. This card, for NuBus-based Macintoshes, included a 66 MHz 486DX2 and 8MB of preinstalled memory. In December 1995, its retail price was US$1127.
A later model, the OrangePC 620 series, offered a 200 or 233 MHz processor. In 1998, it started at US$399, significantly less than previous incarnations. Various 620 models utilized processors from Intel, AMD, and IDT. High-end models included a Sound Blaster chipset, while more affordable options provided software emulation, with the caveat that sound could not be played in MS-DOS software.
One of its last PC compatibility offerings was the OrangePC 660, introduced in late 1998, supporting a Socket 7 CPU from a 100 MHz Pentium up to a 400 MHz K6-III processor, an NVIDIA RIVA 128 chipset, and two DIMM slots for up to 256MB of SDRAM. The PCfx!, introduced at the same time, was a simplified OrangePC 660 with a soldered-on 200 MHz Pentium processor, an NVIDIA RIVA 128 chipset, and only one DIMM slot for up to 128MB of SDRAM; it was marketed as a way for Macintosh users to play PC games.
The need for such specialized compatibility hardware was eliminated after the Mac transition to Intel processors, particularly after the release of Boot Camp and virtualization software such as Parallels Desktop for Mac and VMware Fusion.
Competition
Competitors to the Orange Micro compatibility solutions at the time of its heyday included SoftPC or SoftWindows, a software based solution. Since SoftPC was an emulator, it was much slower than the Orange Micro offerings, which used real PC hardware.
FireWire
Later in its life, the company focused more on its OrangeLink FireWire based products. This included FireWire controllers, hubs, webcams, and digital cameras, such as the iBot.
Domain name status
After Orange Micro closed, the company's web domain, orangemicro.com, expired, as nobody remained to pay for its upkeep. At one point it was brought back as a mirror containing illicit links and pop-up ads amongst the original content; it is currently just another generic advertisement-filled website run by domain squatters.
References
External links
"Orange Micro out of business?" - Arstechnica article on the end of Orange Micro
Orange Micro archive on Archive.org
The patent settlement mentioned in the thread
Defunct computer hardware companies
Companies disestablished in 2004
Companies established in the 1990s
36193512 | https://en.wikipedia.org/wiki/Datacom%20Group | Datacom Group | Datacom Group Limited is an Information Technology services company, offering management and consulting, cloud services, ITO, data centre services, custom software development, and payroll services. The company was started in New Zealand in 1965, but has expanded to operate in Australia, Malaysia, the Philippines, the United States and the United Kingdom, employing 6,500 people across 23 offices globally. Datacom is the largest technology company in New Zealand.
The company's two biggest shareholders are Evander Management Ltd (the family company of John Holdsworth) with 51 per cent and the New Zealand Superannuation Fund with 35 per cent. The New Zealand Superannuation Fund spent $142 million in 2012 buying out New Zealand Post's 35 per cent shareholding. John Holdsworth stepped down as chairman of the board in 2012 and was replaced by New Zealand businessman Craig Boyce.
Datacom was founded as Computer Bureau Ltd in Christchurch, New Zealand, in 1965. It expanded nationally through the holding company Datacom Group Ltd in 1971, before opening offices in Australia in 1992, and in Asia in 1994. Greg Davidson serves as Group CEO.
History
1960s
Datacom was founded as Computer Bureau Ltd in 1965 by two Christchurch accountants, Dr Bernard Battersby and Paul Hargreaves.
A group of clients put up the original capital for the company - £30,000 - and an order was placed for an ICL 1902 computer, which didn't arrive in New Zealand for another year.
The company hired its first systems analysts and programmers in August 1965 and installed the first computer for a client in September of the following year.
Between 1968 and 1970, the company, by then called CBL, added offices in Wellington, Auckland and Hamilton.
1970s
In 1970, Hargreaves quit his family's accounting firm to run CBL full-time, and the business expanded to Auckland via the acquisition of the Fletcher Computer Bureau.
In 1971, the Datacom Group holding company was established.
CBL began offering remote on-line services through onsite terminals beginning in 1976.
1980s
CBL expanded its software development arm, introducing User-11, the first 4GL (Fourth-generation programming language) seen in New Zealand in 1981. The company also set up a data communications network and a New Zealand-wide timesharing service in the early 1980s initially using DEC PDP 11/70s and then added DEC VAX 11/750s and PDP 11/84s. These systems were also used for Facilities Management customers including the Canterbury Building Society and Hertz Rental Cars.
In 1983, CBL was the first to bring Oracle database technology to New Zealand for the New Zealand Dairy Company (now Fonterra).
In 1984, CBL changed its name to Datacom. Paul Hargreaves was appointed executive director, and later CEO when Battersby retired.
Datacom merged with computer-services company CCL (not to be confused with Computer Concepts Limited) in 1989, added facilities management and payroll divisions.
1990s
In 1991, Datacom signed its first large outsourcing contract in Auckland with Telecom Directories. The same year Datacom Wellington merged with the IT department of New Zealand Post, boosting staff numbers by 90.
In 1992, Datacom established a contact centre in Sydney, and in 1994 its first Australian office. This expanded its NZ-based services to Microsoft Australia. This regional service provided diagnostic technical support services to clients and customers. Datacom began exporting some of its facilities management and IT services into its Australian offices in the late 1990s, leading to the company's first data centre in Australia.
1996 saw the company open its first office in Asia, in Malaysia's Kuala Lumpur.
The business also further established partner programs and sales services with the channel and reseller community.
2000s
During the 2000s, Datacom made a series of acquisitions that spread its services to other locations in Australia. In 2004, it purchased GlobalCenter for $7.15 million, its second data centre,
and the following year, the company purchased NetOptions to establish a presence in Queensland.
In 2003 Datacom merged with Connect Interactive Business Services to create Datacom Connect, which largely expanded their call centre offerings.
It expanded from this base in 2007 after acquiring IT services company Agire Pty Ltd, located in Townsville.
Through start-up opportunities with local partners, Datacom moved into South Australia in 2006, and Western Australia in 2007. By the end of 2007, the company had acquired a third data centre in Sydney through Hansen Professional Services.
Its fourth data centre came in 2011 in Western Australia.
Datacom commenced business in Asia in 1994, building contact centres in Malaysia in 1996, and the Philippines in 2008, at the same time establishing a presence in China.
In 2009 Datacom opened its Auckland data centre, Orbit.
2010s
Datacom Technical Security Services was founded by former DSD security expert Richard Byfield.
Datacom opened a sister facility to Orbit, Kapua, in Hamilton in 2013.
In 2013 Datacom also sold the contact centre arm of its Asia business, but continued to serve the market with IT services.
In the same year Datacom acquired a SAP payroll firm in Melbourne to establish a strong relationship in the Australian health sector, and as a provider of SAP services.
Datacom purchased WA-based IT services provider XciteLogic after it collapsed.
In 2014, Datacom announced its acquisition of Tauranga-based software company Origen Technology Ltd.
August 2014 - Datacom co-founder, Paul Hargreaves, died after a short illness.
September 18, 2014, Datacom acquired a 20% stake in health informatics company, Smartward
The company also produced the (then) world's largest SAP migration to the Microsoft Azure cloud platform, with the world's largest kiwifruit exporter, Zespri. For its work, Datacom was awarded Microsoft's Cloud Enterprise Award at the 2013 Microsoft NZ Partner Awards.
In March 2015, Datacom won the Australian Federal Department of Health support services contract after a competitive tender process. The $242 million, five-year deal will see Datacom providing technology infrastructure and support services under a fully managed, consumption-based model. This joined its other Australian long term government contracts, such as Australian Customs, Australian Border Force and CrimTrac, the Department of the Environment, the Australian Taxation Office, and Airservices Australia.
In July 2015, Datacom was announced as Australia's first company to join Amazon Web Services Managed Service Program, one of only 20 worldwide.
In November 2015, Microsoft Australia announced Datacom as one of the latest partners to join its one-tier Cloud Solution Provider (CSP) program, which allows them to own the billing for products such as Office 365.
Datacom opened its third office in Melbourne in April 2016, bringing 100 new jobs to the region.
In August 2016, Datacom launched its Augmented Reality practice, the ANZ region's first, focused around Microsoft's Hololens.
In September 2016, Datacom was selected to deliver the Western Australian State Government's $1bn GovNext-ICT programme. This will encapsulate a blend of data centre, server, cloud services, storage and telephone services across all state government departments.
In December 2016, Datacom won a contract to supply IT infrastructure and support services to Toyota Australia, delivering end-to-end services following a competitive tender process.
In March 2017, Datacom New Zealand consolidated its five existing Auckland offices into a single location in Auckland's Wynyard Innovation Quarter.
Datacom completed Australia's first migration of SAP HANA into AWS, by taking $800 million-turnover resources company Oz Minerals into the cloud in early 2017.
Datacom Chairman, Craig Boyce, announced in November 2017, that Greg Davidson, Datacom Systems ANZ CEO, would succeed Jonathan Ladd, as Group CEO. Ladd became the Chairman of the International Business, and Greg Davidson formally took over the Group CEO role on March 31, 2018, alongside a new leadership team and organisational structure.
Financial Results
At the conclusion of the 2018/2019 NZ financial year, Datacom reported revenue of NZ$1.29bn, an improvement of 17% like-for-like over FY18. Datacom Group has adopted the NZ IFRS 15 accounting standard, which has a material impact on the Group's accounting policy, but the company has not restated FY18 figures. Net profit after tax was NZ$42.02m.
Further reading
References
Information technology companies of New Zealand
Conglomerate companies of New Zealand
Information technology companies of Australia
Companies based in Wellington
Conglomerate companies established in 1965
Privately held companies of New Zealand
New Zealand brands
New Zealand companies established in 1965
11074231 | https://en.wikipedia.org/wiki/AmiKit | AmiKit | AmiKit is a compilation of 425 pre-installed and pre-configured Amiga programs (Amiga software built for the Motorola 68k CPU) running on Windows, macOS and Linux computers (via the WinUAE emulator) and on Amiga computers fitted with a Vampire V2 card.
Features
Besides the original Workbench, AmiKit offers Directory Opus Magellan and Scalos as desktop replacements
Includes 425 pre-installed and pre-configured Amiga programs (Tools, Utilities, Games and Demos) freely available from Aminet and other sources
"Rabbit Hole" feature allows launching Windows, Mac or Linux applications directly from the Amiga desktop
Supports HD Ready (720p) and FullHD (1080p) resolutions in 32-bit screen modes
Features TrueType fonts, DualPNG icons (by Ken Lester) and 24-bit visual themes (including Dark Mode, Modern Retro, etc.)
Dropbox and Google Drive support
Requirements
AmiKit requires a Vampire V2 turbo card (for classic Amiga), or Windows 7 or later, macOS (10.8 up to 10.14), or Linux (x86/64 able to run PlayOnLinux, or Raspberry Pi).
For AmiKit to work, the original AmigaOS (version 3.x) and Kickstart ROM (version 3.1) are required. The following sources are supported:
AmigaOS XL CD or ISO
AmigaOS 3.9 CD or ISO (also available in Amiga Forever from Cloanto, including required Kickstart ROM)
AmigaOS 3.5 CD or ISO
AmigaOS 3.1 available on AmigaOS4.1 FE CD or ISO from Hyperion Entertainment (also includes required Kickstart ROM)
AmigaOS 3.1.4 ZIP from Hyperion Entertainment (also includes required Kickstart ROM)
AmigaOS 3.2 from Hyperion Entertainment (AmiKit for Vampire only)
See also
Amiga
AmigaOS
UAE (emulator)
Emulator
External links
Amiga
575577 | https://en.wikipedia.org/wiki/Sky%20%28New%20Zealand%29 | Sky (New Zealand) | Sky Network Television Limited, more commonly known as Sky, is a New Zealand broadcasting company that provides pay television services via satellite, media streaming services and broadband internet services. It is also a wholesale channel provider to New Zealand IPTV provider Vodafone. As at 30 June 2021, Sky had 955,168 subscribers consisting of 561,989 satellite subscribers and 393,179 streaming subscribers. Despite the similarity of name, branding and services, such as Sky Go and MySky shared with its European equivalent, Sky, there is no connection between the companies.
History
The company was founded by Craig Heatley, Terry Jarvis, Trevor Farmer and Alan Gibbs in 1987 as Sky Media Limited. Jarvis and an engineering associate, Brian Green, formed it to investigate beaming sports programming into nightclubs and pubs using high-performance 4-metre satellite dishes, but the company was redirected into pay television following successful bidding in early 1990 for four groups of UHF frequencies in the Auckland, Hamilton and Tauranga regions. Initially operating only in the Auckland region, Sky contracted Broadcast Communications (now Kordia) to provide the broadcast service and transmission from its Panorama Road studios, formerly owned by defunct broadcaster Northern Television. The first Sky subscriber was former Speaker of the New Zealand House of Representatives Jonathan Hunt, according to Helen Clark, former Prime Minister of New Zealand.
The concept of a pay television service was new to New Zealand, and Sky had early problems, including winning viewer acceptance of subscription television and the difficulty of educating retailers and customers on the use of the original decoders. This problem eased with the introduction of easier-to-use decoders that allowed greater viewer flexibility.
UHF service
Sky originally launched on 18 May 1990 as an analogue UHF service. Subscribers required a VideoCrypt decoder and a UHF aerial, both of which were supplied by Sky when joining. The signal was sent with the picture scrambled using VideoCrypt technology; the decoder unscrambled the picture. Sky Movies was the only channel broadcast in NICAM stereo; Sky Sport and Sky News were broadcast in mono. The original decoder didn't actually support stereo sound; a subscriber who wanted to watch Sky Movies in stereo had to feed the audio from another source, such as a NICAM-stereo-capable VCR.
Until mid-1991, free-to-air broadcasts were shown in the early morning hours on Sky News and between 5 pm and 6 pm on Sky Sport; those without a Sky subscription could view these broadcasts without a UHF decoder by tuning their TV to the Sky News or Sky Sport UHF channel, as the signals were not scrambled during those times.
The original channel lineup consisted of three channels: Sky Movies (later renamed HBO before reverting to its original name), Sky Sport and Sky News. Sky rapidly won long-term rights from US sports network ESPN (which became a 1% shareholder) as well as CNN and HBO, providing it with a supply of sports, news and movies for the three channels. Sky News screened a mixture of CNN International and BBC news bulletins and a replay of the 6 pm One Network News bulletin from TVNZ, later changing to a replay of the 3 News 6 o'clock bulletin from TV3. The Sky News channel was later discontinued and rebranded as a CNN channel.
In 1994, Sky launched two further channels, Discovery Channel and Orange; Orange later became known as Sky 1 and then The Box. Discovery Channel broadcast on a channel already used by Trackside: the Trackside service was available free-to-air to anyone who could receive the UHF signal, without the need for a Sky decoder, while Discovery Channel screened outside racing hours and was available only to Sky subscribers.
Orange broadcast from 10 am onwards each day, with Juice TV screening outside Orange's broadcast hours; Juice TV was originally available free-to-air. Cartoon Network shared the same channel as Orange from 1997 to 2000, screening between 6 am and 4 pm, with Orange screening after 4 pm. In 2000, Cartoon Network was replaced with Nickelodeon.
Later, funding allowed Sky to extend its coverage throughout most of New Zealand: In 1991, the company expanded to Rotorua, Wellington and Christchurch. Then in 1994, the company expanded to Hawkes Bay, Manawatu, Southland and Otago, followed by the Wairarapa, Taupo, and Wanganui regions in 1995. Its final UHF expansion, in 1996, was to Taranaki, Whangarei, and eastern Bay of Plenty.
Following the launch of the digital satellite service in 1998 (see below), Sky began reducing services on the UHF platform. NICAM stereo was eventually removed from Sky Movies, the CNN channel was discontinued in 2004 with the UHF frequencies issued to Māori Television.
Sky switched off its analogue UHF TV service on 11 March 2010 at midnight.
Sky used a portion of the freed up UHF and radio spectrum to launch its joint venture, Igloo, in December 2012. The remaining unused spectrum was relinquished back to the Government and will be recycled to support new broadcasting ventures.
Satellite service
In April 1997, Sky introduced a nationwide analogue direct broadcasting via satellite (DBS) service over the Optus B1 satellite. This allowed it to offer more channels and interactive options, as well as nationwide coverage. It upgraded it to a digital service in December 1998.
While some channels on the UHF platform shared airtime, Sky Digital screened the same channels 24 hours a day. Orange (later known as Sky 1 and The Box) extended to 24-hour broadcasting on Sky Digital but was available to Sky UHF subscribers only between 4 pm and 6 am. Discovery Channel was available to Sky Digital subscribers 24 hours a day, but UHF subscribers could receive the channel only outside Trackside's broadcast hours.
Digital versions of free to air channels have always been available on Sky Digital meaning that some subscribers did not need to purchase any equipment to receive digital TV when New Zealand switched off its analogue service. While most free to air channels have been available on Sky Digital, TVNZ channels TVNZ 1 and TVNZ 2 did not become available until the end of 2001.
A SkyMail email service was featured for a time but was later withdrawn, along with the wireless keyboards produced for it, due to lack of interest.
The unreliability of the aging Optus B1 satellite was highlighted when the DBS service went offline just before 7 p.m. NZST (8 a.m. London, 3 a.m. New York) on 30 March 2006. The interruption affected service to over 550,000 customers and caused many decoders to advise customers of "rain fade." Due to excessive volume of calls to the Sky toll-free help-desk, Sky posted update messages on their website advising customers that they were working with Optus to restore service by midnight. Sky credited customers with one day's subscription fees as compensation for the downtime at a cost to the company of NZ$1.5 million. Sky switched its DBS service to the Optus D1 satellite on 15 November 2006. It later expanded its transponder capacity on this satellite to allow for extra channels and HD broadcasts.
My Sky launch
In December 2005, Sky released its own digital video recorder (DVR), which essentially was an upgraded set top box similar to Foxtel IQ in Australia or TiVo in the United States. Called My Sky, it offered viewers the ability to pause live television, rewind television, record up to two channels at once straight to the set top box and watch the start of a recorded programme while still recording the end. It also gave viewers access to a revamped Guide and the new Planner, used to plan and access recordings at the touch of a button.
My Sky contained software that locked playback of pre-recorded programmes after an hour without a signal from Sky. This was discovered on 30 March 2006, after the ageing Optus B1 satellite was out of alignment for a 13-hour period and therefore unable to broadcast Sky to over 600,000 subscribers.
This generation of box was replaced by My Sky HDi when it launched on 1 July 2008. The boxes allow connection of up to four satellite feeds, which can work with the four TV tuner cards in any combination. The device has a 320GB HDD. My Sky HDi outputs 576i via component and 720p/1080i via HDMI.
A new feature was released exclusive to My Sky HDi on 6 July 2010 called Record Me. This feature allows subscribers to press the green button on programme advertisements to record that advertised programme.
In May 2009, Sky introduced copy protection on My Sky and My Sky HDi decoders limiting the ability to copy material from My Sky/My Sky HDi to DVD/HDD recorders and to PCs. Sky Box Office channels, including adult channels are copy protected so DVD/HDD recorders and PCs will not record from these channels. Other channels are not copy protected. Copy protection technology is not built into other decoders.
On 1 July 2011, a version of the same decoder with a 1TB hard drive was launched as My Sky+.
Purchase of Prime Television
In November 2005, Sky announced it had purchased the free-to-air channel Prime TV for NZ$30 million. Sky uses Prime TV to promote its pay content and to show delayed sports coverage. New Zealand's Commerce Commission issued clearance for the purchase on 8 February 2006.
Purchase of Onsite Broadcasting
In July 2010, Sky purchased Onsite Broadcasting, later Outside Broadcasting (OSB), from Australia's Prime Media Group. The sale price was $35 million, but once liabilities were taken into account the net amount was $13.5 million. From 2001, OSB had provided outside broadcast facilities for Sky's sporting coverage and was also contracted out by Sky to other broadcasters, including TVNZ, TV3, Warner Brothers, Fox Sports, Channel 9, Ten Network, Channel 7 and the BBC. It effectively replaced Moving Pictures, TVNZ's outside broadcast division, which had dominated the market; Moving Pictures' assets were eventually sold as Sky's sports rights increased in the mid-2000s and OSB took hold.
OSB owned the following vehicles (until 2020's sale), based in Auckland, Wellington and Christchurch;
HD1 and HD3: 14.3m semi-trailer production units with expanding sides, capable of holding 20+ cameras. They are supported by tender vehicles. HD1 is based in Christchurch.
HD2: 14.3m semi-trailer production unit with the capability of holding up to 16+ cameras. It is supported by a tender vehicle with extra production facilities. This unit is based in Wellington.
HD4: 15m semi-trailer production unit with the capability of using 16+ cameras. It too is supported by a tender vehicle with additional production space.
HD5: 12.5m rigid truck that can input 8+ cameras, supported by a similar-sized tender vehicle with additional production room.
HD6: Small van which is capable of 6+ cameras. It is supported by a similar sized van for storage and linking
AUX1: Was an original outside broadcast production unit (OSB1), however it has been converted into a specialised production trailer (not a stand-alone OB trailer) for specialty cameras, additional graphics and houses any overflow production areas for larger broadcasts
OSB2: An original standard definition 13.5m semi-trailer production unit capable of 14+ cameras. This is supported by a tender truck with additional production space.
HD/SD Fly Away kits: Suitable for broadcasts overseas
On 12 August 2020, Sky announced it had sold Outside Broadcasting to NEP New Zealand, part of American production company NEP Group. As part of the transaction, NEP will be Sky's outsourced technical production partner in New Zealand until at least 2030. The sale was cleared by the Commerce Commission on 5 February 2021.
News Corp sale
In February 2013, News Corp announced it would be selling the 44 percent stake in Sky TV that it acquired via a merger with Independent Newspapers Ltd in 2005.
Replacement of legacy hardware
From November 2015, Sky started replacing the legacy standard digital decoders and original 2005 My Sky decoders with a new digital decoder, manufactured by Kaon.
The Kaon Sky box includes built-in Wi-Fi. A Sky Link adapter device can be ordered for free by customers, in order to use a Wi-Fi connection on current My Sky boxes. The Kaon box's recording features and storage capacity can be blocked unless the customer subscribes to My Sky. The decoder upgrade allowed Sky to cease broadcasting scrambled channels using H.262 video compression in favour of H.264, which roughly doubled Sky's satellite capacity for additional channels and, potentially, Ultra HD H.265 broadcasts in the future. The upgrade of transponders to H.264 was completed in March 2019. Free-to-air channels such as Prime, Edge TV and Bravo Plus 1 remain in H.262 so they stay accessible to non-Sky viewers such as Freeview users.
Additionally, the My Sky HDi and My Sky + decoder software was upgraded to use the same system software as the new Kaon boxes. The new Sky software had features such as internet capability, search functionality, favourite channels, and a series stack function.
The software upgrade to My Sky boxes contained many bugs and caused thousands of customers to become disgruntled. The major issue was with the screen font which Sky later addressed in a future upgrade.
Proposed merger with Vodafone New Zealand
In June 2016, Sky TV and Vodafone New Zealand agreed to merge, with Sky TV purchasing 100% of Vodafone NZ's operations for a cash payment of NZ$1.25 billion and the issue of new shares to the Vodafone Group, which would have received a 51% stake in the combined company. However, the proposed merger was rejected by the Commerce Commission, and Sky TV's shares plunged on the news.
Unbundling, IVP project and departure of CEO
In late February 2018, Sky TV announced that it would split its existing Sky Basic service into two new packages, Sky Starter and Sky Entertainment, giving new and existing customers the option of building bundles. The Sky Starter package would cost $24.91 monthly, replacing the earlier Sky Basic service, which cost $49.91 monthly with extra charges for sports, movie, and other premier channels. The price reduction came in response to fierce competition from streaming services such as Netflix, Lightbox, and Amazon Prime Video, which had caused the loss of 38,000 satellite subscribers the previous year. Unlike its competitors, Sky TV was dependent on a linear broadcasting model and its exclusive rights to rugby union, rugby league, netball, and cricket content. While Sky TV hoped that this change would attract new customers, the company's stock market shares dropped by 10% in response to investor concerns about future revenue, knocking NZ$100 million off its market value.
In early March 2018, it was reported that Sky TV CEO John Fellet was pursuing talks with Netflix and Amazon Prime to share content and services. Fellet hoped to mimic the UK-based television company Sky plc's success in negotiating a bundling package with Netflix.
On 26 March 2018, John Fellet announced his intention to step down from his position, after being CEO for 17 years. Fellet had been with the company since 1991, first as chief operating officer before taking on the chief executive role in January 2001. On 21 February 2019, Martin Stewart replaced John Fellet as CEO. He had previously worked for BSkyB, The Football Association and OSN. On 1 December 2020, Stewart left the company to return home to Europe. Sophie Maloney was immediately appointed to the CEO position.
Focus on streaming
In February 2015, Sky launched its own subscription-based video streaming service called Neon to allow New Zealanders to stream various HBO films and shows and to compete with US-based streaming service Netflix, which launched in New Zealand in March 2015. Sky had initially planned to launch Neon in 2014 but was delayed by systems bugs.
On 16 August 2019, Sky announced it had purchased Coliseum Sports Media's global rugby streaming service RugbyPass for approximately US$40 million.
On 19 December 2019, it was announced that Sky would be purchasing Spark New Zealand's streaming service Lightbox. On 14 June 2020, Sky confirmed that Lightbox would be merged into the Neon app on 7 July 2020. The merged service retains the Neon brand but uses Lightbox's interface and includes content drawn from both Neon and the old Lightbox.
Launch of broadband service
On 21 May 2020, Sky announced its plans to launch fibre broadband internet plans in 2021. Sky raised $157 million from investors with a discounted share issue to cover the cost of entering the broadband market. On 10 September 2020, Sky announced that a number of its staff members including Sky's then CEO, Martin Stewart, were trialing the broadband service in their homes. This testing was later expanded to a small group of customers in December 2020.
On 24 March 2021, Sky launched the broadband service initially for existing satellite customers only. Sky later expanded the offering to new customers on 17 May 2021.
Pursuing partnerships
On 22 August 2019, it was announced that Sky had signed a six-year agreement to take over from Westpac as the naming sponsor of Wellington Regional Stadium, effective 1 January 2020.
On 28 November 2019, Sky announced that TVNZ would be its free-to-air broadcast partner for the 2020 Summer Olympics, instead of its own free-to-air channel Prime.
On 27 October 2020, Sky announced a partnership with Spark, where the Sky Sport Now streaming service would be bundled with Spark Sport for a NZ$49.99 monthly subscription.
On 9 June 2021, Sky announced an exclusive partnership with Disney to provide Sky Broadband customers with a 12-month subscription to the Disney+ streaming service.
On 24 June 2021, Sky announced a partnership with Discovery New Zealand to provide coverage of The Championships, Wimbledon for free-to-air channel Three.
Products and services
Satellite television channels
Sky defines a virtual channel order that groups channels by their content.
General entertainment channels are below channel 30 which includes TVNZ's free-to-air TVNZ 1 (four regional markets for SD), free-to-air TVNZ 2 and free-to-air TVNZ Duke, Discovery, Inc.'s free-to-air Three, free-to-air Bravo, TLC, Living, Investigation Discovery, free-to-air HGTV and free-to-air Choice TV, ViacomCBS channels Comedy Central and MTV, NBCUniversal's Universal TV and E!, Sky's Prime (three regional markets), Sky 5, Vibe, Jones!, Jones! too and Sky Box Sets, BBC UKTV, government funded free-to-air Māori Television and The Shopping Channel. Sky Arts and SoHo are available as extra channels.
Movie channels are from 30 to 39 which includes Sky Movies Premiere (new releases), Sky Movies Comedy, Sky Movies Action, Sky Movies Greats (modern classics), Sky Movies Classics, Sky Movies Collection (themed) and Sky Movies Family. Rialto (independent) is available as an extra channel. Sky Box Office channels are available as pay-per-view from 40 to 49.
Sporting channels are from 50 to 69 which includes Sky Sport Select, Sky Sport 1 to 9, select Sky Sport Pop-Up channels for special sporting events, Disney's ESPN & ESPN2, TAB Trackside 1 and TAB Trackside 2. Sky Arena offers one off pay-per-view events.
Documentary channels are from 70 to 79 which includes targeted scheduling for Discovery, Inc.'s Discovery, Animal Planet and Discovery Turbo, as well as Disney's National Geographic, BBC Earth, and Foxtel Networks versions of A+E Networks' Crime + Investigation and History.
Public service channels are from 80 to 85 which includes government funded free-to-air Te Reo and the Auckland regional channel Face TV. The rural sponsored Country TV is an available extra channel.
News coverage channels are from 85 to 99 which includes government provided Parliament TV, Australia's Sky News, WarnerMedia's CNN International, Fox Corporation's Fox News, NBCUniversal's CNBC Australia, Al Jazeera English, BBC World News, and RT.
Children & family entertainment channels are from 100 to 109 which includes the ViacomCBS channels Nickelodeon and Nick Jr., WarnerMedia's Cartoon Network and BBC Worldwide's CBeebies. Prior to 30 November 2019, Sky also provided the Disney and Disney Junior channels but discontinued these channels following the launch of the Disney+ streaming service in New Zealand on 19 November. In addition, Sky replaced the Sky Movies Disney channel with Sky Movies Family.
Music video channels are from 110 to 129 which includes ViacomCBS channels MTV Hits, MTV 80s, NickMusic and Discovery, Inc.'s free-to-air channels The Edge TV and Breeze TV.
Religious channels are from 200 to 299 which includes Shine TV, Daystar, Sonlife Broadcasting Network and Hope Channel.
Channels of an Asian origin include the English-speaking CGTN Documentary on Channel 309 and CGTN on channel 310. Hindi language channels are from 150 to 152 and include Star Plus Hindi, Colors and Star Gold. ABS-CBN’s The Filipino Channel is also available on channel 160.
A selection of Jukebox radio channels from 400 to 499 are available called Sky Digital Music. With free-to-air radio from 420 to 429 which includes RNZ National, RNZ Concert and Tahu FM.
Timeshifted versions of general entertainment channels are from 501 to 599 for an hour delay of TVNZ 1 +1, TVNZ 2 +1, ThreePlus1, TVNZ Duke+1, Bravo Plus 1 and Prime Plus 1.
Channels for special services (system/hidden) are from 800 to 999 which includes Supercheap in-store radio and an auxiliary backup channel.
Previously Sky featured adult TV channels, including content from Playboy, but these were eventually discontinued.
High definition channels include:
TVNZ 1
TVNZ 2
Three
Prime
Sky 5
Vibe
BBC UKTV
SoHo
Universal TV
Comedy Central
MTV
Living
MTV Hits
Sky Arts
TVNZ Duke
Sky Movies channels
Rialto
Sky Box Office
Sky Sport channels
ESPN and ESPN2
TAB Trackside 1 and TAB Trackside 2
Sky Arena
Discovery
National Geographic
BBC Earth
Discovery Turbo
Due to satellite bandwidth constraints, the quality is lower for TVNZ 1, TVNZ 2 and Three than the free-to-air terrestrial versions.
MySky
All Sky customers have the option to subscribe to the MySky service, per each Kaon Sky box in order to activate PVR features on that box. This allows the customer to pause and rewind live television, as well as record three channels while watching a fourth live, on their Sky box. The current Kaon Sky box has 500GB of storage space. Another one of the advertised features of MySky is the ability to record series of programs using the "Series Link" feature. Additionally, an older Pace MySky box is available with 320GB storage and a "+" version of this box with a 1TB hard drive.
Streaming services
Sky On Demand
In late 2006, Sky announced that it would use the 30% of disk space reserved in the PVR to offer a video on demand service to its My Sky customers. The service commenced in 2007, offering 12−15 movies at any one time. New titles were downloaded automatically from the Optus D1 satellite to the PVR and listed only when they were available for purchase and instant playback.
In November 2015, the Sky On Demand offering was extended to allow all Sky customers to watch subscribed content at a time that suits them, rather than according to the linear schedule. This removes the need to pre-record certain TV shows or films, because viewers can connect the updated decoder to their home broadband and choose to stream content from the catalogue of options, depending on which channels they subscribe to. This is more akin to the on-demand services offered by TVNZ and Netflix, and is designed to give viewers more freedom.
Sky Go
Sky Go is Sky's video on demand and live streaming service, launched in 2011 as iSky. It can be accessed via the Sky Go website on a PC, or on supported devices via the Sky Go app.
Remote record
In August 2009 an online service was launched where customers can log on and set their My Sky boxes to record programmes. The instruction to record a programme is sent to the set-top boxes via satellite.
Sky TV Guide app
Sky has released a mobile app which works on iOS devices such as iPhone, iPad and iPod Touches, Android devices & Windows 8. The app contains an electronic program guide, remote record capabilities (for My Sky boxes), Facebook & Twitter social functions and automatic programme reminders. The app had over 50,000 downloads from the iTunes App Store in four weeks.
Sky Sport Now
In 2015, Sky launched an online streaming service called Fan Pass (branded as FAN PASS), which provided access to Sky Sport channels 1–4, including highlights on demand. Pay-Per-View events could be purchased separately when available. This service was offered at a discount to Spark customers with unlimited broadband.
On 14 August 2019, Sky re-branded Fan Pass as Sky Sport Now, featuring live streams for all 10 Sky Sport channels, highlights, on demand, match statistics and points tables. Three passes are available for purchase: a week pass, a month pass and a 12-month Pass. Pay-Per-View events can be purchased separately when they become available.
On 27 October 2020, Sky announced that it would be bundling its Sky Sport Now streaming service with Spark Sport for a NZ$49.99 monthly subscription from 16 November 2020 onwards.
Neon
In February 2015, Sky launched Neon (branded as NEON), a subscription video on demand service. It is the only online streaming service in New Zealand where HBO shows, including Game of Thrones and Big Little Lies, can be legally streamed. Neon is available for streaming on desktop or laptop on all major browsers, apps for select iPad, iPhone and Android devices, as well as PlayStation 4 and Samsung Smart TV. Chromecast and Airplay functionality are available too. Prior to September 2019, Neon offered two packages: the TV package, which came with a free 14-day trial, or the TV & Movies package. In September 2019, Neon shifted to a single TV & Movies package priced at $13.95 per month in order to compete with Netflix, Amazon Prime Video and Spark New Zealand's Lightbox.
In mid-December 2019, it was announced that Sky would be purchasing rival streaming service Lightbox. In the interim, Spark continued to provide services to Lightbox customers while Sky footed the operational costs. On 7 July 2020, Sky formally merged the Lightbox app into Neon. The revamped streaming service allows users to stream on two devices, download films and shows onto devices, rent movies, and create multiple user profiles.
Discontinued products and services
Igloo
On 24 November 2011 Sky announced they had formed a partnership with Television New Zealand to launch a new low-cost pay television service during the first half of 2012. This was called Igloo and Sky had a 51% share in the venture. Details were announced on 8 December via a press release. Sky offered a selection of channels on a pre-pay basis.
The Igloo service was provided through DVB-T and was available in areas of New Zealand where Freeview's terrestrial service is available. Customers required an Igloo set top box and UHF aerial to use the Igloo service. Unlike Sky Digital and Sky's former UHF service, Igloo customers purchased their decoder from a retailer and owned the equipment; they were also responsible for installing it, including the UHF aerial. Sky subscribers, by contrast, do not own their Sky decoders and are required to return them on cancellation of their service; Sky also arranges for a technician to install any equipment in the customer's home, including the satellite dish.
Igloo worked on a prepay system under which the customer purchased basic channels for 30 days; the customer could discontinue the service at any time and still access free-to-air channels. Customers could also purchase one-off shows such as movies or sport events.
The Igloo service was closed on 1 March 2017, and Igloo boxes can still be used to access free to air channels by updating the system software of the box.
Fatso
Sky also owned an Online DVD and video game rental service called Fatso. It discontinued business in December 2017.
Magazine publishing
Sky provided a Skywatch monthly magazine to all its customers, published by Stuff and printed by Ovato. Skywatch once had a readership of 965,000 which made it the largest magazine read in New Zealand, and the largest monthly magazine. The magazine provided monthly listings for Sky channels, as well as highlights and features. The publication moved to digital only in April 2020 and was discontinued in August 2020.
In January 2007, Sky launched Sky Sport: The Magazine, as the published extension of the Sky Sport television package. The magazine featured articles by local and international sports writers, as well as sports photography. Sky TV Rugby commentator Scotty Stevenson was the editor. Publication ceased in June 2015.
Technical
Sky satellite subscribers receive a standard 60-centimetre satellite dish installed on their home along with set-top boxes as part of the installation.
Sky switched from the aging Optus B1 to the Optus D1 satellite for its DBS service on 15 November 2006. Initially, Sky used vertically polarised transponders on Optus D1 (as it had on Optus B1). However, on 31 July 2007 it moved its programming to horizontally polarised transponders with New Zealand-specific beams, to be consistent with Freeview and to gain access to more transmission capacity. Sky also purchased some of the capacity of Optus D3, launched in mid-August 2009, giving it the ability to add more channels and upgrade existing channels to HD in the future. However, due to the LNB switching that would have been required, the single D3 transponder lease was dropped in 2011.
A set-top box (STB) is used to decrypt the satellite signals. Digital broadcasts are in DVB-compliant MPEG-4 AVC. Interactive services and the EPG use the proprietary OpenTV system.
Equipment ownership
When a customer subscribes to Sky, a decoder is professionally installed, along with a satellite dish if one isn't already available. Sky maintains ownership of the equipment, and part of the customer's monthly subscription covers rental of the decoder. Customers who have My Sky pay an additional monthly cost. A customer who wants to discontinue the service temporarily can switch to a decoder-rental option that provides free-to-air channels only.
When a customer cancels a Sky subscription, the customer is required to return the equipment but not the satellite dish. A customer who moves to another address must leave the satellite dish behind and, at their own expense, arrange for a new dish to be installed at the new address if one is not already there. The satellite dish can be used to receive the Freeview satellite service using a Freeview set top box.
Reputation
The 2016 NZ Corporate Reputation Index placed Sky in last place. The Corporate Reputation Index lists the top 25 companies in New Zealand based on revenue sourced from the 2015 Deloitte Top 200 list, and is judged by consumers with no company input. In the 2016 list Sky had dropped two places to number 25 from 2015.
In the 2020 Brand Reputation Index, Sky came in at Number 9 in the Top 10 Brands Delivering Brand Purpose.
See also
Optus satellite failures
References
External links
Official site
New Zealand Stock Exchange Listing.
Companies listed on the Australian Securities Exchange
Companies listed on the New Zealand Exchange
Television networks in New Zealand
New Zealand subscription television services
Companies based in Auckland
Television channels and stations established in 1987
1987 establishments in New Zealand

Lis (linear algebra library)

Lis (Library of Iterative Solvers for linear systems, pronounced [lis]) is a scalable parallel software library for solving discretized linear equations and eigenvalue problems that mainly arise in the numerical solution of partial differential equations by using iterative methods. Although it is designed for parallel computers, the library can be used without being conscious of parallel processing.
Features
Lis provides facilities for:
Automatic program configuration
NUMA aware hybrid implementation with MPI and OpenMP
Exchangeable dense and sparse matrix storage formats
Basic linear algebra operations for dense and sparse matrices
Parallel iterative methods for linear equations and eigenvalue problems
Parallel preconditioners for iterative methods
Quadruple precision floating point operations
Performance analysis
Command-line interface to solvers and benchmarks
Example
A C program to solve the linear equation Ax = b is written as follows:
#include <stdio.h>
#include "lis_config.h"
#include "lis.h"

LIS_INT main(LIS_INT argc, char* argv[])
{
    LIS_MATRIX A;
    LIS_VECTOR b, x;
    LIS_SOLVER solver;
    LIS_INT iter;
    double time;

    lis_initialize(&argc, &argv);

    /* Read the coefficient matrix A and right-hand side b from the
       files named on the command line. */
    lis_matrix_create(LIS_COMM_WORLD, &A);
    lis_vector_create(LIS_COMM_WORLD, &b);
    lis_vector_create(LIS_COMM_WORLD, &x);
    lis_input_matrix(A, argv[1]);
    lis_input_vector(b, argv[2]);
    lis_vector_duplicate(A, &x);

    /* Create a solver and set its options from the command line. */
    lis_solver_create(&solver);
    lis_solver_set_optionC(solver);

    /* Solve Ax = b and report the iteration count and elapsed time. */
    lis_solve(A, b, x, solver);
    lis_solver_get_iter(solver, &iter);
    lis_solver_get_time(solver, &time);
    printf("number of iterations = %d\n", iter);
    printf("elapsed time = %e\n", time);

    /* Write the solution x to a file and release all resources. */
    lis_output_vector(x, LIS_FMT_MM, argv[3]);
    lis_solver_destroy(solver);
    lis_matrix_destroy(A);
    lis_vector_destroy(b);
    lis_vector_destroy(x);
    lis_finalize();

    return 0;
}
System requirements
The installation of Lis requires a C compiler. The Fortran interface requires a Fortran compiler, and the algebraic multigrid preconditioner requires a Fortran 90 compiler.
For parallel computing environments, an OpenMP or MPI library is required. Both the Matrix Market and Harwell-Boeing formats are supported to import and export user data.
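As an illustration of the first of these, a small sparse matrix passed to lis_input_matrix could be stored in the Matrix Market coordinate format: a header line, a line giving the matrix dimensions and the number of stored entries, then one 1-based row/column/value triple per nonzero. The 3×3 matrix below is an arbitrary example:

```
%%MatrixMarket matrix coordinate real general
3 3 5
1 1 2.0
1 2 1.0
2 2 3.0
3 1 1.0
3 3 4.0
```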
Packages that use Lis
Gerris
OpenModelica
OpenGeoSys
SICOPOLIS
STOMP
Diablo
Kiva
Notus
Solis
GeMA
OpenCFS
numgeo
freeCappuccino
See also
List of numerical libraries
Conjugate gradient method
Biconjugate gradient stabilized method (BiCGSTAB)
Generalized minimal residual method (GMRES)
Eigenvalue algorithm
Lanczos algorithm
Arnoldi iteration
Krylov subspace
Multigrid method
References
External links
Development repository on GitHub
Prof. Jack Dongarra's freely available linear algebra software page
Netlib repository (Courtesy of Netlib Project)
Fedora packages (Courtesy of Fedora Project)
Gentoo packages (Courtesy of Gentoo Linux Project)
AUR packages (Courtesy of Arch Linux Community)
FreeBSD packages (Courtesy of FreeBSD Project)
Packages for macOS (Homebrew) (Courtesy of Homebrew Project)
Packages for macOS (MacPorts) (Courtesy of MacPorts Project)
Packages for Windows (Courtesy of WHPC Project)
Packages for Mingw-w64 (Courtesy of Mingw-w64 Project)
Spack packages (Courtesy of Lawrence Livermore National Laboratory)
Numerical libraries
Numerical linear algebra
Scientific simulation software
C (programming language) libraries
Fortran libraries
Free simulation software
Free software programmed in C
Free software programmed in Fortran
All-or-nothing transform

In cryptography, an all-or-nothing transform (AONT), also known as an all-or-nothing protocol, is an encryption mode which allows the data to be understood only if all of it is known. AONTs are not encryption, but frequently make use of symmetric ciphers and may be applied before encryption. In exact terms, "an AONT is an unkeyed, invertible, randomized transformation, with the property that it is hard to invert unless all of the output is known."
Algorithms
The original AONT, the package transform, was described by Ronald L. Rivest in his 1997 paper "All-Or-Nothing Encryption and The Package Transform". The transform that Rivest proposed involved preprocessing the plaintext by XORing each plaintext block with that block's index encrypted by a randomly chosen key, then appending one extra block computed by XORing that random key and the hashes of all the preprocessed blocks. The result of this preprocessing is called the pseudomessage, and it serves as the input to the encryption algorithm. Undoing the package transform requires hashing every block of the pseudomessage except the last, XORing all the hashes with the last block to recover the random key, and then using the random key to convert each preprocessed block back into its original plaintext block. In this way, it is impossible to recover the original plaintext without first having access to every single block of the pseudomessage.
Although Rivest's paper only gave a detailed description of the package transform as it applies to CBC mode, it can be implemented using a cipher in any mode. Therefore, there are multiple variants: the package ECB transform, package CBC transform, etc.
In 1999 Victor Boyko proposed another AONT, provably secure under the random oracle model.
Apparently at about the same time, D. R. Stinson proposed a different implementation of AONT, without any cryptographic assumptions. This implementation is a linear transform, perhaps highlighting some security weakness of the original definition.
Applications
AONTs can be used to increase the strength of encryption without increasing the key size. This may be useful to, for example, secure secrets while complying with government cryptography export regulations. AONTs help prevent several attacks.
One of the ways AONTs improve the strength of encryption is by preventing attacks which reveal only part of the information from revealing anything, as the partial information is not enough to recover any of the original message.
Another application, suggested in the original papers, is to reduce the cost of security: for example, a file can be processed by AONT, and then only a small portion of it can be encrypted (e.g., on a smart card). AONT will ensure that as a result the whole file is protected. It is important to use the stronger version of the transform (such as the one by Boyko above).
AONT may be combined with forward error correction to yield a computationally secure secret sharing scheme.
Other uses of AONT can be found in optimal asymmetric encryption padding (OAEP).
References
External links
Staple, an open-source prototype All-or-nothing transform implementation.
Applications of cryptography
WiFi Master Key

WiFi Master (formerly WiFi Master Key) is the world's first and largest peer-to-peer Wi-Fi sharing platform for free Wi-Fi access, developed by LinkSure Network. It is a mobile application that leverages the sharing economy, cloud computing, and big data in an attempt to provide users with a safe and free Wi-Fi internet connection shared by Wi-Fi hosts around the world. It is also regarded as a pioneer of free Wi-Fi access apps in China.
WiFi Master is operated by LinkSure Network, a mobile internet unicorn company providing free internet and content services. The company's founder and CEO, Chen Danian, was previously COO and co-founder of Shanda.
WiFi Master was first released in 2012, and by 2016 it became the world’s largest Wi-Fi sharing community with over 900 million users and 520 million monthly active users.
In terms of combined iOS and Android app downloads, WiFi Master is ranked 5th in the world, after WhatsApp, Instagram, Facebook and Facebook Messenger. WiFi Master is the 3rd largest software app in China after WeChat and Tencent QQ.
History
WiFi Master was created by Chen Danian in the hope of bridging the digital divide and helping people achieve self-actualization by granting them access to free Internet, just as the Internet had opened doors for him. Chen Danian shared in an interview with Forbes that he was born into poverty in rural China, and through the Internet he realized that it was a tool to change destinies and pursue happiness by exploring opportunities.
In September 2012, WiFi Master was first launched in China.
In 2015, its operating company, LinkSure closed its A round funding of USD 52 million at a billion-dollar valuation, becoming a unicorn company in the mobile internet industry. In May 2015, LinkSure bought the domain name wifi.com and established a branch in Singapore to expand its overseas services. WiFi Master was launched worldwide, rapidly gaining popularity in Southeast Asia.
In 2016, WiFi Master became the world's largest WiFi sharing community. WiFi Master announced in June 2016 that it had surpassed the 900 million users milestone with 520 million monthly active users, providing over 4 billion daily average connections with a successful connection rate of over 80% worldwide. WiFi Master is available in 223 countries, and is the top tools app on the Google Play store in 49 countries. In terms of combined iOS and Android downloads, WiFi Master is ranked 5th in the world, behind WhatsApp, Instagram, Facebook and Facebook Messenger.
WiFi Master has the 3rd largest user base among software apps, after WeChat and Tencent QQ, in China.
In 2019, WiFi Master Key was rebranded as WiFi Master.
Features
WiFi Search
WiFi Master is a mobile Wi-Fi management tools app that works on iOS and Android operating systems. The application's core function, WiFi Search, lets users find and connect to available nearby Wi-Fi hotspots in a tap. All available nearby hotspots are listed with a "Connect" button; users can tap it to connect without typing login details.
WiFi Security Matrix
WiFi Master introduced an all-round WiFi Security Matrix for enhanced Wi-Fi security assurance before, during, and after every connection.
Before connection
WiFi Master designed a Cloud Security Detection System (安全云感知系统) to help users detect possible Wi-Fi security risks in advance based on big data tracking. The system also applies machine learning algorithms to intelligently track predicted risky Wi-Fi hotspots, adjusting hotspot risk levels in real time to avoid false alarms.
During connection
WiFi Master developed a Security Tunnel Protection System (安全隧道保护系统) to provide users with fundamental protection during each Wi-Fi connection. The system comprises a set of multi-patented technologies (encrypted hierarchical transmission, real-time monitoring of malicious attacks, and interception of up to 90% of malicious attacks) to encrypt users' information and monitor in real time for network attacks, including phishing, ARP attacks and DNS tampering, during any Wi-Fi connection.
After connection
Since September 2015, WiFi Master's users in China have been insured up to RMB100,000 (US$15,000) by the industry's first WiFi Security Insurance (WiFi安全险), launched by LinkSure in partnership with ZhongAn Insurance, in the event of network security issues. The WiFi Security Insurance holds the record of zero claims since its inception.
Discover News Feed
WiFi Master features a news feed in the app for users to discover and browse content upon getting connected online.
WiFi tools
In-built tools like WiFi speed test and WiFi signal test are also included in the app.
WiFi Map
In 2016, WiFi Master introduced a WiFi Map function in the app for users to find free and open hotspots available in the location. Later in June 2017, Facebook rolled out a similar function called “Find Wi-Fi” worldwide.
Awards
WiFi Master won the “Product with the Most Growth” at the 2017 iResearch Awards in June 2017.
References
Wi-Fi
Android (operating system) software
IOS software
Cross-platform software
Foursquare City Guide

Foursquare City Guide, commonly known as Foursquare, is a local search-and-discovery mobile app developed by Foursquare Labs Inc. The app provides personalized recommendations of places to go near a user's current location based on users' previous browsing history and check-in history.
The service was created in late 2008 by Dennis Crowley and Naveen Selvadurai and launched in 2009. Crowley had previously founded the similar project Dodgeball as his graduate thesis project in the Interactive Telecommunications Program (ITP) at New York University. Google bought Dodgeball in 2005 and shut it down in 2009, replacing it with Google Latitude. Dodgeball user interactions were based on SMS technology, rather than an application. Foursquare was similar but allowed for more features, allowing mobile device users to interact with their environment. Foursquare took advantage of new smartphones like the iPhone, which had built-in GPS to better detect a user's location.
Until late July 2014, Foursquare featured a social networking layer that enabled a user to share their location with friends via the "check-in": a user would manually tell the application when they were at a particular location using a mobile website, text messaging, or a device-specific application, selecting from a list of venues the application locates nearby. In May 2014, the company launched Swarm, a companion app to Foursquare City Guide, that reimagined the social networking and location sharing aspects of the service as a separate application. On August 7, 2014, the company launched Foursquare 8.0, a new version of the service that removed the check-in feature and location sharing, instead focusing on local search.
In 2011, user demographics showed a roughly equal split between male and female user accounts, with 50 percent of users registered outside of the US. In 2016, Foursquare had 50 million monthly active users.
Features
Major features include local search and recommendations, tips and expertise, tastes, location detection, ratings, lists, superusers, brands, and the Places API.
Location detection
Foursquare uses proprietary technology, Pilgrim, to detect a user's location. When users opt in to always-on location sharing, Pilgrim determines a user's current location by comparing historical check-in data with the user's current GPS signal, cell tower triangulation, cellular signal strength and surrounding Wi-Fi signals.
Superusers
The service provides ten levels of Superuser. Superuser status is awarded to users after they apply and pass a review in which they must meet quality and quantity criteria. Only Superusers have the ability to edit venue information. Superusers can attain higher levels as they contribute more high-quality edits over time.
Brands
In the past, Foursquare has allowed companies to create pages of tips and users to "follow" the company and receive tips from them when they check-in at certain locations. On July 25, 2012, Foursquare revealed Promoted Updates, an app update expected to create a new revenue generation stream for the company. The new program allowed companies to issue messages to Foursquare users about deals or available products.
Places API
Foursquare's underlying technology is used by apps such as Uber and Twitter.
Former features
Earlier versions of Foursquare supported check-ins and location sharing, but as of Foursquare 8.0, these were moved to the service's sibling app, Foursquare Swarm.
In previous versions of Foursquare, if a user had checked into a venue on more days than anyone else in the past 60 days, then they would be crowned "Mayor" of that venue. Someone else could then earn the title by checking in more times than the previous mayor. Businesses could also incentivize mayorships through rewards for users who were the mayor (such as food and drink discounts). As the service grew, it became increasingly difficult to compete for mayorships in high-density areas where the service was popular. The mayorship feature was retired from version 8.0 and reimplemented in Swarm.
Badges were earned by checking into venues. Some badges were tied to venue "tags" and the badge earned depended on the tags applied to the venue. Other badges were specific to a city, venue, event, or date. In September 2010, badges began to be awarded for completing tasks as well as checking in. In version 8.0, badges were retired, which upset some existing users.
Earlier versions of the app also used a "points" system with users receiving a numerical score for each check-in, with over 100 bonuses to gain additional points, such as being first among friends to check into a place or becoming the venue's mayor. The use of gamification and game-design principles were integral features. In version 8.0 points and leaderboards were retired, but were reimplemented in the Swarm app.
"Specials" were another feature of the app that acted as an incentive for Foursquare users to check in at new spots or revisit their favorite hangouts. Over 750,000 businesses offered "Specials" that included discounts and freebies. They were intended for businesses to persuade new and regular customers to visit their venues. "Specials" included anything from a free beer for the first check-in to 10% off at a restaurant.
Swarm
In May 2014, the company launched Swarm, a companion app to Foursquare, that migrated the social networking and location sharing aspects of the service into a separate application. Swarm acts as a lifelogging tool for the user to keep a record of the places they have been, featuring statistics on the places they have been, and a search capability to recall places they have visited. Swarm also lets the user share where they have been with their friends, and see where their friends have been. Check-ins are rewarded with points, in the form of virtual coins, and friends can challenge each other in a weekly leaderboard. Checking in to different categories of venue also unlocks virtual stickers. Though it is not necessary to use both apps, Swarm works together with Foursquare to improve a user's recommendations - a user's Swarm check-ins help Foursquare understand the kinds of places they like to go.
Availability
Foursquare is available for Android, iOS & Windows Phone devices. Versions of Foursquare were previously available for Symbian OS, Series 40, MeeGo, WebOS, Maemo, Windows Phone, Bada, BlackBerry OS, PlayStation Vita, and Windows 8. Users may also use their mobile browsers to access Foursquare mobile, but feature phone users must search for venues manually instead of using GPS that most smartphone applications can use.
History
Launch and early years
Foursquare started out in 2009 in 100 worldwide metro areas. In January 2010, Foursquare changed their location model to allow check-ins from any location worldwide.
In September 2010 Foursquare announced version 2.0 of its check-in app which aimed to direct users to new locations and activities, rather than just sharing their location. Foursquare has also created a button that would add any location in the app to a user's to-do list, and the app would now remind the user when there were to-do items nearby.
On February 21, 2011, Foursquare reached 7 million user IDs. The company was expected to pass 750 million check-ins before the end of June 2011, with an average of about 3 million check-ins per day. On August 8, 2011, President Barack Obama joined Foursquare, with the intention that the staff at the White House would use the service to post tips from places the president has visited.
2012 redesign
On June 7, 2012, Foursquare launched a major redesign, which they described as a "whole new app". The app's "explore" function now allowed users to browse locations by category or conduct specific searches like "free wi-fi" or "dumplings". Foursquare incorporated features from social discovery, and local search applications as well as the "like" feature made famous by Facebook.
Swarm
In May 2014, Foursquare launched Swarm, a companion app to Foursquare City Guide, which moved the social networking and location sharing aspects of the service to a separate application. On August 7, 2014, the company launched Foursquare 8.0, a new version of the service which removed location-sharing and check-in features, pivoting to local search instead.
Foursquare Day
Foursquare Day was coined by Nate Bonilla-Warford, an optometrist from Tampa, Florida, on March 12, 2010. The idea came to him while "thinking about new ways to promote his business".
In 2010, McDonald's launched a spring pilot program that took advantage of Foursquare Day. Foursquare users who checked into McDonald's restaurants on Foursquare Day were given the chance to win gift cards in $5 and $10 increments. Mashable reported that there was a "33% increase in foot traffic" to McDonald's venues, as apparent in the increase in Foursquare check-ins.
Privacy
In February 2010, a site known as Please Rob Me was launched, a site which scraped data from public Twitter messages that had been pushed through Foursquare, to list people who were not at home. The purpose of the site was to raise awareness about the potential thoughtlessness of location sharing.
As of March 2010, a privacy issue was observed for users who connected their Twitter account to Foursquare. If the user was joined at a location by one of their Foursquare contacts who was also using Twitter, that user could allow Foursquare to post a message such as "I am at Starbucks – Santa Clara (link to map) w/@mediaphyter" to their own Twitter feed. Similarly, if a user had agreed to Foursquare location sharing, that user's Foursquare contacts would be able to share their location publicly on Twitter.
Later in 2010, white hat hacker Jesper Andersen discovered a vulnerability in Foursquare that raised privacy concerns. Foursquare's location pages displayed a randomly generated grid of 50 pictures, regardless of users' privacy settings. Whenever a user checked in at a location, their picture appeared on that location page, even if they only wanted their friends to know where they were. Andersen then crafted a script that collected check-in information; it is estimated that he collected around 875,000 check-ins. Andersen contacted Foursquare about the vulnerability, and Foursquare responded by fixing its privacy settings.
In 2011, in response to privacy issues regarding social networking sites, Foursquare co-founder Naveen Selvadurai stated that "Users decide if they want to push to Twitter or Facebook, over what information they want to share and send" and "There is a lot of misunderstanding about location-based services. On Foursquare, if you don't want people to know you are on a date or with a friend at a certain place, then you don't have to let people know. You don't check in." Selvadurai also stated that Foursquare does not passively track users, which means a user has to actively check in to let people know where they are.
On May 8, 2012, Foursquare developers changed its API in response to a number of "stalker" applications which had been making the locations of all female users within a specific area available to the public.
In late December 2012, Foursquare updated its privacy policy to indicate it would display users' full names, as opposed to an initial for a surname. In addition, companies could view a more detailed overview of visitors who have checked into their businesses throughout the day.
Foursquare has since updated both its privacy policy and cookies policy to detail how location data is used in new features and products.
See also
Gowalla
Jiepang – a similar service often dubbed the "Foursquare of China"
Digu – a similar social network from China
References
External links
Geosocial networking
American social networking websites
Android (operating system) software
BlackBerry software
IOS software
Symbian software
Internet properties established in 2009
Windows Phone software
Bada software
Proprietary cross-platform software
WatchOS software
City guides
Larsen & Toubro

Larsen & Toubro Ltd, commonly known as L&T, is an Indian multinational conglomerate, with business interests in engineering, construction, manufacturing, technology and financial services, headquartered in Mumbai. The company is counted among the world's top five construction companies. It was founded by two Danish engineers taking refuge in India. As of 2020, L&T Group comprises 118 subsidiaries, 6 associates, 25 joint-venture and 35 joint operations companies, operating across basic and heavy engineering, construction, realty, manufacturing of capital goods, information technology, and financial services.
Company structure
Three key products/services which L&T is engaged in are: Construction and project-related activity; manufacturing and trading activity; and engineering services. For administrative purposes, the firm has been structured into five broad categories:
Construction – this covers buildings & factories, heavy civil infrastructure, transportation infrastructure, power transmission & distribution, water & effluent treatment, metallurgical & material handling and smart world & communication;
EPC Projects – this includes hydrocarbon engineering, power and power developments;
Manufacturing – this includes defence equipment & systems, heavy engineering, construction, mining & industrial machinery, industrial valves and electrical & automation systems;
Services: realty, information technology, technology services and financial services;
Others: Hyderabad Metro, Infrastructure Development Projects and other corporate functions.
History
Larsen & Toubro originated from a company founded in 1938 in Bombay by two Danish engineers, Henning Holck-Larsen and Søren Kristian Toubro. The company began as a representative of Danish manufacturers of dairy and allied equipment. However, with the start of the Second World War in 1939 and the resulting blockade of trade lines, the partners started a small workshop to undertake jobs and provide service facilities. Germany's invasion of Denmark in 1940 stopped supplies of Danish products. The war-time need to repair, refit and degauss ships offered L&T an opportunity, and led to the formation of a new company, Hilda Ltd, to handle these operations. L&T also started to repair and fabricate ships, signalling the expansion of the company. The sudden internment of German engineers in British India (due to suspicions caused by the Second World War), who were to put up a soda ash plant for the Tatas, gave L&T a chance to enter the field of installation.
In 1944, ECC (Engineering Construction & Contracts) was incorporated by the partners; the company at this time was focused on construction projects (presently, ECC is the construction division of L&T). L&T began several foreign collaborations. By 1945, the company represented British manufacturers of equipment used to manufacture products such as hydrogenated oils, biscuits, soaps and glass. In 1945, the company signed an agreement with Caterpillar Tractor Company, USA, for marketing earth moving equipment. At the end of the war, large amounts of war-surplus Caterpillar equipment were available at attractive prices, but the finances required were beyond the capacity of the partners. This prompted them to raise additional equity capital, and on 7 February 1946, Larsen & Toubro Private Limited was incorporated.
After India's independence in 1947, the firm set up offices in Calcutta (now Kolkata), Madras (now Chennai) and New Delhi. In 1948, 55 acres of undeveloped marsh and jungle was acquired in Powai, Mumbai. In December 1950, L&T became a public company with a paid-up capital of . The sales turnover in that year was . In 1956, a major part of the company's Mumbai office moved to ICI House in Ballard Estate, which would later be purchased by the company and renamed as L&T House, its present headquarters.
During the 1960s, ventures included UTMAL (set up in 1960), Audco India Limited (1961), Eutectic Welding Alloys (1962) and TENGL (1963).
In 1965, the firm was chosen as a partner for building nuclear reactors. Dr. Homi Bhabha, then chairman of the Atomic Energy Commission (AEC), had in fact first approached L&T in the 1950s to fabricate critical components for atomic reactors. He convinced Holck-Larsen, a friend with whom he shared an interest in the arts, that the company could do it, indeed must do it. L&T has since contributed significantly to the Indian nuclear programme ... Holck-Larsen was once asked by a junior engineer why L&T should get into building nuclear power plants when companies in the US and Germany were losing money on nuclear jobs. He replied: 'Young man, India has to build nuclear power plants. If not L&T, who will do it?'
During the 1970s, L&T was contracted to work with Indian Space Research Organisation (ISRO). Its then chairman, Vikram Sarabhai, chose L&T as manufacturing partner. In 1972, when India launched its space programme, the firm was invited to participate.
In 1976, ECC bid for a large airport project in Abu Dhabi. ECC's balance sheet, however, did not meet the bid's financial qualification requirement. So it was merged into L&T. ECC was eventually rechristened L&T Construction and now accounts for the largest slice of the group's annual revenue.
In 1985, L&T entered into a partnership with the Defence Research and Development Organisation (DRDO). L&T was not yet allowed by the government to manufacture defence equipment, but was permitted to participate in design and development programmes with DRDO. After the design and development was done, the firm had to hand over all the drawings to DRDO. The government would then assign the production work to a public sector defence unit or ordnance factory for manufacture. After a series of successes and positive policy initiatives, the firm today makes a range of weapon and missile systems, command and control systems, engineering systems and submarines through DRDO.
Group Companies
Construction
L&T Construction is among the world's top 15 contractors. The business involves the construction of Buildings & Factories, Heavy Civil Infrastructure, Transportation Infrastructure, Power transmission & Distribution Infrastructure, Water & Effluent Treatment plants, Metallurgical & Material Handling Infrastructure and Smart World & Communication Infrastructure.
Buildings & Factories
L&T's buildings and factories (B&F) business undertakes construction projects such as commercial buildings and airports, residential buildings, and factories. Its track record includes 400 high-rise towers, 11 airports, 53 IT parks, 17 automobile plants, 28 cement plants and 45 hospitals. L&T offered to oversee the design and construction of Ram Mandir, Ayodhya free of cost and is the contractor of the project.
Heavy Civil Infrastructure
L&T's Heavy Civil Infrastructure (HCI) business undertakes projects in the areas of hydel power, tunnels, nuclear power, special bridges, metros, ports, harbours and defence installations. Its track record includes 231 km of metro rail corridors, 19.5 km of monorail corridors, 8,315 MW of hydropower projects and 8,080 MW of nuclear power projects. It has a subsidiary, L&T Geostructure LLP, and two JVs set up for metros in Doha and Saudi Arabia – ALYSI JV Gold Line Doha Metro and ArRiyadh New Mobility Consortium Riyadh Metro Orange Line.
Transportation Infrastructure
L&T's Transportation Infrastructure (TI) business undertakes projects such as roads, runways, elevated corridors, railways, etc. Its track record includes 13,500 lane km of highways, 7.49 million sq.m of runways and 3,260 tkm (track kilometre) of railway tracks. It also operates through subsidiaries such as L&T Oman LLC, L&T Infrastructure Engineering Ltd, and Hitech Rock Products & Aggregates Ltd.
Power Transmission and Distribution
L&T's Power Transmission and Distribution (PT&D) business undertakes projects involving the construction of substations, utility power distribution systems, transmission lines and optic fibre cabling projects. It executes projects in the renewables space, such as utility scale, microgrids and energy storage. It also operates in the Middle East, Africa and the ASEAN region. Its track record includes 12,510 tkm of railway electrification, 585 substations, 29,380 MW of E-BoP, and 20,600 ckm (circuit kilometre) of transmission lines.
Water & Effluent Treatment
L&T's Water & Effluent Treatment (WET) business undertakes projects involving water supply and distribution, wastewater treatment, industrial and large water systems and smart water infrastructure. Its track record includes 40,000 km of water and wastewater networks and 3,400 MLD (millions of litres per day) of water and wastewater treatment plants.
Metallurgical and Material Handling
L&T also undertakes projects in the ferrous and non-ferrous sectors, i.e. iron and steel, aluminium, copper, zinc, lead and mineral beneficiation plants. It offers EPC solutions in bulk material handling for the coal sector. Its Industrial Machinery and Cast Products business offers fabrication, machining, assembly and casting products for industries such as chemical, cement, steel, paper, power, mineral and railways.
Renewable Energy - L&T Solar
L&T develops concentrated solar power and solar photovoltaic technologies (grid-connected, rooftop and microgrid), and designs and builds solar power plants. L&T Solar, a subsidiary, undertakes solar energy projects. In April 2012, L&T commissioned India's largest solar photovoltaic power plant (40 MWp), owned by Reliance Power at Jaisalmer, Rajasthan, taking it from concept to commissioning in 129 days. In 2011, L&T entered into a partnership with Sharp for EPC (engineering, procurement and construction) work on megawatt-scale solar projects, planning to construct about 100 MW over the following 12 months, mostly in the metros. L&T Infra Finance, promoted by the parent L&T Ltd, is also active in the funding of solar projects in India.
EPC Projects
L&T undertakes projects on an engineering, procurement and construction basis. Installation is often part of the package. This includes hydrocarbon engineering, power and power development.
Hydrocarbon Engineering
A wholly owned subsidiary, L&T Hydrocarbon Engineering Limited, provides engineering, procurement, fabrication, construction, installation and project management services for onshore and offshore hydrocarbon projects worldwide.
L&T formed a joint venture with SapuraCrest Petroleum Berhad, Malaysia for providing services to the offshore construction industry. The joint venture owns and operates the LTS 3000, a crane vessel for heavy lifting and pipe-laying.
L&T Power
L&T Power is an organisation focused on coal-based, gas-based and nuclear power projects. L&T has formed two joint ventures with Mitsubishi Heavy Industries (MHI), Japan, to manufacture supercritical boilers and steam turbine generators. L&T-MHPS Turbine Generators Private Limited (formerly L&T-MHI Turbine Generators Private Limited) is a joint venture formed in 2007 in India between L&T, Mitsubishi Hitachi Power Systems (MHPS) and Mitsubishi Electric Corporation (MELCO), headquartered in Tokyo, Japan, for the manufacture of supercritical turbines and generators. L&T-MHPS Boilers Private Limited (formerly L&T-MHI Boilers Private Limited) is a 51:49 joint venture formed on 16 April 2007 in India between L&T and MHPS, Japan, engaged in the design, engineering, manufacture, sale, maintenance and servicing of supercritical boilers and pulverisers in India.
The design wing of L&T ECC is EDRC (Engineering Design and Research Centre), which provides consultancy and design services, carrying out basic and detailed design for both residential and commercial projects.
Manufacturing
This subdivision includes Defence equipment & systems, Heavy Engineering, Construction, Mining & Industrial Machinery, Industrial Valves and Electrical & Automation Systems.
Defence & Aerospace
L&T is one of India's largest developers and suppliers of defence equipment and systems, with over 30 years of experience in this space. It offers design-to-delivery solutions for land, sea and air defence. The company also offers specialised turnkey defence construction services, infrastructure and modernization of facilities. L&T's Aerospace business manufactures equipment and systems for the aviation and space industry, and has contributed to ISRO's Chandrayaan-2 and Mars Orbiter missions.
Heavy Engineering
L&T is among the five largest fabrication companies in the world. L&T has a shipyard capable of constructing vessels of up to 150 metres in length and 20,000 tons displacement at its heavy engineering complexes at Hazira and Ranoli, Gujarat. The shipyard constructs specialised heavy-lift ships, CNG carriers, chemical tankers, defence and paramilitary vessels, submarines and other role-specific vessels.
L&T Realty
L&T Realty is the real estate development arm of Larsen & Toubro. The company operates in Western and Southern India, constructing residential, corporate office, retail, leisure and entertainment properties with 35 million sq ft under various stages of development.
Machinery and industrial products
L&T manufactures, markets and provides service support for construction and mining machinery, including surface miners, hydraulic excavators, aggregate crushers, loader backhoes and vibratory compactors; supplies rubber processing machinery and manufactures and markets industrial valves and allied products along with application-engineered welding alloys.
L&T Metro Rail Hyderabad Limited
The company is a subsidiary of L&T Infrastructure Development Projects Ltd., an infrastructure development arm of Larsen & Toubro Ltd.
Larsen & Toubro Limited was awarded the Hyderabad Metro Rail project by the Government of Telangana. L&T incorporated a special purpose vehicle, L&T Metro Rail (Hyderabad) Limited ("the Company"), to implement the project on a Design, Build, Finance, Operate and Transfer (DBFOT) basis. The company signed the concession agreement with the Government of Andhra Pradesh on 4 September 2010 and completed financial closure for the project on 1 March 2011, in a record six months. A consortium of 10 banks led by the State Bank of India has provided funding, the largest fund tie-up in India for a non-power infrastructure public-private partnership (PPP) project.
The company has also commenced work on the Rs. 5,273 crore Mumbai Metro Line 3 project, which consists of two packages: Package 1 (Cuffe Parade-Vidhan Bhavan-Churchgate-Hutatma Chowk) and Package 7 (Marol Naka-MIDC-SEEPZ). It is also engaged in major metro rail projects in the Middle East.
Major subsidiaries and joint ventures
As of March 2018, L&T has 93 subsidiaries, 8 associate companies, 34 joint ventures, and 33 joint operation companies.
L&T Infrastructure Engineering Ltd. is one of India's engineering consulting firms offering technical services in transport infrastructure. The company has experience both in India and globally, delivering single-point 'concept to commissioning' consulting services for infrastructure projects such as airports, roads, bridges, ports and maritime structures, along with environment, transport planning and other related services. Established in 1990 as L&T-Rambøll Consulting Engineers Limited, the company became a wholly owned subsidiary of L&T in September 2014. Today, L&T Infra Engineering is an independent corporate entity managed by a board of directors, with complete freedom to set and pursue its goals, drawing as and when required on the technical and managerial resources of L&T.
L&T Construction Equipment Limited: with its registered office in Mumbai, India, and a focus on construction and mining equipment, the company was formerly L&T-Komatsu Limited, a joint venture of Larsen & Toubro and Komatsu Asia Pacific Pte Limited, Singapore, a wholly owned subsidiary of Komatsu Limited, Japan. Komatsu is the world's second-largest manufacturer of hydraulic excavators and has manufacturing and marketing facilities worldwide. The plant was started in 1975 by L&T to manufacture hydraulic excavators for the first time in India, and became a joint venture in 1998. The Bengaluru works comprise machinery and hydraulics works, with a manufacturing facility for the design, manufacture and servicing of earthmoving equipment. The hydraulics works include a precision machine shop manufacturing high-pressure hydraulic components and systems, and designing, developing, manufacturing and servicing hydraulic pumps, motors, cylinders, turning joints, hose assemblies, valve blocks, hydraulic systems, power drives and allied gearboxes. In April 2013, L&T bought the 50% stake held by Komatsu Asia & Pacific, and the company's name was changed to L&T Construction Equipment Limited.
L&T has a joint venture with Qatari company Al Balagh group as the main contractors for the Al Rayyan stadium, the 2022 FIFA World Cup stadium which will host matches up to the quarter-final.
L&T Finance: Larsen & Toubro Financial Services is a subsidiary that was incorporated as a non-banking financial company in November 1994. The subsidiary offers financial products and services for corporate customers and construction equipment financing. It became a division in 2011 after the company announced its restructuring. A partnership between L&T Finance and Sonalika Group farm equipment maker International Tractors Ltd in April 2014 provided credit and financing to customers of Sonalika Group in India.
L&T Mutual Fund is the mutual fund company of the L&T Group. Its average assets under management (AuM) as of May 2019 were ₹73,936.68 crore.
Larsen & Toubro Infrastructure Finance: this wholly owned subsidiary commenced business in January 2007 upon obtaining Non-Banking Financial Company (NBFC) license from the Reserve Bank of India (RBI). As of 31 March 2008, L&T Infrastructure Finance had approved financing of more than US$1 billion to select projects in the infrastructure sector. It received the status of "Infrastructure Finance Company" from the RBI within the overall classification of "Non-Banking Financial Company".
L&T Valves markets valves manufactured by L&T's Valve Manufacturing Unit and its joint venture Larsen & Toubro Valves Manufacturing Unit, Coimbatore, as well as allied products from other manufacturers. The group's manufacturing unit in Coimbatore produces industrial valves for the power industry, flow control valves for the oil and gas, refining, petrochemical, chemical and power industries, and customised products for refinery, LNG, GTL, petrochemical and power projects. L&T Valves Business Group has offices in the US, South Africa, Dubai, Abu Dhabi, India and China, and alliances with valve distributors and agents in these countries.
L&T-MHPS Boilers is a joint venture between L&T and Mitsubishi Hitachi Power Systems. The group specialises in engineering, manufacturing, erecting and commissioning of supercritical steam generators used in power plants. It is mainly headquartered in Faridabad with a manufacturing facility in Hazira and an engineering centre in Chennai and Faridabad. Currently, the group is engaged in projects for JVPL, MAHAGENCO, Nabha Power & RRVUNL.
L&T MHPS Turbine Generators Pvt Ltd: in 2007, Larsen & Toubro and Mitsubishi Heavy Industries set up a joint-venture manufacturing agreement to supply a supercritical steam turbine and generator facility in Hazira. This followed a technology licensing and technical assistance agreement for the manufacture of supercritical turbines and generators between L&T, MHI, and Mitsubishi Electric Corporation (MELCO), headquartered in Tokyo, Japan. In February 2014, MHI and Hitachi Ltd integrated the business centred on thermal power generation systems (gas turbines, steam turbines, coal gasification generating equipment, boilers, thermal power control systems, generators, fuel cells, environmental equipment and so on) and started a new company as Mitsubishi Hitachi Power Systems (MHPS) Ltd, headquartered in Yokohama, Japan.
L&T Howden Pvt Ltd is a joint venture between L&T and Howden to manufacture axial fans and air pre-heaters in the range of 120-1200 MW to thermal power stations. L&T Howden is an ISO 9001 and ISO 5001 certified organisation, with a plant located in Surat Hazira and a marketing office in Faridabad.
L&T Special Steels and Heavy Forgings Pvt Ltd. is a joint venture between L&T and NPCIL, headquartered at Hazira. It is the largest integrated steel plant and heavy forging unit in India, capable of producing forgings weighing 120 MT each. LTSSHF currently is engaged in projects from the nuclear, hydrocarbon, power and oil and gas sectors.
L&T-Sargent & Lundy Limited (L&T-S&L), established in 1995, is an engineering and consultancy firm in the power sector, formed by L&T and Sargent & Lundy L.L.C., USA, a global consulting firm in the power industry since 1891.
In 2015, the company began developing commercial, retail and office space around the Hyderabad Metro Rail project.
In June 2019, the company acquired a controlling stake in IT services company Mindtree Ltd.
Technology Cluster
L&T Technology Services
L&T Technology Services, a subsidiary of Larsen & Toubro, is an engineering services company that operates in the global Engineering, Research and Development ("ER&D") space. L&T Technology Services offers design, development and testing services for the industrial products, medical devices, transportation, aerospace, telecom and process industries.
The company serves customers across the product engineering life cycle, from product conceptualization to implementation. Services include consulting, design, development, testing, maintenance, and to-market integration services. L&T Technology Services entered the Indian capital markets with an IPO offering 10.4 million shares at a price band of Rs. 850 to Rs. 860.
Headquartered in Vadodara, Gujarat, India, L&T Technology Services offers design, development and testing solutions across the product and plant engineering value chain for domains including industrial products, transportation, aerospace, telecom and hi-tech, and the process industries. As of 2016, it employed over 10,000 people across operations in 35 locations around the world, with a clientele that includes a large number of Fortune 500 companies.
Larsen & Toubro Infotech (LTI)
Larsen & Toubro Infotech Limited, a wholly owned subsidiary of L&T, offers information technology, software and services with a focus on manufacturing, BFSI and communications and embedded systems. It also provides services for embedded intelligence and engineering.
L&T Smart World & Communication
L&T's Smart World & Communication business vertical provides end-to-end solutions as a master systems integrator in security solutions for critical infrastructure: ports, airports, metros, IT parks and public buildings. It has built one of the largest surveillance projects, comprising 6,000 cameras across 1,500 locations in Mumbai, as well as city surveillance and intelligent traffic management systems in Ahmedabad, Gandhinagar and Vadodara. In Jaipur, India's first smart city, L&T provided smart solutions such as Wi-Fi hotspots, citizen information systems and surveillance cameras.
L&T-NxT
In 2019, L&T announced a new initiative, L&T-NxT, focused on new-age technologies. Building on the experience L&T has garnered over the decades, L&T-NxT works in the areas of artificial intelligence, the internet of things (IoT), virtual reality, augmented reality, geospatial solutions and cybersecurity.
Mindtree
In 2019, L&T acquired a controlling stake of 60 per cent in Mindtree, a global technology consulting and services company.
International Markets
L&T sharpened its focus on international markets, especially the Gulf, from 2010 onwards. International business, which once contributed under one-tenth of order inflow and revenue, now accounts for around one-third of both. L&T has set up a full range of operations in the Middle East catering to the Gulf and North Africa, with many projects undertaken through joint ventures with leading companies based in the Gulf. L&T provides turnkey solutions across key regions: the Middle East (UAE, Qatar, Kuwait, Oman, Saudi Arabia and Bahrain), Africa (Algeria, Kenya, Ethiopia and Malawi) and ASEAN (Malaysia and Thailand). The range of work is wide and varied: high-voltage substations, power transmission lines, extra-high-voltage cabling, and instrumentation and control systems.
Divestments
L&T has across the decades exited from several businesses. These include:
Cement
L&T separated its 16.5 million tonne cement division into a separate entity, UltraTech Cemco, divesting an 8.5 per cent stake to the A V Birla group company Grasim Industries in 2004.
L&T-John Deere
In 1992, L&T established a 50-50 joint venture with John Deere to manufacture tractors in India, called L&T - John Deere. L&T sold their interest to John Deere in 2005.
L&T Case
In 1992, L&T established L&T-Case Construction with CNH Global as a 50-50 joint venture to build backhoes. In 2011, L&T sold its share to CNH, and the company was renamed Case New Holland Construction Equipment India.
L&T Medical Equipment and Systems
L&T's medical equipment division, known as L&T Medical Equipment & Systems, was established in 1987. In November 2012, L&T sold it to Skanray Technologies Pvt Ltd. Currently, L&T's Mysore division manufactures single-phase and three-phase static solid-state electricity meters for various utilities in India. The range of meters spans residential, industrial, prepayment and smart meters, in both whole-current and CT-operated variants. The division also houses a relay servicing unit.
EWAC Alloys Limited
EWAC Alloys Limited was a wholly owned subsidiary of L&T. The company was engaged in design & development, manufacture and supply of special welding electrodes, gas brazing rods and fluxes, welding torches and accessories, atomised metal powder alloys, flux cored continuous wires & wire feeders, polymer compounds & wear-resistant plates.
Prof. Wasserman, founder of Eutectic Castolin, and Henning Holck-Larsen, founder of L&T, founded the Eutectic Division in India in 1962. Eutectic Castolin was later merged into the Messer Group of companies, Germany, and referred to as Messer Eutectic Castolin (MEC). In 2010, L&T bought the entire stake from Messer, making the company its wholly owned subsidiary. The current headquarters is in Ankleshwar, Gujarat (India), and the products are sold under the name EWAC.
L&T sold its entire stake in its unlisted subsidiary EWAC Alloys to UK-registered ESAB Holdings for a total consideration of Rs 522 crore; the share purchase agreement was executed on 11 October 2017. The acquirer, ESAB, offers products for welding and cutting processes. In 2012, ESAB was acquired by Colfax Corp., a diversified industrial manufacturing company based in the US.
L&T Kobelco Machinery Private Limited
This was a joint venture of L&T and Kobe Steel of Japan to manufacture internal mixers and twin-screw roller-head extruders for the tyre industry. L&T sold its entire 51% stake in L&T Kobelco Machinery Private Limited to its joint venture partner, Kobe Steel of Japan, for Rs 43.5 crore.
Electrical and Automation
L&T was an international manufacturer of electrical and electronic products and systems. The company manufactured custom-engineered switchboards for industrial sectors like power, refineries, petrochemicals and cement. In the electronic segment, L&T offered a range of meters and provided control and automation systems for industries. In May 2018, the firm signed a definitive agreement with Schneider Electric for the strategic divestment of its electrical and automation (E&A) business in an all-cash deal of ₹14,000 crore. The deal was completed on 31 August 2020 after receiving the requisite regulatory approvals. L&T has said that its exit from the electrical and automation business is part of its strategic portfolio review process.
Listing and shareholding
The equity shares of the company are listed on the Bombay Stock Exchange (BSE) and the National Stock Exchange of India (NSE). The company's shares constitute a part of the BSE SENSEX of the BSE as well as the NIFTY 50 index of the NSE. Its global depository receipts (GDR) are listed on the Luxembourg Stock Exchange and London Stock Exchange.
Shareholders
Shareholders as of 31 March 2020
Employees
As of 31 March 2019, the company had 44,332 permanent employees, of which 2,822 were women (6.37%) and 90 were employees with disabilities (0.20%). In the same period, the company had 293,662 employees on a contract basis.
Awards and recognition
L&T ranked 4th in the 2021 LinkedIn Top Companies list, India
In 2021, L&T won the Innovation in Onboarding at the OLX People HR Excellence awards by The Economic Times HRWorld
L&T Hydrocarbon won the Federation of Indian Petroleum Industry (FIPI) ‘Engineering Procurement Construction (EPC) - Company of the Year’ Award for 2020
In 1997, the Bengaluru Works division was awarded the "Best of all" Rajiv Gandhi National Quality Award
In 2013, L&T Power received 'Golden Peacock National Quality Award – 2012' at the 23rd World Congress on 'Leadership & Quality of Governance'.
See also
Yeshwant Deosthalee, who rose through the ranks to become Chairman
L&T Realty
List of companies of India
List of oilfield service companies
References
External links
BSE SENSEX
NIFTY 50
Companies based in Mumbai
Construction and civil engineering companies of India
Engineering companies of India
Aerospace companies of India
Gas turbine manufacturers
Conglomerate companies established in 1938
Indian brands
Construction and civil engineering companies established in 1938
Construction equipment manufacturers of India
Indian companies established in 1938
Companies listed on the National Stock Exchange of India
Companies listed on the Bombay Stock Exchange |
2243946 | https://en.wikipedia.org/wiki/Anthony%20Rother | Anthony Rother | Anthony Rother (born 29 April 1972) is an electronic music composer, producer and label owner living in Frankfurt, Germany.
Rother's electro sound ("Sex With the Machines", "Simulationszeitalter", "Hacker") is characterized by repetitive machine-like beats, robotic, vocoder-driven vocals, melancholy, futuristic mood and lyrics that often deal with the consequences of technological progress, the relationship between humans and machines, and the role of computers in society.
In addition to electro, Rother also composes dark ambient music ("Elixir of Life", "Art Is a Technology"). He has also produced music for Sven Väth and DJ Hell.
Discography
$ex with the Machines (1997)
Simulationszeitalter (2000)
Art Is A Division of Pain (2001) (as Psi Performer)
Little Computer People (2001) (as Little Computer People)
Hacker (2002)
Live Is Life Is Love (2003)
Elixir of Life (2003) (ambient)
Magic Diner (2003) (ambient)
Popkiller (2004)
Art Is A Technology (2005) (ambient)
Super Space Model (2006)
My Name Is Beuys Von Telekraft (2008)
Popkiller II (2010)
62 Minutes On Mars (2011)
The Machine Room (2011) (ambient)
Verbalizer (2011)
Netzwerk Der Zukunft (2014)
Verbalizer (2014)
Koridium (2015)
Terazoid / Octagon (2015)
Compilations
Various – In Electro We Trust (2004)
Anthony Rother – This Is Electro (Works 1997 - 2005) (2005)
Various – We Are Punks (2007)
Various – We Are Punks 2 (2007)
Various – We Are Punks 3 (2008)
Various – Fuse Presents Anthony Rother (2009)
Anthony Rother - Past Represents The Future (2012)
Various – Robotics EP (2021)
Remixes
Miss Kittin & The Hacker - "1982" (1998)
References
External links
Datapunk, Rother's new record label
PSI49NET, Rother's first record label
Myspace.com: Anthony Rother
Official Instagram Page: Anthony Rother
Anthony Rother @ Last.fm
1972 births
Living people
German record producers
Electro musicians |
2084170 | https://en.wikipedia.org/wiki/DrayTek | DrayTek | DrayTek () is a network equipment manufacturer of broadband CPE (Customer Premises Equipment), including firewalls, VPN devices, routers, managed switches and wireless LAN devices. The company was founded in 1997. The earliest products included ISDN based solutions, the first being the ISDN Vigor128, a USB terminal adaptor for Windows and Mac OS. This was followed by the ISDN Vigor204 ISDN terminal adaptor/PBX and Vigor2000, our first router. The head office is located in Hsinchu, Taiwan with regional offices and distributors worldwide.
DrayTek was one of the first manufacturers to bring VPN technology to low-cost routers, helping with the emergence of viable teleworking. In 2004, DrayTek released the first of its VoIP (Voice over IP) products. In 2006, new products aimed at enterprises debuted, including larger-scale firewalls and Unified Threat Management (UTM) firewall products; however, the UTM firewalls did not sell in sufficient volume, and their development and production ceased.
DrayTek's product line offers business and consumer DSL modems with support for the PPPoA standard, compared to the more widely supported PPPoE, for use with full-featured home routers and home computers without more expensive ATM hardware. PPPoA is used primarily in the UK for ADSL lines. Most Vigor routers provide a virtual private network (VPN) feature offering LAN-to-LAN and remote dial-in connections. In 2011, DrayTek embedded SSL VPN facilities into its Vigor router series.
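The PPPoA/PPPoE distinction is mostly about how PPP frames are encapsulated on the line, and it shows up in client configuration. As an illustrative sketch only (the interface name, VPI/VCI pair and username below are placeholder assumptions, not DrayTek-specific settings), a Linux pppd setup distinguishes the two encapsulations roughly like this:

```
# /etc/ppp/peers/dsl-pppoe -- PPP over Ethernet: PPP frames carried inside Ethernet frames
plugin rp-pppoe.so
eth0                      # network interface facing the modem/bridge
user "user@example-isp"

# /etc/ppp/peers/dsl-pppoa -- PPP over ATM: PPP frames carried directly on an ATM virtual circuit
plugin pppoatm.so
0.38                      # VPI.VCI pair commonly used on UK ADSL lines
user "user@example-isp"
```

On an integrated router such as a Vigor, the same choice typically appears as a single encapsulation setting in the WAN configuration rather than a peers file.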
DrayTek's Initial Public Offering (IPO) on the Taiwan Stock Exchange occurred in 2004.
Vigor 2200USB
DrayTek released the Vigor2200USB router in the UK in 2002, the only router able to be connected to BT's newly launched USB-modem-based ADSL service. The router did not incorporate a modem, but allowed certain specified USB ADSL modems to be connected; at the time many Internet service providers required their USB modems to be used rather than allowing connection of ADSL equipment directly to the telephone line ("wires-only" service). The product was devised in the UK by SEG Communications and developed by DrayTek engineers. As the only router supporting a separate USB modem, it became very popular and firmly established DrayTek as a key player in the broadband Internet hardware market in the UK.
BT MCT SIN 498 Compliance
For the UK market, DrayTek were granted MCT approval for their VDSL2 products in 2015, a mandatory requirement for products connected to the UK network operated by BT Openreach (used by most ISPs).
Products
This is an approximate chronological order of all major DrayTek products from 1997:
Router / Modem
DrayTek isdnVigor128
DrayTek isdnVigor204
DrayTek Vigor 2000 Series
DrayTek Vigor 2200 Series
DrayTek Vigor 2104P
DrayTek Vigor 2100 Series
DrayTek Vigor 2700 Series
DrayTek Vigor 2600 Series
DrayTek minivigor128
DrayTek Vigor 2300 Series
DrayTek Vigor 2900 Series
DrayTek Vigor 2800 Series
DrayTek Vigor 2710 Series
DrayTek Vigor 2110 Series
DrayTek Vigor 2910 Series
DrayTek Vigor 3100
DrayTek Vigor 3300V
DrayTek Vigor 2950
DrayTek VigorPro 5510
DrayTek Vigor 2820 Series
DrayTek Vigor 2130 Series
DrayTek Vigor 2955
DrayTek Vigor 100
DrayTek Vigor 3300VPlus
DrayTek Vigor 2750 Series
DrayTek Vigor 120
DrayTek Vigor 2920 Series
DrayTek Vigor 2830 Series
DrayTek Vigor 2850 Series
DrayTek Vigor 3200 Series
DrayTek Vigor 2925 Series
DrayTek Vigor 3900
DrayTek Vigor 2760 Series
DrayTek Vigor 300B
DrayTek Vigor 2960
DrayTek Vigor 130
DrayTek Vigor 2860 Series
DrayTek Vigor 3220
DrayTek Vigor 2952
DrayTek Vigor 2860Ln
DrayTek Vigor BX2000
Draytek Vigor 2962 series
Draytek Vigor 2926 series
DrayTek Vigor 2762 Series
DrayTek Vigor 2765 Series
DrayTek Vigor 2735 Series
DrayTek Vigor 3910 Series
DrayTek Vigor 2620L Series
DrayTek Vigor LTE200
DrayTek Vigor 2135
802.11 access points
DrayTek VigorAP 700
DrayTek VigorAP 710
DrayTek VigorAP 800
DrayTek VigorAP 810
DrayTek VigorAP 900
DrayTek VigorAP 902
DrayTek VigorAP 903 Mesh
DrayTek VigorAP 910C
DrayTek VigorAP 920R
DrayTek VigorAP 918RP
Managed switches
DrayTek VigorSwitch G2240
DrayTek VigorSwitch P2260
DrayTek VigorSwitch P1100
DrayTek VigorSwitch P2261
DrayTek VigorSwitch P1090
DrayTek VigorSwitch P1080
DrayTek VigorSwitch G1240
DrayTek VigorSwitch G2280
DrayTek VigorSwitch P2280
DrayTek VigorSwitch P1092
DrayTek VigorSwitch V1281
DrayTek VigorSwitch G1080
DrayTek VigorSwitch G1085
DrayTek VigorSwitch G1280
DrayTek VigorSwitch P1280
DrayTek VigorSwitch G2100
DrayTek VigorSwitch P2100
DrayTek VigorSwitch G2280x
DrayTek VigorSwitch P2280x
Software
VigorACS-SI
VigorACS 2 (Released 2017)
VigorACS 3
Smart VPN Client (Windows/Android/iOS/macOS)
Vigor Manager apps (Android/iOS)
DrayTek Wireless (Android/iOS) - Wireless Management app for VigorAP 903 Mesh and VigorAP 920R
Vigor AVS apps (Android/iOS) - Audio Visual Switching app for VigorSwitch V1281
Other
DrayTek VigorNIC 132
DrayTek VigorIPPBX2820 Series
DrayTek VigorIPPBX3510
References
Taiwanese companies established in 1997
Networking hardware companies
Manufacturing companies based in Hsinchu
Manufacturing companies established in 1997
Electronics companies of Taiwan
Taiwanese brands |
12034587 | https://en.wikipedia.org/wiki/Charles%20Mok | Charles Mok | Charles Peter Mok, JP (born 1964 in Hong Kong) is a Hong Kong-based Internet entrepreneur and IT advocate who formerly represents the Information Technology functional constituency on the Hong Kong Legislative Council.
Mok founded HKNet in 1994 and contributed to the company's expansion as a major IP telecommunications operator in Hong Kong before its acquisition by NTT Communications in 2000. He was a founding chairman of the Internet Society, Hong Kong Chapter, and an ex officio member and ex-president of the Hong Kong Information Technology Federation. He was also a past chairman and co-founder of the Hong Kong Internet Service Providers Association, and served as a Hong Kong Legislative Councillor.
He has been actively promoting the industry's development and digital literacy in the region since the early 1990s, and has participated in community efforts to promote fair competition, media freedom, personal privacy, consumer protection, healthcare, transport, human rights and democratic development in Hong Kong. In 1999, he was named one of Hong Kong's "Ten Outstanding Young Digi Persons".
Mok is currently a regular columnist for a number of local print media, including the Hong Kong Economic Journal (since 2000) and CUP magazine (since 2005).
In Hong Kong's 2008 Legislative Council election, Mok lost to Samson Tam in the Information Technology functional constituency with 1,982 votes, just 35 fewer than Tam's total of 2,017 votes. Mok commenced a legal action in the High Court of Hong Kong against Tam in relation to the latter's alleged misconduct during campaigning.
In the 2012 election, Mok won the Information Technology seat with 2,828 votes, against 2,063 votes for the incumbent, his only opponent, Tam. He was re-elected to his Legislative Council seat in the 2016 election.
Education
Mok attended Wah Yan College, Hong Kong (Class of 1981), a Roman Catholic single-gender secondary school in Hong Kong. He received his bachelor's and master's degrees, in 1985 and 1987 respectively, in Electrical Engineering from Purdue University, United States. Mok was a PhD candidate in Enterprise Management at Shanghai University of Finance and Economics, People's Republic of China.
Electoral history
2008 Hong Kong legislative election
2012 Hong Kong legislative election
2016 Hong Kong legislative election
Public services
Professional Commons, Vice-Chairman (2008– )
Internet Society Hong Kong Chairman
Hong Kong Information Technology Federation Ex officio Member; Past President (2001–05)
Hong Kong Computer Society Chair, Health Information Technology Special Interest Division
Hong Kong Internet Service Providers Association Past chairman (1998–2000)
Web-based Services and Computer Network Working Group, Vice-Chairman, Chairman.
Supporting Services Development Committee, Vice-Chairman.
Radio Television Hong Kong Member, Television Programme Advisory Committee
HK Human Rights Monitor Founding Member
Hong Kong Democratic Foundation, director
Publication
"Innovation and Entrepreneurship Support Policy by Government: HKSAR as Example", "Waiguo Jingji Yu Guanli" ("Foreign Economics and Management") Journal Vol. 28, Shanghai University of Finance and Economics, August 2006
See also
HKNet
Mok
Hong Kong Human Rights Monitor
References
External links
HKIRC
Official Site
IT360 Newsletter for the Hong Kong IT Industry
Living people
1965 births
Hong Kong politicians
Hong Kong Christians
Alumni of Wah Yan
Hong Kong financial businesspeople
HK LegCo Members 2012–2016
HK LegCo Members 2016–2020
Members of the Election Committee of Hong Kong, 2007–2012
Members of the Election Committee of Hong Kong, 2012–2017
Shanghai University of Finance and Economics alumni |
351581 | https://en.wikipedia.org/wiki/Health%20informatics | Health informatics | Health informatics is the field of science and engineering that aims at developing methods and technologies for the acquisition, processing, and study of patient data, which can come from different sources and modalities, such as electronic health records, diagnostic test results, and medical scans. The health domain provides an extremely wide variety of problems that can be tackled using computational techniques.
Health informatics is a spectrum of multidisciplinary fields that includes study of the design, development and application of computational innovations to improve health care. The disciplines involved combine medical fields with computing fields, in particular computer engineering, software engineering, information engineering, bioinformatics, bio-inspired computing, theoretical computer science, information systems, data science, information technology, autonomic computing, and behavior informatics. In academic institutions, medical informatics research focuses on applications of artificial intelligence in healthcare and on designing medical devices based on embedded systems. In some countries, the term informatics is also used in the context of applying library science to data management in hospitals.
Subspecialities
According to Jan van Bemmel, medical informatics comprises the theoretical and practical aspects of information processing and communication based on knowledge and experience derived from processes in medicine and health care.
Archival science and databases in healthcare
Archival clinical informaticians use their knowledge of patient care combined with their understanding of informatics concepts, methods, and health informatics tools to:
assess information and knowledge needs of health care professionals, patients and their families,
characterize, evaluate, and refine clinical processes,
develop, implement, and refine clinical decision support systems, and
lead or participate in the procurement, customization, development, implementation, management, evaluation, and continuous improvement of clinical information systems.
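The rule-evaluation pattern behind the clinical decision support systems listed above can be sketched in a few lines of Python. This is a minimal illustration, not a production system; the rules, drug names and lab thresholds are hypothetical examples:

```python
# Minimal rule-based clinical decision support sketch.
# All rules, drug names and thresholds here are hypothetical examples.

def check_alerts(patient):
    """Return a list of alert strings for a patient record (a dict)."""
    alerts = []
    # Drug-allergy interaction check
    for drug in patient.get("orders", []):
        if drug in patient.get("allergies", []):
            alerts.append(f"ALLERGY: patient is allergic to {drug}")
    # Simple lab-value threshold rule
    potassium = patient.get("labs", {}).get("potassium")
    if potassium is not None and potassium > 5.5:
        alerts.append(f"LAB: potassium {potassium} mmol/L exceeds 5.5")
    return alerts

record = {
    "orders": ["penicillin", "lisinopril"],
    "allergies": ["penicillin"],
    "labs": {"potassium": 5.9},
}
for alert in check_alerts(record):
    print(alert)
```

Real systems evaluate thousands of such rules against the live clinical information system, which is why the procurement and continuous-improvement work described above matters.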
Clinicians collaborate with other health care and information technology professionals to develop health informatics tools which promote patient care that is safe, efficient, effective, timely, patient-centered, and equitable. Many clinical informaticists are also computer scientists. In October 2011 American Board of Medical Specialties (ABMS), the organization overseeing the certification of specialist MDs in the United States, announced the creation of MD-only physician certification in clinical informatics. The first examination for board certification in the subspecialty of clinical informatics was offered in October 2013 by American Board of Preventive Medicine (ABPM) with 432 passing to become the 2014 inaugural class of Diplomates in clinical informatics. Fellowship programs exist for physicians who wish to become board-certified in clinical informatics. Physicians must have graduated from a medical school in the United States or Canada, or a school located elsewhere that is approved by the ABPM. In addition, they must complete a primary residency program such as Internal Medicine (or any of the 24 subspecialties recognized by the ABMS) and be eligible to become licensed to practice medicine in the state where their fellowship program is located. The fellowship program is 24 months in length, with fellows dividing their time between Informatics rotations, didactic method, research, and clinical work in their primary specialty.
Integrated data repository
One of the fundamental elements of biomedical and translational research is the use of integrated data repositories. A survey conducted in 2010 defined "integrated data repository" (IDR) as a data warehouse incorporating various sources of clinical data to support queries for a range of research-like functions. Integrated data repositories are complex systems developed to solve a variety of problems ranging from identity management, protection of confidentiality, and semantic and syntactic comparability of data from different sources, to, most importantly, convenient and flexible querying. Development of the field of clinical informatics led to the creation of large data sets with electronic health record data integrated with other data (such as genomic data). Types of data repositories include operational data stores (ODSs), clinical data warehouses (CDWs), clinical data marts, and clinical registries. Operational data stores are established for extracting, transferring and loading data before creating a warehouse or data marts. Clinical registries have long been in existence, but their contents are disease-specific and sometimes considered archaic. Clinical data stores and clinical data warehouses are considered fast and reliable. Though these large integrated repositories have impacted clinical research significantly, they still face challenges and barriers. One big problem is the requirement for ethical approval by the institutional review board (IRB) for each research analysis meant for publication. Some research resources do not require IRB approval. For example, CDWs containing data of deceased patients have been de-identified, and IRB approval is not required for their usage. Another challenge is data quality. Methods that adjust for bias (such as propensity score matching) assume that a complete health record is captured. Tools that examine data quality (e.g., point to missing data) help in discovering data quality problems.
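De-identification, mentioned above as what makes some repository data usable without IRB approval, can be illustrated with a small sketch. The field names are hypothetical, and real de-identification (e.g. under the HIPAA Safe Harbor rules) removes many more identifier types:

```python
# Sketch of a simple de-identification step for a clinical data repository.
# Field names are hypothetical; real de-identification (e.g. HIPAA Safe
# Harbor) removes many more identifier types than shown here.

DIRECT_IDENTIFIERS = {"name", "address", "phone", "ssn", "mrn"}

def deidentify(record):
    """Drop direct identifiers and coarsen date of birth to a year."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "dob" in clean:
        clean["birth_year"] = clean.pop("dob")[:4]  # keep only the year
    return clean

row = {"name": "Jane Doe", "mrn": "12345", "dob": "1984-07-02",
       "diagnosis": "E11.9", "potassium": 4.1}
print(deidentify(row))
```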
Data science and knowledge representation in healthcare
Clinical research informatics
Clinical research informatics (CRI) is a sub-field of health informatics that tries to improve the efficiency of clinical research by using informatics methods. Some of the problems tackled by CRI are: creation of data warehouses of health care data that can be used for research, support of data collection in clinical trials by the use of electronic data capture systems, streamlining ethical approvals and renewals (in the US the responsible entity is the local institutional review board), and maintenance of repositories of past clinical trial data (de-identified). CRI is a fairly new branch of informatics and has met growing pains as any up-and-coming field does. Some issues CRI faces are the ability of statisticians and computer system architects to work with the clinical research staff in designing a system, and the lack of funding to support the development of a new system. Researchers and the informatics team have a difficult time coordinating plans and ideas in order to design a system that is easy to use for the research team yet fits the system requirements of the computer team. The lack of funding can be a hindrance to the development of CRI. Many organizations performing research are struggling to get financial support to conduct the research, much less invest that money in an informatics system that will not provide them any more income or improve the outcome of the research (Embi, 2009). The ability to integrate data from multiple clinical trials is an important part of clinical research informatics. Initiatives such as PhenX and the Patient-Reported Outcomes Measurement Information System triggered a general effort to improve secondary use of data collected in past human clinical trials. CDE initiatives, for example, try to allow clinical trial designers to adopt standardized research instruments (electronic case report forms).
A parallel effort to standardizing how data is collected are initiatives that offer de-identified patient level clinical study data to be downloaded by researchers who wish to re-use this data. Examples of such platforms are Project Data Sphere, dbGaP, ImmPort or Clinical Study Data Request. Informatics issues in data formats for sharing results (plain CSV files, FDA endorsed formats, such as CDISC Study Data Tabulation Model) are important challenges within the field of clinical research informatics. There are a number of activities within clinical research that CRI supports, including:
more efficient and effective data collection and acquisition
improved recruitment into clinical trials
optimal protocol design and efficient management
patient recruitment and management
adverse event reporting
regulatory compliance
data storage, transfer, processing and analysis
repositories of data from completed clinical trials (for secondary analyses)
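The plain-CSV sharing format mentioned above is the simplest of the options, and loading such a shared, de-identified trial file for a secondary analysis takes only the standard library. The column names and values below are hypothetical:

```python
# Sketch: loading de-identified trial results shared as a plain CSV file
# (one of the sharing formats mentioned above) and computing a per-arm
# mean outcome. Column names and values are hypothetical.
import csv
import io
import statistics

shared_csv = """subject_id,arm,outcome
001,treatment,4.2
002,treatment,3.8
003,placebo,5.1
004,placebo,4.9
"""

rows = list(csv.DictReader(io.StringIO(shared_csv)))
by_arm = {}
for r in rows:
    by_arm.setdefault(r["arm"], []).append(float(r["outcome"]))

for arm, values in sorted(by_arm.items()):
    print(arm, round(statistics.mean(values), 2))
```

Richer formats such as the CDISC Study Data Tabulation Model standardize the column semantics so that this kind of pooling works across trials, which is exactly the integration problem CRI tries to solve.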
Translational bioinformatics
Translational bioinformatics (TBI) is a relatively new field that surfaced around 2000, when the human genome sequence was released. The commonly used definition of TBI is lengthy and can be found on the AMIA website. In simpler terms, TBI can be defined as the collection of colossal amounts of health-related data (biomedical and genomic) and the translation of that data into individually tailored clinical entities.
Today, the TBI field is categorized into four major themes that are briefly described below:
Clinical big data is a collection of electronic health records that are used for innovations. The evidence-based approach that is currently practiced in medicine is suggested to be merged with practice-based medicine to achieve better outcomes for patients. As Darren Schulte, CEO of the California-based cognitive computing firm Apixio, explains, care can be better fitted to the patient if data can be collected from various medical records, merged, and analyzed. Further, the combination of similar profiles can serve as a basis for personalized medicine, pointing to what works and what does not for a certain condition (Marr, 2016).
Genomics in clinical care: Genomic data are used to identify the genes involved in unknown or rare conditions/syndromes. Currently, the most vigorous area of applied genomics is oncology. Genomic sequencing of cancers may reveal the reasons for drug sensitivity and resistance during oncological treatment.
Omics for drug discovery and repurposing: Repurposing of a drug is an appealing idea that allows pharmaceutical companies to sell an already approved drug to treat a different condition/disease that the drug was not initially approved for by the FDA. The observation of "molecular signatures in disease and compare those to signatures observed in cells" points to the possibility of a drug's ability to cure and/or relieve symptoms of a disease.
Personalized genomic testing: In the US, several companies offer direct-to-consumer (DTC) genetic testing. The company that performs the majority of testing is called 23andMe. Utilizing genetic testing in health care raises many ethical, legal and social concerns; one of the main questions is whether health care providers are ready to include patient-supplied genomic information while providing care that is unbiased (despite the intimate genomic knowledge) and of high quality. Documented examples of incorporating such information into health care delivery have shown both positive and negative impacts on overall health-care-related outcomes.
Artificial intelligence in healthcare
A pioneer in the use of artificial intelligence in healthcare was the American biomedical informatician Edward H. Shortliffe. This field deals with the utilization of machine-learning algorithms and artificial intelligence to emulate human cognition in the analysis, interpretation, and comprehension of complicated medical and healthcare data. Specifically, AI is the ability of computer algorithms to approximate conclusions based solely on input data. AI programs are applied to practices such as diagnosis processes, treatment protocol development, drug development, personalized medicine, and patient monitoring and care. A large part of the industry's focus on implementation of AI in the healthcare sector is in clinical decision support systems. As more data is collected, machine learning algorithms adapt and allow for more robust responses and solutions. Numerous companies are exploring the possibilities of the incorporation of big data in the healthcare industry. Many companies investigate the market opportunities through the realms of "data assessment, storage, management, and analysis technologies" which are all crucial parts of the healthcare industry. The following are examples of large companies that have contributed to AI algorithms for use in healthcare:
IBM's Watson Oncology is in development at Memorial Sloan Kettering Cancer Center and Cleveland Clinic. IBM is also working with CVS Health on AI applications in chronic disease treatment and with Johnson & Johnson on analysis of scientific papers to find new connections for drug development. In May 2017, IBM and Rensselaer Polytechnic Institute began a joint project entitled Health Empowerment by Analytics, Learning and Semantics (HEALS), to explore using AI technology to enhance healthcare.
Microsoft's Hanover project, in partnership with Oregon Health & Science University's Knight Cancer Institute, analyzes medical research to predict the most effective cancer drug treatment options for patients. Other projects include medical image analysis of tumor progression and the development of programmable cells.
Google's DeepMind platform is being used by the UK National Health Service to detect certain health risks through data collected via a mobile app. A second project with the NHS involves analysis of medical images collected from NHS patients to develop computer vision algorithms to detect cancerous tissues.
Tencent is working on several medical systems and services. These include AI Medical Innovation System (AIMIS), an AI-powered diagnostic medical imaging service; WeChat Intelligent Healthcare; and Tencent Doctorwork
Intel's venture capital arm Intel Capital recently invested in startup Lumiata which uses AI to identify at-risk patients and develop care options.
Kheiron Medical developed deep learning software to detect breast cancers in mammograms.
Fractal Analytics has incubated Qure.ai which focuses on using deep learning and AI to improve radiology and speed up the analysis of diagnostic x-rays.
Neuralink has come up with a next-generation neuroprosthetic which intricately interfaces with thousands of neural pathways in the brain. Their process allows a chip, roughly the size of a quarter, to be inserted in place of a chunk of skull by a precision surgical robot to avoid accidental injury.
Digital consultant apps like Babylon Health's GP at Hand, Ada Health, AliHealth Doctor You, KareXpert and Your.MD use AI to give medical consultation based on personal medical history and common medical knowledge. Users report their symptoms into the app, which uses speech recognition to compare against a database of illnesses. Babylon then offers a recommended action, taking into account the user's medical history.

Entrepreneurs in healthcare have been effectively using seven business model archetypes to take AI solutions to the marketplace. These archetypes depend on the value generated for the target user (e.g. patient focus vs. healthcare provider and payer focus) and value-capturing mechanisms (e.g. providing information or connecting stakeholders).

IFlytek launched a service robot "Xiao Man", which integrated artificial intelligence technology to identify the registered customer and provide personalized recommendations in medical areas. It also works in the field of medical imaging. Similar robots are also being made by companies such as UBTECH ("Cruzr") and Softbank Robotics ("Pepper"). The Indian startup Haptik recently developed a WhatsApp chatbot which answers questions associated with the coronavirus in India.

With the market for AI expanding constantly, large tech companies such as Apple, Google, Amazon, and Baidu all have their own AI research divisions, as well as millions of dollars allocated for the acquisition of smaller AI-based companies. Many automobile manufacturers are beginning to use machine learning in their cars as well. Companies such as BMW, GE, Tesla, Toyota, and Volvo all have new research campaigns to find ways of monitoring a driver's vital statistics to ensure they are awake, paying attention to the road, and not under the influence of substances or in emotional distress. Examples of projects in computational health informatics include the COACH project.
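The machine-learning pattern underlying many of the clinical decision support tools above is: learn from labeled patient data, then score new patients. A toy, stdlib-only sketch using a nearest-centroid classifier on synthetic data (a real system would use far richer features and a proper ML library such as scikit-learn):

```python
# Toy nearest-centroid classifier illustrating the learn-then-score
# pattern behind many clinical decision support tools. The features,
# labels and data are entirely synthetic.
import math

# (age, systolic_bp) -> 1 = high risk, 0 = low risk (synthetic data)
training = [((35, 118), 0), ((42, 125), 0), ((68, 160), 1), ((74, 155), 1)]

def centroid(points):
    """Mean point of a list of 2-D feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

# One centroid per class, computed from the training data
centroids = {
    label: centroid([x for x, y in training if y == label])
    for label in (0, 1)
}

def predict(x):
    """Classify x as the label of the nearest class centroid."""
    return min(centroids, key=lambda c: math.dist(x, centroids[c]))

print(predict((70, 150)))  # elderly, hypertensive
print(predict((30, 115)))  # young, normotensive
```

As the article notes, real systems improve as more data is collected; here that simply means recomputing the centroids from a larger training set.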
Telehealth and telemedicine
Telehealth is the distribution of health-related services and information via electronic information and telecommunication technologies. It allows long-distance patient and clinician contact, care, advice, reminders, education, intervention, monitoring, and remote admissions. Telemedicine is sometimes used as a synonym, or is used in a more limited sense to describe remote clinical services, such as diagnosis and monitoring. Remote monitoring, also known as self-monitoring or testing, enables medical professionals to monitor a patient remotely using various technological devices. This method is primarily used for managing chronic diseases or specific conditions, such as heart disease, diabetes mellitus, or asthma. These services can provide comparable health outcomes to traditional in-person patient encounters, supply greater satisfaction to patients, and may be cost-effective.

Telerehabilitation (or e-rehabilitation) is the delivery of rehabilitation services over telecommunication networks and the Internet. Most types of services fall into two categories: clinical assessment (the patient's functional abilities in his or her environment), and clinical therapy. Some fields of rehabilitation practice that have explored telerehabilitation are: neuropsychology, speech-language pathology, audiology, occupational therapy, and physical therapy. Telerehabilitation can deliver therapy to people who cannot travel to a clinic because the patient has a disability or because of travel time. Telerehabilitation also allows experts in rehabilitation to engage in a clinical consultation at a distance.
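The remote-monitoring workflow described above reduces, at its core, to checking a stream of home-device readings against clinician-set limits and raising alerts for out-of-range values. A minimal sketch (the thresholds and readings are hypothetical):

```python
# Sketch of the remote-monitoring pattern: a stream of home-device
# readings is checked against clinician-set limits, and out-of-range
# values trigger an alert. Thresholds and readings are hypothetical.

def monitor(readings, low=60, high=100):
    """Yield alerts for heart-rate readings outside [low, high] bpm."""
    for timestamp, bpm in readings:
        if bpm < low or bpm > high:
            yield f"{timestamp}: heart rate {bpm} bpm out of range"

stream = [("08:00", 72), ("08:05", 110), ("08:10", 55)]
for alert in monitor(stream):
    print(alert)
```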
Medical signal processing
An important application of information engineering in medicine is medical signal processing. It refers to the generation, analysis and use of signals, which could take many forms such as image, sound, electrical, or biological.
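One of the simplest instances of such signal processing is smoothing a noisy sampled signal with a moving-average filter. The sketch below uses a synthetic signal; real biosignals (ECG, EEG) call for properly designed filters:

```python
# Minimal medical signal processing example: smoothing a noisy sampled
# signal (e.g. from a biosensor) with a moving-average (FIR) filter.
# The signal here is synthetic.

def moving_average(signal, window=3):
    """Average each run of `window` consecutive samples."""
    out = []
    for i in range(len(signal) - window + 1):
        out.append(sum(signal[i:i + window]) / window)
    return out

noisy = [1.0, 1.2, 0.9, 1.1, 5.0, 1.0, 0.8]  # artifact spike at index 4
print(moving_average(noisy))
```

The spike is spread over neighboring samples rather than removed; that trade-off is why real pipelines choose filters (median, band-pass, wavelet) to match the artifact being suppressed.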
Medical image computing and imaging informatics
Imaging informatics and medical image computing develop computational and mathematical methods for solving problems pertaining to medical images and their use for biomedical research and clinical care. These fields aim to extract clinically relevant information or knowledge from medical images through computational analysis. The methods can be grouped into several broad categories: image segmentation, image registration, image-based physiological modeling, and others.
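The simplest member of the segmentation category is intensity thresholding, sketched below on a tiny synthetic 2-D grid standing in for a scan slice; real pipelines use far more sophisticated algorithms on actual image data:

```python
# Minimal image segmentation example: thresholding a tiny synthetic 2-D
# intensity grid into foreground (e.g. a bright region) and background.
# The image and threshold are hypothetical.

image = [
    [10, 12, 11, 10],
    [11, 90, 95, 12],
    [10, 92, 94, 11],
    [12, 11, 10, 10],
]

def segment(img, threshold=50):
    """Return a binary mask: 1 where intensity exceeds the threshold."""
    return [[1 if px > threshold else 0 for px in row] for row in img]

mask = segment(image)
for row in mask:
    print(row)
```

Registration, the other main category, instead aligns two images (e.g. scans from different dates) so that such masks can be compared voxel by voxel.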
Other fields in healthcare technology
Medical robotics and autonomic computing
A medical robot is a robot used in the medical sciences. They include surgical robots. Most of these are telemanipulators, which use the surgeon's actions on one side to control the "effector" on the other side. There are the following types of medical robots:
Surgical robots: either allow surgical operations to be carried out with better precision than an unaided human surgeon or allow remote surgery where a human surgeon is not physically present with the patient.
Rehabilitation robots: facilitate and support the lives of infirm, elderly people, or those with dysfunction of body parts affecting movement. These robots are also used for rehabilitation and related procedures, such as training and therapy.
Biorobots: a group of robots designed to imitate the cognition of humans and animals.
Telepresence robots: allow off-site medical professionals to move, look around, communicate, and participate from remote locations.
Pharmacy automation: robotic systems to dispense oral solids in a retail pharmacy setting or preparing sterile IV admixtures in a hospital pharmacy setting.
Companion robot: has the capability to engage emotionally with users keeping them company and alerting if there is a problem with their health.
Disinfection robot: has the capability to disinfect a whole room in mere minutes, generally using pulsed ultraviolet light. They are being used to fight Ebola virus disease.
Computer engineering in healthcare
The field of computer engineering is known in Europe as technical informatics and is closely related to engineering informatics, which also includes information engineering. Computer engineers create computer-based devices for the health service, in particular embedded systems.
Neuroengineering & Neuroinformatics
Neuroinformatics is the scientific study of information flow and processing in the nervous system. Scientists utilize brain imaging techniques, such as magnetic resonance imaging, to reveal the organization of brain networks involved in human thought. There are three main directions in which neuroinformatics is applied:
the development of computational models of the nervous system and neural processes,
the development of tools for analyzing data from neurological diagnostic devices,
the development of tools and databases for the management and sharing of patients' brain data in healthcare institutions.
Brain mapping and simulation
Brain simulation is the concept of creating a functioning computational model of a brain or part of a brain. In December 2006, the Blue Brain project completed a simulation of a rat's neocortical column. The neocortical column is considered the smallest functional unit of the neocortex. The neocortex is the part of the brain thought to be responsible for higher-order functions like conscious thought; a rat's neocortical column contains about 10,000 neurons (and 10^8 synapses). In November 2007, the project reported the end of its first phase, delivering a data-driven process for creating, validating, and researching the neocortical column. An artificial neural network described as being "as big and as complex as half of a mouse brain" was run on an IBM Blue Gene supercomputer by the University of Nevada's research team in 2007. Each second of simulated time took ten seconds of computer time. The researchers claimed to observe "biologically consistent" nerve impulses that flowed through the virtual cortex. However, the simulation lacked the structures seen in real mouse brains, and they intend to improve the accuracy of the neuron and synapse models.
Mind uploading
Mind uploading is the process of scanning a physical structure of the brain accurately enough to create an emulation of the mental state (including long-term memory and "self") and copying it to a computer in a digital form. The computer would then run a simulation of the brain's information processing, such that it would respond in essentially the same way as the original brain and experience having a sentient conscious mind. Substantial mainstream research in related areas is being conducted in animal brain mapping and simulation, development of faster supercomputers, virtual reality, brain–computer interfaces, connectomics, and information extraction from dynamically functioning brains. According to supporters, many of the tools and ideas needed to achieve mind uploading already exist or are currently under active development; however, they will admit that others are, as yet, very speculative, but say they are still in the realm of engineering possibility.
History
Worldwide use of computer technology in medicine began in the early 1950s with the rise of computers. In 1949, Gustav Wagner established the first professional organization for informatics in Germany. The prehistory, history, and future of medical information and health information technology have been discussed in the literature. Specialized university departments and informatics training programs began during the 1960s in France, Germany, Belgium and The Netherlands. Medical informatics research units began to appear during the 1970s in Poland and in the U.S. Since then, the development of high-quality health informatics research, education and infrastructure has been a goal of the U.S. and the European Union.
Early names for health informatics included medical computing, biomedical computing, medical computer science, computer medicine, medical electronic data processing, medical automatic data processing, medical information processing, medical information science, medical software engineering, and medical computer technology.
The health informatics community is still growing; it is by no means a mature profession. However, work in the UK by the voluntary registration body, the UK Council of Health Informatics Professions, has suggested eight key constituencies within the domain: information management, knowledge management, portfolio/programme/project management, ICT, education and research, clinical informatics, health records (service and business-related), and health informatics service management. These constituencies accommodate professionals in and for the NHS, in academia, and commercial service and solution providers.
Since the 1970s the most prominent international coordinating body has been the International Medical Informatics Association (IMIA).
In the United States
Even though the idea of using computers in medicine emerged as technology advanced in the early 20th century, it was not until the 1950s that informatics began to have an effect in the United States.
The earliest use of electronic digital computers for medicine was for dental projects in the 1950s at the United States National Bureau of Standards by Robert Ledley. During the mid-1950s, the United States Air Force (USAF) carried out several medical projects on its computers while also encouraging civilian agencies such as the National Academy of Sciences – National Research Council (NAS-NRC) and the National Institutes of Health (NIH) to sponsor such work. In 1959, Ledley and Lee B. Lusted published "Reasoning Foundations of Medical Diagnosis," a widely read article in Science, which introduced computing (especially operations research) techniques to medical workers. Ledley and Lusted's article has remained influential for decades, especially within the field of medical decision making.
Guided by Ledley's late 1950s survey of computer use in biology and medicine (carried out for the NAS-NRC), and by his and Lusted's articles, the NIH undertook the first major effort to introduce computers to biology and medicine. This effort, carried out initially by the NIH's Advisory Committee on Computers in Research (ACCR), chaired by Lusted, spent over $40 million between 1960 and 1964 in order to establish dozens of large and small biomedical research centers in the US.
One early (1960, non-ACCR) use of computers was to help quantify normal human movement, as a precursor to scientifically measuring deviations from normal, and design of prostheses. The use of computers (IBM 650, 1620, and 7040) allowed analysis of a large sample size, and of more measurements and subgroups than had been previously practical with mechanical calculators, thus allowing an objective understanding of how human locomotion varies by age and body characteristics. A study co-author was Dean of the Marquette University College of Engineering; this work led to discrete Biomedical Engineering departments there and elsewhere.
The next steps, in the mid-1960s, were the development (sponsored largely by the NIH) of expert systems such as MYCIN and Internist-I. In 1965, the National Library of Medicine started to use MEDLINE and MEDLARS. Around this time, Neil Pappalardo, Curtis Marble, and Robert Greenes developed MUMPS (Massachusetts General Hospital Utility Multi-Programming System) in Octo Barnett's Laboratory of Computer Science at Massachusetts General Hospital in Boston, another center of biomedical computing that received significant support from the NIH. In the 1970s and 1980s it was the most commonly used programming language for clinical applications. The MUMPS operating system was used to support MUMPS language specifications. A descendant of this system is used in the United States Veterans Affairs hospital system. The VA has the largest enterprise-wide health information system that includes an electronic medical record, known as the Veterans Health Information Systems and Technology Architecture (VistA). A graphical user interface known as the Computerized Patient Record System (CPRS) allows health care providers to review and update a patient's electronic medical record at any of the VA's over 1,000 health care facilities.
During the 1960s, Morris Collen, a physician working for Kaiser Permanente's Division of Research, developed computerized systems to automate many aspects of multi-phased health checkups. These systems became the basis of the larger medical databases Kaiser Permanente developed during the 1970s and 1980s. Since 1993, the American College of Medical Informatics (ACMI) has annually bestowed the Morris F. Collen, MD Medal for Outstanding Contributions to the Field of Medical Informatics.
Kaiser Permanente
In the 1970s a growing number of commercial vendors began to market practice management and electronic medical records systems. Although many products exist, only a small number of health practitioners use fully featured electronic health care records systems. In 1970, Warner V. Slack, MD, and Howard L. Bleich, MD, co-founded the academic division of clinical informatics at Beth Israel Deaconess Medical Center and Harvard Medical School. Warner Slack is a pioneer of the development of the electronic patient medical history, and in 1977 Dr. Bleich created the first user-friendly search engine for the world's biomedical literature. In 2002, Dr. Slack and Dr. Bleich were awarded the Morris F. Collen Award for their pioneering contributions to medical informatics.
Computerized systems involved in patient care have led to a number of changes. Such changes have led to improvements in electronic health records, which are now capable of sharing medical information among multiple health care stakeholders (Zahabi, Kaber, & Swangnetr, 2015), thereby supporting the flow of patient information through various modalities of care. One opportunity for electronic health records (EHRs) to be even more effectively used is to utilize natural language processing for searching and analyzing notes and text that would otherwise be inaccessible for review. These methods can be further developed through ongoing collaboration between software developers and end-users of natural language processing tools within EHRs.
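The natural-language-processing idea described above, pulling structured facts out of free-text clinical notes, can be sketched with simple pattern matching. The note and patterns below are simplified, hypothetical examples; production clinical NLP uses much richer models:

```python
# Sketch of extracting structured facts from a free-text clinical note.
# The note and patterns are simplified, hypothetical examples.
import re

note = ("Patient reports intermittent chest pain. Blood pressure 142/88. "
        "Denies shortness of breath. Heart rate 78 bpm.")

# Extract blood-pressure readings written like "142/88"
bp = re.findall(r"\b(\d{2,3})/(\d{2,3})\b", note)

# Extract negated findings following the word "Denies"
negated = re.findall(r"Denies ([a-z ]+?)\.", note)

print(bp)
print(negated)
```

Handling negation, as in the second pattern, is a classic difficulty of clinical text: "denies shortness of breath" must not be indexed as the patient having that symptom.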
Computer use today covers a broad range of activities, including but not limited to physician diagnosis and documentation, patient appointment scheduling, and billing. Many researchers in the field have identified an increase in the quality of health care systems, decreased errors by health care workers, and savings in time and money (Zahabi, Kaber, & Swangnetr, 2015). The system, however, is not perfect and will continue to require improvement. Frequently cited factors of concern involve usability, safety, accessibility, and user-friendliness (Zahabi, Kaber, & Swangnetr, 2015). As leaders in the field of medical informatics improve upon the aforementioned factors of concern, the overall provision of health care will continue to improve.
Homer R. Warner, one of the fathers of medical informatics, founded the Department of Medical Informatics at the University of Utah in 1968. The American Medical Informatics Association (AMIA) has an award named after him on application of informatics to medicine.
Informatics certifications are available to help informatics professionals stand out and be recognized. The American Nurses Credentialing Center (ANCC) offers a board certification in Nursing Informatics. For radiology informatics, the CIIP (Certified Imaging Informatics Professional) certification was created by ABII (the American Board of Imaging Informatics), which was founded by SIIM (the Society for Imaging Informatics in Medicine) and ARRT (the American Registry of Radiologic Technologists) in 2005. The CIIP certification requires documented experience working in imaging informatics and formal testing, and is a limited-time credential requiring renewal every five years. The exam tests for a combination of IT technical knowledge, clinical understanding, and project management experience thought to represent the typical workload of a PACS administrator or other radiology IT clinical support role. Certifications from PARCA (PACS Administrators Registry and Certifications Association) are also recognized; the five PARCA certifications are tiered from entry level to architect level. The American Health Information Management Association offers credentials in medical coding, analytics, and data administration, such as Registered Health Information Administrator and Certified Coding Associate. Certifications are widely requested by employers in health informatics, and overall the demand for certified informatics workers in the United States is outstripping supply. The American Health Information Management Association reports that only 68% of applicants pass certification exams on the first try.
In 2017, a consortium of health informatics trainers (composed of MEASURE Evaluation, Public Health Foundation India, University of Pretoria, Kenyatta University, and the University of Ghana) identified the following areas of knowledge as a curriculum for the digital health workforce, especially in low- and middle-income countries: clinical decision support; telehealth; privacy, security, and confidentiality; workflow process improvement; technology, people, and processes; process engineering; quality process improvement and health information technology; computer hardware; software; databases; data warehousing; information networks; information systems; information exchange; data analytics; and usability methods.
In the UK
The broad history of health informatics has been captured in the book UK Health Computing: Recollections and Reflections (Hayes G, Barnett D (eds.), BCS, May 2008) by those active in the field, predominantly members of BCS Health and its constituent groups. The book describes the path taken: 'early development of health informatics was unorganized and idiosyncratic'. In the early 1950s it was prompted by those involved in NHS finance, and only in the early 1960s did solutions emerge, including those in pathology (1960), radiotherapy (1962), immunization (1963), and primary care (1968). Many of these solutions, even into the early 1970s, were developed in-house by pioneers in the field to meet their own requirements, in part because some areas of health services (for example the immunization and vaccination of children) were still provided by local authorities. The coalition government's 2010 strategy Equity and Excellence: Liberating the NHS (July 2010) stated: 'We will put patients at the heart of the NHS, through an information revolution and greater choice and control', with shared decision-making becoming the norm ('no decision about me without me') and patients having access to the information they want in order to make choices about their care: 'They will have increased control over their own care records.' BCS, via FEDIP, provides four professional registration levels for health and care informatics professionals: Practitioner, Senior Practitioner, Advanced Practitioner, and Leading Practitioner. FEDIP is the Federation for Informatics Professionals in Health and Social Care, a collaboration between the leading professional bodies in health and care informatics supporting the development of the informatics profession.
Current state and policy initiatives
America
Argentina
Since 1997, the Buenos Aires Biomedical Informatics Group, a nonprofit group, has represented the interests of a broad range of clinical and non-clinical professionals working within the health informatics sphere.
Its purposes are:
Promote the use of computing in health care activity, scientific research, health administration and all areas related to health sciences and biomedical research.
Support, promote and disseminate activities related to the management of health information and the tools used for it, under the name of biomedical informatics.
Promote cooperation and the exchange of work in the field of biomedical informatics, in both the public and private sectors, at the national and international levels.
Interact with scientific and recognized academic bodies, stimulating the creation of new groups that share the same goals and purpose.
Promote, organize, sponsor and participate in events and activities for training in computing and information management, and disseminate developments in this area that might be useful for team members and health-related activities.
The Argentinian health system is heterogeneous in its function, and informatics development is correspondingly uneven. Many private health care centers have developed their own systems, such as the Hospital Aleman of Buenos Aires, or the Hospital Italiano de Buenos Aires, which also runs a residency program in health informatics.
Brazil
The first applications of computers to medicine and health care in Brazil started around 1968, with the installation of the first mainframes in public university hospitals and the use of programmable calculators in scientific research. Minicomputers such as the IBM 1130 were installed in several universities, and the first applications were developed for them, such as the hospital census in the School of Medicine of Ribeirão Preto and patient master files in the Hospital das Clínicas da Universidade de São Paulo, at the Ribeirão Preto and São Paulo campuses of the University of São Paulo respectively. In the 1970s, several Digital Equipment Corporation and Hewlett-Packard minicomputers were acquired for public and armed forces hospitals, and used more intensively for intensive-care units, cardiology diagnostics, patient monitoring and other applications. In the early 1980s, with the arrival of cheaper microcomputers, a great upsurge of computer applications in health ensued, and in 1986 the Brazilian Society of Health Informatics was founded, the first Brazilian Congress of Health Informatics was held, and the first Brazilian Journal of Health Informatics was published. Two Brazilian universities are pioneers in teaching and research in medical informatics: the University of São Paulo and the Federal University of São Paulo both offer highly qualified undergraduate programs in the area as well as extensive graduate programs (MSc and PhD). In 2015, the Universidade Federal de Ciências da Saúde de Porto Alegre, Rio Grande do Sul, also started to offer an undergraduate program.
Canada
Health informatics projects in Canada are implemented provincially, with different provinces creating different systems. A national, federally funded, not-for-profit organization called Canada Health Infoway was created in 2001 to foster the development and adoption of electronic health records across Canada. As of December 31, 2008, there were 276 EHR projects under way in Canadian hospitals, other health-care facilities, pharmacies and laboratories, with an investment value of $1.5 billion from Canada Health Infoway.
Provincial and territorial programmes include the following:
eHealth Ontario was created as an Ontario provincial government agency in September 2008. It has been plagued by delays and its CEO was fired over a multimillion-dollar contracts scandal in 2009.
Alberta Netcare was created in 2003 by the Government of Alberta. Today the netCARE portal is used daily by thousands of clinicians. It provides access to demographic data, prescribed/dispensed drugs, known allergies/intolerances, immunizations, laboratory test results, diagnostic imaging reports, the diabetes registry and other medical reports. netCARE interface capabilities are being included in electronic medical record products that are being funded by the provincial government.
United States
In 2004, President George W. Bush signed Executive Order 13335, creating the Office of the National Coordinator for Health Information Technology (ONCHIT) as a division of the U.S. Department of Health and Human Services (HHS). The mission of this office is the widespread adoption of interoperable electronic health records (EHRs) in the US within 10 years. See quality improvement organizations for more information on federal initiatives in this area. In 2014, the Department of Education approved an advanced health informatics undergraduate program submitted by the University of South Alabama. The program is designed to provide specific health informatics education, and is the only program in the country with a health informatics lab. It is housed in the School of Computing in Shelby Hall, a recently completed $50 million state-of-the-art teaching facility. The University of South Alabama awarded David L. Loeser the first health informatics degree on May 10, 2014; the program is scheduled to have awarded 100+ degrees by 2016. The Certification Commission for Healthcare Information Technology (CCHIT), a private nonprofit group, was funded in 2005 by the U.S. Department of Health and Human Services to develop a set of standards for electronic health records (EHR) and supporting networks, and to certify vendors who meet them. In July 2006, CCHIT released its first list of 22 certified ambulatory EHR products, in two different announcements. Harvard Medical School added a department of biomedical informatics in 2015. The University of Cincinnati, in partnership with Cincinnati Children's Hospital Medical Center, created a biomedical informatics (BMI) graduate certificate program and in 2015 began a BMI PhD program. The joint program allows researchers and students to observe directly the impact their work has on patient care, as discoveries are translated from bench to bedside.
Europe
The European Union's Member States are committed to sharing their best practices and experiences to create a European eHealth Area, thereby improving access to and the quality of health care while stimulating growth in a promising new industrial sector. The European eHealth Action Plan plays a fundamental role in the European Union's strategy. Work on this initiative involves a collaborative approach among several parts of the Commission services. The European Institute for Health Records is involved in the promotion of high-quality electronic health record systems in the European Union.
UK
There are different models of health informatics delivery in each of the home countries (England, Scotland, Northern Ireland and Wales) but some bodies like UKCHIP (see below) operate for those 'in and for' all the home countries and beyond.
NHS informatics in England was contracted out to several vendors for national health informatics solutions under the National Programme for Information Technology (NPfIT) label in the early to mid-2000s, under the auspices of NHS Connecting for Health (part of the Health and Social Care Information Centre as of 1 April 2013). NPfIT originally divided the country into five regions, with strategic 'systems integration' contracts awarded to one of several Local Service Providers (LSP). The various specific technical solutions were required to connect securely with the NHS 'Spine', a system designed to broker data between different systems and care settings. NPfIT fell significantly behind schedule and its scope and design were being revised in real time, exacerbated by media and political lambasting of the Programme's spend (past and projected) against the proposed budget. In 2010 a consultation was launched as part of the new Conservative/Liberal Democrat Coalition Government's White Paper 'Liberating the NHS'. This initiative provided little in the way of innovative thinking, primarily re-stating existing strategies within the proposed new context of the Coalition's vision for the NHS.
The degree of computerization in NHS secondary care was quite high before NPfIT, and the programme stagnated further development of the install base – the original NPfIT regional approach provided neither a single, nationwide solution nor local health community agility or autonomy to purchase systems, but instead tried to deal with a hinterland in the middle.
Almost all general practices in England and Wales are computerized under the GP Systems of Choice programme, and patients have relatively extensive computerized primary care clinical records. System choice is the responsibility of individual general practices, and while there is no single, standardized GP system, the programme sets relatively rigid minimum standards of performance and functionality for vendors to adhere to. Interoperation between primary and secondary care systems is rather primitive. It is hoped that a focus on interworking (for interfacing and integration) standards will stimulate synergy between primary and secondary care in sharing the information necessary to support the care of individuals. Notable successes to date are in the electronic requesting and viewing of test results and, in some areas, GPs' access to digital x-ray images from secondary care systems.
In 2019 the GP Systems of Choice framework was replaced by the GP IT Futures framework, which is to be the main vehicle used by clinical commissioning groups to buy services for GPs. This is intended to increase competition in an area that is dominated by EMIS and TPP. 69 technology companies offering more than 300 solutions have been accepted on to the new framework.
Wales has a dedicated Health Informatics function that supports NHS Wales in leading on the new integrated digital information services and promoting Health Informatics as a career.
Netherlands
In the Netherlands, health informatics is currently a priority for research and implementation. The Netherlands Federation of University medical centers (NFU) has created the Citrienfonds, which includes the programs eHealth and Registration at the Source. The Netherlands also has the national organizations Society for Healthcare Informatics (VMBI) and Nictiz, the national center for standardization and eHealth.
European research and development
The European Commission's preference, as exemplified in the 5th Framework as well as currently pursued pilot projects, is for Free/Libre and Open Source Software (FLOSS) for health care. Another stream of research currently focuses on aspects of "big data" in health information systems. For background information on data-related aspects in health informatics see, e.g., the book "Biomedical Informatics" by Andreas Holzinger.
Asia and Oceania
In Asia and Australia-New Zealand, the regional group called the Asia Pacific Association for Medical Informatics (APAMI) was established in 1994 and now consists of more than 15 member regions in the Asia Pacific Region.
Australia
The Australasian College of Health Informatics (ACHI) is the professional association for health informatics in the Asia-Pacific region. It represents the interests of a broad range of clinical and non-clinical professionals working within the health informatics sphere through a commitment to quality, standards and ethical practice. ACHI is an academic institutional member of the International Medical Informatics Association (IMIA) and a full member of the Australian Council of Professions.
ACHI is a sponsor of the "e-Journal for Health Informatics", an indexed and peer-reviewed professional journal. ACHI has also supported the "Australian Health Informatics Education Council" (AHIEC) since its founding in 2009.
Although there are a number of health informatics organizations in Australia, the Health Informatics Society of Australia (HISA) is regarded as the major umbrella group and is a member of the International Medical Informatics Association (IMIA). Nursing informaticians were the driving force behind the formation of HISA, which is now a company limited by guarantee of the members. The membership comes from across the informatics spectrum, from students to corporate affiliates. HISA has a number of branches (Queensland, New South Wales, Victoria and Western Australia) as well as special interest groups such as nursing (NIA), pathology, aged and community care, industry and medical imaging (Conrick, 2006).
China
Over two decades, China made a successful transition from its planned economy to a socialist market economy. Alongside this change, China's health care system also underwent significant reform. In 2003, data released by the Ministry of Health of the People's Republic of China (MoH) indicated that national health care expenditure totalled RMB 662.33 billion, about 5.56% of gross domestic product. Before the 1980s, all health care costs were covered by the central government's annual budget. Since then, the mix of funding sources has changed gradually: most of the expenditure came from health insurance schemes and private spending, corresponding to 40% and 45% of total expenditure respectively, while the government's financial contribution fell to only 10%. By 2004, the MoH's statistical summary recorded up to 296,492 health care facilities and an average of 2.4 clinical beds per 1,000 people.
Along with the development of information technology since the 1990s, health care providers realized that computerized cases and data could generate significant benefits for their services, for instance by providing information to direct patient care and to assess the best care for specific clinical conditions. Substantial resources were therefore devoted to building China's own health informatics systems. Most of these resources went into constructing hospital information systems (HIS), aimed at minimizing unnecessary waste and repetition and thereby promoting the efficiency and quality control of health care. By 2004, China had successfully spread HIS through approximately 35–40% of hospitals nationwide, but adoption varied markedly: in eastern China over 80% of hospitals had an HIS, while in the northwest the figure was no more than 20%. Moreover, all of the Centers for Disease Control and Prevention (CDC) above the rural level, approximately 80% of health care organisations above the rural level and 27% of hospitals above the town level were able to transmit real-time epidemic reports through the public health information system and to analyse infectious diseases using dynamic statistics.
China has four tiers in its health care system. The first tier comprises street health and workplace clinics, which are cheaper than hospitals in terms of medical billing and act as prevention centers. The second tier comprises district and enterprise hospitals along with specialist clinics, which provide the second level of care. The third tier comprises provincial and municipal general hospitals and teaching hospitals, which provide the third level of care. In a tier of their own are the national hospitals, which are governed by the Ministry of Health. China has greatly improved its health informatics since it opened its doors to the outside world and joined the World Trade Organization (WTO). In 2001, it was reported that China had 324,380 medical institutions, the majority of them clinics; clinics act as prevention centers, and many Chinese people prefer traditional Chinese medicine to Western medicine, which usually works for minor cases. China has also been improving its higher education in health informatics. At the end of 2002, there were 77 medical universities and medical colleges. There were 48 university medical colleges offering bachelor's, master's, and doctoral degrees in medicine, and 21 higher medical specialty institutions offering diploma degrees, so in total there were 147 higher medical and educational institutions. Since joining the WTO, China has been working hard to improve its education system and bring it up to international standards.
SARS played a large role in China quickly improving its health care system. The 2003 SARS outbreak prompted China to accelerate the spread of hospital information systems, and more than 80% of hospitals came to have an HIS. China also compared itself with Korea's health care system to work out how to improve its own. One study surveyed six Chinese hospitals that had an HIS. It found that doctors made limited use of computers, and concluded that the systems were used more for administrative purposes than for clinical practice. The survey also asked whether the hospitals had created websites: only four had, and of those, three had a third-party company create the site while one was created by hospital staff. All respondents agreed or strongly agreed that providing health information on the Internet should be utilized.
Information collected at different times, by different participants or systems, frequently led to misunderstandings and to data that could not be compared or exchanged. To design a system that minimized such issues, health care providers recognized that standards were the basis for sharing information and interoperability, and that a system lacking standards would be a major impediment to improving the corresponding information systems. Given that standardization in health informatics depends on the authorities, standardization efforts required government involvement, and the associated funding and support were critical. In 2003, the Ministry of Health released the Development Lay-out of National Health Informatics (2003–2010), which identified its approach to standardization as 'combining adoption of international standards and development of national standards'.
In China, standardization was initially facilitated through the development of vocabulary, classification and coding standards, which help store and transmit information for better management at the national level. By 2006, 55 international and domestic standards for vocabulary, classification and coding were in use in hospital information systems. In 2003, the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-10) and the ICD-10 Clinical Modification (ICD-10-CM) were adopted as standards for diagnostic classification and acute care procedure classification. Simultaneously, the International Classification of Primary Care (ICPC) was translated and tested in China's local applied environment.
Another coding standard, Logical Observation Identifiers Names and Codes (LOINC), was applied as a general identifier scheme for clinical observations in hospitals. Personal identifier codes were widely employed in different information systems, covering name, sex, nationality, family relationship, educational level and occupation. However, these codes were inconsistent between systems, hindering sharing between regions. Considering this large number of vocabulary, classification and coding standards across jurisdictions, health care providers realized that using multiple systems wasted resources and that a single, non-conflicting national-level standard was beneficial and necessary. Therefore, in late 2003, the health informatics group in the Ministry of Health launched three projects to address the lack of national health information standards: the Chinese National Health Information Framework and Standardization, the Basic Data Set Standards of Hospital Information System, and the Basic Data Set Standards of Public Health Information System.
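The kind of inconsistency a national standard resolves can be illustrated with a small, entirely hypothetical crosswalk: two regional systems encode the same attribute differently, and a shared national table reconciles them (all codes below are invented for the example):

```python
# Hypothetical regional code tables for the "sex" attribute.
REGION_A_SEX = {"1": "male", "2": "female"}   # numeric local codes
REGION_B_SEX = {"M": "male", "F": "female"}   # letter local codes

# Invented national vocabulary that both regions map onto.
NATIONAL_SEX = {"male": "SEX-M", "female": "SEX-F"}

def to_national(local_code, local_table):
    """Translate a region-local code into the shared national code."""
    meaning = local_table[local_code]   # resolve the local code's meaning
    return NATIONAL_SEX[meaning]        # re-encode in the national vocabulary

print(to_national("1", REGION_A_SEX))  # record from region A
print(to_national("F", REGION_B_SEX))  # record from region B
```

Both regional records end up in one comparable encoding, which is the practical point of a non-conflicting national standard.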
The objectives of the Chinese National Health Information Framework and Standardization project were:
Establish national health information framework and identify in what areas standards and guidelines are required
Identify the classes, relationships and attributes of national health information framework. Produce a conceptual health data model to cover the scope of the health information framework
Create logical data models for specific domains, depicting the logical data entities, their attributes, and the relationships between the entities, according to the conceptual health data model
Establish a uniform representation standard for data elements according to the data entities and their attributes in the conceptual and logical data models
Circulate the completed health information framework and health data model to the partnership members for review and acceptance
Develop a process to maintain and refine the China model and to align with and influence international health data models
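The modelling steps above, moving from conceptual entities to logical entities with typed attributes and relationships, can be sketched as follows. The entity names, attributes and codes are invented for illustration and are not taken from the Chinese framework itself:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical fragment of a logical health data model: a Patient entity
# related one-to-many to Encounter entities, with coded attributes.
@dataclass
class Encounter:
    encounter_id: str
    facility_id: str
    diagnosis_codes: List[str] = field(default_factory=list)  # e.g. ICD-10 codes

@dataclass
class Patient:
    patient_id: str
    name: str
    sex_code: str                     # coded against a national value set
    encounters: List[Encounter] = field(default_factory=list)

# Populate one patient with one encounter carrying an ICD-10 diagnosis code.
p = Patient("P001", "Zhang Wei", "SEX-M")
p.encounters.append(Encounter("E001", "H042", ["I10"]))
print(len(p.encounters), p.encounters[0].diagnosis_codes[0])
```

The value of the uniform representation standard shows up in the attributes: every system that stores `sex_code` or `diagnosis_codes` against the same value sets produces records that can be pooled at the national level.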
Comparing China's EHR Standard and ASTM E1384
In 2011, researchers from local universities evaluated the performance of China's Electronic Health Record (EHR) Standard against the American Society for Testing and Materials Standard Practice for Content and Structure of Electronic Health Records used in the United States (ASTM E1384, withdrawn in 2017). The deficiencies found are listed below.
Lack of support for privacy and security. ISO/TS 18308 specifies that "The EHR must support the ethical and legal use of personal information, in accordance with established privacy principles and frameworks, which may be culturally or jurisdictionally specific" (ISO 18308: Health Informatics-Requirements for an Electronic Health Record Architecture, 2004). However, China's EHR Standard did not meet any of the fifteen requirements in the privacy and security subclass.
Limited support for different types of data and references. As ICD-9 is the only external international coding system referenced, other systems such as SNOMED CT for clinical terminology are unfamiliar to Chinese specialists, which could hamper international information sharing.
Lack of more generic and extensible lower-level data structures. China's large and complex EHR Standard was constructed to cover all medical domains, but the domain-specific and frequently changing nature of clinical data elements, value sets and templates means that such a one-size-fits-all standard is impractical.
In Hong Kong, a computerized patient record system called the Clinical Management System (CMS) has been developed by the Hospital Authority since 1994. This system has been deployed at all the sites of the authority (40 hospitals and 120 clinics). It is used for up to 2 million transactions daily by 30,000 clinical staff. The comprehensive records of 7 million patients are available on-line in the electronic patient record (ePR), with data integrated from all sites. Since 2004 radiology image viewing has been added to the ePR, with radiography images from any HA site being available as part of the ePR.
The Hong Kong Hospital Authority placed particular attention to the governance of clinical systems development, with input from hundreds of clinicians being incorporated through a structured process. The health informatics section in the Hospital Authority has a close relationship with the information technology department and clinicians to develop health care systems for the organization to support the service to all public hospitals and clinics in the region.
The Hong Kong Society of Medical Informatics (HKSMI) was established in 1987 to promote the use of information technology in health care. The eHealth Consortium has been formed to bring together clinicians from both the private and public sectors, medical informatics professionals and the IT industry to further promote IT in health care in Hong Kong.
India
eHCF School of Medical Informatics
eHealth-Care Foundation
Malaysia
Since 2010, the Ministry of Health (MoH) has been working on the Malaysian Health Data Warehouse (MyHDW) project. MyHDW aims to meet the diverse needs of timely health information provision and management, and acts as a platform for the standardization and integration of health data from a variety of sources (Health Informatics Centre, 2013). The Ministry of Health has embarked on introducing the electronic Hospital Information Systems (HIS) in several public hospitals including Putrajaya Hospital, Serdang Hospital and Selayang Hospital. Similarly, under Ministry of Higher Education, hospitals such as University of Malaya Medical Centre (UMMC) and University Kebangsaan Malaysia Medical Centre (UKMMC) are also using HIS for healthcare delivery.
A hospital information system (HIS) is a comprehensive, integrated information system designed to manage the administrative, financial and clinical aspects of a hospital. As an area of medical informatics, the aim of hospital information system is to achieve the best possible support of patient care and administration by electronic data processing. HIS plays a vital role in planning, initiating, organizing and controlling the operations of the subsystems of the hospital and thus provides a synergistic organization in the process.
New Zealand
Health informatics is taught at five New Zealand universities; the most mature and established programme has been offered for over a decade at Otago. Health Informatics New Zealand (HINZ) is the national organisation that advocates for health informatics. HINZ organises a conference every year and also publishes a journal, Healthcare Informatics Review Online.
Saudi Arabia
The Saudi Association for Health Information (SAHI) was established in 2006 to work under the direct supervision of King Saud bin Abdulaziz University for Health Sciences, to carry out public activities, develop theoretical and applied knowledge, and provide scientific and applied studies.
Post-Soviet countries
The Russian Federation
The Russian health care system is based on the principles of the Soviet health care system, which was oriented toward mass prophylaxis, prevention of infectious and epidemic diseases, and vaccination and immunization of the population on a socially protected basis. The current government health care system consists of several directions:
Preventive health care
Primary health care
Specialized medical care
Obstetrical and gynecologic medical care
Pediatric medical care
Surgery
Rehabilitation/ Health resort treatment
One of the main issues of the post-Soviet health care system was the absence of a unified system to optimize the work of medical institutions around a single database and structured appointment scheduling, which led to hours-long queues. The efficiency of medical workers also suffered because of paperwork and lost paper records.
As information systems developed, the IT and health care departments in Moscow agreed to design a system that would improve the public services of health care institutions. To tackle the issues in the existing system, the Moscow Government ordered the design of a system providing simplified electronic booking at public clinics and automating the work of front-line medical workers.
The system designed for these purposes was called EMIAS (United Medical Information and Analysis System). It is built around an electronic health record (EHR) and a set of related services: it manages patient flow, integrates the outpatient card, and supports consolidated managerial accounting and personalized lists of medical services. The system also holds information about the availability of medical institutions and individual doctors.
The implementation of the system started in 2013 with the organization of one computerized database for all patients in the city, including a front-end for users. EMIAS has been implemented in Moscow and the surrounding region, and the project is planned to extend to most parts of the country.
Law
Health informatics law deals with evolving and sometimes complex legal principles as they apply to information technology in health-related fields. It addresses the privacy, ethical and operational issues that invariably arise when electronic tools, information and media are used in health care delivery. Health Informatics Law also applies to all matters that involve information technology, health care and the interaction of information. It deals with the circumstances under which data and records are shared with other fields or areas that support and enhance patient care.
As many health care systems are making an effort to have patient records more readily available to them via the internet, it is important that providers implement security standards in order to ensure that the patients' information is safe. They have to be able to assure confidentiality, integrity, and security of the people, process, and technology. Since there is also the possibility of payments being made through this system, it is vital that this aspect of their private information will also be protected through cryptography.
The use of technology in health care settings has become popular and this trend is expected to continue. Various health care facilities have introduced different kinds of health information technology systems in the provision of patient care, such as electronic health records (EHRs) and computerized charting. The growing popularity of health information technology systems and the escalation in the amount of health information that can be exchanged and transferred electronically increased the risk of potential infringement of patients' privacy and confidentiality. This concern triggered the establishment of strict measures by both policymakers and individual facilities to ensure patient privacy and confidentiality.
One of the federal laws enacted to safeguard patients' health information (medical records, billing information, treatment plans, etc.) and to guarantee patients' privacy is the Health Insurance Portability and Accountability Act of 1996, or HIPAA. HIPAA gives patients autonomy and control over their own health records. Furthermore, according to the U.S. Department of Health & Human Services (n.d.), this law enables patients to:
view their own health records
request a copy of their own medical records
request correction to any incorrect health information
know who has access to their health record
request who can and cannot view/access their health information
Health and medical informatics journals
Computers and Biomedical Research, first published in 1967, was one of the first journals dedicated to health informatics. Other early journals included Computers and Medicine, published by the American Medical Association; Journal of Clinical Computing, published by Gallagher Printing; Journal of Medical Systems, published by Plenum Press; and MD Computing, published by Springer-Verlag. In 1984, Lippincott published the first nursing-specific journal, Computers in Nursing, which is now known as Computers, Informatics, Nursing (CIN).
As of September 7, 2016, there are roughly 235 informatics journals listed in the National Library of Medicine (NLM) catalog of journals. The Journal Citation Reports for 2018 gives the top three journals in medical informatics as the Journal of Medical Internet Research (impact factor of 4.945), JMIR mHealth and uHealth (4.301) and the Journal of the American Medical Informatics Association (4.292).
Education and certification
In the United States, clinical informatics is a subspecialty within several medical specialties. For example, in pathology, the American Board of Pathology offers pathology informatics certification for pathologists who have completed 24 months of related training, and the American Board of Preventive Medicine offers clinical informatics certification within preventive medicine.
See also
Related concepts
References
Further reading
External links |
50342838 | https://en.wikipedia.org/wiki/Brett%20Palos | Brett Palos | Brett Alexander Palos (born July 1974) is a British property developer and entrepreneur. He is the founder of Brett Palos Investments and the chairman of The Thackeray Estate. Palos owns Palos Developments, a luxury property development and design company in Miami Beach, Florida.
Palos has been involved in the purchase of more than £1 billion of commercial and residential assets since 2012 with Brett Palos Investments.
Early life
Brett Palos was born in 1974 to Robert Palos and Tina Green. His sister is Stasha Palos. His parents opened a clothing shop in Johannesburg, South Africa and expanded the business abroad. They divorced after 20 years together. Brett became the step-son of Philip Green when his mother Tina married the billionaire retailer in 1990.
Career
In 1997, Palos was part of his step-father Philip Green’s negotiating team that outmaneuvered Sears plc when the company was selling its subsidiaries.
In 2003, Palos bought the office supplies group ISA from the receivers to its then US parent, Daisytek International. In 2005, ISA made a pre-tax profit of £5.9 million on sales of £231 million. In 2006, Palos appointed investment bank Rothschild to sell ISA. In 2007, Palos sold ISA to Electra Private Equity for a net gain of £35 million.
In 2008, Palos, then 33 years old, was ranked 1,727 in the Sunday Times rich list with a wealth of £43 million.
In 2009, Palos, along with partners Anthony Lyons and Simon Conway, purchased the O2 Centre on Finchley Road in London for over £90 million.
In 2010, Palos added more than 500 apartments to his property portfolio after buying them in a £400 million deal with Lloyds Banking Group.
In 2012, Palos, along with partner Antony Alberti, acquired The Thackeray Estate, a London-based property investment company that specialises in repositioning commercial, mixed-use and residential development projects.
In 2013, Palos started developing luxury spec homes in Miami. A few years later, his company Palos Developments sold a waterfront spec home on North Bay Road in Miami Beach for 20 million dollars. It was one of the highest sale prices ever on the street. Palos described the homes his company develops as "contemporary Balinese-Deco style."
In 2014, Matterhorn Palos Partnership, a joint venture between Brett Palos Investments and Matterhorn Capital, sold three Spire Healthcare hospitals for £110 million to the largest U.S. healthcare real estate trust. In 2015, Matterhorn Palos Partnership sold Kings Mall shopping centre in Hammersmith, London, to Schroders UK Real Estate fund in a deal worth £153 million.
In 2019, Palos’ Thackeray Estate sold Eastcheap Estate, a mixed-use scheme in London, for £45.5 million to Hong Kong investor LKK Health Products Group, the owner of the nearby Walkie-Talkie building.
Personal life
Palos lives in London with his wife and three children. He owns a home in Miami Beach.
References
British businesspeople
Living people
1974 births
Real estate and property developers |
1444691 | https://en.wikipedia.org/wiki/Kontact | Kontact | Kontact is a personal information manager and groupware software suite developed by KDE. It supports calendars, contacts, notes, to-do lists, news, and email. It offers a number of interchangeable graphical UIs (KMail, KAddressBook, Akregator, etc.) all built on top of a common core.
Differences between "Kontact" and "KDE PIM"
Technically speaking, Kontact only refers to a small umbrella application that unifies different stand-alone applications under one user interface. KDE PIM refers to a work group within the larger KDE project that develops the individual applications in a coordinated way.
In popular terms, however, Kontact often refers to the whole set of KDE PIM applications. These days many popular Linux distributions such as Kubuntu hide the individual applications and only place Kontact prominently.
History
The initial groupware container application was written in an afternoon by Matthias Hölzer-Klüpfel and later imported into the KDE source repository and maintained by Daniel Molkentin. This container application is essential for Kontact to operate, but without embedded components it is not useful by itself.
The first embedded components were created by Cornelius Schumacher. He modified the KAddressBook and KOrganizer applications to create the initial address book and organizer components. At this stage no mail client component existed, so KDE still lacked a functional integrated groupware application. However, Cornelius' groundbreaking work acted as a prototype for other developers to base their efforts on.
Don Sanders created the missing mail client component by modifying the KMail application. He then integrated the mail client component with the other components, and the groupware container application, assembled and released the initial Kontact packages, and created the initial Kontact website.
Daniel Molkentin, Cornelius Schumacher and Don Sanders then formed the core Kontact team. The KMail and container application changes were imported into the KDE source repository, and Kontact was released as part of KDE 3.2.
During the construction of the Kontact application suite, the Kolab groupware server was being worked on by Erfrakon, Intevation.net and Klarälvdalens Datakonsult simultaneously and was completed at approximately the same time. This work was done as part of the Kroupware project that also involved modifying the KMail and KOrganizer applications to enhance them with additional groupware features.
The core Kontact team, the Kolab consortium, and several independent KDE PIM developers then worked together to enhance Kontact by integrating the Kroupware functionality and making Kolab the primary Kontact server.
Additionally, a news component was created from the KNode application by KDE developer Zack Rusin, and Kontact was modified to support an array of mainly web-based suites of collaboration software.
Components
Kontact embeds the following components:
Summary Page: A summary which shows unread emails, upcoming appointments, and the latest news and weather from the user's subscribed RSS feeds
Email
KMail supports folders, filtering, viewing HTML mail, and international character sets. It can handle IMAP, IMAP IDLE, dIMAP, POP3, and local mailboxes for incoming mail. It can send mail via SMTP or sendmail protocols. It can forward HTML mail as an attachment but it cannot forward mail inline.
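The "forward as attachment" behaviour mentioned above corresponds to the standard MIME mechanism of wrapping the original message in a message/rfc822 part. A minimal sketch using Python's standard email library; this illustrates the general mechanism, not KMail's own code, and all addresses are made up:

```python
from email.message import EmailMessage

# Build an original message.
original = EmailMessage()
original["From"] = "alice@example.org"
original["To"] = "bob@example.org"
original["Subject"] = "Hello"
original.set_content("Original body")

# Forward it by attaching the whole message; the content manager
# attaches a Message object with MIME type message/rfc822.
forward = EmailMessage()
forward["From"] = "bob@example.org"
forward["To"] = "carol@example.org"
forward["Subject"] = "Fwd: Hello"
forward.set_content("See attached message.")
forward.add_attachment(original)

# The forward is now multipart/mixed; the second part is the wrapped mail.
print(forward.get_payload()[1].get_content_type())
```

A client that forwards "inline" would instead paste the original body into the new message text rather than wrapping it in a message/rfc822 part.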
Spam and filtering
KMail uses two special filters to provide a modular access to spam-filtering programs:
Send this e-mail to a program allows any program to be specified, and when that KMail filter is activated, the program will be run and supplied with the contents of the e-mail as its standard input.
Pipe this e-mail through a program not only sends the e-mail to a specified program, but replaces the e-mail with the output of that program. This allows the use of systems such as SpamAssassin which can add their own headers to a piece of e-mail.
These modular filters can be combined with text filters to detect (for example) e-mail which has been flagged by SpamAssassin by looking for the special headers it added.
KMail allows manual filtering of spam directly on the mail server, a particularly useful feature for dial-up users. Emails that exceed a threshold size (the default is 50 KB, but it may be set to any value) are not automatically copied to the local computer. With the "get, decide later, delete" options, KMail lists them but does not download the whole message, which allows the deletion of spam and oversized messages without wasting time.
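The "pipe this e-mail through a program" mechanism described above can be sketched in a few lines: the message text is written to an external command's standard input, and the command's standard output replaces the message. The helper name and the stand-in command below are illustrative assumptions, not KMail internals; a real setup would point at a tool such as SpamAssassin.

```python
import subprocess
import sys

def pipe_filter(message: str, command: list) -> str:
    # Feed the message to the command's stdin and replace the
    # message with whatever the command writes to stdout.
    result = subprocess.run(command, input=message,
                            capture_output=True, text=True, check=True)
    return result.stdout

# Portable stand-in for a spam checker: a tiny command that prepends a
# header, the way such tools annotate messages they have examined.
annotate = [sys.executable, "-c",
            "import sys; sys.stdout.write('X-Spam-Checked: yes\\n' + sys.stdin.read())"]

filtered = pipe_filter("Subject: hi\n\nbody\n", annotate)
print(filtered.splitlines()[0])  # X-Spam-Checked: yes
```

A subsequent text filter can then match on the added header, as the paragraph below describes for SpamAssassin.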
Cryptographic support
KMail supports the OpenPGP standard and can automatically encrypt, decrypt, sign, and verify signatures of email messages and their attachments via either the inline or OpenPGP/MIME method of signing/encryption. KMail depends on the GnuPG software for this functionality. As a visual aid, KMail will colour verified email messages green for trusted signatures; yellow for untrusted signatures; red for invalid signatures; and blue for encrypted messages.
KMail also supports S/MIME messages as well as Chiasmus, a proprietary cryptographic system created by the German Federal Office for Information Security (BSI).
Address book
KAddressBook is an address book application.
Description
KAddressBook is a graphical interface to organizing the addresses and contact information of family, friends, business partners, etc. It integrates with KDE Plasma, allowing interoperability with other KDE programs, including the e-mail client KMail – allowing one-click access to composing an e-mail – and the instant messenger Kopete – showing the online status of and easy access to instant messaging contacts. It can be synchronized with other software or device using Kitchensync and OpenSync.
A contact may be classified into customizable categories, such as Family, Business, or Customer. Many of the fields can have multiple entries, for example, if the contact has several e-mail addresses. A contact's fields are separated into four tabs and one tab for custom fields.
Features
Exports and imports cards to and from vCard format.
Uses DBUS to interface with other applications.
Interoperable with KMail and Kopete, as well as Kontact.
Customizable fields and categories.
Automatic formatting of names.
Filter ability, to search for addresses.
Capability to query an LDAP database containing person information.
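The vCard import/export listed above uses a plain-text record format; a minimal vCard 3.0 record can be generated in a few lines. This is an illustrative sketch of the format itself, not KAddressBook code, and the contact details are made up:

```python
def make_vcard(name: str, email: str) -> str:
    # Build a minimal vCard 3.0 record; vCard lines are CRLF-terminated.
    first, last = name.split()[0], name.split()[-1]
    return "\r\n".join([
        "BEGIN:VCARD",
        "VERSION:3.0",
        f"FN:{name}",                      # formatted display name
        f"N:{last};{first};;;",            # structured name: family;given;...
        f"EMAIL;TYPE=INTERNET:{email}",
        "END:VCARD",
    ]) + "\r\n"

print(make_vcard("Ada Lovelace", "ada@example.org"))
```

Real address books add many more properties (TEL, ADR, CATEGORIES for the groupings described above), but every record keeps this BEGIN/VERSION/END envelope.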
Organizer
KOrganizer is the personal organizer. It can manage calendars, journals, and to-do lists.
News feed aggregator
Akregator is a feed aggregator. It supports both RSS and Atom. Feeds can be sorted into categories. Akregator will aggregate all feeds in a particular category into a single list of new entries so that, for example, all news in the category "Politics" can be shown in one list. It has an incremental search feature for the titles of all the entries in its database.
Akregator can be configured to fetch feeds within regular intervals. The user can also manually request to fetch all feeds, individual ones, or those in a selected category. It supports feed icons and embeds KHTML as an internal, tabbed web browser. Any external browser can also be called.
Akregator has been part of KDE since the 3.4 release, and it is distributed with the kdepim module.
Usenet news client
KNode is the news client program for the KDE desktop environment.
It supports multiple NNTP servers, message threads, scoring, X-Face headers (reading and posting), and international character sets.
Personal wiki
KJots is a simple outliner text editor which can be used to create a personal wiki. It uses a basic tree structure to organize information: it refers to nodes as ‘books’ and leaves as ‘pages’. It includes a book view, which shows a table of contents, and a view mode for all entries.
Similar wiki-style programs are Zim (based on GTK and Python), Wixi (based on Python and GTK), KeepNote (based on Python and GTK), Notecase (based on GTK), BasKet (based on Qt), Gnudiary (also based on Qt), Tomboy (GTK, based on Mono), Gnote (Tomboy port to C++) and Tiddlywiki (self-modifying, single-HTML contained personal wiki, written in JavaScript and expandable with plugins). Also Treeline, an advanced outliner written in Python and personal database available for Linux and Microsoft Windows, has similar functions.
Other components
Notes: KNotes – KDE Notes Management
Weather: KWeather
Storage back-end
During the KDE Software Compilation 4 life cycle, Kontact moved to Akonadi for storing its data; previously, every Kontact component implemented its own storage. Akonadi is currently mostly developed by the KDE PIM team, but its design is deliberately agnostic and does not depend on KDE technologies.
The first SC 4 release of Kontact was officially shipped with KDE 4.1. That release did not use Akonadi. Since then the Kontact components have been gradually migrating towards Akonadi. The first stable version of KDE PIM using Akonadi was released together with KDE 4.6.4 in June 2011.
See also
List of applications with iCalendar support
List of personal information managers
David Vignoni, the designer of older icons
References
External links
Email clients that use Qt
Free email software
Free note-taking software
Free personal information managers
Instant messaging clients that use Qt
KDE Applications
Kdepim |
36924957 | https://en.wikipedia.org/wiki/List%20of%20file%20copying%20software | List of file copying software | This article provides a list of inbuilt and third party file copying and moving software - utilities and other software used, as part of computer file management, to explicitly move and copy files and other data on demand from one location to another on a storage device.
File copying is a fundamental operation for data storage. Most popular operating systems such as Windows, macOS and Linux as well as smartphone operating systems such as Android contain built-in file copying functions as well as command line (CLI) and graphical (GUI) interfaces to filing system copy and move functions. In some cases these can be replaced or supplemented by third-party software for different, extended, or improved functionality. This article lists inbuilt as well as external software designed for this purpose.
Related software
For software designed to copy, clone, image or author entire storage devices such as CDs, DVDs, Blu-ray disks, hard drives and storage device partitions, back up data, copiers that work on storage devices as a logical unit, and more general file managers and other utilities related to file copying software, please see:
Functionality and demands met by file copy software
Examples of comparable operating functionality seen across file copying programs:
Criteria for original files and target location: typically a source location (and criteria for selecting files within it) and a destination location
Existing target files: action to take in relation to existing files in target location (if a file already exists, does not exist, or other files exist)
A subtlety in handling existing files is whether such files are overwritten on attempting to copy, or they are renamed (or the target temporarily named) and only removed once the replacement file has been verified.
Verification: actions taken to ensure integrity of resulting compared to original files
Queuing: how multiple operations, or operations on large files (or large numbers of files) should be scheduled and prioritized, and any queue management
Operator confirmations and warnings: whether and when to request confirmation of an action
File properties: whether to copy file attributes, timestamps, and permissions
Filing system idiosyncrasies: for example, Windows filing systems may also track "8.3" short filenames or may be unable to correctly handle long file names
Program flow and algorithms: multi-threading, buffering, data speed/priority, interruption/restart handling, atomicity/integrity assurance, and other algorithms that affect efficiency of operation.
A notable function here relates to options determining whether the underlying file system will be requested to perform a move operation, a copy operation, or create or delete a new junction point (hard link), if this will meet the needs of the requested action. Moving, linking and delinking can be much faster and lower risk than copying, but are not always desirable or available. In particular they may not be available when the source and destination are on different logical devices or on devices that do not easily allow moving of existing data. In some cases a "copy and delete" operation may be unavoidable in performing a file move.
Variants on pure copy and move: whether to create just the folders (directories) in the source, or create files as "null" (empty), rather than copy all data
Status, error, and status reporting: error handling, and any logs or reports produced of the operation(s), while running or upon completion
Compatible devices and filing systems: usable/unusable types of storage device and filing systems
System administration and networking capabilities: for example, copying across networks and other network management aspects, remote use, authentication.
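The move-versus-copy decision described above can be sketched in Python: attempt a cheap rename first, and fall back to copy-and-delete only when the operating system reports that source and destination live on different filesystems (the EXDEV error). The function name is an illustrative assumption, not taken from any particular utility:

```python
import errno
import os
import shutil
import tempfile

def move_file(src: str, dst: str) -> str:
    # Try the cheap path: a rename is a metadata-only operation and
    # is near-instant, but only works within one filesystem.
    try:
        os.rename(src, dst)
        return "rename"
    except OSError as exc:
        if exc.errno != errno.EXDEV:
            raise
        # Cross-device move: copy the data (preserving timestamps),
        # and delete the source only after the copy has succeeded.
        shutil.copy2(src, dst)
        os.remove(src)
        return "copy+delete"

# Same-directory move: the cheap rename path is taken.
d = tempfile.mkdtemp()
src = os.path.join(d, "a.txt")
with open(src, "w") as f:
    f.write("data")
print(move_file(src, os.path.join(d, "b.txt")))  # rename
```

Deleting the source only after the copy completes is also the safer ordering noted earlier for handling existing target files.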
Generic differentiators and functions as software:
Flexibility and configurability: options, skins, extensibility/plugins
Operating system choice: cross platform?
Operating system integration: whether the software is inbuilt, separate, or is separate but can replace inbuilt functions
Interfaces: command line, GUI, API, script
Review
Gizmo's Freeware published a basic comparison review of a range of well-known third party file copying software on Windows. FastCopy was given top place, being highest speed and also light on system resources (the author states it uses its own cache to avoid slowing other software, and the Win32 API and C runtime rather than MFC). Ultracopier was recognised as having a well-developed GUI interface. Unstoppable Copier was well regarded as a niche copier designed for best results with damaged media and files, but at a cost of speed. TeraCopy was also mentioned below these as also worth considering. More recently, Raymond CC's blog reviewed a similar range of software on Windows versions XP, 7, and 8, and also ranked FastCopy as the overall speed winner. Both reviews are over four years old.
List
Operating system commands:
Peripheral Interchange Program
cp
mv
copy (command)
xcopy – Windows copy utility included until Windows Vista and now deprecated in favour of Robocopy
Robocopy – Windows xcopy replacement with more options, introduced as a standard feature in Windows Vista and Windows Server 2008
Notable third-party file transfer software include:
FastCopy
RichCopy
Rclone – open source, used with cloud storage
rsync – open source GPL copy utility for Windows and UNIX-like operating systems
TeraCopy
Ultracopier, which is the Supercopier evolution
See also
List of backup software
List of Unix commands
File managers
List of data erasing software
Versioning file system
References
File copying software
Unix software
Windows administration
Data synchronization |
39912560 | https://en.wikipedia.org/wiki/Raspberry%20Pi%20OS | Raspberry Pi OS | Raspberry Pi OS (formerly Raspbian) is a Debian-based operating system for Raspberry Pi. Since 2013, it has been officially provided by the Raspberry Pi Foundation as the primary operating system for the Raspberry Pi family of compact single-board computers.
Raspberry Pi OS was first developed by Mike Thompson and Peter Green as Raspbian, an independent and unofficial port of Debian to the Raspberry Pi. The first build was released on July 15, 2012. As the Raspberry Pi had no officially provided operating system at the time, the Raspberry Pi Foundation decided to build on the work done by the Raspbian project and began producing and releasing their own version of the software. The Foundation's first release of Raspbian, which now referred both to the community project as well as the official operating system, was announced on September 10th, 2013.
On May 28th, 2020, the Raspberry Pi Foundation announced they were releasing a beta 64-bit version of their official operating system. However, the 64-bit version was not based on Raspbian, instead taking its userland from Debian directly. Since the Foundation did not want to use the name Raspbian to refer to software that was not based on the Raspbian project, the name of the officially provided operating system was changed to Raspberry Pi OS. This change was carried over to the 32-bit version as well, though it continued to be based on Raspbian. The 64-bit version of Raspberry Pi OS was officially released on February 2nd, 2022.
Raspberry Pi OS is highly optimized for the Raspberry Pi line of compact single-board computers with ARM CPUs. It runs on every Raspberry Pi except the Pico microcontroller. Raspberry Pi OS uses a modified LXDE as its desktop environment with the Openbox stacking window manager, along with a unique theme. The default distribution is shipped with a copy of the computer algebra program Wolfram Mathematica, VLC, and a lightweight version of the Chromium web browser.
Features
User interface
Raspberry Pi OS's desktop environment, PIXEL, looks similar to many common desktops, such as macOS and Microsoft Windows, and is based on LXDE. The menu bar is positioned at the top and contains an application menu and shortcuts to Terminal, Chromium, and File Manager. On the right is a Bluetooth menu, a Wi-Fi menu, volume control, and a digital clock.
Package management
Packages can be installed via APT, the Recommended Software app, and by using the Add/Remove Software tool, a GUI wrapper for APT.
Components
PCManFM is a file browser allowing quick access to all areas of the computer, and was redesigned in the first Raspberry Pi OS Buster release (2019-06-20).
Raspberry Pi OS originally used Epiphany as the web browser, but switched to Chromium with the launch of its redesigned desktop.
Raspberry Pi OS comes with many beginner IDEs, such as Thonny Python IDE, Mu Editor, and Greenfoot. It also ships with educational software like Scratch and Bookshelf.
Reception
Jesse Smith from DistroWatch reviewed Raspberry Pi OS (then Raspbian) in 2015:
Based on download statistics from the Raspberry Pi Imager, Raspberry Pi OS is by far the most used operating system on the Raspberry Pi, accounting for 68.44% of all OS downloads in the past month, as of 24 February 2022.
Microsoft repository controversy
In late January 2021, the Raspberry Pi OS package added a trusted GPG key and entry to APT. This addition made it easier for users running Raspberry Pi OS to install Visual Studio Code, a source code editor developed by Microsoft. However, this change also meant that every time the system checked for updates, it would query Microsoft's package servers. Given Microsoft's once adversarial history with Linux, this form of telemetry upset some users. The GPG key and APT entry would later be removed.
Release history
Versions
Raspberry Pi OS has three installation versions:
Raspberry Pi OS Lite (32-bit & 64-bit)
Raspberry Pi OS with desktop (32-bit & 64-bit)
Raspberry Pi OS with desktop and recommended software (32-bit)
Raspberry Pi OS also has two legacy versions:
Raspberry Pi OS Lite (Legacy) (32-bit)
Raspberry Pi OS (Legacy) with desktop (32-bit)
Raspberry Pi OS Lite is the smallest version and doesn't include a desktop environment. Raspberry Pi OS with desktop includes the Pixel desktop environment. Raspberry Pi OS with desktop and recommended software comes pre-installed with additional productivity software, such as LibreOffice.
On December 2nd, 2021, the Raspberry Pi Foundation released Raspberry Pi OS (Legacy), a branch of the operating system that continued to receive security and hardware compatibility updates but was based on the older Buster version of Debian.
All versions are distributed as .img disk image files. These files can then be flashed on to microSD cards where Raspberry Pi OS runs. In March 2020, the Raspberry Pi Foundation also published the Raspberry Pi Imager, a custom disk flasher that allows for the installation of Raspberry Pi OS as well as other operating systems designed for the Raspberry Pi, including RetroPie, Kodi OS, and others.
The Raspberry Pi documentation recommends at least a 4 GB microSD card for Raspberry Pi OS Lite, and at least an 8 GB microSD card for all other versions.
See also
Debian
References
External links
Raspberry Pi OS on DistroWatch
ARM Linux distributions
ARM operating systems
Debian-based distributions
Operating systems based on the Linux kernel
Free software culture and documents
Raspberry Pi
Linux distributions |
9276530 | https://en.wikipedia.org/wiki/2006%20RJ103 | 2006 RJ103 | is a Neptune trojan, first observed by the Sloan Digital Sky Survey Collaboration at Apache Point Observatory, New Mexico, on 12 September 2006. It was the fifth and largest such body discovered, approximately 180 kilometers in diameter, and is 30.3 AU from Neptune.
Orbit and classification
Neptune trojans are resonant trans-Neptunian objects in a 1:1 mean-motion orbital resonance with Neptune. These trojans have a semi-major axis and an orbital period very similar to Neptune's (30.10 AU; 164.8 years).
belongs to the leading group, which follows 60° ahead of Neptune along its orbit. It orbits the Sun with a semi-major axis of 29.925 AU at a distance of 29.0–30.9 AU once every 163 years and 8 months (59,793 days). Its orbit has an eccentricity of 0.03 and an inclination of 8° with respect to the ecliptic.
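The quoted period follows directly from Kepler's third law for a heliocentric orbit, P² = a³ with P in years and a in astronomical units. A quick arithmetic check (illustrative only):

```python
# Kepler's third law for a Sun-centred orbit: P[years]^2 = a[AU]^3.
a_au = 29.925                  # semi-major axis quoted above
period_years = a_au ** 1.5     # equivalent to sqrt(a^3)
print(round(period_years, 1))  # 163.7, matching "163 years and 8 months"
```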
Physical characteristics
The discoverers estimate that has a mean-diameter of 180 kilometers based on a magnitude of 22.0. Based on a generic magnitude-to-diameter conversion, it measures approximately 130 kilometers in diameter using an absolute magnitude of 7.5 with an assumed albedo of 0.10.
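The "generic magnitude-to-diameter conversion" mentioned above is commonly written D = 1329 km / sqrt(p_V) × 10^(−H/5), where H is the absolute magnitude and p_V the geometric albedo. Plugging in the quoted values reproduces the ~130 km figure; this is a sketch of the standard formula, not necessarily the discoverers' exact method:

```python
import math

def diameter_km(h_mag: float, albedo: float) -> float:
    # Standard asteroid conversion: D = 1329 / sqrt(p_V) * 10^(-H/5), in km.
    return 1329.0 / math.sqrt(albedo) * 10.0 ** (-h_mag / 5.0)

# Absolute magnitude 7.5 and assumed albedo 0.10, as quoted above.
print(round(diameter_km(7.5, 0.10)))  # 133, i.e. roughly the 130 km cited
```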
Numbering and naming
Due to its orbital uncertainty, this minor planet has not been numbered and its official discoverers have not been determined. If named, it will follow the naming scheme already established with 385571 Otrera, which is to name these objects after figures related to the Amazons, an all-female warrior tribe that fought in the Trojan War on the side of the Trojans against the Greeks.
References
External links
AstDys-2 about
Neptune trojans
Minor planet object articles (unnumbered)
20060912 |
48648089 | https://en.wikipedia.org/wiki/Game%20development%20kit | Game development kit | Game development kits (GDK) are specialized hardware used to create commercial video games for game consoles. They may be partnered with game development tools, special game engine licenses, and other middleware to aid video game development. GDKs are typically not available to the public, and require game developers to enter an agreement, partnership, or program with the hardware manufacturer to gain access to the hardware. As console generations pass, development kits often get sold through websites like eBay without repercussions. This is often because the console manufacturers discontinue certain development programs as time passes.
Overview
In the 1980s, computing did not involve 3D modelling or any complex programming due to the limitations of hardware. This, combined with the hobbyist nature of early computer game programming, meant that not many individuals or smaller companies would develop for consoles. Even when consoles became mainstream (such as the Nintendo Entertainment System), there was no official or publicly available GDK since most console manufacturers would develop their games in-house. For example, Nintendo had internal development teams for both hardware and software.
By the fifth generation of consoles, game development kits were developed to encourage more developers to make console games and grow the videogame industry. Game development kits began as a simple way for developers to connect their computers to console hardware, allowing them to compile software on their PC and see it play directly on a console. Once most GDKs started becoming bundled with hardware-specific software, hobbyists or anyone not directly affiliated with a console manufacturer would have to write their games without the specialized software to access unique features such as the Xbox One's Kinect or the Wii U GamePad.
Modern game development kits often come bundled with the specialized software, and are much more formalized compared to previous-generation GDKs. In older generations of console gaming, developers had to make their own hardware and write games at various levels of programming (such as assembly). Today, programs such as Unity 3D provide a complete software environment and console manufacturers such as Nintendo provide polished & powerful development hardware through their developer programs. Other console manufacturers even allow the retail consoles to be used as development kits, provided that the development software is being used by the developer.
Third generation
Nintendo Entertainment System
For a significant portion of the NES lifespan, there was no official development kit. Video game developers creating games for the NES would have to make their own development kits, such as Rocket Science Production with their "NES Mission Control" development system. At least two programs were used in conjunction with the NES Mission Control hardware; NESTEST.EXE which would be used to test and debug the development hardware, and HST.EXE which would be used for communication between a computer and the NES development hardware.
Fourth generation
Super Nintendo Entertainment System
The Super Nintendo Entertainment System used specialized EPROM cartridges for development, as well as various software. Similar to the NES, developers often made their own development software or relied on middleware made by other developers.
Fifth generation
PlayStation
There are several variations of the PlayStation development kit used for game creation. One variation of the development kit had only three components, while the PlayStation Ultimate Development Kit included up to 26 components, including the complete Net Yaroze development kit.
The Net Yaroze version of the development kit was unique in that it had some features removed and added compared to the official (complete) PlayStation development kit. The Net Yaroze hardware was designed for hobbyists, while official developers would have access to the official PlayStation development kits. There was also a blue version of the PlayStation made for developers that would read burned discs to allow quick testing of imaged builds of their videogames. While there were official PlayStation-branded CD-Rs that could be used with the blue PlayStation, regular CD-Rs were also compatible with the system.
Nintendo 64/64DD
The Nintendo 64 development kit consisted of multiple components, both for the N64 and its add-on, the N64DD. The main hardware used in N64 game development was the Partner-N64 Development Kit, which used tall cartridges for game development and testing rather than the short cartridges that were sold with retail games. Another hardware component in N64 development was the NU64 Flash Gang Writer, which allowed developers to copy data from one cartridge to multiple cartridges simultaneously. This device was primarily used to create press and test copies of games, and also relied on tall cartridges instead of short retail cartridges.
Other versions of the Nintendo 64 GDK are the SN Systems development suite, as well as the SN Maestro 64 Music development system. The development suite allowed developers to run code from a computer directly to the console, and included a software package. The Maestro 64 Music system allowed developers to load music software on to the console, and play music through the Nintendo 64's hardware. Another unofficial alternative for developing N64 games was the Doctor V64, made by Bung Enterprises.
Sixth generation
Dreamcast
Sega Dreamcast units were unique in that they used GD-ROM discs: "giga discs" that held 1 GB of data, slightly more than a typical CD but less than a DVD. While GD-ROM burners were used by some developers, the Dreamcast was also compatible with CDs, and since most games at the time did not exceed 1 GB of data, GD-ROMs remained uncommon as developers opted for the more easily accessible CD media. The dev kit console was white like the retail Dreamcast, but was shaped like a typical desktop PC from the 1990s, though shorter in height. Its boot-up screen also differed from the retail console's, using 3D graphics instead of 2D graphics.
PlayStation 2
The dev kit console for the PS2 looked like a retail PS2, but substantially thicker.
GameCube
The dev kit console of the Nintendo GameCube was white and shaped like a tower desktop PC, with the controller ports being where the optical disk drive would normally go.
Xbox
When developers were creating software for the original Xbox, a prototype of the controller was used in the early development kits. This controller was slimmer, had elongated sides, and used a USB cable instead of an Xbox port-compatible cable. The dev kit console was shaped like a tower desktop PC, was grey colored and had a green circle in the middle of the front of the console with an X inside the circle.
Seventh generation
Xbox 360
Microsoft manages the Xbox 360 Tools and Middleware Program, which licenses development kits (hardware and software) to professional software developers working on tools and technologies for games. Access to this program requires good industry references, prior experience in games tools and middleware development, and signing a non-disclosure agreement.
PlayStation 3
The PlayStation developer program allows registered developers to publish their games across the PlayStation Network, making their games accessible on the PlayStation 3, PlayStation 4, PlayStation Vita, and PlayStation TV all through one program.
Wii
The Wii development kit was a bundle of the "NDEV" hardware – a large black unit of debugging and testing hardware that looked nothing like the slim white Wii consoles sold to consumers – and a disc containing the developer software tools.
Eighth generation
Xbox One
Microsoft maintains multiple developer programs for people wanting to develop games for their platforms; ID@Xbox for Xbox One game development, and the Windows Dev Center for Windows 8, Windows 8.1, Windows 10, and Xbox One game and application development.
The ID@Xbox program allows qualified game developers to self-publish their games to the Xbox One, as well as access free middleware and use two development hardware kits for free.
The Windows Dev Center allows developers to create apps and games on Windows 8, Windows 8.1, and Windows 10 platforms as part of the Universal Windows Platform system.
PlayStation 4 and PlayStation Vita
The PlayStation developer program allows registered developers to publish their games across the PlayStation Network, making their games accessible on the PlayStation 3, PlayStation 4, PlayStation Vita, and PlayStation TV all through one program. The PlayStation 4 development kits were known as "Orbis", though this was just a codename. Academic institutions can register to receive PS4 development kits for educational use, and are not region-restricted unlike regular PlayStation Developer Program members.
Wii U
Nintendo maintains a unified developer program for both its Wii U and Nintendo 3DS families of platforms. This developer program provides software and middleware to developers, and allows developers to self-publish their games to the Nintendo eShop. Games and applications published through this program are considered "third-party" and do not belong to Nintendo, allowing independent developers to publish their games on multiple different platforms.
The Wii U development hardware consists of a system called "CAT-DEV", with its accompanying peripherals such as the Display Remote Controller (presumably the Wii U GamePad) and sensor bar.
Nintendo 3DS Family
Nintendo's developer program allows developers to use Nintendo 3DS development kits, and allows developers to self-publish their games to the Nintendo eShop. As mentioned in the Nintendo Wii U section above, games and applications published through this program are considered "third-party" and do not belong to Nintendo, allowing independent developers to publish their games on multiple different platforms.
Notably, some 3DS development kits cannot play retail games.
References
External links
http://www.warioworld.com/ - (General Nintendo Developer website)
https://web.archive.org/web/20130816205156/https://wiiu-developers.nintendo.com/ - (Nintendo Wii U Developer website)
https://developer.nintendo.com/home - (Nintendo Developer Portal)
http://www.xbox.com/en-US/developers - (Microsoft Xbox One Developer website)
https://dev.windows.com/en-us/programs - (Microsoft Universal Windows Platforms Developer website)
https://www.playstation.com/en-us/develop/ - (PlayStation Developer Program website)
Video game development
Video game hardware
Computer hardware
Gaming |
23092392 | https://en.wikipedia.org/wiki/DrChrono | DrChrono | DrChrono is an American digital health technology company providing a software and billing service platform consisting of Web- and cloud-based apps for doctors and patients. It makes electronic health records (EHR), practice management software and medical billing software available digitally and provides medical revenue cycle management (RCM) services. The company is based in Sunnyvale, California
History
DrChrono was founded in New York City in 2009 by Daniel Kivatinos and Michael Nusimow. The company spent time in the Rose Tech Ventures incubator in New York City before moving to Silicon Valley to join Y Combinator. Nusimow, a computer engineer, created the program with the intention of streamlining patient-doctor visits. In February 2011, DrChrono launched as an EHR app for the iPad, allowing doctors to complete tasks and access information without needing to use paper records. The information gathered and accessed through the app is also available from a Web browser, iPhone or Android device, on Google Glass and the Apple Watch.
In June 2011, the company released the first tablet EHR system to be certified for meaningful use by Infogard Laboratories. Doctors who used their EHR app to store and track patient data received up to $44,000 in incentives from federal subsidies. In August 2012, DrChrono released OnPatient, an iOS and Android app to replace and expedite the traditional handwritten patient check-in process. It integrates with DrChrono's medical records interface.
In 2014, DrChrono was ranked #249 on Inc. magazine's Inc. 5000, and it was ranked #357 in 2015, with a three-year growth of 1,311%. As of December 2018, the platform has booked over 41.6 million patient appointment visits, with 13.2 million patients under the care of DrChrono providers. Over $3 billion in medical claims are processed annually through the platform.
In January 2020, the company raised a $20 million growth capital round from ORIX Growth Capital, a subsidiary of ORIX USA, to stimulate expansion and increase market share by investing further in the technology platform (EHR, medical billing and API) and expanding engineering, sales, and support functions.
Products and software
DrChrono
DrChrono's EHR platform is built on open source technologies including Linux, Python, MySQL, and Django, atop Apple's iOS platform. In 2013, it opened up its application programming interface (API) so that developers could build apps intended for the physicians and patients in its system. DrChrono vets the best apps and features them on its website.
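As a rough illustration of how a third-party developer might call such an open API, the sketch below builds (but does not send) an OAuth2 bearer-token request. The base URL, endpoint path, and header shape are assumptions for illustration only, not DrChrono's documented contract; consult the vendor's API reference for the real endpoints.

```python
import urllib.request

API_BASE = "https://app.drchrono.com/api"  # illustrative assumption, not verified

def build_patients_request(token: str) -> urllib.request.Request:
    """Build (but do not send) a bearer-token request for patient records.

    The '/patients' path is a hypothetical example endpoint; the
    Authorization header shape follows the common OAuth2 bearer scheme.
    """
    return urllib.request.Request(
        f"{API_BASE}/patients",
        headers={"Authorization": f"Bearer {token}"},
    )

req = build_patients_request("EXAMPLE_TOKEN")
print(req.full_url)                     # https://app.drchrono.com/api/patients
print(req.get_header("Authorization"))  # Bearer EXAMPLE_TOKEN
```

Sending the request (with `urllib.request.urlopen`) would require a real OAuth2 access token issued to a registered application.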
The company makes electronic health records of patients available digitally, as an iOS app on the iPhone, iPad and Apple Watch, and also as an app available on most Android phones. Doctors can customize the interface, schedule appointments, take notes and photos, write prescriptions and send them to pharmacies, look at lab results and update a patient's records. The service offers paid monthly subscriptions for premium services, such as dictation, medical billing software and storage for medical records. Patients can use a version of the app to keep track of their own results and appointments. Its Revenue Cycle Management service helps physicians and medical practices manage billing, collections, accounts receivable, insurance processing and other paperwork. It offers direct integration with Acronis for exchanging large files, as well as a revenue cycle management option called Apollo-plus.
In June 2014, DrChrono created the first health record system app for Google Glass, creating a wearable health record allowing doctors to record patient visits, with the patient's permission, releasing the app in a beta phase. Videos, photos and notes are stored in the patient's electronic medical record or in Box, a cloud-based storage and collaboration service. In April 2015, DrChrono migrated its app to the Apple Watch, which had just been announced. Health records on the iPad, iPhone and Apple Watch sync up together and link to Apple's HealthKit data repository and contribute to the patient's ongoing electronic health record. In September 2016, the company announced its EHR iOS app was the first to be certified for Electronic Prescriptions for Controlled Substances (EPCS), allowing doctors to prescribe electronic prescriptions for controlled substances to pharmacies.
In November 2017, DrChrono became the first EHR to use Apple's facial recognition feature for login, allowing the provider to log in quicker. In April 2018, the company introduced leasing plans to provide medical practices with new Apple hardware. In February 2019, DrChrono's EHR attained ONC certification for Meaningful Use Stage 3, the first to obtain that certification for a mobile-based product, enabling it to share data with other MU3-certified EHRs.
OnPatient
OnPatient is a separate application for patients, offering a range of tools for managing health day-to-day. It offers reminders of upcoming appointments, bill-paying capabilities, and allows for text messaging with health professionals. Following the discontinuation of Google Health in 2011, the data from that program can be imported into OnPatient. The application puts an emphasis on primary data recorded by patients themselves through smartphones and apps. The data is used to find doctors, schedule appointments and send secure messages between patients and professionals. The app is free for patients; full access to OnPatient is part of the paid subscription plan for doctors. It is available on Android or iOS devices.
Partnerships
Hundreds of developers have integrated apps on the platform, including CoverMyMeds, FIGmd, Ambra Health, Square, DemandForce, and NextPatient. In April 2016, DrChrono announced four new medical application programming interface partners: Health Gorilla, Inuvio, Medisafe and Wink Health. The partnerships bring the DrChrono platform access to US labs for labs and imaging ordering; streamlined patient data collection; increased access for Medisafe physicians; and integration of sleep studies from a patient's home. In late 2017, the company partnered with FlexScanMD, to help medical practices track inventory. In 2019, DrChrono announced numerous partnerships, with companies including CoverMyMeds to help expedite insurance authorization processing; Beam Health to allow doctors to conduct smartphone video consultations; 3D4Medical, to give medical practices access to 3D interactive modeling and animation videos from within their EHR; Jamf, to help healthcare practices manage their Apple devices and apps; Genomind, to help integrate genetic tests into the app; Kapitus, to help healthcare practices secure additional funding; DeepScribe, to use artificial intelligence to integrate medical notes directly into their EHR; HeathFeed for educational content for providers; Updox to consolidate various administrative tasks; and OutcomeMD, to make it easier to track and analyze patient data.
References
External links
Companies based in Mountain View, California
Health information technology companies
Electronic health records
American companies established in 2009
Health care companies established in 2009
Software companies established in 2009 |
47407843 | https://en.wikipedia.org/wiki/Indian%20Institute%20of%20Information%20Technology%2C%20Design%20and%20Manufacturing%2C%20Kurnool | Indian Institute of Information Technology, Design and Manufacturing, Kurnool | The Indian Institute of Information Technology Design and Manufacturing, Kurnool (IIITDM Kurnool) is a technical education institute in the field of Information Technology established by MHRD, Government of India in 2015 established by Central Government headed by Prime Minister Narendra Modi.. The institute started functioning at its permanent campus of in Kurnool.
History
IIITDM Kurnool was established in 2015 by the Ministry of Human Resource Development (MHRD) as part of the government's obligations under the Andhra Pradesh Reorganisation Act, 2014. The Indian Institutes of Information Technology (Amendment) Act, 2017 granted Institutes of National Importance (INI) status to the institute in August 2017. The mission of IIITDM Kurnool is "To become a center of excellence pioneering in education, research & development, and leaders in design & manufacturing", the vision is "To become a leading institute of higher learning in Information Technology enabled design & manufacturing to create technologies and technologists befitting the industries globally", and the charter is "To carry out advanced research and development activities in design and manufacturing technologies, both on its own and on sponsorship basis for the industry".
Admission
Admission to the undergraduate program is based on the All India Rank obtained in the JEE Main examination conducted by the National Testing Agency (NTA).
The institute also offers M.Tech and PhD programs, with M.Tech admission based on GATE rank.
Courses:
B.Tech
1. Computer Science & Engineering (CSE)
2. Electronics & Communication Engineering with specialization in Design & Manufacturing (EDM)
3. Mechanical Engineering with specialization in Design & Manufacturing (MDM)
4. Artificial Intelligence & Data Science
M.Tech
1. Computer Science and Engineering with specialization in Data Analytics and Decision Sciences (CSE)
2. Electronic System Design (ECE)
3. Smart Manufacturing (MECH)
PhD
1. CSE
2. ECE
3. MECH
4. Physics
5. Mathematics
Facilities
1. Boys' hostel
2. Girls' hostel
3. Cafe.in
4. Hostel mess
5. TV room
6. Indoor games
7. Sports
8. Library
References
Kurnool
Universities and colleges in Kurnool district
Kurnool
2015 establishments in Andhra Pradesh
Educational institutions established in 2015 |
34838983 | https://en.wikipedia.org/wiki/BleachBit | BleachBit | BleachBit is a free and open-source disk space cleaner, privacy manager, and computer system optimizer. The BleachBit source code is licensed under the GNU General Public License version 3.
History
BleachBit was first publicly released on 24 December 2008 for Linux systems. The 0.2.1 release created some controversy by suggesting Linux needed a registry cleaner.
Version 0.4.0 introduced CleanerML, a standards-based markup language for writing new cleaners. On May 29, 2009, BleachBit version 0.5.0 added support for Windows XP, Windows Vista, and Windows 7. On September 16, 2009, version 0.6.4 introduced command-line interface support.
BleachBit is available for download through its Web site and the repositories of many Linux distributions.
Features
Identifying and removing Web cache, HTTP cookies, URL history, temporary files, log files and Flash cookies for Firefox, Opera, Safari, APT, Google Chrome
Removing unused localizations (also called locale files) which are translations of software
Shredding files and wiping unallocated disk space to minimize data remanence
Wiping unallocated disk space to improve data compression ratio for disk image backups
Vacuuming Firefox's SQLite database, which suffers from fragmentation
Command line interface for scripting automation and headless operation
Technology
BleachBit is written in the Python programming language and uses PyGTK.
Most of BleachBit's cleaners are written in CleanerML, an open standard XML-based markup language for writing cleaners. CleanerML deals not only with deleting files, but also executes more specialized actions, such as vacuuming an SQLite database (used, for example, to clean Yum).
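The SQLite vacuuming that such cleaner actions perform can be sketched in Python (BleachBit's own implementation language). This toy example builds a throwaway database, deletes its rows, then runs `VACUUM` to rebuild the file and reclaim the free pages; it is a simplified illustration, not BleachBit's actual cleaner code.

```python
import os
import sqlite3
import tempfile

def vacuum_sqlite(path):
    """Run VACUUM on an SQLite file and return (size_before, size_after)."""
    before = os.path.getsize(path)
    conn = sqlite3.connect(path)
    conn.execute("VACUUM")  # rebuilds the file, releasing free pages
    conn.close()
    return before, os.path.getsize(path)

# Build a throwaway database, fill it, then delete everything so the
# file is left full of empty (but still allocated) pages.
path = os.path.join(tempfile.mkdtemp(), "places.sqlite")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE history (url TEXT)")
conn.executemany("INSERT INTO history VALUES (?)", [("x" * 500,)] * 1000)
conn.commit()
conn.execute("DELETE FROM history")
conn.commit()
conn.close()

before, after = vacuum_sqlite(path)
print(after < before)  # the vacuumed file shrinks back down
```

The same `VACUUM` statement is what defragments browser databases such as Firefox's `places.sqlite`, which is why vacuuming can both free space and speed up lookups.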
BleachBit's file shredder uses only a single, "secure" pass because its developers believe that there is a lack of evidence that multiple passes, such as the 35-pass Gutmann method, are more effective. They also assert that multiple passes are significantly slower and may give the user a false sense of security by overshadowing other ways in which privacy may be compromised.
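A minimal sketch of the single-pass approach, assuming a plain POSIX-style filesystem; BleachBit's real shredder also handles renaming, platform quirks, and error cases that this illustration omits.

```python
import os
import tempfile

def shred(path):
    """Overwrite a file once with random bytes, then truncate and delete it.

    A simplified sketch of single-pass shredding; it does not address
    filesystems that journal or copy-on-write old data elsewhere.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(os.urandom(size))  # one pass of random data over the contents
        f.flush()
        os.fsync(f.fileno())       # push the overwrite out to disk
        f.truncate(0)              # also hide the original file length
    os.remove(path)

# Usage: shred a throwaway file containing "sensitive" data.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"secret data")
shred(path)
print(os.path.exists(path))  # False
```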
Controversy
In August 2016, Republican U.S. Congressman Trey Gowdy announced that he had seen notes from the Federal Bureau of Investigation (FBI), taken during an investigation of Hillary Clinton's emails, that stated that her staff had used BleachBit in order to delete tens of thousands of emails on her private server. Subsequently, then presidential nominee Donald Trump claimed Clinton had “acid washed” and “bleached” her emails, calling it “an expensive process.”
After the announcement, BleachBit's company website reportedly received increased traffic. In October 2016, the FBI released edited documents from their Clinton email investigation.
See also
AVG PC TuneUp
Desktop Cleanup Wizard
Disk Cleanup
Eraser (software)
CCleaner
Norton Utilities
References
External links
Review by Downloadsquad (June 9, 2009)
Review by SoftPedia (September 16, 2009)
Review by CNET (January 19, 2011)
2008 software
Cross-platform free software
Data erasure software
Free multilingual software
Free software programmed in Python
Software that uses PyGTK
Software using the GPL license
Utilities for Linux
Utilities for Windows |
4974215 | https://en.wikipedia.org/wiki/Benetech | Benetech | Benetech is a nonprofit social enterprise organization that empowers communities with software for social good. Previous projects include the Route 66 Literacy Project, the Miradi environmental project management software, Martus (human rights abuse reporting), and the Human Rights Data Analysis Group. Current program areas include global education, human rights, and poverty alleviation.
About
One of Benetech's key education program initiatives is Bookshare, an e-book library for people with print disabilities such as dyslexia, blindness, low vision, and physical disabilities.
Another project is Benetech Service Net, an open standards data exchange platform that makes it easier to share and maintain information on local social and human services. Organizations providing referrals or referral technology (such as 2-1-1s, Healthify, or Health Leads) and agencies providing information about their services (such as community-based shelters, food pantries, or government agencies) can work together to make better data available for everyone.
History
Benetech was founded by technology entrepreneur Jim Fruchterman in Palo Alto, California, under the name of Arkenstone in 1989. It was initially created to provide reading machines for blind people. During the period 1989-2000, over 35,000 reading machines were sold in sixty countries, reading twelve different languages. In 2000, the Arkenstone reading machine product line was sold to Freedom Scientific, and the nonprofit's name was changed to Benetech. The funding from the asset sale was used to start the Bookshare initiative and Martus project in 2001.
Benetech and its Martus software were featured on the PBS NewsHour.
In 2019, Benetech announced the expansion of its inclusive education initiatives, creating new partnerships with organisations such as Vision Australia, the Royal National Institute of Blind People in the UK, National Council for the Blind of Ireland, Canada's Center for Equitable Library Access and the Dubai government.
See also
Guatemala National Police Archives
Notes
External links
Benetech
Bookshare: accessible online library for people with reading barriers
Benetech Service Net
Martus Human Rights Database Software
Human Rights Data Analysis Group
Miradi Environmental Project Management Software
Social enterprises
Non-profit organizations based in California
Companies based in Palo Alto, California
Free and open-source software organizations |
33020764 | https://en.wikipedia.org/wiki/Cheekpoint | Cheekpoint | Cheekpoint () is a village set on the confluence of the River Suir and the River Barrow. Lying beneath the 150-metre-high Minaun Hill (mountain meadow by a river) the village has panoramic views of Waterford Harbour, the 2131 ft. Barrow Bridge, which was once the longest bridge in Ireland, and Great Island Power Station now owned by Scottish Southern Energy SSE plc who purchased it from Endesa in 2012. The village is also surrounded by the Malting Woods which were planted by Cornelius Bolton.
Toponymy
The Irish name for Cheekpoint is "Pointe na Síge", or perhaps "Pointe na Sí" (in English, Point of the Fairies). It is also claimed to mean Point of the Streak. It is now thought that the name comes from a rock called Carraig na Síge out on the river near the low water mark, which shows a trail of foam, or streak, with the ebbing tide.
The village was subsequently called Bolton, but this name is no longer used and the original is the only one recognised.
Industry
Before the building of the pier at Dunmore East, Cheekpoint was a thriving village, being the station at which the mail packets from England for Waterford stopped. In addition there were cotton, rope, and hosiery factories which disappeared when the mail packet station was transferred to Dunmore East on 1 July 1818.
History
Cheekpoint and the lands which surrounded it were owned by the Aylward family from Bristol, who had been granted 7000 acres of pastureland by King Henry II in 1177. They held it until Oliver Cromwell dispossessed them in 1649 when they refused to renounce Catholicism. Cromwell then gave the property to one of his officers, a Captain William Bolton. In 1783 Cornelius Bolton (1751–1829) built Faithlegg House after he had inherited the Faithlegg Estate from his father in 1779.
Cornelius Bolton was a very progressive landlord and he was very interested in helping his tenants to progress. He built the pier at the nearby village of Cheekpoint and then he built a textile factory, a rope factory and a hotel. However these enterprises failed and he went bankrupt in 1819. This was largely due to the mail packet station to Milford Haven in Wales being transferred to Dunmore East in 1818.
Sleater's "Topography of Ireland" published in 1806 has the following reference:- "Bolton, formerly called Cheekpoint, cotton factory and hoisery, established by Mr. Bolton. A most commodious Inn for passengers in the packets to and from Milford Haven in Pembrokeshire." An earlier writer refers to it - "Mr. Cornelius Bolton lives very retired in the country and has employed a considerable part of his fortune in building a large village where he has established several important manufactures, particularly looms. The industry which he encourages in his colony renders it probable that his expense will be repaid him, and that it will become an object of utility to the public and of profit to him although suggested by motives of humanity ".
It also mentions "that to the spirited exertions of Mr. Bolton the citizens of Waterford were said to be primarily indebted for the establishment of the packets from England, and that the diversion of these packets from Cheekpoint to Dunmore East would be a serious loss to the proprietor of Cheekpoint who had expended a considerable sum of money on hotels and other accommodations, unless Parliament should take this loss into consideration".
The decision by the British Government to grant the money to build the harbour at Dunmore East in 1814 spelt the end for the Cheekpoint enterprise and when the transfer of the mail packet took place in 1818 Cheekpoint ceased to have the passenger business which kept the village alive. The change from sail to steam meant that it would now be possible to run a service between Milford Haven and Ireland to a reasonable schedule and the new harbour at Dunmore East facilitated this greatly.
Cheekpoint was then only used as a fishery harbour in the 19th and 20th centuries and became famous for a certain type of small fishing craft called the Cheekpoint Prong, which was 17 ft 6 in long with a 4 ft 8 in beam and was normally rowed or paddled. They were used for long lining and salmon fishing with drift nets, snap nets and draft nets.
In 1995 a series of groynes were built up to 200 metres out in the river to divert the Cheekpoint Bar, a mudbank impeding large vessels from travelling to the Port of Waterford. These groynes resulted in the harbour at Cheekpoint silting up so badly that only small craft may now enter. The harbour is not used very often now by visiting craft because of this difficulty.
See also
List of towns and villages in Ireland
References
External links
Irish Railway Record Society
River Cots of the South East
Towns and villages in County Waterford |
925719 | https://en.wikipedia.org/wiki/Digital%20asset%20management | Digital asset management | Digital Asset Management (DAM) and the implementation of its use as a computer application is required in the collection of digital assets to ensure that the owner, and possibly their delegates, can perform operations on the data files.
Terminology
The term media asset management (MAM) may be used in reference to DAM when applied to the sub-set of digital objects commonly considered "media", namely audio recordings, photos, and videos. Any editing process that involves media, especially video, can make use of a MAM to access media components to be edited together, or to be combined with a live feed, in a fluent manner. A MAM typically offers at least one searchable index of the images, audio, and videos it contains constructed from metadata harvested from the images using pattern recognition, or input manually.
Management
Creation
Applications implement digital asset management by importing assets from the analog and/or digital domains (by encoding, scanning, optical character recognition, etc.) or by authoring them as new objects.
Indexing
A primary function of a DAM system is to make assets easily available to its users by providing a searchable index that supports retrieval of assets by their content and/or metadata. The cataloging function is usually part of the ingestion process for new assets.
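A toy in-memory version of such a searchable metadata index might look like the following; the field names and API are illustrative assumptions, not a DAM standard.

```python
from collections import defaultdict

class AssetIndex:
    """Minimal metadata index: maps each (field, value) pair to the set of
    asset IDs carrying it, so lookups by metadata are constant-time."""

    def __init__(self):
        self._index = defaultdict(set)
        self._assets = {}

    def ingest(self, asset_id, metadata):
        # Cataloging happens at ingestion time, as described above.
        self._assets[asset_id] = metadata
        for field, value in metadata.items():
            self._index[(field, value)].add(asset_id)

    def search(self, field, value):
        return set(self._index[(field, value)])

idx = AssetIndex()
idx.ingest("IMG-001", {"type": "photo", "creator": "alice"})
idx.ingest("VID-007", {"type": "video", "creator": "alice"})
print(sorted(idx.search("creator", "alice")))  # ['IMG-001', 'VID-007']
print(sorted(idx.search("type", "photo")))     # ['IMG-001']
```

A production DAM would back this with a database or search engine and would also index extracted content (text, recognized objects), not just hand-entered metadata.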
Workflow
Digital assets will typically have a lifecycle, which may include various states such as creation, approval, live, archived and deleted.
Version control
Often a DAM system will store earlier versions of a digital asset and allow those to be downloaded or reverted to. Therefore, a DAM system can operate as an advanced type of version control system.
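The idea can be sketched as a minimal version store in which every save keeps the previous bytes and reverting appends rather than rewrites history; this illustrates the concept only, not any particular DAM product's design.

```python
class VersionedAsset:
    """Minimal sketch of DAM-style version control for one asset."""

    def __init__(self, content):
        self._versions = [content]

    def save(self, content):
        self._versions.append(content)
        return len(self._versions) - 1  # new version number

    def get(self, version=-1):
        return self._versions[version]  # default: latest version

    def revert(self, version):
        # Reverting records the old content as a *new* head version,
        # so earlier history is never rewritten or lost.
        self._versions.append(self._versions[version])

logo = VersionedAsset(b"v1 artwork")
logo.save(b"v2 artwork")
logo.revert(0)
print(logo.get())  # b'v1 artwork' is the head again, with all 3 versions kept
```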
Access control
Finally, a DAM system typically includes security controls ensuring relevant people have access to assets. This will often involve integration with existing directory services via a technology such as single sign-on.
Categorization
Smaller DAM systems are used in a particular operational context, for instance in video production systems. The key differentiators between them are the types of input encoders used for creating digital copies of assets to bring them under management, and the output decoders and/or formatters used to make them usable as documents and/or online resources. The metadata of a content item can serve as a guide to the selection of the codec(s) needed to handle the content during processing, and may be of use when applying access control rules to enforce authorization policy.
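Metadata-guided codec selection, as described above, can be sketched as a simple dispatch table; the codec names and handler functions here are illustrative assumptions.

```python
# Hypothetical mapping from a content item's "encoding" metadata field to
# the decoder that should handle it; names are for illustration only.
DECODERS = {
    "h264": lambda blob: f"decoded {len(blob)} bytes of H.264 video",
    "jpeg": lambda blob: f"decoded {len(blob)} bytes of JPEG imagery",
}

def process(asset):
    """Use the item's metadata to pick the codec needed to handle it."""
    codec = asset["metadata"]["encoding"]
    try:
        return DECODERS[codec](asset["blob"])
    except KeyError:
        raise ValueError(f"no decoder registered for {codec!r}")

result = process({"metadata": {"encoding": "jpeg"}, "blob": b"\xff\xd8\xff"})
print(result)  # decoded 3 bytes of JPEG imagery
```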
Assets that require particular technology to be used in a workflow need to have their requirements for bandwidth, latency, and access control considered in the design of the tools that create or store them, and in the architecture of the system that distributes and archives them.
When not being worked on, assets can be held in a DAM in a variety of formats, such as a blob (binary large object) in a database or a file in a normal file system, that are "cheaper" to store than the form needed during operations on them. This makes it possible to implement a large-scale DAM as an assembly of high-performance processing systems in a network with a high-density storage solution at its center.
Media asset issues
An asset can exist in several formats and in a sequence of versions. The digital version of the original asset is generally captured in as high a resolution, colour depth, and (if applicable) frame rate as will be needed to ensure that results are of acceptable quality for the end-use. There can also be thumbnail copies of lower quality for use in visual indexing.
Metadata for an asset can include its packaging, encoding, provenance, ownership and access rights, and location of original creation. It is used to provide hints to the tools and systems used to work on, or with, the asset about how it should be handled and displayed.
Types of systems
Digital asset management systems fall into the following classifications:
Brand management system to enforce brand presentation within an organization by making the approved logos, fonts, and product images easily available.
Library or archive for bulk storage of infrequently changing video or photo assets.
Media asset management systems for handling assets in the audiovisual domain including audio, video, or still images.
Production management systems for handling assets being created on the fly for use in live media production or as visual effects for use in gaming applications, TV, or films.
Streaming for on-demand delivery of digital content, like TV shows or movies, to end users on behalf of digital retailers.
All of these types will include features for work-flow management, collaboration, project-management, and revision control.
See also
Content management
Digital asset
Digital library
Digital rights management
Image organizer, possible presentation layer for a DAM
Web content management system, may be a presentation layer for a DAM
Enterprise content management
References
Further reading
Information technology management
Content management systems
Document management systems
Records management
Asset management |
5462396 | https://en.wikipedia.org/wiki/Unity%20%28game%20engine%29 | Unity (game engine) | Unity is a cross-platform game engine developed by Unity Technologies, first announced and released in June 2005 at Apple Inc.'s Worldwide Developers Conference as a Mac OS X-exclusive game engine. The engine has since been gradually extended to support a variety of desktop, mobile, console and virtual reality platforms. It is particularly popular for iOS and Android mobile game development and used for games such as Pokémon Go, Monument Valley, Call of Duty: Mobile, Beat Saber and Cuphead. It is considered easy to use for beginner developers and is popular for indie game development.
The engine can be used to create three-dimensional (3D) and two-dimensional (2D) games, as well as interactive simulations and other experiences. The engine has been adopted by industries outside video gaming, such as film, automotive, architecture, engineering, construction, and the United States Armed Forces.
History
The Unity game engine launched in 2005, aiming to "democratize" game development by making it accessible to more developers. The next year, Unity was named runner-up in the Best Use of Mac OS X Graphics category in Apple Inc.'s Apple Design Awards. Unity was initially released for Mac OS X, later adding support for Microsoft Windows and Web browsers.
Unity 2.0 (2007)
Unity 2.0 launched in 2007 with approximately 50 new features. The release included an optimized terrain engine for detailed 3D environments, real-time dynamic shadows, directional lights and spotlights, video playback, and other features. The release also added features whereby developers could collaborate more easily. It included a Networking Layer for developers to create multiplayer games based on the User Datagram Protocol, offering Network Address Translation, State Synchronization, and Remote Procedure Calls.
When Apple launched its App Store in 2008, Unity quickly added support for the iPhone. For several years, the engine was uncontested on the iPhone and it became well-known with iOS game developers.
Unity 3.0 (2010)
Unity 3.0 launched in September 2010 with features expanding the engine's graphics features for desktop computers and video game consoles. In addition to Android support, Unity 3 featured integration of Illuminate Labs' Beast Lightmap tool, deferred rendering, a built-in tree editor, native font rendering, automatic UV mapping, and audio filters, among other things.
In 2012 VentureBeat wrote, "Few companies have contributed as much to the flowing of independently produced games as Unity Technologies. [...] More than 1.3 million developers are using its tools to create gee-whiz graphics in their iOS, Android, console, PC, and web-based games. Unity wants to be the engine for multi-platform games, period." A May 2012 survey by Game Developer magazine indicated Unity as its top game engine for mobile platforms.
Unity 4.0 (2012)
In November 2012, Unity Technologies delivered Unity 4.0. This version added DirectX 11 and Adobe Flash support, new animation tools called Mecanim, and access to the Linux preview.
Facebook integrated a software development kit for games using the Unity game engine in 2013. This featured tools that allowed tracking advertising campaigns and deep linking, where users were directly linked from social media posts to specific portions within games, and easy in-game-image sharing. In 2016, Facebook developed a new PC gaming platform with Unity. Unity provided support for Facebook's gaming platforms, and Unity developers could more quickly export and publish games to Facebook.
Unity 5 (2015)
The Verge said of 2015's Unity 5 release: "Unity started with the goal of making game development universally accessible. [...] Unity 5 is a long-awaited step towards that future." With Unity 5, the engine improved its lighting and audio. Through WebGL, Unity developers could add their games to compatible Web browsers with no plug-ins required for players. Unity 5.0 offered real-time global illumination, light mapping previews, Unity Cloud, a new audio system, and the Nvidia PhysX 3.3 physics engine. The fifth generation of the Unity engine also introduced Cinematic Image Effects to help make Unity games look less generic. Unity 5.6 added new lighting and particle effects, updated the engine's overall performance, and added native support for Nintendo Switch, Facebook Gameroom, Google Daydream, and the Vulkan graphics API. It introduced a 4K video player capable of running 360-degree videos for virtual reality. However, some gamers criticized Unity's accessibility due to the high volume of quickly produced games published on the Steam distribution platform by inexperienced developers. CEO John Riccitiello said in an interview that he believes this to be a side-effect of Unity's success in democratizing game development: "If I had my way, I'd like to see 50 million people using Unity – although I don't think we're going to get there any time soon. I'd like to see high school and college kids using it, people outside the core industry. I think it's sad that most people are consumers of technology and not creators. The world's a better place when people know how to create, not just consume, and that's what we're trying to promote."
Unity (2017—present)
In December 2016, Unity Technologies announced that they would change the versioning numbering system for Unity from sequence-based identifiers to year of release, to align the versioning with their more frequent release cadence; Unity 5.6 was therefore followed by Unity 2017. Unity 2017 tools featured a real-time graphics rendering engine, color grading and worldbuilding, live operations analytics, and performance reporting. Unity 2017.2 underscored Unity Technologies' plans beyond video games. This included new tools such as Timeline, which allowed developers to drag-and-drop animations into games, and Cinemachine, a smart camera system within games. Unity 2017.2 also integrated Autodesk's 3DS Max and Maya tools into the Unity engine for streamlined asset sharing and in-game iteration.
Unity 2018 featured the Scriptable Render Pipeline for developers to create high-end graphics. This included the High-Definition Rendering Pipeline for console and PC experiences, and the Lightweight Rendering Pipeline for mobile, virtual reality, augmented reality, and mixed reality. Unity 2018 also included machine learning tools, such as Imitation Learning, whereby games learn from real player habits, support for Magic Leap, and templates for new developers.
The C# source code of Unity was published under a "reference-only" license in March 2018, which prohibits reuse and modification.
As of 2020, software built with Unity's game engine was running on more than 1.5 billion devices. According to Unity, apps made with their game engine account for 50 percent of all mobile games, and are downloaded more than 3 billion times per month, and approximately 15,000 new projects are started daily with its software. Financial Times reported that Unity's engine "powers some of the world's most lucrative mobile games", such as Pokémon Go and Activision's Call of Duty Mobile.
In June 2020, Unity introduced the Mixed and Augmented Reality Studio (MARS), which provides developers with additional functionality for rules-based generation of augmented reality (AR) applications. Unity released Unity Forma, an automotive and retail solution tool, on December 9, 2020.
Unity acquired Finger Food Advanced Technology Group in 2020, as it aimed to bolster its non-video game uses and offer additional design help to customers. The company went public in September 2020, to further expand use of its game engine into industries outside of gaming.
Unity 2021 brought multiple new features, such as an official visual scripting tool, a new multiplayer library to support multiplayer games, improved IL2CPP runtime performance, and, for the High Definition Render Pipeline, volumetric clouds, shadow caching, and Screen Space Global Illumination. For the Universal Render Pipeline it added point light shadows, a deferred renderer, and general core engine improvements and fixes.
In December 2021, Unity acquired Peter Jackson's Weta Digital's tools, pipeline, technology, and engineering talent for US$1.625B in a combination of cash and stock.
In 2022, Unity bought Ziva Dynamics, a tech company focused on complex simulations and real-time character creation. The acquisition was announced with a demonstration of a digital human running in-engine.
Overview
Unity gives users the ability to create games and experiences in both 2D and 3D. The engine offers a primary scripting API in C# using Mono, for both the Unity editor in the form of plugins and for games themselves, as well as drag-and-drop functionality. Before C# became the primary programming language for the engine, Unity supported Boo, which was removed with the release of Unity 5, and a Boo-based implementation of JavaScript called UnityScript, which was deprecated in August 2017, after the release of Unity 2017.1, in favor of C#.
Within 2D games, Unity allows importation of sprites and an advanced 2D world renderer. For 3D games, Unity allows specification of texture compression, mipmaps, and resolution settings for each platform that the game engine supports, and provides support for bump mapping, reflection mapping, parallax mapping, screen space ambient occlusion (SSAO), dynamic shadows using shadow maps, render-to-texture and full-screen post-processing effects.
Two separate render pipelines are available, High Definition Render Pipeline (HDRP) and Universal Render Pipeline (URP), in addition to the legacy built-in pipeline.
All three render pipelines are incompatible with each other. Unity offers a tool to upgrade shaders using the legacy renderer to URP or HDRP.
Supported platforms
Unity is a cross-platform engine. The Unity editor is supported on Windows, macOS, and the Linux platform, while the engine itself currently supports building games for more than 19 different platforms, including mobile, desktop, consoles, and virtual reality. Officially supported platforms as of Unity 2020 LTS are:
Mobile platforms iOS, Android (Android TV), tvOS;
Desktop platforms Windows (Universal Windows Platform), Mac, Linux;
Web platform WebGL;
Console platforms PlayStation (PS4, PS5), Xbox (Xbox One, Xbox Series X/S), Nintendo Switch, Stadia;
Virtual/Extended reality platforms Oculus, PlayStation VR, Google's ARCore, Apple's ARKit, Windows Mixed Reality (HoloLens), Magic Leap, and via Unity XR SDK Steam VR, Google Cardboard.
Formerly supported platforms were Wii, Wii U, PlayStation 3, Xbox 360, Tizen, PlayStation Vita, 3DS, BlackBerry 10, Windows Phone 8, Samsung Smart TV, Gear VR, Daydream, Vuforia, and Facebook Gameroom.
Unity had been used to create approximately half of the mobile games on the market and 60 percent of augmented reality and virtual reality content, including approximately 90 percent on emerging augmented reality platforms, such as Microsoft HoloLens, and 90 percent of Samsung Gear VR content. Unity technology is the basis for most virtual reality and augmented reality experiences, and Fortune said Unity "dominates the virtual reality business". Unity Machine Learning Agents is open-source software whereby the Unity platform connects to machine learning programs, including Google's TensorFlow. Using trial and error in Unity Machine Learning Agents, virtual characters use reinforcement learning to build creative strategies in lifelike virtual landscapes. The software is used to develop robots and self-driving cars.
Unity formerly supported other platforms including its own Unity Web Player, a Web browser plugin. However, it was deprecated in favor of WebGL. Since version 5, Unity has been offering its WebGL bundle compiled to JavaScript using a 2-stage language translator (C# to C++ and finally to JavaScript).
Unity was the default software development kit (SDK) used for Nintendo's Wii U video game console, with a free copy included by Nintendo with each Wii U developer license. Unity Technologies called this bundling of a third-party SDK an "industry first".
Licensing model
During its first ten years as a product, the paid versions of Unity were sold outright; in 2016, the corporation changed to a subscription model. Unity has free and paid licensing options. The free license is for personal use or smaller companies generating less than $100,000 annually, later raised to $200,000, and the subscriptions are based on revenues generated by the games using Unity. The paid option, Unity Pro, had been required for developers that had over $200,000 in annual revenue, but this also could have been provided for console developers through a Preferred Platform License from the console manufacturer. The Unity Pro keys would have been part of the other SDK from the console manufacturer that the developer paid for. In June 2021, Unity changed this plan slightly to require any developer making games on the closed console systems (PlayStation, Nintendo Switch, and Xbox) regardless of revenue to have a Unity Pro license or a Preferred Platform License Key from the manufacturers. Sony and Nintendo provide this as part of the SDK, but Microsoft had yet to implement this functionality for their SDK.
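The tiering rules described in this paragraph can be restated as a small decision function. This is a hedged sketch of the thresholds as given in the text; the function name and flags are invented for illustration:

```python
# Sketch of Unity's licensing rules as described above: Pro is required
# above $200,000 in annual revenue, and (per the June 2021 change) for
# closed-console development without a Preferred Platform License key.
def required_license(annual_revenue, closed_console=False,
                     has_platform_key=False):
    if closed_console and not has_platform_key:
        return "Unity Pro"
    if annual_revenue > 200_000:
        return "Unity Pro"
    return "Personal (free)"

print(required_license(150_000))                       # Personal (free)
print(required_license(150_000, closed_console=True))  # Unity Pro
```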
The engine source code is licensed on a "per-case basis via special arrangements".
Unity Asset Store
Creators can develop and sell user-generated assets to other game makers via the Unity Asset Store. This includes 3D and 2D assets and environments for developers to buy and sell. Unity Asset Store launched in 2010. By 2018, there had been approximately 40 million downloads through the digital store.
Usage
Non-gaming industries uses
In the 2010s, Unity Technologies used its game engine to transition into other industries using the real-time 3D platform, including film and automotive. Unity first experimented in filmmaking with Adam, a short film about a robot escaping from prison. Later, Unity partnered with filmmaker Neill Blomkamp, whose Oats Studios used the engine's tools, including real-time rendering and Cinemachine, to create two computer-generated short films, Adam: The Mirror and Adam: The Prophet. At the 2017 Unite Europe conference in Amsterdam, Unity focused on filmmaking with Unity 2017.1's new Cinemachine tool. In 2018, Disney Television Animation launched three shorts, called Baymax Dreams, that were created using the Unity engine. The Unity engine was also used by Disney to create backgrounds for the 2019 film The Lion King.
Automakers use Unity's technology to create full-scale models of new vehicles in virtual reality, build virtual assembly lines, and train workers. Unity's engine is used by DeepMind, an Alphabet Inc. company, to train artificial intelligence. Other uses being pursued by Unity Technologies include architecture, engineering, and construction.
Unity Technologies Japan mascot
On December 16, 2013, Unity Technologies Japan revealed an official mascot character named Unity-chan. The character's associated game data was released in early 2014. The character was designed by Unity Technologies Japan designer "ntny" as an open-source heroine character. The company allows the use of Unity-chan and related characters in secondary projects under certain licenses. For example, Unity-chan appears as a playable character in Runbow.
See also
List of game engines
List of Unity games
List of WebGL frameworks
References
External links
.NET game engines
2005 software
Game engines for Linux
Game engines that support Vulkan (API)
IPhone video game engines
MacOS programming tools
Mono project applications
Video game engines
Video game IDE |
36374405 | https://en.wikipedia.org/wiki/Director%20Musices | Director Musices | Director Musices is computer software produced by the Department of Speech, Music and Hearing at KTH Royal Institute of Technology. It aims to give an expressive, human-like performance to a musical score by varying the volume and timing of the notes. Director Musices is written in CMU Common Lisp and distributed as free software. It processes MIDI files.
External links
Software for Automatic Music Performance including Director Musices and pDM
Director Musices with Lilypond "Howto" (instructions on how to set up Director Musices to process GNU LilyPond output)
See also
Sibelius (software) a commercial program that also includes automatically expressive playing
List of music software
References
Computer Music Journal (2000)
Music software |
23834567 | https://en.wikipedia.org/wiki/Nintendo%20DSi%20system%20software | Nintendo DSi system software | The Nintendo DSi system software is a set of updatable firmware versions, and a software frontend on the Nintendo DSi (including its XL variant) video game console. Updates, which are downloaded via the system's Internet connection, allow Nintendo to add and remove features and software. All updates also include all changes from previous updates.
Technology
User interface
The user interface of the Nintendo DSi has been redesigned from the Nintendo DS and Nintendo DS Lite. The DSi's user interface is a single row of icons which can be navigated by sliding the stylus across them. When the DSi is booted for the first time, the system snaps a shot of the user's face which is then displayed on the home menu's top screen. From the home menu, the user can take a picture at any time by pressing the shoulder buttons. While the system is on, the power button acts as a soft reset button that returns the user to the home menu.
The Nintendo DSi provides some built-in applications. Initially, users are able to access five programs from the main menu: DSi Camera, DSi Sound, DSi Shop, PictoChat, and Download Play. The DSi's menu is akin to the Channel interface of the Nintendo Wii in that new programs can be downloaded and added to the interface. The DSi Camera application allows for taking images and applying various filters. The DSi Sound application is thematically similar to DSi Camera, serving as a sound recorder and editor (along with a low-bitrate AAC music player). Features include themed equalizers and modulators that modify a user's voice to sound similar to a robot or parakeet (Toy Story 3 is the only DSi-enhanced game to use the DSi's audio modulator engine). The DSi Shop serves as the DS version of the Wii Shop Channel.
Multimedia features
Unlike Nintendo's previous handheld consoles such as the Nintendo DS and Nintendo DS Lite, the Nintendo DSi has built-in music playback support. The DSi Music program is split into two modes: voice recording and music playback. Both offer plenty of entertainment value because of the tools and gimmicks Nintendo has included. The recording mode lets users record up to 18 clips of at most 10 seconds each. Once they have recorded a clip, they can play around with it in various ways. For example, users can make the clip play backwards or forwards, isolate small sections using A-B repeat, and modify the speed and tone by dragging a pointer around on a 2D graph. They can also apply 12 effects to the clip, which can be used to transform the sound. The music playback mode also has many play options. Once a song has been loaded, users can change the speed and tone just as in the recording mode. They can also overlay recordings made in the recording mode onto songs at any point. In addition, Nintendo has provided a set of sound effects which can be selected quickly with the stylus, then inserted freely using the shoulder buttons.
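The recorder's limits (at most 18 clips, each at most 10 seconds) can be modeled with a short sketch. The truncation behavior shown here is an assumption for illustration, not documented DSi behavior:

```python
# Toy model of the DSi Sound recorder's stated limits:
# up to 18 clips, each capped at 10 seconds.
MAX_CLIPS, MAX_SECONDS = 18, 10

class Recorder:
    def __init__(self):
        self.clips = []  # stored clip lengths in seconds

    def record(self, seconds):
        if len(self.clips) >= MAX_CLIPS:
            return False  # no free slots left
        # Assumed behavior: recording simply stops at the cap.
        self.clips.append(min(seconds, MAX_SECONDS))
        return True

r = Recorder()
print(r.record(12))  # True (stored as a 10-second clip)
print(r.clips[0])    # 10
```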
Unlike the built-in DSi Camera application, which will not read any files that were not generated by the DSi itself, the DSi Music application has no such restriction on files and directory structure. When files are stored in a multi-level directory structure in the root directory of the SD card, the DSi parses through them instantly and displays all the internal directories for quick access. During playback, users have access to features such as forwarding, rewinding, and volume controls. Nintendo presumably envisioned DSi Music as a substitute for a dedicated music player. However, an important drawback of the DSi Music application is that it does not support the popular MP3 format. Instead, the player only supports the AAC format with .mp4, .m4a, or .3gp filename extensions. Furthermore, compared with Sony's PlayStation Portable it is more difficult to interface the DSi with a PC, as there is no USB port on the system. In order to transfer music and podcasts over, users need to remove the SD card and plug it directly into their PC.
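A recursive scan like the one described, walking a multi-level directory tree and keeping only files with the supported AAC extensions, can be sketched in Python. The directory layout in the demo is invented:

```python
# Sketch: walk an SD card's directory tree and collect only playable
# files (.mp4/.m4a/.3gp AAC, per the supported formats above).
import os
import tempfile

PLAYABLE = {".mp4", ".m4a", ".3gp"}

def find_playable(root):
    """Return playable files from every subdirectory, sorted by path."""
    found = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in PLAYABLE:
                found.append(os.path.join(dirpath, name))
    return sorted(found)

# Demo on a throwaway directory tree.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "albums", "live"))
for rel in ("albums/song.m4a", "albums/live/clip.3gp", "track.mp3"):
    open(os.path.join(root, rel), "w").close()
print(len(find_playable(root)))  # 2 (the .mp3 is skipped)
```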
Internet features
One of the major updates the Nintendo DSi brings to the Nintendo DS line is full network connectivity. Unlike the original Nintendo DS and Nintendo DS Lite which only featured minimal network connectivity, download content and firmware updates are at the core of the DSi experience, similar to the Wii and Sony's PlayStation Portable consoles. For example, when users first power up the system and click on the DSi Shop icon from the main menu, they are immediately prompted to run a firmware update. The Nintendo DSi supports WEP, WPA (AES/TKIP), and WPA2 (AES/TKIP) wireless encryption; only software with built-in support can use the latter two encryption types, as they were not supported by the DS and DS Lite.
With the DSi Shop application users can purchase various DSiWare titles. The cute music and blocky interface are somewhat similar to the counterpart on the Wii. Users can permanently log in with their Club Nintendo account to track purchase rewards, and the main shopping interface also lets users add DSi Points and read the DSi Shop manual. As with the firmware updates, the DSi shopping experience is quite similar to that of the Wii, although a big problem with the DSi Shop is its slow speed.
Furthermore, like the previous Nintendo DS and DS Lite, the Nintendo DSi includes a web browser, which is a version of the Opera browser. It has support for the HTML5 canvas object and CSS opacity. However, there are limitations to these features, and web browsing on the system as a whole is not a good experience. In addition to slow download speeds, the browser has difficulty rendering pages. For example, many pages will not load completely, and the browser is not compatible with movie files, music files, or Adobe Flash on multimedia content sites like YouTube.
DSiWare and backward compatibility
On the Nintendo DSi, there is a collection of games and applications specifically designed for the Nintendo DSi handheld game console and available for download via the DSi Shop, known as DSiWare. Since these games and applications are specifically targeted at the Nintendo DSi, they are not compatible with the original Nintendo DS or Nintendo DS Lite consoles. The Nintendo DSi is Nintendo's first region-locked handheld; it prevents using certain software released for another region, unlike original Nintendo DS models. But as a member of the Nintendo DS line, the Nintendo DSi is backward compatible with most original Nintendo DS games, and cartridge software compatible with previous models, including original DS games, Internet browsing, and photo sharing, is not region-locked. Later, its successor, the Nintendo 3DS, also adopted this approach, and as a result all Nintendo DSi- and 3DS-specific games are locked to a certain region, while original DS games are still region-free. In addition to DSiWare, which is DSi-exclusive (although it can also run on a 3DS), there are also "DSi-enhanced" games containing DSi-exclusive features that can still be played on earlier Nintendo DS models. While most original DS games can run on the DSi, the DSi is not backward compatible with Game Boy Advance (GBA) games or original DS games that require a GBA slot, since the DSi itself lacks such a slot, unlike the DS and DS Lite. Because of this absence, the DSi is also not backward compatible with accessories requiring the GBA slot, such as the Nintendo DS Rumble Pak. Homebrew flash cards designed for previous DS models are incompatible with the DSi, but new cards capable of running DS software (or even DSiWare) on a DSi were available. While users cannot transfer purchased DSiWare between Nintendo DSi units, most DSiWare can be transferred to a Nintendo 3DS, although not its saved data.
Like the Nintendo DSi, the Nintendo 3DS is backward compatible with most Nintendo DS and Nintendo DSi software.
History of updates
This is a list of major system updates of the Nintendo DSi.
See also
Other gaming platforms from Nintendo:
Nintendo 3DS system software
Wii system software
Wii U system software
Nintendo Switch system software
Other gaming platforms from the next generation:
PlayStation Vita system software
PlayStation 4 system software
Xbox One system software
Other gaming platforms from this generation:
PlayStation 3 system software
PlayStation Portable system software
Xbox 360 system software
References
Nintendo DSi
Game console operating systems
Proprietary operating systems |
8109474 | https://en.wikipedia.org/wiki/Origin%20%28service%29 | Origin (service) | Origin is a digital distribution platform developed by Electronic Arts for purchasing and playing video games. The platform's software client is available for personal computer and mobile platforms.
Origin contains social features such as profile management, networking with friends via chat, direct game joining with an in-game overlay, streaming via Twitch, sharing of game libraries, and community integration with networking sites like Facebook, Xbox Live, PlayStation Network, and Nintendo Network. In 2011, Electronic Arts stated that it wanted Origin to match Valve's Steam service, Origin's primary competitor, by adding cloud game saves, auto-patching, achievements, and cross-platform releases. By 2013, Origin had over 50 million registered users.
EA announced in September 2020 that it plans to retire Origin in favor of a new EA Desktop client for its EA Play service.
Components
Origin store
The Origin store allows users to browse and purchase games from Electronic Arts' catalogs. Instead of receiving a box, disc, or even CD key, purchased software is immediately attached to the user's Origin account and is to be downloaded with the corresponding Origin client.
Origin guarantees download availability forever after purchase, and there is no limit to the number of times a game can be downloaded.
Users may also add certain EA games to their Origin account by using CD keys from retail copies, and digital copies obtained from other digital distribution services. However, the addition of retail keys to Origin is restricted to games from 2009 onwards, and older keys will not work even if the game is available on Origin, unless the user contacts customer support.
Origin client
The Origin client is self-updating software that allows users to download games, expansion packs, content booster packs, and patches from Electronic Arts. It shows the status of available components. The Origin client is designed to be similar to its competitor, Steam. The Origin in-game overlay can be disabled while playing games. The client also features social tools such as a friends list and group chat options (implemented in version 9.3). Client and download performance has been patched and improved in past updates.
EA Play
EA released a subscription service for accessing and playing their games on PC in 2016, originally called EA Access; via the Origin client, this was called Origin Access. Users can choose between paying a monthly or yearly subscription fee to access a large collection of EA titles (known as The Vault). Origin Access subscribers also get a 10% discount on all Origin purchases. Starting in March 2018, Origin Access started offering titles from Warner Bros. Interactive Entertainment and was looking to add other publishers' titles, including indie games.
At E3 2018, EA announced a premium tier for Origin Access called Origin Access Premier, which allows subscribers to play future EA games early; these are full versions, in contrast to the "First Trials" given to basic Origin Access members. To streamline branding, EA renamed both EA Access and Origin Access to EA Play, with Origin Access Premier renamed EA Play Pro.
In September 2020, EA announced it plans to retire Origin in favor of a new desktop client to be called "EA Desktop" that will support the new EA Play and EA Play Pro subscriptions. It is expected that all Origin content will carry over to the new EA Desktop client once it is fully released. The EA Desktop client began its beta test in September 2020 with no planned date for full release.
History
EA Downloader was launched in late 2005. It was replaced by EA Link in November 2006, adding trailers, demos and special content to the content delivery service. In September 2007, it was once again replaced by the combination of EA Store and EA Download Manager. Users purchase from the EA Store website and use the downloadable EADM client to download their games. Games bought via EA Link were downloadable using the EA Download Manager. The store and client was reopened under the Origin name on June 3, 2011.
The digital distribution software was first used to deliver the Battlefield 2: Special Forces expansion pack, and subsequently most EA titles. The biggest product launch on the software was the Spore Creature Creator.
EA acquired the trademark Origin when it purchased Origin Systems in 1992. Origin Systems was a major game studio in the 1980s and 1990s, best known for its Ultima, Wing Commander, and Crusader game franchises.
Criticism and controversy
Removal of Crysis 2 from Steam and Origin exclusives
Shortly after the launch of Origin, Crysis 2 was pulled from Steam and appeared on EA's website with an "only on Origin" claim, though it remained available on other distribution services. EA has since stated that Valve removed Crysis 2 due to imposed "business terms" and that "this was not an EA decision or the result of any action by EA."
Since then, Crysis 2: Maximum Edition (a re-release of Crysis 2 with all the DLC) has been released on Steam, consistent with EA's account that Crysis 2 was pulled over DLC restrictions. EA confirmed that Battlefield 3 would not be available through Steam. The game is available for purchase on other non-Origin services such as GameFly, Green Man Gaming and GamersGate, but the Origin client must be used regardless of where the game was purchased. From the release of Battlefield 3 in 2011 until November 2019, every first-party game EA published on PC was exclusive to the Origin service. In late 2019, EA began releasing its games on Steam again, starting with Star Wars Jedi: Fallen Order, although the game still uses the Origin client to launch.
Origin account bans
There have been several instances of EA enforcing account bans for what critics argue are comparatively minor infractions, such as making rude comments in EA or BioWare's official forums or in chat.
During March 2011, a user named "Arno" was banned for allegedly making the comment "Have you sold your souls to the EA devil?" Arno's account was banned for 72 hours which prevented him from playing any of his Origin games. After reporting on the details of the incident, website Rock, Paper, Shotgun received a statement from EA saying that Arno's account ban was a mistake, and that future violations on the forums would not interfere with Origin users' access to their games.
Later, during October and November 2011, one user was banned for posting about teabagging dead players. Another user received a 72-hour account suspension for posting a link to his own network troubleshooting guide in the EA forums. EA interpreted this as a "commercial" link, even though the same link had been posted elsewhere in the forums as well as on EA's own corporate support site and FAQ. One user was permanently banned for submitting a forum post containing the portmanteau "e-peen," which is slang for "electronic penis."
Security weaknesses
EA has been criticized for not encrypting Origin's XMPP chat functionality, which is available in Origin and in Origin-powered games. Unencrypted data includes account numbers, session tokens, and the message contents themselves. With this type of data, user accounts could be compromised.
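The practical risk of an unencrypted XMPP stream is that anyone observing the traffic can read the chat stanzas directly, with no cryptanalysis required. As a hedged illustration (the stanza below is a made-up example of the XMPP message format in general, not captured Origin traffic), plain XML parsing is all an eavesdropper needs:

```python
# Illustration only: a hypothetical XMPP message stanza of the kind that
# travels in cleartext when a chat stream is not encrypted with TLS.
import xml.etree.ElementTree as ET

captured = (
    '<message from="alice@example.com" to="bob@example.com" type="chat">'
    '<body>meet at 9</body>'
    '</message>'
)

stanza = ET.fromstring(captured)
sender = stanza.get("from")      # the account identifier is readable
body = stanza.find("body").text  # the message contents are readable

print(sender)  # alice@example.com
print(body)    # meet at 9
```

This is why the XMPP specifications provide STARTTLS for stream encryption; without it, metadata and message bodies alike are exposed to any on-path observer.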
Accusations of spying
Origin's end-user license agreement (EULA) gives EA permission to collect information about users' computers regardless of its relation to the Origin program itself, including "application usage (including but not limited to successful installation and/or removal), software, software usage and peripheral hardware." Initially, the EULA also contained a passage permitting EA to more explicitly monitor activity as well as to edit or remove material at their discretion. A report by the news magazine Der Spiegel covered the allegations. In response to the controversy, EA issued a statement claiming they "do not have access to information such as pictures, documents or personal data, which have nothing to do with the execution of the Origin program on the system of the player, neither will they be collected by us." EA also added a sentence to the EULA stating that they would not "use spyware or install spyware on users' machines," though users must still consent to allowing EA to collect information about their computers.
Legal issues in Germany
According to reports in German newspapers, the German version of Origin's EULA violates several German laws, mainly laws protecting consumers and users' privacy. According to Thomas Hoeren, a judge and professor for information, telecommunication and media law at the University of Münster, the German version of the EULA is a direct translation of the original without any modifications and its clauses are "null and void".
References
External links
2011 software
Android (operating system) software
Cloud gaming
Digital rights management systems
DRM for MacOS
DRM for Windows
Electronic Arts
IOS software
Multiplayer video game services
Online-only retailers of video games
Proprietary software that uses Qt
Software that uses Qt |
39376326 | https://en.wikipedia.org/wiki/24%3A%20Live%20Another%20Day | 24: Live Another Day | 24: Live Another Day (also known as Season 9 or Day 9) is a 24 limited event television series that premiered on May 5, 2014, and concluded on July 14, 2014, airing on Fox. Sky 1 simulcast the premiere on May 6 in the United Kingdom and Ireland but switched to Wednesday nights for the rest of the episodes. It began airing in Australia on Network Ten on May 12, 2014. Set four years after the events of season 8, it adheres to the real time concept of covering the events of a 24-hour period and begins and ends at 11:00 a.m. However, there is a 12-hour time jump within the final episode.
Season overview
Live Another Day takes place four years after the events of season 8. James Heller, now president, is negotiating a treaty in London, where a hacker collective preaching freedom of information has enlisted the help of Chloe O'Brian. Jack Bauer, who has been tracking the activities of Chloe's group while living in exile, resurfaces when he hears of an imminent attempt on Heller's life.
There are two main acts in Live Another Day:
Margot Al-Harazi gains control of six US drones and uses them to attack London.
Cheng Zhi attacks his former country, China, with hijacked American weapons, bringing the two countries to the brink of war.
Major subplots
Jack Bauer disapproves of the group Chloe O'Brian has joined.
Margot Al-Harazi grows suspicious of where her daughter's loyalties lie.
James Heller tries to manage a crisis amid the onset of Alzheimer's disease.
Mark Boudreau opens himself up to blackmail by forging Heller's signature.
Kate Morgan has been led to believe that her husband committed treason by selling state secrets.
The head of the CIA London station, Steve Navarro, is conspiring with the leader of Chloe's group.
CIA analyst Jordan Reed's hunt for answers puts his life into jeopardy.
Circumstances reunite Jack and Audrey after nearly a decade.
America's plans for a treaty are derailed when they lose control of their own weapons.
Cheng Zhi partners with the Russians who are looking for Jack.
Summary
While anti-drone protesters gather outside the American embassy where James Heller is staying, officers at a CIA outpost in London find and apprehend federal fugitive and former agent Jack Bauer. Due to her husband's conviction, Kate Morgan is being forced to hand in her badge. However, she gets herself reinstated when she realizes that Jack is infiltrating the CIA to gain access to Chloe O'Brian. Kate is too late to intervene and Jack breaks Chloe out of interrogation with the help of his friend Belcheck. Still distrustful of Chloe, Jack follows her to Open Cell, an organization that specializes in leaking government documents. He explains that he is on the trail of Derrick Yates, a former Open Cell member who has become involved in an assassination attempt on James Heller. The attempt is revealed to involve drones when he programs an unmanned aerial vehicle to fire on British and American troops. The pilot, Chris Tanner, is falsely arrested on murder charges.
Jack and Chloe learn that Yates' device has been taken by Margot Al-Harazi, a known terrorist trying to avenge the death of her husband. When their attempt to capture her daughter Simone fails, Jack breaks into the American embassy to analyze Tanner's flight key and prove that the threat is imminent. Jack locks himself in a room with three hostages and tries to upload the data to Open Cell's leader, Adrian Cross. Marines break in before the upload finishes but Kate Morgan is able to place him in CIA custody instead. In the President's quarters, Chief of Staff Mark Boudreau expresses concerns that Bauer is a terrorist and drafts an agreement for extraditing him to Russia. Suspecting that dementia has clouded Heller's judgement, he forges the President's signature and vows to protect his wife Audrey from any further pain related to Bauer. Jack is ultimately proven right when Margot Al-Harazi broadcasts a video calling for Heller to turn himself in or face attacks on London from six U.S. drones.
When Simone's husband Navid tries to sabotage the attacks, Margot executes Navid. Leaving her son Ian to pilot the drones, she sends Simone to silence Navid's family. When a missile kills several CIA operatives, Heller authorizes Bauer to go undercover with an arms dealer known for working with Al-Harazi. From transaction records, they are able to track Simone and see that she has been struck by traffic in her pursuit of Navid's niece. When Margot learns that Simone is being interrogated in the hospital, she sends a drone to destroy it. Jack and Kate escape with Simone and convince her to reveal Margot's last known whereabouts. The subsequent raid uncovers enough information to give Chloe access to the drone's camera. Meanwhile, CIA analyst Jordan Reed uncovers evidence that Kate's husband Adam Morgan may have been innocent all along. Station Chief Steve Navarro arranges to have him killed in order to cover up his own involvement in selling intel to China and framing Morgan. The assassination does not go as planned and leads to the deaths of both Agent Reed and the hitman.
With Margot's deadline approaching, Heller decides to turn himself in and put an end to the civilian losses though he first pardons Jack for his crimes. Jack delivers his friend to Wembley Stadium where Margot's drone is waiting but convinces him to turn back when Chloe devises a plan to loop the video feed. Thinking that Heller is still inside, Margot and Ian fire on the stadium and then sink five of the six rogue drones. Upon learning that Heller is alive, Margot and Ian try to attack Waterloo station with the last drone. Jack arrives with a CIA team before this can happen and kills both of them. Jack takes Yates' device back to the CIA to be analyzed. Upon arriving, he learns that Jordan's body has been found and identifies the second body as an assassin working for Navarro. Before Jack can apprehend him, Navarro escapes with the device and delivers it to Adrian Cross. Adrian explains to Chloe that some underground dealings with China are needed to finance their activism and takes her to an Open Cell chapter. They find that all of their colleagues have been murdered by Cheng Zhi, who wants to reprogram the override to start a war between the United States and China. Cheng kills Adrian, kidnaps Chloe and fabricates a torpedo launch order that sinks a Chinese aircraft carrier.
Russian operative Anatol Stolnavich contacts Mark Boudreau about the rendition order. When Boudreau tries to withdraw it, Stolnavich threatens to reveal that the signature has been forged. Boudreau co-operates and gives him access to a frequency used by Bauer. As a result, Jack is attacked by Russians on his way to retrieve the override and Cheng has time to escape. Upon discovering that his encrypted frequency was given to the Russians from within the White House, Bauer confronts Boudreau and tells him that Russia will benefit if the United States and China go to war. Heller immediately arrests Boudreau but then delays custody, allowing Boudreau to assist Jack in the raid of a Russian diplomatic compound. He distracts Stolnavich long enough for Jack and Kate to break in but Stolnavich dies in the ensuing struggle. Audrey meets with a contact of hers, the daughter of a high-ranking Chinese official, hoping to convince her that the naval attacks were perpetrated by Cheng and not the American government. Even though Cheng is unable to stop Chloe from escaping, he uses a sniper and has Audrey's secret service guards killed. He contacts Bauer saying that Audrey will die unless he gets safe passage out of England.
From the files in Stolnavich's compound Jack finds out how Cheng is planning to escape and sends Kate to rescue Audrey without attracting attention. Chloe re-establishes contact with Jack and sets up satellite surveillance of the freighter that Cheng has boarded. While Jack and Belcheck raid the ship, Kate eliminates the sniper and tells them that Audrey is safe. However, a second shooter in the area fires several shots and Audrey dies in Kate's arms. Devastated by her loss, Jack kills all of Cheng's bodyguards and transmits proof of Cheng's whereabouts to President Heller and President Wei. He kills Cheng immediately after the authenticity is verified. As the military advances are called off, Heller is told that his daughter is dead and Jack and Belcheck see that Chloe has gone missing again. Jack receives a phone call from the Russians demanding that he turn himself in to them.
Twelve hours later, Kate resigns from the CIA out of regret and Mark awaits trial for committing treason in his attempt to save Audrey. Heller is left to mourn his daughter as his memories start to fade away, and Jack reluctantly gives himself up to Russian agents in exchange for Chloe's freedom despite being pardoned by Heller.
Characters
Starring
Kiefer Sutherland as Jack Bauer (12 episodes)
Yvonne Strahovski as Kate Morgan (12 episodes)
Tate Donovan as Mark Boudreau (12 episodes)
Mary Lynn Rajskub as Chloe O'Brian (12 episodes)
William Devane as President James Heller (12 episodes)
Gbenga Akinnagbe as Erik Ritter (11 episodes)
Giles Matthey as Jordan Reed (9 episodes)
Michael Wincott as Adrian Cross (10 episodes)
Benjamin Bratt as Steve Navarro (10 episodes)
Kim Raver as Audrey Boudreau (12 episodes)
Guest starring
Production
In May 2013, Deadline Hollywood first reported that Fox was considering a limited-run "event series" for 24 based on a concept by Howard Gordon, after failed efforts to produce the 24 feature film and the cancellation of Kiefer Sutherland's series Touch. David Fury confirmed on Twitter that he would also be involved, pulling "double duty" with Gordon's new series Tyrant. The following week, Fox officially announced 24: Live Another Day, a limited-run series of twelve episodes that would feature the return of Jack Bauer. Fox CEO Kevin Reilly said that the series would essentially represent the twelve "most important" hours of a typical 24 season, with jumps forward between hours as needed. As with the rest of Fox's push into event programming, the production will have "a big scope and top talent and top marketing budgets."
In the press release, Gordon said:
Kiefer Sutherland, who was confirmed to executive produce and star in the new series, added:
In June 2013, it was announced that former 24 director Jon Cassar was signed on as executive producer and director of Live Another Day, directing six of the twelve episodes. The remaining six episodes were given to former 24 director and producer Milan Cheylov, and new 24 directors, Adam Kane and Omar Madha. Executive producers and writers Robert Cochran, Manny Coto and Evan Katz were also announced to return. Sean Callery returned as the music composer for the series.
The writing process began on July 1, 2013, with David Fury pitching the first episode, which was tentatively titled "6:30–7:30". On July 11, 2013, executive producer Brian Grazer announced in an interview that the 24 miniseries would "be a limited series that would then spin off into a series itself. Fox is doing it, Fox studio and Fox network, and we're totally thrilled by that." In October 2013, it was confirmed the series would be set and filmed in London, England, United Kingdom. Pre-production and location scouting by the crew, including Jon Cassar, began in November 2013. The production offices for Live Another Day were based in the Gillette Building in west London, previously used for Red 2. Production began on January 6, 2014.
In a May 2014 press release, Fox billed the eighth episode as the franchise's 200th episode.
Trailer
A promotional video was shot on January 22, 2014, with filming beginning for the series on January 26. The first teaser for the show aired on Sky1 on January 21, 2014, but did not show any new footage. The first American trailer, titled "Street Chaos", followed four 10-second teasers during Super Bowl XLVIII on February 2, 2014, but didn't show any footage from the series. Also, a promotional image was sent to Entertainment Weekly on February 20, 2014.
In March, another promo with actual footage was released, showing the President of the United States arriving in London; Bauer being spotted there on camera by the CIA; and him telling Chloe that "there's no going home" for him. A 20-minute preview of Live Another Day was released by Fox on April 7 and broadcast on May 3.
Casting
Kiefer Sutherland was immediately cast as Jack Bauer on May 13, 2013. Mary Lynn Rajskub was announced as the second official cast member in August 2013, reprising her role as Chloe O'Brian. In October 2013, it was confirmed that Kim Raver and William Devane would reprise their roles as Audrey Raines and James Heller, respectively.
The first new character to be cast was Michael Wincott's hacker character Adrian Cross. One month later, two more characters were added to the cast: CIA agents Erik Ritter and Jordan Reed played by Gbenga Akinnagbe and Giles Matthey respectively. On December 19, 2013, it was announced that three-time Primetime Emmy Award winner Judy Davis had joined the cast as the villain Margot Al-Harazi. However, Davis later exited the role for "personal family matters"; the role was recast with Michelle Fairley. On January 13, at a TCA panel discussing the show, it was announced that Yvonne Strahovski would play CIA Agent Kate Morgan. Benjamin Bratt was cast as her boss Steve Navarro.
On January 21, Tate Donovan was cast as Heller's Chief of Staff and the husband of Audrey Raines, Mark Boudreau. On January 24, Stephen Fry was cast as the British Prime Minister Trevor Davies, later renamed Alastair Davies. On the same day, relatively unknown actor Charles Furness was cast in a "small guest part" as Peter, a member of Chloe's hacker group. On January 26, Ross McCall was revealed to have acted in Live Another Day by Jon Cassar playing Ron Clark, assistant of Mark Boudreau. The next day, John Boyega was announced to be playing drone pilot Chris Tanner. Among the last actors to have his role announced was Colin Salmon playing U.S. General Coburn.
24: Solitary
Solitary is a story extension included in the Live Another Day Blu-ray set which was released on September 30, 2014. It takes place approximately three years after the events of Live Another Day and features the return of Carlos Bernard as Tony Almeida as he attempts to be released from solitary confinement. In Solitary, Tony requests to be moved from solitary confinement to general population, in an interview with a Department of Justice attorney and the prison administrator. Tony explains that he could supply inside information to the government regarding other criminals, such as Mexican cartels and al-Qaeda. When the request is denied, and a guard uncuffs him, he attacks the attorney, throwing her to the ground. During the attack, Tony is able to take her glasses without anyone noticing. Later, the attorney calls a man to confirm that Tony has the plans. Back in his cell, Tony puts on the glasses, which reveal they have escape plans in the lenses.
Episodes
Reception
24: Live Another Day received positive reviews from critics. On review aggregator site Rotten Tomatoes the series has an approval rating of 82% based on 55 reviews, with an average rating of 7.3/10. The site's critical consensus reads, "Filled with strong action sequences, 24: Live Another Day is a return to the formula that made the original series popular – though it also suffers from familiarity and sameness." On Metacritic the series has a score of 70 out of 100, based on 40 critics, indicating "generally favorable reviews".
The season's finale was met with critical acclaim, with reviewers praising the performances by Kiefer Sutherland and William Devane, the mix of fast action and emotionally wrenching content and the skilled use of emphatic silences.
The series was nominated for Best Stunt Team at the 21st Screen Actors Guild Awards and for Best Limited Series and Best Actor at the 5th Critics' Choice Television Awards. It also received three nominations for the 67th Primetime Emmy Awards.
Ratings
Award nominations
Home media releases
24: Live Another Day was released on DVD and Blu-ray in region 1 on and in region 2 on .
See also
List of fictional prime ministers of the United Kingdom
References
External links
2014 American television seasons
American political drama television series
Television shows set in London
Terrorism in television
Television series by 20th Century Fox Television
Works about the Central Intelligence Agency |
12565880 | https://en.wikipedia.org/wiki/Vegas%20Movie%20Studio | Vegas Movie Studio | Vegas Movie Studio (previously Sony Vegas Movie Studio) is a consumer-based nonlinear video editing software application designed for the PC. It is a scaled-down version of Vegas Pro. Movie Studio was formerly called "Sonic Foundry VideoFactory" and then "Sony Screenblast Movie Studio". As of version 13, Vegas Movie Studio is part of Magix GmbH, after Sony officially announced it had sold most of its creative software suite to the German-based company. On 14 February 2017, Magix announced a brand new version, Vegas Movie Studio 14.0, the first stable release of Vegas Movie Studio since 2014 and Magix's first stable release since its acquisition from Sony.
Features
Video features
Unlike its professional counterpart, Movie Studio can edit with only ten video tracks and ten audio tracks (originally it was limited to two video tracks, a title overlay track and three audio tracks). The Platinum Edition, by contrast, can edit with 20 video and 20 audio tracks. It can edit in multiple aspect ratios, including standard 4:3 and 16:9, and it is one of the very few consumer editors that can also edit 24p video (after a manual frame rate setup). It does not have the same advanced compositing tools as Vegas does, and lacks project nesting and masking.
The Platinum Edition of Movie Studio has powerful color correction tools similar to those in Vegas Pro, including a three-wheel color corrector. It also adds HDV and AVCHD editing capabilities, but does not support SD-SDI or HD-SDI formats.
Like Vegas Pro, the Movie Studio versions can also perform DV batch capture, a feature usually found only in high-end video editors. Version 6 also added the ability to capture from Sony Handycam DVD camcorders. However, it cannot capture analog video without the use of a FireWire video converter.
Movie Studio features significantly more effects and transitions than the full version of Vegas does. However, if the user upgrades to the full version of Vegas, then the user still gets to keep those same effects.
Movie Studio supports a wide variety of file formats and codecs and can use "Video for Windows" codecs to support even more.
Audio features
Movie Studio has 13 different audio effects, and the Platinum version adds more, in addition to 5.1 Surround sound mixing and editing. The software is also compatible with Sony's ACID Music Studio software, and an even more cut-down version called ACID Xpress ships with the 1001 Sound Effects CD included.
Other features
With version 7, Vegas Movie Studio Platinum Edition added the ability to export to iPod and Sony PSP, a feature that was originally only available in the full version of Vegas and is becoming increasingly common in consumer-level video editors. Both versions also ship with a cut-down version of Sony's DVD Architect software, called DVD Architect Studio, replacing the Sonic MyDVD program bundled with the software when it was titled as Screenblast Movie Studio.
Sony added "Show Me How" tutorials for users new to the software or digital video editing. Both versions also ship with Sony's 1001 Sound Effects CD (in contrast to Vegas's Limited Edition Sony Pictures Sound Effects CD), which also includes ACID XPress, an even more scaled-down version of their ACID music creation software. Similarly, the product also ships with sample video clips and music loops to enhance the users' home video projects.
Version 9 also added direct upload to YouTube, an increasingly common feature in many consumer-oriented editing programs.
Version 10 added GPU rendering, and allowed movie studio users to benefit from Sony Vegas Pro 9's improved audio stretching and pitch shifting capabilities. It also allowed for a maximum of 20 Tracks (10 video, 10 audio).
In Version 12 and 13, the track limit was doubled to 20 video and 20 audio tracks. Version 13 also allows for editing and rendering projects in 4K video. Version 13 dropped "Vegas" from the name of the program, reserving it exclusively for the professional edition.
Version 14 was the first version after the acquisition. It dramatically increased the maximum track limit to 200 video and 200 audio tracks. Each version since has had this limit. Also in this version, Magix restored "VEGAS" in the name.
References
External links
Official Site
Vegas Movie Studio
Vegas Movie Studio
2002 software |
5509033 | https://en.wikipedia.org/wiki/Delegated%20administration | Delegated administration | In computing, delegated administration or delegation of control describes the decentralization of role-based access-control systems. Many enterprises use a centralized model of access control. For large organizations, this model scales poorly, and IT teams become burdened with menial role-change requests. These requests, often triggered by hire, fire, and role-change events in an organization, can incur high latency times or suffer from weak security practices.
Such delegation involves assigning a person or group specific administrative permissions for an Organizational Unit. In information management, this is used to create teams that can perform specific (limited) tasks for changing information within a user directory or database. The goal of delegation is to create groups with minimum permissions that grant the ability to carry out authorized tasks. Granting extraneous/superfluous permissions would create abilities beyond the authorized scope of work.
One best practice for enterprise role management entails the use of LDAP groups. Delegated administration refers to a decentralized model of role or group management, in which the application or process owner creates, manages and delegates the management of roles. A centralized IT team simply operates the directory service, metadirectory, administrative web interface, and related components.
Allowing the application or business process owner to create, manage and delegate groups supports a much more scalable approach to the administration of access rights.
In a metadirectory environment, these roles or groups could also be "pushed" or synchronized with other platforms. For example, groups can be synchronized with native operating systems such as Microsoft Windows for use on an access control list that protects a folder or file. With the metadirectory distributing groups, the central directory is the central repository of groups.
Some enterprise applications (e.g., PeopleSoft) support LDAP groups inherently. These applications can use LDAP to query the directory for their authorization decisions.
Web-based group management tools — used for delegated administration — therefore provide the following capabilities using a directory as the group repository:
Decentralized management of groups (roles) and access rights by business- or process-owners
Categorizing or segmenting users by characteristic, not by enumeration
Grouping users for e-mail, subscription, and access control
Reducing work process around maintenance of groups
Reproducing groups on multiple platforms and into disparate environments
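"Categorizing or segmenting users by characteristic, not by enumeration" typically means defining membership with an LDAP search filter rather than listing members one by one. A minimal sketch of building such a filter in RFC 4515 string syntax (the attribute names and values are illustrative assumptions, not tied to any particular directory product):

```python
def characteristic_filter(object_class, **attrs):
    """Build an LDAP search filter (RFC 4515 syntax) that selects users
    by attribute values instead of enumerating individual members."""
    clauses = "".join(f"({name}={value})" for name, value in sorted(attrs.items()))
    return f"(&(objectClass={object_class}){clauses})"

# All people in the Finance department at the London site:
flt = characteristic_filter("person", departmentNumber="Finance", l="London")
print(flt)  # (&(objectClass=person)(departmentNumber=Finance)(l=London))
```

A group defined this way stays current automatically: when a user's department attribute changes, the filter picks the change up on the next query, with no per-member maintenance.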
Active Directory
In Microsoft Active Directory, delegation of administrative permissions is accomplished using the Delegation of Control Wizard. Types of permissions include managing and viewing user accounts, managing groups, managing group policy links, generating Resultant Set of Policy, and managing and viewing inetOrgPerson accounts.
A use of Delegation of Control could be to give managers complete control of users in their own department. With this arrangement managers can create new users, groups, and computer objects, but only in their own OU.
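The arrangement above, delegation scoped to a single OU, amounts to a simple rule: a delegated admin's permissions apply only to objects at or below their organizational unit. A minimal model in Python (the OU paths and permission names are illustrative, not Active Directory API calls):

```python
class DelegatedAdmin:
    """Models Delegation of Control: permissions are granted for one OU
    and apply to that OU and everything beneath it, nothing else."""

    def __init__(self, name, ou, permissions):
        self.name = name
        self.ou = ou  # hypothetical slash-separated OU path, e.g. "corp/sales"
        self.permissions = set(permissions)

    def can(self, permission, target_ou):
        # In scope only for the delegated OU itself or a descendant of it.
        in_scope = target_ou == self.ou or target_ou.startswith(self.ou + "/")
        return in_scope and permission in self.permissions

manager = DelegatedAdmin("sales-manager", "corp/sales",
                         ["create_user", "manage_groups"])

print(manager.can("create_user", "corp/sales"))        # True: own OU
print(manager.can("create_user", "corp/sales/emea"))   # True: child OU
print(manager.can("create_user", "corp/engineering"))  # False: out of scope
print(manager.can("reset_domain", "corp/sales"))       # False: not granted
```

The last two checks show the two ways a request fails: the target lies outside the delegated subtree, or the permission was never granted. Granting only the permissions a team needs, within only its own OU, is the minimum-permission goal described earlier.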
See also
Access control
Identity management
Provisioning
RBAC
Reading list
Delegating Authority in Active Directory, TechNet Magazine
Built-in Groups vs. Delegation, WindowsSecurity.Com
Operating system technology
Computer access control
Decentralization
Active Directory |
231561 | https://en.wikipedia.org/wiki/Software%20in%20the%20Public%20Interest | Software in the Public Interest | Software in the Public Interest, Inc. (SPI) is a US 501(c)(3) non-profit organization formed to help other organizations create and distribute free/open-source software and open-source hardware. Anyone is eligible to apply for membership, and contributing membership is available to those who participate in the free software community.
SPI was originally created to allow the Debian Project to accept donations. It now acts as a fiscal sponsor to many free and open source projects.
SPI has hosted Wikimedia Foundation board elections and audited the tally as a neutral third party from 2007 to 2011.
Associated projects
The 40 currently associated projects of SPI are:
0 A.D.
Adélie Linux
ankur.org.in
aptosid
Arch Linux
Arch Linux 32
ArduPilot
Chakra
Debian
FFmpeg
Fluxbox
Gallery
Ganeti
Glucosio
GNUstep
GNU TeXmacs
haskell.org
LibreOffice
MinGW
NTPsec
ns-3
OFTC
Open Bioinformatics Foundation
Open MPI
Open Voting Foundation
OpenEmbedded
OpenSAF
OpenVAS
OpenZFS
PMIx
PostgreSQL
Privoxy
SproutCore
Swathanthra Malayalam Computing
systemd
The Mana World
translatewiki.net
Tux4Kids
X.Org
YafaRay
Board of directors
Its current board is composed of:
President: Michael Schultheiss
Vice-President: Stephen Frost
Secretary: Tim Potter
Treasurer: Martin Zobel-Helas
Board of Directors:
Joseph Conway
Forrest Fleming
Milan Kupcevic
Chris Lamb
Héctor Orón Martínez
Advisors:
Legal counsel — Software Freedom Law Center
Debian Project Leader
PostgreSQL Project Board representative — currently Robert Treat
See also
Other free software umbrella organizations:
Apache Software Foundation (ASF)
Software Freedom Conservancy
References
External links
501(c)(3) organizations
Debian
Free and open-source software organizations
Organizations established in 1997
1997 establishments in New York (state) |
16074854 | https://en.wikipedia.org/wiki/Commodore%20PC%20compatible%20systems | Commodore PC compatible systems | The Commodore PC compatible systems are a range of IBM PC compatible personal computers introduced in 1984 by home computer manufacturer Commodore Business Machines.
Incompatible with Commodore's prior Commodore 64 and Amiga architectures, they were generally regarded as good, serviceable workhorse PCs with nothing spectacular about them, but the well-established Commodore name was seen as a competitive asset.
History
In 1984 Commodore signed a deal with Intel to second-source the Intel 8088 CPU used in the IBM PC, along with a license to manufacture a computer based on the Dynalogic Hyperion. It is unknown whether any of these systems were produced or sold.
In 1984 the first model released, the PC-10, sold for $559 without a monitor. They were sold alongside Commodore's Amiga and Commodore 64c/128 lines of home and graphics computers. The PC10 was comparable in the market to the Blue Chip PC, Leading Edge Model D and Tandy 1000 lines of PC compatibles.
Models
The line consists of the following models:
First Generation:
Commodore PC 5:
Price: $1,395
Motherboard year: 1984
Processor: 4.77 MHz Intel 8088
Standard Memory: 256 KB onboard memory on the motherboard
Optional Memory: Commodore 380065-2 256 KB RAM expansion card, giving a total of 512 KB
Video Card: Hercules GB-101 MDPA (Monochrome Display and Printer Adapter)
Floppy drive: One 360 KB 5.25" drive. Could be expanded with the "Commodore PC910" floppy kit, which adds an extra 360/720 KB 3.5" floppy drive
Hard drive: None
Operating System: MS-DOS 3.2; GW-BASIC 3.2
Ports: 1x RS-232 serial port and 1x Centronics parallel port
Expansion slots: 5x 8-bit XT ISA
Notes: The Commodore PC 5 is a low-cost version of the PC 10 with a monochrome video card.
Commodore PC 10
Price: NOK:18000
Motherboard year: 1984
Processor: 4.77 MHz Intel 8088
Standard Memory: 256 KB onboard memory
Optional Memory: Commodore 380065-2 256 KB RAM expansion card, giving a total of 512 KB
Video Card: ATI CW16800-A graphics solution (MDA, CGA, HGC & Plantronics ColorPlus)
Floppy drive: One Canon 360 KB 5.25" drive
Hard drive: None, but one could be added
Operating System: MS-DOS 2.11
Ports: 1x RS-232 serial port and 1x Centronics parallel port
Expansion slots: 5x 8-bit XT ISA
Notes: The Commodore PC 10 is a mid-range PC 5 with a colour video card
Commodore PC 20
Price: NOK 30000
Motherboard year: 1984
Processor: 4.77 MHz Intel 8088
Standard Memory: 256 KB onboard memory
Optional Memory: Commodore 380065-1 384 KB RAM expansion card, giving a total of 640 KB
Video Card: ATI CW16800-A graphics solution (MDA, CGA, HGC & Plantronics ColorPlus)
Floppy drive: Two Canon 360 KB 5.25" drives
Hard drive: 20 MB hard drive
Operating System: MS-DOS 2.11
Ports: 1x RS-232 serial port and 1x Centronics parallel port
Expansion slots: 5x 8-bit XT ISA
Notes: The Commodore PC 20 is a mid-range PC 10 with a hard drive and an extra floppy drive
Commodore PC 40
Price:
Motherboard year: 1984
Processor: Intel 80286 running at either 6 or 10 MHz, selectable by the user
Standard Memory: 1 MB RAM
Optional Memory:
Video Card: ATI CW16800-A graphics solution (MDA, CGA, HGC & Plantronics ColorPlus)
Floppy drive: 1x 1.2 MB 5.25" drive
Hard drive: 20 MB hard drive
Operating System: MS-DOS 3.2; GW-BASIC 3.2
Ports: 1x RS-232 serial port and 1x Centronics parallel port
Expansion slots: 8x 8-bit XT ISA & 6x 16-bit AT slots
Keyboard: 88-key XT-style keyboard
Cabinet: It has a key lock switch
Notes: The Commodore PC 40 is a 16-bit high-end PC 20 with a better motherboard and CPU. The PC AT is a PC 40 with "AT" added to the name.
1985 "Second Generation":
Commodore PC 10-II
Price:
Motherboard year: 1984
Processor: 4.77 MHz Intel 8088
Standard Memory: 256 KB onboard memory
Optional Memory: Commodore 380065-1 384 KB RAM expansion card, giving a total of 640 KB
Video Card: ATI CW16800-A graphics solution (MDA, CGA, HGC & Plantronics ColorPlus)
Floppy drive: Two Chinon FZ-502LII 360 KB 5.25" drives
Hard drive: None
Operating System: MS-DOS 3.2; GW-BASIC 3.2
Ports: 1x RS-232 serial port and 1x Centronics parallel port
Expansion slots: 5x 8-bit XT ISA
Notes: The Commodore PC 10-II is essentially a PC 10 with 640 KB and an extra floppy drive
Commodore PC 20-II
Price:
Motherboard year: 1984
Processor: 4.77 MHz Intel 8088
Standard Memory: 256 KB onboard memory
Optional Memory: Commodore 380065-1 384 KB RAM expansion card, giving a total of 640 KB
Video Card: ATI CW16800-A graphics solution (MDA, CGA, HGC & Plantronics ColorPlus)
Floppy drive: One Chinon FZ-502LII 360 KB 5.25" drive
Hard drive: None
Operating System: MS-DOS 3.2; GW-BASIC 3.2
Ports: 1x RS-232 serial port and 1x Centronics parallel port
Expansion slots: 5x 8-bit XT ISA
Notes: The Commodore PC 20-II is essentially a PC 10-II with one floppy drive and one hard drive
Commodore PC I
Price:
Description: The Commodore PC 1 is a low-cost, small-size PC, meant for budget home or office use.
Motherboard year: 1987
Processor: Siemens-manufactured Intel 8088 running at 4.77 MHz. (An 8087 FPU can be added to an empty socket.)
Onboard RAM: 512 KB RAM
Optional RAM: 4x 32 KB RAM could be added to four empty slots on the motherboard, giving a total of 640 KB
Video Card: Integrated Paradise PVS 2 (CGA, MDA, HGC and Plantronics)
Disk drive: 1x Chinon F-502L 360 KB 5.25" drive. (An optional 720 KB Commodore 1010 or 1011 can be added to the Amiga-style disk port on the right side)
Hard drive: The "PC 1-20" came with a 3.5" 20 MB hard drive connected to the expansion port
Network: The "PC 1-NET" came with a Novell Ethernet 10 Bit card connected to the expansion port
Options: Expansion box for connecting ISA cards and a hard drive. An additional 8-ohm speaker can be added for sound.
Operating System: MS-DOS 3.20 and GW-BASIC
Ports: VGA, component video, RS-232 serial port, Centronics parallel port
Expansion slots: The Commodore "PCEXP1" is a special expansion cabinet made for the PC 1; it gives 3 additional ISA slots plus an extra 5.25" drive bay
Keyboard: 84-key XT keyboard
Cabinet: Special small form factor inspired by the 128C
PC 60-40: A 16 MHz 386 CPU, 2.5 MB RAM, EGA video card, 40 MB HDD, 1x 1.2 MB 5.25" floppy drive, and 4x 16-bit ISA & 2x 8-bit ISA expansion slots. Original OS: MS-DOS 3.21
PC 60-80: A 16 MHz 386 CPU, 2.5 MB RAM, EGA video card, 80 MB HDD, 2x 1.44 MB 3.5" floppy drives, and 4x 16-bit ISA & 2x 8-bit ISA expansion slots. Original OS: MS-DOS 3.21
1989 "Third Generation":
PC 10-III: 4.77/7.16/9.54 MHz switchable Intel 8088-1 CPU, 640 KB RAM, Paradise PVC4 card with CGA, MDA, Hercules & Plantronics. It features 3x 8-bit ISA slots. OS: MS-DOS 3.2
COLT: A rebranded PC 10-III for the American market.
PC 20-III: Same as the PC 10-III but with a 20 MB HDD.
PC 30-III: A downgraded PC 40-III with a 12 MHz 286 CPU, EGA Wonder 800+ card, two floppy drives and a 20 MB hard disk.
PC 35-III: Identical to the PC 30-III but with a Cyrix 287 XL co-processor added.
PC 40-III: A 12 MHz 286 system with 1 MB RAM, 4 expansion slots, on-board VGA, and a 40 MB HDD. It has a 5.25" 1.2 MB floppy drive. It has 3x 16-bit AT ISA slots and one 8-bit XT ISA slot.
PC 45-III: A 12 MHz AMD 286 with 1 MB of RAM and a VGA graphics card.
PC 50-II: A 16 MHz 386SX with 640 KB RAM, VGA graphics card, one 1.2 MB 5.25" floppy drive and a 40 MB Conner hard disk.
PC 60-III: A 25 MHz 386DX with 2 MB RAM (upgradeable to 18 MB), Paradise 88 VGA card, one Chinon FB-357 1.44 MB 3.5" and one Chinon FZ-506 1.2 MB 5.25" floppy drive. It came in a tower case with a 60 MB to 200 MB hard disk. A 387 FPU can be added.
1991 "SlimLine" series:
Price:
Description: Slimline computer case
Motherboard year: 1991
Processor: Intel 80286 running at 8/16 MHz. (An 80287-16 FPU can be added to an empty socket.)
ROM: 64 KB Phoenix BIOS
RAM: 1 MB onboard standard, expandable to 5 MB
Video Card: VGA with 256 KB of video memory, expandable to 512 KB
Harddrive: 40 MB - (313241-02), 50 MB - (311839-01) and 100 MB - (311840-01)
Expansion slots: 1x 16-bit (expandable to 3x 16-bit + 2x 8-bit by use of a riser card)
286-16: A 16 MHz 286 with 1 MB RAM, VGA video card, 3.5" floppy drive and 2x 16-bit AT expansion slots.
386SX-16: A 16 MHz 386SX with 1 MB RAM, VGA graphics card, 3.5" floppy drive and 5x 16-bit ISA expansion slots.
386SX-25: A 25 MHz 386SX with a Cyrix 387 FPU, 4 MB RAM, Cirrus Logic GD-5402 VGA (512 KB video RAM), 40 MB HDD, 3.5" floppy drive and 5x 16-bit ISA expansion slots.
Unconfirmed Years "Last Generation":
386DX-33: A 33 MHz 386 CPU
486SX-25: A 25 MHz 486 with 4 MB RAM, VGA video, 1x 3.5" drive and a 150 MB HDD
486DX-33c: A 33 MHz 486 with 8 MB RAM, VGA video, 1x 3.5" drive and a 150 MB HDD
Laptops:
C286SX-LT: a 12 MHz 286 with 1 MB RAM
C386SX-LT: a 386 with 2 MB RAM and a 40 MB HDD
References
External links
Richard Lagendijk: CIP - Commodore Info Page
Bo Zimmerman: Commodore turns blue
Bo Zimmerman's Commodore gallery
OLD-COMPUTERS.COM
Brochure comparing a number of Commodore Models
Commodore History Part 6 - The PC Compatibles By The 8-Bit Guy
Brochure for the Commodore PC10-1 and PC10-2 at classic.technology
IBM PC compatibles
CBM hardware |
655694 | https://en.wikipedia.org/wiki/NX%20bit | NX bit | The NX bit (no-execute) is a technology used in CPUs to segregate areas of memory for use by either storage of processor instructions (code) or for storage of data, a feature normally only found in Harvard architecture processors. However, the NX bit is being increasingly used in conventional von Neumann architecture processors for security reasons.
An operating system with support for the NX bit may mark certain areas of memory as non-executable. The processor will then refuse to execute any code residing in these areas of memory. The general technique, known as executable space protection, also called Write XOR Execute, is used to prevent certain types of malicious software from taking over computers by inserting their code into another program's data storage area and running their own code from within this section; one class of such attacks is known as the buffer overflow attack.
The term NX bit originated with Advanced Micro Devices (AMD), as a marketing term. Intel markets the feature as the XD bit (execute disable). The ARM architecture refers to the feature, which was introduced in ARMv6, as XN (execute never). The term NX bit itself is sometimes used to describe similar technologies in other processors.
Architecture support
x86
x86 processors, since the 80286, have included a similar capability implemented at the segment level. However, almost all operating systems for the 80386 and later x86 processors implement the flat memory model, so they cannot use this capability. Those processors had no "executable" flag in the page table entry (page descriptor) until AMD, to make the capability available to operating systems using the flat memory model, added a "no-execute" or NX bit to the page table entry in its AMD64 architecture, providing a mechanism that can control execution per page rather than per whole segment.
Intel implemented a similar feature in its Itanium (Merced) processor—having IA-64 architecture—in 2001, but did not bring it to the more popular x86 processor families (Pentium, Celeron, Xeon, etc.). In the x86 architecture it was first implemented by AMD, as the NX bit, for use by its AMD64 line of processors, such as the Athlon 64 and Opteron.
After AMD's decision to include this functionality in its AMD64 instruction set, Intel implemented the similar XD bit feature in x86 processors beginning with the Pentium 4 processors based on later iterations of the Prescott core. The NX bit specifically refers to bit number 63 (i.e. the most significant bit) of a 64-bit entry in the page table. If this bit is set to 0, then code can be executed from that page; if set to 1, code cannot be executed from that page, and anything residing there is assumed to be data. It is only available with the long mode (64-bit mode) or legacy Physical Address Extension (PAE) page-table formats, but not x86's original 32-bit page table format because page table entries in that format lack the 63rd bit used to disable and enable execution.
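The bit arithmetic described above is simple enough to model directly. The following is a toy sketch only — the helper names are illustrative, the non-NX bits in the example entry value are arbitrary, and real page tables are of course managed by the operating system and MMU:

```python
# Toy model of the x86-64 NX bit: bit 63 of a 64-bit page-table entry.
NX_BIT = 1 << 63

def set_no_execute(pte: int) -> int:
    """Mark the page non-executable by setting bit 63."""
    return pte | NX_BIT

def is_executable(pte: int) -> bool:
    """Code may run from the page only while bit 63 is clear."""
    return (pte & NX_BIT) == 0

pte = 0x0000_0000_0000_1003            # arbitrary present+writable entry, NX clear
assert is_executable(pte)
pte = set_no_execute(pte)
assert not is_executable(pte)          # bit 63 now set: data-only page
assert pte & (NX_BIT - 1) == 0x1003    # the other 63 bits are untouched
```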
Windows XP SP2 and later support Data Execution Prevention (DEP).
ARM
In ARMv6, a new page table entry format was introduced; it includes an "execute never" bit. For ARMv8-A, VMSAv8-64 block and page descriptors, and VMSAv8-32 long-descriptor block and page descriptors, for stage 1 translations have "execute never" bits for both privileged and unprivileged modes, while block and page descriptors for stage 2 translations have a single "execute never" bit (two bits with the ARMv8.2-TTS2UXN feature). VMSAv8-32 short-descriptor translation table descriptors at level 1 have "execute never" bits for both privileged and unprivileged modes, and at level 2 have a single "execute never" bit.
Alpha
As of the Fourth Edition of the Alpha Architecture manual, DEC (now HP) Alpha has a Fault on Execute bit in page table entries with the OpenVMS, Tru64 UNIX, and Alpha Linux PALcode.
SPARC
The SPARC Reference MMU for Sun SPARC version 8 has permission values of Read Only, Read/Write, Read/Execute, and Read/Write/Execute in page table entries, although not all SPARC processors have a SPARC Reference MMU.
A SPARC version 9 MMU may provide, but is not required to provide, any combination of read/write/execute permissions. A Translation Table Entry in a Translation Storage Buffer in Oracle SPARC Architecture 2011, Draft D1.0.0 has separate Executable and Writable bits.
PowerPC/Power ISA
Page table entries for IBM PowerPC's hashed page tables have a no-execute page bit. Page table entries for radix-tree page tables in the Power ISA have separate permission bits granting read/write and execute access.
PA-RISC
Translation lookaside buffer (TLB) entries and page table entries in PA-RISC 1.1 and PA-RISC 2.0 support read-only, read/write, read/execute, and read/write/execute pages.
Itanium
TLB entries in Itanium support read-only, read/write, read/execute, and read/write/execute pages.
z/Architecture
As of the twelfth edition of the z/Architecture Principles of Operation, z/Architecture processors may support the Instruction-Execution Protection facility, which adds a bit in page table entries that controls whether instructions from a given region, segment, or page can be executed.
See also
Executable space protection
References
External links
AMD, Intel put antivirus tech into chips
Microsoft Interviewed on Trustworthy Computing and NX
LKML NX Announcement
Changes to Functionality in Microsoft Windows XP Service Pack 2 Part 3: Memory Protection Technologies
Microsoft Security Developer Center: Windows XP SP 2: Execution Protection
Central processing unit
Operating system security
X86 architecture |
27332303 | https://en.wikipedia.org/wiki/MochiView | MochiView | MochiView (Motif and ChIP Viewer) is software that integrates a genome browser and tools for data and sequence motif visualization and analysis. The software is written in Java, contains a fully integrated JavaDB database, is platform-independent, and is freely available.
Description
MochiView was originally designed as a platform for rapidly browsing, visualizing, and extracting sequence motifs from ChIP-chip and ChIP-Seq data. The software uses a generalized data format that serves other purposes as well, such as the visualization and analysis of RNA-Seq data or the import, maintenance, exploration, and analysis of sequence motif libraries. The MochiView website contains a detailed feature list and demo videos of the software showing smooth panning/zooming, data/gene/sequence/coordinate browsers, and plot interactivity. The software was created by Oliver Homann in the laboratory of Alexander Johnson at the University of California, San Francisco.
References
External links
MochiView website
Bioinformatics software |
21884989 | https://en.wikipedia.org/wiki/NetScreen%20Technologies | NetScreen Technologies | NetScreen Technologies was an American technology company that was acquired by Juniper Networks in a US$4 billion stock-for-stock deal in 2004.
NetScreen Technologies developed ASIC-based Internet security systems and appliances that delivered high-performance firewall, VPN and traffic-shaping functionality to Internet data centers, e-business sites, broadband service providers and application service providers. NetScreen was the first firewall manufacturer to develop a gigabit-speed firewall, the NetScreen-1000.
History
NetScreen Technologies was founded by Yan Ke, Ken Xie, and Feng Deng. Ken Xie, Chief Technology Officer and co-founder was also the CEO until Robert Thomas joined in 1998.
Robert Thomas, NetScreen's president and chief executive officer, came to NetScreen in 1998 from Sun Microsystems, where he was General Manager of Intercontinental Operations for Sun's software business, which includes security, networking, and Internet tools.
Ken Xie left NetScreen in 2000 to found Fortinet, a competing ASIC-based firewall company.
NetScreen acquired its core IPS technology through the purchase of OneSecure, Inc. for US$45 million in stock in 2002. OneSecure was created by Rakesh Loonkar (subsequently the co-founder of Trusteer), and Israeli engineer Nir Zuk, who had been one of Check Point Software’s first employees.
In 2003, NetScreen hired Anson Chen as its vice president of research and development. Anson Chen, a 12-year veteran of Cisco Systems, Inc. and former vice president and general manager of the Network Management and Services Technology Group, led engineering, research and development efforts for NetScreen's entire product line, including its firewall, IPSec virtual private network (VPN) and intrusion detection and prevention technologies. Chen also had functional management responsibility for NetScreen's secure access products.
2015 "unauthorized code" incident
Analysis of the firmware code in 2015 showed that ScreenOS's use of Dual_EC_DRBG made a backdoor key possible; whoever held that key could passively decrypt traffic encrypted by ScreenOS.
In December 2015, Juniper Networks announced that they had discovered "unauthorized code" in the ScreenOS software that underlies their NetScreen devices, present from 2012 onwards. There were two vulnerabilities: one was a simple root password backdoor, and the other changed a point used by Dual_EC_DRBG, so that the attackers presumably held the key to the pre-existing (intentional or unintentional) kleptographic backdoor in ScreenOS and could passively decrypt traffic.
References
Juniper Networks
Defunct companies based in California
Computer companies established in 1997
Computer companies disestablished in 2004
Networking hardware companies
Server appliance
Computer security companies
2004 mergers and acquisitions
2001 initial public offerings |
45313332 | https://en.wikipedia.org/wiki/Ncdu | Ncdu | ncdu (NCurses Disk Usage) is a disk utility for Unix systems. Its name refers to its similar purpose to the du utility, but ncdu uses a text-based user interface under the [n]curses programming library. Users can navigate the list using the arrow keys and delete files that are taking up too much space by pressing the 'd' key. Version 1.09 and later can export the file listing in JSON format.
ncdu was developed by Yoran Heling to learn C and to serve as a disk usage analyzer on remote systems over ssh.
References
External links
git repository for ncdu
Disk usage analysis software
Software using the MIT license
Free software programmed in C
2007 software
Unix file system-related software
Software that uses ncurses |
67395293 | https://en.wikipedia.org/wiki/Michael%20Zyda | Michael Zyda | Michael Zyda (or Michael J. Zyda or Mike Zyda) is an American computer scientist, video game designer, and Professor of Computer Science Practice at USC Viterbi School of Engineering, University of Southern California. He was named an IEEE Fellow in 2019 and an ACM Fellow in 2020 for his research contributions in video game design and virtual reality. He is also the founding director of the Computer Science (Games) degree programs at USC Viterbi. Zyda received his bachelor's degree in bioengineering from the University of California, San Diego, his master's degree in computer science from the University of Massachusetts Amherst, and his doctoral degree in computer science from the University of Washington.
References
External links
Fellows of the Association for Computing Machinery
University of California, San Diego alumni
University of Massachusetts Amherst alumni
Fellow Members of the IEEE
Year of birth missing (living people)
Living people |
5008428 | https://en.wikipedia.org/wiki/Pfs%3AWrite | Pfs:Write | pfs:Write is a word processor created by Software Publishing Corporation (SPC) and published in 1983. It was released for IBM PC compatibles and the Apple II. It includes the features common to most word processors of the day, including word wrapping, spell checking, copy and paste, underlining, and boldfacing; it also offers a few advanced features, such as mail merge. The product was considerably easier to both learn and use than its more fully featured and expensive competitors: WordPerfect, Microsoft Word, and XyWrite.
History
pfs:Write was announced in 1983 and it was part of a family of products released by SPC under the "pfs:" brand (Personal Filing System) which, when installed onto the same computer, combined to form a sort of office suite which included companion products pfs:File in 1980 (a database), pfs:Plan (a spreadsheet), pfs:Report in 1981 (reporting software), and pfs:Graph in 1982 (business graphics software). Other, mostly utilitarian products bearing the "pfs:" brand subsequently emerged, including pfs:Access (for data communications), pfs:Easy Start (a menuing utility), and pfs:Proof (a proofreading utility). Eventually, SPC offered a low- to mid-level desktop publishing product called pfs:Publisher; and it packaged the core word processing, database and spreadsheet products into a suite named pfs:Office.
A Windows 3.0 version, called Professional Write Plus 1.0, was released in 1991.
The last version, Professional Write 3.0 for DOS, was released in 1994.
A Windows version was available to registered users of Professional Write 3.0.
Lotus 1-2-3 integration
The market dominance of Lotus 1-2-3 encouraged SPC to allow its integration with pfs:Write. A user could use pfs:Write for word processing and link to Lotus 1-2-3 for spreadsheet use through the pfs menu system. Some setup was required, but as Lotus was deficient in word processing, this proved popular, especially among users already devoted to 1-2-3.
Reception
Byte in 1984 described pfs:Write 1.1 as "an elementary program for users who don't have time to major in word processing or who have basic needs". It cited "major deficiencies", however, including the inability to easily justify or delete text, poor printed and built-in documentation, and very slow file saves. II Computing listed it fourth on the magazine's list of top Apple II non-game, non-educational software as of late 1985, based on sales and market-share data.
References
Proprietary software
DOS word processors
Windows word processors
1983 software |
8167770 | https://en.wikipedia.org/wiki/Broadcast%20storm | Broadcast storm | A broadcast storm or broadcast radiation is the accumulation of broadcast and multicast traffic on a computer network. Extreme amounts of broadcast traffic constitute a "broadcast storm". It can consume sufficient network resources so as to render the network unable to transport normal traffic. A packet that induces such a storm is occasionally nicknamed a Chernobyl packet.
Causes
Most commonly the cause is a switching loop in the Ethernet network topology (i.e. two or more paths exist between switches). As broadcasts and multicasts are forwarded by switches out of every port, the switch or switches will repeatedly rebroadcast broadcast messages and flood the network. Since the layer-2 header does not support a time to live (TTL) value, if a frame is sent into a looped topology, it can loop forever.
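The consequence of that missing layer-2 TTL can be shown with a toy model in which every switch in a loop re-floods an incoming broadcast frame out of all ports except the one it arrived on. Real topologies, buffering and timing differ; this only illustrates the geometric growth:

```python
def broadcast_copies(ports_per_switch: int, hops: int) -> int:
    """Toy model of a switching loop: with no layer-2 TTL, each switch
    re-floods a broadcast frame out of every port except the one it
    arrived on, so the copy count grows geometrically instead of dying out."""
    copies = 1
    for _ in range(hops):
        copies *= ports_per_switch - 1
    return copies

# Even minimal 3-port switches double the broadcast traffic at every hop:
assert broadcast_copies(3, 10) == 2 ** 10
```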
In some cases, a broadcast storm can be instigated for the purpose of a denial of service (DoS) using one of the packet amplification attacks, such as the smurf attack or fraggle attack, where an attacker sends a large amount of ICMP Echo Request (ping) traffic to a broadcast address, with each ICMP Echo packet containing the spoofed source address of the victim host. When the spoofed packet arrives at the destination network, all hosts on the network reply to the spoofed address. The initial Echo Request is multiplied by the number of hosts on the network. This generates a storm of replies to the victim host, tying up network bandwidth, using up CPU resources or possibly crashing the victim.
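The amplification arithmetic is straightforward: the victim receives one reply per live host for every spoofed request. A small sketch (the 64-byte reply size is an illustrative assumption, not a protocol constant):

```python
def smurf_reply_bytes(hosts_on_network: int, echo_requests: int,
                      reply_size: int = 64) -> int:
    """Every host on the broadcast network answers each spoofed Echo
    Request, so victim-bound traffic scales with the host count."""
    return hosts_on_network * echo_requests * reply_size

# One attacker's 1,000 pings to a broadcast domain with 200 live hosts:
assert smurf_reply_bytes(200, 1000) == 12_800_000  # ~12.8 MB aimed at the victim
```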
In wireless networks, a disassociation packet spoofed with the source address of the wireless access point and sent to the broadcast address can generate a disassociation broadcast DoS attack.
Prevention
Switching loops are largely addressed through link aggregation, shortest path bridging or spanning tree protocol. In Metro Ethernet rings it is prevented using the Ethernet Ring Protection Switching (ERPS) or Ethernet Automatic Protection System (EAPS) protocols.
Filtering broadcasts by Layer 3 equipment, typically routers (and even switches that employ advanced filtering called brouters).
Physically segmenting the broadcast domains using routers at Layer 3 (or logically with VLANs at Layer 2), in the same fashion that switches reduce the size of collision domains at Layer 2.
Routers and firewalls can be configured to detect and prevent maliciously induced broadcast storms (e.g. due to an amplification attack).
Broadcast storm control is a feature of many managed switches in which the switch intentionally ceases to forward all broadcast traffic if the bandwidth consumed by incoming broadcast frames exceeds a designated threshold. Although this does not resolve the root broadcast storm problem, it limits broadcast storm intensity and thus allows a network manager to communicate with network equipment to diagnose and resolve the root problem.
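The per-interval decision such a storm-control feature makes can be modeled simply: forward broadcast traffic while its measured rate stays at or under the configured threshold, drop it otherwise. This is a toy sketch — vendor implementations measure rates in hardware and differ in recovery behavior:

```python
def storm_control(broadcast_bps_samples, threshold_bps):
    """For each measurement interval, decide whether broadcast traffic is
    forwarded or suppressed based on a configured rate threshold."""
    return ["forward" if bps <= threshold_bps else "drop"
            for bps in broadcast_bps_samples]

# A spike above the 100 kbit/s threshold is suppressed for that interval only:
assert storm_control([10_000, 90_000, 500_000, 20_000], 100_000) == \
       ["forward", "forward", "drop", "forward"]
```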
MANET broadcast storms
In a mobile ad hoc network (MANET), route request (RREQ) packets are usually broadcast to discover new routes.
These RREQ packets may cause broadcast storms and compete over the channel with data packets.
One approach to alleviate the broadcast storm problem is to inhibit some hosts from rebroadcasting to reduce the redundancy, and thus contention and collision.
References
Network performance
Network topology
Denial-of-service attacks
Wireless networking |
544476 | https://en.wikipedia.org/wiki/Word%20salad | Word salad | A word salad, or schizophasia, is a "confused or unintelligible mixture of seemingly random words and phrases", most often used to describe a symptom of a neurological or mental disorder. The term schizophasia is used in particular to describe the confused language that may be evident in schizophrenia. The words may or may not be grammatically correct, but are semantically confused to the point that the listener cannot extract any meaning from them. The term is often used in psychiatry as well as in theoretical linguistics to describe a type of grammatical acceptability judgement by native speakers, and in computer programming to describe textual randomization.
In psychiatry
Word salad may describe a symptom of neurological or psychiatric conditions in which a person attempts to communicate an idea, but words and phrases that may appear to be random and unrelated come out in an incoherent sequence instead. Often, the person is unaware that he or she did not make sense. It appears in people with dementia and schizophrenia, as well as after anoxic brain injury. In schizophrenia it is also referred to as schizophasia. Clang associations are especially characteristic of mania, as seen in bipolar disorder, as a somewhat more severe variation of flight of ideas. In extreme mania, the patient's speech may become incoherent, with associations markedly loosened, thus presenting as a veritable word salad.
It may be present as:
Clanging, a speech pattern that follows rhyming and other sound associations rather than meaning
Graphorrhea, a written version of word salad that is more rarely seen than logorrhea in people with schizophrenia.
Logorrhea, a mental condition characterized by excessive talking (incoherent and compulsive)
Receptive aphasia, fluent in speech but without making sense, often a result of a stroke or other brain injury
Used deliberately
Narcissistic word salad refers to a type of purposefully confusing speech, using circular reasoning, logical fallacies and other rhetorical devices in order to disorient or manipulate another person. Anti-social and narcissistic personalities employ word salad in order to gaslight their targets.
In computing
Word salad can be generated by a computer program for entertainment purposes by inserting randomly chosen words of the same type (nouns, adjectives, etc.) into template sentences with missing words, a game similar to Mad Libs. The video game company Maxis, in their seminal SimCity 2000, used this technique to create an in-game "newspaper" for entertainment; the columns were composed by taking a vague story-structure, and using randomization, inserted various nouns, adjectives, and verbs to generate seemingly unique stories.
Another way of generating meaningless text is mojibake, also called Buchstabensalat ("letter salad") in German, in which an assortment of seemingly random text is generated through character encoding incompatibility, with one set of characters replaced by another; the effect is more pronounced in languages where each character represents a word, such as Chinese.
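The encoding mismatch behind mojibake is easy to reproduce: interpreting UTF-8 bytes as if they were Latin-1 turns every non-ASCII character into two wrong ones:

```python
text = "héllo wörld"
# Decode the UTF-8 byte sequence as if it were Latin-1:
garbled = text.encode("utf-8").decode("latin-1")
assert garbled == "hÃ©llo wÃ¶rld"   # each accented letter became two wrong ones

# The damage is reversible as long as no byte has been lost:
assert garbled.encode("latin-1").decode("utf-8") == text
```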
More serious attempts to automatically produce nonsense stem from Claude Shannon's seminal 1948 paper A Mathematical Theory of Communication, in which progressively more convincing nonsense is generated: first by choosing letters and spaces at random, then according to the frequency with which each character appears in some sample of text, then respecting the likelihood that the chosen letter appears after the preceding one or two in the sample text, and then applying similar techniques to whole words. Its most convincing nonsense is generated by second-order word approximation, in which words are chosen by a random function weighted to the likelihood that each word follows the preceding one in normal text:
The Head And In Frontal Attack On An English Writer That The Character Of This Point Is Therefore Another Method For The Letters That The Time Of Who Ever Told The Problem For An Unexpected.
Markov chains can be used to generate random but somewhat human-looking sentences. This is used in some chat-bots, especially on IRC-networks.
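A minimal word-level Markov chain of the kind such bots use can be sketched as follows. The function names are illustrative, and the training text is just the fragment from Shannon's example quoted above:

```python
import random
from collections import defaultdict

def build_chain(words):
    """Record, for every word, the words observed to follow it."""
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, start, length, seed=0):
    """Walk the chain: each word is drawn at random from the followers of the last."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and chain.get(out[-1]):
        out.append(rng.choice(chain[out[-1]]))
    return " ".join(out)

# Trained on the word sequence from Shannon's second-order example above:
sample = ("the head and in frontal attack on an english writer "
          "that the character of this point").lower().split()
chain = build_chain(sample)
nonsense = babble(chain, "the", 8)   # locally plausible, globally meaningless
```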
Nonsensical phrasing can also be generated for more malicious reasons, such as the Bayesian poisoning used to counter Bayesian spam filters by using a string of words which have a high probability of being collocated in English, but with no concern for whether the sentence makes sense grammatically or logically.
See also
Similar textual productions or phenomena
Dissociated press, a computer program that applies a Markov chain to generate word salad
Gibberish, nonsensical language
Lorem ipsum, test text which does not have any meaning
Nonsense verse, verse which is nonsensical
Paragrammatism, inability to produce or create grammatically correct sentences
Thought disorder, disorder of thought
Other
Glossolalia, phenomenon in which people speak in languages which are unfamiliar to them
Mad Libs, a phrasal template word game that sometimes results in word salad
Scat singing, vocal improvisation with nonsensical words
References
External links
Medical signs
Random text generation
Nonsense |
8560647 | https://en.wikipedia.org/wiki/Sentry%202020 | Sentry 2020 | Sentry 2020 is a commercial software program for "transparent" disk encryption for PC and PDA. It has two compatible versions, one for desktop Windows XP and one for Windows Mobile 6.5.3, allowing the same encrypted volume to be used on both platforms.
The latest versions were released in February 2011; Windows Vista is the most recent OS supported.
See also
LibreCrypt - an alternative system which also works on both PC and PDAs
Disk encryption
Disk encryption software
Comparison of disk encryption software
External links
Official Sentry 2020 Website
Cryptographic software
Windows security software
Disk encryption
Cross-platform software |
2039956 | https://en.wikipedia.org/wiki/David%20S.%20Touretzky | David S. Touretzky | David S. Touretzky is a research professor in the Computer Science Department and the Center for the Neural Basis of Cognition at Carnegie Mellon University. He received a BA in Computer Science at Rutgers University in 1978, and earned a master's degree and a Ph.D. (1984) in Computer Science at Carnegie Mellon University. Touretzky has worked as an Internet activist in favor of freedom of speech, especially what he perceives as abuse of the legal system by government and private authorities. He is a notable critic of Scientology.
Research
Touretzky's research interests lie in the fields of artificial intelligence, computational neuroscience, and learning. This includes machine learning and animal learning, and in particular neural representation of space in rodents (e.g., in the hippocampus) and in robots. In 2006, he was recognized as a Distinguished Scientist by the Association for Computing Machinery.
Criticism of Scientology
Since the 1990s, Touretzky has worked to expose the actions of the Church of Scientology. He sees the actions of the organization as a threat to free speech, and he has taken a prominent part in Internet-based activism to oppose it, also appearing regularly as a critic in radio and print. He has also worked to expose what he sees as dangerous and potentially life-threatening treatments provided by Narconon, the Scientology-based drug rehabilitation program. He maintains a Web site named Stop Narconon, which archives media articles critical of the program. Dr. Touretzky's research into Narconon was a primary source of information for a series of San Francisco Chronicle newspaper articles criticizing Narconon on June 9 and June 10, 2004 that ultimately led to the organization's program being rejected by the California school system in early 2005.
Touretzky has undertaken extensive research into the secret upper levels of Scientology, and he has made this information available to the public on the OT III Scholarship Page (concerning Xenu) and the NOTs Scholars Page (concerning the higher Operating Thetan levels). These pages, he states, are academic studies of Scientology's texts, and the proprietary materials are therefore legally available due to careful application of the academic fair use provisions of copyright law. The Church has failed in their attempts to have them removed, after repeatedly threatening Touretzky with lawsuits and filing complaints against him with Carnegie Mellon University. Carnegie Mellon, in turn, has issued statements in support of Professor Touretzky, noting that his criticism of Scientology is a personal affair and not the opinion of the University itself.
Touretzky has been the object of public attacks by the Church of Scientology, including various "dead agent" campaigns against him. He has been accused of religious bigotry, racism, misogyny, misuse of government funds, support for terrorism, and collusion with the pharmaceutical industry, among other misdeeds.
Free speech activism
David Touretzky is an Internet free speech activist. He has supported several movements in what he perceives as abuse of the legal system by government and private authorities.
In 2000, Touretzky testified as an expert witness for the defense in Universal City Studios et al. v. Reimerdes et al., a suit brought by seven motion picture studios against the publishers of 2600: The Hacker Quarterly (the case name refers to Shawn Reimerdes, an unrelated defendant who settled prior to trial.) The suit concerned the publication of DVD decryption software known as DeCSS, which the plaintiffs asserted was illegal under the Digital Millennium Copyright Act. Dr. Touretzky testified as an expert in computer science on the expressive nature of computer code, and convinced the court that code was indeed speech. Touretzky also created an online gallery of various renditions of the DeCSS software. Readers sent in their own renditions of the decryption algorithm, including a mathematical description, a haiku, and a square dance.
In reaction to the federal prosecution and eventual imprisonment of 18-year-old political activist Sherman Austin for hosting bomb-making instructions entitled Reclaim Guide on his web site, Dr. Touretzky provided a mirror on his Carnegie Mellon website for more than two years, although he acknowledged on the website that his own reposting of the information did not violate the plain language of the statute under which Austin was convicted. In May 2004, to avoid harassment of the university and controversy in the media, Dr. Touretzky moved the mirror from the Carnegie Mellon server to a private site.
In 2011, Touretzky began hosting a mirror of the website of George Hotz, containing executable files and instructions facilitating the jailbreaking of the Sony PlayStation 3, after Sony filed lawsuits against Hotz and other hackers aiming to utilize the takedown provisions of the DMCA to remove the content from the Internet.
Publications
Books
David S. Touretzky, The Mathematics of Inheritance Systems (Research Notes in Artificial Intelligence) , Los Altos, California: Morgan Kaufmann, 1986. .
David S. Touretzky, Common Lisp: A Gentle Introduction to Symbolic Computation, Redwood City, California: Benjamin Cummings, 1990. . Out of print, but electronic versions are available.
David S. Touretzky, Common Lisp: A Gentle Introduction to Symbolic Computation, Mineola, New York: Dover Publications, Inc., 2013. . The Dover edition, first published in 2013, is a revised republication of work originally published by The Benjamin/Cummings Publishing Company, Inc., in 1990.
Articles
David S. Touretzky, "Viewpoint: Free speech rights for programmers", Communications of the ACM 44(8):23–25, August 2001. , extended version. (On DeCSS.)
David S. Touretzky and Peter Alexander, "A church's lethal contract", Razor, 2003. (On Scientology.)
Other
David S. Touretzky et al., "Gallery of CSS descramblers".
Advances in Neural Information Processing Systems 8: Proceedings of the 1995 Conference (editor)
Advances in Neural Information Processing Systems 7: Proceedings of the 1994 Conference (editor)
Proceedings of the 1993 Connectionist Models Summer School (co-author)
Connectionist Models: Proceedings of the 1990 Summer School (co-author)
Proceedings of the 1988 Connectionist Models (co-author)
References
Further reading
Relating to DeCSS and MPAA v. 2600:
MPAA v. 2600, transcript of trial, day 6, July 25, 2000. (Complete text of Dr. Touretzky's DeCSS testimony.)
Damian Cave, "A bug in the legal code?", Salon.com, September 13, 2000.
Declan McCullagh, "A thorn in Hollywood's side", Wired, March 20, 2001.
David F. Gallagher, "Movie industry frowns on professor's software gallery", The New York Times, March 30, 2001.
David P. Hamilton, "Critics of DVD-copyright ruling argue constitution protects posting in all forms", The Wall Street Journal, April 12, 2001.
Relating to Dr. Touretzky's mirror of bomb-making instructions originally hosted by Sherman Austin:
Sarah Hennenberger, "Author of explosives guide web site in court", The Tartan (Carnegie Mellon campus newspaper), 2002.
Karen Welles, "CMU professor's web site causing controversy, site offers info on bomb-making", Target 11, WPXI, Pittsburgh, Pennsylvania, May 2, 2003.
John Middleton, "Ethics and tax dollars", Citizens Against Government Waste (an organization backed by Microsoft and the tobacco industry ), May 1, 2004.
External links
In the media
Referred to in discussion of Will Smith's L. Ron Hubbard Middle School in "Scientology is focus of flap over Will Smith's new school", by Carla Rivera, Los Angeles Times, June 29, 2008 and further commentary on Gawker: "L. Ron Hubbard Middle School Not An Indoctrination Center, Says Scientologist Founder Will Smith", June 29, 2008.
Artificial intelligence researchers
Critics of Scientology
Carnegie Mellon University faculty
Living people
Free speech activists
Scientology and the Internet
Rutgers University alumni
Carnegie Mellon University alumni
Lisp (programming language) people
Year of birth missing (living people) |
21618630 | https://en.wikipedia.org/wiki/Network%20Caller%20ID | Network Caller ID | Network Caller ID (NCID) is an open-source client/server network Caller ID (CID) package.
NCID consists of a server called ncidd (short for NCID daemon), a universal client called ncid, and multiple client output modules and gateways. The server, ncidd, monitors either a modem, device or gateway for the CID data. The data is collected and sent, via TCP, to one or more connected clients.
Many devices, including smartphones, and services can detect caller ID information. An NCID gateway collects CID data from these other sources and passes it on to the main NCID server. From there the CID data is distributed to all connected clients, just like CID data collected from a traditional modem. One example of a non-modem device is a VoIP (Voice over IP) service that collects CID data as SIP packets. Another example is the Whozz Calling series of Ethernet Link devices that obtain CID information from multiple POTS (Plain old telephone service) lines.
NCID supports messages. Clients can send a one line message to all connected clients.
The client can also be used to push CID to other computers and devices with output modules.
Various clients are available on numerous platforms, including Android, iOS, Linux, macOS and Windows.
Protocol
The NCID protocol is simple, human-readable ASCII text consisting of field pairs—a field label and its field data—delimited by asterisk characters. Transmission between the NCID server and its clients is done via TCP/IP, usually over port 3333. Additional field pairs have been added as the NCID server has been enhanced with new features and support for more devices.
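A short illustration of that format (the specific labels DATE, TIME, NMBR, and NAME and the sample values are assumptions based on the description above, not a verbatim capture from an NCID server):

```python
def parse_ncid(line):
    """Parse an asterisk-delimited NCID line into (line_type, {label: value}).

    The labels used below are illustrative; real servers may send more pairs."""
    line_type, _, rest = line.partition(":")
    fields = rest.strip().strip("*").split("*")
    # fields alternate label, value, label, value, ...
    pairs = dict(zip(fields[::2], fields[1::2]))
    return line_type.strip(), pairs

msg = "CID: *DATE*01012025*TIME*1337*NMBR*5551234567*NAME*John Doe*"
kind, info = parse_ncid(msg)
print(kind, info["NMBR"], info["NAME"])   # CID 5551234567 John Doe
```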
List of input sources
Hardware that can supply caller ID data to the NCID server, either by the NCID server accessing the device directly (RS232 serial port or USB) or indirectly via NCID Gateways (scripts and programs included with the NCID package).
Modems
AT-compatible modems expect Telcos to send caller ID data as either Single Data Message Format (SDMF) or Multiple Data Message Format (MDMF). The modem then decodes the data stream into human readable text, which the NCID Server then parses. If a modem supports it, the NCID Server can also decode the raw SDMF or MDMF data stream.
Note A: RING means ring only, no Caller ID, no hangup. CID means Caller ID and simple hangup. FAX and VOICE mean their respective hangup options. Unless otherwise noted, the presence of VOICE indicates the modem will use the default NumberDisconnected.rmd (raw modem file) distributed with NCID.
Note B(1): Zoom and TRIXES. Prior to NCID version 0.89, FAX hangup was not a configurable option, and patches adding it circulated in blog and forum posts. Starting with NCID version 0.89, FAX hangup is a configurable option, so the patches are no longer necessary. The NCID developers have been unable to contact the author of the Murphy 101 blog to have the blog updated.
Note B(2): Zoom 3095 USB modems appear to be particularly sensitive to power levels. A commonly reported symptom is having to unplug and re-plug the modem into the USB port to get it to work.
Note C: US Robotics 5637. Tested with Fedora, Raspberry Pi and Ubuntu. Connected to the UK British Telecom and US caller ID systems. FAX hangup will not hang up the line; it will disconnect the modem and abort ncidd. Several users report problems using this modem with the Raspberry Pi in particular.
Note D: 3Com 3CP2976. Linux utility lspci reports "04:01.0 Serial controller: 3Com Corp, Modem Division 56K FaxModem Model 5610 (rev 01)".
Note E: Works on several Linux distros. Confirmed to work on a Raspberry Pi 3 running Ubuntu MATE and a Pi 3 B running Raspbian Jessie, but does not work if the Raspberry Pi's power supply cannot deliver 2 amps.
Note F: Works on several Linux distros. Curiously, StarTech says it has a Conexant CX93010 chip, but the one tested responds with CX93001.
Note G: Caller ID is intentionally disabled by the vendor via an EEPROM patch. It can be re-enabled in any CX93001-based modem via a simple RAM patch issued after the ATZ command: AT!4886=00 for Bell FSK countries, AT!4886=01 for V23 FSK (Japan), AT!4886=02 for ETSI FSK (France, Italy, Spain), AT!4886=03 for SIN 227 (UK), and AT!4886=05 for ETSI DTMF. Sometimes AT!4892=FF may additionally be required.
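The country-specific patch commands in Note G amount to a small lookup table. A sketch of assembling the re-enable sequence (the dictionary keys and function name are invented for illustration; actually sending the commands to a serial port is out of scope here):

```python
# CX93001 caller-ID RAM-patch commands from Note G, keyed by signalling standard.
CID_PATCH = {
    "bell_fsk":  "AT!4886=00",   # Bell FSK countries
    "v23_fsk":   "AT!4886=01",   # V23 FSK (Japan)
    "etsi_fsk":  "AT!4886=02",   # ETSI FSK (France, Italy, Spain)
    "sin227":    "AT!4886=03",   # SIN 227 (UK)
    "etsi_dtmf": "AT!4886=05",   # ETSI DTMF
}

def cid_enable_sequence(standard, extra_patch=False):
    """Commands to issue, in order, to re-enable caller ID after a reset."""
    cmds = ["ATZ", CID_PATCH[standard]]   # the RAM patch must follow ATZ
    if extra_patch:
        cmds.append("AT!4892=FF")         # sometimes additionally required
    return cmds

print(cid_enable_sequence("sin227"))   # ['ATZ', 'AT!4886=03']
```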
NIETO
Thomas Glembocki's entry won Honorable Mention in the 2007 Circuit Cellar WIZnet Ethernet design contest for his project NIETO: An NCID and NTP Client.
NCIDdisplay, a large homebrew LED display for NCID (Network Caller ID)
Consumer network routers with embedded NCID Server
References
External links
NCID Homepage
Communication software |
390406 | https://en.wikipedia.org/wiki/GPL%20linking%20exception | GPL linking exception | A GPL linking exception modifies the GNU General Public License (GPL) in a way that enables software projects which provide library code to be "linked to" the programs that use them, without applying the full terms of the GPL to the using program. Linking is the technical process of connecting code in a library to the using code, to produce a single executable file. It is performed either at compile time or run-time in order to produce functional machine-readable code. The Free Software Foundation states that, without applying the linking exception, a program linked to GPL library code may only be distributed under a GPL-compatible license. This has not been explicitly tested in court, but linking violations have resulted in settlement. The license of the GNU Classpath project explicitly includes a statement to that effect.
Many free software libraries which are distributed under the GPL use an equivalent exception, although the wording of the exception varies. Notable projects include ERIKA Enterprise, GNU Guile, the run-time libraries of GNAT, GNU Classpath and the GCC Runtime Library Exception.
Compiler runtime libraries also often use this license modification or an equivalent one, e.g. the libgcc library in the GNU Compiler Collection, as well as all libraries of the Free Pascal project.
In 2007, Sun Microsystems released most of the source code to the class libraries for the Java SE and Java EE projects under version 2 of the GPL license plus the Classpath linking exception, and used the same license as one possible license for their enterprise server GlassFish and for their NetBeans Java IDE.
Version 3 of the GNU Lesser General Public License (LGPL) is likewise constructed as an exception to the GPL.
The Classpath exception
The GNU Classpath project provides an example of the use of such a GPL linking exception. The GNU Classpath library is distributed under the GPL with an exception that permits linking it with independent modules to produce an executable, regardless of the license terms of those modules, without the GPL's terms extending to the independent modules.
As such, it can be used to run, create and distribute a large class of applications and applets. When GNU Classpath is used unmodified as the core class library for a virtual machine, compiler for the Java language, or for a program written in the Java programming language it does not affect the licensing for distributing those programs directly.
The GNU Lesser General Public License
While version 2.1 of the LGPL was a standalone license, the current LGPL version 3 is constructed as a set of additional permissions applied to the GPL.
Compared to the GNU Classpath license above, the LGPL formulates more requirements to the linking exception: licensees must allow modification of the portions of the library they use and reverse engineering (of their software and the library) for debugging such modifications.
See also
Free software licence
GNAT Modified General Public License
OpenSSL Exception
Software using the GPL linking exception (category)
GPL font exception
References
External links
GNU Project
Software licenses |
69662864 | https://en.wikipedia.org/wiki/Dylan%20Field | Dylan Field | Dylan Field (born 1992) is an American technology executive and co-founder of Figma, a web-based vector graphics editing software company. Field founded Figma in 2012 with Evan Wallace, whom he had met while the two were computer science students at Brown University. In 2012, Field received a Thiel Fellowship—a $100,000 grant conditioned on his leaving school to begin working full-time on the company. Field moved to San Francisco with Wallace, where the two spent four years preparing the software for its first public release in 2016.
In 2015, Field was named to the Forbes 30 under 30 list. As of 2021, a Forbes estimate places Field's net worth at just under $1 billion.
Early life and education
Childhood
Field grew up in Penngrove, California. Field was an only child, named after the poet Dylan Thomas. His father worked as a respiratory therapist at Santa Rosa Memorial Hospital and his mother as a resource specialist teacher at Thomas Page Elementary School.
As a child, Field was adept at math, learning algebra at age six. Field's father told a Santa Rosa area newspaper in 2012 that Dylan found middle school so boring that "he mostly hung out with a janitor, who was kind of a math savant." Field was interested in computer science from a young age and participated in FIRST Robotics. He also participated in the arts as a child, acting with credits in TV ads for eToys.com and for Windows XP, and taking an interest in design starting in middle school.
Field attended high school at Technology High School, a magnet school for science, technology, engineering, and math on the campus of Sonoma State University. While in high school, Field built robots and websites for friends. He also worked with social media researcher Danah Boyd, who ultimately wrote one of Field's letters of recommendation for college.
College
In 2009, Field enrolled at Brown University, where he studied computer science. Field was an involved member of Brown's computer science department: In 2011, he organized a hackathon in which 150 students participated, and starting in late 2011, he co-chaired Brown's CS Departmental Undergraduate Group.
While attending Brown University, Field interned at LinkedIn and the news-sharing startup Flipboard. At LinkedIn, Field helped devise a social-impact program. The following summer, he was a software engineering intern at Flipboard. Afterward, Field began to doubt his plan to major in computer science and math, so he took the spring semester of his junior year off to pursue a six-month internship at Flipboard in Palo Alto, this time as a technical product manager.
Around that time, Field met Evan Wallace, another computer science undergraduate at Brown; the two decided they wanted to start a company together. One year ahead of Field at Brown, Wallace studied graphics and was a TA for the CS department.
During a semester away from Brown, Field applied to the Thiel Fellowship, a $100,000 (USD) grant awarded to young entrepreneurs by investor Peter Thiel on the condition that they drop out of college for at least two years. Field's parents were initially not supportive. Field recalled in 2012, "They totally did not want me to apply." His father told the same interviewer, "Pretty much everything we earned went to education." In its second year, the Thiel Fellowship had attracted 500 student applicants; 40 finalists were named and 20 were ultimately selected. Field was awarded the Thiel Fellowship in May 2012 and dropped out of Brown to accept it.
Field had originally intended to pursue a degree in math and computer science and graduate after four years. Field said in a 2012 interview that Brown had been his "dream school" but that he "wasn't feeling like [he] was getting as much out of it as before." Field said in that interview that he intended to go back to Brown one day, noting the school allowed leaves of absence for up to five years.
Career
Starting Figma (2012)
Field was named a Thiel Fellow in 2012, earning him $100,000 in exchange for taking a leave of absence from college. The Thiel Fellowship began in 2011 and, at the time, was designed to select 20 "creative and motivated young people" under the age of 20 each year. Recipients were given $100,000 each to leave college for two years and work on their ideas as startup companies. Field viewed it as "almost like an independent study, just you don't get course credit, it's a little bit longer and you get paid." An executive with the Thiel Fellowship commented on their selection of Field: "He has a wonderful blend – he is obviously technically very talented – but he also has a sense of intuition for the art that he will use in his current project, which is blending art and engineering."
In summer 2012 Field co-founded Figma with Evan Wallace, who joined Field in California after completing his degree in computer science that spring.
Field's original objective was to "make it so that anyone can be creative by creating free, simple, creative tools in a browser." Field and Wallace tried many different ideas, including software for drones and a meme generator. The company was described in a 2012 article by The Brown Daily Herald more vaguely as "a technology startup that will allow users to creatively express themselves online." That article reported that the company's first ideas revolved around 3D content generation, and subsequent ideas focused on photo editing and object segmentation.
Early challenges (2012–2015)
Field's inexperience in leadership created challenges both in leading Figma's early team and in fundraising. A beta version of Figma's first product took years to launch, and many frustrated employees quit before it did. Field said in a 2021 interview, "I was just not a very good manager when I started Figma. I was an intern before that, so I had a lot to learn. I was always very optimistic; I thought that shipping was right around the corner, so I wasn’t setting expectations correctly."
At one point early on in Figma's existence, Field said he was faced with a potential exodus of disaffected employees. A 2021 article reported that the situation had grown dire enough that the senior members of his team eventually "staged a sort of managerial intervention." Field described it: "It was like, 'You need to get some help.' Afterward I took a few days away from the office. It was just hard."
Raising funding was a challenge for similar reasons. Field told a reporter from Business Insider that, when meeting with investors in 2013–2014, the company wasn't clear about what product it was building and what problems that product would solve. Field recalled that he experienced a "wake up call" when investor John Lilly turned down the chance to invest in Figma's seed round and said, "I don't think you know what you're doing yet." Field sought out further advice and improved the company's pitch; Lilly ultimately led Figma's $14 million USD funding round in December 2015, its largest up until that point.
Post-product launch (2015–present)
Figma launched its first beta product in late 2015, its first public product in late 2016, and its first paid product in 2017. Field told the Business Insider reporter that its long availability as a free product helped it acquire its first customers. Figma's initial product was met with mixed reviews. In a 2021 interview, Field recalled one of the comments on Designer News reading, "If this is the future of design, I’m changing careers."
In April 2020, Figma raised venture financing that valued the company at $2 billion USD. By then, Figma had acquired customers including Microsoft, Airbnb, GitHub, Square, Zoom, and Uber.
A 2021 Forbes profile reported that Field spent most of the COVID-19 pandemic "listening to Figma's users": reading customer support tickets, responding to users on Twitter, and visiting clients in person in Ukraine and Nigeria. Field said of his habit of reading customer feedback: "Not all of them are happy, because here's this thing they want fixed, and that gives me a pulse on what's going on. And the people that are happy, that's when I get really stoked. And that motivates me so much."
In 2021, the company was valued at $10 billion USD in a subsequent round of financing. Forbes reported in 2021 that the company had $75 million USD in revenue in 2020; that Joe Biden's presidential campaign managed all of its visual assets in Figma; and that "when toilet paper ran out across the U.S. in 2020, Kimberly-Clark drafted reorder forms using Figma's tools." Forbes estimated as a result of this valuation that Field and co-founder Evan Wallace were both "near-billionaires."
Other activities
Field is an angel investor in venture-backed startups and an NFT collector. Field purchased his first CryptoPunk NFT in January 2018. He later described himself thinking at the time, "this is probably the stupidest thing I've ever done." In 2021 it was reported that Field sold a CryptoPunk NFT for $7.5 million USD, the highest ever sale price for a CryptoPunk. In the days afterward, Field discussed the sale on The Good Time Show, a popular Clubhouse show, comparing the sold NFT to "a digital Mona Lisa".
This was Field's second reported sale, after a $1.5 million USD sale in February 2021. As of March 2021, Field owned 11 other CryptoPunks as well as NFTs from Autoglyphs and Beeple. At that time, Forbes reported Field's profits from collecting to be $9.5 million USD. Forbes reported in 2021 that Field was an angel investor in OpenSea, an NFT marketplace startup.
Personal life
Field is married. In April 2021, Field's wife Elena was pregnant with their first child.
Accolades
Field has received several accolades in connection with his co-founding of Figma. In 2015, Field was named to the Forbes 30 under 30 list. In 2019, Field was named an INC Rising Star. In 2020, Field was named one of Business Insider's "10 people transforming the technology industry."
References
1992 births
Brown University alumni
American technology company founders
Thiel fellows
Living people |
43011132 | https://en.wikipedia.org/wiki/Lovers%20in%20a%20Dangerous%20Spacetime | Lovers in a Dangerous Spacetime | Lovers in a Dangerous Spacetime is a space shooter video game developed by Asteroid Base for Microsoft Windows, OS X, PlayStation 4, Linux, Xbox One, and Nintendo Switch. The project is part of the ID@Xbox program. The game's title is a reference to the Bruce Cockburn song "Lovers in a Dangerous Time".
Gameplay
The game can be played alone or with two to four players. The players pilot a spaceship with a variety of stations located inside it. These stations control the ship's weapons, engine, shield, Yamato cannon, and map. Each player controls only a single avatar (as well as commanding the AI pet in single-player mode), and thus must constantly move from station to station in order to balance flying the ship, protecting it from damage, and attacking enemies. During the course of gameplay, gift boxes can be discovered which may contain gems. These gems can be attached to the stations, giving them new, enhanced powers.
The game consists of four campaigns; each contains four levels and a boss fight. The goal of the regular levels is to find and rescue an assortment of captured creatures including bunnies, frogs, foxes, and ducks. After capturing five such creatures in a level, a heart-shaped portal to the next level is unlocked and the players may enter it to complete the level. A few levels feature an alternative gameplay mode in which a special engine is attached to the ship which the players must protect as it warps the ship to a new area. Up to ten creatures may be rescued per level. Creatures saved count towards improving the effectiveness of the ship by allowing two gems per station or unlocking new ship layouts.
Development
Designer Matt Hammill described the development of Lovers as "almost an accident", having wanted to create a game for a game jam that "was supposed to be this small three-day thing". However, after the jam was over the development team wanted to continue on with the concept. The team sought to avoid "default gunmetal, chrome, cyberpunk textured-look", with Hammill stating that they wanted to go in the opposite direction. The team focused on a brighter aesthetic based on such sources as Sailor Moon and Katamari Damacy.
Lovers in a Dangerous Spacetime was showcased in July 2013 at PAX Prime. The game was released for Microsoft Windows, OS X, Linux, and Xbox One on September 9, 2015.
By 2016, the developers added 4-player co-op on top of the original 2-player co-op mode.
Release
Lovers in a Dangerous Spacetime was released on September 9, 2015, for Microsoft Windows, Xbox One, OS X, and Linux, and on October 3, 2017, for Nintendo Switch.
In February 2016, Asteroid Base teamed up with the monthly subscription box service IndieBox to create an exclusive, custom-designed, physical release of the game. This limited collector's edition included, among others, a themed USB flash drive with DRM-free copy of the game, the original soundtrack on CD, a Steam key (downloadable on Windows, Mac, and Linux platforms), and various collectibles.
Lovers in a Dangerous Spacetime became free for Xbox Live Gold members in February 2017 and for PlayStation Plus members in April 2017.
Reception
Lovers in a Dangerous Spacetime received positive reviews from critics. Aggregate review website Metacritic assigned a score of 82/100 for the Xbox One and PlayStation 4 versions, and a score of 80/100 for the PC versions.
Simon Parkin from Eurogamer, playing the PC version, recommended the game. Destructoid awarded it a score of 10 out of 10.
Awards and nominations
References
External links
2015 video games
Action video games
Linux games
Multiplayer and single-player video games
MacOS games
Nintendo Switch games
Platform games
PlayStation 4 games
Video games developed in Canada
Windows games
Xbox One games |
332222 | https://en.wikipedia.org/wiki/Hushmail | Hushmail | Hushmail is an encrypted proprietary web-based email service offering PGP-encrypted e-mail and vanity domain service. Hushmail uses OpenPGP standards. If public encryption keys are available to both recipient and sender (either both are Hushmail users or have uploaded PGP keys to the Hush keyserver), Hushmail can convey authenticated, encrypted messages in both directions. For recipients for whom no public key is available, Hushmail will allow a message to be encrypted by a password (with a password hint) and stored for pickup by the recipient, or the message can be sent in cleartext. In July, 2016, the company launched an iOS app that offers end-to-end encryption and full integration with the webmail settings. The company is located in Vancouver, British Columbia, Canada.
History
Hushmail was founded by Cliff Baltzley in 1999 after he left Ultimate Privacy.
Accounts
Individuals
There is one type of paid account, Hushmail Premium, which provides 10 GB of storage as well as IMAP and POP3 service. Hushmail offers a two-week free trial of this account.
Businesses
The standard business account provides the same features as the paid individual account, plus other features like vanity domain, email forwarding, catch-all email and user admin. A standard business plan with email archiving is also available. Features like secure forms and email archiving can be found in the healthcare and legal industry-specific plans.
Additional security features include hidden IP addresses in e-mail headers, two-step verification and HIPAA compliant encryption.
Instant messaging
An instant messaging service, Hush Messenger, was offered until July 1, 2011.
Compromises to email privacy
Hushmail received favorable reviews in the press. It was believed that legal demands to reveal the content of traffic through the system were less imminent in Canada than in the United States, and that if data were handed over, messages would be available only in encrypted form.
Developments in November 2007 raised doubts among security-conscious users about Hushmail's security, specifically concern over a backdoor. The issue originated with the non-Java version of the Hush system, which performed the encrypt/decrypt steps on Hush's servers and then used SSL to transmit the data to the user. The data was available as cleartext during this small window, and the passphrase could be captured at that point, facilitating the decryption of all stored messages and of future messages using that passphrase. Hushmail stated that the Java version is also vulnerable, in that the company may be compelled to deliver a compromised Java applet to a user.
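The exposure described above is a question of where the passphrase travels. The following toy information-flow model (hypothetical class and function names, not Hushmail's code) contrasts the two designs: in the server-side model the operator necessarily observes the passphrase and could log it if compelled, while in the client-side model the server only ever stores and returns ciphertext. Note that, as the article states, even the client-side model can be subverted if the server delivers compromised client code.

```python
import hashlib

def toy_cipher(passphrase: str, data: bytes) -> bytes:
    # Toy XOR "cipher" (its own inverse) so the model runs; not real cryptography.
    key = hashlib.sha256(passphrase.encode()).digest()
    return bytes(b ^ key[i % 32] for i, b in enumerate(data))

class ServerSideWebmail:
    """Non-Java model: the browser sends the passphrase; the server decrypts."""
    def __init__(self, decrypt_fn):
        self.decrypt_fn = decrypt_fn
        self.captured = []  # what a compelled operator could log

    def read_mail(self, passphrase: str, ciphertext: bytes) -> bytes:
        # Exposure window: the cleartext passphrase exists on the server.
        self.captured.append(passphrase)
        return self.decrypt_fn(passphrase, ciphertext)

class ClientSideWebmail:
    """Applet model: decryption happens locally; server stores ciphertext only."""
    def __init__(self):
        self.captured = []  # the server never sees a passphrase in this model

    def fetch_ciphertext(self, ciphertext: bytes) -> bytes:
        # The server hands back ciphertext; the client decrypts it itself.
        return ciphertext
```

Running both models against the same message makes the difference concrete: the server-side object accumulates passphrases it could be compelled to disclose, while the client-side object has nothing to hand over.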
Hushmail supplied cleartext copies of private email messages associated with several addresses at the request of law enforcement agencies under a Mutual Legal Assistance Treaty with the United States, e.g. in the case of United States v. Stumbo. In addition, the contents of emails between Hushmail addresses were analyzed, and 12 CDs were supplied to U.S. authorities. Hushmail's privacy policy states that it logs IP addresses in order "to analyze market trends, gather broad demographic information, and prevent abuse of our services."
Hush Communications, the company that provides Hushmail, states that it will not release any user data without a court order from the Supreme Court of British Columbia, Canada, and that other countries seeking access to user data must apply to the government of Canada via an applicable Mutual Legal Assistance Treaty. Hushmail states, "...that means that there is no guarantee that we will not be compelled, under a court order issued by the Supreme Court of British Columbia, Canada, to treat a user named in a court order differently, and compromise that user's privacy" and "[...]if a court order has been issued by the Supreme Court of British Columbia compelling us to reveal the content of your encrypted email, the "attacker" could be Hush Communications, the actual service provider."
See also
Comparison of mail servers
Comparison of webmail providers
References
External links
Cryptographic software
Webmail
Internet privacy software
OpenPGP
Internet properties established in 1999
589913 | https://en.wikipedia.org/wiki/USS%20Blue%20Ridge%20%28LCC-19%29 | USS Blue Ridge (LCC-19)

USS Blue Ridge (LCC-19) is the lead ship of the two amphibious command ships of the United States Navy and the flagship of the Seventh Fleet. Her primary role is to provide command, control, communications, computers, and intelligence (C4I) support to the commander and staff of the United States Seventh Fleet. She is currently forward-deployed to U.S. Navy Fleet Activities, Yokosuka in Japan, and is the third Navy ship named after the Blue Ridge Mountains, a range in the Appalachian Mountains of the eastern United States. Blue Ridge is the oldest deployed warship of the U.S. Navy, following the decommissioning of . As the U.S. Navy's active commissioned ship with the longest total period in active service, Blue Ridge flies the First Navy Jack instead of the jack of the United States. Blue Ridge is expected to remain in service until 2039.
History
Blue Ridge was put "in commission special" on 14 November 1970, at the Philadelphia Naval Shipyard as an Amphibious Command and Control (LCC) ship, with Captain Kent J. Carroll as the commanding officer. The ship was sponsored by Mrs. Gretchen Byrd, wife of the U.S. Senator from Virginia, Harry F. Byrd Jr. The principal speaker at the ceremony was John W. Warner, Under Secretary of the Navy and later Senator from Virginia.
Blue Ridge was the replacement for Estes, but Estes was decommissioned earlier than planned in October 1969 due to the budget cuts of the late 1960s.
Blue Ridge was the lead ship of her class and represented almost seven years of planning and construction work. The result was a ship specifically designed from the keel up as a command and control ship. As designed, Blue Ridge was capable of supporting the staff of both the Commander of an Amphibious Task Force and the staff of the Commanding General of the Landing Force. The advanced computer system, extensive communications package and modern surveillance and detection systems was molded into the most advanced joint amphibious command and control center ever constructed.
At the time of her commissioning, Blue Ridge had the distinction of carrying the world's most sophisticated electronics suite, which was said to be some thirty percent larger than that of the aircraft carrier , which had been the most complex. Blue Ridge was armed with a "main battery" of computers, communications gear, and other electronic facilities to fulfill her mission as a command ship. An extremely refined communications system was also an integral part of the ship's radical new design. Through an automated patch panel and computer controlled switching matrix her crew could use any combination of communication equipment desired. The clean topside area is the result of careful design intended to minimize the ship's interference with her own communications system. U.S. Navy long-range communications were heavily reliant on high frequency radio systems in the 1970s and have evolved to predominantly satellite communications in the 2000s. This is illustrated by the long wire antennas and the directional HF yagi or log-periodic antenna initially installed on Blue Ridge and later removed and replaced with a number of satellite communications antennas.
Besides small arms, Blue Ridge was armed with two twin Mark 33 3"/50 caliber guns at commissioning, though they have since been removed. She also carried two Mark 25 launchers and electronics for the Basic Point Defense Missile System (BPDMS) which was added sometime in the 1970s and removed in the 1990s. Two 20 mm Phalanx CIWS systems were added in the 1980s for point defense. In recent years she has also carried 25 mm Bushmaster cannons.
1971
In late January 1971, the ship conducted her first INSURV in the North Atlantic, after transiting the Delaware River, from and return to Philadelphia.
On 11 February 1971, Blue Ridge steamed on her maiden voyage from the shipyard to the ship's first homeport, San Diego, California, around South America via the Strait of Magellan, making liberty calls at Norfolk, Virginia (15 February), Rio de Janeiro (4–6 March), Lima (20–22 March), Rodman Naval Station, Panama Canal Zone (27–28 March), and Acapulco (2–5 April). Blue Ridge's beam is , but the Panama Canal locks at that time were only , creating problems for the Blue Ridge class, which needed fenders and barges for the sponsons.
As the ship crossed the equator on 26 February at 38 degrees and 24 minutes longitude, bound for Rio de Janeiro, Blue Ridge performed her first crossing the line ceremony, initiating the "wog" majority of the crew, except for one.
Upon entry to the Strait of Magellan, Blue Ridge took on a passage pilot from the Chilean Navy for the transit. The Chilean patrol boat lost its mast and damaged one of Blue Ridge's basket antennas, just aft of the port sponson, in the boarding operation.
In Blue Ridge's transit from Lima to Rodman Naval Station, Panama, she was assigned the duty of going to the aid of any U.S. tuna fishing boat harassed or captured by the Ecuadorian Navy for fishing in a claimed fishing zone that the U.S. did not recognize. This was known as the Tuna War, but no incident occurred.
Blue Ridge arrived at San Diego on 9 April with Rear Admiral David M. Rubel, U.S. Navy, Commander Amphibious Group Three, and his staff embarked. Rear Admiral Rubel was the first flag officer embarked on Blue Ridge. The Amphibious Group Three staff had come aboard Blue Ridge at the Rodman port call, with the next port call being Acapulco.
The rest of the year was highlighted by Command Post Exercises 3–7 May and 11–13 August. Refresher training was conducted in late June and early July. Blue Ridge acted as amphibious task force and landing force flagship for the major amphibious training exercise of the year, ROPEVAL WESTCO (3-71), from 8–16 September.
In Blue Ridge's first drydocking since the Philadelphia Naval Shipyard, she was in the Long Beach Naval Shipyard from 11 October to 19 November for post-shakedown availability. Blue Ridge's power plant was switched from Navy Standard Oil fuel to Navy Distillate fuel.
1972
From 1972 until 1979, Blue Ridge made six WestPac deployments to the Western Pacific as the flagship of the Commander Amphibious Force, Seventh Fleet.
February - WestPac I
After completing degaussing in the deperming facility at Ballast Point, on 7 January 1972, Blue Ridge departed home port San Diego and steamed to Pearl Harbor for deployment on the ship's first WestPac, with port visits at Guam, Sasebo, Japan, White Beach, Okinawa, Subic Bay, Hong Kong and Singapore.
Blue Ridge made the next leg of the transit to Guam in company with and . During the transit, four Soviet reconnaissance aircraft overflew the convoy to collect data on the new ship.
As the ship crossed the equator on 27 February 1972 at , bound for Singapore, Blue Ridge performed her second crossing the line ceremony.
Additional port visits were planned, but in late March 1972, as Blue Ridge prepared at White Beach, Okinawa for exercise Golden Dragon, North Vietnam invaded South Vietnam across the Vietnamese Demilitarized Zone (DMZ) on 30 March 1972 in their Easter Offensive. This was the largest invasion since the Korean War, radically departing from previous offensives. It was designed to strengthen the North Vietnamese position as the Paris Peace Accords drew towards a conclusion.
April - Easter Offensive
On 3 April 1972, Commander in Chief of the Pacific Fleet (CinCPac) Admiral John S. McCain Jr. cancelled Exercise Golden Dragon. General Miller and the 9th Marine Amphibious Brigade (9th MAB) staff were ordered to remain on Blue Ridge for combat or evacuation operations. The 9th MAB had various contingency plans from potentially conducting emergency evacuations to building up their forces.
On 5 April 1972, Blue Ridge departed for the war zone, the Gulf of Tonkin. Blue Ridge was the command ship during April through July for the last major combat amphibious engagement of the Vietnam War. The Easter Counter-Offensive was "the largest concentration of wartime amphibious force since the Inchon and Wonsan landings of the Korean War."
Detachment "N" of the 1st Radio Battalion had deployed with the 9th MAB for the exercise in Korea. It was integrated with the Task Force 76 Joint Intelligence Center and operated from the supplemental radio spaces of Blue Ridge using input from the service cryptologic agencies in Southeast Asia. However, operating from Blue Ridge posed reception problems because of the distance from shore. From 24 April 1972, two or three direct support elements were in operation from naval gunfire ships at any one time, with control remaining at the headquarters element on Blue Ridge. In July 1972, they moved to and when Blue Ridge returned to the United States, detachment analysts relocated to the Naval Communications Station, San Miguel, near San Antonio, Zambales, Philippines. As CTU 76.0.1, escorted by , Blue Ridge conducted special operations in the Tonkin Gulf in Operation Venture Road.
June - The Counter-Offensive
With a lull in the fighting and 64 days at sea, Blue Ridge made a port call to Subic, from 7 to 14 June, for supplies and sanity, then returned to the Gulf of Tonkin.
Nguyen Van Thieu, president of South Vietnam, came aboard Blue Ridge on 28 June 1972 to confer with Vice Admiral Holloway, Admiral Gaddis, General Miller and "to convey his personal thanks to the sailors and Marines of the amphibious forces for 'the preservation of Peace and Freedom' in South Vietnam."
On the first of July, while steaming outside of Da Nang Harbor, in the combat zone and the ship's port 3-inch gun manned, Blue Ridge had her first change of command. That day was also the day that Blue Ridge earned the Republic of Vietnam Campaign Medal.
The Easter Counter-Offensive was Blue Ridge's longest time at sea, 64 days from 5 April to 7 June 1972. After 7 days in Subic, Blue Ridge returned to the Gulf of Tonkin until 18 July 1972 and was then ordered to the Philippines for typhoon relief along with Tripoli, Juneau, Alamo, and Cayuga. "The 33d MAU and subordinate units were awarded the Philippine Presidential Unit Citation for their efforts", but Blue Ridge was not.
On 18 August 1972, Blue Ridge returned to San Diego. In September the ship received visits from CNO Admiral Elmo R. Zumwalt Jr. and Secretary of the Navy John Warner. From 5 to 9 October, Blue Ridge made a port visit to San Francisco for training and a First Fleet-sponsored event.
1973
WestPac II
From 12 February 1973 until 4 April 1973, Operation Homecoming, returning POWs from Hanoi and VC camps in South Vietnam went to Clark Air Base in the Philippines. With Blue Ridge still in her homeport of San Diego, the current ship's intelligence officer and prior ship's intelligence officer contributed to the operation. "The Army, Navy, Air Force and Marines each had liaison officers dedicated to prepare for the return of American POWs well in advance of their actual return. These liaison officers worked behind the scenes traveling around the United States assuring the returnees well being. They also were responsible for debriefing POWs to discern relevant intelligence about MIAs and to discern the existence of war crimes committed against them."
On 24 February 1973, Blue Ridge left San Diego for Pearl Harbor (2–3 March) and her second WestPac, with liberty port visits of Sasebo (7–14 June), Yokosuka (25 July-5 Aug), White Beach (15–31 March, 11 April, 16 Aug, 4-19 Sept), Hong Kong (7–12 May), Subic (26 March-5 April, 22-26 Sept, 7-8 Oct), Manila, Singapore (24-29 Aug) and Chilung (1–5 June). Blue Ridge conducted training exercises: Operation Golden Dragon in early April off South Korea, Operation Pagasa I in middle May off Philippines, Operation Pagasa II in early October off Philippines.
With Operation End Sweep progressing in the coastal waters of North Vietnam for the mines released there, Blue Ridge left White Beach again on 10 July 1973 headed for the Gulf of Tonkin. She carried equipment that was needed by U.S. helicopters that were involved in clearing mined North Vietnamese waters. Blue Ridge spent two nights in north Vietnamese waters off the coast of Vinh and Hon Matt before departing for Manila in the Philippines.
Arab Oil Embargo
Because of the problems associated with the Arab Oil Embargo of 1973, Blue Ridge, on the transit back to White Beach, Okinawa from port call in Singapore, became the first Seventh Fleet combatant ship to refuel at sea with a commercial tanker, taking on some 158,000 gallons of Navy distillate from the Falcon Princess.
Late in Blue Ridge's second WestPac, the ship was serving as command ship for a joint exercise with the Philippine Navy in the South China Sea called PAGASA II. One of Blue Ridge's ensigns went overboard unnoticed, and when he was found absent at a watch muster, a compartment search was conducted aboard the ship for the missing officer. When the search failed, on 28 September 1973, a search and rescue operation commenced without success. Two days later the ensign was declared missing at sea and Exercise PAGASA II resumed. On Monday, 1 October 1973, the U.S. Embassy in Moscow was notified that the Soviet trawler AGI Kursograph had found an American sailor in Blue Ridge's operating area, and the ensign was returned safely to Blue Ridge the next day after diplomatic negotiations.
At the end of PAGASA II, bad tropical weather forced the transfer of the staff from Blue Ridge to Denver, after a very short stay in Subic Bay, to occur at White Beach instead, on 7 October. On 8 October, Blue Ridge steamed for homeport San Diego carrying a Patrol Craft Fast on the helicopter deck, arriving in San Diego on 23 October 1973.
1975
WestPac III
Late in March 1975 and late in Blue Ridge's third WestPac, the deteriorating military and political situation in Cambodia and South Vietnam disrupted Blue Ridge's operational plans as had occurred in late March 1972.
Evacuation of Saigon
Blue Ridge was at White Beach, Okinawa when the 9th MAB was alerted on 25 March 1975 for immediate departure to Da Nang to reinforce U.S. facilities, but Blue Ridge did not get underway for Vietnam until 27 March. Marines and sailors hastily trained for crowd control, evacuation procedures, and a Vietnamese orientation course. The printing section on board Blue Ridge reproduced thousands of signs in Vietnamese, including a simplified instruction card for the small unit leader with basic Vietnamese phrases and human-relations-oriented "do's and don'ts". However, North Vietnamese forces captured Da Nang on 29 March.
On 12 April, in response to the Cambodian government's crumbling defenses around the capital of Phnom Penh, Operation Eagle Pull evacuated 289 Americans, Cambodians and third country nationals by helicopter to the .
After the end of the Battle of Xuân Lộc on 21 April, President Thieu resigned and fled into exile and North Vietnamese forces surrounded Saigon. The fixed wing evacuation from Tan Son Nhut Airport was halted by North Vietnamese artillery fire on the morning of 29 April and the helicopter evacuation Operation Frequent Wind commenced. Admiral Gayler directed USSAG/Seventh Air Force and Seventh Fleet to begin Frequent Wind Option IV at 10:51 (Saigon time), but for some unexplainable reason, dissemination of this message to the participating units had been delayed from 10:52 until 12:15. Evacuation helicopters finally departed with the first wave started landing at 15:06 and returning to fleet at 15:40 with the first load of evacuees.
The commanding officer of ProvMAG 39, Colonel McLenon, exercised control of his Marine aircraft through the Tactical Air Coordination Center (TACC) on board Blue Ridge. The Helicopter Direction Center (HDC) on board Okinawa maintained aircraft spacing and routing. The primary difference between TACC and HDC was that TACC controlled the tactical disposition of the helicopters and HDC controlled the helicopters as long as they were in the Navy's airspace. These areas of responsibility often overlapped and at times even merged. Under the conditions existing on the morning of 29 April 1975, the difference in control responsibilities of TACC and HDC at best seemed blurred, at worst redundant. Coordination and control of the overall embarkation operation suffered from more serious communication problems. Direct communications with Admiral Whitmire and 9th MAB Rear were sporadic at best, requiring a continuous relay by the C-130 Airborne Battlefield Command and Control Center.
The sky over the evacuation fleet was soon filled with Republic of Vietnam Air Force (RVNAF) helicopters, looking for a place to land and unload their passengers. Five helicopters crashed on the ship that day, not counting ones ditched or abandoned overboard. One crashed, causing a near disaster and showering the ship and personnel with debris. An NBC film crew, with reporter George Lewis, filmed this unexpected arrival of RVNAF helicopters on the flight deck of Blue Ridge, showing the processing of the refugees and two helicopters' rotor blades colliding. To free up space on the flight deck, RVNAF helicopters were ditched by their pilots in the South China Sea after unloading their refugees on ship. Along with the widely published photo of an RVNAF UH-1 Huey being pushed over the side of Blue Ridge, they filmed one unknown crew member being tossed into a flight deck safety net by the movement of the chopper going over the side.
The evacuation continued until the morning of 30 April with the last helicopter evacuating the Marine Security Guards from the roof of the U.S. Embassy at 07:53 and landing on USS Okinawa at 08:30. At 11:30 North Vietnamese tanks smashed through the gates of the Presidential Palace less than 1 km from the Embassy and raised the flag of the Viet Cong over the building, ending the Vietnam War.
1980s
With the decommissioning of the 7th Fleet Flagship cruiser in December 1979, Blue Ridge became the new flagship of the U.S. Seventh Fleet, and has been forward deployed at the Yokosuka Naval Base, Japan ever since.
From 21 July 1979 through 30 June 1984, Blue Ridge and other ships in the Western Pacific engaged in Operation Boat People, rescuing refugees from Vietnam and receiving the Humanitarian Service Medal. For example, on 6 October 1980, while transiting the South China Sea, Blue Ridge embarked Vietnamese refugees from two separate small boats. The first, sighted before noon, contained 54 refugees; the second, containing 37, was embarked shortly after 18:00. Both boats were dangerously overloaded and adrift when sighted. All 54 refugees aboard the first boat were in good health, having been at sea only a few days. The 37 aboard the second boat were severely dehydrated, many so weak they could not stand and had to be hoisted aboard Blue Ridge. Mechanical failure had left the second boat adrift well short of the shipping lane; it was initially unclear how long they had been at sea, though they had been without potable water for many days. On 15 May 1984, Blue Ridge rescued a further 35 refugees in the South China Sea, northeast of Cam Ranh Bay.
In May 1989, Blue Ridge, Sterett and visited Shanghai, China. They were the first U.S. warships to enter Shanghai Harbor in 40 years and it was only the second visit by U.S. warships to the People's Republic of China since 1949.
1990s
Blue Ridge performed a nine-and-a-half–month deployment as flagship for commander, United States Naval Forces Central Command (ComUSNavCent), during Operations Desert Shield, and Desert Storm from 28 August 1990 through 24 April 1991, receiving a Navy Unit Commendation.
In July 1996, Blue Ridge visited Vladivostok for the 300th Anniversary of the Russian Navy.
2000s
Blue Ridge participated in the International Force East Timor (INTERFET) in February 2000.
Blue Ridge participates routinely in U.S. and allied training exercises each year with countries throughout the Western Pacific and Indian Ocean. For example, in 2009 Blue Ridge participated in ANNUALEX 21G (Annual Exercise 21G) with the Japan Maritime Self-Defense Force and PASSEX (Passing Exercise) with the French Navy.
2010s
Blue Ridge was one of several ships participating in disaster relief in Operation Tomodachi, after the 2011 Tōhoku earthquake and tsunami. Blue Ridge brought relief supplies from Singapore to Japan but remained in the vicinity of Okinawa, where the embarked U.S. Seventh Fleet staff provided command and control for the duration of Operation Tomodachi. The Seventh Fleet Band disembarked from Blue Ridge in order to provide the Japanese public with concerts dedicated to the victims of the tsunami. On 9 May 2010, sailors from Blue Ridge took part in a Victory Day Parade of the Russian Navy's Pacific Fleet in the city of Vladivostok, assembled on the city square next to French sailors. The officer inspecting the parade greeted the sailors, to which the sailors responded with a Russian-style threefold loud Ura.
The ship is expected to remain in service until 2039.
2020s
On 25 January 2020, an MH-60S helicopter attached to the ship crashed approximately from Okinawa, Japan. Following search and rescue efforts all five crewmembers were found uninjured.
Awards
On 18 July 1972, Blue Ridge was awarded the Combat Action Ribbon for her action at Tiger Island, and on 9 August 1972, the ship was awarded the Battle "E" by the commander Amphibious Force, U.S. Pacific Fleet. It was the only one Blue Ridge received prior to substantial changes made to the award in 1976 and is not listed as a Navy "E" Ribbon on the unit awards page. Blue Ridge received 15 Navy "E" Ribbon awards from 1977 to 2010.
Blue Ridge was awarded the Vietnam Service Medal with two campaign stars, one for the Consolidation II '72 Campaign and the second for the Vietnam Ceasefire '72 Campaign (Easter Counter-Offensive), with a total of 99 days in the combat zone, not counting 18 uncredited days in July 1972. Blue Ridge may have earned the Republic of Vietnam Campaign Medal for six months of service off South Vietnam from February to July 1972, as listed by NavSource.org. However, the Navy unit awards page does not mention the award, and the ship's crew did not paint the Republic of Vietnam Campaign Medal on the ship's bridge wing in 1993 or 2011.
For Operation Eagle Pull (11–13 April 1975), the evacuation of the U.S. Embassy in Cambodia, Blue Ridge was awarded the Meritorious Unit Commendation, the Armed Forces Expeditionary Medal and the Humanitarian Service Medal.
For Operation Frequent Wind (29–30 April 1975), the evacuation of Saigon, South Vietnam, Blue Ridge was awarded the Navy Unit Commendation, the Armed Forces Expeditionary Medal and the Humanitarian Service Medal. Blue Ridge received Humanitarian Service Medals for two further operations, in 1980 and 1984, for rescuing Vietnamese boat people.
Blue Ridge received the ship's second Navy Unit Commendation along with the Southwest Asia Service Medal, the Kuwait Liberation Medal (Saudi Arabia) and Kuwait Liberation Medal (Kuwait) for Desert Shield and Desert Storm. The ship was also awarded the Joint Meritorious Unit Award and the Humanitarian Service Medal during Operation Tomodachi.
Blue Ridge earned the Captain Edward F. Ney Memorial Award several times, including 2010.
Gallery
References
Citations
Sources
External links
navsource.org: USS Blue Ridge
All USS Blue Ridge commanders
USS Blue Ridge (LCC-19) 70's photos
The official U.S. Navy awards site
List of approved Vietnam campaigns for the Vietnam Service Medal
Blue Ridge-class command ships
United States Navy West Virginia-related ships
United States Navy Virginia-related ships
United States Navy North Carolina-related ships
United States Navy Georgia-related ships
Ships built in Philadelphia
1969 ships
23551536 | https://en.wikipedia.org/wiki/Milw0rm | Milw0rm

Milw0rm is a group of "hacktivists" best known for penetrating the computers of the Bhabha Atomic Research Centre (BARC) in Mumbai, the primary nuclear research facility of India, on June 3, 1998. The group conducted hacks for political reasons, including the largest mass hack up to that time, inserting an anti-nuclear weapons agenda and peace message on its hacked websites. The group's logo featured the slogan "Putting the power back in the hands of the people."
The BARC attack generated heated debate on the security of information in a world prevalent with countries developing nuclear weapons and the information necessary to do so, the ethics of "hacker activists" or "hacktivists," and the importance of advanced security measures in a modern world filled with people willing and able to break into insecure international websites.
The exploit site milw0rm.com and str0ke are unaffiliated with the milw0rm hacker group.
Members
Little is known about the members of milw0rm, which is typical of hacking groups, which often conceal members' identities to avoid prosecution. The international hacking team "united only by the Internet" was composed of teenagers who went by the aliases of JF, Keystroke, ExtreemUK, savec0re, and VeNoMouS. VeNoMouS, 18, hailed from New Zealand, ExtreemUK and JF, 18, from England, Keystroke, 16, from the US and Savec0re, 17, from the Netherlands.
JF went on to achieve a modicum of notoriety when MTV "hacked" its own website intentionally and graffitied the words "JF Was Here" across the page, at the same time that JF was under investigation for the milw0rm attacks by Scotland Yard. Hundreds of pages hosted on MTV.com sported the new JF logo, including one page that read, "JF was here, greets to milw0rm". MTV later confirmed that the alleged JF "hack" was a publicity stunt to promote the appearance of a commentator named Johnny Fame at the 1998 MTV Video Music Awards. Many were puzzled by the apparent hack committed by JF since the hacker was "known for relatively high ethical standards."
VeNoMouS claimed that he learned to crack into systems from Ehud Tenenbaum, an Israeli hacker known as The Analyzer.
BARC attack
Four days before the incident, the five permanent members of the United Nations Security Council, the US, Russia, United Kingdom, France and China, denounced both India and Pakistan for unilaterally declaring themselves nuclear weapons states. The day before the attack, Jacques Gansler, US Undersecretary of Defense for acquisition and technology, warned a military conference that teenage hackers posed "a real threat" to national security.
On the night of June 3, 1998, from their workstations on three continents, the group used a US military .mil machine to break into the LAN, or local area network, of BARC and gained root access. The center's website, connected to the LAN, and their firewall were not secured enough to prevent the group from entering and gaining access to confidential emails and documents. The emails included correspondence between the center's scientists relating to their development of nuclear weapons and analysis of five recent nuclear tests. Milw0rm took control of six servers and then posted a statement of anti-nuclear intentions on the center's website.
In the process of the break-in, the multinational group of teenagers – from the United States, United Kingdom and New Zealand – gained access to five megabytes of classified documents pertaining to India's nuclear weapons program. Savec0re erased all the data on two servers as a protest against the center's nuclear capabilities. To display their security breach publicly, they changed the center's webpage to display a mushroom cloud along with an anti-nuclear message and the phrase "Don't think destruction is cool, coz its not".
Milw0rm then came forward with the security flaws they exploited in BARC's system, along with some of the thousands of pages of documents they had lifted from the server, concerning India's last five nuclear detonations.
The group's purpose for the attack was to protest nuclear testing, according to Savec0re, VeNoMouS and JF in their correspondence with Wired reporter James Glave.
After the attack Keystroke claimed that the breach had taken "13 minutes and 56 seconds" to execute. Many news organizations reported breathlessly how the teenagers had penetrated a nuclear research facility in "less than 14 minutes." However, examining more closely the hacker's wording and tone in the interview, and especially the specificity of the "56 seconds" claim, it is apparent that Keystroke meant this as a lighthearted answer to the question, "Exactly how long did it take you?". The actual invasion took careful planning, routing through servers throughout the world from three different continents, and took days to execute. An Indian news agency reported that downloading thousands of pages from India's slow servers would have taken much longer than 14 minutes.
Attack aftermath
The security breach was first reported by Wired News. JF and VeNoMouS claimed credit by emailing Wired reporter James Glave with documents they had obtained from the BARC servers as proof.
After first denying that any incident had occurred, BARC officials admitted that the center had indeed been hacked and emails had been downloaded. An official at BARC downplayed the severity and importance of the incident, announcing that the security flaw resulted from "a very normal loophole in Sendmail," while going on to state that the center had not bothered to download a new version of the Sendmail program, responsible for the center's email servers. The center also admitted that after milw0rm's breach, the site had been hacked into again, this time with less severe consequences. Forbes wrote that perhaps up to 100 hackers had followed milw0rm's footsteps into the BARC servers once they were revealed as insecure. The website was shut down while its security was upgraded. Later, a senior US government official told ZDNet that the Indians had known about the flaw and had chosen to ignore it, creating the opportunity for milw0rm to root the servers. BARC officials said that none of the emails contained confidential information, that the group did not destroy data, and that the computers containing important data were isolated from the ones broken into.
Nevertheless, the breach was a severe one and had the potential to cause an incident of international proportions. Forbes called it "potentially the most devastating" hacking incident of 1998. After the attack, members of the group participated in an anonymous Internet Relay Chat (IRC) session with John Vranesevich, the founder of the hacking news website AntiOnline. Keystroke explained that, had he wanted to, he could have sent threatening emails from the Indian email server to a Pakistani email server. If the group had possessed malicious intentions, the consequences for both South Asian countries could have been catastrophic.
For these reasons, the milw0rm attack prompted other organizations to heighten their security against intrusion. The U.S. Army announced, without giving evidence for the belief, that the hacks might have originated in Turkey, noting that "Turkey is the primary conduit for cyber attacks." A senior US official said that the CIA had obtained the material that milw0rm had purloined and was reviewing it; the official did not say how the CIA had obtained this information.
Later, Wired revealed that an Indian national and self-proclaimed terrorist, Khalid Ibrahim, had approached members of milw0rm and other hacker groups on IRC—including Masters of Downloading and the Noid—and attempted to buy classified documents from them. According to savec0re, Ibrahim threatened to kill him if the hacker did not turn over the classified documents in question. Savec0re told Kevin Mitnick that Ibrahim first approached him posing as a family member of an FBI agent who could grant immunity to the members of milw0rm.
The Electronic Disturbance Theater released a statement in support of JF, applauding him for his hacktivism and maintaining that computer break-ins of this sort were not cyber-terrorism as some claim.
The event received wide international coverage, with reports by CNN, MSNBC and the Associated Press in the days following.
Other attacks
One month after the BARC incident, in July 1998, milw0rm hacked the British web hosting company Easyspace, putting its anti-nuclear mushroom cloud message on more than 300 of Easyspace's websites. Wired reported that this incident was perhaps the "largest 'mass hack' ever undertaken." The United States Department of Defense adviser John Arquilla later wrote that it was one of the largest hacks ever seen. Some of the sites hacked in the incident were for the World Cup, Wimbledon, the Ritz Casino, Drew Barrymore, and the Saudi royal family. The text placed on the sites read in part, "This mass takeover goes out to all the people out there who want to see peace in this world... This tension is not good, it scares you as much as it scares us. For you all know that this could seriously escalate into a big conflict between India and Pakistan and possibly even World War III, and this CANNOT happen... Use your power to keep the world in a state of PEACE."
While scanning a network for weaknesses, members of the group came across Easyspace, a British company which hosted many sites on one server. Along with members of the fellow hacking group Ashtray Lumberjacks, milw0rm placed the revised mushroom cloud image and text on all of Easyspace's websites in less than one hour. Vranesevich said that the mass hack was rare in both its effect and its intention: the hackers seemed more interested in political purposes than in exposing computer security flaws.
It was also reported that milw0rm broke into a Turkish nuclear facility in addition to BARC.
See also
Hacktivism
1984 Network Liberty Alliance
References
External links
Mirrors of hacked sites
BARC hack
Mass hack
Hacked site
Fantasyfootball.co.uk hacked
"We Hacked Prince Charles' Bentley!"
Hacker groups
Anti–nuclear weapons movement
Hacking (computer security) |
44236438 | https://en.wikipedia.org/wiki/HERO%20Hosted%20PBX | HERO Hosted PBX | HERO Hosted PBX is a SIP-based hosted IP-PBX business phone system, first released in 2008 by Canadian telecommunications software provider Dialexia. The HERO (Hosted Enterprise Remote Office) software allows users to connect multiple phones (e.g., extensions, ring groups, etc.), share lines among several phones and implement business PBX telephone features such as voicemail, caller ID, call forwarding and call recording into their virtual PBX. The software is also suitable for multi-office connections, connecting branches which are geographically distant from each other. Dialexia Communications, Inc. released the latest version of HERO Hosted PBX (4.3) in 2013.
On June 3, 2014, the Dialexia development team announced in a client newsletter that support for HERO versions 3.9 and earlier would cease effective September 1, 2014. The company advised customers to migrate to a currently supported version in order to receive future security updates and technical support.
Software overview
HERO Hosted PBX is composed of SIP Proxy, Registrar, and Presence server components that work together to allow real-time communication over IP networks. The software can be administered via a web interface and is SIP-compliant, hence interoperable with other SIP devices and services. Other features include: Auto-Attendant IVR, emergency 911 support, integrated billing, cost and statistics reporting, device provisioning, and failover and high availability support.
In 2009, HERO Hosted PBX was named 'Best Service Provider Solution' by the Technology Marketing Corporation (TMC) at the annual ITEXPO West conference held in Los Angeles.
References
External links
Dialexia Communications, Inc. Official Page
HERO Hosted PBX product page
HERO Hosted PBX product presentation
Telecommunications companies of Canada
VoIP companies
VoIP software |
34828617 | https://en.wikipedia.org/wiki/CNK%20operating%20system | CNK operating system | Compute Node Kernel (CNK) is the node level operating system for the IBM Blue Gene series of supercomputers.
The compute nodes of the Blue Gene family of supercomputers run Compute Node Kernel (CNK), a lightweight kernel that runs on each node and supports one application running for one user on that node. To maximize operating efficiency, the design of CNK was kept simple and minimal. It was implemented in about 5,000 lines of C++ code. Physical memory is statically mapped and the CNK neither needs nor provides scheduling or context switching, given that at each point it runs one application for one user. By not allowing virtual memory or multi-tasking, the design of CNK aimed to devote as many cycles as possible to application processing. CNK does not even implement file input/output (I/O) on the compute node, but delegates that to dedicated I/O nodes.
The I/O nodes of the Blue Gene supercomputers run a different operating system: I/O Node Kernel (INK). INK is based on a modified Linux kernel.
See also
Catamount (operating system)
Compute Node Linux
INK (operating system)
Rocks Cluster Distribution
Timeline of operating systems
References
Supercomputer operating systems |
64686499 | https://en.wikipedia.org/wiki/Michael%20Router | Michael Router | Bishop Michael Router is an Irish Bishop, appointed in 2019 as Auxiliary bishop of Armagh and Titular Bishop of Lugmad.
Born in 1965, he grew up in Virginia, Co. Cavan and was educated locally in Virginia National School and in Kells C.B.S. He studied for the priesthood in St. Patrick's College, Maynooth, where he graduated with a Bachelor in Divinity degree and a Higher Diploma in Education.
Bishop Michael was ordained to the diaconate by Bishop Francis McKiernan, the Bishop of Kilmore, in 1987 and was subsequently ordained to the priesthood in St. Matthew's Church on 25 June 1989.
He commenced priestly ministry as a curate in the parish of Killinkere, and from 1991 the then Fr. Michael taught in St Patrick's College, Cavan.
Bishop Michael was appointed Chaplain of Bailieborough Community School in 1996 and Priest in Residence in the Parish of Kilmainhamwood and Moybologue.
Bishop Michael commenced studies at the Mater Dei Institute of Education in Dublin, assisting during that time in Our Lady of Good Counsel Parish in Drimnagh in 2002/2003, and upon completion of his studies was awarded a Master's in Religious Education.
Upon his return to the Diocese of Kilmore, Bishop Router assumed the role of Diocesan Director of Adult Faith Formation and Pastoral Renewal. His role in the Diocese of Kilmore included providing training and support for Parish Pastoral Councils, Liturgy Groups, Eucharistic Ministers and Ministers of the Word. He also helped provide Adult Religious Education courses in the Diocesan Pastoral Centre.
In 2010, in addition to these roles, Bishop Michael was appointed director of the Diocesan Pastoral Centre, and in 2013 he was transferred as curate to the Cathedral Parish in Cavan with responsibility for the Butlersbridge area. In 2014 he was appointed to the position of Parish Priest of Killann Parish, which includes the towns of Bailieborough and Shercock, and as Vicar Forane for the Bailieborough Deanery. Bishop Michael was also a member of the College of Consultors of the Diocese of Kilmore and chairman of the Diocesan Priests' Council.
Bishop Router was ordained to the episcopate in St. Patrick's Cathedral in Armagh on Sunday 21 July 2019. Bishop Gerard Clifford was the previous Auxiliary Bishop of Armagh.
References
External links
Living people
Alumni of St Patrick's College, Maynooth
Alumni of Mater Dei Institute of Education
People from County Cavan
21st-century Roman Catholic bishops in Ireland
1965 births |
11403260 | https://en.wikipedia.org/wiki/UWIN | UWIN | UWIN is a computer software package created by David Korn which allows programs written for the operating system Unix to be built and run on Microsoft Windows with few, if any, changes. Some of the software development was subcontracted to Wipro, India. The software has also been referred to, correctly or not, as U/Win and AT&T Unix for Windows, especially in its early days.
UWIN source is available under the Open Source Eclipse Public License 1.0 at AT&T's AST/UWIN repositories on GitHub.
UWIN 5 is distributed with the FireCMD enhanced Windows shell, with its Korn shell as one of three default shells present at install, the others being the FireCMD scripting language and the default Windows command shell cmd.exe. Other UWIN shells such as csh and tclsh, shells from other interoperability suites such as the MKS Toolkit, and shells that come with Tcl, Lua, Python and Ruby distributions, among others, can be added to the menu by the user or administrator.
Technical details
Technically, it is a library that implements the X/Open (Unix) interfaces on top of the Windows 32-bit application programming interface (API), called Win32.
UWIN contains:
Libraries that emulate a Unix environment by implementing the Unix API
Include files and development tools such as cc(1), yacc(1), lex(1), and make(1).
ksh(1) (the Korn Shell) and over 250 utilities such as ls(1), sed(1), cp(1), stty(1), etc.
Most of the Unix API is implemented by the POSIX.DLL dynamically loaded (shared) library. Programs linked with POSIX.DLL run under the Win32 subsystem instead of the POSIX subsystem, so programs can freely intermix Unix and Win32 library calls. A cc(1) command is provided to compile and link programs for UWIN on Windows using traditional Unix build tools such as make(1). The cc(1) command is a front end to the underlying compiler that performs the actual compilation and linking. It can be used with the Microsoft Visual C/C++ 5.X compiler, the Visual C/C++ 6.X compiler, the Visual C/C++ 7.X compiler, the Digital Mars C/C++ compiler, the Borland C/C++ compiler, and the MinGW compiler. The GNU compiler and development tools are also available for download to UWIN.
UWIN runs best on Windows NT/2000/XP/7 with the NTFS file system, but can run in degraded mode using FAT, and further degraded on Windows 95/98/ME. (See the external link for more details.) A beta version for Windows Vista and 7 was released as UWIN 5.0b on June 17, 2011. On January 19, 2016, AT&T announced that the AST and UWIN source packages had been migrated to GitHub.
Notes
References
David G. Korn (1997) Porting UNIX to Windows NT, USENIX Annual Technical Conference
External links
This page still contains some useful documentation.
AST github repository
UWIN github repository
Compatibility layers
Compilers
Free compilers and interpreters
System administration
Unix emulators |
12284380 | https://en.wikipedia.org/wiki/William%20John%20Sullivan | William John Sullivan | William John Sullivan (more commonly known as John Sullivan; born December 6, 1976) is a software freedom activist, hacker, and writer. Sullivan is currently executive director of the Free Software Foundation (FSF), where he has worked since early 2003. He is also a speaker and webmaster for the GNU Project, and maintains the Plannermode and delicious-el packages for the GNU Emacs text editor.
Biography
Active in both the free software and free culture communities, Sullivan has a BA in philosophy from Michigan State University and an MFA in Writing and Poetics. In college, Sullivan was a successful policy debater, reaching the finals of CEDA Nationals and the semifinals of the National Debate Tournament.
Until 2007, Sullivan was the main contact behind the Defective by Design, BadVista and PlayOgg campaigns. He also served as chief webmaster for the GNU Project until July 2006.
He has served as Executive Director of the Free Software Foundation since 2011.
As a speaker for the GNU Project
John has delivered speeches on the following topics, in English:
Digital rights management issues and the FSF's Defective by Design campaign
Media format patents, proprietary licensing, and the FSF's PlayOgg.org campaign
Choosing free software over Microsoft Windows
How you can help: Strategies for communicating and organizing around free software ideals
Why software should be free
Introduction to the GPLv3 and free software licensing
FSF/GNU high-priority free software projects
References
External links
Personal homepage
Copyright activists
Free software programmers
GNU people
Michigan State University alumni
Naropa University alumni
1976 births
Living people |
1862647 | https://en.wikipedia.org/wiki/MonetDB | MonetDB | MonetDB is an open-source column-oriented relational database management system (RDBMS) originally developed at the Centrum Wiskunde & Informatica (CWI) in the Netherlands.
It is designed to provide high performance on complex queries against large databases, such as combining tables with hundreds of columns and millions of rows.
MonetDB has been applied in high-performance applications for online analytical processing, data mining, geographic information system (GIS), Resource Description Framework (RDF), text retrieval and sequence alignment processing.
History
Data mining projects in the 1990s required improved analytical database support. This resulted in a CWI spin-off called Data Distilleries, which used early MonetDB implementations in its analytical suite. Data Distilleries eventually became a subsidiary of SPSS in 2003, which in turn was acquired by IBM in 2009.
MonetDB in its current form was first created in 2002 by doctoral student Peter Boncz and professor Martin L. Kersten as part of the 1990s' MAGNUM research project at University of Amsterdam. It was initially called simply Monet, after the French impressionist painter Claude Monet. The first version under an open-source software license (a modified version of the Mozilla Public License) was released on September 30, 2004. When MonetDB version 4 was released into the open-source domain, many extensions to the code base were added by the MonetDB/CWI team, including a new SQL front end, supporting the SQL:2003 standard.
MonetDB introduced innovations in all layers of the DBMS: a storage model based on vertical fragmentation and a modern CPU-tuned query execution architecture that often gave MonetDB a speed advantage over the same algorithm running on a typical interpreter-based RDBMS. It was one of the first database systems to tune query optimization for CPU caches. MonetDB includes automatic and self-tuning indexes, run-time query optimization, and a modular software architecture.
By 2008, a follow-on project called X100 (MonetDB/X100) started, which evolved into the VectorWise technology. VectorWise was acquired by Actian Corporation, integrated with the Ingres database and sold as a commercial product.
In 2011 a major effort to renovate the MonetDB codebase was started. As part of it, the code for the MonetDB 4 kernel and its XQuery components was frozen. In MonetDB 5, parts of the SQL layer were pushed into the kernel. The resulting changes created a difference in internal APIs, as it transitioned from MonetDB Instruction Language (MIL) to MonetDB Assembly Language (MAL). Older, no-longer-maintained top-level query interfaces were also removed. First was XQuery, which relied on MonetDB 4 and was never ported to version 5. The experimental Jaql interface support was removed with the October 2014 release. With the July 2015 release, MonetDB gained support for read-only data sharding and persistent indices. In this release the deprecated streaming data module DataCell was also removed from the main codebase in an effort to streamline the code. In addition, the license was changed to the Mozilla Public License, version 2.0.
Architecture
MonetDB architecture is represented in three layers, each with its own set of optimizers.
The front end is the top layer, providing a query interface for SQL, with SciQL and SPARQL interfaces under development. Queries are parsed into domain-specific representations, like relational algebra for SQL, and optimized. The generated logical execution plans are then translated into MonetDB Assembly Language (MAL) instructions, which are passed to the next layer. The middle or back-end layer provides a number of cost-based optimizers for the MAL. The bottom layer is the database kernel, which provides access to the data stored in Binary Association Tables (BATs). Each BAT is a table consisting of an object-identifier column and a value column, representing a single column in the database.
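The BAT decomposition can be illustrated with a toy sketch (hypothetical Python for illustration, not MonetDB's actual C implementation): each column of a relational table becomes a binary table of (object-identifier, value) pairs, and full rows are reconstructed by joining the BATs on the object identifier.

```python
# Toy illustration of MonetDB's vertical fragmentation: one BAT
# (a list of (oid, value) pairs) per column of the original table.
def to_bats(rows, columns):
    """Decompose a list of row tuples into one BAT per column."""
    return {
        col: [(oid, row[i]) for oid, row in enumerate(rows)]
        for i, col in enumerate(columns)
    }

def reconstruct_row(bats, columns, oid):
    """Tuple reconstruction: join the BATs back together on the oid."""
    return tuple(dict(bats[col])[oid] for col in columns)

people = [("alice", 33), ("bob", 41), ("carol", 19)]
bats = to_bats(people, ["name", "age"])
print(bats["age"])                                # [(0, 33), (1, 41), (2, 19)]
print(reconstruct_row(bats, ["name", "age"], 1))  # ('bob', 41)
```

A query that touches only one attribute then scans that single BAT rather than whole rows, which is the core benefit of the column-store layout.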
MonetDB's internal data representation also relies on the large memory address ranges of contemporary CPUs, using demand paging of memory-mapped files, and thus departs from traditional DBMS designs that involve complex management of large data stores in limited memory.
Query Recycling
Query recycling is an architecture for reusing the byproducts of the operator-at-a-time paradigm in a column store DBMS. Recycling makes use of the generic idea of storing and reusing the results of expensive computations. Unlike low-level instruction caches, query recycling uses an optimizer to pre-select instructions to cache. The technique is designed to improve query response times and throughput, while working in a self-organizing fashion. The authors from the CWI Database Architectures group (Milena Ivanova, Martin Kersten, Niels Nes and Romulo Goncalves) won the "Best Paper Runner Up" award at the ACM SIGMOD 2009 conference for their work on query recycling.
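Stripped of MonetDB's optimizer machinery, the core idea is an intermediate-result cache keyed by an operator and its operands. The sketch below is hypothetical Python, not MonetDB's MAL-level implementation, and it caches every result unconditionally, whereas MonetDB's optimizer pre-selects which instructions are worth caching and evicts entries in a self-organizing way.

```python
# Minimal sketch of query recycling: intermediate results of column
# operators are cached so that later queries containing the same
# (operator, operands) fragment reuse them instead of recomputing.
class RecyclingExecutor:
    def __init__(self):
        self.cache = {}   # (operator name, args) -> materialized result
        self.hits = 0

    def run(self, op, *args):
        key = (op.__name__, args)
        if key in self.cache:
            self.hits += 1           # recycled: no recomputation
        else:
            self.cache[key] = op(*args)
        return self.cache[key]

def select_gt(column, threshold):
    """An 'expensive' operator over an immutable column."""
    return tuple(v for v in column if v > threshold)

ages = (33, 41, 19, 52, 28)
ex = RecyclingExecutor()
q1 = ex.run(select_gt, ages, 30)   # computed and cached
q2 = ex.run(select_gt, ages, 30)   # same fragment in a later query: recycled
print(q1, ex.hits)                 # (33, 41, 52) 1
```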
Database Cracking
MonetDB was one of the first databases to introduce database cracking. Database cracking is incremental partial indexing and/or sorting of the data. It directly exploits the columnar nature of MonetDB. Cracking is a technique that shifts the cost of index maintenance from updates to query processing. The query pipeline optimizers are used to massage the query plans to crack and to propagate this information. The technique allows for improved access times and self-organized behavior. The work on database cracking received the ACM SIGMOD 2011 Jim Gray Doctoral Dissertation Award.
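As a rough illustration of the idea (a hypothetical Python sketch, not MonetDB's implementation), each range query physically partitions the stored column around its bounds and records the split points in a cracker index, so subsequent queries on overlapping ranges scan ever-smaller pieces:

```python
import bisect

class CrackedColumn:
    """Toy sketch of database cracking: each range query partially
    sorts the column around its bounds as a side effect, so queries
    on overlapping ranges progressively touch smaller pieces."""

    def __init__(self, values):
        self.data = list(values)
        self.index = []       # cracker index: sorted pivot values
        self.positions = {}   # pivot -> split position in self.data

    def _piece(self, pivot):
        """Bounds [lo, hi) of the piece that may contain `pivot`."""
        i = bisect.bisect_left(self.index, pivot)
        lo = self.positions[self.index[i - 1]] if i > 0 else 0
        hi = self.positions[self.index[i]] if i < len(self.index) else len(self.data)
        return lo, hi

    def _crack(self, pivot):
        """Partition the relevant piece: values < pivot move left."""
        if pivot in self.positions:
            return self.positions[pivot]       # already cracked here
        lo, hi = self._piece(pivot)
        i, j = lo, hi - 1
        while i <= j:
            if self.data[i] < pivot:
                i += 1
            else:
                self.data[i], self.data[j] = self.data[j], self.data[i]
                j -= 1
        bisect.insort(self.index, pivot)
        self.positions[pivot] = i
        return i

    def select_range(self, low, high):
        """Values v with low <= v < high; cracks the column en route."""
        a = self._crack(low)
        b = self._crack(high)
        return self.data[a:b]

col = CrackedColumn([13, 16, 4, 9, 2, 12, 7, 1, 19, 3, 14, 11, 8, 6])
print(sorted(col.select_range(5, 12)))   # [6, 7, 8, 9, 11]
```

Unlike building a full sort or B-tree up front, the reorganization work is proportional to the queries actually asked; deciding when and what to crack is left to MonetDB's optimizers, which this sketch omits, as is update handling.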
Components
A number of extensions exist for MonetDB that extend the functionality of the database engine. Due to the three-layer architecture, top-level query interfaces can benefit from optimizations done in the backend and kernel layers.
SQL
MonetDB/SQL is a top-level extension, which provides complete support for transactions in compliance with the SQL:2003 standard.
GIS
MonetDB/GIS is an extension to MonetDB/SQL with support for the Simple Features Access standard of Open Geospatial Consortium (OGC).
SciQL
SciQL is an SQL-based query language for science applications with arrays as first-class citizens. SciQL allows MonetDB to effectively function as an array database. SciQL is used in the European Union PlanetData and TELEIOS projects, together with the Data Vault technology, providing transparent access to large scientific data repositories. Data Vaults map the data from the distributed repositories to SciQL arrays, allowing for improved handling of spatio-temporal data in MonetDB. SciQL will be further extended for the Human Brain Project.
Data Vaults
Data Vault is a database-attached external file repository for MonetDB, similar to the SQL/MED standard. The Data Vault technology allows for transparent integration with distributed/remote file repositories. It is designed for scientific data exploration and mining, specifically for remote sensing data. There is support for the GeoTIFF (Earth observation), FITS (astronomy), MiniSEED (seismology) and NetCDF formats.
The data is stored in the file repository in the original format, and loaded in the database in a lazy fashion, only when needed. The system can also process the data upon ingestion, if the data format requires it.
As a result, even very large file repositories can be efficiently analyzed, as only the required data is processed in the database. The data can be accessed through either the MonetDB SQL or SciQL interfaces. The Data Vault technology was used in the European Union's TELEIOS project, which was aimed at building a virtual observatory for Earth observation data. Data Vaults for FITS files have also been used for processing astronomical survey data for the INT Photometric H-Alpha Survey (IPHAS).
SAM/BAM
MonetDB has a SAM/BAM module for efficient processing of sequence alignment data. Aimed at the bioinformatics research, the module has a SAM/BAM data loader and a set of SQL UDFs for working with DNA data. The module uses the popular SAMtools library.
RDF/SPARQL
MonetDB/RDF is a SPARQL-based extension for working with linked data, which adds support for RDF and allows MonetDB to function as a triplestore. It is under development for the Linked Open Data 2 project.
R integration
The MonetDB/R module allows UDFs written in R to be executed in the SQL layer of the system. This is done using the native R support for running embedded in another application, in this case inside the RDBMS. Previously, the MonetDB.R connector allowed using MonetDB data sources and processing them in an R session. The newer R integration feature of MonetDB does not require data to be transferred between the RDBMS and the R session, reducing overhead and improving performance. The feature is intended to give users access to functions of the R statistical software for in-line analysis of data stored in the RDBMS. It complements the existing support for C UDFs and is intended to be used for in-database processing.
Python integration
Similarly to the embedded R UDFs in MonetDB, the database now has support for UDFs written in Python/NumPy. The implementation uses NumPy arrays (themselves Python wrappers for C arrays); as a result there is limited overhead, providing a functional Python integration with speed matching native SQL functions. The embedded Python functions also support mapped operations, allowing users to execute Python functions in parallel within SQL queries. The practical side of the feature gives users access to Python/NumPy/SciPy libraries, which can provide a large selection of statistical/analytical functions.
MonetDBLite
Following the release of the remote driver for R (MonetDB.R) and R UDFs in MonetDB (MonetDB/R), the authors created an embedded version of MonetDB in R called MonetDBLite. It is distributed as an R package, removing the need to manage a database server, as required for the previous R integrations. The DBMS runs within the R process itself, eliminating socket communication and serialisation overhead, greatly improving efficiency. The idea behind it is to deliver an SQLite-like package for R, with the performance of an in-memory optimized columnar store.
Former extensions
A number of former extensions have been deprecated and removed from the stable code base over time. Some notable examples include an XQuery extension removed in MonetDB version 5; a JAQL extension, and a streaming data extension called Data Cell.
See also
List of relational database management systems
Comparison of relational database management systems
Database management system
Column-oriented DBMS
Array DBMS
References
Bibliography
External links
Official homepage of MonetDB
MonetDB Solutions - MonetDB's professional services company
Database Architectures group at CWI - the original developers of MonetDB
List of scientific projects using MonetDB
MonetDB.R - MonetDB to R Connector
Big data products
Client-server database management systems
Column-oriented DBMS software for Linux
Cross-platform free software
Cross-platform software
Data warehousing products
Database engines
Free database management systems
Free software programmed in C
Products introduced in 2004
Relational database management systems
Structured storage |
1756585 | https://en.wikipedia.org/wiki/Computer%20literacy | Computer literacy | Computer literacy is defined as the knowledge and ability to use computers and related technology efficiently, with skill levels ranging from elementary use to computer programming and advanced problem solving. Computer literacy can also refer to the comfort level someone has with using computer programs and applications. Another valuable component is understanding how computers work and operate. Computer literacy may be distinguished from computer programming, which primarily focuses on the design and coding of computer programs rather than the familiarity and skill in their use. Various countries, including the United Kingdom and the United States, have created initiatives to improve national computer literacy rates.
Background
Computer literacy differs from digital literacy, which is the ability to communicate or find information on digital platforms. Comparatively, computer literacy measures the ability to use computers and to maintain a basic understanding of how they operate.
A person's computer literacy is commonly measured through questionnaires, which test their ability to write and modify text, trouble-shoot minor computer operating issues, and organize and analyze information on a computer.
To increase their computer literacy, computer users should distinguish which computer skills they want to improve, and learn to be more purposeful and accurate in their use of these skills. By learning more about computer literacy, users can discover more computer functions that are worth using.
Arguments for the use of computers in classroom settings, and thus for the promotion of computer literacy, are primarily vocational or practical. Computers are essential in the modern-day workplace. The instruction of computer literacy in education is intended to provide students with employable skills.
Rapid changes in technology make it difficult to predict the next five years of computer literacy. Computer literacy projects have support in many countries because they conform to general political and economic principles of those countries' public and private organizations. The Internet offers great potential for effective and widespread dissemination of knowledge and for the integration of technological advances. Improvements in computer literacy facilitate this.
History
The term computer literacy was coined by Andrew Molnar in 1978. He was director of the Office of Computing Activities at the National Science Foundation in the United States. Shortly after its formation, computer literacy was discussed in several academic articles. In 1985 the Journal of Higher Education asserted that being computer literate involved mastering word processing, spreadsheet programs, and retrieving and sharing information on a computer.
United Kingdom
In the United Kingdom, a number of prominent video game developers emerged in the late 1970s and early 1980s. The ZX Spectrum, released in 1982, helped to popularize home computing, coding and gaming in Britain and Europe.
The BBC Computer Literacy Project, using the BBC Micro computer, ran from 1980 to 1989. This initiative educated a generation of coders in schools and at home, prior to the development of mass market PCs in the 1990s. 'Bedroom computer innovation' led to the development of early web-hosting companies aimed at businesses and individuals in the 1990s.
An expansion of The BBC Computer Literacy Project was established in 2012. The BBC Computer Literacy Project 2012 was launched to develop students' marketable information technology and computer science skills.
Computer programming skills were introduced into the National Curriculum in 2014.
It was reported in 2017 that roughly 11.5 million United Kingdom citizens did not have basic computer literacy skills. In response, the United Kingdom government published a 'digital skills strategy' in 2017.
First released in 2012, the Raspberry Pi is a computer originally intended to promote the teaching of basic computer science in schools in the UK. It later became far more popular than anticipated and has been used in a wide variety of applications. The Raspberry Pi Foundation promotes the teaching of elementary computer science in UK schools and in developing countries.
United States
In the United States, students are introduced to tablet computers in preschool or kindergarten. Tablet computers are preferred for their small size and touchscreens. The touch user interface of a tablet computer is more accessible to the under-developed motor skills of young children. Early childhood educators use student-centered instruction to guide the young student through various activities on the tablet computer. This typically includes Internet browsing and the use of applications, familiarizing the young student with a basic level of computer proficiency.
A concern raised within this topic of discussion is that primary and secondary education teachers are often not equipped with the skills to teach basic computer literacy.
In the United States job market, computer illiteracy severely limits employment options. Non-profit organizations such as Per Scholas attempt to reduce the divide by offering free and low-cost computers to children and their families in under-served communities in South Bronx, New York, Miami, Florida, and in Columbus, Ohio.
Worldwide Computer Literacy Rates
Computer literacy world averages, as determined by the World Economic Forum, suggest that OECD countries are not as computer literate as one would expect: 25% of individuals do not know how to use a computer, at least 45% rate poorly, and only 30% rate as moderately to strongly computer literate.
See also
Digital divide
Digital literacy
Information literacies
Transliteracy
Web literacy
Computers
BBC Micro
OLPC XO
Raspberry Pi
Initiatives
BBC Computer Literacy Project 2012
European Computer Driving Licence
One Laptop per Child
References
Further reading |
16889689 | https://en.wikipedia.org/wiki/Paragon%20Software%20Group | Paragon Software Group | Paragon Software Group is a German software company that develops hard drive management software, low-level file system drivers and storage technologies. The Smart Handheld Device Division (SHDD) offers multilingual dictionaries, multilingual handwriting recognition, weather information, and two-way data synchronization with desktop devices.
Overview
The company is headquartered in Freiburg im Breisgau, Germany, with offices in the US, China, Japan, Poland, and Russia.
The company was established in 1994 by a group of Moscow Institute of Physics and Technology (MIPT) students, including founder/CEO Konstantin Komarov. A separate mobile division, called the Mobility Division, was formed in 1995. The German office opened in 1998, the Swiss office in 2000.
History
In 2004, the company started working with Fujitsu Siemens on the Russian localization of its handheld PCs. The next year, the company expanded its line of office and gaming applications for Symbian OS and received the "Developer of the Year" award in the Handango Champion Awards 2005.
In 2011, PCMag recognized the company's flagship Paragon Hard Disk Manager as the best hard drive management program. Paragon Software Group also won Global Telecoms Business Innovation Award 2011 for their mobile product.
Products
Paragon Software Group serves two markets:
Data security and storage management – disaster recovery and server optimization.
Software for smartphones – multilingual on-line handwriting recognition, localization, business and productivity applications, games, 120 multilingual dictionaries, and encyclopedias.
Data security and storage management
Paragon Hard Disk Manager, including a tool for resizing partitions.
Paragon File System Link (proprietary)
Software for smartphones
Slovoed
Slovoed is a dictionary and phrasebook app available for more than 37 world languages.
It incorporates more than 350 electronic dictionaries, encyclopedias and phrase books developed in conjunction with Duden, Langenscheidt, Oxford UP, PONS/Klett, Le Robert, VOX, and other publishing houses.
PenReader
PenReader is a real-time handwriting recognition technology for touchscreens with support for 17 languages. The technology is used in a number of prominent iOS and Android applications, such as Evernote, Handwriting Dato and Handwrite Note Free, MyScript Calculator. A 2016 MacWorld review of PenReader was headlined "Disappointing" and added "When it comes to handwriting recognition, PenReader isn't particularly accurate, intuitive, or easy to use."
Distribution channels
Paragon distributes online through the company website, a network of value-added resellers, distributors and OEMs.
The global partnerships of Paragon Software Group include ASUS, Avast, Belkin, D-Link, HP, Intel, Microsoft, Netgear, Nvidia, Realtek, Seagate, Siemens, Technicolor, Telechips, Western Digital, Wyplay.
Competitors
Alternative offerings include those from Acronis, VCOM/System Commander, and Symantec/PowerQuest/PartitionMagic.
A list in an "Alternatives" article named eight others, including Iperius Backup, NovaStar DataCenter, Todo Backup, Macrium Reflect, MiniTool Partition Wizard, Daemon Tools, and Clonezilla.
See also
PTS/DOS (PTS=PhysTechSoft; the "P" is also a hint to Paragon)
References
German companies established in 1994
Privately held companies of Germany
Software companies of Germany |
62657428 | https://en.wikipedia.org/wiki/Structure%20and%20Interpretation%20of%20Computer%20Programs%2C%20JavaScript%20Edition | Structure and Interpretation of Computer Programs, JavaScript Edition | Structure and Interpretation of Computer Programs, JavaScript Edition (SICP JS) is an adaptation of the computer science textbook Structure and Interpretation of Computer Programs (SICP). It teaches fundamental principles of computer programming, including recursion, abstraction, modularity, and programming language design and implementation. While the original version of SICP uses the programming language Scheme, this edition uses the programming language JavaScript.
This edition features a foreword by Guy L. Steele Jr. and is scheduled for publication on April 12, 2022.
Content
Like its original, SICP JS focuses on discovering general patterns for solving specific problems, and building software systems that make use of those patterns. The book describes computer science concepts using JavaScript. It also uses a virtual register machine and assembler to implement JavaScript interpreters and compilers.
License
The book will be published by MIT Press under a Creative Commons Attribution NonCommercial ShareAlike 4.0 License. The text and figures are subject to a Creative Commons Attribution ShareAlike 4.0 License. The JavaScript programs are licensed under the GNU General Public License 3.0. The original image of MIT founder William Barton Rogers in section 2.2.4 is courtesy of the MIT Museum.
Origin
The National University of Singapore (NUS) has published draft editions online since 2012, with a first public release on December 13, 2019.
SICP JS has been used in the course CS1101S at NUS since 2012.
Differences from the original textbook
While the book focuses on principles, models and abstractions for programming rather than specific programming languages, all examples in the original SICP are written in the programming language Scheme. SICP JS uses the language JavaScript instead of Scheme. Since JavaScript shares its functional core with Scheme, the adaptation is straightforward and mostly literal in the first three chapters. Chapter four offers new material, in particular an introduction to the notion of program parsing. The evaluator and compiler in chapter five introduce a subtle stack discipline to support return statements (a prominent feature of statement-oriented languages) without sacrificing tail recursion.
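To illustrate the "mostly literal" nature of the adaptation, here is a hedged sketch (not an excerpt from the book) of how a classic Scheme-style recursive definition carries over to JavaScript almost word for word:

```javascript
// Scheme version, in SICP's style:
//   (define (factorial n)
//     (if (= n 1) 1 (* n (factorial (- n 1)))))
// Near-literal JavaScript transcription:
function factorial(n) {
    return n === 1
        ? 1                       // base case
        : n * factorial(n - 1);   // recursive case
}
console.log(factorial(5)); // 120
```

The conditional expression and recursive call map one-to-one, which is why the first three chapters can track the original so closely.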
Source
Source is a series of sublanguages of JavaScript, originally inspired by the work of Douglas Crockford. It comprises the languages Source §1, Source §2, Source §3 and Source §4, corresponding to the respective chapters of SICP JS. Each language is a sublanguage of the next, and designed to contain only the features needed by the respective chapter. These languages are implemented by the Source Academy, a web-based programming environment that features various tools to support the readers of SICP JS.
See also
Structure and Interpretation of Computer Programs
References
External links
2012 non-fiction books
2019 non-fiction books
Computer science books
Computer programming books
Creative Commons-licensed books
National University of Singapore
JavaScript programming language family
Scheme (programming language) |
40974217 | https://en.wikipedia.org/wiki/Host%20card%20emulation | Host card emulation | Host card emulation (HCE) is the software architecture that provides exact virtual representation of various electronic identity (access, transit and banking) cards using only software. Prior to the HCE architecture, near field communication (NFC) transactions were mainly carried out using secure elements.
HCE enables mobile applications running on supported operating systems to offer payment card and access card solutions independently of third parties while leveraging cryptographic processes traditionally used by hardware-based secure elements without the need for a physical secure element. This technology enables the merchants to offer payment cards solutions more easily through mobile closed-loop contactless payment solutions, offers real-time distribution of payment cards and allows for an easy deployment scenario that does not require changes to the software inside payment terminals.
History
The term "host card emulation" (HCE) was coined in 2012 by Doug Yeager and Ted Fifelski, the founders of SimplyTapp, Inc., describing the ability to open a communication channel between a contactless payments terminal and a remotely hosted secure element containing financial payment card data, allowing financial transactions to be conducted at a point-of-sale terminal. They have implemented this new technology on the Android operating system. At that time, RIM had a similar functionality, calling it "virtual target emulation", which was supposed to be available on the BlackBerry Bold 9900 device through the BB7 operating system. Prior to HCE, card emulation only existed in physical space, meaning that a card could be replicated with multiple-purpose secure element hardware that is typically housed inside the casing of a smart phone.
After the adoption of HCE by Android, Google had hoped that by including HCE in the world's largest mobile operating system (which by that time covered 80% of the market), it would offer the Android payments ecosystem a chance to grow more rapidly while also allowing Google themselves to deploy their Google Wallet more easily across the mobile network operator ecosystem. However, even with the inclusion of HCE in Android 4.4, the banks still needed the major card networks to support HCE. Four months later, at Mobile World Congress 2014, both Visa and MasterCard made public announcements about supporting the HCE technology.
On December 18, 2014, less than ten months after Visa and MasterCard announced their support for HCE, Royal Bank of Canada (RBC) became the first North American financial institution to launch a commercial implementation of mobile payments using the HCE technology.
As a result of widespread adoption of HCE, some companies offer modified implementations that usually focus on providing additional security for the HCE's communication channel. One such implementation is termed HCE+.
Impact
NFC has faced adoption issues due to lack of infrastructure (terminals) and the secure element approach preventing organizations with the desire to participate in mobile payments from doing so due to the high up-front capital costs and complex partner relationships.
By supporting HCE in Android 4.4, Google enabled any organization that can benefit from the NFC technology to do so at a relatively low cost. Some areas the new HCE architecture can support include payments, loyalty programs, card access and transit passes.
Implementation
Host card emulation is the ability for near field communication (NFC) information transfer to happen between a terminal configured to exchange NFC radio information with an NFC card and a mobile device application configured to act or pretend to emulate the functional responses of an NFC card. HCE requires that the NFC protocol be routed to the main operating system of the mobile device instead of being routed to a local hardware-based secure element (SE) chip configured to respond only as a card, with no other functionality.
Since the release of Android 4.4, Google has implemented HCE within the Android operating system. Google introduced platform support for secure NFC-based transactions through Host Card Emulation (HCE), for payments, loyalty programs, card access, transit passes, and other custom services.
With HCE, any app on an Android 4.4 device can emulate an NFC smart card, letting users tap to initiate transactions with an app of their choice. Apps can also use a new Reader Mode so as to act as readers for HCE cards and other NFC-based transactions.
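The request/response logic such an emulated card implements can be sketched as follows. This is a schematic illustration only: on Android the equivalent logic would live in a HostApduService, and the application identifier (AID) below is a made-up example, not a registered one.

```javascript
// Schematic sketch of HCE-style card emulation: answer a terminal's
// ISO 7816-4 SELECT command. The AID here is hypothetical.
const AID = [0xF0, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06];
const SW_OK = [0x90, 0x00];          // status word: success
const SW_NOT_FOUND = [0x6A, 0x82];   // status word: application not found

function processCommandApdu(apdu) {
  // A SELECT-by-AID command starts with CLA=0x00, INS=0xA4, P1=0x04
  const isSelect = apdu[0] === 0x00 && apdu[1] === 0xA4 && apdu[2] === 0x04;
  const lc = apdu[4];                  // length of the AID field
  const aid = apdu.slice(5, 5 + lc);
  if (isSelect && aid.length === AID.length &&
      aid.every((b, i) => b === AID[i])) {
    return SW_OK;                      // the emulated card is "selected"
  }
  return SW_NOT_FOUND;
}

// A terminal's SELECT command for our hypothetical AID:
const select = [0x00, 0xA4, 0x04, 0x00, AID.length, ...AID];
console.log(processCommandApdu(select).map(b => b.toString(16).padStart(2, '0')));
// [ '90', '00' ]  (success status word)
```

The point of HCE is exactly that this kind of function runs in an ordinary app on the main OS, rather than inside a hardware secure element.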
The first known mobile handset to support anything like HCE outside of the Android family was the BlackBerry Bold 9900, which was first available in Thailand and was released together with the BlackBerry 7 OS.
The CyanogenMod operating system was the next known mobile device operating system to support HCE, through Doug Yeager's modifications to the NXP NFC stack (known as libnfc-nxp), the NFC service manager, and the operating system APIs. The OS APIs were adapted to include two new tag types, called ISO_PCDA and ISO_PCDB, which correspond to the terminal (PCD) standards. This implied that a device could "read" a terminal in the same manner that it could read a tag.
Microsoft has announced new support for HCE NFC payments in Windows 10. This will allow improved payment integration flows and enable coexistence of HCE with UICC-based secure elements in Windows 10 and Windows 10 Mobile.
Uses
HCE is used to allow transactions between mobile devices and other credential acquiring devices. Those devices may include other mobile devices, contactless point-of-sale terminals, transit turnstiles, or a variety of access control touch pads. For example, Android developers can leverage HCE to create specific payment experiences, such as using HCE to enable a mobile application as a transit card.
References
Near-field communication
Smart cards |
56612525 | https://en.wikipedia.org/wiki/Commercial%20augmented%20reality | Commercial augmented reality | Commercial augmented reality (CAR) describes augmented reality (AR) applications that support various B2B (Business-to-Business) and B2C (Business-to-Consumer) commercial activities, particularly for the retail industry. The use of CAR started in 2010 with virtual dressing rooms for E-commerce.
For commercial purposes, AR applications are often used to integrate print and video marketing. With an AR-enabled device, such as a smartphone or smart glass, aiming a camera at a printed material can trigger an AR video version of the promotional and informational material superimposed on the image.
Apart from the primary use of CAR, technological advancements have yielded more commercial applications for retail, B2C and B2B markets operating with physical stores as well as online virtual stores.
History
The history of commercial augmented reality is brief compared to that of augmented reality.
In 2010, virtual dressing rooms were developed for E-commerce retailers to help customers check the look and fit of products such as clothing, undergarments, apparel, fashion products, and accessories. An AR technology was developed in 2012 to market a commemorative coin in Aruba. In 2013, CrowdOptic technology was used to create AR experiences for an annual festival in Toronto, Canada. Makeup Genius, an AR app for trying out beauty makeup and styles on handheld devices, was released in 2014.
An AR app was launched for the art market in 2015. In 2016, a Wikitude update gave businesses the opportunity to run AR campaigns; users can point phone cameras at certain places and get information from websites such as Yelp, TripAdvisor, Twitter, and Facebook. In 2017, Lenovo developed a Tango-enabled smartphone to assist retailers. The Wayfair app enables customers to try a virtual piece of furniture in their home or office before buying.
Technology
CAR technology dates back to the 1960s but grew considerably during the 2000s. CAR involves several contemporary technology components; the three major ones are the hardware, software, and algorithms of AR.
Hardware for commercial augmented reality
With advancements in computing and allied hardware technologies, AR hardware such as display devices, sensors, input devices, and computing processors have improved over time.
Display hardware components for CAR
Display hardware includes head-mounted displays (HMD) such as a harness or helmet, eyeglasses, head-up displays (HUD), contact lenses, virtual retinal displays (VRD), and the EyeTap. Spatial AR (SAR) enhances real-world objects in spaces without depending on any display device; SAR variants include shader lamps, mobile projectors, virtual tablets, and smart projectors.
Sensors for CAR
Tracking and networking hardware must work in seamless combination to bring about the desired level of mobility in CAR systems. The latest smartphones and tablets include cameras that act as optical sensors; accelerometers and gyroscopes for position tracking; solid-state compasses and Global Positioning System (GPS) circuits for location detection; radio-frequency identification (RFID) for radio signal detection; Wi-Fi for networking; and several third-party mobile sensors for a myriad of purposes.
Input CAR devices
Complete interactivity in AR systems requires a range of input devices: keyboards for textual input; speech recognition systems such as Siri, Cortana, and Google Voice; gloves, styluses, pointers, and other sensor-equipped wearables for body-gesture input; and eye-movement detection sensors and hardware.
Software and algorithms for commercial augmented reality
AR software should be capable of carrying out an image registration process, in which the software works independently of the camera and camera images and derives real-world coordinates to accomplish the AR process. AR software can achieve this in two steps: first, it detects interest points, fiducial markers, or optical flow in camera images or video; second, it restores a real-world coordinate system from the data collected in the first step. Methods used to restore real-world coordinates include SLAM (simultaneous localization and mapping), structure-from-motion methods such as bundle adjustment, and mathematical methods such as projective or epipolar geometry, geometric algebra, and rotation representations (with exponential maps, Kalman and particle filters, non-linear optimization, and robust statistics).
Commercial augmented reality programming technology
ARML (Augmented Reality Markup Language) aims to define augmented reality scenes and the interactions within them. ARML combines XML and ECMAScript: XML describes the location and appearance of the virtual objects in the AR visualization, while ECMAScript bindings provide dynamic access to the properties of those virtual objects.
Object model of augmented reality markup language
The model is built on three main concepts: Features, which represent physical objects in the AR scene; Virtual Assets, which represent virtual objects in the scene; and Anchors, which define the spatial relationship between a physical and a virtual object. Anchors come in four types: Geometry, Trackable, RelativeTo, and ScreenAnchor.
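The relationships between these concepts can be sketched as a plain data structure. This is a purely illustrative, hypothetical encoding; real ARML documents are XML, and the field names below are invented for the sketch, not taken from the specification.

```javascript
// Hypothetical JavaScript encoding of the ARML object model:
// a Feature holds Anchors, and an Anchor ties Virtual Assets to a
// location in the physical world.
const scene = {
  features: [{
    id: "poi-1",                            // a physical object in the scene
    anchors: [{
      type: "Geometry",                     // one of: Geometry, Trackable,
                                            // RelativeTo, ScreenAnchor
      position: { lat: 47.0, lon: 13.0 },   // where the anchor sits
      assets: [{ kind: "label", text: "Point of interest" }] // virtual asset
    }]
  }]
};
console.log(scene.features[0].anchors[0].type); // "Geometry"
```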
Commercial augmented reality SDKs
Just like other technologies, AR application development kits are available for a rapid development process in the form of Software development kits (SDKs) including: CloudRidAR, Vuforia, AR ToolKit, Catchoom CraftAR, Mobinett AR, Wikitude, Blippar, Layar, Meta, and ARLab.
Applications
Objective of CAR
According to Hemant, the objective of CAR is to bring virtual, computer-generated objects into the physical world using simulation techniques. Moreover, it permits real-time interactions that benefit various commercial sectors and industries.
Commercial augmented reality simulation process
The optical device that combines the real-world and virtual-world experiences is the "combiner", which acts as a platform for commercial augmented reality experiences. The entire CAR process consists of three main phases: first, recognizing the object or image; second, tracking the object or image in space; and third, superimposing virtual objects on the physical world.
Traditional combiners for CAR process
Traditional combiners come in two implementations: polarized beam (flat) combiners and off-axis (curved) combiners.
Non-conventional combiners for CAR process
The non-conventional techniques involve diffractive optics as well as holography, building on hologram and waveguide concepts. The underlying principle is the extraction of a collimated image that is guided by total internal reflection (TIR) through a waveguide pipe. The waveguide behaves like a router, transmitting the image to the user's eyes while providing highly sophisticated see-through optics.
User interactions in AR
Most AR devices use a touchpad and voice commands to provide user interaction. Smartphones and tablet devices are excellent candidates for interacting with AR applications; therefore, most AR applications on the market are based on handheld devices, whether they use traditional or non-traditional AR techniques and technologies.
Commercial augmented reality applications
Augmented reality is gradually changing the scenario of B2B and B2C businesses by providing AR applications. Hemant lists several CAR applications in detail.
The AR dressing room application
Fashion and apparel customers buy products after selecting the best fit by trying them on in a changing room, which can result in lengthy queues waiting for a vacant room. Topshop, working with Kinect, has created CAR dressing rooms that overcome the problem to some extent; the technology has even allowed for size estimation in the dressing room, and The Gap has followed the trend. The augmented reality dressing rooms are equipped with AR devices that focus on the targeted dress or product, capture a virtual 3D image of it, and help visualize the dress on the shopper's body.
Product previews application
The in-store retail customer can view a virtual preview of a packaged product without opening it at all. An AR app for Lego is an ideal example of this use: Lego displays an animation of a product in an informative manner to interested children and their parents. Image recognition technology is behind it, rather than sticking a code on the box and scanning it.
To accomplish this, Lego has implemented a second-generation Sandy Bridge Intel processor that can pop up a 3D animation over the top of the box. Moreover, the animation moves or rotates as the box moves or rotates, made possible by recognition of the box's movements and orientation.
The CAR triggered products application
The AR event was triggered automatically by focusing on an Aruba coin with AR hardware. The AR event revealed additional objects and information, which was invisible without the coin.
Makeup CAR application
Shiseido has developed a makeup mirror called TeleBeauty that helps shoppers visualize how a product will perform on their faces before applying it. The AR mirror can portray the shopper's image with lipsticks, eyeliners, and blushes, updated in real time.
Beauty style CAR application
The best example is the Burberry Beauty Box AR application. It provides a nail bar application. Shoppers can choose their skin tone with the app and paint different polishes on the bar to check how the polishes look in real life.
Art market CAR application
In 2015, an AR app was developed by Itondo with the aim of visualizing an art piece at different locations on walls before taking it home from a gallery. It displays live previews of a two-dimensional image of the artwork, scaled appropriately on the walls. Moreover, it enables an art gallery to display background previews using pre-saved photos of the different walls provided by the shopper. The app helps the user visualize the best location for the artwork before they make a purchase.
The color changing CAR application
American Apparel offers products in many colors and color combinations, which can make the color selection process daunting. It has therefore created an AR app that helps with the selection process without the customer having to wear the actual product: the app simulates the same product in the available color choices, making selection easier. The app also provides real-time ratings and reviews uploaded by customers online, tempting online shoppers to visit the bricks-and-mortar stores.
The fitting CAR Application
De Beers is a known entity in the jewelry industry. It has released an AR app for online shoppers who wish to see jewelry products as if they were wearing them in the real world. The company provides images of products through its Forevermark Fitting site, which the shopper can download and print on paper. The user then points the mobile AR app's camera at the printed image of the item. The app displays a virtual simulation of the jewelry with real-time updates, so products move with the user's movements and display different facets at different angles. Moreover, customers can judge how the jewelry looks in certain lighting and on different skin tones.
The catalog CAR application
A product catalog for items like furniture cannot be tested in real life in a real environment, and small 3D images of products are of little use when the user wants to see a furniture product in their own home or office. IKEA has launched its AR catalog, IKEA Place, which helps visualize furniture products in real-world spaces like homes or offices. It also helps customers judge whether the size and shape of the furniture fit the actual environment and meet their needs.
The personal shopper CAR application
IBM has released an AR app that helps shoppers obtain detailed information on a product without touching it or asking sales assistants to describe it. The CAR personal shopping application is capable of providing highly personalized experiences as well as marketing offers with a personalized touch, all in real time if beacon technology is deployed in the store.
The shoe sampler CAR application
The Converse Sampler is an AR app that helps customers visualize a shoe with real-time updates. After opening the app, the customer points their mobile device's camera at their feet. The app provides a catalog for product selection; once a selection is made, the app superimposes the product on the customer's feet, giving an idea of the fit as well as the look, so the customer can purchase the product online with confidence.
Controversy
A controversy was created by Pokémon Go, a game with two technical problems: its tracking and visualization processes were handled in the absence of an ergonomic, safe, and secure environment. Players' immersion in the game was so deep that it resulted in several deaths, which caused some governments, such as China's, to ban the game. This unconventional combination of technologies may lead to new inventions, but the cost of the hardware, software, and implementation makes common commercial production challenging.
References
Augmented reality applications |
39880682 | https://en.wikipedia.org/wiki/Tcl | Tcl | Tcl (pronounced "tickle" or as an initialism) is a high-level, general-purpose, interpreted, dynamic programming language. It was designed with the goal of being very simple but powerful. Tcl casts everything into the mold of a command, even programming constructs like variable assignment and procedure definition. Tcl supports multiple programming paradigms, including object-oriented, imperative and functional programming or procedural styles.
It is commonly used embedded into C applications, for rapid prototyping, scripted applications, GUIs, and testing. Tcl interpreters are available for many operating systems, allowing Tcl code to run on a wide variety of systems. Because Tcl is a very compact language, it is used on embedded systems platforms, both in its full form and in several other small-footprint versions.
The popular combination of Tcl with the Tk extension is referred to as Tcl/Tk, and enables building a graphical user interface (GUI) natively in Tcl. Tcl/Tk is included in the standard Python installation in the form of Tkinter.
History
The Tcl programming language was created in the spring of 1988 by John Ousterhout while working at the University of California, Berkeley. Originally "born out of frustration", according to the author, with programmers devising their own languages intended to be embedded into applications, Tcl gained acceptance on its own. Ousterhout was awarded the ACM Software System Award in 1997 for Tcl/Tk.
The name originally comes from Tool Command Language, but is conventionally spelled "Tcl" rather than "TCL".
Tcl conferences and workshops are held in both the United States and Europe.
Features
Tcl's features include
All operations are commands, including language structures. They are written in prefix notation.
Commands commonly accept a variable number of arguments (are variadic).
Everything can be dynamically redefined and overridden. Actually, there are no keywords, so even control structures can be added or changed, although this is not advisable.
All data types can be manipulated as strings, including source code. Internally, variables have types like integer and double, but conversion is purely automatic.
Variables are not declared, but assigned to. Use of a non-defined variable results in an error.
Fully dynamic, class-based object system, TclOO, including advanced features such as meta-classes, filters, and mixins.
Event-driven interface to sockets and files. Time-based and user-defined events are also possible.
Variable visibility restricted to lexical (static) scope by default, but uplevel and upvar allow procs to interact with the enclosing functions' scopes.
All commands defined by Tcl itself generate error messages on incorrect usage.
Extensibility, via C, C++, Java, Python, and Tcl.
Interpreted language using bytecode
Full Unicode (3.1 in the beginning, regularly updated) support, first released 1999.
Regular expressions
Cross-platform: Windows API; Unix, Linux, Macintosh etc.
Close, cross-platform integration with windowing (GUI) interface Tk.
Multiple distribution mechanisms exist:
Full development version (for Windows e.g. ActiveState Tcl, see )
Tclkits (single file executable containing a complete scripting runtime, only about 4 megabytes in size), Starkits (wrapping mechanism for delivering an application in a self-contained, installation-free, and highly portable way) and Starpacks (combine Starkit with Tclkit to produce a Starpack – a single platform specific executable file, ideal for easy deployment)
The Jim Interpreter, a small footprint Tcl implementation
Freely distributable source code under a BSD license.
Safe-Tcl
Safe-Tcl is a subset of Tcl that has restricted features so that Tcl scripts cannot harm their hosting machine or application. File system access is limited and arbitrary system commands are prevented from execution. It uses a dual interpreter model, with the untrusted interpreter running the code of an untrusted script. It was designed by Nathaniel Borenstein and Marshall Rose to include active messages in e-mail. Safe-Tcl can be included in e-mail when the application/safe-tcl and multipart/enabled-mail MIME types are supported. The functionality of Safe-Tcl has since been incorporated as part of the standard Tcl/Tk releases.
Syntax and fundamental semantics
The syntax and semantics of Tcl are covered by twelve rules known as the Dodekalogue.
A Tcl script consists of a series of command invocations. A command invocation is a list of words separated by whitespace and terminated by a newline or semicolon. The first word is the name of a command, which may be built into the language, found in an available library, or defined in the script itself. The subsequent words serve as arguments to the command:
commandName argument1 argument2 ... argumentN
The following example uses the puts (short for "put string") command to display a string of text on the host console:
puts "Hello, World!"
This sends the string "Hello, World!" to the standard output device along with an appended newline character.
Variables and the results of other commands can be substituted into strings, such as in this example which uses the set and expr commands to store the result of a calculation in a variable (note that Tcl does not use = as an assignment operator), and then uses puts to print the result together with some explanatory text:
# expr evaluates text string as an expression
set sum [expr 1+2+3+4+5]
puts "The sum of the numbers 1..5 is $sum."
The # character introduces a comment. Comments can appear anywhere the interpreter is expecting a command name.
# with curly braces, variable substitution is performed by expr
set x 1
set sum [expr {$x + 2 + 3 + 4 + 5}]; # $x is not substituted before passing the parameter to expr;
# expr substitutes 1 for $x while evaluating the expression
puts "The sum of the numbers 1..5 is $sum."; # sum is 15
# without curly braces, variable substitution occurs at the definition site (lexical scoping)
set x 2
set op *
set y 3
set res [expr $x$op$y]; # $x, $op, and $y are substituted, and the expression is evaluated
puts "2 * 3 is $res."; # 6 is substituted for $res
As seen in these examples, there is one basic construct in the language: the command. Quoting mechanisms and substitution rules determine how the arguments to each command are processed.
One special substitution occurs before the parsing of any commands or arguments. If the final character on a line (i.e., immediately before a newline) is a backslash, then the backslash-newline combination (and any spaces or tabs immediately following the newline) are replaced by a single space. This provides a line continuation mechanism, whereby long lines in the source code can be wrapped to the next line for the convenience of readers.
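For example, the following sketch splits one command invocation across two lines; the backslash-newline and the leading whitespace of the second line collapse into a single space before parsing:

```tcl
# The backslash-newline (plus any leading whitespace on the next line)
# is replaced by one space, so this is a single command invocation.
set result [expr {1 + 2 + \
                  3 + 4}]
puts $result   ;# prints 10
```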
Continuing with normal argument processing, a word that begins with a double-quote character (") extends to the next double-quote character. Such a word can thus contain whitespace and semicolons without those characters being interpreted as having any special meaning (i.e., they are treated as normal text characters). A word that begins with an opening curly-brace character ({) extends to the next closing curly-brace character (}). Inside curly braces all forms of substitution are suppressed except the previously mentioned backslash-newline elimination. Words not enclosed in either construct are known as bare words.
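A short sketch of the three word forms (the variable name is invented for illustration):

```tcl
set name World
puts "Hello, $name: 1+1=[expr {1+1}]"   ;# double quotes allow substitution: Hello, World: 1+1=2
puts {Hello, $name: 1+1=[expr {1+1}]}   ;# braces suppress substitution: printed literally
puts Hello                              ;# a bare word
```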
In bare and double-quoted words, three types of substitution may occur:
Command substitution replaces the contents of balanced square brackets with the result of evaluating the script contained inside. For example, [expr 1+2+3] is replaced by the result of evaluating the contained expression (in this case 6).
Variable substitution replaces the name of a variable prefixed with a dollar sign with the contents (or value) of the variable. For example, $foo is replaced by the contents of the variable called "foo". The variable name may be surrounded by curly braces to separate it from subsequent text in otherwise ambiguous cases (e.g., ${foo}ing).
Backslash substitution replaces a backslash followed by a letter with another character. For example, \n is replaced by a newline.
Substitution proceeds left-to-right in a single scan through each word. Any substituted text will not be scanned again for possible further substitutions. However, any number of substitutions can appear in a single word.
From Tcl 8.5 onwards, any word may be prefixed by {*}, which causes the word to be split apart into its constituent sub-words for the purposes of building the command invocation (similar to the ,@ sequence of Lisp's quasiquote feature).
As a consequence of these rules, the result of any command may be used as an argument to any other command. Note that, unlike in Unix command shells, Tcl does not reparse any string unless explicitly directed to do so, which makes interactive use more cumbersome, but scripted use more predictable (e.g., the presence of spaces in filenames does not cause difficulties).
The single equality sign (=) serves no special role in the language at all. The double equality sign (==) is the test for equality which is used in expression contexts such as the expr command and in the first argument to if. (Both commands are part of the standard library; they have no special place in the library and can be replaced if desired.)
The majority of Tcl commands, especially in the standard library, are variadic, and the proc (the constructor for scripted command procedures) allows one to define default values for unspecified arguments and a catch-all argument to allow the code to process arbitrary numbers of arguments.
Tcl is not statically typed: each variable may contain integers, floats, strings, lists, command names, dictionaries, or any other value; values are reinterpreted (subject to syntactic constraints) as other types on demand. However, values are immutable and operations that appear to change them actually just return a new value instead.
Basic commands
The most important commands that refer to program execution and data operations are:
set writes a new value to a variable (creating the variable if it did not exist). If used with only one argument, it returns the value of the given variable (which must exist in this case).
proc defines a new command, whose execution results in executing a given Tcl script, written as a set of commands. return can be used to immediately return control to the caller.
The usual execution control commands are:
if executes a given script body (second argument) if the condition (first argument) is satisfied. It can be followed by additional arguments: elseif with an alternative condition and body, or else with a complementary block.
while repeatedly executes a given script body as long as the condition (first argument) remains satisfied
foreach executes the given body with the control variable assigned each element of a list in turn.
for a shortcut combining initialization of the control variable, a condition (as in while), and a "next iteration" command executed after each run of the body
Those above looping commands can be additionally controlled by the following commands:
break interrupts the body execution and returns from the looping command
continue interrupts the body execution, but control is still returned to the looping command: while tests its condition again, and for and foreach pick up the next iteration.
return interrupts execution of the current body, no matter how deeply nested within the procedure, up to the procedure boundary, and returns the given value to the caller.
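The interplay of these loop-control commands can be sketched as follows:

```tcl
foreach n {1 2 3 4 5 6} {
    if {$n % 2 == 0} { continue }   ;# skip even numbers, go to the next iteration
    if {$n > 4} { break }           ;# leave the loop entirely
    puts $n
}
# prints 1 and 3; the loop ends when n reaches 5
```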
Advanced commands
expr passes the argument to a separate expression interpreter and returns the evaluated value. Note that the same interpreter is also used for the "conditional" expressions of if and the looping commands.
list creates a list comprising all the arguments, or an empty string if no argument is specified. The lindex command may be used on the result to re-extract the original arguments.
array manipulates array variables.
dict manipulates dictionary values (since 8.5), which are lists with an even number of elements in which each consecutive pair of elements is interpreted as a key/value pair.
regexp matches a regular expression against a string.
regsub performs substitutions based on regular expression pattern matching.
uplevel is a command that allows a command script to be executed in a scope other than the current innermost scope on the stack.
upvar creates a link to a variable in a different stack frame.
namespace lets you create, access, and destroy separate contexts for commands and variables.
apply applies an anonymous function (since 8.5).
coroutine, yield, and yieldto create and produce values from coroutines (since 8.6).
try lets you trap and process errors and exceptions.
catch lets you trap exceptional returns.
zlib provides access to the compression and checksumming facilities of the Zlib library (since 8.6).
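A few of these commands in action (a brief sketch; the variable names are invented for illustration):

```tcl
# dict: an even-length list treated as key/value pairs
set user [dict create name Ada born 1815]
puts [dict get $user name]                  ;# prints Ada

# apply: invoke an anonymous function, given as a two-element {params body} list
puts [apply {{x y} {expr {$x + $y}}} 2 3]   ;# prints 5

# try/trap: handle a division-by-zero error by its error code
try {
    expr {1 / 0}
} trap {ARITH DIVZERO} {msg} {
    puts "caught: $msg"
}
```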
Uplevel
uplevel allows a command script to be executed in a scope other than the current innermost scope on the stack. Because the command script may itself call procedures that use the uplevel command, this has the net effect of transforming the call stack into a call tree.
It was originally implemented to permit Tcl procedures to reimplement built-in commands (like for, if or while) and still have the ability to manipulate local variables. For example, the following Tcl script is a reimplementation of the for command (omitting exception handling):
proc for {initCmd testExpr advanceCmd bodyScript} {
uplevel 1 $initCmd
set testCmd [list expr $testExpr]
while {[uplevel 1 $testCmd]} {
uplevel 1 $bodyScript
uplevel 1 $advanceCmd
}
}
Upvar
upvar arranges for one or more local variables in the current procedure to refer to variables in an enclosing procedure call or to global variables. The upvar command simplifies the implementation of call-by-name procedure calling and also makes it easier to build new control constructs as Tcl procedures.
A decr command that works like the built-in incr command except it subtracts the value from the variable instead of adding it:
proc decr {varName {decrement 1}} {
upvar 1 $varName var
incr var [expr {-$decrement}]
}
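With that definition in place, decr behaves like incr in reverse:

```tcl
set counter 10
decr counter      ;# counter is now 9
decr counter 4    ;# counter is now 5
puts $counter     ;# prints 5
```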
Object-oriented
Tcl 8.6 added a built-in dynamic object system, TclOO, in 2012. It includes features such as:
Class-based object system. This is what most programmers expect from OO.
Allows per-object customization and dynamic redefinition of classes.
Meta-classes
Filters
Mixins
A system for implementing methods in custom ways, so that package authors who want significantly different ways of implementing methods may do so fairly simply.
oo::class create fruit {
method eat {} {
puts "yummy!"
}
}
oo::class create banana {
superclass fruit
constructor {} {
my variable peeled
set peeled 0
}
method peel {} {
my variable peeled
set peeled 1
puts "skin now off"
}
method edible? {} {
my variable peeled
return $peeled
}
method eat {} {
if {![my edible?]} {
my peel
}
next
}
}
set b [banana new]
$b eat → prints "skin now off" and "yummy!"
fruit destroy
$b eat → error "unknown command"
Tcl did not have object-oriented (OO) syntax until 2012, so various extension packages emerged to enable object-oriented programming. They are widespread in existing Tcl source code. Popular extensions include:
incr Tcl
XOTcl
Itk
Snit
STOOOP
TclOO was added not only to build a strong object-oriented system, but also to enable extension packages to build object-oriented abstractions using it as a foundation. After the release of TclOO, incr Tcl was updated to use TclOO as its foundation.
Web application development
Tcl Web Server is a pure-Tcl implementation of an HTTP protocol server. It runs as a script on top of a vanilla Tcl interpreter.
Apache Rivet is an open source programming system for Apache HTTP Server that allows developers to use Tcl as a scripting language for creating dynamic web applications. Rivet is similar to PHP, ASP, and JSP. Rivet was primarily developed by Damon Courtney, David Welton, Massimo Manghi, Harald Oehlmann and Karl Lehenbauer. Rivet can use any of the thousands of publicly available Tcl packages that offer countless features such as database interaction (Oracle, PostgreSQL, MySQL, SQLite, etc.), or interfaces to popular applications such as the GD Graphics Library.
Interfacing with other languages
Tcl interfaces natively with the C language. This is because it was originally written to be a framework for providing a syntactic front-end to commands written in C, and all commands in the language (including things that might otherwise be keywords, such as if or while) are implemented this way. Each command implementation function is passed an array of values that describe the (already substituted) arguments to the command, and is free to interpret those values as it sees fit.
Digital logic simulators often include a Tcl scripting interface for simulating Verilog, VHDL and SystemVerilog hardware languages.
Tools exist (e.g. SWIG, Ffidl) to automatically generate the necessary code to connect arbitrary C functions and the Tcl runtime, and Critcl does the reverse, allowing embedding of arbitrary C code inside a Tcl script and compiling it at runtime into a DLL.
Extension packages
The Tcl language has always allowed for extension packages, which provide additional functionality, such as a GUI, terminal-based application automation, database access, and so on. Commonly used extensions include:
Tk The most popular Tcl extension is the Tk toolkit, which provides a graphical user interface library for a variety of operating systems. Each GUI consists of one or more frames. Each frame has a layout manager.
Expect Another very popular Tcl extension is Expect. Its early close relationship with Tcl is largely responsible for Tcl's popularity in areas such as Unix testing, where Expect was (and still is) employed very successfully to automate telnet, ssh, and serial sessions and to perform many repetitive tasks (i.e., scripting of formerly interactive-only applications). Because Tcl was the only way to run Expect, Tcl became very popular in these areas of industry.
Tile/Ttk Tile/Ttk is a styles and theming widget collection that can replace most of the widgets in Tk with variants that are truly platform native through calls to an operating system's API. Themes covered in this way are Windows XP, Windows Classic, Qt (that hooks into the X11 KDE environment libraries) and Aqua (Mac OS X). A theme can also be constructed without these calls using widget definitions supplemented with image pixmaps. Themes created this way include Classic Tk, Step, Alt/Revitalized, Plastik and Keramik. Under Tcl 8.4, this package is known as Tile, while in Tcl 8.5 it has been folded into the core distribution of Tk (as Ttk).
Tix Tix, the Tk Interface eXtension, is a set of user interface components that expand the capabilities of Tcl/Tk and Python applications. It is an open source software package maintained by volunteers in the Tix Project Group and released under a BSD-style license.
Itcl/IncrTcl Itcl is an object system for Tcl, and is normally named as [incr Tcl] (that being the way to increment in Tcl, similar in fashion to the name C++).
Tcllib Tcllib is a set of scripted packages for Tcl that can be used with no compilation steps.
Tklib Tklib is a collection of utility modules for Tk, and a companion to Tcllib.
tDOM tDOM is a Tcl extension for parsing XML, based on the Expat parser.
TclTLS TclTLS is an OpenSSL-based TLS extension for Tcl.
TclUDP The TclUDP extension provides a simple library to support User Datagram Protocol (UDP) sockets in Tcl.
Databases Tcl Database Connectivity (TDBC), part of Tcl 8.6, is a common database access interface for Tcl scripts. It currently supports drivers for accessing MySQL, ODBC, PostgreSQL and SQLite databases. More are planned for the future. Access to databases is also supported through database-specific extensions, of which there are many available.
See also
Eggdrop
Expect
TclX
Tkdesk
Comparison of Tcl integrated development environments
Comparison of programming languages
List of programming languages
Environment Modules
References
Further reading
Brent B. Welch, Practical Programming in Tcl and Tk, Prentice Hall, Upper Saddle River, NJ, USA, 2003
J Adrian Zimmer, Tcl/Tk for Programmers, IEEE Computer Society, distributed by John Wiley and Sons, 1998
Mark Harrison and Michael McLennan, Effective Tcl/Tk Programming, Addison-Wesley, Reading, MA, USA, 1998
Bert Wheeler, Tcl/Tk 8.5 Programming Cookbook, Packt Publishing, Birmingham, England, UK, 2011
Wojciech Kocjan and Piotr Beltowski, Tcl 8.5 Network Programming, Packt Publishing, 2010
Clif Flynt, Tcl/Tk, Third Edition: A Developer’s Guide, 2012
Ashok P. Nadkarni, The Tcl Programming Language, 2017
External links
Tcl Sources, main Tcl and Tk source code download website
Tcler's Wiki
TkDocs
Tcl/Tk Korean Textbook
American inventions
Cross-platform free software
Cross-platform software
Dynamically typed programming languages
Free compilers and interpreters
Free software programmed in C
High-level programming languages
Multi-paradigm programming languages
Object-oriented programming languages
Procedural programming languages
Programming languages created in 1988
Scripting languages
Tcl programming language family
Text-oriented programming languages
Homoiconic programming languages |
573901 | https://en.wikipedia.org/wiki/Berry%20Linux | Berry Linux | Berry Linux is a Live CD Linux distribution that has English and Japanese support. Berry Linux is based on and is compatible with Fedora 20 packages. The distribution is primarily focused on use as a Live CD, but it can also be installed to a live USB drive. Berry Linux can be used to try out and showcase Linux, for educational purposes, or as a rescue system, without the need to make changes to a hard disk. The current version is 1.35 released on 7 July 2021.
Features
Berry includes read/write NTFS support, and AIGLX and Beryl are bundled for 3D desktop effects. Berry also uses bootsplash, giving it a graphical startup.
The full version (v1.12) includes and runs on Linux Kernel 3.0.4. It has the ALSA sound system, ACPI support, and SELinux. Berry Linux features automatic hardware detection, with support for many graphics cards, sound cards, SCSI, USB devices and many other peripherals. Network devices are automatically configured with DHCP.
The full version of Berry Linux uses KDE (Version 4.6.5) while Berry Linux Mini uses the Fluxbox window manager. The full version is 512.7MB, while the mini version is 273.4MB. To test Berry Linux it is not necessary to install the distribution to a hard disk, as the operating system runs entirely from CD-ROM. It is, however, possible to install Berry Linux to a hard disk, which requires 1.7 gigabytes of free space.
Supporting Japanese, Berry includes Whiz, a sharp Kana-Kanji conversion system. It also comes with LibreOffice version 3.4.3, a Microsoft Office compatible office suite, as well as TextMaker/PlanMaker as Berry's office software. The GIMP, version 2.6.10, is bundled for graphics software.
Berry includes the media players Audacious, MPlayer, Xine, and Kaffeine. DVD and DivX codecs are installed by default.
Version history
Berry Linux's release history is as follows.
See also
Fedora
References
External links
Berry Linux Project
Berry Linux Project (Japanese)
Berry Linux Project (English)
Download Berry
RPM-based Linux distributions
Live CD
Linux distributions |
479333 | https://en.wikipedia.org/wiki/Raw%20device | Raw device | In computing, specifically in Unix and Unix-like operating systems, a raw device is a special kind of logical device associated with a character device file that allows a storage device such as a hard disk drive to be accessed directly, bypassing the operating system's caches and buffers (although the hardware caches might still be used). Applications like a database management system can use raw devices directly, enabling them to manage how data is cached, rather than deferring this task to the operating system.
In FreeBSD, all device files are in fact raw devices. Support for non-raw devices was removed in FreeBSD 4.0 in order to simplify buffer management and increase scalability and performance.
In the Linux kernel, raw devices were deprecated and scheduled for removal at one point, because the O_DIRECT flag can be used instead. However, the decision was later made to keep raw device support, since some software cannot use the O_DIRECT flag. Raw devices simply open block devices as if the O_DIRECT flag had been specified. Raw devices are character devices (major number 162). The first minor number (i.e. 0) is reserved as a control interface and is usually found at /dev/rawctl. A command-line utility called raw can be used to bind a raw device to an existing block device. These "existing block devices" may be disks or CD-ROMs/DVDs whose underlying interface can be anything supported by the Linux kernel (for example, IDE/ATA or SCSI).
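For illustration, binding and querying with the util-linux raw(8) utility might look like the following sketch (the device names are placeholders, and the commands require root privileges, so treat this as an assumption-laden example rather than something to run blindly):

```shell
# Bind raw device 1 to an existing block device (placeholder: /dev/sdb)
raw /dev/raw/raw1 /dev/sdb

# Query an individual binding, or all bindings
raw -q /dev/raw/raw1
raw -qa
```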
References
Unix file system technology
Linux kernel features |
68646119 | https://en.wikipedia.org/wiki/Critolaus%20%28mythology%29 | Critolaus (mythology) | In Greek mythology, Critolaus (; Ancient Greek: Κριτολάου or Κριτόλαος Kritolaos) was a member of the Trojan royal family as the son of the Trojan elder Hicetaon, son of King Laomedon of Troy. He was the brother of Melanippus, Thymoetes, and possibly, Antenor. Critolaus married Aristomache (daughter of King Priam) who became a captive after the fall of Troy.
Notes
References
Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website.
Dictys Cretensis, from The Trojan War. The Chronicles of Dictys of Crete and Dares the Phrygian translated by Richard McIlwaine Frazer, Jr. (1931-). Indiana University Press. 1966. Online version at the Topos Text Project.
Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. . Online version at the Perseus Digital Library.
Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. . Greek text available at the Perseus Digital Library.
Pausanias, Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. . Online version at the Perseus Digital Library
Pausanias, Graeciae Descriptio. 3 vols. Leipzig, Teubner. 1903. Greek text available at the Perseus Digital Library.
Publius Vergilius Maro, Aeneid. Theodore C. Williams. trans. Boston. Houghton Mifflin Co. 1910. Online version at the Perseus Digital Library.
Publius Vergilius Maro, Bucolics, Aeneid, and Georgics. J. B. Greenough. Boston. Ginn & Co. 1900. Latin text available at the Perseus Digital Library.
Strabo, The Geography of Strabo. Edition by H.L. Jones. Cambridge, Mass.: Harvard University Press; London: William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library.
Strabo, Geographica edited by A. Meineke. Leipzig: Teubner. 1877. Greek text available at the Perseus Digital Library.
Trojans
Characters in Greek mythology |
15175697 | https://en.wikipedia.org/wiki/Ma-ubin | Ma-ubin | Maubin ( ) is a town in the Ayeyarwady Division of south-west Myanmar. It is the seat of the Maubin Township in the Maubin District. The population as of 2021 was 51,542. The inhabitants of the town, as well as the district are mainly Bamar and Karen.
During Cyclone Nargis which devastated the Irrawaddy Delta, the Burmese military offered convoys to refugees to Ma-ubin to escape the devastation in the worst-hit areas.
Geography and economy
Rice growing and fishing are the major contributors to the economy. It is a developing town with growing transportation and communication services.
The town is linked with Yangon, 40 miles (65 km) east, by the Twante Canal which heads east. The canal opened in 1932 and improved the transporting of goods back and forth from the former capital, then known as Rangoon.
Landmarks
Pagodas
The main religion is Buddhism and there are many pagodas within Maubin township:
Sein Mya Kantha Zeti pagoda is located on the Sane Mya Kanthar Street just north of the town.
Paw Taw Mu Pagoda, (formally Myo Oo Paw Taw Mu Ceti) is an ancient pagoda situated in the southern part of the town on the Toe River. The old pagoda fell in 2002 following river erosion of the bank but it was rebuilt on 22 May 2005 under government guidance.
Shwe Phone Myint Ceti- this is located in the Pagoda Street in the 2nd quarter of the town. The foundation stone of the Ceti was laid in 1890.
Other notable pagodas include Shwephonemyint pagoda and Akyawsulyanmyattonetan pagoda.
Gen. Maha Bandula’s cemetery and monument statue is situated in Maubin District, but is located in Danuphyu Township. Danuphyu Fort was a prominent location of the Anglo-Burmese War and was destroyed by a flood.
Bridges
There are four bridges in Maubin district, including Maubin Bridge, Khattiya Bridge, Pantanaw Bridge and Bo Myat Htun Bridge. Maubin Bridge is a reinforced concrete bridge with a capacity of 60 tons. It is located in Maubin Township and the foundation stone was laid on 4 April 1994. The bridge formally opened on 10 February 1998.
Climate
Education
The town has several universities including University of Computer Studies (Maubin), Technological University, Maubin and Ma-ubin University. University of Computer Studies (Maubin) offers degrees in Bachelor of Computer Science, Bachelor of Computer Technology and post-graduate degrees of Bachelor of Computer Science, Bachelor of Computer Technology, Master of Computer Science, Master of Computer Technology, Master of Information Science and a Diploma in Computer Science.
Ma-ubin University opened in July 2003.
Dhamma Manorama, meaning ‘Delightful Environment of Dhamma’, is situated a mile away on the main road to Maubin University. It offers 10-day courses in learning and, as of May 2005, eight courses had been held in which 349 students participated. Many of the students come from Maubin's No. 1 State High School. The centre was established on 28 March 2004, on some seven acres of land.
The American Baptist Missionary Union had been active in the area at the turn of the 20th century and established a number of churches.
Notable people
Ba Maw (1893–1977) - Burmese political leader
References
External links
Images of pagodas in Ma-ubin and overview
Populated places in Ayeyarwady Region
Township capitals of Myanmar |