Robert Hannigan
https://en.wikipedia.org/wiki/Robert%20Hannigan
Robert Peter Hannigan CMG (born 1965) is a cybersecurity specialist who has been Warden of Wadham College, Oxford, since 2021. He is a former senior British civil servant who served as director of the signals intelligence and cryptography agency, the Government Communications Headquarters (GCHQ), and established the UK's National Cyber Security Centre. His sudden resignation as director was announced on 23 January 2017, and he stepped down at the end of April 2017 to pursue a career in private-sector cyber security, academia and security commentary.
Early and family life
Hannigan was born in Gloucestershire and brought up in Yorkshire. He studied classics at Wadham College, Oxford, and continued his education at Heythrop College, University of London. He is married with a son and a daughter.
Career
Northern Ireland Peace Process
After an early career in the private sector, Hannigan became Deputy Director of Communications for the Northern Ireland Office in 2000, Director of Communications for the Northern Ireland Office in 2001 and Associate Political Director for the Northern Ireland Office in 2004. He served as the Director-General, Political at the Northern Ireland Office from 2005, taking over from Jonathan Phillips.
Hannigan has not spoken of his role in the Northern Ireland peace process, but he is the only British civil servant involved to be singled out in Tony Blair's autobiography, where Blair describes him as "a great young official who had taken over as the main Number 10 person [on Northern Ireland]" and cites him as an example of creativity. Hannigan appears regularly in other accounts, notably by Blair's Chief of Staff Jonathan Powell, attending private crisis meetings with Irish Republican leaders, including Gerry Adams and Martin McGuinness, at Stormont Castle and Clonard Monastery. Powell describes his key role in brokering agreement with Ian Paisley and the Democratic Unionist Party during and after the St Andrews Agreement talks. He is described as chairing the first meeting between the DUP and Sinn Féin and as designing the diamond-shaped table that brought Adams and Paisley together at a public meeting on 26 March 2007, widely regarded as marking the end of the Northern Ireland 'Troubles'.
Number 10 Downing St and Cabinet Office
In 2007, he was appointed to a new post of Prime Minister's Security Adviser in 10 Downing St, as well as replacing Sir Richard Mottram as the Head of Security, Intelligence and Resilience at the Cabinet Office, responsible for co-ordinating between the intelligence services and government, and acting as Accounting Officer for the Single Intelligence Account which funds MI5, MI6 and GCHQ. During his time in office, Hannigan led the review into the loss of the nation's child benefit data, a major data breach incident; the subsequent report is informally called the "Hannigan Report".
Hannigan moved to the Foreign and Commonwealth Office as the Director-General of Defence and Intelligence with effect from 1 March 2010.
He was appointed Companion of the Order of St Michael and St George (CMG) in the 2013 New Year Honours for services to national security. He was made an Honorary Fellow of Wadham College, Oxford, in November 2015. He became a Fellow of the Institution of Engineering and Technology in 2017 and is one of the few non-US citizens known to have been awarded the US National Intelligence Distinguished Public Service Medal. He is a Senior Associate Fellow of the Royal United Services Institute and a Senior Fellow of the Belfer Center for Science and International Affairs at Harvard University.
Director of GCHQ
It was announced in April 2014 that Hannigan would succeed Iain Lobban as the Director of the signals intelligence and cryptography agency, the Government Communications Headquarters (GCHQ), in the autumn of that year. He took over in November 2014, after the 2013 revelations by the National Security Agency whistleblower Edward Snowden had exposed mass surveillance by the agency. As of 2015, Hannigan was paid a salary of between £160,000 and £164,999 by GCHQ, making him one of the 328 most highly paid people in the British public sector at that time.
Dialogue with Silicon Valley
On his first day in the role, Hannigan wrote an article in the Financial Times on the topic of Internet surveillance, stating that "however much [large US technology companies] may dislike it, they have become the command and control networks of choice for terrorists and criminals" and that GCHQ and its sister agencies "cannot tackle these challenges at scale without greater support from the private sector", arguing that most Internet users "would be comfortable with a better and more sustainable relationship between the [intelligence] agencies and the tech companies". Since the 2013 surveillance disclosures, large US technology companies have improved security and become less co-operative with foreign intelligence agencies, including those of the UK, generally requiring a US court order before disclosing data. However, the head of the UK technology industry group TechUK rejected these claims, stating that they understood the issues but that disclosure obligations "must be based upon a clear and transparent legal framework and effective oversight rather than, as suggested, a deal between the industry and government".
Encryption
Hannigan developed this thinking in a speech at MIT in March 2016, in which he appeared to take a more conciliatory line with the tech companies. He highlighted the importance of strong encryption and argued against 'back doors'. He also set out the role of James Ellis and other GCHQ mathematicians in the invention of public key cryptography and published for the first time facsimiles of Ellis' original papers on the possibility of digital and analogue secure non-secret encryption. Interviewed on BBC Radio 4's Today Programme in July 2017, Hannigan argued against further legislation on encryption and said 'back doors' would be a 'bad idea', suggesting instead that governments and companies should work together against those abusing strong encryption by targeting devices and the 'end of the network'.
Terrorist material online
Returning to the debate on terrorist material on the internet after the London Bridge attack in June 2017, Hannigan commented on the polarised stand-off between politicians and tech companies. He noted an improved relationship between the Silicon Valley companies and government since 2014, but called on the big companies to come together to address extremism and to preserve the freedom of the internet from state control. Interviewed alongside his former counterpart Admiral Michael Rogers, Head of the NSA and US Cyber Command, at the 2017 Aspen Security Forum, Hannigan said that since 2014 the companies had accepted responsibility for the content they carried and were making progress on extremist material, pointing to Mark Zuckerberg's comments on the subject.
Cyber security
Hannigan's major external change to the organisation during his tenure was the creation of the National Cyber Security Centre (NCSC) as an operational part of GCHQ. The NCSC's London headquarters was officially opened by Queen Elizabeth II on 14 February 2017. In a speech welcoming the Queen and Prince Philip, Hannigan described the historical line from Bletchley Park to the NCSC and set out the challenge of cyber security at a national level. In a final interview with Financial Times editor Lionel Barber at CyberUK 2017, Hannigan described his thinking in creating the NCSC and his involvement in cyber security over the years, from the creation of the first UK Cyber Security Strategy for Prime Minister Gordon Brown to framing the coalition government's ambition of making the UK "the safest place to live and do business online"; against a "rising tide" of cyber security incidents, he argued, governments could not do this alone but only "with industry". Hannigan has made frequent interventions on cyber security issues. In a speech in November 2015, he said that the usual market mechanisms were failing on cyber security: "The normal drivers of change, from regulation and incentivisation to insurance cover and legal liability, are still immature". He also pointed to a critical cyber skills gap, and has called for a "culture shift" within boardrooms to meet the cyber threat, with less reliance on the "well-meaning generalist". Other Financial Times articles have covered the sophistication of cyber crime groups and the threat from North Korea.
In July 2017 Hannigan blamed Russia for causing a "disproportionate amount of mayhem in cyberspace", identifying state-linked crime as a major problem: "There is an overlap of crime and state, and a deeply corrupt system that allows crime to flourish, but the Russian state could do a lot to stop that and it could certainly rein in its own state activity." Asked at the 2017 Aspen Security Forum what had changed in Russian cyber behaviour, Hannigan referred to the "brazen recklessness" of Russian agencies who scarcely tried to hide their activity. In December 2017 he joined General Lord Houghton in drawing attention to Russian threats to undersea internet cables, endorsing a report by Rishi Sunak MP for the Policy Exchange thinktank. Hannigan was involved in monitoring Russian interference in the 2016 United States elections, including the Democratic National Committee cyber attacks.
Resignation
On 23 January 2017, Hannigan announced that he had decided to resign once a successor to his role as director had been found, explaining in a letter to the Foreign Secretary, Boris Johnson, that his resignation was for personal reasons. This exchange of letters between Hannigan and Johnson revealed that he had "initiated the greatest internal change within GCHQ for thirty years"; no further details were given but the letters refer to a "focus on technology and skills", to make GCHQ "fit for the digital age". He was widely credited with bringing greater transparency to GCHQ, not least through the use of cryptographic puzzles; his Christmas card puzzle in 2015 inspired some 600,000 attempts worldwide to solve it. This led to the publication of The GCHQ Puzzle Book in 2016, with Forewords by the Duchess of Cambridge and Hannigan. It became a Christmas best-seller, and by April 2017 had raised £240k for the Heads Together mental health charities. According to the Guardian, his resignation was sudden and prompted speculation that it might be related to "British concerns over shared intelligence with the US in the wake of Donald Trump becoming president."
In February 2017, Hannigan was appointed to the UK Government's new Defence Innovation Advisory Panel, along with McLaren Chairman Ron Dennis and astronaut Tim Peake. He has written about the shift in technological innovation from government to private sector and West to East, expressing some concern about the tone of the Brexit debate and its impact on the UK academic tech sector.
In December 2021, the Intelligence and Security Committee of Parliament (ISC) reported that it had been misled by the government over the reasons for Hannigan's sudden resignation. Hannigan had in fact resigned because, some years earlier while working in the Foreign & Commonwealth Office, he had used his FCO title to give a character reference for Father Edmund Higgins, a priest who had been found guilty of possessing 174 child pornography images and who later reoffended. The report said that the ISC, which is entrusted with oversight of the intelligence community and with ensuring its probity, must be fully informed in such circumstances rather than discovering the facts much later from a Mail on Sunday report. Hannigan was also heavily criticised in the ISC report for later revealing, on a TV programme, operational information about how the intelligence agencies had discovered the identity of the Islamic State executioner Mohammed Emwazi, commonly known as Jihadi John. The successor Director of GCHQ had written to Hannigan to remind him of his ongoing responsibility to safeguard sensitive information and to seek approval in advance before discussing such matters in the media, but no substantive sanction followed, which the ISC viewed as sending the wrong message to other former intelligence staff.
Later career
He has served as chairman of BlueVoyant, a US-based cyber security services company, and as an adviser to a number of governments and international companies. He has been a paid commentator on security matters in the media and a public speaker.
In May 2021, it was announced that Hannigan was to be the next Warden of Wadham College, Oxford, from summer 2021.
External links
Foresight Review on Cyber Security for the industrial Internet of things
Organising a Government for Cyber: the Creation of the UK's National Cyber Security Centre (RUSI)
Official website
Living people
1965 births
Alumni of Wadham College, Oxford
Wardens of Wadham College, Oxford
Civil servants in the Northern Ireland Office
Civil servants in the Cabinet Office
Civil servants in the Foreign Office
Companions of the Order of St Michael and St George
Directors of the Government Communications Headquarters
Hydra (chess)
https://en.wikipedia.org/wiki/Hydra%20%28chess%29
Hydra was a chess machine, designed by a team including Dr. Christian "Chrilly" Donninger, Dr. Ulf Lorenz, GM Christopher Lutz and Muhammad Nasir Ali. From 2006 the development team consisted only of Donninger and Lutz. Hydra was under the patronage of the PAL Group and Sheikh Tahnoon Bin Zayed Al Nahyan of Abu Dhabi. The goal of the Hydra Project was to dominate the computer chess world and, ultimately, to achieve an accepted victory over humans.
Hydra represented a potentially significant leap in the strength of computer chess. Design team member Lorenz estimates its FIDE equivalent playing strength to be over Elo 3000, and this is in line with its results against Michael Adams and Shredder 8, the former micro-computer chess champion.
Hydra began competing in 2002 and played its last game in June 2006. In June 2009, Christopher Lutz stated that "unfortunately the Hydra project is discontinued." The sponsors decided to end the project.
Architecture
The Hydra team originally planned to have Hydra appear in four versions: Orthus, Chimera, Scylla and then the final Hydra version – the strongest of them all. The original version of Hydra evolved from an earlier design called Brutus and works in a similar fashion to Deep Blue, utilising large numbers of purpose-designed chips (in this case implemented as a field-programmable gate array or FPGA). In Hydra, there are multiple computers, each with its own FPGA acting as a chess coprocessor. These co-processors enabled Hydra to search enormous numbers of positions per second, making each processor more than ten times faster than an unaided computer.
Hydra ran on a cluster of 32 Intel Xeon nodes, each fitted with a Xilinx FPGA accelerator card, with a total of 64 gigabytes of RAM. It evaluated about 150,000,000 chess positions per second, roughly the same as the 1997 Deep Blue which defeated Garry Kasparov, but with several times more overall computing power. Whilst FPGAs generally have a lower performance level than ASIC chips, modern FPGAs run about as fast as the older ASICs used for Deep Blue. The engine searched on average to a depth of about 18 ply (nine moves by each player), whereas Deep Blue only searched to about 12 ply on average. Hydra's search used alpha-beta pruning as well as null-move heuristics.
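The alpha-beta pruning and null-move heuristic mentioned above can be illustrated with a minimal, generic search sketch. This is not Hydra's actual code, and the board interface it assumes (evaluate, legal_moves, make_move/undo_move, make_null_move/undo_null_move, in_check) is a hypothetical placeholder for a real engine's move generator and evaluator.

```python
# Minimal sketch of alpha-beta search with a null-move heuristic.
# Illustrative only: the board interface below is a hypothetical stand-in
# for a real engine's move generation and evaluation, not Hydra's code.

INF = 10**9
NULL_MOVE_REDUCTION = 2   # search the null-move refutation at reduced depth


def alphabeta(board, depth, alpha, beta):
    if depth == 0:
        return evaluate(board)            # static evaluation at the horizon

    # Null-move heuristic: give the opponent a free move. If the position is
    # still at least beta after that reduced-depth search, prune this branch.
    if depth > NULL_MOVE_REDUCTION and not board.in_check():
        make_null_move(board)
        score = -alphabeta(board, depth - 1 - NULL_MOVE_REDUCTION,
                           -beta, -beta + 1)
        undo_null_move(board)
        if score >= beta:
            return beta                   # fail-high cutoff

    best = -INF
    for move in legal_moves(board):
        make_move(board, move)
        score = -alphabeta(board, depth - 1, -beta, -alpha)
        undo_move(board, move)
        if score >= beta:
            return score                  # beta cutoff (alpha-beta pruning)
        if score > best:
            best = score
            alpha = max(alpha, score)
    return best
```

The null-move idea is that if a position remains winning even after passing the turn to the opponent and searching at reduced depth, the full search of that branch can usually be skipped safely.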
The Hydra computer was physically located in Abu Dhabi, in the United Arab Emirates, and was usually operated over a high speed optical fiber based network link.
Tournaments and matches
In July 2002, Brutus finished third in the World Computer Chess Championship in Maastricht, the Netherlands. It won six games, drew two games, and lost one, giving it a score of 7 points out of 9. The loss, against Deep Junior, included a rook sacrifice for very long term compensation, which the additional computing power of Brutus could not help it to understand.
In November 2003, Brutus finished fourth in the World Computer Chess Championship in Graz, Austria. It won eight games, lost two games, and drew one, giving it a score of 8½ out of 11. This disappointing result left the team looking for a new sponsor, which they found in the form of the PAL Group.
In February 2004, Hydra won the 13th IPCCC (International Paderborn Computer Chess Championship) tournament. Hydra scored 6½ out of 7, ahead of Fritz and Shredder.
In April 2004, Hydra finished second in the International CSVN Tournament in Leiden, the Netherlands. It won five games, lost one game, and drew three, leaving it with 6½ points out of 9, 1½ points behind winner Shredder. A loss out of the opening led to the hiring of GM Christopher Lutz, who made a new opening book.
In August 2004, at the 14th Abu Dhabi International Chess Festival, Hydra played an eight-game match against the computer program Shredder 8, a multiple-time world computer chess champion. Running on "just" 16 nodes Hydra defeated Shredder 5½ to 2½, winning three games and drawing the rest. In an informal match at the same tournament, Hydra took on International Grandmaster Evgeny Vladimirov of Kazakhstan, and defeated him by a score of 3½ to ½.
In October 2004, in a man vs. machine contest, Hydra defeated former FIDE world champion Ruslan Ponomariov in both of their games. Ponomariov had an Elo rating of 2710 at the time of the match.
In February 2005, Hydra won the 14th IPCCC (International Paderborn Computer Chess Championships) tournament. Hydra scored 8 points out of 9 (seven wins and two draws), defeating chess program Shredder again in the process.
Due to human handler errors and program errors, Hydra did not fare well in the June 2005 PAL/CSS Freestyle Chess Tournament, an online tournament where players are allowed to use any and all resources available to them, including computer engines, databases and human grandmasters. Two versions of Hydra participated in the tournament: Hydra Chimera (without human intervention) scored 3½/8, and Hydra Scylla (with human intervention) scored 4/8. Neither version of Hydra qualified for the quarter-finals.
From June 21 to June 27, 2005, Hydra played a six-game match against Michael Adams, the top British player and ranked seventh in the world. The prize fund was $145,000, paid out on a per game basis: a win netting $25,000, a draw $10,000 to both players. Hydra defeated Adams by a score of 5½ to ½; Adams lost each game except for game 2 which he drew. This version of Hydra was running on half power; only 32 out of 64 nodes were utilized. Adams played against the Scylla version of Hydra.
In November 2005, Hydra played 4 games: it beat Rustam Kasimdzhanov, drew with Alexander Khalifman, beat Ruslan Ponomariov and finally drew with Rustam Kasimdzhanov.
In the April 2006 PAL/CSS Freestyle Chess Tournament Hydra finished first with a score of 5½/7, a full point ahead of the field. This tournament allows for any human or computer aid including teams. All 64 of Hydra's nodes were utilized.
In the June 2006 PAL/CSS Freestyle Chess Main Tournament Hydra finished tied for fifth-sixteenth.
Hydra was not defeated by an unaided human player in over-the-board play. Hydra has, however, been beaten by humans who had access to other programs during their games; for example, correspondence chess International Grandmaster Arno Nickel beat an older version of Hydra in a two-game correspondence match lasting six months. The 32-node version that played against Adams managed to draw Nickel in their third game, which lasted five months and ended in December 2005.
External links
game 3 against Arno Nickel (07/11/2005)
Play through the games of the Adams vs Hydra 2005 Match
Beginning of New Yorker article on Hydra: "Your Move: Chrilly Donninger's Hydra, computer chess program", by Tom Mueller, The New Yorker, December 12, 2005
C. Donninger, U. Lorenz. The Chess Monster Hydra. Proc. of 14th International Conference on Field-Programmable Logic and Applications (FPL), 2004, Antwerp, Belgium, LNCS 3203, pp. 927–932
C. Donninger, A. Kure, U. Lorenz. Parallel Brutus: The First Distributed, FPGA Accelerated Chess Program. IPDPS 2004
C. Donninger, U. Lorenz. Innovative Opening-Book Handling. ACG 2006: 1-10
W. Ertel. Introduction to Artificial Intelligence, Second Edition, Springer, pp. 120f
Chess computers
One-of-a-kind computers
BTRON
https://en.wikipedia.org/wiki/BTRON
BTRON (Business TRON) is one of the subprojects of the TRON Project proposed by Ken Sakamura, covering its business-oriented phase. The name refers to the operating systems (OS), keyboards, peripheral interface specifications, and other items related to personal computers (PCs) that were developed under the subproject.
Originally, the term refers to specifications rather than specific products, but in practice "BTRON" is often used to refer to implementations. Currently, Personal Media Corporation's B-right/V is an implementation of BTRON3, and a software product called "Cho-Kanji" that includes it has been released.
Specifications
As with other TRON systems, only the specification of BTRON has been formulated; the implementation method is not specified. Implementation is mentioned in this section only to the extent necessary to explain the specification; see the Implementation section for details.
BTRON1, BTRON2, BTRON3
The BTRON project began with Matsushita Electric Industrial and Personal Media prototyping "BTRON286", an implementation on the 16-bit 286 CPU for the CEC machine described below. BTRON1 specification documents include the BTRON1 Programming Standard Handbook, which describes the OS API, and the BTRON1 Specification Software Specification.
For BTRON2, only the specification has been created and published. It was planned to be implemented, under the name "2B", on evaluation machines equipped with TRON chips made by Fujitsu. One of its features is that all OS-managed computing resources, such as memory, processes, and threads, are handled in the real body/pseudo body model, a characteristic feature of BTRON.
SIGBTRON's TRON chip machine MCUBE implemented "3B", a 32-bit system that uses an ITRON-specification RTOS (modified from "ItIs") as its microkernel. The specification used by 3B and the B-right family is "BTRON3" (the microkernel is currently I-right); the specification that B-right/V conforms to is published as the BTRON3 specification.
μBTRON
μBTRON is a BTRON subset that was envisioned as a consumer version. Given the performance of computer hardware at the time of its conception, a computer capable of implementing the ideal BTRON would have been workstation-class, so μBTRON was also positioned as BTRON for general households.
It is a subset intended for dedicated machines with fixed applications (like a dedicated word processor) and is based on the concept of a "dedicated communication machine". Specific applications include "communication with oneself (creative activities)", "communication with others (Internet communication)", and "communication with machines" (e.g. data exchange with peripheral devices such as digital cameras).
Linkage with the last category of peripheral devices was envisioned as the key to adding functions to a dedicated machine to which no additional programs could be added. Such a peripheral device was called "electronic stationery". For communication with these peripherals, a prototype of the real-time "μBTRON bus" (see below) was developed. A BTRON-equipped PDA realized later was given the name "electronic stationery" and was also called μBTRON, but it did not become a dedicated machine and did not implement the μBTRON bus.
Hardware
TRON keyboard
Although it does not bear the BTRON moniker, the TRON keyboard is intended for use with BTRON.
μBTRON bus
The μBTRON bus is a real-time bus specification for LANs, based on IEEE 802.5 with modifications, that can be considered an alternative to MIDI.
Features
TAD
For data handled by BTRON, a basic exchange format called TAD (TRON Application Databus) has been defined, so that basic data can be freely exchanged between arbitrary applications. Standards have been established for text (including word-processing annotations) and graphics (both raster and vector); for other data, a common header at the beginning of each chunk indicates its length, so that applications can skip unsupported data when they encounter it. Data chunks are referred to as segments in the BTRON specification.
In addition, since the TRON chip was big-endian, TAD was also designed in big-endian, but when BTRON286 was implemented, a quasi-TAD that was modified to little-endian was defined, and the current widely used implementations are all in that format, including the one on MCUBE that uses the TRON chip.
Image (raster) data is defined so as to adapt to any hardware scheme, including palette and direct colour specification, packed pixels and planes, and so on; all of these are stored raw and uncompressed. For compression, only the MH (Modified Huffman, one-dimensional facsimile) encoding for black-and-white images was defined. There are working implementations of moving images, but the published TAD specification does not describe any time axis. In actual applications, some store their own data as TAD segments in accordance with TAD policy, while others read and write records directly instead of using TAD.
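As a concrete sketch of the skip-by-length behaviour described above, the loop below walks a TAD-like stream of segments and ignores any segment type it does not recognize. The header layout used here (a 16-bit type ID followed by a 32-bit little-endian length) is an assumption for illustration only and does not reproduce the published TAD format.

```python
import struct

# Hypothetical reader for a TAD-like stream of segments. Each segment is
# assumed to begin with a 16-bit type ID and a 32-bit little-endian length
# (illustrative assumption, not the real TAD layout), followed by `length`
# bytes of payload. Unknown segment types are skipped via the length field,
# which is the property the text above describes.

KNOWN_HANDLERS = {
    0x0001: lambda payload: print("text segment,", len(payload), "bytes"),
    0x0002: lambda payload: print("figure segment,", len(payload), "bytes"),
}


def read_segments(data: bytes):
    offset = 0
    while offset + 6 <= len(data):
        seg_type, length = struct.unpack_from("<HI", data, offset)
        offset += 6
        payload = data[offset:offset + length]
        offset += length
        handler = KNOWN_HANDLERS.get(seg_type)
        if handler is not None:
            handler(payload)
        # else: unsupported segment -- the length header lets us skip it safely
```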
Real body/pseudo body
As its file management model, BTRON adopts a network-type model with an arbitrary directed graph structure, called the real body/pseudo body model, instead of the conventional tree-structured model with directories (folders). BTRON2 also manages all computer resources in the real/pseudo model. In terms of the functions provided to users, the real/pseudo model of BTRON (in BTRON1 and BTRON3, the widely used implementations) is a convenient hypertext environment.
In the past, files and folders were used to distinguish between entities that contain data and indexes that point to them, but BTRON has done away with that distinction. In the real body/pseudo body model, the entire body of data is defined as a real object, and the part of a real object that points to another real object is defined as a virtual object.
A real object is like a file in the sense that it holds data, but it is also like a folder in the sense that it can point to other real objects through the virtual objects it contains as its contents.
While almost all current Unix systems do not allow the creation of hard links that point to directories, BTRON's real/virtual links are like hard links without that restriction: arbitrary links, including such directory-like links, can be created freely. In Unix, each directory has only a single parent link (indicated by ".."), which constrains the structure to a tree, but this is not a problem in BTRON, which has abandoned tree-like management in the first place.
Detecting that a real object is no longer referenced after a link is deleted is done by reference counting, as in the Unix file system. However, since arbitrary structures are allowed, the well-known weakness of reference counting applies: loops can leave real objects occupying disk space even though they can no longer be reached from anywhere. Currently, a function is implemented to check for such objects by taking the file system offline or (in the case of system disks) booting the system into a special state and performing an fsck-like check, essentially a stop-the-world form of garbage collection.
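The reference-counting weakness described above can be shown with a small hypothetical model of real objects and links; this is an illustrative sketch, not BTRON's file system code.

```python
# Tiny illustrative model of reference-counted "real objects" connected by
# links ("virtual objects"). Hypothetical sketch, not BTRON's actual code.

class RealObject:
    def __init__(self, name):
        self.name = name
        self.refcount = 0
        self.links = []              # real objects this one points to

    def link_to(self, other):
        self.links.append(other)
        other.refcount += 1

    def unlink(self, other):
        self.links.remove(other)
        other.refcount -= 1


root = RealObject("desktop")         # stands in for something always reachable
a, b = RealObject("A"), RealObject("B")
root.link_to(a)                      # A reachable from the desktop
a.link_to(b)
b.link_to(a)                         # A and B now form a loop

root.unlink(a)                       # drop the only outside reference
print(a.refcount, b.refcount)        # -> 1 1: neither count reaches zero,
                                     # yet A and B are unreachable; only a
                                     # reachability scan (the fsck-like check
                                     # described above) can reclaim them
```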
Also, unlike most previous file systems, the name ("real name") is basically not used for identification (as an ID) on the system side, so the user can give a name freely (currently there is a length limit for implementation reasons).
Currently, BTMemo for Windows is available as software that reproduces the feeling of using BTRON.
The existing implementations of BTRON1 and BTRON3 realize the above functions on a file system with multi-record support: one file corresponds to one real object and has a record containing the data body, plus "link records" that point to the files corresponding to the real objects referenced by the virtual objects it contains. Because of this design, the TAD data of the real object itself is not affected by the way links are represented in the underlying system.
Problems
One of the current problems is that, due to the file system implementation inherited from BTRON286, a maximum of only 64Ki (65,536) files can be placed on a single volume (such as a hard disk partition), which limits the number of real objects accordingly. The current release of Cho-Kanji V has the same limitation.
This is because the file ID is a fixed-length 16-bit integer, and it is difficult to extend it while maintaining binary compatibility with the current system. A redesign and reimplementation would therefore be needed to lift this constraint on the number of real objects.
Real-time operation
The BTRON-specification OS is a real-time OS, capable of stably processing tasks that require real-time processing, such as video and audio. BTRON3 uses ITRON as a microkernel, and although care must be taken to avoid memory page-out, real-time processing is possible.
In Windows, graphics card drivers are tuned by the manufacturer, whereas BTRON has no such graphics acceleration, so screen redrawing can feel slow. BTRON has a short boot time, though this stems largely from its limited number of daemons and supported devices, which makes it less of an advantage over other operating systems than it might appear.
HMI
The TRON project as a whole, not only BTRON, has standardized the human-machine interface and published it as the TRON Human Interface Standard Handbook; BTRON's user interface is designed in accordance with it.
Direct Operation
In BTRON, essentially everything on the screen can be operated with a pointing device such as a mouse or electronic pen. To make operation easy even for users unfamiliar with computers, not only can window sizes be changed directly, but selected text and graphics can be moved or copied by direct drag and drop (commonly called "grab-and-poi").
Applications are launched essentially by double-clicking (or otherwise opening) a virtual object that represents the real object of a document. This corresponds to selecting an application based on metadata such as file type and creator in Mac OS, or extension association in Windows, but in BTRON there is basically no other way to start an application.
In other operating systems, for example, writing a new text usually starts by launching an application without specifying an editing target; in BTRON, a template real object is registered along with the application, and work starts by duplicating that real object. More concretely, the user opens a special window called "Gather Source Papers" and drags a virtual object called "Manuscript Paper" (for text editing) from it into the desired window. (Normally this operation would move the virtual object, but due to the special behaviour of the "Gather Source Papers" application, the real object of the new document is duplicated and a virtual object pointing to it is created.)
Operations common in other operating systems, such as starting an application first and then creating a new document or loading an existing one, are not possible in BTRON. Some people find this easy to understand because it matches real-world actions such as preparing a fresh sheet of paper or an existing document before writing, while others find it confusing precisely because it differs from other OSs. The style of first selecting the object of an operation and then specifying the operation is similar to the Xerox Star or Smalltalk systems and is more object-oriented.
Enableware
The TRON project has also focused on universal design, which it calls "enableware", since its beginning. The BTRON-specification OS allows users with various disabilities to freely change the typeface and size of menu items and real object names, the size of the mouse pointer, the size and display method of virtual objects, and the width of window scroll bars. The design of the mouse pointer, for example, is not to be changed except in special cases, so as to maintain a consistent feel. To avoid confusion even when using an application for the first time, the order of menus has been standardized. Design guidelines that take into account the use of multiple languages have also been established. These functions are implemented in the commercial TRON-specification OSs as well.
TRON code
BTRON (like OS/omicron and others) was designed on the premise of representing characters in 16 bits. The character code for TAD, the data format, is TRON code, which switches between multiple 16-bit planes by means of 0xFE** escapes, allowing the space to be expanded arbitrarily. The current implementation uses 32 bits as the internal code.
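As a rough sketch of the plane-switching idea, the decoder below treats a 16-bit unit of the form 0xFExx as "switch to plane xx" and every other unit as a code point in the current plane. The precise escape semantics of TRON code are more involved; this is an illustrative assumption, not the specification.

```python
# Hypothetical decoder for a stream of 16-bit units that uses 0xFExx escapes
# to switch planes, sketching the scheme described above. Illustrative only;
# real TRON code escape handling differs in detail.

def decode_plane_stream(units):
    """Yield (plane, code_point) pairs from an iterable of 16-bit integers."""
    plane = 1                        # assume some default starting plane
    for u in units:
        if (u & 0xFF00) == 0xFE00:   # escape unit: low byte selects the plane
            plane = u & 0x00FF
        else:
            yield plane, u


# Example: two code points in the default plane, then a switch to plane 2.
stream = [0x3042, 0x3044, 0xFE02, 0x5B57]
print(list(decode_plane_stream(stream)))
# -> [(1, 12354), (1, 12356), (2, 23383)]  (code points shown in decimal)
```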
Implementation
Implementations are shown in order from oldest to newest. First, Matsushita Electric Industrial and Personal Media produced a prototype of BTRON286, an implementation on the 16-bit 286 CPU, targeting the CEC machine described below. The specification for this is BTRON1 (also known as BTRON/286 in the early days). Implementations based on it include "ET Master" in Matsushita Communication Industry's "PanaCAL ET", "1B/Note" in the Panacom M series sold by Personal Media, and "1B" in the "1B/V" series for PC/AT compatible machines.
For BTRON2, a specification was prepared, and its implementation on GENESYS, an evaluation machine equipped with Fujitsu's TRON chip F32/300, was planned; the implementation, by Personal Media, was to be named "2B".
MCUBE, a TRON chip machine produced by SIGBTRON, implemented "3B", which was 32-bit and used an ITRON-specification OS (ItIs) as its microkernel. Implementations based on 3B include the B-right family, such as a version for the V810 and B-right/V for PC/AT compatible machines. The specification that B-right/V conforms to is published as the BTRON3 specification. The product that includes B-right/V, Cho-Kanji, is sold to the general public and is easy to obtain.
Although it is not called BTRON, T-Shell, a middleware for T-Kernel, provides the same functions as some of the outer shell of Cho-Kanji.
History
Unless otherwise noted, the descriptions in this section are from the "History" page of the TRON Association website, as referenced in the web archive.
Start of the BTRON sub-project
The earliest records include the reference to "B-TRON" in "TRON Project" in the Proceedings of the International Conference on Microcomputer Applications '84 (1984), in "TRON Total Architecture", and in "Proposal of a Unified Operation Model for BTRON" in Information Processing Vol. 26 No. 11 (November 1985, 25th Anniversary Special Issue).
In 1986, the BTRON Project was fully launched as a sub-project of the TRON Project, and the BTRON Technical Committee was established in the TRON Association. The initial concept was summarized in an article in IEEE Micro's special issue on TRON (Vol. 7, No. 2 (1987/Apr)), and can be read in "TRON Introduction," a translation of the same issue.
The June 1988 issue of TRON Review contains several screen shots and a report on BTRON286 by Matsushita Electric Industrial Co.
In December 1988, the TRON Association released an outline of the "BTRON/286" specification, and the following year, in March 1989, Matsushita Electric completed a practical-level machine based on the "Educational PC Specification Standard Concept", intended for educational use as described in the next section.
Introduction plan for educational PCs
Based on the "Second Report on Educational Reform" issued by the Rinkyoin Council in April 1986, the Computer Education Development Center (CEC) was established in July of the same year under the joint jurisdiction of the Ministry of Education and the Ministry of International Trade and Industry. The CEC established the Educational Software Library in April 1987 and held a symposium in July 1987. In August of the same year, CEC called for prototypes of educational personal computers under the title "CEC Concept Model '87", and the prototypes gathered in response were exhibited to the public in July of the following year, demonstrating an intention to set the direction of educational personal computers.
In March 1989, CEC made the decision to use TRON for personal computers to be installed in schools. This seemed to be a stepping stone to the spread of (B)TRON, but it became the subject of a complaint in the trade issue described in the next section.
The keyboard and key layout of the "educational PC" presented at this time was not based on the TRON keyboard, but on the New JIS layout.
Using a product that does not have a large market share for educational purposes is not unheard of; the Acorn computers in the UK are an example.
Trade Issues
In 1989, a report by the Office of the United States Trade Representative cited TRON as a trade barrier in Japan.
The details are as follows. In the "1989 National Trade Estimate Report on Foreign Trade Barriers", issued by the USTR on April 12, 1989, TRON was listed in Section 7, "Other Barriers", of the chapter on Japan's trade barriers, alongside items such as the Large-scale Retail Stores Act. The naming of a specific system was unusual compared to the other entries, which basically list entire fields.
The report points out that although several US companies are members of the TRON Association, no US company is in a position to sell TRON-based PCs or communication devices, and that the Japanese government's support for TRON could give Japanese manufacturers an advantage, something the report said was already happening in the education sector (referring to the aforementioned CEC) and the communications sector (referring to NTT's adoption of CTRON).
It further points out that in the education field US operating systems (specifically MS-DOS, OS/2 and UNIX) were being excluded from a huge new market, and that in the long run TRON could affect the entire Japanese electronics market. The last paragraph of the report states that the U.S. had already informed Japan of its interest in these matters on September 9, 1988 (and afterwards), that negotiations were under way in March 1989 to obtain detailed specifications for NTT's requirements, and that further information on TRON was being sought through the Japanese government.
The TRON Association protested in writing to the USTR representative in May, and TRON was removed. However, in June of the same year, the mass media reported the "abandonment of BTRON adoption for educational PCs"; for example, Nikkei Computer reported "BTRON-based educational PCs: standardization virtually impossible". Although there were twists and turns, in the end the introduction of BTRON proposed by CEC was not implemented, and what was introduced to school education was MS-DOS machines, including the PC-9801. This led to a period of stagnation for the TRON project, especially the BTRON subproject, with some labeling it a "failure".
The background to this uproar is that NEC held the majority share of the Japanese PC market, and at the time of the CEC selection in 1988 it was opposed by a coalition of all the other companies, led by Matsushita and also including IBM. For example, note 25 of "TRON Today" in Shozaburo Nakamura's Dennou Mandara (1995 edition) says it is well known that the anti-TRON forces at the time were NEC and Microsoft, which had already established their positions. In addition to the long-standing Japan-US trade friction, there was the Japan-US high-tech friction of the 1980s: Japan bashing, the "Japan Inc." theory, the IBM industrial espionage case, and the cover illustration of the September 1983 issue of CACM were still fresh in people's minds, while TRON itself was still in its infancy. IEEE Micro had run a special issue on TRON (April 1987), and such factors would have made TRON stand out in the US.
According to Eiji Oshita's biography "Masayoshi Son: The Young Lion of Entrepreneurship", Son, who had long worked to create an industry structure that would make his own business of software distribution profitable, argued that TRON would cause Japanese industry to fall behind global standards and be left behind by the rest of the world. Together with Yuji Tanahashi (then Director-General of the Machinery and Information Industries Bureau) and Ryozo Hayashi (then Director of the Information Processing Promotion Division), to whom he was introduced by Akio Morita, Son set out to "lay the rails for the destruction of TRON", and the Ministry of International Trade and Industry (MITI) stopped the introduction of TRON into schools. The book's headline reads "Stopping the spread of TRON at the water's edge".
The TRON project made an unusual display of opinion on this matter. In the 60th issue of TRONWARE magazine, edited and published by Personal Media Corporation (December 1999), p. 71, an article titled "People who blocked the TRON project", signed by the editorial staff, described the aforementioned biography as a book in which Son "boasts of his achievements" in blocking TRON. The article noted Son's opposition to MSX and his apparent support for UNIX. Sakamura testified that at a conference attended by Son, Kazuhiko Nishi, and himself, which was supposed to be a conference for Unix engineers, he and Nishi talked about technology while Son talked about business and seemed out of place. Overall, the article sums up the situation with the question "Is it right to destroy the seeds of original technology?"
In December 1988, Softbank (then Softbank Japan Corp.) Publishing Department (now Softbank Creative) published a book titled "The Tron Revolution" (ISBN 4-89052-037-6).
PanaCAL ET
In preparation for the inclusion of "information" in the optional content of junior high school technology courses from 1993, the Ministry of Education started the "Educational Computer Assistance Program" in 1990.
In conjunction with this, Matsushita Communication Industry released the PanaCAL ET, an educational personal computer equipped with the BTRON286-based OS "ET Master", as a BTRON machine. The hardware was based on the Panacom M, with an enhanced 24-dot font ROM and other features for educational use.
However, most of the machines introduced to schools at this time were PC-9800 series machines, probably because they could inherit the BASIC programs and word-processor data that enterprising teachers had already created in large numbers.
90s onwards
December 1989 BTRON1 software specification released.
December 1989 Development of a reservation system using BTRON by Japan Airlines
1990 Matsushita Communication Industry releases the PanaCAL ET equipped with the BTRON1-specification (BTRON286-based) "ET Master"
1991 Release of 1B/Note
1994 Release of 1B/V1, a general-purpose PC/AT compatible machine
1995 Release of 1B/V2
1996 Release of
1996 Release of 1B/V3
1998 Release of B-right/V
1999 Release of Cho-Kanji (B-right/V R2)
2000 Release of Cho-Kanji 2 (B-right/V R2.5)
2000 Release of GT typeface
2001 Release of Cho-Kanji 3 (B-right/V R3)
2001 T-Engine and T-Kernel released
2001 Release of Cho-Kanji 4 (B-right/V R4)
2006 Release of Cho-Kanji V (B-right/V R4.5)
External links
The TRON Association
Cho-Kanji Website
Personal Media Corporation
BTRON Club
Sakamura and Koshizuka Laboratory
Kaoru Misaki (Hatena Keyword)
Nortia Order (Web Archive)
Unofficial TAD Guide Book (complete explanation of all TAD segments) (Web Archive)
Open Gallery: 1B/V3 Environment
External sources
Linux Insider, "The Most Popular Operating System in the World", October 15, 2003. Retrieved July 13, 2006.
BTRON Introduction
TRON project
History of software
Window-based operating systems
X86 operating systems
1984 software
Computer-related introductions in 1984
Computing platforms
Operating system families
William Fetter
https://en.wikipedia.org/wiki/William%20Fetter
William Fetter, also known as William Alan Fetter or Bill Fetter (March 14, 1928 – June 23, 2002), was an American graphic designer and pioneer in the field of computer graphics. He explored the perspective fundamentals of computer animation of a human figure from 1960 on and was the first to create a human figure as a 3D model. The First Man was a pilot in a short 1964 computer animation, also known as Boeing Man and now as Boeman at the Boeing company; Fetter preferred the term "Human Figure" for the pilot. In 1960, working in a team supervised by Verne Hudson, he helped coin the term "computer graphics". He was art director at the Boeing Company in Wichita.
Life
Born in Independence, Missouri, Fetter attended school in Englewood and graduated in 1945 from Northeast High School in Kansas City. He studied at the University of Illinois, where he was awarded a BA in graphic design in 1952. His professional career started at the University of Illinois Press (UIP), an American university press, where he was employed from 1952 to 1954; even at this early date he thought of using computers as a tool for his work as a graphic designer.
In 1954, he became art director for Family Weekly magazine in Chicago. In his article "Computer Graphics at Boeing" for Print magazine he wrote that he was interested in developing a computer program that could simplify the designing of the magazine in the closing stages. Together with a computer manager, he worked on the development of a program but before the project was completed, Fetter accepted employment as art director of Boeing in Wichita in 1959.
Computer Graphics
Morphology
"In 1960, 'we' at Boeing coined the term computer graphics", wrote Fetter in a 1966 issue of Print magazine. In the article he wrote about the team involved. Over time, Fetter received universal credit as the first person to use the term "computer graphics". He later recognized the need to unequivocally make clear that Verne L. Hudson, his superior in the development team, used the words first. Boeing also notes that Verne L. Hudson was the first to coin the term.
In a 1966 editorial in the special "The designer and the computer" issue of Print magazine, editor Martin Fox explained the semantic difference, the meaning and interpretation of the words "graphics" and "design", as used by traditional graphic designers and designers, in contrast to how they were used by the new generation of computer graphic designers.
Computer Graphics
From the start of the 1950s, successful developments were underway in controlling machines with computers for industrial production. Subsequent development of computer aided design programs for 2D and 3D production drawings began in the mid-50s. In 1959, Fetter was recruited by Boeing as art director of the CAD department to explore creative new ideas for the production of 3D drawings.
He created a new concept of drawing perspectives. Supported by Walter Bernhardt, assistant professor of Applied Mechanics at Wichita State University, Kansas, his ideas were successfully implemented as mathematical formulae, which programmers subsequently entered into the computer. Fetter was the team leader (supervisor) of this group. Due to the success of the first experiments, a Boeing research program was launched in November 1960 with Fetter as manager. The result of the research was filed as a "Planar Illustration Method and Apparatus" US patent application in November 1961; the patent was granted in 1970 with the number 3,519,997. The January 1965 issue of Architectural Record magazine described how Fetter had worked as a graphic designer in a team of engineers and programmers to create computer graphics.
In 1963, the research department relocated from Wichita to Seattle, where Fetter became the manager of Boeing's newly founded Computer Graphics Group.
Human Figure
Fetter became well known for the creation of the first human figure in a series of computer graphics of an aeroplane pilot. In his Print magazine article he described the development of computer graphics and the human figure at Boeing, and also mentioned the need for a team of good employees for this type of project. The initial goal of the Computer Graphics Group was to use the pilot as an animation in films. The work began in 1964, and from 1966 the Human Figure was presented at conferences and lectures by Fetter; in the lectures, the 1966 film SST Cockpit Visibility Simulation was shown.
The first human figure he animated by computer for a film, however, was the Landing Signal Officer on the CVA-59 aircraft carrier. The figure was shown in a short CVA-59 film, but only as a silhouette, without the detailed elaboration of the First Man. Fetter published this in November 1964 in his book Computer Graphics in Communication, in the section "Aircraft Carrier Landing Depiction", with images.
The Portland E.A.T. Group
In 1965 Fetter was invited to a meeting at Bell Labs in Murray Hill, New Jersey, where he was the only participant with an education in graphic design and art. Participants at the meeting included Ken Knowlton and Ed Zajac of Bell Laboratories and others who conducted research on the development of computer films. Through his travel to Bell Labs and New York City, he learned of the Experiments in Art and Technology (E.A.T.) movement and became an active member of the group. His contacts with E.A.T. inspired him in 1968 to help found the Pacific Northwest chapter of the movement. At the founding event Fetter and Hans Graf showed the film Sorcerer's Apprentice.
From 1969
After completing his tenure at Boeing, Fetter was Vice-President of Graphcomp Sciences Corporation in California from 1969 to 1970. He began to teach at Southern Illinois University, Carbondale, in 1970, while continuing his research, and spent two years there as Head of Design. In 1977 he became director of research at the Southern Illinois Research Institute (SIRIUS) in Bellevue.
Through an agreement with the Boeing Company and Computer Graphics, Inc., in 1970 Fetter was permitted to use the source code for the First Man for a 30-second TV spot. For this purpose, additional animation of the lips to move in synchronization with the text was added. This may have been the first use of a simulated human figure on TV.
Fetter died on June 23, 2002, in Bellevue, Washington.
Exhibitions
The landmark exhibitions from August 1968 to August 1969 were staged in London, New York City and Zagreb. During the exhibitions in Zagreb, international scientific symposiums were held, and another exhibition and conference was held in Berlin. The Cybernetic Serendipity exhibition in London has received the most attention in the secondary literature over the years; which of the three is considered the most important today depends on the nationality, education and research background of the observer. There were already critical voices about the 1968 exhibition in London. Gustav Metzger, who attended the Tendencies 4 symposium in Zagreb, wrote in a 1969 critique in Studio International: "At a time when there is a widespread concern about computers, the advertising and presentation of the I.C.A.'s 'Cybernetic Serendipity' exhibition as a 'technological fun-fair' is a perfectly adequate demonstration of the reactionary potential of art and technology."
The Human Figure by Fetter was shown in all these exhibitions as Boeing Man. In the catalog for Cybernetic Serendipity, only the Boeing Computer Graphics organization is credited as the author.
1968: Cybernetic Serendipity: The Computer and the Arts, London, Institute of Contemporary Art.
1968: On the Path to Computer Art, MIT and TU Berlin, Berlin.
1968: Some More Beginnings: An Exhibition of Submitted Works Involving Technical Materials and Processes, E.A.T., New York, Brooklyn Museum.
1969: Tendencija 4, Computers and Visual Research, galerija suvremene umjetnosti, Contemporary Art Gallery, Zagreb.
1969: Computerkunst-On the Eve of Tomorrow, Galerie Kubus, Hanover. Thereafter, in Munich, Hamburg, Oslo, Brussels, Rome and Tokyo.
1989: 25 Jahre Computerkunst – Grafik, Animation und Technik, BMW Pavillon, München.
2007: Ex Machina - Frühe Computergrafik bis 1979: Herbert W. Franke zum 80. Geburtstag, Kunsthalle Bremen, Bremen.
2007: bit international: [Nove] Tendencije - Neue Galerie Graz - Universalmuseum Joanneum, Graz.
2008: Bit International, (Nove) tendencije, 1961 bis 1973, Zagreb, In: ZKM, Medienmuseum, Karlsruhe.
2009: Digital Pioneers, Victoria & Albert Museum, London
2015: Galerija suvremene umjetnosti, Contemporary Art Gallery, Zagreb
2015: Tendenzen 4, Computer und Visuelle Forschung, ZKM, Karlsruhe
Work
Human Figure
Book
Computer Graphics in Communication, New York, Verlag McGraw-Hill, 1964.
Articles
The Art Machine. The Journal of Commercial Art & Design, Vol. 4, No. 2, February 1962, p. 36.
Computer Graphics. In Emerging Concepts in Computer Graphics (1967 University of Illinois Conference), edited by Don Secrest and Jurg Nievergelt. W. A. Benjamin, Inc., 1968, pp. 397–418.
A Computer Graphics Human Figure System Applicable to Biostereometrics. CAD J., Fourth Int'l Conf. and Exhibition on Computers in Engineering and Building Design, IDC Science and Technology Press, Guildford, Surrey, England, 1980, cover and pp. 175–179.
A Computer Graphics Human Figure System Applicable to Kinesiology. ACM Special Interest Group on Design Automation Newsletter, Vol. No. 2 of 3 (late issue), June 1978, pp. 3–7.
A Progression of Human Figures Simulated by Computer Graphics. Proceedings SPIE, Volume 166, NATO Symposium on Applications of Human Biostereometrics, July 9–13, 1978, Paris, France.
Wide Angle Displays for Tactical Situations. Proc. US Army Third Computer Graphics Workshop, Virginia Beach, Va., April 1981, pp. 99–103.
Progression of Human Figures Simulated by Computer Graphics. IEEE Computer Graphics and Applications, Vol. 2, No. 9, 1982, pp. 9–13.
Literature
Herbert W. Franke: Computergraphik Computerkunst. Bruckmann, München 1971, first published.
Herbert W. Franke: Computer Graphics Computer Art. Phaidon Press, London, Phaidon Publishers, New York, 1971. Translation by Gustav Metzger.
External links
William Fetter's Boeing Man.
William Fetter
"Southern Illinois University design department"
John Lansdown, "Not only computing - also art" (Computer Bulletin, March 1980)
1928 births
2002 deaths
Computer graphics professionals
StockX
https://en.wikipedia.org/wiki/StockX
StockX is an online marketplace and clothing reseller, primarily of sneakers. Since November 2020, it has also dealt in electronic products such as game consoles, smartphones and computer hardware. The Detroit-based company was founded by Dan Gilbert, Josh Luber, Greg Schwartz, and Chris Kaufman in 2015–2016. StockX has more than 800 employees in Downtown Detroit. It has international offices in London, UK, and Eindhoven, the Netherlands, and authentication facilities in Detroit's Corktown neighborhood, Moonachie, NJ, and Tempe, AZ. Scott Cutler and Schwartz serve as chief executive officer and chief operating officer, respectively, and Deena Bahri became the company's first chief marketing officer in 2019.
History and operations
The startup company was founded by Dan Gilbert, Josh Luber, Greg Schwartz, and Chris Kaufman starting in 2015, and launched in February 2016. Luber had previously founded StockX's predecessor website about rare sneakers called Campless (established during 2012–2013), and Schwartz holds the chief operating officer position. After Gilbert acquired Campless from Luber, Luber relocated from Philadelphia to work from Gilbert's One Campus Martius building in Downtown Detroit. StockX opened its first international headquarters in London in October 2018. Scott Cutler was appointed chief executive officer in June 2019.
The company has more than 800 employees, as of August 2019. StockX is among the fastest-growing startups in Detroit and Michigan, as of late 2018.
In addition to the One Campus Martius office, StockX has an authentication facility in Detroit's Corktown neighborhood. Prior to the authentication center's relocation in June 2018, the company had a team of 15 employees authenticating thousands of pairs of shoes daily. The larger Corktown facility increased the number of authentications. StockX opened a second authentication center in Tempe, Arizona, in late 2018, followed by two more in Moonachie, New Jersey, and West London. In 2019, the company opened a fifth authentication center in Eindhoven, Netherlands. StockX maintains a catalog of all fake items received.
The company's first "StockX Day" event in Detroit, which invited resellers, platform users, and industry influencers to meet employees and see operations, was held in October 2017 and attended by approximately 200 people. 150 buyers and sellers were selected from 3,000–5,000 applicants to attend the second "StockX Day" in April 2018. In May 2019, the company held a third "StockX Day" event for an audience of 300 and announced the opening of its first permanent location in New York City along with several major product updates.
StockX has collaborated with numerous celebrities and companies on charitable initiatives. In 2017, Eminem collaborated with the company to raffle off his Air Jordan sneakers designed in collaboration with Carhartt to raise funds for the Greater Houston Community Foundation Hurricane Harvey Relief Fund and Team Rubicon to support relief efforts in Texas and Florida following Hurricane Harvey. Nike Inc.'s debut of LeBron James' first retro sneaker via StockX marked the first time the brand bypassed retail and went directly to the secondary market. In 2018, StockX and Wu-Tang Clan collaborated on the Charity Rules Everyone Around Me (C.R.E.A.M.) campaign; proceeds from nine exclusive products benefitted the Wu-Tang Foundation to support children in underserved communities.
StockX suffered a data breach in mid-2019. In September, the company and Bleacher Report reached a multiyear advertising agreement, and Deena Bahri became StockX's first chief marketing officer.
In September 2020, StockX co-founder Josh Luber left the company.
In November 2021, StockX announced that it had signed UConn basketball superstar Paige Bueckers as its primary spokesperson for its basketball and women's sports lines.
In January 2022, StockX announced the ability to buy and sell NFTs on its marketplace.
Business model
StockX serves as an online marketplace, facilitating auctions between sellers and buyers, then collecting transaction and payment fees. Sellers send purchased items to StockX facilities for inspection and verification, then authenticated products are shipped to buyers. StockX features a "stock market-like" variable pricing framework and discloses price histories for specific items. StockX is most known for sneakers and streetwear but also carries other clothing and accessories such as handbags and watches. StockX surpassed eBay in total sneaker transactions in 2017. Counterfeit items are returned to sellers, and buyers are refunded.
StockX charges a 3 percent processing fee for all resellers, and a 9.5 percent transaction fee for new users, which decreases with experience. Prior to the company's expansion into Europe, StockX only advertised in the United States and accepted U.S. dollars. Fifteen percent of the company's buyers were international, as of September 2018.
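To make the fee arithmetic above concrete, the sketch below computes an estimated seller payout from the stated 3 percent processing fee and 9.5 percent new-seller transaction fee. It is an illustration only: the function name, rounding, and any tier structure are assumptions rather than StockX's published fee schedule.

```python
# Illustrative only: seller proceeds under the fee percentages cited above.
# The exact fee schedule, tiers, and rounding rules are assumptions.

def estimated_seller_payout(sale_price: float,
                            transaction_fee_rate: float = 0.095,
                            processing_fee_rate: float = 0.03) -> float:
    """Return the estimated amount a new seller receives after both fees."""
    fees = sale_price * (transaction_fee_rate + processing_fee_rate)
    return round(sale_price - fees, 2)

# A $200 sale by a new seller would net roughly $175 under these assumed rates.
print(estimated_seller_payout(200.00))  # 175.0
```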
In January 2019, StockX partnered with celebrity jeweler and influencer Ben Baller to sell 800 pairs of black and red slides directly to the public, marking the company's first "initial product offering" ("I.P.O."). The shoes were printed with the phrase "Ben Baller did the chain", a lyric from ASAP Ferg's song "Plain Jane" (2017).
Funding
StockX has received financial backing from Gilbert, investment companies Battery Ventures and GV, as well as Scooter Braun, Jon Buscemi, Eminem, Joe Haden, Ted Leonsis, and Mark Wahlberg. The company raised $6 million in February 2017. StockX received $44 million in a second venture round in September 2018. In addition to Battery and GV, investors included Steve Aoki, Marc Benioff, Don C, and Karlie Kloss. The investment helped fund StockX's international expansion.
In June 2019, StockX raised $110 million and was valued at $1 billion in another venture round. Investors included General Atlantic, GGV Capital, and Yuri Milner's firm DST Global, in addition to Battery Ventures and GV.
References
Companies based in Detroit
American companies established in 2015
2015 establishments in the United States
Sneaker culture
Electronic trading platforms |
5190684 | https://en.wikipedia.org/wiki/Aladdin%20Knowledge%20Systems | Aladdin Knowledge Systems | Aladdin Knowledge Systems was a company that produced software for digital rights management and Internet security. The company was acquired by SafeNet Inc. in 2009. Its corporate headquarters were located in Belcamp, MD.
History
Aladdin Knowledge Systems was founded in 1985 by Jacob (Yanki) Margalit, when he was 23 years old; he was soon joined by his brother Dany Margalit, who took the responsibility for product development at the age of 18, while at the same time completing a Mathematics and Computer Science degree at Tel Aviv University. In its early years the company developed two product lines, an artificial intelligence package (which was dropped early on) and a hardware product to prevent unauthorized software copying, similar to digital rights management. Margalit raised just $10,000 as an initial capital for the company.
The digital rights management product became a success and by 1993 generated sales of $4,000,000. The same year that company had an initial public offering on NASDAQ raising $7,900,000. In 2004 the company's shares were also listed on the Tel Aviv Stock Exchange. By 2007 the company's annual revenues reached over $105 million.
In mid-2008, Vector Capital was attempting to purchase Aladdin. Vector initially offered $14.50 per share, but Aladdin's founder Margalit refused the offer arguing that the company was worth more. Aladdin's shareholders agreed on the merger in February 2009 at $11.50 per share, in cash. In March 2009, Vector Capital acquired Aladdin and officially merged it with SafeNet.
Corporate timeline
1985 – Aladdin Knowledge Systems was established
1993 – Aladdin held an initial public offering
1996 – Aladdin acquired the German company FAST
1998 – Aladdin patented USB smart card-based authentication tokens
1998 (December) – Aladdin acquired the software protection business of EliaShim
1999 – Aladdin acquired the eSafe "content security" business of EliaShim
2000 – Aladdin acquired 10% of Comsec
2001 – Aladdin acquired the ESD assets of Preview Systems
2005 – Aladdin completed second offering – 2,000,000 shares with net proceeds of $39m
2009 – Aladdin was acquired by Vector Capital.
2010 – Aladdin was merged with Vector Capital's SafeNet.
Products
DRM
Aladdin's HASP product line is a digital rights management (DRM) suite of protection and licensing software with 40% global market share, used by over 30,000 software publishers. It is used across many platforms (Windows, Linux, Mac).
HASP, which stands for Hardware Against Software Piracy, was the company's first product and evolved into a complete digital rights management suite that includes a software-only option, a back-office management application and, in recent years, a software-as-a-service capability.
Internet security
In the late 1990s the company started diversifying and began offering Internet security and network security products, offering two product lines:
Digital identity management
eToken, a portable device for two-factor authentication, password and digital identity management, mainly deployed as a USB token.
Network security
eSafe a line of integrated network security and content filtering products, protecting networks against cracked and pirated Internet-borne software.
See also
Product activation
License manager
List of license managers
Floating licensing
Silicon Wadi
References
External links
SafeNet Inc. Data Protection & Software Licensing website
Content Security – eSafe
eToken PASS – Aladdin Product
Aladdin Knowledge Systems website
Information technology companies of Israel
Computer security software companies
Copyright enforcement companies
Software licenses
Digital rights management
Companies based in Petah Tikva
Companies formerly listed on the Nasdaq
Software companies established in 1985
1985 establishments in Israel |
64516462 | https://en.wikipedia.org/wiki/NEC%20%CE%BCCOM%20series | NEC μCOM series | The NEC μCOM series is a series of microprocessors and microcontrollers manufactured by NEC in the 1970s and 1980s.
Overview
The μCOM series has its roots in one of the world's earliest microprocessor chipsets, the two-chip processor µPD707 / µPD708. Early in 1970, Coca Cola Japan set out to increase the efficiency of their sales outlets by introducing new POS terminals. Sharp was contracted to build these terminals, and NEC in turn to develop a chipset. The chipset development was complete in December 1971, at about the same time as other early microprocessors in the USA.
Since then, NEC has developed and manufactured various microprocessors and microcontrollers. General-purpose products among them were given series names starting with μCOM. The μCOM-4 series (4 bit) and μCOM-16 series (16 bit) were original developments, while the μCOM-8 series (8 bit and 16 bit) consisted mostly of Intel- and Zilog-compatible microprocessors.
The μCOM name disappeared when the V series and 78K series appeared in the 1980s, and the μCOM-87AD series, for example, came to be described simply as the 87AD series.
μCOM-4 series
μCOM-4
The μCOM-4 (μPD751) is NEC's original single-chip 4-bit microprocessor, announced in 1973. Unlike the Intel 4040, the μPD751 has separate data and address buses. A number of peripheral integrated circuits were provided for the μPD751:
μPD752 - 8-bit I/O port
μPD757 - Keyboard and display controller
μPD758 - Printer controller
μCOM-41
The μCOM-41 (μPD541) is a PMOS microprocessor in a 42-pin package. The following peripheral integrated circuits were available:
μPD542 - ROM plus RAM
μPD543 - ROM plus I/O port
μCOM-42
The μCOM-42 (μPD548) is a 4-bit PMOS microcontroller in a 42-pin package. It has built-in ROM (1920 × 10 bit) and RAM (96 × 4 bit) as well as keyboard, display, and printer controllers. The μPD548 requires a power supply of -10V and the outputs can switch up to -35V. A ROM-less chip (μPD555) in a 64-pin quad-in-line package was available for hardware and software development.
μCOM-43 through μCOM-46
The μCOM-43 series consists of more than 10 different 4-bit microcontrollers. Broadly speaking, there are PMOS devices (μPD500 series), NMOS devices (μPD1500 series, μCOM-43N), and CMOS devices (μPD650 series, μCOM-43C). The μCOM-43, μCOM-44, μCOM-45, and μCOM-46 have the same basic instruction set. They differ in the amount of ROM and RAM, the number of I/O pins, and the package (28-pin or 42-pin). A ROM-less chip (μPD556) in a 64-pin quad-in-line package was available for hardware and software development. Beginning in 1980, they were gradually replaced by the μCOM-75 series (see below).
μCOM-47
The μCOM-47 (μPD766) is a 4-bit NMOS microcontroller in a 64-pin package. It has built-in ROM and RAM as well as keyboard, display, and printer controllers.
μCOM-75
The μCOM-75 series consists of 4-bit microcontrollers. Only the first device in the series, the μPD7520, was still developed in PMOS technology. All subsequent microcontrollers in the series (μPD7502 etc.) used CMOS. A ROM-less chip (μPD7500) in a 64-pin quad-in-line package was available for hardware and software development. By 1982 the μCOM-75 series was referred to as the μPD7500 series and later replaced by the 75X and 75XL series.
μCOM-8 series
μCOM-8
The μCOM-8 (μPD753) is an 8-bit microprocessor that is software-compatible with the Intel 8080, but differs in its 42-pin package and its completely different pin-out. There are minor software differences as well, e.g. the setting of flags for the SUB instruction.
μCOM-80
The μCOM-80 (μPD8080A) is an 8-bit microprocessor that is pin-compatible with the Intel 8080 and software-compatible with the μCOM-8. That is, the μPD8080A has some improvements compared to the Intel 8080:
BCD arithmetic is supported for both addition and subtraction (Intel 8080: addition only). Similar to the N flag in the Zilog Z80, the μPD8080A has a SUB flag (bit 5 of the flag register) to indicate that a subtraction was performed.
The instruction requires 4 clock cycles (Intel 8080: 5 clock cycles).
3-byte instructions are allowed in an interrupt acknowledge cycle, so an instruction to any memory address can be used (Intel 8080: only 1-byte instructions are allowed).
Unfortunately, these improvements cause some programs written for the Intel 8080 not to run correctly. To overcome this problem, NEC introduced the μCOM-80F (μPD8080AF) which is completely compatible with the Intel 8080 in all details. The 1979 catalog no longer listed the improved μPD8080A. With the TK-80, NEC offered a development board for μCOM-80, which due to its low price became popular with hobbyists.
μCOM-82
The μCOM-82 (μPD780) is an 8-bit microprocessor compatible with the Zilog Z80. The μPD780C corresponds to the original Z80 (max. 2.5 MHz clock) while the μPD780C-1 corresponds to the Z80A (max. 4 MHz clock). The µPD780C-1 was used in Sinclair's ZX80, ZX81 and early versions of the ZX Spectrum, in several MSX computers, in musical synthesizers such as Oberheim OB-8, and in Sega's SG-1000 game console.
A CMOS version (μPD70008) followed later.
μCOM-84
The µCOM-84 (µPD8048 etc.) is compatible with Intel's 8-bit microcontroller 8048. CMOS microcontrollers up to μPD80C50 followed, but an Intel 8051 compatible product, which is the 8-bit industry standard, was never offered.
μCOM-85
The µCOM-85 (µPD8085) is an Intel 8085 compatible 8-bit microprocessor.
μCOM-86, μCOM-88
The µCOM-86 (µPD8086) and µCOM-88 (µPD8088) are Intel 8086 and Intel 8088 compatible 16-bit microprocessors. They were superseded by the V series.
μCOM-87, μCOM-87AD
The µCOM-87 (µPD7800 etc.) and µCOM-87AD (µPD7810 etc.) are NEC original 8-bit microcontrollers. The μCOM-87AD adds an A/D converter to the μCOM-87. The register configuration consists of two sets of 8 registers each (A, V, B, C, D, E, H, L). The V register is a vector register that stores the upper 8 bits of the address of the working memory area, and the short address space which is fixed in the current 78K series can be freely arranged. The μPD7805 and μPD7806 have only one set of 7 registers (no V register). In the µPD7807 and later, the ALU is expanded to 16 bit and an EA register is added for 16-bit operation.
The series came in 64-pin quad in-line package. This series was superseded by the 78K series.
μCOM-16 series
μCOM-16
The μCOM-16 is a NEC original 16-bit microprocessor, implemented in two chips, the μPD755 (register + ALU) and μPD756 (controller), in 1974.
μCOM-1600
The μCOM-1600 (μPD768) is a NEC original single-chip 16-bit microprocessor that was announced in 1978.
The processor has 93 basic instructions, consisting of 1 to 3 16-bit words. The memory space of 1 Mbyte (512K words) is byte-addressable. The I/O address space is 2048 bytes. There are 14 general-purpose registers. The processor has a 2-input vector interrupt, DMA control, refresh control for DRAM, and a master/slave mode to enable multiprocessor operation.
References
muCOM series |
7931987 | https://en.wikipedia.org/wiki/Organizational%20Systems%20Security%20Analyst | Organizational Systems Security Analyst | The Organizational Systems Security Analyst (OSSA) is a technical vendor-neutral Information Security certification programme which is being offered in Asia. It is developed by ThinkSECURE Pte Ltd, an information-security certification body and consultancy. The programme consists of a specialized technical information security training and certification course and practical examination which technical Information Technology professionals can attend in order to become skilled and effective technical Information Security professionals and to prove their level of competence and skill by undergoing the examination.
Technical staff enrolling in the programme are taught and trained how to address the technical security issues they encounter in daily operations and how to methodically establish, operate and maintain security for their organization's computer network and computer systems infrastructure.
The OSSA programme does not focus on hackers' software as these quickly become obsolete as software patches are released. It first looks at security from a methodological thinking perspective and draws lessons from Sun Tzu's "The Art of War" to generate a security framework and then introduces example resources and tools by which the various security aims and objectives, such as "how to defend your server against a hacker's attacks" can be met.
Sun Tzu's 'Art of War' treatise is used to provide a guiding philosophy throughout the programme, addressing both offensive threats and the defensive measures needed to overcome them. The philosophy also extends to the sections on incident response methodology (i.e. how to respond to security breaches), computer forensics and the impact of law on security-related activities such as the recovery of information from a computer crime suspect's hard drive. Under the programme, students are given coursework and experience how to set up and maintain a complete enterprise-class security monitoring and defence infrastructure which includes firewalls, network intrusion detection systems, file-integrity checkers, honeypots and encryption. A unique attacker's methodology is also introduced to assist the technical staff with identifying the modus operandi of an attacker and his arsenal and to conduct auditing against computer systems by using that methodology.
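As an illustration of one of the defensive components listed above, the sketch below shows a minimal file-integrity checker: it records a baseline of SHA-256 hashes and later reports files that have changed or disappeared. The file paths, the baseline format, and the function names are assumptions made for illustration, not part of the OSSA coursework itself.

```python
# A minimal sketch of one component mentioned above, a file-integrity checker:
# record a baseline of file hashes, then report files whose hashes change.
import hashlib
import json
import os

def hash_file(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def build_baseline(paths: list[str], baseline_file: str = "baseline.json") -> None:
    baseline = {p: hash_file(p) for p in paths if os.path.isfile(p)}
    with open(baseline_file, "w") as f:
        json.dump(baseline, f, indent=2)

def check_against_baseline(baseline_file: str = "baseline.json") -> list[str]:
    with open(baseline_file) as f:
        baseline = json.load(f)
    return [p for p, digest in baseline.items()
            if not os.path.isfile(p) or hash_file(p) != digest]

# build_baseline(["/etc/passwd", "/etc/hosts"])   # hypothetical monitored files
# print(check_against_baseline())                 # files that changed or vanished
```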
The generic title sections under the programme appear to comprise the following:
What is Information Security
Network 101
Defending your Turf & Security Policy Formulation
Defensive Tools & Lockdown
The 5E Attacker Methodology: Attacker Methods & Exploits
Wireless (In)Security
Incident Response & Computer Forensics
The Impact Of Law
Under each section are many modules, for example the defensive section covers the setting up of firewalls, NIDS, HIDS, honeypots, cryptographic software, etc.
The OSSA programme consists of both practical hands-on lab-based coursework and a practical hands-on lab-based certification examination. According to the ThinkSECURE website, the rationale for this is that only those who prove they can apply their skills and knowledge to a completely new and unknown exam setup will get certified, while those who only know how to cram for exams by memorizing facts and figures and visiting brain-dump sites will not.
Compared to non-practical multiple-choice-question exam formats, this method of examination is beneficial for the Information Security industry and employers as a whole because it provides the following benefits:
makes sure only candidates who can prove ability to apply skills in a practical examination are certified.
stops brain-dumpers from attaining and devaluing the certification as a basis of competency evaluation.
protects people's and companies' money and time investment in getting certified.
helps employers identify technical staff who are more skilled.
provides the industry with a pool of competent, qualified technical staff.
External links
Organizational Systems Security Analyst (OSSA)
Definition of OSSA acronym
OSSA Programme Outline
ThinkSECURE
Association of Information Security Professionals list of certifications
Information technology qualifications
Professional titles and certifications |
44065971 | https://en.wikipedia.org/wiki/Blockchain | Blockchain | A blockchain is a growing list of records, called blocks, that are linked together using cryptography. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data (generally represented as a Merkle tree). The timestamp proves that the transaction data existed when the block was published, since the timestamp is incorporated into the block's hash. As each block contains information about the block before it, they form a chain, with each additional block reinforcing the ones before it. Therefore, blockchains are resistant to modification of their data because once recorded, the data in any given block cannot be altered retroactively without altering all subsequent blocks.
Blockchains are typically managed by a peer-to-peer network for use as a publicly distributed ledger, where nodes collectively adhere to a protocol to communicate and validate new blocks. Although blockchain records are not unalterable as forks are possible, blockchains may be considered secure by design and exemplify a distributed computing system with high Byzantine fault tolerance.
The blockchain was popularized by a person (or group of people) using the name Satoshi Nakamoto in 2008 to serve as the public transaction ledger of the cryptocurrency bitcoin, based on work by Stuart Haber, W. Scott Stornetta, and Dave Bayer. The identity of Satoshi Nakamoto remains unknown to date. The implementation of the blockchain within bitcoin made it the first digital currency to solve the double-spending problem without the need of a trusted authority or central server. The bitcoin design has inspired other applications and blockchains that are readable by the public and are widely used by cryptocurrencies. The blockchain is considered a type of payment rail.
Private blockchains have been proposed for business use. Computerworld called the marketing of such privatized blockchains without a proper security model "snake oil"; however, others have argued that permissioned blockchains, if carefully designed, may be more decentralized and therefore more secure in practice than permissionless ones.
History
Cryptographer David Chaum first proposed a blockchain-like protocol in his 1982 dissertation "Computer Systems Established, Maintained, and Trusted by Mutually Suspicious Groups." Further work on a cryptographically secured chain of blocks was described in 1991 by Stuart Haber and W. Scott Stornetta. They wanted to implement a system wherein document timestamps could not be tampered with. In 1992, Haber, Stornetta, and Dave Bayer incorporated Merkle trees to the design, which improved its efficiency by allowing several document certificates to be collected into one block. Under their company Surety, their document certificate hashes have been published in The New York Times every week since 1995.
The first decentralized blockchain was conceptualized by a person (or group of people) known as Satoshi Nakamoto in 2008. Nakamoto improved the design in an important way using a Hashcash-like method to timestamp blocks without requiring them to be signed by a trusted party and introducing a difficulty parameter to stabilize the rate at which blocks are added to the chain. The design was implemented the following year by Nakamoto as a core component of the cryptocurrency bitcoin, where it serves as the public ledger for all transactions on the network.
In August 2014, the bitcoin blockchain file size, containing records of all transactions that have occurred on the network, reached 20 GB (gigabytes). In January 2015, the size had grown to almost 30 GB, and from January 2016 to January 2017, the bitcoin blockchain grew from 50 GB to 100 GB in size. The ledger size had exceeded 200 GB by early 2020.
The words block and chain were used separately in Satoshi Nakamoto's original paper, but were eventually popularized as a single word, blockchain, by 2016.
According to Accenture, an application of the diffusion of innovations theory suggests that blockchains attained a 13.5% adoption rate within financial services in 2016, therefore reaching the early adopters phase. Industry trade groups joined to create the Global Blockchain Forum in 2016, an initiative of the Chamber of Digital Commerce.
In May 2018, Gartner found that only 1% of CIOs indicated any kind of blockchain adoption within their organisations, and only 8% of CIOs were in the short-term "planning or [looking at] active experimentation with blockchain". For the year 2019 Gartner reported 5% of CIOs believed blockchain technology was a 'game-changer' for their business.
Structure
A blockchain is a decentralized, distributed, and oftentimes public, digital ledger consisting of records called blocks that is used to record transactions across many computers so that any involved block cannot be altered retroactively, without the alteration of all subsequent blocks. This allows the participants to verify and audit transactions independently and relatively inexpensively. A blockchain database is managed autonomously using a peer-to-peer network and a distributed timestamping server. They are authenticated by mass collaboration powered by collective self-interests. Such a design facilitates robust workflow where participants' uncertainty regarding data security is marginal. The use of a blockchain removes the characteristic of infinite reproducibility from a digital asset. It confirms that each unit of value was transferred only once, solving the long-standing problem of double spending. A blockchain has been described as a value-exchange protocol. A blockchain can maintain title rights because, when properly set up to detail the exchange agreement, it provides a record that compels offer and acceptance.
Logically, a blockchain can be seen as consisting of several layers:
infrastructure (hardware)
networking (node discovery, information propagation and verification)
consensus (proof of work, proof of stake)
data (blocks, transactions)
application (smart contracts/decentralized applications, if applicable)
Blocks
Blocks hold batches of valid transactions that are hashed and encoded into a Merkle tree. Each block includes the cryptographic hash of the prior block in the blockchain, linking the two. The linked blocks form a chain. This iterative process confirms the integrity of the previous block, all the way back to the initial block, which is known as the genesis block. To assure the integrity of a block and the data contained in it, the block is usually digitally signed.
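The following sketch illustrates the structure described above: transactions are condensed into a Merkle root, and each block header stores the hash of the previous block, so altering an earlier block would change every later hash. The field names, the JSON serialization, and the use of single SHA-256 are simplifying assumptions and do not describe any particular blockchain's exact format.

```python
# A minimal sketch of hash-linked blocks with a Merkle root of their transactions.
import hashlib
import json

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions: list[str]) -> bytes:
    """Pairwise-hash transaction hashes until a single root remains."""
    level = [sha256(tx.encode()) for tx in transactions] or [sha256(b"")]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def make_block(prev_hash: str, transactions: list[str], timestamp: int) -> dict:
    header = {
        "prev_hash": prev_hash,
        "merkle_root": merkle_root(transactions).hex(),
        "timestamp": timestamp,
    }
    header["hash"] = sha256(json.dumps(header, sort_keys=True).encode()).hex()
    return header

genesis = make_block("00" * 32, ["coinbase tx"], 1)       # genesis block
block_1 = make_block(genesis["hash"], ["alice->bob"], 2)  # links back to genesis
assert block_1["prev_hash"] == genesis["hash"]
```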
Sometimes separate blocks can be produced concurrently, creating a temporary fork. In addition to a secure hash-based history, any blockchain has a specified algorithm for scoring different versions of the history so that one with a higher score can be selected over others. Blocks not selected for inclusion in the chain are called orphan blocks. Peers supporting the database have different versions of the history from time to time. They keep only the highest-scoring version of the database known to them. Whenever a peer receives a higher-scoring version (usually the old version with a single new block added) they extend or overwrite their own database and retransmit the improvement to their peers. There is never an absolute guarantee that any particular entry will remain in the best version of the history forever. Blockchains are typically built to add the score of new blocks onto old blocks and are given incentives to extend with new blocks rather than overwrite old blocks. Therefore, the probability of an entry becoming superseded decreases exponentially as more blocks are built on top of it, eventually becoming very low. For example, bitcoin uses a proof-of-work system, where the chain with the most cumulative proof-of-work is considered the valid one by the network. There are a number of methods that can be used to demonstrate a sufficient level of computation. Within a blockchain the computation is carried out redundantly rather than in the traditional segregated and parallel manner.
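A toy example of the scoring rule just described: a node keeps whichever known fork has the highest cumulative score and discards the rest. Here the per-block "work" values are arbitrary numbers chosen for illustration; real proof-of-work chains derive them from each block's difficulty target.

```python
# A toy illustration of fork selection by cumulative "work".

def cumulative_work(chain: list[dict]) -> int:
    return sum(block["work"] for block in chain)

def best_chain(known_forks: list[list[dict]]) -> list[dict]:
    """Return the fork a node would keep and extend."""
    return max(known_forks, key=cumulative_work)

fork_a = [{"height": 0, "work": 10}, {"height": 1, "work": 10}]
fork_b = fork_a[:1] + [{"height": 1, "work": 10}, {"height": 2, "work": 10}]
assert best_chain([fork_a, fork_b]) is fork_b   # the heavier fork wins
```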
Block time
The block time is the average time it takes for the network to generate one extra block in the blockchain. Some blockchains create a new block as frequently as every five seconds. By the time of block completion, the included data becomes verifiable. In cryptocurrency, this is practically when the transaction takes place, so a shorter block time means faster transactions. The block time for Ethereum is set to between 14 and 15 seconds, while for bitcoin it is on average 10 minutes.
Hard forks
Decentralization
By storing data across its peer-to-peer network, the blockchain eliminates a number of risks that come with data being held centrally. The decentralized blockchain may use ad hoc message passing and distributed networking. One risk of a lack of a decentralization is a so-called "51% attack" where a central entity can gain control of more than half of a network and can manipulate that specific blockchain record at will, allowing double-spending.
Peer-to-peer blockchain networks lack centralized points of vulnerability that computer crackers can exploit; likewise, they have no central point of failure. Blockchain security methods include the use of public-key cryptography. A public key (a long, random-looking string of numbers) is an address on the blockchain. Value tokens sent across the network are recorded as belonging to that address. A private key is like a password that gives its owner access to their digital assets or the means to otherwise interact with the various capabilities that blockchains now support. Data stored on the blockchain is generally considered incorruptible.
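As a simplified illustration of how addresses relate to public keys, the sketch below derives a short identifier by hashing a public key. Real systems use elliptic-curve key pairs plus additional hashing, encoding, and checksum steps that are omitted here; the placeholder key bytes and the "addr_" prefix are purely illustrative assumptions.

```python
# A simplified sketch: derive an address-like identifier by hashing a public key.
import hashlib

def toy_address(public_key: bytes) -> str:
    """Hash the public key and keep a short prefix as a human-readable address."""
    digest = hashlib.sha256(public_key).hexdigest()
    return "addr_" + digest[:40]

example_public_key = bytes.fromhex("04" + "ab" * 64)   # placeholder, not a real key
print(toy_address(example_public_key))
```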
Every node in a decentralized system has a copy of the blockchain. Data quality is maintained by massive database replication and computational trust. No centralized "official" copy exists and no user is "trusted" more than any other. Transactions are broadcast to the network using software. Messages are delivered on a best-effort basis. Mining nodes validate transactions, add them to the block they are building, and then broadcast the completed block to other nodes. Blockchains use various time-stamping schemes, such as proof-of-work, to serialize changes. Alternative consensus methods include proof-of-stake. Growth of a decentralized blockchain is accompanied by the risk of centralization because the computer resources required to process larger amounts of data become more expensive.
Openness
Open blockchains are more user-friendly than some traditional ownership records, which, while open to the public, still require physical access to view. Because all early blockchains were permissionless, controversy has arisen over the blockchain definition. An issue in this ongoing debate is whether a private system with verifiers tasked and authorized (permissioned) by a central authority should be considered a blockchain. Proponents of permissioned or private chains argue that the term "blockchain" may be applied to any data structure that batches data into time-stamped blocks. These blockchains serve as a distributed version of multiversion concurrency control (MVCC) in databases. Just as MVCC prevents two transactions from concurrently modifying a single object in a database, blockchains prevent two transactions from spending the same single output in a blockchain. Opponents say that permissioned systems resemble traditional corporate databases, not supporting decentralized data verification, and that such systems are not hardened against operator tampering and revision. Nikolai Hampton of Computerworld said that "many in-house blockchain solutions will be nothing more than cumbersome databases," and "without a clear security model, proprietary blockchains should be eyed with suspicion."
Permissionlessness
An advantage to an open, permissionless, or public, blockchain network is that guarding against bad actors is not required and no access control is needed. This means that applications can be added to the network without the approval or trust of others, using the blockchain as a transport layer.
Bitcoin and other cryptocurrencies currently secure their blockchain by requiring new entries to include a proof of work. To prolong the blockchain, bitcoin uses Hashcash puzzles. While Hashcash was designed in 1997 by Adam Back, the original idea was first proposed by Cynthia Dwork and Moni Naor in their 1992 paper "Pricing via Processing or Combatting Junk Mail".
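A minimal Hashcash-style proof-of-work in the spirit described above: the prover searches for a nonce whose hash of header-plus-nonce falls below a target, while anyone can verify the result with a single hash. The difficulty here is deliberately tiny so the example runs quickly; it is a sketch of the idea, not bitcoin's actual mining algorithm.

```python
# A minimal Hashcash-style proof-of-work: costly to find, cheap to verify.
import hashlib
from itertools import count

def proof_of_work(header: bytes, difficulty_bits: int = 16) -> int:
    target = 2 ** (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(header: bytes, nonce: int, difficulty_bits: int = 16) -> bool:
    digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < 2 ** (256 - difficulty_bits)

nonce = proof_of_work(b"example block header")
assert verify(b"example block header", nonce)
```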
In 2016, venture capital investment for blockchain-related projects was weakening in the USA but increasing in China. Bitcoin and many other cryptocurrencies use open (public) blockchains. Among them, bitcoin has the highest market capitalization.
Permissioned (private) blockchain
Permissioned blockchains use an access control layer to govern who has access to the network. In contrast to public blockchain networks, validators on private blockchain networks are vetted by the network owner. They do not rely on anonymous nodes to validate transactions nor do they benefit from the network effect. Permissioned blockchains can also go by the name of 'consortium' blockchains. It has been argued that permissioned blockchains can guarantee a certain level of decentralization, if carefully designed, as opposed to permissionless blockchains, which are often centralized in practice.
Disadvantages of private blockchain
Nikolai Hampton pointed out in Computerworld that "There is also no need for a '51 percent' attack on a private blockchain, as the private blockchain (most likely) already controls 100 percent of all block creation resources. If you could attack or damage the blockchain creation tools on a private corporate server, you could effectively control 100 percent of their network and alter transactions however you wished." This has a set of particularly profound adverse implications during a financial crisis or debt crisis like the financial crisis of 2007–08, where politically powerful actors may make decisions that favor some groups at the expense of others, and "the bitcoin blockchain is protected by the massive group mining effort. It's unlikely that any private blockchain will try to protect records using gigawatts of computing power — it's time consuming and expensive." He also said, "Within a private blockchain there is also no 'race'; there's no incentive to use more power or discover blocks faster than competitors. This means that many in-house blockchain solutions will be nothing more than cumbersome databases."
Blockchain analysis
The analysis of public blockchains has become increasingly important with the popularity of bitcoin, Ethereum, litecoin and other cryptocurrencies. A blockchain, if it is public, allows anyone who wants access to observe and analyse the chain data, given one has the know-how. The process of understanding and accessing the flow of crypto has been an issue for many cryptocurrencies, crypto-exchanges and banks. The reason for this is accusations that blockchain-enabled cryptocurrencies enable illicit dark-market trade in drugs and weapons, money laundering, and other crimes. A common belief has been that cryptocurrency is private and untraceable, thus leading many actors to use it for illegal purposes. This is changing, and specialised tech companies now provide blockchain tracking services, making crypto exchanges, law enforcement and banks more aware of what is happening with crypto funds and fiat-crypto exchanges. The development, some argue, has led criminals to prioritise use of new cryptos such as Monero. The question is about the public accessibility of blockchain data and the personal privacy of the very same data. It is a key debate in cryptocurrency and ultimately in blockchain.
Standardisation
In April 2016, Standards Australia submitted a proposal to the International Organization for Standardization to consider developing standards to support blockchain technology. This proposal resulted in the creation of ISO Technical Committee 307, Blockchain and Distributed Ledger Technologies. The technical committee has working groups relating to blockchain terminology, reference architecture, security and privacy, identity, smart contracts, governance and interoperability for blockchain and DLT, as well as standards specific to industry sectors and generic government requirements. More than 50 countries are participating in the standardization process together with external liaisons such as the Society for Worldwide Interbank Financial Telecommunication (SWIFT), the European Commission, the International Federation of Surveyors, the International Telecommunication Union (ITU) and the United Nations Economic Commission for Europe (UNECE).
Many other national standards bodies and open standards bodies are also working on blockchain standards. These include the National Institute of Standards and Technology (NIST), the European Committee for Electrotechnical Standardization (CENELEC), the Institute of Electrical and Electronics Engineers (IEEE), the Organization for the Advancement of Structured Information Standards (OASIS), and some individual participants in the Internet Engineering Task Force (IETF).
Types
Currently, there are at least four types of blockchain networks — public blockchains, private blockchains, consortium blockchains and hybrid blockchains.
Public blockchains
A public blockchain has absolutely no access restrictions. Anyone with an Internet connection can send transactions to it as well as become a validator (i.e., participate in the execution of a consensus protocol). Usually, such networks offer economic incentives for those who secure them and utilize some type of a Proof of Stake or Proof of Work algorithm.
Some of the largest, most known public blockchains are the bitcoin blockchain and the Ethereum blockchain.
Private blockchains
A private blockchain is permissioned. One cannot join it unless invited by the network administrators. Participant and validator access is restricted. To distinguish between open blockchains and other peer-to-peer decentralized database applications that are not open ad-hoc compute clusters, the terminology distributed ledger technology (DLT) is normally used for private blockchains.
Hybrid blockchains
A hybrid blockchain has a combination of centralized and decentralized features. The exact workings of the chain can vary based on which portions of centralization and decentralization are used.
Sidechains
A sidechain is a designation for a blockchain ledger that runs in parallel to a primary blockchain. Entries from the primary blockchain (where said entries typically represent digital assets) can be linked to and from the sidechain; this allows the sidechain to otherwise operate independently of the primary blockchain (e.g., by using an alternate means of record keeping, alternate consensus algorithm, etc.).
Uses
Blockchain technology can be integrated into multiple areas. The primary use of blockchains is as a distributed ledger for cryptocurrencies such as bitcoin; there were also a few other operational products which had matured from proof of concept by late 2016. As of 2016, some businesses have been testing the technology and conducting low-level implementation to gauge blockchain's effects on organizational efficiency in their back office.
In 2019, it was estimated that around $2.9 billion were invested in blockchain technology, which represents an 89% increase from the year prior. Additionally, the International Data Corp has estimated that corporate investment into blockchain technology will reach $12.4 billion by 2022. Furthermore, according to PricewaterhouseCoopers (PwC), the second-largest professional services network in the world, blockchain technology has the potential to generate an annual business value of more than $3 trillion by 2030. PwC's estimate is further augmented by a 2018 study it conducted, in which PwC surveyed 600 business executives and determined that 84% have at least some exposure to utilizing blockchain technology, which indicates significant demand for and interest in blockchain technology.
Individual use of blockchain technology has also greatly increased since 2016. According to statistics in 2020, there were more than 40 million blockchain wallets in 2020 in comparison to around 10 million blockchain wallets in 2016.
Cryptocurrencies
Most cryptocurrencies use blockchain technology to record transactions. For example, the bitcoin network and Ethereum network are both based on blockchain. On 8 May 2018 Facebook confirmed that it would open a new blockchain group which would be headed by David Marcus, who previously was in charge of Messenger. Facebook's planned cryptocurrency platform, Libra (now known as Diem), was formally announced on June 18, 2019.
The criminal enterprise Silk Road, which operated on Tor, utilized cryptocurrency for payments, some of which the US federal government has seized through research on the blockchain and forfeiture.
Governments have mixed policies on the legality of their citizens or banks owning cryptocurrencies. China implements blockchain technology in several industries including a national digital currency which launched in 2020. In order to strengthen their respective currencies, Western governments including the European Union and the United States have initiated similar projects.
Smart contracts
Blockchain-based smart contracts are proposed contracts that can be partially or fully executed or enforced without human interaction. One of the main objectives of a smart contract is automated escrow. A key feature of smart contracts is that they do not need a trusted third party (such as a trustee) to act as an intermediary between contracting entities; the blockchain network executes the contract on its own. This may reduce friction between entities when transferring value and could subsequently open the door to a higher level of transaction automation. An IMF staff discussion from 2018 reported that smart contracts based on blockchain technology might reduce moral hazards and optimize the use of contracts in general. But "no viable smart contract systems have yet emerged." Due to the lack of widespread use, their legal status was unclear.
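To illustrate the automated-escrow idea in plain terms, the sketch below releases funds only once a stated condition is met, with no trustee making the decision. It is an off-chain Python analogy only; real smart contracts are code executed by the blockchain network itself, and the class and method names here are invented for illustration.

```python
# A highly simplified, off-chain analogy of automated escrow: funds are released
# only when the stated condition holds, without a third party deciding.

class ToyEscrow:
    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        self.released = False

    def confirm_delivery(self) -> None:
        self.delivered = True

    def release(self) -> str:
        # The "contract" pays out automatically once its condition holds.
        if self.delivered and not self.released:
            self.released = True
            return f"{self.amount} paid to {self.seller}"
        return "condition not met; funds stay locked"

escrow = ToyEscrow("alice", "bob", 100)
print(escrow.release())           # condition not met; funds stay locked
escrow.confirm_delivery()
print(escrow.release())           # 100 paid to bob
```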
Financial services
According to Reason, many banks have expressed interest in implementing distributed ledgers for use in banking and are cooperating with companies creating private blockchains, and according to a September 2016 IBM study, this is occurring faster than expected.
Banks are interested in this technology not least because it has the potential to speed up back-office settlement systems. Moreover, as the blockchain industry has reached early maturity, institutional appreciation has grown that it is, practically speaking, the infrastructure of a whole new financial industry, with all the implications which that entails.
Banks such as UBS are opening new research labs dedicated to blockchain technology in order to explore how blockchain can be used in financial services to increase efficiency and reduce costs.
Berenberg, a German bank, believes that blockchain is an "overhyped technology" that has had a large number of "proofs of concept", but still has major challenges, and very few success stories.
The blockchain has also given rise to initial coin offerings (ICOs) as well as a new category of digital asset called security token offerings (STOs), also sometimes referred to as digital security offerings (DSOs). STO/DSOs may be conducted privately or on a public, regulated stock exchange and are used to tokenize traditional assets such as company shares as well as more innovative ones like intellectual property, real estate, art, or individual products. A number of companies are active in this space providing services for compliant tokenization, private STOs, and public STOs.
Games
Blockchain technology, such as cryptocurrencies and non-fungible tokens (NFTs), has been used in video games for monetization. Many live-service games offer in-game customization options, such as character skins or other in-game items, which the players can earn and trade with other players using in-game currency. Some games also allow for trading of virtual items using real-world currency, but this may be illegal in some countries where video games are seen as akin to gambling, and has led to gray market issues such as skin gambling, and thus publishers typically have shied away from allowing players to earn real-world funds from games. Blockchain games typically allow players to trade these in-game items for cryptocurrency, which can then be exchanged for money.
The first known game to use blockchain technologies was CryptoKitties, launched in November 2017, where the player would purchase NFTs with Ethereum cryptocurrency, each NFT consisting of a virtual pet that the player could breed with others to create offspring with combined traits as new NFTs. The game made headlines in December 2017 when one virtual pet sold for more than US$100,000. CryptoKitties also illustrated scalability problems for games on Ethereum when it created significant congestion on the Ethereum network in early 2018 with approximately 30% of all Ethereum transactions being for the game.
By the early 2020s there had not been a breakout success in video games using blockchain, as these games tend to focus on using blockchain for speculation instead of more traditional forms of gameplay, which offers limited appeal to most players. Such games also represent a high risk to investors as their revenues can be difficult to predict. However, limited successes of some games, such as Axie Infinity during the COVID-19 pandemic, and corporate plans towards metaverse content, refueled interest in the area of GameFi, a term describing the intersection of video games and financing typically backed by blockchain currency, in the second half of 2021. Several major publishers, including Ubisoft, Electronic Arts, and Take Two Interactive, have stated that blockchain and NFT-based games are under serious consideration for their companies in the future.
In October 2021, Valve Corporation banned blockchain games, including those using cryptocurrency and NFTs, from being hosted on its Steam digital storefront service, which is widely used for personal computer gaming, claiming that this was an extension of their policy banning games that offered in-game items with real-world value. Valve's prior history with gambling, specifically skin gambling, was speculated to be a factor in the decision to ban blockchain games. Journalists and players responded positively to Valve's decision, as blockchain and NFT games have a reputation for scams and fraud among most PC gamers. Epic Games, which runs the Epic Games Store in competition with Steam, said that it would be open to accepting blockchain games in the wake of Valve's refusal.
Supply chain
There have been several different efforts to employ blockchains in supply chain management.
Precious commodities mining — Blockchain technology has been used for tracking the origins of gemstones and other precious commodities. In 2016, The Wall Street Journal reported that the blockchain technology company, Everledger was partnering with IBM's blockchain-based tracking service to trace the origin of diamonds to ensure that they were ethically mined. As of 2019, the Diamond Trading Company (DTC) has been involved in building a diamond trading supply chain product called Tracr.
Food supply — As of 2018, Walmart and IBM were running a trial to use a blockchain-backed system for supply chain monitoring for lettuce and spinach — all nodes of the blockchain were administered by Walmart and were located on the IBM cloud. In 2021, scientists from Nosh Technologies and the University of Essex developed a blockchain based approach named FoodSQRBlock using QR code and cloud computing to digitize food supply chain data to improve traceability of food by the farmers and consumers.
Domain names
There are several different efforts to offer domain name services via blockchain. These domain names can be controlled by the use of a private key, which purport to allow for uncensorable websites. This would also bypass a registrar's ability to suppress domains used for fraud, abuse, or illegal content.
Namecoin is a cryptocurrency that supports the ".bit" top-level domain (TLD). Namecoin was forked from bitcoin in 2011. The .bit TLD is not sanctioned by ICANN, instead requiring an alternative DNS root. As of 2015, it was used by 28 websites, out of 120,000 registered names. Namecoin was dropped by OpenNIC in 2019, due to malware and potential other legal issues. Other blockchain alternatives to ICANN include The Handshake Network, EmerDNS, and Unstoppable Domains.
Specific TLDs include ".eth", ".luxe", and ".kred", which are associated with the Ethereum blockchain through the Ethereum Name Service (ENS). The .kred TLD also acts as an alternative to conventional cryptocurrency wallet addresses, as a convenience for transferring cryptocurrency.
Other uses
Blockchain technology can be used to create a permanent, public, transparent ledger system for compiling data on sales, tracking digital use and payments to content creators, such as wireless users or musicians. The Gartner 2019 CIO Survey reported 2% of higher education respondents had launched blockchain projects and another 18% were planning academic projects in the next 24 months. In 2017, IBM partnered with ASCAP and PRS for Music to adopt blockchain technology in music distribution. Imogen Heap's Mycelia service has also been proposed as blockchain-based alternative "that gives artists more control over how their songs and associated data circulate among fans and other musicians."
New distribution methods are available for the insurance industry such as peer-to-peer insurance, parametric insurance and microinsurance following the adoption of blockchain. The sharing economy and IoT are also set to benefit from blockchains because they involve many collaborating peers. The use of blockchain in libraries is being studied with a grant from the U.S. Institute of Museum and Library Services.
Other blockchain designs include Hyperledger, a collaborative effort from the Linux Foundation to support blockchain-based distributed ledgers, with projects under this initiative including Hyperledger Burrow (by Monax) and Hyperledger Fabric (spearheaded by IBM). Another is Quorum, a permissionable private blockchain by JPMorgan Chase with private storage, used for contract applications.
Blockchain is also being used in peer-to-peer energy trading.
Blockchain could be used in detecting counterfeits by associating unique identifiers to products, documents and shipments, and storing records associated to transactions that cannot be forged or altered. It is however argued that blockchain technology needs to be supplemented with technologies that provide a strong binding between physical objects and blockchain systems. The EUIPO established an Anti-Counterfeiting Blockathon Forum, with the objective of "defining, piloting and implementing" an anti-counterfeiting infrastructure at the European level. The Dutch Standardisation organisation NEN uses blockchain together with QR Codes to authenticate certificates.
Blockchain interoperability
With the increasing number of blockchain systems appearing, even only those that support cryptocurrencies, blockchain interoperability is becoming a topic of major importance. The objective is to support transferring assets from one blockchain system to another blockchain system. Wegner stated that "interoperability is the ability of two or more software components to cooperate despite differences in language, interface, and execution platform". The objective of blockchain interoperability is therefore to support such cooperation among blockchain systems, despite those kinds of differences.
There are already several blockchain interoperability solutions available. They can be classified in three categories: cryptocurrency interoperability approaches, blockchain engines, and blockchain connectors.
Several individual IETF participants produced the draft of a blockchain interoperability architecture.
Energy consumption concerns
Blockchain mining — the peer-to-peer computer computations by which transactions are validated and verified — requires a significant amount of energy. In June 2018 the Bank for International Settlements criticized the use of public proof-of-work blockchains for their high energy consumption. In 2021, a study by Cambridge University determined that Bitcoin (at 121 terawatt-hours per year) used more electricity than Argentina (at 121TWh) and the Netherlands (109TWh). According to Digiconomist, one bitcoin transaction required 708 kilowatt-hours of electrical energy, the amount an average U.S. household consumed in 24 days.
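A quick check of the per-transaction comparison cited above: assuming an average U.S. household consumption of roughly 29.5 kWh per day (about 10,800 kWh per year), 708 kWh corresponds to roughly 24 days of household use. The assumed household rate is an illustration chosen to match the comparison, not a figure taken from the cited source.

```python
# A worked check of the per-transaction figure cited above. The household
# consumption rate (~29.5 kWh/day) is an assumption for illustration.
energy_per_tx_kwh = 708
household_kwh_per_day = 29.5

days_of_household_use = energy_per_tx_kwh / household_kwh_per_day
print(round(days_of_household_use))   # ~24 days, matching the comparison above
```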
In February 2021, U.S. Treasury secretary Janet Yellen called Bitcoin "an extremely inefficient way to conduct transactions", saying "the amount of energy consumed in processing those transactions is staggering." In March 2021, Bill Gates stated that "Bitcoin uses more electricity per transaction than any other method known to mankind", adding "It's not a great climate thing."
Nicholas Weaver, of the International Computer Science Institute at the University of California, Berkeley, examined blockchain's online security, and the energy efficiency of proof-of-work public blockchains, and in both cases found it grossly inadequate. The 31TWh–45TWh of electricity used for bitcoin in 2018 produced 17–22.9 million tonnes of CO2. By 2022, the University of Cambridge and Digiconomist estimated that the two largest proof-of-work blockchains, Bitcoin and Ethereum, together used twice as much electricity in one year as the whole of Sweden, leading to the release of up to 120 million tonnes of CO2 each year.
Inside the cryptocurrency industry, concern about high energy consumption has led some companies to consider moving from the proof of work blockchain model to the less energy-intensive proof of stake model. Academics and researchers have estimated that Bitcoin consumes 100,000 times as much energy as proof-of-stake networks.
Academic research
In October 2014, the MIT Bitcoin Club, with funding from MIT alumni, provided undergraduate students at the Massachusetts Institute of Technology access to $100 of bitcoin. The adoption rates, as studied by Catalini and Tucker (2016), revealed that when people who typically adopt technologies early are given delayed access, they tend to reject the technology. Many universities have founded departments focusing on crypto and blockchain, including MIT, in 2017. In the same year, Edinburgh became "one of the first big European universities to launch a blockchain course", according to the Financial Times.
Adoption decision
Motivations for adopting blockchain technology (an aspect of innovation adoption) have been investigated by researchers. For example, Janssen et al. provided a framework for analysis, and Koens & Poll pointed out that adoption could be heavily driven by non-technical factors. Based on behavioral models, Li has discussed the differences between adoption at the individual and organizational levels.
Collaboration
Scholars in business and management have started studying the role of blockchains to support collaboration. It has been argued that blockchains can foster both cooperation (i.e., prevention of opportunistic behavior) and coordination (i.e., communication and information sharing). Thanks to reliability, transparency, traceability of records, and information immutability, blockchains facilitate collaboration in a way that differs both from the traditional use of contracts and from relational norms. Contrary to contracts, blockchains do not directly rely on the legal system to enforce agreements. In addition, contrary to the use of relational norms, blockchains do not require trust or direct connections between collaborators.
Blockchain and internal audit
The need for internal audit to provide effective oversight of organizational efficiency will require a change in the way that information is accessed in new formats. Blockchain adoption requires a framework to identify the risk of exposure associated with transactions using blockchain. The Institute of Internal Auditors has identified the need for internal auditors to address this transformational technology. New methods are required to develop audit plans that identify threats and risks. The Internal Audit Foundation study, Blockchain and Internal Audit, assesses these factors. The American Institute of Certified Public Accountants has outlined new roles for auditors as a result of blockchain.
Journals
In September 2015, the first peer-reviewed academic journal dedicated to cryptocurrency and blockchain technology research, Ledger, was announced. The inaugural issue was published in December 2016. The journal covers aspects of mathematics, computer science, engineering, law, economics and philosophy that relate to cryptocurrencies such as bitcoin.
The journal encourages authors to digitally sign a file hash of submitted papers, which are then timestamped into the bitcoin blockchain. Authors are also asked to include a personal bitcoin address in the first page of their papers for non-repudiation purposes.
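As a rough illustration of the first step of that workflow, the sketch below computes the SHA-256 digest of a submitted manuscript. The filename is hypothetical, and signing the digest and committing it to the bitcoin blockchain (for example via an OP_RETURN output) are outside this sketch; it is not the journal's documented tooling.

```python
# Minimal sketch: compute the SHA-256 digest of a manuscript so it can later be
# signed and timestamped. "manuscript.pdf" is a hypothetical filename.
import hashlib

def file_sha256(path: str) -> str:
    """Return the hex SHA-256 digest of the file at `path`."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    print(file_sha256("manuscript.pdf"))
```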
See also
Version control – a record of all changes (usually to a software project) in the form of a graph
Changelog – a record of all notable changes made to a project
Checklist – an informational aid used to reduce failure
Economics of digitization
Privacy and blockchain
References
Further reading
D. Puthal, N. Malik, S. P. Mohanty, E. Kougianos, and G. Das, "Everything you Wanted to Know about the Blockchain", IEEE Consumer Electronics Magazine, Volume 7, Issue 4, July 2018, pp. 06–14.
David L. Portilla, David J. Kappos, Minh Van Ngo, Sasha Rosenthal-Larrea, John D. Buretta and Christopher K. Fargo, Cravath, Swaine & Moore LLP, "Blockchain in the Banking Sector: A Review of the Landscape and Opportunities", Harvard Law School Forum on Corporate Governance, posted on Friday, January 28, 2022
External links
Bitcoin
Cryptocurrencies
Database models
Emerging technologies
Financial metadata
Computer-related introductions in 2009
Information systems
Writing systems
Mathematical tools
Counting instruments
Encodings
Decentralization
21st-century inventions |
11441217 | https://en.wikipedia.org/wiki/Cap%20Gemini%20SDM | Cap Gemini SDM | Cap Gemini SDM, or SDM2 (System Development Methodology) is a software development method developed by the software company Pandata in the Netherlands in 1970. The method is a waterfall model divided in seven phases that have a clear start and end. Each phase delivers subproducts, called milestones. It was used extensively in the Netherlands for ICT projects in the 1980s and 1990s. Pandata was purchased by the Capgemini group in the 1980s, and the last version of SDM to be published in English was SDM2 (6th edition) in 1991 by Cap Gemini Publishing BV. The method was regularly taught and distributed among Capgemini consultants and customers, until the waterfall method slowly went out of fashion in the wake of more iterative extreme programming methods such as Rapid application development, Rational Unified Process and Agile software development.
The Cap Gemini SDM Methodology
In the early to mid-1970s, the various generic work steps of system development methodologies were replaced with work steps based on various structured analysis or structured design techniques. SDM, SDM2, SDM/70, and Spectrum evolved into system development methodologies that were based on the works of Steven Ward, Tom DeMarco, Larry Constantine, Ken Orr, Ed Yourdon, Michael A. Jackson and others, as well as data modeling techniques developed by Charles Bachman and Peter Chen. SDM is a top-down model. Starting from the system as a whole, its description becomes more detailed as the design progresses. The method was marketed as a proprietary method that all company developers were required to use to ensure quality in customer projects. This method shows several similarities with the proprietary methods of Cap Gemini's most important competitors in 1990. A similar waterfall method that was later used against the company itself in court proceedings in 2002 was CMG:Commander.
History
SDM was developed in 1970 by a company known as PANDATA, now part of Cap Gemini, which itself was created as a joint venture by three Dutch companies: AKZO, Nationale Nederlanden and Posterijen, Telegrafie en Telefonie (Nederland). The company was founded in order to develop the method and create training materials to propagate the method. It was successful, but was revised in 1987 to standardize and separate the method theory from the more technical aspects used to implement the method. Those aspects were bundled into the process modelling tool called "Software Development Workbench", that was later sold in 2000 to BWise, another Dutch company. This revised version of the method without the tool is commonly known as SDM2.
Main difference between SDM and SDM2
SDM2 was a revised version of SDM that attempted to solve a basic problem that occurred often in SDM projects; the delivered system failed to meet the customer requirements. Though any number of specific reasons for this could arise, the basic waterfall method used in SDM was a recipe for this problem due to the relatively large amount of time spent by development teams between the Definition Study and the Implementation phases. It was during the design phases that the project often became out of sync with customer requirements.
During the SDM functional design phase called BD (Basic Design), design aspects were documented (out of phase) in detail for the later technical design DD (Detailed Design). This created a gray zone of responsibility between the two phases: the functional crew responsible for the data flows and process flows in the BD were making decisions that the technical crew later needed to code, although their technical knowledge was not detailed enough to make those decisions. This led to problems in collaboration between project teams during both the BD and DD phases. Because of the waterfall method of Go/No-Go decisions at the end of each phase, the technical crew would have to make a formal change request in order to make corrections in the detailed sections of the Basic Design. Such changes were often confusing for the customer, because they originated from the project team rather than directly from the customer requirements, even after a change freeze was put in place. Usually the customer was only allowed to produce requirements up to and including the functional design in the BD phase. After that, the customer had to wait until acceptance testing in the Implementation phase.
In SDM2, the term "Basic Design" was replaced by the term "Global Design" to indicate that this document was continuously updated and subject to change during both the BD and DD phases. Thus the "Basic design" is both global and detailed at the end of the project. In the global design, the principles of functionality and construction, as well as their relations, are documented. This is how the idea of iterative development got started; a functional design is by nature influenced by the technology platform chosen for implementation, and some basic design decisions will need to be revisited when early assumptions prove later to be wrong or costly to implement. This became the forerunner of the Rapid Application Development method, which caused these two phases to become cyclical and work in tandem.
SDM2 only partially solved the problem of meeting customer requirements; modern software development methods go several steps further by insisting for example on incremental deliveries, or that the customer appoint key users of the delivered system to play a role in the project from start to finish.
The SDM method
SDM is a method based on phases. Before every phase, an agreement needs to be reached detailing the activities for that phase. These documents are known as milestone documents. Several uses for these documents exist:
Traceability — Through applying deadlines to milestone documents, clients can keep track of whether a project is on schedule
Consolidation — Once a milestone document is approved, it gains a certain status; the client cannot change any of the specifications later during development.
If necessary, the project can be aborted. This mostly happens during the start of development.
Phases
The method uses 7 phases which are successively executed, like the waterfall model. The phases are:
Information planning: Problem definition and initial plan
Definition study: Requirements analysis and revised plan
Basic Design: High level technical design and revised plan
Detailed Design: Building the system (and revised plan)
Realization: Testing and acceptance (and revised plan)
Implementation: Installation, data conversion, and cut-over to production
Operation and Support: Delivery to ICT support department
Upon completion of a phase, it is decided whether to go on to the next phase or not; the terms 'Go' and 'No-Go' are used for this. The next phase will not start until a 'Go' is given, while if there is a 'No-Go', the project either stays in the current phase to be improved or is canceled completely.
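The phase sequence and its Go/No-Go gates can be pictured with a minimal, illustrative Python sketch; the function and variable names below are invented for illustration and are not part of SDM itself.

```python
# Illustrative model of SDM's linear phase gates: each phase ends with a
# Go/No-Go decision before the next one may start.
PHASES = [
    "Information planning",
    "Definition study",
    "Basic Design",
    "Detailed Design",
    "Realization",
    "Implementation",
    "Operation and Support",
]

def run_project(decide_go):
    """Walk the phases in order; `decide_go(phase)` returns True for a 'Go'."""
    for phase in PHASES:
        print(f"Executing phase: {phase}")
        if not decide_go(phase):
            print(f"No-Go after '{phase}': stay in this phase or cancel the project.")
            return False
    print("All milestones approved; system handed over to operation and support.")
    return True

# Example: approve every milestone document.
run_project(lambda phase: True)
```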
Information planning
In this phase, the problems that have to be solved by the project are defined. The current and desired situations are analysed, and goals for the project are decided upon. In this phase, it is important to consider the needs of all parties, such as future users and their management. Often, their expectations clash, causing problems later during development or during use of the system.
Definition study
In this phase, a more in-depth study of the project is made. The organization is analysed to determine their needs and determine the impact of the system on the organization. The requirements for the system are discussed and decided upon. The feasibility of the project is determined. Aspects that can be considered to determine feasibility are:
Advisable — Are the resources (both time and knowledge) available to complete the project?
Significance — Does the current system need to be replaced?
Technique — Can the available equipment handle the requirements the system places on it?
Economics — Are the costs of developing the system lower than the profit made from using it?
Organization — Will the organization be able to use the new system?
Legal — Does the new system conflict with existing laws?
Basic Design
In this phase, the design for the product is made. After the definition study has determined what the system needs to do, the design determines how this will be done. This often results in two documents: the functional design (or user-interface design), explaining what each part of the system does, and the high-level technical design, explaining how each part of the system is going to work. This phase combines the functional and technical design and only gives a broad design for the whole system. Often, the architecture of the system is described here.
SDM2 split this step in two parts, one for the BD phase, and one for the DD phase, in order to create a Global Design document.
Detailed Design
In this phase, the design for the product is described technically, in the jargon needed by software developers (and later by the team responsible for supporting the system in the O&S phase). After the basic design has been signed off, the detailed technical design determines how it will be implemented in software. This often results in a library of source documentation: a functional design and a technical design per function, explaining how each part of the system is going to work and how the parts relate to each other.
In SDM2, this phase elaborates on the Global Design by creating more detailed designs, or further refining existing detailed designs, to the point where they can be used to build the system itself.
Realization
In this phase, the design is converted to a working system. The actual way this is done will depend on the system used. Where in older systems programmers often had to write all of the code, newer systems allow the programmers to convert the design into code directly, leaving less work to be done and a smaller chance of errors. At the same time, the system becomes more reliant on the design—if the design has been properly tested, the proper code will be generated, but if the design is not fully correct, the code will be incorrect without a programmer to look for such problems.
Implementation
The implementation, or testing, phase consists of two steps: a system test and an acceptance test.
During the system test the development team—or a separate testing team—tests the system. Most of this will be focused on the technical aspects: does the system work as it should, or are there bugs still present? Bugs that are found in this phase will be fixed. At the end of this phase, the program should work properly.
During the acceptance test, the end-users will test the system. They will test to see if the program does what they want it to do. They will not test every possible scenario, but they will test to see if the program does what they want and expect it to do and that it works in an easy way. Bugs that are found in this phase will be reported to the development team so that they can fix these bugs.
During this phase, the final version of the system is implemented by the organization: the hardware is set up, the software is installed, end-user documentation is created, end users are trained to use the program, and existing data is entered into the system.
Operation and Support
Once the system has been implemented, it is used within the organization. During its lifetime, it needs to be kept running and possibly enhanced.
References
External links
Software Development Methodology
Checklist SDM Activiteiten (In Dutch)
Record for SDM book on Association for Computing Machinery
Software development process |
2768858 | https://en.wikipedia.org/wiki/Majesty%3A%20The%20Fantasy%20Kingdom%20Sim | Majesty: The Fantasy Kingdom Sim | Majesty: The Fantasy Kingdom Sim is a real-time strategy video game developed by Cyberlore Studios, and published by Hasbro Interactive under the MicroProse brand name for Windows in March 2000. MacPlay released a Mac OS port in December 2000. Infogrames released the expansion pack Majesty: The Northern Expansion for Windows in March 2001, and Majesty Gold Edition, a compilation for Windows bundling Majesty and The Northern Expansion, in January 2002. Linux Game Publishing released a Linux port of Majesty Gold Edition in April 2003. Majesty Gold Edition was re-released by Paradox Interactive under the name Majesty Gold HD Edition in March 2012, adding support for higher resolutions and including two downloadable quests that were incompatible with the original release of Majesty: The Northern Expansion.
In Majesty, players assume the role of king in a fantasy realm called Ardania which features city sewers infested with giant rats, landscapes dotted with ancient evil castles, and soldiers helpless against anything bigger than a goblin. As Sovereign, the player must rely on hiring bands of wandering heroes in order to get anything done. The game has 19 single player scenarios but no overarching plotline. The Northern Expansion adds new unit abilities, buildings, monsters, and 12 new single player scenarios. Freestyle (sandbox) play and multiplayer are also available.
Gameplay
Henchmen are free non-hero characters that are nonetheless essential to maintaining the realm. Peasants construct and repair buildings. Tax collectors collect gold from guilds and houses to finance the realm. Guards provide defense against monsters. Caravans travel from trading posts to the marketplace, where they deliver gold based on the distance they traveled.
Each scenario (or quest) has a unique map. Even if the player chooses the same quest twice, it will have a map that, while retaining the general terrain of the region, is significantly different. The map is initially shrouded in blackness, but all activity in explored areas can be viewed no matter how far away from a building or character it is, with no fog of war.
In certain quest scenarios, the player also has the ability to interact with other kingdoms. This mainly includes the use of a kingdom's services by the heroes of a foreign faction, although in many cases, the player may choose to attack the foreign faction or will be automatically hostile toward them. In other, rarer instances, heroes may switch sides between kingdoms in the event that their guild has been destroyed and their native kingdom can no longer offer them hospitality.
Buildings
Base-building is comparable to other real-time strategy games of the period, but units are autonomous—a feature usually associated with construction and management simulation games—and possess attributes borrowed from role-playing video games. The Sovereign's actions are limited to constructing and enhancing buildings, using building abilities and spells, hiring heroes, and offering rewards.
The basic building is the palace, and its loss means the loss of the game. Guilds and temples can be used to summon and house heroes (typically four per building); almost all other buildings offer equipment or services (inns, royal gardens, etc.). Some guilds and temples may not co-exist, and some buildings require the presence of certain other buildings before they are available for construction.
The system of heroes in Majesty is similar to most other sim games. These heroes are not under the direct control of the player, but they can be influenced by reward flags to perform certain tasks, such as slaying a particularly troublesome monster or exploring an unknown area of the map. However, their cooperation is not guaranteed even then. Heroes have free will, though some classes are more inclined to certain actions than others. (For example, a paladin is more likely to attack a dangerous monster than a rogue.)
Each hero has different favored behaviors as well. For example, paladins often choose to raid lairs, while rogues will steal, and elves will perform at inns. Furthermore, rewards influence heroes differently. Rogues will be the first to make an attempt at the rewards, followed soon after by elves or dwarves.
The powers and abilities of the heroes also follow a rock-paper-scissors format. Some monsters are especially weak against ranged attacks, while strong against melee or magic. Other monsters are especially strong against melee and ranged attacks, and magic makes killing them much easier. Planning ahead to defend the kingdom against different types of monsters by exploiting their weaknesses is therefore important.
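The idea can be pictured with a small, purely hypothetical lookup table; the resistance categories and multipliers below are invented for illustration and are not Majesty's actual values.

```python
# Hypothetical sketch of a rock-paper-scissors damage table; values are illustrative only.
DAMAGE_MULTIPLIER = {
    # (attack type, monster resistance profile) -> damage multiplier
    ("ranged", "weak_vs_ranged"): 2.0,
    ("melee",  "weak_vs_ranged"): 0.5,
    ("magic",  "weak_vs_ranged"): 0.5,
    ("magic",  "weak_vs_magic"):  2.0,
    ("melee",  "weak_vs_magic"):  0.5,
    ("ranged", "weak_vs_magic"):  0.5,
}

def effective_damage(base: float, attack: str, profile: str) -> float:
    """Scale base damage by the matchup; unknown matchups are neutral (1.0)."""
    return base * DAMAGE_MULTIPLIER.get((attack, profile), 1.0)

print(effective_damage(10, "magic", "weak_vs_magic"))  # 20.0: exploits the weakness
print(effective_damage(10, "melee", "weak_vs_magic"))  # 5.0: poor matchup
```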
Individual heroes gain experience points and level up as they would if they were characters in a role-playing game. Other hero attributes borrowed from role-playing games include ability scores and inventories. Though all heroes in a class share the same in-game sprite and portrait, they each have individual names, unique stats, and varied levels.
Reception
Majesty was generally well received by the gaming press, with many reviews commenting positively on its unique combination of elements from different genres. The game's Linux port was also well received, with gamers giving it four stars and numerous positive comments on The Linux Game Tome, as well as positive comments at LinuxGames.
The game was reviewed in 2000 in Dragon #269 by Johnny L. Wilson in the "Silycon Sorcery" column. Wilson sums up the game: "Majesty offers a very different feeling than the average strategy or role-playing game in a fantasy world. It is similar to being a Dungeon Master or playing a simplified version of Birthright."
The editors of Computer Gaming World nominated Majesty as the best strategy game of 2000, although it lost to Sacrifice. However, the magazine presented Majesty with a special award "Pleasant Surprise of the Year", and the editors wrote that it "hooked more than one of us with a quick-paced, hands-off formula that defied our expectations and won our hearts."
Daniel Erickson reviewed the PC version of the game for Next Generation, rating it three stars out of five, and stated that "A great take on a classic formula. Only its lack of solid multiplay keeps Majesty out of the top ranks of RTS games."
Legacy
Majesty: The Northern Expansion
Majesty: The Northern Expansion is generally seen as a fine sequel to the critically acclaimed Majesty. It features new unit abilities, buildings, monsters, and twelve new single player scenarios (two of which are in a new "Master" level). Freestyle play is also available and includes new features including those present in the single player quests.
Majesty Gold HD Edition
On March 21, 2012, Paradox Interactive (who had created Majesty 2) released Majesty Gold HD Edition. This version is identical to the standard Gold Edition containing both Majesty and Majesty: The Northern Expansion, but includes support for larger resolutions and native support for Windows 7. It also includes two downloadable quests that were compatible with the original Majesty, but not with the original release of The Northern Expansion.
Sequel
Cyberlore Studios planned a sequel, Majesty Legends, but it was never officially released. The developer cited the lack of a publisher as the reason. In July 2007, Paradox Interactive acquired the intellectual property for Majesty
and released a sequel, Majesty 2: The Fantasy Kingdom Sim, on September 18, 2009.
Majesty Mobile
The mobile version of Majesty: The Fantasy Kingdom Sim was developed and published by HeroCraft and released on January 20, 2011. The game is designed to run on BlackBerry PlayBook, iOS, Android, Bada and high-end Nokia Symbian devices. An iOS version is also available for iPhone, iPod touch, and iPad. The game is also available on Microsoft's Windows Phone platform as of March 2012.
Notes
References
External links
The official Majesty: The Fantasy Kingdom Sim home page
The Linux version of Majesty: The Fantasy Kingdom Sim
2000 video games
Fantasy video games
IOS games
Linux games
Classic Mac OS games
MicroProse games
Real-time strategy video games
Symbian games
Video games scored by Kevin Manthei
Video games developed in the United States
Video games developed in Russia
Video games with expansion packs
Windows games
Windows Phone games
Android (operating system) games |
55880994 | https://en.wikipedia.org/wiki/John%20Bates%20%28technology%20executive%29 | John Bates (technology executive) | John Bates is a British computer scientist, and businessman. Since graduating with a PhD in computer science, he has started several technology companies in the UK.
Education
John Bates received his PhD in mobile and distributed computing (computer science) at the University of Cambridge Computer Laboratory in 1993. His PhD advisor was Jean Bacon.
Career
Bates has been CTO of IONA Technologies Limited since December 2009 and its Executive Vice President since 2011. He serves as CEO of PLAT.ONE Inc. He has been the CEO of TestPlant Limited since February 2017. He has been CTO of Terracotta, Inc. since October 2013. He was a member of the Technology Council at C5 Capital Ltd.
Bates was co-founder, President and CTO of Apama, the pioneering streaming analytics company. He was the Chief Marketing Officer at Software AG (alternate name Software Aktiengesellschaft) from 2014 to 2015. He was CTO of Intelligent Business Operations & Big Data at Software AG from October 2013. He served as Head of Industry Solutions at Software AG. He was an EVP of Progress Software Corp from 3 May 2011 and served as its Divisional General Manager. He was CTO of Progress Software Corp. from 2009 to 2013 and its Decision Analytics Business Line Leader from 2012 to 2013.
Earlier on, he was a lecturer and Fellow of St Catharine's College, Cambridge, until 2000. At Cambridge, he led several research projects, often in collaboration with industry, and designed and taught courses covering operating systems, distributed systems, software engineering and mobile computing.
Bates is an entrepreneur in the software industry, focusing on areas such as event-driven architectures, smart environments and business activity monitoring.
In 2011 Wall Street and Technology magazine named him as one of the "10 innovators of the decade". In 2012, 2013, 2014 and 2015, Institutional Investor named him in its "Tech 50" of disruptive technologists.
Bates has published a book entitled Thingalytics: Smart Big Data for the Internet of Things.
References
Alumni of St Catharine's College, Cambridge
British computer scientists
English engineers
Living people
Members of the University of Cambridge Computer Laboratory
Year of birth missing (living people) |
41792965 | https://en.wikipedia.org/wiki/Brandlive | Brandlive | Brandlive is a Software as a service (SaaS) company based in Portland, Oregon, USA.
Products
The Stream platform is used for virtual events with a high volume of guests and is designed as a creative canvas for the video content being streamed. The software allows people to create branded virtual and hybrid meetings and event pages, and add features like chat, downloads, product links, and social integrations.
The Showrooms platform is software that brands use to launch products and to run training and other content-driven experiences in a private environment. It includes features such as a “product catalog”, an agenda customized to each event participant, and moderated Q&A and chat.
Greenroom is a cloud-based video production tool that streams to any of Brandlive's Audience Platforms (Events, Showrooms, or Allhands) or any other streaming destination. Features include media uploads to share slides or pre-recorded video, producer “in ear” comms, drag and drop segments, and lower third titles for presenters.
Allhands is for company-wide, town hall-style corporate meetings at organizations of all sizes. The virtual “allhands” destination is designed with company branding and features including between-meeting Q&A, an agenda, chat, video question submissions, and upvoting of employee questions.
History
2010 — 2019
Brandlive was founded in 2010 by Ben McKinley and Fritz Brumder as a product of Cascade Web Development, as a way to incorporate live video into online shopping. The idea for Brandlive came to Brumder when he was using Skype to see a friend's new retail store in New York City, and made a purchase as a result of the demonstration via live video.
In 2012, Brandlive was established as its own separate entity.
In May 2013, the company received $1.6 million in Series A funding from Oregon Angel Fund, Angel Oregon, Portland Seed Fund, among others.
In September 2013, Brandlive was announced as a finalist for the Oregon Entrepreneurs Network Tom Holce Entrepreneurship Awards.
In 2016, Stephen Marsh's Archivist Capital led Brandlive's $3.2 million funding round.
Brandlive closed 2017 with revenue of $1.9M and 2018 with $3.1M, and was featured as one of the ten fastest-growing companies in Portland, Oregon.
In July 2018, CEO Fritz Brumder moved into the role of COO, and new CEO Jeff Allen took the reins.
2020 — Present
At the start of 2020, Archivist Capital finalized a transaction to buy most of the business and Sam Kolbert-Hyle, a partner at Archivist, stepped in as Brandlive's new CEO. After analyzing the usage of the platform, Kolbert-Hyle invested in a new direction of product development: to reinvent large, internal corporate allhands and townhall meetings.
When quarantine set in towards the end of March 2020, customers started reaching out to Brandlive because they were having trouble playing premium video content through popular web meeting platforms. Many started asking for help producing elevated video meetings that looked more like TV for gatherings that were historically held in person. In response, Brandlive launched Greenroom in May 2020.
Starting in May 2020, Brandlive produced 230 events for the Biden for President campaign team, securing more than $30 million in donations. The largest fundraising event featured President Barack Obama and raised over $11 million with more than 470,000 views. Brandlive also produced celebrity reunion fundraisers including the casts of The Princess Bride, Parks and Rec, The West Wing, and more.
In February 2021, the Showrooms platform was mentioned in a Vogue Business article about digital wholesale technology.
In March 2021, Fast Company named Brandlive as 2021's #1 Most Innovative Company in the Live Events category. Fast Company also named Brandlive on their “50 Most Innovative Companies of the Year” at #44 out of 400 global companies.
Also in March 2021, Forbes ran an article about the Allhands platform, reporting, “Features like interactive quizzes, surveys and upvoting allow employees to interact in real time. And for those who aren’t able to tune in live, Allhands provides replays and highlight reels.”
As of May 2021, Brandlive had 200 employees.
References
External links
Software companies based in Oregon
Companies based in Portland, Oregon
2010 establishments in Oregon
As a service
Software companies of the United States
Software companies established in 2010
American companies established in 2010 |
194763 | https://en.wikipedia.org/wiki/Kaypro | Kaypro | Kaypro Corporation was an American home and personal computer manufacturer based out of San Diego in the 1980s. The company was founded by Non-Linear Systems (NLS) to compete with the popular Osborne 1 portable microcomputer. Kaypro produced a line of rugged, "luggable" CP/M-based computers sold with an extensive software bundle which supplanted its competitors and quickly became one of the top-selling personal computer lines of the early 1980s.
Kaypro was exceptionally loyal to its original customer base but slow to adapt to the changing computer market and the advent of IBM PC compatible technology. It faded from the mainstream before the end of the decade and was eventually forced into bankruptcy in 1992.
History
Kaypro began as Non-Linear Systems, a maker of electronic test equipment, founded in 1952 by Andrew Kay, the inventor of the digital voltmeter.
In the 1970s, NLS was an early adopter of microprocessor technology, which enhanced the flexibility of products such as production-line test sets. In 1981, Non-Linear Systems began designing a personal computer, called KayComp, that would compete with the popular Osborne 1 transportable microcomputer. In 1982, Non-Linear Systems organized a daughter company named the Kaypro Corporation.
Despite being the first model to be released commercially, the original system was branded as the Kaypro II (at a time when one of the most popular microcomputers was the Apple II). The Kaypro II was designed to be portable like the Osborne, contained in a single enclosure with a handle for carrying. Set in an aluminum case, with a keyboard that snapped onto the front, covering the 9" CRT display and drives, it weighed and was equipped with a Zilog Z80 microprocessor, 64 kilobytes of RAM, and two 5¼-inch double-density single-sided floppy disk drives. It ran Digital Research, Inc.'s CP/M operating system, the industry standard for 8-bit computers with 8080 or Z80 CPUs, and sold for about US$1,795 ().
The company advertised the Kaypro II as "the $1595 computer that sells for $1595". Although some of the press mocked its design—one magazine described Kaypro as "producing computers packaged in tin cans"—others raved about its value, noting that the included software bundle had a retail value over $1000 by itself, and by mid-1983 the company was selling more than 10,000 units a month, briefly making it the fifth-largest computer maker in the world.
The Kaypro II was part of a new generation of consumer-friendly personal computers that were designed to appeal to novice users who wanted to perform basic productivity on a machine that was relatively easy to set up and use. It managed to correct most of the Osborne 1's deficiencies: the screen was larger and showed more characters at once, the floppy drives stored over twice as much data, the case was more attractive-looking, and it was also much better-built and more reliable.
Computers such as the Kaypro II were widely referred to as "appliance" or "turnkey" machines; they offered little in the way of expandability or features that would interest hackers or electronics hobbyists and were mainly characterized by their affordable price and a collection of bundled software. While it was easy to obtain and use new software with the Kaypro II—there were thousands of application programs available for CP/M, and every Kaypro 8-bit computer had a full 64 KB of RAM, enough to run virtually any CP/M program—the hardware expandability of this computer was nearly nonexistent. The Kaypro II had no expansion slots or system bus connector, no spare ROM socket, no peripheral bus, only two I/O ports, and an ASCII text-only green-on-black video display, of 80 x 24 characters, that could only be shown on the internal 9" CRT monitor (despite the video being scanned at NTSC TV-compatible rates).
In contrast, one feature that was favorable to electronics hobbyists was that all the chips on the Kaypro II mainboard were installed in sockets, not soldered to the board, making it easy to repair the machines or even to splice custom circuits into the stock logic (temporarily or permanently). Also, while Kaypro machines were generally not upgradeable without factory-unauthorized custom modification, some Kaypro computers that came with single-sided floppy disk drives could be upgraded to double-sided drives, and some that came with only one floppy drive could have a second drive added. (The Kaypro II itself may be upgradeable or not to double-sided drives depending on which of two possible mainboard types is installed in the machine.)
Despite their limitations, the boxy units were so popular that they spawned a network of hobbyist user groups across the United States that provided local support for Kaypro products; the company worked with the user groups and would have a salesman drop by if in the area.
Kaypro's success contributed to the eventual failure of the Osborne Computer Corporation and Morrow Designs. A much more rugged-seeming, "industrialized" design than that of competitors such as the Osborne made the Kaypro popular for commercial/industrial applications. Its RS-232 port was widely used by service technicians for on-site equipment configuration, control and diagnostics. The relatively high quality of mechanical fabrication seen in the aluminum-cased Kaypro 8-bit computers was a natural outgrowth of NLS's prior business building professional and industrial electronic test instruments.
The version of CP/M included with the Kaypro could also read the Xerox 810's single-sided, single-density 86k floppy format. The Kaypro 8-bit computers used the popular Western Digital FD1793 floppy disk controller; any disk format that the FD1793 could read and/or write (at 250 kbit/s), the Kaypro II, 4, 10, and similar models are capable of reading and/or writing. Theoretically, any soft-sector MFM or FM floppy format that is within the limits of the '1793 could be read or written if the user wrote his own utility program.
Kaypro published and subsidized ProFiles: The Magazine for Kaypro Users, a monthly, 72-page, four-color magazine that went beyond coverage of Kaypro's products to include substantive information on CP/M and MS-DOS; frequent contributors included Ted Chiang, David Gerrold, Robert J. Sawyer, and Ted Silveira. Keeping its namesake, the publication profiled Kaypro founder Andrew Kay and software engineer Stephen Buccaci.
Another popular magazine that covered Kaypro computers was Micro Cornucopia, published at Bend, Oregon.
Arthur C. Clarke used a Kaypro II to write and collaboratively edit (via modem from Sri Lanka) his 1982 novel 2010: Odyssey Two and the later film adaptation. A book, The Odyssey File - The Making of 2010, was later released about the collaboration.
Following the success of the Kaypro II, Kaypro moved on to produce a long line of similar computers into the mid-1980s. Exceedingly loyal to its original core group of customers, Kaypro continued using the CP/M operating system long after it had been abandoned by its competitors.
In late 1984, Kaypro introduced its first IBM PC compatible, the Kaypro 16 transportable. While admitting that "it's what our dealers asked for", the company stated that it would continue to produce its older computers. This was followed by other PC compatibles: the Kaypro PC, Kaypro 286i (the first 286 IBM PC AT compatible), the Kaypro 386, and the Kaypro 2000 (a rugged aluminum-body battery-powered laptop with a detachable keyboard). The slow start into the IBM clone market would have serious ramifications for the company.
After several turbulent years, with sales dwindling, Kaypro filed for Chapter 11 bankruptcy in March 1990. Despite restructuring, the company was unable to recover and filed for Chapter 7 bankruptcy in June 1992. In 1995, its remaining assets were sold for $2.7 million.
The Kaypro name briefly re-emerged as an online vendor of Microsoft Windows PCs in 1999, but was discontinued in 2001 by its parent company Premio Inc. because of sluggish sales.
Kaypro computers
Hardware
The Kaypro II has a 2.5 MHz Zilog Z80 microprocessor; 64 kB of RAM; two single-sided 191-kilobyte 5¼-inch floppy disk drives (named A: and B:); and an 80-column, green monochrome, 9" CRT that was praised for its size and clarity (the Osborne 1 had a 5" display).
Early in the Kaypro's life, there was a legal dispute with the owner of the Big Board computer, who charged that the Kaypro II main circuit board was an unlicensed copy or clone.
The outer case is constructed of painted aluminum. The computer features a large detachable keyboard unit that covers the screen and disk drives when stowed. The computer could fit into an airline overhead rack. This and other Kaypro computers (except for the Kaypro 2000) run off regular AC mains power and are not equipped with a battery.
The Kaypro IV and later the Kaypro 4 have two double-sided disks. The Kaypro 4 was released in 1984, and was usually referred to as Kaypro 4 '84, as opposed to the Kaypro IV, released one year earlier and referred to as Kaypro IV '83. The Kaypro IV uses different screen addresses than the Kaypro II, meaning software has to be specific to the model.
The Kaypro 10 followed the Kaypro II, and is much like the Kaypro II and Kaypro 4, with the addition of a 10 megabyte hard drive (dually partitioned A: and B:) and replacing one of the two floppy drives (the remaining drive being addressed as C:). The Kaypro 10 also eliminated the complicated procedures to turn the computer on and off often associated with hard disk technology.
Kaypro later replaced their CP/M machines with the MS-DOS-based Kaypro 16, Kaypro PC and others, as the IBM PC and its clones gained popularity. Kaypro was late to the market, however, and never gained the kind of prominence in the MS-DOS arena that it had enjoyed with CP/M. Instead, Kaypro watched as a new company—Compaq—grabbed its market with the Compaq Portable, an all-in-one portable computer that was similar to Kaypro's own CP/M portables with the exception of running MS-DOS with near 100% IBM PC compatibility. The Compaq was larger and less durable—whereas the Kaypro had a heavy-gauge aluminum case, the Compaq case was plastic, with a thin-gauge aluminum inner shield to reduce radio frequency interference—but rapidly took over the portable PC market segment.
The 1985 introductions of the Kaypro 286i, the first IBM PC AT clone, and the Kaypro 2000, one of the first laptop computers (an MS-DOS system with monochrome LCD and durable aluminum case), did little to change Kaypro's fortunes. Kaypro's failure in the MS-DOS market and other corporate issues helped lead to the company's eventual downfall.
Software
CP/M was the standard operating system for the first generation of Kaypros. The first application software that came with the Kaypro II included a highly unpopular word processor called Select that was quickly dropped in favor of a proto office suite from Perfect Software which included Perfect Writer, Perfect Calc, Perfect Filer, and Perfect Speller, as well as Kaypro's own S-BASIC compiler (which produced executable .com files). Perfect Filer featured non-relational, flat-file databases suitable for merging a contact list with form letters created in Perfect Writer.
Perfect Writer was initially a rebranded version of the MINCE and Scribble software packages from Mark of the Unicorn, which are CP/M implementations of Emacs and Scribe, ported from their original minicomputer-based versions using BDS C. Later, MBasic (a variant of Microsoft BASIC) and The Word Plus spellchecker were added to the model II suite of software. Word Plus included a set of utilities that could help solve crossword puzzles or anagrams, insert soft hyphens, alphabetize word lists, and compute word frequencies. Another utility program called Uniform allowed the Kaypro to read disks formatted by Osborne, Xerox, or TRS-80 computers.
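As a rough illustration of the word-frequency utility mentioned above, the following sketch recreates the idea in modern Python rather than the original CP/M tool; the sample text is invented.

```python
# Illustrative only: count word frequencies, the kind of task Word Plus's utilities performed.
from collections import Counter
import re

text = "the quick brown fox jumps over the lazy dog the end"
frequencies = Counter(re.findall(r"[a-z']+", text.lower()))
for word, count in frequencies.most_common(3):
    print(word, count)   # "the 3" appears first
```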
The initial bundled applications were soon replaced by the well-known titles WordStar, a word processor, with MailMerge, originally a third-party accessory, for personalised mass mailings (form letters), the SuperCalc spreadsheet, two versions of the Microsoft BASIC interpreter, Kaypro's S-BASIC, a bytecode-compiled BASIC called C-Basic, and the dBase II relational database system.
Data could be moved between these programs relatively easily by using comma delimited format files (now more commonly known as CSV files), which enhanced the utility of the package. The manuals assumed no computer background, the programs were straightforward to use, and thus it was possible to find the CEO of a small company developing the applications needed in-house.
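A minimal sketch of that data exchange, assuming a hypothetical six-field contact record of the kind dBase II could export and MailMerge could consume; the field layout and names are invented for illustration.

```python
# Illustrative comma-delimited (CSV) record passed between bundled applications.
import csv
import io

sample = 'John,Smith,"123 Main St.",Rockford,IL,61101\n'  # hypothetical contact record

for first, last, street, city, state, zip_code in csv.reader(io.StringIO(sample)):
    # A form letter would merge these fields into its template text.
    print(f"Dear {first} {last} of {city}, {state}:")
```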
The Kaypro II and later models also came with some games, including versions of old character-based games such as Star Trek; a few were arcade games re-imagined in ASCII, including CatChum (a Pac-Man-like game), Aliens (a Space Invaders-like game) and Ladder (a Donkey Kong-like game).
If bought separately, this software would have cost more than the entire hardware and software package together. The Kaypro II was a very usable and (at the time) powerful computer for home or office, even though the painted metal case made it look more like a rugged laboratory instrument than an office machine. It enjoyed a reputation for durability.
Later Kaypro CP/M models came with even more software. In 1984, BYTE magazine observed "Kaypro apparently has tremendous buying and bargaining power," noting the Kaypro 10 came with both WordStar and Perfect Writer, plus "two spelling checkers, two spreadsheets, two communications programs and three versions of BASIC".
Later MS-DOS Kaypro computers offered a similar software bundle.
Reception
InfoWorld in 1982 described Kaypro II as "a rugged, functional and practical computer system marketed at a reasonable price." The reviewer called the hardware "first-rate," writing that he had used the computer indoors and outdoors in several countries without fault, and praising the keyboard and screen. Deficiencies included the heavy weight and mediocre documentation.
Jerry Pournelle wrote in BYTE in 1983 that he was able to use a Kaypro II without the documentation. Although he preferred the much more expensive Otrona Attaché, Pournelle called the Kaypro's hardware "impressive" and "rugged," approving of the keyboard layout and "certainly the largest screen you'll ever get in a portable machine." A later review by the magazine described the computer as "best value," citing the rugged hardware design, sharp display, keyboard, documentation, and the extensive bundled software. In 1984 Pournelle stated that "For those without much money, there's no real choice ... you need a Kaypro, which has become both the VW and Chevrolet of the micro industry".
BYTE stated in 1984 that while the Kaypro 10 was "not a technologically innovative machine ... the equipment and power delivered for the price are outstanding", noting that the $2,795 computer "costs less than many stand-alone hard-disk drives". It approved of the "beautiful" monitor as an improvement from the Kaypro II's, and the extensive menus for running software on the hard drive without using the command line. The magazine criticized the "unacceptable" user's guide, and predicted that the large software bundle would be "stupefying" to novice users, but concluded that the computer was an "exceptional value for the money. It should be considered by anyone interested in hard-disk capacity or performance at an excellent price".
Creative Computing in December 1984 chose the Kaypro 2 as the best transportable computer under $2500, praising the "incredible array of software" included for "an astounding $1295" price.
Kaypro by model and year
Kaypro's nomenclature was odd, with the numerical designations for its machines having more to do with the capacity of the drives than the order in which they were produced. Kaypro also released several different models with the same names, perhaps hoping to capitalize on the name recognition of its older machines. As a result, identifying exactly which model a Kaypro is often requires an inspection of its hardware configuration.
All of the computers listed below are of the portable type unless otherwise noted.
1982
Kaycomp I: The first Kaypro was a demonstrator model shown mainly to prospective dealers. It had the same case as future models, but was painted green with two single sided floppy drives that were mounted vertically on opposite sides of the monitor like the Osborne I, its intended competition. A computer virtually identical to the later Kaypro II but labeled "Kaycomp" (not "Kaycomp I") on the side was sold to the public in limited numbers. This version had two vertically mounted drives on the right and a Keytronic keyboard with all-black keys rather than the blue numeric keypad.
Kaypro II: The first commercially released Kaypro was an immediate success, dominating its competition, the Osborne I microcomputer. The Kaypro II had a 9-inch internal monitor instead of the Osborne's tiny 5 inch display, and single sided floppy drives. A redesigned version of the Kaypro II was released in 1984 that allowed block style graphics, and had half-height drives. This version of the Kaypro II had a version of Space Invaders along with the typical ASCII games.
1983
Kaypro IV: An evolution of the Kaypro II, the Kaypro IV had two DS/DD drives (390 KB) and came with Wordstar in addition to the Perfect Suite of software.
Kaypro 10: The Kaypro 10 was one of the earliest computers to come standard with a hard drive. It came with a 10 megabyte internal hard drive and a single DS/DD floppy drive.
1984
Kaypro 4: The Kaypro 4 was virtually identical to the IV, but featured half-height drives instead of full height drives, a 4 MHz clock speed and had basic graphics capabilities. It also had an internal 300-baud modem.
Kaypro 2X: The Kaypro 2X was similar to the Kaypro 4, but it lacked the built-in 300 baud modem that was available in the Kaypro 4. Kaypro 2X's were often sold in a bundle with the Wordstar word processing software suite, spreadsheet and database software. The impact printer that was also included in the bundle was labeled as the "Kaypro Printer," but was actually a re-branded Juki 6100 daisywheel printer.
Kaypro Robie: The Kaypro Robie was the only CP/M based Kaypro to be non-portable. Designed as a desktop computer, it had the same motherboard as the Kaypro 4. It was also equipped with two 2.6 MB high density floppy drives and a 300 Baud modem. The floppy drives were notorious for destroying disks as they literally scraped the media off of the disk substrate. The Robie was jet black, with the drives mounted above the screen, and the front panel angled upward. The Robie did not sell well, but it did make periodic cameo appearances on the ABC television series Moonlighting, as the desktop computer used by Bruce Willis' character David Addison. Due to its black color, the fact that it sat upright and looked like a helmet, and its handle mounted on the top, it was nicknamed "Darth Vader's lunchbox".
1985
Kaypro "New" 2: A scaled-down Kaypro 2X for the budget buyer, came with minimal software, and did not feature the internal modem.
Kaypro 4+88: A dual system computer, the 4+88 was equipped with both an 8088 processor and a Z80, and was capable of running both the MS-DOS and CP/M operating systems. It came with 256 KB of RAM for the MS-DOS operating system that could double as a RAM disk for CP/M.
Kaypro 16: Very similar in appearance to the Kaypro 10, the Kaypro 16's main difference was that it had an 8088 processor and 256 KB of RAM and ran on the MS-DOS operating system instead of CP/M. The Kaypro 16/2e was a bundle for a computer college; it came with DOS 3.3, two 5.25-inch 360 KB floppy drives, 768 KB of RAM, and bundled software to complete the college course.
Kaypro 2000: Kaypro's first and only laptop, it was an MS-DOS machine that ran on heavy lead-acid batteries—the same battery technology used in automobile batteries. Similar in basic appearance to a modern laptop, it featured a detachable keyboard, rugged brushed aluminum casing and a pop-up 3.5 inch floppy drive. In what seems to have been a recurring comparison, it has been called "Darth Vader's laptop."
Kaypro PC: Late to the PC market, the Kaypro PC was intended as a competitor to the IBM PC-XT desktop machine. Running at a faster clock speed than IBM's machine, it was available with a larger hard drive than that offered by IBM and an extensive software package. It featured the CPU on a daughterboard on a backplane, which, like Zenith Data Systems' Z-DOS machines, promised upgradability.
Kaypro 286i: A 6 MHz 286 desktop, it was the first IBM PC/AT compatible, with dual 1.2 MB floppy drives standard and an extensive software package but no MS-DOS 3.0, which had not yet been released, requiring the user to purchase PC DOS 3.0 from IBM.
1986
Kaypro 1: The Kaypro 1 was the last CP/M model Kaypro introduced. In most ways, it was simply a Kaypro 2X with a smaller software package. It is distinctive from earlier Kaypro models because of its vertically oriented disk drives (although some Kaypro 10 models also had them).
1987
Kaypro 386: A 20 MHz 386 desktop, with an extensive software package. It featured a CPU on a circuit board that fit onto a backplane, just like the other expansion cards.
References
External links
Kaypro II: pictures and details on oldcomputers.net
Kaypro II on Obsolete Computer Museum
Kaypro IV & 4
Kaypro Technical Manual for all models, December, 1984 (5 MB PDF)
OLD-COMPUTERS.COM all Kaypro models detailed
8-bit computers
American companies established in 1981
American companies disestablished in 1992
Companies based in San Diego
Companies that filed for Chapter 11 bankruptcy in 1990
Companies that have filed for Chapter 7 bankruptcy
Computer companies established in 1981
Computer companies disestablished in 1992
Defunct computer companies of the United States
Defunct computer hardware companies
Early microcomputers
Personal computers
Portable computers |
3447877 | https://en.wikipedia.org/wiki/Roper%20Technologies | Roper Technologies | Roper Technologies, Inc. (formerly Roper Industries, Inc.) is an American diversified industrial company that produces engineered products for global niche markets. The company is headquartered in Sarasota, Florida.
Roper provides a wide range of products and services to customers in over 100 countries. The company has four main business lines: Industrial Technology, Radio Frequency (RF) Technology, Scientific and Industrial Imaging, and Energy Systems and Controls. Roper joined the Russell 1000 index in 2004, and has annual revenues of more than US$1.23 billion, as of 2017.
History
George D. Roper founded the company in 1890, primarily as a manufacturer of home appliances, pumps, and other industrial products. Roper initiated a corporate acquisition program, supported by an initial public offering, in 1992.
In 1906, George D. Roper acquired the Trahern Pump Co.
In 1957, George D. Roper sold its stove business to the Florence Stove Co. of Kankakee, Illinois, which continued to operate it under the George D. Roper name. The remaining product lines stayed with the original company, which changed its name to Roper Pump Company.
In 1966, George D. Roper entered the outdoor lawn equipment market by acquiring David Bradley Manufacturing Works from Sears.
In 1981, Roper Pump Co. was reorganized as Roper Industries.
In 1988, a battle began as General Electric and Whirlpool fought to acquire Roper for its appliances and yard equipment. In 1989, Whirlpool Corporation acquired the Roper brand.
In 1988, Electrolux purchased Roper's lawn and garden products division. Roper was added to the Husqvarna and Poulan/Weed Eater divisions, forming the new brand American Yard Products.
In 2001, Brian Jellison, a former executive of General Electric and Ingersoll-Rand, joined Roper as chief executive officer. The previous holding company business strategy was replaced with an operating company model. Since 2001, Roper has completed acquisitions accounting for over half its revenues, establishing the company in global growth markets such as radio frequency identification (RFID) and water. Jellison died on November 2, 2018, after a brief illness.
Roper acquired Sunquest Information Systems, a maker of diagnostic and laboratory software for $1.42 billion in cash in 2012.
On April 24, 2015, Roper Industries, Inc. changed its corporate name to Roper Technologies, Inc.
In August 2020, Roper agreed to acquire Vertafore, an insurance software maker, in a cash transaction valued at approximately $5.35 billion.
Subsidiaries
DAP Technologies
DAP Technologies manufactures rugged mobile computers, including portable data terminals, and tablet computers. DAP's computers are designed for harsh environments, so they can be used in logistics operations, transportation, warehouses, field service, utilities, law enforcement, and the energy industry, as well as other applications.
DAP distributes its computers in more than 60 countries, and operates subsidiaries in Paris, France, Abingdon, England, and Tokyo, Japan. The company employs 150 people.
IPA, LLC
IPA, LLC provides solutions for the management and automation of healthcare linen and specialty uniforms. The company designs software and hardware that allow customers to monitor and manage distribution processes efficiently, with the aims of increasing staff satisfaction, reducing infection risks, and reducing costs. Its solutions are made in the U.S.A. and are installed in more than 1,000 hospitals worldwide.
Media Cybernetics
Media Cybernetics is a company in Maryland, United States, founded in 1981, that produces image processing software used worldwide for industrial, scientific, medical, and biotechnology applications. In the early 1980s, they developed and published Dr. Halo, a popular raster graphics editor for the IBM PC. In recent years, Media Cybernetics acquired the assets of Definitive Imaging, QED Imaging and AutoQuant Imaging.
Princeton Instruments
Princeton Instruments manufactures scientific imaging equipment, optical coatings, optical cameras, CCD and EMCCD cameras, spectroscopy instrumentation, and electronic sub-assemblies.
Princeton Instruments is divided into these application-specific groups:
Imaging Group provides cameras for applications that include Astronomy, BEC, Combustion, PIV, Single Molecule, Surface and Material Analysis, PSP, and Nanotechnology.
Spectroscopy Group provides spectrometers, cameras and systems for Raman, LIBS, NIR, Absorption, Fluorescence, and Luminescence.
X-Ray Group manufactures cameras for EUV, Lithography, XRS, Plasma, Diffraction, Microscopy, and Tomography applications.
Industrial Group provides MEGAPLUS cameras for Semiconductor, Web Inspection, Document and Film Capture, Digital Radiography, and Ophthalmology.
Acton Optics & Coatings provides optical components for Medical, Semiconductor, Material Processing, Analytical Instrumentation, Aerospace, and Defense applications.
The company is based in Trenton, New Jersey, and Acton, Massachusetts.
Roper Pump Co.
Roper Pump Co. is part of the founding company from which Roper Industries began. George D. Roper began the Roper legacy by entering into part ownership in the Van Wie Gas Stove Company of Cleveland, Ohio. The Van Wie plant moved to Rockford, Illinois, and passed into trusteeship in the early 1890s. George D. Roper became sole owner of the Van Wie Gas Stove Company after the company's debts were paid off on September 1, 1894. A fire completely destroyed the facilities ten days later. The factory was rebuilt and named the Eclipse Gas Stove Company, and later expanded to include the Trahern Pump Company in 1906. The Trahern Pump Company, founded in 1857, would evolve into Roper Pump Co., currently located in Commerce, Georgia, United States.
Roper Pump Co. mainly manufactures downhole progressing cavity pumps used in oil drilling, flow dividers used in power generation, and gear pumps used on tanker rail cars and tanker trucks. Roper Pump Co. is a US producer of small diameter progressing cavity pumps, flow dividers, and gear pumps.
TransCore and DAT Solutions
TransCore is a subsidiary of Roper Industries, based in Nashville, Tennessee, that specializes in Intelligent Transportation Systems. TransCore was acquired by Roper Industries in 2004.
Roper has divided TransCore into two units: TransCore; and DAT Solutions. TransCore manages RFID transponders and readers. In this family, their original product line reflects their acquisition of Amtech Systems in 2000 (also a supplier of production and automation systems for the manufacture of solar cells). These transponders come in a plastic case, for either windshield or external mounting. Their newer line is the eGo windshield sticker transponder system, which is a low-cost, batteryless system designed for one-time attachment to a windshield. These transponders are found in electronic toll collection, fleet tracking, payment, parking and access control applications.
In addition, TransCore has a substantial services business, designing, building and operating ITS facilities. This may be a complete electronic toll collection system, with design, build, transponders, equipment, customer service and violation enforcement, or any part of it. For example, TransCore operates the ETC systems for several E-ZPass members, even though E-ZPass has an exclusive contract with another manufacturer for transponders.
DAT Solutions
DAT Solutions offers freight load boards products for owner-operators, carriers, brokers, shippers, and 3PLs. The underlying framework is the DAT Network, the first electronic freight posting service, acquired in 2001 from the Jubitz Corporation. DAT Solutions began as Dial-A-Truck, a load board operated at the Jubitz Truck Stop in Portland, Oregon. It evolved to become the original and largest internet load board, a matchmaking service for 100 million freight loads and trucks per year. DAT Solutions offers several load board products including: DAT Power, DAT Express and TruckersEdge.
DAT Solutions also offers satellite-based Tracking & Communications products for truck tracking, trailer tracking, and temperature and cargo monitoring for commercial and private fleets. These products use GPS technology to transmit data from the truck or trailer to the dispatch center.
The CBORD Group, Inc.
Roper acquired The CBORD Group, Inc. on February 21, 2008, followed later in the year by acquisition of its K-12 sister company Horizon Software International, LLC. CBORD was founded in Ithaca, NY in 1975. CBORD products are used for food services, campus ID privilege control, security, residential life, and on/off-campus commerce for K-12 and higher education, acute and long-term healthcare, corporate facilities, and other campus-based institutions. The products include CS Gold, CS Access, Odyssey, Foodservice Suite, Webfood, Nutrition Service Suite, Room Service Choice, NetMenu, NetNutrition, C-Store, EventMaster, GET, UGryd, CBORD Patient, OneSource, Horizon School Technology, MyPaymentsPlus, MilleniumPlus, and GeriMenu.
CBORD has offices in: Ithaca, NY; Duluth, GA; Canton, OH; Waco, TX; and Sydney, Australia.
CBORD made headlines in 2019 with its campus card integration with Apple Wallet and its partnership with the nonprofit Swipe Out Hunger, which enables college students to donate meals to other students in need.
Compressor Controls Corporation (CCC)
Compressor Controls Corporation (CCC) has specialized in turbomachinery controls for over 35 years. CCC serves a broad range of industries, including oil, gas, chemical, petrochemical, refining, LNG, pipelines, steel mills, pharmaceuticals, machine building, and power generation. CCC supplies controls for new turbomachinery and retrofits existing equipment.
iTradeNetwork
A provider of supply chain management and intelligence solutions to the food industry with offices both in the US and UK.
Neptune Technology Group, Inc
Manufacturer of water meters, meter-reading equipment, software, and related equipment, based in Tallassee, Alabama.
Data Innovations
Headquartered in South Burlington, Vermont, DI provides healthcare software.
Verathon, Inc.
Verathon is a global medical device company and a subsidiary of Roper Technologies. Its two principal areas of focus, in which it is a market leader, are airway management and bladder volume measurement. Verathon is headquartered in Bothell, Washington, and has international subsidiaries in Canada, Europe and Asia.
ConstructConnect
ConstructConnect is a construction management software company based in Cincinnati, Ohio. It is currently led by CEO Dave Conway. ConstructConnect was acquired by Roper on October 31, 2016.
References
External links
Companies listed on the New York Stock Exchange
Manufacturing companies based in Florida
Technology companies of the United States
Photography companies of the United States
Automatic identification and data capture
American companies established in 1808
Conglomerate companies of the United States
Manufacturing companies established in 1808 |
29374925 | https://en.wikipedia.org/wiki/KCA%20University | KCA University | KCA University (KCAU) is a private, non-profit institution, founded in July 1989 as Kenya College of Accountancy (KCA) by the Institute of Certified Public Accountants of Kenya (ICPAK) to improve the quality of accountancy and financial management training in the country. KCAU is located on Thika Road in Ruaraka, Nairobi, Kenya. The institution also maintains satellite colleges under the School of Professional Programmes in Nairobi CBD Githunguri, Kericho, Eldoret, Kisumu, Amagoro and Kitengela.
History
Following a study by Chart Foulks Lynch CIPFA in the UK, Kenya College of Accountancy was founded in 1987–88. The study concluded that the Kenyan economy required an additional four hundred qualified accountants every year. From an initial enrollment of 170 students in 1989, the student population has grown substantially and now stands at over 15,000 enrolled annually.
KCA applied to the Commission for Higher Education (CHE) for university status in the year 2000 and on July 26, 2007, CHE awarded KCA a Letter of Interim Authority (LIA). Operations then began at KCA University.
Faculties and programs
Faculties
Commerce and Distance Learning
Computing and Information Management
School of Professional Programs (SPP)
Faculty of Education
Institute for Capacity Development (ICAD)
Degree programs
MBA Corporate Management
Master of Science (Commerce)
Post Graduate Diploma (Corporate Governance)
Bachelor of Science (Commerce)
Bachelor of Science (Business Management and Procurement)
Bachelor of Science (Information Technology)
Bachelor of Education
Bachelor of Science (Business Information Technology)
Bachelor of Arts (Criminology)
Bachelor of Science (Software Development)
Bachelor of Science (Applied Computing)
Bachelor of Science (Gaming and Information Technology)
Bachelor of Science (Actuarial Science)
Diploma programs
Diploma in Information Technology
Diploma in Business Information Technology
Diploma in Accounting and Finance
Diploma in Business Management
Diploma in Procurement and Logistics
Certificate programs
Certificate in County Governance
Certificate in Information Technology
Certificate in Business Information Technology
Certificate in Bridging Mathematics
Certificate in Research Methodology
References
External links
Commission for Higher Education, http://www.che.or.ke
United Nations Global Compact, http://www.unglobalcompact.org/participant/11852-KCA-University
Ministry of Higher Education, Science and Technology, https://web.archive.org/web/20110722131940/http://www.scienceandtechnology.go.ke/index.php?option=com_content&task=view&id=60&Itemid=61
Education in Nairobi
Universities and colleges in Kenya
Business schools in Africa
Educational institutions established in 1989
1989 establishments in Kenya
1980s in Nairobi |
10242544 | https://en.wikipedia.org/wiki/SAML%202.0 | SAML 2.0 | Security Assertion Markup Language 2.0 (SAML 2.0) is a version of the SAML standard for exchanging authentication and authorization identities between security domains. SAML 2.0 is an XML-based protocol that uses security tokens containing assertions to pass information about a principal (usually an end user) between a SAML authority, named an Identity Provider, and a SAML consumer, named a Service Provider. SAML 2.0 enables web-based, cross-domain single sign-on (SSO), which helps reduce the administrative overhead of distributing multiple authentication tokens to the user.
SAML 2.0 was ratified as an OASIS Standard in March 2005, replacing SAML 1.1. The critical aspects of SAML 2.0 are covered in detail in the official documents SAMLCore, SAMLBind, SAMLProf, and SAMLMeta.
Some 30 individuals from more than 24 companies and organizations were involved in the creation of SAML 2.0. In particular, and of special note, Liberty Alliance donated its Identity Federation Framework (ID-FF) specification to OASIS, which became the basis of the SAML 2.0 specification. Thus SAML 2.0 represents the convergence of SAML 1.1, Liberty ID-FF 1.2, and Shibboleth 1.3.
SAML 2.0 assertions
An assertion is a package of information that supplies zero or more statements made by a SAML authority. SAML assertions are usually made about a subject, represented by the <Subject> element. The SAML 2.0 specification defines three different kinds of assertion statements that can be created by a SAML authority. All SAML-defined statements are associated with a subject. The three kinds of assertion statements defined are as follows:
Authentication Statement: The assertion subject was authenticated by a particular means at a particular time.
Attribute Statement: The assertion subject is associated with the supplied attributes.
Authorization Decision Statement: A request to allow the assertion subject to access the specified resource has been granted or denied.
An important type of SAML assertion is the so-called "bearer" assertion used to facilitate Web Browser SSO. Here is an example of a short-lived bearer assertion issued by an identity provider (https://idp.example.org/SAML2) to a service provider (https://sp.example.com/SAML2). The assertion includes both an Authentication Assertion <saml:AuthnStatement> and an Attribute Assertion <saml:AttributeStatement>, which presumably the service provider uses to make an access control decision. The prefix saml: represents the SAML V2.0 assertion namespace.
Example of SAML
<saml:Assertion
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
ID="_d71a3a8e9fcc45c9e9d248ef7049393fc8f04e5f75"
Version="2.0"
IssueInstant="2004-12-05T09:22:05Z">
<saml:Issuer>https://idp.example.org/SAML2</saml:Issuer>
<ds:Signature
xmlns:ds="http://www.w3.org/2000/09/xmldsig#">...</ds:Signature>
<saml:Subject>
<saml:NameID
Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient">
3f7b3dcf-1674-4ecd-92c8-1544f346baf8
</saml:NameID>
<saml:SubjectConfirmation
Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
<saml:SubjectConfirmationData
InResponseTo="aaf23196-1773-2113-474a-fe114412ab72"
Recipient="https://sp.example.com/SAML2/SSO/POST"
NotOnOrAfter="2004-12-05T09:27:05Z"/>
</saml:SubjectConfirmation>
</saml:Subject>
<saml:Conditions
NotBefore="2004-12-05T09:17:05Z"
NotOnOrAfter="2004-12-05T09:27:05Z">
<saml:AudienceRestriction>
<saml:Audience>https://sp.example.com/SAML2</saml:Audience>
</saml:AudienceRestriction>
</saml:Conditions>
<saml:AuthnStatement
AuthnInstant="2004-12-05T09:22:00Z"
SessionIndex="b07b804c-7c29-ea16-7300-4f3d6f7928ac">
<saml:AuthnContext>
<saml:AuthnContextClassRef>
urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport
</saml:AuthnContextClassRef>
</saml:AuthnContext>
</saml:AuthnStatement>
<saml:AttributeStatement>
<saml:Attribute
xmlns:x500="urn:oasis:names:tc:SAML:2.0:profiles:attribute:X500"
x500:Encoding="LDAP"
NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri"
Name="urn:oid:1.3.6.1.4.1.5923.1.1.1.1"
FriendlyName="eduPersonAffiliation">
<saml:AttributeValue
xsi:type="xs:string">member</saml:AttributeValue>
<saml:AttributeValue
xsi:type="xs:string">staff</saml:AttributeValue>
</saml:Attribute>
</saml:AttributeStatement>
</saml:Assertion>
Note that in the above example the <saml:Assertion> element contains the following child elements:
a <saml:Issuer> element, which contains the unique identifier of the identity provider
a <ds:Signature> element, which contains an integrity-preserving digital signature (not shown) over the <saml:Assertion> element
a <saml:Subject> element, which identifies the authenticated principal (but in this case the identity of the principal is hidden behind an opaque transient identifier, for reasons of privacy)
a <saml:Conditions> element, which gives the conditions under which the assertion is to be considered valid
a <saml:AuthnStatement> element, which describes the act of authentication at the identity provider
a <saml:AttributeStatement> element, which asserts a multi-valued attribute associated with the authenticated principal
In words, the assertion encodes the following information:
The assertion ("b07b804c-7c29-ea16-7300-4f3d6f7928ac") was issued at time "2004-12-05T09:22:05Z" by identity provider (https://idp.example.org/SAML2) regarding subject (3f7b3dcf-1674-4ecd-92c8-1544f346baf8) exclusively for service provider (https://sp.example.com/SAML2).
The authentication statement, in particular, asserts the following:
The principal identified in the <saml:Subject> element was authenticated at time "2004-12-05T09:22:00Z" by means of a password sent over a protected channel.
Likewise the attribute statement asserts that:
The principal identified in the <saml:Subject> element has the 'staff' and 'member' attributes at this institution.
SAML 2.0 protocols
The following protocols are specified in SAMLCore:
Assertion Query and Request Protocol
Authentication Request Protocol
Artifact Resolution Protocol
Name Identifier Management Protocol
Single Logout Protocol
Name Identifier Mapping Protocol
The most important of these protocols—the Authentication Request Protocol—is discussed in detail below.
Authentication Request Protocol
In SAML 1.1, Web Browser SSO profiles are initiated by the identity provider (IdP); that is, an unsolicited <samlp:Response> element is transmitted from the identity provider to the service provider (via the browser). (The prefix samlp: denotes the SAML protocol namespace.)
In SAML 2.0, however, the flow begins at the service provider, which issues an explicit authentication request to the identity provider. The resulting Authentication Request Protocol is a significant new feature of SAML 2.0.
When a principal (or an entity acting on the principal's behalf) wishes to obtain an assertion containing an authentication statement, a <samlp:AuthnRequest> element is transmitted to the identity provider:
<samlp:AuthnRequest
xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
ID="aaf23196-1773-2113-474a-fe114412ab72"
Version="2.0"
IssueInstant="2004-12-05T09:21:59Z"
AssertionConsumerServiceIndex="0"
AttributeConsumingServiceIndex="0">
<saml:Issuer>https://sp.example.com/SAML2</saml:Issuer>
<samlp:NameIDPolicy
AllowCreate="true"
Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient"/>
</samlp:AuthnRequest>
The above <samlp:AuthnRequest> element, which implicitly requests an assertion containing an authentication statement, was evidently issued by a service provider (https://sp.example.com/SAML2) and subsequently presented to the identity provider (via the browser). The identity provider authenticates the principal (if necessary) and issues an authentication response, which is transmitted back to the service provider (again via the browser).
Artifact Resolution Protocol
A SAML message is transmitted from one entity to another either by value or by reference. A reference to a SAML message is called an artifact. The receiver of an artifact resolves the reference by sending a <samlp:ArtifactResolve> request directly to the issuer of the artifact, who then responds with the actual message referenced by the artifact.
Suppose, for example, that an identity provider sends the following <samlp:ArtifactResolve> request directly to a service provider (via a back channel):
<samlp:ArtifactResolve
xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
ID="_cce4ee769ed970b501d680f697989d14"
Version="2.0"
IssueInstant="2004-12-05T09:21:58Z">
<saml:Issuer>https://idp.example.org/SAML2</saml:Issuer>
<!-- an ArtifactResolve message SHOULD be signed -->
<ds:Signature
xmlns:ds="http://www.w3.org/2000/09/xmldsig#">...</ds:Signature>
<samlp:Artifact>AAQAAMh48/1oXIM+sDo7Dh2qMp1HM4IF5DaRNmDj6RdUmllwn9jJHyEgIi8=</samlp:Artifact>
</samlp:ArtifactResolve>
In response, the service provider returns the SAML element referenced by the enclosed artifact. This protocol forms the basis of the HTTP Artifact Binding.
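The back-channel exchange itself is an ordinary SOAP 1.1 call over HTTPS. The following Python sketch shows how an artifact receiver might wrap the <samlp:ArtifactResolve> message shown above in a SOAP envelope and post it to the issuer's artifact resolution endpoint; it assumes the third-party requests library is available, treats the endpoint URL as a placeholder, and omits the signature that the message SHOULD carry.
# Minimal sketch of artifact resolution over the SAML SOAP binding.
# Assumes the third-party 'requests' library; the endpoint URL is a placeholder
# and the ds:Signature that SHOULD be present is omitted for brevity.
import requests

ARTIFACT_RESOLUTION_URL = "https://sp.example.com/SAML2/ArtifactResolution"  # placeholder

artifact_resolve = """<samlp:ArtifactResolve
    xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    ID="_cce4ee769ed970b501d680f697989d14"
    Version="2.0"
    IssueInstant="2004-12-05T09:21:58Z">
  <saml:Issuer>https://idp.example.org/SAML2</saml:Issuer>
  <samlp:Artifact>AAQAAMh48/1oXIM+sDo7Dh2qMp1HM4IF5DaRNmDj6RdUmllwn9jJHyEgIi8=</samlp:Artifact>
</samlp:ArtifactResolve>"""

# The SAML message travels in the body of a SOAP 1.1 envelope.
soap_envelope = (
    '<soap11:Envelope xmlns:soap11="http://schemas.xmlsoap.org/soap/envelope/">'
    "<soap11:Body>" + artifact_resolve + "</soap11:Body>"
    "</soap11:Envelope>"
)

response = requests.post(
    ARTIFACT_RESOLUTION_URL,
    data=soap_envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8"},
)
# The SOAP body of the reply carries the <samlp:ArtifactResponse> element.
print(response.text)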
SAML 2.0 bindings
The bindings supported by SAML 2.0 are outlined in the Bindings specification (SAMLBind):
SAML SOAP Binding (based on SOAP 1.1)
Reverse SOAP (PAOS) Binding
HTTP Redirect Binding
HTTP POST Binding
HTTP Artifact Binding
SAML URI Binding
For Web Browser SSO, the HTTP Redirect Binding and the HTTP POST Binding are commonly used. For example, the service provider may use HTTP Redirect to send a request while the identity provider uses HTTP POST to transmit the response. This example illustrates that an entity's choice of binding is independent of its partner's choice of binding.
HTTP Redirect Binding
SAML protocol messages can be carried directly in the URL query string of an HTTP GET request. Since the length of URLs is limited in practice, the HTTP Redirect binding is suitable for short messages, such as the <samlp:AuthnRequest> message. Longer messages (e.g. those containing signed or encrypted SAML assertions, such as SAML Responses) are usually transmitted via other bindings such as the HTTP POST Binding.
SAML requests or responses transmitted via HTTP Redirect have a SAMLRequest or SAMLResponse query string parameter, respectively. Before it is sent, the message is deflated (raw DEFLATE, without header or checksum), base64-encoded, and URL-encoded, in that order. Upon receipt, the process is reversed to recover the original message.
For example, encoding the <samlp:AuthnRequest> message above yields:
https://idp.example.org/SAML2/SSO/Redirect?SAMLRequest=fZFfa8IwFMXfBb9DyXvaJtZ1BqsURRC2
Mabbw95ivc5Am3TJrXPffmmLY3%2FA15Pzuyf33On8XJXBCaxTRmeEhTEJQBdmr%2FRbRp63K3pL5rPhYOpkVdY
ib%2FCon%2BC9AYfDQRB4WDvRvWWksVoY6ZQTWlbgBBZik9%2FfCR7GorYGTWFK8pu6DknnwKL%2FWEetlxmR8s
BHbHJDWZqOKGdsRJM0kfQAjCUJ43KX8s78ctnIz%2Blp5xpYa4dSo1fjOKGM03i8jSeCMzGevHa2%2FBK5MNo1F
dgN2JMqPLmHc0b6WTmiVbsGoTf5qv66Zq2t60x0wXZ2RKydiCJXh3CWVV1CWJgqanfl0%2Bin8xutxYOvZL18NK
UqPlvZR5el%2BVhYkAgZQdsA6fWVsZXE63W2itrTQ2cVaKV2CjSSqL1v9P%2FAXv4C
The above message (formatted for readability) may be signed for additional security. In practice, all the data contained in a <samlp:AuthnRequest>, such as the Issuer (which carries the SP's entity ID) and the NameIDPolicy, has usually been agreed between the IdP and the SP beforehand (via manual information exchange or via SAML metadata), in which case signing the request is not strictly necessary. When the <samlp:AuthnRequest> contains information not known to the IdP beforehand, such as the Assertion Consumer Service URL, signing the request is recommended for security purposes.
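As an illustration of the encoding steps described above, the following Python sketch (standard-library modules only; the AuthnRequest string is a placeholder for the XML shown earlier) performs the DEFLATE, base64 and URL encoding, and reverses them on receipt:
# Sketch of the encoding used by the HTTP Redirect binding:
# raw DEFLATE (no zlib header/checksum), then base64, then URL encoding.
import base64
import urllib.parse
import zlib

def encode_redirect(saml_message: str) -> str:
    compressor = zlib.compressobj(wbits=-15)     # negative wbits = raw DEFLATE
    deflated = compressor.compress(saml_message.encode("utf-8")) + compressor.flush()
    return urllib.parse.quote(base64.b64encode(deflated), safe="")

def decode_redirect(value: str) -> str:
    # Reverse the steps: URL-decode, base64-decode, inflate.
    deflated = base64.b64decode(urllib.parse.unquote(value))
    return zlib.decompress(deflated, wbits=-15).decode("utf-8")

authn_request = "<samlp:AuthnRequest ...>...</samlp:AuthnRequest>"  # placeholder
redirect_url = ("https://idp.example.org/SAML2/SSO/Redirect?SAMLRequest="
                + encode_redirect(authn_request))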
HTTP POST Binding
In the following example, both the service provider and the identity provider use an HTTP POST binding. Initially, the service provider responds to a request from the user agent with a document containing an XHTML form:
<form method="post" action="https://idp.example.org/SAML2/SSO/POST" ...>
<input type="hidden" name="SAMLRequest" value="''request''" />
... other input parameter....
</form>
The value of the SAMLRequest parameter is the base64-encoding of a <samlp:AuthnRequest> element, which is transmitted to the identity provider via the browser. The SSO service at the identity provider validates the request and responds with a document containing another XHTML form:
<form method="post" action="https://sp.example.com/SAML2/SSO/POST" ...>
<input type="hidden" name="SAMLResponse" value="''response''" />
...
</form>
The value of the SAMLResponse parameter is the base64 encoding of a <samlp:Response> element, which likewise is transmitted to the service provider via the browser.
To automate the submission of the form, the following line of JavaScript may appear anywhere on the XHTML page:
window.onload = function () { document.forms[0].submit(); }
This assumes, of course, that the first form element on the page (forms[0]) is the form containing the SAMLResponse shown above.
HTTP Artifact Binding
The HTTP Artifact Binding uses the Artifact Resolution Protocol and the SAML SOAP Binding (over HTTP) to resolve a SAML message by reference. Consider the following specific example. Suppose a service provider wants to send a <samlp:AuthnRequest> message to an identity provider. Initially, the service provider transmits an artifact to the identity provider via an HTTP redirect:
https://idp.example.org/SAML2/SSO/Artifact?SAMLart=artifact
Next the identity provider sends a <samlp:ArtifactResolve> request (such as the ArtifactResolveRequest shown earlier) directly to the service provider via a back channel. Finally, the service provider returns a <samlp:ArtifactResponse> element containing the referenced <samlp:AuthnRequest> message:
<samlp:ArtifactResponse
xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
ID="_d84a49e5958803dedcff4c984c2b0d95"
InResponseTo="_cce4ee769ed970b501d680f697989d14"
Version="2.0"
IssueInstant="2004-12-05T09:21:59Z">
<!-- an ArtifactResponse message SHOULD be signed -->
<ds:Signature
xmlns:ds="http://www.w3.org/2000/09/xmldsig#">...</ds:Signature>
<samlp:Status>
<samlp:StatusCode
Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
</samlp:Status>
<samlp:AuthnRequest
xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
ID="_306f8ec5b618f361c70b6ffb1480eade"
Version="2.0"
IssueInstant="2004-12-05T09:21:59Z"
Destination="https://idp.example.org/SAML2/SSO/Artifact"
ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Artifact"
AssertionConsumerServiceURL="https://sp.example.com/SAML2/SSO/Artifact">
<saml:Issuer>https://sp.example.com/SAML2</saml:Issuer>
<samlp:NameIDPolicy
AllowCreate="false"
Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress"/>
</samlp:AuthnRequest>
</samlp:ArtifactResponse>
Of course the flow can go in the other direction as well, that is, the identity provider may issue an artifact, and in fact this is more common. See, for example, the "double artifact" profile example later in this topic.
Artifact format
In general, a SAML 2.0 artifact is defined as follows (SAMLBind):
SAML_artifact := B64 (TypeCode EndpointIndex RemainingArtifact)
TypeCode := Byte1Byte2
EndpointIndex := Byte1Byte2
Thus a SAML 2.0 artifact consists of three components: a two-byte TypeCode, a two-byte EndpointIndex, and an arbitrary sequence of bytes called the RemainingArtifact. These three pieces of information are concatenated and base64-encoded to yield the complete artifact.
The TypeCode uniquely identifies the artifact format. SAML 2.0 predefines just one such artifact, of type 0x0004. The EndpointIndex is a reference to a particular artifact resolution endpoint managed by the artifact issuer (which may be either the IdP or the SP, as mentioned earlier). The RemainingArtifact, which is determined by the type definition, is the "meat" of the artifact.
The format of a type 0x0004 artifact is further defined as follows:
TypeCode := 0x0004
RemainingArtifact := SourceId MessageHandle
SourceId := 20-byte_sequence
MessageHandle := 20-byte_sequence
Thus a type 0x0004 artifact is of size 44 bytes (unencoded). The SourceId is an arbitrary sequence of bytes, although in practice, the SourceId is the SHA-1 hash of the issuer's entityID. The MessageHandle is a random sequence of bytes that references a SAML message that the artifact issuer is willing to produce on-demand.
For example, consider this hex-encoded type 0x0004 artifact:
00040000c878f3fd685c833eb03a3b0e1daa329d47338205e436913660e3e917549a59709fd8c91f2120222f
The TypeCode (0x0004) and the EndpointIndex (0x0000) appear at the front of the artifact. The next 20 bytes are the SHA-1 hash of the issuer's entityID (https://idp.example.org/SAML2), followed by 20 random bytes. The base64 encoding of these 44 bytes is the artifact value shown in the ArtifactResolve example earlier.
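The construction of a type 0x0004 artifact can be sketched in a few lines of Python (standard-library modules only; the entityID is the one from the example above, and the message handle is simply 20 random bytes):
# Sketch: build and decode a SAML 2.0 type 0x0004 artifact.
import base64
import hashlib
import os
import struct

def build_artifact(entity_id: str, endpoint_index: int = 0) -> str:
    type_code = struct.pack(">H", 0x0004)                          # 2-byte TypeCode
    index = struct.pack(">H", endpoint_index)                      # 2-byte EndpointIndex
    source_id = hashlib.sha1(entity_id.encode("utf-8")).digest()   # 20-byte SourceId
    message_handle = os.urandom(20)                                # 20-byte MessageHandle
    return base64.b64encode(type_code + index + source_id + message_handle).decode("ascii")

def parse_artifact(artifact: str):
    raw = base64.b64decode(artifact)
    type_code, endpoint_index = struct.unpack(">HH", raw[:4])
    return type_code, endpoint_index, raw[4:24], raw[24:44]        # SourceId, MessageHandle

print(parse_artifact(build_artifact("https://idp.example.org/SAML2")))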
SAML 2.0 profiles
In SAML 2.0, as in SAML 1.1, the primary use case is still Web Browser SSO, but the scope of SAML 2.0 is broader than previous versions of SAML, as suggested in the following exhaustive list of profiles:
SSO Profiles
Web browser SSO profile
Enhanced Client or Proxy (ECP) Profile
Identity Provider Discovery Profile
Single Logout Profile
Name Identifier Management Profile
Artifact Resolution Profile
Assertion Query/Request Profile
Name Identifier Mapping Profile
SAML Attribute Profiles
Basic Attribute Profile
X.500/LDAP Attribute Profile
UUID Attribute Profile
DCE PAC Attribute Profile
XACML Attribute Profile
Although the number of supported profiles is quite large, the Profiles specification (SAMLProf) is simplified since the binding aspects of each profile have been factored out into a separate Bindings specification (SAMLBind).
Web browser SSO profile
SAML 2.0 specifies a Web Browser SSO Profile involving an identity provider (IdP), a service provider (SP), and a principal wielding an HTTP user agent. The service provider has four bindings from which to choose while the identity provider has three, which leads to twelve possible deployment scenarios. We outline three of those deployment scenarios below.
SP redirect request; IdP POST response
This is one of the most common scenarios. The service provider sends a SAML Request to the IdP SSO Service using the HTTP-Redirect Binding. The identity provider returns the SAML Response to the SP Assertion Consumer Service using the HTTP-POST Binding.
The message flow begins with a request for a secured resource at the service provider.
1. Request the target resource at the SP
The principal (via an HTTP user agent) requests a target resource at the service provider:
https://sp.example.com/myresource
The service provider performs a security check on behalf of the target resource. If a valid security context at the service provider already exists, skip steps 2–7.
The service provider may use any kind of mechanism to discover the identity provider that will be used, e.g., ask the user, use a preconfigured IdP, etc.
2. Redirect to IdP SSO Service
The service provider generates an appropriate SAMLRequest (and RelayState, if any), then redirects the browser to the IdP SSO Service using a standard HTTP 302 redirect.
302 Redirect
Location: https://idp.example.org/SAML2/SSO/Redirect?SAMLRequest=request&RelayState=token
The RelayState token is an opaque reference to state information maintained at the service provider. The value of the SAMLRequest parameter is a deflated, base64-encoded and URL-encoded value of an <samlp:AuthnRequest> element:
<samlp:AuthnRequest
xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
ID="identifier_1"
Version="2.0"
IssueInstant="2004-12-05T09:21:59Z"
AssertionConsumerServiceIndex="0">
<saml:Issuer>https://sp.example.com/SAML2</saml:Issuer>
<samlp:NameIDPolicy
AllowCreate="true"
Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient"/>
</samlp:AuthnRequest>
The SAMLRequest may be signed using the SP signing key. Typically, however, this is not necessary.
3. Request the SSO Service at the IdP
The user agent issues a GET request to the SSO service at the identity provider:
GET /SAML2/SSO/Redirect?SAMLRequest=request&RelayState=token HTTP/1.1
Host: idp.example.org
where the values of the SAMLRequest and RelayState parameters are the same as those provided in the redirect. The SSO Service at the identity provider processes the <samlp:AuthnRequest> element (by URL-decoding, base64-decoding and inflating the request, in that order) and performs a security check. If the user does not have a valid security context, the identity provider identifies the user with any mechanism (details omitted).
4. Respond with an XHTML form
The SSO Service validates the request and responds with a document containing an XHTML form:
<form method="post" action="https://sp.example.com/SAML2/SSO/POST" ...>
<input type="hidden" name="SAMLResponse" value="response" />
<input type="hidden" name="RelayState" value="token" />
...
<input type="submit" value="Submit" />
</form>
The value of the RelayState parameter has been preserved from step 3. The value of the SAMLResponse parameter is the base64 encoding of the following <samlp:Response> element:
<samlp:Response
xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
ID="identifier_2"
InResponseTo="identifier_1"
Version="2.0"
IssueInstant="2004-12-05T09:22:05Z"
Destination="https://sp.example.com/SAML2/SSO/POST">
<saml:Issuer>https://idp.example.org/SAML2</saml:Issuer>
<samlp:Status>
<samlp:StatusCode
Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
</samlp:Status>
<saml:Assertion
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
ID="identifier_3"
Version="2.0"
IssueInstant="2004-12-05T09:22:05Z">
<saml:Issuer>https://idp.example.org/SAML2</saml:Issuer>
<!-- a POSTed assertion MUST be signed -->
<ds:Signature
xmlns:ds="http://www.w3.org/2000/09/xmldsig#">...</ds:Signature>
<saml:Subject>
<saml:NameID
Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient">
3f7b3dcf-1674-4ecd-92c8-1544f346baf8
</saml:NameID>
<saml:SubjectConfirmation
Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
<saml:SubjectConfirmationData
InResponseTo="identifier_1"
Recipient="https://sp.example.com/SAML2/SSO/POST"
NotOnOrAfter="2004-12-05T09:27:05Z"/>
</saml:SubjectConfirmation>
</saml:Subject>
<saml:Conditions
NotBefore="2004-12-05T09:17:05Z"
NotOnOrAfter="2004-12-05T09:27:05Z">
<saml:AudienceRestriction>
<saml:Audience>https://sp.example.com/SAML2</saml:Audience>
</saml:AudienceRestriction>
</saml:Conditions>
<saml:AuthnStatement
AuthnInstant="2004-12-05T09:22:00Z"
SessionIndex="identifier_3">
<saml:AuthnContext>
<saml:AuthnContextClassRef>
urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport
</saml:AuthnContextClassRef>
</saml:AuthnContext>
</saml:AuthnStatement>
</saml:Assertion>
</samlp:Response>
5. Request the Assertion Consumer Service at the SP
The user agent issues a POST request to the Assertion Consumer Service at the service provider:
POST /SAML2/SSO/POST HTTP/1.1
Host: sp.example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: nnn
SAMLResponse=response&RelayState=token
where the values of the SAMLResponse and RelayState parameters are taken from the XHTML form at step 4.
6. Redirect to the target resource
The assertion consumer service processes the response, creates a security context at the service provider and redirects the user agent to the target resource.
7. Request the target resource at the SP again
The user agent requests the target resource at the service provider (again):
https://sp.example.com/myresource
8. Respond with requested resource
Since a security context exists, the service provider returns the resource to the user agent.
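The processing performed in step 6 amounts to a series of checks on the POSTed bearer assertion. The following Python sketch suggests the kind of validation a service provider might apply before establishing a security context; it uses the standard xml.etree module, and verify_signature is a hypothetical helper standing in for a real XML Signature library keyed with the IdP certificate from metadata.
# Sketch of bearer-assertion checks at the Assertion Consumer Service.
# verify_signature() is a hypothetical placeholder; real deployments use an
# XML Signature library together with the IdP's signing certificate from metadata.
import datetime
import xml.etree.ElementTree as ET

NS = {
    "samlp": "urn:oasis:names:tc:SAML:2.0:protocol",
    "saml": "urn:oasis:names:tc:SAML:2.0:assertion",
}

def parse_time(value: str) -> datetime.datetime:
    return datetime.datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ")

def check_bearer_response(xml_text, expected_request_id, acs_url, sp_entity_id):
    response = ET.fromstring(xml_text)                       # <samlp:Response>
    assertion = response.find("saml:Assertion", NS)
    now = datetime.datetime.utcnow()

    # 1. A POSTed assertion MUST be signed by a trusted identity provider.
    if not verify_signature(assertion):                      # hypothetical helper
        raise ValueError("invalid or missing assertion signature")

    # 2. The bearer SubjectConfirmationData must match this request and endpoint.
    scd = assertion.find(".//saml:SubjectConfirmationData", NS)
    if scd.get("InResponseTo") != expected_request_id or scd.get("Recipient") != acs_url:
        raise ValueError("subject confirmation does not match this request")
    if now >= parse_time(scd.get("NotOnOrAfter")):
        raise ValueError("assertion has expired")

    # 3. The audience restriction must name this service provider.
    audience = assertion.find(".//saml:Audience", NS)
    if audience is None or audience.text.strip() != sp_entity_id:
        raise ValueError("assertion not intended for this service provider")

    return assertion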
SP POST Request; IdP POST Response
This is a relatively simple deployment of the SAML 2.0 Web Browser SSO Profile (SAMLProf) where both the service provider (SP) and the identity provider (IdP) use the HTTP POST binding.
The message flow begins with a request for a secured resource at the SP.
1. Request the target resource at the SP
The principal (via an HTTP user agent) requests a target resource at the service provider:
https://sp.example.com/myresource
The service provider performs a security check on behalf of the target resource. If a valid security context at the service provider already exists, skip steps 2–7.
2. Respond with an XHTML form
The service provider responds with a document containing an XHTML form:
<form method="post" action="https://idp.example.org/SAML2/SSO/POST" ...>
<input type="hidden" name="SAMLRequest" value="request" />
<input type="hidden" name="RelayState" value="token" />
...
<input type="submit" value="Submit" />
</form>
The RelayState token is an opaque reference to state information maintained at the service provider. The value of the SAMLRequest parameter is the base64 encoding of the following <samlp:AuthnRequest> element:
<samlp:AuthnRequest
xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
ID="identifier_1"
Version="2.0"
IssueInstant="2004-12-05T09:21:59Z"
AssertionConsumerServiceIndex="0">
<saml:Issuer>https://sp.example.com/SAML2</saml:Issuer>
<samlp:NameIDPolicy
AllowCreate="true"
Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient"/>
</samlp:AuthnRequest>
Before the <samlp:AuthnRequest> element is inserted into the XHTML form, it is first base64-encoded.
3. Request the SSO Service at the IdP
The user agent issues a POST request to the SSO service at the identity provider:
POST /SAML2/SSO/POST HTTP/1.1
Host: idp.example.org
Content-Type: application/x-www-form-urlencoded
Content-Length: nnn
SAMLRequest=request&RelayState=token
where the values of the SAMLRequest and RelayState parameters are taken from the XHTML form at step 2. The SSO service processes the <samlp:AuthnRequest> element (by URL-decoding, base64-decoding and inflating the request, in that order) and performs a security check. If the user does not have a valid security context, the identity provider identifies the user (details omitted).
4. Respond with an XHTML form
The SSO service validates the request and responds with a document containing an XHTML form:
<form method="post" action="https://sp.example.com/SAML2/SSO/POST" ...>
<input type="hidden" name="SAMLResponse" value="response" />
<input type="hidden" name="RelayState" value="token" />
...
<input type="submit" value="Submit" />
</form>
The value of the RelayState parameter has been preserved from step 3. The value of the SAMLResponse parameter is the base64 encoding of the following <samlp:Response> element:
<samlp:Response
xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
ID="identifier_2"
InResponseTo="identifier_1"
Version="2.0"
IssueInstant="2004-12-05T09:22:05Z"
Destination="https://sp.example.com/SAML2/SSO/POST">
<saml:Issuer>https://idp.example.org/SAML2</saml:Issuer>
<samlp:Status>
<samlp:StatusCode
Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
</samlp:Status>
<saml:Assertion
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
ID="identifier_3"
Version="2.0"
IssueInstant="2004-12-05T09:22:05Z">
<saml:Issuer>https://idp.example.org/SAML2</saml:Issuer>
<!-- a POSTed assertion MUST be signed -->
<ds:Signature
xmlns:ds="http://www.w3.org/2000/09/xmldsig#">...</ds:Signature>
<saml:Subject>
<saml:NameID
Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient">
3f7b3dcf-1674-4ecd-92c8-1544f346baf8
</saml:NameID>
<saml:SubjectConfirmation
Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
<saml:SubjectConfirmationData
InResponseTo="identifier_1"
Recipient="https://sp.example.com/SAML2/SSO/POST"
NotOnOrAfter="2004-12-05T09:27:05Z"/>
</saml:SubjectConfirmation>
</saml:Subject>
<saml:Conditions
NotBefore="2004-12-05T09:17:05Z"
NotOnOrAfter="2004-12-05T09:27:05Z">
<saml:AudienceRestriction>
<saml:Audience>https://sp.example.com/SAML2</saml:Audience>
</saml:AudienceRestriction>
</saml:Conditions>
<saml:AuthnStatement
AuthnInstant="2004-12-05T09:22:00Z"
SessionIndex="identifier_3">
<saml:AuthnContext>
<saml:AuthnContextClassRef>
urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport
</saml:AuthnContextClassRef>
</saml:AuthnContext>
</saml:AuthnStatement>
</saml:Assertion>
</samlp:Response>
5. Request the Assertion Consumer Service at the SP
The user agent issues a POST request to the assertion consumer service at the service provider:
POST /SAML2/SSO/POST HTTP/1.1
Host: sp.example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: nnn
SAMLResponse=response&RelayState=token
where the values of the SAMLResponse and RelayState parameters are taken from the XHTML form at step 4.
6. Redirect to the target resource
The assertion consumer service processes the response, creates a security context at the service provider and redirects the user agent to the target resource.
7. Request the target resource at the SP again
The user agent requests the target resource at the service provider (again):
https://sp.example.com/myresource
8. Respond with requested resource
Since a security context exists, the service provider returns the resource to the user agent.
SP redirect artifact; IdP redirect artifact
This is a complex deployment of the SAML 2.0 Web Browser SSO Profile (SAMLProf) where both the service provider (SP) and the identity provider (IdP) use the HTTP Artifact binding. Both artifacts are delivered to their respective endpoints via HTTP GET.
The message flow begins with a request for a secured resource at the SP:
1. Request the target resource at the SP
The principal (via an HTTP user agent) requests a target resource at the service provider:
https://sp.example.com/myresource
The service provider performs a security check on behalf of the target resource. If a valid security context at the service provider already exists, skip steps 2–11.
2. Redirect to the Single Sign-on (SSO) Service at the IdP
The service provider redirects the user agent to the single sign-on (SSO) service at the identity provider. A RelayState parameter and a SAMLart parameter are appended to the redirect URL.
3. Request the SSO Service at the IdP
The user agent requests the SSO service at the identity provider:
https://idp.example.org/SAML2/SSO/Artifact?SAMLart=artifact_1&RelayState=token
where token is an opaque reference to state information maintained at the service provider and artifact_1 is a SAML artifact, both issued at step 2.
4. Request the Artifact Resolution Service at the SP
The SSO service dereferences the artifact by sending a <samlp:ArtifactResolve> element bound to a SAML SOAP message to the artifact resolution service at the service provider:
<samlp:ArtifactResolve
xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
ID="identifier_1"
Version="2.0"
IssueInstant="2004-12-05T09:21:58Z"
Destination="https://sp.example.com/SAML2/ArtifactResolution">
<saml:Issuer>https://idp.example.org/SAML2</saml:Issuer>
<!-- an ArtifactResolve message SHOULD be signed -->
<ds:Signature
xmlns:ds="http://www.w3.org/2000/09/xmldsig#">...</ds:Signature>
<samlp:Artifact>''artifact_1''</samlp:Artifact>
</samlp:ArtifactResolve>
where the value of the <samlp:Artifact> element is the SAML artifact transmitted at step 3.
5. Respond with a SAML AuthnRequest
The artifact resolution service at the service provider returns a <samlp:ArtifactResponse> element (containing an <samlp:AuthnRequest> element) bound to a SAML SOAP message to the SSO service at the identity provider:
<samlp:ArtifactResponse
xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
ID="identifier_2"
InResponseTo="identifier_1"
Version="2.0"
IssueInstant="2004-12-05T09:21:59Z">
<!-- an ArtifactResponse message SHOULD be signed -->
<ds:Signature
xmlns:ds="http://www.w3.org/2000/09/xmldsig#">...</ds:Signature>
<samlp:Status>
<samlp:StatusCode
Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
</samlp:Status>
<samlp:AuthnRequest
xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
ID="identifier_3"
Version="2.0"
IssueInstant="2004-12-05T09:21:59Z"
Destination="https://idp.example.org/SAML2/SSO/Artifact"
ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Artifact"
AssertionConsumerServiceURL="https://sp.example.com/SAML2/SSO/Artifact">
<saml:Issuer>https://sp.example.com/SAML2</saml:Issuer>
<samlp:NameIDPolicy
AllowCreate="false"
Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress"/>
</samlp:AuthnRequest>
</samlp:ArtifactResponse>
The SSO service processes the <samlp:AuthnRequest> element and performs a security check. If the user does not have a valid security context, the identity provider identifies the user (details omitted).
6. Redirect to the Assertion Consumer Service
The SSO service at the identity provider redirects the user agent to the assertion consumer service at the service provider. The previous RelayState parameter and a new SAMLart parameter are appended to the redirect URL.
7. Request the Assertion Consumer Service at the SP
The user agent requests the assertion consumer service at the service provider:
https://sp.example.com/SAML2/SSO/Artifact?SAMLart=artifact_2&RelayState=token
where token is the token value from step 3 and artifact_2 is the SAML artifact issued at step 6.
8. Request the Artifact Resolution Service at the IdP
The assertion consumer service dereferences the artifact by sending a <samlp:ArtifactResolve> element bound to a SAML SOAP message to the artifact resolution service at the identity provider:
<samlp:ArtifactResolve
xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
ID="identifier_4"
Version="2.0"
IssueInstant="2004-12-05T09:22:04Z"
Destination="https://idp.example.org/SAML2/ArtifactResolution">
<saml:Issuer>https://sp.example.com/SAML2</saml:Issuer>
<!-- an ArtifactResolve message SHOULD be signed -->
<ds:Signature
xmlns:ds="http://www.w3.org/2000/09/xmldsig#">...</ds:Signature>
<samlp:Artifact>''artifact_2''</samlp:Artifact>
</samlp:ArtifactResolve>
where the value of the <samlp:Artifact> element is the SAML artifact transmitted at step 7.
9. Respond with a SAML Assertion
The artifact resolution service at the identity provider returns a <samlp:ArtifactResponse> element (containing an <samlp:Response> element) bound to a SAML SOAP message to the assertion consumer service at the service provider:
<samlp:ArtifactResponse
xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
ID="identifier_5"
InResponseTo="identifier_4"
Version="2.0"
IssueInstant="2004-12-05T09:22:05Z">
<!-- an ArtifactResponse message SHOULD be signed -->
<ds:Signature
xmlns:ds="http://www.w3.org/2000/09/xmldsig#">...</ds:Signature>
<samlp:Status>
<samlp:StatusCode
Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
</samlp:Status>
<samlp:Response
xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
ID="identifier_6"
InResponseTo="identifier_3"
Version="2.0"
IssueInstant="2004-12-05T09:22:05Z"
Destination="https://sp.example.com/SAML2/SSO/Artifact">
<saml:Issuer>https://idp.example.org/SAML2</saml:Issuer>
<ds:Signature
xmlns:ds="http://www.w3.org/2000/09/xmldsig#">...</ds:Signature>
<samlp:Status>
<samlp:StatusCode
Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>
</samlp:Status>
<saml:Assertion
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
ID="identifier_7"
Version="2.0"
IssueInstant="2004-12-05T09:22:05Z">
<saml:Issuer>https://idp.example.org/SAML2</saml:Issuer>
<!-- a Subject element is required -->
<saml:Subject>
<saml:NameID
Format="urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress">
[email protected]
</saml:NameID>
<saml:SubjectConfirmation
Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
<saml:SubjectConfirmationData
InResponseTo="identifier_3"
Recipient="https://sp.example.com/SAML2/SSO/Artifact"
NotOnOrAfter="2004-12-05T09:27:05Z"/>
</saml:SubjectConfirmation>
</saml:Subject>
<saml:Conditions
NotBefore="2004-12-05T09:17:05Z"
NotOnOrAfter="2004-12-05T09:27:05Z">
<saml:AudienceRestriction>
<saml:Audience>https://sp.example.com/SAML2</saml:Audience>
</saml:AudienceRestriction>
</saml:Conditions>
<saml:AuthnStatement
AuthnInstant="2004-12-05T09:22:00Z"
SessionIndex="identifier_7">
<saml:AuthnContext>
<saml:AuthnContextClassRef>
urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport
</saml:AuthnContextClassRef>
</saml:AuthnContext>
</saml:AuthnStatement>
</saml:Assertion>
</samlp:Response>
</samlp:ArtifactResponse>
10. Redirect to the target resource
The assertion consumer service processes the response, creates a security context at the service provider and redirects the user agent to the target resource.
11. Request the target resource at the SP again
The user agent requests the target resource at the service provider (again):
https://sp.example.com/myresource
12. Respond with the requested resource
Since a security context exists, the service provider returns the resource to the user agent.
Identity provider discovery profile
The SAML 2.0 Identity Provider Discovery Profile introduces the following concepts:
Common Domain
Common Domain Cookie
Common Domain Cookie Writing Service
Common Domain Cookie Reading Service
As a hypothetical example of a common domain, suppose Example UK (example.co.uk) and Example Deutschland (example.de) belong to the virtual organization Example Global Alliance (example.com). In this example, the domain example.com is the common domain. Both Example UK and Example Deutschland have a presence in this domain (uk.example.com and de.example.com, respectively).
The Common Domain Cookie is a secure browser cookie scoped to the common domain. For each browser user, this cookie stores a history list of recently visited IdPs. The name and value of the cookie are specified in the IdP Discovery Profile (SAMLProf).
After a successful act of authentication, the IdP requests the Common Domain Cookie Writing Service. This service appends the IdP's unique identifier to the common domain cookie. The SP, when it receives an unauthenticated request for a protected resource, requests the Common Domain Cookie Reading Service to discover the browser user's most recently used IdP.
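A minimal Python sketch of the cookie manipulation involved is shown below; it assumes the cookie format defined in the IdP Discovery Profile (a list of base64-encoded IdP entityIDs separated by single spaces, most recently used last) and leaves the actual HTTP cookie handling, which is framework-specific, out of scope.
# Sketch: maintain and read the common domain cookie value.
# Assumes the IdP Discovery Profile format: space-separated, base64-encoded
# IdP entityIDs, with the most recently used IdP appended at the end.
import base64

def append_idp(cookie_value: str, idp_entity_id: str) -> str:
    encoded = base64.b64encode(idp_entity_id.encode("utf-8")).decode("ascii")
    idps = [v for v in cookie_value.split(" ") if v and v != encoded]
    idps.append(encoded)                       # most recently used IdP goes last
    return " ".join(idps)

def most_recent_idp(cookie_value: str) -> str:
    return base64.b64decode(cookie_value.split(" ")[-1]).decode("utf-8")

value = append_idp("", "https://idp.example.org/SAML2")   # written by the IdP
print(most_recent_idp(value))                             # read by the SP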
Assertion query/request profile
The Assertion Query/Request Profile is a general profile that accommodates numerous types of so-called queries using the following SAML 2.0 elements:
the <samlp:AssertionIDRequest> element, which is used to request an assertion given its unique identifier (ID)
the <samlp:SubjectQuery> element, which is an abstract extension point that allows new subject-based SAML queries to be defined
the <samlp:AuthnQuery> element, which is used to request existing authentication assertions about a given subject from an Authentication Authority
the <samlp:AttributeQuery> element, which is used to request attributes about a given subject from an Attribute Authority
the <samlp:AuthzDecisionQuery> element, which is used to request an authorization decision from a trusted third party
The SAML SOAP binding is often used in conjunction with queries.
SAML attribute query
The Attribute Query is perhaps the most important type of SAML query. Often a requester, acting on behalf of the principal, queries an identity provider for attributes. Below we give an example of a query issued by a principal directly:
<samlp:AttributeQuery
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
ID="aaf23196-1773-2113-474a-fe114412ab72"
Version="2.0"
IssueInstant="2006-07-17T20:31:40Z">
<saml:Issuer
Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">
[email protected],OU=User,O=NCSA-TEST,C=US
</saml:Issuer>
<saml:Subject>
<saml:NameID
Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">
[email protected],OU=User,O=NCSA-TEST,C=US
</saml:NameID>
</saml:Subject>
<saml:Attribute
NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri"
Name="urn:oid:2.5.4.42"
FriendlyName="givenName">
</saml:Attribute>
<saml:Attribute
NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri"
Name="urn:oid:1.3.6.1.4.1.1466.115.121.1.26"
FriendlyName="mail">
</saml:Attribute>
</samlp:AttributeQuery>
Note that the Issuer is the Subject in this case. This is sometimes called an attribute self-query. An identity provider might return the following assertion, wrapped in a <samlp:Response> element (not shown):
<saml:Assertion
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:ds="http://www.w3.org/2000/09/xmldsig#"
ID="_33776a319493ad607b7ab3e689482e45"
Version="2.0"
IssueInstant="2006-07-17T20:31:41Z">
<saml:Issuer>https://idp.example.org/SAML2</saml:Issuer>
<ds:Signature>...</ds:Signature>
<saml:Subject>
<saml:NameID
Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">
[email protected],OU=User,O=NCSA-TEST,C=US
</saml:NameID>
<saml:SubjectConfirmation
Method="urn:oasis:names:tc:SAML:2.0:cm:holder-of-key">
<saml:SubjectConfirmationData>
<ds:KeyInfo>
<ds:X509Data>
<!-- principal's X.509 cert -->
<ds:X509Certificate>
MIICiDCCAXACCQDE+9eiWrm62jANBgkqhkiG9w0BAQQFADBFMQswCQYDVQQGEwJV
UzESMBAGA1UEChMJTkNTQS1URVNUMQ0wCwYDVQQLEwRVc2VyMRMwEQYDVQQDEwpT
UC1TZXJ2aWNlMB4XDTA2MDcxNzIwMjE0MVoXDTA2MDcxODIwMjE0MVowSzELMAkG
A1UEBhMCVVMxEjAQBgNVBAoTCU5DU0EtVEVTVDENMAsGA1UECxMEVXNlcjEZMBcG
A1UEAwwQdHJzY2F2b0B1aXVjLmVkdTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkC
gYEAv9QMe4lRl3XbWPcflbCjGK9gty6zBJmp+tsaJINM0VaBaZ3t+tSXknelYife
nCc2O3yaX76aq53QMXy+5wKQYe8Rzdw28Nv3a73wfjXJXoUhGkvERcscs9EfIWcC
g2bHOg8uSh+Fbv3lHih4lBJ5MCS2buJfsR7dlr/xsadU2RcCAwEAATANBgkqhkiG
9w0BAQQFAAOCAQEAdyIcMTob7TVkelfJ7+I1j0LO24UlKvbLzd2OPvcFTCv6fVHx
Ejk0QxaZXJhreZ6+rIdiMXrEzlRdJEsNMxtDW8++sVp6avoB5EX1y3ez+CEAIL4g
cjvKZUR4dMryWshWIBHKFFul+r7urUgvWI12KbMeE9KP+kiiiiTskLcKgFzngw1J
selmHhTcTCrcDocn5yO2+d3dog52vSOtVFDBsBuvDixO2hv679JR6Hlqjtk4GExp
E9iVI0wdPE038uQIJJTXlhsMMLvUGVh/c0ReJBn92Vj4dI/yy6PtY/8ncYLYNkjg
oVN0J/ymOktn9lTlFyTiuY4OuJsZRO1+zWLy9g==
</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</saml:SubjectConfirmationData>
</saml:SubjectConfirmation>
</saml:Subject>
<!-- assertion lifetime constrained by principal's X.509 cert -->
<saml:Conditions
NotBefore="2006-07-17T20:31:41Z"
NotOnOrAfter="2006-07-18T20:21:41Z">
</saml:Conditions>
<saml:AuthnStatement
AuthnInstant="2006-07-17T20:31:41Z">
<saml:AuthnContext>
<saml:AuthnContextClassRef>
urn:oasis:names:tc:SAML:2.0:ac:classes:TLSClient
</saml:AuthnContextClassRef>
</saml:AuthnContext>
</saml:AuthnStatement>
<saml:AttributeStatement>
<saml:Attribute
xmlns:x500="urn:oasis:names:tc:SAML:2.0:profiles:attribute:X500"
x500:Encoding="LDAP"
NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri"
Name="urn:oid:2.5.4.42"
FriendlyName="givenName">
<saml:AttributeValue
xsi:type="xs:string">Tom</saml:AttributeValue>
</saml:Attribute>
<saml:Attribute
xmlns:x500="urn:oasis:names:tc:SAML:2.0:profiles:attribute:X500"
x500:Encoding="LDAP"
NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri"
Name="urn:oid:1.3.6.1.4.1.1466.115.121.1.26"
FriendlyName="mail">
<saml:AttributeValue
xsi:type="xs:string">[email protected]</saml:AttributeValue>
</saml:Attribute>
</saml:AttributeStatement>
</saml:Assertion>
In contrast to the BearerAssertion shown earlier, this assertion has a longer lifetime corresponding to the lifetime of the X.509 certificate that the principal used to authenticate to the identity provider. Moreover, since the assertion is signed, the user can push this assertion to a relying party, and as long as the user can prove possession of the corresponding private key (hence the name "holder-of-key"), the relying party can be assured that the assertion is authentic.
SAML 2.0 metadata
Metadata is central to making a SAML deployment work securely and interoperably. Some important uses of metadata include:
A service provider prepares to transmit a <samlp:AuthnRequest> element to an identity provider via the browser. How does the service provider know the identity provider is authentic and not some evil identity provider trying to phish the user's password? The service provider consults its list of trusted identity providers in metadata before issuing an authentication request.
In the previous scenario, how does the service provider know where to send the user with the authentication request? The service provider looks up a pre-arranged endpoint location of the trusted identity provider in metadata.
An identity provider receives a <samlp:AuthnRequest> element from a service provider via the browser. How does the identity provider know the service provider is authentic and not some evil service provider trying to harvest personally identifiable information regarding the user? The identity provider consults its list of trusted service providers in metadata before issuing an authentication response.
In the previous scenario, how does the identity provider encrypt the SAML assertion so that the trusted service provider (and only the trusted service provider) can decrypt it? The identity provider uses the service provider's encryption certificate in metadata to encrypt the assertion.
Continuing with the previous scenario, how does the identity provider know where to send the user with the authentication response? The identity provider looks up a pre-arranged endpoint location of the trusted service provider in metadata.
How does the service provider know that the authentication response came from a trusted identity provider? The service provider verifies the signature on the assertion using the public key of the identity provider from metadata.
How does the service provider know where to resolve an artifact received from a trusted identity provider? The service provider looks up the pre-arranged endpoint location of the identity provider's artifact resolution service from metadata.
Metadata ensures a secure transaction between an identity provider and a service provider. Before metadata, trust information was encoded into the implementation in a proprietary manner. Now the sharing of trust information is facilitated by standard metadata. SAML 2.0 provides a well-defined, interoperable metadata format that entities can leverage to bootstrap the trust process.
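As an illustration of how a service provider consumes identity provider metadata, the following Python sketch (standard xml.etree module; it assumes the entity descriptor carries an X.509 certificate in its signing key descriptor, which is the common case) extracts the SSO endpoint for a given binding and the signing certificate from an <md:EntityDescriptor> such as the one shown below:
# Sketch: extract the pieces of IdP metadata a service provider typically needs.
import xml.etree.ElementTree as ET

NS = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}
HTTP_REDIRECT = "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"

def idp_sso_location(metadata_xml: str, binding: str = HTTP_REDIRECT) -> str:
    entity = ET.fromstring(metadata_xml)                 # <md:EntityDescriptor>
    idp = entity.find("md:IDPSSODescriptor", NS)
    for sso in idp.findall("md:SingleSignOnService", NS):
        if sso.get("Binding") == binding:
            return sso.get("Location")
    raise ValueError("no SSO endpoint for binding " + binding)

def idp_signing_certificate(metadata_xml: str) -> str:
    # Assumes the signing <md:KeyDescriptor> embeds an X.509 certificate.
    entity = ET.fromstring(metadata_xml)
    key = entity.find("md:IDPSSODescriptor/md:KeyDescriptor[@use='signing']", NS)
    return key.find(".//ds:X509Certificate", NS).text.strip()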
Identity Provider Metadata
An identity provider publishes data about itself in an <md:EntityDescriptor> element:
<md:EntityDescriptor entityID="https://idp.example.org/SAML2" validUntil="2013-03-22T23:00:00Z"
xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<!-- insert ds:Signature element (omitted) -->
<!-- insert md:IDPSSODescriptor element (below) -->
<md:Organization>
<md:OrganizationName xml:lang="en">Some Non-profit Organization of New York</md:OrganizationName>
<md:OrganizationDisplayName xml:lang="en">Some Non-profit Organization</md:OrganizationDisplayName>
<md:OrganizationURL xml:lang="en">https://www.example.org/</md:OrganizationURL>
</md:Organization>
<md:ContactPerson contactType="technical">
<md:SurName>SAML Technical Support</md:SurName>
<md:EmailAddress>mailto:[email protected]</md:EmailAddress>
</md:ContactPerson>
</md:EntityDescriptor>
Note the following details about this entity descriptor:
The entityID attribute is the unique identifier of the entity.
The validUntil attribute gives the expiration date of the metadata.
The <ds:Signature> element (which has been omitted for simplicity) contains a digital signature that ensures the authenticity and integrity of the metadata.
The organization identified in the <md:Organization> element is "responsible for the entity" described by the entity descriptor (section 2.3.2 of SAMLMeta).
The contact information in the <md:ContactPerson> element identifies a technical contact responsible for the entity. Multiple contacts and contact types are possible. See section 2.3.2.2 of SAMLMeta.
By definition, an identity provider manages an SSO service that supports the SAML Web Browser SSO profile specified in SAMLProf. See, for example, the identity provider described in the <md:IDPSSODescriptor> element shown in the next section.
SSO service metadata
The SSO service at the identity provider is described in an <md:IDPSSODescriptor> element:
<md:IDPSSODescriptor
    protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
  <md:KeyDescriptor use="signing">
    <ds:KeyInfo>...</ds:KeyInfo>
  </md:KeyDescriptor>
  <md:ArtifactResolutionService isDefault="true" index="0"
      Binding="urn:oasis:names:tc:SAML:2.0:bindings:SOAP"
      Location="https://idp.example.org/SAML2/ArtifactResolution"/>
  <md:NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress</md:NameIDFormat>
  <md:NameIDFormat>urn:oasis:names:tc:SAML:2.0:nameid-format:transient</md:NameIDFormat>
  <md:SingleSignOnService
      Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
      Location="https://idp.example.org/SAML2/SSO/Redirect"/>
  <md:SingleSignOnService
      Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
      Location="https://idp.example.org/SAML2/SSO/POST"/>
  <md:SingleSignOnService
      Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Artifact"
      Location="https://idp.example.org/SAML2/Artifact"/>
  <saml:Attribute
      NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri"
      Name="urn:oid:1.3.6.1.4.1.5923.1.1.1.1"
      FriendlyName="eduPersonAffiliation">
    <saml:AttributeValue>member</saml:AttributeValue>
    <saml:AttributeValue>student</saml:AttributeValue>
    <saml:AttributeValue>faculty</saml:AttributeValue>
    <saml:AttributeValue>employee</saml:AttributeValue>
    <saml:AttributeValue>staff</saml:AttributeValue>
  </saml:Attribute>
</md:IDPSSODescriptor>
The previous metadata element describes the SSO service at the identity provider. Note the following details about this element:
The identity provider software is configured with a private SAML signing key and/or a private back-channel TLS key. The corresponding public key is included in the <md:KeyDescriptor use="signing"> element in IdP metadata. The key material has been omitted from the key descriptor for brevity.
The Binding attribute of the <md:ArtifactResolutionService> element indicates that the SAML SOAP binding (SAMLBind) should be used for artifact resolution.
The Location attribute of the <md:ArtifactResolutionService> element is used in step 8 of the "double artifact" profile.
The value of the index attribute of the <md:ArtifactResolutionService> element is used as the EndpointIndex in the construction of a SAML type 0x0004 artifact.
The <md:NameIDFormat> elements indicate what SAML name identifier formats (SAMLCore) the SSO service supports.
The Binding attributes of the <md:SingleSignOnService> elements are standard URIs specified in the SAML 2.0 Binding specification (SAMLBind).
The Location attribute of the <md:SingleSignOnService> element that supports the HTTP POST binding is used in step 2 of the "double POST" profile.
The Location attribute of the <md:SingleSignOnService> element that supports the HTTP Artifact binding is used in step 2 of the "double artifact" profile.
The <saml:Attribute> element describes an attribute that the identity provider is willing to assert (subject to policy). The <saml:AttributeValue> elements enumerate the possible values the attribute may take on.
As noted at the beginning of this section, the values of the Location attributes are used by a service provider to route SAML messages, which minimizes the possibility of a rogue identity provider orchestrating a man-in-the-middle attack.
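The following Python sketch illustrates the kind of lookup a service provider might perform against the IdP metadata above to find the single sign-on endpoint for a chosen binding before issuing an authentication request; the function and file names are assumptions, not part of any SAML specification:

import xml.etree.ElementTree as ET

MD = "urn:oasis:names:tc:SAML:2.0:metadata"
HTTP_REDIRECT = "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"

def sso_location(metadata_path, binding=HTTP_REDIRECT):
    """Return the <md:SingleSignOnService> Location supporting the given binding."""
    root = ET.parse(metadata_path).getroot()
    for endpoint in root.iter(f"{{{MD}}}SingleSignOnService"):
        if endpoint.get("Binding") == binding:
            return endpoint.get("Location")
    raise LookupError("identity provider does not support binding " + binding)

# For the metadata above this returns "https://idp.example.org/SAML2/SSO/Redirect".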
Service provider metadata
Like the identity provider, a service provider publishes data about itself in an <md:EntityDescriptor> element:
<md:EntityDescriptor entityID="https://sp.example.com/SAML2" validUntil="2013-03-22T23:00:00Z"
xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<!-- insert ds:Signature element (omitted) -->
<!-- insert md:SPSSODescriptor element (see below) -->
<md:Organization>
<md:OrganizationName xml:lang="en">Some Commercial Vendor of California</md:OrganizationName>
<md:OrganizationDisplayName xml:lang="en">Some Commercial Vendor</md:OrganizationDisplayName>
<md:OrganizationURL xml:lang="en">https://www.example.com/</md:OrganizationURL>
</md:Organization>
<md:ContactPerson contactType="technical">
<md:SurName>SAML Technical Support</md:SurName>
<md:EmailAddress>mailto:[email protected]</md:EmailAddress>
</md:ContactPerson>
</md:EntityDescriptor>
Note the following details about this entity descriptor:
The entityID attribute is the unique identifier of the entity.
The validUntil attribute gives the expiration date of the metadata.
The <ds:Signature> element (which has been omitted for simplicity) contains a digital signature that ensures the authenticity and integrity of the metadata.
The organization identified in the <md:Organization> element is "responsible for the entity" described by the entity descriptor (section 2.3.2 of SAMLMeta).
The contact information in the <md:ContactPerson> element identifies a technical contact responsible for the entity. Multiple contacts and contact types are possible. See section 2.3.2.2 of SAMLMeta.
By definition, a service provider manages an assertion consumer service that supports the SAML Web Browser SSO profile specified in SAMLProf. See, for example, the service provider described in the <md:SPSSODescriptor> element shown in the next section.
Assertion consumer service metadata
The assertion consumer service is contained in an <md:SPSSODescriptor> element:
<md:SPSSODescriptor
    protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
  <md:KeyDescriptor use="signing">
    <ds:KeyInfo>...</ds:KeyInfo>
  </md:KeyDescriptor>
  <md:KeyDescriptor use="encryption">
    <ds:KeyInfo>...</ds:KeyInfo>
  </md:KeyDescriptor>
  <md:ArtifactResolutionService isDefault="true" index="0"
      Binding="urn:oasis:names:tc:SAML:2.0:bindings:SOAP"
      Location="https://sp.example.com/SAML2/ArtifactResolution"/>
  <md:NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress</md:NameIDFormat>
  <md:NameIDFormat>urn:oasis:names:tc:SAML:2.0:nameid-format:transient</md:NameIDFormat>
  <md:AssertionConsumerService isDefault="true" index="0"
      Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
      Location="https://sp.example.com/SAML2/SSO/POST"/>
  <md:AssertionConsumerService index="1"
      Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Artifact"
      Location="https://sp.example.com/SAML2/Artifact"/>
  <md:AttributeConsumingService isDefault="true" index="1">
    <md:ServiceName xml:lang="en">Service Provider Portal</md:ServiceName>
    <md:RequestedAttribute
        NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri"
        Name="urn:oid:1.3.6.1.4.1.5923.1.1.1.1"
        FriendlyName="eduPersonAffiliation">
    </md:RequestedAttribute>
  </md:AttributeConsumingService>
</md:SPSSODescriptor>
Note the following details about the <md:SPSSODescriptor> metadata element:
The service provider software is configured with a private SAML signing key and/or a private back-channel TLS key. The corresponding public key is included in the <md:KeyDescriptor use="signing"> element in SP metadata. The key material has been omitted from the key descriptor for brevity.
Likewise the service provider software is configured with a private SAML decryption key. A public SAML encryption key is included in the <md:KeyDescriptor use="encryption"> element in SP metadata. The key material has been omitted from the key descriptor for brevity.
The index attribute of an <md:AssertionConsumerService> element is used as the value of the AssertionConsumerServiceIndex attribute in a <samlp:AuthnRequest> element.
The Binding attributes of the <md:AssertionConsumerService> elements are standard URIs specified in the SAML 2.0 Binding specification (SAMLBind).
The Location attribute of the <md:AssertionConsumerService> element that supports the HTTP POST binding (index="0") is used in step 4 of the "double POST" profile.
The Location attribute of the <md:AssertionConsumerService> element that supports the HTTP Artifact binding (index="1") is used in step 6 of the "double artifact" profile.
The <md:AttributeConsumingService> element is used by the identity provider to formulate an <saml:AttributeStatement> element that is pushed to the service provider in conjunction with Web Browser SSO.
The index attribute of the <md:AttributeConsumingService> element is used as the value of the AttributeConsumingServiceIndex attribute in a <samlp:AuthnRequest> element.
As noted at the beginning of this section, the values of the Location attributes are used by an identity provider to route SAML messages, which minimizes the possibility of a rogue service provider orchestrating a man-in-the-middle attack.
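Analogously, an identity provider that receives an AssertionConsumerServiceIndex in a <samlp:AuthnRequest> might resolve it against the SP metadata above along the lines of the following Python sketch; the helper and file names are illustrative assumptions, not part of the SAML specifications:

import xml.etree.ElementTree as ET

MD = "urn:oasis:names:tc:SAML:2.0:metadata"

def assertion_consumer_service(metadata_path, index=None):
    """Return (Binding, Location) for the requested index, or the default endpoint."""
    root = ET.parse(metadata_path).getroot()
    default = None
    for acs in root.iter(f"{{{MD}}}AssertionConsumerService"):
        if index is not None and acs.get("index") == str(index):
            return acs.get("Binding"), acs.get("Location")
        if default is None or acs.get("isDefault") == "true":
            default = (acs.get("Binding"), acs.get("Location"))
    if index is not None:
        raise LookupError("no <md:AssertionConsumerService> with index %s" % index)
    return default

# assertion_consumer_service("sp-metadata.xml", index=1) returns the HTTP-Artifact
# endpoint, whose Location is "https://sp.example.com/SAML2/Artifact".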
Metadata aggregates
In the previous examples, each <md:EntityDescriptor> element is shown to be digitally signed. In practice, however, multiple <md:EntityDescriptor> elements are grouped together under an <md:EntitiesDescriptor> element with a single digital signature over the entire aggregate:
<md:EntitiesDescriptor validUntil="2013-03-22T23:00:00Z"
    xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
  <!-- insert ds:Signature element (omitted) -->
  <md:EntityDescriptor entityID="https://idp.example.org/SAML2">
    ...
  </md:EntityDescriptor>
  <md:EntityDescriptor entityID="https://sp.example.com/SAML2">
    ...
  </md:EntityDescriptor>
</md:EntitiesDescriptor>
Note the following details about the above <md:EntitiesDescriptor> element:
The digital signature (which has been omitted for brevity) covers the entire aggregate.
The validUntil XML attribute has been elevated to the parent element, implying that the expiration date applies to each child element.
The XML namespace declarations have been elevated to the parent element to avoid redundant namespace declarations.
Typically metadata aggregates are published by trusted third parties called federations who vouch for the integrity of all the metadata in the aggregate. Note that metadata aggregates can be very large, composed of hundreds or even thousands of entities per aggregate.
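A consumer of such an aggregate might index the entities by entityID and apply the inherited validUntil to all of them, roughly as in the following Python sketch; verification of the enclosing signature is only noted in a comment, and the file name is an assumption:

import xml.etree.ElementTree as ET

MD = "urn:oasis:names:tc:SAML:2.0:metadata"

def index_aggregate(path):
    """Map entityID -> <md:EntityDescriptor> element for a metadata aggregate."""
    root = ET.parse(path).getroot()
    if root.tag != f"{{{MD}}}EntitiesDescriptor":
        raise ValueError("not an <md:EntitiesDescriptor> aggregate")
    # In production the enclosing <ds:Signature> would be verified and the
    # inherited validUntil checked before any entity in the aggregate is trusted.
    entities = {e.get("entityID"): e for e in root.findall(f"{{{MD}}}EntityDescriptor")}
    return root.get("validUntil"), entities

# valid_until, entities = index_aggregate("federation-metadata.xml")
# idp = entities["https://idp.example.org/SAML2"]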
See also
Security Assertion Markup Language
SAML 1.1
SAML metadata
SAML-based products and services
OpenID Connect
References
Primary references:
Secondary references:
P. Mishra et al. Conformance Requirements for the OASIS Security Assertion Markup Language (SAML) V2.0 – Errata Composite. Working Draft 04, 1 December 2009. Document ID sstc-saml-conformance-errata-2.0-wd-04 https://www.oasis-open.org/committees/download.php/35393/sstc-saml-conformance-errata-2.0-wd-04-diff.pdf
N. Ragouzis et al., Security Assertion Markup Language (SAML) V2.0 Technical Overview. OASIS Committee Draft, March 2008. Document ID sstc-saml-tech-overview-2.0-cd-02 http://www.oasis-open.org/committees/download.php/27819/sstc-saml-tech-overview-2.0-cd-02.pdf
P. Madsen et al., SAML V2.0 Executive Overview. OASIS Committee Draft, April 2005. Document ID sstc-saml-tech-overview-2.0-cd-01-2col http://www.oasis-open.org/committees/download.php/13525/sstc-saml-exec-overview-2.0-cd-01-2col.pdf
J. Kemp et al. Authentication Context for the OASIS Security Assertion Markup Language (SAML) V2.0. OASIS Standard, March 2005. Document ID saml-authn-context-2.0-os http://docs.oasis-open.org/security/saml/v2.0/saml-authn-context-2.0-os.pdf
F. Hirsch et al. Security and Privacy Considerations for the OASIS Security Assertion Markup Language (SAML) V2.0. OASIS Standard, March 2005. Document ID saml-sec-consider-2.0-os http://docs.oasis-open.org/security/saml/v2.0/saml-sec-consider-2.0-os.pdf
J. Hodges et al. Glossary for the OASIS Security Assertion Markup Language (SAML) V2.0. OASIS Standard, March 2005. Document ID saml-glossary-2.0-os http://docs.oasis-open.org/security/saml/v2.0/saml-glossary-2.0-os.pdf
Deprecated references:
P. Mishra et al. Conformance Requirements for the OASIS Security Assertion Markup Language (SAML) V2.0. OASIS Standard, March 2005. Document ID saml-conformance-2.0-os http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf
S. Cantor et al. Assertions and Protocols for the OASIS Security Assertion Markup Language (SAML) V2.0. OASIS Standard, March 2005. Document ID saml-core-2.0-os http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf
S. Cantor et al. Bindings for the OASIS Security Assertion Markup Language (SAML) V2.0. OASIS Standard, March 2005. Document ID saml-bindings-2.0-os http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf
S. Cantor et al. Profiles for the OASIS Security Assertion Markup Language (SAML) V2.0. OASIS Standard, March 2005. Document ID saml-profiles-2.0-os http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf
S. Cantor et al. Metadata for the OASIS Security Assertion Markup Language (SAML) V2.0. OASIS Standard, March 2005. Document ID saml-metadata-2.0-os http://docs.oasis-open.org/security/saml/v2.0/saml-metadata-2.0-os.pdf
XML-based standards
Computer access control
Identity management
Federated identity
Identity management systems
Metadata standards
Computer security software |
6635172 | https://en.wikipedia.org/wiki/Paradise%20Cafe | Paradise Cafe | "Paradise Cafe" is the 23rd studio album by Japanese singer-songwriter Miyuki Nakajima, released in October 1996. The album includes a new recording of her 1995 chart-topping hit "Wanderer's Song", and also features her own interpretation of "Lie to Me Eternally", which was originally written for the Long Time No See album recorded by Takuro Yoshida.
Following the massive success of the single "Wanderer's Song", which sold over 1 million units, Paradise Cafe immediately earned an RIAJ platinum award for shipments of over 400,000 copies. However, the album itself apparently sold below expectations, staying on the charts only briefly and selling approximately 200,000 units in total.
Track listing
All songs written and composed by Miyuki Nakajima.
"" [2nd ver.] – 5:02
"" – 4:45
"" – 6:00
"Leave Me Alone, Please" – 5:10
"" – 4:15
"" – 4:42
"Singles Bar" – 5:54
"" – 3:47
"" – 5:45
"" – 6:39
"Paradise Cafe " – 5:21
Personnel
Miyuki Nakajima – vocals
Ichizo Seo – keyboards, computer programming
Russ Kunkel – drums
Gregg Bissonette – drums
Atsuo Okubo – drums
Michael Thompson – electric guitar, slide guitar
Dean Parks – electric and acoustic guitar
Tsuyoshi Kon – electric guitar
Nozomi Furukawa – electric guitar
Shuji Nakamura – gut guitar, 12 string acoustic guitar
Neil Stubenhaus – electric bass
Abraham Laboriel – electric bass
Chiharu Mikuzuki – electric bass
Chuck Domanico – upright bass
Keishi Urata – computer programming
Nobuhiko Nakayama – computer programming
Manabu Ogasawara – computer programming
Jon Giltin – acoustic piano, keyboards, hammond organ
Yasuharu Nakanishi – keyboards, synth bass, acoustic piano
Elton Nagata – keyboards
Toshihiko Furumura – alto sax, tenor sax
Julia Waters – background vocals
Maxine Waters – background vocals
Oren Waters – background vocals
Walfredo Reyes Jr – percussion
Nobu Saito – percussion
Additional personnel
Kiyoshi Hiyama – Background Vocals
Yasuhiro Kido – Background Vocals
Motoyoshi Iwasaki – Background Vocals
Keiko Yamada – Background Vocals
Mai Yamane – Background Vocals
Akira Yamane – Background Vocals
David Campbell – String Arrangement and Conducting
Suzie Katayama – String Contractor
Sid Page – Concertmaster
Ryoichi Fujimori – Cello
Production
Composer, writer, Producer and Performer: Miyuki Nakajima
Producer and Arranger: Ichizo Seo
Arranger: David Campbell
Recording Engineer and Mixer: David Thoener, Tad Gotoh
Additional Engineer: Wyn Davis, Takanobu Ichikawa, Yuta Uematsu, Chizuru Yamada
Assistant Engineer: Jennifer Monner, Jeff DeMorris, Milton Chan, Hideki Odera, Kensuke Miura, Yukiho Wada
Mixer: Joe Chiccarelli
Assistant Mixer: Chadd Munsey, Hiroshi Tokunaga
Digital Edit: Rieko Shimoji
A & R: Kohichi Suzuki
Production Supervisor: Michio Suzuki
Assistant for Producer: Tomoo Satoh
Music Coordinator: Ruriko Sakumi Duer, Kohji Kimura, Fumio Miyata, Tomoko Takaya
Photographer: Jin Tamura, Jeffrey Bender
Designer: Hirofumi Arai
Costume Coordination: Takeshi Hazama
Hair & Make-Up: Noriko Izumisawa
Location Coordinator: Chikako DeZonia, Dean Ichiyanagi
Artist Management: Kohji Suzuki, Kohichi Okazaki
Management Desk: Atsuko Hayashi
Artist Promotion: Yoshio Kan
Disc Promotion: Tsukihiko Yoshida, Shoko Sone
Sales Promoter: Ikuko Ishigame
General Management: Takahiro Uno
DAD: Genichi Kawakami
Mastering: Tom Baker at Future Disc Systems
Charts
Weekly charts
Limited edition issued on APO-CD
Year-end charts
Certifications
References
Miyuki Nakajima albums
1996 albums
Pony Canyon albums |
6212955 | https://en.wikipedia.org/wiki/Fred%20Kilgour | Fred Kilgour | Frederick Gridley Kilgour (January 6, 1914 – July 31, 2006) was an American librarian and educator known as the founding director of OCLC (Online Computer Library Center), an international computer library network and database. He was its president and executive director from 1967 to 1980.
Biography
Born in Springfield, Massachusetts to Edward Francis and Lillian Piper Kilgour, Kilgour earned a bachelor's degree in chemistry from Harvard College in 1935 and afterward held the position as assistant to the director of Harvard University Library.
In 1940, he married Eleanor Margaret Beach, who had graduated from Mount Holyoke College and taken a job at the Harvard College Library, where they met.
From 1942 to 1945, Kilgour served during World War II as a lieutenant in the U.S. Naval Reserve and was Executive Secretary and Acting Chairman of the U.S. government's Interdepartmental Committee for the Acquisition of Foreign Publications (IDC), which developed a system for obtaining publications from enemy and enemy-occupied areas. This organization of 150 persons in outposts around the world microfilmed newspapers and other printed information items and sent them back to Washington, DC.
An example of the kind of intelligence gathered was the Japanese "News for Sailors" reports that listed new minefields. These reports were sent from Washington, D.C. directly to Pearl Harbor and U.S. submarines in the Western Pacific. Kilgour received the Legion of Merit for his intelligence work in 1945. He worked at the United States Department of State as deputy director of the Office of Intelligence Collection and Dissemination from 1946 to 1948.
In 1948, he was named Librarian of the Yale Medical Library. At Yale he was also a lecturer in the history of science and technology and published many scholarly articles on those topics. While running the Yale University Medical Library, Kilgour began publishing studies and articles on library use and effectiveness. He asked his staff to collect empirical data, such as use of books and journals by categories of borrowers to guide selection and retention of titles. He viewed the library "not merely as a depository of knowledge," but as "an instrument of education."
At the dawn of library automation in the early 1970s, he was a member of the Library and Information Technology Association (LITA), an organization within the American Library Association, where he was president from 1973 to 1975. He joined the Ohio College Association in 1967 to develop OCLC (Online Computer Library Center) and led the creation of a library network that today links 72,000 institutions in 170 countries. It first amassed the catalogs of 54 academic libraries in Ohio, launching in 1971 and expanding to non-Ohio libraries in 1977.
Kilgour was president of OCLC from 1967 to 1980, presiding over its rapid growth from an intrastate network to an international network. In addition to creating the WorldCat database, he developed an online interlibrary loan system that libraries used to arrange nearly 10 million loans annually in 2005.
Today, OCLC has a staff of 1,200 and offices in seven countries. Its mission remains the same: to further access to the world's information and reduce library costs. In 1981 Kilgour stepped down from management but continued to serve on the OCLC Board of Trustees until 1995.
He was a distinguished research professor emeritus at the University of North Carolina at Chapel Hill's School of Information and Library Science. He taught there from 1990, retiring in 2004.
He died in 2006 at the age of 92, having lived in Chapel Hill, North Carolina, since 1990. He was survived by his wife and their daughters, Martha Kilgour and Alison Kilgour of New York City, and Meredith Kilgour Perdiew of North Edison, New Jersey; and two grandchildren and five great-grandchildren.
OCLC
Based in Dublin, Ohio, OCLC and its member libraries cooperatively produce and maintain WorldCat—the OCLC Online Union Catalog, the largest OPAC in the world. Under Kilgour's leadership, the nonprofit corporation introduced a shared cataloging system in 1971 for 54 Ohio academic libraries. WorldCat contains holding records from most public and private libraries worldwide. WorldCat is available through many libraries and university computer networks.
In 1971, after four years of development, OCLC introduced its online shared cataloging system, which would achieve dramatic cost savings for libraries. For example, in the first year of system use, the Alden Library at Ohio University was able to increase the number of books it cataloged by a third, while reducing its staff by 17 positions. Word of this new idea spread on campuses across the country, starting an online revolution in libraries that continues to this day.
The shared cataloging system and database that Kilgour devised made it unnecessary for more than one library to originally catalog an item. Libraries would either use the cataloging information that already existed in the database, or they would put it in for other libraries to use. The shared catalog also provided information about materials in libraries in the rest of the network. For the first time, a user in one library could easily find out what was held in another library. The network quickly grew outside Ohio to all 50 states and then internationally.
Because of his contributions to librarianship, OCLC and LITA jointly sponsor an award named after Kilgour. Inaugurated in 1998 and awarded annually, it highlights research on information technology, with a focus on work that "shows the promise of having a positive and substantive impact on any aspect of the publication, storage, retrieval, and dissemination of information, or the processes by which information and data are manipulated and managed."
Legacy
Kilgour is widely recognized as one of the leading figures in 20th century librarianship for his work in using computer networks to increase access to information in libraries around the world. He was among the earliest proponents of adapting computer technology to library processes.
The database that Kilgour created, now called WorldCat, is regarded as the world's largest computerized library catalog, including not only entries from large institutions such as the Library of Congress, the British Library, the Russian State Library and Singapore, but also from small public libraries, art museums and historical societies. It contains descriptions of library materials and their locations. More recently, the database provides access to the electronic full text of articles, books as well as images and sound recordings. It spans 4,000 years of recorded knowledge. It contains more than 70 million records and one billion location listings. Every 10 seconds a library adds a new record. It is available on the World Wide Web.
Inspired by Ralph H. Parker's 1936 work using punched cards for library automation, Kilgour soon began experimenting in automating library procedures at the Harvard University Library, primarily with the use of punched cards for a circulation system. He also studied under George Sarton, a pioneer in the new discipline of the history of science, and began publishing scholarly papers. He also launched a project to build a collection of microfilmed foreign newspapers to help scholars have access to newspapers from abroad. This activity quickly came to the attention of government officials in Washington, D.C.
In 1961, he was one of the leaders in the development of a prototype computerized library catalog system for the medical libraries at Columbia, Harvard and Yale Universities that was funded by the National Science Foundation. In 1965, Kilgour was named associate librarian for research and development at Yale University. He continued to conduct experiments in library automation and to promote their potential benefits in the professional literature.
In his professional writings, Kilgour was one of the earliest proponents of applying computerization to librarianship. He pointed out that the explosion of research information was placing new demands on libraries to furnish information completely and rapidly. He advocated the use of the computer to eliminate human repetitive tasks from library procedures, such as catalog card production. He recognized nearly 40 years ago the potential of linking libraries in computer networks to create economies of scale and generate "network effects" that would increase the value of the network as more participants were added.
OCLC has proved the feasibility of nationwide sharing of catalog-record creation and has helped libraries to maintain and to enhance the quality and speed of service while achieving cost control—and even cost reduction—in the face of severely reduced funding. This achievement may be the single greatest contribution to national networking in the United States. His work will have a lasting impact on the field of information science.
The main office building on the OCLC campus is named after Kilgour. The main entrance road to the OCLC campus is named Kilgour Place.
OCLC created an annual award in Kilgour's name, the Kilgour Award, which is given to a researcher who has contributed to advances in information science.
Awards
In 1990, he was named Distinguished Research Professor of the School of Information and Library Science, the University of North Carolina at Chapel Hill, and served on the faculty until his retirement in 2004.
Kilgour was the author of 205 scholarly papers. He was the founder and first editor of the journal, Information Technology and Libraries. In 1999, Oxford University Press published his book The Evolution of the Book. His other books include The Library of the Medical Institution of Yale College and its Catalogue of 1865 and The Library and Information Science CumIndex.
He received numerous awards from library associations and five honorary doctorates. In 1982, the American Library Association presented him with Honorary Life Membership. The citation read:
In 1979, the American Society for Information Science and Technology gave him the Award of Merit. The citation read:
Works
Frederick G. Kilgour: The Evolution of the Book (New York: Oxford University Press, 1998)
References
External links
Collected Papers of Frederick G. Kilgour
Interlibrary Lending Online, article by Kilgour on work at OCLC and OCLC's contribution to automating the interlibrary loan process
Frederick G. Kilgour Award
Tributes
Tribute page on Frederick G. Kilgour at OCLC
Frederick G. Kilgour 1914-2006 at Scanblog
1914 births
2006 deaths
American librarians
United States Navy personnel of World War II
Harvard University librarians
Harvard College alumni
OCLC people
People from Springfield, Massachusetts
People from Chapel Hill, North Carolina
People from Columbus, Ohio
Recipients of the Legion of Merit
United States Navy officers
Yale University staff
Military personnel from Massachusetts |
60279622 | https://en.wikipedia.org/wiki/Vineet%20Bafna | Vineet Bafna | Vineet Bafna is an Indian bioinformatician, a professor of computer science, and the director of the bioinformatics program at the University of California, San Diego. He was elected a Fellow of the International Society for Computational Biology (ISCB) in 2019 for outstanding contributions to the fields of computational biology and bioinformatics. He has also been a member of the Research in Computational Molecular Biology (RECOMB) conference steering committee.
Career and research
Bafna received his Ph.D. in computer science from Pennsylvania State University in 1994 under the supervision of Pavel Pevzner, and was a post-doctoral researcher at the Center for Discrete Mathematics and Theoretical Computer Science. From 1999 to 2002, he worked at Celera Genomics, ultimately as director of informatics research, where he was part of the team (along with J. Craig Venter and Gene Myers) that assembled and annotated the human genome in 2001. He was also a member of the team that published the first diploid (six-billion-letter) genome of an individual human in 2007.
He joined the faculty of the Department of Computer Science and Engineering at the University of California, San Diego in 2003, where he now serves as professor and director of the bioinformatics program.
References
University of California, San Diego faculty
Living people
Fellows of the International Society for Computational Biology
Year of birth missing (living people)
Indian Institutes of Technology alumni
Indian bioinformaticians
Indian expatriate academics
Indian expatriates in the United States |
303251 | https://en.wikipedia.org/wiki/IP%20address%20blocking | IP address blocking | IP address blocking, or IP banning, is a configuration of a network service that blocks requests from hosts with certain IP addresses. IP address blocking is commonly used to protect against brute force attacks and to prevent access by a disruptive address. IP address blocking can be used to restrict access to or from a particular geographic area, for example, the syndication of content to a specific region through the use of Internet geolocation and blocking.
IP address blocking is possible on many systems using a hosts file. Unix-like operating systems commonly implement IP address blocking using a TCP wrapper.
Proxy servers and other methods can be used to bypass the blocking of traffic from IP addresses. However, anti-proxy strategies are available, such as DHCP lease renewal.
How it works
Every device connected to the Internet is assigned a unique IP address, which is needed to enable devices to communicate with each other. With appropriate software on the host website, the IP address of visitors to the site can be logged and can also be used to determine the visitor's geographical location.
Logging IP addresses allows a site, for example, to determine whether a person has visited it before (for instance, to prevent someone from voting more than once), to monitor their viewing pattern, and to track how long it has been since they last performed any activity on the site (and enforce a time-out limit), among other things.
Knowing the visitor's geolocation indicates, among other things, the visitor's country. In some cases, requests from or responses to a certain country are blocked entirely. Geo-blocking has been used, for example, to block shows in certain countries, such as the censorship of shows deemed inappropriate, which is especially frequent in places such as China.
Internet users may circumvent geo-blocking and censorship and protect personal identity and location to stay anonymous on the internet using a VPN connection.
On a website, an IP address block can prevent a disruptive address from access, though a warning and/or account block may be used first. Dynamic allocation of IP addresses by ISPs can complicate incoming IP address blocking, rendering it difficult to block a specific user without blocking many IP addresses (blocks of IP address ranges), thereby creating collateral damage.
Implementations
Unix-like operating systems commonly implement IP address blocking using a TCP wrapper, configured by host access control files /etc/hosts.deny and /etc/hosts.allow.
Companies and schools offering remote user access use Linux programs such as DenyHosts or Fail2ban to protect against unauthorised access while still allowing permitted remote access. IP address blocking is also used for Internet censorship.
IP address blocking is possible on many systems using a hosts file, which is a simple text file containing hostnames and IP addresses. Hosts files are used by many operating systems, including Microsoft Windows, Linux, Android, and OS X.
Circumvention
Proxy servers and other methods can be used to bypass the blocking of traffic from IP addresses. However, anti-proxy strategies are available. Consumer-grade internet routers can sometimes obtain a new public IP address on demand from the internet service provider using DHCP lease renewal to circumvent individual IP address blocks, but this can be countered by blocking the range of IP addresses from which the internet service provider is assigning new IP addresses, which is usually a shared IP address prefix. However, this may affect legitimate users of the same internet service provider who have IP addresses in the same range, inadvertently amounting to a denial-of-service attack against them.
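A minimal Python sketch of the range-based blocking described above, using the standard library ipaddress module; the prefixes shown are documentation ranges, not real ISP allocations:

import ipaddress

BLOCKED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # an IPv4 prefix (documentation range)
    ipaddress.ip_network("2001:db8::/32"),    # an IPv6 prefix (documentation range)
]

def is_blocked(client_ip: str) -> bool:
    """Return True if the client address falls inside any blocked prefix."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in network for network in BLOCKED_NETWORKS)

# is_blocked("203.0.113.77") -> True
# is_blocked("198.51.100.9") -> False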
In a 2013 United States court ruling in the case Craigslist v. 3Taps, US federal judge Charles R. Breyer held that circumventing an address block to access a website is a violation of the Computer Fraud and Abuse Act (CFAA) for "unauthorized access", punishable by civil damages.
See also
Block (Internet)
Content-control software
References
External links
Internet security
Blacklisting |
21240519 | https://en.wikipedia.org/wiki/Mintty | Mintty | mintty is a free and open source terminal emulator for Cygwin, the Unix-like environment for Windows. It features a native Windows user interface and does not require a display server; its terminal emulation is aimed to be compatible with xterm.
Mintty is based on the terminal emulation and Windows frontend parts of PuTTY, but improves on them in a number of ways, particularly regarding xterm compatibility. It is written in C. The POSIX API provided by Cygwin is used to communicate with processes running within mintty, while its user interface is implemented using the Windows API. The program icon comes from KDE's Konsole.
Towards the end of 2011, mintty became Cygwin's default terminal. Advantages over Cygwin's previous default console include a more flexible user interface and closer adherence to Unix standards and conventions. Since mintty is not based on the standard Windows console, however, programs written specifically for the Windows console do not work correctly in it. It is also available for MSYS (a more minimal Unix environment forked from Cygwin).
Originally, the project's name was styled "MinTTY", following the example of PuTTY, but it was later restyled to "mintty", which was felt to better suit the project's minimalist approach.
References
External links
Free software programmed in C
Free terminal emulators |
799662 | https://en.wikipedia.org/wiki/Dungeon%20Master%20%28video%20game%29 | Dungeon Master (video game) | Dungeon Master is a role-playing video game featuring a pseudo-3D first-person perspective. It was developed and published by FTL Games for the Atari ST in 1987, with almost identical Amiga and PC (DOS) ports following in 1988 and 1992, respectively.
Dungeon Master sold 40,000 copies in its year of release alone, and went on to become the ST's best-selling game of all time. The game became the prototype for the genre of 3D dungeon crawlers, with notable clones such as Eye of the Beholder.
Gameplay
In contrast to the traditional turn-based approach that was, in 1987, most common, Dungeon Master added real-time combat elements (akin to Active Time Battle). Other factors in immersion were the use of sound effects to indicate when a creature was nearby, and (primitive) dynamic lighting. Abstract Dungeons and Dragons style experience points and levels were eschewed in favor of a system where the characters' skills were improved directly via using them. Dungeon Master was not the first game to introduce these features. Dungeons of Daggorath for the TRS-80 Color Computer first employed them in 1982. Dungeon Master was, however, responsible for popularizing these elements. Other features of Dungeon Master included allowing players to directly manipulate objects and the environment by clicking the mouse in the enlarged first-person view. It also introduced some novel control methods including the spell casting system, which involved learning sequences of runes which represented the form and function of a spell's effect. For example, a fireball spell was created by mixing the fire symbol with the wing symbol.
While many previous games such as Alternate Reality: The Dungeon, The Bard's Tale, Ultima, and Wizardry offered Dungeons & Dragons-style role-playing, Dungeon Master established several new standards for role-playing video games and first-person video games in general, such as the paper doll interface.
As Theron, the player cannot progress past the first section of the game until they have selected up to four champions from a small dungeon containing 24 mirrors, each containing a frozen champion. The frozen champions are based upon a variety of fantasy archetypes to allow diversity within the player's party.
Plot
Many champions have been sent into the dungeon on a quest to recover the firestaff of Librasulus (the Grey Lord). With the firestaff, Librasulus can take physical form again and defeat Lord Chaos. The player is Theron, the apprentice of the Grey Lord, who goes into the dungeon with the task of resurrecting four champions and guiding them through the dungeon to find the firestaff and defeat Lord Chaos.
If the player finds the firestaff and uses it to defeat Lord Chaos, this is the true ending of the game, but there is also an alternative ending in which the player finds the firestaff and then leaves the dungeon without destroying Lord Chaos.
Development
Dungeon Master was originally started under the name Crystal Dragon, coded in Pascal and targeting the Apple II platform. Doug Bell and Andy Jaros (artwork) began development at their studio PVC Dragon before joining FTL Games in 1983. The game was finished there in the C programming language and first published for the Atari ST in 1987. A slightly updated Amiga version was released the following year, which was the first video game to use 3D sound effects.
Dungeon Master was later ported to many platforms, including the PC, Apple IIGS, TurboGrafx-CD, SNES, Sharp X68000, PC-9801 and FM Towns. The game was also translated from English into German, French, Japanese, Chinese and Korean.
According to "The Definitive CDTV Retrospective: Part II" by Peter Olafson, Dungeon Master was ported to the Amiga CDTV but this version was never completed because FTL could not obtain reliable information from Commodore about saving games to memory cards.
Dungeon Master was also ported to Macintosh but never released.
There exists a prototype for the Atari Lynx under the name Dungeon Slayers.
The packaging cover art was designed and illustrated by David R. Darrow, for which Andy Jaros posed as the leftmost character pulling on the torch. The woman in the scene was Darrow's wife, Andrea, and the muscular man in the background is unknown but was hired by Darrow from a local fitness club. The painting itself is 25 to 30 inches high and does not contain the word "Master". Darrow's painting portrays a scene from the prologue in the manual for Dungeon Master. It shows the three (or four) main characters' last few minutes alive, and is a portrayal of the player's challenge to defeat the antagonist, Lord Chaos. The heroes in the painting are Halk the Barbarian, Syra Child of Nature, Alex Ander – and Nabi the Prophet, who has been reduced to a pile of skulls.
A soundtrack album, titled Dungeon Master: The Album, was released later. This album featured music composed by Darrell Harvey, Rex Baca, and Kip Martin. The original ST version and its faithful Amiga and PC ports contain no music. The album features music composed for the FM Towns game, as well as FM Towns version of Chaos Strikes Back, and some original tracks that were inspired by the games.
Reception
Dungeon Master debuted on 15 December 1987 on the Atari ST, and by early 1988 was a strong seller, becoming the best-selling game for the computer of all time; Bell estimated that at one point more than half of all Atari ST owners had purchased the game. Because of FTL's sophisticated copy protection, many who otherwise pirated their software had to purchase Dungeon Master to play the game. The Amiga version was the first prominent game to require 1 MB of RAM, likely causing many to purchase additional memory; at least one manufacturer of Amiga memory bundled Dungeon Master with its memory-expansion kit. As with Wizardry, many others offered for sale strategy guides, game trainers, and map editors, competing with FTL's own hint book.
Hosea Battles Jr. of Computer Gaming World in 1988 praised the attention to detail in the dungeons' graphics, allowing players to "practically feel the damp chill of the dungeons portrayed", as well as those of the monsters, including the multiple facial expressions on the ogres. He said the control system works "extremely well" and "one's adrenaline really flows because the game is in real-time." Battles also praised the extensive use of sound effects, uncommon to RPGs. He complained that the manual does not describe monsters or their attributes, of a "frustrating" shortage of food and water replenishments and that the lack of a map makes the game "extremely difficult". Battles called the game "fantastic" and said "It is a welcome addition to any fantasy player's library. Those who want a good fantasy/role-playing game will love this one." Scorpia stated in the magazine in 1992 that the newly released IBM PC version's graphics "are surprisingly good, all things considered" despite the game's age, but wrote that "No endgame has ever given me so much trouble or frustration". Although she believed that the game "is still eminently worth playing, even years later[, and] still has something to offer the seasoned adventurer", because of the endgame Scorpia "can't give it a blanket recommendation". In 1993 she stated that "the game still holds up well after seven years, even graphically, and is worth playing today", but because of the ending was "not for the easily-frustrated".
Computer and Video Games in 1988 called the story a "cliché" but praised the graphics, sound and controls. The reviewer said Dungeon Master is an example of a title which "changes the way we think about games" and a "must for all roleplayers". Antic called the game as "revolutionary" as Zork and Flight Simulator II, citing "spectacular" graphics and stating that the game was "almost worth buying for the sound-effects alone". Despite the "commonplace" story "where once again, an Evil Wizard has taken over control of the world", the magazine advised readers to "buy this game". Advanced Computing Entertainment said the graphics are "largely repetitive" but "wonderfully drawn" and wrote the "Sound is sparse but the effects are great." The reviewer called it a "thrilling game with plenty in it to keep you searching, fighting and pondering for a long time." He summarised the game as a "huge, immensely playable and very atmospheric mixture of role-playing and adventure. If you've been looking for a real-time role-playing game that manages to keep you interested for long periods of time, then your prayers have been answered." The Games Machine wrote: "the innovative character selection system and icon display are both neatly implemented and quick to use", praised the "superb" atmosphere - enhanced by the spare but apt sound effects - and called the game universe "believable because of its details". The magazine praised the color and clarity of the monster graphics and the shading of the surroundings. It called the story and setting a "wholly engrossing scenario [which] creates a complete world which can be manipulated at will: its depth fully reflects the two years it took to program it. The presentation - an interesting and evocative novella neither too involved to prove turbid not too short to be unhelpful - is superb." The reviewer summarised: "Dungeon Master is a role-player's dream, but capable of providing a good deal of enjoyment for any ST owner." STart told readers to "be prepared to shed every preconception you ever had about computer games. This is Dungeon Master". Noting the strong sales, the reviewer called it "a true video game phenomenon" and reported that "not talking to my boyfriend for a week because he lost our master spell list was certainly not an overreaction".
Kati Hamza of Zzap!64 said of the Amiga version: "The first-person perspective ensures an incredibly realistic atmosphere - you just can't help really getting into the feeling of walking through damp echoing caverns looking for ghosts." The reviewer also said: "The puzzles are incredibly devious, the spell system is really flexible and the need to practise magic and spells gives the whole thing that extra-special depth." The reviewer asserted: "This has to be the most amazing game of all-time, anywhere, ever". In the same issue Gordon Houghton said: "This is just about the most incredible game I've ever seen. When you pick it up you find you lose whole days of your life." He said: "The best time to play it is late at night in a room by yourself - it's guaranteed to scare the life out of you. It's like Gauntlet in 3D, but about a hundred times better. If you enjoy arcade adventures, RPGs or combat games, buy it: it's the perfect combination of all three." Reviewer Maff Evans professed to be little enthused by RPGs generally but said "I know a brilliant game when I see one and this is a brilliant game." He praised the scares delivered by ambushing monsters and said "you'd have to be deaf, dumb and blind not to be affected by the atmosphere". The magazine complained that saving games is "a bit laboured" but praised the "extremely detailed and accessible" controls, "interactive, detailed and extremely atmospheric" scenery and said the clarity of the graphics made the game an unusually accessible RPG. It summarized: "you'll be playing for months" and said Dungeon Master was "The best game we've ever seen".
Also reviewing the Amiga version, Graham Kinsey of Amazing Computing wrote that Dungeon Master "completely blows away any other RPG on the Amiga market today, and may do for some time". Dave Eriksson of Amiga Computing praised the "brilliant" graphics, sound effects and replay-value and said "Dungeon Master is the most stunning role-playing game I have seen on the Amiga". Antic's Amiga Plus felt the game "captures the essence of Dungeons & Dragons role-playing games". The reviewer praised the "dazzling" graphics, called the user-friendly controls "a real joy" and said the game was the "best graphics adventure for the Amiga to date." Your Amiga called the sound "extremely well done" and said the "most striking feature of the game is the attention to detail". The reviewer called the game "amazing" and recommended: "If you never buy another game, by [sic] this one."
Andy Smith of Advanced Computing Entertainment several months after its release called Dungeon Master "one of the all time classics" and said "What makes Dungeon Master really special (apart from the marvellous 3D graphics and eerie sound effects) are the puzzles". The game was reviewed in 1988 in Dragon #136 by Hartley, Patricia, and Kirk Lesser in "The Role of Computers" column. The reviewers gave the game 4½ out of 5 stars. The Lessers reviewed the PC/MS-DOS version in 1993 in Dragon #195, giving this version 5 stars. In 1997, ten years after release, Dungeon Master again received a score of 5 out of 5 stars in a review.
Awards
Dungeon Master received the Special Award for Artistic Achievement from Computer Gaming World in 1988. It achieved the top place in the magazine's game rankings system, and was entered into its hall of fame in November 1989. In 1990 the game received the second-highest number of votes in a survey of Computer Gaming World readers' "All-Time Favorites". In 1996, the magazine named Dungeon Master the 49th best game ever.
The following is a comprehensive list of other awards received by the game.
Special Award for Artistic Achievement awarded in 1988 by Computer Gaming World
Adventure Game of the Year, 1988 — UK Software Industry Awards
Best Selling Atari ST Title, 1988 — UK Software Industry Awards
Best Role Playing Game, 1988 — PowerPlay Magazine (German)
Best Role Playing Game, 1988 — Tilt Magazine
Best Sound Effects, 1988 — Tilt Magazine
Game of the Year, 1988 — Computer Play Magazine
Best Atari ST Game, 1988 — Computer Play Magazine
Game of the Year, 1988 — 4th Generation Magazine (French)
"Golden Sword" Award, 1988 — The Adventurer's Club of the UK
Best Role Playing Game, 1988 — The Adventurer's Club of the UK
"Beastie Award", 1988 — Dragon Magazine
Best Atari ST Title, 1988 — Dragon Magazine
Best Game, 1989 — Amiga World Magazine
Best Role Playing Game, 1989 — Amiga World Magazine
Best Amiga Game, 1989 — Game Player's Magazine
Best Amiga Game, 1989 — Datormagazin (Swedish)
"Beastie Award" Best Apple //GS Title, 1989 — Dragon Magazine
Best Game, 1989 — Info Magazine
Best of the Amiga, 1989 — Compute magazine
Inducted as an original member in the Computer Gaming World Hall of Fame in 1989
Designated as one of the 100 Best Games by PowerPlay Magazine (German, January 1990)
16th best game of all time in Amiga Power (May 1991)
Sequels and legacy
While Dungeon Master itself was inspired by early Ultima games, it amazed Ultima developer Origin Systems' employees; Origin founder Richard Garriott said that he was "ecstatic" at discovering the "neat new things I could do" in the game. It influenced Ultima VI's graphical user interface and seamless map, and the later Ultima Underworld. Game journalist Niko Nirvi wrote that no 3D role-playing title before Ultima Underworld (1992) could challenge Dungeon Master as a game.
In 1989, FTL Games released a Dungeon Master sequel, Chaos Strikes Back.
To date, Dungeon Master retains a small but faithful following online, with several fan-made ports and remakes available or in development. A faithful reconstruction of the Atari ST version, called "CSBWin" and released in 2001, received notable attention. Reverse-engineered from the original over six months of work by Paul R. Stevens, the available source code of CSBWin led to many ports for modern platforms such as Windows and Linux. In 2014, Christophe Fontanel released another reverse-engineering project, which aims to recreate all existing versions and ports.
Reviews
Casus Belli #44 (April 1988)
See also
Legend of Grimrock
References
External links
Dungeon Master at Atari Mania
Dungeon Master at the Hall of Light
Tribute to Dungeon Master: A Video Reference at Retro Dream
1987 video games
Amiga games
Apple IIGS games
Atari ST games
Cancelled Atari Lynx games
DOS games
Fantasy video games
First-person party-based dungeon crawler video games
FM Towns games
NEC PC-9801 games
Role-playing video games
Sharp X68000 games
Single-player video games
Super Nintendo Entertainment System games
TurboGrafx-CD games
Video games developed in the United States
Video games scored by Tsukasa Tawada
Video games with expansion packs |
4088214 | https://en.wikipedia.org/wiki/Caller%20ID%20spoofing | Caller ID spoofing | Caller ID spoofing is the practice of causing the telephone network to indicate to the receiver of a call that the originator of the call is a station other than the true originating station. This can lead to a caller ID display showing a phone number different from that of the telephone from which the call was placed.
The term is commonly used to describe situations in which the motivation is considered malicious by the originator.
One effect of the widespread availability of Caller ID spoofing is that, as AARP published in 2019, "you can no longer trust call ID."
History
Caller ID spoofing has been available for years to people with a specialized digital connection to the telephone company, called an ISDN PRI circuit. Collection agencies, law-enforcement officials, and private investigators have used the practice, with varying degrees of legality. The first mainstream caller ID spoofing service was launched USA-wide on September 1, 2004 by California-based Star38.com. Founded by Jason Jepson, it was the first service to allow spoofed calls to be placed from a web interface. It stopped offering service in 2005, as a handful of similar sites were launched.
In August 2006, Paris Hilton was accused of using caller ID spoofing to break into a voicemail system that used caller ID for authentication. Caller ID spoofing also has been used in purchase scams on web sites such as Craigslist and eBay. The scamming caller claims to be calling from Canada into the U.S. with a legitimate interest in purchasing advertised items. Often the sellers are asked for personal information such as a copy of a registration title, etc., before the (scammer) purchaser invests the time and effort to come see the for-sale items. In the 2010 election, fake caller IDs of ambulance companies and hospitals were used in Missouri to get potential voters to answer the phone. In 2009, a vindictive Brooklyn wife spoofed the doctor's office of her husband's lover in an attempt to trick the other woman into taking medication which would make her miscarry.
Frequently, caller ID spoofing is used for prank calls. In December 2007, a hacker used a caller ID spoofing service and was arrested for sending a SWAT team to a house of an unsuspecting victim. In February 2008, a Collegeville, Pennsylvania, man was arrested for making threatening phone calls to women and having their home numbers appear "on their caller ID to make it look like the call was coming from inside the house."
In March 2008, several residents in Wilmington, Delaware, reported receiving telemarketing calls during the early morning hours, when the caller had apparently spoofed the caller ID to evoke Tommy Tutone's 1981 hit "867-5309/Jenny". By 2014, an increase in illegal telemarketers displaying the victim's own number, either verbatim or with a few digits randomised, was observed as an attempt to evade caller ID-based blacklists.
In the Canadian federal election of May 2, 2011, both live calls and robocalls are alleged to have been placed with false caller ID, either to replace the caller's identity with that of a fictitious person (Pierre Poutine of Joliette, Quebec) or to disguise calls from an Ohio call centre as Peterborough, Ontario, domestic calls. See Robocall scandal.
In June 2012, a search on Google returned nearly 50,000 consumer complaints by individuals receiving multiple continuing spoofed voice over IP (VoIP) calls on lines leased / originating from “Pacific Telecom Communications Group” located in Los Angeles, CA (in a mailbox store), in apparent violation of FCC rules. Companies such as these lease out thousands of phone numbers to anonymous voice-mail providers who, in combination with dubious companies like “Phone Broadcast Club” (who do the actual spoofing), allow phone spam to become an increasingly widespread and pervasive problem. In 2013, the misleading caller name "Teachers Phone" was reported on a large quantity of robocalls advertising credit card services as a ruse to trick students' families into answering the unwanted calls in the mistaken belief they were from local schools.
On January 7, 2013, the Internet Crime Complaint Center issued a scam alert for various telephony denial of service attacks by which fraudsters were using spoofed caller ID to impersonate police in an attempt to collect bogus payday loans, then placing repeated harassing calls to police with the victim's number displayed. While impersonation of police is common, other scams involved impersonating utility companies to threaten businesses or householders with disconnection as a means to extort money, impersonating immigration officials or impersonating medical insurers to obtain personal data for use in theft of identity. Bogus caller ID has also been used in grandparent scams, which target the elderly by impersonating family members and requesting wire transfer of money.
In 2018, one method of caller ID spoofing was called "neighbor spoofing", using either the same area code and telephone prefix of the person being called, or the name of a person or business in the area.
Technology and methods
Caller ID is spoofed through a variety of methods and different technology. The most popular ways of spoofing caller ID are through the use of VoIP or PRI lines.
Voice over IP
In the past, caller ID spoofing required an advanced knowledge of telephony equipment that could be quite expensive. However, with open source software (such as Asterisk or FreeSWITCH, and almost any VoIP company), one can spoof calls with minimal costs and effort.
Some VoIP providers allow the user to configure their displayed number as part of the configuration page on the provider's web interface. No additional software is required. If the caller name is sent with the call (instead of being generated from the number by a database lookup at destination) it may be configured as part of the settings on a client-owned analog telephone adapter or SIP phone. The level of flexibility is provider-dependent. A provider which allows users to bring their own device and unbundles service so that direct inward dial numbers may be purchased separately from outbound calling minutes will be more flexible. A carrier which doesn't follow established hardware standards (such as Skype) or locks subscribers out of configuration settings on hardware which the subscriber owns outright (such as Vonage) is more restrictive. Providers which market "wholesale VoIP" are typically intended to allow any displayed number to be sent, as resellers will want their end user's numbers to appear.
In rare cases, a destination number served by voice-over-IP is reachable directly at a known SIP address (which may be published through ENUM telephone number mapping, a .tel DNS record or located using an intermediary such as SIP Broker). Some Google Voice users are directly reachable by SIP, as are all iNum Initiative numbers in country codes +883 5100 and +888. As a federated VoIP scheme providing a direct Internet connection which does not pass through a signaling gateway to the public switched telephone network, it shares the advantages (nearly free unlimited access worldwide) and disadvantages of other open Internet applications.
Service providers
Some spoofing services work similarly to a prepaid calling card. Customers pay in advance for a personal identification number (PIN). Customers dial the number given to them by the company, their PIN, the destination number and the number they wish to appear as the caller ID. The call is bridged or transferred and arrives with the spoofed number chosen by the caller—thus tricking the called party.
Many providers also provide a Web-based interface or a mobile application where a user creates an account, logs in and supplies a source number, destination number, and the bogus caller ID information to be displayed. The server then places a call to each of the two endpoint numbers and bridges the calls together.
Some providers offer the ability to record calls, change the voice and send text messages.
Orange box
Another method of spoofing is that of emulating the Bell 202 FSK signal. This method, informally called orange boxing, uses software that generates the audio signal which is then coupled to the telephone line during the call. The object is to deceive the called party into thinking that there is an incoming call waiting call from the spoofed number, when in fact there is no new incoming call. This technique often also involves an accomplice who may provide a secondary voice to complete the illusion of a call-waiting call. Because the orange box cannot truly spoof an incoming caller ID prior to answering and relies to a certain extent on the guile of the caller, it is considered as much a social engineering technique as a technical hack.
Other methods include switch access to the Signaling System 7 network and social engineering telephone company operators, who place calls for you from the desired phone number.
Caller name display
Telephone exchange equipment manufacturers vary in their handling of caller name display. Much of the equipment manufactured for Bell System companies in the United States sends only the caller's number to the distant exchange; that switch must then use a database lookup to find the name to display with the calling number. Canadian landline exchanges often run Nortel equipment which sends the name along with the number. Mobile, CLEC, Internet or independent exchanges also vary in their handling of caller name, depending on the switching equipment manufacturer. Calls between numbers in differing country codes represent a further complication, as caller ID often displays the local portion of the calling number without indicating a country of origin or in a format that can be mistaken for a domestic or invalid number.
This results in multiple possible outcomes, illustrated by the sketch after this list:
The name provided by the caller (in the analog telephone adapter configuration screen for voice-over-IP users or on the web interface of a spoofing provider) is blindly passed verbatim to the called party and may be spoofed at will.
The name is generated from a telephone company database using the spoofed caller ID number.
A destination provider may display no name or just the geographic location of the provided telephone area code on caller ID (e.g., "ARIZONA", "CALIFORNIA", "OREGON", or "ONTARIO"). This often occurs where the destination carrier is a low-cost service (such as a VoIP provider) running no database or outdated data in which the number is not found.
If the displayed number is in the recipient's address book, some handsets will display the name from the local address book in place of the transmitted name. Some VoIP providers use Asterisk (PBX) to provide similar functionality at the server; this may lead to multiple substitutions with priority going to the destination user's own handset as the last link in the CNAM chain.
Legal considerations
Canada
Caller ID spoofing remains legal in Canada, and has recently become so prevalent that the Canadian Anti-Fraud Centre has "add[ed] an automated message about [the practice] to their fraud-reporting hotline". The CRTC estimates that 40% of the complaints they receive regarding unsolicited calls involve spoofing. The agency advises Canadians to file complaints regarding such calls, provides a list of protection options for dealing with them on its website, and, from July through December 2015, held a public consultation to identify "technical solutions" to address the issue.
On January 25, 2018, the CRTC set a target date of March 31, 2019 for the implementation of a CID authentication system. On December 9, 2019, the CRTC extended this date, announcing that they expect STIR/SHAKEN, a CID authentication system, to be implemented by September 30, 2020. On September 15, 2020, the CRTC extended the target date one more time, changing it to June 30, 2021. The CRTC is formally considering making its target date for STIR/SHAKEN mandatory.
On December 19, 2018, the CRTC announced that, beginning one year from that date, phone providers must block all calls with caller IDs that do not conform to established numbering plans.
India
According to a report from the India Department of Telecommunications, the government of India has taken the following steps against the CLI spoofing service providers:
Websites offering caller ID spoofing services are blocked in India as an immediate measure.
International long-distance operators (ILDOs), national long-distance operators (NLDOs) and access service providers have been alerted to the existence of such spoofing services, and shall collectively be prepared to take action to investigate cases of caller ID spoofing as they are reported.
According to the DoT, using a spoofed call service is illegal under the Indian Telegraph Act, Section 25(c); using such a service may lead to a fine, three years' imprisonment, or both.
United Kingdom
In the UK, the spoofed number is called the "presentation number". This must be either allocated to the caller, or if allocated to a third party, it is only to be used with the third party's explicit permission.
Since 2016, direct marketing companies have been obliged to display their phone numbers. Offending companies can be fined up to £2 million by Ofcom.
In 2021, Huw Saunders, a director at Ofcom, the UK regulator, said the current UK phone network (Public Switched Telephone Network) is being updated to a new system (Voice Over Internet Protocol), which should be in place by 2025. Saunders said, "It's only when the vast majority of people are on the new technology (VOIP) that we can implement a new patch to address this problem [of Caller ID spoofing]."
United States
Caller ID spoofing is generally legal in the United States unless done "with the intent to defraud, cause harm, or wrongfully obtain anything of value". The relevant federal statute, the Truth in Caller ID Act of 2009, does make exceptions for certain law-enforcement purposes. Callers are also still allowed to preserve their anonymity by choosing to block all outgoing caller ID information on their phone lines.
Under the act, which also targets VoIP services, it is illegal "to cause any caller identification service to knowingly transmit misleading or inaccurate caller identification information with the intent to defraud, cause harm, or wrongfully obtain anything of value...." Forfeiture penalties or criminal fines of up to $10,000 per violation (not to exceed $1,000,000) could be imposed. The law maintains an exemption for blocking one's own outgoing caller ID information, and law enforcement isn't affected.
The New York Times sent the number 111-111-1111 for all calls made from its offices until August 15, 2011. The fake number was intended to prevent the extensions of its reporters appearing in call logs, and thus protect reporters from having to divulge calls made to anonymous sources. The Times abandoned this practice because of the proposed changes to the caller ID law, and because many companies were blocking calls from the well-known number.
Starting in mid-2017, the FCC pushed forward Caller ID certification implemented using a framework known as STIR/SHAKEN. SHAKEN/STIR are acronyms for Signature-based Handling of Asserted Information Using toKENs (SHAKEN) and the Secure Telephone Identity Revisited (STIR) standards. The FCC has mandated that telecom providers implement STIR/SHAKEN-based caller ID attestation in the IP portions of their networks beginning no later than June 30, 2021.
On August 1, 2019, the FCC voted to extend the Truth in Caller ID Act to international calls and text messaging. Congress passed the TRACED Act in 2019 which makes Caller ID authentication mandatory.
See also
Caller ID
Truth in Caller ID Act of 2009
References
External links
Caller ID
Deception
Confidence tricks
Telemarketing |
3152473 | https://en.wikipedia.org/wiki/RootkitRevealer | RootkitRevealer | RootkitRevealer is a proprietary freeware tool for rootkit detection on Microsoft Windows by Bryce Cogswell and Mark Russinovich. It runs on Windows XP and Windows Server 2003 (32-bit versions only). Its output lists Windows Registry and file system API discrepancies that may indicate the presence of a rootkit. It is the same tool that triggered the Sony BMG copy protection rootkit scandal.
RootkitRevealer is no longer being developed.
See also
Sysinternals
Process Explorer
Process Monitor
ProcDump
References
Microsoft software
Computer security software
Windows security software
Windows-only software
Rootkit detection software
2006 software |
36070366 | https://en.wikipedia.org/wiki/2012%20LinkedIn%20hack | 2012 LinkedIn hack | The 2012 LinkedIn hack refers to the computer hacking of LinkedIn on June 5, 2012. Passwords for nearly 6.5 million user accounts were stolen. Yevgeniy Nikulin was convicted of the crime and sentenced to 88 months in prison.
Owners of the hacked accounts were unable to access their accounts. LinkedIn said, in an official statement, that they would email members with instructions on how they could reset their passwords. In May 2016, LinkedIn discovered an additional 100 million email addresses and passwords that had been compromised from the same 2012 breach.
History
The hack
The social networking website LinkedIn was hacked on June 5, 2012, and passwords for nearly 6.5 million user accounts were stolen by Russian cybercriminals. Owners of the hacked accounts were no longer able to access their accounts, and the website repeatedly encouraged its users to change their passwords after the incident. Vicente Silveira, the director of LinkedIn, confirmed, on behalf of the company, that the website was hacked in its official blog. He also said that the holders of the compromised accounts would find their passwords were no longer valid on the website.
In May 2016, LinkedIn discovered an additional 100 million email addresses and hashed passwords that claimed to be additional data from the same 2012 breach. In response, LinkedIn invalidated the passwords of all users that had not changed their passwords since 2012.
Leak
A collection containing data about more than 700 million users, believed to have been scraped from LinkedIn, was leaked online in September 2021 in the form of a torrent file, after hackers had earlier tried to sell it in June 2021.
Reaction
Internet security experts said that the passwords were easy to unscramble because of LinkedIn's failure to use a salt when hashing them, which is considered an insecure practice because it allows attackers to quickly reverse the scrambling process using existing standard rainbow tables, pre-made lists of matching scrambled and unscrambled passwords. Another issue that sparked controversy was the iOS app provided by LinkedIn, which grabs personal names, emails, and notes from a mobile calendar without the user's approval. Security experts working for Skycure Security said that the application collects a user's personal data and sends it to the LinkedIn server. LinkedIn claimed the permission for this feature is user-granted, and the information is sent securely using the Secure Sockets Layer (SSL) protocol. The company added that it had never stored or shared that information with a third party.
Rep. Mary Bono Mack of the United States Congress commented on the incident, "How many times is this going to happen before Congress finally wakes up and takes action? This latest incident once again brings into sharp focus the need to pass data protection legislation." Senator Patrick Leahy said, "Reports of another major data breach should give pause to American consumers who, now more than ever, share sensitive personal information in their online transactions and networking ... Congress should make comprehensive data privacy and cybercrime legislation a top priority."
Marcus Carey, a security researcher for Rapid7, said that the hackers had penetrated the databases of LinkedIn in the preceding days. He expressed concerns that they may have had access to the website even after the attack.
Michael Aronowitz, Vice President of Saveology, said, "Everyday hundreds of sites are hacked and personal information is obtained. Stealing login information from one account can easily be used to access other accounts, which can hold personal and financial information." Security experts indicated that the stolen passwords were hashed in a way that was fairly easy to crack, which compounded the impact of the breach.
Katie Szpyrka, a long time user of LinkedIn from Illinois, United States, filed a $5 million lawsuit against LinkedIn, complaining that the company did not keep their promises to secure connections and databases. Erin O’Harra, a spokeswoman working for LinkedIn, when asked about the lawsuit, said that lawyers were looking to take advantage of that situation to again propose the bills SOPA and PIPA in the United States Congress.
An amended complaint was filed on Nov. 26, 2012 on behalf of Szpyrka and another premium LinkedIn user from Virginia, United States, named Khalilah Gilmore–Wright, as class representatives for all LinkedIn users who were affected by the breach. The lawsuit sought injunctive and other equitable relief, as well as restitution and damages for the plaintiffs and members of the class.
Response from LinkedIn
LinkedIn apologized immediately after the data breach and asked its users to immediately change their passwords. The Federal Bureau of Investigation assisted the LinkedIn Corporation in investigating the theft. As of 8 June 2012, the investigation was still in its early stages, and the company said it was unable to determine whether the hackers were also able to steal the email addresses associated with the compromised user accounts as well. LinkedIn said that the users whose passwords are compromised would be unable to access their LinkedIn accounts using their old passwords.
Arrest and conviction of suspect
On October 5, 2016, Russian hacker Yevgeniy Nikulin was detained by Czech police in Prague. The United States had requested an Interpol warrant for him.
A United States grand jury indicted Nikulin and three unnamed co-conspirators on charges of aggravated identity theft and computer intrusion. Prosecutors alleged that Nikulin stole a LinkedIn employee's username and password, using them to gain access to the corporation's network. Nikulin was also accused of hacking into Dropbox and Formspring, allegedly conspiring to sell stolen Formspring customer data, including usernames, e-mail addresses, and passwords.
Nikulin was convicted and sentenced to 88 months of imprisonment.
References
Hacking in the 2010s
LinkedIn hack
LinkedIn
Computer security exploits |
303744 | https://en.wikipedia.org/wiki/OpenZaurus | OpenZaurus | OpenZaurus is a defunct embedded operating system for the Sharp Zaurus personal mobile tool PDA.
History
In its original form, the project was a repackaging of the SharpROM, the Zaurus's factory supplied kernel and root filesystem image. In order to make the Zaurus's OS closer to the needs of the developer community, the SharpROM was altered through the use of bugfixes, software additions, and even removals in order to make the package more open.
The OpenZaurus project was later revamped completely, becoming a Debian-based distribution built from source from the ground up. Due to the change in direction, OpenZaurus became quite similar to other embedded Debian-based distributions, such as Familiar for the iPAQ. In this form, OpenZaurus provided an easy method for users to build their own custom images. The efforts of OpenZaurus, along with other embedded Linux projects, were integrated into the OpenEmbedded project, which now provides the common framework for these projects.
Variants
In addition to custom images built using OpenEmbedded metadata, the OpenZaurus distribution could be acquired in three variants for each version release.
Bootstrap: A minimal, console based image with a working root filesystem, and networking over SSH, WLAN, Bluetooth, or USB. Suitable for bootstrapping a larger, X11 system.
GPE: Everything the Bootstrap image contains, plus the X Window System and the GTK+ based GPE Palmtop Environment.
OPIE: Everything the Bootstrap image contains and the Qt based OPIE Palmtop Integrated Environment.
Status
On April 26, 2007, it was announced that the OpenZaurus project was over. Future development efforts are to focus on the Ångström distribution for embedded systems.
See also
Palm OS
Pocket PC
Windows Mobile
References
External links
Hentges.net OZ with updated OPIE releases
ARM Linux distributions
Personal digital assistant software
Debian-based distributions
Embedded Linux distributions
Operating systems using GPE Palmtop Environment
Linux distributions |
190260 | https://en.wikipedia.org/wiki/OpenDoc | OpenDoc | OpenDoc is a defunct multi-platform software componentry framework standard created by Apple in the 1990s for compound documents, intended as an alternative to Microsoft's Object Linking and Embedding (OLE). As part of the AIM alliance between Apple, IBM, and Motorola, OpenDoc is one of Apple's earliest experiments with open standards and collaborative development methods with other companies, effectively starting an industry consortium. Active development was discontinued in March 1997.
The core idea of OpenDoc is to create small, reusable components, responsible for a specific task, such as text editing, bitmap editing, or browsing an FTP server. OpenDoc provides a framework in which these components can run together, and a document format for storing the data created by each component. These documents can then be opened on other machines, where the OpenDoc frameworks substitute suitable components for each part, even if they are from different vendors. In this way users can "build up" their documents from parts. Since there is no main application and the only visible interface is the document itself, the system is known as document-centered.
At its inception, it was envisioned that OpenDoc would allow, for example, smaller, third-party developers to enter the then-competitive office suite software market, able to build one good editor instead of having to provide a complete suite.
Early efforts
OpenDoc was initially created by Apple in 1992, after Microsoft approached Apple asking for input on a proposed OLE II project. Apple had been experimenting with software components internally for some time, based on the initial work done on its Publish and Subscribe linking model and the AppleScript scripting language, which in turn was based on the HyperCard programming environment. Apple reviewed the Microsoft prototype and document and returned a list of problems they saw with the design. Microsoft and Apple, who were very competitive at the time, were unable to agree on common goals and did not work together.
At about the same time, a group of third-party developers had met at the Apple Worldwide Developers Conference (WWDC '91) and tried to establish a standardized document format, based conceptually on the Electronic Arts Interchange File Format (IFF). Apple became interested in this work, and soon dedicated some engineers to the task of building, or at least documenting, such a system. Initial work was published on the WWDC CDs, as well as a number of follow-up versions on later developer CDs. A component document system would only work with a known document format that all the components could use, and so soon the standardized document format was pulled into the component software effort. The format quickly changed from a simple one using tags to a very complex object oriented persistence layer called Bento.
Initially the effort was codenamed "Exemplar", then "Jedi", "Amber", and eventually "OpenDoc".
Competing visions
With OpenDoc entering the historic AIM alliance between Apple, IBM, and Motorola, Apple was also involved in Taligent during some of this period, which promised somewhat similar functionality although based on very different underlying mechanisms. While OpenDoc was still being developed, Apple confused things greatly by suggesting that it should be used by people porting existing software only, and new projects should instead be based on Taligent since that would be the next OS. In 1993, John Sculley called Project Amber (a codename for what would become OpenDoc) a path toward Taligent. Taligent was considered the future of the Mac platform, and work on other tools like MacApp was considerably deprioritized.
Through OpenDoc's entire lifespan, analysts and users each reportedly "had very different views" of the OpenDoc initiative. They were confused about their role, regarding how much of OpenDoc-based development would be their responsibility versus IBM's and Apple's responsibility. There were never many released OpenDoc components compared to Microsoft's ActiveX components. Therefore, reception was very mixed.
Starting in 1992, Apple had also been involved in an effort to replace the MacApp development framework with a cross-platform solution known as Bedrock, from Symantec. Symantec's Think C was rapidly becoming the tool of choice for development on the Mac. Apple had been working with Symantec to port its tools to the PowerPC when it learned of Symantec's internal porting tools. Apple proposed merging existing MacApp concepts and code with Symantec's to produce an advanced cross-platform system. Bedrock began to compete with OpenDoc as the solution for future development.
As OpenDoc gained currency within Apple, the company started to push Symantec into including OpenDoc functionality in Bedrock. Symantec was uninterested in this, and eventually gave up on the effort, passing the code to Apple. Bedrock was in a very early state of development at this point, even after 18 months of work, as the development team at Symantec suffered continual turnover. Apple proposed that the code would be used for OpenDoc programming, but nothing was ever heard of this again, and Bedrock disappeared.
As a result of Taligent and Bedrock both being Apple's officially promised future platforms, little effort had been expended on updating MacApp. Because Bedrock was discontinued in 1993 and Taligent was discontinued in 1996 without any MacOS release, this left Apple with only OpenDoc as a modern OO-based programming system.
Partnerships
The development team realized in mid-1992 that an industry coalition was needed to promote the system, and created the Component Integration Laboratories (CI Labs) with IBM and WordPerfect. IBM introduced the System Object Model (SOM) shared library system to the project, which became a major part of Apple's future efforts, in and out of OpenDoc. In 1996 the project was adopted by the Object Management Group, in part due to SOM's use of Common Object Request Broker Architecture (CORBA), maintained by the OMG.
As part of the AIM alliance between Apple, IBM, and Motorola, OpenDoc is one of Apple's earliest experiments with open standards and collaborative development methods with other companies. Apple and its partners never publicly released the source code, but did make the complete source available to developers for feedback, testing, and debugging purposes.
Release
The OpenDoc subsystem was initially released on System 7.5, and later on OS/2 Warp 4.
Products
After three years of development on OpenDoc itself, the first OpenDoc-based product was Apple's CyberDog web browser, released in May 1996. The second release came on August 1, 1996, with IBM's two packages of OpenDoc components for OS/2, available on the Club OpenDoc website for a 30-day free trial: the Person Pak, "components aimed at organizing names, addresses, and other personal information" for use with personal information management (PIM) applications, at $229; and the Table Pak, "to store rows and columns in a database file", at $269. IBM then anticipated the release of 50 more components by the end of 1996.
The WAV word processor is a semi-successful OpenDoc word processor from Digital Harbor LLC. The Numbers & Charts package is a spreadsheet and 3D real-time charting solution from Adrenaline Software. Lexi from Soft-Linc, Inc. is a linguistic package containing a spell checker, thesaurus, and a simple translation tool which WAV and other components use. The Nisus Writer software by Nisus incorporated OpenDoc, but its implementation was hopelessly buggy. Bare Bones Software tested the market by making its BBEdit Lite freeware text editor available as an OpenDoc editor component. RagTime, a completely integrated office package with spreadsheet, publishing, and image editing was ported to OpenDoc shortly before OpenDoc was cancelled. Apple's 1996 release of ClarisWorks 5.0 (the predecessor of AppleWorks) was planned to support OpenDoc components, but this was dropped.
Educational
Another OpenDoc container application, called Dock'Em, was written by MetaMind Software under a grant from the National Science Foundation and commissioned by The Center for Research in Math and Science Education, headquartered at San Diego State University. The goal was to allow multimedia content to be included in documents describing curriculum.
A number of physics simulations were written by MetaMind Software and by Russian software firm Physicon (OpenTeach) as OpenDoc parts. Physics curricula for high school and middle school used them as their focus. With the discontinuation of OpenDoc, the simulations were rewritten as Java applets and made available from the Center as The Constructing Physics Understanding (CPU) Project by Dr. Fred Goldberg.
Components of the E-Slate educational microworlds platform were originally implemented as OpenDoc parts in C++ on both MacOS and Windows, reimplemented later (after the discontinuation of OpenDoc) as Java applets and eventually as JavaBeans.
Cancellation
OpenDoc had several hundred developers signed up but the timing was poor. Apple was rapidly losing money at the time and many in the industry press expected the company to fail.
OpenDoc was soon discontinued, with Steve Jobs (who had been at NeXT during this development) noting that they "put a bullet through [OpenDoc's] head", and most of the Apple Advanced Technology Group was laid off in a big reduction in force in March 1997. Other sources noted that Microsoft hired away three ClarisWorks developers who were responsible for OpenDoc integration into ClarisWorks.
Starting with Mac OS 8.5, OpenDoc was no longer bundled with the classic Mac OS. AppleShare IP Manager from versions 5.0 to 6.2 relied on OpenDoc, but AppleShare IP 6.3, the first Mac OS 9 compatible version (released in 1999), eliminated the reliance on OpenDoc. Apple officially relinquished the last trademark on the name "OpenDoc" on June 11, 2005.
See also
Orphaned technology for similar fates
KParts for an open source alternative
References
External links
Last release of OpenDoc with mostly all sources (for education purpose only)
Video of Steve Jobs at Apple's annual developer conference in 1997, defending Apple's decision to kill OpenDoc.
Apple Inc. software
IBM software
Orphaned technology |
34924166 | https://en.wikipedia.org/wiki/Pac-Man%20and%20the%20Ghostly%20Adventures | Pac-Man and the Ghostly Adventures | Pac-Man and the Ghostly Adventures, also known in Japan under a different title, is a computer-animated comedy-adventure children's television series featuring Namco's classic video game character Pac-Man. It was produced by 41 Entertainment, Arad Productions and Bandai Namco Entertainment for Tokyo MX (stereo version), BS11 (stereo version) and Disney XD (bilingual version). A video game based on the series was released on October 29, 2013, for PC and consoles, and November 29, 2013, for the Nintendo 3DS.
Plot
The series takes place on and around the planet Pac-World and its Netherworld, where time passes at an Earth-like rate of 24 hours a day and 365 days a year, with a 7-day week. Pac-Man and his best friends Spiral and Cylindria attend Maze School, a boarding school located within the city of Pacopolis. They help to protect citizens from the threat of ghosts after the seal that locked up the Netherworld was accidentally opened by Pac while he was avoiding the school bully Skeebo. Ghosts are able to possess Pac-Worlder bodies, although only for a few minutes unless aided by Dr. Buttocks' technology. Victims of possession are usually identifiable by a red-eyed glow, although this too can be prevented with Buttocks' technology.
Pac-Man also has four friendly ghosts (Blinky, Pinky, Inky and Clyde) that surrendered and vowed to help him along his voyage (in exchange for being restored to the living world). Pac-Man vows to stop Betrayus and the ghosts (or any other bad guy) from taking over Pac-World while searching for his long-lost parents. He has the unique ability to eat ghosts and destroy the ectoplasm that makes up most of their bodies. Only their eyeballs survive, which he spits out. They reform their bodies using a regeneration chamber. The ghosts continually attack the city to locate the Repository, a storage chamber for the corporeal bodies of the ghosts which would allow them to live again if they possessed them. It is kept hidden to deny them this freedom and only President Spheros and Pac-Man are aware of its location. The ghosts also attack the Tree of Life to prevent Pac-Man from gaining powers to fight them. Without the power-berries, Pac-Man is not able to fly, or breathe in the Netherworld.
Characters
Main
, nickname "Pac", (voiced by Erin Mathews) is the 13-year old title character of the series. He is described by some as the last of the yellow Pac-People on Pac-World. Pac's father Zac helped in the war against Commander Betrayus. Pac is a teenager between 3'5"-4'0" tall that just found out that it's his destiny to defeat the Ghosts and send Betrayus back to the Netherworld permanently. He feels lonely because he's the only Yellow One in Pac-World and he misses his parents very badly and is determined to find them no matter what, but sometimes gets too carried away when doing so since this would lead him to bigger problems or would usually leave him less time with his friends. He's guilt-ridden because he unleashed the ghost by accident. However, the president and his friends tell Pac to get over his guilt and protect Pac-World from evil. He can eat ghosts just like the legendary Yellow Ones did back in ancient times and also has a giant appetite that may sometimes lead to trouble since he can eat anything, even things that aren't food. With the power of the berries (which are similar to Power Pellets) from the Tree of Life, he gains helpful abilities like withstanding the Netherworld's environment with the Power Berry, being able to fly upon eating the Flying Berry, breathe ice with the Minty Ice Berry, breathe fire with the Minty Fiery Berry, become cybernetic with a drill and magnet upon eating the Titanium Berry, gain a chameleon-like body upon eating the Chameleon Berry, etc. Pac is also the strongest and fastest of his species alive. With the help of his friends, Sir Cumference and the Ghost Gang (Blinky, Pinky, Inky, and Clyde), he's ready for action. Pac vows to stop Betrayus and the ghosts. Pac has had a temporary truce with Betrayus like when it came to the Ghosteroid, the Pointy Heads, and when the ghosts go missing.
(voiced by Andrea Libman) is a 13-year old pink Pac-Girl with glasses who is over 3'5" tall, over 4'0" including hair. She is a teenager who celebrates her birthday in the second season. She is tomboyish and the brains of the team, often aiding her friends Pac-Man and Spiralton. She also has trouble trusting ghosts. Pinky, who has a crush on Pac, views her as a rival, thinking Cyli also has a crush on him, although she gets highly annoyed or shown to be jealous almost every time Pinky openly flirts with Pac, another time Cyli said Pac was cute after eating a power berry that made him smart and changed his mannerisms she even told Pac she loves him at the same time as Pinky, hinting that she might have a crush on Pac. Pinky mocks her nickname, calling her "Silly" instead.
(voiced by Samuel Vincent) is a big red 16-year old teen Pac-Boy over 4'5" tall (and over 5'0" including hair) and Pac-Man's best friend and roommate who is nicknamed "Spiral". He and Pac share similar crazy personalities. Spiral loves to blow things up and he's interested in the history of Pac-World. He and Inky have the same observant-brainy trait, but Spiral isn't as serious. He refers to Pac as "Pacster".
The Ghost Gang is a group of four ghosts from the original arcade game. Though they live in the Netherworld and are ruled by Lord Betrayus, they pretend to be dumb and useless to Betrayus as they are actually good-natured spirits and secretly help Pac-Man in hopes that they can be redeemed enough to live again.
Blinky (voiced by Ian James Corlett) is a red ghost, the second shortest and oldest of the Ghost Gang Siblings. He's the default leader of the group, which differs greatly from his 1980s scaredy-cat counterpart. Also, he's crafty and can't always be trusted. He might act mean, but he deeply cares about his friends and his siblings; however, he might only be helping the Pacs because he likes to stay on the winning team. In the episode "The Pac Be with You", it is revealed that he's a Pac-Fu master and has presumably fought against Master Goo alongside other Pac-Fu masters during the ghost wars. Pac dubs his World of Odd counterpart who resembles Cowardly Lion as Uncle Ghost.
Pinky (voiced by Ashleigh Ball) is a pink ghost, the only sister, the second youngest and smallest in the Ghost Gang Siblings. She is described as sweet, girly and "a bad", but not a really bad guy" She has a crush on Pac-Man (like in Pac-Man World 2), and even got to kiss him. She has a rivalry with Cylindria because they got off on the wrong foot, thinking she might also have a crush on Pac. Despite the gang's fear of Betrayus, Pinky is often the sole member of the gang who is willing to go out of her way to help Pac in the more risky situations. She is also the first to side with Pac even when the odds are more in Betrayus' favor. In the episode "Stand By Your Pac-Man", she revealed that she can transform into a pink cyclops ghost and she assumed this form in three other episodes in the second season. Pinky can be scatterbrained and naive at times. She often argues with her brothers, but loves them nonetheless. She has expressed a desire to get her original body back from the repository, but prioritizes Pac-Man's interests instead. Pac dubs her World of Odd counterpart who resembles Dorothy Gale as Mama Ghost.
Inky (voiced by Lee Tockar) is a blue ghost, the second oldest and second tallest in the Ghost Gang Siblings. He and Blinky are close pals, though they fight sometimes. Although he is the smartest and most sarcastic of the four, he is easily distracted and lacks focus most of the time, with Blinky saying that Inky can "get lost in a thought". He has a habit of snorting when he laughs sometimes and can multiply. Even though he helps the Ghost Gang undermine Betrayus' schemes, he has differed with Pinky on prioritizing Pac-Man's interests over their own. Pac dubs his World of Odd counterpart who resembles the Tin Woodman as Papa Ghost.
Clyde (voiced by Brian Drummond) is an orange ghost, the youngest and largest in the siblings. He is the most caring member of the group, and although he often seems like a dim-witted ditz, he often comes up with pearls of great wisdom. Clyde has a soft side for soft and sweet things like Fuzbitz. Blinky gets annoyed with this. Clyde and Pac-Man also share the same characteristics when it comes to eating. Clyde is the peacekeeper of the Ghost Gang as well as the moral compass in helping Pac and his friends, although he is not above clanging heads together when necessary as Inky and Blinky have discovered. He can also speak nine languages, including monster and he can split into two halves just for fun or whenever he's unhappy. Pac dubs his World of Odd counterpart who resembles Scarecrow as Baby Ghost. As well as Blinky, he totally differs from his Hanna-Barbera series' counterpart, where he is the leader of the group and the cleverest member, being second only to the ghosts' evil leader Mezmeron. His middle name is Filbert.
Betrayus, full name Lord Betrayus Sneakerus Spheros, (voiced by Sam Vincent) is the Lord of the Ghosts and the Netherworld. He is President Spheros' younger brother and the son of Rotunda. He really hates his older brother. Back when he was a commander, Betrayus led a massive revolt in a plot to take over Pac-World (later named Pac-War 1). Despite having Ghosts serving him and being well-armed, he was defeated. As punishment for his crimes, he was stripped of his corporeal form (his body) and banished to the Netherworld. Since then, he has been waiting and plotting for the day when he can steal the Tree of Life and obtain the Repository so that he can regain his corporeal form. He is the most powerful fire ghost with his fiery claw hands. Unlike the other ghosts, he's white with red eyes. Betrayus makes all his Ghosts and Monsters of the Netherworld do all the hard work while he watches the action from his TV (the channels provided by "slug-cam"). He has vowed to defeat Pac-Man while plotting to steal the Power Berries and has been shown to make a temporary truce with Pac when it came to the threat of the Ghosteroid, the threat of the Pointy Heads, and whenever his ghosts go missing.
Pac-Worlders
(voiced by Sam Vincent) is the President of Pac-World, a son of Rotunda, and the older brother of Betrayus. He is over 4'5" tall. He wants Pac-Man and his friends to behave and stop his evil brother. He's easily annoyed with his security and staff constantly messing up and jumping on him for protection. He is a good friend to Pac-Man's parents and also really hates his younger brother.
(voiced by Ian James Corlett) is a goofy scientist who has personally met Pac's father and mother. He is over 4'5" tall. Sir Cumference has crazy inventions that has helped Pac-Man and his friends on occasion. Sir C is in charge of the repository and knows the Tree of Life. He has a rivalry with Dr. Buttocks and is also a friend of Pac's parents. He is also a detective.
Skeebo (voiced by Matt Hill) is a pompous and foolish blue jock and school bully who often picks on Pac-Man for fun. He used to be Cylindria's steady boyfriend until he was too scared to save her from a Cyclops Ghost and chose to break up with her when Spiral confronted him about his cowardice. Pac ends up saving Cyli, and Skeebo is often jealous of Pac saving the day. He hates the color yellow, but ironically, he is blonde. He once had his mouth removed. What he lacks in people skills, he makes up for with a very beautiful tenor singing voice that makes Pac-People cry in sadness and attracts all kinds of birds. He is always afraid of ghosts, especially the Ghost Gang.
Spheria Suprema (voiced by Ashleigh Ball) is an orange Pac-Person who happens to be Pac-Man's aunt. She is the Pac-Pong champ where she had once defeated Betrayus sometime before he led his revolt against Pac-World and is admired by Cyli. She has a Pac-Dog named Uggles.
(voiced by Erin Mathews) is one of Pac-Man's teachers. She has a Pac-Dog named Foofie.
Mr. Strictler (voiced by Mark Oliver) is a light-blue driver's ed teacher and authority figure with a stubborn personality who only appears in "Driver's Pac". He is Sherry's father. Mr. Strictler would often fail the least worthy of his students with his own daughter being one of them. When it came to Pac-Man, Mr. Strictler started writing down Pac-Man's driving violations up to the point where he was unknowingly dragged into Pac-Man's mission in the Netherworld to stop Dr. Buttocks from using a drill to bring Pacopolis into the Netherworld. Afterwards, Spheria finally forced him to be nice and reasonable. While he doesn't give Pac his license, Mr. Strictler allows him to take the test again. He also made cameo appearances in "Cave Pac-Man", "Cosmic Contest", and "Santa Pac".
Mr. Dome (voiced by Lee Tockar) is the gym teacher who taught Pac-Man and some students to be athletic in Phys Ed.
O'Drool is the Secretary of Security who first appeared in "Betrayus Turns Up the Heat". He is quite mean and blames Sir C for the cause of the heat which was actually made by Dr. Buttocks. After Dr. Buttocks' plot was thwarted by Pac-Man, O'Drool was fired by President Spheros for his arrogance and was taken away by the security. He later made cameo appearances in "Invasion of the Pointy Heads" and a few other episodes in the second season.
Kingpin Obtuse (voiced by Lee Tockar) is a dark-green crime lord of Pac-World's criminal underworld. He was paid by Lord Betrayus where he tricked Skeebo into stealing Pac-Man's Super Power Berries. In "Peace Without Slime" and "The Ghost Behind the Throne", he was assigned by Betrayus yet again to be Pacopolis's "Puppet Governor" as their attempt to take over Pac-World as a result.
Rotunda (voiced by Tabitha St. Germain) is a trigger-happy Elder who is the mother of both Stratos and Betrayus. Rotunda idolizes Pac-Man and decides to become his grandmother, much to the dismay of her two sons. Betrayus possessed her and used her body to keep himself from being found out, and had the President hypnotized into a kindergartener-like lunatic, which would aid his attempt to get the repository with his original body. This backfired when they were sent in the wrong direction to the repository area by the disillusioned Stratos. When Rotunda was back to herself, she strictly sent Betrayus back to his current home in the Netherworld. Betrayus ends up doing what his mother says. Rotunda also made a cameo appearance in "Cosmic Contest".
Zac is Pac's father. He was a skilled war operative. During Betrayus's revolt, he went missing and was presumed dead along with his wife Sunny, that led Pac to be orphaned (as the public thought). But later after Pac (who is known as Pacopolis's protector, "Pac-Man") beaten Apex twice, the evil pointy-head lord reveals he met Zac and is still alive with his wife on the pointy-head planet. Some Pac-Worlders think he died during the revolt, but Pac-Man and his friends (including Sir C) know he is still alive.
Sunny is Pac's mother. During the revolt of Betrayus, she disappeared and was presumed dead along with her husband, that led Pac to be orphaned (or so the public thought). She appears in the episode "Pac to the Future" and also "Happy Holidays and a Merry Berry Day" with her husband.
The Pacinator (voiced by Ian James Corlett impersonating Arnold Schwarzenegger) is a villainous Pac-Person that was responsible for freezing and killing most of the Yellow Ones. He made his first appearance in "Stand By Your Pac-Man" as well as making a cameo appearance in "Shadow of the Were-Pac" as a party guest. Pac-Man gets personal with him because of what he did with the other Yellow Ones. The Pacinator was hired to do this by a villain he thinks is more dangerous than Betrayus. It was presumed that he was hired by Apex.
Do-Ug (voiced by Gabe Khouth) is a Neander-Pac kid who first appeared in "Cave Pac-Man". One of Dr. Buttocks' plots ended up thawing Do-Ug from the ice that he was frozen in. He has a crush on Cylindria and is revealed to be an orphan. Due to Spiral's idiotic and careless antics, Do-Ug was hunted by several scientists. Luckily Pac and his friends managed to take him back to his home and timeline where the Pac-a-Chini was stolen. When they found the tribe's village, they tried to ask Do-ug to get it back. However, Do-ug admits that he is only a simple cave boy with no role in the tribe as such he is unable to do anything. After Pac-Man fought off a pack of Pacasaurs using his fire berry, the tribe decides to give the trio back the Pac-a-Chini. Before leaving, Pac-Man gives Do-ug the gift of fire and declares him "Keeper of the Flame", thus giving Do-ug a role within the tribe and becomes the first Neander-Pac to wield fire. The group then bids farewell to Do-ug before heading back to their own time. He also made cameo costume appearances in "The Shadow of the Were-Pac".
Danny Vaincori (voiced by Kyle Rideout) is a movie director who first appears in "Pac-Mania". He plans to create a TV show about Pac and his friends using look-alikes of them, but goes too far and makes Pac and his friends look like fools; he later gets really annoyed when Pac accidentally ruins his movie sets after mistaking a cyclops ghost prop for a real cyclops ghost. He was later possessed as part of Betrayus's plan to use his TV show as a cover-up to sneak into Pac-World and steal the Tree of Life using costumes and props as disguises (not counting the look-alikes of Pac and his friends).
Elliptika "Elli" (voiced by Kazumi Evans) is a light pink Pac-Worlder who visits Pacopolis from PacTokyo. She is the niece of President Spheros and Betrayus and the grandniece of Rotunda who makes her debut in "New Girl in Town". In the same episode, she said she remembers Pac's parents and where he might be able to find them. Elli and Pac share a mutual relationship, which nearly came to an end when she was possessed by Mavis and Pinky.
Moondog is a black-haired lavender Pac-Worlder who is Cylindria's father. He is a hippie and a gypsy who owns the family bus as a current home for them.
Starchild is a blonde Pac-Worlder who is Cylindria's mother. Like her husband Moondog, she is a hippie and a gypsy, and she still lives in the family bus after her husband gave the family treehouse to a horde of squirrels.
Grannie (voiced by Tabitha St. Germain) is a healthy elderly pink Pac-Worlder who is Starchild's mother and Cylindria's grandmother. Like Cylindria's parents, she is a hippie and a gypsy, and still lives in her son-in-law's family bus. She is usually seen with a family album, but it's quite possible that she's a bit unaware that she is embarrassing her granddaughter in front of Pac-Man and Spiral.
Santa Pac (voiced by Richard Newman) is Pac-World's version of Santa Claus who has skilled magic and fighting abilities and first appears in "Santa Pac". Betrayus kidnaps him to stop Berry Day from coming but this attempt fails. Pac-Man later rescues him and restores the holiday.
Ghosts
Butt-ler (voiced by Brian Drummond) is Betrayus' servant and butler. He is a purple Ghost with a butt-shaped head who wears a bowler hat and speaks with a British accent. During the Pac-War, he was known as Corporal Heineyhead and worked as a spy for Betrayus. He hates his older twin brother, Dr. Buttocks, even when he's pleasing his master. According to him, their mother hated them both, but she hated Buttocks even more. He and Pac-Man seem to be frenemies.
Dr. A.H. Buttocks (voiced by Brian Drummond) is the Netherworld's greatest mad scientist and Butt-ler's older twin brother who also has a butt-shaped head, but is light blue, has a cybernetic right hand, wears glasses and speaks with a German accent. He specializes in monster experimentation in hopes of achieving Pac-World domination. He sometimes suspects that the Ghost Gang are working with Pac-Man, but has had no luck convincing Betrayus of this since he usually blames him or his brother for his failures and sometimes tortures him for fun. He wants all of the workers to get out of his way. He sometimes hates his twin brother and has a rivalry with Sir Cumference. He was at the top of the class at Pac World U.
Glooky is a green Ghost with a squint in his left eye. He is a friend of Blinky.
Mavis is an orange female Ghost.
Specter (voiced by Brendan Ryan Barrett) is a ghost spy who works for Betrayus, ordered to get the repository; he is a lot stronger and smarter than the other ghosts, being the first never to get eaten by Pac. Dr. Buttocks has shown some jealousy toward him. He easily outsmarts Pac by possessing Uggles and other Pac-People, including his friends Spiral and Cylindria. He was eventually revealed to be a traitor who wanted to rule both Pac-World and the Netherworld, but ended up exposed as a liar and a traitorous outlaw thanks to a trick by Pac-Man, Cylindria, Spiral, the Ghost Gang (who used a slug named Larry), and a silent partner in Dr. Buttocks (who built the fake repository). Betrayus ends up having Specter placed in the deepest bowels of the Netherworld. He makes a surprising appearance in "New Girl in Town" as the host of a contest, possibly as both community service and a punishment for his attempt to rule Pac-World and the Netherworld. His third appearance is in "Santa Pac", where he tries to cheer up Betrayus but fails.
Fred is a white ghost that was used as a flag due to the lack of white clothes.
Master Goo (voiced by Vincent Tong) is a calm but cocky Ninja Ghost who is a Master of Pac-Fu (the Pac-World version of kung fu). During Betrayus's deadly revolt, he discovered the dark side of Pac-Fu but was defeated by the Good Pac-Fu Masters. Years later, he became a Martial Arts Coach for Betrayus's Ghost Forces. He seems to be even stronger than Specter. With help from Blinky, Pac-Man managed to use the Pac-Fu skills he learned to defeat Master Goo after they failed to defeat him during their first encounter.
Captain Banshee is a ghost pirate who first appears in "Cap'n Banshee and his Interstellar Buccaneers". He doesn't seem to live in the Netherworld and always sails in the sky and in space. He has a little green Pac-Dragon that behaves like a parrot.
Cyclops Ghosts (voiced by Lee Tockar) are heavyset Ghosts with one eye and three horns. These ghosts are a bit like thugs.
Ogle (voiced by Brian Drummond) is a Cyclops Ghost who works as a chef at his restaurant. He cooked the food for his friends, the Ghost Gang, Betrayus, Butt-ler, and Dr. Buttocks. In "No Body Knows", it is revealed that Ogle used to be a Pac-Person.
Fire Ghosts are orange Ghosts who can emit fire from their body. Pac-Man can only eat them if he has ice powers.
Ice Ghosts are blue Ghosts who can emit ice from their body. They first appeared in "A Berry Scary Night".
Tentacle Ghosts (voiced by Lee Tockar) are 4-eyed purple-black Ghosts who look similar to an octopus.
Guardian Ghosts are large Ghosts who guard the Netherworld. They seem to wear metal masks, have glowing cyan-blue eyes, and usually carry a staff.
Aqua Ghosts are light blue Ghosts with fins on their head. They first appeared in the episode "Heebo-Skeebo".
Drill-Bit Ghosts are Ghosts that have drill bits on their heads. They are always frightened.
Alien Ghosts are Ghosts that live on the Ghosteroid.
Ghosteroid is an asteroid ghost that Dr. Buttocks once brought near Pac-World. It proved to be much of a threat so Betrayus sends the Ghost Gang to help Pac-Man get rid of it. More of them later appeared in "Cosmic Contest" and "Cap'n Banshee and his Interstellar Buccaneers".
Ghost Sharks are ghostly sharks that reside in the waters of PacLantis and guard the Berry of Youth. They first appeared in "PacLantis" where one Ghost Shark ate the Berry of Youth and was revived as a Pac-Fish after they fought against Pac and his friends, the Ghost Gang, and Betrayus, Butt-ler, and Buttocks. More Ghost Sharks also appeared in "Cosmic Contest" and "Cap'n Banshee and his Interstellar Buccaneers".
Virus Ghosts are normal ghosts that have been digitised by a Dr. Buttocks invention. They have the power to infect or control electronic devices and can travel through computer networks.
Pointy Heads
Apex (voiced by Colin Murdock) is an evil strange mysterious overlord who used a ruse to come in peace to Pac-World so that he can take over it and he might have been the one who hired the Pacinator to help him. He has a habit of cracking jokes. During his ruse, he demonstrated abilities that outdid Pac-Man. When Apex's motives have been revealed, Pac-Man ends up fighting Apex and defeats him with his bad singing. Before retreating from Pac-World, Apex jokes that he's Pac-Man's father Zac and reveals that he had met him at some point, but won't tell how. In "Invasion of the Pointy Heads", Apex returns and strikes a deal with Betrayus to take over Pac-World. He eventually turned against Betrayus in order to further his goals and revealed that he was using him all this time. Apex and his fleet were repelled when Betrayus and his Ghosts allowed themselves to be eaten by Pac-Man since the Pointy-heads can't stand belched eyeballs. Before Apex joins in the retreat, he hints that he has Pac-Man's parents in his planet. In "Cosmic Contest", he and Tip participate in a race against Pac-Man, Spiral, Betrayus, and Buttocks.
Professor Pointybrains (voiced by Brian Drummond) is a scientist who assists Apex and seems to be just as smart as Sir Cumference and Dr. Buttocks.
Tip (voiced by Gabe Khouth) is a strong minion who first appears in "Cosmic Contest". He became both friends and rivals with Pac-Man during the race.
Robots
Grinder is a lab assistant Sir Cumference made for himself. Sometimes, he and Sir Cumference hate each other and he's usually bothered by Fuzbitz.
Grindette is Grinder's wife who first appeared in "The Bride of Grinder". After Grinder's failed attempt to make her, Buttocks controlled her with a mind control chip in order to have her search for the repository and the Tree of Life, but she was able to overcome it and remove the chip. Afterwards, she and Grinder became a couple.
Grinder-Tron is a giant robot made by Dr. Buttocks that resembles an evil version of Grinder. It's the first boss in the video game Pac-Man and the Ghostly Adventures 2.
Mega-Grinder is a giant robot build by Pac-Man (under the effect of a Brain Berry) which resembles Grinder. It is used to fight Grinder-Tron.
Computer Bug is a small antivirus bug that helps Pac-Man rid the computer systems of the evil virus ghosts.
Pac-Topus is a giant mechanical octopus that only appears in "Ride the Wild Pac-Topus". It was originally an amusement park ride until it was brought to life by Dr. Buttocks's mind controlling chip, but proves uncontrollable. It is the second boss in Pac-Man and the Ghostly Adventures 2, having somehow came back to life and has escaped to Paclantis.
Cyber-Mouse is a giant digital mouse-like creature that Pac-Man, Pinky, and Clyde encounter in the computer network.
Cyber-Fluffy is a digital vision of Fluffy that Pac-Man, Pinky, and Clyde encounter in the computer network after they defeat the Cyber-Mouse.
Others
Madame Ghoulasha (voiced by Kathleen Barr) is a warty Nether-witch from the Netherworld who first appeared in "Jinxed". She has a crush on Betrayus and offered to put a jinx on Pac-Man in exchange for him marrying her, but he refused and Buttocks ends up as her groom. She also made cameo appearances in "Shadow of the Were-Pac" and "Pac's Very Scary Halloween".
Count Pacula (voiced by Samuel Vincent) is a villainous vam-pac from the Netherworld with an appearance that resembles a Pac-Person. Count Pacula can only be summoned when two moons turn blue every 100 Halloweens. He first appears in "A Berry Scary Night" where he tries to hypnotize the Pac-Worlders into finding Pac-Man under Betrayus's orders, but he also begins hypnotizing some of the ghosts as well since he is on neither side. Count Pacula has a weakness to the Garlic Berry. In "Pac's Very Scary Halloween", he is Dr. Pacenstein's neighbor but doesn't enjoy being near him. He later helps Cyli and Spiral return Pac's brain to his body after Dr. Pacenstein transfers his brain inside it. His and Pac's brain were briefly transferred and were later switched back. Like the version that was seen in the 80s' Pac-Man cartoon, Count Pacula is a spoof of Count Dracula.
Jean (voiced by Nicole Oliver) is a vile genie who first appeared in "Meanie Genie". Dr. Buttocks found her bottle and tricked Pac-Man into retrieving it and releasing her. After granting Pac-Man three wishes that end up putting the ghosts on Pac-World while keeping them from re-entering the Netherworld, she traps Pac-Man in her bottle just before she and the ghosts proceed to wreak havoc. After Sir Cumference frees Pac-Man, he fights Jean using a Wizard Berry and traps her back in her bottle, undoing what she has done to the city and sending the ghosts back to the Netherworld. Pac-Man and Sir Cumference then place Jean's bottle into a rocket and launch it into space. Unbeknownst to them, the rocket containing Jean's bottle is found by a Pointy Head spaceship.
Overlords of the Outer Regions are a group of strange, unknown celestial beings that take the form of a burst of light; they first appeared in "Cosmic Contest". Frustrated and annoyed that the Pac-People, Ghosts, and Pointy Heads kept fighting each other and disturbing them, they decided to settle things by having two representatives of each race compete against one another: the winning race could stay, while the other two would be banished to a world of nothingness far away from their homeworlds. Because the contest ended in a tie, no one is banished by the Overlords. Before leaving, the Overlords warn that if the races cause a ruckus again they will annihilate them all, and they depart without helping anyone return home. Before the race started, the Overlords quietly remarked that if Pac-Man were to reach his full potential, he could challenge even them.
Mooby is an ancient giant flying Pac-Cow that lives up in the sky in Pac-World.
The Easter Pac-Peep (voiced by Ashleigh Ball) is a humanoid chicken made of marshmallow and the owner of Easter Egg Island, who first appears in "Easter Egg Island". She became hostile and bad-tempered because of Betrayus's past pranks and kidnapped his ghosts. Pac-Man and his friends helped Betrayus rescue them and talked some sense into her, turning her kind and good-hearted again by showing her Cylindria's family album, which contains a picture of Cylindria and her family holding an Easter basket. She is a parody of the Easter Bunny, the main difference being that she is a hen instead of a rabbit.
Dentures of Doom (voiced by Paul Dobson) are a set of living dentures originally owned by an ancient mummy wizard who accidentally brought them to life and turned them evil. As a result, they were locked away because of their dangerous power. Many years later, Pac-Man and his friends find them, and they possess Pac-Man and Buttocks in order to return to their owner. Before they can do so, Pac-Man and Grinder intervene and return them to normal.
Mummy Wizard (voiced by Paul Dobson) is a strange humanoid wizard with a mummy-like appearance. After bringing his dentures to life and turning them evil, he was separated from them. Many years later, he and his dentures try to reunite, but Pac-Man and Grinder interfere and turn the dentures back to normal before returning them, which leads the wizard to become good. He made a cameo appearance in "Pac's Very Scary Halloween".
Round Deer are flying reindeer that pull Santa Pac's sled and have special fighting abilities.
Rounddolf is the leading round deer with a glowing yellow nose. He's a spoof of Rudolph the Red-Nosed Reindeer.
Dr. Pacenstein is a talking brain in a jar who lives in Transylpacia Castle, next to Count Pacula's castle (although he and Count Pacula do not get along well). He was originally a Pac-Person and a quack scientist who, after being shunned by the villagers, sacrificed his body for experiments. He made a deal with Betrayus and lured Pac-Man and his friends to his castle to transfer his brain into Pac's body and Pac's brain into his jar. Dr. Pacenstein double-crosses Betrayus and begins wreaking havoc until he is defeated by Pac-Man (in Count Pacula's body). After Pac's brain is returned to his body, Dr. Pacenstein's brain ends up in a slug's body. He is a spoof of Victor Frankenstein.
Eeghost is a silent white ghost in a hood. He is Dr. Pacenstein's sidekick. Eeghost is a spoof of Igor.
Limbs are Dr. Pacenstein's hands and feet who appear to have been given minds of their own after Dr. Pacenstein sacrificed his body. They usually annoy him.
Monsters
Fluffy is a giant three-headed poodle, resembling a Cerberus. Although its name may sound sweet, it turns into a very mean monster when angered. The middle head has a soft side and sends get-well cards to its victims. Pinky is usually the one to bribe Fluffy when Pac-Man needs to pass by.
Fuzbitz (voiced by Lee Tockar) is Sir Cumference's pet monster. He has an appetite similar to Pac's. When angry, he turns into a more ferocious version of himself. Despite this ability, Betrayus and Dr. Buttocks thought he was useless back when Fuzbitz lived in the Netherworld. He now lives with Sir Cumference in his lab after proving too much for Pac-Man and his friends to handle. He sometimes annoys Grinder.
Gargoyles are large, heavy-set blue monsters with three eyes and small wings.
Slug-cams are used by Betrayus to spy on Pac-World.
Larry is one slug who is friends with the Ghost Gang.
Pac-Dragons are one-eyed red dragons that live in the Netherworld.
Stalkers are black, medium-sized monsters with two legs, long eel-like bodies, and multi-eyed faces full of sharp teeth. Despite their ferocious appearance, they are actually quite wimpy.
Venus Dragon Flytraps are large carnivorous plants that are indigenous to the Netherworld.
Monobats are a race of one-eyed bats.
Stone Temple Guardians are giant statues that serve as the guardians of a slime-filled temple.
Pacosaurus refers to a type of ancient dinosaur which originally roamed Pac-World, appearing in "Jurassic Pac" and "Cave Pac-Man".
Were-Pac Flea is a Netherworld flea made by Dr. Buttocks. It was given the ability to turn Pac-People, Ghosts or Pac-Wolves into Were-Pacs (the Pac-People version of werewolves). The effects will wear off when it leaves its host. It only appears in "The Shadow of the Were-Pac" where it tries to turn all the Pac-People into Were-Pacs including Pac-Man and his friends.
Space Worm is a giant worm that lives in space and has the ability to send victims to other dimensions. Pac-Man's first encounter with it caused him to be transported to the future, which is then revealed to be a dream. It made a cameo appearance in "Cap'n Banshee and his Interstellar Buccaneers". Buttocks later tried to use its powers to his advantage and sends Pac-Man to an alternate dimension of Pac-World, but it proves to be too powerful. Pac-Man later returns from the alternate dimension and repels it from Pac-World.
Hugefoot is a monster with an enormous foot. It is revealed to be a female and falls in love with Pac-Man.
Chocolate Bunnies appear in "Easter Egg Island"; they are scientists and travelers who were turned into chocolate beasts and are controlled by the Easter Pac-Peep. Cyli and her family (and Betrayus) are later turned into chocolate bunnies while searching for Betrayus's ghosts, but all except Betrayus are eventually returned to normal by Pac-Man.
Were-Pacs are Pac-World's version of werewolves, appearing in "The Shadow of the Were-Pac" (where they are actually Pac-People transformed by the Were-Pac Flea) and "Pac's Very Scary Halloween".
Pac-Mammoths are creatures who originally roamed Pac-World like the Pacosaurus. A Pac-Mammoth made a cameo appearance in "Pac's Very Scary Halloween" while they mostly appeared in the video game Pac-Man and the Ghostly Adventures 2.
Episodes
Crew
Avi Arad – Developer & Executive Producer
Sean Catherine Derek – Story Editor
David Earl – Storyboard Artist
Tetsuya Ishii – Lead Character Designer
Terry Klassen – Voice Director (ep. 3–present)
Masashi Kobayashi – Line Producer
Tatsuro Maruyama – Art Director
Masafumi Mima – Sound Director
Tom Ruegger – Developer, Writer
Paul Rugg – Developer, Writer
Kris Zimmerman – Voice Director (ep. 1–2)
Broadcast
The series, produced by Avi Arad, first aired on June 15, 2013, on Disney XD in the United States and debuted on KidsClick on July 1, 2017. It premiered on March 17, 2014, on Family Chrgd (then Disney XD) in Canada and on April 5, 2014, on Tokyo MX and BS11 in Japan. It premiered on Disney XD UK on January 11, 2014. On October 5, 2015, the series premiered on Discovery Kids in Latin America (except Brazil, where it aired on Gloob instead). The show also premiered on Discovery Family on November 16, 2019.
DVD releases
Australian releases
There have been two DVD releases in Australia, both released on October 2, 2013:
Pac-Man and the Ghostly Adventures: The Adventure Begins, the first DVD release containing the first 6 episodes
Pac-Man and the Ghostly Adventures: Pac to the Future, which contains another 6 episodes.
North American releases
On January 7, 2014, in North America, there were four DVD releases. The Adventure Begins DVD only contains the first episode, whereas the other three DVDs contain four episodes each:
Pac-Man and the Ghostly Adventures: Pac is Back (initially exclusive to Walmart then distributed June 24 on Amazon) includes the first four episodes
Pac-Man and the Ghostly Adventures: All You Can Eat contains the four following episodes
Pac-Man and the Ghostly Adventures: Let The Games Begin (exclusive to Target) contains the four episodes that follow after
Pac-Man and the Ghostly Adventures: Ghost Patrol was released in North America on March 18, 2014.
Pac-Man and the Ghostly Adventures: A Berry Scary Night was released in North America on September 16, 2014.
Redbox exclusives
Two additional DVDs, each with three episodes, were released exclusively to Redbox:
Pac-Man and the Ghostly Adventures: Indiana Pac and the Temple of Slime
Pac-Man and the Ghostly Adventures: Mission Impacable!
Another DVD titled Pac-Man and the Ghostly Adventures: Pac To The Future was released exclusively to Redbox Canada.
Other
Jurassic Pac was released on June 2, 2015.
Movie 4-Pac was released on December 9, 2014.
Collection was released on November 4, 2014.
8-Pac was released on November 3, 2015.
Video games
Several video games based on the show have been developed. An endless runner for iOS and Android titled Pac-Man Dash! was released in July 2013 and was also hosted for free in Canada on the CHRGD website. In 2017, Bandai Namco Entertainment discontinued Pac-Man Dash! and removed it permanently from the App Store and Google Play.
A 3D platformer with the same name as the TV series was released for Wii U, PlayStation 3, Xbox 360, and Windows PC in late 2013, accompanied by a 2D platformer for the Nintendo 3DS shortly after. A sequel was released in October 2014. Characters from the show have also appeared in the compilation release for PlayStation 3, Xbox 360, and Windows PC, titled Pac-Man Museum.
Release
Pac-Man and the Ghostly Adventures debuted on June 15, 2013, on Disney XD USA and on March 17, 2014, on Disney XD Canada; it moved to Family Chrgd on October 9, 2015. In the Philippines, it aired on Disney Channel from March 3, 2014, until October 30, 2014, to give way to the special programming of "Monstober", and returned to the air on March 2, 2015. On May 7, 2019, more than a month after KidsClick's demise, Discovery Family acquired the series as part of a distribution deal with 41 Entertainment, and premiered it on November 16, 2019.
Notes
References
External links
Production website
Pac-Man and the Ghostly Adventures at Behind the Voice Actors
Japanese website
Pac-Man
2010s American animated television series
2010s American school television series
2010s American comic science fiction television series
2013 American television series debuts
2015 American television series endings
2010s Canadian animated television series
2010s Canadian comic science fiction television series
2013 Canadian television series debuts
2015 Canadian television series endings
2013 anime television series debuts
2013 Japanese television series debuts
2015 Japanese television series endings
Bandai Namco franchises
American children's animated action television series
American children's animated space adventure television series
American children's animated drama television series
American children's animated science fantasy television series
American children's animated comic science fiction television series
Canadian children's animated action television series
Canadian children's animated space adventure television series
Canadian children's animated drama television series
Canadian children's animated science fantasy television series
Canadian children's animated comic science fiction television series
Japanese children's animated action television series
Japanese children's animated space adventure television series
Japanese children's animated science fantasy television series
Japanese children's animated comic science fiction television series
American computer-animated television series
Canadian computer-animated television series
Japanese computer-animated television series
English-language television shows
Disney XD original programming
Animated series based on video games
Television series about ghosts
Animated television series about orphans
Animated television series about teenagers
OLM, Inc.
Television series about size change
Works based on Bandai Namco video games
KidsClick
Television series created by Tom Ruegger
American television shows based on video games
Canadian television shows based on video games |
40492323 | https://en.wikipedia.org/wiki/AddressSanitizer | AddressSanitizer | AddressSanitizer (or ASan) is an open source programming tool that detects memory corruption bugs such as buffer overflows or accesses to a dangling pointer (use-after-free). AddressSanitizer is based on compiler instrumentation and directly mapped shadow memory.
AddressSanitizer is currently implemented in Clang (starting from version 3.1), GCC (starting from version 4.8), Xcode (starting from version 7.0) and MSVC (widely available starting from version 16.9). On average, the instrumentation increases processing time by about 73% and memory usage by 240%.
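The direct shadow mapping can be illustrated with a short sketch. This is an approximation rather than the code the compiler actually emits, the helper name is invented for illustration, and the offset shown is only the common default for 64-bit Linux; the lookup is only meaningful inside a process whose shadow region has been set up by the AddressSanitizer runtime.
#include <cstdint>
// Illustrative only: one shadow byte describes the state of 8 aligned application bytes.
static const uintptr_t kShadowOffset = 0x7fff8000ULL;  // assumed default for 64-bit Linux
static inline bool AddressIsPoisoned(const void *addr) {
  uint8_t shadow = *reinterpret_cast<const uint8_t *>(
      (reinterpret_cast<uintptr_t>(addr) >> 3) + kShadowOffset);
  return shadow != 0;  // non-zero: redzone, freed memory, or a partially addressable word
}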
Users
Chromium and Firefox developers are active users of AddressSanitizer; the tool has found hundreds of bugs in these web browsers. A number of bugs were found in FFmpeg and FreeType. The Linux kernel has enabled the AddressSanitizer for the x86-64 architecture as of Linux version 4.0.
KernelAddressSanitizer
The KernelAddressSanitizer (KASAN) detects dynamic memory errors in the Linux kernel. Kernel instrumentation requires a special compiler feature, supplied via the -fsanitize=kernel-address command-line option, since kernels do not use the same address space as normal programs.
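As a minimal sketch, KASAN is typically switched on through the kernel configuration, which causes the build system to pass the compiler flag mentioned above; the option shown below is the generic one, and additional KASAN-related options vary between kernel versions and architectures.
# Assumed kernel .config fragment for a KASAN-enabled build
CONFIG_KASAN=y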
Examples
Heap-use-after-free
// To compile: g++ -O -g -fsanitize=address heap-use-after-free.cc
int main(int argc, char **argv) {
int *array = new int[100];
delete [] array;
return array[argc]; // BOOM
}
$ ./a.out
==5587==ERROR: AddressSanitizer: heap-use-after-free on address 0x61400000fe44 at pc 0x47b55f bp 0x7ffc36b28200 sp 0x7ffc36b281f8
READ of size 4 at 0x61400000fe44 thread T0
#0 0x47b55e in main /home/test/example_UseAfterFree.cc:5
#1 0x7f15cfe71b14 in __libc_start_main (/lib64/libc.so.6+0x21b14)
#2 0x47b44c in _start (/root/a.out+0x47b44c)
0x61400000fe44 is located 4 bytes inside of 400-byte region [0x61400000fe40,0x61400000ffd0)
freed by thread T0 here:
#0 0x465da9 in operator delete[](void*) (/root/a.out+0x465da9)
#1 0x47b529 in main /home/test/example_UseAfterFree.cc:4
previously allocated by thread T0 here:
#0 0x465aa9 in operator new[](unsigned long) (/root/a.out+0x465aa9)
#1 0x47b51e in main /home/test/example_UseAfterFree.cc:3
SUMMARY: AddressSanitizer: heap-use-after-free /home/test/example_UseAfterFree.cc:5 main
Shadow bytes around the buggy address:
0x0c287fff9f70: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c287fff9f80: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c287fff9f90: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c287fff9fa0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c287fff9fb0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0c287fff9fc0: fa fa fa fa fa fa fa fa[fd]fd fd fd fd fd fd fd
0x0c287fff9fd0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c287fff9fe0: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
0x0c287fff9ff0: fd fd fd fd fd fd fd fd fd fd fa fa fa fa fa fa
0x0c287fffa000: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c287fffa010: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Heap right redzone: fb
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack partial redzone: f4
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
ASan internal: fe
==5587==ABORTING
Heap-buffer-overflow
// RUN: clang++ -O -g -fsanitize=address %t && ./a.out
int main(int argc, char **argv) {
int *array = new int[100];
array[0] = 0;
int res = array[argc + 100]; // BOOM
delete [] array;
return res;
}
==25372==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x61400000ffd4 at pc 0x0000004ddb59 bp 0x7fffea6005a0 sp 0x7fffea600598
READ of size 4 at 0x61400000ffd4 thread T0
#0 0x46bfee in main /tmp/main.cpp:4:13
0x61400000ffd4 is located 4 bytes to the right of 400-byte region [0x61400000fe40,0x61400000ffd0)
allocated by thread T0 here:
#0 0x4536e1 in operator new[](unsigned long)
#1 0x46bfb9 in main /tmp/main.cpp:2:16
Stack-buffer-overflow
// RUN: clang -O -g -fsanitize=address %t && ./a.out
int main(int argc, char **argv) {
int stack_array[100];
stack_array[1] = 0;
return stack_array[argc + 100]; // BOOM
}
==7405==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7fff64740634 at pc 0x46c103 bp 0x7fff64740470 sp 0x7fff64740468
READ of size 4 at 0x7fff64740634 thread T0
#0 0x46c102 in main /tmp/example_StackOutOfBounds.cc:5
Address 0x7fff64740634 is located in stack of thread T0 at offset 436 in frame
#0 0x46bfaf in main /tmp/example_StackOutOfBounds.cc:2
This frame has 1 object(s):
[32, 432) 'stack_array' <== Memory access at offset 436 overflows this variable
Global-buffer-overflow
// RUN: clang -O -g -fsanitize=address %t && ./a.out
int global_array[100] = {-1};
int main(int argc, char **argv) {
return global_array[argc + 100]; // BOOM
}
==7455==ERROR: AddressSanitizer: global-buffer-overflow on address 0x000000689b54 at pc 0x46bfd8 bp 0x7fff515e5ba0 sp 0x7fff515e5b98
READ of size 4 at 0x000000689b54 thread T0
#0 0x46bfd7 in main /tmp/example_GlobalOutOfBounds.cc:4
0x000000689b54 is located 4 bytes to the right of
global variable 'global_array' from 'example_GlobalOutOfBounds.cc' (0x6899c0) of size 400
Limitations
AddressSanitizer does not detect any uninitialized memory reads (but this is detected by MemorySanitizer), and only detects some use-after-return bugs. It is also not capable of detecting all arbitrary memory corruption bugs, nor all arbitrary write bugs due to integer underflow/overflows (when the integer with undefined behavior is used to calculate memory address offsets). Adjacent buffers in structs and classes are not protected from overflow, in part to prevent breaking backwards compatibility.
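For example, the following hypothetical snippet (struct and field names are arbitrary) writes past one struct member into an adjacent member; because the access stays inside the same object and never touches a redzone, a default AddressSanitizer build would typically not report it.
// To compile: g++ -O -g -fsanitize=address intra-object-overflow.cc
struct Packet {
  char header[8];
  char payload[8];
};
int main(int argc, char **argv) {
  Packet p;
  p.header[argc + 8] = 'x';  // overflows 'header' into 'payload', but stays inside 'p'
  return p.payload[1];       // typically no report is produced for the write above
}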
See also
Intel MPX
The Application Verifier (AppVerif.exe) in Microsoft Windows SDK
References
External links
AddressSanitizer Google Group (no mailing list)
AddressSanitizer project page
AddressSanitizer documentation (Clang)
Security testing tools
Computer security procedures
Free memory management software
Free memory debuggers |
31848915 | https://en.wikipedia.org/wiki/George%20Jackson%20%28baseball%29 | George Jackson (baseball) | George Christopher Jackson (January 2, 1882 – November 26, 1972), known also as "Hickory" Jackson, was a professional baseball player whose career spanned 27 seasons, three of which were spent in Major League Baseball (MLB) with the Boston Rustlers/Braves (1911–13). Over his major league career, he compiled a .285 batting average with 85 runs scored, 158 hits, 24 doubles, seven triples, four home runs, 73 runs batted in, and 34 stolen bases in 152 games played. Jackson's professional career started in the minor leagues with the Jackson Senators.
The majority of Jackson's career was spent in the minor leagues. In 1911, he broke into the major leagues as a member of the Boston Rustlers. He spent parts of the next two seasons with the Boston National League club. In 1913, Jackson was sent down to the minor leagues. From there, he played with the Buffalo Bisons (1913–17), Fort Worth Panthers (1918), San Antonio Bronchos (1919), Shreveport Gassers (1920–23), Beaumont Exporters (1923), Tyler Trojans (1924–25, 1927–28), Greenville Hunters (1926), Laurel Cardinals (1929), and El Dorado Lions (1930–32). Over his career in the minors, Jackson batted .297 with 2,453 hits, 443 doubles, 74 triples, and 157 home runs in 2,365 games played.
Early life
George Christopher Jackson was born on January 2, 1882, in Springfield, Missouri, to George R. Jackson, a native of England, and Elmyra Jackson, a native of Pennsylvania. By 1900, the Jackson family was living in Hill County, Texas. George C. Jackson worked on his family's farm in Blum, Texas, at a young age. Jackson recalled loving athletics in his youth, stating that whenever he had any down time, he would throw a baseball against his barn and catch it or play a pick-up game with the farm hands.
According to The Washington Post, Jackson displayed a "wonderful" ability to catch the baseball in his youth. He would use a small branch as a baseball bat. Jackson played with the Blum amateur baseball team when he was young. He was given the carfare it took to get to the ballpark by his manager in exchange for playing. Jackson worked as an acrobat at the age of 18. He had five siblings; brothers William, Kennith, and Robert; and sisters Lula, and Elmyra.
Professional career
Early minor league career (1906–1911)
Jackson's professional baseball career started in 1906 as a pitcher for the Class-D Jackson Senators. As a member of the Senators, Jackson played with past and future Major League Baseball players Harry Betts, Orth Collins, Bill Dammann, Tom Gettinger, Billy Kinloch, Jack Ryan, and Elmer Steele. Jackson compiled a record of 1–2 with 20 hits allowed, 16 runs allowed, and eight bases on balls issued that season. In 1907, Jackson was discovered by the Dallas Giants of the Class-C Texas League, whose management had heard of him through the local newspapers, which described him as a "wonderful ball player". The Giants signed Jackson and farmed him out to the Lake Charles Creoles of the Class-D Gulf Coast League, where he was used as a first baseman. Jackson was the only player on the Lake Charles club ever to go on to play in MLB. On the season, Jackson batted .281 with 43 hits, six doubles, two triples, and one home run in 44 games played.
In 1908, the Dallas Giants, who had farmed Jackson out to the Lake Charles Creoles a year prior, asked him to report to the Dallas club. That season, he was used as an outfielder. Jackson batted .242 with 53 hits, 11 doubles, three triples, and one home run in 74 games played. Jackson re-signed with the Giants in 1909. On the season, he batted .271 with 65 runs scored, 117 hits, 21 doubles, three triples, six home runs, and 53 stolen bases in 129 games played. He was tied for third in the league in triples. Jackson again joined the Dallas club in 1910. He batted .280 with 80 runs scored, 144 hits, 17 doubles, seven triples, five home runs, and 55 stolen bases in 155 games played.
Towards the end of the 1910 season, Jackson was sold by the Dallas Giants to the Memphis Turtles of the Class-A Southern Association. In those games, he compiled three hits, two of which were doubles, in 18 at-bats. At the start of the 1911 season, Jackson re-signed with the Memphis Turtles. During the season, Billy Hamilton, who was working as a scout for the Boston Rustlers, was dispatched to report back on Memphis' shortstop Karl Crandall, the brother of MLB player Doc Crandall. When Hamilton arrived in Memphis, he was not impressed by the shortstop; however, he noticed Jackson in the outfield. Hamilton followed the Memphis club for two weeks watching Jackson. When Hamilton finally reported his findings back to Boston's front office, he was ordered to sign Jackson. With Memphis that season, Jackson batted .260 with 78 hits, 17 doubles, four triples, and two home runs in 85 games played.
Boston Rustlers/Braves (1911–13)
In exchange for allowing the Boston Rustlers to sign Jackson, the Memphis Turtles were given cash considerations and pitcher Cecil Ferguson. Jackson made his MLB debut on August 2, against the Pittsburgh Pirates. During his debut, he played center field and collected three hits. The Washington Post reported that Jackson "broke in with a bang" with Boston and that his fielding was "far above par". It also stated that Jackson had an "unassuming disposition", "has all the confidence in his ability", "is fast on his feet", and is a "good waiter". Through late August, he led the National League in batting average and averaged a stolen base every game. On August 24, in a game against the St. Louis Cardinals, Jackson hit a sacrifice fly in the eighth inning to tie the score at 6–to–6 and a run-scoring single in the tenth inning, giving Boston the 8–to–7 win. The Reading Eagle described Jackson as a "sensation". With Boston that season, he batted .347 with 28 runs scored, 51 hits, 11 doubles, two triples, 25 runs batted in (RBIs), and 12 stolen bases in 39 games played. Jackson played in too few games to qualify for the 1911 batting title.
Jackson joined the Boston club, now renamed the Braves, in March 1912 for spring training. On May 30, Jackson hit his first MLB home run, an inside-the-park home run, against Brooklyn Dodgers pitcher Nap Rucker. Jackson hit his second career MLB home run on June 17, against Cincinnati Reds pitcher Bert Humphries. Jackson's third home run was also against the Reds, this time off of pitcher George Suggs on August 6. On August 26, during a game against the Pittsburgh Pirates, Jackson hit his fourth and final home run of the season, inside-the-park off of King Cole; it was also the final home run of his MLB career. On the season, he batted .262 with 55 runs scored, 104 hits, 13 doubles, five triples, four home runs, 48 RBIs, and 22 stolen bases in 110 games played. Jackson finished the season tied for third with John Titus in hit by pitches (10), and fifth in strikeouts (72). Jackson was also tied for third with Steve Evans, Josh Devore, Jay Kirke, and Mike Mitchell in errors by an outfielder (15). Jackson played with the Boston Braves again in 1913, but appeared in just three games. In those games, he compiled two runs scored and three hits in 10 at-bats.
Buffalo Bisons, and Texas League (1913–1923)
On May 14, 1913, the Boston Braves traded Jackson to the Buffalo Bisons of the Double-A International League in exchange for Leslie Mannie. In his first season with Buffalo, Jackson batted .260 with 110 hits, 15 doubles, seven triples, and three home runs in 116 games played. He re-signed with the Bisons in 1914; and batted .269 with 84 hits, 17 doubles, four triples, and four home runs in 97 games played. Jackson spent his third season with the Buffalo club in 1915. He batted .255 with 51 hits, 10 doubles, one triple, and one home run in 78 games played. In 1916, Jackson again played with Buffalo. In 116 games played, he batted .325 with 146 hits, 34 doubles, nine triples, and two home runs. Jackson led the league in doubles. His last season with the Bisons came in 1917. Jackson batted .275 with 111 hits, 20 doubles, three triples, and three home runs in 112 games played.
Jackson joined the Fort Worth Panthers of the Class-B Texas League before the start of the 1918 season. On the season, he batted .305 with 74 hits, 16 doubles, one triple, and three home runs in 69 games played. Jackson signed with the San Antonio Bronchos of the Texas League on June 22, 1919. In September, Jackson suffered a leg injury. On the season, he batted .264 with 75 hits, 10 doubles, one triple, and three home runs in 81 games played. In 1920, Jackson signed with the Shreveport Gassers of the Texas League and played right field. He batted .333 with 164 hits, 31 doubles, nine triples, and six home runs in 133 games played.
Jackson re-signed with the Shreveport club in 1921. On May 5, Jackson hit a single in the eighth inning of a game against the Fort Worth Panthers to tie the game at 3–to–3, and later hit a triple in the tenth inning to drive in the winning run, giving the Gassers a 4–to–3 victory. After 38 games that season, Jackson led the Texas League with 14 stolen bases. On August 3, Jackson hit a walk-off home run, giving the Gassers a 12–to–9 victory over the Houston Buffaloes. He batted .310 with 194 hits, 31 doubles, 11 triples, and 10 home runs in 160 games played that season. Jackson was tied for fourth with Joe Connolly in hits that season. In 1922, Jackson batted .344 with 141 hits, 28 doubles, two triples, and 10 home runs in 111 games played with the Gassers. He was tied for fourth with Tom Connolly in batting average that season. In 1923, Jackson played with the Shreveport Gassers and the Beaumont Exporters, both of the Texas League. Between the two clubs, he batted .250 with 64 hits, 11 doubles, three triples, and four home runs in 82 games played.
Later career (1924–1932)
In 1924, Jackson joined the Tyler Trojans of the Class-D East Texas League. On the season, he batted .371 with 154 hits, 31 doubles, and 26 home runs in 110 games played. He finished the season third in the league in home runs and fourth in batting average. He played with the Tyler club again during the 1925 season. In 92 games played, Jackson batted .362 with 127 hits, 28 doubles, and 16 home runs. In 1926, Jackson joined the Greenville Hunters, who were also in the East Texas League, and was also employed to manage the club. On the season, he batted .289 with 97 hits, 17 doubles, and 10 home runs in 90 games played. Jackson re-joined the Tyler Trojans, by then members of the Lone Star League, in 1927. He batted .294 with 126 hits, 21 doubles, and 21 home runs in 115 games that year, finishing the season second in home runs. He returned as the player-manager for the Trojans in 1928, batting .331 with 105 hits, 17 doubles, one triple, and 13 home runs in 87 games played.
In January 1929, Jackson attended a meeting consisting of managers of the Lone Star League. However, at the start of the 1929 season, he was hired as the player-manager of the Laurel Cardinals of the Class-D Cotton States League. On the season, he batted .288 with 69 hits, nine doubles, three triples, and one home run in 72 games played. In 1930, Jackson was signed as the player-manager of the El Dorado Lions. The Lions were members of the Cotton States League. Jackson batted .288 with 69 hits, nine doubles, three triples, and one home run in 72 games played. He re-signed with the Lions in 1931. He batted .296 with 37 hits, and eight doubles in 55 games played. His final season of professional baseball came in 1932 at the age of 50 with the El Dorado club. Jackson batted .230 with 20 hits, one double, and one home run in 34 games played. He was replaced as the manager for the Lions mid-season by Clyde Glass.
Later life
Jackson resided in Blum, Texas, with his wife Elizabeth and their children Finis, Jack, George E., and Evelyn. Jackson's son, George E. Jackson, worked in the oil fields of Texas. Jackson died in Cleburne, Texas, on November 26, 1972, at the age of 90. He was buried at Blum Cemetery in Blum, Texas.
References
General references
Inline citations
External links
1882 births
1972 deaths
Sportspeople from Springfield, Missouri
Baseball players from Missouri
Major League Baseball outfielders
Baseball pitchers
Minor league baseball managers
Jackson Senators players
Dallas Giants players
Memphis Turtles players
Boston Braves players
Buffalo Bisons (minor league) players
Fort Worth Panthers players
San Antonio Bronchos players
Shreveport Gassers players
Beaumont Exporters players
Tyler Trojans players
Greenville Hunters players
Laurel Cardinals players
El Dorado Lions players
Baseball player-managers
Semi-professional baseball players |
12973210 | https://en.wikipedia.org/wiki/Shimon%20Ullman | Shimon Ullman | Shimon Ullman (שמעון אולמן, born January 28, 1948, in Jerusalem) is a professor of computer science at the Weizmann Institute of Science, Israel. Ullman's main research area is the study of vision processing by both humans and machines. Specifically, he focuses on object and facial recognition, and has contributed a number of key insights in this field, including, with Christof Koch, the idea of a visual saliency map in the mammalian visual system that regulates selective spatial attention.
Education
He received his Ph.D. from MIT in 1977 advised by David Marr.
Research
He is the author of several books on the topic of vision, including High-level vision: Object recognition and visual cognition.
Ullman is the former head of the Department of Computer Science and Applied Mathematics at the Weizmann Institute.
Awards and honours
Ullman was awarded the 2008 David E. Rumelhart Prize for Theoretical Contributions to Cognitive Science. In 2014 he received the EMET prize in the field of computer science for his contributions to AI and computer vision.
In 2015 Ullman was awarded the Israel Prize in mathematics and computer science.
In 2019 he won the Azriel Rosenfeld Lifetime Achievement Award in the field of computer vision.
He is the co-founder of Orbotech and a former member of Israel's Council for Higher Education.
References
1948 births
Living people
People from Jerusalem
Massachusetts Institute of Technology alumni
Israeli computer scientists
Weizmann Institute of Science faculty
Rumelhart Prize laureates
Israel Prize in mathematics recipients
Israel Prize in computer sciences recipients
Fellows of the Cognitive Science Society |
55478320 | https://en.wikipedia.org/wiki/Sara%20Hughes | Sara Hughes | Sara Elizabeth Hughes (born February 14, 1995) is an American beach volleyball player. With teammate Summer Ross, she achieved a career-high world ranking of No. 9 in August 2018. Hughes has won three tournaments on the AVP Pro Tour, as well as one gold and two bronze medals on the FIVB World Tour.
Hughes began her beach volleyball training in Huntington Beach, California, at the age of eight. As a junior, she partnered with Kelly Claes to win bronze medals at the 2013 U19 and 2014 U21 World Championships. Her partnership with Claes continued through college, where the pair won 103 consecutive collegiate matches and led the USC Trojans to back-to-back NCAA Championships in 2016 and 2017. Soon after turning professional in mid-2017, Hughes and Claes became the youngest team to win an AVP event when they won the season-ending Championship. Hughes split from Claes in early 2018 and teamed up with S. Ross. In their first year playing together, Hughes and S. Ross won their first World Tour title and entered the top ten of the world rankings.
Hughes is a right-side defender and has been noted for her speed and willingness to chase down balls. She is the 2017 FIVB Top Rookie.
Early life and junior career
Hughes was born in Long Beach, California, to Rory and Laura. She has an older brother, Connor, and an older sister, Lauren. Her mother is a former volleyball player and both her siblings played the sport in college, with Connor winning two NCAA Men's Volleyball Championships with the UC Irvine Anteaters.
Growing up in Costa Mesa, California, in a volleyball-playing family, Hughes regularly attended her siblings' practices and tournaments. During one such instance, a player's parent was impressed by eight-year-old Hughes' peppering and recommended her to local beach volleyball youth coach Bill Lovelace. According to Hughes, she first came to love the sport when Lovelace praised her ball control as the best he had ever seen for an eight-year-old. After a successful tryout, she began training under Lovelace every summer in Huntington Beach until she was 15.
A standout junior beach volleyball player, Hughes won numerous tournaments on the Amateur Athletic Union and California Beach Volleyball Association circuits. From 2004 to 2012, she was mostly partnered with Justine Wong-Orantes, playing as a blocker. With Wong-Orantes, Hughes placed ninth at the 2011 and 2012 U19 World Championships. She also finished fourth at the 2012 U21 World Championships with Summer Ross. The following year, Hughes began playing with Kelly Claes and transitioned into a full-time defender. The duo won bronze medals at the 2013 U19 and 2014 U21 World Championships.
Hughes also played club indoor volleyball as the setter for Mizuno Long Beach, and was named most valuable player after her club won the 16-U Junior Olympics national championship in 2011. She played indoor volleyball for Mater Dei High School as well and was the Orange County Player of the Year as a senior.
College
Regarded as one of the top high school recruits for both beach and indoor, Hughes committed to playing beach volleyball for the USC Trojans in her junior year of high school. Beach volleyball had just become an NCAA Emerging Sport for Women at the time and Hughes decided to forgo collegiate indoor volleyball as "sand was [her] real passion."
Hughes joined the Trojans in the 2013–14 season, partnering with Kirby Burnham as the top-flight pair throughout her freshman year. The duo won the AVCA Pairs Championship and recorded 42 wins and 4 losses by the end of the season. She was teamed with Claes as the Trojans' top-flight pair for the next three seasons. As sophomores, Hughes and Claes won the AVCA Pairs title and led the Trojans to their first AVCA National Championship, completing the season with a 44–3 win–loss record. In their junior year, they won the inaugural Pac-12 Pairs Championship and were named the Pac-12 Pair of the Year. Women's beach volleyball was also promoted to an NCAA Championship sport that year, and Hughes and Claes helped the Trojans win the first-ever NCAA Beach Volleyball Championship, defeating the Florida State Seminoles' top pair in straight sets in the finals. They ended the season with an undefeated 48–0 record and were selected to the NCAA All-Tournament Team. Hughes and Claes capped off a dominant year by winning the 2016 World University Championships without dropping a set the entire tournament. As seniors, the duo repeated as Pac-12 Pairs Champions and were once again named Pair of the Year. They led the Trojans to their second consecutive NCAA title, coming back from a first-set loss in the finals to beat the top-flight duo from Pepperdine. Hughes and Claes completed their senior year with a 55–1 win–loss record, amassing an overall record of 147 wins and 4 losses in their three seasons together.
Between their sophomore and senior years, Hughes and Claes had a win streak of 103 collegiate matches, losing just seven sets during this run. Their streak began in April 2015 and was broken two years later in a three-set loss to a team from the Saint Mary's Gaels. Hughes was named an AVCA Division I Collegiate Beach All-American in all her four years of college. She graduated with a Bachelor's degree in business administration in 2017, and earned a Master's degree in Entrepreneurship and Innovation the following year.
Amateur career
While still in high school and college, Hughes competed as an amateur on the domestic and international professional tours. Her first professional tournament result was a 17th place at the 2011 Manhattan Beach Open. In October 2012, she debuted in her first FIVB World Tour event at the $190K Bangsaen Thailand Open, where she and teammate Kaitlin Nielsen lost in the first round of the country quota qualifier. Hughes partnered with Lane Carico to win her first international event the following year at the $8K NORCECA tournament in Boquerón, Cabo Rojo. She made her Association of Volleyball Professionals (AVP) debut playing with Geena Urango at the $75K Milwaukee Open in 2014, but did not progress past the qualifying rounds. After partnering with Kelly Claes, her results improved over the next two years, highlighted by three more NORCECA titles and two AVP semifinal appearances.
Their breakthrough came in June 2016, when Hughes and Claes narrowly lost to Olympians April Ross and Kerri Walsh Jennings with a score of 21–17, 18–21, 15–17 in the third round of the $75K AVP San Francisco Open. Despite the loss, they eventually made it to the finals of the double-elimination tournament where they were defeated once again by A. Ross and Walsh Jennings. Hughes and Claes were given a wild card entry into the main draw of the $400K Klagenfurt Major a month later, where they upset the top-seeded German team of Kira Walkenhorst and Laura Ludwig in the group stage, eventually finishing 17th; Walkenhorst and Ludwig would go on to win gold at the 2016 Summer Olympics a few weeks later.
Professional career
2017: Partnering with Claes
Hughes turned professional upon graduating from college in the summer of 2017, and turned down a partnership with three-time Olympic gold medalist Walsh Jennings, choosing instead to continue playing with her collegiate partner Claes. In their first professional season, Hughes and Claes got their highest finish in international competition at the $115K Long Beach Presidents Cup exhibition event in July, beating Germany's Walkenhorst and Ludwig in the bronze-medal match. Two weeks later, they were knocked out of the World Championships by eventual champions Walkenhorst and Ludwig for a ninth-place finish. On the AVP, the 12th-seeded pair won their first title at the $112.5K Chicago Championships in September, beating Brooke Sweat and S. Ross in straight sets in the finals. With this win, Hughes and Claes, aged 22 and 21 at the time, became the youngest team in history to win an AVP tournament. On the World Tour, their best results were fifth-place finishes at the $150K Rio de Janeiro Open and the $300K Poreč Major. Hughes and Claes ended the year ranked No. 16 in the world.
2018–present: Partnering with S. Ross
After ninth-place finishes in their first two World Tour tournaments of 2018, Hughes ended her partnership with Claes to team up with S. Ross. According to Hughes, she made the switch because she "needed to grow a little more as a volleyball player." Hughes and S. Ross entered the AVP season as the top seeds, winning two of the four events they competed in. They won their first tournament together at the $100K AVP New York Open in June by defeating Nicole Branagh and Brandie Wilkerson in the final match in two sets. The following month, they beat A. Ross and Alix Klineman in three sets to win another AVP title at the $79K Hermosa Beach Open. The duo were runners-up to A. Ross and Klineman at the $125K Chicago Championships and the $75K Hawaii Invitational in September.
Hughes and S. Ross also reached their first podium on the World Tour by taking the bronze medal at the $150K Espinho Open in July. The pair then won their first World Tour title the next month at the $150K Moscow Open. Seeded ninth in Moscow, they upset three of the top five seeds, beating the second-seeded Brazilian team of Ágatha Bednarczuk and Eduarda Santos Lisboa in the gold-medal match. After Moscow, Hughes and S. Ross were ranked No. 9 in the world, a career-best for Hughes. They were awarded a wild card entry to the World Tour Finals in Hamburg at the end of the season, in which the eight top-ranked teams and two wild cards compete for the $400K prize pool. As the tenth seeds in the Finals, they notched victories over the top-seeded German team of Chantal Laboureur and Julia Sude and the eight-seeded Dutch team of Sanne Keizer and Madelein Meppelink. However, losses to the fifth-seeded Heather Bansley and Brandie Wilkerson of Canada and the fourth-seeded Maria Antonelli and Carolina Solberg Salgado of Brazil meant they did not progress to the quarterfinals, finishing tied for seventh place. Hughes and S. Ross concluded 2018 with a third-place finish at the $150K Yangzhou Open, defeating Canada's Sarah Pavan and Melissa Humana-Paredes in the bronze-medal match.
Accolades
Hughes is the 2017 FIVB Top Rookie and the 2018 AVP Best Defender. She and Claes were named Sportswomen of the Year at the 2017 LA Sports Awards, organized by the Los Angeles Sports Council.
Style of play
Hughes is a defender and right-handed right-side player. Originally a blocker in her youth, she moved to the backcourt when she started playing with the taller Claes. Known as a fierce competitor, Hughes has been noted for her "speed and relentless pursuit of every ball." Her USC head coach Anna Collier described her as "one of the fastest and smartest defenders," with the ability to anticipate her opponents' attacks. According to three-time Olympian Holly McPeak, Hughes possesses the competitive drive, work ethic and athleticism necessary to compete at the professional level.
Of the 87 players who competed in a Major Series main draw on the 2018 World Tour, Hughes ranked 33rd for total points scored, averaging 5.61 points per set; 25th for total kills, averaging 5.21 kills per set; and 40th for number of aces, with around four percent of her serves being aces.
Personal life
Hughes' childhood idol was Misty May-Treanor, and she grew up with a poster of the three-time Olympic gold medalist in her bedroom. May-Treanor, who often trained in Huntington Beach when Hughes was young, would occasionally let the latter help with her practice sessions. May-Treanor was later the volunteer assistant coach for the USC Trojans during Hughes' freshman year, and the two have formed a close relationship according to Hughes. A ball girl at AVP tournaments in her youth, Hughes also came to know two-time Olympic medalist April Ross. A. Ross, a fellow Costa Mesa resident, would invite Hughes to her practices when Hughes was in high school.
When Hughes and Claes were just starting to compete on the professional circuits, their biggest challenge was not being able to afford a coach. As their tournament results improved, the pair received more financial assistance from USA Volleyball and began working with Volleyball Hall of Fame inductee José Loiola. Since splitting with Claes in early 2018, Hughes and new partner S. Ross continue to be coached by Loiola. She is sponsored by Mikasa Sports, Oakley, KT Tape, and Nike.
Career statistics
FIVB finals: 1 (1–0)
AVP finals: 6 (3–3)
Performance timeline
Current through the 2018 FIVB World Tour Finals.
Note: Only main draw results are considered.
References
External links
Sara Hughes at the Association of Volleyball Professionals
Sara Hughes at the Beach Volleyball Major Series
1995 births
Living people
American women's beach volleyball players
Beach volleyball defenders
FIVB World Tour award winners
Volleyball players from Long Beach, California
Sportspeople from Orange County, California
USC Trojans women's beach volleyball players |
15606102 | https://en.wikipedia.org/wiki/PackageKit | PackageKit | PackageKit is a free and open-source suite of software applications designed to provide a consistent and high-level front end for a number of different package management systems. PackageKit was created by Richard Hughes in 2007, and first introduced into an operating system as a default application in May 2008 with the release of Fedora 9.
The suite is cross-platform, though it is primarily targeted at Linux distributions which follow the interoperability standards set out by the freedesktop.org group. It uses the software libraries provided by the D-Bus and Polkit projects to handle inter-process communication and privilege negotiation respectively.
PackageKit seeks to provide automatic updates without requiring authentication as root, support for fast user switching, warnings translated into the correct locale, common upstream GNOME and KDE tools, and a single software-management experience across multiple Linux distributions.
Although bug fixes are still released, no major features have been developed since around 2014, and the package's maintainer predicts that it will gradually be replaced by other tools as technologies such as Flatpak and Snap become more popular.
Software architecture
PackageKit runs as a system-activated daemon, named packagekitd, which abstracts out differences between the different systems. A library called libpackagekit allows other programs to interact with PackageKit.
Features include:
installing local files, ServicePack media and packages from remote sources
authorization using Polkit
the use of existing packaging tools
multi-user system awareness – it will not allow shutdown in critical parts of the transaction
a system-activated daemon which exits when not in use
Front-ends
pkcon is the official front-end of PackageKit; it operates from the command line.
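A few typical pkcon invocations are shown below; the package name is a placeholder, and the exact set of sub-commands can vary slightly between PackageKit versions.
pkcon refresh              # refresh the cached package metadata
pkcon search name gimp     # search for a package by name
pkcon install gimp         # install a package, authorizing via Polkit if required
pkcon update               # apply all available updates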
GTK-based:
gnome-packagekit is an official GNOME front-end for PackageKit. Unlike GNOME Software, gnome-packagekit can handle all packages, not just applications, and has advanced features that are missing in GNOME Software as of June 2020.
GNOME Software is a utility for installing applications and updates on Linux. It is part of the GNOME Core Applications and was introduced in GNOME 3.10.
Qt-based:
Back-ends
A number of different package management systems (known as back-ends) support different abstract methods and signals used by the front-end tools. Supported back-ends include:
Advanced Packaging Tool (APT)
Conary
libdnf & librepo, the libraries upon which DNF (the successor to yum) builds
Entropy
Opkg
pacman
PiSi
Portage
Smart Package Manager
urpmi
YUM
ZYpp
See also
AppStream
Listaller
Polkit
Red Carpet
Software Updater
References
External links
Applications using D-Bus
Free package management systems
Free software programmed in C
Free software programmed in C++
Free software programmed in Python
Linux package management-related software
Linux PMS graphical front-ends
Package management software that uses GTK
Package management software that uses Qt |
45422288 | https://en.wikipedia.org/wiki/Remote%20mobile%20virtualization | Remote mobile virtualization | Remote mobile virtualization, like its counterpart desktop virtualization, is a technology that separates operating systems and applications from the client devices that access them. However, while desktop virtualization allows users to remotely access Windows desktops and applications, remote mobile virtualization offers remote access to mobile operating systems such as Android.
Remote mobile virtualization encompasses both full operating system virtualization, referred to as virtual mobile infrastructure (VMI), and user and application virtualization, termed mobile app virtualization. Remote mobile virtualization allows a user to remotely control an Android virtual machine (VM) or application. Users can access remotely hosted applications with HTML5-enabled web browsers or thin client applications from a variety of smartphones, tablets and computers, including Apple iOS, Mac OS, Blackberry, Windows Phone, Windows desktop, and Firefox OS devices.
Virtual mobile infrastructure (VMI)
VMI refers to the method of hosting a mobile operating system on a server in a data center or the cloud. Mobile operating system environments are executed remotely and they are rendered via Mobile Optimized Display protocols through the network. Compared to virtual desktop infrastructure (VDI), VMI has to operate in low bandwidth network environments such as cellular networks with fluctuating coverage and metered access. As a result, even if a mobile phone is connected to a high speed 4G/LTE network, users may need to limit overall bandwidth usage to avoid expensive phone bills.
Most common implementations of VMI host multiple mobile OS virtual machines (VMs) on private or public cloud infrastructure and allow users to access them remotely via options such as Miracast, the ACE Protocol, or custom streaming implementations optimized for 3G/4G networks. Some implementations also allow multimedia redirection for better audio and video performance. Mobile operating systems hosted in the cloud are not limited to Android; other operating systems such as Firefox OS and Ubuntu Mobile can also be used as VM instances depending on the use case. Microservers based on existing mobile processors can also be used to host mobile VMs, as they provide full GPU access for feature-rich user interfaces. To achieve higher density, VMI implementations can use customized versions of Android that minimize memory requirements and speed up boot times.
VMI use cases
Satisfy compliance – VMI helps address data privacy regulations such as HIPAA. VMI minimizes the risks associated with mobile device theft by storing mobile data securely in data centers or the cloud, rather than on end user devices. In addition, with VMI, organizations can control and monitor access to data and can optionally generate an audit trail of user activity.
Prevent data loss caused by physical device theft – With the advent of bring your own device (BYOD) initiatives, more and more users are accessing business applications and data from their mobile devices. Because VMI hosts mobile applications in the cloud, if a mobile device is lost or stolen, no business data will be compromised.
Accelerate app development and broaden coverage – VMI allows application developers to write applications once and use them on all HTML5-compatible mobile devices. Most VMI vendors offer VMI clients for Android, iOS, and Windows Phone as well as clientless, HTML5 browser-based access. This minimizes software development costs and addresses mobile fragmentation.
Streamline IT operations – With VMI, IT administrators do not need to install, manage and upgrade individual applications on end user devices. Instead, if a new application patch is released, IT can upgrade the mobile application once on a cloud or data center.
Mobile app virtualization
Mobile app virtualization technology separates mobile applications from their underlying operating system using secure containers, and is analogous to RDSH and Citrix XenApp on desktops. Compared to VMI, mobile app virtualization virtualizes only the individual application and the user session rather than the full mobile operating system. Mobile app virtualization can offer higher density than VMI because one instance of the remote OS can serve multiple users; however, the user separation is less secure than VMI and there is less context of a full mobile device. Using secure containers, each user session is isolated from the others, and the output of the user session is rendered remotely to the end user. Mobile app virtualization also helps in scaling to large numbers of users, as well as in sharing hardware features such as GPUs and encryption engines across all user sessions, since these can be managed by the underlying operating system.
Mobile app virtualization is functionally similar to VMI in that both solutions host individual users’ mobile sessions on remote servers; however, it differs from VMI in several important ways:
Mobile app virtualization sessions run in a single shared mobile operating system while VMI provides individual mobile operating system instances for each user
Where mobile app virtualization is mainly designed to virtualize individual application sessions, VMI is designed to deliver full mobile environments
Mobile app virtualization is transparent to the end user; an end user accessing an application from a different mobile operating system (e.g. iOS) than the hosted operating system (typically Android) will not have to learn a new user interface. However, Hypori has recently bridged this gap in VMI with a seamless apps mode, in which the host OS is hidden from the user.
By using one, shared operating system instead of separate operating system instances, mobile app virtualization consumes less resources than VMI.
Due to having a single mechanism for user separation (typically SEAndroid policies and containers) as opposed to multiple layers of separation, mobile app virtualization was judged to be less secure than VMI by security expert organizations such as the U.S. DoD.
Analysts at TechTarget have written comparisons of desktop RDSH (analogous to mobile app virtualization) and VDI (analogous to VMI), and many of the same observations hold true in comparisons of the mobile equivalents.
Mobile app virtualization use cases
Shared VMI use cases, including compliance, accelerated app development, and streamlined IT operations – Like VMI, mobile app virtualization addresses compliance, security, and operations requirements.
Live streaming of mobile applications – One end user can control applications, while multiple users can view live or recorded sessions of mobile applications. Live streaming can be used for video game walk-throughs and demos or instructional videos for mobile applications.
Visibility into encrypted traffic that uses certificate pinning – An increasing number of mobile applications use certificate pinning to identify server certificates and prevent Man in the Middle attacks. However, certificate pinning also prevents organizations from inspecting internal network traffic for attacks and data exfiltration. With mobile app virtualization, organizations can analyze all traffic, including traffic from mobile apps that use certificate pinning.
Mobile gaming as a service – Mobile app virtualization allows players with low-end entry-level phones to play graphically intensive multiplayer video games. Both VMI and mobile app virtualization can store user information in secure encrypted containers.
Mobile gaming as a service
Gaming as a service provides on-demand streaming of video games onto mobile devices, game consoles, and computers. Games run on a gaming company's servers and are streamed to end users' mobile devices. Traditionally, gaming as a service uses Windows-based VDI or Virtual Network Computing (VNC) technologies and PC-based GPUs. With mobile gaming as a service, gaming providers can host Android-based video games on microservers and stream these games over low-bandwidth cellular networks to mobile devices.
With mobile gaming as a service, users can test out or play games without downloading and installing them on their devices. This is especially advantageous for mobile devices with limited disk space, RAM and computing power. Because the game is executed remotely, even mobile devices with older generation GPUs can play mobile games with advanced 3D graphics. Mobile gaming as a service also provides a vehicle for Android application developers to reach a wider audience, including Windows Phone, Apple iOS, and Firefox OS device owners.
Mobile gaming as a service can deliver free, advertising-supported games or subscription-based gaming services.
References
Centralized computing
Remote desktop
Thin clients |
12156456 | https://en.wikipedia.org/wiki/Fatty%20Briody | Fatty Briody | Charles F. "Fatty" Briody (August 13, 1858 – June 22, 1903), nicknamed "Alderman", was a professional baseball player whose career spanned from 1877 to 1888. He played eight seasons in Major League Baseball— for the Troy Trojans (1880), Cleveland Blues (1882–1884), Cincinnati Outlaw Reds (1884), St. Louis Maroons (1885), Kansas City Cowboys (NL) (1886), Detroit Wolverines (1887) and Kansas City Cowboys (AA) (1888).
Early years
Briody was born in Lansingburgh, New York, four miles outside of Troy, New York. He spent most of his life in Lansingburgh, though he lived in Wisconsin for nine years as a child.
Professional baseball career
Minor leagues
Briody began his professional baseball career at age 18 playing for the Troy Haymakers of the League Alliance. By 1879, he was playing for New Bedford in the National Association.
On June 16, 1880, Briody received a one-game tryout in the major leagues with the Troy Trojans of the National League. Appearing as the catcher in a 9-5 loss against Cleveland, Briody went hitless in four at bats for a .000 batting average and committed three errors in ten chances for a .700 fielding percentage.
During the 1881 season, Briody played in the Eastern Championship Association for the Washington Nationals and New York Metropolitans.
Cleveland Blues
Briody played at the catcher position for the Cleveland Blues of the National League from 1882 to 1884. He made his major league debut on June 16, 1882, and appeared in 53 games as the Blues' catcher during the remainder of the 1882 season. He compiled a .258 batting average with 13 doubles and 13 RBIs. He also compiled a .902 fielding percentage with 251 putouts and 89 assists.
In 1883, the Blues acquired catcher Doc Bushong, and Briody became a backup to Bushong. Briody appeared in 33 games as a catcher that year and also made appearances at first, second, and third base. His batting average declined to .234, and his fielding percentage at catcher was .900 with 171 putouts and 46 assists.
At the start of the 1884 season, Briody resumed his role as Bushong's backup. He appeared in 42 games as catcher and improved his fielding percentage to .922 with 243 putouts and 74 assists. However, his batting average declined markedly to .169.
Cincinnati Outlaw Reds
In the middle of the 1884 season, Briody jumped leagues, joining the Cincinnati Outlaw Reds of the Union Association. In 22 games for the Outlaw Reds, Briody's batting average nearly doubled: he had compiled a .169 average with Cleveland and hit .337 in 89 at bats for Cincinnati.
St. Louis Maroons
After his short stint in the Union Association, Briody returned to the National League in 1885, playing for the St. Louis Maroons. He was the Maroons' catcher in 60 games and compiled an .893 fielding percentage with 243 putouts and 83 assists. However, on returning to the National League, Briody's batting average dropped to .195.
Kansas City Cowboys
In February 1886, St. Louis returned Briody to league control, and he was claimed by the Kansas City Cowboys the following month. Briody played in 54 games as a catcher for the Cowboys, compiling a .919 fielding percentage with 258 putouts and 95 assists. His batting average increased to .237.
Detroit Wolverines
In March 1887, after the Cowboys folded, Briody was again returned to league control, and he was claimed by the Detroit Wolverines. The Wolverines had narrowly missed winning the 1886 National League pennant and were loaded with talent, including future Hall of Famers Dan Brouthers, Sam Thompson, and Ned Hanlon. Briody played in 33 games as the team's catcher, serving as the back-up to Charlie Ganzel and Charlie Bennett. Briody was suspended mid-season for drunkenness. The Wolverines won the National League pennant in 1887 and went on to defeat the St. Louis Browns in the 1887 World Series. Briody compiled a .227 batting average for Detroit.
Return to Kansas City
In 1888, Briody played his final season in the major leagues with the Kansas City Cowboys of the American Association. He appeared in only 13 games for the Cowboys in 1888, compiling an .896 fielding percentage and a .208 batting average.
Career statistics
In his eight-year major league career, Briody appeared in 323 games and compiled a .228 batting average with 52 doubles, seven triples, three home runs and 115 RBIs. Defensively, he was the catcher in 311 games and compiled a .910 fielding percentage with 1,560 putouts, 506 assists, 204 errors and 31 double plays.
Later years
After his playing career was over, Briody returned to Lansingburgh, New York, where he served as the committeeman for the Seventeenth Ward for many years. He also conducted a trucking business, doing work for various companies. He died in 1903 at age 44 of dilation of the heart.
References
External links
1858 births
1903 deaths
Major League Baseball catchers
Baseball players from New York (state)
19th-century baseball players
Troy Trojans players
Cleveland Blues (NL) players
Cincinnati Outlaw Reds players
St. Louis Maroons players
Kansas City Cowboys (NL) players
Detroit Wolverines players
Kansas City Cowboys players
Troy Haymakers (minor league) players
Washington Nationals (minor league) players
New Bedford (minor league baseball) players
Albany (minor league baseball) players
New York Metropolitans (minor league) players
People from Lansingburgh, New York |
29175658 | https://en.wikipedia.org/wiki/List%20of%20Russian%20IT%20developers | List of Russian IT developers | This list of Russian IT developers includes the hardware engineers, computer scientists and programmers from the Russian Empire, the Soviet Union and the Russian Federation.
See also the categories Russian computer scientists and Russian computer programmers.
Alphabetical list
A
Georgy Adelson-Velsky, inventor of AVL tree algorithm, developer of Kaissa (the first World Computer Chess Champion)
Andrey Andreev, creator of Badoo, one of the world's largest dating sites, and the 10th largest social network in the world
Vladimir Arlazarov, DBS Ines, developer of Kaissa (the first World Computer Chess Champion)
B
Boris Babayan, developer of the Elbrus-series supercomputers, founder of Moscow Center of SPARC Technologies (MCST)
Sergey Brin, inventor of the Google web search engine
Alexander Brudno, described the alpha-beta (α-β) search algorithm
Nikolay Brusentsov, inventor of ternary computer (Setun)
C
Andrei Chernov, one of the founders of the Russian Internet and the creator of the KOI8-R character encoding
Alexey Chervonenkis, developed the Vapnik–Chervonenkis theory, also known as the "fundamental theory of learning", a key part of the computational learning theory
D
Mikhail Donskoy, a leading developer of Kaissa, the first computer chess champion
Pavel Durov, founded the VKontakte.ru social network, #35 on Alexa's Top 500 Most Visited Global Websites, the 6th largest social network in the world, and Telegram
E
Andrey Ershov, developed Rapira programming language, started the predecessor to the Russian National Corpus
G
Vadim Gerasimov, one of the original co-developers of the famous video game Tetris
Victor Glushkov, a founder of cybernetics, inventor of the first personal computer, MIR
I
K
Yevgeny Kaspersky, developer of Kaspersky anti-virus products
Anatoly Karatsuba, developed the Karatsuba algorithm (the first fast multiplication algorithm)
Leonid Khachiyan, developed the Ellipsoid algorithm for linear programming
Lev Korolyov, co-developed the first Soviet computers
Semen Korsakov, the first to use punched cards for information storage and search
Alexander Kronrod, developer of Gauss–Kronrod quadrature formula and Kaissa, the first world computer chess champion
Dmitry Kryukov, creator of the first Russian search engine, Rambler
L
Evgeny Landis, inventor of AVL tree algorithm
Sergey Lebedev, developer of the first Soviet and European electronic computers, MESM and BESM
Vladimir Levenshtein, developed the Levenshtein automaton, Levenshtein coding and Levenshtein distance
Leonid Levin, IT scientist, developed the Cook-Levin theorem (the foundation for computational complexity)
Oleg Lupanov, coined the term "Shannon effect"; developed the (k, s)-Lupanov representation of Boolean functions
M
Yuri Matiyasevich, solved Hilbert's tenth problem
Alexander Mikhailov, coined the term "informatics"
Anatoly Morozov, worked on automated control systems, problem-focused complexes, modelling, and situational management
N
Anton Nossik, godfather of the Russian internet who began Russian online news
O
Willgodt Theophil Odhner, inventor of the Odhner Arithmometer, the most popular mechanical calculator in the 20th century
P
Alexey Pajitnov, inventor of Tetris
Victor Pan, worked in the area of polynomial computations
Igor Pavlov, creator of the file archiver 7-Zip; creator of the 7z archive format
Svyatoslav Pestov, developer of jEdit text editor and Factor programming language
Vladimir Pokhilko, specialized in human-computer interaction
Yuriy Polyakov, developed an approximate method for nonlinear differential and integrodifferential equations
R
Bashir Rameyev, developer of Strela computer, the first mainframe computer manufactured serially in the Soviet Union
Alexander Razborov, won the Nevanlinna Prize for introducing the "approximation method" in proving Boolean circuit lower bounds of some essential algorithmic problems, and the Gödel Prize for the paper "Natural Proofs"
Eugene Roshal, developer of the FAR file manager, RAR file format, WinRAR file archiver
S
Ilya Segalovich, founder and one of the first programmers of Yandex, Russian search engine
Anatoly Shalyto, initiator of the Foundation for Open Project Documentation; developed Automata-based programming
Dmitry Sklyarov, computer programmer known for his 2001 arrest by American law enforcement; US v. ElcomSoft Sklyarov
Alexander Stepanov, created and implemented the C++ Standard Template Library
Igor Sysoev, creator of nginx, the popular high performance web server, and founder of NGINX, Inc.
T
Andrey Terekhov (Терехов, Андрей Николаевич), developer of Algol 68 LGU; telecommunication systems
Andrey Ternovskiy, creator of Chatroulette
Valentin Turchin, inventor of Refal programming language, introduced metasystem transition and supercompilation
V
Vladimir Vapnik, developed the theory of the support vector machine; demonstrated its performance on a number of problems of interest to the machine learning community, including handwriting recognition
Y
Sergey Yablonsky, founder of the Soviet school of mathematical cybernetics and discrete mathematics
Kateryna Yushchenko, credited as an inventor of pointers and author of one of the world's first high-level programming languages, the Address programming language (1955), which featured indirect addressing and addresses of the highest rank (pointers are analogous to indirect addressing); founder of the Soviet school of theoretical programming. The "stroke operation" (a dereference operator for pointers) was implemented in the Kyiv computer at her suggestion and under her guidance.
See also
List of computer scientists
List of pioneers in computer science
List of programmers
Information technology
List of Russian inventors
It Developers
Lists of computer scientists
10916453 | https://en.wikipedia.org/wiki/Christoph%20Meinel | Christoph Meinel | Christoph Meinel (Univ.-Prof., Dr. sc. nat., Dr. rer. nat., * April 14, 1954 in Meißen, Germany) is a German computer scientist and professor of Internet technologies and systems at the Hasso Plattner Institute for Digital Engineering (HPI) of the University of Potsdam. He is the scientific director and CEO of the HPI and has developed the openHPI learning platform with more than 1 million enrolled learners. In 2019, he was appointed to the New Internet IPv6 Hall of Fame.
Professional Life
Christoph Meinel studied mathematics and computer science at the Humboldt-University of Berlin from 1974 to 1979, received his doctorate (Dr. rer. nat.) there in 1981 on questions of complexity theory, and habilitated (Dr. sc. nat.) in 1988 with the paper Modified branching programs and their computational power. After German reunification, he held visiting positions at the universities of Saarbrücken and Paderborn.
From 1992 to 2004 he was Professor of Theoretical Concepts and New Applications in Computer Science at the University of Trier. From 1996 to 2002 he was founding director of the Institute for Telematics e. V. in Trier, which, under the supervision of the Fraunhofer-Gesellschaft, dealt with issues in the field of Internet and Web technologies. From 1995 to 2007 he was a member of the scientific board of directors of the International Meeting and Research Center for Computer Science IBFI Schloss Dagstuhl. He was Visiting Professor and Senior Research Fellow at the University of Luxembourg (2002-2010), is Honorary Professor at the School of Computer Sciences at Beijing University of Technology (People's Republic of China), and Visiting Professor at the Universities of Shanghai and Dalian.
Since 2004, Meinel has been Institute Director and CEO of the Hasso Plattner Institute for Digital Engineering gGmbH (HPI) in Potsdam and holds the Chair of Internet Technologies and Systems. From 2017 to 2021, he was founding dean of the Faculty of Digital Engineering at the University of Potsdam. Meinel is a member of acatech, the German National Academy of Science and Engineering, and serves on the Board of Governors of the Technion in Haifa, among many other academic bodies. He is a teacher at the HPI School of Design Thinking and on the MOOC platform openHPI.
Research Focus
Christoph Meinel's work initially focused on theoretical computer science, in particular complexity theory and binary decision diagrams. Later, he worked on Internet technologies, IT security, and digital education, including new forms of teaching and learning on the Internet such as tele-teaching and e-learning with tele-TASK and openHPI. He then conducted research on information and Internet security issues, such as the high-security network protection Lock-Keeper, protection against unwanted and offensive content, and security in service-based architectures with applications in telemedicine. He is also scientifically engaged in knowledge management on the Internet and in new forms of Internet-based teaching and learning such as tele-TASK. From 2008 to 2022, together with Larry Leifer, he was program director of the HPI-Stanford Design Thinking Research Program (HPDTRP).
The first European MOOC platform, openHPI, which he initiated and which was developed under his leadership, offers free interactive online courses on topics related to digital education and technologies. More than 1 million learners from around the world are enrolled (as of February 2022) and can also earn qualified certificates for successful completion of online courses. For these certificates, the identity of course participants is verified via webcam, which gives weight to online learning opportunities and certifications. The openHPI platform is now also used by other partners, such as openWHO, openSAP, LERNEN.cloud, KI-Campus and eGov-Campus.
Christoph Meinel is also involved in digital education in schools and chairs nationwide working groups that develop concepts and secure digital working environments for learning in the future. The "Bildungscloud" and in particular the HPI Schul-Cloud enable low-threshold and data-protection-compliant access to digital educational offerings in German schools. The HPI Schul-Cloud, which was funded by the German Federal Ministry of Education and Research (BMBF) and developed under Meinel's leadership from 2016 to 2021, is now operated by Dataport AöR and is in use in many thousands of schools in all German states, especially Thuringia, Brandenburg and Lower Saxony. Students and teachers can use it to access digital teaching resources in the classroom, communicate digitally, securely store and exchange texts, presentations and data, and access educational software from the Internet pseudonymously.
Meinel holds several international patents (for example for Lock-Keeper) and has developed, among others, the tele-teaching system tele-TASK, the online collaboration tool Tele-Board and the widely used ID-Leak-Checker service.
Publishing and Editorial Work
Christoph Meinel is the author, co-author, or editor of 25 textbooks, among them "Blogosphere and its Exploration", "Internetworking", "Digitale Kommunikation", "WWW – Kommunikation, Internetworking, Web-Technologien", "Design Thinking – Innovation lernen, Ideenwelten öffnen", "Mathematische Grundlagen der Informatik" and "Algorithmen und Datenstrukturen im VLSI-Design. OBDDs – Grundlagen und Anwendungen", as well as of various conference proceedings.
Christoph Meinel has published more than 700 peer-reviewed scientific papers in international scientific journals and conference proceedings. He holds various international patents (e.g. for Lock-Keeper, tele-TASK, and Tele-Board).
Meinel is editor of the book series Understanding Innovation, published by Springer-Verlag, of the scientific online journals ECCC and ECDTR, and of the Internet-Bridge Germany-China. In 1994, he initiated and co-founded the online journal on complexity theory ECCC (Electronic Colloquium on Computational Complexity) and served as its editor-in-chief until 2016.
MOOCs and Teleteaching
Many of Christoph Meinel's lectures and lecture series are recorded and freely accessible on the Internet via the tele-TASK portal. In 2012 his team designed the first European MOOC platform, openHPI, which provides many free online courses in the field of IT technologies. He also offers various massive open online courses, for example on blockchains, Internet security for beginners, and Internet and Web technologies, with about 10,000 enrolled learners each. As chairman of various cloud-learning initiatives, Christoph Meinel leads innovations in the digital transformation of German schools and the education sector. With projects such as the "Bildungscloud" and the "Schul-Cloud", he promotes the use of digital media in schools and other educational institutions.
References
External links
Personal Homepage
Hasso-Plattner-Institut, Potsdam, Germany
List of Publications on the DBLP-Server
MOOCs platform of HPI
tele-TASK portal - archive of more than 5,000 lecture videos
German IPv6 Council
ECDTR - The Electronic Colloquium on Design Thinking Research
Living people
University of Potsdam faculty
20th-century German inventors
German computer scientists
1954 births
German male writers |
30339321 | https://en.wikipedia.org/wiki/GMER | GMER | GMER is a software tool written by Polish researcher Przemysław Gmerek for detecting and removing rootkits. It runs on Microsoft Windows and has support for Windows NT, 2000, XP, Vista, 7, 8 and 10. Version 2.0.18327 added full support for Windows x64.
At the time of its first release in 2004, it introduced innovative rootkit detection techniques and quickly gained popularity for its effectiveness. It was incorporated into a few antivirus tools, including Avast! antivirus and SDFix.
For several months in 2006 and 2007, the tool's website was the target of heavy DDoS attacks attempting to block its downloads.
References
External links
Spyware removal
Windows security software
Antivirus software
Rootkit detection software |
5548053 | https://en.wikipedia.org/wiki/Coding%20best%20practices | Coding best practices | Coding best practices are a set of informal rules that the software development community employs to help improve software quality.
Many computer programs remain in use for long periods of time, so any rules need to facilitate both initial development and subsequent maintenance and enhancement by people other than the original authors.
In the ninety-ninety rule, Tom Cargill is credited with an explanation of why programming projects often run late: "The first 90% of the code accounts for the first 90% of the development time. The remaining 10% of the code accounts for the other 90% of the development time." Any guidance which can redress this lack of foresight is worth considering.
The size of a project or program has a significant effect on error rates, programmer productivity, and the amount of management needed.
Software quality
As listed below, there are many attributes associated with good software. Some of these can be mutually contradictory (e.g. being very fast versus performing extensive error checking), and different customers and participants may have different priorities. Weinberg provides an example of how different goals can have a dramatic effect on both effort required and efficiency. Furthermore, he notes that programmers will generally aim to achieve any explicit goals which may be set, probably at the expense of any other quality attributes.
Sommerville has identified four generalized attributes which are not concerned with what a program does, but how well the program does it:
Maintainability
Dependability
Efficiency
Usability
Weinberg has identified four targets which a good program should meet:
Does a program meet its specification ("correct output for each possible input")?
Is the program produced on schedule (and within budget)?
How adaptable is the program to cope with changing requirements?
Is the program efficient enough for the environment in which it is used?
Hoare has identified seventeen objectives related to software quality, including:
Clear definition of purpose.
Simplicity of use.
Ruggedness (difficult to misuse, kind to errors).
Early availability (delivered on time when needed).
Reliability.
Extensibility in the light of experience.
Brevity.
Efficiency (fast enough for the purpose to which it is put).
Minimum cost to develop.
Conformity to any relevant standards.
Clear, accurate and precise user documents.
Prerequisites
Before coding starts, it is important to ensure that all necessary prerequisites have been completed (or have at least progressed far enough to provide a solid foundation for coding). If the various prerequisites are not satisfied then the software is likely to be unsatisfactory, even if it is completed.
From Meek & Heath: "What happens before one gets to the coding stage is often of crucial importance to the success of the project."
The prerequisites outlined below cover such matters as:
how is development structured? (life cycle)
what is the software meant to do? (requirements)
the overall structure of the software system (architecture)
more detailed design of individual components (design)
choice of programming language(s)
For small simple projects involving only one person, it may be feasible to combine architecture with design and adopt a very simple life cycle.
Life cycle
A software development methodology is a framework that is used to structure, plan, and control the life cycle of a software product. Common methodologies include waterfall, prototyping, iterative and incremental development, spiral development, agile software development, rapid application development, and extreme programming.
The waterfall model is a sequential development approach; in particular, it assumes that the requirements can be completely defined at the start of a project. However, McConnell quotes three studies which indicate that, on average, requirements change by around 25% during a project. The other methodologies mentioned above all attempt to reduce the impact of such requirement changes, often by some form of step-wise, incremental, or iterative approach. Different methodologies may be appropriate for different development environments.
Requirements
McConnell states: "The first prerequisite you need to fulfill before beginning construction is a clear statement of the problem the system is supposed to solve."
Meek and Heath emphasise that a clear, complete, precise, and unambiguous written specification is the target to aim for. Note that it may not be possible to achieve this target, and the target is likely to change anyway (as mentioned in the previous section).
Sommerville distinguishes between less detailed user requirements and more detailed system requirements. He also distinguishes between functional requirements (e.g. update a record) and non-functional requirements (e.g. response time must be less than 1 second).
Architecture
Hoare points out: "there are two ways of constructing a software design: one way is to make it so simple that there are obviously no deficiencies; the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult."
Software architecture is concerned with deciding what has to be done, and which program component is going to do it (how something is done is left to the detailed design phase, below). This is particularly important when a software system contains more than one program since it effectively defines the interface between these various programs. It should include some consideration of any user interfaces as well, without going into excessive detail.
Any non-functional system requirements (response time, reliability, maintainability, etc.) need to be considered at this stage.
The software architecture is also of interest to various stakeholders (sponsors, end-users, etc.) since it gives them a chance to check that their requirements can be met.
Design
The main purpose of design is to fill in the details which have been glossed over in the architectural design. The intention is that the design should be detailed enough to provide a good guide for actual coding, including details of any particular algorithms to be used. For example, at the architectural level, it may have been noted that some data has to be sorted, while at the design level it is necessary to decide which sorting algorithm is to be used. As a further example, if an object-oriented approach is being used, then the details of the objects must be determined (attributes and methods).
Choice of programming language(s)
Mayer states: "No programming language is perfect. There is not even a single best language; there are only languages well suited or perhaps poorly suited for particular purposes. Understanding the problem and associated programming requirements is necessary for choosing the language best suited for the solution."
From Meek & Heath: "The essence of the art of choosing a language is to start with the problem, decide what its requirements are, and their relative importance since it will probably be impossible to satisfy them all equally well. The available languages should then be measured against the list of requirements, and the most suitable (or least unsatisfactory) chosen."
It is possible that different programming languages may be appropriate for different aspects of the problem. If the languages or their compilers permit, it may be feasible to mix routines written in different languages within the same program.
Even if there is no choice as to which programming language is to be used, McConnell provides some advice: "Every programming language has strengths and weaknesses. Be aware of the specific strengths and weaknesses of the language you're using."
Coding standards
This section is also really a prerequisite to coding, as McConnell points out: "Establish programming conventions before you begin programming. It's nearly impossible to change code to match them later."
As listed near the end of Coding conventions, there are different conventions for different programming languages, so it may be counterproductive to apply the same conventions across different languages. There is no single coding convention for any programming language; every organization has a custom coding standard for each type of software project. It is therefore imperative that the programmer choose or draw up a particular set of coding guidelines before the software project commences. Some coding conventions are generic and may not apply to every software project written in a particular programming language.
The use of coding conventions is particularly important when a project involves more than one programmer (there have been projects with thousands of programmers). It is much easier for a programmer to read code written by someone else if all code follows the same conventions.
For some examples of bad coding conventions, Roedy Green provides a lengthy (tongue-in-cheek) article on how to produce unmaintainable code.
Commenting
Due to time restrictions or enthusiastic programmers who want immediate results for their code, commenting of code often takes a back seat. Programmers working as a team have found it better to leave comments behind since coding usually follows cycles, or more than one person may work on a particular module. However, some commenting can decrease the cost of knowledge transfer between developers working on the same module.
In the early days of computing, one commenting practice was to leave a brief description of the following:
Name of the module
Purpose of the Module
Description of the Module
Original Author
Modifications
Authors who modified code with a description on why it was modified.
The "description of the module" should be as brief as possible but without sacrificing clarity and comprehensiveness.
However, the last two items have largely been obsoleted by the advent of revision control systems. Modifications and their authorship can be reliably tracked by using such tools rather than by using comments.
Also, if complicated logic is being used, it is a good practice to leave a comment "block" near that part so that another programmer can understand what exactly is happening.
Unit testing can be another way to show how code is intended to be used.
Naming conventions
Use of proper naming conventions is considered good practice. Sometimes programmers tend to use X1, Y1, etc. as variables and forget to replace them with meaningful ones, causing confusion.
It is usually considered good practice to use descriptive names.
Example: A variable for taking in weight as a parameter for a truck can be named TrkWeight or TruckWeightKilograms, with TruckWeightKilograms being the preferable one, since it is instantly recognisable. See CamelCase naming of variables.
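The difference can be sketched in a short C fragment; the values and names below are invented for the example.

/* Unclear: the reader must guess what x1 and y1 represent. */
double x1 = 12500.0;
double y1 = x1 * 0.8;

/* Clearer: each name states the quantity and its unit. */
double truckWeightKilograms  = 12500.0;
double truckPayloadKilograms = truckWeightKilograms * 0.8;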
Keep the code simple
The code that a programmer writes should be simple. Complicated logic for achieving a simple thing should be kept to a minimum since the code might be modified by another programmer in the future. The logic one programmer implemented may not make perfect sense to another. So, always keep the code as simple as possible.
For example, consider these equivalent lines of C code:
if (hours < 24 && minutes < 60 && seconds < 60)
{
return true;
}
else
{
return false;
}
and
if (hours < 24 && minutes < 60 && seconds < 60)
return true;
else
return false;
and
switch (hours < 24 && minutes < 60 && seconds < 60){
case true:
return true;
break;
case false:
return false;
break;
default:
return false;
}
and
return hours < 24 && minutes < 60 && seconds < 60;
The 1st approach, which is much more commonly used, is considerably larger than the 4th. In particular, it consumes 5 times more screen vertical space (lines), and 97 characters versus 52 (though editing tools may reduce the difference in actual typing). It is arguable, however, which is "simpler". The first has an explicit if/then else, with an explicit return value obviously connected with each; even a novice programmer should have no difficulty understanding it. The 2nd merely discards the braces, cutting the "vertical" size in half with little change in conceptual complexity. In most languages the "return" statements could also be appended to the prior lines, bringing the "vertical" size to only one more line than the 4th form.
The fourth form obviously minimizes the size, but may increase the complexity: It leaves the "true" and "false" values implicit, and intermixes the notions of "condition" and "return value". It is likely obvious to most programmers, but a novice might not immediately understand that the result of evaluating a condition is actually a value (of type Boolean, or its equivalent in whatever language), and thus can be manipulated or returned. In more realistic examples, the 4th form could have problems due to operator precedence, perhaps returning an unexpected type, where the prior forms would in some languages report an error. Thus, "simplicity" is not merely a matter of length, but of logical and conceptual structure; making code shorter may make it less or more complex.
For large, long-lived programs, using verbose alternatives could contribute to bloat.
Compactness can allow coders to view more code per page, reducing scrolling gestures and keystrokes. Given how many times code might be viewed in the process of writing and maintaining it, this can amount to a significant saving in programmer keystrokes over the life of the code. This might not seem significant to a student first learning to program, but when producing and maintaining large programs, the reduction in the number of lines of code allows more of the code to fit on screen; minor code simplification may improve productivity and also lessen finger, wrist and eye strain, which are common medical issues suffered by production coders and information workers.
Terser coding speeds compilation very slightly, as fewer symbols need to be processed. Furthermore, the 3rd approach may allow similar lines of code to be more easily compared, particularly when many such constructs can appear on one screen at the same time.
Finally, very terse layouts may better utilize modern wide-screen computer displays, depending on monitor layout and setup. In the past, screens were limited to 40 or 80 characters (such limits originated far earlier: manuscripts, printed books, and even scrolls have for millennia used quite short lines; see for example the Gutenberg Bible). Modern screens can easily display 200 or more characters, allowing extremely long lines, but most modern coding styles and standards do not take up that entire width. Thus, if using one window as wide as the screen, a great deal of available space is wasted. On the other hand, with multiple windows, or when using an IDE or other tool with various information in side panes, the available width for code is in the range familiar from earlier systems.
It is also worth noting that the human visual system is greatly affected by line length; very long lines slightly increase reading speed but reduce comprehension and add to eye-tracking errors. Some studies suggest that longer lines fare better online than in print, but this still only goes up to about 10 inches, and mainly for raw speed of reading prose.
Portability
Program code should not contain "hard-coded" (literal) values referring to environmental parameters, such as absolute file paths, file names, user names, host names, IP addresses, URLs, or UDP/TCP ports. Otherwise the application will not run on a host with a different configuration than anticipated. A careful programmer can parametrize such variables and configure them for the hosting environment outside of the application proper (for example in property files, on an application server, or even in a database). Compare the mantra of a "single point of definition" (SPOD).
As an extension, resources such as XML files should also contain variables rather than literal values, otherwise the application will not be portable to another environment without editing the XML files. For example, with J2EE applications running in an application server, such environmental parameters can be defined in the scope of the JVM and the application should get the values from there.
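A minimal C sketch of this idea reads environment-specific values from the process environment instead of hard-coding them; the environment variable names and default values are assumptions made for the example, not part of any particular system.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Look up environment-specific values instead of hard-coding literals. */
    const char *host = getenv("APP_DB_HOST");   /* hypothetical variable name */
    const char *port = getenv("APP_DB_PORT");   /* hypothetical variable name */

    /* Fall back to documented defaults so the program still runs locally. */
    if (host == NULL) host = "localhost";
    if (port == NULL) port = "5432";

    printf("Connecting to %s:%s\n", host, port);
    return 0;
}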
Scalability
Design code with scalability as a design goal, because software projects very often grow larger as new features are added. The facility to add new features to a software code base therefore becomes invaluable when writing software.
Reusability
Re-use is a very important design goal in software development. Re-use cuts development costs and also reduces development time if the components or modules that are reused have already been tested. Very often, software projects start with an existing baseline containing the project in its prior version and, depending on the project, many existing software modules and components are reused, which reduces development and testing time and therefore increases the probability of delivering a software project on schedule.
Construction guidelines in brief
A general overview of all of the above:
Know what the code block must perform
Maintain naming conventions which are uniform throughout.
Indicate a brief description of what a variable is for (reference to commenting)
Correct errors as they occur.
Keep your code simple
Design code with scalability and reuse in mind.
Code development
Code building
A best practice for building code involves daily builds and testing, or better still continuous integration, or even continuous delivery.
Testing
Testing is an integral part of software development that needs to be planned. It is also important that testing is done proactively; meaning that test cases are planned before coding starts, and test cases are developed while the application is being designed and coded.
Debugging the code and correcting errors
Programmers tend to write the complete code and then begin debugging and checking for errors. Though this approach can save time in smaller projects, bigger and more complex ones tend to have too many variables and functions that need attention. Therefore, it is good to debug each module as soon as it is done, rather than the entire program at once. This saves time in the long run, so that one does not end up wasting a lot of time figuring out what is wrong. Unit tests for individual modules, and/or functional tests for web services and web applications, can help with this.
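A small sketch of such module-level testing in C, using only the standard assert macro, might look like the following; the function under test reuses the time-validation example from earlier in this article and is otherwise hypothetical.

#include <assert.h>
#include <stdbool.h>

/* Module under test: validates a time of day (hypothetical function). */
static bool is_valid_time(int hours, int minutes, int seconds)
{
    return hours >= 0 && hours < 24 &&
           minutes >= 0 && minutes < 60 &&
           seconds >= 0 && seconds < 60;
}

/* A few unit tests, run as soon as the module is written. */
int main(void)
{
    assert(is_valid_time(0, 0, 0));
    assert(is_valid_time(23, 59, 59));
    assert(!is_valid_time(24, 0, 0));
    assert(!is_valid_time(12, 60, 0));
    assert(!is_valid_time(12, 0, -1));
    return 0;   /* all assertions passed */
}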
Deployment
Deployment is the final stage of releasing an application for users. Some best practices are:
Keep the installation structure simple: Files and directories should be kept to a minimum. Don’t install anything that’s never going to be used.
Keep only what is needed: The software configuration management activities must make sure this is enforced. Unused resources (old or failed versions of files, source code, interfaces, etc.) must be archived somewhere else to keep newer builds lean.
Keep everything updated: The software configuration management activities must make sure this is enforced. For delta-based deployments, make sure the versions of the resources that are already deployed are the latest before deploying the deltas. If not sure, perform a deployment from scratch (delete everything first and then re-deploy).
Adopt a multi-stage strategy: Depending on the size of the project, sometimes more deployments are needed.
Have a roll back strategy: There must be a way to roll-back to a previous (working) version.
Rely on automation for repeatable processes: There's far too much room for human error, deployments should not be manual. Use a tool that is native to each operating system or, use a scripting language for cross-platform deployments.
Re-create the real deployment environment: Consider everything (routers, firewalls, web servers, web browsers, file systems, etc.)
Do not change deployment procedures and scripts on-the-fly and, document such changes: Wait for a new iteration and record such changes appropriately.
Customize deployment: Newer software products such as APIs, micro-services, etc. require specific considerations for successful deployment.
Reduce risk from other development phases: If other activities such as testing and configuration management are wrong, deployment surely will fail.
Consider the influence each stakeholder has: Organizational, social, governmental considerations.
See also
Best practice
List of tools for static code analysis
Motor Industry Software Reliability Association (MISRA)
Software Assurance
Software quality
List of software development philosophies
The Cathedral and the Bazaar - book comparing top-down vs. bottom-up open-source software
201 Principles of Software Development (Alan M. Davis)
Where's the Theory for Software Engineering?
Don't Make Me Think (Principles of intuitive navigation and information design)
References
General
Enhancing the Development Life Cycle to Produce Secure Software, V2.0, Oct. 2008, describes the security principles and practices that software developers, testers, and integrators can adopt to achieve the twin objectives of producing more secure software-intensive systems and verifying the security of the software they produce.
External links
Paul Burden, co-author of the MISRA C coding standard and PRQA's representative on the MISRA C working group for more than 10 years, discusses a common coding standard fallacy: "We don't need a coding standard! We just need to catch bugs!"
Software development process
Computer programming |
21391751 | https://en.wikipedia.org/wiki/Turing%20test | Turing test | The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test results do not depend on the machine's ability to give correct answers to questions, only how closely its answers resemble those a human would give.
The test was introduced by Turing in his 1950 paper "Computing Machinery and Intelligence" while working at the University of Manchester. It opens with the words: "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words." Turing describes the new form of the problem in terms of a three-person game called the "imitation game", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?" This question, Turing believed, is one that can actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that "machines can think".
Since Turing first introduced his test, it has proven to be both highly influential and widely criticised, and it has become an important concept in the philosophy of artificial intelligence. Some of these criticisms, such as John Searle's Chinese room, are themselves controversial.
History
Philosophical background
The question of whether it is possible for machines to think has a long history, which is firmly entrenched in the distinction between dualist and materialist views of the mind. René Descartes prefigures aspects of the Turing test in his 1637 Discourse on the Method when he writes:
Here Descartes notes that automata are capable of responding to human interactions but argues that such automata cannot respond appropriately to things said in their presence in the way that any human can. Descartes therefore prefigures the Turing test by defining the insufficiency of appropriate linguistic response as that which separates the human from the automaton. Descartes fails to consider the possibility that future automata might be able to overcome such insufficiency, and so does not propose the Turing test as such, even if he prefigures its conceptual framework and criterion.
Denis Diderot formulates in his Pensées philosophiques a Turing-test criterion, though with the important implicit limiting assumption maintained, of the participants being natural living beings, rather than considering created artifacts:
"If they find a parrot who could answer to everything, I would claim it to be an intelligent being without hesitation."
This does not mean he agrees with this, but that it was already a common argument of materialists at that time.
According to dualism, the mind is non-physical (or, at the very least, has non-physical properties) and, therefore, cannot be explained in purely physical terms. According to materialism, the mind can be explained physically, which leaves open the possibility of minds that are produced artificially.
In 1936, philosopher Alfred Ayer considered the standard philosophical question of other minds: how do we know that other people have the same conscious experiences that we do? In his book, Language, Truth and Logic, Ayer suggested a protocol to distinguish between a conscious man and an unconscious machine: "The only ground I can have for asserting that an object which appears to be conscious is not really a conscious being, but only a dummy or a machine, is that it fails to satisfy one of the empirical tests by which the presence or absence of consciousness is determined." (This suggestion is very similar to the Turing test, but is concerned with consciousness rather than intelligence. Moreover, it is not certain that Ayer's popular philosophical classic was familiar to Turing.) In other words, a thing is not conscious if it fails the consciousness test.
Alan Turing
Researchers in the United Kingdom had been exploring "machine intelligence" for up to ten years prior to the founding of the field of artificial intelligence (AI) research in 1956. It was a common topic among the members of the Ratio Club, an informal group of British cybernetics and electronics researchers that included Alan Turing.
Turing, in particular, had been tackling the notion of machine intelligence since at least 1941 and one of the earliest-known mentions of "computer intelligence" was made by him in 1947. In Turing's report, "Intelligent Machinery", he investigated "the question of whether or not it is possible for machinery to show intelligent behaviour" and, as part of that investigation, proposed what may be considered the forerunner to his later tests:
It is not difficult to devise a paper machine which will play a not very bad game of chess. Now get three men as subjects for the experiment. A, B and C. A and C are to be rather poor chess players, B is the operator who works the paper machine. ... Two rooms are used with some arrangement for communicating moves, and a game is played between C and either A or the paper machine. C may find it quite difficult to tell which he is playing.
"Computing Machinery and Intelligence" (1950) was the first published paper by Turing to focus exclusively on machine intelligence. Turing begins the 1950 paper with the claim, "I propose to consider the question 'Can machines think? As he highlights, the traditional approach to such a question is to start with definitions, defining both the terms "machine" and "intelligence". Turing chooses not to do so; instead he replaces the question with a new one, "which is closely related to it and is expressed in relatively unambiguous words." In essence he proposes to change the question from "Can machines think?" to "Can machines do what we (as thinking entities) can do?" The advantage of the new question, Turing argues, is that it draws "a fairly sharp line between the physical and intellectual capacities of a man."
To demonstrate this approach Turing proposes a test inspired by a party game, known as the "imitation game", in which a man and a woman go into separate rooms and guests try to tell them apart by writing a series of questions and reading the typewritten answers sent back. In this game, both the man and the woman aim to convince the guests that they are the other. (Huma Shah argues that this two-human version of the game was presented by Turing only to introduce the reader to the machine-human question-answer test.) Turing described his new version of the game as follows:
We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"
Later in the paper, Turing suggests an "equivalent" alternative formulation involving a judge conversing only with a computer and a man. While neither of these formulations precisely matches the version of the Turing test that is more generally known today, he proposed a third in 1952. In this version, which Turing discussed in a BBC radio broadcast, a jury asks questions of a computer and the role of the computer is to make a significant proportion of the jury believe that it is really a man.
Turing's paper considered nine putative objections, which include all the major arguments against artificial intelligence that have been raised in the years since the paper was published (see "Computing Machinery and Intelligence").
ELIZA and PARRY
In 1966, Joseph Weizenbaum created a program which appeared to pass the Turing test. The program, known as ELIZA, worked by examining a user's typed comments for keywords. If a keyword is found, a rule that transforms the user's comments is applied, and the resulting sentence is returned. If a keyword is not found, ELIZA responds either with a generic riposte or by repeating one of the earlier comments. In addition, Weizenbaum developed ELIZA to replicate the behaviour of a Rogerian psychotherapist, allowing ELIZA to be "free to assume the pose of knowing almost nothing of the real world." With these techniques, Weizenbaum's program was able to fool some people into believing that they were talking to a real person, with some subjects being "very hard to convince that ELIZA [...] is not human." Thus, ELIZA is claimed by some to be one of the programs (perhaps the first) able to pass the Turing test, even though this view is highly contentious (see below).
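ELIZA's actual scripts were far richer, but the keyword-and-template idea described above can be sketched in a few lines of C; the keywords and canned responses below are invented for illustration and are not taken from Weizenbaum's program.

#include <stdio.h>
#include <string.h>

/* A tiny keyword-to-response table in the spirit of ELIZA's rules. */
static const char *keywords[]  = { "mother", "always", "computer" };
static const char *responses[] = {
    "Tell me more about your family.",
    "Can you think of a specific example?",
    "Do computers worry you?"
};
static const char *fallback = "Please go on.";

int main(void)
{
    char line[256];
    printf("> ");
    while (fgets(line, sizeof line, stdin) != NULL) {
        const char *reply = fallback;
        for (size_t i = 0; i < sizeof keywords / sizeof keywords[0]; i++) {
            if (strstr(line, keywords[i]) != NULL) {  /* keyword found */
                reply = responses[i];
                break;
            }
        }
        printf("%s\n> ", reply);
    }
    return 0;
}

A real ELIZA script also decomposed the user's sentence and reassembled parts of it into the reply, rather than returning a fixed phrase, which is why the program could echo the user's own words back as questions.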
Kenneth Colby created PARRY in 1972, a program described as "ELIZA with attitude". It attempted to model the behaviour of a paranoid schizophrenic, using a similar (if more advanced) approach to that employed by Weizenbaum. To validate the work, PARRY was tested in the early 1970s using a variation of the Turing test. A group of experienced psychiatrists analysed a combination of real patients and computers running PARRY through teleprinters. Another group of 33 psychiatrists were shown transcripts of the conversations. The two groups were then asked to identify which of the "patients" were human and which were computer programs. The psychiatrists were able to make the correct identification only 52 percent of the time – a figure consistent with random guessing.
In the 21st century, versions of these programs (now known as "chatbots") continue to fool people. "CyberLover", a malware program, preys on Internet users by convincing them to "reveal information about their identities or to lead them to visit a web site that will deliver malicious content to their computers". The program has emerged as a "Valentine-risk" flirting with people "seeking relationships online in order to collect their personal data".
The Chinese room
John Searle's 1980 paper Minds, Brains, and Programs proposed the "Chinese room" thought experiment and argued that the Turing test could not be used to determine if a machine can think. Searle noted that software (such as ELIZA) could pass the Turing test simply by manipulating symbols of which they had no understanding. Without understanding, they could not be described as "thinking" in the same sense people are. Therefore, Searle concludes, the Turing test cannot prove that a machine can think. Much like the Turing test itself, Searle's argument has been both widely criticised and highly endorsed.
Arguments such as Searle's and others working on the philosophy of mind sparked off a more intense debate about the nature of intelligence, the possibility of intelligent machines and the value of the Turing test that continued through the 1980s and 1990s.
Loebner Prize
The Loebner Prize provides an annual platform for practical Turing tests with the first competition held in November 1991. It is underwritten by Hugh Loebner. The Cambridge Center for Behavioral Studies in Massachusetts, United States, organised the prizes up to and including the 2003 contest. As Loebner described it, one reason the competition was created is to advance the state of AI research, at least in part, because no one had taken steps to implement the Turing test despite 40 years of discussing it.
The first Loebner Prize competition in 1991 led to a renewed discussion of the viability of the Turing test and the value of pursuing it, in both the popular press and academia. The first contest was won by a mindless program with no identifiable intelligence that managed to fool naïve interrogators into making the wrong identification. This highlighted several of the shortcomings of the Turing test (discussed below): The winner won, at least in part, because it was able to "imitate human typing errors"; the unsophisticated interrogators were easily fooled; and some researchers in AI have been led to feel that the test is merely a distraction from more fruitful research.
The silver (text only) and gold (audio and visual) prizes have never been won. However, the competition has awarded the bronze medal every year for the computer system that, in the judges' opinions, demonstrates the "most human" conversational behaviour among that year's entries. Artificial Linguistic Internet Computer Entity (A.L.I.C.E.) has won the bronze award on three occasions in recent times (2000, 2001, 2004). Learning AI Jabberwacky won in 2005 and 2006.
The Loebner Prize tests conversational intelligence; winners are typically chatterbot programs, or Artificial Conversational Entities (ACE)s. Early Loebner Prize rules restricted conversations: Each entry and hidden-human conversed on a single topic, thus the interrogators were restricted to one line of questioning per entity interaction. The restricted conversation rule was lifted for the 1995 Loebner Prize. Interaction duration between judge and entity has varied in Loebner Prizes. In Loebner 2003, at the University of Surrey, each interrogator was allowed five minutes to interact with an entity, machine or hidden-human. Between 2004 and 2007, the interaction time allowed in Loebner Prizes was more than twenty minutes.
Versions
Saul Traiger argues that there are at least three primary versions of the Turing test, two of which are offered in "Computing Machinery and Intelligence" and one that he describes as the "Standard Interpretation". While there is some debate regarding whether the "Standard Interpretation" is that described by Turing or, instead, based on a misreading of his paper, these three versions are not regarded as equivalent, and their strengths and weaknesses are distinct.
Huma Shah points out that Turing himself was concerned with whether a machine could think and was providing a simple method to examine this: through human-machine question-answer sessions. Shah argues that there is one imitation game which Turing described, and that it could be put into practice in two different ways: a) a one-to-one interrogator-machine test, and b) a simultaneous comparison of a machine with a human, both questioned in parallel by an interrogator. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalises naturally to all of human performance capacity, verbal as well as nonverbal (robotic).
Imitation game
Turing's original article describes a simple party game involving three players. Player A is a man, player B is a woman and player C (who plays the role of the interrogator) is of either sex. In the imitation game, player C is unable to see either player A or player B, and can communicate with them only through written notes. By asking questions of player A and player B, player C tries to determine which of the two is the man and which is the woman. Player A's role is to trick the interrogator into making the wrong decision, while player B attempts to assist the interrogator in making the right one.
Turing then asks:
"What will happen when a machine takes the part of A in this game? Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?" These questions replace our original, "Can machines think?"
The second version appeared later in Turing's 1950 paper. Similar to the original imitation game test, the role of player A is performed by a computer. However, the role of player B is performed by a man rather than a woman.
Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?
In this version, both player A (the computer) and player B are trying to trick the interrogator into making an incorrect decision.
Standard root interpretation
The standard interpretation is not included in the original paper, but is both accepted and debated.
Common understanding has it that the purpose of the Turing test is not specifically to determine whether a computer is able to fool an interrogator into believing that it is a human, but rather whether a computer could imitate a human. While there is some dispute whether this interpretation was intended by Turing, Sterrett believes that it was (and thus conflates the second version with this one), while others, such as Traiger, do not; this has nevertheless led to what can be viewed as the "standard interpretation". In this version, player A is a computer and player B a person of either sex. The role of the interrogator is not to determine which is male and which is female, but which is a computer and which is a human. The fundamental question in the standard interpretation is whether the interrogator can tell which responder is human and which is machine. Questions remain about how long the interrogation should last, but the standard interpretation generally treats the duration as whatever length is reasonable.
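The standard interpretation can be summarised as a small protocol: hide a human and a machine behind anonymous text channels, let the judge question both, and record whether the judge's final identification is correct. The Python sketch below is illustrative only; the `judge` object (with hypothetical `ask` and `identify_machine` methods) and the two respondent callables are assumptions for the sketch, not part of any published formulation.

```python
import random

def standard_turing_test(judge, hidden_human, hidden_machine, n_questions=10):
    """One round of the 'standard interpretation': a judge exchanges text with two
    unseen respondents (labelled X and Y) and must say which one is the machine.
    judge, hidden_human and hidden_machine are hypothetical stand-ins."""
    labels = ["X", "Y"]
    random.shuffle(labels)                          # hide which label is the machine
    respondents = {labels[0]: hidden_human, labels[1]: hidden_machine}

    transcript = []
    for _ in range(n_questions):
        label, question = judge.ask(transcript)     # judge picks a respondent and a question
        answer = respondents[label](question)       # the hidden respondent replies in text only
        transcript.append((label, question, answer))

    guess = judge.identify_machine(transcript)      # the label the judge believes is the machine
    return respondents[guess] is hidden_machine     # True means a correct identification
```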
Imitation game vs. standard Turing test
Controversy has arisen over which of the alternative formulations of the test Turing intended. Sterrett argues that two distinct tests can be extracted from his 1950 paper and that, pace Turing's remark, they are not equivalent. The test that employs the party game and compares frequencies of success is referred to as the "Original Imitation Game Test", whereas the test consisting of a human judge conversing with a human and a machine is referred to as the "Standard Turing Test", noting that Sterrett equates this with the "standard interpretation" rather than the second version of the imitation game. Sterrett agrees that the standard Turing test (STT) has the problems that its critics cite but feels that, in contrast, the original imitation game test (OIG test) so defined is immune to many of them, due to a crucial difference: Unlike the STT, it does not make similarity to human performance the criterion, even though it employs human performance in setting a criterion for machine intelligence. A man can fail the OIG test, but it is argued that it is a virtue of a test of intelligence that failure indicates a lack of resourcefulness: The OIG test requires the resourcefulness associated with intelligence and not merely "simulation of human conversational behaviour". The general structure of the OIG test could even be used with non-verbal versions of imitation games.
Still other writers have interpreted Turing as proposing that the imitation game itself is the test, without specifying how to take into account Turing's statement that the test he proposed using the party version of the imitation game is based on a criterion of comparative frequency of success in that game, rather than a capacity to succeed at one round of it.
Saygin has suggested that the original game may be a way of proposing a less biased experimental design, as it hides the participation of the computer. The imitation game also includes a "social hack" not found in the standard interpretation, as in that game both the computer and the male human are required to pretend to be someone they are not.
Should the interrogator know about the computer?
A crucial piece of any laboratory test is that there should be a control. Turing never makes clear whether the interrogator in his tests is aware that one of the participants is a computer. He states only that player A is to be replaced with a machine, not that player C is to be made aware of this replacement. When Colby, FD Hilf, S Weber and AD Kramer tested PARRY, they did so by assuming that the interrogators did not need to know that one or more of those being interviewed was a computer during the interrogation. As Ayse Saygin, Peter Swirski, and others have highlighted, this makes a big difference to the implementation and outcome of the test. In an experimental study of Gricean maxim violations, using transcripts of the one-to-one (interrogator-hidden interlocutor) Loebner Prize contests between 1994 and 1999, Saygin found significant differences between the responses of participants who knew and did not know about computers being involved.
Strengths
Tractability and simplicity
The power and appeal of the Turing test derives from its simplicity. The philosophy of mind, psychology, and modern neuroscience have been unable to provide definitions of "intelligence" and "thinking" that are sufficiently precise and general to be applied to machines. Without such definitions, the central questions of the philosophy of artificial intelligence cannot be answered. The Turing test, even if imperfect, at least provides something that can actually be measured. As such, it is a pragmatic attempt to answer a difficult philosophical question.
Breadth of subject matter
The format of the test allows the interrogator to give the machine a wide variety of intellectual tasks. Turing wrote that "the question and answer method seems to be suitable for introducing almost any one of the fields of human endeavour that we wish to include." John Haugeland adds that "understanding the words is not enough; you have to understand the topic as well."
To pass a well-designed Turing test, the machine must use natural language, reason, have knowledge and learn. The test can be extended to include video input, as well as a "hatch" through which objects can be passed: this would force the machine to demonstrate skilled use of well-designed vision and robotics as well. Together, these represent almost all of the major problems that artificial intelligence research would like to solve.
The Feigenbaum test is designed to take advantage of the broad range of topics available to a Turing test. It is a limited form of Turing's question-answer game which compares the machine against the abilities of experts in specific fields such as literature or chemistry. IBM's Watson machine achieved success in a man-versus-machine match on the television quiz show of human knowledge, Jeopardy!
Emphasis on emotional and aesthetic intelligence
As a Cambridge honours graduate in mathematics, Turing might have been expected to propose a test of computer intelligence requiring expert knowledge in some highly technical field, and thus anticipating a more recent approach to the subject. Instead, as already noted, the test which he described in his seminal 1950 paper requires the computer to be able to compete successfully in a common party game, and this by performing as well as the typical man in answering a series of questions so as to pretend convincingly to be the woman contestant.
Given the status of human sexual dimorphism as one of the most ancient of subjects, it is thus implicit in the above scenario that the questions to be answered will involve neither specialised factual knowledge nor information processing technique. The challenge for the computer, rather, will be to demonstrate empathy for the role of the female, and to demonstrate as well a characteristic aesthetic sensibility—both of which qualities are on display in this snippet of dialogue which Turing has imagined:
Interrogator: Will X please tell me the length of his or her hair?
Contestant: My hair is shingled, and the longest strands are about nine inches long.
When Turing does introduce some specialised knowledge into one of his imagined dialogues, the subject is not maths or electronics, but poetry:
Interrogator: In the first line of your sonnet which reads, "Shall I compare thee to a summer's day," would not "a spring day" do as well or better?
Witness: It wouldn't scan.
Interrogator: How about "a winter's day." That would scan all right.
Witness: Yes, but nobody wants to be compared to a winter's day.
Turing thus once again demonstrates his interest in empathy and aesthetic sensitivity as components of an artificial intelligence. In light of an increasing awareness of the threat from an AI run amok, it has been suggested that this focus perhaps represents a critical intuition on Turing's part, namely that emotional and aesthetic intelligence will play a key role in the creation of a "friendly AI". It is further noted, however, that whatever inspiration Turing might lend in this direction depends upon the preservation of his original vision; in other words, the promulgation of a "standard interpretation" of the Turing test, one which focuses on discursive intelligence only, must be regarded with some caution.
Weaknesses
Turing did not explicitly state that the Turing test could be used as a measure of "intelligence", or any other human quality. He wanted to provide a clear and understandable alternative to the word "think", which he could then use to reply to criticisms of the possibility of "thinking machines" and to suggest ways that research might move forward. Numerous experts in the field, including cognitive scientist Gary Marcus, insist that the Turing test only shows how easy it is to fool humans and not an indication of machine intelligence.
Nevertheless, the Turing test has been proposed as a measure of a machine's "ability to think" or its "intelligence". This proposal has received criticism from both philosophers and computer scientists. It assumes that an interrogator can determine if a machine is "thinking" by comparing its behaviour with human behaviour. Every element of this assumption has been questioned: the reliability of the interrogator's judgement, the value of comparing only behaviour and the value of comparing the machine with a human. Because of these and other considerations, some AI researchers have questioned the relevance of the test to their field.
Human intelligence vs. intelligence in general
The Turing test does not directly test whether the computer behaves intelligently. It tests only whether the computer behaves like a human being. Since human behaviour and intelligent behaviour are not exactly the same thing, the test can fail to accurately measure intelligence in two ways:
Some human behaviour is unintelligent: The Turing test requires that the machine be able to execute all human behaviours, regardless of whether they are intelligent. It even tests for behaviours that may not be considered intelligent at all, such as susceptibility to insults, the temptation to lie or, simply, a high frequency of typing mistakes. If a machine cannot imitate these unintelligent behaviours in detail, it fails the test.
This objection was raised by The Economist, in an article entitled "artificial stupidity" published shortly after the first Loebner Prize competition in 1992. The article noted that the first Loebner winner's victory was due, at least in part, to its ability to "imitate human typing errors." Turing himself had suggested that programs add errors into their output, so as to be better "players" of the game.
Some intelligent behaviour is inhuman: The Turing test does not test for highly intelligent behaviours, such as the ability to solve difficult problems or come up with original insights. In fact, it specifically requires deception on the part of the machine: if the machine is more intelligent than a human being, it must deliberately avoid appearing too intelligent. If it were to solve a computational problem that is practically impossible for a human to solve, then the interrogator would know the program is not human, and the machine would fail the test.
Because it cannot measure intelligence that is beyond the ability of humans, the test cannot be used to build or evaluate systems that are more intelligent than humans. Because of this, several test alternatives that would be able to evaluate super-intelligent systems have been proposed.
Consciousness vs. the simulation of consciousness
The Turing test is concerned strictly with how the subject acts – the external behaviour of the machine. In this regard, it takes a behaviourist or functionalist approach to the study of the mind. The example of ELIZA suggests that a machine passing the test may be able to simulate human conversational behaviour by following a simple (but large) list of mechanical rules, without thinking or having a mind at all.
John Searle has argued that external behaviour cannot be used to determine if a machine is "actually" thinking or merely "simulating thinking." His Chinese room argument is intended to show that, even if the Turing test is a good operational definition of intelligence, it may not indicate that the machine has a mind, consciousness, or intentionality. (Intentionality is a philosophical term for the power of thoughts to be "about" something.)
Turing anticipated this line of criticism in his original paper, writing that the mysteries of consciousness did not necessarily need to be solved before the question of whether machines can think could be answered.
Naïveté of interrogators
In practice, the test's results can easily be dominated not by the computer's intelligence, but by the attitudes, skill, or naïveté of the questioner.
Turing does not specify the precise skills and knowledge required by the interrogator in his description of the test, but he did use the term "average interrogator": "[the] average interrogator would not have more than 70 per cent chance of making the right identification after five minutes of questioning".
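Read as a pass criterion, the figure is straightforward to operationalise: a machine meets Turing's prediction if judges make the right identification in fewer than 70 per cent of five-minute sessions. A minimal sketch, assuming `verdicts` simply records whether each judge identified the machine correctly:

```python
def meets_turing_prediction(verdicts, threshold=0.70):
    """verdicts: one boolean per five-minute session, True when the judge made
    the right identification. Turing predicted the average interrogator would
    be right less than 70 per cent of the time after five minutes."""
    correct_rate = sum(verdicts) / len(verdicts)
    return correct_rate < threshold

# Example: judges were right in 13 of 20 sessions (65%), so the criterion is met.
print(meets_turing_prediction([True] * 13 + [False] * 7))   # True
```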
Chatterbot programs such as ELIZA have repeatedly fooled unsuspecting people into believing that they are communicating with human beings. In these cases, the "interrogators" are not even aware of the possibility that they are interacting with computers. To successfully appear human, there is no need for the machine to have any intelligence whatsoever and only a superficial resemblance to human behaviour is required.
Early Loebner Prize competitions used "unsophisticated" interrogators who were easily fooled by the machines. Since 2004, the Loebner Prize organisers have deployed philosophers, computer scientists, and journalists among the interrogators. Nonetheless, some of these experts have been deceived by the machines.
One interesting feature of the Turing test is the frequency of the confederate effect, when the confederate (tested) humans are misidentified by the interrogators as machines. It has been suggested that what interrogators expect as human responses is not necessarily typical of humans. As a result, some individuals can be categorised as machines. This can therefore work in favour of a competing machine. The humans are instructed to "act themselves", but sometimes their answers are more like what the interrogator expects a machine to say. This raises the question of how to ensure that the humans are motivated to "act human".
Silence
A critical aspect of the Turing test is that a machine must give itself away as being a machine by its utterances. An interrogator must then make the "right identification" by correctly identifying the machine as being just that. If however a machine remains silent during a conversation, then it is not possible for an interrogator to accurately identify the machine other than by means of a calculated guess.
Even taking into account a parallel/hidden human as part of the test may not help the situation as humans can often be misidentified as being a machine.
Impracticality and irrelevance: the Turing test and AI research
Mainstream AI researchers argue that trying to pass the Turing test is merely a distraction from more fruitful research. Indeed, the Turing test is not an active focus of much academic or commercial effort—as Stuart Russell and Peter Norvig write: "AI researchers have devoted little attention to passing the Turing test." There are several reasons.
First, there are easier ways to test their programs. Most current research in AI-related fields is aimed at modest and specific goals, such as object recognition or logistics. To test the intelligence of the programs that solve these problems, AI researchers simply give them the task directly. Stuart Russell and Peter Norvig suggest an analogy with the history of flight: planes are tested by how well they fly, not by comparing them to birds. "Aeronautical engineering texts," they write, "do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons.'"
Second, creating lifelike simulations of human beings is a difficult problem on its own that does not need to be solved to achieve the basic goals of AI research. Believable human characters may be interesting in a work of art, a game, or a sophisticated user interface, but they are not part of the science of creating intelligent machines, that is, machines that solve problems using intelligence.
Turing did not intend for his idea to be used to test the intelligence of programs — he wanted to provide a clear and understandable example to aid in the discussion of the philosophy of artificial intelligence. John McCarthy argues that we should not be surprised that a philosophical idea turns out to be useless for practical applications. He observes that the philosophy of AI is "unlikely to have any more effect on the practice of AI research than philosophy of science generally has on the practice of science."
Variations
Numerous other versions of the Turing test, including those expounded above, have been proposed over the years.
Reverse Turing test and CAPTCHA
A modification of the Turing test wherein the objective of one or more of the roles has been reversed between machines and humans is termed a reverse Turing test. An example is implied in the work of psychoanalyst Wilfred Bion, who was particularly fascinated by the "storm" that resulted from the encounter of one mind by another. In his 2000 book, among several other original points with regard to the Turing test, literary scholar Peter Swirski discussed in detail the idea of what he termed the Swirski test, essentially the reverse Turing test. He pointed out that it overcomes most if not all standard objections levelled at the standard version.
Carrying this idea forward, R. D. Hinshelwood described the mind as a "mind recognizing apparatus". The challenge would be for the computer to be able to determine if it were interacting with a human or another computer. This is an extension of the original question that Turing attempted to answer but would, perhaps, offer a high enough standard to define a machine that could "think" in a way that we typically define as characteristically human.
CAPTCHA is a form of reverse Turing test. Before being allowed to perform some action on a website, the user is presented with alphanumerical characters in a distorted graphic image and asked to type them out. This is intended to prevent automated systems from being used to abuse the site. The rationale is that software sufficiently sophisticated to read and reproduce the distorted image accurately does not exist (or is not available to the average user), so any system able to do so is likely to be a human.
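Stripped of the graphics, a CAPTCHA is a challenge-response check: generate a random string, show it only in a form that is hard for software to read, and accept the visitor if the transcription matches. A minimal sketch of that bookkeeping follows; the image-distortion step, which carries all of the real difficulty, is indicated only by a hypothetical placeholder comment.

```python
import secrets
import string

def new_captcha(length=6):
    """Create a random challenge string; a real service would render it as a
    distorted image, e.g. render_distorted_image(challenge) (hypothetical)."""
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def check_captcha(expected, typed):
    """The visitor passes this reverse Turing test if the transcription matches."""
    return typed.strip().upper() == expected

challenge = new_captcha()
print(check_captcha(challenge, challenge.lower()))   # True: the string was read correctly
```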
Software that could solve CAPTCHAs with some accuracy, by analysing patterns in the generating engine, started being developed soon after the creation of CAPTCHA.
In 2013, researchers at Vicarious announced that they had developed a system to solve CAPTCHA challenges from Google, Yahoo!, and PayPal up to 90% of the time.
In 2014, Google engineers demonstrated a system that could defeat CAPTCHA challenges with 99.8% accuracy.
In 2015, Shuman Ghosemajumder, former click fraud czar of Google, stated that there were cybercriminal sites that would defeat CAPTCHA challenges for a fee, to enable various forms of fraud.
Subject matter expert Turing test
Another variation is described as the subject-matter expert Turing test, where a machine's response cannot be distinguished from an expert in a given field. This is also known as a "Feigenbaum test" and was proposed by Edward Feigenbaum in a 2003 paper.
"Low-level" cognition test
Robert French (1990) makes the case that an interrogator can distinguish human and non-human interlocutors by posing questions that reveal the low-level (i.e., unconscious) processes of human cognition, as studied by cognitive science. Such questions reveal the precise details of the human embodiment of thought and can unmask a computer unless it experiences the world as humans do.
Total Turing test
The "Total Turing test" variation of the Turing test, proposed by cognitive scientist Stevan Harnad, adds two further requirements to the traditional Turing test. The interrogator can also test the perceptual abilities of the subject (requiring computer vision) and the subject's ability to manipulate objects (requiring robotics).
Electronic health records
A letter published in Communications of the ACM describes the concept of generating a synthetic patient population and proposes a variation of Turing test to assess the difference between synthetic and real patients. The letter states: "In the EHR context, though a human physician can readily distinguish between synthetically generated and real live human patients, could a machine be given the intelligence to make such a determination on its own?" and further the letter states: "Before synthetic patient identities become a public health problem, the legitimate EHR market might benefit from applying Turing Test-like techniques to ensure greater data reliability and diagnostic value. Any new techniques must thus consider patients' heterogeneity and are likely to have greater complexity than the Allen eighth-grade-science-test is able to grade."
Minimum intelligent signal test
The minimum intelligent signal test was proposed by Chris McKinstry as "the maximum abstraction of the Turing test", in which only binary responses (true/false or yes/no) are permitted, to focus only on the capacity for thought. It eliminates text chat problems like anthropomorphism bias, and does not require emulation of unintelligent human behaviour, allowing for systems that exceed human intelligence. The questions must each stand on their own, however, making it more like an IQ test than an interrogation. It is typically used to gather statistical data against which the performance of artificial intelligence programs may be measured.
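Because each response is a single true/false bit, scoring reduces to counting agreement with an answer key and comparing the result against the 50 per cent chance level. A minimal sketch, with the question items and the system under test entirely hypothetical:

```python
import random

def mist_score(answer_fn, items):
    """items: (statement, truth_value) pairs with strictly binary answers.
    Returns the fraction answered correctly; 0.5 is chance level."""
    correct = sum(1 for statement, truth in items if answer_fn(statement) == truth)
    return correct / len(items)

# Hypothetical items and a baseline that guesses at random (expected score around 0.5).
items = [("Water is wet.", True), ("The Moon is made of cheese.", False)]
print(mist_score(lambda s: random.choice([True, False]), items))
```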
Hutter Prize
The organisers of the Hutter Prize believe that compressing natural language text is a hard AI problem, equivalent to passing the Turing test.
The data compression test has some advantages over most versions and variations of a Turing test, including:
It gives a single number that can be directly used to compare which of two machines is "more intelligent."
It does not require the computer to lie to the judge
The main disadvantages of using data compression as a test are:
It is not possible to test humans this way.
It is unknown what particular "score" on this test—if any—is equivalent to passing a human-level Turing test.
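The idea can be illustrated with any off-the-shelf compressor: the smaller the compressed size of a fixed text, the better that text has been modelled. Below is a minimal sketch using Python's standard `zlib`; actual Hutter Prize entries use far stronger, purpose-built compressors over a fixed Wikipedia corpus, so this only illustrates the metric, not the contest.

```python
import zlib

def compression_ratio(text: str) -> float:
    """Compressed size divided by original size; lower means the text was
    modelled better, which the compression test treats as a proxy for intelligence."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, level=9)) / len(raw)

sample = "the cat sat on the mat " * 100    # highly repetitive, so it compresses well
print(round(compression_ratio(sample), 3))
```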
Other tests based on compression or Kolmogorov complexity
A related approach, which appeared much earlier than Hutter's prize, in the late 1990s, is the inclusion of compression problems in an extended Turing test, or tests which are derived entirely from Kolmogorov complexity.
Other related tests in this line are presented by Hernandez-Orallo and Dowe.
Algorithmic IQ, or AIQ for short, is an attempt to convert the theoretical Universal Intelligence Measure from Legg and Hutter (based on Solomonoff's inductive inference) into a working practical test of machine intelligence.
Two major advantages of some of these tests are their applicability to nonhuman intelligences and their absence of a requirement for human testers.
Ebert test
The Turing test inspired the Ebert test, proposed in 2011 by film critic Roger Ebert, which tests whether a computer-based synthesised voice has sufficient skill in terms of intonations, inflections, timing and so forth to make people laugh.
Universal Turing test inspired black-box-based machine intelligence metrics
Given the large diversity of intelligent systems, it has been argued that universal, Turing test-inspired metrics are needed that can measure machine intelligence and compare systems on the basis of their intelligence. A desirable property of such a metric is that it handles variability in intelligence. Black-box-based intelligence metrics, such as MetrIntPair and MetrIntPairII, are universal in that they do not depend on the architecture of the systems whose intelligence they measure. MetrIntPair is an accurate metric that can simultaneously measure and compare the intelligence of two systems. MetrIntPairII is an accurate and robust metric that can simultaneously measure and compare the intelligence of any number of intelligent systems. Both metrics use specific pairwise intelligence measurements and can classify the studied systems into intelligence classes.
Conferences
Turing Colloquium
1990 marked the fortieth anniversary of the first publication of Turing's "Computing Machinery and Intelligence" paper and saw renewed interest in the test. Two significant events occurred in that year: the first was the Turing Colloquium, which was held at the University of Sussex in April and brought together academics and researchers from a wide variety of disciplines to discuss the Turing test in terms of its past, present, and future; the second was the formation of the annual Loebner Prize competition.
Blay Whitby lists four major turning points in the history of the Turing test – the publication of "Computing Machinery and Intelligence" in 1950, the announcement of Joseph Weizenbaum's ELIZA in 1966, Kenneth Colby's creation of PARRY, which was first described in 1972, and the Turing Colloquium in 1990.
2005 Colloquium on Conversational Systems
In November 2005, the University of Surrey hosted an inaugural one-day meeting of artificial conversational entity developers,
attended by winners of practical Turing tests in the Loebner Prize: Robby Garner, Richard Wallace and Rollo Carpenter. Invited speakers included David Hamill, Hugh Loebner (sponsor of the Loebner Prize) and Huma Shah.
2008 AISB Symposium
In parallel to the 2008 Loebner Prize held at the University of Reading,
the Society for the Study of Artificial Intelligence and the Simulation of Behaviour (AISB), hosted a one-day symposium to discuss the Turing test, organised by John Barnden, Mark Bishop, Huma Shah and Kevin Warwick.
The speakers included the Royal Institution's Director Baroness Susan Greenfield, Selmer Bringsjord, Turing's biographer Andrew Hodges, and consciousness scientist Owen Holland. No agreement emerged for a canonical Turing test, though Bringsjord expressed that a sizeable prize would result in the Turing test being passed sooner.
The Alan Turing Year, and Turing100 in 2012
Throughout 2012, a number of major events took place to celebrate Turing's life and scientific impact. The Turing100 group supported these events and also organised a special Turing test event at Bletchley Park on 23 June 2012 to celebrate the 100th anniversary of Turing's birth.
See also
Natural language processing
Artificial intelligence in fiction
Blindsight
Causality
Computer game bot Turing Test
Explanation
Explanatory gap
Functionalism
Graphics Turing Test
Ex Machina (film)
Hard problem of consciousness
List of things named after Alan Turing
Mark V. Shaney (Usenet bot)
Mind-body problem
Mirror neuron
Philosophical zombie
Problem of other minds
Reverse engineering
Sentience
Simulated reality
Social bot
Technological singularity
Theory of mind
Uncanny valley
Voight-Kampff machine (fictitious Turing test from Blade Runner)
Winograd Schema Challenge
SHRDLU
Notes
References
Further reading
Marcus, Gary, "Am I Human?: Researchers need new ways to distinguish artificial intelligence from the natural kind", Scientific American, vol. 316, no. 3 (March 2017), pp. 58–63. Multiple tests of artificial-intelligence efficacy are needed because, "just as there is no single test of athletic prowess, there cannot be one ultimate test of intelligence." One such test, a "Construction Challenge", would test perception and physical action—"two important elements of intelligent behavior that were entirely absent from the original Turing test." Another proposal has been to give machines the same standardized tests of science and other disciplines that schoolchildren take. A so far insuperable stumbling block to artificial intelligence is an incapacity for reliable disambiguation. "[V]irtually every sentence [that people generate] is ambiguous, often in multiple ways." A prominent example is known as the "pronoun disambiguation problem": a machine has no way of determining to whom or what a pronoun in a sentence—such as "he", "she" or "it"—refers.
Warwick, Kevin and Shah, Huma (2016), "Turing's Imitation Game: Conversations with the Unknown", Cambridge University Press.
External links
The Turing Test – an Opera by Julian Wagstaff
The Turing Test – How accurate could the Turing test really be?
Turing Test: 50 Years Later reviews a half-century of work on the Turing Test, from the vantage point of 2000.
Bet between Kapor and Kurzweil, including detailed justifications of their respective positions.
Why The Turing Test is AI's Biggest Blind Alley by Blay Whitby
Jabberwacky.com An AI chatterbot that learns from and imitates humans
New York Times essays on machine intelligence part 1 and part 2
Computer Science Unplugged teaching activity for the Turing test.
Wiki News: "Talk:Computer professionals celebrate 10th birthday of A.L.I.C.E."
Alan Turing
Human–computer interaction
History of artificial intelligence
Philosophy of artificial intelligence
1950 in computing
Computer-related introductions in 1950 |
249858 | https://en.wikipedia.org/wiki/Tying%20%28commerce%29 | Tying (commerce) | Tying (informally, product tying) is the practice of selling one product or service as a mandatory addition to the purchase of a different product or service. In legal terms, a tying sale makes the sale of one good (the tying good) to the de facto customer (or de jure customer) conditional on the purchase of a second distinctive good (the tied good). Tying is often illegal when the products are not naturally related. It is related to but distinct from freebie marketing, a common (and legal) method of giving away (or selling at a substantial discount) one item to ensure a continual flow of sales of another related item.
Some kinds of tying, especially by contract, have historically been regarded as anti-competitive practices. The basic idea is that consumers are harmed by being forced to buy an undesired good (the tied good) in order to purchase a good they actually want (the tying good), and so would prefer that the goods be sold separately. The company doing this bundling may have a significantly large market share so that it may impose the tie on consumers, despite the forces of market competition. The tie may also harm other companies in the market for the tied good, or who sell only single components.
One effect of tying can be that low quality products achieve a higher market share than would otherwise be the case.
Tying may also be a form of price discrimination: people who use more razor blades, for example, pay more than those who just need a one-time shave. Though this may improve overall welfare, by giving more consumers access to the market, such price discrimination can also transfer consumer surpluses to the producer. Tying may also be used with or in place of patents or copyrights to help protect entry into a market, discouraging innovation.
Tying is often used when the supplier makes one product that is critical to many customers. By threatening to withhold that key product unless others are also purchased, the supplier can increase sales of less necessary products.
In the United States, most states have laws against tying, which are enforced by state governments. In addition, the U.S. Department of Justice enforces federal laws against tying through its Antitrust Division.
Types
Horizontal tying is the practice of requiring consumers to pay for an unrelated product or service together with the desired one. A hypothetical example would be for Bic to sell its pens only with Bic lighters. (However, a company may offer a limited free item with another purchase as a promotion.)
Vertical tying is the practice of requiring customers to purchase related products or services together, from the same company. For example, a company might mandate that its automobiles could only be serviced by its own dealers. In an effort to curb this, many jurisdictions require that warranties not be voided by outside servicing; for example, see the Magnuson-Moss Warranty Act in the United States.
In United States law
Certain tying arrangements are illegal in the United States under both the Sherman Antitrust Act, and Section 3 of the Clayton Act. A tying arrangement is defined as "an agreement by a party to sell one product but only on the condition that the buyer also purchases a different (or tied) product, or at least agrees he will not purchase the product from any other supplier." Tying may be the action of several companies as well as the work of just one firm. Success on a tying claim typically requires proof of four elements: (1) two separate products or services are involved; (2) the purchase of the tying product is conditioned on the additional purchase of the tied product; (3) the seller has sufficient market power in the market for the tying product; (4) a not insubstantial amount of interstate commerce in the tied product market is affected.
For at least three decades, the Supreme Court defined the required "economic power" to include just about any departure from perfect competition, going so far as to hold that possession of a copyright or even the existence of a tie itself gave rise to a presumption of economic power. The Supreme Court has since held that a plaintiff must establish the sort of market power necessary for other antitrust violations in order to prove sufficient "economic power" necessary to establish a per se tie. More recently, the Court has eliminated any presumption of market power based solely on the fact that the tying product is patented or copyrighted.
In recent years, changing business practices surrounding new technologies have put the legality of tying arrangements to the test. Although the Supreme Court still considers some tying arrangements as per se illegal, the Court actually uses a rule-of-reason analysis, requiring an analysis of foreclosure effects and an affirmative defense of efficiency justifications.
Apple products
The tying of Apple products is an example of commercial tying that has caused recent controversy. When Apple initially released the iPhone on June 29, 2007, it was sold exclusively with AT&T (formerly Cingular) contracts in the United States. To enforce this exclusivity, Apple employed a type of software lock that ensured the phone would not work on any network besides AT&T's. Related to the concept of bricking, any user who tried to unlock or otherwise tamper with the locking software ran the risk of rendering their iPhone permanently inoperable.
This caused complaints among many consumers, as they were forced to pay an additional early termination fee of $175 if they wanted to unlock the device safely for use on a different carrier. Other companies such as Google complained that tying encourages a more closed-access-based wireless service. Many questioned the legality of the arrangement, and in October 2007 a class-action lawsuit was filed against Apple, claiming that its exclusive agreement with AT&T violates California antitrust law. The suit was filed by the Law Office of Damian R. Fernandez on behalf of California resident Timothy P. Smith, and ultimately sought to have an injunction issued against Apple to prevent it from selling iPhones with any kind of software lock.
In July 2010, federal regulators clarified the issue when they determined it was lawful to unlock (or in other terms, "jail break") the iPhone, declaring that there was no basis for copyright law to assist Apple in protecting its restrictive business model.
Jail breaking is removing operating system or hardware restrictions imposed on an iPhone (or other device). If done successfully, this allows one to run any application on the phone they choose, including applications not authorized by Apple. Apple told regulators that modifying the iPhone operating system leads to the creation of an infringing derivative work that is protected by copyright law. This means that the license on the operating system forbids software modification. However, regulators agreed that modifying an iPhone's firmware/operating system to enable it to run an application that Apple has not approved fits comfortably within the four corners of fair use.
Microsoft products
Another prominent case involving a tying claim was United States v. Microsoft. By some accounts, Microsoft ties together Microsoft Windows, Internet Explorer, Windows Media Player, Outlook Express and Microsoft Office. The United States claimed that the bundling of Internet Explorer (IE) to sales of Windows 98, making IE difficult to remove from Windows 98 (e.g., not putting it on the "Remove Programs" list), and designing Windows 98 to work "unpleasantly" with Netscape Navigator constituted an illegal tying of Windows 98 and IE. Microsoft's counterargument was that a web browser and a mail reader are simply part of an operating system, included with other personal computer operating systems, and the integration of the products was technologically justified. Just as the definition of a car has changed to include things that used to be separate products, such as speedometers and radios, Microsoft claimed the definition of an operating system has changed to include their formerly separate products. The United States Court of Appeals for the District of Columbia Circuit rejected Microsoft's claim that Internet Explorer was simply one facet of its operating system, but the court held that the tie between Windows and Internet Explorer should be analyzed deferentially under the Rule of Reason. The U.S. government claim settled before reaching final resolution.
As to the tying of Office, parallel cases against Microsoft brought by State Attorneys General included a claim for harm in the market for office productivity applications. The Attorneys General abandoned this claim when filing an amended complaint. The claim was revived by Novell where they alleged that manufacturers of computers ("OEMs") were charged less for their Windows bulk purchases if they agreed to bundle Office with every PC sold than if they gave computer purchasers the choice whether or not to buy Office along with their machines — making their computer prices less competitive in the market. The Novell litigation has since settled.
Microsoft has also tied its software to the third-party Android mobile operating system, by requiring manufacturers that license patents it claims covers the OS and smartphones to ship Microsoft Office Mobile and Skype applications on the devices.
Anti-tying provision of the Bank Holding Company Act
In 1970, Congress enacted section 106 of the Bank Holding Company Act Amendments of 1970 (BHCA), the anti-tying provision, which is codified at 12 U.S.C. § 1972. The statute was designed to prevent banks, whether large or small, state or federal, from imposing anticompetitive conditions on their customers. Tying is an antitrust violation, but the Sherman and Clayton Acts did not adequately protect borrowers from being required to accept conditions to loans issued by banks, and section 106 was specifically designed to apply to and remedy such bank misconduct.
Banks are allowed to take measures to protect their loans and to safeguard the value of their investments, such as requiring security or guaranties from borrowers. The statute exempts so-called “traditional banking practices” from its per se illegality, and thus its purpose is not so much to limit banks' lending practices, as it is to ensure that the practices used are fair and competitive. A majority of claims brought under the BHCA are denied. Banks still have quite a bit of leeway in fashioning loan agreements, but when a bank clearly steps over the bounds of propriety, the plaintiff is compensated with treble damages.
At least four regulatory agencies including the Federal Reserve Board oversee the activities of banks, their holding companies, and other related depository institutions. While each type of depository institution has a “primary regulator”, the nation's “dual banking” system allows concurrent jurisdiction among the different regulatory agencies. With respect to the anti-tying provision, the Fed takes the preeminent role in relation to the other financial institution regulatory agencies, which reflects that it was considered the least biased (in favor of banks) of the regulatory agencies when section 106 was enacted.
In European Law
Tying is the "practice of a supplier of one product, the tying product, requiring a buyer also to buy a second product, the tied product". The tying of a product can take various forms, that of contractual tying where a contract binds the buyer to purchase both products together, refusal to supply until the buyer agrees to purchase both products, withdrawal or withholding of a guarantee where the dominant seller will not provide the benefit of guarantee until the seller accepts to purchase that party's product, technical tying occurs when the products of the dominant party are physically integrated and making impossible to buy the one without the other and bundling where two products are sold in the same package with one price. This practises are prohibited under Article 101(1)(e) and Article 102(2)(d) and may amount to an infringement of the statute if other conditions are satisfied. However, it is noteworthy that the Court is willing to find an infringement beyond those listed in Article 102(2)(d), see Tetra Pak v Commission.
Enforcement under European Law
The Guidance on Article 102 Enforcement Priorities sets out the circumstances in which it will be appropriate to take action against tying practices. First, it must be established whether the accused undertaking has a dominant position in the tying or the tied product market. The next step is to determine whether the dominant undertaking tied two distinct products. This matters because two identical products cannot be considered tied under the formulation of Article 102(2)(d), which states that products will be considered tied if they have no connection 'by their nature or commercial usage'. This raises problems for the legal definition of tying in scenarios such as selling cars with tyres or selling a car with a radio. The Commission provides guidance on this issue by citing the judgment in Microsoft, stating that "two products are distinct if, in the absence of tying or bundling, a substantial number of customers would purchase or would have purchased the tying product without also buying the tied product from the same supplier, thereby allowing stand-alone production for both the tying and the tied product". The next issue is whether the customer was coerced to purchase both the tying and the tied products, as Article 102(2)(d) suggests: 'making the conclusion of contracts subject to acceptance by the other parties of supplementary obligations'. In cases of contractual stipulation it is clear that the test will be satisfied; for an example of non-contractual tying, see Microsoft. Furthermore, for the practice to be deemed anti-competitive, the tie must be capable of having a foreclosure effect. Examples of tying practices found in the case law to have an anti-competitive foreclosure effect include IBM, Eurofix-Bauco v Hilti, Telemarketing v CLT, British Sugar and Microsoft. Finally, the defence available to the dominant undertaking is to show that the tying is objectively justified or enhances efficiency, and the Commission is willing to consider claims that tying may produce economic efficiencies in production or distribution that benefit consumers.
See also
Complementary good
Iunctim
Loss leader
The OSx86 Project or 'Hackintosh', breaking the tie Apple holds between its hardware and Mac OS X to run the operating system on non-Apple hardware.
Product bundling
Product churning
Vendor lock-in
Digital rights management
Tying of the iPhone to AT&T
References
Bibliography
Donald Turner, Tying Arrangements Under the Antitrust Laws, 72 Harv. L. Rev. 50 (1958);
George J. Stigler, A Note On Block Booking, 1963 Supreme Court Review 152;
Kenneth Dam, Fortner Enterprises v. United States Steel: Neither a Borrower Nor A Lender Be, 1969 S. Ct. Rev. 1;
Timothy D. Naegele, Are All Bank Tie-Ins Illegal?, 154 Bankers Magazine 46 (1971);
Richard A. Posner, Antitrust: An Economic Perspective, 171-84 (1976);
Joseph Bauer, A Simplified Approach to Tying Arrangements: A Legal and Economic Analysis, 33 Vanderbilt Law Review 283 (1980);
Richard Craswell, Tying Requirements in Competitive Markets: The Consumer Protection Rationale, 62 Boston University L. Rev. 661 (1982);
Roy Kenney and Benjamin Klein, The Economics of Block Booking, 26 J. Law & Economics 497 (1983);
Timothy D. Naegele, The Anti-Tying Provision: Its Potential Is Still There, 100 Banking Law Journal 138 (1983);
Victor Kramer, The Supreme Court and Tying Arrangements: Antitrust As History, 69 Minnesota L. Rev. 1013 (1985);
Benjamin Klein and Lester Saft, The Law and Economics of Franchise Tying Contracts, 28 J. Law and Economics 245 (1985);
Alan Meese, Tying Meets The New Institutional Economics: Farewell to the Chimera of Forcing, 146 U. Penn. L. Rev. 1 (1997);
Christopher Leslie, Unilaterally Imposed Tying Arrangements and Antitrust's Concerted Action Requirement, 60 Ohio St. L.J. 1773 (1999);
John Lopatka and William Page, The Dubious Search For Integration in the Microsoft Trial, 31 Conn. L. Rev. 1251 (1999);
Alan Meese, Monopoly Bundling in Cyberspace: How Many Products Does Microsoft Sell?, 44 Antitrust Bull. 65 (1999);
Keith N. Hylton and Michael Salinger, Tying Law and Policy: A Decision-Theoretic Approach, 69 Antitrust L. J. 469 (2001);
Michael D. Whinston, Exclusivity and Tying in U.S. v. Microsoft: What We Know, and Don't Know, 15 Journal of Economic Perspectives, 63-80 (2001);
Christopher Leslie, Cutting Through Tying Theory with Occam's Razor: A Simple Explanation of Tying Arrangements, 78 Tul. L. Rev. 727 (2004); and
Timothy D. Naegele, The Bank Holding Company Act's Anti-Tying Provision: 35 Years Later, 122 Banking Law Journal 195 (2005).
Anti-competitive practices
Competition law
Business models
Bundled products or services |
43780358 | https://en.wikipedia.org/wiki/Middle%20East%20Eye | Middle East Eye | Middle East Eye (MEE) is a London-based online news outlet covering events in the Middle East and North Africa. MEE describes itself as an "independently funded online news organization that was founded in April 2014." MEE seeks to be the primary portal of Middle East news, and describes its target audience as "all those communities of readers living in and around the region that care deeply for its fate".
MEE has been accused of being backed by Qatar. The governments of Saudi Arabia, the UAE, Egypt and Bahrain accuse MEE of pro-Muslim Brotherhood bias and of receiving Qatari funding. As a consequence, they demanded that MEE be shut down following the Saudi-led blockade of Qatar. MEE responds that it is independent of any government or movement and is not funded by Qatar.
Organisation
MEE is edited by David Hearst, a former foreign leader writer for the British daily, The Guardian. MEE is owned by Middle East Eye Ltd, a UK company incorporated in 2013 under the sole name of Jamal Awn Jamal Bessasso. It employs about 20 full-time staff in its London office.
Coverage
Middle East Eye covers a range of topics across the Middle East. According to its website, it reports on events in 22 different countries. Content is separated into different categories on its website including news, opinion and essays.
Since the foundation of the media outlet, it has provided exclusives on a number of major events in the Middle East, which have often been picked up by other media outlets globally. In early June 2017, an anonymous hacker group began distributing emails to multiple news outlets that they had hacked from the inbox of Yousef Otaiba, the UAE's ambassador in Washington D.C. This included providing details from leaked emails of Mohammed bin Salman and US officials. This revelation on 14 August 2017, led to other media outlets to print other material from the leaked emails. According to The New York Times, the hacked emails appeared to benefit Qatar and be the work of hackers working for Qatar, a common subject of the distributed emails.
On July 29, 2016, MEE published a story alleging that the government of the United Arab Emirates, aided by Palestinian exile Mohammed Dahlan, had funnelled significant sums of money to conspirators of the 2016 Turkish coup d'état attempt two weeks earlier. In 2017, Dahlan brought a libel lawsuit against MEE in a London court, seeking damages of up to £250,000. However, Dahlan abandoned the suit shortly before the case was to begin. In a statement, Dahlan maintained that the story was “fully fabricated” but claimed that he had “achieved his goals in the English courts” and was now planning to sue Facebook in Dublin, where the article was “widely published”. However, according to MEE and its lawyers, by dropping the claim Dahlan will be forced to pay the legal costs of both parties, estimated to be in excess of £500,000.
In November 2019, the Turkish government officially accused Dahlan of involvement in the 2016 Turkish coup d'état attempt and is offering $700,000 for information leading to his capture.
Criticism of coverage
Saudi Arabia accused MEE of being a news outlet funded by Qatar (both directly and indirectly).
On 22 June 2017, during the Qatar diplomatic crisis, Saudi Arabia, the United Arab Emirates (UAE), Egypt, and Bahrain, as part of a list of 13 demands, demanded that Qatar close Middle East Eye, which they saw as sympathetic to the Muslim Brotherhood and a Qatari-funded and aligned outlet. Middle East Eye denied it has ever received Qatari funds.
Notable contributors
Jamal Khashoggi
Jamal Khashoggi wrote for MEE prior to joining The Washington Post.
According to a post on the MEE website, Khashoggi wrote for them over a period of two years. According to MEE, his op-eds were not credited to him at the time due to concerns for his safety because many of his articles for MEE are critical of Saudi Arabia and its policies, and Saudi Arabia's rift with Qatar. Khashoggi, a Washington Post columnist, was assassinated when he entered the Saudi consulate in Turkey on 2 October 2018. After initial denials, Saudi Arabia stated that he was killed by rogue assassins inside the consulate building with "premeditated intention".
Middle East tensions
Blocking
In 2016, the United Arab Emirates blocked the Middle East Eye in what was a countrywide ban. MEE says it contacted the UAE embassy in London for an explanation, but never received a response. Saudi Arabia also blocked the website across the country in May 2017. Following protests against the President Abdel Fattah el-Sisi in September and October 2019, Egypt also blocked the website.
2017–2018 Qatar diplomatic crisis
In June 2017, Saudi Arabia, the UAE, Egypt and Bahrain ended their diplomatic relationships with Qatar, followed by a list of 13 demands to restore diplomatic relations. One of the demands was that Qatar shut down MEE, even though the news organisation denies receiving funds from Qatar and described the demand as an attempt to "extinguish any free voice which dares to question what they are doing". In a statement responding to the demand, the publication's editor-in-chief said "MEE covers the area without fear or favour, and we have carried reports critical of the Qatari authorities, for instance how workers from the subcontinent are treated on building projects for the 2022 World Cup."
Cyberattack
In April 2020, MEE was one of 20 websites targeted by hackers that cybersecurity researchers at ESET have linked to an Israeli surveillance company called Candiru. The website was targeted with a watering hole attack, which serves malicious code to selected visitors, allowing the attackers to compromise their computers.
References
External links
Official website
2014 establishments in the United Kingdom
Publications established in 2014
Mass media in London
Mass media in the Middle East
Muslim Brotherhood |
3401042 | https://en.wikipedia.org/wiki/Emanuel%20Tov | Emanuel Tov | Emanuel Tov, (; born September 15, 1941, Amsterdam, Netherlands as Menno Toff) is an Israeli, emeritus J. L. Magnes Professor of Bible Studies in the Department of Bible at the Hebrew University of Jerusalem. He has been intimately involved with the Dead Sea Scrolls for many decades, and from 1991, he was appointed Editor-in-Chief of the Dead Sea Scrolls Publication Project.
Biography
Emanuel Tov was born in Amsterdam, the Netherlands, on September 15, 1941, during the German occupation. During the Holocaust, when Tov was one year old, his parents Juda (Jo) Toff and Toos Neeter were deported to concentration camps; they entrusted him to the care of a Christian family, and following the war he grew up with his uncle and aunt as one of their children.
From age 14, he was active in the Zionistic youth movement Habonim and served as one of its leaders. At age 18, the movement motivated him to go to Israel for training as a leader and in 1960 he became the general secretary of that movement in the Netherlands. In 1961, he immigrated to Israel.
Emanuel Tov is married to Lika (née Aa). Tov and Lika have three children (a daughter Ophira, and two sons, Ariel and Amitai) and four granddaughters.
Education
Tov received his primary education at the Boerhaaveschool and continued at the Kohnstamm School in South Amsterdam. At the age of 12 he began studying Latin and Greek at the Spinoza Lyceum, where he met his future wife, Lika Aa. At the age of 18 he finished his studies at the gymnasium, where he learned classical and modern European languages, while also learning Hebrew at the Talmud Torah from the time of his Bar Mitzvah.
Tov spent a year in Israel (1959–1960) at Machon L'Madrichei Chutz La'Aretz, studying for leadership in the youth movement Habonim. He sang in the choir and learned to play the flute. He then returned to the Netherlands.
In October 1961 Tov returned to Israel to study at the Hebrew University of Jerusalem. In 1964 he completed a B.A. in Bible and Greek literature, and in 1967 he received his M.A. in Hebrew Bible, while also serving as an assistant in the Bible Department and at the Hebrew University Bible Project. From 1967 to 1969 he continued his studies in the Department for Near Eastern Studies and Languages at Harvard University. His dissertation, written under the guidance of Professors Shemaryahu Talmon of the Hebrew University and Frank Moore Cross of Harvard University, was submitted to the Hebrew University in 1973 as "The Septuagint Translation of Jeremiah and Baruch" and earned him a PhD (summa cum laude) from the Hebrew University.
Upon his return to Israel, he served as an "assistant" at the University of Haifa and at the Hebrew University.
Teaching
In 1986, he was appointed Professor at the Hebrew University and in 1990 he became the J. L. Magnes Professor of Bible Studies. He has served as visiting professor at the universities of Oxford, Uppsala, Doshisha (Kyoto), Macquarie and Sydney (Australia), Stellenbosch (South Africa), the Vrije Universiteit (Amsterdam), the University of Pennsylvania (Philadelphia), the Pontifical Gregorian University (Rome), Halle (Germany), the Protestantse Theologische Universiteit (Amsterdam), and the Pontifical Biblical University (Rome). He has also stayed at institutes for advanced studies at the Hebrew University of Jerusalem, NIAS (the Netherlands), the Annenberg Research Institute (Philadelphia), the Oxford Centre for Postgraduate Hebrew Studies, and the Lichtenberg Kolleg (Göttingen, Germany).
Academic work
He was one of the editors of the Hebrew University Bible Project. He is a member of the editorial board of the journals Dead Sea Discoveries and the Journal of Jewish Studies, and served on the Academic committee of the Magnes Press. He is the co-founder and chairman (1991–2000) of the Dead Sea Scrolls Foundation, a Member of the Academic Committee of the Orion Center for the Study of the Dead Sea Scrolls, and Senior Associate Fellow and an Honorary Fellow of the Oxford Centre for Postgraduate Hebrew Studies.
From 1990 to 2009 he served as the Editor-in-Chief of the international Dead Sea Scrolls Publication Project, which during those years produced 33 volumes of the series Discoveries in the Judean Desert as well as two concordances.
He also published an electronic edition of all the extra-biblical Qumran scrolls and a six-volume printed edition of the scrolls meant for the general public.
He also created electronic editions of the Hebrew and Greek Bible.
Prizes and honorary titles
1999 – 2004 – Humboldt Research Prize, Germany
2003 – Ubbo Emmius medal, University of Groningen
2004 – Emet Prize for Biblical Research, Israel
2006 – Appointed Corresponding Fellow of the British Academy
2008 – Honorary doctorate from the University of Vienna
2009 – Israel Prize in biblical studies
2010 – Samaritan Medal for Humanitarian Achievement
2012 – Appointed Member of the Israel Academy of Sciences and Humanities
2017 – Appointed Member of the American Academy of Arts and Sciences
2019 – Honorary doctorate from the University of Salzburg
2021 – Honorary doctorate from the University of Copenhagen
Research
Septuagint
Emanuel Tov's studies on the Septuagint focused first on inner-translational developments and gradually moved to the importance of this translation for the study of the Bible: the early revisions of the Septuagint, translation technique, the reconstruction of the Hebrew parent text of the Greek translation, the value of the Septuagint for the textual study of the Hebrew Bible, the importance of certain Septuagint books for the exegesis of the Hebrew books and the understanding of their literary development, the place of the Hebrew source of the Septuagint in the development of the text of the Bible.
Tov's initial publications on the Septuagint deal with that translation's early revisions, which were intended to approximate the Greek text to the Hebrew text current in Israel from the 1st century BCE until the 2nd century CE. For that research, he established sound principles by determining the criteria for defining and characterizing the revisions. His preoccupation with matters of translation technique and the reconstruction of the Hebrew parent text of the Septuagint was influenced by his practical work in the HUBP (Hebrew University Bible Project). In that research, he combined the field work in that project with the formulation of abstract rules for the evaluation of details in the Septuagint, constantly cross-fertilizing both areas. These rules were formulated in his theoretical book on the Septuagint, which grew out of his courses at the Hebrew University, taught each year on a different book of the Bible.
Subsequently, the focus of Tov's interest moved to the importance of the Septuagint for biblical scholarship, both for textual and for literary criticism. In several books, the Septuagint reflects a Hebrew basis that needs to be taken into consideration in the exegesis of those books beyond small details, both when, according to Tov, the Hebrew parent text of the Septuagint preceded the Masoretic Text (Joshua, 1 Samuel 16–18, Jeremiah, Ezekiel, etc.) and when it serves as an exegetical layer reacting to the forerunner of the Masoretic Text (1 Kings, Esther, and Daniel). According to Tov, in all these books the exegete of the Hebrew text must take the Greek translation into consideration. A precondition for this procedure is that the analysis of the translation technique, as described in the previous paragraph, has established that the Septuagint is a good source for reconstructing the text that lay before its translator. From among all the early witnesses of the biblical text, the best ones for analyzing the stages of its literary development are the Masoretic Text, the Septuagint, several Qumran texts, and the Samaritan Pentateuch. Tov believes that the analysis of early witnesses such as the Septuagint enriches our exegesis and helps us understand the last stages of the development of the biblical literature in specific books. In his more recent work, Tov characterizes the source of the LXX-Pentateuch as displaying harmonizing features shared with the Samaritan Pentateuch and related texts.
Development of the Bible text
Emanuel Tov does not describe the development of the biblical text based on abstract theories, but tries to take the evidence of the ancient manuscripts and versions as his point of departure. It is clear that in antiquity many versions of the Bible were circulating, as is evident from the textual plurality at Qumran. All the manuscripts differed from one another, but within that plurality one may recognize some groups (families). Tov qualified this plurality by providing the internal statistics of the different types of the Qumran scrolls. He also described the socio-religious background of some groups of the Judean Desert scrolls.
An important link in this argumentation is the group of the so-called 4QReworked Pentateuch texts. Ten years after Tov published this group of documents, he realized that these texts do not reflect a single non-biblical rewritten Pentateuch composition, but a cluster of biblical texts that included many exegetical elements. These texts reflect a link in the series of developing biblical texts.
Tov's studies on the Septuagint and 4QReworked Pentateuch led him to new thoughts regarding the development of the last stages of the biblical books and the original text of these books. In his view, the early stages of the biblical books, as reflected in the Septuagint of 1 Samuel, Jeremiah, and Ezekiel, show that the formulations of these books developed stage by stage. This reconstructed development makes it difficult to posit an original text of the biblical books in the usual sense of the word. In Tov's view, there was not one original text, but a series of "original texts." This view developed after the appearance of the second edition of his Textual Criticism of the Hebrew Bible (2001) and was emphasized more in the third edition (2012).
The development of the text of the Torah
In studies primarily carried out in the 2010s, Tov focused on the special textual status of the Torah.
In his view, the textual development of the Torah differed from that of all other Scripture books. Quite unusually, its textual witnesses may be divided into two text blocks. "Block I" contains the Masoretic Text group, consisting of proto-MT scrolls and the followers of MT, among them a group of tefillin. "Block II" consists of a large group comprising the source of the LXX, the SP group, the Qumran tefillin, and more. The latter block usually contains a popularizing text featuring harmonizing and facilitating readings, while block I contains a more original text.
Dead Sea Scrolls and the Qumran scribes
Emanuel Tov dealt with various aspects of the Qumran scrolls, but his most central publications pertain to the Qumran scribes. In 2004, he published a detailed monograph on the scribal practices reflected in the Qumran scrolls, suggesting that the information about these scribal practices allows us to obtain a better understanding of the Qumran scrolls.
This monograph describes the technical aspects of all the Judean Desert texts, such as the measurements of the columns and sheets, the beginnings and ends of scrolls, systems of correcting mistakes, orthography systems, and a classification of the scrolls according to these parameters.
An important part of this description is Tov's theory on the Qumran scribes. Since 1986, Tov has suggested the division of the Qumran scrolls into two groups distinguished by external features. Group 1 is written in a special spelling (forms like ki’ כיא for כי everywhere), special linguistic forms (like אביכמה in 1QIsa-a for אֲבִיכֶם MT in Isa 51:2 and מואדה in the same scroll for MT מְאֹד in Isa 47:6), and special scribal habits (writing the divine name in the old Hebrew script, erasing elements with lines and writing cancellation dots above and below words and letters, writing dots in the margins guiding the drawing of the lines, etc.). The great majority of the Qumran sectarian scrolls belong to this group; hence Tov's suggestion that these scrolls were written by sectarian scribes, possibly at Qumran. These scribes copied biblical as well as extra-biblical scrolls, altogether one-third of the Qumran scrolls, while the other scrolls (group 2) were brought to Qumran from outside, from one or more localities. Several tefillin found at Qumran were also written in the Qumran Scribal Practice, adding a social dimension to this practice, as the contents of these tefillin differed from the "rabbinic" tefillin found elsewhere in the Judean Desert. An independent C-14 examination of the material on which some of the scrolls written in the Qumran Scribal Practice were copied indicated in 2020 that they differed from the other scrolls.
Computer-assisted research of the Bible and the Dead Sea Scrolls
Emanuel Tov believes that the examination of the Bible and Dead Sea Scrolls needs to be aided by computer-assisted research and that therefore databases and computer programs need to be developed. He supervised the electronic encoding of the Leningrad Codex in the 1980s.
At that time, he also embarked upon a research project together with Prof. Robert A. Kraft of the University of Pennsylvania (CATSS = Computer Assisted Tools for Septuagint Studies). That project, based in Philadelphia and Jerusalem, created a comparative database of all the words in the Masoretic Text and the Septuagint. It was published as a module within the Accordance program (subsequently also within Bible Works and Logos). With the aid of that program, which allows for advanced searches and statistical research, several such investigations have been carried out by Tov and others.
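The CATSS files themselves are distributed in their own formats; as a minimal, hypothetical sketch of the underlying idea, the Python snippet below models a parallel alignment of Masoretic Text and Septuagint words as simple records and runs the kind of frequency query (how is a given Hebrew lemma rendered in Greek?) that such a database makes possible. The field names and sample rows are illustrative and are not taken from CATSS.

```python
# Minimal sketch of a parallel-aligned text database in the spirit of CATSS.
# Field names and sample rows are illustrative only.
from collections import Counter
from dataclasses import dataclass


@dataclass
class AlignedWord:
    book: str
    chapter: int
    verse: int
    hebrew_lemma: str   # lemma of the MT word (transliterated here)
    greek_lemma: str    # lemma of the corresponding LXX word


# A few hypothetical alignment records.
ALIGNMENT = [
    AlignedWord("Gen", 1, 1, "bara", "poieo"),
    AlignedWord("Gen", 1, 1, "elohim", "theos"),
    AlignedWord("Gen", 1, 1, "shamayim", "ouranos"),
    AlignedWord("Gen", 1, 2, "elohim", "theos"),
]


def greek_renderings(hebrew_lemma):
    """Count how a Hebrew lemma is rendered in Greek across the aligned corpus."""
    return Counter(
        rec.greek_lemma for rec in ALIGNMENT if rec.hebrew_lemma == hebrew_lemma
    )


if __name__ == "__main__":
    print(greek_renderings("elohim"))  # Counter({'theos': 2})
```

Statistical studies of translation technique build on exactly this sort of query, applied to the full aligned corpus rather than a toy sample.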
Another database edited by Tov contains all the texts and images of the non-biblical Dead Sea Scrolls, in the original languages and in translation, with morphological analysis and search programs. All these programs serve the international community.
Honorary volumes
Emanuel, Studies in Hebrew Bible, Septuagint, and Dead Sea Scrolls in Honor of Emanuel Tov (ed. S. M. Paul, R. A. Kraft, L. H. Schiffman, and W. W. Fields, with the assistance of E. Ben-David; VTSup 94; Leiden/Boston: E.J. Brill, 2003).
From Qumran to Aleppo: A Discussion with Emanuel Tov about the Textual History of Jewish Scriptures in Honor of his 65th Birthday (ed. A. Lange et al.; FRLANT 230; Göttingen: Vandenhoeck & Ruprecht, 2009)
Books authored
1. The Book of Baruch also Called I Baruch (Greek and Hebrew) (Texts and Translations 8, Pseudepigrapha Series 6; Missoula, Mont.: Scholars Press, 1975).
2. The Septuagint Translation of Jeremiah and Baruch: A Discussion of an Early Revision of Jeremiah 29–52 and Baruch 1:1–3:8 (HSM 8; Missoula, Mont.: Scholars Press, 1976).
3. The Text-Critical Use of the Septuagint in Biblical Research (Jerusalem Biblical Studies 3; Jerusalem: Simor, 1981).
3*. The Text-Critical Use of the Septuagint in Biblical Research (Second Edition, Revised and Enlarged; Jerusalem Biblical Studies 8; Jerusalem: Simor, 1997).
3**. The Text-Critical Use of the Septuagint in Biblical Research (Third Edition, Completely Revised and Enlarged; Winona Lake, IN: Eisenbrauns, 2015).
4. With J. R. Abercrombie, W. Adler, and R. A. Kraft: Computer Assisted Tools for Septuagint Studies (CATSS), Volume 1, Ruth (SCS 20; Atlanta, Georgia: Scholars Press, 1986).
5. A Computerized Data Base for Septuagint Studies: The Parallel Aligned Text of the Greek and Hebrew Bible (CATSS Volume 2; JNSLSup 1; 1986).
6. With D. Barthélemy, D. W. Gooding, and J. Lust: The Story of David and Goliath, Textual and Literary Criticism, Papers of a Joint Venture (OBO 73; Fribourg/Göttingen: Éditions universitaires/Vandenhoeck & Ruprecht, 1986).
7. Textual Criticism of the Bible: An Introduction (Heb.; Jerusalem: Bialik Institute, 1989).
7*. Second corrected printing of: Textual Criticism of the Bible: An Introduction (Heb.; Jerusalem: Bialik Institute, 1997).
7**. Textual Criticism of the Bible: An Introduction (2nd ed., revised and expanded; The Biblical Encyclopaedia Library 31; Heb.; Jerusalem: Bialik Institute, 2013).
7a. Expanded and updated version of 7: Textual Criticism of the Hebrew Bible (Minneapolis and Assen/Maastricht: Fortress Press and Van Gorcum, 1992).
7a*. Textual Criticism of the Hebrew Bible (2d rev. ed.; Minneapolis and Assen: Fortress Press/Royal Van Gorcum, 2001).
7a**. Textual Criticism of the Hebrew Bible (3rd ed., revised and expanded; Minneapolis: Fortress Press, 2012).
7b. German version of 7a (revised and updated): Der Text der Hebräischen Bibel: Handbuch der Textkritik (trans. H.-J. Fabry; Stuttgart/Berlin/Cologne: Kohlhammer, 1997).
7c. Russian version of 7b (revised and updated): Tekstologiya Vetchoga Zaveta (trans. K. Burmistrov and G. Jastrebov; Moscow: Biblisko-Bagaslovski Institut Sv. Apostola Andrjeya [St. Andrews Theological Seminary], 2001).
8. With the collaboration of R. A. Kraft: The Greek Minor Prophets Scroll from Nahal Hever (8HevXIIgr) (The Seiyal Collection I) (DJD VIII; Oxford: Clarendon, 1990).
8*. Revised edition of 8: The Greek Minor Prophets Scroll from Nahal Hever (8HevXIIgr) (The Seiyal Collection I) (DJD VIII; Oxford: Clarendon, "Reprinted with corrections 1995").
9. With the collaboration of S. J. Pfann: The Dead Sea Scrolls on Microfiche: A Comprehensive Facsimile Edition of the Texts from the Judean Desert, with a Companion Volume (Leiden: E.J. Brill/IDC, 1993).
9*. Revised edition of 9: Companion Volume to The Dead Sea Scrolls Microfiche Edition (2d rev. ed.; Leiden: E.J. Brill/IDC, 1995).
10. With C. Rabin and S. Talmon: The Hebrew University Bible, The Book of Jeremiah (Jerusalem: Magnes Press, 1997).
11. The Greek and Hebrew Bible – Collected Essays on the Septuagint (VTSup 72; Leiden/ Boston/Cologne: E.J. Brill, 1999).
11.* Unchanged paperback edition of The Greek and Hebrew Bible – Collected Essays on the Septuagint (Atlanta: Society of Biblical Literature, 2006).
12a. With D. W. Parry: The Dead Sea Scrolls Reader, Part 1, Texts Concerned with Religious Law (Leiden/Boston: E.J. Brill, 2004)
12b. With D. W. Parry: The Dead Sea Scrolls Reader, Part 2, Exegetical Texts (Leiden/ Boston: E.J. Brill, 2004).
12c. With D. W. Parry: The Dead Sea Scrolls Reader, Part 3, Parabiblical Texts (Leiden/ Boston: E.J. Brill, 2005).
12d. With D. W. Parry: The Dead Sea Scrolls Reader, Part 4, Calendrical and Sapiential Texts (Leiden/Boston: E.J. Brill, 2004).
12e. With D. W. Parry: The Dead Sea Scrolls Reader, Part 5, Poetic and Liturgical Texts (Leiden/Boston: E.J. Brill, 2005).
12f. With D. W. Parry: The Dead Sea Scrolls Reader, Part 6, Additional Genres and Unclassified Texts (Leiden/Boston: E.J. Brill, 2005).
12*. With D.W. Parry, and in association with G.I. Clements: The Dead Sea Scrolls Reader, Volumes 1–2 (2nd edition, revised and expanded; Leiden: Brill, 2014).
13. Scribal Practices and Approaches Reflected in the Texts Found in the Judean Desert (STDJ 54; Leiden/Boston: E.J. Brill, 2004).
14. Hebrew Bible, Greek Bible, and Qumran – Collected Essays (TSAJ 121; Tübingen: Mohr Siebeck, 2008).
15. Revised Lists of the Texts from the Judaean Desert (Leiden/Boston: Brill, 2010).
16. Textual Criticism of the Hebrew Bible, Qumran, Septuagint: Collected Writings, Volume 3 (VTSup 167; Leiden: Brill, 2015).
17. Textual Developments, Collected Essays, Volume 4, VTSup 181 (Leiden: Brill, 2019).
Electronic publications
1. The Dead Sea Scrolls Database (Non-Biblical Texts) (The Dead Sea Scrolls Electronic Reference Library, vol. 2; Prepared by the Foundation for Ancient Research and Mormon Studies [FARMS]) (Leiden: E.J. Brill, 1999).
2. In collaboration with A. Groves: The Hebrew text in JPS Hebrew–English Tanakh: The Traditional Hebrew Text and the New JPS Translation (2d ed.; Philadelphia: The Jewish Publication Society, 1999).
3. The Parallel Aligned Text of the Greek and Hebrew Bible (division of the CATSS database, directed by R. A. Kraft and E. Tov), module in the Accordance computer program, 2002 (with updates 2003–).
3a. The Parallel Aligned Text of the Greek and Hebrew Bible (division of the CATSS database, directed by R. A. Kraft and E. Tov), module in the Logos computer program, 2004 (with updates, 2005–).
3b. With F. H. Polak: The Parallel Aligned Text of the Greek and Hebrew Bible (division of the CATSS database, directed by R. A. Kraft and E. Tov), module in the Bible Works computer program, version 7, 2005 (with updates, 2006–).
4. "Electronic Resources Relevant to the Textual Criticism of Hebrew Scripture," TC: A Journal of Biblical Textual Criticism 8 (2003)
5. The Dead Sea Scrolls Electronic Library, Brigham Young University, Revised Edition 2006, part of the Dead Sea Scrolls Electronic Reference Library of E.J. Brill Publishers (Leiden: E.J. Brill, 2006). https://brill.com/view/db/dsno?rskey=YpVhkL&result=1
6. "Electronic Tools for the Textual Criticism of the Hebrew Bible – 2013 Introduction and List"
7. "Electronic Bible Editions on the Internet (2014)"
8. "The (Proto-)Masoretic Text: A Ten-Part Series," http://thetorah.com/proto-masoretic-text/ = article 325 (2017).
Books edited
1. The Hebrew and Greek Texts of Samuel, 1980 Proceedings IOSCS, Vienna (Jerusalem: Academon, 1980).
2. A Classified Bibliography of Lexical and Grammatical Studies on the Language of the Septuagint and Its Revisions (3rd ed.; Jerusalem: Academon, 1982).
3. With C. Rabin: Textus, Studies of the Hebrew University Bible Project, vol. 11 (Jerusalem: Magnes Press, 1984).
4. Textus, Studies of the Hebrew University Bible Project, vol. 12 (Jerusalem: Magnes Press, 1985).
5. Textus, Studies of the Hebrew University Bible Project, vol. 13 (Jerusalem: Magnes Press, 1986).
6. With M. Klopfenstein, U. Luz, and S. Talmon: Mitte der Schrift? Ein jüdisch–christliches Gespräch. Texte der Berner Symposions 1985 (Judaica et Christiana 11; Bern: Peter Lang, 1987).
7. Textus, Studies of the Hebrew University Bible Project, vol. 14 (Jerusalem: Magnes Press, 1988). 183 pp.
8. Textus, Studies of the Hebrew University Bible Project, vol. 15 (Jerusalem: Magnes Press, 1990).
9. With M. Fishbane and with the assistance of W. Fields: "Sha’arei Talmon": Studies in the Bible, Qumran, and the Ancient Near East Presented to Shemaryahu Talmon (Winona Lake, IN: Eisenbrauns, 1992).
10. With A. Hurvitz and S. Japhet: I. L. Seeligmann, Studies in Biblical Literature (Heb.; Jerusalem: Magnes Press, 1992).
10*. With A. Hurvitz and S. Japhet: I. L. Seeligmann, Studies in Biblical Literature (Heb.; 2d rev. ed.; Jerusalem: Magnes Press, 1996).
11. Max L. Margolis, The Book of Joshua in Greek, Part V: Joshua 19:39–24:33 (Monograph Series, Annenberg Research Institute; Philadelphia 1992).
12. J. Jarick with the collaboration of G. Marquis, A Comprehensive Bilingual Concordance of the Hebrew and Greek Texts of the Book of Ecclesiastes (CATSS: Basic Tools Volume 3; SCS 36; Atlanta, GA: Scholars Press, 1993).
13. Area editor (Dead Sea Scrolls) in The Oxford Dictionary of the Jewish Religion (ed. R. J. Z. Werblowsky and G. Wigoder; New York/Oxford: Oxford University Press, 1997).
14. Area editor in Encyclopedia of the Dead Sea Scrolls, vols. 1–2 (ed. L. H. Schiffman and J. C. VanderKam; Oxford/New York: Oxford University Press, 2000).
15. With L. H. Schiffman and J. VanderKam: The Dead Sea Scrolls: Fifty Years After Their Discovery – Proceedings of the Jerusalem Congress, July 20–25, 1997 (Jerusalem: Israel Exploration Society/The Shrine of the Book, Israel Museum, 2000).
16. F. H. Polak and G. Marquis, A Classified Index of the Minuses of the Septuagint, Part I: Introduction; Part II: The Pentateuch (CATSS Basic Tools 4, 5; Stellenbosch: Print24.com, 2002).
17. With E. D. Herbert: The Bible as Book – The Hebrew Bible and the Judaean Desert Discoveries (London: British Library & Oak Knoll Press in association with The Scriptorium: Center for Christian Antiquities, 2002).
18. With P. W. Flint and J. VanderKam: Studies in the Hebrew Bible, Qumran and the Septuagint Presented to Eugene Ulrich (VTSup 101; Leiden: E.J. Brill, 2006).
19. With M. Bar-Asher: Meghillot, Studies in the Dead Sea Scrolls V–VI, A Festschrift for Devorah Dimant (Haifa/Jerusalem: University of Haifa, The Publication Project of the Qumran Scrolls/The Bialik Institute, 2007).
20. With M. Bar-Asher, D. Rom-Shiloni, and N. Wazana: Shai le-Sara Japhet (Jerusalem: Bialik Institute, 2007).
21. With C. A. Evans: Exploring the Origins of the Bible – Canon Formation in Historical, Literary, and Theological Perspective (Grand Rapids, MI: Baker Academic, 2008).
22. With A. Lange, M. Weigold, and B.H. Reynolds III: The Dead Sea Scrolls in Context: Integrating the Dead Sea Scrolls in the Study of Ancient Texts, Languages, and Cultures, Vols. I–II (VTSup 140/I–II; Leiden/Boston: Brill, 2011).
23. With Armin Lange, Textual History of the Bible, The Hebrew Bible, Vol. 1A, Overview Articles (Leiden: Brill, 2016).
24. With Armin Lange, Textual History of the Bible, The Hebrew Bible, Vol. 1B, Pentateuch, Former and Latter Prophets (Leiden: Brill, 2017).
25. With Armin Lange, Textual History of the Bible, The Hebrew Bible, Vol. 1C, Pentateuch, Former and Latter Prophets (Leiden: Brill, 2017).
26. With Kipp Davis and Robert Duke, Dead Sea Scrolls in the Museum Collection, Publications of Museum of the Bible 1, ed. Michael W. Holmes; Semitic Texts Series, ed. Emanuel Tov; managing ed. Jerry A. Pattengale (Leiden: Brill, 2016).
27. Textus, A Journal on Textual Criticism of the Hebrew Bible, Vol. 27 (Leiden: Brill, 2018).
28. Textus, A Journal on Textual Criticism of the Hebrew Bible, Vol. 28 (Leiden: Brill, 2019).
29. Textus, A Journal on Textual Criticism of the Hebrew Bible, Vol. 29.1 (Leiden: Brill, 2020).
30. With Gershom Qiprisçi as consulting editor: Biblia Hebraica Petropolitana, The Pentateuch and the Davidic Psalter, A Synoptic Edition of Hebrew Biblical Texts: The Masoretic Text, The Samaritan Pentateuch, the Dead Sea Scrolls, Vols. 1–6, Manuscripta Orientalia, Supplement Series 1 (St. Petersburg/Leiden: 2020), with introductions by Emanuel Tov in English and Russian.
31. Textus, A Journal on Textual Criticism of the Hebrew Bible, Vol. 29.2 (Leiden: Brill, 2020).
32. Textus, A Journal on Textual Criticism of the Hebrew Bible, Vol. 30.1 (Leiden: Brill, 2021).
Editor-in-Chief, Discoveries in the Judaean Desert
1. P. W. Skehan, E. Ulrich, and J. E. Sanderson, Qumran Cave 4.IV: Palaeo-Hebrew and Greek Biblical Manuscripts (DJD IX; Oxford: Clarendon, 1992).
2. E. Qimron and J. Strugnell, Qumran Cave 4.V: Miqsat Ma’ase ha-Torah (DJD X; Oxford: Clarendon, 1994).
3. E. Eshel et al., in consultation with J. VanderKam and M. Brady, Qumran Cave 4.VI: Poetical and Liturgical Texts, Part 1 (DJD XI; Oxford: Clarendon, 1998).
4. E. Ulrich and F. M. Cross, eds., Qumran Cave 4.VII: Genesis to Numbers (DJD XII; Oxford: Clarendon, 1994 [repr. 1999]).
5. H. Attridge et al., in consultation with J. VanderKam, Qumran Cave 4.VIII: Parabiblical Texts, Part 1 (DJD XIII; Oxford: Clarendon, 1994).
6. E. Ulrich and F. M. Cross, eds., Qumran Cave 4.IX: Deuteronomy, Joshua, Judges, Kings (DJD XIV; Oxford: Clarendon, 1995 [repr. 1999]).
7. E. Ulrich et al., Qumran Cave 4.X: The Prophets (DJD XV; Oxford: Clarendon, 1997).
8. E. Ulrich et al., Qumran Cave 4.XI: Psalms to Chronicles (DJD XVI; Oxford: Clarendon, 2000).
9. F. M. Cross, D. W. Parry, R. Saley, E. Ulrich, Qumran Cave 4.XII: 1–2 Samuel (DJD XVII; Oxford: Clarendon, 2005).
10. J. M. Baumgarten, Qumran Cave 4.XIII: The Damascus Document (4Q266–273) (DJD XVIII; Oxford: Clarendon, 1996).
11. M. Broshi et al., in consultation with J. VanderKam, Qumran Cave 4.XIV: Parabiblical Texts, Part 2 (DJD XIX; Oxford: Clarendon, 1995).
12. T. Elgvin et al., in consultation with J. A. Fitzmyer, S.J., Qumran Cave 4.XV: Sapiential Texts, Part 1 (DJD XX; Oxford: Clarendon, 1997).
13. S. Talmon, J. Ben-Dov, and U. Glessmer, Qumran Cave 4.XVI: Calendrical Texts (DJD XXI; Oxford: Clarendon, 2001).
14. G. Brooke et al., in consultation with J. VanderKam, Qumran Cave 4.XVII: Parabiblical Texts, Part 3 (DJD XXII; Oxford: Clarendon, 1996).
15. F. García Martínez, E. J. C. Tigchelaar, and A. S. van der Woude, Qumran Cave 11.II: 11Q2–18, 11Q20–31 (DJD XXIII; Oxford: Clarendon, 1998).
16. M. J. W. Leith, Wadi Daliyeh I: The Wadi Daliyeh Seal Impressions (DJD XXIV; Oxford: Clarendon, 1997).
17. É. Puech, Qumran Cave 4.XVIII: Textes hébreux (4Q521–4Q528, 4Q576–4Q579) (DJD XXV; Oxford: Clarendon, 1998).
18. P. Alexander and G. Vermes, Qumran Cave 4.XIX: 4QSerekh Ha-Yaḥad and Two Related Texts (DJD XXVI; Oxford: Clarendon, 1998).
19. H. M. Cotton and A. Yardeni, Aramaic, Hebrew, and Greek Documentary Texts from Naḥal Ḥever and Other Sites, with an Appendix Containing Alleged Qumran Texts (The Seiyâl Collection II) (DJD XXVII; Oxford: Clarendon, 1997).
20. D. M. Gropp, Wadi Daliyeh II: The Samaria Papyri from Wadi Daliyeh; E. Schuller et al., in consultation with J. VanderKam and M. Brady, Qumran Cave 4.XXVIII: Miscellanea, Part 2 (DJD XXVIII; Oxford: Clarendon, 2001).
21. E. Chazon et al., in consultation with J. VanderKam and M. Brady, Qumran Cave 4.XX: Poetical and Liturgical Texts, Part 2 (DJD XXIX; Oxford: Clarendon, 1999).
22. D. Dimant, Qumran Cave 4.XXI: Parabiblical Texts, Part 4: Pseudo-Prophetic Texts (DJD XXX; Oxford: Clarendon, 2001).
23. É. Puech, Qumran Cave 4.XXII: Textes araméens, première partie: 4Q529–549 (DJD XXXI; Oxford: Clarendon, 2001).
24. D. Pike and A. Skinner, in consultation with J. VanderKam and M. Brady, Qumran Cave 4.XXIII: Unidentified Fragments (DJD XXXIII; Oxford: Clarendon, 2001).
25. J. Strugnell, D. J. Harrington, S.J., and T. Elgvin, in consultation with J. A. Fitzmyer, S.J., Qumran Cave 4.XXIV: 4QInstruction (Musar leMevîn): 4Q415 ff. (DJD XXXIV; Oxford: Clarendon, 1999).
26. J. Baumgarten et al., Qumran Cave 4.XXV: Halakhic Texts (DJD XXXV; Oxford: Clarendon, 1999).
27. S. J. Pfann, Cryptic Texts; P. Alexander et al., in consultation with J. VanderKam and M. Brady, Qumran Cave 4.XXVI: Miscellanea, Part 1 (DJD XXXVI; Oxford: Clarendon, 2000).
28. H. Cotton et al., in consultation with J. VanderKam and M. Brady, Miscellaneous Texts from the Judaean Desert (DJD XXXVIII; Oxford: Clarendon, 2000).
29. E. Tov (ed.), The Texts from the Judaean Desert: Indices and an Introduction to the Discoveries in the Judaean Desert Series (DJD XXXIX; Oxford: Clarendon, 2002).
30. M. G. Abegg, Jr., with J. E. Bowley and E. M. Cook, in consultation with E. Tov, The Dead Sea Scrolls Concordance I. The Non-Biblical Texts from Qumran (Leiden: E.J. Brill, 2003).
31. H. Stegemann with E. Schuller, and C. Newsom (translations), Qumran Cave 1.III: 1QHodayota with Incorporation of 1QHodayotb and 4QHodayota–f (DJD XL; Oxford: Clarendon, 2009).
32. É. Puech, Qumran Cave 4.XXVII: Textes araméens, deuxième partie: 4Q550–575a, 580–587 et Appendices (DJD XXXVII; Oxford: Clarendon, 2009).
33. E. Ulrich and P. W. Flint, Qumran Cave 1.II: The Isaiah Scrolls (DJD XXXII; Oxford: Clarendon, 2010).
See also
List of Israel Prize recipients
References
External links
Emanuel Tov, official webpage
Interview posted at the website of the Israel Academy of Sciences
Biography of Emanuel Tov after he received the Israel Prize, Hebrew University site
Biography of Emanuel Tov at the Emet Prize site
Prof. Tov and the Hebrew University of Jerusalem
Prof. Tov and Brigham Young University
Tov and the Institute of Antiquity and Christianity
1941 births
Living people
Academics of the Oxford Centre for Hebrew and Jewish Studies
Dead Sea Scrolls
Dutch emigrants to Israel
Dutch Jews
Dutch Zionists
Harvard University alumni
Hebrew language
Hebrew University of Jerusalem faculty
Israel Prize in biblical studies recipients
Israeli biblical scholars
Linguists from Israel
Jewish biblical scholars
Corresponding Fellows of the British Academy
20th-century Jewish biblical scholars
21st-century Jewish biblical scholars |
1298314 | https://en.wikipedia.org/wiki/Michael%20Cisco | Michael Cisco | Michael Cisco (born October 13, 1970) is an American writer, Deleuzian academic, and teacher currently living in New York City. He is best known for his first novel, The Divinity Student, winner of the International Horror Guild Award for Best First Novel of 1999. His novel The Great Lover was nominated for the 2011 Shirley Jackson Award for Best Novel of the Year, and declared the Best Weird Novel of 2011 by the Weird Fiction Review. He has described his work as "de-genred" fiction.
Biography
Michael Cisco was born and raised in Glendale, California. He attended Sarah Lawrence College as an undergraduate, receiving his bachelor's degree in 1992. As part of his undergraduate studies, Cisco spent a year at Oxford University. He obtained his master's degree from SUNY Buffalo in 1994, and earned his Master of Philosophy in 2002 and his PhD in 2004 at New York University. Cisco is a professor at the City University of New York.
Bibliography
Novels
The Divinity Student (1999)
The Tyrant (2003)
The San Veneficio Canon (2004)
The Traitor (2007)
The Narrator (2010)
The Great Lover (2011)
Celebrant (2012)
Member (2013)
The Golem (2013) [e-Book, Cheeky Frawg]
Animal Money (2015)
Wretch of the Sun (2016)
Unlanguage (2018)
PEST (2022)
Nonfiction
Weird Fiction: A Genre Study (2022)
Collections
Secret Hours (2007)
Antisocieties (2021)
Chapbooks
The Knife Dance (2016)
Do You Mind if We Dance with Your Legs? (2020)
Translations
Headache by Julio Cortázar (2014)
The Sound of the Mill by Marcel Béalu (2014)
The Supper by Alfonso Reyes (2015)
Short fiction
Uncollected Letter (1996)
The Water Nymphs (1996)
Reliquaries (1997)
Translation (1998)
For No Eyes (1998)
He Will Be There (1999)
Herbert West--Reincarnated Part VI: The Chaos into Time (2000)
The Genius of Assassins (2002)
Clear Rice Sickness (2003)
Ledru's Disease (2003)
Noumenal Fluke (2003)
The House of Solemn Children (A Broken Story) (2003)
Zschokke's Chancres (2003)
The Scream (2003)
Reminiscences (2003)
The Life of Dr. Thackery T. Lambshead (1900-) (2003)
The City of God (2004)
Dr. Bondi's Methods (2007)
I Will Teach You (2007)
Ice Age of Dreams (2007)
The Chaos Into Time (2007)
The Death of Edgar Allan Poe (2007)
The Depredations of Mur (2007)
The Firebrands of Torment (2007)
Two Fragments (2007)
What He Chanced to Mould in Play (2007)
Machines of Concrete Light and Dark (2009)
Last Drink Bird Head (2009)
Mr. Wosslynne (2009)
Modern Cities Exist Only to Be Destroyed (2009)
Violence, Child of Trust (2010)
The Cadaver Is You (2011)
Bread and Water (2011)
This Is Tumor Speaking (2012)
The Vile Game of Gunter and Landau (2012)
Visiting Maze (2012)
The Penury (2013)
The Secrets of the Universe (2013)
Unlanguage (excerpt) (2014)
Learn to Kill (2014)
Excerpt from Unlanguage (2015)
Infestations (2015)
The Figmon (2015)
The Righteousness of Conical Men (2016)
Rock n' Roll Death Squad (2017)
Bet the Farm (2018)
Their Silent Faces (2019)
Cisco's work can also be found in The Thackery T. Lambshead Pocket Guide to Eccentric & Discredited Diseases, Album Zutique, Leviathan III, Leviathan IV, Phantom, Lovecraft Unbound, Last Drink Bird Head, Cinnabar's Gnosis: A Homage to Gustav Meyrink, Black Wings, The Thackery T. Lambshead Cabinet of Curiosities, The Master in the Cafe Morphine: A Homage to Mikhail Bulgakov, Blood and Other Cravings, DADAOISM, This Hermetic Legislature: A Homage to Bruno Schulz, and The Weird.
His essay on the author Sadeq Hedayat, "Eternal Recurrence in The Blind Owl", appeared in the journal Iranian Studies. Other critical articles by Cisco have appeared in The New Weird, The Encyclopedia of the Vampire, The Weird Fiction Review, and Lovecraft Studies.
Centipede Press has published a limited edition box set, composed of four novels and a collection of short fiction. All four novels are published for the first time in individual hardcover editions. Each book features a new introduction by Jeffrey Ford (The Traitor), Rhys Hughes (The Tyrant), Joseph S. Pulver (Secret Hours), Paul Tremblay (The Golem) and Ann VanderMeer (The Divinity Student).
Dim Shores published the novella The Knife Dance in 2016. The project was curated by Joseph S. Pulver.
Nightscape Press released Cisco's novella, Do You Mind if We Dance with Your Legs? in 2020 for their charitable chapbook series. One-third of all physical chapbook sales benefit the Los Angeles LGBT Center.
Cisco was on the Editorial Board of Vastarien Literary Journal as Associate Editor.
Weird Fiction: A Genre Study is forthcoming from Palgrave Macmillan.
Awards and nominations
The Divinity Student won the International Horror Guild Award for Best First Novel of 1999.
The Great Lover was nominated for a Shirley Jackson Award in 2012 and was declared the Best Weird Novel of 2011 by the Weird Fiction Review.
Unlanguage was nominated for Best Horror Novel by Locus in 2019.
See also
List of horror fiction authors
References
External links
1970 births
Living people
American horror writers
American fantasy writers
20th-century American novelists
21st-century American novelists
American male novelists
Writers from New York City
20th-century American male writers
21st-century American male writers
Novelists from New York (state)
Weird fiction writers |
7261437 | https://en.wikipedia.org/wiki/Margo%20Seltzer | Margo Seltzer | Margo Ilene Seltzer is a professor and researcher in computer systems. She is currently the Canada 150 Research Chair in Computer Systems and the Cheriton Family Chair in Computer Science at the University of British Columbia. Previously, Seltzer was the Herchel Smith Professor of Computer Science at Harvard University's John A. Paulson School of Engineering and Applied Sciences and director at the Center for Research on Computation and Society.
Education
Seltzer received her A.B. in Applied Mathematics from Harvard/Radcliffe College in 1983, where she was a teaching assistant under Harry R. Lewis. In 1992, she received her Ph.D. in Computer Science from the University of California, Berkeley, where her dissertation, "File System Performance and Transaction Support", was supervised by Michael Stonebraker. Her work in log-structured file systems, databases, and wide-scale caching is especially well known, and she was lead author of the BSD-LFS paper.
Career
Academia
Seltzer became an Assistant Professor of Computer Science at Harvard University in 1992, and an Associate Professor in 1997. She held endowed chairs as a Gordon McKay Professor of Computer Science in 2000, and as the Herchel Smith Professor of Computer Science in 2004. From 2005 to 2010, Seltzer was designated a Harvard College Professor in recognition of "particularly distinguished contributions to undergraduate teaching." Seltzer was the Associate Dean of the School of Engineering and Applied Sciences from 2002 to 2006, and an advisor to the Harvard Undergraduate Women in Computer Science.
In September 2018, Seltzer joined the faculty at the University of British Columbia Department of Computer Science as the Canada 150 Research Chair in Computer Systems and the Cheriton Family Chair in Computer Science. In February 2019, she was elected a member of the National Academy of Engineering.
Business
Seltzer was Chief Technical Officer of Sleepycat Software (developers of the Berkeley DB embedded database) from 1996 to 2006, when the company was acquired by Oracle Corporation. She served as an architect on the Oracle Berkeley DB team for several years before transferring to Oracle Labs where she continues to act as an architect.
Seltzer was a director of USENIX from 2005 to 2014, serving as vice president for one year and president for two. In 2019, she received the USENIX Lifetime Achievement Award for her seminal work on BerkeleyDB and provenance systems and her dedication to the USENIX community at large.
In 2011, Seltzer was made a Fellow of the Association for Computing Machinery (the Association's highest member grade) in recognition of "outstanding accomplishments in computing and information technology and/or outstanding service to ACM and the larger computing community." In July 2020, Seltzer accepted the SIGMOD Software Systems award on behalf of the Sleepycat Software team.
Personal life
She is married to software developer Keith Bostic.
References
External links
http://mis-misinformation.blogspot.com/
Appreciation of Margo Seltzer for Ada Lovelace Day by Aaron Swartz
Living people
Year of birth missing (living people)
American computer scientists
American women computer scientists
University of California, Berkeley alumni
Harvard University faculty
Fellows of the Association for Computing Machinery
Radcliffe College alumni
American chief technology officers
Women chief technology officers
University of British Columbia faculty
American women academics
21st-century American women |
167079 | https://en.wikipedia.org/wiki/Smartphone | Smartphone | A smartphone is a portable computing device that combines mobile telephone and computing functions into one unit. They are distinguished from feature phones by their stronger hardware capabilities and extensive mobile operating systems, which facilitate wider software, internet (including web browsing over mobile broadband), and multimedia functionality (including music, video, cameras, and gaming), alongside core phone functions such as voice calls and text messaging. Smartphones typically contain a number of metal–oxide–semiconductor (MOS) integrated circuit (IC) chips, include various sensors that can be leveraged by pre-included and third-party software (such as a magnetometer, proximity sensors, barometer, gyroscope, accelerometer and more), and support wireless communications protocols (such as Bluetooth, Wi-Fi, or satellite navigation).
Early smartphones were marketed primarily towards the enterprise market, attempting to bridge the functionality of standalone personal digital assistant (PDA) devices with support for cellular telephony, but were limited by their bulky form, short battery life, slow analog cellular networks, and the immaturity of wireless data services. These issues were eventually resolved with the exponential scaling and miniaturization of MOS transistors down to sub-micron levels (Moore's law), the improved lithium-ion battery, faster digital mobile data networks (Edholm's law), and more mature software platforms that allowed mobile device ecosystems to develop independently of data providers.
In the 2000s, NTT DoCoMo's i-mode platform, BlackBerry, Nokia's Symbian platform, and Windows Mobile began to gain market traction, with models often featuring QWERTY keyboards or resistive touchscreen input, and emphasizing access to push email and wireless internet. Following the rising popularity of the iPhone in the late 2000s, the majority of smartphones have featured thin, slate-like form factors, with large, capacitive screens with support for multi-touch gestures rather than physical keyboards, and offer the ability for users to download or purchase additional applications from a centralized store, and use cloud storage and synchronization, virtual assistants, as well as mobile payment services. Smartphones have largely replaced PDAs, handheld/palm-sized PCs and portable media players (PMP).
Improved hardware and faster wireless communication (due to standards such as LTE) have bolstered the growth of the smartphone industry. In the third quarter of 2012, one billion smartphones were in use worldwide. Global smartphone sales surpassed the sales figures for feature phones in early 2013.
History
The development of the smartphone was enabled by several key technological advances. The exponential scaling and miniaturization of MOSFETs (MOS transistors) down to sub-micron levels during the 1990s–2000s (as predicted by Moore's law) made it possible to build portable smart devices such as smartphones, as well as enabling the transition from analog to faster digital wireless mobile networks (leading to Edholm's law). Other important enabling factors include the lithium-ion battery, an indispensable energy source enabling long battery life, invented in the 1980s and commercialized in 1991, and the development of more mature software platforms that allowed mobile device ecosystems to develop independently of data providers.
Forerunner
In the early 1990s, IBM engineer Frank Canova realised that chip-and-wireless technology was becoming small enough to use in handheld devices. The first commercially available device that could be properly referred to as a "smartphone" began as a prototype called "Angler" developed by Canova in 1992 while at IBM and demonstrated in November of that year at the COMDEX computer industry trade show. A refined version was marketed to consumers in 1994 by BellSouth under the name Simon Personal Communicator. In addition to placing and receiving cellular calls, the touchscreen-equipped Simon could send and receive faxes and emails. It included an address book, calendar, appointment scheduler, calculator, world time clock, and notepad, as well as other visionary mobile applications such as maps, stock reports and news.
The IBM Simon was manufactured by Mitsubishi Electric, which integrated features from its own wireless personal digital assistant (PDA) and cellular radio technologies. It featured a liquid-crystal display (LCD) and PC Card support. The Simon was commercially unsuccessful, particularly due to its bulky form factor and limited battery life, using NiCad batteries rather than the nickel–metal hydride batteries commonly used in mobile phones in the 1990s, or lithium-ion batteries used in modern smartphones.
The term "smart phone" was not coined until a year after the introduction of the Simon, appearing in print as early as 1995, describing AT&T's PhoneWriter Communicator. The term "smartphone" was first used by Ericsson in 1997 to describe a new device concept, the GS88.
PDA/phone hybrids
Beginning in the mid-late 1990s, many people who had mobile phones carried a separate dedicated PDA device, running early versions of operating systems such as Palm OS, Newton OS, Symbian or Windows CE/Pocket PC. These operating systems would later evolve into early mobile operating systems. Most of the "smartphones" in this era were hybrid devices that combined these existing familiar PDA OSes with basic phone hardware. The results were devices that were bulkier than either dedicated mobile phones or PDAs, but allowed a limited amount of cellular Internet access. PDA and mobile phone manufacturers competed in reducing the size of devices. The bulk of these smartphones combined with their high cost and expensive data plans, plus other drawbacks such as expansion limitations and decreased battery life compared to separate standalone devices, generally limited their popularity to "early adopters" and business users who needed portable connectivity.
In March 1996, Hewlett-Packard released the OmniGo 700LX, a modified HP 200LX palmtop PC with a Nokia 2110 mobile phone piggybacked onto it and ROM-based software to support it. It had a 640×200 resolution CGA compatible four-shade gray-scale LCD screen and could be used to place and receive calls, and to create and receive text messages, emails and faxes. It was also 100% DOS 5.0 compatible, allowing it to run thousands of existing software titles, including early versions of Windows.
In August 1996, Nokia released the Nokia 9000 Communicator, a digital cellular PDA based on the Nokia 2110 with an integrated system based on the PEN/GEOS 3.0 operating system from Geoworks. The two components were attached by a hinge in what became known as a clamshell design, with the display above and a physical QWERTY keyboard below. The PDA provided e-mail; calendar, address book, calculator and notebook applications; text-based Web browsing; and could send and receive faxes. When closed, the device could be used as a digital cellular telephone.
In June 1999 Qualcomm released the "pdQ Smartphone", a CDMA digital PCS smartphone with an integrated Palm PDA and Internet connectivity.
Subsequent landmark devices included:
The Ericsson R380 (December 2000) by Ericsson Mobile Communications, the first phone running the operating system later named Symbian (it ran EPOC Release 5, which was renamed Symbian OS at Release 6). It had PDA functionality and limited Web browsing on a resistive touchscreen utilizing a stylus. While it was marketed as a "smartphone", users could not install their own software on the device.
The Kyocera 6035 (February 2001), a dual-nature device with a separate Palm OS PDA operating system and CDMA mobile phone firmware. It supported limited Web browsing with the PDA software treating the phone hardware as an attached modem.
The Nokia 9210 Communicator (June 2001), the first phone running Symbian (Release 6) with Nokia's Series 80 platform (v1.0). This was the first Symbian phone platform allowing the installation of additional applications. Like the Nokia 9000 Communicator it's a large clamshell device with a full physical QWERTY keyboard inside.
Handspring's Treo 180 (2002), the first smartphone that fully integrated the Palm OS on a GSM mobile phone having telephony, SMS messaging and Internet access built into the OS. The 180 model had a thumb-type keyboard and the 180g version had a Graffiti handwriting recognition area, instead.
Japanese cell phones
In 1999, Japanese wireless provider NTT DoCoMo launched i-mode, a new mobile internet platform which provided data transmission speeds up to 9.6 kilobits per second, and access web services available through the platform such as online shopping. NTT DoCoMo's i-mode used cHTML, a language which restricted some aspects of traditional HTML in favor of increasing data speed for the devices. Limited functionality, small screens and limited bandwidth allowed for phones to use the slower data speeds available. The rise of i-mode helped NTT DoCoMo accumulate an estimated 40 million subscribers by the end of 2001, and ranked first in market capitalization in Japan and second globally. Japanese cell phones increasingly diverged from global standards and trends to offer other forms of advanced services and smartphone-like functionality that were specifically tailored to the Japanese market, such as mobile payments and shopping, near-field communication (NFC) allowing mobile wallet functionality to replace smart cards for transit fares, loyalty cards, identity cards, event tickets, coupons, money transfer, etc., downloadable content like musical ringtones, games, and comics, and 1seg mobile television. Phones built by Japanese manufacturers used custom firmware, however, and didn't yet feature standardized mobile operating systems designed to cater to third-party application development, so their software and ecosystems were akin to very advanced feature phones. As with other feature phones, additional software and services required partnerships and deals with providers.
The degree of integration between phones and carriers, unique phone features, non-standardized platforms, and tailoring to Japanese culture made it difficult for Japanese manufacturers to export their phones, especially when demand was so high in Japan that the companies didn't feel the need to look elsewhere for additional profits.
The rise of 3G technology in other markets and non-Japanese phones with powerful standardized smartphone operating systems, app stores, and advanced wireless network capabilities allowed non-Japanese phone manufacturers to finally break in to the Japanese market, gradually adopting Japanese phone features like emojis, mobile payments, NFC, etc. and spreading them to the rest of the world.
Early smartphones
Phones that made effective use of any significant data connectivity were still rare outside Japan until the introduction of the Danger Hiptop in 2002, which saw moderate success among U.S. consumers as the T-Mobile Sidekick. Later, in the mid-2000s, business users in the U.S. started to adopt devices based on Microsoft's Windows Mobile, and then BlackBerry smartphones from Research In Motion. American users popularized the term "CrackBerry" in 2006 due to the BlackBerry's addictive nature. In the U.S., the high cost of data plans and relative rarity of devices with Wi-Fi capabilities that could avoid cellular data network usage kept adoption of smartphones mainly to business professionals and "early adopters."
Outside the U.S. and Japan, Nokia was seeing success with its smartphones based on Symbian, originally developed by Psion for their personal organisers, and it was the most popular smartphone OS in Europe during the middle to late 2000s. Initially, Nokia's Symbian smartphones were focused on business with the Eseries, similar to Windows Mobile and BlackBerry devices at the time. From 2006 onwards, Nokia started producing consumer-focused smartphones, popularized by the entertainment-focused Nseries. Until 2010, Symbian was the world's most widely used smartphone operating system.
The touchscreen personal digital assistant (PDA)-derived nature of adapted operating systems like Palm OS, the "Pocket PC" versions of what was later Windows Mobile, and the UIQ interface that was originally designed for pen-based PDAs on Symbian OS devices resulted in some early smartphones having stylus-based interfaces. These allowed for virtual keyboards and/or handwriting input, thus also allowing easy entry of Asian characters.
By the mid-2000s, the majority of smartphones had a physical QWERTY keyboard. Most used a "keyboard bar" form factor, like the BlackBerry line, Windows Mobile smartphones, Palm Treos, and some of the Nokia Eseries. A few hid their full physical QWERTY keyboard in a sliding form factor, like the Danger Hiptop line. Some even had only a numeric keypad using T9 text input, like the Nokia Nseries and other models in the Nokia Eseries. Resistive touchscreens with stylus-based interfaces could still be found on a few smartphones, like the Palm Treos, which had dropped their handwriting input after a few early models that were available in versions with Graffiti instead of a keyboard.
Form factor and operating system shifts
The late 2000s and early 2010s saw a shift in smartphone interfaces away from devices with physical keyboards and keypads to ones with large finger-operated capacitive touchscreens. The first phone of any kind with a large capacitive touchscreen was the LG Prada, announced by LG in December 2006. This was a fashionable feature phone created in collaboration with Italian luxury designer Prada with a 3" 240x400 pixel screen, a 2-Megapixel digital camera with 144p video recording ability, an LED flash, and a miniature mirror for self portraits.
In January 2007, Apple Computer introduced the iPhone. It had a 3.5" capacitive touchscreen with twice the common resolution of most smartphone screens at the time, and introduced multi-touch to phones, which allowed gestures such as "pinching" to zoom in or out on photos, maps, and web pages. The iPhone was notable as being the first device of its kind targeted at the mass market to abandon the use of a stylus, keyboard, or keypad typical of contemporary smartphones, instead using a large touchscreen for direct finger input as its main means of interaction.
The iPhone's operating system was also a shift away from previous ones that were adapted from PDAs and feature phones, to one powerful enough to avoid using a limited, stripped down web browser requiring pages specially formatted using technologies such as WML, cHTML, or XHTML that previous phones supported and instead run a version of Apple's Safari browser that could easily render full websites not specifically designed for phones.
Later Apple shipped a software update that gave the iPhone a built-in on-device App Store allowing direct wireless downloads of third-party software. This kind of centralized App Store and free developer tools quickly became the new main paradigm for all smartphone platforms for software development, distribution, discovery, installation, and payment, in place of expensive developer tools that required official approval to use and a dependence on third-party sources providing applications for multiple platforms.
The advantages of a design with software powerful enough to support advanced applications and a large capacitive touchscreen affected the development of another smartphone OS platform, Android, with a more BlackBerry-like prototype device scrapped in favor of a touchscreen device with a slide-out physical keyboard, as Google's engineers thought at the time that a touchscreen could not completely replace a physical keyboard and buttons. Android is based around a modified Linux kernel, again providing more power than mobile operating systems adapted from PDAs and feature phones. The first Android device, the horizontal-sliding HTC Dream, was released in September 2008.
In 2012, Asus started experimenting with a convertible docking system named PadFone, where the standalone handset can when necessary be inserted into a tablet-sized screen unit with integrated supportive battery and used as such.
In 2013 and 2014, Samsung experimented with the hybrid combination of compact camera and smartphone, releasing the Galaxy S4 Zoom and K Zoom, each equipped with an integrated 10× optical zoom lens and manual parameter settings (including manual exposure and focus) years before these features were widely adopted among smartphones. The S4 Zoom additionally has a rotary knob ring around the lens and a tripod mount.
While screen sizes have increased, manufacturers have attempted to make smartphones thinner at the expense of utility and sturdiness, since a thinner frame is more vulnerable to bending and has less space for components, namely battery capacity.
Operating system competition
The iPhone and later touchscreen-only Android devices together popularized the slate form factor, based on a large capacitive touchscreen as the sole means of interaction, and led to the decline of earlier, keyboard- and keypad-focused platforms. Later, navigation keys such as the home, back, menu, task and search buttons were increasingly replaced first by non-physical touch keys and then by virtual, simulated on-screen navigation keys, commonly with access combinations such as a long press of the task key to simulate a short menu key press, or of the home button to trigger search. More recent "bezel-less" designs extend the screen surface to the bottom of the unit's front to compensate for the display area given over to the simulated navigation keys. While virtual keys offer more potential customizability, their location may be inconsistent across systems and may depend on screen rotation and the software in use.
Multiple vendors attempted to update or replace their existing smartphone platforms and devices to better compete with Android and the iPhone; Palm unveiled a new platform known as webOS for its Palm Pre in late 2009 to replace Palm OS, which featured a focus on a task-based "card" metaphor and seamless synchronization and integration between various online services (as opposed to the then-conventional concept of a smartphone needing a PC to serve as a "canonical, authoritative repository" for user data). HP acquired Palm in 2010 and released several other webOS devices, including the Pre 3 and HP TouchPad tablet. As part of a proposed divestment of its consumer business to focus on enterprise software, HP abruptly ended development of future webOS devices in August 2011, and sold the rights to webOS to LG Electronics in 2013, for use as a smart TV platform.
Research in Motion introduced the vertical-sliding BlackBerry Torch and BlackBerry OS 6 in 2010, which featured a redesigned user interface, support for gestures such as pinch-to-zoom, and a new web browser based on the same WebKit rendering engine used by the iPhone. The following year, RIM released BlackBerry OS 7 and new models in the Bold and Torch ranges, which included a new Bold with a touchscreen alongside its keyboard, and the Torch 9860—the first BlackBerry phone to not include a physical keyboard. In 2013, it replaced the legacy BlackBerry OS with a revamped, QNX-based platform known as BlackBerry 10, with the all-touch BlackBerry Z10 and keyboard-equipped Q10 as launch devices.
In 2010, Microsoft unveiled a replacement for Windows Mobile known as Windows Phone, featuring a new touchscreen-centric user interface built around flat design and typography, a home screen with "live tiles" containing feeds of updates from apps, as well as integrated Microsoft Office apps. In February 2011, Nokia announced that it had entered into a major partnership with Microsoft, under which it would exclusively use Windows Phone on all of its future smartphones, and integrate Microsoft's Bing search engine and Bing Maps (which, as part of the partnership, would also license Nokia Maps data) into all future devices. The announcement led to the abandonment of both Symbian and MeeGo—a Linux-based mobile platform Nokia was co-developing with Intel. Nokia's low-end Lumia 520 saw strong demand and helped Windows Phone gain niche popularity in some markets, overtaking BlackBerry in global market share in 2013.
In mid-June 2012, Meizu released its mobile operating system, Flyme OS.
Many of these attempts to compete with Android and iPhone were short-lived. Over the course of the decade, the two platforms became a clear duopoly in smartphone sales and market share, with BlackBerry, Windows Phone, and "other" operating systems eventually stagnating to little or no measurable market share. In 2015, BlackBerry began to pivot away from its in-house mobile platforms in favor of producing Android devices, focusing on a security-enhanced distribution of the software. The following year, the company announced that it would also exit the hardware market to focus more on software and its enterprise middleware, and began to license the BlackBerry brand and its Android distribution to third-party OEMs such as TCL for future devices.
In September 2013, Microsoft announced its intent to acquire Nokia's mobile device business for $7.1 billion, as part of a strategy under CEO Steve Ballmer for Microsoft to be a "devices and services" company. Despite the growth of Windows Phone and the Lumia range (which accounted for nearly 90% of all Windows Phone devices sold), the platform never had significant market share in the key U.S. market, and Microsoft was unable to maintain Windows Phone's momentum in the years that followed, resulting in dwindling interest from users and app developers. After Ballmer was succeeded as CEO of Microsoft by Satya Nadella, who placed a larger focus on software and cloud computing, Microsoft took a $7.6 billion write-off on the Nokia assets in July 2015, and laid off nearly the entire Microsoft Mobile unit in May 2016.
Prior to the completion of the sale to Microsoft, Nokia released a series of Android-derived smartphones for emerging markets known as Nokia X, which combined an Android-based platform with elements of Windows Phone and Nokia's feature phone platform Asha, using Microsoft and Nokia services rather than Google.
Camera advancements
The first commercial camera phone was the Kyocera Visual Phone VP-210, released in Japan in May 1999. It was called a "mobile videophone" at the time, and had a 110,000-pixel front-facing camera. It could send up to two images per second over Japan's Personal Handy-phone System (PHS) cellular network, and store up to 20 JPEG digital images, which could be sent over e-mail. The first mass-market camera phone was the J-SH04, a Sharp J-Phone model sold in Japan in November 2000. It could instantly transmit pictures via cell phone telecommunication.
By the mid-2000s, higher-end cell phones commonly had integrated digital cameras. In 2003 camera phones outsold stand-alone digital cameras, and in 2006 they outsold film and digital stand-alone cameras. Five billion camera phones were sold in five years, and by 2007 more than half of the installed base of all mobile phones were camera phones. Sales of separate cameras peaked in 2008.
Many early smartphones did not have cameras at all, and earlier models that had them delivered low performance and insufficient image and video quality, unable to compete with budget pocket cameras or to fulfill users' needs. By the beginning of the 2010s almost all smartphones had an integrated digital camera. The decline in sales of stand-alone cameras accelerated due to the increasing use of smartphones with rapidly improving camera technology for casual photography, easier image manipulation, and the ability to directly share photos through apps and web-based services. By 2011, cell phones with integrated cameras were selling hundreds of millions per year. In 2015, digital camera sales were 35.395 million units, less than a third of digital camera sales at their peak and also slightly fewer than the number of film cameras sold at their peak.
Contributing to the rise in popularity of smartphones over dedicated cameras for photography, smaller pocket cameras have difficulty producing bokeh in images, whereas some smartphones now have dual-lens cameras that reproduce the bokeh effect easily, and can even rearrange the level of bokeh after shooting. This works by capturing multiple images with different focus settings, then combining the background of the main image with a macro focus shot.
In 2007 the Nokia N95 was notable as a smartphone that had a 5.0 Megapixel (MP) camera, when most others had cameras with around 3 MP or less than 2 MP. Some specialized feature phones like the LG Viewty, Samsung SGH-G800, and Sony Ericsson K850i, all released later that year, also had 5.0 MP cameras. By 2010 5.0 MP cameras were common; a few smartphones had 8.0 MP cameras and the Nokia N8, Sony Ericsson Satio, and Samsung M8910 Pixon12 feature phone had 12 MP. The main camera of the 2009 Nokia N86 uniquely features a three-level aperture lens.
The Altek Leo, a 14-megapixel smartphone with a 3× optical zoom lens and 720p HD video camera, was released in late 2010.
In 2011, the same year the Nintendo 3DS was released, HTC unveiled the Evo 3D, a 3D phone with a dual five-megapixel rear camera setup for spatial imaging, among the earliest mobile phones with more than one rear camera.
The 2012 Samsung Galaxy S3 introduced the ability to capture photos using voice commands.
In 2012 Nokia announced and released the Nokia 808 PureView, featuring a 41-megapixel 1/1.2-inch sensor and a high-resolution f/2.4 Zeiss all-aspherical one-group lens. The high resolution enables four times lossless digital zoom at 1080p and six times at 720p, using image sensor cropping. The 2013 Nokia Lumia 1020 has a similar high-resolution camera setup, with the addition of optical image stabilization and manual camera settings years before they became common among high-end mobile phones, although it lacks expandable storage, which would be useful for the correspondingly large file sizes.
In the same year, Nokia introduced mobile optical image stabilization with the Lumia 920, enabling prolonged exposure times for low-light photography and smoothing out handheld video shake, whose appearance would otherwise be magnified on a larger display such as a monitor or television set, to the detriment of the viewing experience.
Since 2012, smartphones have become increasingly able to capture photos while filming. The resolution of such photos varies: Samsung uses the highest image sensor resolution available at the video's aspect ratio, which at 16:9 is 6 megapixels (3264×1836) on the Galaxy S3 and 9.6 megapixels (4128×2322) on the Galaxy S4. The earliest iPhones with this functionality, the iPhone 5 and 5s, captured simultaneous photos at 0.9 megapixels (1280×720) while filming.
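These megapixel figures follow directly from the stored image dimensions; as a sanity check (rounded to one decimal place):

\[
3264 \times 1836 = 5{,}992{,}704 \approx 6.0\ \text{MP}, \qquad
4128 \times 2322 = 9{,}585{,}216 \approx 9.6\ \text{MP}, \qquad
1280 \times 720 = 921{,}600 \approx 0.9\ \text{MP}.
\]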
Starting in 2013 on the Xperia Z1, Sony experimented with real-time augmented reality camera effects such as floating text, virtual plants, a volcano, and a dinosaur walking in the scenery. Apple introduced similar effects in 2017 with the iPhone X.
Also in 2013, iOS 7 introduced a viewfinder interaction that was later widely implemented, in which exposure value can be adjusted by swiping vertically after focus and exposure have been set by tapping, and can even be locked by holding down for a brief moment. On some devices, this interaction may be restricted by software in video and slow motion modes and for the front camera.
In 2013, Samsung unveiled the Galaxy S4 Zoom smartphone with the grip shape of a compact camera and a 10× optical zoom lens, as well as a rotary knob ring around the lens, as used on higher-end compact cameras, and an ISO 1222 tripod mount. It is equipped with manual parameter settings, including for focus and exposure. The successor 2014 Samsung Galaxy K Zoom brought resolution and performance enhancements, but lacks the rotary knob and tripod mount to allow for a more smartphone-like shape with less protruding lens.
The 2014 Panasonic Lumix DMC-CM1 was another attempt at mixing mobile phone with compact camera, so much so that it inherited the Lumix brand. While lacking optical zoom, its image sensor has a format of 1", as used in high-end compact cameras such as the Lumix DMC-LX100 and Sony CyberShot DSC-RX100 series, with multiple times the surface size of a typical mobile camera image sensor, as well as support for light sensitivities of up to ISO 25600, well beyond the typical mobile camera light sensitivity range. As of 2021, no successor has been released.
In 2013 and 2014, HTC experimentally traded pixel count for pixel surface size on its One M7 and M8, both with only four megapixels, marketed as UltraPixel, citing improved brightness and less noise in low light, though the more recent One M8 lacks optical image stabilization.
The One M8 additionally was one of the earliest smartphones to be equipped with a dual camera setup. Its software allows generating visual spatial effects such as 3D panning, weather effects, and focus adjustment ("UFocus"), simulating the post-photographic selective focusing capability of images produced by a light-field camera. HTC returned to a high-megapixel single-camera setup on the 2015 One M9.
Meanwhile, in 2014, LG Mobile started experimenting with time-of-flight camera functionality, where a rear laser beam that measures distance accelerates autofocus.
Phase-detection autofocus was increasingly adopted throughout the mid-2010s, allowing for quicker and more accurate focusing than contrast detection.
In 2016 Apple introduced the iPhone 7 Plus, one of the phones to popularize a dual camera setup. The iPhone 7 Plus included a main 12 MP camera along with a 12 MP telephoto camera. In early 2018 Huawei released a new flagship phone, the Huawei P20 Pro, one of the first triple camera lens setups with Leica optics. In late 2018, Samsung released a new mid-range smartphone, the Galaxy A9 (2018) with the world's first quad camera setup. The Nokia 9 PureView was released in 2019 featuring a penta-lens camera system.
2019 saw the commercialization of high resolution sensors, which use pixel binning to capture more light. 48 MP and 64 MP sensors developed by Sony and Samsung are commonly used by several manufacturers. 108 MP sensors were first implemented in late 2019 and early 2020.
Video resolution
As chipsets became powerful enough to handle the computing demands of higher pixel rates, mobile video resolution and frame rate caught up with those of dedicated consumer-grade cameras over the years.
In 2009 the Samsung Omnia HD became the first mobile phone with 720p HD video recording. In the same year, Apple brought video recording initially to the iPhone 3GS, at 480p, whereas the 2007 original iPhone and 2008 iPhone 3G lacked video recording entirely.
720p was more widely adopted in 2010, on smartphones such as the original Samsung Galaxy S, Sony Ericsson Xperia X10, iPhone 4, and HTC Desire HD.
The early 2010s brought a steep increase in mobile video resolution. 1080p mobile video recording was achieved in 2011 on the Samsung Galaxy S2, HTC Sensation, and iPhone 4s.
In 2012 and 2013, select devices supporting 720p filming at 60 frames per second were released, such as the Asus PadFone 2 and HTC One M7, while the flagships of Samsung, Sony, and Apple did not support it; the 2013 Samsung Galaxy S4 Zoom, however, does.
In 2013, the Samsung Galaxy Note 3 introduced 2160p (4K) video recording at 30 frames per second, as well as 1080p doubled to 60 frames per second for smoothness.
Other vendors adopted 2160p recording in 2014, including the optically stabilized LG G3. Apple first implemented it in late 2015 on the iPhone 6s and 6s Plus.
The framerate at 2160p was widely doubled to 60 in 2017 and 2018, starting with the iPhone 8, Galaxy S9, LG G7, and OnePlus 6.
Sufficient chipset computing performance and image sensor resolution and readout speeds enabled mobile 4320p (8K) filming in 2020, introduced with the Samsung Galaxy S20 and Redmi K30 Pro, though some intermediate resolution levels were skipped along the way, including 1440p (2.5K), 2880p (5K), and 3240p (6K), with the exception of 1440p on Samsung Galaxy front cameras.
Mid-range
Among mid-range smartphone series, the introduction of higher video resolutions was initially delayed by two to three years compared to flagship counterparts. 720p was widely adopted in 2012, including on the Samsung Galaxy S3 Mini and Sony Xperia go, and 1080p followed in 2013 on the Samsung Galaxy S4 Mini and HTC One mini.
The proliferation of video resolutions beyond 1080p was postponed by several years. The mid-range Sony Xperia M5 supported 2160p filming in 2016, whereas Samsung's mid-range series such as the Galaxy J and A series were strictly limited to 1080p resolution and 30 frames per second at any resolution for six years, until around 2019; whether, and to what extent, this was for technical reasons is unclear.
Setting
A lower video resolution setting may be desirable to extend recording time by reducing storage space and power consumption.
The camera software of some sophisticated devices such as the LG V10 is equipped with separate controls for resolution, frame rate, and bit rate, within a technically supported range of pixel rate.
Slow motion video
A distinction between different camera software is the method used to store high frame rate video footage, with more recent phones retaining both the image sensor's original output frame rate and audio, while earlier phones do not record audio and stretch the video so it can be played back slowly at default speed.
While the stretched encoding method used on earlier phones enables slow motion playback on video player software that lacks manual playback speed control, typically found on older devices, the real-time method used by more recent phones offers greater versatility for video editing if the aim is a slow motion effect: slowed-down portions of the footage can be freely selected by the user and exported into a separate video. A rudimentary video editing tool for this purpose is usually included. The video can optionally be played back at normal (real-time) speed, acting as an ordinary video.
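To illustrate the stretched method, assume (as a hypothetical example) footage captured at 120 fps but stored in a container that plays back at 30 fps:

\[
\text{slow-down factor} = \frac{f_{\text{capture}}}{f_{\text{container}}} = \frac{120\ \text{fps}}{30\ \text{fps}} = 4,
\]

so one second of real time occupies four seconds of playback, which is also why audio is dropped. The real-time method instead stores the native 120 fps together with audio and leaves the slow-down to the player or editing software.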
Development
The earliest smartphone known to feature a slow motion mode is the 2009 Samsung i8000 Omnia II, which can record at QVGA (320×240) at 120 fps (frames per second). Slow motion was not available on the 2010 Galaxy S1, 2011 Galaxy S2, 2011 Galaxy Note 1, and 2012 Galaxy S3 flagships.
In early 2012, the HTC One X allowed 768×432 pixel slow motion filming at an undocumented frame rate. The output footage has been measured as a third of real-time speed.
In late 2012, the Galaxy Note 2 brought back slow motion, with D1 (720×480) at 120 fps. In early 2013, the Galaxy S4 and HTC One M7 recorded at that frame rate with 800×450, followed by the Note 3 and iPhone 5s with 720p (1280×720) in late 2013, the latter of which retains audio and the original sensor frame rate, as with all later iPhones. In early 2014, the Sony Xperia Z2 and HTC One M8 adopted this resolution as well. In late 2014, the iPhone 6 doubled the frame rate to 240 fps, and in late 2015, the iPhone 6s added support for 1080p (1920×1080) at 120 frames per second. In early 2015, the Galaxy S6 became the first Samsung mobile phone to retain the sensor frame rate and audio, and in early 2016, the Galaxy S7 became the first Samsung mobile phone with 240 fps recording, also at 720p.
In early 2015, the MT6795 chipset by MediaTek promised 1080p video recording at 480 fps. The status of this feature has remained unclear.
Since early 2017, starting with the Sony Xperia XZ, smartphones have been released with a slow motion mode that unsustainably records at frame rates several times higher, by temporarily storing frames in the image sensor's internal burst memory. Such a recording lasts at most a few real-time seconds.
In late 2017, the iPhone 8 brought 1080p at 240fps, as well as 2160p at 60fps, followed by the Galaxy S9 in early 2018. In mid-2018, the OnePlus 6 brought 720p at 480fps, sustainable for one minute.
In early 2021, the OnePlus 9 Pro became the first phone with 2160p at 120fps.
HDR video
The first smartphones to record HDR video were the early 2013 Sony Xperia Z and mid-2013 Xperia Z Ultra, followed by the early 2014 Galaxy S5, all at 1080p.
Audio recording
Mobile phones with multiple microphones usually allow video recording with stereo audio for spatiality, with Samsung, Sony, and HTC first implementing it in 2012 on the Samsung Galaxy S3, Sony Xperia S, and HTC One X. Apple implemented stereo audio recording starting with the 2018 iPhone XS family and iPhone XR.
Front cameras
Photo
Since the mid-2010s, increasing emphasis has been put on the front camera, with front cameras reaching resolutions as high as those of typical rear cameras, such as on the 2015 LG G4 (8 megapixels), Sony Xperia C5 Ultra (13 megapixels), and 2016 Sony Xperia XA Ultra (16 megapixels, optically stabilized). The 2015 LG V10 brought a dual front camera system in which the second camera has a wider angle for group photography. Samsung has implemented a front-camera sweep panorama (panorama selfie) feature since the Galaxy Note 4 to extend the field of view.
Video
In 2012, the Galaxy S3 and iPhone 5 brought 720p HD front video recording (at 30 fps). In early 2013, the Samsung Galaxy S4, HTC One M7 and Sony Xperia Z brought 1080p Full HD at that frame rate, and in late 2014, the Galaxy Note 4 introduced 1440p video recording on the front camera. Apple adopted 1080p front camera video with the late 2016 iPhone 7.
In 2019, smartphones started adopting 2160p 4K video recording on the front camera, six years after rear camera 2160p commenced with the Galaxy Note 3.
Display advancements
In the early 2010s, larger smartphones with bigger diagonal screen sizes, dubbed "phablets", began to achieve popularity, with the 2011 Samsung Galaxy Note series gaining notably wide adoption. In 2013, Huawei launched the Huawei Mate series, sporting an HD (1280×720) IPS+ LCD display that was considered quite large at the time.
Some companies began to release smartphones in 2013 incorporating flexible displays to create curved form factors, such as the Samsung Galaxy Round and LG G Flex.
By 2014, 1440p displays began to appear on high-end smartphones. In 2015, Sony released the Xperia Z5 Premium, featuring a 4K resolution display, although only images and videos could actually be rendered at that resolution (all other software was shown at 1080p).
New trends for smartphone displays began to emerge in 2017, with both LG and Samsung releasing flagship smartphones (LG G6 and Galaxy S8), utilizing displays with taller aspect ratios than the common 16:9 ratio, and a high screen-to-body ratio, also known as a "bezel-less design". These designs allow the display to have a larger diagonal measurement, but with a slimmer width than 16:9 displays with an equivalent screen size.
Another trend popularized in 2017 was displays containing tab-like cut-outs at the top-centre—colloquially known as a "notch"—to contain the front-facing camera, and sometimes other sensors typically located along the top bezel of a device. These designs allow for "edge-to-edge" displays that take up nearly the entire height of the device, with little to no bezel along the top, and sometimes a minimal bottom bezel as well. This design characteristic appeared almost simultaneously on the Sharp Aquos S2 and the Essential Phone, which featured small circular tabs for their cameras, followed just a month later by the iPhone X, which used a wider tab to contain a camera and a facial scanning system known as Face ID. The 2015 LG V10 had a precursor to the concept, with a portion of the screen wrapped around the camera area in the top-left corner, and the resulting area marketed as a "second" display that could be used for various supplemental features.
Other variations of the practice later emerged, such as "hole-punch" cameras (as on the Honor View 20 and Samsung's Galaxy A8s and Galaxy S10), which eschew the tabbed "notch" for a circular or rounded-rectangular cut-out within the screen. Oppo, meanwhile, released the first "all-screen" phones with no notches at all, including one with a mechanical front camera that pops up from the top of the device (Find X), and in 2019 a prototype with a front-facing camera embedded and hidden below the display, using a special partially translucent screen structure that allows light to reach the image sensor below the panel. The first commercial implementation of an under-display camera was the ZTE Axon 20 5G, with a 32 MP sensor manufactured by Visionox.
Displays supporting refresh rates higher than 60 Hz (such as 90 Hz or 120 Hz) also began to appear on smartphones in 2017; initially confined to "gaming" smartphones such as the Razer Phone (2017) and Asus ROG Phone (2018), they later became more common on flagship phones such as the Pixel 4 (2019) and Samsung Galaxy S21 series (2021). Higher refresh rates allow for smoother motion and lower input latency, but often at the cost of battery life. As such, the device may offer a means to disable high refresh rates, or be configured to automatically reduce the refresh rate when there is low on-screen motion.
Multi-tasking
Early implementations of multiple simultaneous tasks on a smartphone display were the picture-in-picture video playback mode ("pop-up play") and the "live video list" with playing video thumbnails on the 2012 Samsung Galaxy S3, the former of which was later delivered to the 2011 Samsung Galaxy Note through a software update. Later that year, a split-screen mode was implemented on the Galaxy Note 2 and later retrofitted to the Galaxy S3 through the "premium suite upgrade".
The earliest implementation of desktop and laptop-like windowing was on the 2013 Samsung Galaxy Note 3.
Foldable smartphones
Smartphones utilizing flexible displays were theorized as possible once manufacturing costs and production processes were feasible. In November 2018, the startup company Royole unveiled the first commercially available foldable smartphone, the Royole FlexPai. Also that month, Samsung presented a prototype phone featuring an "Infinity Flex Display" at its developers conference, with a smaller, outer display on its "cover", and a larger, tablet-sized display when opened. Samsung stated that it also had to develop a new polymer material to coat the display as opposed to glass. Samsung officially announced the Galaxy Fold, based on the previously-demonstrated prototype, in February 2019 for an originally-scheduled release in late-April. Due to various durability issues with the display and hinge systems encountered by early reviewers, the release of the Galaxy Fold was delayed to September to allow for design changes.
In November 2019, Motorola unveiled a variation of the concept with its re-imagining of the Razr, using a horizontally-folding display to create a clamshell form factor inspired by its previous feature phone range of the same name. Samsung would unveil a similar device known as the Galaxy Z Flip the following February.
Other developments in the 2010s
The first smartphone with a fingerprint reader was the Motorola Atrix 4G in 2011. In September 2013, the iPhone 5S was unveiled as the first smartphone on a major U.S. carrier since the Atrix to feature this technology, and once again the iPhone popularized the concept. One of the barriers to fingerprint reading among consumers was security concerns; Apple addressed these by storing the fingerprint data encrypted on the A7 processor inside the phone, ensuring the information could not be accessed by third-party applications and was not stored in iCloud or on Apple servers.
In 2012, Samsung introduced the Galaxy S3 (GT-i9300) with retrofittable wireless charging, pop-up video playback, and a quad-core processor, alongside a 4G LTE variant (GT-i9305).
In 2013, Fairphone launched its first "socially ethical" smartphone at the London Design Festival to address concerns regarding the sourcing of materials in manufacturing, followed by Shiftphone in 2015. In late 2013, QSAlpha commenced production of a smartphone designed entirely around security, encryption and identity protection.
In October 2013, Motorola Mobility announced Project Ara, a concept for a modular smartphone platform that would allow users to customize and upgrade their phones with add-on modules that attached magnetically to a frame. Ara was retained by Google following its sale of Motorola Mobility to Lenovo, but was shelved in 2016. That year, LG and Motorola both unveiled smartphones featuring a limited form of modularity for accessories; the LG G5 allowed accessories to be installed via the removal of its battery compartment, while the Moto Z utilizes accessories attached magnetically to the rear of the device.
Microsoft, expanding upon the concept of Motorola's short-lived "Webtop", unveiled functionality for its Windows 10 operating system for phones that allows supported devices to be docked for use with a PC-styled desktop environment.
Samsung and LG were the "last standing" manufacturers to offer flagship devices with user-replaceable batteries. In 2015, however, Samsung succumbed to the minimalism trend set by Apple and introduced the Galaxy S6 without a user-replaceable battery. In addition, Samsung was criticised for removing long-standing features such as MHL, MicroUSB 3.0, water resistance and MicroSD card support, of which the latter two returned in 2016 with the Galaxy S7 and S7 Edge.
As of 2015, the global median for smartphone ownership was 43%. Statista forecast that 2.87 billion people would own smartphones in 2020.
Major technologies that began to trend in 2016 included a focus on virtual reality and augmented reality experiences catered towards smartphones, the newly introduced USB-C connector, and improving LTE technologies.
In 2016, adjustable screen resolution known from desktop operating systems was introduced to smartphones for power saving, whereas variable screen refresh rates were popularized in 2020.
In 2018, the first smartphones featuring fingerprint readers embedded within OLED displays were announced, followed in 2019 by an implementation using an ultrasonic sensor on the Samsung Galaxy S10.
In 2019, the majority of smartphones released have more than one camera, are waterproof with IP67 and IP68 ratings, and unlock using facial recognition or fingerprint scanners.
Other developments in the 2020s
By 2020, smartphones featuring high-speed 5G network capability had become widely available.
Since 2020, smartphones have increasingly been shipped without basic accessories such as a power adapter and headphones, which had historically been almost invariably included. The trend was initiated with Apple's iPhone 12 and followed by Samsung and Xiaomi on the Galaxy S21 and Mi 11 respectively, months after both had mocked the move in advertisements. The reason cited is a reduced environmental footprint, although reaching the higher charging rates supported by newer models requires a new charger, shipped in separate packaging with its own environmental footprint.
With the development of the PinePhone and Librem 5 in the 2020s, there have been intensified efforts to make open source GNU/Linux a major smartphone alternative to iOS and Android. Moreover, associated software has enabled convergence (beyond convergent and hybrid apps) by allowing such smartphones to be used like a desktop computer when connected to a keyboard, mouse and monitor.
Hardware
A typical smartphone contains a number of metal–oxide–semiconductor (MOS) integrated circuit (IC) chips, which in turn contain billions of tiny MOS field-effect transistors (MOSFETs). A typical smartphone contains the following MOS IC chips:
Application processor (CMOS system-on-a-chip)
Flash memory (floating-gate MOS memory)
Cellular modem (baseband RF CMOS)
RF transceiver (RF CMOS)
Phone camera image sensor (CMOS image sensor)
Power management integrated circuit (power MOSFETs)
Display driver (LCD or LED driver)
Wireless communication chips (Wi-Fi, Bluetooth, GPS receiver)
Sound chip (audio codec and power amplifier)
Gyroscope
Capacitive touchscreen controller (ASIC and DSP)
RF power amplifier (LDMOS)
Some are also equipped with an FM radio receiver, a hardware notification LED, and an infrared transmitter for use as a remote control. A few have additional sensors such as a thermometer for measuring ambient temperature, a hygrometer for humidity, and an ultraviolet ray sensor.
A few exotic smartphones designed around specific purposes are equipped with uncommon hardware such as a projector (Samsung Beam i8520 and Samsung Galaxy Beam i8530), optical zoom lenses (Samsung Galaxy S4 Zoom and Samsung Galaxy K Zoom), a thermal camera, and even a PMR446 (walkie-talkie radio) transceiver.
Central processing unit
Smartphones have central processing units (CPUs), similar to those in computers, but optimised to operate in low power environments. In smartphones, the CPU is typically integrated in a CMOS (complementary metal–oxide–semiconductor) system-on-a-chip (SoC) application processor.
The performance of a mobile CPU depends not only on the clock rate (generally given in megahertz or gigahertz) but also on the memory hierarchy. Because of these complexities, the performance of mobile phone CPUs is often more appropriately given by scores from standardized benchmark tests, which measure effective performance in commonly used applications.
Buttons
Smartphones are typically equipped with a power button and volume buttons; the two volume buttons are sometimes unified into a single rocker. Some devices have a dedicated camera shutter button, and units for outdoor use may be equipped with an "SOS" emergency call button and a "PTT" (push-to-talk) button. The presence of physical front-side buttons such as the home and navigation buttons has decreased throughout the 2010s, increasingly being replaced by capacitive touch sensors and simulated (on-screen) buttons.
As with classic mobile phones, early smartphones such as the Samsung Omnia II were equipped with buttons for accepting and declining phone calls. Due to the advancements of functionality besides phone calls, these have increasingly been replaced by navigation buttons such as "menu" (also known as "options"), "back", and "tasks". Some early 2010s smartphones such as the HTC Desire were additionally equipped with a "Search" button (🔍) for quick access to a web search engine or apps' internal search feature.
Starting in 2013 with the iPhone 5s, followed in 2014 by the Samsung Galaxy S5, smartphones' home buttons began integrating fingerprint scanners.
Functions may be assigned to button combinations. For example, screenshots can usually be taken using the home and power buttons, with a short press on iOS and a one-second hold on Android, the two most popular mobile operating systems. On smartphones with no physical home button, the volume-down button is usually pressed together with the power button instead. Some smartphones have a screenshot, and possibly a screencast, shortcut in the navigation button bar or the power button menu.
Display
One of the main characteristics of smartphones is the screen. Depending on the device's design, the screen fills most or nearly all of the space on a device's front surface. Many smartphone displays have an aspect ratio of 16:9, but taller aspect ratios became more common in 2017, as well as the aim to eliminate bezels by extending the display surface to as close to the edges as possible.
Screen sizes
Screen sizes are measured in diagonal inches. Phones with screens larger than 5.2 inches are often called "phablets". Smartphones with screens over 4.5 inches in size are commonly difficult to use with only a single hand, since most thumbs cannot reach the entire screen surface; they may need to be shifted around in the hand, held in one hand and manipulated by the other, or used in place with both hands. Due to design advances, some modern smartphones with large screen sizes and "edge-to-edge" designs have compact builds that improve their ergonomics, while the shift to taller aspect ratios has resulted in phones that have larger screen sizes whilst maintaining the ergonomics associated with smaller 16:9 displays.
Panel types
Liquid-crystal displays (LCDs) and organic light-emitting diode (OLED) displays are the most common. Some displays are integrated with pressure-sensitive digitizers, such as those developed by Wacom and Samsung, and Apple's Force Touch system. A few phones, such as the YotaPhone prototype, are equipped with a low-power electronic paper rear display, as used in e-book readers.
Alternative input methods
Some devices are equipped with additional input methods such as a stylus for higher-precision input and hovering detection, and/or a self-capacitive touchscreen layer for floating finger detection. The latter has been implemented on a few phones, such as the Samsung Galaxy S4, Note 3, S5, and Alpha, and the Sony Xperia Sola, making the Galaxy Note 3 so far the only smartphone with both.
Hovering can enable preview tooltips such as on the video player's seek bar, in text messages, and quick contacts on the dial pad, as well as lock screen animations, and the simulation of a hovering mouse cursor on web sites.
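On Android, such floating-finger or stylus hovering is delivered to applications as hover events through the standard view framework. The following Kotlin sketch is illustrative only (the helper function and the preview view are assumptions, not part of any particular phone's software); it shows how an app might reveal a preview while a finger or stylus hovers:

```kotlin
import android.view.MotionEvent
import android.view.View
import android.widget.TextView

// Attaches a hover listener to a view so that a floating finger or hovering
// stylus can reveal a preview before any touch-down occurs.
fun enableHoverPreview(target: View, preview: TextView) {
    target.setOnHoverListener { _, event ->
        when (event.actionMasked) {
            MotionEvent.ACTION_HOVER_ENTER -> preview.visibility = View.VISIBLE
            MotionEvent.ACTION_HOVER_MOVE ->
                preview.text = "Hovering at ${event.x.toInt()}, ${event.y.toInt()}"
            MotionEvent.ACTION_HOVER_EXIT -> preview.visibility = View.GONE
        }
        true // report the hover event as handled
    }
}
```

Whether hover events are actually generated depends on the hardware; devices without a self-capacitive layer or hover-capable stylus digitizer simply never emit them.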
Some styluses support hovering as well and are equipped with a button for quick access to relevant tools such as digital post-it notes and highlighting of text and elements when dragging while pressed, resembling drag selection using a computer mouse. Some series such as the Samsung Galaxy Note series and LG G Stylus series have an integrated tray to store the stylus in.
A few devices, such as the iPhone 6s through iPhone XS and the Huawei Mate S, are equipped with a pressure-sensitive touchscreen, where the pressure may be used to simulate a gas pedal in video games, access preview windows and shortcut menus, control the typing cursor, or act as a weight scale, the last of which was rejected by Apple from the App Store.
Some early 2010s HTC smartphones such as the HTC Desire (Bravo) and HTC Legend are equipped with an optical track pad for scrolling and selection.
Notification light
Many smartphones, with the exception of Apple iPhones, are equipped with low-power light-emitting diodes beside the screen that can notify the user about incoming messages, missed calls, and low battery levels, and facilitate locating the mobile phone in darkness, all with marginal power consumption.
To distinguish between the sources of notifications, the colour combination and blinking pattern can vary. Usually three diodes in red, green, and blue (RGB) are able to create a multitude of colour combinations.
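On Android 8.0 and later, an application requests LED behaviour per notification channel rather than per notification. The Kotlin sketch below (the channel id and colour are chosen purely for illustration) shows the relevant calls; whether and how the light actually blinks is ultimately decided by the hardware and the manufacturer's firmware:

```kotlin
import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context
import android.graphics.Color
import android.os.Build

// Creates a notification channel that asks the system to blink the
// notification LED in blue for messages posted to this channel.
fun createMessageChannel(context: Context) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        val channel = NotificationChannel(
            "messages",                        // channel id (illustrative)
            "Messages",                        // user-visible channel name
            NotificationManager.IMPORTANCE_DEFAULT
        ).apply {
            enableLights(true)                 // request use of the notification light
            lightColor = Color.BLUE            // requested LED colour
        }
        val manager =
            context.getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager
        manager.createNotificationChannel(channel)
    }
}
```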
Sensors
Smartphones are equipped with a multitude of sensors to enable system features and third-party applications.
Common sensors
Accelerometers and gyroscopes enable automatic control of screen rotation. Uses by third-party software include bubble level simulation. An ambient light sensor allows for automatic screen brightness and contrast adjustment, and an RGB sensor enables adaptation of screen colour.
Many mobile phones are also equipped with a barometer sensor to measure air pressure, such as Samsung since 2012 with the Galaxy S3, and Apple since 2014 with the iPhone 6. It allows estimating and detecting changes in altitude.
A magnetometer can act as a digital compass by measuring Earth's magnetic field.
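On Android, these sensors are exposed to third-party applications through the common SensorManager interface. The following Kotlin sketch, assuming a device that actually has a barometer (the class name BarometerReader is purely illustrative), reads the pressure sensor and converts each reading into an approximate altitude:

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Registers a listener for the barometer (pressure sensor), if present,
// and converts each pressure reading into an approximate altitude.
class BarometerReader(context: Context) : SensorEventListener {

    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val pressureSensor: Sensor? =
        sensorManager.getDefaultSensor(Sensor.TYPE_PRESSURE)

    fun start() {
        // Not every phone has a barometer; getDefaultSensor() returns null if absent.
        pressureSensor?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_NORMAL)
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        val pressureHpa = event.values[0]               // ambient pressure in hPa
        val altitudeMeters = SensorManager.getAltitude(
            SensorManager.PRESSURE_STANDARD_ATMOSPHERE, // sea-level reference pressure
            pressureHpa
        )
        println("Pressure: $pressureHpa hPa, altitude approx. $altitudeMeters m")
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit
}
```

The same pattern applies to the accelerometer, gyroscope, ambient light sensor and magnetometer by substituting the corresponding Sensor.TYPE_* constant.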
Rare sensors
Since the 2014 Galaxy S5 and Galaxy Note 4, Samsung has equipped its flagship smartphones with a heart rate sensor, to assist in fitness-related uses and act as a shutter key for the front-facing camera.
So far, only the 2013 Samsung Galaxy S4 and Note 3 are equipped with an ambient temperature sensor and a humidity sensor, and only the Note 4 with an ultraviolet radiation sensor which could warn the user about excessive exposure.
A rear infrared laser beam for distance measurement can enable time-of-flight camera functionality with accelerated autofocus, as implemented on select LG mobile phones starting with LG G3 and LG V10.
Due to their currently rare occurrence among smartphones, not much software to utilize these sensors has been developed yet.
Storage
While eMMC (embedded MultiMediaCard) flash storage was most commonly used in mobile phones, its successor, UFS (Universal Flash Storage), with higher transfer rates, emerged throughout the 2010s in higher-end devices.
Capacity
While the internal storage capacity of mobile phones was near-stagnant during the first half of the 2010s, it increased more steeply during the second half; Samsung, for example, increased the internal storage options of its flagship-class units from 32 GB to 512 GB in only two years, between 2016 and 2018.
Memory cards
The data storage of some mobile phones can be expanded using MicroSD memory cards, whose capacity has multiplied throughout the 2010s. Benefits over USB On-The-Go storage and cloud storage include offline availability and privacy, not occupying or protruding from the charging port, no connection instability or latency, no dependence on voluminous data plans, and preservation of the limited rewriting cycles of the device's permanent internal storage.
In case of technical defects that make the device unusable or unbootable, as a result of liquid damage, fall damage, screen damage, bending damage, malware, or bogus system updates, data stored on the memory card can likely be recovered externally, while data on the inaccessible internal storage would be lost. A memory card can usually be re-used immediately in a different memory-card-enabled device with no need for prior file transfers.
Some dual-SIM mobile phones are equipped with a hybrid slot, where one of the two slots can be occupied by either a SIM card or a memory card. Some models, typically of higher end, are equipped with three slots including one dedicated memory card slot, for simultaneous dual-SIM and memory card usage.
Physical location
The location of the SIM and memory card slots varies among devices; they may be accessibly located behind the back cover or else behind the battery, the latter of which prevents hot swapping.
Mobile phones with non-removable rear cover typically house SIM and memory cards in a small tray on the handset's frame, ejected by inserting a needle tool into a pinhole.
Some earlier mid-range phones, such as the 2011 Samsung Galaxy Fit and Ace, have a sideways memory card slot on the frame, covered by a cap that can be opened without a tool.
File transfer
Originally, smartphones commonly exposed their storage to computers over USB as mass storage. Over time, mass storage access was removed, leaving the Media Transfer Protocol (MTP) as the protocol for USB file transfer. MTP allows non-exclusive access, so the computer can access the storage without it being locked away from the mobile phone's software for the duration of the connection, and the computer does not need to support the device's file system, as communication is done through an abstraction layer.
However, unlike mass storage, the Media Transfer Protocol lacks parallelism, meaning that only a single transfer can run at a time, and other transfer requests must wait for it to finish. In addition, direct access to files through MTP is not supported; any file is wholly downloaded from the device before it is opened.
Sound
Some audio quality enhancing features, such as Voice over LTE and HD Voice have appeared and are often available on newer smartphones. Sound quality can remain a problem due to the design of the phone, the quality of the cellular network and compression algorithms used in long-distance calls. Audio quality can be improved using a VoIP application over WiFi. Cellphones have small speakers so that the user can use a speakerphone feature and talk to a person on the phone without holding it to their ear. The small speakers can also be used to listen to digital audio files of music or speech or watch videos with an audio component, without holding the phone close to the ear.
Some mobile phones, such as the HTC One M8 and the Sony Xperia Z2, are equipped with stereophonic speakers to create spatial sound when held in horizontal orientation.
Audio connector
The 3.5 mm headphone receptacle (colloquially the "headphone jack") allows the immediate operation of passive headphones, as well as connection to other external auxiliary audio appliances. Among devices equipped with the connector, it is more commonly located at the bottom (charging port side) than on the top of the device.
The connector began disappearing from newly released mobile phones across all major vendors in 2016, starting with its omission from the Apple iPhone 7. An adapter occupying the charging port can retrofit the plug.
Battery-powered, wireless Bluetooth headphones are an alternative. However, they tend to be costlier due to their need for internal hardware such as a Bluetooth transceiver, and a Bluetooth connection has to be established before each use.
Battery
A smartphone typically uses a lithium-ion battery due to its high energy density.
Batteries chemically wear down as a result of repeated charging and discharging throughout ordinary usage, losing both energy capacity and output power, which results in reduced processing speeds and eventually system outages. Battery capacity may fall to 80% after a few hundred charge cycles, and the drop in performance accelerates with time.
Some mobile phones are designed with batteries that can be exchanged by the end user upon expiration, usually by opening the back cover. While such a design was initially used in most mobile phones, including non-Apple touchscreen phones, it has largely been superseded throughout the 2010s by permanently built-in, non-replaceable batteries, a design practice criticized as planned obsolescence.
Due to limits on the electrical current that the copper wires of existing USB cables can handle, charging protocols that use elevated voltages, such as Qualcomm Quick Charge and MediaTek Pump Express, were developed to increase the power throughput for faster charging; the smartphone's integrated charge controller requests the elevated voltage from a supported charger. "VOOC" by Oppo, also marketed as "Dash Charge", took the opposite approach and increased the current instead, avoiding some of the heat produced by regulating the arriving voltage down to the battery's charging terminal voltage inside the device, but it is incompatible with ordinary USB cables, as it requires the thicker copper wires of high-current cables. Later, USB Power Delivery (USB-PD) was developed with the aim of standardizing the negotiation of charging parameters of up to 100 watts across devices, but it is only supported on cables with USB-C at both ends, due to the connector's dedicated communication channel.
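The trade-off between the high-voltage and high-current approaches follows from basic circuit relations. For a fixed power throughput,

\[
P = V \cdot I, \qquad P_{\text{cable loss}} = I^{2} R_{\text{cable}},
\]

so raising the voltage lowers the current, and the resistive loss dissipated in the cable falls with the square of that current; doubling the voltage at the same power halves the current and quarters the cable loss. This is why high-voltage schemes work over ordinary cables, whereas a high-current scheme such as VOOC shifts more loss into the cable and needs thicker copper wires.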
While charging rates have been increasing, with 15 watts in 2014, 20 watts in 2016, and 45 watts in 2018, the power throughput may be throttled down significantly during operation of the device.
Wireless charging has been widely adopted, allowing for intermittent recharging without wearing down the charging port through frequent reconnection, with Qi being the most common standard, followed by Powermat. Due to the lower efficiency of wireless power transmission, charging rates are below those of wired charging, and more heat is produced at similar charging rates.
By the end of 2017, smartphone battery life has become generally adequate; however, earlier smartphone battery life was poor due to the weak batteries that could not handle the significant power requirements of the smartphones' computer systems and color screens.
Smartphone users purchase additional chargers for use outside the home, at work, and in cars and by buying portable external "battery packs". External battery packs include generic models which are connected to the smartphone with a cable, and custom-made models that "piggyback" onto a smartphone's case. In 2016, Samsung had to recall millions of the Galaxy Note 7 smartphones due to an explosive battery issue. For consumer convenience, wireless charging stations have been introduced in some hotels, bars, and other public spaces.
Cameras
Cameras have become standard features of smartphones. As of 2019, phone cameras are a highly competitive area of differentiation between models, with advertising campaigns commonly focusing on the quality or capabilities of a device's main cameras.
Images are usually saved in the JPEG file format; some high-end phones since the mid-2010s also have RAW imaging capability.
Space constraints
Typically smartphones have at least one main rear-facing camera and a lower-resolution front-facing camera for "selfies" and video chat. Owing to the limited depth available in smartphones for image sensors and optics, rear-facing cameras are often housed in a "bump" that is thicker than the rest of the phone. Since increasingly thin mobile phones have more abundant horizontal space than the depth that dedicated cameras use for better lenses, there is additionally a trend for phone manufacturers to include multiple cameras, each optimized for a different purpose (telephoto, wide angle, etc.).
Viewed from the back, rear cameras are commonly located at the top centre or top-left corner. A cornered location has the benefit of not requiring other hardware to be packed around the camera module, and improves ergonomics, as the lens is less likely to be covered when the phone is held horizontally.
Modern advanced smartphones have cameras with optical image stabilisation (OIS), larger sensors, bright lenses, and even optical zoom plus RAW images. HDR, "Bokeh mode" with multi lenses and multi-shot night modes are now also familiar. Many new smartphone camera features are being enabled via computational photography image processing and multiple specialized lenses rather than larger sensors and lenses, due to the constrained space available inside phones that are being made as slim as possible.
Dedicated camera button
Some mobile phones such as the Samsung i8000 Omnia 2, some Nokia Lumias and some Sony Xperias are equipped with a physical camera shutter button.
Buttons with two pressure levels resemble the point-and-shoot operation of dedicated compact cameras, where a half-press focuses and a full press captures. The camera button may also be used as a shortcut to quickly and ergonomically launch the camera software, as it is more accessible inside a pocket than the power button.
Back cover materials
Back covers of smartphones are typically made of polycarbonate, aluminium, or glass. Polycarbonate back covers may be glossy or matte, and possibly textured, like dotted on the Galaxy S5 or leathered on the Galaxy Note 3 and Note 4.
While polycarbonate back covers may be perceived as less "premium" among fashion- and trend-oriented users, their practical and technical benefits include durability and shock absorption, greater resistance to permanent bending than metal, and an inability to shatter like glass, which makes a removable design easier; polycarbonate is also cheaper to manufacture and, unlike metal, does not block radio signals or wireless power.
Accessories
A wide range of accessories are sold for smartphones, including cases, memory cards, screen protectors, chargers, wireless power stations, USB On-The-Go adapters (for connecting USB drives or, in some cases, an HDMI cable to an external monitor), MHL adapters, add-on batteries, power banks, headphones, combined headphone-microphones (which, for example, allow a person to privately conduct calls on the device without holding it to the ear), and Bluetooth-enabled powered speakers that enable users to listen to media from their smartphones wirelessly.
Cases range from relatively inexpensive rubber or soft plastic cases which provide moderate protection from bumps and good protection from scratches to more expensive, heavy-duty cases that combine a rubber padding with a hard outer shell. Some cases have a "book"-like form, with a cover that the user opens to use the device; when the cover is closed, it protects the screen. Some "book"-like cases have additional pockets for credit cards, thus enabling people to use them as wallets.
Accessories include products sold by the manufacturer of the smartphone and compatible products made by other manufacturers.
However, some companies, like Apple, stopped including chargers with smartphones, citing a reduced carbon footprint, causing many customers to pay extra for charging adapters.
Software
Mobile operating systems
A mobile operating system (or mobile OS) is an operating system for phones, tablets, smartwatches, or other mobile devices.
Mobile operating systems combine features of a personal computer operating system with other features useful for mobile or handheld use, usually including a touchscreen, cellular connectivity, Bluetooth, Wi-Fi Protected Access, Wi-Fi, Global Positioning System (GPS) navigation, video and single-frame cameras, speech recognition, a voice recorder, a music player, near-field communication, and an infrared blaster, most of which are considered essential in modern mobile systems. By Q1 2018, over 383 million smartphones were sold, with 85.9 percent running Android, 14.1 percent running iOS and a negligible number of smartphones running other OSes. Android alone is more popular than the popular desktop operating system Windows, and in general smartphone use (even without tablets) exceeds desktop use. Other well-known mobile operating systems are Flyme OS and Harmony OS.
Mobile devices with mobile communications abilities (e.g., smartphones) contain two mobile operating systems: the main user-facing software platform is supplemented by a second low-level proprietary real-time operating system which operates the radio and other hardware. Research has shown that these low-level systems may contain a range of security vulnerabilities permitting malicious base stations to gain high levels of control over the mobile device.
Mobile app
A mobile app is a computer program designed to run on a mobile device, such as a smartphone. The term "app" is a short-form of the term "software application".
Application stores
The introduction of Apple's App Store for the iPhone and iPod Touch in July 2008 popularized manufacturer-hosted online distribution for third-party applications (software and computer programs) focused on a single platform. There is a huge variety of apps, including video games, music products and business tools. Up until that point, smartphone application distribution depended on third-party sources providing applications for multiple platforms, such as GetJar, Handango, Handmark, and PocketGear. Following the success of the App Store, other smartphone manufacturers launched application stores, such as Google's Android Market (later renamed to the Google Play Store) and RIM's BlackBerry App World, as well as Android-related app stores like Aptoide, Cafe Bazaar, F-Droid, GetJar, and Opera Mobile Store. In February 2014, 93% of mobile developers were targeting smartphones first for mobile app development.
Sales
Since 1996, smartphone shipments had positive growth. In November 2011, 27% of all photographs created were taken with camera-equipped smartphones. In September 2012, a study concluded that 4 out of 5 smartphone owners use the device to shop online. Global smartphone sales surpassed the sales figures for feature phones in early 2013. Worldwide shipments of smartphones topped 1 billion units in 2013, up 38% from 2012's 725 million, while comprising a 55% share of the mobile phone market in 2013, up from 42% in 2012. In Q1 2016, shipments dropped by 3 percent year on year, the first such decline, caused by the maturing China market. A report by NPD shows that fewer than 10% of US citizens have bought $1,000+ smartphones, as they are too expensive for most people and do not introduce particularly innovative features, while Huawei, Oppo and Xiaomi offer products with similar feature sets for lower prices. In 2019, smartphone sales declined by 3.2%, the largest decline in smartphone history, while China and India were credited with driving most smartphone sales worldwide. It is predicted that widespread adoption of 5G will help drive new smartphone sales.
By manufacturer
In 2011, Samsung had the highest shipment market share worldwide, followed by Apple. In 2013, Samsung had 31.3% market share, a slight increase from 30.3% in 2012, while Apple was at 15.3%, a decrease from 18.7% in 2012. Huawei, LG and Lenovo were at about 5% each, significantly better than their 2012 figures, while others had about 40%, the same as the previous year's figure. Only Apple lost market share, although its shipment volume still increased by 12.9%; the rest had significant increases in shipment volumes of 36–92%. In Q1 2014, Samsung had a 31% share and Apple had 16%. In Q4 2014, Apple had a 20.4% share and Samsung had 19.9%. In Q2 2016, Samsung had a 22.3% share and Apple had 12.9%. In Q1 2017, IDC reported that Samsung was first placed, with 80 million units, followed by Apple with 50.8 million, Huawei with 34.6 million, Oppo with 25.5 million and Vivo with 22.7 million.
Samsung's mobile business is half the size of Apple's, by revenue, with Apple's business having increased very rapidly from 2013 to 2017. Realme, a brand owned by Oppo, has been the fastest-growing phone brand worldwide since Q2 2019. In China, Huawei and Honor, a brand owned by Huawei, had 46% of market share combined and posted 66% annual growth as of 2019, amid growing Chinese nationalism. In 2019, Samsung had a 74% market share in 5G smartphones, while 5G smartphones had 1% of market share in China.
Research has shown that iPhones are commonly associated with wealth, and that the average iPhone user has 40% more annual income than the average Android user. Women are more likely than men to own an iPhone. TrendForce predicts that foldable phones will start to become popular in 2021.
By operating system
Use
Contemporary use and convergence
The rise in popularity of touchscreen smartphones and mobile apps distributed via app stores, along with rapidly advancing network, mobile processor, and storage technologies, led to a convergence where separate mobile phones, organizers, and portable media players were replaced by a smartphone as the single device most people carried. Advances in digital camera sensors and on-device image processing software more gradually led to smartphones replacing simpler cameras for photographs and video recording. The built-in GPS capabilities and mapping apps on smartphones largely replaced stand-alone satellite navigation devices, and paper maps became less common. Mobile gaming on smartphones grew greatly in popularity, allowing many people to use them in place of handheld game consoles, and some companies tried creating game console/phone hybrids based on phone hardware and software. People have frequently chosen not to get fixed-line telephone service in favor of smartphones. Music streaming apps and services have grown rapidly in popularity, serving the same use as listening to music stations on a terrestrial or satellite radio. Streaming video services are easily accessed via smartphone apps and can be used in place of watching television. People have often stopped wearing wristwatches in favor of checking the time on their smartphones, and many use the clock features on their phones in place of alarm clocks. Smartphones can also serve as devices for note taking, text editing, and memoranda, whose digital storage makes entries easy to search.
Additionally, in many lesser technologically developed regions smartphones are people's first and only means of Internet access due to their portability, with personal computers being relatively uncommon outside of business use. The cameras on smartphones can be used to photograph documents and send them via email or messaging in place of using fax (facsimile) machines. Payment apps and services on smartphones allow people to make less use of wallets, purses, credit and debit cards, and cash. Mobile banking apps can allow people to deposit checks simply by photographing them, eliminating the need to take the physical check to an ATM or teller. Guide book apps can take the place of paper travel and restaurant/business guides, museum brochures, and dedicated audio guide equipment.
Mobile banking and payment
In many countries, mobile phones are used to provide mobile banking services, which may include the ability to transfer cash payments by secure SMS text message. Kenya's M-PESA mobile banking service, for example, allows customers of the mobile phone operator Safaricom to hold cash balances which are recorded on their SIM cards. Cash can be deposited or withdrawn from M-PESA accounts at Safaricom retail outlets located throughout the country and can be transferred electronically from person to person and used to pay bills to companies.
Branchless banking has been successful in South Africa and the Philippines. A pilot project in Bali was launched in 2011 by the International Finance Corporation and an Indonesian bank, Bank Mandiri.
Another application of mobile banking technology is Zidisha, a US-based nonprofit micro-lending platform that allows residents of developing countries to raise small business loans from Web users worldwide. Zidisha uses mobile banking for loan disbursements and repayments, transferring funds from lenders in the United States to borrowers in rural Africa who have mobile phones and can use the Internet.
Mobile payments were first trialled in Finland in 1998 when two Coca-Cola vending machines in Espoo were enabled to work with SMS payments. Eventually, the idea spread and in 1999, the Philippines launched the country's first commercial mobile payments systems with mobile operators Globe and Smart.
Some mobile phones can make mobile payments via direct mobile billing schemes, or through contactless payments if the phone and the point of sale support near field communication (NFC). Enabling contactless payments through NFC-equipped mobile phones requires the co-operation of manufacturers, network operators, and retail merchants.
Facsimile
Some apps allow sending and receiving facsimile (fax) over a smartphone, including facsimile data (composed of raster bi-level graphics) generated directly and digitally from document and image file formats.
Criticism and issues
Social impacts
In 2012, a University of Southern California study found that unprotected adolescent sexual activity was more common among owners of smartphones. A study conducted by the Rensselaer Polytechnic Institute's (RPI) Lighting Research Center (LRC) concluded that smartphones, or any backlit devices, can seriously affect sleep cycles. Some persons might become psychologically attached to smartphones, resulting in anxiety when separated from the devices. A "smombie" (a combination of "smartphone" and "zombie") is a walking person using a smartphone and not paying attention as they walk, possibly risking an accident in the process; it is an increasing social phenomenon. The issue of slow-moving smartphone users led to the temporary creation of a "mobile lane" for walking in Chongqing, China. The issue of distracted smartphone users led the city of Augsburg, Germany to embed pedestrian traffic lights in the pavement.
While driving
Mobile phone use while driving—including calling, text messaging, playing media, web browsing, gaming, using mapping apps or operating other phone features—is common but controversial, since it is widely considered dangerous due to what is known as distracted driving. Being distracted while operating a motor vehicle has been shown to increase the risk of accidents. In September 2010, the US National Highway Traffic Safety Administration (NHTSA) reported that 995 people were killed by drivers distracted by phones. In March 2011 a US insurance company, State Farm Insurance, announced the results of a study which showed 19% of drivers surveyed accessed the Internet on a smartphone while driving. Many jurisdictions prohibit the use of mobile phones while driving. In Egypt, Israel, Japan, Portugal and Singapore, both handheld and hands-free calling on a mobile phone (which uses a speakerphone) is banned. In other countries, including the UK and France, and in many US states, calling is only banned on handheld phones, while hands-free calling is permitted.
A 2011 study reported that over 90% of college students surveyed text (initiate, reply or read) while driving.
The scientific literature on the danger of driving while sending a text message from a mobile phone, or texting while driving, is limited. A simulation study at the University of Utah found a sixfold increase in distraction-related accidents when texting. As smartphones grew more complex, it became increasingly difficult for law enforcement officials to distinguish one usage from another when observing drivers using their devices. This is more apparent in countries which ban both handheld and hands-free usage, rather than those which ban handheld use only, as officials cannot easily tell which function of the phone is being used simply by looking at the driver. This can lead to drivers being stopped for using their device illegally for a call when, in fact, they were using the device legally, for example, when using the phone's incorporated controls for car stereo, GPS or satnav.
A 2010 study reviewed the incidence of phone use while cycling and its effects on behavior and safety. In 2013 a national survey in the US reported the number of drivers who reported using their phones to access the Internet while driving had risen to nearly one of four. A study conducted by the University of Vienna examined approaches for reducing inappropriate and problematic use of mobile phones, such as using phones while driving.
Accidents involving a driver being distracted by a phone call have begun to be prosecuted as negligence, similar to speeding. In the United Kingdom, since 27 February 2007, motorists caught using a handheld phone while driving have had three penalty points added to their license in addition to a fine of £60. This increase was introduced to try to stem the rise in drivers ignoring the law. Japan prohibits all use of phones while driving, including use of hands-free devices. New Zealand has banned handheld phone use since 1 November 2009. Many states in the United States have banned text messaging on phones while driving. Illinois became the 17th American state to enforce this law. As of July 2010, 30 states had banned texting while driving, with Kentucky becoming the most recent addition on July 15.
Public Health Law Research maintains a list of distracted driving laws in the United States. This database of laws provides a comprehensive view of the provisions of laws that restrict the use of mobile devices while driving for all 50 states and the District of Columbia from 1992, when the first law was passed, through December 1, 2010. The dataset contains information on 22 dichotomous, continuous or categorical variables including, for example, activities regulated (e.g., texting versus talking, hands-free versus handheld calls, web browsing, gaming), targeted populations, and exemptions.
Legal
A "patent war" between Samsung and Apple started when the latter claimed that the original Galaxy S Android phone copied the interfaceand possibly the hardwareof Apple's iOS for the iPhone 3GS. There was also smartphone patents licensing and litigation involving Sony Mobile, Google, Apple Inc., Samsung, Microsoft, Nokia, Motorola, HTC, Huawei and ZTE, among others. The conflict is part of the wider "patent wars" between multinational technology and software corporations. To secure and increase market share, companies granted a patent can sue to prevent competitors from using the methods the patent covers. Since the 2010s the number of lawsuits, counter-suits, and trade complaints based on patents and designs in the market for smartphones, and devices based on smartphone OSes such as Android and iOS, has increased significantly. Initial suits, countersuits, rulings, license agreements, and other major events began in 2009 as the smartphone market stated to grow more rapidly by 2012.
Medical
With the rise in the number of mobile medical apps in the marketplace, government regulatory agencies raised concerns about the safety of the use of such applications. These concerns were transformed into regulation initiatives worldwide with the aim of safeguarding users from untrusted medical advice. According to medical findings in recent years, excessive smartphone use in society may lead to headaches, sleep disorders and insufficient sleep, while severe smartphone addiction may lead to physical health problems, such as hunchback, muscle relaxation and uneven nutrition.
Impacts on cognition and mental health
There is a debate about beneficial and detrimental impacts of smartphones or smartphone-uses on cognition and mental health.
Security
Smartphone malware is easily distributed through insecure app stores. Often, malware is hidden in pirated versions of legitimate apps, which are then distributed through third-party app stores. Malware risk also comes from what is known as an "update attack", where a legitimate application is later changed to include a malware component, which users then install when they are notified that the app has been updated. In addition, one out of three robberies in the United States in 2012 involved the theft of a mobile phone. An online petition has urged smartphone makers to install kill switches in their devices. As of 2014, Apple's "Find my iPhone" and Google's "Android Device Manager" could locate, disable, and wipe the data from phones that have been lost or stolen. With BlackBerry Protect in OS version 10.3.2, devices can be rendered unrecoverable even to BlackBerry's own operating system recovery tools if they are incorrectly authenticated or dissociated from their account.
Leaked documents published by WikiLeaks, codenamed Vault 7 and dated from 2013 to 2016, detail the capabilities of the United States Central Intelligence Agency (CIA) to perform electronic surveillance and cyber warfare, including the ability to compromise the operating systems of most smartphones (including iOS and Android). In 2021, journalists and researchers reported the discovery of spyware called Pegasus, developed and distributed by a private company, which can be, and has been, used to infect iOS and Android smartphones – often partly via use of 0-day exploits – without the need for any user interaction or significant clues to the user, and then be used to exfiltrate data, track user locations, capture film through the camera, and activate the microphone at any time. Analysis of data traffic by popular smartphones running variants of Android found substantial by-default data collection and sharing, with no opt-out, by this pre-installed software.
Guidelines for mobile device security were issued by NIST and many other organizations. For conducting a private, in-person meeting, at least one site recommends that the user switch the smartphone off and disconnect the battery.
Sleep
Using smartphones late at night can disturb sleep, due to the blue light and brightly lit screen, which affects melatonin levels and sleep cycles. In an effort to alleviate these issues, "Night Mode" functionality to change the color temperature of a screen to a warmer hue based on the time of day to reduce the amount of blue light generated became available through several apps for Android and the f.lux software for jailbroken iPhones. iOS 9.3 integrated a similar, system-level feature known as "Night Shift." Several Android device manufacturers bypassed Google's initial reluctance to make Night Mode a standard feature in Android and included software for it on their hardware under varying names, before Android Oreo added it to the OS for compatible devices.
It has also been theorized that for some users, addiction to use of their phones, especially before they go to bed, can result in "ego depletion." Many people also use their phones as alarm clocks, which can also lead to loss of sleep.
Lifespan
In mobile phones released since the second half of the 2010s, operational life span is commonly limited by built-in batteries which are not designed to be interchangeable. The life expectancy of a battery depends on the usage intensity of the powered device: heavier activity (longer usage) and more energy-demanding tasks wear the battery out earlier.
Lithium-ion and lithium-polymer batteries, the types commonly powering portable electronics, additionally wear down more from fuller charge and deeper discharge cycles, and from being left unused for an extended amount of time while depleted, where self-discharge may lead to a harmful depth of discharge.
The functional life span of mobile phones may also be limited by a lack of software update support, such as the deprecation of TLS cipher suites by certificate authorities with no official patches provided for earlier devices.
See also
Comparison of smartphones
E-reader
Lists of mobile computers
List of mobile software distribution platforms
Media Transfer Protocol
Mobile Internet device
Portable media player
Second screen
Smartphone kill switch
Smartphone zombie
Notes
References
External links
Smartphones
Cloud clients
Consumer electronics
Information appliances
Mobile computers
Personal computing |
41337508 | https://en.wikipedia.org/wiki/Fireboats%20of%20Jamaica | Fireboats of Jamaica | The Jamaican Fire Brigade operates several fireboats of Jamaica.
According to a 2003 article in the Jamaica Gleaner the three fireboats then nominally operated by the Fire Brigade were all in a state of disrepair, and had all been out of service for months—or in the case of one vessel—years. According to another Gleaner article the stations were dangerously over-run with rats and other vermin.
In 2005 the Jamaica Star reported that after the fireboat assigned to the Kingston Fire Boat Station had been out of service for most of 2004—having been sent for repair four separate times—the staff were assigned to other duties when the fireboat was placed permanently offline.
By 2012, Montego Bay, Jamaica's second most important port, had been without a fireboat for over five years, as the previous boat had been written off as not worth repairing but had yet to be replaced.
Both the mayor of Montego Bay, Glendon Harris, and Jamaica Senator Robert Montague have called for the urgent supply of new fireboats.
In November 2013 Alrick Hacker, a Senior Deputy Superintendent of the Fire Brigade, defended the Brigade's replacement plans and outlined the interim measures in place until the new boats were in service.
Jamaica's three new Jamaican Coast Guard patrol cutters all mount a water cannon in their bow, and Hacker said the Coast Guard had been requested to help out until the Brigade's new fireboats were ready. He said the Police Marine units also had some firefighting capability, and they too had been asked to help out. He asserted that modern cruise ships, the foreign vessels of most acute concern, were built more fire-safe than in the past, and had the capability to fight their own fires, to a certain extent.
Hacker asserted that new technology would allow the Fire Brigade to replace their older, relatively large vessels with smaller, faster, more capable vessels, that would be cheaper to operate and to maintain.
In 2018 the Fire Brigade acquired two new fireboats.
References
Fire |
52792865 | https://en.wikipedia.org/wiki/Noonlight | Noonlight | Noonlight, formerly SafeTrek, is a connected safety platform and mobile app that can trigger requests to emergency services. Noonlight users can trigger an alarm by clicking a button. Users can connect other smart devices, to automatically trigger alarms for them.
The app is available for both Android and iOS devices. It has protected over 1 million users since 2013 and handled over 50,000 emergencies across the U.S. Noonlight was developed by Zach Winkler, the company's CEO.
Background
The app was created in 2013 at the University of Missouri in response to the high number of crime reports and the slow identification of the location of people making calls by 911 services. Winkler notes that "what most of us don't realize is that 9-1-1 really doesn’t have your location when you call them. It takes them up to six minutes sometimes to get a 300-meter accuracy reading of where you are". Noonlight is able to obtain users' exact GPS location within 5 meters in seconds of triggering the alarm.
In February 2017, SafeTrek was one among the top 25 promising Start Ups, according to CNBC.
Connected safety
Noonlight originally started as a way to get students from one point to another safely, and now it allows users to connect other apps and smart devices through the Noonlight app. This creates a way for users to get help even when they are unable or unaware of the emergency by allowing the connected app or device to trigger the Noonlight alarm for them. This solution also sends first responders vital details to make them more aware of the situation and better prepared. Also, users who already have memberships through some of these connected devices now have access to the safety button or other safety features through Noonlight.
Downtown St. Louis Collaboration
In 2016, Downtown St. Louis partnered with SafeTrek and offered a six-month free subscription to 4,000 downtown residents, available on a first-come, first-served basis.
In 2018, the St. Louis MetroLink contracted with Noonlight to offer their riders access to Noonlight's safety app. Similarly, Washington University in St. Louis offers a free subscription to all students, faculty, and staff.
Data collection and analysis
The data collected in the background is used to better assess the danger potential of specific areas throughout the U.S. and is sold to insurance agencies for individual users' driving risk assessment. For example, data released by the Noonlight app showed an intersection of Southern Methodist University's campus where users of Noonlight feel most unsafe. The Noonlight team has worked alongside local police departments to create a complementary police dashboard which marks and tracks locations of users, as well as passes along vital information to first responders collected through connected apps and devices. The intention is that policy makers, agency leaders, and individuals can make better and more informed decisions, improve resource utilization, and ultimately prevent emergencies from happening in the first place.
References
External links
Security software
Android (operating system) software
IOS software
Location-based software
Law enforcement in the United States
Proprietary software
Emergency management software
University of Missouri |
43142500 | https://en.wikipedia.org/wiki/Cheetah%20Mobile | Cheetah Mobile | Cheetah Mobile Inc () is a Chinese mobile Internet company headquartered in Beijing. The creator of some of the most popular global mobile apps, it has more than 634 million monthly active users as of Jan 2017.
History
Formation
Chen Rui (陳睿, current CEO of Bilibili) founded Cheetah Mobile. The company was established in 2010 as a merger of Kingsoft Security (Chen served as General Manager) and Conew Image, and grew to be the second-largest internet security software provider in China, according to iResearch. The company is located at 1st Yaojiayuan South Rd, Chaoyang District, Beijing, China.
Initial public offering
In 2014, Cheetah Mobile launched an IPO selling 13 million American depositary shares at US$14 per share, and thereby raised US$168 million. The IPO was managed by Morgan Stanley, JP Morgan Chase & Co., and Credit Suisse Group. Kingsoft and Tencent are major investors in Cheetah Mobile, holding 54% and 18% respectively.
Post IPO
In late 2015, Cheetah Mobile announced that it had entered into a global strategic partnership with Yahoo. The company incorporated Yahoo's search and native advertising platforms within its own apps. As a result, Cheetah Mobile stated that its revenue generated from Yahoo increased by 30 percent daily within the first two weeks.
In February 2016, Cheetah Mobile and Cubot launched the CheetahPhone, an Android 6.0 Marshmallow based smartphone, at MWC in Barcelona, Spain.
Acquisition
On August 2, 2016, Cheetah Mobile announced its acquisition of a French startup News Republic for $57 million. News Republic is a news aggregator.
Ad fraud accusation
In March 2020, Cheetah Mobile was banned from Google Play due to their scheme of ad fraud, resulting in all of their games being removed as part of a 600 app deletion.
Products
Cheetah Mobile's ad supported products include:
Computer applications
Clean Master for PC – It claims to improve performance by erasing junk files and optimizing device memory. A premium version is available that recovers lost files and updates drivers, among other claims. It is available for PC and Android.
CM Browser – a web browser based on Chromium, it claims to be the first dual-core security browser in China.
Games
Big Bang 2048 – Similar to the game 2048, but with numbers replaced by animals.
Just Get 10 – A puzzle where players tap adjacent tiles with the same number to group them; tapping again merges them into a single tile at the tapped position.
Don't Tap The White Tile – Players must avoid white tiles.
Piano Tiles 2: Don't Tap The White Tile 2 – The sequel of Don't Tap The White Tile, including new gameplay and songs.
Rolling Sky - A fast-paced game where players must roll a ball through different levels.
Tap Tap Dash - A fast-paced game where players must tap to keep their character from falling off the platform throughout different levels.
Dancing Line - A rhythm game to tap in the beat.
Arrow.io - an archery game that ranges in 4 arenas.
Tap Tap Fish: AbyssRium - a marine aquarium game which the goal is to collect all the hidden fishes.
Mobile applications
AnTuTu
Armorfly - A browser which claims to have high privacy and security.
Battery Doctor – Claims to extend smartphone battery standby time.
Clean Master – Claims to improve smartphone performance and free storage space by erasing junk files, optimizing memory, and providing full protection from viruses, trojans, and other malware. There is significant controversy online as to whether the application is actually effective or not.
Cloud Space of CM Security - A cloud backup tool to back up user's photos, call logs, contact information and SMS messages.
CM Backup – A cloud backup service.
CM Browser – A mobile web browser with antivirus security functions.
CM Flashlight – An ad supported flashlight app with a built-in compass.
CM Keyboard - A keyboard application that allows customizing the phone's keyboard.
CM Launcher 3D – Launcher only compatible with Android devices.
CM Locker - An Android lockscreen app.
Security Master – An antivirus application for Android phones.
CM Security VPN - A free VPN application.
CM Speed Booster - An Android optimizer.
CM Swipe - A quick access tool for easy access to apps and tools with just one hand.
CM TouchMe - An assistive tool to quickly access system operations or apps, inspired by the assistive touch tool in iOS.
CM Transfer - A file transfer tool to exchange photos, videos, music & apps offline.
CM QR Code & Bar Code Scanner - An ad supported scanner tool that reads information encoded in QR codes and bar codes.
File Manager – A popular file manager for Android, bought in early 2014 from Rhythm Software, which is based in Haidian, Beijing, PRC.
GoTap! - A data management tool that claims to help users manage or reduce mobile data usage and battery.
Heartbleed Scanner - A heartbleed virus scanner application that scans the Android operating system to see if the device is vulnerable to the 2014 Heartbleed exploit.
Notification Cleaner - A notification manager tool.
QuickPic Gallery – A photo gallery application, which was acquired from the Q-Supreme team in 2015.
Ransomware Killer - A ransomware virus killer application that claims to kill malware on an infected Android phone.
Simplelocker Cleaner - A locker cleaner application that performs a full scan of an Android device, and checks for example if a Cryptolocker virus is present. Claims to use a special anti-hijack solution to remove an infection.
Speed Test - A WiFi speed test tool that helps scan WiFi connections, check the security and the speed of the connection, and then optimize the speed.
Struts 2 Web Server Scanner - A web server scanner application that scans browser history and detects whether recently visited websites are affected by the Struts 2 flaw.
Stubborn Trojan Killer - A mobile antivirus application that claims to get rid of stubborn trojans that can't be deleted by other common antivirus apps.
WhatsCall - A caller application with free global and secure calls.
2Face - An account management application for instant simultaneous access to two exact copies of applications such as social, gaming and messaging apps on a single device.
AI
Artificial Intelligence - AI technology platform to power Cheetah Mobile's full line of products.
OrionOS – a platform for smart devices in collaboration with OrionStar
Cheetah Voicepod – a voice recognition speaker based on AI
Cheetah GreetBot - a receptionist robot to deal with customers
Cheetah FriendBot - an educational robot for children
Cheetah VendBot - a vending machine robot
Commercial
Cheetah Ads - Cheetah Mobile's self-operated ad platform offering a wide range of ad formats, from high-performing display and native ads, to full-screen vertical video.
Big Data
Cheetah Data - a global mobile big data analysis platform.
Controversies
Despite the popularity of its Clean Master Android App, it was reported in 2014 that ads promoting Clean Master manipulate Android users with deceptive tactics when browsing websites within the app's advertising framework. In April 2014, Ferenc László Nagy from Sophos Labs captured some pop-up ads that led to Clean Master, warning the device had been infected with a virus.
In July 2014, Cheetah Mobile encouraged users to uninstall Google Chrome and replace it with Cheetah Mobile's own browser during Clean Master's clean up and optimization process. This practice allowed Cheetah Mobile to gain an unfair position in the marketplace and led to a Google crackdown.
In December 2018, Cheetah Mobile was implicated in a massive click fraud scheme, leading Google to remove two of its apps from its Play Store. Cheetah Mobile has denied the charges. In February 2020, Google banned nearly 600 apps on the Play Store including all Cheetah Mobile's apps "for violating our disruptive ads policy and disallowed interstitial policy."
As of 10 March 2020, all apps made by Cheetah Mobile, along with the benchmarking AnTuTu apps, have been banned from the Google Play Store.
References
External links
Online companies of China
Software companies based in Beijing
Computer security software companies
Video game development companies
Chinese companies established in 2009
Software companies established in 2009
Companies based in Beijing
Chinese brands
Companies listed on the New York Stock Exchange
Mobile software |
3249942 | https://en.wikipedia.org/wiki/Monitor%20mode | Monitor mode | Monitor mode, or RFMON (Radio Frequency MONitor) mode, allows a computer with a wireless network interface controller (WNIC) to monitor all traffic received on a wireless channel. Unlike promiscuous mode, which is also used for packet sniffing, monitor mode allows packets to be captured without having to associate with an access point or ad hoc network first. Monitor mode only applies to wireless networks, while promiscuous mode can be used on both wired and wireless networks. Monitor mode is one of the eight modes that 802.11 wireless cards can operate in: Master (acting as an access point), Managed (client, also known as station), Ad hoc, Repeater, Mesh, Wi-Fi Direct, TDLS and Monitor mode.
Uses
Uses for monitor mode include: geographical packet analysis, observing of widespread traffic and acquiring knowledge of Wi-Fi technology through hands-on experience. It is especially useful for auditing unsecure channels (such as those protected with WEP). Monitor mode can also be used to help design Wi-Fi networks. For a given area and channel, the number of Wi-Fi devices currently being used can be discovered. This helps to create a better Wi-Fi network that reduces interference with other Wi-Fi devices by choosing the least used Wi-Fi channels.
Software such as KisMAC or Kismet, in combination with packet analyzers that can read pcap files, provide a user interface for passive wireless network monitoring.
Limitations
Usually the wireless adapter is unable to transmit in monitor mode and is restricted to a single wireless channel, though this is dependent on the wireless adapter's driver, its firmware, and features of its chipset. Also, in monitor mode the adapter does not check to see if the cyclic redundancy check (CRC) values are correct for packets captured, so some captured packets may be corrupted.
Operating system support
The Microsoft Windows Network Driver Interface Specification (NDIS) API has supported extensions for monitor mode since NDIS version 6, first available in Windows Vista. NDIS 6 supports exposing 802.11 frames to the upper protocol levels, while previous versions only exposed fake Ethernet frames translated from the 802.11 frames. Monitor mode support in NDIS 6 is an optional feature and may or may not be implemented in the client adapter driver. The implementation details and compliance with the NDIS specifications vary from vendor to vendor. In many cases, monitor mode support is not properly implemented by the vendor. For example, Ralink drivers report incorrect dBm readings and Realtek drivers do not include trailing 4-byte CRC values.
For versions of Windows prior to Windows Vista, some packet analyzer applications such as Wildpackets' OmniPeek and TamoSoft's CommView for WiFi provide their own device drivers to support monitor mode.
Linux's interfaces for 802.11 drivers support monitor mode, and many drivers offer that support; however, the STA drivers (Ralink, Broadcom) and other manufacturer-provided drivers do not support monitor mode.
FreeBSD, NetBSD, OpenBSD, and DragonFly BSD also provide an interface for 802.11 drivers that supports monitor mode, and many drivers for those operating systems support monitor mode as well. In Mac OS X 10.4 and later releases, the drivers for AirPort Extreme network adapters allow the adapter to be put into monitor mode. Libpcap 1.0.0 and later provides an API to select monitor mode when capturing on those operating systems.
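The libpcap monitor-mode API mentioned above can be exercised with a few calls. The following is a minimal sketch, assuming libpcap 1.0.0 or later and using "wlan0" as a placeholder interface name; it only opens the interface and omits the capture loop itself.

```c
/* Sketch: opening a wireless interface in monitor (RFMON) mode with libpcap.
 * "wlan0" is a placeholder; error handling is abbreviated. Link with -lpcap. */
#include <pcap/pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *handle = pcap_create("wlan0", errbuf);   /* create, but do not activate yet */
    if (handle == NULL) {
        fprintf(stderr, "pcap_create failed: %s\n", errbuf);
        return 1;
    }
    if (pcap_can_set_rfmon(handle) == 1)             /* does the adapter/driver allow it? */
        pcap_set_rfmon(handle, 1);                   /* request monitor mode */
    pcap_set_snaplen(handle, 65535);                 /* capture whole frames */
    pcap_set_promisc(handle, 1);
    pcap_set_timeout(handle, 1000);                  /* read timeout in milliseconds */
    if (pcap_activate(handle) < 0) {                 /* monitor mode takes effect here */
        fprintf(stderr, "pcap_activate failed: %s\n", pcap_geterr(handle));
        pcap_close(handle);
        return 1;
    }
    printf("link-layer type: %d\n", pcap_datalink(handle)); /* often a raw 802.11 type */
    pcap_close(handle);
    return 0;
}
```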
See also
Promiscuous mode
Comparison of open-source wireless drivers
References
External links
A List of Wi-Fi adapters supporting monitor mode on Linux
Network analyzers
Wireless networking |
20133112 | https://en.wikipedia.org/wiki/Alpha%2021364 | Alpha 21364 | The Alpha 21364, code-named "Marvel", also known as EV7 is a microprocessor developed by Digital Equipment Corporation (DEC), later Compaq Computer Corporation, that implemented the Alpha instruction set architecture (ISA).
History
The Alpha 21364 was revealed in October 1998 by Compaq at the 11th Annual Microprocessor Forum, where it was described as an Alpha 21264 with a 1.5 MB 6-way set-associative on-die secondary cache, an integrated Direct Rambus DRAM memory controller and an integrated network controller for connecting to other microprocessors. Changes to the Alpha 21264 core included a larger victim buffer, which was quadrupled in capacity to 32 entries, 16 for the Dcache and 16 for the Scache. It was reported by the Microprocessor Report that Compaq considered implementing minor changes to the branch predictor to improve branch prediction accuracy and doubling the miss buffer in capacity to 16 entries instead of the 8 in the Alpha 21264.
It was expected to be taped-out in late 1999, with samples available in early 2000 and volume shipments in late 2000. However, the original schedule was delayed, with the tape-out in April 2001 instead of late 1999. The Alpha 21364 was introduced on 20 January 2002 when systems using the microprocessor debuted. It operated at 1.25 GHz, but production models in the AlphaServer ES47, ES80 and GS1280 operated at 1.0 GHz or 1.15 GHz. Unlike previous Alpha microprocessors, the Alpha 21364 was not sold on the open market.
The Alpha 21364 was originally intended to be succeeded by the Alpha 21464, code-named EV8, a new implementation of the Alpha ISA with four-way simultaneous multithreading (SMT). It was first presented in October 1999 at the 12th Annual Microprocessor Forum, but was cancelled on 25 June 2001 at a late stage of development.
Development
The development of the Alpha 21364 was most focused on features that would improve memory performance and multiprocessor scalability. The focus on memory performance was the result of a forward-looking article published in Microprocessor Report titled, "It's the Memory, Stupid!" written by Richard L. Sites, who co-led the definition of the Alpha architecture. The article concluded that, "Over the coming decade, memory subsystem design will be the only important design issue for microprocessors."
Description
The Alpha 21364 was an Alpha 21264 with a 1.75 MB on-die secondary cache, two integrated memory controllers and an integrated network controller.
Core
The Alpha 21364's core is based on the EV68CB, a derivative of the Alpha 21264. The only modification was a larger victim buffer, quadrupled in capacity to 32 entries. The 32-entry victim buffer is divided equally, with 16 entries each for the Dcache and Scache. Although the Alpha 21364 is a fourth-generation implementation of the Alpha Architecture, aside from this modification, the core is otherwise identical to the EV68CB derivative of the Alpha 21264.
Scache
The secondary cache (termed "Scache") is a unified cache with a capacity of 1.75 MB. It is 7-way set associative, uses a 64-byte line size, and has a write-back policy. The cache is protected by single-bit error correction, double-bit error detection (SECDED) error-correcting code (ECC). It is connected to the cache controller by a 128-bit data path. Access to the cache is fully pipelined, yielding a sustainable bandwidth of 16 GB/s at 1.0 GHz.
The latency from when data is requested from the cache to when it can be used is 12 cycles. This 12-cycle latency was considered significant by observers such as the Microprocessor Report. The latency of the Scache was not reduced further because doing so would not have improved performance: the Alpha 21264 core upon which the Alpha 21364 was based was designed to use an external cache built from commodity SRAM, which has a significantly higher latency than the on-die Scache of the Alpha 21364, and so the core could only accept data at a limited rate. Compaq was not willing to remedy this deficiency, as it would have required the Alpha 21264 core to be modified significantly. Once reducing latency offered no further gains, the designers focused on reducing the power consumed by the Scache. The high latency of the Scache permitted the cache tags to be looked up first, to determine whether the Scache contained the requested data and in which bank it was located, before powering up and accessing that Scache bank. This avoided unproductive Scache accesses, reducing power consumption.
The tag store consisted of 5.75 million transistors and data store of 108 million transistors.
Memory controller
The Alpha 21364 has two integrated memory controllers that support Rambus DRAM (RDRAM) that operate at two thirds of the microprocessor's clock frequency, or 800 MHz at 1.2 GHz. Compaq designed custom memory controllers for the Alpha 21364, giving them capabilities not found in standard RDRAM memory controllers such as having all the 128 pages open, reducing the access latency to those pages; and proprietary fault-tolerant features.
Each memory controller provides five RDRAM channels that support PC800 Rambus inline memory modules (RIMMs). Four of the channels are used to provide memory, while the fifth is used to provide RAID-like redundancy. Each channel is 16 bits wide, operates at 400 MHz and transfers data on both the rising and falling edges of the clock signal (double data rate) for a transfer rate of 800 MT/s, yielding 1.6 GB/s of bandwidth. The total memory bandwidth of the eight channels is 12.8 GB/s.
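The per-channel and aggregate figures follow directly from the channel width and transfer rate. The short sketch below simply reproduces that arithmetic; the variable names are illustrative only.

```c
/* Reproduces the RDRAM bandwidth arithmetic quoted above. */
#include <stdio.h>

int main(void)
{
    const double width_bits = 16.0;    /* channel width in bits */
    const double transfers  = 800e6;   /* 400 MHz clock, double data rate -> 800 MT/s */
    const double channels   = 8.0;     /* four data channels per controller, two controllers */

    double per_channel = width_bits / 8.0 * transfers;  /* bytes per second: 1.6 GB/s */
    double total       = per_channel * channels;        /* 12.8 GB/s across eight channels */

    printf("per channel: %.1f GB/s, total: %.1f GB/s\n",
           per_channel / 1e9, total / 1e9);
    return 0;
}
```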
Cache coherence is provided by the memory controllers. Each memory controller has a cache coherence engine. The Alpha 21364 uses a directory cache coherence scheme where part of the memory is used to store Modified, Exclusive, Shared, Invalid (MESI) coherency data.
R-box
The R-box contains the network router. The network router connected the microprocessor to other microprocessors using four ports named North, South, East and West. Each port consisted of two 39-bit unidirectional links operating at 800 MHz. 32 bits were for data and 7 bits were for ECC. The network router also has a fifth port, used for I/O. This port connects to an IO7 application specific integrated circuit (ASIC), which was a bridge to an AGP 4x channel and two PCI-X buses. The I/O port consisted of two unidirectional 32-bit links operating at 200 MHz, yielding a peak bandwidth of 3.2 GB/s. The I/O port link operated at a quarter of the clock frequency to simplify the design of the I/O ASIC.
The Alpha 21364 can connect to as many as 127 other microprocessors using one of two network topologies: a shuffle or a 2D torus. The shuffle topology had more direct paths to other microprocessors, reducing latency and therefore improving performance, but by its nature was limited to connecting up to eight microprocessors. The 2D torus topology enabled the network to feature up to 128 microprocessors.
In multiprocessing systems, each microprocessor is a node with its own memory. Accessing the memory of other nodes is possible, but with a latency. The latency increases with distance, thus the Alpha 21364 implements non-uniform memory access (NUMA) multiprocessing. I/O is also distributed in an identical fashion. An Alpha 21364 microprocessor in a multiprocessing system did not have to have its RIMM slots populated with memory or its I/O port populated with devices. It could use another microprocessor's memory and I/O.
Fault tolerance
The Alpha 21364 could operate in lock-step for fault-tolerant computers. This feature was a result of Compaq's decision to migrate Tandem's Himalaya fault-tolerant servers from the MIPS architecture to Alpha. The machines, however, never used the microprocessor, as the decision to phase out the Alpha in favor of the Itanium was made before the Alpha 21364 became available.
Fabrication
The Alpha 21364 contained 152 million transistors. The die measured 21.1 mm by 18.8 mm for an area of 397 mm². It was fabricated by International Business Machines (IBM) in their 0.18 µm, seven-level copper complementary metal–oxide–semiconductor (CMOS) process. It was packaged in a 1,443-land flip-chip land grid array (LGA). It used a 1.65 V power supply and a 1.5 V external interface for a maximum power dissipation of 155 W at 1.25 GHz.
Alpha 21364A
The Alpha 21364A, code-named EV79, previously EV78, was a further development of the Alpha 21364. It was intended to be the last Alpha microprocessor developed. Scheduled to be introduced in 2004, it was cancelled on 23 October 2003, with HP citing performance and schedule issues as reasons. A replacement, the EV7z, was announced on the same day.
A prototype of the microprocessor was presented by Hewlett-Packard at the International Solid State Circuits Conference in February 2003. It operated at 1.45 GHz, had a die area of 251 mm², used a 1.2 V power supply, and dissipated 100 W (estimated).
The Alpha 21364A was to have improved upon the Alpha 21364 by featuring higher clock frequencies in the range of ~1.6 to ~1.7 GHz and support for 1066 Mbit/s RDRAM memory. It was to be fabricated by IBM in their 0.13 µm silicon on insulator (SOI) process. As a result of the more advanced process, there were reductions in die size, power supply voltage (1.2 V compared to 1.65 V), and in power consumption and dissipation.
EV7z
The EV7z was a further development of the Alpha 21364. It was the last Alpha microprocessor developed and introduced. The EV7z became known on 23 October 2003 when HP announced they had cancelled the Alpha 21364A and would be replacing it with the EV7z. The EV7z was introduced on 16 August 2004 when the only computer using the microprocessor, AlphaServer GS1280, was introduced. It was discontinued on 27 April 2007 when the computer it was featured in was discontinued. It operated at 1.3 GHz, supported PC1066 RIMMs and was fabricated in the same 0.18 µm process as the Alpha 21364. Compared to the Alpha 21364, the EV7z was 14 to 16 percent faster, but was still slower than the Alpha 21364A it replaced, which was estimated to outperform the Alpha 21364 by 25 percent at 1.5 GHz.
Notes
References
"EV7 AlphaServers unleashed as chip line heads into sunset". (21 January 2003). The Register.
Bannon, Peter (4 January 2002). "Alpha 21364 (EV7)".
Compaq Computer Corporation. Compiler Writer’s Guide for the 21264/21364, Revision 2.0, January 2002.
Diefendorff, Keith (6 December 1999). "Compaq Chooses SMT for Alpha". Microprocessor Report, Volume 13, Number 16.
Glaskowsky, Peter N. (24 March 2003). "Moore, Moore and More at ISSCC". Microprocessor Report.
Grodstein, Joel; et al. (2002). "Power and CAD considerations for the 1.75Mbyte, 1.2GHz L2 cache on the Alpha 21364 CPU". GLVLSI '02.
Gwenapp, Linley (26 October 1998). "Alpha 21364 to Ease Memory Bottleneck". Microprocessor Report.
Hewlett-Packard Development Company, L.P. (20 January 2004). HP Introduces Most Powerful Generation of AlphaServer Systems. Press release.
Hewlett-Packard Development Company, L.P. (16 August 2004). HP Expands UNIX Server and StorageWorks Portfolios to Offer Customers Greater Value and Flexibility on Standards-based Platforms. Press release.
Jain, A. et al. (2001). "A 1.2 GHz Alpha microprocessor with 44.8 GB/s chip pin bandwidth". ISSCC Digest of Technical Papers.
Krewell, Kevin (24 March 2003). "EV7 Stresses Memory Bandwidth". Microprocessor Report.
Mukherjee, Shubhendu S.; Bannon, Peter; Lang, Steve; Spink, Aaron; Webb, David (2002). "The Alpha 21364 Network Architecture". IEEE Micro. pp. 26–35.
Seznec, Andre; et al. (25–29 May 2002). "Design Tradeoffs for the Alpha EV8 Conditional Branch Predictor". Proceedings of the 29th IEEE-ACM International Symposium on Computer Architecture.
Shannon, Terry (24 October 2003). "HP is Dealt a Delay in its HP-UX OS and Alpha Processor Roadmap". Shannon Knows HPC, Volume 10, Number 51.
Further reading
Kowaleski, J.A., Jr. et al. (2003). "Implementation of an Alpha microprocessor in SOI". ISSCC Digest of Technical Papers. pp. 248–249, 491.
Tsuk, M. et al. (2001). "Modeling and measurement of the Alpha 21364 package". Electrical Performance of Electrical Packaging. pp. 283–286.
Xanthopoulos, T. et al. (2001). "The design and analysis of the clock distribution network for a 1.2GHz Alpha microprocessor". ISSCC Digest of Technical Papers. pp. 402–403.
DEC microprocessors
Superscalar microprocessors
64-bit microprocessors |
194467 | https://en.wikipedia.org/wiki/Parity%20bit | Parity bit | A parity bit, or check bit, is a bit added to a string of binary code. Parity bits are a simple form of error detecting code. Parity bits are generally applied to the smallest units of a communication protocol, typically 8-bit octets (bytes), although they can also be applied separately to an entire message string of bits.
The parity bit ensures that the total number of 1-bits in the string is even or odd. Accordingly, there are two variants of parity bits: even parity bit and odd parity bit. In the case of even parity, for a given set of bits, the occurrences of bits whose value is 1 are counted. If that count is odd, the parity bit value is set to 1, making the total count of occurrences of 1s in the whole set (including the parity bit) an even number. If the count of 1s in a given set of bits is already even, the parity bit's value is 0. In the case of odd parity, the coding is reversed. For a given set of bits, if the count of bits with a value of 1 is even, the parity bit value is set to 1 making the total count of 1s in the whole set (including the parity bit) an odd number. If the count of bits with a value of 1 is odd, the count is already odd so the parity bit's value is 0. Even parity is a special case of a cyclic redundancy check (CRC), where the 1-bit CRC is generated by the polynomial x+1.
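As an illustration of the two conventions just described, the short routine below counts the 1 bits in a byte and returns the corresponding parity bit. It is a generic sketch rather than part of any particular protocol, and the example value is arbitrary.

```c
/* Sketch: computing an even or odd parity bit for one byte. */
#include <stdio.h>

/* Returns the parity bit to append so that the total count of 1s
 * (data plus parity bit) is even (even_parity != 0) or odd. */
static unsigned parity_bit(unsigned char data, int even_parity)
{
    unsigned ones = 0;
    for (int i = 0; i < 8; i++)
        ones += (data >> i) & 1u;        /* count the 1 bits */
    unsigned is_odd = ones & 1u;         /* 1 if the count of 1s is odd */
    return even_parity ? is_odd : !is_odd;
}

int main(void)
{
    unsigned char b = 0x29;              /* binary 00101001: three 1 bits */
    printf("even parity bit: %u\n", parity_bit(b, 1));  /* prints 1 */
    printf("odd  parity bit: %u\n", parity_bit(b, 0));  /* prints 0 */
    return 0;
}
```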
If a bit is present at any point otherwise dedicated to a parity bit but is not used for parity, it may be referred to as a mark parity bit if the parity bit is always 1, or a space parity bit if the bit is always 0. In such cases where the value of the bit is constant, it may be called a stick parity bit even though its function has nothing to do with parity. The function of such bits varies with the system design, but examples of functions for such bits include timing management or identification of a packet as being of data or address significance. If its actual bit value is irrelevant to its function, the bit amounts to a don't-care term.
Parity
In mathematics parity can refer to the evenness or oddness of an integer, which, when written in its binary form, can be determined just by examining only its least significant bit.
In information technology parity refers to the evenness or oddness, given any set of binary digits, of the number of those bits with value one. Because parity is determined by the state of every one of the bits, this property of parity—being dependent upon all the bits and changing its value from even to odd parity if any one bit changes—allows for its use in error detection and correction schemes.
In telecommunications the parity referred to by some protocols is for error-detection. The transmission medium is preset, at both end points, to agree on either odd parity or even parity. For each string of bits ready to transmit (data packet) the sender calculates its parity bit, zero or a one, to make it conform to the agreed parity, even or odd. The receiver of that packet first checks that the parity of the packet as a whole is in accordance with the preset agreement, then, if there was a parity error in that packet, requests a re-transmission of that packet.
In computer science the parity stripe or parity disk in a RAID array provides error-correction. Parity bits are written at the rate of one parity bit per n bits, where n is the number of disks in the array. When a read error occurs, each bit in the error region is recalculated from its set of n bits. In this way, using one parity bit creates redundancy for a region from the size of one bit to the size of one disk. See below.
In electronics, transcoding data with parity can be very efficient, as XOR gates output what is equivalent to a check bit that creates an even parity, and XOR logic design easily scales to any number of inputs. XOR and AND structures comprise the bulk of most integrated circuitry.
Error detection
If an odd number of bits (including the parity bit) are transmitted incorrectly, the parity bit will be incorrect, thus indicating that a parity error occurred in the transmission. The parity bit is only suitable for detecting errors; it cannot correct any errors, as there is no way to determine which particular bit is corrupted. The data must be discarded entirely, and re-transmitted from scratch. On a noisy transmission medium, successful transmission can therefore take a long time, or even never occur. However, parity has the advantage that it uses only a single bit and requires only a number of XOR gates to generate. See Hamming code for an example of an error-correcting code.
Parity bit checking is used occasionally for transmitting ASCII characters, which have 7 bits, leaving the 8th bit as a parity bit.
For example, the parity bit can be computed as follows. Assume Alice and Bob are communicating and Alice wants to send Bob the simple 4-bit message 1001.
This mechanism enables the detection of single bit errors, because if one bit gets flipped due to line noise, there will be an incorrect number of ones in the received data. In the two examples above, Bob's calculated parity value matches the parity bit in its received value, indicating there are no single bit errors. Consider the following example with a transmission error in the second bit using XOR:
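A minimal sketch of this scenario is shown below, assuming even parity and the 4-bit message 1001 described above; the flipped second bit and the undetected two-bit error mirror the surrounding description, and the bit layout is illustrative.

```c
/* Sketch of the transmission example: the 4-bit message 1001 is sent with an
 * even parity bit, then one (and later two) bits are flipped in transit. */
#include <stdio.h>

/* Number of 1 bits modulo 2: generates the even-parity bit and checks it. */
static unsigned ones_mod2(unsigned bits, int nbits)
{
    unsigned ones = 0;
    for (int i = 0; i < nbits; i++)
        ones += (bits >> i) & 1u;
    return ones & 1u;
}

int main(void)
{
    unsigned msg  = 0x9;                       /* Alice's message 1001 */
    unsigned p    = ones_mod2(msg, 4);         /* even-parity bit: 0 (two 1s already) */
    unsigned sent = (msg << 1) | p;            /* transmitted as 10010 */

    unsigned received = sent ^ (1u << 3);      /* noise flips the second bit: 11010 */
    if (ones_mod2(received, 5) != 0)           /* Bob re-checks parity over all 5 bits */
        printf("single-bit error detected\n");

    unsigned received2 = received ^ (1u << 2); /* a second bit also flips: 11110 */
    if (ones_mod2(received2, 5) == 0)
        printf("two-bit error goes undetected\n"); /* parity still looks correct */
    return 0;
}
```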
There is a limitation to parity schemes. A parity bit is only guaranteed to detect an odd number of bit errors. If an even number of bits have errors, the parity bit records the correct number of ones, even though the data is corrupt. (See also error detection and correction.) Consider the same example as before with an even number of corrupted bits:
Bob observes even parity, as expected, thereby failing to catch the two bit errors.
Usage
Because of its simplicity, parity is used in many hardware applications where an operation can be repeated in case of difficulty, or where simply detecting the error is helpful. For example, the SCSI and PCI buses use parity to detect transmission errors, and many microprocessor instruction caches include parity protection. Because the I-cache data is just a copy of main memory, it can be disregarded and re-fetched if it is found to be corrupted.
In serial data transmission, a common format is 7 data bits, an even parity bit, and one or two stop bits. This format accommodates all the 7-bit ASCII characters in an 8-bit byte. Other formats are possible; 8 bits of data plus a parity bit can convey all 8-bit byte values.
In serial communication contexts, parity is usually generated and checked by interface hardware (e.g., a UART) and, on reception, the result made available to a processor such as the CPU (and so too, for instance, the operating system) via a status bit in a hardware register in the interface hardware. Recovery from the error is usually done by retransmitting the data, the details of which are usually handled by software (e.g., the operating system I/O routines).
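On POSIX systems, the serial parity setting described above is configured through the termios interface. The sketch below selects 7 data bits, even parity and one stop bit (7E1); it assumes `fd` is an already-opened serial port descriptor and abbreviates error handling.

```c
/* Sketch: configuring a POSIX serial port for 7 data bits, even parity,
 * one stop bit (7E1). fd must refer to an already-opened serial device. */
#include <termios.h>

int set_7e1(int fd)
{
    struct termios tio;
    if (tcgetattr(fd, &tio) != 0)
        return -1;
    tio.c_cflag |= PARENB;    /* enable parity generation and checking */
    tio.c_cflag &= ~PARODD;   /* even parity (PARODD would select odd) */
    tio.c_cflag &= ~CSIZE;
    tio.c_cflag |= CS7;       /* 7 data bits */
    tio.c_cflag &= ~CSTOPB;   /* one stop bit */
    tio.c_iflag |= INPCK;     /* enable input parity checking */
    return tcsetattr(fd, TCSANOW, &tio);
}
```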
When the total number of transmitted bits, including the parity bit, is even, odd parity has the advantage that the all-zeros and all-ones patterns are both detected as errors. If the total number of bits is odd, only one of the patterns is detected as an error, and the choice can be made based on which is expected to be the more common error.
RAID array
Parity data is used by RAID arrays (redundant array of independent/inexpensive disks) to achieve redundancy. If a drive in the array fails, remaining data on the other drives can be combined with the parity data (using the Boolean XOR function) to reconstruct the missing data.
For example, suppose two drives in a three-drive RAID 5 array contained the following data:
To calculate parity data for the two drives, an XOR is performed on their data:
The resulting parity data, 10111001, is then stored on Drive 3.
Should any of the three drives fail, the contents of the failed drive can be reconstructed on a replacement drive by subjecting the data from the remaining drives to the same XOR operation. If Drive 2 were to fail, its data could be rebuilt using the XOR results of the contents of the two remaining drives, Drive 1 and Drive 3:
as follows:
The result of that XOR calculation yields Drive 2's contents. 11010100 is then stored on Drive 2, fully repairing the array.
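A minimal sketch of this reconstruction is given below. It assumes Drive 1 holds 01101101, which is consistent with the parity value 10111001 and Drive 2's contents 11010100 quoted above; the values are otherwise illustrative.

```c
/* Sketch of the RAID parity example: Drive1 XOR Drive2 gives the parity byte,
 * and a failed drive is rebuilt by XORing the survivors with the parity. */
#include <stdio.h>

int main(void)
{
    unsigned char drive1 = 0x6D;             /* 01101101 (assumed example data)   */
    unsigned char drive2 = 0xD4;             /* 11010100 (quoted in the text)     */
    unsigned char parity = drive1 ^ drive2;  /* 10111001 = 0xB9, stored on Drive 3 */
    printf("parity : %02X\n", parity);

    /* Drive 2 fails: rebuild it from the surviving drive and the parity. */
    unsigned char rebuilt = drive1 ^ parity; /* 11010100 = 0xD4 again */
    printf("rebuilt: %02X\n", rebuilt);
    return 0;
}
```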
XOR logic is also equivalent to even parity (because a XOR b XOR c XOR ... may be treated as XOR(a,b,c,...) which is an n-ary operator which is true if and only if an odd number of arguments are true). So the same XOR concept above applies similarly to larger RAID arrays with parity, using any number of disks. In the case of a RAID 3 array of 12 drives, 11 drives participate in the XOR calculation shown above and yield a value that is then stored on the dedicated parity drive.
Extensions and variations on the parity bit mechanism, such as "double", "dual", or "diagonal" parity, are used in RAID-DP.
History
A parity track was present on the first magnetic tape data storage in 1951. Parity in this form, applied across multiple parallel signals, is known as a transverse redundancy check. This can be combined with parity computed over multiple bits sent on a single signal, a longitudinal redundancy check. In a parallel bus, there is one longitudinal redundancy check bit per parallel signal.
Parity was also used on at least some paper-tape (punched tape) data entry systems (which preceded magnetic tape systems). On the systems sold by British company ICL (formerly ICT) the paper tape had 8 hole positions running across it, with the 8th being for parity. 7 positions were used for the data, e.g., 7-bit ASCII. The 8th position had a hole punched in it depending on the number of data holes punched.
See also
BIP-8
Parity function
Single-event upset
8-N-1
Check digit
References
External links
Different methods of generating the parity bit, among other bit operations
Binary arithmetic
Data transmission
Error detection and correction
Parity (mathematics)
RAID
fr:Somme de contrôle#Exemple : bit de parité |
2093019 | https://en.wikipedia.org/wiki/Palamedes%20%28mythology%29 | Palamedes (mythology) | Palamedes () was a Euboean prince as the son of King Nauplius in Greek mythology.
He joined the rest of the Greeks in the expedition against Troy. He is also credited with several inventions: Pausanias in his Description of Greece (2.20.3) says that in Corinth there is a Temple of Fortune to which Palamedes dedicated the dice that he had invented; Plato in The Republic (Book 7) remarks (through the character of Socrates) that Palamedes claimed to have invented numbers; and others note him in connection with the alphabet.
Family
Palamedes's mother was either Clymene (daughter of King Catreus of Crete), Hesione, or Philyra. He was the brother of Oeax and Nausimedon.
Mythology
Although he is a major character in some accounts of the Trojan War, Palamedes is not mentioned in Homer's Iliad.
After Paris took Helen to Troy, Agamemnon sent Palamedes to Ithaca to retrieve Odysseus, who had promised to defend the marriage of Helen and Menelaus. Odysseus did not want to honor his oath, so he plowed his fields with a donkey and an ox both hitched to the same plow, so the beasts of different sizes caused the plow to pull chaotically. Palamedes guessed what was happening and put Odysseus' son, Telemachus, in front of the plow. Odysseus stopped working and revealed his sanity.
The ancient sources show differences regarding the details of how Palamedes met his death. Odysseus never forgave Palamedes for ruining his attempt to stay out of the Trojan War. When Palamedes advised the Greeks to return home, Odysseus hid gold in his tent and wrote a fake letter purportedly from Priam. The letter was found and the Greeks accused him of being a traitor. Palamedes was stoned to death by the Greek army. According to other accounts, Odysseus and Diomedes drowned him during a fishing expedition. Still another version relates that he was lured into a well in search of treasure, and then was crushed by stones.
In ancient literature
Ovid discusses Palamedes' role in the Trojan War in the Metamorphoses. Palamedes' fate is described in Virgil's Aeneid. In the Apology, Plato describes Socrates as looking forward to speaking with Palamedes after death, and intimates in the Phaedrus that Palamedes authored a work on rhetoric. Euripides and many other dramatists have written dramas about his fate. The orator Gorgias also wrote a Defense of Palamedes, describing the defense speech that Palamedes gave when charged with treason.
Greek alphabet
Hyginus claims Palamedes created eleven letters of the Greek alphabet:
Reception
Vondel play
The major Dutch playwright Joost van den Vondel wrote in 1625 the play Palamedes, based on the Greek myth. The play had a clear topical political connotation: the unjust killing of Palamedes stands for the execution of the statesman Johan van Oldenbarnevelt six years earlier—which Vondel, like others in the Dutch Republic, considered a judicial murder. In Vondel's version, responsibility for Palamedes' killing is attributed mainly to Agamemnon; the play's harsh and tyrannical Agamemnon was clearly intended to portray Prince Maurits of Nassau. Authorities in Amsterdam found no difficulty in deciphering the political meanings behind Vondel's Classical allusions and imposed a heavy fine on the playwright.
20th century
In one modern account, The Luck of Troy by Roger Lancelyn Green, Palamedes was double-dealing with the Trojans.
Notes
References
Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website.
Dictys Cretensis, from The Trojan War. The Chronicles of Dictys of Crete and Dares the Phrygian translated by Richard McIlwaine Frazer, Jr. (1931-). Indiana University Press. 1966. Online version at the Topos Text Project.
D. R. Reinsch, "Die Palamedes-Episode in der Synopsis Chronike des Konstantinos Manasses und ihre Inspirationsquelle," in Byzantinische Sprachkunst. Studien zur byzantinischen Literatur gewidmet Wolfram Hoerandner zum 65. Geburtstag. Hg. v. Martin Hinterberger und Elisabeth Schiffer. Berlin-New York, Walter de Gruyter, 2007 (Byzantinisches Archiv, 20), 266-276.
Gaius Julius Hyginus, Fabulae from The Myths of Hyginus translated and edited by Mary Grant. University of Kansas Publications in Humanistic Studies. Online version at the Topos Text Project.
Gantz, Timothy, Early Greek Myth: A Guide to Literary and Artistic Sources, Johns Hopkins University Press, 1996, Two volumes: (Vol. 1), (Vol. 2).
Hard, Robin, The Routledge Handbook of Greek Mythology: Based on H.J. Rose's "Handbook of Greek Mythology", Psychology Press, 2004, . Google Books.
Pausanias, Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. . Online version at the Perseus Digital Library
Pausanias, Graeciae Descriptio. 3 vols. Leipzig, Teubner. 1903. Greek text available at the Perseus Digital Library.
Publius Ovidius Naso, Metamorphoses translated by Brookes More (1859-1942). Boston, Cornhill Publishing Co. 1922. Online version at the Perseus Digital Library.
Publius Ovidius Naso, Metamorphoses. Hugo Magnus. Gotha (Germany). Friedr. Andr. Perthes. 1892. Latin text available at the Perseus Digital Library.
Publius Vergilius Maro, Aeneid. Theodore C. Williams. trans. Boston. Houghton Mifflin Co. 1910. Online version at the Perseus Digital Library.
Publius Vergilius Maro, Bucolics, Aeneid, and Georgics. J. B. Greenough. Boston. Ginn & Co. 1900. Latin text available at the Perseus Digital Library.
External links
Palamedes at Greek Mythology Link
Achaean Leaders
Characters in Greek mythology
Greek mythological heroes
People executed by stoning |
7121456 | https://en.wikipedia.org/wiki/ExFAT | ExFAT | exFAT (Extensible File Allocation Table) is a file system introduced by Microsoft in 2006 and optimized for flash memory such as USB flash drives and SD cards. exFAT was proprietary until 28 August 2019, when Microsoft published its specification. Microsoft owns patents on several elements of its design.
exFAT can be used where NTFS is not a feasible solution (due to data-structure overhead), but where a greater file-size limit than that of the standard FAT32 file system (i.e. 4 GB) is required.
exFAT has been adopted by the SD Association as the default file system for SDXC cards larger than 32 GB.
History
exFAT was introduced in late 2006 as part of Windows CE 6.0, an embedded Windows operating system. Most of the vendors signing on for licences are manufacturers of embedded systems or device manufacturers that produce media formatted with exFAT. The entire File Allocation Table (FAT) family, exFAT included, is used for embedded systems because it is lightweight and is better suited for solutions that have low memory and low power requirements, and can be easily implemented in firmware.
Features
Because file size references are stored in eight bytes instead of four, the file size limit has increased to 16 EB (2^64 − 1 bytes, in practice bounded by the maximum volume size of about 128 PB), raised from 4 GB (2^32 − 1 bytes) in a standard FAT32 file system. Therefore, for the typical user, this enables seamless interoperability between Windows and other platforms for files in excess of 4 GB.
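As a rough, illustrative calculation (not taken from the specification), the difference a 64-bit size field makes can be shown as follows:

```python
# Illustrative arithmetic only: theoretical file-size limits implied by a 32-bit
# (FAT32) versus a 64-bit (exFAT) file-size field. Real limits are also bounded
# by volume size and cluster-chain constraints.
fat32_limit = 2**32 - 1   # bytes, about 4 GiB
exfat_limit = 2**64 - 1   # bytes, about 16 EiB

print(f"FAT32 maximum file size: {fat32_limit / 2**30:.2f} GiB")
print(f"exFAT maximum file size: {exfat_limit / 2**60:.2f} EiB")
```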
Other specifications, features, and requirements of the exFAT file system include:
Scalability to large disk sizes: ca. 128 PB (2^57 bytes) maximum, 512 TB (2^49 bytes) recommended maximum, raised from the 32-bit limit (2 TB for a sector size of 512 bytes) of standard FAT32 partitions.
Support for up to 2,796,202 files per directory. Microsoft documents a limit of 65,534 (2^16 − 2) files per sub-directory for their FAT32 implementation, but other operating systems have no special limit for the number of files in a FAT32 directory. FAT32 implementations in other operating systems allow an unlimited number of files up to the number of available clusters (that is, up to 268,304,373 files on volumes without long filenames).
Maximum number of files per volume raised to 4,294,967,285 (2^32 − 11), up from ca. 268 million in standard FAT32.
Free space allocation and delete performance improved due to introduction of a free space bitmap.
Timestamp granularity of 10 ms for Create and Modified times (down from 2 s of FAT, but not as fine as NTFS's 100 ns).
Timestamp granularity for Last Access time to double seconds (FAT had date only).
Timestamps come with a time zone marker in offset relative to UTC (starting with Vista SP2).
Optional support for access control lists (not currently supported in Windows Desktop/Server versions).
Optional support for TexFAT, a transactional file system standard (optionally WinCE activated function, not supported in Windows Desktop/Server versions).
Boundary alignment offset for the FAT table.
Boundary alignment offset for the data region.
Provision for OEM-definable parameters to customize the file system for specific device characteristics.
Valid Data Length (VDL): through the use of two distinct lengths fields, one for "allocated space" and the other for "valid data", exFAT can preallocate a file without leaking data that was previously on-disk.
Cluster size up to 32 MB.
Metadata integrity with checksums.
Template based metadata structures.
Removal of the physical "." and ".." directory entries that appear in subdirectories.
exFAT no longer stores the short 8.3 filename references in directory structure and natively uses extended file names, whereas legacy FAT versions implement extended file names through the VFAT extension.
Windows XP requires update KB955704 to be installed and Windows Vista requires its SP1 or SP2 be installed. Windows Vista is unable to use exFAT drives for ReadyBoost. Windows 7 removes this limitation, enabling ReadyBoost caches larger than 4 GB. Windows 10 only allows formatting exFAT on volumes sized 32 GB or larger with the default user interface, and FAT32 format is suggested for lower sizes; command-line utilities still accept a full range of file systems and allocation unit sizes.
The standard exFAT implementation is not journaled and only uses a single file allocation table and free space map. FAT file systems instead used alternating tables, as this allowed recovery of the file system if the media was ejected during a write (which occurs frequently in practice with removable media). The optional TexFAT component adds support for additional backup tables and maps, but may not be supported.
The exFAT format allows individual files larger than 4 GB, facilitating long continuous recording of HD video which can exceed the 4 GB limit in less than an hour. Current digital cameras using FAT32 will break the video files into multiple segments of approximately 2 or 4 GB.
Efficiency
With the increase in capacity and in the amount of data being transferred, write operations need to be made more efficient. SDXC cards running at UHS-I have a minimum guaranteed write speed of 10 MB/s, and exFAT plays a role in achieving that throughput by reducing the file-system overhead of cluster allocation. This is achieved through the introduction of a separate cluster bitmap in which the reservation state of each cluster (reserved/free) is tracked by only one bit, reducing writes to the much larger FAT that originally served this purpose.
Additionally, a single bit in the directory record indicates that the file is contiguous (unfragmented), telling the exFAT driver to ignore the FAT. This optimization is analogous to an extent in other file systems, except that it only applies to whole files, as opposed to contiguous parts of files.
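A minimal sketch of how such a per-cluster allocation bitmap can be maintained is shown below; the class and method names are illustrative and do not reflect the exFAT on-disk format.

```python
# Minimal sketch of a free-space (allocation) bitmap: one bit per cluster,
# 1 = allocated, 0 = free. Illustrative only; not the exFAT on-disk layout.

class AllocationBitmap:
    def __init__(self, cluster_count):
        self.bits = bytearray((cluster_count + 7) // 8)

    def is_allocated(self, cluster):
        return bool(self.bits[cluster // 8] & (1 << (cluster % 8)))

    def set_allocated(self, cluster, allocated):
        if allocated:
            self.bits[cluster // 8] |= 1 << (cluster % 8)
        else:
            self.bits[cluster // 8] &= 0xFF ^ (1 << (cluster % 8))

    def find_free_run(self, length):
        """Return the first cluster of a contiguous free run, or -1 if none exists."""
        run_start, run_len = 0, 0
        for cluster in range(len(self.bits) * 8):
            if self.is_allocated(cluster):
                run_start, run_len = cluster + 1, 0
            else:
                run_len += 1
                if run_len == length:
                    return run_start
        return -1

# Allocating a contiguous run touches only the bitmap; if the file is then flagged
# as contiguous in its directory record, the much larger FAT need not be updated.
bitmap = AllocationBitmap(cluster_count=1024)
start = bitmap.find_free_run(8)
for cluster in range(start, start + 8):
    bitmap.set_allocated(cluster, True)
```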
Adoption
exFAT is supported in Windows XP and Windows Server 2003 with update KB955704, Windows Embedded CE 6.0, Windows Vista with Service Pack 1, Windows Server 2008, Windows 7, Windows 8, Windows Server 2008 R2 (except Windows Server 2008 Server Core), Windows 10, macOS starting from 10.6.5, Linux via FUSE or natively starting from kernel 5.4, and iPadOS starting from 13.1.
Companies can integrate exFAT into a specific group of consumer devices, including cameras, camcorders, and digital photo frames for a flat fee. Mobile phones, PCs, and networks have a different volume pricing model.
exFAT is supported in a number of media devices such as modern flat-panel TVs, media centers, and portable media players.
exFAT is the official file system of SDXC cards. This implies that any device not supporting exFAT (such as the Nintendo 3DS) might not legally advertise itself as SDXC compatible, despite supporting such cards as mass storage devices per se.
Some vendors of other flash media, including USB pen drives, compact flash (CF) and solid-state drives (SSD) ship some of their high-capacity media pre-formatted with the exFAT file system. For example, Sandisk ships their 256 GB CF cards as exFAT.
Microsoft has entered into licensing agreements with BlackBerry, Panasonic, Sanyo, Sony, Canon, Aspen Avionics, Audiovox, Continental, Harman, LG Automotive and BMW.
Mac OS X Snow Leopard 10.6.5 and later can create, read, write, verify, and repair exFAT file systems.
Linux has support for exFAT via FUSE since 2009. In 2013, Samsung Electronics published a Linux driver for exFAT under GPL.
On 28 August 2019, Microsoft published the exFAT specification and released the patent to the OIN members. The Linux kernel introduced native exFAT support with the 5.4 release.
ChromeOS can read and write to exFAT partitions.
Technical specialities
File name lookup
exFAT employs a filename hash-based lookup phase to speed certain cases, which is described in US Patent 8321439, Quick File Name Lookup Using Name Hash. Appendix A of the document contains details helpful in understanding the file system.
File and cluster pre-allocation
Like NTFS, exFAT can pre-allocate disk space for a file by just marking arbitrary space on disk as 'allocated'. For each file, exFAT uses two separate 64-bit length fields in the directory: the Valid Data Length (VDL) which indicates the real size of the file, and the physical data length.
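A minimal sketch of the two-length idea is shown below (the field names are illustrative, not the exFAT directory-record layout): reads beyond the valid data length must return zeros rather than whatever stale bytes happen to occupy the pre-allocated clusters.

```python
# Sketch of pre-allocation with a Valid Data Length (VDL): 'allocated_length' is the
# physical space reserved on disk, 'valid_data_length' is how much has actually been
# written. Reads past the VDL return zeros so stale on-disk data is never leaked.
# Illustrative only; names do not reflect the exFAT directory-record layout.

class PreallocatedFile:
    def __init__(self, allocated_length):
        self.allocated_length = allocated_length   # physical data length
        self.valid_data_length = 0                 # VDL: bytes actually written
        self._storage = bytearray(allocated_length)

    def write(self, offset, data):
        self._storage[offset:offset + len(data)] = data
        self.valid_data_length = max(self.valid_data_length, offset + len(data))

    def read(self, offset, size):
        valid = max(0, min(size, self.valid_data_length - offset))
        return bytes(self._storage[offset:offset + valid]) + b"\x00" * (size - valid)

f = PreallocatedFile(allocated_length=4096)  # pre-allocate 4 KiB up front
f.write(0, b"header")
assert f.read(0, 6) == b"header"
assert f.read(100, 4) == b"\x00" * 4         # beyond the VDL: zeros, not stale data
```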
To improve the allocation of cluster storage for a new file, Microsoft incorporated a method to pre-allocate contiguous clusters and bypass updating the FAT table; on December 10, 2013, the US patent office granted patent US8606830 for this method. One feature of exFAT (used in the exFAT implementation within embedded systems) provides atomic transactions for the multiple steps of updating the file system metadata. The feature, called Transaction Safe FAT, or TexFAT, was granted a patent by the US patent office under US7613738 on November 3, 2009.
Directory file set
exFAT, like the rest of the FAT family of file systems, does not use indexes for file names, unlike NTFS, which uses B-trees for file searching. When a file is accessed, the directory must be searched sequentially until a match is found. For file names shorter than 16 characters in length, one file name record is required, and the entire file is represented by three 32-byte directory records. This is called a directory file set, and a 256 MB sub-directory can hold up to 2,796,202 file sets. (If files have longer names, this number will decrease, but this is the maximum based on the minimum three-record file set.) To help improve the sequential searching of the directories (including the root), a hash value of the file name is derived for each file and stored in its directory record. When searching for a file, the file name is first converted to upper case using the upcase table (file names are case insensitive) and then hashed using a proprietary patented algorithm into a 16-bit (2-byte) hash value. Each record in the directory is searched by comparing the hash value; when a match is found, the file names are compared to ensure that the proper file was located in case of hash collisions. This improves performance because only 2 bytes have to be compared for each record until the intended file is located, which significantly reduces CPU cycles since most file names are longer than 2 bytes.
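A sketch of this lookup flow is shown below. The 16-bit hash follows the rotate-right-and-add construction described in the published exFAT specification, but the code is only illustrative (for instance, it uses a generic upper-casing step instead of the volume's up-case table) and should be checked against the specification.

```python
# Sketch of the name-hash lookup: names are up-cased, hashed to 16 bits, and the
# cheap 2-byte hash comparison filters records before the full name comparison.
# Illustrative only; real drivers use the volume's up-case table, not str.upper().

def name_hash(filename):
    """16-bit rotate-right-and-add hash over the up-cased UTF-16LE file name."""
    h = 0
    for byte in filename.upper().encode("utf-16-le"):
        h = (((0x8000 if h & 1 else 0) | (h >> 1)) + byte) & 0xFFFF
    return h

def lookup(directory_records, wanted):
    """directory_records: illustrative list of dicts with 'name' and 'name_hash'."""
    wanted_hash = name_hash(wanted)
    for record in directory_records:
        if record["name_hash"] != wanted_hash:        # cheap 2-byte rejection
            continue
        if record["name"].upper() == wanted.upper():  # resolve rare hash collisions
            return record
    return None

records = [{"name": "Report.txt", "name_hash": name_hash("Report.txt")}]
assert lookup(records, "REPORT.TXT") is not None
```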
Metadata and checksums
exFAT introduces metadata integrity through the use of checksums. There are three checksums currently in use.
The Volume Boot Record (VBR) is a 12-sector region that contains the boot records, BIOS Parameter Block (BPB), OEM parameters and the checksum sector. (There are two VBR-type regions: the main VBR and the backup VBR.) The checksum sector holds a checksum of the previous 11 sectors, with the exception of three bytes in the boot sector (Flags and percent used). This provides integrity of the VBR by determining whether the VBR was modified. The most common cause of modification would be a boot-sector virus, but the check would also catch any other corruption of the VBR.
A second checksum is used for the upcase table. This is a static table and should never change. Any corruption in the table could prevent files from being located because this table is used to convert the filenames to upper case when searching to locate a file.
The third checksum is in the directory file sets. Multiple directory records are used to define a single file, and together they are called a file set. This file set contains metadata including the file name, time stamps, attributes, address of the first cluster holding the data, and the file lengths. A checksum is taken over the entire file set, and a mismatch would occur if the directory file set were accidentally or maliciously changed.
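A sketch of such an entry-set checksum is shown below; it is patterned after the 16-bit entry-set checksum in the published exFAT specification (the bytes that will hold the checksum itself are skipped), but the details should be verified against the specification.

```python
# Illustrative 16-bit checksum over a directory file set (a sequence of 32-byte
# records). The bytes reserved for the checksum itself are skipped so the stored
# value can be recomputed and compared whenever the file set is read.

def entry_set_checksum(entries):
    checksum = 0
    for index, byte in enumerate(entries):
        if index in (2, 3):   # checksum field within the first 32-byte record
            continue
        checksum = (((0x8000 if checksum & 1 else 0) | (checksum >> 1)) + byte) & 0xFFFF
    return checksum

# A mismatch between the stored and recomputed values signals accidental or
# malicious modification of the file set's metadata.
file_set = bytearray(96)                       # e.g. three 32-byte directory records
stored = entry_set_checksum(bytes(file_set))
file_set[40] ^= 0xFF                           # corrupt one metadata byte
assert entry_set_checksum(bytes(file_set)) != stored
```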
When the file system is mounted and the integrity check is conducted, these checksums are verified. Mounting also includes comparison of the version of the exFAT file system by the driver to make sure the driver is compatible with the file system it is trying to mount, and to make sure that none of the required directory records are missing (for example, the directory records for the upcase table and Allocation Bitmap are required, and the file system cannot run if they are missing). If any of these checks fail, the file system should not be mounted, although in certain cases it may mount read-only.
The file system provides extensibility through template-based metadata definitions using generic layouts and generic patterns.
Flash optimizations
exFAT contains a few features that, according to Microsoft, make it flash-friendly:
Boundary alignment for filesystem structures. The offsets for the FAT and the cluster heap are adjustable at format time, so that writes to these areas happen in as few flash blocks as possible.
An "OEM Parameters" field can be used to record features such as block size of the underlying storage. One single type for flash storage is pre-defined.
The lack of a journal, so that less data is written. (Although FAT32 also lacks a journal.)
The first feature requires support from the formatting software. Compliant implementations will follow existing offsets. The OEM parameter may be ignored. Implementations may also use TRIM to reduce wear.
Other implementations
Legal status
exFAT was a proprietary file system until 2019, when Microsoft released the specification and allowed OIN members to use its patents. Until then, the lack of documentation, along with the threat of a patent infringement lawsuit (as had happened previously when Microsoft sued various companies over the VFAT long file name patent, before it expired), hampered the development of free and open-source drivers for exFAT and led to a situation where Linux distributions could not even tell users how to get an exFAT driver. Accordingly, official exFAT support was effectively limited to Microsoft's own products and those of Microsoft's licensees. This, in turn, inhibited exFAT's adoption as a universal exchange format, as it was safer and easier for vendors to rely on FAT32 than to pay Microsoft or risk being sued.
Interoperability requires that certain results be achieved in a particular, predefined way (an algorithm). For exFAT, this potentially requires every implementation to perform certain procedures in exactly the same way as Microsoft's implementation. Some of the procedures used by Microsoft's implementation are patented, and these patents are owned by Microsoft. A license to use these algorithms can be purchased from Microsoft, and some companies including Apple, Google and Samsung have done so. However, in the open-source ecosystem, users have typically responded to vendors being unwilling to pay for patent licenses by procuring an implementation for themselves from unofficial sources. For example, this is what happened with LAME when MP3 patents were still valid. (Alternatively, the user may decide that the feature is unimportant to them.)
Regardless of whether open-source or not, Microsoft stated that "a license is required in order to implement exFAT and use it in a product or device." Unlicensed distribution of an exFAT driver would make the distributor liable for financial damages if the driver is found to have violated Microsoft's patents. While the patents may not be enforceable, this can only be determined through a legal process, which is expensive and time-consuming. It may also be possible to achieve the intended results without infringing Microsoft's patents. In October 2018, Microsoft released 60,000 patents to the Open Invention Network members for Linux systems, but exFAT patents were not initially included at the time. There was, however, discussion within Microsoft of whether Microsoft should allow exFAT in Linux devices, which eventually resulted in Microsoft publishing the official specification for open usage and releasing the exFAT patents to the OIN in August 2019.
List of implementations
A FUSE-based implementation named fuse-exfat, or exfat-fuse, with read/write support is available for FreeBSD, multiple Linux distributions, and older versions of Mac OS X. It supports TRIM. An implementation called exFATFileSystem, based on fuse-exfat, is available for AmigaOS 4.1.
A Linux kernel implementation by Samsung Electronics is available. It was initially released on GitHub unintentionally, and later released officially by Samsung in compliance with the GPLv2 in 2013. (This release does not make exFAT royalty-free, as licensing from Samsung does not remove Microsoft's patent rights.) A version of this driver was first incorporated into version 5.4 of the Linux kernel. A much newer version of the driver, with several bug fixes and improved reliability, was incorporated into kernel 5.7. Prior to its being merged into the Linux kernel, this newer version had already seen adoption on Android smartphones, and continued to be used on both Linux and Android thereafter.
Proprietary read/write solutions licensed and derived from the Microsoft exFAT implementation are available for Android, Linux, and other operating systems from Paragon Software Group and Tuxera.
XCFiles (from Datalight) is a proprietary, full-featured implementation, intended to be portable to 32-bit systems. Rtfs (from EBS Embedded Software) is a full-featured implementation for embedded devices.
Two experimental, unofficial solutions are available for DOS. The loadable USBEXFAT driver requires Panasonic's USB stack for DOS and only works with USB storage devices; the open-source EXFAT executable is an exFAT file system reader, and requires the HX DOS extender to work. There are no native exFAT real-mode DOS drivers, which would allow usage of, or booting from, exFAT volumes.
The renaming of exFAT file system labels is natively supported by Microsoft Windows Explorer, while Linux relies on the third-party exfatlabel tool.
See also
Design of the FAT file system
List of file systems
Comparison of file systems
Memory Stick XC
Universal Disk Format
Notes
References
External links
exFAT specification
File System Functionality Comparison of exFAT, FAT32, NTFS, UDF
exFAT overview in Windows Embedded CE
Transaction-Safe FAT File System (TexFAT) overview in Windows Mobile 6.5
Personal Storage : Opportunities and challenges for pocket-sized storage devices in the Windows world (PowerPoint presentation at WinHEC 2006)
exFAT File System Licensing
Reverse Engineering the Microsoft exFAT File System, SANS Institute.
, "Quick Filename Lookup Using Name Hash"; Microsoft Corp; contains exFAT specification revision 1.00.
, "Contiguous File Allocation In An Extensible File System"; Microsoft Corp.
exFAT ships on all SDXC Cards, SD Card Association
The Extended FAT file system:Differentiating with FAT32 file system, Linux Conference, October 2011.
Benefits of exFAT over FAT32
2006 software
Flash file systems
Windows CE
Windows disk file systems
File systems supported by the Linux kernel
Computer-related introductions in 2006
he:ExFAT |
492001 | https://en.wikipedia.org/wiki/Psion%20Series%205 | Psion Series 5 | The Psion Series 5 was a personal digital assistant (PDA) from Psion. It came in two main variants, the Series 5 (launched in 1997) and the Series 5mx (1999), the latter having a faster processor, clearer liquid crystal display (LCD), and updated software. There was also a rare Series 5mx Pro, which differed only in having the operating system (OS) loaded into random-access memory (RAM) and hence upgradeable. Ericsson marketed a version of the Series 5mx renamed as MC218.
The Psion Series 5 was a major upgrade from the Psion Series 3. A Psion Series 4 does not exist, owing to Psion's concern about tetraphobia in its Asian markets. The external appearance of the Psion Series 5 and the Psion Series 5mx is broadly similar, but their mainboards and other internal components were different and not interchangeable. The screens are not interchangeable either, because of different screen cables.
The Series 5 was the first to feature a unique sliding clamshell design, whereby the keyboard slides forward as the device opens to counterbalance the display and brace it so that touchscreen actuation does not topple the device, a feature mentioned in the granted European patent EP 0766166B1. This novel design approach was the work of Martin Riddiford, an industrial designer at Therefore Design. A simplified version of this design was also used in the Psion Revo.
The moving parts and hinges can wear out or break. The most serious common problem arose from a design fault in the screen cable: tooling holes caused needless stress by forcing extra bending of the cable at that point each time the Psion Series 5 was opened or closed, eventually leading to failure of the cable, which caused a serious display malfunction with horizontal lines appearing on the screen. The screen cable of the Psion Series 5 was more durable than that of the Psion Series 5mx. An after-market cable was available for the 5mx which aimed to eliminate this problem.
At its heart was the 32-bit ARM710-based CL-PS7110 central processing unit (CPU) running at 18 MHz (Series 5) or 36 MHz (5mx), with 4, 8, or 16 MB of RAM. It was powered by two AA batteries, typically giving 10–20 hours of use. The display is a touch-sensitive, backlit, half-VGA (640 × 240 pixel) LCD with 16 greyscales. The keyboard, which has a key pitch of 12.5 mm, is generally considered to be amongst the best for its size, with large-travel keys and touch-type capability. Both RS-232 and infra-red serial connections were provided. A speaker and microphone were also provided, giving dictation as well as music-playing ability. External storage was on CompactFlash.
The EPOC operating system, since renamed Symbian OS, was built in, along with application software including a word processor, spreadsheet, database, email, contact and diary manager, and Psion's Open Programming Language (OPL) for developing software. A Java virtual machine, the mobile browser STNC HitchHiker, and synchronizing software for Microsoft Windows were bundled with the 5mx as optional installations; later, the Executive Edition of the 5mx was bundled with various hardware and software extras, including version 3.62 of the Opera web browser and a mains electric outlet adapter. A wealth of third-party software was also available, including games, utilities, navigation, reference, communication, and productivity applications, and standard programming tools like Perl and Python.
An open-source software project, OpenPsion, formerly PsiLinux, supports Linux on the Psion 5mx and other Psion PDAs.
Psion's experience designing for this form factor and attention to detail made these machines a favourite with power users, many of whom kept using them, despite their age and the appearance of Symbian OS for mobile phones and other PDAs with more impressive specifications.
In 2017, a team of original Psion engineers created a startup company, Planet Computers, to make an Android device in a similar form factor, the Gemini (PDA). The Cosmo Communicator is a development of this device, enhancements including an external touchscreen on the rear of the clamshell lid, mainly to facilitate use as a mobile phone without opening. It is due for launch in mid-2019.
See also
Psion (company)
Psion Organiser
Psion Series 3
Psion Series 7
Gemini (PDA)
References
External links
The Tucows - EPOC software index.
3-Lib, Psion freeware and shareware library
www.openpsion.org - Linux for Psion Handhelds
Some more Series 5 pictures
Psion:the last computer A detailed history of Psion around the time of the Series 5
Unofficial Psion F.A.Q
Discussion of Psion 5mx with Some Useful Macros, Including a Macro5 Switcher Macro
Detailed Technical Specifications of Psion Series 5mx
Psion devices
Personal information managers
Computer-related introductions in 1997 |
47842868 | https://en.wikipedia.org/wiki/Suhayya%20Abu-Hakima | Suhayya Abu-Hakima | Suhayya "Sue" Abu-Hakima is a Canadian technology entrepreneur and inventor of artificial intelligence (AI) applications for wireless communication and computer security. Following the sale of her emergency and communications business to Genasys in October 2020, her company Amika Mobile has been known as Alstari Corporation. From 2007 until that sale she served as President and CEO of Amika Mobile Corporation; she had similarly founded and served as President and CEO of AmikaNow! from 1998 to 2004. A frequent speaker on entrepreneurship, AI, security, messaging and wireless, she has published and presented more than 125 professional papers and holds 30 international patents in the fields of content analysis, messaging, and security. She has been an adjunct professor in the School of Information Technology and Engineering at the University of Ottawa and has mentored many high school, undergraduate, and graduate students in science and technology (fields now commonly referred to as STEM). She was named to the Order of Ontario, the province's highest honor, in 2011 for innovation and her work in public safety and computer security technology.
Early life and education
Suhayya Abu-Hakima was born in the Middle East and grew up in Montreal, where her father and mother were both professors at McGill University. She has five siblings.
In 1982 Abu-Hakima graduated from McGill University with a bachelor's degree in engineering, specializing in computers and communications. At Carleton University in Ottawa, she earned her honours master's degree in engineering in 1988, with a focus on AI, submitting the thesis "Rationale: A Tool for Developing Knowledge-based Systems that Explain by Reasoning Explicitly". She earned her PhD in artificial intelligence in 1994; her PhD thesis, "Automating Model Acquisition by Fault Knowledge Re-use: Introducing the Diagnostic Remodeler Algorithm", was supervised by Professor Nick Dawes of the Computer and Systems Engineering Department and Professor Franz Oppacher of the School of Computer Science at Carleton University. Her pioneering AI publications are still cited decades later.
Career
Abu-Hakima began her career at Bell-Northern Research after receiving her bachelor's degree in 1982. Her accomplishments included the creation of "speech and hand-printed character recognition applications, the Invisible Terminal and AI for telecom". The Invisible Terminal, which she designed in 1983, facilitated wireless communication between mobile, pad-sized terminals and the main network.
In 1987 she joined the National Research Council Canada, where she developed AI applications for "real-world problems" in various fields, including aircraft engine diagnosis and telecommunications network management. She founded and led the Seamless Personal Information Networking laboratory, focussed on AI, in the NRC's Institute of Information Technology. In 1996 she co-invented a technology for unified messaging networks.
In July 1998 she formed her first startup company, AmikaNow!, which integrated AI applications in its wireless software for mobile phones and computers. In 2004, Entrust acquired AmikaNow! Corporation's content analysis and compliance technology, and Abu-Hakima served as Vice President of Content Technology for Entrust from 2004 to 2006.
In March 2007 Abu-Hakima co-founded another startup, Amika Mobile, in Ottawa. Using wireless technology created by AmikaNow!, the new company's flagship product, the Amika Mobility Server, auto-generates connections with mobile phones and computers to deliver converged emergency alerts over email, SMS, pop-up and voice, and to accept responses. As of 2015, the company had won more than two dozen international awards for its emergency mass notification systems. Amika Mobile won the US GOVIES awards for six consecutive years (2015–2020) before the acquisition by Genasys, and also won the ASIS 2015 Judge's Choice and Best Security Product awards.
In October 2020, after two successful exits, Abu-Hakima co-founded her third tech start-up, Alstari Corporation, which is currently in its early stages.
Other activities
Abu-Hakima holds 48 international patents in content analysis, messaging, security and converged emergency alerts. She is a frequent speaker on entrepreneurship, technology, security, emergency communications and AI, and has published and presented more than 125 professional papers.
She has been an adjunct professor in the School of Information Technology and Engineering at the University of Ottawa.
She is active as a community volunteer and mentor. In Ottawa, she is credited with the creation of more than 250 high-tech jobs focussed on AI, messaging, content analysis, and security, in both technical and business roles. She has served as Vice Chair and Director for the Ontario Centres of Excellence, and as an advisor on the Private Sector Advisory Board for the National Centres of Excellence (2007–2014). She has also served in an advisory capacity on the Big Data Institute Advisory Board at Dalhousie University (2013) and the strategic advisory board at McGill University (2014). In 2003 she was a member of the Prime Minister's Task Force on Women Entrepreneurs. She has mentored many high school, undergraduate and graduate students for education and careers in science and technology.
She is active in championing women in business and in STEM professions. In 2011 and again in 2017, she was called upon by the Government of Canada Operations Committee as a witness on federal procurement policies and how they affect small businesses and women entrepreneurs in STEM.
From 1994 to 1998 she was an editor of Computational Intelligence, the magazine of the Canadian Artificial Intelligence Organization.
Honors and awards
In 2015 and 2016 she was named one of Canada's Most Powerful Women: Top 100 in the Trailblazers and Trendsetters category by the WXN.
In 2014 she was named one of the Top 25 Women of Influence by the Women of Influence organization.
In 2012 she was a recipient of the Queen Elizabeth II Diamond Jubilee Medal, being noted as "a technology visionary and tireless volunteer".
In January 2011 she was named to the Order of Ontario.
In 2007 she was named an Outstanding Women Entrepreneur by the Canadian Advanced Technology Association.
Personal
Abu-Hakima is the mother of two children. She resides in Kanata, Ontario.
Selected bibliography
Selected patents
"Alert Broadcasting to Unconfigured Communications Devices" (2013) (with Kenneth E. Grigg)
"Alert broadcasting to a plurality of diverse communications devices" (2012)
"Collaborative Multi-Agent System for Dynamic Management of Electronic Services in a Mobile Global Network Environment" (2011) (with Kenneth E. Grigg)
"Processing of network content and services for mobile or fixed devices" (2011)
"Auto-discovery of diverse communications devices for alert broadcasting" (2010)
"Concept Identification System and Method for Use in Reducing and/or Representing Text Content of an Electronic Document" (2004)
"Apparatus and method for context-based highlighting of an electronic document" (2004)
"Apparatus and method for interpreting and intelligently managing electronic messages" (2002)
"Concept-based message/document viewer for electronic communications and internet searching" (2003)
References
External links
"The Entrepreneur: Heretic or hero of innovation?" (video) TEDx Kenata, March 26, 2015
"Government Operations Committee on Oct. 20th, 2011" Opening statement by Sue Abu-Hakima
"Government Operations Committee on Dec 7th, 2017" Opening Statement and Meeting Minutes by Suhayya Abu-Hakima.
"PM Meets With Business, Professional, Academic and Government Stakeholders" (image) November 16, 2010
Selected papers on CiteSeer
Year of birth missing (living people)
Businesspeople from Montreal
Businesspeople from Ottawa
Canadian inventors
Canadian women business executives
Carleton University alumni
McGill University Faculty of Engineering alumni
Members of the Order of Ontario
Living people
Women inventors
National Research Council (Canada) |
443870 | https://en.wikipedia.org/wiki/Bluecurve | Bluecurve | Bluecurve is a desktop theme for GNOME and KDE created by the Red Hat Artwork project. The main aim of Bluecurve was to create a consistent look throughout the Linux environment, and provide support for various Freedesktop.org desktop standards. It has been used in Red Hat Linux since version 8.0, and Fedora Core.
The Bluecurve window borders and GTK+ (widget) theme have been replaced by those from Clearlooks (the first in Fedora Core 4, the second in FC5). However, the old Bluecurve themes (windowing and widget) are still installed by default and can be selected in the theme manager. The Bluecurve icon set remains installed in Fedora 7, but has been replaced as the default by Echo.
There has been controversy surrounding the theme, especially the alterations to KDE, which were sufficient to cause developer Bernhard Rosenkraenzer to quit Red Hat, "mostly in mutual agreement — I don't want to work on crippling KDE, and they don't want an employee who admits RHL 8.0's KDE is crippleware." Others simply criticize it for giving the same look to both desktops, even though they are obviously different in many ways. This approach was subsequently emulated by Mandrake Linux with their "Galaxy" theme, which was also available for GNOME and KDE, and in Kubuntu 6.06 with the GTK-Qt theme engine (enabled by default).
Enterprising GUI artists have created themes that emulate the Bluecurve theme on other operating systems, including Microsoft Windows. Users can also replace their default Windows icons with icons that emulate Bluecurve, using the IconPackager application. One such set can be downloaded at WinCustomize.
See also
Crystal — LGPL icon set by Everaldo Coelho
Icon (computing)
Nuvola — LGPL icon set by David Vignoni
Oxygen Project — LGPL icon set for KDE
Palette (computing)
QtCurve
Tango Desktop Project — developers of a public domain icon set
Theme (computing)
References
External links
Fedora Artwork
Waikato Linux Users Group wiki article
Bluecurve icon pack for IconPackager (Windows)
Computer icons
Free software projects
GNOME
KDE
Red Hat software |
8551616 | https://en.wikipedia.org/wiki/Payment%20Card%20Industry%20Data%20Security%20Standard | Payment Card Industry Data Security Standard | The Payment Card Industry Data Security Standard (PCI DSS) is an information security standard for organizations that handle branded credit cards from the major card schemes.
The PCI Standard is mandated by the card brands but administered by the Payment Card Industry Security Standards Council. The standard was created to increase controls around cardholder data to reduce credit card fraud.
Validation of compliance is performed annually or quarterly, by a method suited to the volume of transactions handled:
Self-Assessment Questionnaire (SAQ) — smaller volumes
external Qualified Security Assessor (QSA) — moderate volumes; involves an Attestation of Compliance (AOC)
firm-specific Internal Security Assessor (ISA) — larger volumes; involves issuing a Report on Compliance (ROC)
History
Five different programs have been started by card companies:
Visa's Cardholder Information Security Program
MasterCard's Site Data Protection
American Express's Data Security Operating Policy
Discover's Information Security and Compliance
the JCB's Data Security Program
The intentions of each were roughly similar: to create an additional level of protection for card issuers by ensuring that merchants meet minimum levels of security when they store, process, and transmit cardholder data. To address the interoperability problems among the existing standards, the principal credit card organizations combined their efforts, resulting in the release of version 1.0 of PCI DSS in December 2004. PCI DSS has been implemented and followed across the globe.
The Payment Card Industry Security Standards Council (PCI SSC) was then formed, and these companies aligned their individual policies to create the PCI DSS. MasterCard, American Express, Visa, JCB International and Discover Financial Services established the PCI SSC in September 2006 as an administration/governing entity which mandates the evolution and development of PCI DSS. Independent/private organizations can participate in PCI development after proper registration. Each participating organization joins a particular SIG (Special Interest Group) and contributes to the activities which are mandated by the SIG.
Successive versions of the PCI DSS have since been released, up to version 3.2.1 in 2018.
Requirements
The PCI Data Security Standard specifies twelve requirements for compliance, organized into six logically related groups called "control objectives". The six groups are:
Build and Maintain a Secure Network and Systems
Protect Cardholder Data
Maintain a Vulnerability Management Program
Implement Strong Access Control Measures
Regularly Monitor and Test Networks
Maintain an Information Security Policy
Each version of PCI DSS has divided these six requirements into a number of sub-requirements differently, but the twelve high-level requirements have not changed since the inception of the standard. Each requirement and sub-requirement is additionally elaborated in three sections.
Requirement Declaration: the main description of the requirement. PCI DSS endorsement depends on the proper implementation of the requirements.
Testing Processes: The processes and methodologies carried out by the assessor for the confirmation of proper implementation.
Guidance: explains the core purpose of the requirement and provides content that can assist in defining the requirement properly.
The twelve requirements for building and maintaining a secure network and systems can be summarized as follows:
Installing and maintaining a firewall configuration to protect cardholder data. The purpose of a firewall is to scan all network traffic and block untrusted networks from accessing the system.
Changing vendor-supplied defaults for system passwords and other security parameters. These passwords are easily discovered through public information and can be used by malicious individuals to gain unauthorized access to systems.
Protecting stored cardholder data. Encryption, hashing, masking and truncation are methods used to protect cardholder data (see the sketch after this list).
Encrypting transmission of cardholder data over open, public networks. Strong encryption, including using only trusted keys and certifications reduces the risk of being targeted by malicious individuals through hacking.
Protecting all systems against malware and performing regular updates of anti-virus software. Malware can enter a network through numerous ways, including Internet use, employee email, mobile devices or storage devices. Up-to-date anti-virus software or supplemental anti-malware software will reduce the risk of exploitation via malware.
Developing and maintaining secure systems and applications. Vulnerabilities in systems and applications allow unscrupulous individuals to gain privileged access. Security patches should be immediately installed to fix vulnerabilities and prevent exploitation and compromise of cardholder data.
Restricting access to cardholder data to only authorized personnel. Systems and processes must be used to restrict access to cardholder data on a “need to know” basis.
Identifying and authenticating access to system components. Each person with access to system components should be assigned a unique identification (ID) that allows accountability of access to critical data systems.
Restricting physical access to cardholder data. Physical access to cardholder data or systems that hold this data must be secure to prevent unauthorized access or removal of data.
Tracking and monitoring all access to cardholder data and network resources. Logging mechanisms should be in place to track user activities that are critical to prevent, detect or minimize the impact of data compromises.
Testing security systems and processes regularly. New vulnerabilities are continuously discovered. Systems, processes and software need to be tested frequently to uncover vulnerabilities that could be used by malicious individuals.
Maintaining an information security policy for all personnel. A strong security policy includes making personnel understand the sensitivity of data and their responsibility to protect it.
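As referenced in requirement 3 above, a minimal sketch of masking and truncation of a primary account number (PAN) is shown below. PCI DSS masking is commonly summarized as displaying at most the first six and last four digits; the function names and the example number are hypothetical, not taken from the standard.

```python
# Minimal sketch of PAN masking and truncation. Illustrative only: function names
# and the example number are hypothetical; the standard's own requirements and any
# acquirer-specific rules take precedence.

def mask_pan(pan):
    """Masking for display: keep at most the first 6 and last 4 digits."""
    return pan[:6] + "*" * (len(pan) - 10) + pan[-4:]

def truncate_pan(pan):
    """Truncation for storage: keep only a non-recoverable fragment (last 4 digits)."""
    return pan[-4:]

example_pan = "4111111111111111"      # well-known test card number, not a real account
print(mask_pan(example_pan))          # 411111******1111
print(truncate_pan(example_pan))      # 1111
```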
Updates and supplemental information
The PCI SSC (Payment Card Industry Security Standards Council) has released several supplemental pieces of information to clarify various requirements. These documents include the following:
Information Supplement: Requirement 11.3 Penetration Testing
Information Supplement: Requirement 6.6 Code Reviews and Application Firewalls Clarified
Navigating the PCI DSS - Understanding the Intent of the Requirements
PCI DSS Applicability in an EMV Environment
Prioritized Approach for PCI DSS
Prioritized Approach Tool
PCI DSS Quick Reference Guide
PCI DSS Virtualization Guidelines
PCI DSS Tokenization Guidelines
PCI DSS 2.0 Risk Assessment Guidelines
The lifecycle for Changes to the PCI DSS and PA-DSS
Guidance for PCI DSS Scoping and Segmentation
PCI DSS v4.0 is expected to be published in Q1 2022
Reporting levels
All companies who are subject to PCI DSS standards must be PCI compliant. However, how they prove and report their compliance is based on how many transactions they process per year and how they process those transactions. The acquirer or payment brands may also choose to manually place an organization into a reporting level at their discretion.
At a high level, the merchant levels are as follows:
Level 1 – Over 6 million transactions annually
Level 2 – Between 1 and 6 million transactions annually
Level 3 – Between 20,000 and 1 million transactions annually (or any e-commerce merchant)
Level 4 – Less than 20,000 transactions annually
Each card issuer maintains their own table of compliance levels as well as a separate table for service providers.
Validation of compliance
Compliance validation involves the evaluation and confirmation that the security controls and procedures have been properly implemented as per the policies recommended by PCI DSS. In short, the PCI DSS security validation and testing procedures together serve as a compliance validation tool. A PCI DSS assessment involves the following entities.
Qualified Security Assessor (QSA)
A Qualified Security Assessor is an individual bearing a certificate provided by the PCI Security Standards Council. This certified person can audit merchants for Payment Card Industry Data Security Standard (PCI DSS) compliance. QSAs are independent groups or entities certified by the PCI SSC to confirm compliance of an organization's procedures. Certification signifies that a QSA has satisfied all of the prerequisites that are mandatory for conducting PCI DSS assessments.
Internal Security Assessor (ISA)
An Internal Security Assessor is an individual who has earned a certificate from the PCI Security Standards Council for their sponsoring organization. This certified person can perform PCI self-assessments for their organization. The ISA program was designed to help Level 2 merchants meet the new Mastercard compliance validation requirements. ISA certification enables an employee to carry out an internal assessment of their organization and to propose security solutions and controls for PCI DSS compliance. Because ISAs are sponsored by their organization for PCI SSC certification, they are responsible for cooperation and participation with QSAs.
Report on Compliance (ROC)
A Report on Compliance is a form that must be completed by all Level 1 Visa merchants undergoing a PCI DSS audit. The ROC form is used to verify that the merchant being audited is compliant with the PCI DSS standard. The ROC confirms that the organization has appropriately implemented and developed the policies, strategies, approaches and workflows needed to protect cardholders against fraud in card-based business transactions. The "ROC Reporting Template" available on the PCI SSC site contains detailed guidelines about the ROC.
Self-Assessment Questionnaire (SAQ)
The PCI DSS self-assessment questionnaires (SAQs) are validation tools intended to assist merchants and service providers to report the results of their PCI DSS self-assessment. There are eight different types of SAQs, each with a different level of complexity. The most basic is the SAQ-A, consisting of just 22 questions; the most complex is the SAQ-D, consisting of 329 questions.
The Self-Assessment Questionnaire is a set of questionnaire documents that merchants are required to complete every year and submit to their acquiring bank. Another component of the SAQ is the Attestation of Compliance (AOC), in which each SAQ question is answered on the basis of the organization's internal PCI DSS self-evaluation. Each SAQ question must be answered yes or no; if a question is answered "no", the organization must indicate how it plans to address the gap in the future.
Compliance versus validation of compliance
Although the PCI DSS must be implemented by all entities that process, store or transmit cardholder data, formal validation of PCI DSS compliance is not mandatory for all entities. Currently, both Visa and MasterCard require merchants and service providers to be validated according to the PCI DSS. Visa also offers an alternative program called the Technology Innovation Program (TIP) that allows qualified merchants to discontinue the annual PCI DSS validation assessment. These merchants are eligible if they are taking alternative precautions against counterfeit fraud such as the use of EMV or Point to Point Encryption.
Issuing banks are not required to go through PCI DSS validation although they still have to secure the sensitive data in a PCI DSS compliant manner. Acquiring banks are required to comply with PCI DSS as well as to have their compliance validated by means of an audit.
In the event of a security breach, any compromised entity which was not PCI DSS compliant at the time of breach will be subject to additional card scheme penalties, such as fines.
Legislation
Compliance with PCI DSS is not required by federal law in the United States. However, the laws of some U.S. states either refer to PCI DSS directly or make equivalent provisions. The legal scholars Edward Morse and Vasant Raval have argued that, by enshrining PCI DSS compliance in legislation, the card networks have reallocated the externalized cost of fraud from the card issuers to merchants.
In 2007, Minnesota enacted a law prohibiting the retention of some types of payment card data subsequent to 48 hours after authorization of the transaction.
In 2009, Nevada incorporated the standard into state law, requiring compliance of merchants doing business in that state with the current PCI DSS, and shielding compliant entities from liability. The Nevada law also allows merchants to avoid liability by other approved security standards.
In 2010, Washington also incorporated the standard into state law. Unlike Nevada's law, entities are not required to be compliant to PCI DSS, but compliant entities are shielded from liability in the event of a data breach.
Risk management to protect cardholder data
Under PCI DSS requirement 3, merchants and financial institutions are required to protect their clients' sensitive data with strong cryptography; non-compliant solutions will not pass the audit. A typical risk management program can be structured in three steps:
Identify all known risks and record/describe them in a risk register. For example, hardware security modules (HSM) that are used in the cryptographic key management process could potentially introduce their own risks if compromised, whether physically or logically. HSMs create a root of trust within the system. However, while it is unlikely, if the HSM is compromised, this could compromise the entire system.
Analyze all identified risks. This analysis should include a mix of qualitative and quantitative techniques to determine which risk treatment methods should be used to reduce the likelihood of the risks. For example, an organization might analyze the risk of using a cloud HSM versus a physical device that it uses on site.
Treat the risks in response to the risk analysis that was previously performed. For example, employing different treatments to protect client information stored in a cloud HSM versus ensuring security both physically and logically for an onsite HSM, which could include implementing controls or obtaining insurance to maintain an acceptable level of risk.
Continuous monitoring and review are part of the process of reducing PCI DSS cryptography risks. This includes maintenance schedules and predefined escalation and recovery routines when security weaknesses are discovered.
Controversies and criticisms
Visa and Mastercard impose fines for non-compliance.
Stephen and Theodora "Cissy" McComb, owners of Cisero's Ristorante and Nightclub in Park City, Utah, were allegedly fined for a breach that two forensics firms could find no evidence had occurred: "The PCI system is less a system for securing customer card data than a system for raking in profits for the card companies via fines and penalties. Visa and MasterCard impose fines on merchants even when there is no fraud loss at all, simply because the fines 'are profitable to them'."
Michael Jones, CIO of Michaels' Stores, testified before a U.S. Congress subcommittee regarding the PCI DSS: "(...the PCI DSS requirements...) are very expensive to implement, confusing to comply with, and ultimately subjective, both in their interpretation and in their enforcement. It is often stated that there are only twelve 'Requirements' for PCI compliance. In fact there are over 220 sub-requirements; some of which can place an incredible burden on a retailer and many of which are subject to interpretation."
Others have suggested that PCI DSS is a step toward making all businesses pay more attention to IT security, even if minimum standards are not enough to completely eradicate security problems. For example, Bruce Schneier has spoken in favour of PCI DSS:
"Regulation—SOX, HIPAA, GLBA, the credit-card industry's PCI, the various disclosure laws, the European Data Protection Act, whatever—has been the best stick the industry has found to beat companies over the head with. And it works. Regulation forces companies to take security more seriously, and sells more products and services."
PCI Council General Manager Bob Russo responded to the objections of the National Retail Federation: "[PCI is a structured] blend...[of] specificity and high-level concepts [that allows] stakeholders the opportunity and flexibility to work with Qualified Security Assessors (QSAs) to determine appropriate security controls within their environment that meet the intent of the PCI standards."
Compliance and compromises
According to Visa Chief Enterprise Risk Officer Ellen Richey (2018): "...no compromised entity has yet been found to be in compliance with PCI DSS at the time of a breach."
In 2008, a breach of Heartland Payment Systems, an organization validated as compliant with PCI DSS, resulted in the compromising of one hundred million card numbers. Around this same time Hannaford Brothers and TJX Companies, also validated as PCI DSS compliant, were similarly breached as a result of the alleged coordinated efforts of Albert "Segvec" Gonzalez and two unnamed Russian hackers.
Assessments examine the compliance of merchants and services providers with the PCI DSS at a specific point in time and frequently utilize a sampling methodology to allow compliance to be demonstrated through representative systems and processes. It is the responsibility of the merchant and service provider to achieve, demonstrate, and maintain their compliance at all times both throughout the annual validation/assessment cycle and across all systems and processes in their entirety. Although it could be that a breakdown in merchant and service provider compliance with the written standard was to blame for the breaches, Hannaford Brothers had received its PCI DSS compliance validation one day after it had been made aware of a two-month-long compromise of its internal systems. The failure of this to be identified by the assessor suggests that incompetent verification of compliance undermines the security of the standard.
Other criticism lies in that compliance validation is required only for Level 1-3 merchants and may be optional for Level 4 depending on the card brand and acquirer. Visa's compliance validation details for merchants state that level 4 merchants compliance validation requirements are set by the acquirer, Visa level 4 merchants are "Merchants processing less than 20,000 Visa e-commerce transactions annually and all other merchants processing up to 1 million Visa transactions annually". At the same time, over 80% of payment card compromises between 2005 and 2007 affected Level 4 merchants; they handle 32% of transactions.
See also
Penetration test
Vulnerability management
Wireless LAN
Wireless security
References
External links
Official PCI Security Standards Council Site
A guide to PCI compliance
Payment cards
Computer law
Information privacy
Security compliance |
48482493 | https://en.wikipedia.org/wiki/Open%20Agriculture%20Initiative | Open Agriculture Initiative | The MIT Open Agriculture Initiative (OpenAg) was founded in 2015 by Caleb Harper as an initiative of the MIT Media Lab at the Massachusetts Institute of Technology. The project closed in April 2020 with the departure of Harper from MIT, although the closure was only officially confirmed by MIT a month later in May. The project aimed to develop controlled-environment agriculture platforms called "Food Computers" that operated on a variety of scales, and which might have been used for experimental, educational, or personal use. All of the hardware, software, and data would have been open source, with the intention of creating a standardized open platform for agricultural research and experimentation. In theory, had the project succeeded, it would have enhanced transparency in the agricultural industry and made urban agriculture easier to perform, easing access to fresh foods.
The OpenAg project received criticism that much of its early positive publicity was based on results that were either exaggerated or outright faked. Staff members told stories of purchasing potted plants from stores and demonstrating them as if they had been grown in "Food Computers". Gizmodo called the project a "Theranos for plants" and said that few, if any, of the Food Computers successfully grew a plant. The project was also buffeted by the resignation from the MIT Media Lab of Joichi Ito, who had helped procure much of the project's funding.
Food Computer
The Open Agriculture Initiative coined the term "Food Computer" to describe their main product. Originally developed under the MIT CityFARM project, the Food Computer was a controlled-environment agriculture platform that used soilless agriculture technologies including hydroponic and aeroponic systems to grow crops indoors. The Food Computer also used an array of sensors to monitor the internal climate within a specialized growing chamber and adjust it so that the environmental conditions would remain consistent and optimum.
The climate inside of a growing chamber was supposed to be tightly controlled to enhance food production and quality. The data on the climate conditions during a given harvest cycle could be logged online as a "climate recipe", and the phenotypic expressions (observable characteristics) of the plant could also be monitored and recorded. These recipes were recorded in an online database that was to be openly accessible so that climate conditions could be downloaded by other users around the globe.
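The recipe concept can be pictured as a structured record of environmental setpoints plus a controller that steers the chamber toward them. The sketch below is purely illustrative: every field name, value, and helper function is hypothetical and does not reflect OpenAg's actual recipe schema or software.

# Hypothetical illustration only: all names and values are invented for this
# sketch and do not reflect OpenAg's actual recipe format or control code.
climate_recipe = {
    "crop": "basil",
    "phases": [
        {
            "name": "germination",
            "duration_hours": 72,
            "setpoints": {"air_temperature_c": 24.0, "relative_humidity_pct": 80.0,
                          "co2_ppm": 450, "water_ph": 6.0, "light_hours_per_day": 18},
        },
        {
            "name": "vegetative",
            "duration_hours": 480,
            "setpoints": {"air_temperature_c": 22.0, "relative_humidity_pct": 65.0,
                          "co2_ppm": 600, "water_ph": 6.2, "light_hours_per_day": 16},
        },
    ],
}

def control_step(sensor_readings, setpoints):
    """Return simple on/off actuator decisions by comparing readings to
    setpoints (illustrative only; a real controller is far more involved)."""
    return {
        "heater_on": sensor_readings["air_temperature_c"] < setpoints["air_temperature_c"],
        "humidifier_on": sensor_readings["relative_humidity_pct"] < setpoints["relative_humidity_pct"],
    }

print(control_step({"air_temperature_c": 21.0, "relative_humidity_pct": 70.0},
                   climate_recipe["phases"][0]["setpoints"]))
# -> {'heater_on': True, 'humidifier_on': True}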
The term Food Computer was applied generally to any of the Open Agriculture Initiative's controlled-environment systems, or specifically to the smallest model, which was also called a Personal Food Computer. The tabletop-sized unit was intended for use in homes, classrooms, and small-scale experimental facilities. The mid-sized model, or Food Server, was the size of an internationally-standardized shipping container, and would utilize vertical farming structures. It was intended for use in cafeterias, restaurants, local grocers, and large-scale experimental facilities. The largest versions of the Food Computer were to be warehouse-sized Food Data Centers that would function on the level of industrial crop production.
Food Computers were never commercially available. There were six prototype Personal Food Computers operating in schools around the Boston area, and three Food Servers operating at MIT, Michigan State University, and Unidad Guadalajara (Cinvestav) in Mexico. Build directions and schematics were available for makers and hobbyists, while more-widespread availability was expected once manufacturing began.
Open Phenome Library
Various climate conditions including temperature, relative humidity, carbon dioxide and oxygen levels, pH of water, electrical conductivity of water, and exposure to various nutrients, fertilizers, and chemicals influence whether a plant grows and also how it grows. Different climate conditions can lead to different phenotypic expressions in plants that are genotypically very similar or identical. The various traits that a plant expresses, including color, size, texture, yield, growth rate, flavor, and nutrient density, make up its phenome. OpenAg aimed to crowd source related research, and to create an open library of phenome data that relates external climate conditions to specific phenotypic expressions in various plants.
Affiliations and funding
The Open Agriculture Initiative was primarily funded through the MIT Media Lab, which was almost 100% industrially funded through corporate memberships. The Open Agriculture Initiative had also received specific endorsements from members such as IDEO, Lee Kum Kee, Target, Unilever, and Welspun. OpenAg also received additional investments and philanthropic contributions from companies and institutions unaffiliated with the Media Lab.
Criticism
Allegations of scientific misconduct
In September 2019, former employees at Fenome, the startup spun off from OpenAg, openly discussed the failure of their Food Computers to maintain the controlled environment required for growing food. They alleged that photographic results and growth data had been falsified for presentations to investors and the general public. A series of internal emails sent by Babak Babakinejad, the former lead scientist of the project, backed up these allegations.
Further investigations in November showed that the Food Computers which principal investigator Harper claimed had been sent to a refugee camp for Syrian refugees in Azraq had instead been sent to a World Food Programme office in Amman, where they failed to grow food.
Environmental concerns
In September 2019, Boston radio station WBUR published a report detailing charges that the OpenAg initiative lab at MIT's Bates Research and Engineering Center in Middleton had been dumping nitrogen-laden hydroponic solution into the wastewater system at levels above the state's mandated limit of 10 ppm, leading to an investigation by the Massachusetts Department of Environmental Protection (MassDEP).
MIT was assessed a fine of $25,125 for this violation. MIT agreed to pay $15,000 and to close a wastewater injection well, and to prepare a wastewater management plan for MassDEP approval.
References
External links
OpenAg Overview
MIT Media Lab
Horticulture
Open-source hardware
Hydroculture |
22168784 | https://en.wikipedia.org/wiki/Caradrina | Caradrina | Caradrina is a genus of moths of the family Noctuidae. The genus was erected by Ferdinand Ochsenheimer in 1816. It is divided into eight subgenera, including Paradrina and Platyperigea, which are treated as separate genera by some authors.
By 1989, about 189 described species were included in the genus.
Description
Its eyes are naked and without lashes. The proboscis is well developed. The palpi are upturned, with the second joint evenly clothed in hair. The thorax and abdomen are tuftless, the tibia spineless, and the cilia non-crenulate.
Species
Caradrina abruzzensis (Draudt, 1933)
Caradrina adriennea Hacker & Gyulai, 2004
Caradrina africarabica (Plante, 1998)
Caradrina afrotropicalis Hacker, 2004
Caradrina agenjoi (Boursin, 1936)
Caradrina agrotina Staudinger, 1891
Caradrina alana Druce, 1890
Caradrina albina Eversmann, 1848
Caradrina aldegaitheri Wiltshire, 1986
Caradrina alfierii (Boursin, 1937)
Caradrina altissima Hacker, 2004
Caradrina ammoxantha Boursin, 1957
Caradrina amseli (Boursin, 1936)
Caradrina armeniaca (Boursin, 1936)
Caradrina asinina Saalmüller, 1891
Caradrina aspersa Rambur, 1834
Caradrina asymmetrica (Boursin, 1936)
Caradrina atriluna Guenée, 1852
Caradrina atrostriga (Barnes & McDunnough, 1912)
Caradrina avis Pinker, 1979
Caradrina azim Boursin, 1957
Caradrina bactriana Boursin, 1967
Caradrina baltistana Hacker, 2004
Caradrina belucha Swinhoe, 1885
Caradrina beta (Barnes & Benjamin, 1926)
Caradrina bistrigata Bremer & Grey, 1853
Caradrina bodenheimeri Amsel, 1935
Caradrina boursini (Wagner, 1936)
Caradrina brandti (Boursin, 1939)
Caradrina callicora (Le Cerf, 1922)
Caradrina camina (Smith, 1894)
Caradrina casearia Staudinger, [1900]
Caradrina chinensis Leech, 1900
Caradrina clavipalpis (Scopoli, 1763) – pale mottled willow
Caradrina conditorana Pinker, 1979
Caradrina danieli Rungs, 1950
Caradrina derogata (Walker, 1865)
Caradrina diabolica (Boursin, 1942)
Caradrina didyma (Boursin, 1939)
Caradrina distigma Chrétien, 1913
Caradrina distinctoides Poole, 1989 (syn. C. distincta (Barnes, 1928))
Caradrina doleropsis (Boursin, 1939)
Caradrina draudti (Boursin, 1936)
Caradrina dubitata (Maassen, 1890)
Caradrina dukei Krüger, 2005
Caradrina eberti (Hacker, 1992)
Caradrina ectomelaena (Hampson, 1916)
Caradrina edentata (Berio, 1941)
Caradrina eremicola (Plante, 1998)
Caradrina eremocosma (Boursin, 1937)
Caradrina eucrinospila (Boursin, 1936)
Caradrina eugraphis Janse, 1938
Caradrina eva Boursin, 1963
Caradrina expansa Alphéraky, 1897
Caradrina falciuncula (Varga & Ronkay, 1991)
Caradrina fergana Staudinger, [1892]
Caradrina fibigeri Hacker, 2004
Caradrina filipjevi (Boursin, 1936)
Caradrina flava Oberthür, 1876
Caradrina flavirena Guenée, 1852
Caradrina flavitincta (Hampson, 1909)
Caradrina fulvafusca Hacker, 2004
Caradrina furcivalva (Hacker, 1992)
Caradrina fuscicornis Rambur, 1832
Caradrina fuscifusa (Varga & Ronkay, 1991)
Caradrina fuscomedia Hacker, 2004
Caradrina gandhara Hacker, 2004
Caradrina genitalana Hacker, 2004
Caradrina germainii (Duponchel, 1835)
Caradrina gilva (Donzel, 1837)
Caradrina glaucistis Hampson, 1902
Caradrina gyulaii Hacker, 2004
Caradrina hedychroa (Boursin, 1936)
Caradrina hemipentha (Boursin, 1939)
Caradrina heptarchia (Boursin, 1936)
Caradrina himachala Hacker, 2004
Caradrina himaleyica Kollar, 1844
Caradrina hoenei Hacker & Kononenko, 2004
Caradrina hypocnephas Boursin, [1968]
Caradrina hypoleuca Boursin, [1968]
Caradrina hypostigma (Boursin, 1932)
Caradrina ibeasi (Fernandez, 1918)
Caradrina immaculata Motschulsky, 1860
Caradrina ingrata Staudinger, 1897
Caradrina inopinata Hacker, 2004
Caradrina intaminata (Walker, 1865)
Caradrina inumbrata (Staudinger, 1900)
Caradrina inumbratella Pinker, 1979
Caradrina isfahana Hacker, 2004
Caradrina jacobsi (Rothschild, 1914)
Caradrina kadenii Freyer, 1836 – Clancy's rustic
Caradrina kashmiriana Boursin, [1968]
Caradrina katherina Wiltshire, 1947
Caradrina kautti Hacker, 2004
Caradrina khorassana (Boursin, 1942)
Caradrina klapperichi Boursin, 1957
Caradrina kravchenkoi Hacker, 2004
Caradrina lanzarotensis Pinker, 1962
Caradrina leucopis Hampson, 1902
Caradrina levantina Hacker, 2004
Caradrina likiangia (Berio, 1977)
Caradrina lobbichleri Boursin, 1970
Caradrina localis Wiltshire, 1986
Caradrina marginata Hacker, 2004
Caradrina melanosema (Hampson, 1914)
Caradrina melanura Alphéraky, 1897
Caradrina melanurina (Staudinger, 1901)
Caradrina mendica Maassen, 1890
Caradrina meralis Morrison, 1875
Caradrina merzbacheri Boursin, 1960
Caradrina minoica Hacker, 2004
Caradrina mirza Boursin, 1957
Caradrina mona (Barnes & McDunnough, 1912)
Caradrina monssacralis (Varga & Ronkay, 1991)
Caradrina montana Bremer, 1864 (syn. C. extima (Walker, 1865)) – civil rustic
Caradrina morosa Lederer, 1853
Caradrina morpheus (Hufnagel, 1766) – mottled rustic
Caradrina muelleri Hacker, 2004
Caradrina multifera Walker, [1857] – speckled rustic
Caradrina nadir Boursin, 1957
Caradrina naumanni Hacker, 2004
Caradrina nekrasovi Hacker, 2004
Caradrina noctivaga Bellier, 1863
Caradrina oberthuri (Rothschild, 1913)
Caradrina olivascens Hacker, 2004
Caradrina owgarra Bethune-Baker, 1908
Caradrina pallidula Saalmüller, 1891
Caradrina panurgia (Boursin, 1939)
Caradrina parthica Hacker, 2004
Caradrina parvaspersa (Boursin, 1936)
Caradrina persimilis (Rothschild, 1920)
Caradrina personata (Kuznetzov, 1958)
Caradrina pertinax Staudinger, 1878
Caradrina petraea Tengström, 1869
Caradrina pexicera (Hampson, 1909)
Caradrina phanosciera (Boursin, 1939)
Caradrina poecila (Boursin, 1939)
Caradrina prospera (Kuznetzov, 1958)
Caradrina proxima Rambur, 1837
Caradrina pseudadelpha (Boursin, 1939)
Caradrina pseudagrotis (Hampson, 1918)
Caradrina pseudalpina (Boursin, 1942)
Caradrina pseudocosma (Plante, 1998)
Caradrina pulvis (Boursin, 1939)
Caradrina pushkara Hacker, 2004
Caradrina rebeli Staudinger, 1901
Caradrina rjabovi (Boursin, 1936)
Caradrina ronkayrorum Hacker, 2004
Caradrina roxana (Boursin, 1937)
Caradrina rudebecki Krüger, 2005
Caradrina salzi (Boursin, 1936)
Caradrina sarhadica (Boursin, 1942)
Caradrina scotoptera (Püngeler, 1914)
Caradrina selini Boisduval, 1840
Caradrina senecai Hacker, 2004
Caradrina shugnana Hacker, 2004
Caradrina signa (D. S. Fletcher, 1961)
Caradrina singularis Hacker, 2004
Caradrina sinistra (Janse, 1938)
Caradrina sogdiana (Boursin, 1936)
Caradrina soudanensis (Hampson, 1918)
Caradrina squalida Eversmann, 1842
Caradrina stenoeca Wiltshire, 1986
Caradrina stenoptera (Boursin, 1939)
Caradrina stilpna Boursin, 1957
Caradrina superciliata Wallengren, 1856
Caradrina surchica (Boursin, 1937)
Caradrina suscianja (Mentzer, 1981)
Caradrina syriaca Staudinger, [1892]
Caradrina tenebrata Hampson, 1902
Caradrina terrea Freyer, 1840
Caradrina tibetica (Draudt, 1950)
Caradrina tolima Maassen, 1890
Caradrina torpens Guenée, 1852
Caradrina transoxanica Hacker, 2004
Caradrina turatii (Boursin, 1936)
Caradrina turbulenta (Warren, 1911)
Caradrina turcomana Hacker, 2004
Caradrina umbratilis (Draudt, 1933)
Caradrina vargai Hacker, 2004
Caradrina variolosa Motschulsky, 1860
Caradrina vicina Staudinger, 1870
Caradrina warneckei (Boursin, 1936)
Caradrina wiltshirei (Boursin, 1936)
Caradrina wullschlegeli Püngeler, 1903
Caradrina xanthopis (Hampson, 1909)
Caradrina xanthorhoda (Boursin, 1937)
Caradrina xiphophora Boursin, 1967
Caradrina zaghrobia Hacker, 2004
Caradrina zandi Wiltshire, 1952
Caradrina zernyi (Boursin, 1936)
Caradrina zuleika Boursin, 1957
References
Caradrinini |
12916449 | https://en.wikipedia.org/wiki/FpGUI | FpGUI | fpGUI, the Free Pascal GUI toolkit, is a cross-platform graphical user interface toolkit developed by Graeme Geldenhuys. fpGUI is open source and free software, licensed under a Modified LGPL license. The toolkit has been implemented using the Free Pascal compiler, meaning it is written in the Object Pascal language.
fpGUI consists only of graphical widgets or components, and a cross-platform 2D drawing library. It doesn't implement database layers, 3D graphics, XML parsers etc. It also doesn't rely on any huge third party libraries like GTK or Qt. All the extras come straight from what is available with the Free Pascal Component Library (FCL) which comes standard with the Free Pascal compiler.
History
The first version of fpGUI was written by Sebastian Günther back in 2000, but the project was abandoned in 2002. fpGUI was a successor to an earlier OO GTK wrapper, fpGTK, and was largely a fresh start designed to allow multiple (backend) widgetsets, most notably win32. The toolkit was used for some internal FPC tooling (e.g. the fpdoc editor), but a lot of work remained before the toolkit could be truly useful in real-life applications by end-users. Most of these tools were migrated to the maturing Lazarus in the 2004-2006 timeframe.
Graeme Geldenhuys revived the toolkit in mid-2006, picking up where Sebastian left off. He continued developing the toolkit for the next year, merging three sub-projects (fpGFX, fpIMG and fpGUI) into a single project, fpGUI. Graeme extended the set of components and backend graphics layers and improved the overall toolkit. The supported platforms at that stage were Linux and FreeBSD via X11 and Microsoft Windows via GDI. After a few months Felipe Monteiro de Carvalho joined the development team, adding support for Windows Mobile devices and extending the graphics support and design. Felipe also started working on Mac OS X support via Carbon.
At the beginning of June 2007 Graeme found some major design issues in the source base. This prevented fpGUI from being truly useful in real applications. After numerous prototypes the fpGUI project was completely rewritten. Past experience helped a lot and new design ideas were implemented. The code base ended up being much simpler with a cleaner design. One of the major changes was that all widgets were now based on a multi-handle (windowed) design. Each widget now has a window handle. Other GUI toolkits that follow a similar design are GTK, Xt and FLTK to name a few. GUI toolkits that follow the opposite design are toolkits like the latest Qt and MSEgui.
Example program
The following program shows a single window with a "Quit" button in the bottom right. On the canvas (background) of the window it paints all the standard built-in images used with fpGUI.
program stdimglist;
{$mode objfpc}{$H+}
uses
Classes, SysUtils,
fpg_base, fpg_main, fpg_form, fpg_imgfmt_bmp, fpg_button;
type
TMainForm = class(TfpgForm)
private
btnClose: TfpgButton;
procedure btnCloseClick(Sender: TObject);
protected
procedure HandlePaint; override;
public
constructor Create(aowner: TComponent); override;
procedure AfterCreate; override;
end;
{ TMainForm }
procedure TMainForm.AfterCreate;
begin
SetPosition(100,100,700,500);
WindowTitle := 'fpGUI Standard Image Listing';
// Place button in bottom right corner.
btnClose := CreateButton(self, Width-90, Height-35, 75, 'Quit', @btnCloseClick);
btnClose.ImageName := 'stdimg.quit';
btnClose.Anchors := [anRight, anBottom];
end;
procedure TMainForm.btnCloseClick(Sender: TObject);
begin
Close;
end;
procedure TMainForm.HandlePaint;
var
n: integer;
x: TfpgCoord;
y: TfpgCoord;
sl: TStringList;
img: TfpgImage;
begin
Canvas.BeginDraw; // begin double buffering
inherited HandlePaint;
sl := TStringList.Create;
x := 8;
y := 8;
fpgImages.ListImages(sl);
for n := 0 to sl.Count-1 do
begin
Canvas.DrawString(x, y, sl[n]+':');
img := TfpgImage(sl.Objects[n]);
if img <> nil then
Canvas.DrawImage(x+130, y, img);
inc(y, img.Height+8);
if y > Height-32 then // largest images are 32 in height
begin
inc(x, 200);
y := 8;
end;
end;
Canvas.EndDraw;
sl.Free;
end;
constructor TMainForm.Create(aowner: TComponent);
begin
inherited Create(aowner);
(* PRIOR TO v1.4:
// Place button in bottom right corner.
btnClose := CreateButton(self, Width-90, Height-35, 75, 'Quit', @btnCloseClick);
btnClose.ImageName := 'stdimg.quit';
btnClose.Anchors := [anRight, anBottom];
*)
end;
procedure MainProc;
var
frm : TMainForm;
begin
fpgApplication.Initialize;
frm := TMainForm.Create(nil);
try
frm.Show;
fpgApplication.Run;
finally
frm.Free;
end;
end;
begin
MainProc;
end.
Licensing
fpGUI is statically linked into programs and is licensed using a modified version of the LGPL specifically designed to allow static linking to proprietary programs. The only code that must be made available is any changes made to the fpGUI toolkit itself, and nothing more.
Software written with fpGUI
Master Maths: Used in a computer-based training system, as well as in a basic accounting and administration package for franchisees.
A Visual Form Designer, which is now included as part of fpGUI. It allows the developer to create user interfaces at a much faster pace.
Unimesur and various tools: Written by Jean-Marc, the Unimesur program converts measurements of flows of liquids and gases between mass and volume units. All results were verified for the exactness of the conversion factors.
fpGUI DocView: An INF help file viewer that currently works on Windows, Linux and FreeBSD. INF is the default help format of fpGUI, and is also the help format used in OS/2 (and also eComStation and ArcaOS).
Free Pascal Testing Framework: A cross-platform unit testing framework with a console and GUI test runner.
See also
Lazarus (software)
Widget toolkit
Qt
wxWidgets
GTK+
FOX toolkit
FLTK
References
External links
official fpGUI Toolkit website
fpGUI's SourceForge.net project page
Free Pascal compiler
Lazarus IDE
Free computer libraries
Free Pascal
Free software programmed in Pascal
Pascal (programming language) libraries
Programming tools for Windows
Software using the LGPL license
Widget toolkits
X-based libraries
Pascal (programming language) software |
32599260 | https://en.wikipedia.org/wiki/Chippewa%20Operating%20System | Chippewa Operating System | The Chippewa Operating System (COS) is a discontinued operating system developed by Control Data Corporation for the CDC 6600, generally considered the first supercomputer in the world. The Chippewa was initially developed as an experimental system, but was then also deployed on other CDC 6000 machines.
The Chippewa was a rather simple job control oriented system derived from the earlier CDC 3000. Its design influenced the later CDC Kronos and SCOPE operating systems. Its name was based on the Chippewa Falls research and development center of CDC in Wisconsin.
It is distinct from and preceded the Cray Operating System (also called "COS") at Cray.
See also
History of supercomputing
Timeline of operating systems
Bibliography
References
Discontinued operating systems
Supercomputer operating systems
CDC operating systems |
30175830 | https://en.wikipedia.org/wiki/George%20Farmer%20%28running%20back%29 | George Farmer (running back) | George Farmer (born July 4, 1993) is a former American football running back. He graduated in 2011 from Junípero Serra High School in Gardena, California, and entered the NFL draft after his redshirt junior year at USC.
Early years
Farmer was regarded as a five-star recruit by Rivals.com, and was listed as the No. 1 wide receiver prospect in the class of 2011. He was featured as Sports Illustrated's "High School Player of the Week" in October 2010, and participated in the 2010 U.S. Army All-American Bowl.
At Junípero Serra High School, Farmer was a teammate of Colorado wideout and future Seahawks teammate Paul Richardson and San Jose State Spartans cornerback Bené Benwikere, as well as former USC Trojan wideouts Robert Woods and Marqise Lee.
Farmer is the son of former NFL player George Farmer, who played for both the Los Angeles Rams and Miami Dolphins in the 1980s.
Track and field
Also an excellent sprinter, Farmer finished second in the 100 meters to Remontay McClain—in a photo finish (both at 10.40 sec)—at the 2010 CIF track meet.
Personal bests
College career
Farmer played at USC as a freshman in the 2011 season, appearing in 4 games mostly as a running back, while catching 4 passes for 42 yards and returning 3 kickoffs for a total of 59 yards.
The next year, he suffered a serious bite from a brown recluse spider, before the start of the summer workouts. He was used even more sparingly in his sophomore season, recording only 1 reception for 7 yards.
Farmer tore his ACL and MCL in a preseason practice, causing him to miss the entire season. Despite only grabbing 30 catches for 363 yards with four touchdowns in an injury-plagued USC Trojan career, Farmer decided to enter the 2015 NFL Draft after his junior year.
Professional career
Dallas Cowboys
Farmer was signed as an undrafted free agent by the Dallas Cowboys on May 5, 2015. With several suitors, the Cowboys gave him a $15,000 signing bonus and a guaranteed cash payment of $55,000 for the season to outbid other National Football League teams. He was waived on August 16 to make room for wide receiver David Porter.
Seattle Seahawks
On August 22, 2015, Farmer signed with the Seattle Seahawks as a free agent; the team converted him to cornerback. He was subsequently released on August 31. After being released at the conclusion of the preseason, Farmer was re-signed by the Seahawks to their practice squad on September 22, 2015.
In the 2016 offseason, the Seahawks moved Farmer from cornerback to running back due to many injuries to their running backs. On August 30, 2016, he was waived/injured by the Seahawks and placed on injured reserve. On September 3, 2016, he was released from the Seahawks' injured reserve. He was re-signed to the practice squad on November 1, 2016 but was released on November 4. He was again re-signed to the practice squad the following week.
He was promoted to the active roster on November 23, 2016. He was released on December 6, 2016 and re-signed back to the practice squad. He signed a reserve/future contract with the Seahawks on January 16, 2017. On May 9, 2017, he was released by the Seahawks.
References
External links
USC Trojans bio
DyeStat profile for George Farmer
1993 births
Living people
Players of American football from Los Angeles
American football wide receivers
USC Trojans football players
Dallas Cowboys players
Seattle Seahawks players
Junípero Serra High School (Gardena, California) alumni |
62874378 | https://en.wikipedia.org/wiki/Song-Chun%20Zhu | Song-Chun Zhu | Song-Chun Zhu is a Chinese computer scientist and applied mathematician known for his work in computer vision, cognitive artificial intelligence and robotics. Zhu currently works at Peking University and was previously a professor in the Departments of Statistics and Computer Science at the University of California, Los Angeles. Zhu also previously served as Director of the UCLA Center for Vision, Cognition, Learning and Autonomy (VCLA).
In 2005, Zhu founded the Lotus Hill Institute, an independent non-profit organization to promote international collaboration within the fields of computer vision and pattern recognition. Zhu has published extensively and lectured globally on artificial intelligence, and in 2011, he became an IEEE Fellow (Institute of Electrical and Electronics Engineers) for "contributions to statistical modeling, learning and inference in computer vision."
Zhu has two daughters, Stephanie and Yi. Zhu Yi is a competitive figure skater.
Early life and education
Born and raised in Ezhou, China, Zhu found inspiration, when he was young, in the development of computers playing chess, sparking his interest in artificial intelligence. In 1991, Zhu earned his B.S. in Computer Science from the University of Science and Technology of China at Hefei. During his undergraduate years, Zhu, finding the computational theory of vision by the late MIT neuroscientist David Marr deeply influential, aspired to pursue a general unified theory of vision and AI. In 1992, Zhu continued his study of computer vision at the Harvard Graduate School of Arts and Sciences. At Harvard, Zhu studied under the supervision of American mathematician David Mumford and gained an introduction to "probably approximately correct" (PAC) learning under the instruction of Leslie Valiant. Zhu concluded his studies at Harvard in 1996 with a Ph.D. in Computer Science and followed Mumford to the Division of Applied Mathematics at Brown University as a postdoctoral fellow.
Career
Following his postdoctoral fellowship, Zhu lectured briefly in Stanford University's Computer Science Department. In 1998, he joined Ohio State University as an assistant professor in the Departments of Computer Science and Cognitive Science. In 2002, Zhu joined the University of California, Los Angeles in the Departments of Computer Science and Statistics as associate professor, rising to the rank of full professor in 2006. At UCLA, Zhu established the Center for Vision, Cognition, Learning and Autonomy. His chief research interest has resided in pursuing a unified statistical and computational framework for vision and intelligence, which includes the Spatial, Temporal, and Causal And-Or graph (STC-AOG) as a unified representation and numerous Monte Carlo methods for inference and learning.
In 2005, Zhu established an independent non-profit organization in his hometown of Ezhou, the Lotus Hill Institute (LHI). LHI has been involved with collecting large-scale dataset of images and annotating the objects, scenes, and activities, having received contributions from many renowned scholars, including Harry Shum. The institute also features a full-time annotation team for parsing image structures, having amassed over 500,000 images to date.
Since establishing LHI, Zhu has organized numerous workshops and conferences, along with serving as the general chair for both the 2012 Conference on Computer Vision and Pattern Recognition (CVPR) in Providence, Rhode Island, where he presented Ulf Grenander with a Pioneer Medal, and the 2019 CVPR held in Long Beach, California.
In July 2017, Zhu founded DMAI in Los Angeles as an AI startup engaged in developing a unified cognitive AI platform.
In September 2020, Zhu returned to China to join Peking University and lead its Institute for Artificial Intelligence, thereby joining Harry Shum, Microsoft's former head of artificial intelligence and research, another Chinese AI expert who had worked in the US and a long-time acquaintance of Zhu. Shum had been appointed by Peking University in August to chair the academic committee of the Institute of Artificial Intelligence.
Zhu is also setting up a new and separate AI research institute, the Beijing Institute for General Artificial Intelligence (BIGAI). According to its introduction, BIGAI is based on the "small data for big task" paradigm and focuses on advanced AI technology, multi-disciplinary integration, and international academic exchange, with the aim of nurturing a new generation of young AI talent. The institute is expected to gather researchers, scholars and experts to put Zhu's theoretical framework of artificial intelligence into practice, jointly promote original Chinese AI technologies, and build a new generation of general AI platforms.
Research and work
Zhu has published over three hundred articles in peer-reviewed journals and proceedings in the following four phases:
Pioneering statistical models to formulate concepts in Marr’s framework
In the early 1990s, Zhu, with collaborators in the pattern theory group, developed advanced statistical models for computer vision. Focusing upon developing a unifying statistical framework for the early vision representations presented in David Marr's posthumously published work titled Vision, they first formulated textures in a new Markov random field model, called FRAME, using a minimax entropy principle to introduce discoveries in neuroscience and psychophysics to Gibbs distributions in statistical physics. Then they proved the equivalence between the FRAME model and the micro-canonical ensemble, which they named the Julesz ensemble. This work received the Marr Prize honorary nomination during the International Conference on Computer Vision (ICCV) in 1999.
During the 1990s, Zhu developed two new classes of nonlinear partial differential equations (PDEs). One class, for image segmentation, is called region competition. This work connecting PDEs to statistical image models received the Helmholtz Test of Time Award at ICCV 2013. The other class, called GRADE (Gibbs Reaction and Diffusion Equations), was published in 1997 and employs a Langevin dynamics approach for inference and learning via stochastic gradient descent (SGD).
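As an illustration of the Langevin dynamics idea underlying this line of work, the following Python sketch draws approximate samples from a Gibbs distribution p(x) proportional to exp(-U(x)) using noisy gradient steps. It is a generic textbook sketch of unadjusted Langevin sampling, not the GRADE equations themselves.

import numpy as np

def langevin_sample(grad_U, x0, step=1e-2, n_steps=10_000, rng=None):
    """Unadjusted Langevin dynamics: approximate samples from a Gibbs
    distribution p(x) proportional to exp(-U(x)), given the gradient of U.
    Generic illustration only, not the specific GRADE equations."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    samples = []
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - 0.5 * step * grad_U(x) + np.sqrt(step) * noise
        samples.append(x.copy())
    return np.array(samples)

# Example: for a standard 2-D Gaussian, U(x) = |x|^2 / 2, so grad U(x) = x.
draws = langevin_sample(lambda x: x, x0=[3.0, -3.0])
print(draws[-5000:].mean(axis=0))  # close to [0, 0]
print(draws[-5000:].std(axis=0))   # close to [1, 1]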
In the early 2000s, Zhu formulated textons using generative models with sparse coding theory and integrated both the texture and texton models to represent primal sketch. With Ying Nian Wu, Zhu advanced the study of perceptual transitions between regimes of models in information scaling and proposed a perceptual scale space theory to extend the image scale space.
Expanding Fu's grammar paradigm by stochastic and-or graph
From 1999 until 2002, with his Ph.D. student Zhuowen Tu, Zhu developed a data-driven Markov chain Monte Carlo (DDMCMC) paradigm to traverse the entire state-space by extending the jump-diffusion work of Grenander-Miller. With another Ph.D. student, Adrian Barbu, he generalized the cluster sampling algorithm (Swendsen-Wang) in physics from Ising/Potts models to arbitrary probabilities. This advancement in the field made the split-merge operators reversible for the first time in the literature and achieved 100-fold speedups over Gibbs sampler and jump-diffusion. This accomplishment led to the work on image parsing that won the Marr Prize in ICCV 2003.
In 2004, Zhu moved to high level vision by studying stochastic grammar. The grammar method dated back to the syntactic pattern recognition approach advocated by King-Sun Fu in the 1970s. Zhu developed grammatical models for a few key vision problems, such as face modeling, face aging, clothes, object detection, rectangular structure parsing, and the sort. He wrote a monograph with Mumford in 2006 titled A Stochastic Grammar of Images. In 2007, Zhu and co-authors received a Marr Prize nomination. The following year, Zhu received the J.K. Aggarwal Prize from the International Association of Pattern Recognition for "contributions to a unified foundation for visual pattern conceptualization, modeling, learning, and inference."
Zhu has extended the and-or graph models to the spatial, temporal, and causal and-or graph (STC-AOG) to express the compositional structures as a unified representation for objects, scenes, actions, events, and causal effects in physical and social scene understanding problems.
Exploring the "dark matter of AI" cognition and visual commonsense
Since 2010, Zhu has collaborated with scholars from cognitive science, AI, robotics, and language to explore what he calls the "Dark Matter of AI"—the 95% of the intelligent processing not directly detectable in sensory input.
Together they have augmented the image parsing and scene understanding problem by cognitive modeling and reasoning about the following aspects: functionality (functions of objects and scenes, the use of tools), intuitive physics (supporting relations, materials, stability, and risk), intention and attention (what people know, think, and intend to do in social scene), causality (the causal effects of actions to change object fluents), and utility (the common values driving human activities in video). The results are disseminated through a series of workshops.
There are numerous other topics Zhu has explored during this period, including the following: formulating AI concepts such as tools, container, liquids; integrating three-dimensional scene parsing and reconstruction from single images by reasoning functionality, physical stability, situated dialogues by joint video and text parsing; developing communicative learning; and mapping the energy landscape of non-convex learning problems.
Pursuing a "small-data for big task" paradigm for general AI
In a widely circulated public article written in Chinese in 2017, Zhu referred to popular data-driven deep learning research as a "big data for small task" paradigm that trains a neural network for each specific task with massive annotated data, resulting in uninterpretable models and narrow AI. Zhu, instead, advocated for a "small data for big task" paradigm to achieve general AI.
Zhu constructed a large-scale physics-realistic VR/AR environment for training and testing autonomous AI agents charged with executing a large amount of daily tasks. This VR/AR platform received the Best Paper Award at the ACM TURC conference in 2019. The agents integrate capabilities within the fields of vision, language, cognition, learning, and robotics, in the process developing physical and social commonsense and communicating with humans using a cognitive architecture.
Awards and honors
1999 – Marr Prize honorary nomination, Seventh Int’l Conference on Computer Vision, Corfu, Greece
2001 – Sloan Research Fellow in Computer Science, Alfred Sloan Foundation
2001 – Career Award, National Science Foundation
2001 – Young Investigator Award, Office of Naval Research
2003 – Marr Prize, Ninth Int’l Conf. on Computer Vision, Nice, France
2007 – Marr Prize honorary nomination at the 11th ICCV at Rio, Brazil
2008 – J.K. Aggarwal Prize, Int’l Association of Pattern Recognition.
2011 – Fellow, IEEE Computer Society.
2013 – Helmholtz Test-of-Time Award at the 14th Int’l Conf. on Computer Vision at Sydney, Australia
2017 – Computational Modeling Prize, Cognitive Science Society
2019 – Best Paper Award, ACM TURC Conference
Publications
Books
S.C. Zhu and D.B. Mumford, A Stochastic Grammar of Images, monograph, now Publishers Inc. 2007.
A.Barbu and S.C. Zhu, Monte Carlo Methods, Springer, Published in 2019.
S.C. Zhu, AI: The Era of Big Integration – Unifying Disciplines within Artificial Intelligence, DMAI, Inc., Published in 2019.
S.C. Zhu and Y.N. Wu, Concepts and Representations in Vision and Cognition, draft taught for over 10 years, in preparation for Springer, 2020.
Papers
Zhu, S. C., Wu, Y., & Mumford, D. (1998). FRAME: filters, random fields, and minimax entropy towards a unified theory for texture modeling. International Journal of Computer Vision, 27(2) pp. 1–20.
Y. N. Wu, S. C. Zhu and X. W. Liu, (2000). Equivalence of Julesz Ensemble and FRAME models International Journal of Computer Vision, 38(3), 247–265.
Tu, Z. and Zhu, S.-C. Image Segmentation by Data Driven Markov Chain Monte Carlo, IEEE Trans. on PAMI, 24(5), 657–673, 2002.
Barbu, A. and Zhu, S.-C., Generalizing Swendsen-Wang to Sampling Arbitrary Posterior Probabilities, IEEE Trans. on PAMI, 27(8), 1239–1253, 2005.
Tu, Z., Chen, X.,Yuille, & Zhu, S.-C. (2003). Image parsing: unifying segmentation, detection, and recognition. Proceedings Ninth IEEE International Conference on Computer Vision.
Zhu, S. C., & Yuille, A. (1996). Region competition: unifying snakes, region growing, and Bayes/MDL for multiband image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(9), 884–900.
Zhu, S. C., & Mumford, D. (1997). Prior learning and Gibbs reaction-diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(11), 1236–1250.
Zhu, S.-C., Guo, C., Wang, Y., & Xu, Z. (2005). What are Textons? International Journal of Computer Vision, 62(1/2), 121–143.
Zhu, S.-C., & Mumford, D. (2006). A Stochastic Grammar of Images. Foundations and Trends in Computer Graphics and Vision, 2(4), 259–362.
Guo, C. Zhu, S.-C. and Wu, Y.(2007), Primal sketch: Integrating Texture and Structure. Computer Vision and Image Understanding, vol. 106, issue 1, 5–19.
Y.N. Wu, C.E. Guo, and S.C. Zhu (2008), From Information Scaling of Natural Images to Regimes of Statistical Models, Quarterly of Applied Mathematics, vol. 66, no. 1, 81–122.
B. Zheng, Y. Zhao, J. Yu, K. Ikeuchi, and S.C. Zhu (2015), Scene Understanding by Reasoning Stability and Safety, Int'l Journal of Computer Vision, vol. 112, no. 2, pp221–238, 2015.
Y. Zhu, Y.B. Zhao and S.C. Zhu (2015), Understanding Tools: Task-Oriented Object Modeling, Learning and Recognition, Proc. of IEEE Conf. on Computer Vision and Pattern Recognition (CVPR).
Fire, A. and S.C. Zhu (2016), Learning Perceptual Causality from Video, ACM Trans. on Intelligent Systems and Technology, 7(2): 23.
Y.X. Zhu, C. Jiang, Y. Zhao, D. Terzopoulos and S.C. Zhu (2016), Inferring Forces and Learning Human Utilities from Video, Proc. of IEEE Conf. on Computer Vision and Pattern Recognition (CVPR).
D. Xie, T. Shu, S. Todorovic and S.C. Zhu (2018), Learning and Inferring “Dark Matter” and Predicting Human Intents and Trajectories in Videos, IEEE Trans on Pattern Analysis and Machine Intelligence, 40(7): 1639–1652.
Zhu, Y. et al. (2020) Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Human-like Commonsense, Engineering special issue on AI.
S.C. Zhu, (2019) AI: The Era of Big Integration – Unifying Disciplines within Artificial Intelligence, DMAI, Inc..
References
External links
UCLA Center for Vision, Cognition, Learning and Autonomy
Song-Chun Zhu's page at UCLA
Lotus Hill Institute for Computer vision and Information Science
Computer Vision and Pattern Recognition, Long Beach, CA, 2019
DM Group, the Dark Matter of AI
ACM图灵大会上的“华山论剑”:朱松纯对话沈向洋 Dialogue by Drs. Song-Chun Zhu and Harry Shum at ACM TURC 2019
The DBLP Computer Science Bibliography
Int’l Workshop on Vision Meets Cognition: Functionality, Physics, Intent, Causality
浅谈人工智能 ("A Brief Discussion of Artificial Intelligence"), 《视觉求索》 WeChat public account, November 2017
1968 births
Living people
American people of Chinese descent
Harvard University alumni
University of Science and Technology of China alumni
University of California, Los Angeles faculty
Fellow Members of the IEEE |
176927 | https://en.wikipedia.org/wiki/Computational%20geometry | Computational geometry | Computational geometry is a branch of computer science devoted to the study of algorithms which can be stated in terms of geometry. Some purely geometrical problems arise out of the study of computational geometric algorithms, and such problems are also considered to be part of computational geometry. While modern computational geometry is a recent development, it is one of the oldest fields of computing with a history stretching back to antiquity.
Computational complexity is central to computational geometry, with great practical significance if algorithms are used on very large datasets containing tens or hundreds of millions of points. For such sets, the difference between O(n²) and O(n log n) may be the difference between days and seconds of computation.
The main impetus for the development of computational geometry as a discipline was progress in computer graphics and computer-aided design and manufacturing (CAD/CAM), but many problems in computational geometry are classical in nature, and may come from mathematical visualization.
Other important applications of computational geometry include robotics (motion planning and visibility problems), geographic information systems (GIS) (geometrical location and search, route planning), integrated circuit design (IC geometry design and verification), computer-aided engineering (CAE) (mesh generation), and computer vision (3D reconstruction).
The main branches of computational geometry are:
Combinatorial computational geometry, also called algorithmic geometry, which deals with geometric objects as discrete entities. A foundational book on the subject by Preparata and Shamos dates the first use of the term "computational geometry" in this sense to 1975.
Numerical computational geometry, also called machine geometry, computer-aided geometric design (CAGD), or geometric modeling, which deals primarily with representing real-world objects in forms suitable for computer computations in CAD/CAM systems. This branch may be seen as a further development of descriptive geometry and is often considered a branch of computer graphics or CAD. The term "computational geometry" in this meaning has been in use since 1971.
Although most algorithms of computational geometry have been developed (and are being developed) for electronic computers, some algorithms were developed for unconventional computers (e.g. optical computers).
Combinatorial computational geometry
The primary goal of research in combinatorial computational geometry is to develop efficient algorithms and data structures for solving problems stated in terms of basic geometrical objects: points, line segments, polygons, polyhedra, etc.
Some of these problems seem so simple that they were not regarded as problems at all until the advent of computers. Consider, for example, the Closest pair problem:
Given n points in the plane, find the two with the smallest distance from each other.
One could compute the distances between all the pairs of points, of which there are n(n-1)/2, then pick the pair with the smallest distance. This brute-force algorithm takes O(n²) time; i.e. its execution time is proportional to the square of the number of points. A classic result in computational geometry was the formulation of an algorithm that takes O(n log n). Randomized algorithms that take O(n) expected time, as well as a deterministic algorithm that takes O(n log log n) time, have also been discovered.
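To make the contrast concrete, a direct Python transcription of the brute-force approach is shown below; it is a minimal sketch of the quadratic baseline only, not of the faster divide-and-conquer or randomized algorithms mentioned above.

from math import dist, inf
from itertools import combinations

def closest_pair_brute_force(points):
    """O(n^2) closest pair: examine every one of the n(n-1)/2 pairs."""
    best = (inf, None, None)
    for p, q in combinations(points, 2):
        d = dist(p, q)
        if d < best[0]:
            best = (d, p, q)
    return best

print(closest_pair_brute_force([(0, 0), (5, 4), (1, 1), (9, 9)]))
# prints roughly (1.414, (0, 0), (1, 1))

The asymptotically faster algorithms avoid enumerating every pair, for example by sorting the points and recursing on halves of the set.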
Problem classes
The core problems in computational geometry may be classified in different ways, according to various criteria. The following general classes may be distinguished.
Static problems
In the problems of this category, some input is given and the corresponding output needs to be constructed or found. Some fundamental problems of this type are:
Convex hull: Given a set of points, find the smallest convex polyhedron/polygon containing all the points (a short code sketch for the planar case appears below).
Line segment intersection: Find the intersections between a given set of line segments.
Delaunay triangulation
Voronoi diagram: Given a set of points, partition the space according to which points are closest to the given points.
Linear programming
Closest pair of points: Given a set of points, find the two with the smallest distance from each other.
Farthest pair of points
Largest empty circle: Given a set of points, find a largest circle with its center inside of their convex hull and enclosing none of them.
Euclidean shortest path: Connect two points in a Euclidean space (with polyhedral obstacles) by a shortest path.
Polygon triangulation: Given a polygon, partition its interior into triangles
Mesh generation
Boolean operations on polygons
The computational complexity for this class of problems is estimated by the time and space (computer memory) required to solve a given problem instance.
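For the planar convex hull problem listed above, one standard O(n log n) method is Andrew's monotone chain algorithm. The following Python sketch is an illustrative implementation, one of several equally standard algorithms.

def convex_hull(points):
    """Andrew's monotone chain: returns hull vertices in counter-clockwise
    order, in O(n log n) time (dominated by the sort)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); positive means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # endpoints are shared, so drop duplicates

print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 0.5)]))
# -> [(0, 0), (2, 0), (2, 2), (0, 2)]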
Geometric query problems
In geometric query problems, commonly known as geometric search problems, the input consists of two parts: the search space part and the query part, which varies over the problem instances. The search space typically needs to be preprocessed, in a way that multiple queries can be answered efficiently.
Some fundamental geometric query problems are:
Range searching: Preprocess a set of points, in order to efficiently count the number of points inside a query region.
Point location: Given a partitioning of the space into cells, produce a data structure that efficiently tells in which cell a query point is located.
Nearest neighbor: Preprocess a set of points, in order to efficiently find which point is closest to a query point.
Ray tracing: Given a set of objects in space, produce a data structure that efficiently tells which object a query ray intersects first.
If the search space is fixed, the computational complexity for this class of problems is usually estimated by:
the time and space required to construct the data structure to be searched in
the time (and sometimes an extra space) to answer queries.
For the case when the search space is allowed to vary, see "Dynamic problems".
Dynamic problems
Yet another major class is the dynamic problems, in which the goal is to find an efficient algorithm for finding a solution repeatedly after each incremental modification of the input data (addition or deletion input geometric elements). Algorithms for problems of this type typically involve dynamic data structures. Any of the computational geometric problems may be converted into a dynamic one, at the cost of increased processing time. For example, the range searching problem may be converted into the dynamic range searching problem by providing for addition and/or deletion of the points. The dynamic convex hull problem is to keep track of the convex hull, e.g., for the dynamically changing set of points, i.e., while the input points are inserted or deleted.
The computational complexity for this class of problems is estimated by:
the time and space required to construct the data structure to be searched in
the time and space to modify the searched data structure after an incremental change in the search space
the time (and sometimes an extra space) to answer a query.
Variations
Some problems may be treated as belonging to either of the categories, depending on the context. For example, consider the following problem.
Point in polygon: Decide whether a point is inside or outside a given polygon.
In many applications this problem is treated as a single-shot one, i.e., belonging to the first class. For example, in many applications of computer graphics a common problem is to find which area on the screen is clicked by a pointer. However, in some applications, the polygon in question is invariant, while the point represents a query. For example, the input polygon may represent a border of a country and a point is a position of an aircraft, and the problem is to determine whether the aircraft violated the border. Finally, in the previously mentioned example of computer graphics, in CAD applications the changing input data are often stored in dynamic data structures, which may be exploited to speed up the point-in-polygon queries.
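A common single-shot solution is the ray casting (crossing number) test. The sketch below is a minimal Python illustration; it ignores the degenerate cases (points exactly on an edge or vertex) that robust implementations must handle.

def point_in_polygon(point, polygon):
    """Ray casting: count crossings of a horizontal ray from the point.
    An odd number of crossings means the point is inside."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        crosses = (y1 > y) != (y2 > y)
        if crosses and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon((2, 2), square))   # True
print(point_in_polygon((5, 2), square))   # False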
In some contexts of query problems there are reasonable expectations on the sequence of the queries, which may be exploited either for efficient data structures or for tighter computational complexity estimates. For example, in some cases it is important to know the worst case for the total time for the whole sequence of N queries, rather than for a single query. See also "amortized analysis".
Numerical computational geometry
This branch is also known as geometric modelling and computer-aided geometric design (CAGD).
Core problems are curve and surface modelling and representation.
The most important instruments here are parametric curves and parametric surfaces, such as Bézier curves, spline curves and surfaces. An important non-parametric approach is the level-set method.
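As an illustration of how parametric curves are evaluated, the following Python sketch implements de Casteljau's algorithm for Bézier curves, which repeatedly interpolates between control points; it is a minimal 2-D example rather than production CAGD code.

def bezier_point(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] using de Casteljau's
    algorithm: repeated linear interpolation between control points."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [
            ((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1])
            for p, q in zip(pts, pts[1:])
        ]
    return pts[0]

# A quadratic Bezier curve with three control points, sampled at its midpoint.
print(bezier_point([(0, 0), (1, 2), (2, 0)], 0.5))  # -> (1.0, 1.0)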
Application areas of computational geometry include shipbuilding, aircraft, and automotive industries.
See also
List of combinatorial computational geometry topics
List of numerical computational geometry topics
CAD/CAM/CAE
List of geometric algorithms
Solid modeling
Computational topology
Computer representation of surfaces
Digital geometry
Discrete geometry (combinatorial geometry)
Space partitioning
Tricomplex number
Robust geometric computation
Wikiversity:Topic:Computational geometry
Wikiversity:Computer-aided geometric design
References
Further reading
List of books in computational geometry
Journals
Combinatorial/algorithmic computational geometry
Below is the list of the major journals that have been publishing research in geometric algorithms. Note that with the appearance of journals specifically dedicated to computational geometry, the share of geometric publications in general-purpose computer science and computer graphics journals has decreased.
ACM Computing Surveys
ACM Transactions on Graphics
Acta Informatica
Advances in Geometry
Algorithmica
Ars Combinatoria
Computational Geometry: Theory and Applications
Communications of the ACM
Computer Aided Geometric Design
Computer Graphics and Applications
Computer Graphics World
Discrete & Computational Geometry
Geombinatorics
Geometriae Dedicata
IEEE Transactions on Graphics
IEEE Transactions on Computers
IEEE Transactions on Pattern Analysis and Machine Intelligence
Information Processing Letters
International Journal of Computational Geometry and Applications
Journal of Combinatorial Theory, series B
Journal of Computational Geometry
Journal of Differential Geometry
Journal of the ACM
Journal of Algorithms
Journal of Computer and System Sciences
Management Science
Pattern Recognition
Pattern Recognition Letters
SIAM Journal on Computing
SIGACT News; featured the "Computational Geometry Column" by Joseph O'Rourke
Theoretical Computer Science
The Visual Computer
External links
Computational Geometry
Computational Geometry Pages
Geometry In Action
"Strategic Directions in Computational Geometry—Working Group Report" (1996)
Journal of Computational Geometry
(Annual) Winter School on Computational Geometry
Computational Geometry Lab
Computational fields of study
Geometry processing |
1505381 | https://en.wikipedia.org/wiki/Numerical%20weather%20prediction | Numerical weather prediction | Numerical weather prediction (NWP) uses mathematical models of the atmosphere and oceans to predict the weather based on current weather conditions. Though first attempted in the 1920s, it was not until the advent of computer simulation in the 1950s that numerical weather predictions produced realistic results. A number of global and regional forecast models are run in different countries worldwide, using current weather observations relayed from radiosondes, weather satellites and other observing systems as inputs.
Mathematical models based on the same physical principles can be used to generate either short-term weather forecasts or longer-term climate predictions; the latter are widely applied for understanding and projecting climate change. The improvements made to regional models have allowed for significant improvements in tropical cyclone track and air quality forecasts; however, atmospheric models perform poorly at handling processes that occur in a relatively constricted area, such as wildfires.
Manipulating the vast datasets and performing the complex calculations necessary to modern numerical weather prediction requires some of the most powerful supercomputers in the world. Even with the increasing power of supercomputers, the forecast skill of numerical weather models extends to only about six days. Factors affecting the accuracy of numerical predictions include the density and quality of observations used as input to the forecasts, along with deficiencies in the numerical models themselves. Post-processing techniques such as model output statistics (MOS) have been developed to improve the handling of errors in numerical predictions.
A more fundamental problem lies in the chaotic nature of the partial differential equations that govern the atmosphere. It is impossible to solve these equations exactly, and small errors grow with time (doubling about every five days). Present understanding is that this chaotic behavior limits accurate forecasts to about 14 days even with accurate input data and a flawless model. In addition, the partial differential equations used in the model need to be supplemented with parameterizations for solar radiation, moist processes (clouds and precipitation), heat exchange, soil, vegetation, surface water, and the effects of terrain. In an effort to quantify the large amount of inherent uncertainty remaining in numerical predictions, ensemble forecasts have been used since the 1990s to help gauge the confidence in the forecast, and to obtain useful results farther into the future than otherwise possible. This approach analyzes multiple forecasts created with an individual forecast model or multiple models.
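This sensitivity to initial conditions can be illustrated with a toy chaotic system. The Python sketch below integrates the Lorenz (1963) equations, a classic low-order analogue of atmospheric convection, from two nearly identical starting states; it demonstrates the error growth described above and is not a forecast model (the step size and integration scheme are chosen only for simplicity).

import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system (illustration only)."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])       # nearly identical initial condition
for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 500 == 0:
        # The separation grows roughly exponentially until it saturates at
        # the size of the attractor, mirroring the loss of predictability.
        print(step, np.linalg.norm(a - b))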
History
The history of numerical weather prediction began in the 1920s through the efforts of Lewis Fry Richardson, who used procedures originally developed by Vilhelm Bjerknes to produce by hand a six-hour forecast for the state of the atmosphere over two points in central Europe, taking at least six weeks to do so. It was not until the advent of the computer and computer simulations that computation time was reduced to less than the forecast period itself. The ENIAC was used to create the first weather forecasts via computer in 1950, based on a highly simplified approximation to the atmospheric governing equations. In 1954, Carl-Gustav Rossby's group at the Swedish Meteorological and Hydrological Institute used the same model to produce the first operational forecast (i.e., a routine prediction for practical use). Operational numerical weather prediction in the United States began in 1955 under the Joint Numerical Weather Prediction Unit (JNWPU), a joint project by the U.S. Air Force, Navy and Weather Bureau. In 1956, Norman Phillips developed a mathematical model which could realistically depict monthly and seasonal patterns in the troposphere; this became the first successful climate model. Following Phillips' work, several groups began working to create general circulation models. The first general circulation climate model that combined both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory.
As computers have become more powerful, the size of the initial data sets has increased and newer atmospheric models have been developed to take advantage of the added available computing power. These newer models include more physical processes in the simplifications of the equations of motion in numerical simulations of the atmosphere. In 1966, West Germany and the United States began producing operational forecasts based on primitive-equation models, followed by the United Kingdom in 1972 and Australia in 1977. The development of limited area (regional) models facilitated advances in forecasting the tracks of tropical cyclones as well as air quality in the 1970s and 1980s. By the early 1980s models began to include the interactions of soil and vegetation with the atmosphere, which led to more realistic forecasts.
The output of forecast models based on atmospheric dynamics is unable to resolve some details of the weather near the Earth's surface. As such, a statistical relationship between the output of a numerical weather model and the ensuing conditions at the ground was developed in the 1970s and 1980s, known as model output statistics (MOS). Starting in the 1990s, model ensemble forecasts have been used to help define the forecast uncertainty and to extend the window in which numerical weather forecasting is viable farther into the future than otherwise possible.
Initialization
The atmosphere is a fluid. As such, the idea of numerical weather prediction is to sample the state of the fluid at a given time and use the equations of fluid dynamics and thermodynamics to estimate the state of the fluid at some time in the future. The process of entering observation data into the model to generate initial conditions is called initialization. On land, terrain maps available at resolutions down to globally are used to help model atmospheric circulations within regions of rugged topography, in order to better depict features such as downslope winds, mountain waves and related cloudiness that affects incoming solar radiation. The main inputs from country-based weather services are observations from devices (called radiosondes) in weather balloons that measure various atmospheric parameters and transmits them to a fixed receiver, as well as from weather satellites. The World Meteorological Organization acts to standardize the instrumentation, observing practices and timing of these observations worldwide. Stations either report hourly in METAR reports, or every six hours in SYNOP reports. These observations are irregularly spaced, so they are processed by data assimilation and objective analysis methods, which perform quality control and obtain values at locations usable by the model's mathematical algorithms. The data are then used in the model as the starting point for a forecast.
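The weighting idea behind data assimilation can be illustrated with a single scalar variable and Gaussian error statistics. The sketch below is a textbook "optimal interpolation" toy example in Python, not any operational assimilation scheme.

def assimilate(background, observation, var_background, var_observation):
    """Combine a model background with an observation, weighting each by the
    inverse of its error variance (scalar optimal interpolation)."""
    gain = var_background / (var_background + var_observation)
    analysis = background + gain * (observation - background)
    var_analysis = (1.0 - gain) * var_background
    return analysis, var_analysis

# Background temperature of 15.0 C (variance 4.0) and an observation of
# 17.0 C (variance 1.0): the analysis of 16.6 C leans toward the more
# accurate observation, and its variance (0.8) is smaller than either input.
print(assimilate(15.0, 17.0, 4.0, 1.0))   # -> (16.6, 0.8)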
A variety of methods are used to gather observational data for use in numerical models. Sites launch radiosondes in weather balloons which rise through the troposphere and well into the stratosphere. Information from weather satellites is used where traditional data sources are not available. Commerce provides pilot reports along aircraft routes and ship reports along shipping routes. Research projects use reconnaissance aircraft to fly in and around weather systems of interest, such as tropical cyclones. Reconnaissance aircraft are also flown over the open oceans during the cold season into systems which cause significant uncertainty in forecast guidance, or are expected to be of high impact from three to seven days into the future over the downstream continent. Sea ice began to be initialized in forecast models in 1971. Efforts to involve sea surface temperature in model initialization began in 1972 due to its role in modulating weather in higher latitudes of the Pacific.
Computation
An atmospheric model is a computer program that produces meteorological information for future times at given locations and altitudes. Within any modern model is a set of equations, known as the primitive equations, used to predict the future state of the atmosphere. These equations—along with the ideal gas law—are used to evolve the density, pressure, and potential temperature scalar fields and the air velocity (wind) vector field of the atmosphere through time. Additional transport equations for pollutants and other aerosols are included in some primitive-equation high-resolution models as well. The equations used are nonlinear partial differential equations which are impossible to solve exactly through analytical methods, with the exception of a few idealized cases. Therefore, numerical methods obtain approximate solutions. Different models use different solution methods: some global models and almost all regional models use finite difference methods for all three spatial dimensions, while other global models and a few regional models use spectral methods for the horizontal dimensions and finite-difference methods in the vertical.
These equations are initialized from the analysis data and rates of change are determined. These rates of change predict the state of the atmosphere a short time into the future; the time increment for this prediction is called a time step. This future atmospheric state is then used as the starting point for another application of the predictive equations to find new rates of change, and these new rates of change predict the atmosphere at a yet further time step into the future. This time stepping is repeated until the solution reaches the desired forecast time. The length of the time step chosen within the model is related to the distance between the points on the computational grid, and is chosen to maintain numerical stability. Time steps for global models are on the order of tens of minutes, while time steps for regional models are between one and four minutes. The global models are run at varying times into the future. The UKMET Unified Model is run six days into the future, while the European Centre for Medium-Range Weather Forecasts' Integrated Forecast System and Environment Canada's Global Environmental Multiscale Model both run out to ten days into the future, and the Global Forecast System model run by the Environmental Modeling Center is run sixteen days into the future. The visual output produced by a model solution is known as a prognostic chart, or prog.
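As an illustration of the time stepping just described, the sketch below advances a single quantity on a one-dimensional periodic grid using first-order upwind differencing, with the time step chosen from the grid spacing and wind speed so that the scheme stays numerically stable (the Courant condition). The grid size, wind speed and initial field are invented for the example; operational models solve far larger systems of coupled equations in three dimensions.

    # Advect a field on a periodic 1-D grid with constant wind speed,
    # using forward-in-time, upwind-in-space differencing.
    def advect(field, wind_speed, grid_spacing, forecast_length):
        # Choose the time step so that the Courant number stays below 1.
        dt = 0.8 * grid_spacing / abs(wind_speed)
        steps = int(forecast_length / dt)
        f = list(field)
        n = len(f)
        for _ in range(steps):                      # repeated time stepping
            f = [f[i] - wind_speed * dt / grid_spacing * (f[i] - f[i - 1])
                 for i in range(n)]                 # f[i-1] wraps around (periodic grid)
        return f

    # Hypothetical initial state: a bump of warm air on a 100 km grid.
    initial = [1.0 if 10 <= i < 20 else 0.0 for i in range(100)]
    forecast = advect(initial, wind_speed=10.0, grid_spacing=100_000.0,
                      forecast_length=6 * 3600.0)   # six hours ahead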
Parameterization
Some meteorological processes are too small-scale or too complex to be explicitly included in numerical weather prediction models. Parameterization is a procedure for representing these processes by relating them to variables on the scales that the model resolves. For example, the gridboxes in weather and climate models have sides on the order of kilometres to hundreds of kilometres in length. A typical cumulus cloud has a scale of less than a kilometre, and would require a grid even finer than this to be represented physically by the equations of fluid motion. Therefore, the processes that such clouds represent are parameterized, by processes of varying sophistication. In the earliest models, if a column of air within a model gridbox was conditionally unstable (essentially, the bottom was warmer and moister than the top) and the water vapor content at any point within the column became saturated, then it would be overturned (the warm, moist air would begin rising), and the air in that vertical column mixed. More sophisticated schemes recognize that only some portions of the box might convect and that entrainment and other processes occur. Weather models whose gridboxes are small enough can explicitly represent convective clouds, although they still need to parameterize cloud microphysics, which occurs at a smaller scale. The formation of large-scale (stratus-type) clouds is more physically based; they form when the relative humidity reaches some prescribed value. The cloud fraction can be related to this critical value of relative humidity.
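The earliest convective parameterization described above amounts to a simple adjustment: if a grid column is conditionally unstable and saturated at some level, mix it. The sketch below expresses that idea for a column of temperatures; the stability and saturation tests are deliberately crude stand-ins, since real schemes work with moist static energy, entrainment and many other effects.

    # Crude stand-in for an early convective-adjustment scheme: if the column is
    # unstable (much warmer at the bottom than the top) and saturated somewhere,
    # relax the column toward its mean temperature.
    def adjust_column(temperatures, relative_humidity, instability_threshold=10.0):
        unstable = temperatures[0] - temperatures[-1] > instability_threshold
        saturated = any(rh >= 1.0 for rh in relative_humidity)
        if unstable and saturated:
            mean = sum(temperatures) / len(temperatures)
            # "Overturning": move every level halfway toward the column mean.
            return [0.5 * (t + mean) for t in temperatures]
        return temperatures

    # Hypothetical column, warm and moist near the surface.
    column = adjust_column([300.0, 295.0, 287.0, 280.0], [1.02, 0.9, 0.7, 0.4])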
The amount of solar radiation reaching the ground, as well as the formation of cloud droplets, occurs on the molecular scale, and so these processes must be parameterized before they can be included in the model. Atmospheric drag produced by mountains must also be parameterized, as the limitations in the resolution of elevation contours produce significant underestimates of the drag. This method of parameterization is also done for the surface flux of energy between the ocean and the atmosphere, in order to determine realistic sea surface temperatures and the type of sea ice found near the ocean's surface. Sun angle as well as the impact of multiple cloud layers is taken into account. Soil type, vegetation type, and soil moisture all determine how much radiation goes into warming and how much moisture is drawn up into the adjacent atmosphere, and thus it is important to parameterize their contribution to these processes. Within air quality models, parameterizations take into account atmospheric emissions from multiple relatively tiny sources (e.g. roads, fields, factories) within specific grid boxes.
Domains
The horizontal domain of a model is either global, covering the entire Earth, or regional, covering only part of the Earth. Regional models (also known as limited-area models, or LAMs) allow for the use of finer grid spacing than global models because the available computational resources are focused on a specific area instead of being spread over the globe. This allows regional models to resolve explicitly smaller-scale meteorological phenomena that cannot be represented on the coarser grid of a global model. Regional models use a global model to specify conditions at the edge of their domain (boundary conditions) in order to allow systems from outside the regional model domain to move into its area. Uncertainty and errors within regional models are introduced by the global model used for the boundary conditions of the edge of the regional model, as well as errors attributable to the regional model itself.
The vertical coordinate is handled in various ways. Lewis Fry Richardson's 1922 model used geometric height as the vertical coordinate. Later models substituted the geometric coordinate with a pressure coordinate system, in which the geopotential heights of constant-pressure surfaces become dependent variables, greatly simplifying the primitive equations. This correlation between coordinate systems can be made since pressure decreases with height through the Earth's atmosphere. The first model used for operational forecasts, the single-layer barotropic model, used a single pressure coordinate at the 500-millibar level, and thus was essentially two-dimensional. High-resolution models—also called mesoscale models—such as the Weather Research and Forecasting model tend to use normalized pressure coordinates referred to as sigma coordinates. This coordinate system receives its name from the independent variable used to scale atmospheric pressures with respect to the pressure at the surface, and in some cases also with the pressure at the top of the domain.
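In its simplest form, the sigma coordinate of a level is the local pressure divided by the surface pressure, so the coordinate follows the terrain (sigma equals 1 at the ground and decreases upward). The short sketch below computes sigma values for a hypothetical column; hybrid schemes that also reference a fixed top pressure are only slightly more involved.

    # Terrain-following sigma coordinate: sigma = p / p_surface.
    def sigma_levels(pressures_hpa, surface_pressure_hpa):
        return [p / surface_pressure_hpa for p in pressures_hpa]

    # Hypothetical column over high terrain where the surface pressure is 850 hPa.
    levels = sigma_levels([850.0, 700.0, 500.0, 250.0], surface_pressure_hpa=850.0)
    # -> [1.0, 0.82..., 0.58..., 0.29...]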
Model output statistics
Because forecast models based upon the equations for atmospheric dynamics do not perfectly determine weather conditions, statistical methods have been developed to attempt to correct the forecasts. Statistical models were created based upon the three-dimensional fields produced by numerical weather models, surface observations and the climatological conditions for specific locations. These statistical models are collectively referred to as model output statistics (MOS), and were developed by the National Weather Service for their suite of weather forecasting models in the late 1960s.
Model output statistics differ from the perfect prog technique, which assumes that the output of numerical weather prediction guidance is perfect. MOS can correct for local effects that cannot be resolved by the model due to insufficient grid resolution, as well as model biases. Because MOS is run after its respective global or regional model, its production is known as post-processing. Forecast parameters within MOS include maximum and minimum temperatures, percentage chance of rain within a several hour period, precipitation amount expected, chance that the precipitation will be frozen in nature, chance for thunderstorms, cloudiness, and surface winds.
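In its simplest form, MOS is a regression fitted between past model output and what was later observed at a station, then applied to new model output as a post-processing correction. The sketch below fits a one-predictor linear correction by least squares; the numbers and the single-predictor setup are illustrative only, as operational MOS uses many predictors per station and forecast element.

    # Fit observed = intercept + slope * model by least squares,
    # then use the fit to correct a new raw forecast.
    def fit_mos(model_values, observed_values):
        n = len(model_values)
        mean_m = sum(model_values) / n
        mean_o = sum(observed_values) / n
        cov = sum((m - mean_m) * (o - mean_o)
                  for m, o in zip(model_values, observed_values))
        var = sum((m - mean_m) ** 2 for m in model_values)
        slope = cov / var
        intercept = mean_o - slope * mean_m
        return lambda raw_forecast: intercept + slope * raw_forecast

    # Hypothetical training data: raw model 2 m temperatures vs. station reports.
    correct = fit_mos([12.0, 15.0, 9.0, 20.0], [13.5, 16.0, 11.0, 20.5])
    station_forecast = correct(17.0)   # bias-corrected forecast for the station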
Ensembles
In 1963, Edward Lorenz discovered the chaotic nature of the fluid dynamics equations involved in weather forecasting. Extremely small errors in temperature, winds, or other initial inputs given to numerical models will amplify and double every five days, making it impossible for long-range forecasts—those made more than two weeks in advance—to predict the state of the atmosphere with any degree of forecast skill. Furthermore, existing observation networks have poor coverage in some regions (for example, over large bodies of water such as the Pacific Ocean), which introduces uncertainty into the true initial state of the atmosphere. While a set of equations, known as the Liouville equations, exists to determine the initial uncertainty in the model initialization, the equations are too complex to run in real-time, even with the use of supercomputers. These uncertainties limit forecast model accuracy to about five or six days into the future.
Edward Epstein recognized in 1969 that the atmosphere could not be completely described with a single forecast run due to inherent uncertainty, and proposed using an ensemble of stochastic Monte Carlo simulations to produce means and variances for the state of the atmosphere. Although this early example of an ensemble showed skill, in 1974 Cecil Leith showed that they produced adequate forecasts only when the ensemble probability distribution was a representative sample of the probability distribution in the atmosphere.
Since the 1990s, ensemble forecasts have been used operationally (as routine forecasts) to account for the stochastic nature of weather processes – that is, to resolve their inherent uncertainty. This method involves analyzing multiple forecasts created with an individual forecast model by using different physical parametrizations or varying initial conditions. Starting in 1992 with ensemble forecasts prepared by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Centers for Environmental Prediction, model ensemble forecasts have been used to help define the forecast uncertainty and to extend the window in which numerical weather forecasting is viable farther into the future than otherwise possible. The ECMWF model, the Ensemble Prediction System, uses singular vectors to simulate the initial probability density, while the NCEP ensemble, the Global Ensemble Forecasting System, uses a technique known as vector breeding. The UK Met Office runs global and regional ensemble forecasts where perturbations to initial conditions are used by 24 ensemble members in the Met Office Global and Regional Ensemble Prediction System (MOGREPS) to produce 24 different forecasts.
In a single model-based approach, the ensemble forecast is usually evaluated in terms of an average of the individual forecasts concerning one forecast variable, as well as the degree of agreement between various forecasts within the ensemble system, as represented by their overall spread. Ensemble spread is diagnosed through tools such as spaghetti diagrams, which show the dispersion of one quantity on prognostic charts for specific time steps in the future. Another tool where ensemble spread is used is a meteogram, which shows the dispersion in the forecast of one quantity for one specific location. It is common for the ensemble spread to be too small to include the weather that actually occurs, which can lead to forecasters misdiagnosing model uncertainty; this problem becomes particularly severe for forecasts of the weather about ten days in advance. When ensemble spread is small and the forecast solutions are consistent within multiple model runs, forecasters have more confidence in the ensemble mean, and in the forecast in general. Despite this perception, a spread-skill relationship is often weak or not found, as spread-error correlations are normally less than 0.6, and only under special circumstances range between 0.6–0.7.
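The two quantities used above, the ensemble mean of a forecast variable and the spread of the members about it, are straightforward to compute once the member forecasts are in hand. The sketch below does so for one variable at one location; the member values are invented for the example.

    from statistics import mean, pstdev

    # Hypothetical 24-hour temperature forecasts from the members of an ensemble.
    members = [18.2, 19.1, 17.8, 20.4, 18.9, 19.6, 17.5, 21.0]

    ensemble_mean = mean(members)      # the "best guess" forecast
    ensemble_spread = pstdev(members)  # small spread -> members agree closely

    # A meteogram-style summary for one location and lead time.
    print(f"mean {ensemble_mean:.1f} C, spread {ensemble_spread:.1f} C")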
In the same way that many forecasts from a single model can be used to form an ensemble, multiple models may also be combined to produce an ensemble forecast. This approach is called multi-model ensemble forecasting, and it has been shown to improve forecasts when compared to a single model-based approach. Models within a multi-model ensemble can be adjusted for their various biases, which is a process known as superensemble forecasting. This type of forecast significantly reduces errors in model output.
Applications
Air quality modeling
Air quality forecasting attempts to predict when the concentrations of pollutants will attain levels that are hazardous to public health. The concentration of pollutants in the atmosphere is determined by their transport, or mean velocity of movement through the atmosphere, their diffusion, chemical transformation, and ground deposition. In addition to pollutant source and terrain information, these models require data about the state of the fluid flow in the atmosphere to determine its transport and diffusion. Meteorological conditions such as thermal inversions can prevent surface air from rising, trapping pollutants near the surface, which makes accurate forecasts of such events crucial for air quality modeling. Urban air quality models require a very fine computational mesh, requiring the use of high-resolution mesoscale weather models; in spite of this, the quality of numerical weather guidance is the main uncertainty in air quality forecasts.
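The transport and diffusion terms mentioned above can be illustrated with a minimal one-dimensional update of a pollutant concentration field, in which the wind carries material downstream while turbulent mixing spreads it out. The wind speed, diffusivity and grid values below are invented for the example; real air quality models add chemistry, emissions and deposition in three dimensions.

    # One explicit time step of 1-D advection (wind u) plus diffusion (mixing k)
    # for a pollutant concentration c on a periodic grid.
    def transport_step(c, u, k, dx, dt):
        n = len(c)
        new = []
        for i in range(n):
            advection = -u * (c[i] - c[i - 1]) / dx              # upwind, u > 0
            diffusion = k * (c[(i + 1) % n] - 2 * c[i] + c[i - 1]) / dx ** 2
            new.append(c[i] + dt * (advection + diffusion))
        return new

    # Hypothetical plume released in the middle of a grid with 1 km spacing.
    conc = [0.0] * 50
    conc[25] = 100.0
    for _ in range(60):                       # one hour of one-minute steps
        conc = transport_step(conc, u=3.0, k=50.0, dx=1000.0, dt=60.0)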
Climate modeling
A General Circulation Model (GCM) is a mathematical model that can be used in computer simulations of the global circulation of a planetary atmosphere or ocean. An atmospheric general circulation model (AGCM) is essentially the same as a global numerical weather prediction model, and some (such as the one used in the UK Unified Model) can be configured for both short-term weather forecasts and longer-term climate predictions. Along with sea ice and land-surface components, AGCMs and oceanic GCMs (OGCM) are key components of global climate models, and are widely applied for understanding the climate and projecting climate change. For aspects of climate change, a range of man-made chemical emission scenarios can be fed into the climate models to see how an enhanced greenhouse effect would modify the Earth's climate. Versions designed for climate applications with time scales of decades to centuries were originally created in 1969 by Syukuro Manabe and Kirk Bryan at the Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey. When run for multiple decades, computational limitations mean that the models must use a coarse grid that leaves smaller-scale interactions unresolved.
Ocean surface modeling
The transfer of energy between the wind blowing over the surface of an ocean and the ocean's upper layer is an important element in wave dynamics. The spectral wave transport equation is used to describe the change in wave spectrum over changing topography. It simulates wave generation, wave movement (propagation within a fluid), wave shoaling, refraction, energy transfer between waves, and wave dissipation. Since surface winds are the primary forcing mechanism in the spectral wave transport equation, ocean wave models use information produced by numerical weather prediction models as inputs to determine how much energy is transferred from the atmosphere into the layer at the surface of the ocean. Along with dissipation of energy through whitecaps and resonance between waves, surface winds from numerical weather models allow for more accurate predictions of the state of the sea surface.
Tropical cyclone forecasting
Tropical cyclone forecasting also relies on data provided by numerical weather models. Three main classes of tropical cyclone guidance models exist: Statistical models are based on an analysis of storm behavior using climatology, and correlate a storm's position and date to produce a forecast that is not based on the physics of the atmosphere at the time. Dynamical models are numerical models that solve the governing equations of fluid flow in the atmosphere; they are based on the same principles as other limited-area numerical weather prediction models but may include special computational techniques such as refined spatial domains that move along with the cyclone. Models that use elements of both approaches are called statistical-dynamical models.
In 1978, the first hurricane-tracking model based on atmospheric dynamics, the movable fine-mesh (MFM) model, began operating. Within the field of tropical cyclone track forecasting, despite the ever-improving dynamical model guidance that came with increased computational power, it was not until the 1980s that numerical weather prediction showed skill, and not until the 1990s that it consistently outperformed statistical or simple dynamical models. Predictions of the intensity of a tropical cyclone based on numerical weather prediction continue to be a challenge, since statistical methods continue to show higher skill than dynamical guidance.
Wildfire modeling
On a molecular scale, there are two main competing reaction processes involved in the degradation of cellulose, or wood fuels, in wildfires. When there is a low amount of moisture in a cellulose fiber, volatilization of the fuel occurs; this process will generate intermediate gaseous products that will ultimately be the source of combustion. When moisture is present, or when enough heat is being carried away from the fiber, charring occurs. The chemical kinetics of both reactions indicate that there is a point at which the level of moisture is low enough, and/or heating rates high enough, for combustion processes to become self-sufficient. Consequently, changes in wind speed, direction, moisture, temperature, or lapse rate at different levels of the atmosphere can have a significant impact on the behavior and growth of a wildfire. Since the wildfire acts as a heat source to the atmospheric flow, the wildfire can modify local advection patterns, introducing a feedback loop between the fire and the atmosphere.
A simplified two-dimensional model for the spread of wildfires that used convection to represent the effects of wind and terrain, as well as radiative heat transfer as the dominant method of heat transport, led to reaction–diffusion systems of partial differential equations. More complex models join numerical weather models or computational fluid dynamics models with a wildfire component, which allows the feedback effects between the fire and the atmosphere to be estimated. The additional complexity in the latter class of models translates to a corresponding increase in their computer power requirements. In fact, a full three-dimensional treatment of combustion via direct numerical simulation at scales relevant for atmospheric modeling is not currently practical because of the excessive computational cost such a simulation would require. Numerical weather models have limited forecast skill at very fine spatial resolutions, forcing complex wildfire models to parameterize the fire in order to calculate how the winds will be modified locally by the wildfire, and to use those modified winds to determine the rate at which the fire will spread locally.
See also
Atmospheric physics
Atmospheric thermodynamics
Tropical cyclone forecast model
Types of atmospheric models
References
Further reading
External links
NOAA Supercomputer upgrade
NOAA Supercomputers
Air Resources Laboratory
Fleet Numerical Meteorology and Oceanography Center
European Centre for Medium-Range Weather Forecasts
UK Met Office
Computational science
Numerical climate and weather models
Applied mathematics
Weather prediction
Computational fields of study |
2065486 | https://en.wikipedia.org/wiki/Enlightened%20Sound%20Daemon | Enlightened Sound Daemon | In computing, the Enlightened Sound Daemon (ESD or EsounD) was the sound server for Enlightenment and GNOME. Esound is a small sound daemon for both Linux and UNIX. ESD was created to provide a consistent and simple interface to the audio device, so applications do not need to have different driver support written per architecture. It was also designed to enhance capabilities of audio devices such as allowing more than one application to share an open device. ESD accomplishes these things while remaining transparent to the application, meaning that the application developer can simply provide ESD support and let it do the rest. On top of this, the API is designed to be very similar to the current audio device API, making it easy to port to ESD.
ESD will mix the simultaneous audio output of multiple running programs, and output the resulting stream to the sound card.
ESD can also manage network-transparent audio. As such, an application that supports ESD can output audio over the network, to any connected computer that is running an ESD server.
ESD support must be specifically written and added into applications, as ESD does not emulate normal audio hardware APIs. Since ESD has been around for over a decade, earlier than almost any other sound server, a very large number of Unix applications have support for ESD output built-in, or available as add-ons.
ESD was maintained as part of the GNOME project, but as of April 2009, all ESD modules in GNOME have been ported to libcanberra for event sounds or GStreamer/PulseAudio for everything else.
PulseAudio 2.0 completely drops ESounD support.
Architecture Overview
Esound (ESD) is a stand-alone sound daemon that abstracts the system sound device to multiple clients. Under Linux using the Open Sound System (OSS), as well as other UNIX systems, typically only one process may open the sound device. This is not acceptable in a desktop environment like GNOME, as it is expected that many applications will be making sounds (music decoders, event based sounds, video conferencing, etc.). The ESD daemon connects to the sound device and accepts connections from multiple clients, mixing the incoming audio streams and sending the result to the sound device. Connections are only allowed to clients that can authenticate successfully, alleviating the concern that unauthorized users can eavesdrop via the sound device. In addition to accepting client connections from the local machine, ESD can be configured to accept client connections from remote hosts that authenticate successfully.
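At its core, the mixing step described above sums the sample values of the active client streams and clamps the result to the range of the output format. The Python sketch below shows that idea for blocks of signed 16-bit samples; it is a conceptual illustration only and is not taken from the EsounD source code.

    # Mix several blocks of signed 16-bit PCM samples into one output block,
    # clamping to the legal sample range, much as a sound daemon's mixer loop might.
    def mix_blocks(blocks):
        length = max(len(b) for b in blocks)
        mixed = []
        for i in range(length):
            total = sum(b[i] for b in blocks if i < len(b))
            mixed.append(max(-32768, min(32767, total)))   # clamp to 16-bit range
        return mixed

    # Two hypothetical client streams (e.g. an event sound and a music player).
    out = mix_blocks([[1000, 2000, -1500], [500, -2500, 30000, 100]])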
Applications wanting to contact the ESD daemon do so using the libesd library. Much like with file i/o, an ESD connection is first opened. The ESD daemon will be spawned automatically by libesd if a daemon is not already present. Data is then either read or written to the ESD daemon. For an ESD client local to the machine that the ESD daemon is running on, the data is transferred through a local socket, then written to the sound device by the ESD daemon. For a client on a remote machine, the data is sent by libesd on the remote machine over the network to the ESD daemon. The process is completely transparent to the application using ESD.
See also
PulseAudio – prevailing sound server for Linux desktop use
Sndio – sound server from OpenBSD
JACK Audio Connection Kit – prevailing sound server for professional audio production
PipeWire – a newer unified sound and video server, under development, which aims to replace PulseAudio, JACK and GStreamer
References
External links
Current Gnome EsounD source archive (current Gnome releases)
GNOME
Free audio software
Enlightenment Foundation Libraries
Audio libraries
Audio software for Linux |
25315 | https://en.wikipedia.org/wiki/Quality%20of%20service | Quality of service | Quality of service (QoS) is the description or measurement of the overall performance of a service, such as a telephony or computer network or a cloud computing service, particularly the performance seen by the users of the network. To quantitatively measure quality of service, several related aspects of the network service are often considered, such as packet loss, bit rate, throughput, transmission delay, availability, jitter, etc.
In the field of computer networking and other packet-switched telecommunication networks, quality of service refers to traffic prioritization and resource reservation control mechanisms rather than the achieved service quality. Quality of service is the ability to provide different priorities to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow.
Quality of service is particularly important for the transport of traffic with special requirements. In particular, developers have introduced Voice over IP technology to allow computer networks to become as useful as telephone networks for audio conversations, as well as supporting new applications with even stricter network performance requirements.
Definitions
In the field of telephony, quality of service was defined by the ITU in 1994. Quality of service comprises requirements on all the aspects of a connection, such as service response time, loss, signal-to-noise ratio, crosstalk, echo, interrupts, frequency response, loudness levels, and so on. A subset of telephony QoS is grade of service (GoS) requirements, which comprises aspects of a connection relating to capacity and coverage of a network, for example guaranteed maximum blocking probability and outage probability.
In the field of computer networking and other packet-switched telecommunication networks, teletraffic engineering refers to traffic prioritization and resource reservation control mechanisms rather than the achieved service quality. Quality of service is the ability to provide different priorities to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow. For example, a required bit rate, delay, delay variation, packet loss or bit error rates may be guaranteed. Quality of service is important for real-time streaming multimedia applications such as voice over IP, multiplayer online games and IPTV, since these often require fixed bit rate and are delay sensitive. Quality of service is especially important in networks where the capacity is a limited resource, for example in cellular data communication.
A network or protocol that supports QoS may agree on a traffic contract with the application software and reserve capacity in the network nodes, for example during a session establishment phase. During the session it may monitor the achieved level of performance, for example the data rate and delay, and dynamically control scheduling priorities in the network nodes. It may release the reserved capacity during a tear down phase.
A best-effort network or service does not support quality of service. An alternative to complex QoS control mechanisms is to provide high quality communication over a best-effort network by over-provisioning the capacity so that it is sufficient for the expected peak traffic load. The resulting absence of network congestion reduces or eliminates the need for QoS mechanisms.
QoS is sometimes used as a quality measure, with many alternative definitions, rather than referring to the ability to reserve resources. Quality of service sometimes refers to the level of quality of service, i.e. the guaranteed service quality. High QoS is often confused with a high level of performance, for example high bit rate, low latency and low bit error rate.
QoS is sometimes used in application layer services such as telephony and streaming video to describe a metric that reflects or predicts the subjectively experienced quality. In this context, QoS is the acceptable cumulative effect on subscriber satisfaction of all imperfections affecting the service. Other terms with similar meaning are the quality of experience (QoE), mean opinion score (MOS), perceptual speech quality measure (PSQM) and perceptual evaluation of video quality (PEVQ).
History
A number of layer 2 technologies that add QoS tags to the data gained popularity in the past. Examples are frame relay, asynchronous transfer mode (ATM) and multiprotocol label switching (MPLS) (a technique between layer 2 and 3). Although these network technologies remain in use today, this kind of network lost attention after the advent of Ethernet networks. Today Ethernet is, by far, the most popular layer 2 technology. Conventional Internet routers and network switches operate on a best effort basis. This equipment is less expensive, less complex and faster and thus more popular than earlier more complex technologies that provided QoS mechanisms.
Ethernet optionally uses 802.1p to signal the priority of a frame.
Four type-of-service bits and three precedence bits were originally provided in each IP packet header, but they were not generally respected. These bits were later redefined as differentiated services code points (DSCP).
With the advent of IPTV and IP telephony, QoS mechanisms are increasingly available to the end user.
Qualities of traffic
In packet-switched networks, quality of service is affected by various factors, which can be divided into human and technical factors. Human factors include: stability of service quality, availability of service, waiting times and user information. Technical factors include: reliability, scalability, effectiveness, maintainability and network congestion.
Many things can happen to packets as they travel from origin to destination, resulting in the following problems as seen from the point of view of the sender and receiver:
Goodput: Due to varying load from disparate users sharing the same network resources, the maximum throughput that can be provided to a certain data stream may be too low for real-time multimedia services.
Packet loss: The network may fail to deliver (drop) some packets due to network congestion. The receiving application may ask for this information to be retransmitted, possibly resulting in congestive collapse or unacceptable delays in the overall transmission.
Errors: Sometimes packets are corrupted due to bit errors caused by noise and interference, especially in wireless communications and long copper wires. The receiver has to detect this and, just as if the packet was dropped, may ask for this information to be retransmitted.
Latency: It might take a long time for each packet to reach its destination because it gets held up in long queues, or it takes a less direct route to avoid congestion. In some cases, excessive latency can render an application such as VoIP or online gaming unusable.
Packet delay variation: Packets from the source will reach the destination with different delays. A packet's delay varies with its position in the queues of the routers along the path between source and destination, and this position can vary unpredictably. Delay variation can be absorbed at the receiver, but doing so increases the overall latency for the stream; a sketch of how a receiver can estimate this variation is given after this list.
Out-of-order delivery: When a collection of related packets is routed through a network, different packets may take different routes, each resulting in a different delay. The result is that the packets arrive in a different order than they were sent. This problem requires special additional protocols for rearranging out-of-order packets. The reordering process requires additional buffering at the receiver, and, as with packet delay variation, increases the overall latency for the stream.
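Packet delay variation is commonly estimated at the receiver from the differences between successive packets' transit times. The sketch below computes a smoothed jitter estimate in the style of the running estimator used for RTP media streams, where each new transit-time difference moves the estimate one sixteenth of the way; the timestamps are invented for the example.

    # Running jitter estimate from packet send/receive timestamps (in seconds).
    def estimate_jitter(send_times, recv_times):
        transit = [r - s for s, r in zip(send_times, recv_times)]
        jitter = 0.0
        for prev, cur in zip(transit, transit[1:]):
            jitter += (abs(cur - prev) - jitter) / 16.0
        return jitter

    # Hypothetical packets sent every 20 ms and received with varying delay.
    sent = [0.000, 0.020, 0.040, 0.060, 0.080]
    received = [0.051, 0.073, 0.089, 0.115, 0.130]
    j = estimate_jitter(sent, received)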
Applications
A defined quality of service may be desired or required for certain types of network traffic, for example:
Streaming media specifically
Internet protocol television (IPTV)
Audio over Ethernet
Audio over IP
Voice over IP (VoIP)
Videotelephony
Telepresence
Storage applications such as iSCSI and Fibre Channel over Ethernet
Circuit emulation service
Safety-critical applications such as remote surgery where availability issues can be hazardous
Network operations support systems either for the network itself, or for customers' business critical needs
Online games where real-time lag can be a factor
Industrial control systems protocols such as EtherNet/IP which are used for real-time control of machinery
These types of service are called inelastic, meaning that they require a certain minimum bit rate and a certain maximum latency to function. By contrast, elastic applications can take advantage of however much or little bandwidth is available. Bulk file transfer applications that rely on TCP are generally elastic.
Mechanisms
Circuit-switched networks, especially those intended for voice transmission, such as Asynchronous Transfer Mode (ATM) or GSM, have QoS in the core protocol: resources are reserved at each step on the network for the call as it is set up, so there is no need for additional procedures to achieve the required performance. Shorter data units and built-in QoS were some of the unique selling points of ATM for applications such as video on demand.
When the expense of mechanisms to provide QoS is justified, network customers and providers can enter into a contractual agreement termed a service-level agreement (SLA) which specifies guarantees for the ability of a connection to give guaranteed performance in terms of throughput or latency based on mutually agreed measures.
Over-provisioning
An alternative to complex QoS control mechanisms is to provide high-quality communication by generously over-provisioning a network so that capacity is based on peak traffic load estimates. This approach is simple for networks with predictable peak loads. The calculation may need to take into account demanding applications that can compensate for variations in bandwidth and delay with large receive buffers, as is often possible, for example, in video streaming.
Over-provisioning can be of limited use in the face of transport protocols (such as TCP) that over time increase the amount of data placed on the network until all available bandwidth is consumed and packets are dropped. Such greedy protocols tend to increase latency and packet loss for all users.
The amount of over-provisioning in interior links required to replace QoS depends on the number of users and their traffic demands. This limits the usability of over-provisioning. Newer, more bandwidth-intensive applications and the addition of more users consume the spare capacity of over-provisioned networks. This then requires a physical upgrade of the relevant network links, which is an expensive process. Thus over-provisioning cannot be blindly assumed on the Internet.
Commercial VoIP services are often competitive with traditional telephone service in terms of call quality even without QoS mechanisms in use on the user's connection to their ISP and the VoIP provider's connection to a different ISP. Under high load conditions, however, VoIP may degrade to cell-phone quality or worse. The mathematics of packet traffic indicate that a network requires just 60% more raw capacity under conservative assumptions.
IP and Ethernet efforts
Unlike single-owner networks, the Internet is a series of exchange points interconnecting private networks. Hence the Internet's core is owned and managed by a number of different network service providers, not a single entity. Its behavior is much more unpredictable.
There are two principal approaches to QoS in modern packet-switched IP networks, a parameterized system based on an exchange of application requirements with the network, and a prioritized system where each packet identifies a desired service level to the network.
Integrated services ("IntServ") implements the parameterized approach. In this model, applications use the Resource Reservation Protocol (RSVP) to request and reserve resources through a network.
Differentiated services ("DiffServ") implements the prioritized model. DiffServ marks packets according to the type of service they desire. In response to these markings, routers and switches use various scheduling strategies to tailor performance to expectations. Differentiated services code point (DSCP) markings use the first 6 bits in the ToS field (now renamed as the DS field) of the IP(v4) packet header.
Early work used the integrated services (IntServ) philosophy of reserving network resources. In this model, applications used RSVP to request and reserve resources through a network. While IntServ mechanisms do work, it was realized that in a broadband network typical of a larger service provider, Core routers would be required to accept, maintain, and tear down thousands or possibly tens of thousands of reservations. It was believed that this approach would not scale with the growth of the Internet, and in any event was antithetical to the end-to-end principle, the notion of designing networks so that core routers do little more than simply switch packets at the highest possible rates.
Under DiffServ, packets are marked either by the traffic sources themselves or by the edge devices where the traffic enters the network. In response to these markings, routers and switches use various queuing strategies to tailor performance to requirements. At the IP layer, DSCP markings use the 6 bit DS field in the IP packet header. At the MAC layer, VLAN IEEE 802.1Q can be used to carry 3 bit of essentially the same information. Routers and switches supporting DiffServ configure their network scheduler to use multiple queues for packets awaiting transmission from bandwidth constrained (e.g., wide area) interfaces. Router vendors provide different capabilities for configuring this behavior, to include the number of queues supported, the relative priorities of queues, and bandwidth reserved for each queue.
In practice, when a packet must be forwarded from an interface with queuing, packets requiring low jitter (e.g., VoIP or videoconferencing) are given priority over packets in other queues. Typically, some bandwidth is allocated by default to network control packets (such as Internet Control Message Protocol and routing protocols), while best-effort traffic might simply be given whatever bandwidth is left over.
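The forwarding behaviour described above can be pictured as a scheduler draining several queues in strict priority order: the low-jitter queue is emptied first, then network control, then best effort. The sketch below is a toy strict-priority scheduler with illustrative queue names; real routers combine priority queues with weighted sharing and per-queue bandwidth limits to prevent starvation of lower classes.

    from collections import deque

    # Strict-priority scheduler: always transmit from the highest-priority
    # non-empty queue. Queue names and contents are illustrative.
    queues = {
        "voice": deque(),          # low-jitter traffic, served first
        "network-control": deque(),
        "best-effort": deque(),
    }
    order = ["voice", "network-control", "best-effort"]

    def enqueue(klass, packet):
        queues[klass].append(packet)

    def dequeue():
        for klass in order:
            if queues[klass]:
                return queues[klass].popleft()
        return None                # nothing waiting

    enqueue("best-effort", "web page segment")
    enqueue("voice", "RTP frame")
    first_out = dequeue()          # -> "RTP frame"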
At the Media Access Control (MAC) layer, VLAN IEEE 802.1Q and IEEE 802.1p can be used to distinguish between Ethernet frames and classify them. Queueing theory models have been developed on performance analysis and QoS for MAC layer protocols.
Cisco IOS NetFlow and the Cisco Class Based QoS (CBQoS) Management Information Base (MIB) are marketed by Cisco Systems.
One compelling example of the need for QoS on the Internet relates to congestive collapse. The Internet relies on congestion avoidance protocols, primarily as built into Transmission Control Protocol (TCP), to reduce traffic under conditions that would otherwise lead to congestive collapse. QoS applications, such as VoIP and IPTV, require largely constant bitrates and low latency, therefore they cannot use TCP and cannot otherwise reduce their traffic rate to help prevent congestion. Service-level agreements limit traffic that can be offered to the Internet and thereby enforce traffic shaping that can prevent it from becoming overloaded, and are hence an indispensable part of the Internet's ability to handle a mix of real-time and non-real-time traffic without collapse.
Protocols
Several QoS mechanisms and schemes exist for IP networking.
The type of service (ToS) field in the IPv4 header (now superseded by DiffServ)
Differentiated services (DiffServ)
Integrated services (IntServ)
Resource Reservation Protocol (RSVP)
RSVP-TE
QoS capabilities are available in the following network technologies.
Multiprotocol Label Switching (MPLS) provides eight QoS classes
Frame Relay
X.25
Some DSL modems
Asynchronous transfer mode (ATM)
Ethernet supporting IEEE 802.1Q with Audio Video Bridging and Time-Sensitive Networking
Wi-Fi supporting IEEE 802.11e
HomePNA home networking over coax and phone wires
The G.hn home networking standard provides QoS by means of contention-free transmission opportunities (CFTXOPs) which are allocated to flows which require QoS and which have negotiated a contract with the network controller. G.hn also supports non-QoS operation by means of contention-based time slots.
End-to-end quality of service
End-to-end quality of service can require a method of coordinating resource allocation between one autonomous system and another. The Internet Engineering Task Force (IETF) defined the Resource Reservation Protocol (RSVP) for bandwidth reservation as a proposed standard in 1997. RSVP is an end-to-end bandwidth reservation and admission control protocol. RSVP was not widely adopted due to scalability limitations. The more scalable traffic engineering version, RSVP-TE, is used in many networks to establish traffic-engineered Multiprotocol Label Switching (MPLS) label-switched paths. The IETF also defined Next Steps in Signaling (NSIS) with QoS signalling as a target. NSIS is a development and simplification of RSVP.
Research consortia such as "end-to-end quality of service support over heterogeneous networks" (EuQoS, from 2004 through 2007) and fora such as the IPsphere Forum developed more mechanisms for handshaking QoS invocation from one domain to the next. IPsphere defined the Service Structuring Stratum (SSS) signaling bus in order to establish, invoke and (attempt to) assure network services. EuQoS conducted experiments to integrate Session Initiation Protocol, Next Steps in Signaling and IPsphere's SSS with an estimated cost of about 15.6 million Euro and published a book.
A research project Multi Service Access Everywhere (MUSE) defined another QoS concept in a first phase from January 2004 through February 2006, and a second phase from January 2006 through 2007. Another research project named PlaNetS was proposed for European funding circa 2005.
A broader European project called "Architecture and design for the future Internet" known as 4WARD had a budget estimated at 23.4 million Euro and was funded from January 2008 through June 2010.
It included a "Quality of Service Theme" and published a book. Another European project, called WIDENS (Wireless Deployable Network System), proposed a bandwidth reservation approach for mobile wireless multirate adhoc networks.
Limitations
Strong cryptography network protocols such as Secure Sockets Layer, I2P, and virtual private networks obscure the data transferred using them. As all electronic commerce on the Internet requires the use of such strong cryptography protocols, unilaterally downgrading the performance of encrypted traffic creates an unacceptable hazard for customers. Yet, encrypted traffic is otherwise unable to undergo deep packet inspection for QoS.
Protocols like ICA and RDP may encapsulate other traffic (e.g. printing, video streaming) with varying requirements that can make optimization difficult.
The Internet2 project found, in 2001, that the QoS protocols were probably not deployable inside its Abilene Network with equipment available at that time. The group predicted that “logistical, financial, and organizational barriers will block the way toward any bandwidth guarantees” by protocol modifications aimed at QoS.
They believed that the economics would encourage network providers to deliberately erode the quality of best effort traffic as a way to push customers to higher priced QoS services. Instead they proposed over-provisioning of capacity as more cost-effective at the time.
The Abilene network study was the basis for the testimony of Gary Bachula to the US Senate Commerce Committee's hearing on Network Neutrality in early 2006. He expressed the opinion that adding more bandwidth was more effective than any of the various schemes for accomplishing QoS they examined. Bachula's testimony has been cited by proponents of a law banning quality of service as proof that no legitimate purpose is served by such an offering. This argument is dependent on the assumption that over-provisioning isn't a form of QoS and that it is always possible. Cost and other factors affect the ability of carriers to build and maintain permanently over-provisioned networks.
Mobile (cellular) QoS
Mobile cellular service providers may offer mobile QoS to customers just as wired public switched telephone network providers and Internet service providers may offer QoS. QoS mechanisms are always provided for circuit-switched services, and are essential for inelastic services, for example streaming multimedia.
Mobility adds complications to QoS mechanisms. A phone call or other session may be interrupted after a handover if the new base station is overloaded. Unpredictable handovers make it impossible to give an absolute QoS guarantee during the session initiation phase.
Standards
Quality of service in the field of telephony was first defined in 1994 in ITU-T Recommendation E.800. This definition is very broad, listing six primary components: Support, Operability, Accessibility, Retainability, Integrity and Security. In 1998 the ITU published a document discussing QoS in the field of data networking. X.641 offers a means of developing or enhancing standards related to QoS and provides concepts and terminology that should assist in maintaining the consistency of related standards.
Some QoS-related IETF Request for Comments (RFC)s are , and ; both these are discussed above. The IETF has also published two RFCs giving background on QoS: , and .
The IETF has also published as an informative or best practices document about the practical aspects of designing a QoS solution for a DiffServ network. The document tries to identify applications commonly run over an IP network, groups them into traffic classes, studies the treatment required by these classes from the network, and suggests which of the QoS mechanisms commonly available in routers can be used to implement those treatments.
See also
Application service architecture
BSSGP
Bufferbloat
Class of service
Cross-layer interaction and service mapping
LEDBAT
Low-latency queuing (LLQ)
Micro Transport Protocol
Net neutrality
QPPB
Series of tubes
Subjective video quality
Tiered Internet service
Traffic classification
Notes
References
Further reading
Deploying IP and MPLS QoS for Multiservice Networks: Theory and Practice by John Evans, Clarence Filsfils (Morgan Kaufmann, 2007, )
Lelli, F. Maron, G. Orlando, S. Client Side Estimation of a Remote Service Execution. 15th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems, 2007. MASCOTS '07.
QoS Over Heterogeneous Networks by Mario Marchese (Wiley, 2007, )
External links
Cisco's Internetworking Technology Handbook
Internet architecture
Network performance
Streaming
Services marketing
Telecommunications engineering
Teletraffic |
10259881 | https://en.wikipedia.org/wiki/List%20of%20libraries%20in%20Pakistan | List of libraries in Pakistan | This is a list of major libraries in Pakistan.
Sindh
Karachi
Al-Firdous Baldia Public Library, Baldia Town
Al-Huda Library, Nazimabad
Allama Iqbal Library
Allama Shabir Ahmad Usmani Library, Nazimabad
Baba-e-Urdu Kutubkhana
Bedil Library, Sharfabad Gulshan Town
Board of Intermediate Karachi Library, North Nazimabad
Central Library, Korangi No.5, Korangi
Children Library, Nazimabad
Community Center, Gulshan Town
Defence Central Library DHA
Dr. Mahmood Hussain Central Library, University of Karachi, Gulshan Town
Edhi Library, Gulshan-e-Iqbal, WS-9/12, Main University Road, Gulshan Town
Faiz-e-Aam Library, Lyari Town
Faran Club Library, Gulshan Town
Ghalib Library, Nazimabad
Ghulam Husain Khaliq Dina Hall Library, Saddar Town
Hashim Gazder Library, Jamila Street, Ranchore Lines
Hasrat Mohani Library, Liaquatabad No.9, Liaquatabad
Hungooraabad Library, Hungooraabad, Lyari Town
Ibrahim Ali Bhai Auditorium & Library
Iqbal Shaheed Library, Behar Colony, Lyari Town
Iqra Library, New Kumhar Wara, Lyari Town
Jehangir Park Reading Room Library, Jehangir Park, Saddar Town
Karachi Metropolitan Corporation Library, Shahrah-e-Liaquat, Saddar Town
KMC Children Library, near Hadi Market, Nazimabad
KMC Library
Kutub Khana Khas Anjuman-e-Taraqqi-e-Urdu Pakistan, Baba-e-Urdu Road, Saddar Town
Liaquat Hall Library, Abdullah Haroon Road, Saddar Town
Liaquat Memorial Library, Stadium Road, Gulshan Town
Lyari Municipal Library, Old Salater House, Lyari Town
Lyari Text Book Library, Chakiwara, Lyari Town
Main Library, Aga Khan University, Stadium Road, Gulshan Town
Main Library, Hamdard University, Madina-tal-Hikmat, Muhammad Bin Qasim Avenue, Karachi, Sindh
Main Library, NED University of Engineering & Technology, Karachi, Sindh
Mansoora Library, Dastagir Society, Federal B. Area
Moosa Lane Reading Room Library, Moosa Lane, Lyari Town
Moulana Hasrat Mohani Library, Usmanabad, Lyari Town
Mujahid Park Library, Rexer Line, Lyari Town
Nasir-Arif Hussain Memorial Library & Research Center, Gulberg Town
National Book Foundation Library
Nawa Lane Library, Gabol Park, Lyari Town
Noorani Welfare Library, Ranchore Line, Lyari Town
Pakistan Arab Cultural Association Library, Iftikhar Chambers, Altaf Road, P.O. Box 5752, Karachi, Sindh
Pakistan National Centre Library
Rangoon Wala Hall and Community Centre Library, Dhoraji Colony
Sardar Abdul-Rab Nishtar Library, near Lyari General Hospital, Lyari Town
Satellite Library, Sango Lane, Lyari Town
Shia Imami Ismaili community libraries in different areas of Karachi; the Kharadar Library and Reading Room at the Kharadar Ismaili Jamat Khana, established in 1908, was the first library in Karachi and also serves as an academic library
Sheikh Mufeed Library, Islamic Research Center, Allama Ibne Hassan Jarchvi Road, Federal B. Area
Shohada-e-Pakistan Library, Usmanabad, Lyari Town
Sindh Archives, Clifton Town
Super Market Library, Super Market, Liaquatabad
Syed Mehmood Shah Library, Lee Market, Lyari Town
Taimuriya Library North Nazimabad
Umer Lane Library, Umer Lane, Lyari Town
Jacobabad
Abdul Karim Gadai Municipal Library, Municipal Committee Building, Jacobabad, Sindh
Khyber Pakhtunkhwa
Peshawar
Main Library, University of Agriculture, Peshawar
Main Library, University of Engineering & Technology, Peshawar
Main Library, University of Peshawar, Peshawar
Municipal Corporation Library, Peshawar City, Peshawar
Nishtar Municipal Public Library, Town Hall, G. T. Road, Peshawar
Central Sciences Degree College Library, Peshawar
Islamia College University Central Library Peshawar
Mardan
Cantonment Library, Cantonment Board, Mardan
Mardan Public Library, Directorate of Archives, Shami Road, Mardan
Abbottabad
Main Library, Pakistan Military Academy, Abbottabad
Municipal Committee Library, Abbottabad
Municipal Library, Company Garden, Abbottabad
Dera Ismail Khan
Main Library, Gomal University Dera Ismail Khan
Municipal Central Library, Town Hall, Dera Ismail Khan
Kohat
Jinnah Municipal Library, Shah Faisal Gate, Kohat
Topi
Main Library, Ghulam Ishaq Khan Institute of Engineering & Technology, Topi District-Swabi
Risalpur
Main Library, Military College of Engineering, Risalpur
Mansehra
Municipal Committee Library, Mansehra
Mingora
Municipal Public Library, Mingora
Swat Public Library, Swat (http://www.kppls.pk/)
Bannu
Municipal Public Library, Bannu
Public Libraries Network
The main public library and archive in Khyber Pakhtunkhwa is the Directorate of Archives & Libraries, Jail Road, Peshawar. It has developed district or branch libraries in different districts of Khyber Pakhtunkhwa, which are intended to form a single large network of public libraries and archives. Its district libraries are:
Timergara Public Library
Nowshera Public Library
Peshawar Public Library
Mardan Public Library
Haripur Public Library
Abbottabad Public Library
Bannu Public Library
Laki Marwat Public Library
DI Khan Public Library
Kohat Public Library
Swat Public Library
Charsada Public Library
Ghazi Public Library
Swabi Public Library
Khushal Khan Khattak memorial library, Akora Khattak
Chitral Public Library
Punjab
Mailsi
Masood Jhandir Research Library
Murree
Iqbal Municipal Library
Chiniot
Omar Hayat Mahal Library
Bahawalpur
Central Library Bahawalpur
e-Library, Bahawalpur (digital library established by Punjab Information Technology Board and Sports Board Punjab)
Faisalabad
Following are three public libraries in Faisalabad city:
Allama Iqbal Library
Municipal Library
e-Library, Faisalabad (digital library established by Punjab Information Technology Board and Sports Board Punjab)
Toba Tek Singh
Govt. Public Library, Iqbal Bazaar, Kamalia.
e-Library, Toba Tek Singh (digital library established by Punjab Information Technology Board and Sports Board Punjab)
Lahore
Atomic Energy Minerals Centre Library
Babar Ali Library, Aitchison College
Dr Baqir's Library
Dyal Singh Trust Library, established in Lahore in 1908 in pursuance of the will of the Sardar Dyal Singh Majithia; first setup in the Exchange Building, the residence of Sardar Dyal Singh
The Ewing Memorial Library, built in 1943 and named for Dr. Sir J.C.R. Ewing, the second principal of the college; one of the oldest and best college libraries in Lahore; now gradually transforming itself into a state-of-the-art university library
Government College Library, Government College University
Islamia College Library, Islamia College
Lahore University of Management Sciences Library, Lahore University of Management Sciences
Library Information Services CIIT Lahore
National Library of Engineering Sciences, plays a vital role in achieving the objectives of the institution like study & teaching, research & extension services, and dissemination of information; fully air conditioned with a seating capacity of about 400 readers at its different floors
Pakistan Administrative Staff College Library
People's Bank Library
Provincial Assembly of the Punjab Library
Punjab Public Library
Punjab University Library, Punjab University
Quaid-e-Azam Library, a highly detailed model of a newly constructed library, named after Quaid-e-Azam Muhammad Ali Jinnah; located in the most famous gardens of Lahore, named "Lawrence Gardens" by the British.
e-Library, Lahore (digital library established by Punjab Information Technology Board and Sports Board Punjab)
Sahiwal
Government Jinnah Public Library
e-Library, Sahiwal (digital library established by Punjab Information Technology Board and Sports Board Punjab)
Government Urdu Addab Library (Urdu Literature Library)
Sargodha
Jinnah Hall Library
Central Library, University of Sargodha
e-Library, Sargodha (digital library established by Punjab Information Technology Board and Sports Board Punjab)
Multan
Garrison Public Library
e-Library, Multan (digital library established by Punjab Information Technology Board and Sports Board Punjab)
Balochistan
Quetta
Buitems Library, Quetta
Cantonment Public Library, Cantonment Board, Shahrah-e-Aziz Bhatti, Quetta
Quaid-e-Azam Library, Quetta
Main Library, University of Balochistan, Quetta
Mastung
Library of Cadet College, Mastung
Municipal Public Library, Mastung
Sarawan digital Library, Mastung
Lasbela
LUAWMS Library, Lasbela
Winder
Sassui Library, Winder
Turbat
Molana Abdul Haq Library, Turbat Kech
Islamabad Capital Territory
Following are some of the public libraries in Islamabad Capital Territory:
PHRC Central Library
National Library of Pakistan
Islamabad Public Library
Islamabad Community Library, Sector F-11 Markaz
Islamabad Community Library, Sector G-7, Near Iqbal Hall
Islamabad Community Library, Street - 9, Sector G-8/2
Islamabad Community Library, Sector G-11 Markaz
Islamabad Community Library, Sector I-8 Markaz
Islamabad Community Library, Street - 57, Sector I-10/1
See also
List of libraries
External links
Pakistan Library Network, a list of libraries maintained by PLANWEL
Libraries in Karachi
Library Information Services, CIIT Lahore
References
Libraries
Pakistan |
4780628 | https://en.wikipedia.org/wiki/Change%20management%20%28engineering%29 | Change management (engineering) | The change request management process in systems engineering is the process of requesting, determining attainability, planning, implementing, and evaluating of changes to a system. Its main goals are to support the processing and traceability of changes to an interconnected set of factors.
Introduction
There is considerable overlap and confusion between change request management, change control and configuration management. The definition below does not yet integrate these areas.
Change request management has been embraced for its ability to deliver benefits by improving the affected system and thereby satisfying "customer needs," but has also been criticized for its potential to confuse and needlessly complicate change administration. In some cases, notably in the Information Technology domain, more funds and work are put into system maintenance (and change request management) than into the initial creation of a system. Typical investment by organizations during initial implementation of large ERP systems is 15 to 20 percent of overall budget.
In the same vein, Hinley describes two of Lehman's laws of software evolution:
The law of continuing change: Systems that are used must change, or else automatically become less useful.
The law of increasing complexity: Through changes, the structure of a system becomes ever more complex, and more resources are required to simplify it.
Change request management is also of great importance in the field of manufacturing, which is confronted with many changes due to increasing and worldwide competition, technological advances and demanding customers. Because many systems tend to change and evolve as they are used, the problems of these industries are experienced to some degree in many others.
Notes: In the process below, it is arguable that the change committee should be responsible not only for accept/reject decisions, but also prioritization, which influences how change requests are batched for processing.
The process and its deliverables
For the description of the change request management process, the meta-modeling technique is used. Figure 1 depicts the process-data diagram, which is explained in this section.
Activities
There are six main activities, which jointly form the change request management process. They are: Identify potential change, Analyze change request, Evaluate change, Plan change, Implement change and Review and close change. These activities are executed by four different roles, which are discussed in Table 1. The activities (or their sub-activities, if applicable) themselves are described in Table 2.
Deliverables
Besides activities, the process-data diagram (Figure 1) also shows the deliverables of each activity, i.e. the data. These deliverables or concepts are described in Table 3; in this context, the most important concepts are: CHANGE REQUEST and CHANGE LOG ENTRY.
A few concepts are defined by the author (i.e. they lack a reference), because either no (good) definitions could be found or they are the obvious result of an activity. These concepts are marked with an asterisk ('*'). Properties of concepts have been left out of the model, because most of them are trivial and the diagram could otherwise quickly become too complex. Furthermore, some concepts (e.g. CHANGE REQUEST, SYSTEM RELEASE) lend themselves to the versioning approach as proposed by Weerd, but this has also been left out due to diagram complexity constraints.
Besides just 'changes', one can also distinguish deviations and waivers. A deviation is an authorization (or a request for one) to depart from a requirement of an item prior to its creation. A waiver is essentially the same, but applies during or after creation of the item. These two approaches can be viewed as minimalistic change request management (i.e. no real solution to the problem at hand).
Examples
A good example of the change request management process in action can be found in software development. Often users report bugs or desire new functionality from their software programs, which leads to a change request. The product software company then looks into the technical and economical feasibility of implementing this change and consequently it decides whether the change will actually be realized. If that indeed is the case, the change has to be planned, for example through the usage of function points. The actual execution of the change leads to the creation and/or alteration of software code and when this change is propagated it probably causes other code fragments to change as well. After the initial test results seem satisfactory, the documentation can be brought up to date and be released, together with the software. Finally, the project manager verifies the change and closes this entry in the change log.
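The lifecycle in this example can be made concrete with a small data structure. The following Python fragment is purely illustrative and is not drawn from any specification cited in this article; the ChangeRequest and Status names and the transition method are hypothetical stand-ins for the CHANGE REQUEST and CHANGE LOG ENTRY concepts of the process-data diagram.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Status(Enum):
    SUBMITTED = "submitted"
    ANALYZED = "analyzed"
    APPROVED = "approved"
    REJECTED = "rejected"
    IMPLEMENTED = "implemented"
    CLOSED = "closed"


@dataclass
class ChangeRequest:
    """One change request as it moves through the six activities."""
    identifier: str
    description: str
    requester: str
    submitted_on: date
    status: Status = Status.SUBMITTED
    history: list = field(default_factory=list)  # plays the role of the change log entries

    def transition(self, new_status: Status, note: str) -> None:
        """Record a state change as a change log entry."""
        self.history.append((date.today(), self.status, new_status, note))
        self.status = new_status


# Example lifecycle mirroring the activities described above.
cr = ChangeRequest("CR-042", "Add export-to-CSV feature", "end user", date(2006, 4, 13))
cr.transition(Status.ANALYZED, "Technical and economic feasibility checked")
cr.transition(Status.APPROVED, "Change committee accepted and prioritized the request")
cr.transition(Status.IMPLEMENTED, "Code changed; effort planned with function points")
cr.transition(Status.CLOSED, "Project manager verified the change and closed the log entry")
```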
Another typical area for change request management in the way it is treated here, is the manufacturing domain. Take for instance the design and production of a car. If for example the vehicle’s air bags are found to automatically fill with air after driving long distances, this will without a doubt lead to customer complaints (or hopefully problem reports during the testing phase). In turn, these produce a change request (see Figure 2 on the right), which will probably justify a change. Nevertheless, a – most likely simplistic – cost and benefit analysis has to be done, after which the change request can be approved. Following an analysis of the impact on the car design and production schedules, the planning for the implementation of the change can be created. According to this planning, the change can actually be realized, after which the new version of the car is hopefully thoroughly tested before it is released to the public.
In industrial plants
Since complex processes can be very sensitive to even small changes, proper management of change to industrial facilities and processes is recognized as critical to safety. In the US, OSHA has regulations that govern how changes are to be made and documented. The main requirement is that a thorough review of a proposed change be performed by a multi-disciplinary team to ensure that as many possible viewpoints are used to minimize the chances of missing a hazard. In this context, change request management is known as Management of Change, or MOC. It is just one of many components of Process Safety Management, section 1910.119(l).1
See also
Change control
Change request management
Engineering Change Order, Change request
PRINCE2
ITIL
Versioning
Release management
Software release life cycle
Application lifecycle management
Systems engineering
Issue tracking system
References
Further reading
Crnković I., Asklund, U. & Persson-Dahlqvist, A. (2003). Implementing and Integrating Product Data Management and Software Configuration Management. London: Artech House.
Dennis, A., Wixom, B.H. & Tegarden, D. (2002). System Analysis & Design: An Object-Oriented Approach with UML. Hoboken, New York: John Wiley & Sons, Inc.
Georgetown University (n.d.). Data Warehouse: Glossary. Retrieved April 13, 2006 from: https://web.archive.org/web/20060423164505/http://uis.georgetown.edu/departments/eets/dw/GLOSSARY0816.html.
Hinley, D.S. (1996). Software evolution management: a process-oriented perspective. Information and Software Technology, 38, 723-730.
Huang, G.H. & Mak, K.L. (1999). Current practices of engineering change management in UK manufacturing industries. International Journal of Operations & Production Management, 19(1), 21-37.
IEEE (1991). Standard Glossary of Software Engineering Terminology (ANSI). The Institute of Electrical and Electronics Engineers Inc. Retrieved April 13, 2006 from: http://www.ee.oulu.fi/research/ouspg/sage/glossary/#reference_6 .
Mäkäräinen, M. (2000). Software change management processes in the development of embedded software. PhD dissertation. Espoo: VTT Publications. Available online: http://www.vtt.fi/inf/pdf/publications/2000/P416.pdf.
NASA (2005). NASA IV&V Facility Metrics Data Program - Glossary and Definitions. Retrieved March 4, 2006 from: https://web.archive.org/web/20060307232014/http://mdp.ivv.nasa.gov/mdp_glossary.html.
Pennsylvania State University Libraries (2004). CCL Manual: Glossary of Terms and Acronyms. Retrieved April 13, 2006 from: https://web.archive.org/web/20060615021317/http://www.libraries.psu.edu/tas/ cataloging/ccl/glossary.htm.
Princeton University (2003). WordNet 2.0. Retrieved April 13, 2006 from: http://dictionary.reference.com/search?q=release.
Rajlich, V. (1999). Software Change and Evolution. In Pavelka, J., Tel, G. & Bartošek, M. (Eds.), SOFSEM'99, Lecture Notes in Computer Science 1725, 189-202.
Rigby, K. (2003). Managing Standards: Glossary of Terms. Retrieved April 1, 2006 from: https://web.archive.org/web/20060412081603/http://sparc.airtime.co.uk/users/wysywig/gloss.htm.
Scott, J.A. & Nisse, D. (2001). Software Configuration Management, Guide to Software Engineering Body of Knowledge, Chapter 7, IEEE Computer Society Press.
Vogl, G. (2004). Management Information Systems: Glossary of Terms. Retrieved April 13, 2006 from Uganda Martyrs University website: https://web.archive.org/web/20060411160145/http://www.321site.com/greg/courses/mis1/glossary.htm.
Weerd, I. van de (2006). Meta-modeling Technique: Draft for the course Method Engineering 05/06. Retrieved March 1, 2006 from: https://bscw.cs.uu.nl/bscw/bscw.cgi/d1009019/Instructions for the process-data diagram.pdf [restricted access].
Change management
Systems engineering |
40564233 | https://en.wikipedia.org/wiki/MIPI%20Debug%20Architecture | MIPI Debug Architecture | MIPI Alliance Debug Architecture provides a standardized infrastructure for debugging deeply embedded systems in the mobile and mobile-influenced space. The MIPI Alliance MIPI Debug Working Group has released a portfolio of specifications; their objective is to provide standard debug protocols and standard interfaces from a system on a chip (SoC) to the debug tool. The whitepaper Architecture Overview for Debug summarizes all the efforts. In recent years, the group focused on specifying protocols that improve the visibility of the internal operations of deeply embedded systems, standardizing debug solutions via the functional interfaces of form factor devices, and specifying the use of I3C as debugging bus.
The term "debug"
The term "debug" encompasses the various methods used to detect, triage, trace, and potentially eliminate mistakes, or bugs, in hardware and software. Debug includes control/configure methods, stop/step mode debugging, and various forms of tracing.
Control/configure methods
Debug can be used to control and configure components, including embedded systems, of a given target system. Standard functions include setting up hardware breakpoints, preparing and configuring the trace system, and examining system states.
Stop/step mode debugging
In stop/step mode debugging, the core/microcontroller is stopped through the use of breakpoints and then "single-stepped" through the code by executing instructions one at a time. If the other cores/microcontrollers of the SoC are stopped synchronously, the overall state of the system can be examined. Stop/step mode debugging includes control/configure techniques, run control of a core/microcontroller, start/stop synchronization with other cores, memory and register access, and additional debug features such as performance counters and run-time memory access.
Tracing
Traces allow an in-depth analysis of the behavior and the timing characteristics of an embedded system. The following traces are typical:
A "core trace" provides full visibility of program execution on an embedded core. Trace data are created for the instruction execution sequence (sometimes referred to as an instruction trace) and data transfers (sometimes referred to as a data trace). An SoC may generate several core traces.
A "bus trace" provides complete visibility of the data transfers across a specific bus.
A "system trace" provides visibility of various events/states inside the embedded system. Trace data can be generated by instrument application code and by hardware modules within the SoC. An SoC may generate several system traces.
Visibility of SoC-internal operations
Tracing is the tool of choice to monitor and analyze what is going on in a complex SoC. There are several well established non-MIPI core-trace and bus-trace standards for the embedded market. Thus, there was no need for the MIPI Debug Working Group to specify new ones. But no standard existed for a "system trace" when the Debug Working Group published its first version of the MIPI System Trace Protocol (MIPI STP) in 2006.
MIPI System Software Trace (MIPI SyS-T)
The generation of system trace data from the software is typically done by inserting additional function calls, which produce diagnostic information valuable for the debug process. This debug technique is called instrumentation. Examples are: printf-style string generating functions, value information, assertions, etc. The purpose of MIPI System Software Trace (MIPI SyS-T) is to define a reusable, general-purpose data protocol and instrumentation API for debugging. The specification defines message formats that allow a trace-analysis tool to decode the debug messages, either into human-readable text or to signals optimized for automated analysis.
Since verbose textual messages stress bandwidth limits for debugging, so-called "catalog messages" are provided. Catalog messages are compact binary messages that replace strings with numeric values. The translation from the numeric value to a message string is done by the trace analysis tool, with the help of collateral XML information. This information is provided during the software-build process using an XML schema that is part of the specification as well.
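The catalog-message idea can be sketched roughly in Python. This is not the actual SyS-T wire format or instrumentation API; the CATALOG dictionary, emit_catalog_message and decode_catalog_message are assumptions made only for illustration, and the XML collateral of the real specification is reduced here to a hard-coded mapping.

```python
import struct

# Hypothetical catalog: in SyS-T proper, this mapping would be carried as XML
# collateral produced during the software build, not hard-coded in the tool.
CATALOG = {
    0x0001: "Battery level low: {}%",
    0x0002: "Sensor {} calibration done",
}

def emit_catalog_message(msg_id: int, *args: int) -> bytes:
    """Target side: encode a compact binary message (ID plus raw argument words)."""
    return struct.pack("<H", msg_id) + b"".join(struct.pack("<I", a) for a in args)

def decode_catalog_message(payload: bytes) -> str:
    """Tool side: resolve the numeric ID back to text using the collateral data."""
    (msg_id,) = struct.unpack_from("<H", payload, 0)
    args = [struct.unpack_from("<I", payload, 2 + 4 * i)[0]
            for i in range((len(payload) - 2) // 4)]
    return CATALOG[msg_id].format(*args)

packet = emit_catalog_message(0x0001, 12)   # 6 bytes instead of a full text string
print(decode_catalog_message(packet))       # "Battery level low: 12%"
```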
The SyS-T data protocol is designed to work efficiently on top of lower-level transport links such as those defined by the MIPI System Trace Protocol. SyS-T protocol features such as timestamping or data-integrity checksums can be disabled if the transport link already provides such capabilities. The use of other transport links—such as UART, USB, or TCP/IP—is also possible.
The MIPI Debug Working Group will provide an open-source reference implementation for the SyS-T instrumentation API, a SyS-T message pretty printer, and a tool to generate the XML collateral data as soon as the Specification for System Software Trace (SyS-T) is approved.
MIPI System Trace Protocol (MIPI STP)
The MIPI System Trace Protocol (MIPI STP) specifies a generic protocol that allows the merging of trace streams originated from anywhere in the SoC to a trace stream of 4-bit frames. It was intentionally designed to merge system trace information. The MIPI System Trace Protocol uses a channel/master topology that allows the trace receiving analysis tool to collate the individual trace streams for analysis and display. The protocol additionally provides the following features: stream synchronization and alignment, trigger markers, global timestamping, and multiple stream time synchronization.
The stream of STP packets produced by the System Trace Module can be directly saved to trace RAM, directly exported off-chip, or can be routed to a "trace wrapper protocol" (TWP) module to merge with further trace streams. ARM's CoreSight System Trace Macrocell, which is compliant with MIPI STP, is today an integral part of most multi-core chips used in the mobile space.
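The channel/master topology can be illustrated with a toy merge-and-collate sketch. The real protocol encodes trace data as 4-bit frames with defined packet types, timestamps and synchronization packets, none of which is reproduced here; the merge and demux functions and the record layout below are assumptions for illustration only.

```python
from collections import defaultdict

def merge(streams):
    """Interleave (master, channel, data) records from several trace sources
    into a single ordered stream, as an on-chip arbiter might."""
    merged = []
    while any(streams):
        for stream in streams:
            if stream:
                merged.append(stream.pop(0))
    return merged

def demux(merged):
    """Trace-tool side: collate the merged stream per (master, channel) origin."""
    per_origin = defaultdict(list)
    for master, channel, data in merged:
        per_origin[(master, channel)].append(data)
    return dict(per_origin)

cpu_trace = [(1, 0, "task switch"), (1, 0, "irq enter")]
modem_trace = [(2, 5, "attach request"), (2, 5, "attach complete")]
print(demux(merge([cpu_trace, modem_trace])))
```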
The last MIPI board-adopted version of Specification for System Trace Protocol (STP) is version 2.2 (February 2016).
MIPI Trace Wrapper Protocol (MIPI TWP)
The MIPI Trace Wrapper Protocol enables multiple trace streams to be merged into a single trace stream (byte streams). A unique ID is assigned to each trace stream by a wrapping protocol. The detection of byte/word boundaries is possible even if the data is transmitted as a stream of bits. Inert packets are used if a continuous export of trace data is required. MIPI Trace Wrapper Protocol is based on ARM's Trace Formatter Protocol specified for ARM CoreSight.
The last MIPI board-adopted version of Specification for Trace Wrapper Protocol (TWP) is version 1.1 (December 2014).
From dedicated to functional interfaces
Dedicated debug interfaces
In the early stages of product development, it is common to use development boards with dedicated and readily accessible debug interfaces for connecting the debug tools. SoCs employed in the mobile market rely on two debug technologies: stop-mode debugging via a scan chain and stop-mode debugging via memory-mapped debug registers.
The following non-MIPI debug standards are well established in the embedded market: IEEE 1149.1 (5-pin) and ARM Serial Wire Debug (2-pin), both using single-ended pins. Thus, there was no need for the MIPI Debug Working Group to specify a stop-mode debug protocol or to specify a debug interface.
Trace data generated and merged to a trace stream within the SoC can be streamed, via a dedicated unidirectional trace interface, off-chip to a trace analysis tool. The MIPI Debug Architecture provides specifications for both parallel and serial trace ports.
The MIPI Parallel Trace Interface (MIPI PTI) specifies how to pass the trace data to multiple data pins and a clock pin (single-ended). The specification includes signal names and functions, timing, and electrical constraints. The last MIPI board-adopted version of Specification for Parallel Trace Interface is version 2.0 (October 2011).
The MIPI High-Speed Trace Interface (MIPI HTI) specifies how to stream trace data over the physical layer of standard interfaces, such as PCI Express, DisplayPort, HDMI, or USB. The current version of the specification allows for one to six lanes. The specification includes:
The PHY layer, which represents the electrical and clocking characteristics of the serial lanes.
The LINK layer, which defines how the trace is packaged into the Aurora 8B/10B protocol.
A programmer's model for controlling the HTI and providing status information.
The HTI is a subset of the High Speed Serial Trace Port (HSSTP) specification defined by ARM. The last MIPI board-adopted version of Specification for High-speed Trace Interface is version 1.0 (July 2016).
Board developers and debug tool vendors benefit from standard debug connectors and standard pin mappings. The MIPI Recommendation for Debug and Trace Connectors recommends 10-/20-/34-pin board-level connectors (MIPI10/20/34). Seven different pin mappings that address a wide variety of debug scenarios have been specified. They include standard JTAG (IEEE 1149.1), cJTAG (IEEE 1149.7) and 4-bit parallel trace interfaces (mainly used for system traces), supplemented by the ARM-specific Serial Wire Debug (SWD) standard. MIPI10/20/34 debug connectors became the standard for ARM-based embedded designs.
Many embedded designs in the mobile space use high-speed parallel trace ports (up to 600 megabits per second per pin). MIPI recommends a 60-pin Samtec QSH/QTH connector named MIPI60, which allows JTAG/cJTAG for run control, up to 40 trace data signals, and up to 4 trace clocks. To minimize complexity, the recommendation defines four standard configurations with one, two, three, or four trace channels of varying width.
The last MIPI board-adopted version of MIPI Alliance Recommendation for Debug and Trace Connectors is version 1.1 (March 2011).
PHY and pin overlaid interfaces
Readily-accessible debug interfaces are not available in the product's final form factor. This hampers the identification of bugs and performance optimization in the end product. Since the debug logic is still present in the end product, an alternative access path is needed. An effective way is to equip a mobile terminal's standard interface with a multiplexer that allows for accessing the debug logic. The switching between the interface's essential function and the debug function can be initiated by the connected debug tool or by the mobile terminal's software. Standard debug tools can be used under the following conditions:
A switching protocol is implemented on the debug tool and in the mobile terminal.
A debug adapter exists that connects the debug tool to the standard interface. The debug adapter has to assist the switching protocol if required.
A mapping from the standard interface pins to the debug pins is specified.
The MIPI Narrow Interface for Debug and Test (MIPI NIDnT) covers debugging via the following standard interfaces: microSD, USB 2.0 Micro-B/-AB receptacle, USB Type-C receptacle, and DisplayPort. The last MIPI board-adopted version of Specification for Narrow Interface for Debug and Test (NIDnT) is version 1.2 (December 2017).
Network interfaces
Instead of re-using the pins, debugging can also be done via the protocol stack of a standard interface or network. Here debug traffic co-exists with the traffic of other applications using the same communication link. The MIPI Debug Working Group named this approach GigaBit Debug. Since no suitable debug protocol existed for this approach, the MIPI Debug Working Group specified its SneakPeek debug protocol.
MIPI SneakPeek Protocol (MIPI SPP) moved from a dedicated interface for basic debugging towards a protocol-driven interface:
It translates incoming command packets into read/write accesses to memory, memory-mapped debug registers, and other memory-mapped system resources.
It translates command results (status information and read data coming from memory, memory-mapped debug registers, and other memory-mapped system resources) to outgoing response packets.
Since SneakPeek accepts packets coming through an input buffer and delivers packets through an output buffer, it can be easily connected to any standard I/O or network.
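The translation of command packets into memory accesses can be sketched as follows. The packet layout here is invented for illustration (Python dictionaries rather than the binary format defined in the specification), and handle_command and the toy register map are assumptions, not the actual SneakPeek implementation.

```python
# Toy address space standing in for memory-mapped debug registers.
registers = {0x4000_0000: 0x0, 0x4000_0004: 0xCAFE}

def handle_command(packet: dict) -> dict:
    """Translate an incoming command packet into a read/write access and
    build the outgoing response packet (status plus any read data)."""
    addr = packet["address"]
    if addr not in registers:
        return {"status": "error", "detail": "unmapped address"}
    if packet["op"] == "read":
        return {"status": "ok", "data": registers[addr]}
    if packet["op"] == "write":
        registers[addr] = packet["data"]
        return {"status": "ok"}
    return {"status": "error", "detail": "unknown operation"}

# Packets would normally arrive through an input buffer over USB or TCP/IP.
print(handle_command({"op": "read", "address": 0x4000_0004}))
print(handle_command({"op": "write", "address": 0x4000_0000, "data": 1}))
```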
The MIPI Alliance Specification for SneakPeek Protocol describes the basic concepts, the required infrastructure, the packets, and the data flow. The last MIPI board-adopted version of Specification for SneakPeek Protocol (SPP) is version 1.0 (August 2015).
The MIPI Gigabit Debug Specification Family is providing details for mapping debug and trace protocols to standard I/Os or networks available in mobile terminals. These details include: endpoint addressing, link initialization and management, data packaging, data-flow management, and error detection and recovery. The last MIPI board-adopted version of Specification for Gigabit Debug for USB (MIPI GbD USB) is version 1.1 (March 2018). The last MIPI board-adopted version of Specification for Gigabit Debug for Internet Protocol Sockets (MIPI GbD IPS) is version 1.0 (July 2016).
I3C as debug bus
Current debug solutions, such as JTAG and ARM CoreSight, are statically structured, which makes for limited scalability regarding the accessibility of debug components/devices. MIPI Debug for I3C specifies a scalable, 2-pin, single-ended debug solution, which has the advantage of being available for the entire product lifetime. The I3C bus can be used as a debug bus only, or the bus can be shared between debug and its essential function as data acquisition bus for sensors. Debugging via I3C works in principle as follows:
The I3C bus is used for the physical transport, and the native I3C functionality is used to configure the bus and to hot-join new components.
The debug protocol is wrapped into dedicated I3C commands. Supported debug protocols are JTAG, ARM CoreSight, and MIPI SneakPeek Protocol.
References
External links
Debugging |
57773765 | https://en.wikipedia.org/wiki/Frank%20Heart | Frank Heart | Frank Evans Heart (May 15, 1929 – June 24, 2018) was an American computer engineer influential in computer networking. After nearly 15 years working for MIT Lincoln Laboratory, Heart worked for Bolt, Beranek and Newman from 1966 to 1994, during which he led a team that designed the first routing computer for the ARPANET, the predecessor to the Internet.
Background
Born to a Jewish family in The Bronx, New York City, Heart grew up in Yonkers. His father was an engineer at the Otis Elevator Company; his mother was an insurance agent.
Heart enrolled at the Massachusetts Institute of Technology (MIT) in 1947 as an electrical engineering major, entering a five-year master's degree program in which he alternated semesters between work and school. During one summer, he worked as a power transformer tester at a General Electric factory. In 1951, he enrolled in MIT's new computer programming course taught by Gordon Welchman. Taking the course led Heart to complete his undergraduate coursework early. During his graduate studies, Heart was a research assistant on Whirlwind I, a computer that controlled an aircraft-tracking radar defense system; Whirlwind would be transferred to the MIT Lincoln Laboratory, the on-campus military contractor. Heart received both bachelor's and master's degrees in electrical engineering in 1952.
Career
At Lincoln Lab, Heart remained as a staff member after completing his master's degree. Eventually, Heart became a team lead for projects in building real-time computing systems where measuring devices gathered data by phone lines connected to computers. Katie Hafner and Matthew Lyon wrote about Heart's management style in their 1996 book Where Wizards Stay Up Late: The Origins of the Internet.
In 1966, Heart left Lincoln Lab after being recruited by the research and development company Bolt, Beranek and Newman (BBN). In August 1968, ARPA issued a request for proposals to build the first Interface Message Processor (IMP), a computer that transmitted data between and interconnected the computers of a network, known today as a router; BBN bid for and won the contract. Jerry Elkind assigned Heart to be project manager.
With Severo Ornstein as hardware lead and Will Crowther the software lead, Heart's team of ten engineers used a rugged Honeywell DDP-516 minicomputer to engineer the IMP, whose special function was to switch data among the computers on the ARPANET. The team also invented remote diagnostics for computers by equipping IMPs with remote control capabilities. By September 6, 1968, Heart's team finalized the nearly 200-page, $100,000 IMP proposal, which was BBN's most expensive project to date. The first IMP was installed at the University of California, Los Angeles on September 1, Labor Day of 1969, and the second was installed at the Stanford Research Institute in Menlo Park, California, a month later on October 1.
Heart, wrote Hafner and Lyon, had become "a highly regarded and valuable project manager" at BBN, because his teams had members "committed to a common mission rather than a personal agenda" and "who took personal responsibility for what they did." Influenced by working at Lincoln Lab for Jay Forrester, the inventor of core memory, Heart prioritized reliability over cost, performance, or other factors, being "a most defensive driver when it came to engineering." He also preferred that his programming teams code working products rather than simulations or software tools. By 1971, Heart's IMP team had grown to 30 and transitioned to a lighter Honeywell 316 for the IMP.
In 1972, Heart appeared in the ARPANET documentary Computer Networks: The Heralds of Resource Sharing.
In 1989, the federal government decommissioned ARPANET. Most of the IMPs were disassembled; a few remain in museums and computer labs. However, many of Heart's core principles, such as reliability and error detection and correction, still exist within the Internet. Heart's final position at BBN was as president of the systems and technology division; he would retire from BBN by the summer of 1994.
In 2014, Heart was inducted into the Internet Hall of Fame.
Personal life
While working at Lincoln Laboratory, Heart met Jane Sundgaard, one of the company's first women programmers. They married in 1959 and had three children, and the family lived in Lincoln, Massachusetts, during Heart's career at BBN. Jane Heart died in 2014. On June 24, 2018, Frank Heart died of melanoma at age 89 in a retirement community in Lexington, Massachusetts.
Notes
Bibliography
External links
Internet Hall of Fame bio
A History of Computer Communications: 1968-1988
1929 births
2018 deaths
American computer scientists
ARPANET
Internet pioneers
20th-century American Jews
MIT School of Engineering alumni
MIT Lincoln Laboratory people
People from Yonkers, New York
Deaths from cancer in Massachusetts
Deaths from melanoma
Scientists from New York (state)
Scientists from the Bronx
People from Lincoln, Massachusetts
21st-century American Jews |
30605518 | https://en.wikipedia.org/wiki/1938%20USC%20Trojans%20football%20team | 1938 USC Trojans football team | The 1938 USC Trojans football team represented the University of Southern California (USC) in the 1938 college football season. In their 14th year under head coach Howard Jones, the Trojans compiled a 9–2 record (6–1 against conference opponents), finished in a tie for the Pacific Coast Conference championship, defeated Duke in the 1939 Rose Bowl, and outscored their opponents by a combined total of 172 to 65.
Schedule
References
USC
USC Trojans football seasons
Pac-12 Conference football champion seasons
Rose Bowl champion seasons
USC Trojans football |
31237675 | https://en.wikipedia.org/wiki/D.%20P.%20Agrawal%20%28academic%29 | D. P. Agrawal (academic) | D. P. Agarwal is the ex Chairman of the Union Public Service Commission of India. He was the predecessor of newly appointed chairman Rajni Razdan.
Biography
D. P. Agrawal received his M.Tech. and Ph.D. degrees from IIT Delhi in 1972 and 1978 respectively. Prior to this, he graduated in Mechanical Engineering with honors from Aligarh Muslim University (AMU) in 1970. He taught in the Department of Mechanical Engineering at IIT Delhi and was also the warden of Kumaon hostel. Professor Agrawal served as Chairman of the Union Public Service Commission (UPSC), New Delhi, from 16 August 2008, prior to which he had been a Member of the Commission since 31 October 2003. The Chairman is appointed by the President of India and is placed at 9A in the Government of India Warrant of precedence. He had wanted to join the civil services after graduating from AMU and sat the UPSC civil services examination. Before joining the UPSC, Professor Agrawal was the founder Director of the Indian Institute of Information Technology and Management (IIITM), Gwalior. Agrawal started his career as a faculty member in the Department of Mechanical Engineering, IIT Delhi, in 1975. In 1994, from being a Professor and Dean at IIT Delhi, he took over as Joint Educational Adviser (Technical) in the Ministry of Human Resource Development, Government of India. While at the Ministry, he initiated policies for the growth of quality technical education and contributed to the development of Centers for Excellence in higher technical and polytechnic education. Professor Agrawal was also Managing Director of Educational Consultants India Ltd. (EdCIL), a Government of India PSU, where he brought about major changes in the organizational work culture, including decentralized decision making and the transfer of functional responsibilities to lower executives. Professor Agrawal has been active in teaching and research throughout his career, has supervised over 100 theses including 20 Ph.D.s, and has published over 150 research papers, which have received awards in India and abroad. He has been an editorial member of national and international journals and a keynote speaker at various conference sessions. Professor Agrawal is a fellow or member of a number of professional institutions and societies in India and abroad. He has received many awards, including Eminent Engineer 2003 and Engineer of the Year 2006, conferred by the Institution of Engineers; Honorary Fellowship of ISTE (2006); an honorary D.Sc. degree from Jiwaji University, Gwalior (2009); and Honorary Fellowship of IETE (2011). He is a noted public speaker and has articulated issues of innovation, quality education, public services, and information technology for the development of the common man.
He held the post of Director of the Indian Institute of Information Technology and Management, Gwalior. In addition to the chairman, the UPSC has ten other members.
Reforms in UPSC
As Chairman, Agrawal implemented two major initiatives to reform the working of the UPSC:
The first was to revamp the UPSC recruitment process by introducing aptitude tests into the recruitment examinations.
The second was the introduction of e-governance initiatives at the UPSC, including an online application process for recruitment examinations.
References
Living people
IIT Delhi faculty
Chairmen of Union Public Service Commission
Year of birth missing (living people) |
3001700 | https://en.wikipedia.org/wiki/Crash%20reporter | Crash reporter | A crash reporter is usually a system software whose function is to identify reporting crash details and to alert when there are crashes, in production or on development / testing environments. Crash reports often include data such as stack traces, type of crash, trends and version of software. These reports help software developers- Web, SAAS, mobile apps and more, to diagnose and fix the underlying problem causing the crashes. Crash reports may contain sensitive information such as passwords, email addresses, and contact information, and so have become objects of interest for researchers in the field of computer security.
Implementing crash reporting tools as part of the development cycle has become standard practice, and crash reporting tools have become a commodity; many of them, such as Crashlytics, are offered for free.
Many large industry players in the software development ecosystem have entered this market. Companies such as Twitter and Google put considerable effort into encouraging software developers to use their APIs, knowing this will increase their revenues down the road (through advertisements and other mechanisms). Realizing that they must offer solutions for as many development issues as possible, lest competitors do so instead, they keep adding advanced features. Crash reporting tools are an important development capability that such companies include in their portfolios of solutions.
Many crash reporting tools specialize in mobile apps, and many of them are offered as SDKs.
macOS
In macOS there is a standard crash reporter, Crash Reporter.app, which sends the Unix crash logs to Apple for their engineers to review. The top text field of the window shows the crash log, while the bottom field is for user comments. Users may also copy and paste the log into their email client to send it to the application vendor. Crash Reporter.app has three main modes: display nothing on crash, display an "Application has crashed" dialog box, or display the Crash Report window.
Windows
Microsoft Windows includes a crash reporting service called Windows Error Reporting that prompts users to send crash reports to Microsoft for online analysis. The information goes to a central database run by Microsoft. It consists of diagnostic information that helps the company or development team responsible for the crash to debug and resolve the issue if they choose to do so. Crash reports for third party software are available to third party developers who have been granted access by Microsoft.
The system considers all parts of the debug and release process, such that targeted bug fixes can be applied through Windows Update. In other words, only people experiencing a particular type of crash can be offered the bug fix, thus limiting exposure to an issue.
According to Der Spiegel, the Microsoft crash reporter has been exploited by NSA's Tailored Access Operations (TAO) unit to hack into the computers of Mexico's Secretariat of Public Security. According to the same source, Microsoft crash reports are automatically harvested in NSA's XKeyscore database, in order to facilitate such operations.
CrashRpt
Another error reporting library for Windows is CrashRpt. CrashRpt library is a light-weight open source error handling framework for applications created in Microsoft Visual C++ and running under Windows. The library is distributed under New BSD License.
CrashRpt intercepts unhandled exceptions, creates a crash minidump file, builds a crash descriptor in XML format, presents an interface to allow user to review the crash report, and finally it compresses and sends the crash report to the software support team.
CrashRpt also provides a server-side command line tool for crash report analysis named crprober. The tool is able to read all received crash reports from a directory and generate a summary file in text format for each crash report. It also groups similar crash reports making it easier to determine the most popular problems. The crprober tool does not provide any graphical interface, so it is rather cryptic and difficult to use.
There is also an open-source server software named CrashFix Server that can store, organize and analyze crash reports sent by CrashRpt library. It can group similar crash reports, has a built-in bug tracker and can generate statistical reports. CrashFix server provides a web-based user interface making it possible for several project members to collaborate (upload debugging symbols, browse crash reports and associate bugs with crash reports).
Linux
ABRT
ABRT (Automated Bug Reporting Tool) is claimed to be distribution-independent, although it is deployed only on the Fedora and Red Hat Enterprise Linux distributions. ABRT intercepts core dumps or tracebacks from applications and (after user confirmation) sends bug reports to various bug-tracking systems, such as the Fedora Bugzilla.
Ubuntu Error tracker
Ubuntu hosts a public error tracker at errors.ubuntu.com which collects hundreds of thousands of error reports daily from millions of machines. If a program crashes on Ubuntu, a crash handler (such as Apport) will notify the user and offer to report the crash. If the user chooses to report the crash, the details (possibly including a core dump) will be uploaded to an Ubuntu server (daisy.ubuntu.com) for analysis. A core dump is automatically processed to create a stack trace and crash signature. The crash signature is used to classify subsequent crash reports caused by the same error.
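The bucketing step can be sketched roughly as follows; the hashing scheme shown is an assumption made for illustration and is not the signature algorithm actually used by the Ubuntu error tracker.

```python
import hashlib

def crash_signature(package: str, frames: list, depth: int = 5) -> str:
    """Derive a stable bucket key from a crash: reports whose top stack frames
    match land in the same bucket. Illustrative only."""
    top_of_stack = ":".join(frames[:depth])
    return f"{package}/{hashlib.sha1(top_of_stack.encode()).hexdigest()[:12]}"

report_a = ["g_str_hash", "g_hash_table_lookup", "update_cache", "main"]
report_b = ["g_str_hash", "g_hash_table_lookup", "update_cache", "main"]
assert crash_signature("mypkg", report_a) == crash_signature("mypkg", report_b)
```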
GNOME
Bug Buddy is the crash reporting tool used by the GNOME platform. When an application using the GNOME libraries crashes, Bug Buddy generates a stack trace using gdb and invites the user to submit the report to the GNOME bugzilla. The user can add comments and view the details of the crash report.
KDE
The crash reporting tool used by KDE is called Dr. Konqi. The user can also get a backtrace using gdb.
Mozilla
Talkback
Talkback (also known as the Quality Feedback Agent) was the crash reporter used by Mozilla software up to version 1.8.1 to report crashes of its products to a centralized server for aggregation or case-by-case analysis. Talkback is proprietary software licensed to the Mozilla Corporation by SupportSoft. If a Mozilla product (e.g. Mozilla Firefox, Mozilla Thunderbird) were to crash with Talkback enabled, the Talkback agent would appear, prompting the user to provide optional information regarding the crash. Talkback does not replace the native OS crash reporter which, if enabled, will appear along with the Talkback agent.
Talkback has been replaced by Breakpad in Firefox since version 3.
Breakpad
Breakpad (previously called Airbag) is an open-source replacement for Talkback. Developed by Google and Mozilla, it is used in current Mozilla products such as Firefox and Thunderbird. It is significant as the first open-source multi-platform crash reporting system.
Since 2007, Breakpad has been included in Firefox on Windows, Mac OS X, and Linux. Breakpad is typically paired with Socorro, which receives and classifies crashes from users.
Breakpad itself is only part of a crash reporting system, as it includes no reporting mechanism.
Crashpad
Crashpad is an open-source crash reporter used by Google in Chromium. It was developed as a replacement for Breakpad after an update in macOS 10.10 removed APIs used by Breakpad. Crashpad currently consists of a crash-reporting client and some related tools for macOS and Windows, and is considered substantially complete for those platforms. Crashpad became the crash reporter client for Chromium on macOS as of March 2015, and on Windows as of November 2015.
World of Warcraft
World of Warcraft is another program to use its own crash reporter, "Error Reporter". The error reporter may not detect crashes all the time; sometimes the OS crash reporter is invoked instead. Error Reporter has even been known to crash while reporting errors.
Mobile OSs
The Android and iOS operating systems also have built-in crash reporting functionality.
References
External links
How to create useful crash reports using KDE
KernelOops Linux kernel bug count site
ABRT - Automated Bug-Reporting Tool
A review of mobile crash reporting tools
Operating system technology
Software anomalies |
29090 | https://en.wikipedia.org/wiki/Software%20testing | Software testing | Software testing is the act of examining the artifacts and the behavior of the software under test by validation and verification. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but not necessarily limited to:
analyzing the product requirements for completeness and correctness in various contexts like industry perspective, business perspective, feasibility and viability of implementation, usability, performance, security, infrastructure considerations, etc.
reviewing the product architecture and the overall design of the product
working with product developers on improvement in coding techniques, design patterns, tests that can be written as part of code based on various techniques like boundary conditions, etc.
executing a program or application with the intent of examining behavior
reviewing the deployment infrastructure and associated scripts & automation
taking part in production activities by using monitoring and observability techniques
Software testing can provide objective, independent information about the quality of software and risk of its failure to users or sponsors.
Overview
Although software testing can determine the correctness of software under the assumption of some specific hypotheses (see the hierarchy of testing difficulty below), testing cannot identify all the failures within the software. Instead, it furnishes a criticism or comparison that compares the state and behavior of the product against test oracles — principles or mechanisms by which someone might recognize a problem. These oracles may include (but are not limited to) specifications, contracts, comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria.
A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. Testing cannot establish that a product functions properly under all conditions, but only that it does not function properly under specific conditions. The scope of software testing may include the examination of code as well as the execution of that code in various environments and conditions as well as examining the aspects of code: does it do what it is supposed to do and do what it needs to do. In the current culture of software development, a testing organization may be separate from the development team. There are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.
Every software product has a target audience. For example, the audience for video game software is completely different from banking software. Therefore, when an organization develops or otherwise invests in a software product, it can assess whether the software product will be acceptable to its end users, its target audience, its purchasers, and other stakeholders. Software testing assists in making this assessment.
Faults and failures
Software faults occur through the following process: A programmer makes an error (mistake), which results in a fault (defect, bug) in the software source code. If this fault is executed, in certain situations the system will produce wrong results, causing a failure.
Not all faults will necessarily result in failures. For example, faults in the dead code will never result in failures. A fault that did not reveal failures may result in a failure when the environment is changed. Examples of these changes in environment include the software being run on a new computer hardware platform, alterations in source data, or interacting with different software. A single fault may result in a wide range of failure symptoms.
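A minimal sketch of the distinction, assuming a hypothetical shipping_cost function: the fault sits in a branch that current callers never execute, so it produces no failure until the environment changes.

```python
def shipping_cost(weight_kg: float, express: bool = False) -> float:
    if express:               # fault: suppose the specification says the express rate is 7,
        return weight_kg * 5  # but no caller currently requests express delivery,
    return weight_kg * 2      # so the wrong rate never surfaces as a failure

# Once a caller does pass express=True (a change of environment), the latent
# fault yields a wrong result, i.e. an observable failure:
# shipping_cost(2.0, express=True) returns 10.0 instead of the specified 14.0.
```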
Not all software faults are caused by coding errors. One common source of expensive defects is requirement gaps, i.e., unrecognized requirements that result in errors of omission by the program designer. Requirement gaps can often be non-functional requirements such as testability, scalability, maintainability, performance, and security.
Input combinations and preconditions
A fundamental problem with software testing is that testing under all combinations of inputs and preconditions (initial state) is not feasible, even with a simple product. This means that the number of faults in a software product can be very large and defects that occur infrequently are difficult to find in testing and debugging. More significantly, non-functional dimensions of quality (how it is supposed to be versus what it is supposed to do) — usability, scalability, performance, compatibility, and reliability — can be highly subjective; something that constitutes sufficient value to one person may be intolerable to another.
Software developers can't test everything, but they can use combinatorial test design to identify the minimum number of tests needed to get the coverage they want. Combinatorial test design enables users to get greater test coverage with fewer tests. Whether they are looking for speed or test depth, they can use combinatorial test design methods to build structured variation into their test cases.
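A greedy all-pairs selection can be sketched in a few lines of Python; the pairwise_suite function and the example parameters are assumptions for illustration, and production tools use more sophisticated algorithms.

```python
from itertools import combinations, product

def pairwise_suite(parameters: dict) -> list:
    """Greedy all-pairs covering: repeatedly pick the full combination that covers
    the most still-uncovered value pairs. Not optimal, but it shows how
    combinatorial test design shrinks a test suite while keeping pair coverage."""
    names = list(parameters)
    all_combos = [dict(zip(names, values)) for values in product(*parameters.values())]

    def pairs_of(test):
        return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

    required = set().union(*(pairs_of(t) for t in all_combos))
    suite = []
    while required:
        best = max(all_combos, key=lambda t: len(pairs_of(t) & required))
        suite.append(best)
        required -= pairs_of(best)
    return suite

params = {"browser": ["Firefox", "Chrome"],
          "os": ["Windows", "Linux", "macOS"],
          "locale": ["en", "de"]}
print(len(list(product(*params.values()))), "exhaustive combinations")    # 12
print(len(pairwise_suite(params)), "tests still cover every value pair")  # 6 with this input
```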
Economics
A study conducted by NIST in 2002 reports that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided, if better software testing was performed.
Outsourcing software testing because of costs is very common, with China, the Philippines, and India being preferred destinations.
Roles
Software testing can be done by dedicated software testers; until the 1980s, the term "software tester" was used generally, but later it was also seen as a separate profession. Regarding the periods and the different goals in software testing, different roles have been established, such as test manager, test lead, test analyst, test designer, tester, automation developer, and test administrator. Software testing can also be performed by non-dedicated software testers.
History
Glenford J. Myers initially introduced the separation of debugging from testing in 1979. Although his attention was on breakage testing ("A successful test case is one that detects an as-yet undiscovered error."), it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification.
Testing approach
Static, dynamic, and passive testing
There are many approaches available in software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas executing programmed code with a given set of test cases is referred to as dynamic testing.
Static testing is often implicit, like proofreading, plus when programming tools/text editors check source code structure or compilers (pre-compilers) check syntax and data flow as static program analysis. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code; it is applied to discrete functions or modules. Typical techniques for these are either using stubs/drivers or execution from a debugger environment.
Static testing involves verification, whereas dynamic testing also involves validation.
Passive testing means verifying the system behavior without any interaction with the software product. Contrary to active testing, testers do not provide any test data but look at system logs and traces. They mine for patterns and specific behavior in order to make some kind of decisions. This is related to offline runtime verification and log analysis.
Exploratory approach
Exploratory testing is an approach to software testing that is concisely described as simultaneous learning, test design, and test execution. Cem Kaner, who coined the term in 1984, defines exploratory testing as "a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project."
The "box" approach
Software testing methods are traditionally divided into white- and black-box testing. These two approaches are used to describe the point of view that the tester takes when designing test cases. A hybrid approach called grey-box testing may also be applied to software testing methodology. With the concept of grey-box testing—which develops tests from specific design elements—gaining prominence, this "arbitrary distinction" between black- and white-box testing has faded somewhat.
White-box testing
White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) verifies the internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing, an internal perspective of the system (the source code), as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. This is analogous to testing nodes in a circuit, e.g., in-circuit testing (ICT).
While white-box testing can be applied at the unit, integration, and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system–level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.
Techniques used in white-box testing include:
API testing – testing of the application using public and private APIs (application programming interfaces)
Code coverage – creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once)
Fault injection methods – intentionally introducing faults to gauge the efficacy of testing strategies
Mutation testing methods
Static testing methods
Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested. Code coverage as a software metric can be reported as a percentage for:
Function coverage, which reports on functions executed
Statement coverage, which reports on the number of lines executed to complete the test
Decision coverage, which reports on whether both the True and the False branch of a given test has been executed
100% statement coverage ensures that every statement is executed at least once, but it does not guarantee that every branch or path (in terms of control flow) is exercised. Such coverage is helpful in ensuring correct functionality, but not sufficient, since the same code may process different inputs correctly or incorrectly. Pseudo-tested functions and methods are those that are covered but not specified (it is possible to remove their body without breaking any test case).
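A small sketch of the difference, assuming a hypothetical shipping_fee function: a single test can reach 100% statement coverage while leaving one outcome of the decision unexercised.

```python
def shipping_fee(order_total_cents: int, is_member: bool) -> int:
    fee = 500          # flat fee in cents
    if is_member:
        fee = 0        # members ship for free
    return fee

# This single test executes every statement, so statement coverage is 100% ...
assert shipping_fee(2000, is_member=True) == 0

# ... but only a second test exercises the False outcome of the decision,
# which decision (branch) coverage additionally requires.
assert shipping_fee(2000, is_member=False) == 500
```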
Black-box testing
Black-box testing (also known as functional testing) treats the software as a "black box," examining functionality without any knowledge of internal implementation, without seeing the source code. The testers are only aware of what the software is supposed to do, not how it does it. Black-box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing, use case testing, exploratory testing, and specification-based testing.
Specification-based testing aims to test the functionality of software according to the applicable requirements. This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior), either "is" or "is not" the same as the expected value specified in the test case.
Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs to derive test cases. These tests can be functional or non-functional, though usually functional.
Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.
One advantage of the black box technique is that no programming knowledge is required. Whatever biases the programmers may have had, the tester likely has a different set and may emphasize different areas of functionality. On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight." Because they do not examine the source code, there are situations when a tester writes many test cases to check something that could have been tested by only one test case or leaves some parts of the program untested.
This method of test can be applied to all levels of software testing: unit, integration, system, and acceptance. It typically comprises most if not all testing at higher levels, but can dominate unit testing as well.
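A sketch of specification-based test design, assuming a hypothetical is_eligible function specified to return True exactly for ages 18 through 65: the test cases are derived from equivalence partitions and boundary values of the specification, not from the function body, which is shown here only so that the fragment runs.

```python
def is_eligible(age: int) -> bool:
    """System under test; a black-box tester would not read this body."""
    return 18 <= age <= 65

# Equivalence partitioning: one representative per partition of the input domain.
assert is_eligible(10) is False   # below the valid range
assert is_eligible(40) is True    # inside the valid range
assert is_eligible(80) is False   # above the valid range

# Boundary value analysis: values at and just beyond each boundary.
assert is_eligible(17) is False
assert is_eligible(18) is True    # lower boundary
assert is_eligible(65) is True    # upper boundary
assert is_eligible(66) is False
```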
Component interface testing
Component interface testing is a variation of black-box testing, with the focus on the data values beyond just the related actions of a subsystem component. The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components, beyond full integration testing between those units. The data being passed can be considered as "message packets" and the range or data types can be checked, for data generated from one unit, and tested for validity before being passed into another unit. One option for interface testing is to keep a separate log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data passed between units for days or weeks. Tests can include checking the handling of some extreme data values while other interface variables are passed as normal values. Unusual data values in an interface can help explain unexpected performance in the next unit.
Visual testing
The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information she or he requires, and the information is expressed clearly.
At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing, therefore, requires the recording of the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-a-picture webcam and audio commentary from microphones.
Visual testing provides a number of advantages. The quality of communication is increased drastically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence she or he requires of a test failure and can instead focus on the cause of the fault and how it should be fixed.
Ad hoc testing and exploratory testing are important methodologies for checking software integrity, because they require less preparation time to implement, while the important bugs can be found quickly. In ad hoc testing, where testing takes place in an improvised, impromptu way, the ability of the tester(s) to base testing on documented methods and then improvise variations of those tests can result in more rigorous examination of defect fixes. However, unless strict documentation of the procedures is maintained, one of the limits of ad hoc testing is lack of repeatability.
Grey-box testing
Grey-box testing (American spelling: gray-box testing) involves having knowledge of internal data structures and algorithms for purposes of designing tests while executing those tests at the user, or black-box level. The tester will often have access to both "the source code and the executable binary." Grey-box testing may also include reverse engineering (using dynamic code analysis) to determine, for instance, boundary values or error messages. Manipulating input data and formatting output do not qualify as grey-box, as the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for the test.
By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities such as seeding a database. The tester can observe the state of the product being tested after performing certain actions such as executing SQL statements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios, based on limited information. This will particularly apply to data type handling, exception handling, and so on.
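A minimal sketch of this style, assuming a hypothetical register_user function and a known SQLite schema: the tester seeds an isolated in-memory database, drives the code through its public interface, and then queries the database directly to confirm the expected state change.

```python
import sqlite3
import unittest

def register_user(conn, name):
    """Hypothetical application code under test, exercised via its public API."""
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()

class GreyBoxRegistrationTest(unittest.TestCase):
    def setUp(self):
        # Grey-box knowledge: the tester knows the schema, so an isolated
        # in-memory database can be seeded before each test.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        self.conn.execute("INSERT INTO users (name) VALUES ('existing')")
        self.conn.commit()

    def test_registration_is_persisted(self):
        register_user(self.conn, "alice")            # exercise the public interface
        rows = self.conn.execute(                    # then inspect internal state directly
            "SELECT name FROM users ORDER BY id").fetchall()
        self.assertEqual([r[0] for r in rows], ["existing", "alice"])

if __name__ == "__main__":
    unittest.main()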
Testing levels
Broadly speaking, there are at least three levels of testing: unit testing, integration testing, and system testing. However, a fourth level, acceptance testing, may be included by developers. This may be in the form of operational acceptance testing or be simple end-user (beta) testing, testing to ensure the software meets functional expectations. Based on the ISTQB Certified Test Foundation Level syllabus, test levels include these four levels, and the fourth level is named acceptance testing. Tests are frequently grouped into one of these levels by where they are added in the software development process, or by the level of specificity of the test.
Unit testing
Unit testing refers to tests that verify the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.
These types of tests are usually written by developers as they work on code (white-box style), to ensure that the specific function is working as expected. One function might have multiple tests, to catch corner cases or other branches in the code. Unit testing alone cannot verify the functionality of a piece of software, but rather is used to ensure that the building blocks of the software work independently from each other.
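For example, a unit test for a single, hypothetical word_count function might cover the typical case plus the corner cases (empty string, whitespace-only input, mixed separators) that exercise its branches. The sketch below is illustrative only.

```python
import unittest

def word_count(text: str) -> int:
    """Unit under test (hypothetical): counts whitespace-separated words."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_typical_sentence(self):
        self.assertEqual(word_count("unit tests catch regressions"), 4)

    def test_corner_cases(self):
        # Corner cases exercise the behaviour a casual reading might miss.
        self.assertEqual(word_count(""), 0)
        self.assertEqual(word_count("   "), 0)
        self.assertEqual(word_count("one\nword\tper\nseparator"), 4)

if __name__ == "__main__":
    unittest.main()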
Unit testing is a software development process that involves a synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development life cycle. Unit testing aims to eliminate construction errors before code is promoted to additional testing; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development process.
Depending on the organization's expectations for software development, unit testing might include static code analysis, data-flow analysis, metrics analysis, peer code reviews, code coverage analysis and other software testing practices.
Integration testing
Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together ("big bang"). Normally the former is considered a better practice since it allows interface issues to be located more quickly and fixed.
Integration testing works to expose defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.
Integration tests usually involve a lot of code, and produce traces that are larger than those produced by unit tests. This has an impact on the ease of localizing the fault when an integration test fails. To overcome this issue, it has been proposed to automatically cut the large tests into smaller pieces to improve fault localization.
System testing
System testing tests a completely integrated system to verify that the system meets its requirements. For example, a system test might involve testing a login interface, then creating and editing an entry, plus sending or printing results, followed by summary processing or deletion (or archiving) of entries, then logoff.
Acceptance testing
Acceptance testing commonly includes the following four types:
User acceptance testing (UAT)
Operational acceptance testing (OAT)
Contractual and regulatory acceptance testing
Alpha and beta testing
UAT as well as alpha and beta testing are described in the next testing types section.
Operational acceptance testing is used to assess the operational readiness (pre-release) of a product, service or system as part of a quality management system. OAT is a common type of non-functional software testing, used mainly in software development and software maintenance projects. This type of testing focuses on the operational readiness of the system to be supported, or to become part of the production environment. Hence, it is also known as operational readiness testing (ORT) or operations readiness and assurance (OR&A) testing. Functional testing within OAT is limited to those tests that are required to verify the non-functional aspects of the system.
In addition, software testing should ensure that the system is portable and works as expected, and that it does not damage or partially corrupt its operating environment or cause other processes within that environment to become inoperative.
Contractual acceptance testing is performed based on the contract's acceptance criteria defined during the agreement of the contract, while regulatory acceptance testing is performed based on the regulations relevant to the software product. Both types of testing can be performed by users or independent testers. Regulatory acceptance testing sometimes involves the regulatory agencies auditing the test results.
Testing types, techniques and tactics
Different labels and ways of grouping testing may be described as testing types, software testing tactics, or software testing techniques.
Installation testing
Most software systems have installation procedures that are needed before they can be used for their main purpose. Testing these procedures to achieve an installed software system that may be used is known as installation testing.
Compatibility testing
A common cause of software failure (real or perceived) is a lack of its compatibility with other application software, operating systems (or operating system versions, old or new), or target environments that differ greatly from the original (such as a terminal or GUI application intended to be run on the desktop now being required to become a Web application, which must render in a Web browser). For example, in the case of a lack of backward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactively abstracting operating system functionality into a separate program module or library.
Smoke and sanity testing
Sanity testing determines whether it is reasonable to proceed with further testing.
Smoke testing consists of minimal attempts to operate the software, designed to determine whether there are any basic problems that will prevent it from working at all. Such tests can be used as a build verification test.
Regression testing
Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, such as degraded or lost features, including old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code. Regression testing is typically the largest test effort in commercial software development, due to checking numerous details in prior software features, and even new software can be developed while using some old test cases to test parts of the new design to ensure prior functionality is still supported.
Common methods of regression testing include re-running previous sets of test cases and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features. They can either be complete, for changes added late in the release or deemed to be risky, or be very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk. In regression testing, it is important to have strong assertions on the existing behavior. For this, it is possible to generate and add new assertions to existing test cases; this is known as automatic test amplification.
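A common pattern is to pin a dedicated test to each previously fixed fault, so that re-running the suite detects a returning bug immediately. The sketch below is illustrative; parse_price and the referenced issue number are hypothetical.

```python
import unittest

def parse_price(text: str) -> float:
    """Hypothetical function once affected by a thousands-separator bug."""
    return float(text.replace(",", "").lstrip("$"))

class PriceRegressionTests(unittest.TestCase):
    def test_basic_behaviour_still_works(self):
        self.assertEqual(parse_price("$10.50"), 10.50)

    def test_issue_1234_thousands_separator(self):
        # Pinned to a previously fixed defect: "$1,250.00" used to raise ValueError.
        # If this assertion ever fails again, the old bug has regressed.
        self.assertEqual(parse_price("$1,250.00"), 1250.00)

if __name__ == "__main__":
    unittest.main()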
Acceptance testing
Acceptance testing can mean one of two things:
A smoke test is used as a build acceptance test prior to further testing, e.g., before integration or regression.
Acceptance testing performed by the customer, often in their lab environment on their own hardware, is known as user acceptance testing (UAT). Acceptance testing may be performed as part of the hand-off process between any two phases of development.
Alpha testing
Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing before the software goes to beta testing.
Beta testing
Beta testing comes after alpha testing and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team known as beta testers. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Beta versions can be made available to the open public to increase the feedback field to a maximal number of future users and to deliver value earlier, for an extended or even indefinite period of time (perpetual beta).
Functional vs non-functional testing
Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work."
Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance, behavior under certain constraints, or security. Testing will determine the breaking point, the point at which extremes of scalability or performance lead to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.
Continuous testing
Continuous testing is the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate. Continuous testing includes the validation of both functional requirements and non-functional requirements; the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.
Destructive testing
Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing the robustness of input validation and error-management routines. Software fault injection, in the form of fuzzing, is an example of failure testing. Various commercial non-functional testing tools are linked from the software fault injection page; there are also numerous open-source and free software tools available that perform destructive testing.
Software performance testing
Performance testing is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users. This is generally referred to as software scalability. The related load testing activity, when performed as a non-functional activity, is often referred to as endurance testing. Volume testing is a way to test software functions even when certain components (for example a file or database) increase radically in size. Stress testing is a way to test reliability under unexpected or rare workloads. Stability testing (often referred to as load or endurance testing) checks whether the software can continue to function well over an acceptable period.
There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, scalability testing, and volume testing, are often used interchangeably.
Real-time software systems have strict timing constraints. To test if timing constraints are met, real-time testing is used.
Usability testing
Usability testing checks whether the user interface is easy to use and understand. It is concerned mainly with the use of the application. This is not a kind of testing that can be automated; actual human users are needed, monitored by skilled UI designers.
Accessibility testing
Accessibility testing may include compliance with standards such as:
Americans with Disabilities Act of 1990
Section 508 Amendment to the Rehabilitation Act of 1973
Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)
Security testing
Security testing is essential for software that processes confidential data to prevent system intrusion by hackers.
The International Organization for Standardization (ISO) defines this as a "type of testing conducted to evaluate the degree to which a test item, and associated data and information, are protected so that unauthorised persons or systems cannot use, read or modify them, and authorized persons or systems are not denied access to them."
Internationalization and localization
Testing for internationalization and localization validates that the software can be used with different languages and geographic regions. The process of pseudolocalization is used to test the ability of an application to be translated to another language, and make it easier to identify when the localization process may introduce new bugs into the product.
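A minimal pseudolocalization sketch is shown below; the accent mapping and 30% expansion factor are illustrative assumptions rather than a standard. Replacing vowels with accented variants and padding each string makes hard-coded or length-fragile strings visible before real translation begins.

```python
# Minimal pseudolocalization sketch: the accent map and padding factor are
# illustrative choices, not a standard.
ACCENTED = str.maketrans("aeiouAEIOU", "àéîöüÀÉÎÖÜ")

def pseudolocalize(message: str, expansion: float = 0.3) -> str:
    padded = message.translate(ACCENTED)
    padding = "~" * max(1, int(len(message) * expansion))
    return f"[{padded}{padding}]"

if __name__ == "__main__":
    # Hard-coded (untranslated) strings stay plain ASCII and stand out in the UI,
    # while truncation bugs show up because every string grows by roughly 30%.
    print(pseudolocalize("Save changes"))   # [Sàvé chàngés~~~]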
Globalization testing verifies that the software is adapted for a new culture (such as different currencies or time zones).
Actual translation to human languages must be tested, too. Possible localization and globalization failures include:
Software is often localized by translating a list of strings out of context, and the translator may choose the wrong translation for an ambiguous source string.
Technical terminology may become inconsistent, if the project is translated by several people without proper coordination or if the translator is imprudent.
Literal word-for-word translations may sound inappropriate, artificial or too technical in the target language.
Untranslated messages in the original language may be left hard coded in the source code.
Some messages may be created automatically at run time and the resulting string may be ungrammatical, functionally incorrect, misleading or confusing.
Software may use a keyboard shortcut that has no function on the source language's keyboard layout, but is used for typing characters in the layout of the target language.
Software may lack support for the character encoding of the target language.
Fonts and font sizes that are appropriate in the source language may be inappropriate in the target language; for example, CJK characters may become unreadable, if the font is too small.
A string in the target language may be longer than the software can handle. This may make the string partly invisible to the user or cause the software to crash or malfunction.
Software may lack proper support for reading or writing bi-directional text.
Software may display images with text that was not localized.
Localized operating systems may have differently named system configuration files and environment variables and different formats for date and currency.
Development testing
Development Testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Development Testing aims to eliminate construction errors before code is promoted to other testing; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development process.
Depending on the organization's expectations for software development, Development Testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other software testing practices.
A/B testing
A/B testing is a method of running a controlled experiment to determine if a proposed change is more effective than the current approach. Customers are routed to either a current version (control) of a feature, or to a modified version (treatment) and data is collected to determine which version is better at achieving the desired outcome.
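A simple sketch of the mechanics follows, under the assumption of a deterministic 50/50 split keyed on the user identifier and conversion as the outcome metric; a real experiment would add a statistical significance test before drawing conclusions.

```python
import hashlib
import random

def assign_variant(user_id: str) -> str:
    """Deterministic 50/50 split: hashing the user id keeps repeat visits in the same variant."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "treatment" if bucket < 50 else "control"

def conversion_rates(events):
    """events: iterable of (user_id, converted) pairs collected during the experiment."""
    totals = {"control": [0, 0], "treatment": [0, 0]}   # [conversions, visitors]
    for user_id, converted in events:
        variant = assign_variant(user_id)
        totals[variant][1] += 1
        totals[variant][0] += int(converted)
    return {v: conv / max(visits, 1) for v, (conv, visits) in totals.items()}

if __name__ == "__main__":
    # Simulated data: the treatment converts slightly more often than the control.
    rng = random.Random(0)
    events = [(f"user{i}",
               rng.random() < (0.12 if assign_variant(f"user{i}") == "treatment" else 0.10))
              for i in range(10_000)]
    print(conversion_rates(events))  # a real analysis would add a significance test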
Concurrent testing
Concurrent or concurrency testing assesses the behaviour and performance of software and systems that use concurrent computing, generally under normal usage conditions. Typical problems this type of testing will expose are deadlocks, race conditions and problems with shared memory/resource handling.
Conformance testing or type testing
In software testing, conformance testing verifies that a product performs according to its specified standards. Compilers, for instance, are extensively tested to determine whether they meet the recognized standard for that language.
Output comparison testing
Creating an expected display output, whether as a data comparison of text or screenshots of the UI, is sometimes called snapshot testing or golden master testing. Unlike many other forms of testing, this cannot detect failures automatically and instead requires that a human evaluate the output for inconsistencies.
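A minimal golden-master sketch, assuming a hypothetical render_report function and a committed golden file: the first run records the output for human review, and later runs fail whenever the output diverges from the recorded master.

```python
import pathlib
import unittest

def render_report(data: dict) -> str:
    """Hypothetical output generator whose exact text is the contract."""
    return "\n".join(f"{key}: {value}" for key, value in sorted(data.items()))

class GoldenMasterTest(unittest.TestCase):
    GOLDEN = pathlib.Path("golden/report.txt")

    def test_report_matches_golden_master(self):
        output = render_report({"total": 3, "status": "ok"})
        if not self.GOLDEN.exists():
            # First run: record the golden master; a human must review and commit it.
            self.GOLDEN.parent.mkdir(parents=True, exist_ok=True)
            self.GOLDEN.write_text(output)
            self.skipTest("golden master recorded; review and commit it")
        self.assertEqual(output, self.GOLDEN.read_text())

if __name__ == "__main__":
    unittest.main()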
Property testing
Property testing is a testing technique where, instead of asserting that specific inputs produce specific expected outputs, the practitioner randomly generates many inputs, runs the program on all of them, and asserts the truth of some "property" that should be true for every pair of input and output. For example, every input to a sort function should have the same length as its output. Every output from a sort function should be a monotonically increasing list.
Property testing libraries allow the user to control the strategy by which random inputs are constructed, to ensure coverage of degenerate cases, or inputs featuring specific patterns that are needed to fully exercise aspects of the implementation under test.
Property testing is also sometimes known as "generative testing" or "QuickCheck testing" since it was introduced and popularized by the Haskell library "QuickCheck."
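The two sorting properties mentioned above can be expressed directly with the Python library Hypothesis (an analogue of QuickCheck); the sketch below uses the built-in sorted function and is intended to be run under a test runner such as pytest.

```python
from hypothesis import given, strategies as st

# Properties mirror the text above: the output has the same length as the input,
# and the output is in non-decreasing order.

@given(st.lists(st.integers()))
def test_sort_preserves_length(xs):
    assert len(sorted(xs)) == len(xs)

@given(st.lists(st.integers()))
def test_sort_output_is_ordered(xs):
    ys = sorted(xs)
    assert all(a <= b for a, b in zip(ys, ys[1:]))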
Metamorphic testing
Metamorphic testing (MT) is a property-based software testing technique, which can be an effective approach for addressing the test oracle problem and test case generation problem. The test oracle problem is the difficulty of determining the expected outcomes of selected test cases or to determine whether the actual outputs agree with the expected outcomes.
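For instance, even without an oracle giving the exact value of sin(x) for an arbitrary x, the metamorphic relation sin(x) = sin(π − x) must hold for every input, which a test can check over many randomly generated inputs. The following is a minimal sketch.

```python
import math
import random
import unittest

class SineMetamorphicTest(unittest.TestCase):
    """We may not know the exact value of sin(x) for an arbitrary x (the oracle
    problem), but the relation sin(x) == sin(pi - x) must hold for every x."""

    def test_supplementary_angle_relation(self):
        rng = random.Random(42)
        for _ in range(1000):
            x = rng.uniform(-10.0, 10.0)
            self.assertAlmostEqual(math.sin(x), math.sin(math.pi - x), places=9)

if __name__ == "__main__":
    unittest.main()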
VCR testing
VCR testing, also known as "playback testing" or "record/replay" testing, is a testing technique for increasing the reliability and speed of regression tests that involve a component that is slow or unreliable to communicate with, often a third-party API outside of the tester's control. It involves making a recording ("cassette") of the system's interactions with the external component, and then replaying the recorded interactions as a substitute for communicating with the external system on subsequent runs of the test.
The technique was popularized in web development by the Ruby library vcr.
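With the Python port vcrpy the pattern looks roughly as follows (the URL and cassette path are illustrative): the first run performs and records the real HTTP request, and subsequent runs replay the recorded response instead of contacting the external service.

```python
import vcr       # the Python port "vcrpy" of the Ruby vcr library
import requests  # the HTTP client whose traffic is recorded

@vcr.use_cassette("fixtures/cassettes/example_org.yaml")
def test_fetches_example_page():
    # First run: the real request is made and recorded to the cassette file.
    # Subsequent runs: the recorded response is replayed, so the test is fast
    # and passes even when the external site is slow or unreachable.
    response = requests.get("https://example.org/")
    assert response.status_code == 200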
Testing process
Traditional waterfall development model
A common practice in waterfall development is that testing is performed by an independent group of testers. This can happen:
after the functionality is developed, but before it is shipped to the customer. This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing.
at the same moment the development project starts, as a continuous process until the project finishes.
However, even in the waterfall development model, unit testing is often done by the software development team even when further testing is done by a separate team.
Agile or XP development model
In contrast, some emerging software disciplines such as extreme programming and the agile software development movement, adhere to a "test-driven software development" model. In this process, unit tests are written first, by the software engineers (often with pair programming in the extreme programming methodology). The tests are expected to fail initially. Each failing test is followed by writing just enough code to make it pass. This means the test suites are continuously updated as new failure conditions and corner cases are discovered, and they are integrated with any regression tests that are developed. Unit tests are maintained along with the rest of the software source code and generally integrated into the build process (with inherently interactive tests being relegated to a partially manual build acceptance process).
The ultimate goals of this test process are to support continuous integration and to reduce defect rates.
This methodology increases the testing effort done by development, before reaching any formal testing team. In some other development models, most of the test execution occurs after the requirements have been defined and the coding process has been completed.
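A minimal red-green sketch of this cycle, with a hypothetical slugify function: the test is written first and fails, then just enough code is added to make it pass, and each newly discovered corner case repeats the cycle.

```python
import unittest

# Step 1 (red): this test is written before any implementation exists and fails.
class SlugifyTest(unittest.TestCase):
    def test_replaces_spaces_and_lowercases(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2 (green): just enough code to make the failing test pass.
def slugify(title: str) -> str:
    return title.lower().replace(" ", "-")

# Step 3: new failure conditions (punctuation, unicode, ...) get their own
# failing tests, and the cycle repeats; the suite then doubles as a regression suite.

if __name__ == "__main__":
    unittest.main()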
A sample testing cycle
Although variations exist between organizations, there is a typical cycle for testing. The sample below is common among organizations employing the Waterfall development model. The same practices are commonly found in other development models, but might not be as clear or explicit.
Requirements analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work to determine what aspects of a design are testable and with what parameters those tests work.
Test planning: Test strategy, test plan, testbed creation. Since many activities will be carried out during testing, a plan is needed.
Test development: Test procedures, test scenarios, test cases, test datasets, test scripts to use in testing software.
Test execution: Testers execute the software based on the plans and test documents, then report any errors found to the development team. This part can be complex when tests must be run by testers without programming knowledge.
Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
Test result analysis: Or Defect Analysis, is done by the development team usually along with the client, in order to decide what defects should be assigned, fixed, rejected (i.e. found software working properly) or deferred to be dealt with later.
Defect Retesting: Once a defect has been dealt with by the development team, it is retested by the testing team.
Regression testing: It is common to have a small test program built of a subset of tests, for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not ruined anything and that the software product as a whole is still working correctly.
Test Closure: Once the test meets the exit criteria, the activities such as capturing the key outputs, lessons learned, results, logs, documents related to the project are archived and used as a reference for future projects.
Automated testing
Many programming groups are relying more and more on automated testing, especially groups that use test-driven development. There are many frameworks to write tests in, and continuous integration software will run tests automatically every time code is checked into a version control system.
While automation cannot reproduce everything that a human can do (and all the ways they think of doing it), it can be very useful for regression testing. However, it does require a well-developed test suite of testing scripts in order to be truly useful.
Testing tools
Program testing and fault detection can be aided significantly by testing tools and debuggers.
Testing/debug tools include features such as:
Program monitors, permitting full or partial monitoring of program code, including:
Instruction set simulator, permitting complete instruction level monitoring and trace facilities
Hypervisor, permitting complete control of the execution of program code including:-
Program animation, permitting step-by-step execution and conditional breakpoint at source level or in machine code
Code coverage reports
Formatted dump or symbolic debugging, tools allowing inspection of program variables on error or at chosen points
Automated functional Graphical User Interface (GUI) testing tools are used to repeat system-level tests through the GUI
Benchmarks, allowing run-time performance comparisons to be made
Performance analysis (or profiling tools) that can help to highlight hot spots and resource usage
Some of these features may be incorporated into a single composite tool or an Integrated Development Environment (IDE).
Capture and replay
Capture and replay consists of collecting end-to-end usage scenarios while interacting with an application and turning these scenarios into test cases. Possible applications of capture and replay include the generation of regression tests. The SCARPE tool selectively captures a subset of the application under study as it executes. JRapture captures the sequence of interactions between an executing Java program and components on the host system such as files, or events on graphical user interfaces. These sequences can then be replayed for observation-based testing.
Saieva et al. propose to generate ad-hoc tests that replay recorded user execution traces in order to test candidate patches for critical security bugs. Pankti collects object profiles in production to generate focused differential unit tests. This tool enhances capture and replay with the systematic generation of derived test oracles.
Measurement in software testing
Quality measures include such topics as correctness, completeness, security and ISO/IEC 9126 requirements such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability.
There are a number of frequently used software metrics, or measures, which are used to assist in determining the state of the software or the adequacy of the testing.
Hierarchy of testing difficulty
Based on the number of test cases required to construct a complete test suite in each context (i.e. a test suite such that, if it is applied to the implementation under test, then we collect enough information to precisely determine whether the system is correct or incorrect according to some specification), a hierarchy of testing difficulty has been proposed.
It includes the following testability classes:
Class I: there exists a finite complete test suite.
Class II: any partial distinguishing rate (i.e., any incomplete capability to distinguish correct systems from incorrect systems) can be reached with a finite test suite.
Class III: there exists a countable complete test suite.
Class IV: there exists a complete test suite.
Class V: all cases.
It has been proved that each class is strictly included in the next. For instance, testing when we assume that the behavior of the implementation under test can be denoted by a deterministic finite-state machine for some known finite sets of inputs and outputs and with some known number of states belongs to Class I (and all subsequent classes). However, if the number of states is not known, then it only belongs to all classes from Class II on. If the implementation under test must be a deterministic finite-state machine failing the specification for a single trace (and its continuations), and its number of states is unknown, then it only belongs to classes from Class III on. Testing temporal machines where transitions are triggered if inputs are produced within some real-bounded interval only belongs to classes from Class IV on, whereas testing many non-deterministic systems only belongs to Class V (but not all, and some even belong to Class I). The inclusion into Class I does not require the simplicity of the assumed computation model, as some testing cases involving implementations written in any programming language, and testing implementations defined as machines depending on continuous magnitudes, have been proved to be in Class I. Other elaborated cases, such as the testing framework by Matthew Hennessy under must semantics, and temporal machines with rational timeouts, belong to Class II.
Testing artifacts
A software testing process can produce several artifacts. The actual artifacts produced are a factor of the software development model used, stakeholder and organisational needs.
Test plan A test plan is a document detailing the approach that will be taken for intended test activities. The plan may include aspects such as objectives, scope, processes and procedures, personnel requirements, and contingency plans. The test plan could come in the form of a single plan that includes all test types (like an acceptance or system test plan) and planning considerations, or it may be issued as a master test plan that provides an overview of more than one detailed test plan (a plan of a plan). A test plan can be, in some cases, part of a wide "test strategy" which documents overall testing approaches, which may itself be a master test plan or even a separate artifact.
Traceability matrix A traceability matrix is a table that correlates requirements or design documents to test documents. It is used to change tests when related source documents are changed, to select test cases for execution when planning for regression tests by considering requirement coverage.
Test case A test case normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and the actual result. Clinically defined, a test case is an input and an expected result. This can be as terse as 'for condition x your derived result is y', although normally test cases describe in more detail the input scenario and what results might be expected. It can occasionally be a series of steps (but often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step, or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repositories. In a database system, you may also be able to see past test results, who generated the results, and what system configuration was used to generate those results. These past results would usually be stored in a separate table.
Test script A test script is a procedure or programming code that replicates user actions. Initially, the term was derived from the product of work created by automated regression test tools. A test case will be a baseline to create test scripts using a tool or a program.
Test suite The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases. It definitely contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.
Test fixture or test data In most cases, multiple sets of values or data are used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data. It is also useful to provide this data to the client along with the product or project. There are techniques to generate test data.
Test harness The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.
Test run A report of the results from running a test case or a test suite
Certifications
Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. Note that a few practitioners argue that the testing field is not ready for certification, as mentioned in the controversy section.
Controversy
Some of the major software testing controversies include:
Agile vs. traditional Should testers learn to work under conditions of uncertainty and constant change or should they aim at process "maturity"? The agile testing movement has received growing popularity since 2006 mainly in commercial circles, whereas government and military software providers use this methodology but also the traditional test-last models (e.g., in the Waterfall model).
Manual vs. automated testing Some writers believe that test automation is so expensive relative to its value that it should be used sparingly. The test automation then can be considered as a way to capture and implement the requirements. As a general rule, the larger the system and the greater the complexity, the greater the ROI in test automation. Also, the investment in tools and expertise can be amortized over multiple projects with the right level of knowledge sharing within an organization.
Is the existence of the ISO 29119 software testing standard justified? Significant opposition has formed out of the ranks of the context-driven school of software testing about the ISO 29119 standard. Professional testing associations, such as the International Society for Software Testing, have attempted to have the standard withdrawn.
Some practitioners declare that the testing field is not ready for certification No certification now offered actually requires the applicant to show their ability to test software. No certification is based on a widely accepted body of knowledge. Certification itself cannot measure an individual's productivity, their skill, or practical knowledge, and cannot guarantee their competence, or professionalism as a tester.
Studies used to show the relative expense of fixing defects There are opposing views on the applicability of studies used to show the relative expense of fixing defects depending on their introduction and detection. For example:
It is commonly believed that the earlier a defect is found, the cheaper it is to fix it. The following table shows the cost of fixing the defect depending on the stage it was found. For example, if a problem in the requirements is found only post-release, then it would cost 10–100 times more to fix than if it had already been found by the requirements review. With the advent of modern continuous deployment practices and cloud-based services, the cost of re-deployment and maintenance may lessen over time.
The data from which this table is extrapolated is scant. Laurent Bossavit says in his analysis:
The "smaller projects" curve turns out to be from only two teams of first-year students, a sample size so small that extrapolating to "smaller projects in general" is totally indefensible. The GTE study does not explain its data, other than to say it came from two projects, one large and one small. The paper cited for the Bell Labs "Safeguard" project specifically disclaims having collected the fine-grained data that Boehm's data points suggest. The IBM study (Fagan's paper) contains claims that seem to contradict Boehm's graph and no numerical results that clearly correspond to his data points.
Boehm doesn't even cite a paper for the TRW data, except when writing for "Making Software" in 2010, and there he cited the original 1976 article. There exists a large study conducted at TRW at the right time for Boehm to cite it, but that paper doesn't contain the sort of data that would support Boehm's claims.
Related processes
Software verification and validation
Software testing is used in association with verification and validation:
Verification: Have we built the software right? (i.e., does it implement the requirements).
Validation: Have we built the right software? (i.e., do the deliverables satisfy the customer).
The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms defined with contradictory definitions. According to the IEEE Standard Glossary of Software Engineering Terminology:
Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.
And, according to the ISO 9000 standard:
Verification is confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
Validation is confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
The contradiction is caused by the use of the concepts of requirements and specified requirements but with different meanings.
In the case of IEEE standards, the specified requirements, mentioned in the definition of validation, are the set of problems, needs and wants of the stakeholders that the software must solve and satisfy. Such requirements are documented in a Software Requirements Specification (SRS). And, the products mentioned in the definition of verification, are the output artifacts of every phase of the software development process. These products are, in fact, specifications such as Architectural Design Specification, Detailed Design Specification, etc. The SRS is also a specification, but it cannot be verified (at least not in the sense used here, more on this subject below).
But, for the ISO 9000, the specified requirements are the set of specifications, as just mentioned above, that must be verified. A specification, as previously explained, is the product of a software development process phase that receives another specification as input. A specification is verified successfully when it correctly implements its input specification. All the specifications can be verified except the SRS because it is the first one (it can be validated, though). Examples: The Design Specification must implement the SRS; and, the Construction phase artifacts must implement the Design Specification.
So, when these words are defined in common terms, the apparent contradiction disappears.
Both the SRS and the software must be validated. The SRS can be validated statically by consulting with the stakeholders. Nevertheless, running some partial implementation of the software or a prototype of any kind (dynamic testing) and obtaining positive feedback from them, can further increase the certainty that the SRS is correctly formulated. On the other hand, the software, as a final and running product (not its artifacts and documents, including the source code) must be validated dynamically with the stakeholders by executing the software and having them try it.
Some might argue that, for SRS, the input is the words of stakeholders and, therefore, SRS validation is the same as SRS verification. Thinking this way is not advisable as it only causes more confusion. It is better to think of verification as a process involving a formal and technical input document.
Software quality assurance
Software testing may be considered a part of a software quality assurance (SQA) process. In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code and systems. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered software: the so-called defect rate. What constitutes an acceptable defect rate depends on the nature of the software; a flight simulator video game would have much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.
Software testing is an activity to investigate software under test in order to provide quality-related information to stakeholders. By contrast, QA (quality assurance) is the implementation of policies and procedures intended to prevent defects from reaching customers.
See also
References
Further reading
What is Software Testing? - Answered by community of Software Testers at Software Testing Board
External links
"Software that makes Software better" Economist.com
Software-testing.com community website
Computer occupations |
30635226 | https://en.wikipedia.org/wiki/Vietnamese-German%20University | Vietnamese-German University | The Vietnamese-German University (VGU) is a Vietnamese public university, located in Binh Duong, Vietnam. In its administrative and academic structure, VGU follows the German model and standards. VGU was founded officially in March 2008 under the form of a partnership between Vietnam and Germany. In September 2008, VGU had its first intake of students. The university currently offers Bachelor’s and Master's programs, covering the fields of engineering, natural sciences, and commerce.
History
The Vietnamese German University is founded on the cooperation between the Socialist Republic of Vietnam and the German Federal State of Hesse. The initial idea concerning the founding of a Vietnamese German University came up in 2006. The first agreements were arranged between the Vietnamese then-Minister for Education and Training (MOET), Prof. Dr. Nguyen Thien Nhan, and the Hessen State Minister for Higher Education, Research and the Arts (HMWK), Udo Corts. In May 2007 they signed a Memorandum of Understanding in the presence of the then German Federal President Horst Köhler and the President of the Socialist Republic of Vietnam, Nguyễn Minh Triết. The foundation of VGU took place in March 2008. Its first president was Prof. Dr. Wolf Rieck, then President of the University of Applied Sciences, Frankfurt am Main, Germany. Six months later, on September 10, 2008, VGU was officially opened by the then Hessen State Premier, Roland Koch, the Vietnamese then-Minister of Education and Training and Deputy Prime Minister, Prof. Dr. Nguyen Thien Nhan, and Prof. Dr. Wolf Rieck, President of VGU. Starting in September 2008, 35 students enrolled for VGU's first study program, “Electrical Engineering and Information Technology”, run by the University of Applied Sciences, Frankfurt am Main. Since 2009, VGU has offered four Master's programs, covering engineering areas as well as natural sciences. In the academic year 2011, two additional Bachelor's programs will be launched. A new Master's program will be run in close collaboration with VGU's newly founded “Vietnamese German Transport Research Centre” (VGTRC). VGU will widen its academic profile by adding Finance and Economics to its study programs.
VGU receives significant support from the VGU-Consortium, a non-profit organization which is considered as the academic backbone of VGU. More than 30 universities in Germany have joined the VGU-Consortium, including the TU9, an association of the leading German Universities of Technology. Officially founded in 2009, the VGU-Consortium supports VGU's academic and administrative issues.
VGU's development was also advanced by the foundation of the University Council in February 2010. Prof. Dr. Nguyen Thien Nhan, Vietnamese Deputy Prime Minister and former Minister of Education and Training (MOET), was nominated as its chairman. Another important milestone for VGU's development is a World Bank loan of 180 million USD, which was confirmed in June 2010. The loan is mainly provided for the construction of a new campus in Bình Dương Province, adjacent to Ho Chi Minh City, which will be opened in 2016/17.
Progress has also been made in VGU's academic development. VGU opened its first research centre, the “Vietnamese German Transport Research Centre”, in March 2010. It is part of a large, interdisciplinary research centre, the “Research Centre for High-Tech and Sustainability”, which VGU is about to establish. Under its roof, VGU plans to establish five research centres:
Traffic, Transport, Mobility and Logistics;
Renewable Energy Technologies, Lightning Technologies;
Sustainable Urban Development;
“Green” Technologies and Resource Management;
Biodiversity/Climate Change, Biotechnology.
VGU's “Research Centre for High-Tech Engineering and Sustainability” is to be constructed on the new campus in Binh Duong Province, adjacent to Ho Chi Minh City. In March 2010, the first research centre, the “Vietnamese German Transport Research Centre” (VGTRC) was already founded. It is located in Thủ Đức, about four km away from VGU's current campus.
The foundation of the Vietnamese German University is part of the Vietnamese reform programs concerning the education sector (Higher Education Reform Agenda, HERA). Considering the further modernization of Vietnam, there is a growing need for better higher education to push the country's economic development. Therefore, the Vietnamese higher education sector must be improved. The Vietnamese government has implemented a reform project, the “New Model University Project” (NMUP): four new universities are to be established within the next few years. Each of them is supported by a partner country and follows the university model of that country. VGU is a part of this program. The four universities are intended to have a strong influence on Vietnamese universities and to meet Vietnam's education needs.
Model, Strategies and Organization
The foundation of VGU is a novelty: VGU is the first Vietnamese public university which is built up together with international partners and has an autonomous status.
Model and strategies
The Vietnamese German University follows the German university model. VGU's strategy is to import excellent German study programs and customize them to meet the needs of Vietnamese higher education. VGU aims to contribute to Vietnam's economic development.
Model
VGU follows the German model and standards in its academic and administrative structures. According to the German university model, research and teaching are combined. The university's autonomous status is also part of Germany's higher education system: regarding VGU, institutional and academic freedom is guaranteed in its foundation documents. Furthermore, VGU cooperates closely with industry: companies in Vietnam as well as in Germany are VGU's cooperation partners. In this way, technology transfer and innovation are supported.
Strategies
VGU's main concern is to contribute to Vietnam's economic modernization and development. Therefore, VGU offers a modern education with international standards: working in close collaboration with German partner universities, VGU offers German-accredited programs that are run by those partner universities. The curricula are continuously adjusted to suit the needs of Vietnamese higher education. Students receive a German university degree after their studies at VGU. The university aims to provide an excellent education which grants its students the best opportunities on the job market. Students educated at VGU are expected to drive Vietnam's development.
Furthermore, VGU aims to enhance and strengthen the research competencies of young Vietnamese academics. The strategy of “Capacity Building” is integrated into VGU's research activities within the next few years.
Another strategy is the further education of VGU's academic and administrative employees. It is VGU's goal to ensure a “Gradual Handover”: most administrative and academic positions are to be handed over to Vietnamese employees, so that the number of German staff will be reduced.
Organization
In its academic and administrative structure, VGU follows the German university model. Its autonomous status is highly important.
Executive Board of the university
The University Council is the most important body in VGU's organizational structure. It consists of twenty members, nominated by the Vietnamese Ministry of Education and Training (MOET) and by the Hessen State Ministry for Higher Education, Research and the Arts (HMWK): each ministry nominates ten members of the University Council. Its chairman is Prof. Dr. Nguyen Thien Nhan, Vietnamese former Minister for Education and Training (MOET). VGU's Presidential Board consists of one President and four Vice Presidents, who are to be nominated soon. Since March 2008, Prof. Dr. Wolf Rieck has acted as the President of VGU. The Vietnamese and German sides are each in charge of nominating two Vice Presidents. The Academic Senate and an Advisory Board, consisting of twelve members, complete VGU's organizational structure.
Academic structures
VGU's academic structures are being developed and will progress over the next few years in line with VGU's general development. Interdisciplinary research centres and graduate schools will be established.
Stakeholders
The Vietnamese German University is based on the cooperation between the Socialist Republic of Vietnam and the Federal Republic of Germany, especially the State of Hesse. Both countries aim to build an acknowledged research university in Vietnam. VGU is substantially supported by German as well as Vietnamese institutions. As VGU is a Vietnamese state university, Vietnam is in charge of covering the running costs. German institutions such as the State of Hesse, the German Federal Government and the State of Baden-Württemberg support VGU financially. The German Academic Exchange Service (DAAD) and the World University Services (WUS, Germany) contribute to VGU's further development through their long-term experience in higher education. Furthermore, German and multi-national companies support VGU.
The VGU-Consortium, a non-profit organization, plays a significant role in VGU's progress. It consists of more than 30 universities, including the TU9, an association of Germany's leading Universities of Technology. The VGU-Consortium coordinates the cooperation with German partner universities and is in charge of academic as well as administrative issues. The VGU-Consortium also seeks to contribute to the internationalization of the German higher education sector.
An important milestone in VGU's development is the loan of US$180 million provided by the World Bank. Approved in June 2010, the loan is mainly used for the construction of the new campus in Binh Duong Province, a rapidly growing area adjacent to Ho Chi Minh City.
Studies
The Vietnamese German University (VGU) follows the model of German universities of technology. In its teaching and research it focuses on engineering and natural sciences.
German programs of study
During the transitional phase, VGU imports German programs of study to Vietnam, which are run by German partner universities. Bachelor's and Master's programs are offered. Within the next few years, VGU will also offer PhD programs. In this transitional phase, students receive two degrees: one German university degree provided by the German partner university and one degree from the Vietnamese-German University (VGU). The university degrees follow the criteria defined by the Bologna Process. After having established the necessary academic structures, VGU will offer study programmes which are independent of German partner universities. They will be offered by VGU itself. The university degrees will then also be provided by VGU as an acknowledged, autonomous and independent university.
To guarantee the quality of VGU's programs of study, different measures for quality assurance have been installed; for example, the programs of study must be accredited in Germany. Furthermore, the teaching staff as well as the programs are evaluated, and continuous further training is delivered to the teaching staff. VGU's programs of study are focused on real-world experience and industry to improve students' opportunities on the job market. VGU's profile is furthermore determined by its close cooperation with industry in Germany and Vietnam: research and teaching are to be strengthened by this cooperation, well-educated future employees are to be trained, and technology transfer is to be ensured. The close link to Germany is reinforced by exchange semesters at German universities, which are offered in some of the programs of study. VGU, considering itself a gateway to Germany, also offers the possibility of undertaking an internship in German companies.
Teaching
German professors, who lecture at VGU's German partner universities, are in charge of teaching at VGU. Staying in Ho Chi Minh City for a couple of weeks at a time, they offer single modules in blocked seminars, supported by Vietnamese lecturers. German program coordinators, who teach at VGU's German partner universities and stay in Vietnam for a longer period of time, organize and run the programs of study and customize them to the needs of Vietnamese higher education. The medium of instruction at VGU is English. Students are provided with English language courses run by native speakers. Bachelor's students are to participate in a mandatory “Foundation Year” before the chosen program of study starts: it teaches mainly English skills as well as specific basic knowledge, such as methodology, needed for the programs of study. Furthermore, VGU offers German language courses, teaching German and providing students with information about German politics, economy and culture.
Undergraduate programs
Since September 2008, VGU has offered the Bachelor's program in “Electrical Engineering and Information Technology”. In 2011, “Finance Management” and “Business Information” were added. The curricula are run in close cooperation with German partner universities; for example, the University of Applied Sciences Frankfurt am Main, Germany, is responsible for the Bachelor's program in “Electrical Engineering and Information Technology”. VGU's Bachelor students must complete a mandatory “Foundation Year” designed to bridge the gap between Vietnamese and German secondary education. The program focuses mainly on improving students' English skills, given that the language of instruction at VGU is English. Furthermore, specific basic knowledge, such as research methodology, that is crucial to the programs of study at VGU is also taught.
Post-graduate programs
VGU currently offers eight Master's programs:
“Business Information Systems” (BIS) in cooperation with the Heilbronn University of Applied Sciences (HHN)
“Computational Engineering” (COM) in cooperation with Ruhr University Bochum (RUB)
“Mechatronics and Sensor Systems Technology” (MST) is run by the University of Applied Sciences, Karlsruhe (HSKA)
“Sustainable Urban Development” (SUD) by the Technical University of Darmstadt (TUD)
"Global Production Engineering and Management" (GPE) in cooperation with the Technische Universität Berlin (TUB)
"Master in Business Administration" (MBA) is in cooperation with the University of Leipzig (LU)
"Water Technology, Reuse and Management" (WTE) is in cooperation with the Technical University of Darmstadt (TUD)
"IT Security" (ITS) is in cooperation with Hochschule Darmstadt
Tuition Fees and Scholarships
The tuition fees become due at the beginning of each semester. Due to generous support by Vietnamese and German institutions, the tuition fees at VGU are quite moderate. Furthermore, VGU offers various scholarships to 25% of the students. The scholarships are valued at 25% to 100% of VGU's tuition fees.
Research
It is VGU's goal to become a leading research university in Vietnam and the region within the next few years. Therefore, interdisciplinary research schools and centers, following the German model, are to be established. VGU plans to build up a “Research Centre for High-Tech Engineering and Sustainability”. It is to be structured as a foundation for single research centers. Five main areas of research have been identified:
Energy and Lighting Technology,
Traffic, Transport and Logistics,
Water Technologies and Water Resource Management,
Sustainable Urban Development, and
Green Technologies and Resource Management.
In March 2010, VGU's first research centre, the “Vietnamese German Transport Research Centre” (VGTRC), was founded.
Campus
VGU's campus is located at Binh Duong New City, Vietnam, and a bigger, more modern and better-equipped campus is being built. The World Bank loan approved in June 2010 is mainly intended for the construction of this new campus, which will be located in Binh Duong Province, 45 km from Ho Chi Minh City. In addition to teaching buildings, the plans also include VGU's research centers and dormitories. The construction of the new campus is to use the latest technologies: renewable energies, ecological construction methods, etc.
References
External links
http://www.hessen.de/irj/HMWK_Internet?cid=fe4a30288f1e35a9cde455e08a551d5d
Universities in Ho Chi Minh City
Educational institutions established in 2008
2008 establishments in Vietnam |
493414 | https://en.wikipedia.org/wiki/Patrick%20Volkerding | Patrick Volkerding | Patrick Volkerding (born October 20, 1966) is the founder and maintainer of the Slackware Linux distribution. Volkerding is Slackware's "Benevolent Dictator for Life" (BDFL), and is also known informally as "The Man".
Personal life
Volkerding earned a Bachelor of Science in computer science from Minnesota State University Moorhead in 1993. Volkerding is a Deadhead, and by April 1994 had already attended 75 concerts.
Volkerding is a Church of the SubGenius affiliate/member. The use of the word slack in "Slackware" is an homage to J. R. "Bob" Dobbs. About the SubGenius influence on Slackware, Volkerding has stated: "I'll admit that it was SubGenius inspired. In fact, back in the 2.0 through 3.0 days we used to print a dobbshead on each CD."
Volkerding is an avid homebrewer and beer lover. Early versions of Slackware would entreat users to send him a bottle of local beer in appreciation for his work.
Volkerding was married in 2001 to Andrea and has a daughter Briah Cecilia Volkerding (b. 2005).
Illness
In 2004, Volkerding announced via mailing list post that he was suffering from actinomycosis, a serious illness requiring multiple rounds of antibiotics and with an uncertain prognosis. This announcement caused a number of tech news outlets to wonder about the future of the Slackware project. As of 2012, both Volkerding and the Slackware project were reported to be in a healthy state again.
Slackware Linux
Michael Johnston of Morse Telecommunications paid Volkerding $1 per copy sold. After that six-month contract, Robert Bruce of Walnut Creek CDROM began publishing Slackware as a CD-ROM set. Robert Bruce later became Volkerding's partner in Slackware Linux, Inc., with Volkerding owning a non-controlling, minority, 40% share. Due to underpayment, Patrick Volkerding "told them to take it down or I'd suspend the DNS for the Slackware store".
Walnut Creek CDROM, 60% owner of Slackware Linux, Inc. was sold to BSDi and later to Wind River Systems.
Chris Lumens and others worked for Slackware, but due to underpayment, these people lost their jobs.
For the last several years Volkerding has managed Slackware with the help of many volunteers and testers.
Awards
In 2014, Volkerding received the O'Reilly Open Source Award.
Works
Volkerding, Patrick, and Reichard, Kevin, Linux System Commands, IDG Books/M&T Books, 2000,
Volkerding, Patrick, Reichard, Kevin, and Foster-Johnson, Eric, Instalación y configuración de Linux, Temas profesionales. Madrid: Anaya Multimedia, 1999
Volkerding, Patrick, Reichard, Kevin, and Foster-Johnson, Eric, LINUX Configuration and Installation, M&T Books, 1998,
Volkerding, Patrick, and Reichard, Kevin, Linux in Plain English, MIS:Press, 1997,
References
Interviews
Interview with LinuxQuestions.org, 2012
Linux Link Tech Show interview (audio), 2006
Slashdot interview with Patrick Volkerding, 2000
Linux Journal interview
The Age (Australia) Interview
Hacker Public Radio Interview, 2011
External links
Patrick Volkerding's home page
1966 births
Living people
Linux people
Slackware
American SubGenii
Free software programmers
Minnesota State University Moorhead alumni |
27838130 | https://en.wikipedia.org/wiki/Danger%20from%20the%20Deep | Danger from the Deep | Danger from the Deep, often abbreviated as DftD, is an open-source World War II German U-boat simulation for PC, striving for technical and historical accuracy.
Development
The project was registered in 2003 on sourceforge.net and has since been developed as open source software under the GPLv2. In 2004 it reached beta status.
The game is cross-platform, supporting FreeBSD, OpenBSD, Mac OS X, Linux distributions, and Microsoft Windows by utilizing SDL and OpenGL. The hardware addressed is OpenGL 1.5 (with "OpenGL 2.0 or greater" recommended), around 256 MB of RAM, a 1 GHz processor and common PC input devices (keyboard, mouse).
Development is intermittent. As of June 11, 2020, the latest commit to the Git repository was from May 10, 2020; the last downloadable release was from May 8, 2010.
Reception
A Linux Journal review from 2010 was quite positive about DftD.
In 2004 The Wargamer recommended the game to "serious sim gamers", who should "head over to Danger from the Deep's official web site and take a look". In 2011 an Ars Technica article on the history of simulation games noted: "These days, submarine sims [...] are kept alive by the open-source Danger from the Deep".
Between 2003 and April 2017 the game was downloaded 1.3 million times from SourceForge alone; chip.de counted another 100,000 downloads.
See also
External links
References
Linux games
Submarine simulation video games
Open-source video games |
401962 | https://en.wikipedia.org/wiki/ADABAS | ADABAS | Adabas, a contraction of "adaptable database system", is a database package that was developed by Software AG to run on IBM mainframes. It was launched in 1971 as a non-relational database. As of 2019, Adabas is marketed for use on a wider range of platforms, including Linux, Unix, and Windows.
Adabas can store multiple data relationships in the same table.
History
Initially released by Software AG in 1971 on IBM mainframe systems using DOS/360, OS/MFT, or OS/MVT, Adabas is currently available on a range of enterprise systems, including BS2000, z/VSE, z/OS, Unix, Linux, and Microsoft Windows. Adabas is frequently used in conjunction with Software AG's programming language Natural; many applications that use Adabas as a database on the back end are developed with Natural. In 2016, Software AG announced that Adabas and Natural would be supported through the year 2050 and beyond.
Adabas is one of the three major inverted list DBMS packages, the other two being Computer Corporation of America’s Model 204 and ADR’s Datacom/DB.
4GL support
Since the 1979 introduction of Natural the popularity of Adabas databases has grown. By 1990, SAS was supporting Adabas.
Non-relational
In a 2015 white paper, IBM said, "applications that are written in a pre-relational database, such as Adabas, are no longer mainstream and do not follow accepted IT industry standards." However, an Adabas database can be designed in accordance with the relational model. While there are tools and services to facilitate converting Adabas to various relational databases, such migrations are usually costly.
Hardware zIIP boost
IBM's zIIP (System z Integrated Information Processor) special purpose processors permit "direct, real-time SQL access to Adabas" (even though the data may still be stored in a non-relational form).
Adabas Data Model
Adabas is an acronym for Adaptable Data Base System (originally written in all caps; today only the initial cap is used for the product name.)
Adabas is an inverted list data base, with the following characteristics or terminology:
Works with tables (referred to as files) and rows (referred to as records) as the major organizational units
Columns (referred to as fields) are components of rows
No embedded SQL engine. SQL access via the Adabas SQL Gateway was introduced through an acquired company, CONNX, in 2004. It provides ODBC, JDBC, and OLE DB access to Adabas and enables SQL access to Adabas using COBOL programs.
Search facilities may use indexed fields or non-indexed fields or both.
Does not natively enforce referential integrity constraints, and parent-child relations must be maintained by application code.
Supports two methods of denormalization: repeating groups in a record ("periodic groups") and multiple value fields in a record ("multi-value fields").
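The following is a generic illustration of the inverted list idea (a Python sketch, not Adabas code; the field names, values and record numbers are invented for the example): a search is resolved by set operations on per-value lists of record numbers (ISNs) rather than by scanning the records.

# Illustrative only: how an inverted list resolves a search.
from collections import defaultdict

records = {
    1: {"NAME": "JONES", "CITY": "BOSTON"},
    2: {"NAME": "BAKER", "CITY": "DERBY"},
    3: {"NAME": "JONES", "CITY": "DERBY"},
}

# One inverted list per indexed field (descriptor): value -> ascending ISNs.
inverted = defaultdict(lambda: defaultdict(list))
for isn in sorted(records):
    for field, value in records[isn].items():
        inverted[field][value].append(isn)

# A query like FIND ... WITH NAME = 'JONES' AND CITY = 'DERBY' becomes
# an intersection of two ISN lists instead of a scan of the records.
result = set(inverted["NAME"]["JONES"]) & set(inverted["CITY"]["DERBY"])
print(sorted(result))  # [3]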
Adabas is typically used in applications that require high volumes of data processing or in high transaction online analytical processing environments.
Adabas access is normally through Natural modules using one of several Natural statements including READ, FIND, and HISTOGRAM. These statements generate additional commands, under the covers, like open and close file. Adabas data can also be retrieved via direct calls.
Example of Natural program running against Adabas
FIND EMPLOYEE WITH NAME = 'JONES' OR = 'BAKER'
AND CITY = 'BOSTON' THRU 'NEW YORK'
AND CITY NE 'CHAPEL HILL'
SORTED BY NAME
WHERE SALARY < 28000
DISPLAY NAME FIRST-NAME CITY SALARY
END-FIND
END
Output of Program:
NAME FIRST-NAME CITY SALARY
---------------------------------
BAKER PAULINE DERBY 4450
JONES MARTHA KALAMAZOO 21000
JONES KEVIN DERBY 7000
Natural (4GL)
Natural is a proprietary fourth-generation programming language. It was not part of the initial (1971) Adabas release.
Natural programs can be "run" interpretively or "executed" as compiled objects. Compiled programs can more directly use operating system services, and run faster.
Proponents say that Natural has evolved from a competitor of COBOL to "being in competition with Java as language of choice for writing services (SOA)".
About Natural
Natural, which includes a built-in screen-oriented editor, has two main components: the system and the language.
The system is the central vehicle of communication between the user and all other components of the processing environment.
The language is structured and less procedural than conventional languages.
Natural objects (programs, maps, data areas, etc.) are stored in libraries, similar in structure to a DOS directory, and can be named with identifiers up to 8 characters.
Objects, even if they are of different types, cannot have the same name.
Natural provides both online and batch execution and programming testing utilities.
Versions exist for z/OS, z/VSE, BS2000/OS, Linux, Unix and Windows.
Language features
Natural works not only with Adabas files, but also supports Oracle, DB2, and others.
Sample code:
DEFINE DATA LOCAL
01 EMPLOYEES VIEW OF EMPLOYEES
02 SALARY (1)
END-DEFINE
READ EMPLOYEES BY NAME
AT END OF DATA
DISPLAY
MIN (EMPLOYEES.SALARY(1)) (EM=ZZZ,ZZZ,ZZ9)
AVER(EMPLOYEES.SALARY(1)) (EM=ZZZ,ZZZ,ZZ9)
MAX (EMPLOYEES.SALARY(1)) (EM=ZZZ,ZZZ,ZZ9)
END-ENDDATA
END-READ
END
Output:
Page 1 18-08-22 16:42:22
ANNUAL ANNUAL ANNUAL
SALARY SALARY SALARY
----------- ----------- -----------
0 240,976 6,380,000
The language is strongly-typed, using explicit typing of variables, which may be one of:
Alphanumeric
Numeric (zoned decimal, up to 27 total digits, of which up to 7 may be to the right of the decimal point)
Packed decimal (same limits as Numeric)
Integer (1, 2 or 4 bytes, ranging from -128 to 127 / -32,768 to 32,767 and -2,147,483,648 to 2,147,483,647)
Date
Logical (True or False)
Binary (a single byte, according to the translator)
Control variable paralleling CICS map attribute
Floating Point (4 or 8 bytes)
The system file
The system file is an Adabas file reserved for use by Natural, which contains, but is not limited to, the following:
All Natural programs, both in source format (programs) and in object format (compiled), grouped in libraries;
File Definition Modules, or Data Definition Modules (DDM), with the definition for the Natural or Adabas files and their userviews;
Natural error messages;
The texts of the Help function.
The system file is not limited to Adabas. Natural can also store programs in VSAM on mainframe operating systems. Natural uses the file system on Windows and various Unix implementations.
Programs
Natural objects are identified by names up to 8 characters, the first of which must be alphabetical.
The Natural program editor allows source lines of up to 72 positions. Lines are numbered with 4 digits; this numbering is generated by Natural during program creation. Line numbers are used by the compiler and editors, and can have important logical functions in programs.
Within the lines, instructions (statements or program commands) have no positional parameters.
Comments can be included in two ways:
Full-line comments are identified by a "*" or "**" prefix.
Annotated code lines have a "/*" - everything to its right is a comment.
Examples:
0010 * These two lines (0010 and 0020)
0020 ** are comments.
0030 FORMAT LS = 80 /* As well as this part of the line (0030)
0040 * NOTE: The "/*" form has no space between the SLASH and ASTERISK.
.
.
0200 END
"END" or "." indicates the end of a program.
A Hello World code example:
* Hello World in NATURAL
WRITE 'Hello World!'
END
See also
Adabas D
References
Bibliography
External links
ADABAS product home page
ADABAS Developer Community
ADABAS Discussion Forum
Proprietary database management systems
Software AG
IBM mainframe software |
2672653 | https://en.wikipedia.org/wiki/IEEE%20Transactions%20on%20Software%20Engineering | IEEE Transactions on Software Engineering | The IEEE Transactions on Software Engineering is a monthly peer-reviewed scientific journal published by the IEEE Computer Society. It was established in 1975 and covers the area of software engineering. It is considered the leading journal in this field.
Abstracting and indexing
The journal is abstracted and indexed in the Science Citation Index Expanded and Current Contents/Engineering, Computing & Technology. According to the Journal Citation Reports, the journal has a 2013 impact factor of 2.292.
Past Editors in Chief
See also
IEEE Software
IET Software
References
External links
Transactions on Software Engineering
Computer science journals
Software engineering publications
Monthly journals
Publications established in 1975
English-language journals |
52634803 | https://en.wikipedia.org/wiki/List%20of%20Nginx%E2%80%93MySQL%E2%80%93PHP%20packages | List of Nginx–MySQL–PHP packages | This is a list of notable NMP (Nginx, MySQL/MariaDB, PHP) solution stacks for all computer platforms; these software bundles are used to run dynamic Web sites or servers.
Alternate/similar software stacks
AMP - Apache, MariaDB, Perl/PHP/Python on Windows, macOS, Linux and others
LAPP - Linux, Apache, PostgreSQL, Perl/PHP/Python
LLMP - Linux, Lighttpd, MySQL/MariaDB, Perl/PHP/Python
References
Lists of software |
38648078 | https://en.wikipedia.org/wiki/Ecoute | Ecoute | Ecoute is a standalone music player application for macOS and iOS.
It was created by Louka Desroziers (Developer) and Julien Sagot (Interface Designer). The current version is 3.0.8 for macOS and 2.5.7 for iOS.
Mac version
Ecoute plays content from the iTunes library. Music, videos, and podcasts go into this small application, which does not require iTunes to be launched even though it uses iTunes' music library, playlists, and related information such as MP3 metadata tags. Users may switch back and forth between using Ecoute and iTunes. Ecoute does not require separately importing music or information; it uses the same files that iTunes does.
Ecoute is a lightweight player. Features include the ability to search for missing artwork, and customizable themes. Users may share information about the music they are listening to on Facebook, Twitter, and Last.fm. Ecoute supports Growl alerts, so the listener can see exactly what is playing.
The Desktop Controller is a very simple controller that sits in the background of the desktop. It shows the Album artwork, along with the details of the track currently playing and provides access to the music navigation controls.
iOS version
The makers of Ecoute have launched an official iPhone app in the App Store, on 17 August 2012. Ecoute for iOS serves as a music player with Twitter integration, AirPlay support, music filters, podcast support, and more.
Reviews
Nick Mead from Softonic summarised his review as follows: "Ecoute is an excellent, lightweight alternative to the increasingly bloated issues that you may have been having with iTunes". Federico Viticci from MacStories described the Mac OS X version of Ecoute as a "small, powerful alternative to iTunes". Lukas Hermann from MacStories described the iOS application as "the best music player for iOS". Shane Richmond from The Daily Telegraph also praised the iOS application, saying that it has a "much more pleasant and user-friendly design than Apple's iPod app".
References
External links
Official website
MacOS media players
IOS software |
4561755 | https://en.wikipedia.org/wiki/1880%20in%20baseball | 1880 in baseball |
Champions
National League: Chicago White Stockings
National Association: Washington Nationals
Inter-league playoff: Washington (NA) def. Chicago (NL), 4 games to 3 (1 tie game)
National League final standings
Statistical leaders
Events
January–March
February 5 – The Worcester Ruby Legs are admitted to the National League.
March 31 – The Worcester Ruby Legs offer the Providence Grays $1,000 for negotiating rights with Providence player-manager George Wright. The Grays refuse the offer and Wright remains the reserved property of Providence.
April–June
April 21 – George Wright turns down the Providence Grays final contract offer. As a reserved player obligated to Providence, Wright has no other option but to sit out the season (although he does mysteriously appear in 1 game on May 29 for the Boston Red Caps).
April 28 – Lew Brown, catcher for the Boston Red Caps, arrives drunk for an exhibition game and is suspended for the entire season by the Red Caps.
May 1 – The Cincinnati Stars make their major league debut with a 4–3 loss to the Chicago White Stockings at Bank Street Grounds.
May 1 – Roger Connor and Mickey Welch make their debuts for the Troy Trojans. Troy loses 13–1 to the Worcester Ruby Legs, who win their first National League game.
May 1 – Ned Hanlon makes his debut for the Cleveland Blues in a losing effort. Hanlon will be elected to the Hall-of-Fame in 1996.
May 5 – Charley "Old Hoss" Radbourn debuts for the Providence Grays.
May 20 – Chicago White Stockings manager Cap Anson begins alternating Larry Corcoran and Fred Goldsmith to form the first pitching rotation in major league history.
May 29 – The Chicago White Stockings set a National League record by winning their 13th consecutive game, a record they will shatter in 4 weeks.
George Wright is acquired by the Boston Red Stockings from the Providence Grays.
June 10 – 1879 home run champ Charley Jones of the Boston Red Caps becomes the first player to hit 2 homers in one inning in a Boston victory over the Buffalo Bisons.
June 12 – Lee Richmond of the Worcester Ruby Legs pitches the first perfect game in professional history in a 1–0 victory over the Cleveland Blues.
June 17 – John Montgomery Ward of the Providence Grays pitches the 2nd perfect game in 6 days as the Grays defeat Pud Galvin and the Buffalo Bisons 5-0. The National League would not see another perfect game until 1964.
July–September
July 8 – The Chicago White Stockings win their 21st consecutive game. This record will stand until 1916 when it is broken by the New York Giants. It still stands as the 2nd longest winning streak in major league history.
July 11 – The Chicago Tribune publishes runs batted in for the first time.
July 17 – Harry Stovey of the Worcester Ruby Legs hits his first big league home run. Stovey will become the first player in history to reach 100 career home runs.
August 6 – Tim Keefe makes his major league debut with the Troy Trojans, pitching a 4-hitter in defeating the Cincinnati Stars. Keefe will end up with 342 career wins and be elected to the Hall-of-Fame in 1964.
August 19 – Larry Corcoran of the Chicago White Stockings pitches a no-hitter against the Boston Red Caps.
August 20 – Pud Galvin pitches a no-hitter for the Buffalo Bisons against the Worcester Ruby Legs. It is the 2nd day in a row that the National League has seen a no-hitter.
August 27 – Bill Crowley of the Buffalo Bisons records 4 assists from the outfield for the second time this season, having done it previously on May 24. Crowley remains the only outfielder to ever have 4 assists in one game on two separate occasions.
September 1 – Charley Jones of the Boston Red Caps refuses to play after the club fails to pay him $378 in back pay. The team responds by suspending, fining and black-listing him. Jones will never again play in the National League, although he will appear again beginning in 1883 in the American Association.
September 2 – The first night game is played in Nantasket Beach, Massachusetts. The Jordan Marsh and R. H. White department stores from Boston play to a 16–16 tie.
September 8 – The Polo Grounds in New York City are leased by a new Metropolitan team being led by Jim Mutrie.
September 9 – Buck Ewing makes his debut for the Troy Trojans.
September 15 – John O'Rourke, older brother of Jim O'Rourke, becomes the first player to hit 4 doubles in one game.
September 15 – The Chicago White Stockings clinch the pennant with a 5–2 win over the Cincinnati Stars.
September 29 – The Polo Grounds hosts its first baseball game as the newly formed New York Metropolitans defeat the National Association champion Washington Nationals 4–2. Approximately 2,500 people attend the game, the largest crowd to see a game in New York City in several years.
September 30 – The last place Cincinnati Stars win their final game 2–0 in front of 183 fans. This will be the last game for this troubled franchise, although the city will see the current version of the Reds begin play in 1882.
October–December
October 4 – The National League prohibits the sale of alcoholic beverages in member parks and also prohibits member parks from being rented out on Sundays. These rulings are directly aimed at the Cincinnati Stars club who routinely did both in order to raise additional money for their continual struggling finances.
October 6 – The Cincinnati Stars refuse to abide by the new rules set down and are immediately kicked out of the National League.
December 8 – The National League rejects the Washington Nationals bid for membership and accepts the Detroit Wolverines as its newest member.
December 9 – The National League re-elects William Hulbert as president and adopts several new rules for 1881. Among the new rules are reducing called balls for a walk down to 7 and moving the pitching box back 5 feet to the new distance of 50 feet.
Births
January–April
January 5 – Dutch Jordan
January 13 – Goat Anderson
January 21 – Emil Batch
January 22 – Bill O'Neill
January 23 – Julián Castillo
January 27 – Bill Burns
February 6 – Frank LaPorte
February 7 – Dave Williams
February 14 – Claude Berry
February 16 – Carl Lundgren
March 2 – Danny Hoffman
March 10 – Judge Nagle
March 22 – Ernie Quigley
April 12 – Addie Joss
April 18 – Sam Crawford
April 20 – Charlie Smith
May–August
May 7 – Mickey Doolan
June 12 – Matty McIntyre
June 30 – Davy Jones
July 4 – George Mullin
July 14 – Ed Hug
July 22 – George Gibson
July 27 – Jack Doscher
July 27 – Irish McIlveen
July 27 – Joe Tinker
July 29 – Chief Meyers
August 12 – Christy Mathewson
August 30 – Charlie Armbruster
September–December
September 2 – Fred Payne
September 10 – Harry Niles
September 10 – Barney Pelty
September 12 – Boss Schmidt
September 23 – Heinie Wagner
September 29 – Harry Lumley
October 3 – Henry Thielman
October 12 – Pete Hill
October 21 – Jack Hayden
October 25 – Weldon Henley
October 25 – Bill Brennan
November 20 – George McBride
November 21 – Simmy Murch
November 25 – Frank Corridon
December 2 – Tom Doran
December 17 – Cy Falkenberg
December 23 – Doc Gessler
Deaths
November 23 – Jack McDonald, 36?, right fielder who hit .258 with the 1872 Brooklyn Atlantics.
External links
1880 season at baseball-reference.com
Charlton's Baseball Chronology at BaseballLibrary.com
Year by Year History at Baseball-Almanac.com
Retrosheet.org |
17066929 | https://en.wikipedia.org/wiki/Find%20%28Windows%29 | Find (Windows) | In computing, find is a command in the command-line interpreters (shells) of a number of operating systems. It is used to search for a specific text string in a file or files. The command sends the specified lines to the standard output device.
Overview
The find command is a filter to find lines in the input data stream that contain or don't contain a specified string and send these to the output data stream. It does not support wildcard characters.
The command is available in DOS, Digital Research FlexOS, IBM/Toshiba 4690 OS, IBM OS/2, Microsoft Windows, and ReactOS. On MS-DOS, the command is available in versions 2 and later. DR DOS 6.0 and Datalight ROM-DOS include an implementation of the command. The FreeDOS version was developed by Jim Hall and is licensed under the GPL.
The Unix command find performs an entirely different function, analogous to forfiles on Windows. The rough equivalent to the Windows find is the Unix grep.
Syntax
FIND [/V] [/C] [/N] [/I] "string" [[drive:][path]filename[...]]
Arguments:
"string" This command-line argument specifies the text string to find.
[drive:][path]filename Specifies a file or files in which to search the specified string.
Flags:
/V Displays all lines NOT containing the specified string.
/C Displays only the count of lines containing the string.
/N Displays line numbers with the displayed lines.
/I Ignores the case of characters when searching for the string.
Note:
If a pathname is not specified, FIND searches the text typed at the prompt
or piped from another command.
Examples
C:\>find "keyword" < inputfilename > outputfilename
C:\>find /V "any string" FileName
See also
Findstr, Windows and ReactOS command-line tool to search for patterns of text in files.
find (Unix), a Unix command that finds files by attribute, very different from Windows find
grep, a Unix command that finds text matching a pattern, similar to Windows find
forfiles, a Windows command that finds files by attribute, similar to Unix find
Regular expression
List of DOS commands
References
Further reading
External links
Open source FIND implementation that comes with MS-DOS v2.0
External DOS commands
Microcomputer software
Microsoft free software
OS/2 commands
ReactOS commands
Pattern matching
Windows administration |
14720182 | https://en.wikipedia.org/wiki/Simple%20DNS%20Plus | Simple DNS Plus |
Overview
Simple DNS Plus is a DNS server software product that runs on x86 and x64 editions of Windows operating system.
All options and settings are available directly from a Windows user interface.
It provides wizards for common tasks such as setting up new zones, importing data, making bulk updates, etc.
It has full support for IPv6. It has an option to control protocol preference (IPv4 / IPv6) on dual-stack computers, and it can even act as IPv6-to-IPv4 or IPv4-to-IPv6 forwarder.
It has full support for internationalized domain names (IDNs). You can enter domain names with native characters directly (no punycode conversion needed), display either native-character or punycoded domain names anywhere in the user interface, and quickly switch between these modes.
You can create DNS records or entire zone files from other applications or web sites and prompt Simple DNS Plus to dynamically load and use them through command line options, a simple HTTP API, and a full .NET/COM programming API.
Simple DNS Plus is based on the Microsoft .NET Framework 4.8 and is 100% managed code, protecting it from common security issues such as buffer overruns, and making it run natively on both 32 bit and 64 bit CPUs and Windows versions, including Windows Vista.
History / Versions
Version numbers, date released, and new feature highlights
Version 1.00 - 3 June 1999
First official release
Version 2.00 - 10 December 1999
Binding to specific local IP addresses
Limit recursion to one or more IP address ranges
IP address blocking
Support for AAAA and SRV records
Run as NT/Windows service
Reverse zone wizard
Wildcard records
Standard zone transfers
Cache snapshot viewer
Version 3.00 - 24 August 2000
Import wizard
Zone file sharing
Support for HINFO, MB, MG, MINFO, MR AFSDB, ISDN, RP, RT, X25, NSAP, and ATMA records
Standard Zone files compatible with BIND
Command line options
Version 3.20 - 2 April 2001
Super Master/Slave
HTTP API
Dynamic updates
Incremental zone transfers
Support for A6, DNAME records
Version 3.50 - 3 October 2003
Separation of service and GUI
NXDOMAIN redirect
Support for LOC, NAPTR records
Version 3.60 - 27 June 2004
TSIG signed dynamic updates
Domain specific forwarding
Stealth DNS
Version 4.00 - 10 April 2005
Automatic SPF records
NAT IP Alias
Record and zone comments
Bulk update wizard
Zone groups
Version 5.0 - 17 January 2008
Version 5.0 was re-written for the .NET Framework 2.0
Windows Vista / Windows Server 2008 support
IPv6 support
IDN support
Plug-in system
Quick Zone Templates
Support for additional DNS record types
Version 5.1 - 8 July 2008
Suspending zones
Remote logging to syslog server
Response Filtering to prevent DNS rebinding attacks
Support for additional DNS record types
Version 5.2 - 23 April 2009
Windows 7 / Windows server core support
Remote Management
DNSSEC hosting
Secure Zone Transfers (TSIG signed)
Check Internet Delegations wizard
Windows Performance Counters
DNS request "rules" for plug-ins
Support for additional DNS record types
Version 5.3 - 27 October 2015
ALIAS-records (Auto Resolved Alias)
DNS0x20
HTTP API can share port 80 / domain / partial URL with IIS
New authentication options for HTTP API
CERT-records (Certificate / CRL)
TLSA-records (Transport Layer Security Authentication)
Version 6.0 - 20 April 2016
Zone version control
All DNS data and program settings in single database file
View/Save as standard zone file
Export standard boot file and zone files
SHA-256 and SHA-512 in DNSSEC signatures
SHA-256 and SHA-384 hashes in DNSSEC DS-records
CAA-records (Certification Authority Authorization)
Version 7.0 - 19 May 2018
New HTTP API (v. 2)
HTTP API - CORS support
HTTP API - SSL support
HTTP API - debugging log files
New zone account-ID setting
Import zones from a Simple DNS Plus v. 6.0 / 7.x database file
Enhanced auto IP address blocking
Version 8.0 - 2 July 2018
Automatic DNSSEC signing
Automatic DNSSEC ZSK rollover
On-line DNSSEC keys
Scheduled automatic deletion of on-line DNSSEC keys
Combining on-line and off-line DNSSEC keys
New function to remove/disable DNSSEC for a zone
DNSSEC records hidden in GUI
New HTTP API commands for DNSSEC
Version 9.0 - 28 September 2021
DNS over TLS (DoT) and DNS over HTTPS (DoH)
New "Bind SSL certificate" helper function
"HTTPS" DNS record type
Version 9.1 - 28 October 2021
JavaScript plug-in
New asynchronous plug-in interface
Plug-in query order enhanced
See also
Comparison of DNS server software
External links
Simple DNS Plus
Don Moore's May 2004 DNS Internet survey
András Salamon's DNS Resources Directory
DNS software |
41781042 | https://en.wikipedia.org/wiki/Base%20and%20bounds | Base and bounds | In computing base and bounds refers to a simple form of virtual memory where access to computer memory is controlled by one or a small number of sets of processor registers called base and bounds registers.
In its simplest form each user process is assigned a single contiguous segment of main memory. The operating system loads the physical address of this segment into a base register and its size into a bounds register. Virtual addresses seen by the program are added to the contents of the base register to generate the physical address, and the address is checked against the contents of the bounds register to prevent a process from accessing memory beyond its assigned segment.
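As a minimal illustrative sketch (Python, not modelled on any particular machine; it assumes, as above, that the bounds register holds the segment size), the translation and check amount to:

# Illustrative only: base-and-bounds address translation.
class SegmentViolation(Exception):
    pass

def translate(virtual_address, base, bound):
    # The bounds check prevents access beyond the assigned segment.
    if virtual_address >= bound:
        raise SegmentViolation(f"address {virtual_address:#x} exceeds bound {bound:#x}")
    # Relocation: the base register holds the segment's physical start address.
    return base + virtual_address

# A process whose segment starts at physical 0x4000 and is 0x1000 bytes long:
print(hex(translate(0x0042, base=0x4000, bound=0x1000)))  # prints 0x4042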
The operating system is not constrained by the hardware and can access all of physical memory.
This technique protects memory used by one process against access or modification by another. By itself it does not protect memory from erroneous access by the owning process. It also allows programs to be easily relocated in memory, since only the base and bounds registers have to be modified when the program is moved.
Some computer systems extended this mechanism to multiple segments, such as the i bank and d bank for instructions and data on the UNIVAC 1100 series computers or the separation of memory on the DEC PDP-10 system into a read/write "low" segment for the user process and a read-only "high" segment for sharable code.
Segmented virtual memory is a further generalization of this mechanism to a large number of segments. Usually the segment table is kept in memory rather than registers.
See also
Memory management (operating systems)
Memory management unit
References
Memory management
Virtual memory |
6486874 | https://en.wikipedia.org/wiki/Telelogic | Telelogic | Telelogic AB was a software business headquartered in Malmö, Sweden. Telelogic was founded in 1983 as a research and development arm of Televerket, the Swedish department of telecom (now part of TeliaSonera). It was later acquired by IBM Rational, and exists under the IBM software group.
Telelogic had operations in 22 countries and had been publicly traded since 1999. On June 11, 2007, IBM announced that it had made a cash offer to acquire Telelogic. On August 29, 2007, the European Union opened an investigation into the acquisition. On March 5, 2008, European regulators approved the acquisition of Telelogic by the Swedish IBM subsidiary Watchtower AB. On April 28, 2008, IBM completed its purchase of Telelogic.
Former Products
Focal Point - System for management of product and project portfolios.
DOORS - Requirements tracking tool.
System Architect - Enterprise Architecture and Business Architecture modeling tool.
Tau - SDL and UML modeling tool.
Synergy - Task-based version control and configuration management system.
Rhapsody - Systems engineering and executable UML modeling tool.
DocExpress - Technical documentation tool, discontinued after the acquisition and superseded by Publishing Engine.
Publishing Engine - Technical documentation tool
All of these products have been continued under IBM's Rational Software division in the systems engineering and Product lifecycle management (PLM) "solutions" software line.
Acquisitions
Telelogic acquired a number of companies between 1999 and 2007.
References
Unified Modeling Language
Systems Modeling Language
SysML Partners
Software companies of Sweden
Telecommunications companies of Sweden
IBM acquisitions
Defunct software companies
Companies established in 1983
Companies based in Malmö |
627136 | https://en.wikipedia.org/wiki/Null%20modem | Null modem | Null modem is a communication method to directly connect two DTEs (computer, terminal, printer, etc.) using an RS-232 serial cable. The name stems from the historical use of RS-232 cables to connect two teleprinter devices or two modems in order to communicate with one another; null modem communication refers to using a crossed-over RS-232 cable to connect the teleprinters directly to one another without the modems.
It is also used to serially connect a computer to a printer, since both are DTE, and is known as a Printer Cable.
The RS-232 standard is asymmetric as to the definitions of the two ends of the communications link, assuming that one end is a DTE and the other is a DCE, e.g. a modem. With a null modem connection the transmit and receive lines are crosslinked. Depending on the purpose, sometimes also one or more handshake lines are crosslinked. Several wiring layouts are in use because the null modem connection is not covered by the RS-232 standard.
Origins
Originally, the RS-232 standard was developed and used for teleprinter machines which could communicate with each other over phone lines. Each teleprinter would be physically connected to its modem via an RS-232 connection and the modems could call each other to establish a remote connection between the teleprinters. If a user wished to connect two teleprinters directly without modems (null modem) then they would crosslink the connections. The term null modem may also refer to the cable or adapter itself as well as the connection method. Null modem cables were a popular method for transferring data between the early personal computers from the 1980s to the early 1990s.
Cables and adapters
A null modem cable is a RS-232 serial cable where the transmit and receive lines are crosslinked. In some cables there are also handshake lines crosslinked. In many situations a straight-through serial cable is used, together with a null modem adapter. The adapter contains the necessary crosslinks between the signals.
Wiring diagrams
Below is a very common wiring diagram for a null modem cable to interconnect two DTEs (e.g. two PCs) providing full handshaking, which works with software relying on proper assertion of the Data Carrier Detect (DCD) signal. With DE-9 connectors, the data and handshake lines are crossed so that each side's DTR drives the other side's DSR and DCD:

 Signal (DTE 1)   DE-9 pin           DE-9 pin   Signal (DTE 2)
 TxD                 3      ----->      2       RxD
 RxD                 2      <-----      3       TxD
 RTS                 7      ----->      8       CTS
 CTS                 8      <-----      7       RTS
 DTR                 4      ----->   6 and 1    DSR and DCD
 DSR and DCD      6 and 1   <-----      4       DTR
 Signal ground       5      -------     5       Signal ground
Applications
The original application of a null modem was to connect two teleprinter terminals directly without using modems. As the RS-232 standard was adopted by other types of equipment, designers needed to decide whether their devices would have DTE-like or DCE-like interfaces. When an application required that two DTEs (or two DCEs) needed to communicate with each other, then a null modem was necessary.
Null modems were commonly used for file transfer between computers, or remote operation. Under the Microsoft Windows operating system, the direct cable connection can be used over a null modem connection. The later versions of MS-DOS were shipped with the InterLnk program. Both pieces of software allow the mapping of a hard disk on one computer as a network drive on the other computer. No Ethernet hardware (such as a network interface card or a modem) is required for this. On the Commodore Amiga system, a null modem connection was a common way of playing multiplayer games between two machines.
The popularity and availability of faster information exchange systems such as Ethernet made the use of null modem cables less common. In modern systems, such a cable can still be useful for kernel mode development, since it allows the user to remotely debug a kernel with a minimum of device drivers and code (a serial driver mainly consists of two FIFO buffers and an interrupt service routine). KGDB for Linux, ddb for BSD, and WinDbg or KD for Windows can be used to remotely debug systems, for example. This can also provide a serial console through which the in-kernel debugger can be dropped to in case of kernel panics, in which case the local monitor and keyboard may not be usable anymore (the GUI reserves those resources and dropping to the debugger in the case of a panic won't free them).
Another context where these cables can be useful is when administering "headless" devices providing a serial administration console (i.e. managed switches, rackmount server units, and various embedded systems). Examples of embedded systems that widely use null modems for remote monitoring include RTUs, device controllers, and smart sensing devices. These devices tend to reside in close proximity and lend themselves to short-run serial communication through protocols such as DNP3, Modbus, and other IEC variants. The electric, oil, gas, and water utilities are slow to adopt newer networking technologies, which may be due to large investments in capital equipment with useful service lives measured in decades. Serial ports and null modem cables are still widely used in these industries, with Ethernet only slowly becoming a widely available option.
Types of null modem
Connecting two DTE devices together requires a null modem that acts as a DCE between the devices by swapping the corresponding signals (TD-RD, DTR-DSR, and RTS-CTS). This can be done with a separate device and two cables, or using a cable wired to do this. If devices require Carrier Detect, it can be simulated by connecting DSR and DCD internally in the connector, thus obtaining CD from the remote DTR signal. One feature of the Yost standard is that a null modem cable is a "rollover cable" that just reverses pins 1 through 8 on one end to 8 through 1 on the other end.
No hardware handshaking
The simplest type of serial cable has no hardware handshaking. This cable has only the data and signal ground wires connected. All of the other pins have no connection. With this type of cable flow control has to be implemented in the software. The use of this cable is restricted to data-traffic only on its cross-connected Rx and Tx lines. This cable can also be used in devices that do not need or make use of modem control signals.
Loopback handshaking
Because of the compatibility issues and potential problems with a simple null modem cable, a solution was developed to trick the software into thinking there was handshaking available. However, the cable pinout merely loops the handshake lines back and does not physically support hardware flow control.
This cable could be used with more software, but it offered no actual enhancement over its predecessor: the software would run thinking it had hardware flow control, but could suddenly stop when higher speeds were reached, with no identifiable reason.
Partial handshaking
In this cable the flow control lines are still looped back to the device. However, they are done so in a way that still permits Request To Send (RTS) and Clear To Send (CTS) flow control but has no actual functionality. The only way the flow control signal would reach the other device is if the opposite device checked for a Carrier Detect (CD) signal (at pin 1 on a DE-9 cable and pin 8 on a DB-25 cable). As a result, only specially designed software could make use of this partial handshaking. Software flow control still worked with this cable.
Full handshaking
This cable is incompatible with the previous types of cables' hardware flow control, due to a crossing of its RTS/CTS pins. With suitable software, the cable is capable of much higher speeds than its predecessors. It also supports software flow control.
Virtual null modem
A virtual null modem is a communication method to connect two computer applications directly using a virtual serial port. Unlike a null modem cable, a virtual null modem is a software solution which emulates a hardware null modem within the computer. All features of a hardware null modem are available in a virtual null modem as well. There are some advantages to this:
Higher transmission speed of serial data, limited only by computer performance and network speed
Virtual connections over local network or Internet, mitigating cable length restrictions
Virtually unlimited number of virtual connections
No need for a serial cable
The computer's physical serial ports remain free
For instance, DOSBox has allowed older DOS games to use virtual null modems.
Another common example consists of Unix pseudoterminals (pty) which present a standard tty interface to user applications, including virtual serial controls. Two such ptys may easily be linked together by an application to form a virtual null modem communication path.
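For illustration, a minimal Python sketch of this idea follows (assuming a Unix-like system; the buffer size and the bare relay loop are choices of this example, and real virtual null modem tools additionally emulate modem control lines such as RTS/CTS and DTR/DSR, which this sketch omits):

# Illustrative only: a crude virtual null modem built from two pseudoterminals.
import os
import pty
import select
import tty

master_a, slave_a = pty.openpty()
master_b, slave_b = pty.openpty()

# Raw mode (no echo or line buffering) so the link behaves like a serial cable.
tty.setraw(slave_a)
tty.setraw(slave_b)

# Applications open these device paths as if they were serial ports.
print("Endpoint A:", os.ttyname(slave_a))
print("Endpoint B:", os.ttyname(slave_b))

# Relay bytes between the two sides, crossing the "lines" the way a
# null modem cable crosses transmit and receive.
while True:
    readable, _, _ = select.select([master_a, master_b], [], [])
    for fd in readable:
        data = os.read(fd, 1024)
        if data:
            os.write(master_b if fd == master_a else master_a, data)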
See also
Crossover cable
Debugging
Direct cable connection
LapLink cable
Rollover cable
Serial Line Internet Protocol
References
External links
Modems
Multiplayer null modem games
Out-of-band management
fr:Modem#Modem nul |
4605905 | https://en.wikipedia.org/wiki/List%20of%20University%20of%20Michigan%20faculty%20and%20staff | List of University of Michigan faculty and staff | The University of Michigan has 6,200 faculty members and roughly 38,000 employees which include National Academy members, and Nobel and Pulitzer Prize winners. Several past presidents have gone on to become presidents of Ivy League universities.
Notable faculty: Nobel Laureates
Joseph Brodsky, Nobel Prize, Literature 1987
Donald A. Glaser, professor of physics, developed in 1954 the world's first liquid bubble chamber to study high-energy subatomic particles and won the Nobel Prize in physics for his invention in 1960
Charles B. Huggins, Nobel Prize in Physiology or Medicine, 1966
Lawrence R. Klein, '30 alumnus; a member of the economics department and the Institute for Social Research. Won the 1980 Nobel Prize in economics for his econometric models forecasting short-term economic trends and policies.
Gérard Mourou, co-winner of Nobel Prize, Physics, 2018
Wolfgang Pauli, winner of Nobel Prize, Physics, 1945
Martin L. Perl, Physics Nobel Prize 1995
Norman F. Ramsey, Physics Nobel Prize 1989
Peyton Rous, Nobel Prize in Physiology or Medicine, 1966
Hamilton O. Smith Nobel Prize, for Physiology or Medicine, 1978
Charles H. Townes, Nobel Prize for Physics, 1964
Martinus Veltman, professor emeritus, John D. MacArthur Professor of Physics. 1999 Nobel Prize for Physics.
Carl Wieman, one of three scientists who shared the 2001 Nobel Prize in Physics, joined the U-M faculty immediately following his Ph.D. from Stanford in 1977 and was an assistant professor in the Department of Physics from 1979 to 1984. Now at Colorado.
Notable faculty: past and present
Madeleine K. Albright, visiting scholar. Albright served as United States Secretary of State from 1997 to 2001 and at the time was the highest-ranking woman in the history of the U.S. government. From 1993 to 1997, Albright was the United States' Permanent Representative to the United Nations and a member of President Clinton's Cabinet and National Security Council.
W. H. Auden, poet
Charles Baxter, former director of the MFA program in creative writing; novelist, poet, and essayist; author of 2000 National Book Award finalist The Feast of Love.
Ruth Behar (born Havana, Cuba, 1956) is a Jewish Cuban American anthropologist, poet, and writer who teaches at the University of Michigan. MacArthur Foundation award winner.
Seymour Blinder, professor emeritus of chemistry and physics
R. Stephen Berry (born 1931 in Denver, Colorado) is a U.S. professor of physical chemistry. MacArthur Foundation award winner. He is the James Franck Distinguished Service Professor Emeritus at The University of Chicago and special advisor to the Director for National Security, at Argonne National Laboratory. He joined the Chicago faculty in 1964, having been an assistant professor at Yale University and, between 1957 and 1960, an instructor at the University of Michigan.
William Bolcom, composer. In 2006 he was awarded four Grammy Awards for his composition "Songs of Innocence and Experience": Best Classical Album, Best Choral Performance, Best Classical Contemporary Composition and Producer of the Year, Classical.
Kenneth Boulding, noted economist and faculty member 1949–1967
Richard Brauer Accepted a position at University of Michigan in Ann Arbor in 1948. In 1949 Brauer was awarded the Cole Prize from the American Mathematical Society for his paper "On Artin's L-series with general group characters".
Henry Billings Brown, instructor in law, later US Supreme Court justice
Mark Burns, Carlos Mastrangelo, and David Burke invented a DNA analysis "lab on a microchip."
Evan H. Caminker: Dean of Law School
Anne Carson (born Toronto, Ontario June 21, 1950) is a Canadian poet, essayist, and translator, as well as a professor of classics and comparative literature at the University of Michigan. MacArthur Foundation award winner.
Carl Cohen, notable for using Michigan Freedom of Information Act (FOIA) in 1996 to identify U-M's policy of racial categorization in admissions, leading to the Grutter and Gratz v. Bollinger lawsuits. Professor of Philosophy specializing in ethics for 50 years as of 2006, civil rights activist, proponent and founder of Michigan Civil Rights Initiative, and author of books on affirmative action and animal rights issues.
Wilbur Joseph Cohen (June 10, 1913, Milwaukee, Wisconsin – May 17, 1987, Seoul, South Korea) was an American social scientist and federal civil servant. He was one of the key architects in the creation and expansion of the American welfare state and was involved in the creation of both the New Deal and Great Society programs.
Juan Cole, notable for his weblog "Informed Comment", covering events in the Middle East
Thomas M. Cooley, law professor, celebrated 19th century legal scholar, and Chief Justice of the Supreme Court of Michigan
Christopher Chetsanga, (full professor 1979), discovered two enzymes that repair DNA after x-irradiation. Pro Vice Chancellor 1991–1992 and acting vice chancellor 1992–1993 University of Zimbabwe.
Arthur Copeland, mathematician
Brian Coppola, professor of chemistry, who was recognized as a 2009 U.S. Professor of the Year by the Carnegie Foundation for the Advancement of Teaching and the Council for Advancement and Support of Education, and as the 2012 recipient of the Robert Foster Cherry Award for Great Teaching, administered by Baylor University.
Pierre Dansereau, Canadian ecologist known as one of the "fathers of ecology".
Michael Daugherty (born April 28, 1954) is an American composer, pianist, and teacher. Michael Daugherty went home with three awards from the 2011 Grammys. His “Metropolis Symphony,” inspired by the Superman comics, won for best classical contemporary composition, best orchestral performance (along with the composer's “Deus ex Machina,” performed by the Nashville Symphony) and best engineering.
Michael Duff gained his PhD in theoretical physics in 1972 at Imperial College, London, under Nobel Laureate Abdus Salam. In September 1999 he moved to the University of Michigan, where he is Oskar Klein Professor of Physics. In 2001, he was elected first director of the Michigan Center for Theoretical Physics and was re-elected in 2004. He has since become the principal of the Faculty of Physical Sciences at Imperial College London in Spring 2005.
Francis Collins led the Human Genome Project and is the current director of the National Institutes of Health.
John Dewey, co-founder of pragmatism. During his time at Michigan, Dewey twice won the all-campus euchre tournament.
Igor Dolgachev, mathematician
Sidney Fine, historian and longest-serving faculty member. Chief biographer of Frank Murphy.
William Frankena, moral philosopher; Department of Philosophy 1937–78, chair 1947–61; "renowned for his learning in the history of ethics"; "played an especially critical role in defense of fundamental academic freedoms during the McCarthy era."
Erich Fromm, psychologist
Robert Frost Michigan Poet-in-Residence.
Alice Fulton, United States poet, author, and feminist. She received her undergraduate degree in creative writing in 1976 from Empire State College and her Master of Fine Arts degree from Cornell University in 1982. In 1991, she was awarded a MacArthur Foundation fellowship for her poetry. She taught creative writing at University of Michigan from 1983 to 2001.
William Gehring, professor of psychology
Susan Gelman, psychologist
Herman Heine Goldstine, a mathematician, a winner of the National Medal of Science, worked on the ENIAC, as the Electronic Numerical Integrator and Computer was code named. Taught at the University of Michigan but left when war broke out to become a ballistics officer in the Army.
Samuel Goudsmit, also known as Samuel Abraham Goudsmit, was a professor at the University of Michigan between 1927 and 1946. With George Uhlenbeck, he conceived the idea of quantum spin. During WWII he performed research at the MIT Radiation Laboratory, but most importantly served as the chief of the ALSOS group for the Manhattan Project, charged with assessing the German ability to build an atomic bomb.
Edward Gramlich, professor of economics and member, Federal Reserve Board
Linda Gregerson is the Frederick G.L. Huetwell Professor at University of Michigan. Among her collections of poetry are Waterborne" (2002), The Woman Who Died in Her Sleep (1996) and Fire in the Conservatory (1982). She has won many awards and fellowships, among them Guggenheim, Mellon and National Endowment for the Arts fellowships, the Kingsley Tufts Poetry Award and the Isabel MacCaffrey Award.
Robert L. Griess is a mathematician working on finite simple groups. He constructed the monster group using the Griess algebra.
Kristin Ann Hass
William Donald "Bill" Hamilton, F.R.S. (August 1, 1936 – March 7, 2000) was a British evolutionary biologist, considered one of the greatest evolutionary theorists of the 20th century. Worked with Robert Axelrod on the Prisoner's Dilemma.
Donald Hall, English professor and United States Poet Laureate 2006–2007
Thomas Hales solved a nearly four-century-old problem called the Kepler conjecture. Hales is now at the University of Pittsburgh.
Paul Halmos, mathematician specializing in functional analysis.
Eric J. Hill, professor of practice in architecture.
Melvin Hochster, commutative algebraist. Among his many honors, received the Frank Nelson Cole Prize in Algebra in 1980; received a Guggenheim Fellowship in 1982. In 1992, he was elected to both the American Academy of Sciences and the National Academy of Sciences.
Andrew Hoffman, an expert in environmental pollution and sustainable enterprise. Professor Hoffman is co-director of the MBA'MS Corporate Environmental Management Program.
Daniel Hunt Janzen (born 1939 in Milwaukee, Wisconsin, US) is an evolutionary ecologist, naturalist, and conservationist. Before joining the faculty at the University of Pennsylvania he taught at the University of Kansas (1965–1968), the University of Chicago (1969–72) and at the University of Michigan. MacArthur Foundation award winner.
William Le Baron Jenney (1832–1907) was an American architect and engineer who built the first skyscraper in 1884 and became known as the Father of the American skyscraper.
Gerome Kamrowski, worked in New York in the 1930s and early 1940s with such artists as William Baziotes, Robert Motherwell and Jackson Pollock, and was at the forefront of the development of American Surrealism and Abstract Expressionism. His work from this period is in the collections of The Metropolitan Museum of Art, MOMA, The Guggenheim Museum, the Whitney Museum of American Art and other major museums worldwide. Faculty, University of Michigan School of Art 1948–82 (Emeritus)
Gordon Kane, Victor Weisskopf Collegiate Professor of Physics
H. David Humes, inventor of the human nephron filter ("HNF"), or the artificial kidney.
Peter J. Khan, associate professor of electrical and computer engineering and head of the Microwave Solid-State Circuits Group of the Cooley Electronics Laboratory. Now a member of the Universal House of Justice, the nine-person international elected body which coordinates the activities of the Baha'i Faith throughout the world.
Chihiro Kikuchi, professor of nuclear engineering, developed in 1957 the ruby maser, a device for amplifying electrical impulses by stimulated emission of radiation
Oskar Klein assumed a post at the University of Michigan, a post he won through the generosity and intervention of his friend Niels Bohr. His first work in Ann Arbor dealt with the anomalous Zeeman effect.
Adrienne Koch, historian, specialist in American history of the 18th century
Yoram Koren – James J. Duderstadt University Professor of Manufacturing and Paul G. Goebel Professor of Mechanical Engineering in the College of Engineering, inventor of the Reconfigurable Manufacturing System and director of the NSF Engineering Research Center for Reconfigurable Manufacturing Systems
Kenneth Lieberthal, China expert and member of the National Security Council during the Clinton Administration.
Emmett Leith and Juris Upatnieks (COE: MSE EE 1965) created the first working hologram in 1962
Catharine MacKinnon, feminist legal theorist.
Jason Mars, conversational AI researcher, founder of ClincAI, author.
Paul McCracken, economist; chairman emeritus of the President's Council of Economic Advisers
George E. Mendenhall, professor emeritus: Department of Near Eastern Studies and author.
Gerald Meyers, professor at the University of Michigan Ross School of Business School, former chairman of American Motors Corporation
William Ian Miller, legal and social theorist; author of The Anatomy of Disgust.
Hugh L. Montgomery, number theorist. In 1975, with Robert Charles Vaughan, showed that "most" even numbers are expressible as the sum of two primes.
Thylias Moss developed Limited Fork Poetics, is Professor of English and Art & Design, author of Tokyo Butter (2006), Slave Moth (2004), and is a MacArthur Foundation award winner.
Professor Gérard A. Mourou, director of the National Science Foundation Center for Ultrafast Optical Science. With students D. Strickland, S. Williamson, P. Maine, and M. Pessot, demonstrated the technique known as chirped pulse amplification (CPA).
James V. Neel professor of human genetics, in 1940s discovered that defective genes cause sickle cell anemia
Nicholas Negroponte also known as Nicholas P Negroponte. Founder of MIT's Media Lab.
Reed M. Nesbit, urologist, pioneer of transurethral resection of the prostate
Dirk Obbink, papyrologist, 2001 MacArthur Fellowship winner for his work at both Oxyrhynchus and Herculaneum. Holds appointments at both Oxford University and the University of Michigan
James Olds neuroscientist, co-discovered the Brain's Pleasure Center.
Will Potter, award-winning author, internationally recognized civil liberties advocate, and TED Senior Fellow. He is a Distinguished Lecturer and Senior Academic Innovation Fellow at the University of Michigan
Anatol Rapoport, From 1955 to 1970 Rapoport was Professor of Mathematical Biology and Senior Research Mathematician. He is the author of over 300 articles and of Two-Person Game Theory (1999) and N-Person Game Theory (2001), among many other well-known books on fights, games, violence and peace. His autobiography, Certainties and Doubts: A Philosophy of Life, was released in 2001. A founding member, in 1955, of the Mental Health Research Institute (MHRI) at the University of Michigan.
Arthur Rich, professor of physics, developed in 1988 with research investigator James C. Van House first positron microscope
Gottlieb Eliel Saarinen, Architect
Jonas Salk, assistant professor of epidemiology (deceased)
Vojislav Šešelj, Serbian political scientist and nationalist leader.
Anton Shammas, professor of comparative literature and modern Middle Eastern literature; Poet, playwright, essayist, and translator of Arab-Christian descent; acclaimed author of the novel Arabesques.
Lawrence Sklar, William K. Frankena Collegiate Professor and Professor of Philosophy, Guggenheim fellow 1974.
Elliot Soloway, software teaching tools, founder of GoKnow
Kannan Soundararajan was awarded the 2004 Salem Prize, joint winner of the 2005 SASTRA Ramanujan Prize
Theodore J. St. Antoine, law school dean and labor arbitrator
Stephen Timoshenko created the first US bachelor's and doctoral programs in engineering mechanics. His 18 textbooks have been published in 36 languages.
Amos Tversky (deceased), behavioral economist and frequent co-author with Daniel Kahneman, winner of the 2002 Nobel Prize in Economics
A. Galip Ulsoy – C.D. Mote, Jr. Distinguished University Professor and William Clay Ford Professor of Manufacturing in the College of Engineering, co-inventor of the Reconfigurable Manufacturing System, and deputy director of the NSF Engineering Research Center for Reconfigurable Manufacturing Systems
Douglas E. Van Houweling, president and CEO of Internet2
Raymond Louis Wilder, began teaching at the University of Michigan in 1926, where he remained until his retirement in 1967. Wilder's work focused on set-theoretic topology, manifolds and use of algebraic techniques.
Milford H. Wolpoff, professor of anthropology and adjunct associate research scientist, UM Museum of Anthropology; recognized globally as the leading proponent of the multiregional hypothesis for human evolution.
Trevor D. Wooley Department Chair, Department of Mathematics, University of Michigan. Salem Prize, 1998. Alfred P. Sloan Research Fellow, 1993–1995.
American Association for the Advancement of Science
Fellows of the American Association for the Advancement of Science. Founded in 1848, AAAS is the world's largest general scientific society and publisher of the journal Science. The tradition of AAAS Fellows began in 1874.
Sharon Glotzer, (2013). Ph.D., is an American chemical engineer and physicist and the Stuart W. Churchill Professor at the University of Michigan.
Huda Akil, (2000). Ph.D., Gardner C. Quarton Professor of Neurosciences in psychiatry, professor of psychiatry and co-director and senior research scientist of the U-M Mental Health Research Institute.
Bernard W. Agranoff, (1998). Director of the Neuroscience Lab, the Ralph Waldo Gerard Professor of Neurosciences, professor of biological chemistry and research scientist in the Department of Psychiatry and the Mental Health Research Institute.
Sushil Atreya, Ph.D., (2005). Professor of atmospheric and space sciences. Atreya is honored for contributions to planetary atmosphere structure.
Laurence A. Boxer, (1998). Associate chair for research pediatrics and communicable diseases and professor of pediatrics and communicable diseases.
George J. Brewer, (2000) M.D., professor of genetics and internal medicine.
Charles M. Butter, (1992). Professor of psychology
Valerie Castle, M.D., (2005). Chair and Ravitz Foundation Professor of Pediatrics and Communicable Diseases.
Brian Coppola, Ph.D., (2001). Arthur F. Thurnau Professor of Chemistry.
Dimitri Coucouvanis, Ph.D., (2005). Lawrence S. Bartell Collegiate Professor of Chemistry.
James Coward, (2004) Professor of Medicinal Chemistry and Professor of Chemistry
Jack E. Dixon, (2000). Ph.D., Minor J. Coon Professor of Biological Chemistry, chair of the Department of Biological Chemistry and new co-director of UM's Life Sciences Institute.
Rodney Ewing, (2004). Donald R. Peacor Collegiate Professor of Geological Sciences, professor of materials science and engineering, and professor of nuclear engineering and radiological sciences
William R. Farrand, (1992). Professor of geological sciences and curator, Museum of Anthropology.
Carol A. Fierke, Ph.D., (2006). Jerome and Isabella Karle Collegiate Professor of Chemistry and chair of the Department of Chemistry.
Daniel Fisher, (2004). Claude W. Hibbard Collegiate Professor of Paleontology, professor of geological sciences, professor of ecology and evolutionary biology, and curator of paleontology
Vincent L. Pecoraro, (2000). John T. Groves Collegiate Professor of Chemistry
James Penner-Hahn, (2004). Professor of chemistry
H. David Humes, (1998). Chair of the Department of Internal Medicine and the John G. Searle Professor of Internal Medicine.
James S. Jackson, Ph.D., (2005). Daniel Katz Distinguished University Professor of Psychology and director, Institute for Social Research.
Harold K. Jacobson, (2000). Jesse Siddal Reeves Professor of Political Science, and senior research scientist, Center for Political Studies.
George W. Kling, (1998). Assistant professor of biology and assistant research scientist in the Center for Great Lakes and Aquatic Sciences.
Arthur Lupia, (2004). Professor of political science, research professor at the Institute for Social Research, and principal investigator of the American National Election Studies.
Anne McNeil, (2017) Arthur F. Thurnau Professor of Chemistry and Macromolecular Science and Engineering
Miriam H. Meisler, (2001). Professor of human genetics and neurology, Myron Levine Distinguished University Professor of Human Genetics.
Henry Mosberg, (2004). Professor of medicinal chemistry
Franco Nori, (2007). Elected AAAS Fellow for his contributions to condensed matter physics, nanoscience, quantum optics, and quantum information.
Melanie Sanford, (2016) Moses Gomberg Collegiate Professor of Chemistry and Arthur F. Thurnau Professor of Chemistry
Kamal Sarabandi, (2016) Rufus S. Teesdale Professor of Engineering, director of the Radiation Laboratory, Department of Electrical Engineering and Computer Science.
Artur Schnabel, pianist and classical composer
Martin Sichel, (1998). Professor of aerospace engineering.
Nicholas H. Steneck, (1992). Professor of history and director, Medical Center Historical Center for the Health Sciences.
Sarah Thomason, (2010). William H. Gedney Professor of Linguistics
George Uhlenbeck, also known as George Eugene Uhlenbeck. With fellow student Samuel Goudsmit at Leiden, Uhlenbeck proposed the idea of electron spin in 1925, fulfilling Wolfgang Pauli's stated need for a "fourth quantum number." Served as professor at the University of Michigan (1939–43). Max Planck Medal 1964 (with Samuel Goudsmit).
Stanley J. Watson, (2000). Ph.D., M.D., Raphael Professor of Neurosciences in Psychiatry and co-director and research scientist at MHRI.
Max S. Wicha, (2000). M.D., professor of internal medicine and director of the U-M Comprehensive Cancer Center.
Milford Wolpoff, elected to the rank of Fellow of the American Association for the Advancement of Science.
Youxue Zhang, Ph.D., (2005). Professor of geological sciences. Zhang was selected for making exceptional advances in a wide range of geological frontiers, including the origin and evolution of the Earth, explosive volcanism and gas-driven lake eruptions.
Business Week "Management Gurus"
Gary Hamel, MBA, PhD, co-author of "The Core Competence of the Corporation"
Dave Ulrich, Human Resources – Michigan (Ranked #1)
Noel Tichy, Leadership – Michigan, (Ranked #9)
C.K. Prahalad, Strategy, International Business – Michigan/PRAJA, (Ranked #10)
Institute of Medicine
Bernard W. Agranoff (1991), professor of biological chemistry; professor of psychiatry, Medical School
Huda Akil (1994), Gardner C. Quarton Distinguished Professor of Neurosciences in Psychiatry, Medical School
William Barsan (2003), professor, Department of Emergency Medicine, Medical School
John D. Birkmeyer, M.D., George D. Zuidema Professor of Surgery, division of gastrointestinal surgery, department of surgery, University of Michigan, Ann Arbor
Michael Boehnke, PH.D., Richard G. Cornell Collegiate Professor of Biostatistics, department of biostatistics, School of Public Health, University of Michigan, Ann Arbor
Edward Bove (1985), head, Section of Cardiac Surgery, Medical School
Noreen M Clark (2000), dean, Marshall H. Becker Professor of Public Health, School of Public Health
Mary Sue Coleman (1997), president, professor of biochemistry, Medical School, & chemistry, College of Literature, Science, & the Arts
Francis S. Collins (1991), professor of internal medicine; professor of human genetics, Medical School
Jerome Conn (1970), Louis Harry Newburgh University Distinguished Professor Emeritus of Internal Medicine, Medical School
Minor J. Coon (1987), Victor C. Vaughn Distinguished University Professor of Biological Chemistry, Medical School
Jack Dixon (1993), Minor J. Coon Professor of Biological Chemistry, Medical School
Avedis Donabedian (1971), Sinai Distinguished Professor Emeritus of Public Health, School of Public Health
Rhetaugh Dumas (1984), Dean Emerita, School of Nursing
Stefan Fajans (1985), professor emeritus of internal medicine, Medical School
Christopher R. Friese (2020), Elizabeth Tone Hosmer Professor of Nursing, School of Nursing, & health management and policy, School of Public Health
Sid Gilman (1995), William J. Herdman Professor of Neurology, Medical School
David Ginsburg (1999), professor of internal medicine & human genetics, Medical School
Lazar Greenfield (1995), Frederick A. Coller Distinguished Professor, Surgery, Medical School
Ada Sue Hinshaw (1989), dean, School of Nursing
Julian Hoff (1999), professor of surgery, Medical School
James S. House (1999), professor of sociology, College of Literature, Science, & the Arts
James Jackson (2002), professor of psychology, College of Literature, Science, & the Arts
Robert L. Kahn (2002), professor emeritus of psychology, College of Literature, Science, & the Arts
George Kaplan (2001), professor of epidemiology, School of Public Health
David E. Kuhl (1989), professor of internal medicine; professor of radiology, Medical School
Allen S. Lichter (2001), dean, professor of radiation oncology, Medical School
Roderick Little (2011), professor of biostatistics, School of Public Health
Martha L. Ludwig, Ph.D., research biophysicist and J. Lawrence Oncley Distinguished Professor, department of biological chemistry, University of Michigan, Ann Arbor
Howard Markel (1993), George E. Wantz Distinguished Professor of the History of Medicine and director of the Center for the History of Medicine
Rowena Matthews elected to The Institute of Medicine of the National Academy of Sciences.
Catherine G. McLaughlin, Ph.D., professor, department of health management and policy, and director, Economic Research Institute on the Uninsured, University of Michigan School of Public Health, Ann Arbor
James V. Neel (1972), Lee R. Dice Distinguished University Professor Emeritus of Human Genetics, Medical School
Gilbert S. Omenn (1979), professor of internal medicine & human genetics, Medical School, and of public health, School of Public Health
Nancy Reame (1996), professor of nursing, School of Nursing
June Osborn (1986), professor of epidemiology; professor of pediatrics and communicable diseases, Medical School
Alan R. Saltiel, elected in 2005 to The Institute of Medicine of the National Academy of Sciences. Saltiel is the John Jacob Abel Collegiate Professor in Life Sciences and Professor of Internal Medicine and Physiology. He is the third LSI faculty member to be named to the Institute of Medicine.
Thomas L. Schwenk (2002), professor of family medicine, Medical School
Harold Shapiro (1989), former UM president
Peter Ward (1990), Godfrey D. Stobbe Professor of Pathology, Medical School
Kenneth Warner (1996), Richard D. Remington Collegiate Professor of Public Health; professor of health management & policy, School of Public Health
Stanley J. Watson (1994), Theophile Raphael Collegiate Professor of Neurosciences, Medical School
Stephen J. Weiss (2001), Upjohn Professor of Internal Medicine and Oncology, Medical School
David R. Williams (2001), Harold W. Cruse Collegiate Professor of Sociology, College of Literature, Science, & the Arts, and professor of epidemiology, School of Public Health
George Zuidema (1971), vice provost for medical affairs emeritus, and professor emeritus of surgery, Medical School
MacArthur Foundation award winners
To date, 40 MacArthur winners, 16 of them university alumni, have served as Michigan faculty
Elizabeth S. Anderson (born 5 December 1959) is an American philosopher.
William A. Christian, (Alumnus: 1986), religious studies scholar.
Philip DeVries, (Alumnus: 1988), 1962 alumnus who won as a biologist.
William H. Durham, (Alumnus: 1983), 1973 graduate, anthropologist.
Aaron Dworkin, (Alumnus: 2005) M.A. 1998, Fellow and founder and president of Detroit-based Sphinx Organization, which strives to increase the number of African-Americans and Latinos having careers in classical music.
Steven Goodman, (Alumnus: 2005) A.B.D., Fellow is an adjunct research investigator in the U-M Museum of Zoology's bird division, and a conservation biologist in the Department of Zoology at Chicago's Field Museum of Natural History.
David Green, (Alumnus: 2004), alumnus, executive director, Project Impact.
Ann Ellis Hanson, (Alumna: 1992), visiting associate professor of Greek and Latin.
John Henry Holland,(Alumnus: 1992), professor of electrical engineering and computer science, College of Engineering; professor of psychology, College of Literature, Science, and the Arts.
Vonnie C. McLoyd, (Alumna: M.A. (1973) and Ph.D. (1975)), professor of psychology and research scientist at the Center for Human Growth and Development
Natalia Molina (professor and alumna), who received her Ph.D. and M.A. from the University of Michigan.
Cecilia Muñoz, (Alumna: 2000), vice president of the National Council of La Raza.
Amos Tversky, (Alumnus: 1984), 1965 alumnus, psychologist.
Karen K. Uhlenbeck, (Alumna: 1983), 1964, mathematician.
Henry T. Wright, (Alumnus: 1993) Fellow, and Anthropologist.
George Zweig, (Alumnus: 1981), 1959 alumnus, physicist.
To date, 24 non-alumni MacArthur winners have served as Michigan faculty.
Susan Alcock, (Faculty: 2000), professor of classical anthropology and classics, College of Literature, Science, and the Arts.
Robert Axelrod, (Faculty: 1987) Fellow for public policy. Dr. Axelrod is a game theoretician. Author of "The Evolution of Cooperation".
Ruth Behar, (Faculty: 1988) Fellow, and Anthropologist.
R. Stephen Berry (post-doctoral fellow) is a U.S. professor of physical chemistry.
Joseph Brodsky, (Faculty: 1981), professor of Slavic languages and literature.
Jason De León is an associate professor in the Department of Anthropology who studies violence, materiality and the social process of migration between Latin America and the United States.
Alice Fulton, (Faculty: 1991) Fellow and Professor of English from 1983 to 2001, won the Library of Congress Rebekah Johnson Bobbitt National Prize for Poetry in 2002.
Kun-Liang Guan, (Faculty: 1998) Fellow and biochemist and associate professor of biological chemistry and senior research associate at the Institute of Gerontology.
Thomas C. Holt, (Faculty: 1990) professor of history, director of Center for Afroamerican and African Studies.
Stephen Lee, (Faculty: 1993) Fellow, solid state chemistry.
Michael Marletta, (Faculty: 1995) Fellow, biochemist and John Gideon Searle Professor of Medicinal Chemistry and Pharmacognosy in the College of Pharmacy and professor of biological chemistry in the Medical School.
Khaled Mattawa (born 1964), (Faculty: 2014), Libyan-born poet and renowned Arab-American writer, designated a MacArthur Fellow in 2014
Tiya Miles, (Faculty: 2011) professor of American culture, Afroamerican & African studies, history, and Native American studies
Thylias Moss, (Faculty: 1996), Fellow and Professor of English, also Professor of Art & Design (2006).
Erik Mueggler, (Faculty: 2002), Katherine Verdery Collegiate Professor of Anthropology, College of Literature, Science, and the Arts.
Margaret Murnane (born 1959) is Distinguished Professor of Physics at the University of Colorado at Boulder, having moved there in 1999, with past positions at the University of Michigan (1996-1999) and Washington State University.
Dirk Obbink (faculty) is an American-born papyrologist and Classicist.
Sherry B. Ortner, (Faculty: 1990), professor of anthropology and women's studies
Derek Peterson, a professor in the departments of History and Afroamerican and African Studies, has done scholarly work about the intellectual and cultural history of eastern Africa.
Melanie Sanford, (Faculty: 2011), Moses Gomberg Collegiate professor of chemistry
Rebecca J. Scott, (Faculty: 1990) Fellow and Professor of History; won the 2006 Frederick Douglass Book Prize for Degrees of Freedom: Louisiana and Cuba After Slavery. The $25,000 prize is awarded by the Gilder Lehrman Center for the Study of Slavery, Resistance, and Abolition at Yale University.
Bright Sheng (Faculty: 2001), professor of composition and music theory, School of Music.
Richard Wrangham, (Faculty: 1987) professor of anthropology.
Yukiko Yamashita, (Faculty: 2011) assistant professor of cell & developmental biology
United States National Academy of Engineering
Linda M. Abriola (2003), professor of civil and environmental engineering, College of Engineering
Ellen Arruda (2017), professor and chair of mechanical engineering, College of Engineering
Dennis Assanis (2008), former Jon R. and Beverly S. Holt Professor of Mechanical Engineering and Arthur F. Thurnau Professor, College of Engineering
Peter Banks (1993), dean, College of Engineering
Pallab Bhattacharya (2008), Charles M. Vest Distinguished University Professor and James R. Mellor Professor of Electrical Engineering and Computer Science, College of Engineering
William Brown (1992), adjunct professor of electrical engineering, College of Engineering
Don B. Chaffin (1994), G. Lawton and Louise G. Johnson Professor of Industrial & Operations Engineering, College of Engineering
Lynn Conway (1989), professor of electrical engineering and computer science, College of Engineering
James W. Daily (1975), professor emeritus of fluid mechanics and hydraulic engineering, College of Engineering
Stephen W. Director (1989), Robert J. Vlasic Dean of Engineering, College of Engineering
James J. Duderstadt (1987), president emeritus, professor of nuclear engineering and radiological sciences, College of Engineering
Gerard Faeth (1991), Arthur B. Modine Professor of Aerospace Engineering, College of Engineering
Elmer G. Gilbert (1994), professor of aerospace engineering and of electrical engineering & computer science, College of Engineering
Steven A. Goldstein (2005), Henry Ruppenthal Family Professor of Orthopaedic Surgery and Bioengineering
George Haddad (1994), Robert J. Hiller Professor of electrical engineering and computer science, College of Engineering
Robert D. Hanson (1982), professor of civil engineering, College of Engineering
Bruce G. Johnston (1979), professor emeritus of structural engineering, College of Engineering
Donald Katz (1968), professor emeritus of chemical engineering, College
Glenn Knoll (1999), professor of nuclear engineering and radiological sciences, College of Engineering
Yoram Koren (2004), James J. Duderstadt Distinguished University Professor and Paul G. Goebel Professor of Mechanical Engineering, College of Engineering
Ronald G. Larson (2003), George Granger Brown Professor of Chemical Engineering, College of Engineering
Emmett Leith (1982), Schlumberger Professor of Engineering, College of Engineering
Jyoti Mazumder (2012), Robert H. Lurie Professor of Mechanical Engineering and Professor of Materials Science and Engineering, College of Engineering
Gerard A. Mourou (2002), A.D. Moore Distinguished Professor of Electrical Engineering & Computer Science, College of Engineering
Stephen M. Pollock (2002), Herrick Professor of Industrial & Operations Engineering, College of Engineering
Tresa M. Pollock (2005), the L. H. and F. E. Van Vlack Professor of Materials Science and Engineering
Frank E. Richart, Jr. (1969), Walter Johnson Emmons Professor Emeritus of Civil Engineering, College of Engineering
Albert Schultz (1993), Vennema Professor of Mechanical Engineering & Applied Mathematics, College of Engineering
Chen-To Tai (1987), professor emeritus of electrical engineering & computer science, College of Engineering
Fawwaz Ulaby (1995), R. Jamison and Betty Williams Professor of Electrical Engineering & Computer Science, College of Engineering
Galip Ulsoy (2006), C.D. Mote Jr. Distinguished University Professor of Mechanical Engineering and William Clay Ford Professor of Manufacturing, College of Engineering
Walter Weber (1985), Earnest Boyce Professor of Civil & Environmental Engineering, College of Engineering
Kensall D. Wise (1998), J. Reid & Polly Anderson Professor of Manufacturing Technology, College of Engineering
Richard D. Woods (2003), professor of civil & environmental engineering, College of Engineering
Ralph T. Yang (2005), Dwight T. Benton Professor of Chemical Engineering
Chia-Shun Yih (1980), Stephen P. Timoshenko Distinguished University Professor Emeritus of Fluid Mechanics, College of Engineering
United States National Academy of Sciences
Mathew Alpern (1991), professor emeritus of physiological optics, Medical School
Richard D. Alexander (1974), Theodore H. Hubell Distinguished University Professor Emeritus of Evolutionary Biology, College of Literature, Science & the Arts
Robert Axelrod (1986), Arthur W. Bromage Distinguished University Professor of Political Science & Public Policy, School of Public Policy
Hyman Bass (1982), professor of education, School of Education, & mathematics, College of Literature, Science & the Arts
Philip Bucksbaum 2004
Jerome Conn (1969), Louis Harry Newburgh University Professor Emeritus of Internal Medicine, Medical School
Philip Converse (1973), Robert Cooley Angell Distinguished University Professor Emeritus of Sociology & Political Science, College of Literature, Science & the Arts
Clyde Coombs (1982), professor emeritus of psychology, College of Literature, Science & the Arts
Minor J. Coon (1983), Victor C. Vaughn Distinguished University Professor Emeritus of Biological Chemistry, Medical School
H. Richard Crane (1966), George P. Williams Distinguished University Professor, physicist
Horace W. Davenport (1974), William Beaumont Professor Emeritus of Physiology, Medical School
Thomas M. Donahue (1983), Edward H. White II Distinguished University Professor Emeritus of Planetary Science, College of Engineering
Lennard A. Fisk (2003), Thomas M. Donahue Collegiate Professor of Space Science, College of Engineering
Kent V. Flannery (1978), James B. Griffin Distinguished University Professor of Anthropological Archaeology, College of Literature Science & the Arts
Ronald Freedman (1974), Roderick D. McKenzie Professor Emeritus of Sociology, College of Literature, Science & the Arts, professor emeritus of physics, College of Literature, Science, & the Arts
Katherine Freese (2020), George E. Uhlenbeck Professor Emerita of Physics
William Fulton (1997), M. S. Keeler Professor, mathematics, College of Literature, Science & the Arts
Stanley M. Garn (1976), professor emeritus of nutrition, School of Public Health
Frederick Gehring (1989), T.H. Hildebrandt Distinguished University Professor of Mathematics
Sharon Glotzer, (2014), Stuart W. Churchill Professor of Chemical Engineering. Professor of Materials Science & Engineering, Physics, Applied Physics and Macromolecular Science and Engineering.
Melvin Hochster (1992), Raymond L. Wilder Professor of Mathematics, College of Literature, Science & the Arts
Raymond Kelly 2004
Martha L. Ludwig (2003), professor of biological chemistry, Medical School
Joyce Marcus (1997), professor of anthropology, College of Literature, Science & the Arts
Vincent Massey (1995), professor of biological chemistry, Medical School
Rowena G. Matthews (2002), G. Robert Greenberg Distinguished University Professor, biological chemistry, Medical School
James N. Morgan (1975), professor emeritus of economics, College of Literature, Science & the Arts
James V. Neel (1963), Lee R. Dice Distinguished University Professor Emeritus of Human Genetics, Medical School
Richard Nisbett (2002), Theodore M. Newcomb Distinguished University Professor, psychology, College of Literature, Science, & the Arts
James Olds (1969), professor of psychology
J. Lawrence Oncley (1947), professor emeritus of biological chemistry, Medical School
Kenneth Pike (1985), professor emeritus of linguistics, College of Literature, Science & the Arts
Melanie Sanford (2016) Moses Gomberg Collegiate Professor of Chemistry and Arthur F. Thurnau Professor of Chemistry
Edward Smith (1996), professor of psychology, College of Literature, Science & the Arts
Martinus Veltman (2000), John D. MacArthur Professor of Physics, College of Literature, Science, & the Arts
Warren Wagner (1985), Jr., professor emeritus of botany, School of Natural Resources & the Environment
Henry Wright (1994), professor of anthropology, College of Literature, Science & the Arts; curator, Museum of Anthropology
Robert D. Drennan (1975), professor of anthropology, school of arts and sciences
National Medal of Science
The National Medal of Science is the nation's highest honor for scientific achievement. Five other Michigan researchers won the award between 1974 and 1986. Congress established the award program in 1959. It honors individuals for pioneering scientific research.
Hyman Bass, honored by President Bush in a White House ceremony as a recipient of the National Medal of Science in 2006.
H. Richard Crane (1986), George P. Williams Distinguished University Professor Emeritus of Physics, College of Literature, Science & the Arts
Elizabeth Crosby (1979), professor of anatomy, Medical School
Donald Katz (1982), professor emeritus of chemical engineering, College of Engineering
Emmett Leith (1979), Schlumberger Professor of Engineering, College of Engineering
James Neel (1974), Lee R. Dice Distinguished University Professor Emeritus of Human Genetics, Medical School
Pulitzer Prize-winning faculty
Leslie Bassett (1966), professor of music; music, for Variations for Orchestra.
William Bolcom (1988), professor of music composition; music, for Twelve New Etudes for Piano.
Ross Lee Finney (1937), professor of music; music, for a string quartet.
Robert Frost, a former faculty member, won four Pulitzer Prizes over the years.
Percival Price (1934), carillonneur and professor of campanology; music, for Saint Lawrence Symphony.
Leland Stowe (1930), professor of journalism; correspondence, for his work as a reporter on the foreign staff of the New York Herald Tribune.
David C. Turnley (1990), professor of art and design; photography, for images of the political uprisings in China and Eastern Europe.
Claude H. Van Tyne (1930), professor and chairman of the history department; American History, for The War of Independence.
Heather Ann Thompson (2017), professor of American history; for her book on the Attica Prison uprising of 1971.
University of Michigan-Ann Arbor faculty
Alton L Becker, PhD, professor of linguistics
Judith Becker, PhD, Glenn McGeoch Professor (emeritus) of Music
Lois Wladis Hoffman, PhD, professor emerita, Department of Psychology.
Lawrence W. Jones, PhD, professor emeritus, Department of Physics
Ralph Lydic, PhD, Bert La Du Professor, Department of Anesthesiology and Molecular and Integrative Physiology.
William P. Malm, PhD, professor (emeritus) of music
Leopoldo Pando Zayas, PhD, professor of physics, specializing in string theory
Elizabeth Yakel, PhD, professor and senior associate dean for academic affairs at the iSchool, specializing in digital archives and digital preservation
Weiping Zou, MD, PhD, Charles B de Nancrede Professor of Surgery, Immunology and Biology; director for translational research
Former administrators
Erastus Otis Haven (1820–1881), president (1863–69), later Bishop of the Methodist Episcopal Church
Lee Bollinger, president, now president of Columbia University
Nancy Cantor, provost, now chancellor of Syracuse University
Paul Danos, UM associate dean, now dean at Dartmouth College's Tuck School of Business
Steven Director, UM engineering dean, now provost of Northeastern University
Walter Harrison, vice president, now at University of Hartford
Maureen Hartford, vice president, later president of Meredith College
Harlan Hatcher (1898–1998), president (1951–1967)
C. C. Little, president (1925–1929), noted cancer researcher and tobacco industry scientist.
J. Bernard "Bernie" Machen, provost, later president of the University of Florida
Frank H. T. Rhodes, vice president, later president of Cornell University
Harold Shapiro, president; later president of Princeton University
Edward A. Snyder, senior associate dean, later dean at University of Chicago Business School
Andrew Dickson White, UM professor of literature, co-founder of Cornell University
B. Joseph White, dean, Ross School, later president of the University of Illinois
Linda Wilson, UM vice president, later president of Radcliffe College
References
External links
Faculty and staff at the University of Michigan
UM Faculty and staff resources
UM Faculty and staff services
The Michigan Daily Salary Supplement lists the salaries of UM faculty and staff
University of Michigan faculty and staff |
31611383 | https://en.wikipedia.org/wiki/FreeBSD%20version%20history | FreeBSD version history |
FreeBSD 1
FreeBSD 1.0 was released in November 1993; version 1.1.5.1 followed in July 1994.
FreeBSD 2
2.0-RELEASE was announced on 22 November 1994. The final release of FreeBSD 2, 2.2.8-RELEASE, was announced on 29 November 1998. FreeBSD 2.0 was the first version of FreeBSD to be claimed legally free of AT&T Unix code with approval of Novell. It was the first version to be widely used at the beginnings of the spread of Internet servers.
2.2.9-RELEASE was released April 1, 2006 as a fully functional April Fools' Day prank.
FreeBSD 3
FreeBSD 3.0-RELEASE was announced on 16 October 1998. The final release, 3.5-RELEASE, was announced on 24 June 2000. FreeBSD 3.0 was the first branch able to support symmetric multiprocessing (SMP) systems, using a Giant lock and marked the transition from a.out to ELF executables. USB support was first introduced with FreeBSD 3.1, and the first Gigabit network cards were supported in 3.2-RELEASE.
FreeBSD 4
4.0-RELEASE appeared in March 2000, and the last 4-STABLE branch release was 4.11 in January 2005, supported until 31 January 2007. FreeBSD 4 was lauded for its stability, was a favorite operating system for ISPs and web hosting providers during the first dot-com bubble, and is widely regarded as one of the most stable and high-performance operating systems of the whole Unix lineage. Among the new features of FreeBSD 4 were kqueue(2), an event notification interface that has since been adopted by the other major BSD systems, and jails, a way of running processes in separate environments.
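The kqueue(2) interface is a small C API: one call creates a kernel event queue, and a second call both registers interest in events and collects them. A minimal sketch of the usual pattern follows; the choice of standard input as the watched descriptor and the abbreviated error handling are illustrative assumptions, not details of the FreeBSD 4 release itself.

```c
/* Minimal kqueue(2) sketch: register interest in one file descriptor,
 * then block until the kernel reports an event for it.
 * Assumptions: a BSD-derived system, standard input as the watched fd,
 * and abbreviated error handling. */
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int kq = kqueue();                      /* create the kernel event queue */
    if (kq == -1) {
        perror("kqueue");
        return 1;
    }

    struct kevent change;
    /* Ask to be told when standard input becomes readable. */
    EV_SET(&change, STDIN_FILENO, EVFILT_READ, EV_ADD | EV_ENABLE, 0, 0, NULL);

    struct kevent event;
    /* Submit the change list and wait (no timeout) for one event. */
    int n = kevent(kq, &change, 1, &event, 1, NULL);
    if (n > 0)
        printf("fd %d is readable, %ld bytes pending\n",
               (int)event.ident, (long)event.data);

    close(kq);
    return 0;
}
```

The same pattern extends to sockets, timers, and process events by changing the filter passed to EV_SET, which helps explain why the interface was later adopted by the other BSDs and by macOS.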
Version 4.8 was forked by Matt Dillon to create DragonFly BSD.
FreeBSD 5
After almost three years of development, the first 5.0-RELEASE in January 2003 was widely anticipated, featuring support for advanced multiprocessor and application threading, and for the UltraSPARC and IA-64 platforms. The first 5-STABLE release was 5.3 (5.0 through 5.2.1 were cut from -CURRENT). The last release from the 5-STABLE branch was 5.5 in May 2006.
The largest architectural development in FreeBSD 5 was a major change in the low-level kernel locking mechanisms to enable better symmetric multi-processor (SMP) support. This released much of the kernel from the MP lock, which is sometimes called the Giant lock. More than one process could now execute in kernel mode at the same time. Other major changes included an M:N native threading implementation called Kernel Scheduled Entities (KSE). In principle this is similar to Scheduler Activations. Starting with FreeBSD 5.3, KSE was the default threading implementation until it was replaced with a 1:1 implementation in FreeBSD 7.0.
FreeBSD 5 also significantly changed the block I/O layer by implementing the GEOM modular disk I/O request transformation framework, contributed by Poul-Henning Kamp. GEOM enables the simple creation of many kinds of functionality, such as mirroring (gmirror) and encryption (GBDE and GELI). This work was supported through sponsorship by DARPA.
While the early 5.x versions were not much more than developer previews, with pronounced instability, the 5.4 and 5.5 releases confirmed that the technologies introduced in the FreeBSD 5.x branch had a future in highly stable and high-performing releases.
FreeBSD 6
FreeBSD 6.0 was released on 4 November 2005. The final FreeBSD 6 release was 6.4, on 11 November 2008. These versions extended work on SMP and threading optimization along with more work on advanced 802.11 functionality, TrustedBSD security event auditing, significant network stack performance enhancements, a fully preemptive kernel and support for hardware performance counters (HWPMC). The main accomplishments of these releases include removal of the Giant lock from VFS, implementation of a better-performing optional libthr library with 1:1 threading and the addition of a Basic Security Module (BSM) audit implementation called OpenBSM, which was created by the TrustedBSD Project (based on the BSM implementation found in Apple's open source Darwin) and released under a BSD-style license.
FreeBSD 7
FreeBSD 7.0 was released on 27 February 2008. The final FreeBSD 7 release was 7.4, on 24 February 2011. New features included SCTP, UFS journaling, an experimental port of Sun's ZFS file system, GCC4, improved support for the ARM architecture, jemalloc (a memory allocator optimized for parallel computation, which was ported to Firefox 3), and major updates and optimizations relating to network, audio, and SMP performance. Benchmarks showed significant performance improvements compared to previous FreeBSD releases as well as Linux. The new ULE scheduler was much improved but a decision was made to ship the 7.0 release with the older 4BSD scheduler, leaving ULE as a kernel compile-time tunable. In FreeBSD 7.1 ULE was the default for the i386 and AMD64 architectures.
DTrace support was integrated in version 7.1, and FreeBSD 7.2 brought support for multi-IPv4/IPv6 jails.
Code supporting the DEC Alpha architecture (supported since FreeBSD 4.0) was removed in FreeBSD 7.0.
FreeBSD 8
FreeBSD 8.0 was officially released on 25 November 2009. FreeBSD 8 was branched from the trunk in August 2009. It features superpages, Xen DomU support, network stack virtualization, stack-smashing protection, TTY layer rewrite, much updated and improved ZFS support, a new USB stack with USB 3.0 and xHCI support added in FreeBSD 8.2, multicast updates including IGMPv3, a rewritten NFS client/server introducing NFSv4, and AES acceleration on supported Intel CPUs (added in FreeBSD 8.2). Inclusion of improved device mmap() extensions enables implementation of a 64-bit Nvidia display driver for the x86-64 platform. A pluggable congestion control framework, and support for the ability to use DTrace for applications running under Linux emulation were added in FreeBSD 8.3. FreeBSD 8.4, released on 7 June 2013, was the final release from the FreeBSD 8 series.
FreeBSD 9
FreeBSD 9.0 was released on 12 January 2012. Key features of the release include a new installer (bsdinstall), UFS journaling, ZFS version 28, userland DTrace, NFSv4-compatible NFS server and client, USB 3.0 support, support for running on the PlayStation 3, Capsicum sandboxing, and LLVM 3.0 in the base system. The kernel and base system could be built with Clang, but FreeBSD 9.0 still used GCC4.2 by default. The PlayStation 4 video game console uses a derived version of FreeBSD 9.0, which Sony Computer Entertainment dubbed "Orbis OS". FreeBSD 9.1 was released on 31 December 2012. FreeBSD 9.2 was released on 30 September 2013. FreeBSD 9.3 was released on 16 July 2014.
FreeBSD 10
On 20 January 2014, the FreeBSD Release Engineering Team announced the availability of FreeBSD 10.0-RELEASE. Key features include the deprecation of GCC in favor of Clang, a new iSCSI implementation, VirtIO drivers for out-of-the-box KVM support, and a FUSE implementation.
FreeBSD 10.1 Long Term Support Release
FreeBSD 10.1-RELEASE was announced 14 November 2014, and was supported for an extended term until 31 December 2016. The subsequent 10.2-RELEASE reached EoL on the same day.
In October 2017 the 10.4-RELEASE (final release of this branch) was announced, and support for the 10 series was terminated in October 2018.
FreeBSD 11
On 10 October 2016, the FreeBSD Release Engineering Team announced the availability of FreeBSD 11.0-RELEASE.
FreeBSD 12
FreeBSD 12.0-RELEASE was announced in December 2018.
Version history
The following table presents a version release history for the FreeBSD operating system.
Timeline
The timeline shows that a single release generation of FreeBSD spans around five years. Since the FreeBSD project makes an effort to maintain binary backward (and limited forward) compatibility within the same release generation, this gives users five or more years of support, with straightforward upgrading within the release generation.
References
FreeBSD
History of free and open-source software
Lists of operating systems
Software version histories |
12460694 | https://en.wikipedia.org/wiki/Mapilab | Mapilab | The MAPILab company is a developer of software for message exchange and team collaboration. MAPILab has produced software for Microsoft Outlook, Microsoft Exchange Server, Microsoft Office SharePoint Server, and Microsoft Excel. The company was founded in 2003, and its office is located in the Russian city of Kaliningrad. The company has 30 employees, all of whom hold higher-education degrees. As a Microsoft Gold Certified Partner, MAPILab receives prerelease versions of Microsoft products and updates and has extended access to technical information and support, which helps it maintain the quality of its products. MAPILab software supports the English, German and Russian languages. All products have a trial version and are available for download from the company website. The quality of MAPILab titles has been certified by Microsoft and VeriTest. Some of them have received MSD2D People's Choice and PC Magazine Best Soft awards. MAPILab products are listed on the Microsoft Office Online and Windows Catalog websites.
Product lines
Microsoft Outlook Add-ins - More than 20 add-ins that extend Microsoft Outlook functionality and increase productivity: eliminate duplicates, manage attachments, and add other useful features.
Software for Microsoft Exchange Server - Tools for organizations using Microsoft Exchange Server: server-side e-mail sorting rules, POP3 connectors, and a reporting solution.
Groupware Solutions - Shared folders, folder synchronization and other solutions required to build reliable and low-cost collaboration systems based on Microsoft Outlook.
MAPILab Statistics for SharePoint - a solution for collecting and analysing data about the usage of site collections in the SharePoint product family. The product offers a number of ready-made reports, both for evaluating overall site traffic and for reviewing the smallest details, such as user sessions and hits.
Microsoft Excel Add-ons - 7 tools for work automation and productivity improvement: remove duplicated rows and cells, compare spreadsheets, fix broken links, and many others.
References
External links
MAPILab home page
Native POP3 Connector for Exchange 2000/2003
Print Agent for Exchange
MAPILab Disclaimers for Exchange
Attachment Save for Exchange
MAPILab POP3 Connector for Exchange 2007/2010
Software companies of Russia
Companies based in Kaliningrad
Networking software companies
Companies established in 2003
Russian brands |
17857659 | https://en.wikipedia.org/wiki/Computer%20Control%20Company | Computer Control Company | Computer Control Company, Inc. (1953–1966), informally known as 3C, was a pioneering minicomputer company known for its DDP-series (Digital Data Processor) computers, notably:
DDP-24 24-bit (1963)
DDP-224 24-bit (1965)
DDP-116 16-bit (1965)
DDP-124 24-bit (1966) using monolithic ICs
It was founded in 1953 by Dr. Louis Fein, the physicist who had earlier designed the Raytheon RAYDAC computer.
The company moved to Framingham, Massachusetts in 1959. Prior to the introduction of the DDP-series it developed a series of digital logical modules, initially based on vacuum tubes.
In 1966 it was sold to Honeywell, Inc. As the Computer Controls division of Honeywell, it introduced further DDP-series computers, and was a $100,000,000 business until 1970 when Honeywell purchased GE's computer division and discontinued development of the DDP line.
In a 1970 essay, Murray Bookchin used the DDP-124 as his example of computer progress.
One of the oddest of the DDP series was the DDP-19, of which only three were built, on custom order for the U.S. Weather Service. Its architecture was based on a 19-bit word consisting of six octal digits plus a sign bit, which in arithmetic operations could produce the unusual value of "negative zero". One of these machines was donated by the government to the Milwaukee Area Technical College in 1972; the installation included a drum-based line printer and dual Ampex magnetic tape drives. It was used by a limited number of students as an "extra credit project device" for the next 2-3 years, after which it was (unfortunately) scrapped to make space for newer equipment. The fate of the other two units is unknown.
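The "negative zero" mentioned above is a consequence of sign-magnitude arithmetic: a word whose magnitude bits are all zero decodes to zero whether or not the sign bit is set. The short C sketch below illustrates the effect; the exact bit layout (sign in bit 18, six 3-bit octal digits below it) is an assumption made for illustration rather than a documented detail of the DDP-19.

```c
/* Illustration of sign-magnitude "negative zero" in a 19-bit word.
 * Assumed layout: bit 18 is the sign, bits 0-17 hold six octal digits. */
#include <stdint.h>
#include <stdio.h>

#define SIGN_BIT (1u << 18)           /* assumed sign-bit position */
#define MAG_MASK (SIGN_BIT - 1u)      /* 18 magnitude bits */

static int decode(uint32_t word) {    /* sign-magnitude to ordinary integer */
    int magnitude = (int)(word & MAG_MASK);
    return (word & SIGN_BIT) ? -magnitude : magnitude;
}

int main(void) {
    uint32_t plus_zero  = 0u;         /* all bits clear           */
    uint32_t minus_zero = SIGN_BIT;   /* sign set, magnitude zero */

    /* Different bit patterns, same arithmetic value. */
    printf("+0: word %07o -> %d\n", plus_zero,  decode(plus_zero));
    printf("-0: word %07o -> %d\n", minus_zero, decode(minus_zero));
    return 0;
}
```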
Notes
References
External links
Oral history interview with Louis Fein at Charles Babbage Institute, University of Minnesota, Minneapolis. Fein discusses establishing computer science as an academic discipline at Stanford Research Institute (SRI) as well as contacts with the University of California—Berkeley, the University of North Carolina, Purdue, International Federation for Information Processing and other institutions.
The 3C Legacy Project
Computer Control Company Reunion Website
Minicomputers
Computer companies of the United States
Defunct computer hardware companies
Defunct computer companies based in Massachusetts
Companies based in Framingham, Massachusetts
Computer companies established in 1953
Electronics companies established in 1953
Computer companies disestablished in 1966
1966 mergers and acquisitions
1953 establishments in Massachusetts
1966 disestablishments in Massachusetts |
23391661 | https://en.wikipedia.org/wiki/Washington%20State%20Department%20of%20Information%20Services | Washington State Department of Information Services | The Washington State Department of Information Services (DIS) is a Governor's Cabinet-level agency that provides information technology assistance to state and local agencies, school districts, tribal organizations, and qualifying nonprofit groups in Washington.
The Legislature created DIS in 1987 from the consolidation of the state's four independent data processing and communications systems. The agency employed nearly 450 workers who provided more than 100 technology services before it was merged into the Department of Enterprise Services (DES) in 2011.
Statutory authority
The legislative intent in creating DIS was to make government information and services more available, accessible and affordable. The Legislature also created the Information Services Board (ISB) to provide coordinated planning and management of state information technology services. DIS provides staff support to the ISB, and acts to oversee compliance with ISB policy in agency technology operations and investments. Chapter 43.105 RCW establishes the ISB structure and outlines DIS' statutory authority.
Powers and duties granted to DIS include:
To provide technology services on a cost-recovery basis to state agencies, local governments and public benefit nonprofit entities
To perform work delegated by the ISB, including the review of agency portfolios, the review of agency investment plans and requests, and implementation of statewide and interagency policies, standards and guidelines
To review and make recommendations on agencies' funding requests for technology projects and to monitor the progress of those projects after they receive funding
To review and approve standards and common specifications for new or expanded telecommunications networks proposed by agencies, public post-secondary institutions, educational service districts or statewide or regional providers of K-12 information technology services
To collaborate with the ISB and agencies in the preparation of a statewide strategic technology plan and its related Washington State Digital Government Plan
To prepare, with direction from the ISB, a biennial state performance report on information technology
References
External links
TechMall (DIS technology service catalog)
Access Washington (Official state government website)
Information Services
Government of Washington (state) |
56039813 | https://en.wikipedia.org/wiki/GE%20Digital | GE Digital | GE Digital is a subsidiary of the American multinational conglomerate corporation General Electric. Headquartered in San Ramon, California, the company provides software and industrial internet of things (IIoT) services to industrial companies.
GE Digital's primary focus is to provide industrial software and services in four markets:
Manufacturing applications serving discrete and process industries, as well as water utilities and economy-scale digital transformation projects
Electric and Telecommunications Utilities
Oil & Gas industry and related adjacent markets (petrochemicals, chemicals manufacturing)
Power generation (gas, steam, solar, wind, hydro and related balance of plant operations and service support);
History
1980-2010: Automation Software
1980: GE introduces the first Ethernet-enabled protection relay, a device that detects faults in systems.
1986: GE and Fanuc combine to create GE Fanuc Automation Corporation, which manufactures programmable logic controllers, one of the fundamental building blocks of what has come to be known as the Industrial Internet of Things.
1995: GE Fanuc launches the first HMI/SCADA on a 32-bit system (CIMPLICITY).
1996: Saturn Corp. implements a CIMPLICITY MMI, MES, and SCADA solution.
1999: Saturn's CIMPLICITY implementation is mentioned in Bill Gates' book "Business @ the Speed of Thought: Succeeding in the Digital Economy".
2001: GE Measurement and Control is established. It creates many types of sensors, instruments, and control systems for aerospace, the oil and gas industry, and power generation.
2002: GE Fanuc Automation completes acquisition of Intellution (iFIX products).
2003: GE Fanuc Automation completes acquisition of Mountain Systems (Historian and Plant Apps products).
2007: GE Fanuc Automation Corporation becomes GE Fanuc Intelligent Platforms.
2011 – 2015: Internal industrial software development
2011: GE establishes a software Center of Excellence focused on developing industrial software.
2013: GE develops Predix, its platform for IIoT applications, designed to help GE businesses transform their operations.
2015 – 2017: Launch of Predix
2016: GE launches Predix to the market, making its suite of applications available to industrial customers and partners globally. GE Digital also announced its acquisition of ServiceMax, to extend Predix and analytics across field service processes. This acquisition closed in January 2017.
2016: GE Digital acquires Roanoke, Virginia-based software company Meridium, which specializes in helping industrial customers predict when machinery might fail and also offers analysis that can enhance operational efficiency.
2018: Plans for standalone business
2018: GE announced its intended sale of a majority stake of ServiceMax
2018: GE announces plans to establish a new, $1.2 billion independent company focused on building a comprehensive Industrial Internet of Things (IIoT) software portfolio comprising GE Digital, GE Power Digital and GE Grid Solutions.
2019-2020: Refining Focus
2019: GE announces that the APM teams and customer accounts previously part of BHGE will also become part of the ‘NewCo’ announced in 2018.
On July 1, 2019, Patrick Byrne joined GE as chief executive officer of GE's Digital business reporting to GE CEO and chairman H. Lawrence Culp Jr.
Following the GE October 2019 earnings call, Culp announced GE would retain its digital business.
On June 3, 2020, Byrne's role was expanded to include VP lean transformation for GE.
Projects
Digital Twin Blueprints
In March 2020, GE Digital announced the GE Digital Core Digital Twin Blueprint library had exceeded 300 types of industrial assets, used to manage 8000 customer assets remotely via its Industrial Managed Services center.
New York Power Authority (NYPA)
In October 2017, GE announced a software and professional services agreement with the New York Power Authority (NYPA). NYPA intended to work with GE to explore digitalization across its operations, from its 16 generating facilities and 1,400 miles of electricity transmission network to the more than 1,000 public buildings it monitors throughout the state.
Chery Jaguar Land Rover
Founded in 2012, Chery Jaguar Land Rover Automotive Co., Ltd. (CJLR) is a 50:50 independent joint venture between Chinese auto manufacturer Chery Automobile Co., Ltd. and UK auto manufacturer Jaguar Land Rover. With a factory in Changshu, China, CJLR produces 130,000 high-end luxury vehicles per year. The company uses GE Digital's Proficy MES in its engine manufacturing facility in Changshu, connecting more than 100,000 integration points in real time across 500 machines on the shop floor.
Power System Operation Corporation (POSOCO)
On April 5, 2020, the Prime Minister of India asked citizens to turn off their lights for nine minutes in a show of solidarity in the fight against COVID-19. With meticulous planning by India's Power System Operation Corporation (POSOCO) and national and state agencies, supported by the GE Digital Grid Software team and its advanced energy management system (AEMS), the nation's power grid withstood a 31-gigawatt drop and recovery.
Shanghai Automobile Gear Works
China's Shanghai Automobile Gear Works (SAGW) is a subsidiary of China-owned SAIC Motor Corporation. The company manufactures, markets, and exports automotive transmissions and key components for passenger and commercial vehicles. With 7,000 employees across 5 heat treatment lines, SAGW produces more than 3.8 million units annually. SAGW has transformed its manufacturing processes by using GE Digital's Proficy Plant Applications to create a "Process Digital Twin", improving equipment utilization by 20% and reducing inspection costs by 40%. The availability of real-time data has led to a 30% reduction in inventory and an 80% reduction in required storage space.
Wacker
Wacker Chemical Corporation is a global manufacturer of highly developed specialty chemicals. In Charleston, Tennessee, Wacker produces up to 20,000 tons of polycrystalline silicon each year, a critical component for the solar photovoltaic and wider electronics sectors. By law, critical assets such as the pressure vessels Wacker uses in its manufacturing process must be maintained every two years. Wacker's use of APM from GE Digital extends pressure vessel scheduled maintenance from every two years to a maximum of every 10 years, saving millions of dollars annually.
References
External links
GE Digital Website
Predix Developer Website
General Electric subsidiaries
Companies based in San Ramon, California |
3968527 | https://en.wikipedia.org/wiki/Extended%20boot%20record | Extended boot record | An extended boot record (EBR), or extended partition boot record (EPBR), is a descriptor for a logical partition under the common DOS disk drive partitioning system. In that system, when one (and only one) partition record entry in the master boot record (MBR) is designated an extended partition, then that partition can be subdivided into a number of logical partitions. The actual structure of that extended partition is described by one or more EBRs, which are located inside the extended partition. The first (and sometimes only) EBR will always be located on the very first sector of the extended partition.
Unlike primary partitions, which are all described by a single partition table within the MBR, and thus limited in number, each EBR precedes the logical partition it describes. If another logical partition follows, then the first EBR will contain an entry pointing to the next EBR; thus, multiple EBRs form a linked list. This means the number of logical drives that can be formed within an extended partition is limited only by the amount of available disk space in the given extended partition.
While in Windows versions up to XP logical partitions within the extended partition were aligned following conventions called "drive geometry" or "CHS", since Windows Vista they are aligned to a 1-MiB boundary. Due to this difference in alignment, the Logical Disk Manager of XP (Disk Management) may delete these extended partitions without warning.
EBR structure and values
EBRs have essentially the same structure as the MBR, except that only the first two entries of the partition table are supposed to be used, besides having the mandatory boot record signature (or magic number) of 0xAA55 at the end of the sector. This 2-byte signature appears in a disk editor as 0x55 first and 0xAA last, because IBM-compatible PCs store hexadecimal words in little-endian order.
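For illustration only, the following Python sketch reads these fields from a 512-byte EBR sector image. The byte offsets it uses (partition table at 0x1BE, 16 bytes per entry with the type byte at offset 4 and little-endian start and size fields at offsets 8 and 12, signature at 0x1FE) are the conventional layout shared with the MBR rather than values taken from this article, and the function name is invented for the example.

import struct

SECTOR_SIZE = 512
TABLE_OFFSET = 0x1BE       # conventional start of the four 16-byte partition entries
SIGNATURE_OFFSET = 0x1FE   # conventional position of the 0x55 0xAA signature

def parse_ebr_sector(sector):
    # Validate the mandatory boot record signature at the end of the sector.
    if len(sector) < SECTOR_SIZE or sector[SIGNATURE_OFFSET:SIGNATURE_OFFSET + 2] != b"\x55\xaa":
        raise ValueError("missing 0xAA55 boot record signature")
    entries = []
    for i in range(2):  # an EBR is only supposed to use the first two entries
        off = TABLE_OFFSET + 16 * i
        part_type = sector[off + 4]                            # partition type byte
        start = struct.unpack_from("<I", sector, off + 8)[0]   # relative starting sector
        size = struct.unpack_from("<I", sector, off + 12)[0]   # number of sectors
        entries.append((part_type, start, size))
    return entries

The remaining two entry slots and the code area of the sector are simply ignored here, matching the rule that only the first two entries are meaningful in an EBR.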
Structures
The IBM Boot Manager (included with OS/2 operating systems and some early versions of Partition Magic), adds at least one 9-byte entry (starting at offset ) to each EBR sector. The entry consists of a flag value byte (indicating if the partition is on the IBM Boot Manager menu) followed by an 8-byte ASCII string which is the name to be used on the menu. If the partition is not included on the boot menu (such as data only partitions), the flag byte is zero; in which case, the following 8-byte field may contain an ASCII representation of that partition's starting sector number (in hexadecimal).
The partition type of an extended partition is 0x05 (CHS addressing) or 0x0F (LBA addressing).
DR DOS 6.0 and higher support secured extended partitions using type 0xC5, which are invisible to other operating systems. Since non-LBA-enabled versions of DR-DOS up to and including 7.03 do not recognize the partition type 0x0F, and other operating systems do not recognize the type 0xC5, this can also be utilized to occupy space up to the first 8 GB of the disk for use under DR-DOS (for logical drives in secured or non-secured partitions), and still use 0x0F to allocate the remainder of the disk for LBA-enabled operating systems in a non-conflicting fashion.
Similarly, Linux supports the concept of a second extended partition chain with type 0x85; this type is hidden (unknown) to other operating systems supporting only one chain. Other extended partition types which may hold EBRs include the deliberately hidden, access-restricted, and secured variants of the extended partition types. However, these should be treated as private to the operating systems and tools supporting them and should not be mounted otherwise.
The CHS addresses of a partition are hard to interpret without knowledge of the (virtual) disk geometry, because CHS-to-LBA translations are based on the number of heads and the number of sectors per track. However, the given LBA start address and the given partition size in sectors make it possible to calculate a disk geometry matching the given CHS addresses, where that is at all possible. CHS addressing with 24 bits always uses 6 bits for up to 63 sectors per track (1...63), and INT 13h disk access generally uses 8 bits for up to 256 heads (0...255), leaving 10 bits for up to 1024 cylinders (0...1023). ATA CHS addresses always use 4 bits for up to 16 heads (0...15); this leaves 14 bits for up to 16,383 cylinders (0...16382) in ATA-5 24-bit CHS address translations.
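As a minimal worked sketch of that translation, assuming a known virtual geometry (the heads-per-cylinder and sectors-per-track values are not stored in the EBR itself, so both are caller-supplied assumptions here):

def chs_to_lba(c, h, s, heads_per_cylinder, sectors_per_track):
    # Cylinders and heads are numbered from 0, sectors from 1, hence the "s - 1".
    return (c * heads_per_cylinder + h) * sectors_per_track + (s - 1)

def lba_to_chs(lba, heads_per_cylinder, sectors_per_track):
    cylinder, rest = divmod(lba, heads_per_cylinder * sectors_per_track)
    head, sector = divmod(rest, sectors_per_track)
    return cylinder, head, sector + 1

For example, with the common virtual geometry of 255 heads and 63 sectors per track, LBA 63 maps to CHS 0/1/1, which matches the addresses shown in the snapshot further below.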
Values
The following are general rules that apply only to values found in the 4-byte fields of an EBR's partition table entries (cf. tables above). These values depend upon the partitioning tool(s) used to create or alter them, and in fact, most operating systems that use the extended partitioning scheme (including Microsoft MS-DOS and Windows, and Linux) ignore the "partition size" value in entries which point to another EBR sector. One exception is that the partition size value must be one or greater for Linux operating systems.
The first entry of an EBR partition table points to the logical partition belonging to that EBR:
Starting sector = relative offset between this EBR sector and the first sector of the logical partition
Note: This is often the same value for each EBR on the same hard disk; usually 63 for Windows XP or older.
Number of sectors = total count of sectors for this logical partition
Note: Any unused sectors between EBR and logical drive are not considered part of the logical drive.
The second entry of an EBR partition table will contain zero-bytes if it's the last EBR in the extended partition; otherwise, it points to the next EBR in the EBR chain.
Partition type code = 0x05 (CHS addressing) or 0x0F (LBA addressing).
in other words, the entry pointing to the next EBR must carry an extended partition type of its own, just as the extended partition entry in the MBR does.
Starting sector = relative address of next EBR within extended partition
in other words: Starting sector = LBA address of next EBR minus LBA address of extended partition's first EBR
Number of sectors = total count of sectors for next logical partition, but count starts from the next EBR sector
Note: Unlike the first entry in an EBR's partition table, this number of sectors count includes the next logical partition's EBR sector along with the other sectors in its otherwise unused track. (Compare Diagram 1 and 2 below.)
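Taken together, these rules describe a simple linked-list traversal, as in the following sketch (which reuses the hypothetical parse_ebr_sector helper from the earlier example and assumes a caller-supplied read_sector(lba) returning the 512-byte sector at a given absolute LBA):

def walk_ebr_chain(read_sector, extended_start):
    ebr_lba = extended_start              # the first EBR sits on the first sector of the extended partition
    while True:
        first, second = parse_ebr_sector(read_sector(ebr_lba))
        part_type, rel_start, size = first
        if part_type != 0x00:
            # A logical partition starts relative to its own EBR sector.
            yield part_type, ebr_lba + rel_start, size
        next_type, next_rel, _ = second
        if next_type == 0x00:             # zeroed second entry: end of the chain
            return
        # The next EBR starts relative to the extended partition's first EBR.
        ebr_lba = extended_start + next_rel

Note how the first entry's starting sector is added to the EBR's own position, while the second entry's starting sector is added to the position of the extended partition's first EBR, exactly as stated in the two rules above.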
Remarks:
The diagrams above are not to scale: The thin white lines between each "EBR" and its logical "partition" represent the remainder of an unused area usually 63 sectors in length; including the single EBR sector (shown at a greatly exaggerated size).
On some systems, a large gap of unused space may exist between the end of a logical partition and the next EBR, or between the last logical partition and the end of the whole extended partition itself, if any previously created logical partition has been deleted or resized (shrunk).
The interleaving of EBRs and partitions shown above is typical but not required. It is legitimate to have two or more consecutive EBRs followed by two or more regions of partition data.
Naming
Linux and similar operating systems designate IDE hard disks as /dev/hda for the first hard disk, /dev/hdb for the second hard disk, and so on. Likewise, SCSI hard disks, and in later kernels IDE and SATA hard disks as well, are identified as /dev/sda for the first disk, /dev/sdb for the second, and so on.
The up to four partitions defined in the master boot record are designated as /dev/hda1 ... /dev/hda4 for /dev/hda. The fifth partition in this scheme, e.g., /dev/hda5, corresponds to the first logical drive. The sixth partition /dev/hda6 would then correspond to the second logical drive, or in other words, the extended partition containers are not counted. Only the outermost extended partition defined in the MBR (one of /dev/hda1 ... /dev/hda4) has a name in this scheme.
Examples
This shows an extended partition with 6,000 sectors and 3 logical partitions.
Remark: Neither a tiny extended partition of only 3 MB nor a hard drive with 20 sectors per track is realistic, but these values have been chosen to make this example more readable.
Snapshot
The following output of a command-line tool shows the layout of a disk with two logical drives. Details for the FAT and NTFS partitions are stripped; the line annotated with Linux is /dev/hda6 with an extended file system. The beginning of /dev/hda5 shows that the operating systems involved (PC DOS 7, Windows NT, and Debian) do not insist on any extended partition alignment with a gap:
\\.\PHYSICALDRIVE0 (assuming geometry CHS 99999 255 63) id. [3189-3188]
MBR CHS 0 0 1 at 0, end 0 0 1, size 1
unused CHS 0 0 2 at 1, end 0 0 63, size 62
1:*06: CHS 0 1 1 at 63, end 260 254 63, size 4192902 bigFAT
2: 05: CHS 261 0 1 at 4192965, end 757 254 63, size 7984305 => EXT
3: 17: CHS 758 0 1 at 12177270, end 1522 254 63, size 12289725 NTFS
4: 1C: CHS 1523 0 1 at 24466995, end 1825 254 63, size 4867695 FAT32
(extended offset 4192965) total 29334690
=> EXT CHS 261 0 1 at 0, end 261 0 1, size 1
5: 06: CHS 261 0 2 at 1, end 384 254 63, size 1992059 bigFAT
6: 05: CHS 385 0 1 at 1992060, end 757 254 63, size 5992245 => EXT
(extended offset 6185025) total 7984305
=> EXT CHS 385 0 1 at 0, end 385 0 1, size 1
unused CHS 385 0 2 at 1, end 385 0 63, size 62
6: 83: CHS 385 1 1 at 63, end 757 254 63, size 5992182 Linux
7: 00: CHS 0 0 0 at 0, end 0 0 0, size 0 unused
total 5992245
bigFAT CHS 0 1 1 at 63, end 260 254 63, size 4192902
PC DOS 7 (cluster size 64, number 65506) total 4192902
NTFS CHS 758 0 1 at 12177270, end 1522 254 63, size 12289725
[1C81-013D] (cluster size 8, number 1536215) total 12289725
FAT32 CHS 1523 0 1 at 24466995, end 1825 254 63, size 4867695
[C417-9E22] (cluster size 8, number 607271) total 4867695
bigFAT CHS 261 0 2 at 4192966, end 384 254 63, size 1992059
FAT SWAP (cluster size 32, number 62236) total 1992059
For another example see the "Linux Partition HOWTO".
Footnotes
See also
Master Boot Record (MBR)
Volume Boot Record (VBR)
Disk partitioning
BSD disklabel
Logical Block Addressing (LBA)
Disk editor
Partition alignment
Logical Disk Manager
References
AT Attachment
Booting
Disk partitions
Linux
Windows architecture |
2343529 | https://en.wikipedia.org/wiki/Olivetti%20X/OS | Olivetti X/OS | X/OS was a Unix from the computer manufacturer Olivetti. It was based on 4.2BSD with some UNIX System V support. It ran on their LSX line of computers, which was based on the Motorola 68000-series CPUs.
Unix variants |
2792554 | https://en.wikipedia.org/wiki/Trojan%20Range | Trojan Range | The Trojan Range () is a mountain range rising to , extending northward from Mount Francais along the east side of Iliad Glacier, Anvers Island, in the Palmer Archipelago of the British Antarctic Territory. It was surveyed by the Falkland Islands Dependencies Survey (FIDS) in 1955 and named by the UK Antarctic Place-Names Committee (UK-APC) for the Trojans, one of the opposing sides in the Trojan War in Homer's Iliad.
List of geographical features
Mountains
Mount Français () is a majestic, snow-covered mountain of 2,760 m, which forms the summit of Anvers Island, standing southeast of the center of the island and 6 miles north of Borgen Bay. It was first seen by the Belgian Antarctic Expedition, who explored the southeast coast of the island in 1898 and later sighted by the French Antarctic Expedition, 1903–05, under Jean-Baptiste Charcot, who named it for the expedition ship Francais.
Mount Hector () is a snow-covered mountain, 2,225 metres, between Mount Francais and Mount Priam in the southern part of the Trojan Range. Surveyed by the FIDS in 1955. Named by the UK-APC for Hector, son of Priam and Commander in Chief of the Trojan and allied armies against the Achaeans in Homer's Iliad.
Mount Priam () is the central mass of the Trojan Range, standing 4 miles north of Mount Francais. It is flat-topped and snow-covered and rises to 1,980 m. Surveyed in 1955 by the Falkland Islands Dependencies Survey (FIDS), it was named by the UK Antarctic Place-Names Committee (UK-APC) for Priam, King of Troy in Homer's Iliad. Xanthus Spur is a mainly ice-covered spur extending northwestward from Mount Priam for three miles. It was named for Xanthus, son of Zeus and the god of one of the two chief rivers of the Trojan plain.
Other features
Bull Ridge () is a ridge lying south of Mount Francais, from which it is separated by a distinct col. It was surveyed by FIDS in 1955–57 and named by UK-APC for George J. Bull, diesel mechanic at Signy Island station in 1955 and general assistant and mountaineer at Arthur Harbour in 1956, who took part in the survey.
References
Mountain ranges of the Palmer Archipelago
Geography of Anvers Island |
1162056 | https://en.wikipedia.org/wiki/Sage%20300 | Sage 300 | Sage 300 is the name for the mid-market Sage ERP line of enterprise management and accounting applications (formerly Sage ACCPAC), primarily serving small and medium-sized businesses. Since 2004, Sage 300 has been developed by Sage. In 2012, Sage renamed ACCPAC to Sage 300.
Features
Sage 300 is a Windows-based range of ERP software running on Microsoft SQL Server. It can be deployed on-premises in a Windows environment or hosted by Sage.
Sage 300 is a modular system with the following core suite of modules. The full list of modules developed in the Sage 300 API is also available.
Financials suite
General ledger
Bank services
Tax services
Accounts payable
Accounts receivable
Multi-company
Operations suite
Inventory control
Purchase orders
(Sales) Order Entry
Payroll
US and Canadian payroll
Core options
Multi-currency
Project and job costing
Transaction analysis and optional fields
It is multi-user, multi-currency, and multi-language. It is available in six languages: English, Spanish, French, Italian and Chinese (Simplified and Traditional).
History
The original product, EasyBusiness Systems, was developed for the CP/M operating system in 1976 by the Basic Software Group and distributed by Information Unlimited Software. This was ported to MS-DOS and the IBM-PC in 1983.
Computer Associates acquired Information Unlimited Software in 1983 and ran it as an independent business unit. Easy Business Systems added payroll processing in 1984 and supported multiuser networking at this time. In 1987, it implemented a multi-window interface to allow moving between different modules. Easy Business Systems was renamed Accpac Plus in 1987 with the release of version 5. Accpac became popular in Canada with support of Canadian public accounting firms that would sell and support the software. The name Accpac is an acronym for 'A Complete and Comprehensive Program for Accounting Control'.
The first Windows version, CA-Accpac/2000, popularly known as ACCPAC for Windows, was developed in the early 1990s and released in October 1994. The Windows version marked the move to client/server and was developed with all new code in COBOL with Computer Associates development tools (these components were redeveloped in 2001 in Accpac Advantage Series with a core business layer developed in C and a user interface layer developed in Visual Basic).
In October 1996, ACCPAC for Windows 2.0 was released. In August 2001, the company presented ACCPAC Advantage Series 5.0, its first web-based version. The web interface was rebuilt in Sage 300 2016 for cross-browser support, running on IIS with ASP.NET, and a web API was added in the 2017 release.
Sage 300 initially ran on Btrieve Databases and then supported a variety of database backends. Since Sage 300 2016 only the MS SQL database is supported.
Sage Software acquired Accpac from Computer Associates in 2004. Sage renamed it Sage Accpac ERP in 2006, then Sage ERP Accpac in 2009. Sage dropped the Accpac name in 2012 when it was renamed to Sage 300 ERP.
According to Sage Business Partner Acuity "Sage 300 is no longer available for purchase in the UK. There will be no further updates or upgrades for existing users, nor the option to purchase additional licenses for those already using Sage 300. These changes have already taken effect, as of 31st May 2021. As far as support for Sage 300 users is concerned, direct support from Sage will also be withdrawn from 30th September 2022."
Branding, editions and versions
See also
Comparison of accounting software
Sage Group
References
External links
The Sage 300 ERP Web Site
Accounting software
Software companies of Canada
Financial software companies
300 |