id | url | title | text
---|---|---|---|
1623112 | https://en.wikipedia.org/wiki/Jochen%20Liedtke | Jochen Liedtke | Jochen Liedtke (26 May 1953 – 10 June 2001) was a German computer scientist, noted for his work on microkernel operating systems, especially in creating the L4 microkernel family.
Vita
Education
In the mid-1970s Liedtke studied for a diploma degree in mathematics at the Bielefeld University. His thesis project was to build a compiler for the programming language ELAN, which had been launched for teaching programming in German schools. The compiler was written in ELAN.
Post grad
After his graduation in 1977, he remained at Bielefeld and worked on an ELAN environment for the Zilog Z80 microprocessor. This required a runtime system (environment), which he named Eumel ("Extendable Multiuser Microprocessor ELAN-System", but also a colloquial north-German term for a likeable fool). Eumel grew into a complete multi-tasking, multi-user operating system supporting orthogonal persistence, which started shipping in 1980 and was later ported to Zilog Z8000, Motorola 68000 and Intel 8086 processors. As these processors lacked memory protection, Eumel implemented a virtual machine which added the features missing from the hardware. More than 2000 Eumel systems shipped, mostly to schools, and some to legal practices as a text-processing platform.
In 1984, he joined the Gesellschaft für Mathematik und Datenverarbeitung (GMD), the German National Research Center for Computer Science, which is now a part of the Fraunhofer Society. There, he continued his work on Eumel. In 1987, when microprocessors supporting virtual memory became widely available in the form of the Intel 80386, Liedtke started to design a new operating system to succeed Eumel, which he called L3 ("Liedtke's 3rd system", after Eumel and the ALGOL 60 interpreter he had written in high school). L3 was designed to achieve better performance by using the latest hardware features, and was implemented from scratch. It was mostly backward-compatible with Eumel, thus benefiting from the existing Eumel ecosystem. L3 started to ship in 1989, with total deployment of at least 500.
Both Eumel and L3 were microkernel systems, a popular design in the 1980s. However, by the early 1990s, microkernels had received a bad reputation, as systems built on top of them performed poorly, culminating in the billion-dollar failure of the IBM Workplace OS. The reason was claimed to be inherent in the operating-system structure imposed by microkernels. Liedtke, however, observed that the message-passing operation (IPC), which is fundamentally important for microkernel performance, was slow in all existing microkernels, including his own L3 system. His conclusion was that radical redesign was needed. He did this by re-implementing L3 from scratch, dramatically simplifying the kernel, resulting in an order-of-magnitude decrease in IPC cost.
The resulting kernel was later renamed "L4". Conceptually, the main novelty of L4 was its complete reliance on external pagers (page fault handlers), and the recursive construction of address spaces.
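The recursive construction can be pictured with a short Python sketch (a conceptual toy: the names AddressSpace, sigma0, resolve and demand_pager are invented for illustration and are not L4's actual API). Each space is populated only by mapping pages from another space, bottoming out in a root space that owns physical memory, and a page fault is delegated to a user-level pager rather than handled inside the kernel.

```python
# Conceptual model of recursive address-space construction with
# external pagers (names are invented; this is not the real L4 API).

class AddressSpace:
    def __init__(self, name, pager=None):
        self.name = name
        self.pages = {}     # virtual page -> (parent space or None, page)
        self.pager = pager  # external pager, invoked on a page fault

# Root space sigma0: identity-maps 16 physical frames (parent None).
sigma0 = AddressSpace("sigma0")
sigma0.pages = {n: (None, n) for n in range(16)}

def resolve(space, vpage):
    """Translate a virtual page by walking the mapping chain to a frame."""
    while True:
        if vpage not in space.pages:         # page fault
            if space.pager is None:
                raise MemoryError(f"unhandled fault in {space.name}")
            space.pager(space, vpage)        # kernel forwards fault via IPC
        parent, page = space.pages[vpage]
        if parent is None:                   # reached physical memory
            return page
        space, vpage = parent, page          # follow mapping into parent

def demand_pager(space, vpage):
    # Trivial user-level policy: map the faulting page 1:1 from sigma0.
    space.pages[vpage] = (sigma0, vpage)

user = AddressSpace("user", pager=demand_pager)
print(resolve(user, 5))   # faults once; the pager maps it; prints 5
```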
This led to a complete family of microkernels, with many independent implementations of the same principles.
Liedtke also worked on computer architecture, inventing guarded page tables as a means to implement a sparsely-mapped 64-bit address space. In 1996, Liedtke completed a PhD on guarded page tables at the Technical University of Berlin.
In the same year he joined the Thomas J. Watson Research Center, where he continued to work on L4 (called the Lava Nucleus (LN) for political reasons, as microkernels were unfashionable at IBM after the Workplace OS disaster). The main project during his IBM time was the Saw Mill project, which attempted to turn Linux into an L4-based multi-server OS.
In April 1999, he took up the System Architecture Chair at the University of Karlsruhe. There, he continued to collaborate with IBM on Saw Mill, but at the same time worked on a new generation of L4 (version 4). Several experimental kernels were developed during that time, including Hazelnut, the first L4 kernel that was ported (in contrast to re-implemented) to a different architecture (from x86 to ARM). Work on the new version was completed after his death by Liedtke's students Volkmar Uhlig, Uwe Dannowski, and Espen Skoglund. It was released under the name Pistachio in 2002.
References
External links
In Memoriam Jochen Liedtke (1953 - 2001)
List of Liedtke's publications related to microkernels
1953 births
2001 deaths
German computer scientists
Computer systems researchers
Kernel programmers |
10520679 | https://en.wikipedia.org/wiki/Multithreading%20%28computer%20architecture%29 | Multithreading (computer architecture) | In computer architecture, multithreading is the ability of a central processing unit (CPU) (or a single core in a multi-core processor) to provide multiple threads of execution concurrently, supported by the operating system. This approach differs from multiprocessing. In a multithreaded application, the threads share the resources of a single or multiple cores, which include the computing units, the CPU caches, and the translation lookaside buffer (TLB).
Where multiprocessing systems include multiple complete processing units in one or more cores, multithreading aims to increase utilization of a single core by using thread-level parallelism, as well as instruction-level parallelism. As the two techniques are complementary, they are combined in nearly all modern systems architectures with multiple multithreading CPUs and with CPUs with multiple multithreading cores.
Overview
The multithreading paradigm has become more popular as efforts to further exploit instruction-level parallelism have stalled since the late 1990s. This allowed the concept of throughput computing to re-emerge from the more specialized field of transaction processing. Even though it is very difficult to further speed up a single thread or single program, most computer systems are actually multitasking among multiple threads or programs. Thus, techniques that improve the throughput of all tasks result in overall performance gains.
Two major techniques for throughput computing are multithreading and multiprocessing.
Advantages
If a thread gets a lot of cache misses, the other threads can continue taking advantage of the unused computing resources, which may lead to faster overall execution, as these resources would have been idle if only a single thread were executed. Also, if a thread cannot use all the computing resources of the CPU (because instructions depend on each other's result), running another thread may prevent those resources from becoming idle.
Disadvantages
Multiple threads can interfere with each other when sharing hardware resources such as caches or translation lookaside buffers (TLBs). As a result, execution times of a single thread are not improved and can be degraded, even when only one thread is executing, due to lower frequencies or additional pipeline stages that are necessary to accommodate thread-switching hardware.
Overall efficiency varies; Intel claims up to 30% improvement with its Hyper-Threading Technology, while a synthetic program just performing a loop of non-optimized dependent floating-point operations actually gains a 100% speed improvement when run in parallel. On the other hand, hand-tuned assembly language programs using MMX or AltiVec extensions and performing data prefetches (as a good video encoder might) do not suffer from cache misses or idle computing resources. Such programs therefore do not benefit from hardware multithreading and can indeed see degraded performance due to contention for shared resources.
From the software standpoint, hardware support for multithreading is more visible to software, requiring more changes to both application programs and operating systems than multiprocessing. Hardware techniques used to support multithreading often parallel the software techniques used for computer multitasking. Thread scheduling is also a major problem in multithreading.
Types of multithreading
Interleaved/Temporal multithreading
Coarse-grained multithreading
The simplest type of multithreading occurs when one thread runs until it is blocked by an event that normally would create a long-latency stall. Such a stall might be a cache miss that has to access off-chip memory, which might take hundreds of CPU cycles for the data to return. Instead of waiting for the stall to resolve, a threaded processor would switch execution to another thread that was ready to run. Only when the data for the previous thread had arrived, would the previous thread be placed back on the list of ready-to-run threads.
For example:
Cycle i: instruction j from thread A is issued.
Cycle i + 1: instruction j + 1 from thread A is issued.
Cycle i + 2: instruction j + 2 from thread A is issued, which is a load instruction that misses in all caches.
Cycle i + 3: thread scheduler invoked, switches to thread B.
Cycle i + 4: instruction k from thread B is issued.
Cycle i + 5: instruction k + 1 from thread B is issued.
Conceptually, it is similar to cooperative multi-tasking used in real-time operating systems, in which tasks voluntarily give up execution time when they need to wait upon some type of event. This type of multithreading is known as block, cooperative or coarse-grained multithreading.
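A toy simulation makes the switch-on-event policy concrete. The following Python sketch (invented for illustration, not modeled on any particular processor; MISS_LATENCY is an assumed miss-service time) issues from one thread until it hits a long-latency miss, switches, and requeues the stalled thread once its data returns.

```python
# Toy simulation of coarse-grained (switch-on-miss) multithreading.
# Each thread is a list of instructions; 'MISS' models a load that
# misses in all caches and stalls for MISS_LATENCY cycles.
from collections import deque

MISS_LATENCY = 4   # assumed cycles until a missing load's data returns

def run(threads):
    pcs = {t: 0 for t in threads}            # per-thread program counter
    ready = deque(t for t in threads if threads[t])
    stalled = {}                             # thread -> cycle its data returns
    cycle = 0
    while ready or stalled:
        # Wake threads whose miss has been serviced.
        for t in [t for t, due in stalled.items() if cycle >= due]:
            del stalled[t]
            if pcs[t] < len(threads[t]):
                ready.append(t)
        if ready:
            t = ready[0]                     # keep issuing from one thread
            instr = threads[t][pcs[t]]
            pcs[t] += 1
            print(f"cycle {cycle}: issue {instr!r} from thread {t}")
            if instr == "MISS":              # long-latency stall: switch
                ready.popleft()
                stalled[t] = cycle + MISS_LATENCY
            elif pcs[t] == len(threads[t]):
                ready.popleft()              # thread finished
        else:
            print(f"cycle {cycle}: all threads stalled, pipeline idle")
        cycle += 1

run({"A": ["add", "mul", "MISS", "sub"], "B": ["ld", "st", "add"]})
```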
The goal of multithreading hardware support is to allow quick switching between a blocked thread and another thread ready to run. Switching from one thread to another means the hardware switches from using one register set to another. To achieve this goal, the hardware for the program-visible registers, as well as some processor control registers (such as the program counter), is replicated. For example, to quickly switch between two threads, the processor is built with two sets of registers.
Additional hardware support for multithreading allows thread switching to be done in one CPU cycle, bringing performance improvements. Also, additional hardware allows each thread to behave as if it were executing alone and not sharing any hardware resources with other threads, minimizing the amount of software changes needed within the application and the operating system to support multithreading.
Many families of microcontrollers and embedded processors have multiple register banks to allow quick context switching for interrupts. Such schemes can be considered a type of block multithreading among the user program thread and the interrupt threads.
Interleaved multithreading
The purpose of interleaved multithreading is to remove all data dependency stalls from the execution pipeline. Since one thread is relatively independent from other threads, there is less chance of one instruction in one pipelining stage needing an output from an older instruction in the pipeline. Conceptually, it is similar to preemptive multitasking used in operating systems; an analogy would be that the time slice given to each active thread is one CPU cycle.
For example:
Cycle i + 1: an instruction from thread B is issued.
Cycle i + 2: an instruction from thread C is issued.
This type of multithreading was first called barrel processing, in which the staves of a barrel represent the pipeline stages and their executing threads. Interleaved, preemptive, fine-grained or time-sliced multithreading are more modern terminology.
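The fixed time-slice rotation can be sketched in a few lines of Python (a minimal illustration of the issue pattern, assuming three threads with independent instruction streams):

```python
# Toy trace of barrel (interleaved) issue: exactly one instruction per
# cycle, rotating through the threads in a fixed order, so consecutive
# pipeline stages never hold dependent instructions from one thread.

threads = {"A": ["a0", "a1", "a2"],
           "B": ["b0", "b1", "b2"],
           "C": ["c0", "c1", "c2"]}
ids = list(threads)

for cyc in range(9):
    t = ids[cyc % len(ids)]                  # A, B, C, A, B, C, ...
    print(f"cycle {cyc}: issue {threads[t][cyc // len(ids)]} from thread {t}")
```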
In addition to the hardware costs discussed in the block type of multithreading, interleaved multithreading has an additional cost of each pipeline stage tracking the thread ID of the instruction it is processing. Also, since there are more threads being executed concurrently in the pipeline, shared resources such as caches and TLBs need to be larger to avoid thrashing between the different threads.
Simultaneous multithreading
The most advanced type of multithreading applies to superscalar processors. Whereas a normal superscalar processor issues multiple instructions from a single thread every CPU cycle, in simultaneous multithreading (SMT) a superscalar processor can issue instructions from multiple threads every CPU cycle. Recognizing that any single thread has a limited amount of instruction-level parallelism, this type of multithreading tries to exploit parallelism available across multiple threads to decrease the waste associated with unused issue slots.
For example:
Cycle i: instructions j and j + 1 from thread A and instruction k from thread B are simultaneously issued.
Cycle i + 1: instruction j + 2 from thread A, instruction k + 1 from thread B, and instruction m from thread C are all simultaneously issued.
Cycle i + 2: instruction j + 3 from thread A and instructions m + 1 and m + 2 from thread C are all simultaneously issued.
To distinguish the other types of multithreading from SMT, the term "temporal multithreading" is used to denote when instructions from only one thread can be issued at a time.
In addition to the hardware costs discussed for interleaved multithreading, SMT has the additional cost of each pipeline stage tracking the thread ID of each instruction being processed. Again, shared resources such as caches and TLBs have to be sized for the large number of active threads being processed.
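The slot-filling idea behind SMT can be illustrated with a small Python sketch (hypothetical: ISSUE_WIDTH and the per-thread ilp limits are invented parameters standing in for the real issue width and each thread's available instruction-level parallelism):

```python
# Toy SMT issue trace: each cycle fills up to ISSUE_WIDTH slots with
# instructions drawn from several threads, so slots a single thread
# cannot fill (limited ILP) are used by the others.

ISSUE_WIDTH = 3   # assumed total issue slots per cycle

def smt_issue(threads, ilp):
    """threads: id -> instruction list; ilp: id -> max issues per cycle."""
    pcs = {t: 0 for t in threads}
    cyc = 0
    while any(pcs[t] < len(threads[t]) for t in threads):
        slots = []
        used = {t: 0 for t in threads}
        for t in threads:                       # greedily fill the slots
            while (len(slots) < ISSUE_WIDTH and used[t] < ilp[t]
                   and pcs[t] < len(threads[t])):
                slots.append(f"{threads[t][pcs[t]]}({t})")
                pcs[t] += 1
                used[t] += 1
        print(f"cycle {cyc}: issue {', '.join(slots)}")
        cyc += 1

# Thread A alone can issue at most 2 instructions per cycle; thread B
# fills the slot A cannot use.
smt_issue({"A": ["j", "j+1", "j+2", "j+3"], "B": ["k", "k+1"]},
          ilp={"A": 2, "B": 1})
```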
Implementations include DEC (later Compaq) EV8 (not completed), Intel Hyper-Threading Technology, IBM POWER5/POWER6/POWER7/POWER8/POWER9, IBM z13/z14/z15, Sun Microsystems UltraSPARC T2, Cray XMT, and AMD Bulldozer and Zen microarchitectures.
Implementation specifics
A major area of research is the thread scheduler that must quickly choose from among the list of ready-to-run threads to execute next, as well as maintain the ready-to-run and stalled thread lists. An important subtopic is the different thread priority schemes that can be used by the scheduler. The thread scheduler might be implemented totally in software, totally in hardware, or as a hardware/software combination.
Another area of research is what type of events should cause a thread switch: cache misses, inter-thread communication, DMA completion, etc.
If the multithreading scheme replicates all of the software-visible state, including privileged control registers and TLBs, then it enables virtual machines to be created for each thread. This allows each thread to run its own operating system on the same processor. On the other hand, if only user-mode state is saved, then less hardware is required, which would allow more threads to be active at one time for the same die area or cost.
See also
Super-threading
Speculative multithreading
References
External links
A Survey of Processors with Explicit Multithreading, ACM Computing Surveys, March 2003, by Theo Ungerer, Borut Robič and Jurij Šilc
Operating System | Difference between Multitasking, Multithreading and Multiprocessing GeeksforGeeks, 6 Sept. 2018.
Central processing unit
Instruction processing
Microprocessors
Parallel computing
Threads (computing) |
8259762 | https://en.wikipedia.org/wiki/Hershel%20Dennis | Hershel Dennis | Hershel Henry Dennis IV (born July 12, 1984) is a former American football running back. He played college football as a student athlete at the University of Southern California. During his six-year career, the Trojans went 70–8, making Dennis the first player to play on six Pacific-10 Championship squads and the player with the most wins in college football history. His nickname since childhood is "Patch".
High school career
Dennis prepped at Long Beach Polytechnic High School and was a part of the "Long Beach Poly Five", five highly recruited players including Manuel Wright, Marcedes Lewis, Darnell Bing, and Winston Justice. As of May 2007, Dennis still held the career rushing and touchdown marks at Long Beach Poly, as well as the single season rushing record. Dennis made his final decision between USC and the University of Oregon; his father wanted him to go to Oregon, but he chose USC out of respect for his mother, Rose Teofilo, who wanted him closer to home.
He was also on Poly's track and basketball teams, with track bests of 10.7 seconds in the 100 meters, 22.3 seconds in the 200 meters, 6.75 meters in the long jump and 1.98 meters in the high jump. Current Trojans Vincent Joseph, Travon Patterson and Alfred Rowe also prepped at Poly.
College career
As a freshman in the 2002 season, Dennis was a reserve behind a stable of experienced seniors including running backs Justin Fargas and Sultan McCullough as well as fullback/tailbacks Malaefou MacKenzie and Sunny Byrd; however, he did see play time in all 13 games, highlighted by a 38-yard touchdown run against rival UCLA. In 2003, he was the starting tailback in all 13 games over Reggie Bush and LenDale White.
However, Dennis lost the starting spot for the 2004 season, having been suspended for the first two games due to a criminal investigation (which later cleared him of any wrongdoing) and a violation of team rules. This allowed the tandem of White and Bush to emerge and relegate Dennis to the role of reserve for most of the season. Dennis did appear in a total of 9 games in the 2004 season, but was beset by a serious, season-ending torn knee ligament in practices before the 2005 Orange Bowl. During this period, Dennis considered transferring to another program, but his mother pressured him to stay and graduate, as he is the only one of Teofilo's children to have attended a university.
The same torn knee ligament kept Dennis from participating in the entire 2005 season; having already used his redshirt year, he lost a year of eligibility. Going into the 2006 season, Dennis was the leading career rusher among current student athletes. Unfortunately, in spring practice Dennis was again beset by a season-ending knee injury, causing him to miss a second entire year in a row. While this would have eliminated his final year of eligibility, Dennis and USC applied to the NCAA for a sixth year of eligibility, given the extreme circumstances. Dennis returned to full-contact practice in spring 2007. In May 2007, Dennis graduated with his bachelor's degree in sociology. The NCAA granted his application for a sixth year of eligibility in June 2007.
During his final game, the 2008 Rose Bowl against Illinois, Dennis rushed for a touchdown, his first of the season and his first since 2004, which led teammates to rush onto the field from the sidelines in celebration (drawing an excessive-celebration penalty).
At the end of his six-year USC career, Dennis became the first player to play on six Pac-10 Championship squads. In addition to conference accolades, Dennis also made central contributions to teams that finished in the BCS top 5 in six consecutive seasons (2002–07). With 70 wins (and only 8 losses) in that span, Hershel Dennis is the "winningest player" in college football history.
Professional career
Dennis signed to play with the Sioux Falls Storm as a running back in the Indoor Football League (IFL) during the 2011 season.
Personal
Dennis is of African-American descent on his father's side and Samoan on his mother's. His father, also named Hershel Dennis, played running back for North Carolina A&T. Dennis has his initials tattooed on the back of his arms above the elbow (left: "H", right: "D"). He has three children.
He is the cousin of Canadian Football League defensive lineman DeQuin Evans, and had been instrumental in obtaining permission for his cousin to sit in on classes at USC, enabling Evans to enroll in college and put his troubled youth behind him via a career in sports.
References
External links
USC Bio
1984 births
Living people
American football running backs
USC Trojans football players
American sportspeople of Samoan descent
Players of American football from Long Beach, California
Sioux Falls Storm players
African-American players of American football
21st-century African-American sportspeople
20th-century African-American people |
617121 | https://en.wikipedia.org/wiki/Game%20semantics | Game semantics | Game semantics (German: dialogische Logik, translated as dialogical logic) is an approach to formal semantics that grounds the concepts of truth or validity on game-theoretic concepts, such as the existence of a winning strategy for a player, somewhat resembling Socratic dialogues or the medieval theory of Obligationes.
History
In the late 1950s Paul Lorenzen was the first to introduce a game semantics for logic, and it was further developed by Kuno Lorenz. At almost the same time as Lorenzen, Jaakko Hintikka developed a model-theoretical approach known in the literature as GTS (game-theoretical semantics). Since then, a number of different game semantics have been studied in logic.
Shahid Rahman (Lille) and collaborators developed dialogical logic into a general framework for the study of logical and philosophical issues related to logical pluralism. Beginning in 1994 this triggered a kind of renaissance with lasting consequences. This new philosophical impulse experienced a parallel renewal in the fields of theoretical computer science, computational linguistics, artificial intelligence, and the formal semantics of programming languages, for instance the work of Johan van Benthem and collaborators in Amsterdam who looked thoroughly at the interface between logic and games, and Hanno Nickau who addressed the full abstraction problem in programming languages by means of games. New results in linear logic by Jean-Yves Girard in the interfaces between mathematical game theory and logic on one hand and argumentation theory and logic on the other hand resulted in the work of many others, including S. Abramsky, J. van Benthem, A. Blass, D. Gabbay, M. Hyland, W. Hodges, R. Jagadeesan, G. Japaridze, E. Krabbe, L. Ong, H. Prakken, G. Sandu, D. Walton, and J. Woods, who placed game semantics at the center of a new concept in logic in which logic is understood as a dynamic instrument of inference. There has also been an alternative perspective on proof theory and meaning theory, advocating Wittgenstein's "meaning as use" paradigm as understood in the context of proof theory: the so-called reduction rules (showing the effect of elimination rules on the result of introduction rules) should be seen as appropriate to formalise the explanation of the (immediate) consequences one can draw from a proposition, thus showing the function/purpose/usefulness of its main connective in the calculus of language.
Classical logic
The simplest application of game semantics is to propositional logic. Each formula of this language is interpreted as a game between two players, known as the "Verifier" and the "Falsifier". The Verifier is given "ownership" of all the disjunctions in the formula, and the Falsifier is likewise given ownership of all the conjunctions. Each move of the game consists of allowing the owner of the dominant connective to pick one of its branches; play will then continue in that subformula, with whichever player controls its dominant connective making the next move. Play ends when a primitive proposition has been so chosen by the two players; at this point the Verifier is deemed the winner if the resulting proposition is true, and the Falsifier is deemed the winner if it is false. The original formula will be considered true precisely when the Verifier has a winning strategy, while it will be false whenever the Falsifier has the winning strategy.
If the formula contains negations or implications, other, more complicated, techniques may be used. For example, a negation should be true if the thing negated is false, so it must have the effect of interchanging the roles of the two players.
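A minimal Python sketch of this game (the tuple encoding of formulas is invented for illustration): the Verifier owns disjunctions, the Falsifier owns conjunctions, and negation swaps the players' roles, so in these finite games the Verifier wins a negation exactly when they would lose the subgame.

```python
# Toy evaluator for the Verifier/Falsifier game over propositional
# formulas (the tuple encoding is invented for this illustration).

def verifier_wins(formula, valuation):
    """True iff the Verifier has a winning strategy, i.e. the formula is true."""
    op = formula[0]
    if op == "atom":                 # play ends: check the primitive proposition
        return valuation[formula[1]]
    if op == "not":                  # role swap: Verifier wins iff the
        return not verifier_wins(formula[1], valuation)  # Falsifier wins inside
    sub = [verifier_wins(f, valuation) for f in formula[1:]]
    # The Verifier picks a branch of "or"; the Falsifier picks a branch of "and".
    return any(sub) if op == "or" else all(sub)

# (p or q) and (not p), with p false and q true: the Verifier wins.
f = ("and", ("or", ("atom", "p"), ("atom", "q")), ("not", ("atom", "p")))
print(verifier_wins(f, {"p": False, "q": True}))   # True
```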
More generally, game semantics may be applied to predicate logic; the new rules allow a dominant quantifier to be removed by its "owner" (the Verifier for existential quantifiers and the Falsifier for universal quantifiers) and its bound variable replaced at all occurrences by an object of the owner's choosing, drawn from the domain of quantification. Note that a single counterexample falsifies a universally quantified statement, and a single example suffices to verify an existentially quantified one. Assuming the axiom of choice, the game-theoretical semantics for classical first-order logic agree with the usual model-based (Tarskian) semantics. For classical first-order logic the winning strategy for the Verifier essentially consists of finding adequate Skolem functions and witnesses. For example, if S denotes ∀x∃y φ(x, y), then an equisatisfiable statement for S is ∃f∀x φ(x, f(x)). The Skolem function f (if it exists) actually codifies a winning strategy for the Verifier of S by returning a witness for the existential sub-formula for every choice of x the Falsifier might make.
The above definition was first formulated by Jaakko Hintikka as part of his GTS interpretation. The original version of game semantics for classical (and intuitionistic) logic due to Paul Lorenzen and Kuno Lorenz was not defined in terms of models but of winning strategies over formal dialogues (P. Lorenzen, K. Lorenz 1978, S. Rahman and L. Keiff 2005). Shahid Rahman and Tero Tulenheimo developed an algorithm to convert GTS-winning strategies for classical logic into the dialogical winning strategies and vice versa.
Formal dialogues and GTS games may be infinite and use end-of-play rules rather than letting players decide when to stop playing. Reaching this decision by standard means for strategic inferences (iterated elimination of dominated strategies, or IEDS) would, in GTS and formal dialogues, be equivalent to solving the halting problem and exceeds the reasoning abilities of human agents. GTS avoids this with a rule to test formulas against an underlying model; logical dialogues, with a non-repetition rule (similar to threefold repetition in chess). Genot and Jacot (2017) proved that players with severely bounded rationality can reason to terminate a play without IEDS.
For most common logics, including the ones above, the games that arise from them have perfect information—that is, the two players always know the truth values of each primitive, and are aware of all preceding moves in the game. However, with the advent of game semantics, logics, such as the independence-friendly logic of Hintikka and Sandu, with a natural semantics in terms of games of imperfect information have been proposed.
Intuitionistic logic, denotational semantics, linear logic, logical pluralism
The primary motivation for Lorenzen and Kuno Lorenz was to find a game-theoretic (their term was dialogical, in German ) semantics for intuitionistic logic. Andreas Blass was the first to point out connections between game semantics and linear logic. This line was further developed by Samson Abramsky, Radhakrishnan Jagadeesan, Pasquale Malacaria and independently Martin Hyland and Luke Ong, who placed special emphasis on compositionality, i.e. the definition of strategies inductively on the syntax. Using game semantics, the authors mentioned above have solved the long-standing problem of defining a fully abstract model for the programming language PCF. Consequently, game semantics has led to fully abstract semantic models for a variety of programming languages, and to new semantic-directed methods of software verification by software model checking.
Shahid Rahman and Helge Rückert extended the dialogical approach to the study of several non-classical logics such as modal logic, relevance logic, free logic and connexive logic. Recently, Rahman and collaborators developed the dialogical approach into a general framework aimed at the discussion of logical pluralism.
Quantifiers
Foundational considerations of game semantics have been more emphasised by Jaakko Hintikka and Gabriel Sandu, especially for independence-friendly logic (IF logic, more recently information-friendly logic), a logic with branching quantifiers. It was thought that the principle of compositionality fails for these logics, so that a Tarskian truth definition could not provide a suitable semantics. To get around this problem, the quantifiers were given a game-theoretic meaning. Specifically, the approach is the same as in classical propositional logic, except that the players do not always have perfect information about previous moves by the other player. Wilfrid Hodges has proposed a compositional semantics and proved it equivalent to game semantics for IF-logics.
More recently, Shahid Rahman and the team of dialogical logic in Lille implemented dependences and independences within a dialogical framework by means of a dialogical approach to intuitionistic type theory called immanent reasoning (Rahman, McConaughey, Klev and Clerbout 2018; for an application of this approach to the axiom of choice, see Rahman and Clerbout 2015).
Computability logic
Japaridze’s computability logic is a game-semantical approach to logic in an extreme sense, treating games as targets to be serviced by logic rather than as technical or foundational means for studying or justifying logic. Its starting philosophical point is that logic is meant to be a universal, general-utility intellectual tool for ‘navigating the real world’ and, as such, it should be construed semantically rather than syntactically, because it is semantics that serves as a bridge between real world and otherwise meaningless formal systems (syntax). Syntax is thus secondary, interesting only as much as it services the underlying semantics. From this standpoint, Japaridze has repeatedly criticized the often followed practice of adjusting semantics to some already existing target syntactic constructions, with Lorenzen’s approach to intuitionistic logic being an example. This line of thought then proceeds to argue that the semantics, in turn, should be a game semantics, because games “offer the most comprehensive, coherent, natural, adequate and convenient mathematical models for the very essence of all ‘navigational’ activities of agents: their interactions with the surrounding world”. Accordingly, the logic-building paradigm adopted by computability logic is to identify the most natural and basic operations on games, treat those operators as logical operations, and then look for sound and complete axiomatizations of the sets of game-semantically valid formulas. On this path a host of familiar or unfamiliar logical operators have emerged in the open-ended language of computability logic, with several sorts of negations, conjunctions, disjunctions, implications, quantifiers and modalities.
Games are played between two agents: a machine and its environment, where the machine is required to follow only effective strategies. This way, games are seen as interactive computational problems, and the machine's winning strategies for them as solutions to those problems. It has been established that computability logic is robust with respect to reasonable variations in the complexity of allowed strategies, which can be brought down as low as logarithmic space and polynomial time (one does not imply the other in interactive computations) without affecting the logic. All this explains the name “computability logic” and determines applicability in various areas of computer science. Classical logic, independence-friendly logic and certain extensions of linear and intuitionistic logics turn out to be special fragments of computability logic, obtained merely by disallowing certain groups of operators or atoms.
See also
Computability logic
Dependence logic
Ehrenfeucht–Fraïssé game
Independence-friendly logic
Interactive computation
Intuitionistic logic
Ludics
References
Bibliography
Books
T. Aho and A-V. Pietarinen (eds.), Truth and Games. Essays in Honour of Gabriel Sandu. Societas Philosophica Fennica (2006).
J. van Benthem, G. Heinzmann, M. Rebuschi and H. Visser (eds.), The Age of Alternative Logics. Springer (2006).
R. Inhetveen: Logik. Eine dialog-orientierte Einführung. Leipzig 2003
L. Keiff Le Pluralisme Dialogique. Thesis Université de Lille 3 (2007).
K. Lorenz, P. Lorenzen: Dialogische Logik, Darmstadt 1978
P. Lorenzen: Lehrbuch der konstruktiven Wissenschaftstheorie, Stuttgart 2000
O. Majer, A.-V. Pietarinen and T. Tulenheimo (editors). Games: Unifying Logic, Language and Philosophy. Springer (2009).
S. Rahman, Über Dialoge, protologische Kategorien und andere Seltenheiten. Frankfurt 1993
S. Rahman and H. Rückert (editors), New Perspectives in Dialogical Logic. Synthese 127 (2001).
S. Rahman and N. Clerbout: Linking Games and Constructive Type Theory: Dialogical Strategies, CTT-Demonstrations and the Axiom of Choice. Springer-Briefs (2015). https://www.springer.com/gp/book/9783319190624.
S. Rahman, Z. McConaughey, A. Klev, N. Clerbout: Immanent Reasoning or Equality in Action. A Plaidoyer for the Play level. Springer (2018). https://www.springer.com/gp/book/9783319911489.
J. Redmond & M. Fontaine, How to play dialogues. An Introduction to Dialogical Logic. London, College Publications (Col. Dialogues and the Games of Logic. A Philosophical Perspective N° 1).
Articles
S. Abramsky and R. Jagadeesan, Games and full completeness for multiplicative linear logic. Journal of Symbolic Logic 59 (1994): 543-574.
A. Blass, A game semantics for linear logic. Annals of Pure and Applied Logic 56 (1992): 151-166.
J. M. E. Hyland and C.-H. L. Ong, On Full Abstraction for PCF: I, II, and III. Information and Computation, 163(2), 285–408 (2000).
E.J. Genot and J. Jacot, Logical Dialogues with Explicit Preference Profiles and Strategy Selection, Journal of Logic, Language and Information 26, 261–291 (2017). doi.org/10.1007/s10849-017-9252-4
D.R. Ghica, Applications of Game Semantics: From Program Analysis to Hardware Synthesis. 2009 24th Annual IEEE Symposium on Logic in Computer Science: 17-26.
G. Japaridze, Introduction to computability logic. Annals of Pure and Applied Logic 123 (2003): 1-99.
G. Japaridze, In the beginning was game semantics. In Ondrej Majer, Ahti-Veikko Pietarinen and Tero Tulenheimo (editors), Games: Unifying logic, Language and Philosophy. Springer (2009).
Krabbe, E. C. W., 2001. "Dialogue Foundations: Dialogue Logic Restituted [title has been misprinted as "...Revisited"]," Supplement to the Proceedings of the Aristotelian Society 75: 33-49.
S. Rahman and L. Keiff, On how to be a dialogician. In Daniel Vanderken (ed.), Logic Thought and Action, Springer (2005), 359-408. .
S. Rahman and T. Tulenheimo, From Games to Dialogues and Back: Towards a General Frame for Validity. In Ondrej Majer, Ahti-Veikko Pietarinen and Tero Tulenheimo (editors), Games: Unifying logic, Language and Philosophy. Springer (2009).
External links
Computability Logic Homepage
GALOP: Workshop on Games for Logic and Programming Languages
Game Semantics or Linear Logic?
Logic in computer science
Mathematical logic
Philosophical logic
Quantifier (logic)
Game theory
Semantics |
36724395 | https://en.wikipedia.org/wiki/Cubieboard | Cubieboard | Cubieboard is a single-board computer, made in Zhuhai, Guangdong, China. The first short run of prototype boards were sold internationally in September 2012, and the production version started to be sold in October 2012. It can run Android 4 ICS, Ubuntu 12.04 desktop, Fedora 19 ARM Remix desktop, Armbian, Arch Linux ARM, a Debian-based Cubian distribution, FreeBSD, or OpenBSD.
It uses the AllWinner A10 SoC, popular on cheap tablets, phones and media PCs. This SoC is used by developers of the lima driver, an open-source driver for the ARM Mali GPU. At the 2013 FOSDEM demo it ran ioquake 3 at 47 fps in 1024×600.
The Cubieboard team managed to run an Apache Hadoop computer cluster using the Lubuntu Linux distribution.
Technical specifications
Cubieboard1
The board utilizes the AllWinner A10's capabilities:
SoC: AllWinner A10
CPU: ARM Cortex-A8 @ 1 GHz
GPU: Mali-400 MP
video acceleration: CedarX able to decode 2160p video
display controller: unknown, supports HDMI 1080p
512 MiB (beta) or 1 GiB (final) DDR3
4 GB NAND flash built-in, 1x microSD slot, 1x SATA port.
10/100 Ethernet connector
2x USB Host, 1x USB OTG, 1x CIR.
96 extended pins including I²C, SPI, LVDS
Dimensions: 10 cm × 6 cm
Cubieboard2
The second version, sold since June 2013, enhances the board mainly by replacing the Allwinner A10 SoC with an Allwinner A20 which contains 2 ARM Cortex-A7 MPCore CPUs and a dual fragment shader Mali-400 GPU (Mali-400MP2).
This board is used by Fedora to test and develop the Allwinner SoC port of the distribution.
There is also a version available with two microSD card slots.
Cubietruck (Cubieboard3)
The third version has a new and larger PCB layout and features the following hardware:
SoC: Allwinner A20
CPU: ARM Cortex-A7 @ 1 GHz dual-core
GPU: Mali-400 MP2
display controller: Mali-400 GPU, supports HDMI 1080p, no LVDS support
2 GiB DDR3 @ 480 MHz
8 GB NAND flash built-in, 1x microSD slot, 1x SATA 2.0 port (for a 2.5" hard disk).
10/100/1000 RTL8211E Gigabit Ethernet
2x USB Host, 1x USB OTG, 1x CIR.
S/PDIF, headphone, VGA and HDMI audio out, mic and line-in via extended pins
Wi-Fi and Bluetooth on board with PCB antenna (Broadcom BCM4329/BCM40181)
54 extended pins including I²C, SPI
Dimensions: 11 cm × 8 cm
There is no longer any LVDS support. The RTL8211E NIC allows transfer rates of up to 630–638 Mbit/s when sending (with 5–10% CPU idle) and 850–860 Mbit/s when receiving (with 0–2% CPU idle) when simultaneous TCP connections are established (testing was done using iperf with three clients against a Cubietruck running Lubuntu).
To connect a 3.5" HDD the necessary 12 V power can be delivered by a 3.5 inch HDD addon package which can be used to power the Cubietruck itself as well. Also new is the option to power the Cubietruck from LiPo batteries.
Cubieboard 4
On May 4, 2014, CubieTech announced the Cubieboard 4, also known as CC-A80.
It is based on an Allwinner A80 SoC (quad Cortex-A15, quad Cortex-A7 big.LITTLE), thereby replacing the Mali GPU with a PowerVR GPU. The board was officially released on 10 March 2015.
SoC: Allwinner A80
CPU: 4x Cortex-A15 and 4x Cortex-A7 implementing ARM big.LITTLE
GPU: PowerVR G6230 (Rogue)
video acceleration: a new generation of video engine that supports H.265 decoding, 4K resolution and simultaneous output to three screens
display controller: unknown
microUSB 3.0 OTG
Cubietruck Plus (Cubieboard 5)
The fifth version has the same PCB layout and almost the same features as the CubieTruck.
SoC: Allwinner H8
CPU: ARM Cortex-A7 @ 2 GHz octa-core
GPU: PowerVR SGX544 @ 700 MHz
display controller: Toshiba TC358777XBG, supports HDMI 1.4 1080p and DisplayPort, no LVDS support
2 GiB DDR3
8 GB eMMC flash built-in, 1x microSD slot, 1x SATA 2.0 port (for a 2.5" hard disk) via USB bridge.
10/100/1000M RJ45 Gigabit Ethernet
2x USB Host, 1x USB OTG, 1x CIR.
S/PDIF, headphone, and HDMI audio out, mic and line-in via 3.5mm jack, and onboard mic.
Wi-Fi (dual-radio 2.4 and 5 GHz) and Bluetooth on board with PCB antenna
70 extended pins including I²C, SPI
Dimensions: 11 cm × 8 cm
See also
List of open-source hardware projects
References
External links
Cubieboard on Debian project wiki
Cubieboard on Fedora Linux distribution wiki
Cubieboard on ArchLinux ARM documentation
Cubieboard2 on Void Linux distribution wiki
Cubieboard on OpenSuse wiki
Cubieboard on FreeBSD wiki
NetBSD/evbarm on Allwinner Technology SoCs on NetBSD wiki
Single-board computers |
1539324 | https://en.wikipedia.org/wiki/FIPS%20140 | FIPS 140 | The 140 series of Federal Information Processing Standards (FIPS) are U.S. government computer security standards that specify requirements for cryptography modules.
FIPS 140-2 and FIPS 140-3 are both accepted as current and active. FIPS 140-3 was approved on March 22, 2019 as the successor to FIPS 140-2 and became effective on September 22, 2019. FIPS 140-3 testing began on September 22, 2020, although no FIPS 140-3 validation certificates have been issued yet. FIPS 140-2 testing is still available until September 21, 2021 (later changed for applications already in progress to April 1, 2022), creating an overlapping transition period of one year. FIPS 140-2 test reports that remain in the CMVP queue will still be granted validations after that date, but all FIPS 140-2 validations will be moved to the Historical List on September 21, 2026 regardless of their actual final validation date.
Purpose of FIPS 140
The National Institute of Standards and Technology (NIST) issues the 140 Publication Series to coordinate the requirements and standards for cryptographic modules which include both hardware and software components for use by departments and agencies of the United States federal government. FIPS 140 does not purport to provide sufficient conditions to guarantee that a module conforming to its requirements is secure, still less that a system built using such modules is secure. The requirements cover not only the cryptographic modules themselves but also their documentation and (at the highest security level) some aspects of the comments contained in the source code.
User agencies desiring to implement cryptographic modules should confirm that the module they are using is covered by an existing validation certificate. FIPS 140-1 and FIPS 140-2 validation certificates specify the exact module name, hardware, software, firmware, and/or applet version numbers. For Levels 2 and higher, the operating platform upon which the validation is applicable is also listed. Vendors do not always maintain their baseline validations.
The Cryptographic Module Validation Program (CMVP) is operated jointly by the United States Government's National Institute of Standards and Technology (NIST) Computer Security Division and the Communications Security Establishment (CSE) of the Government of Canada. The use of validated cryptographic modules is required by the United States Government for all unclassified uses of cryptography. The Government of Canada also recommends the use of FIPS 140 validated cryptographic modules in unclassified applications of its departments.
Security levels
FIPS 140-2 defines four levels of security, simply named "Level 1" to "Level 4". It does not specify in detail what level of security is required by any particular application.
FIPS 140-2 Level 1, the lowest, imposes very limited requirements; loosely, all components must be "production-grade" and various egregious kinds of insecurity must be absent.
FIPS 140-2 Level 2 adds requirements for physical tamper-evidence and role-based authentication.
FIPS 140-2 Level 3 adds requirements for physical tamper-resistance (making it difficult for attackers to gain access to sensitive information contained in the module) and identity-based authentication, and for a physical or logical separation between the interfaces by which "critical security parameters" enter and leave the module, and its other interfaces.
FIPS 140-2 Level 4 makes the physical security requirements more stringent, and requires robustness against environmental attacks.
In addition to the specified levels, Section 4.1.1 of the specification describes additional attacks that may require mitigation, such as differential power analysis. If a product contains countermeasures against these attacks, they must be documented and tested, but protections are not required to achieve a given level. Thus, a criticism of FIPS 140-2 is that the standard gives a false sense of security at Levels 2 and above because the standard implies that modules will be tamper-evident and/or tamper-resistant, yet modules are permitted to have side channel vulnerabilities that allow simple extraction of keys.
Scope of requirements
FIPS 140 imposes requirements in eleven different areas:
Cryptographic module specification (what must be documented)
Cryptographic module ports and interfaces (what information flows in and out, and how it must be segregated)
Roles, services and authentication (who can do what with the module, and how this is checked)
Finite state model (documentation of the high-level states the module can be in, and how transitions occur)
Physical security (tamper evidence and resistance, and robustness against extreme environmental conditions)
Operational environment (what sort of operating system the module uses and is used by)
Cryptographic key management (generation, entry, output, storage and destruction of keys)
EMI/EMC
Self-tests (what must be tested and when, and what must be done if a test fails)
Design assurance (what documentation must be provided to demonstrate that the module has been well designed and implemented)
Mitigation of other attacks (if a module is designed to mitigate against, say, TEMPEST attacks then its documentation must say how)
Brief history
FIPS 140-1, issued on 11 January 1994, was developed by a government and industry working group, composed of vendors and users of cryptographic equipment. The group identified the four "security levels" and eleven "requirement areas" listed above, and specified requirements for each area at each level.
FIPS 140-2, issued on 25 May 2001, takes account of changes in available technology and official standards since 1994, and of comments received from the vendor, tester, and user communities. It was the main input document to the international standard ISO/IEC 19790:2006 Security requirements for cryptographic modules issued on 1 March 2006. NIST issued Special Publication 800-29 outlining the significant changes from FIPS 140-1 to FIPS 140-2.
FIPS 140-3, issued on 22 March 2019 and announced in May 2019, is currently in the overlapping transition period to supersede FIPS 140-2 and aligns the NIST guidance with two international standards documents: ISO/IEC 19790:2012(E) Information technology — Security techniques — Security requirements for cryptographic modules and ISO/IEC 24759:2017(E) Information technology — Security techniques — Test requirements for cryptographic modules. In the first draft version of the FIPS 140-3 standard, NIST introduced a new software security section, one additional level of assurance (Level 5) and new Simple Power Analysis (SPA) and Differential Power Analysis (DPA) requirements. The draft issued on 11 Sep 2009, however, reverted to four security levels and limits the security levels of software to levels 1 and 2.
Criticism
Due to the way in which the validation process is set up, a software vendor is required to re-validate their FIPS-validated module for every change, no matter how small, to the software; this re-validation is required even for obvious bug or security fixes. Since validation is an expensive process, this gives software vendors an incentive to postpone changes to their software and can result in software that does not receive security updates until the next validation. The result may be that validated software is less safe than a non-validated equivalent.
This criticism has been refuted more recently by some industry experts who instead put the responsibility on the vendor to narrow their validation boundary. As most of the re-validation efforts are triggered by bugs and security fixes outside the core cryptographic operations, a properly scoped validation is not subject to the common re-validation as described.
See also
Common Criteria
FIPS 140-2
FIPS 140-3
:Category: Computer security standards
:Category: Cryptography standards
References
External links
Computer security standards
Cryptography standards
Standards of the United States |
3014352 | https://en.wikipedia.org/wiki/Harald%20Welte | Harald Welte | Harald Welte (born 1979), also known as LaForge, is a German programmer.
Welte is the founder of the free software project Osmocom and was formerly involved in the netfilter/iptables and Openmoko projects. He is a member of the Chaos Computer Club.
Biography
Until 2007, Welte was the chairman of the core team responsible for the netfilter/iptables project. He is also credited with writing the UUCP over SSL how-to, and contributions to User-mode Linux and international encryption kernel projects, among others.
Welte has become prominent for his work with gpl-violations.org, an organisation he set up in 2004 to track down and prosecute violators of the GPL, which had been untested in court until then.
Welte was part of Openmoko team, a project to create a smartphone platform using free software. However, in 2007, Welte announced his withdrawal from Openmoko, citing internal friction and demotivation. He continues to contribute as a volunteer to the project.
On 25 July 2008, VIA Technologies appointed Harald Welte as its open source liaison. According to VIA, in his role as open source liaison Welte will be responsible for helping refine VIA's open source strategy and optimise its support for Linux. Welte will also "assist VIA to develop drivers that are in line with the standards and best practices of Linux kernel development, enhance the quality and public availability of VIA documentation, and improve interaction with the open source development community".
Welte is the founder of the Osmocom project.
Awards
On 19 March 2008, the Free Software Foundation (FSF) announced that it had awarded the Award for the Advancement of Free Software for 2007 to Welte, stating that
"The awards committee honored both Welte's technical contributions to projects like the Linux kernel and the OpenMoko mobile platform project, and his community leadership in safeguarding the freedom of free software users by successfully enforcing the GNU General Public License in over one hundred cases since the gpl-violations.org project began in 2004."
On 22 July 2008, Welte received the Defender of Rights Open Source Award, presented to him by Chris DiBona, who indicated the award was primarily for Welte's work on gpl-violations.org.
References
Sources
Duration 00:40:31. Covers Welte's pioneering work and software license compliance generally.
External links
Harald Welte's blog
Interview at the 3rd international GPLv3 conference, June 22, 2006
Audio and video of a panel discussion including Welte, June 23, 2006
Interview in LWN.net, June 12, 2006
Interview at under-linux.org
Personal interview with Harald Welte about Openmoko (German-speaking)
Interview with Harald Welte about gpl-violations.org (German), June 22, 2004
Podcast at CIO.com
Audio podcast: Harald Welte on gpl-violations.org and GSM security
1979 births
Living people
Linux kernel programmers
People from Berlin
Members of Chaos Computer Club |
20210316 | https://en.wikipedia.org/wiki/Molecular%20design%20software | Molecular design software | Molecular design software is notable software for molecular modeling that provides special support for developing molecular models de novo.
In contrast to the usual molecular modeling programs, such as for molecular dynamics and quantum chemistry, such software directly supports the aspects related to constructing molecular models, including:
Molecular graphics
interactive molecular drawing and conformational editing
building polymeric molecules, crystals, and solvated systems
partial charges development
geometry optimization
support for the different aspects of force field development
Comparison of software covering the major aspects of molecular design
Notes and references
See also
External links
molecular design IUPAC term definition.
Journal of Computer-Aided Molecular Design
Molecular Modeling resources
Materials modelling and computer simulation codes
Click2Drug.org Directory of in silico (computer-aided) drug design tools.
Journal of Chemical Information and Modeling
Journal of Computational Chemistry
Science software
Computational chemistry software
Molecular dynamics software
Molecular modelling software |
1153926 | https://en.wikipedia.org/wiki/HPE%20Integrity%20Servers | HPE Integrity Servers | HPE Integrity is a series of server computers produced by Hewlett Packard Enterprise (formerly Hewlett-Packard) since 2003, based on the Itanium processor. The Integrity brand name was inherited by HP from Tandem Computers via Compaq.
In 2015 HP released the Superdome X line of Integrity servers based on the x86 architecture. It is a compact enclosure holding up to 8 dual-socket blades and supporting up to 16 processors/240 cores (when populated with Intel Xeon E7-2890 or E7-2880 processors).
General
Over the years, Integrity systems have supported Windows Server, HP-UX 11i, OpenVMS, NonStop, Red Hat Enterprise Linux and SUSE Linux Enterprise Server operating systems on Integrity servers. As of 2020 the operating systems that are supported are HP-UX 11i, OpenVMS and NonStop.
Early Integrity servers were based on two closely related chipsets. The zx1 chipset supported up to 4 CPUs and up to 8 PCI-X busses. They consisted of three distinct application-specific integrated circuits; a memory and I/O controller, a scalable memory adapter and an I/O adapter. The PA-8800 and PA-8900 microprocessors use the same bus as the Itanium 2 processors, allowing HP to also use this chipset for the HP 9000 servers and C8000 workstations.
The memory and I/O controller can be attached directly to up to 12 DDR SDRAM slots. If more slots than this are needed, two scalable memory adapters can be attached instead, allowing up to 48 memory slots. The chipset supports DIMM sizes up to 4 GB, theoretically allowing a machine to support up to 192 GB of RAM, although the largest supported configuration was 128 GB.
The sx1000 chipset supported up to 64 CPUs and up to 192 PCI-X buses. The successor chipsets were the zx2 and sx2000 respectively.
Entry-level servers
rx1600 series
The 1U rx1600 server is based on the zx1 chipset and has support for one or two 1 GHz Deerfield Itanium 2 CPUs.
The 1U rx1620 server is based on the zx1 chipset and has support for one or two 1.3/1.6 GHz Fanwood Itanium 2 CPUs.
Common for the series is:
Memory: Up to 8 DIMMs
Storage: Dual-channel Ultra-320 SCSI controller with support for two hot-swappable Ultra 320 drives
One external Ultra 320 SCSI port
Network: 10/100/1000BaseTX LAN port
An additional 10/100BaseTX LAN port
General-purpose RS-232 serial port
Possibility for redundant power supplies
Two USB ports
Two PCI-X slots
Optional features are:
SCSI RAID controller
Management processor card (required for support of Windows Server 2003)
CD/DVD-ROM (required for installation of Windows Server 2003 and OpenVMS)
The series support five operating systems:
HP-UX 11i v2 or later
OpenVMS for Itanium
Windows Server 2003 for Itanium
Windows Server 2008 for Itanium
Linux with kernel that supports Itanium
rx2600 series
The 2U rx2600 server is based on the zx1 chipset and has support for one or two 1.0/1.4 GHz Deerfield/Madison, 1.3 GHz Madison or 1.5 GHz Madison CPUs.
The 2U rx2620 server is based on the zx1 chipset and has support for one or two 1.6 GHz Fanwood/Madison or 1.4/1.6 GHz Montecito CPUs.
The 2U rx2660 server is based on the zx2 chipset and has support for one or two 1.6 GHz Montvale or 1.42/1.66 GHz Montvale CPUs.
Common for the series is:
Memory: Up to 12 DIMMs (8 DIMMs for the rx2660)
Storage: Dual channel Ultra 320 SCSI controller with support for two internal hard drives
Support for three hot-swappable Ultra 320 drives
One external Ultra 320 SCSI port
Network: 10/100/1000BaseTX LAN port
Two general-purpose RS-232 serial ports
Possibility for redundant power supplies
Two USB ports
Four PCI-X slots
Optional features are:
SCSI RAID controller
Management processor card (required for support of Windows Server 2003)
CD/DVD-ROM (Required for installation of Windows Server 2003 and OpenVMS)
The series support four operating systems:
HP-UX 11i v2 or later
OpenVMS for Itanium
Windows Server 2003 for Itanium
Linux with kernel that supports Itanium
rx2800 series
The 2U rx2800 i2 server used Intel Itanium 9300 series processors, available with 8, 4 or 2 processor cores.
Among features common for the series:
Memory: Up to 24 DIMMs
Storage: HP p410i 3 Gb SAS Controller
Support for 8 hot-plug Serial Attached SCSI (SAS) small form factor (SFF) 2.5 inch drives
Network: Quad port 10/100/1000 Base TX LAN
Six expansion slots
The series supports three operating systems: HP-UX 11i v3, OpenVMS v 8.4 for Itanium, and Windows Server for Itanium.
rx3600
The 4U rx3600 is based on the zx2 chipset and has two CPU sockets which support Montecito or Montvale processors. It supports up to 96 GB of main memory, using 24 four-gigabyte DIMMs.
Standard features include:
Memory: Up to 24 ECC memory PC2 4200 DDR2 DIMMs (with two optional 12-slot memory carrier boards)
Storage: Eight 2.5" Serial Attached SCSI (SAS) disk-drive slots
8-port SAS host bus adapter, supporting two RAID 1 volumes and a hot spare
Network: Dual-port 10/100/1000Base TX LAN
Integrated HP Integrated Lights-Out2 (iLO) management processor
Choice of I/O backplane: PCI-X with 8 available PCI-X slots, or the PCI Express I/O backplane, containing 4 x8 PCI Express slots, and 4 PCI-X slots. The system also has 2 reserved PCI-X slots, one of which is used for a PCI-X dual-port 10/100/1000Base TX LAN card, and the other is reserved for an optional internal Smart Array RAID card.
N+1 hot-swappable fans
Hot-swappable power supply
Hot-swappable PCI-X I/O cards
Optional features:
Up to 8 SAS disk drives, with support for up to 72 GB 15K-RPM or 146 GB 10K-RPM disks
Redundant hot-swappable power supply
8-port HP Smart Array P400 or P600, or 16-port P800 SAS RAID controllers available for internal disks, supporting RAID 1/5/6 and hot spare disks.
rx4600 series
The 7U rx4610 uses first-generation Itanium CPUs, with support for up to four single-core CPUs, 64 GB of RAM and ten 64-bit PCI slots.
The 4U rx4640 uses Itanium 2 CPUs, with support for up to four dual-core CPUs, 128 GB of RAM and six PCI-X slots.
Both models include integrated USB and video as standard, which enables support for Microsoft Windows operating systems.
The rx4640 is architecturally the same box as the rp4440, and can be changed from PA-RISC to Itanium 2 support with the flip of a switch.
rx5670
The discontinued 7U rx5670 server has four CPU sockets which support McKinley and Madison processors. It is zx1-based and can have up to 48 DIMM slots, supporting 256 MB to 2 GB DIMMs which must be loaded in matched sets of four (quads). It has 9 PCI-X slots and 1 PCI slot available.
rx6600
The 7U rx6600 is based on the zx2 chipset and has four CPU sockets that support Montecito and Montvale CPUs. It supports up to 384 GB of memory using 48 eight-gigabyte DIMMs.
Standard features include:
Memory: Up to 48 ECC PC2-4200 DDR2 DIMMs (with the 48-slot memory board)
Storage: Sixteen 2.5" SAS disk drive slots
8-port SAS host bus adapter, supporting two RAID 1 volumes and a hot spare
Network: Dual-port 10/100/1000Base TX LAN
Integrated iLO2 management processor
Choice of I/O backplane: PCI-X with 8 available PCI-X slots, or the PCI Express I/O backplane, containing 4 x8 PCI Express slots, and 4 PCI-X slots. The system also has 2 reserved PCI-X slots, one of which is used for a PCI-X dual-port 10/100/1000Base TX LAN card, and the other is reserved for an optional internal Smart Array RAID card.
N+1 hot-swappable fans
Hot-swappable power supply
Hot-swappable PCI-X I/O cards
Optional features:
Up to 16 SAS disk drives, with support for 72 GB 15K-RPM / 146 GB 10K-RPM or 300 GB 10K-RPM disks
Redundant hot-swappable power supply
8-port HP Smart Array P400 or P600, or 16-port P800 SAS RAID controllers available for internal disks, supporting RAID 1/5/6 and hot-spare disks
Mid-range servers
HP's mid-range and high-end (Superdome) servers are based on cell boards, sometimes called cells, which contain the chipset, processors, memory, and I/O bus. This design allows the servers to be divided into hardware partitions, or groups of cell boards; each partition can perform as if it were a separate server.
rx7600 series
The 10U rx7620 is based on the sx1000 chipset, which supports both PA-RISC and Itanium 2 CPUs.
The 10U rx7640 is based on the sx2000 chipset, which supports both PA-RISC and Itanium 2 CPUs.
Maximum of 2 cell boards
4 CPU sockets per cell board
16 DIMM slots per cell board
Maximum of 4 SCSI disks and 2 tape and/or CD/DVD-ROM internally (half to each cell board)
7 hot-pluggable I/O slots per cell board, plus 1 core I/O slot per cell board on sx1000-based models
The rx7600 series comprises the smallest cell-based servers from HP. Just like the bigger rx8600 and the HP Superdome (see below), these servers can be partitioned, either as one big partition (two cells in one partition) or as two independent single-cell partitions.
rx8600 series
The 17U rx8620 is based on the sx1000 chipset, which supports both PA-RISC and Itanium 2 CPUs.
The 17U rx8640 is based on the sx2000 chipset, which supports both PA-RISC and Itanium 2 CPUs.
The rx8620 was announced in 2003, along with the rx4640 and rx7620.
Maximum of four cell boards
Four CPU sockets per cell board
16 DIMM slots per cell board
Maximum of four SCSI disks and two tape and/or CD/DVD-ROM internally (half to cell board 0, half to cell board 1)
Eight hot-pluggable I/O slots per cell board, plus one core I/O slot per cell board on sx1000-based models
Just like the smaller rx7600 (see above) and the HP Superdome (see below), the rx8600 can be partitioned using any combination of the four available cell boards (minimum of one, maximum of four separate partitions).
The maximum of four partitions is only achievable with an I/O expander (IOX) unit. Because each partition requires its own I/O, and the rx8600 series' integrated I/O chassis statically maps its two I/O slots to cell boards 0 and 1, an rx8600 series system is limited to two partitions unless an IOX is installed, as the sketch below illustrates.
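The partition limit can be expressed as a simple constraint. The following Python sketch is illustrative only; the function and its model are hypothetical, not HP's partitioning software:

def max_partitions(cells, iox_installed):
    """Each partition needs its own I/O. The integrated chassis serves
    only cell boards 0 and 1, so without an I/O expander (IOX) at most
    two partitions are possible; with one, every cell can host its own
    partition."""
    io_capable_cells = cells if iox_installed else min(cells, 2)
    return min(cells, io_capable_cells)

print(max_partitions(4, iox_installed=False))  # 2
print(max_partitions(4, iox_installed=True))   # 4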
Cells can be moved freely from an rx7600 series system to an rx8600 series system as long as the cells use the same chipset and the firmware is compatible.
High-end servers
Superdome
The Superdome server is available in several models, including the SD-16, SD-32, and SD-64. HP announced Superdome 2 in April 2010, offering resiliency improvements, a modular, bladed design, common components and crossbar fabric that routes transactions to the optimal pathway between blades and I/O. Superdome 2 addresses requirements for high-performance computing by providing flexible scalability and fault tolerance necessary for mission-critical workloads.
In November 2011 HP announced Project Odyssey, a development roadmap to unify server architectures on a single platform. The roadmap includes blades with Intel Xeon processors for the HP Superdome 2 enclosure (code name “DragonHawk”) and the scalable c-Class blade enclosures (code named “HydraLynx”), while supporting Windows and Linux environments with features from HP-UX within the next two years.
References
External links
HPE Integrity Servers official web page
Integrity
Very long instruction word computing |
34984839 | https://en.wikipedia.org/wiki/IPad%20%283rd%20generation%29 | IPad (3rd generation) | The iPad (3rd generation) (marketed as The new iPad, colloquially referred to as the iPad 3) is a tablet computer, developed and marketed by Apple Inc. The third device in the iPad line of tablets, it added a Retina Display, the new Apple A5X chip with a quad-core graphics processor, a 5-megapixel camera, HD 1080p video recording, voice dictation, and support for LTE networks in North America. It shipped with iOS 5, which also provides a platform for audio-visual media, including electronic books, periodicals, films, music, computer games, presentations and web browsing.
In the United States and Canada, nine variations of the third-generation iPad were offered, compared to six in the rest of the world, although some countries had only the Wi-Fi model. Each variation was available with a black or white front glass panel, with options for 16, 32, or 64 GB of storage. In North America, connectivity options were Wi-Fi only, or Wi-Fi + 4G (LTE) on Verizon, AT&T, Telus, Rogers, or Bell. For the rest of the world, connectivity options were Wi-Fi only (on the Wi-Fi model) or Wi-Fi + 3G (on the Wi-Fi + Cellular model), with the latter unavailable in some countries, as 4G (LTE) connectivity for the device was not available outside North America. The Wi-Fi + Cellular model includes GPS capability.
Initially, the cellular version was titled and marketed worldwide as the "Wi-Fi + 4G" model, but due to regional differences in classification of 4G (LTE) connectivity outside of North America, Apple later rebranded and altered their marketing to call this the "Wi-Fi + Cellular" model.
The tablet was released in ten countries on March 16, 2012. It gained mostly positive reviews, earning praise for its Retina display, processor and 4G (LTE) capabilities. However, controversy arose when the LTE incompatibilities became known. Three million units were sold in the first three days.
After only seven months (221 days) of official availability, the third-generation iPad was discontinued on October 23, 2012, following the announcement of the fourth-generation iPad. The third-generation iPad had the shortest lifespan of any iOS product. It is also the last iPad to support the 30-pin dock connector, as the fourth-generation iPad and later use the Lightning connector.
History
Speculation about the product began shortly after Apple released the iPad 2, which featured front and back cameras as well as a dual-core Apple A5 processor. Speculation increased after news of a 2,048-by-1,536 pixel screen leaked.
During this time, the tablet was called the "iPad 3", a colloquial name sometimes still used after the release. On February 9, 2012, John Paczkowski of All Things Digital stated that "Apple’s not holding an event in February—strange, unusual or otherwise. But it is holding one in March—to launch its next iPad."
Another common rumor at the time was that the tablet would have an Apple A6 processor.
On February 29, 2012, Apple announced a media event scheduled for March 7, 2012, at the Yerba Buena Center for the Arts. The company did not predisclose the subject of the event, but analysts widely expected the event to announce a new version of the iPad. The announcement affected the tablet resale market positively.
At the event, Apple CEO Tim Cook introduced iOS 5.1, a Japanese version of Siri, and the third-generation Apple TV before the third-generation iPad. Cook claimed that the new product would be one of the main contributors to the emerging "post-PC world"—a world in which digital life would not be tied to the PC.
The March 16, 2012, release included Australia, Canada, Japan, Singapore, the United Kingdom and the United States, among other countries. The March 23, 2012, release included many European countries, Mexico and Macau. The April 20, 2012, release added a dozen countries including South Korea and Malaysia. The April 27, 2012, release added nine more countries, including India and South Africa. May 2012 releases added 31 countries, including Brazil and Turkey.
On October 23, 2012, upon the announcement of the fourth-generation iPad, the third-generation iPad was discontinued. In response to criticism from its owners, the return policy of select Apple Stores was briefly extended to thirty days to allow customers to exchange the third-generation model for the fourth-generation model.
Features
Software
The third-generation iPad shipped with iOS 5.1, which was released on March 7, 2012. It can act as a hotspot with some carriers, sharing its internet connection over Wi-Fi, Bluetooth, or USB, provided it is a Wi-Fi + Cellular model. It can also access the App Store, a digital application distribution platform for iOS developed and maintained by Apple. The service allows users to browse and download applications from the iTunes Store that were developed with Xcode and the iOS SDK and were published through Apple. From the App Store, GarageBand, iMovie, iPhoto, and the iWork apps (Pages, Keynote, and Numbers) are available.
The iPad comes with several pre-installed applications, including Safari, Mail, Photos, Videos, YouTube, Music, iTunes, App Store, Maps, Notes, Calendar, Game Center, Photo Booth, and Contacts. Like all iOS devices, the iPad can sync content and other data with a Mac or PC using iTunes, although iOS 5 and later can be managed and backed up without a computer. Although the tablet is not designed to make phone calls over a cellular network, users can use a headset or the built-in speaker and microphone and place phone calls over Wi-Fi or cellular using a VoIP application, such as Skype. The device has dictation, using the same voice recognition technology as the iPhone 4S. The user speaks and the iPad types what they say on the screen provided that the iPad is connected to a Wi-Fi or cellular network.
The third-generation device has an optional iBooks application, which displays books and other EPUB-format content downloaded from the iBookstore. Several major book publishers including Penguin Books, HarperCollins, Simon & Schuster and Macmillan have committed to publishing books for the device. Despite being a direct competitor to both the Amazon Kindle and Barnes & Noble Nook, both Amazon.com and Barnes & Noble offer e-reader apps for the iPad.
On September 19, 2012, iOS 6, which contains 200 new features, was released. The iOS 6 update includes new features such as Apple Maps, which replaced a mapping application operated by Google, Facebook integration and the ability to operate Siri on the third-generation iPad.
The third-generation iPad is compatible with iOS 7, which was released in 2013. Although the model received the update, some newer features, such as AirDrop, were limited to newer devices and not supported. The iPhone 4S received similarly partial support.
iOS 8 is also supported by the third-generation iPad, although some of its features are unavailable or pared down on this model.
iOS 9 also supports the third-generation iPad; it is the fifth major iOS release to support this model, and the iOS 9 public beta was also compatible with it. In all, the model received more than four years of iOS support.
iOS 9.3.5 is the latest and final version to support the Wi-Fi-only third-generation iPad, while the Wi-Fi + Cellular models run iOS 9.3.6.
2019 GPS rollover update
On July 22, 2019, Apple released iOS 9.3.6 for the Wi-Fi + Cellular models of the third-generation iPad to fix issues caused by the GPS week number rollover. The issue affected the accuracy of GPS location and could set the device's date and time to an incorrect value, preventing connection to HTTPS servers and, consequently, to Apple's servers for activation, iCloud and the iTunes and App stores; a sketch of the underlying arithmetic follows. The Wi-Fi model is not affected by the rollover, as it lacks a GPS chipset.
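The arithmetic behind the rollover is straightforward: the legacy GPS navigation message carries the week number as a 10-bit field, so it wraps every 1,024 weeks (about 19.6 years), and receiver firmware must disambiguate the wrapped value against a built-in pivot date. The Python sketch below illustrates the idea; it is not Apple's implementation, and the pivot handling is an assumption for demonstration:

from datetime import datetime, timedelta

GPS_EPOCH = datetime(1980, 1, 6)  # start of GPS time
ROLLOVER_WEEKS = 1024             # the legacy week field is 10 bits

def resolve_gps_date(truncated_week, seconds_of_week, pivot):
    # Pick the earliest date on or after the firmware's pivot date
    # that is consistent with the wrapped 10-bit week number.
    date = GPS_EPOCH + timedelta(weeks=truncated_week, seconds=seconds_of_week)
    while date < pivot:
        date += timedelta(weeks=ROLLOVER_WEEKS)
    return date

# With a pivot that predates the April 2019 rollover, a wrapped week
# resolves to a 1999 date; updating the pivot yields the correct 2019 date.
print(resolve_gps_date(10, 0, pivot=datetime(1999, 8, 22)))  # a 1999 date
print(resolve_gps_date(10, 0, pivot=datetime(2019, 4, 7)))   # a 2019 date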
Jailbreaking
Researchers demonstrated within hours of the product release that the third-generation iPad can be "jailbroken" to use applications and programs that are not authorized by Apple. The third-generation iPad can be jailbroken with Redsn0w 0.9.12 or Absinthe 2.0. Jailbreaking violates the factory warranty. One of the main reasons for jailbreaking is to expand the feature set limited by Apple and its App Store. Most jailbreaking tools automatically install Cydia, a native iOS APT client used for finding and installing software for jailbroken iOS devices. Many apps unapproved by Apple are extensions and customizations for iOS and other apps. Users install these programs to personalize and customize the interface, adding desired features and fixing annoyances, and simplify app development by providing access to the filesystem and command-line tools.
However, Apple often patches the exploits used by jailbreaking teams in subsequent iOS updates, which is why a jailbreak is not always available for the third-generation iPad on the current iOS version.
Hardware
The device has an Apple A5X SoC with a 1 GHz dual-core 32-bit Cortex-A9 CPU and a quad-core PowerVR SGX543MP4 GPU; 1 GB of RAM; a 5-megapixel, rear-facing camera capable of 1080p video recording; and a VGA front-facing videophone camera designed for FaceTime. The display resolution is 2,048 by 1,536 (QXGA) with 3.1 million pixels, four times as many as the iPad 2, providing even scaling from the prior model.
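The pixel figures follow from the stated resolutions: doubling each dimension of the iPad 2's 1,024-by-768 panel quadruples the pixel count, which is what allows older apps to scale by an exact factor of two. A quick illustrative check in Python:

retina = 2048 * 1536    # third-generation iPad
ipad2 = 1024 * 768      # iPad 2
print(retina)           # 3145728, i.e. roughly 3.1 million pixels
print(retina // ipad2)  # 4: each old pixel maps onto an exact 2x2 block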
The new iPad is thicker than its predecessor by 0.6 mm and is heavier by 51 grams for the Wi-Fi model (652 grams).
The Wi-Fi + Cellular models (both at 662 grams) are 49 grams heavier for the AT&T model and 55 grams heavier for the Verizon model compared to the respective iPad 2 3G models (AT&T 3G iPad 2 is 613 grams, and Verizon 3G iPad 2 is 607 grams).
There are four physical switches on the third-generation iPad, including a home button near the display that returns the user to the home screen, and three plastic switches on the sides: wake/sleep and volume up/down, plus a software-controlled switch whose function varies with software update. The display responds to other sensors: an ambient light sensor to adjust screen brightness and a 3-axis accelerometer to sense orientation and to switch between portrait and landscape modes. Unlike the iPhone and iPod Touch's built-in applications, which work in three orientations (portrait, landscape-left and landscape-right), the iPad's built-in applications support screen rotation in all four orientations, including upside-down. Consequently, the device has no intrinsic "native" orientation; only the relative position of the home button changes.
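In principle, the four orientations can be derived from the accelerometer's gravity vector alone. The following Python sketch is a toy illustration under assumed axis and sign conventions; it is not Apple's implementation, which exposes orientation through higher-level APIs with filtering and hysteresis:

def orientation(ax, ay):
    # ax, ay: gravity components along the device's x and y axes, in g.
    # The sign conventions here are assumptions for illustration.
    if abs(ay) >= abs(ax):
        return "portrait" if ay < 0 else "portrait upside-down"
    return "landscape left" if ax > 0 else "landscape right"

print(orientation(0.1, -0.99))  # held upright -> portrait
print(orientation(-0.98, 0.1))  # rotated 90 degrees -> landscape right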
The tablet is manufactured either with or without the capability to communicate over a cellular network; all models can connect to a wireless LAN. The third-generation iPad optionally has 16, 32, or 64 GB of internal flash memory, with no expansion option. Apple sells a "camera connection kit" with an SD card reader, but it can only be used to transfer photos and videos.
The audio playback of the third-generation iPad has a frequency response of 20 Hz to 20,000 Hz. Without third-party software it can play the following audio formats: HE-AAC, AAC, Protected AAC, MP3, MP3 VBR, Audible formats (2, 3, 4, AEA, AAX, and AAX+), ALAC, AIFF, and WAV. A preliminary tear-down of the third-generation iPad by IHS iSuppli showed the likely costs for a 16 GB Wi-Fi + Cellular model at $358.30, 32 GB at $375.10, and 64 GB at $408.70 respectively.
This iPad uses an internal rechargeable lithium-ion polymer (LiPo) battery. The batteries are made in Taiwan by Simplo Technology (60%) and Dynapack International Technology. The iPad is designed to be charged with a high current of 2 amps using the included 10 W USB power adapter and USB cord with a USB connector at one end and a 30-pin dock connector at the other end. While it can be charged by an older USB port from a computer, these are limited to 500 milliamps (0.5 amps). As a result, if the iPad is in use while powered by a computer, it may charge very slowly, or not at all. High-power USB ports found in newer computers and accessories provide full charging capabilities.
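The slow-charging behaviour follows from simple power arithmetic. Taking the commonly cited 42.5 watt-hour rating of this model's battery pack and an assumed conversion efficiency, a rough Python estimate (illustrative only, not Apple's specification) shows why a 500 mA port struggles:

BATTERY_WH = 42.5  # commonly cited rating for this model's battery pack

def charge_hours(volts, amps, efficiency=0.85):
    # Rough full-charge estimate; the efficiency factor is an assumption.
    return BATTERY_WH / (volts * amps * efficiency)

print(round(charge_hours(5.0, 2.0), 1))  # ~5 h on the included 10 W adapter
print(round(charge_hours(5.0, 0.5), 1))  # ~20 h at 500 mA; in active use the
                                         # device can draw more than 2.5 W,
                                         # so it may not charge at all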
Apple claims that the battery can provide up to 10 hours of video, 140 hours of audio playback, or one month on standby; in practice, users report about 8 hours of battery life under normal use. Like any rechargeable battery, the iPad's battery loses capacity over time, and it is not user-replaceable. In a program similar to the iPod and iPhone battery-replacement programs, Apple promised to replace an iPad that does not hold an electrical charge with a refurbished unit for a fee of US$99 plus $6.95 shipping. User data is not preserved or transferred, the refurbished unit comes with a new case, and the warranty on the refurbished unit may vary between jurisdictions.
Accessories
The Smart Cover, introduced with the iPad 2, is a screen protector that magnetically attaches to the face of the iPad. The cover has three folds which allow it to convert into a stand, which is also held together by magnets. The Smart Cover can also assume other positions by folding it. While original iPad owners could purchase a black case that included a similarly folding cover, the Smart Cover is simpler, easily detachable, and protects only the screen. Smart Covers have a microfiber bottom that cleans the front of the iPad, and wakes up the unit when the cover is removed. It comes in five colors of both polyurethane and the more expensive leather.
Apple offers several other accessories, most of which are adapters for the proprietary 30-pin dock connector, the only port besides the headphone jack. A dock holds the iPad upright at an angle, and has a dock connector and audio line-out port. The iPad can use Bluetooth keyboards that also work with Macs and PCs. The iPad can be charged by a standalone power adapter ("wall charger") compatible with iPods and iPhones, and a 10-watt charger is included.
Discontinued date
The third-generation iPad was discontinued on October 23, 2012, when it was replaced by the iPad (4th generation). Software update support ended in 2016, making iOS 9.3.5 the final version for Wi-Fi-only models, while cellular models run iOS 9.3.6.
Critical reception
The third-generation iPad received positive reviews, receiving praise for its Retina display, camera, processor and LTE capabilities. According to Walt Mossberg of All Things Digital, the new model "has the most spectacular display...seen in a mobile device" and holds the crown as "the best tablet on the planet." Jonathan Spira, writing in Frequent Business Traveler, claimed that it "seems to make everything sharper and clearer."
Issues
Cellular problems
Criticism followed the news that, in markets outside the US, the tablet cannot communicate over LTE, because its cellular variants support only the 700 MHz and 700/2,100 MHz bands, versus the 800 MHz, 1.8 GHz and 2.6 GHz bands used elsewhere. Soon after the launch, the Australian Competition and Consumer Commission (ACCC) took Apple to court for breaching four provisions of Australian consumer law. It alleged that Apple's promotion of the tablet in Australia as the 'iPad Wi-Fi + 4G' misled customers, as the name indicated that it would work on Australia's then-current 4G network. Apple responded by offering a full refund to all customers in Australia who purchased the Wi-Fi + Cellular model (previously named "Wi-Fi + 4G").
On April 20, 2012, Apple stated that HSPA+ networks in Australia are 4G, even though their speeds are slower than LTE's. Two months later, on June 21, 2012, Apple was fined A$2.25 million for false advertising in Australia: its advertisements had claimed that the new iPad was 4G LTE compatible, yet it did not work with Telstra's LTE mobile data network. Apple was also ordered to pay A$300,000 in costs.
Apple agreed to remove all references to 4G (LTE) capability in its UK advertising but as of August had not done so. There was no widespread 4G (LTE) network in the UK at the time, and the third-generation iPad would also be incompatible with future 4G (LTE) networks when they did roll-out there. The Advertising Standards Authority received consumer complaints on the matter. Apple offered to refund customers who bought the device after being misled by the advertising. The result of numerous complaints and lawsuits against Apple regarding the use of the term 4G in their advertisements prompted Apple to rename its "4G" service to "Cellular", with this change appearing on Apple's website on May 13, 2012.
Overheating
Many users reported abnormally high temperatures on the casing of the unit, especially after running 3D games. If used while plugged in, the rear of the new iPad became noticeably hotter than an iPad 2, with a slightly smaller difference when unplugged. Thermal-imaging tests revealed elevated surface temperatures; at its hottest the casing was warm to the touch but not uncomfortable when held for a brief period. In a follow-up report, Consumer Reports said they "don't believe the temperatures we recorded in our tests of the new iPad represent a safety concern."
Performance
The claimed superiority of the A5X over the Tegra 3 processor was questioned around launch time by competitor Nvidia; some benchmarks later confirmed the iPad's superiority in graphics performance, while other benchmarks show that the Tegra 3 has greater performance in some areas.
Criticism
Consumer Reports gave the third-generation iPad a top rating and recommendation, claiming that the tablet was "superb", "very good", and "very fast", and that the 4G network, the Retina display, and overall performance were positive attributes. They elaborated on the display quality, stating that the third-generation iPad was "the best we’ve seen." The iPad's new display was a large enough improvement to prompt Consumer Reports to rate it "excellent," and consequently downgraded the display of other tablets (including the iPad 2) from "excellent" to "very good."
As with the preceding models (see the parent article on the iPad), iOS' closed and proprietary nature garnered criticism, particularly by digital rights advocates such as the Electronic Frontier Foundation, computer engineer and activist Brewster Kahle, Internet-law specialist Jonathan Zittrain, and the Free Software Foundation who protested the iPad's introductory event and have targeted the iPad with their "Defective by Design" campaign.
Commercial reception
Pre-orders were so high for the third-generation iPad that later orders were quoted shipping times of "two to three weeks" after the order was placed. Apple said that "customer response to the new iPad has been off the charts and the quantity available for pre-order has been purchased." Despite the delayed shipping, many users chose to purchase the iPad online instead of waiting in line at the Apple Store. According to an Apple press release, three million units were sold in the first three days. The iPad was purchased mainly by a younger, male demographic. Most of the buyers were either "die-hard Apple fans" or had previously purchased an iPad. An Apple retailer in Dayton, Ohio, claimed that the demand for the tablet was "chaotic" and claimed that its launch was "drastically more significant than the iPad 2 launch." By Q2 of 2012, Apple would hit an all-time high, claiming 69.6 percent of the global tablet market.
See also
List of iPad accessories
Comparison of tablet computers
E-reader
References
External links
iPad launch event video at Yerba Buena Center for the Arts
3rd generation
iPad (3rd generation)
Tablet computers introduced in 2012
Tablet computers
Touchscreen portable media players |
48548028 | https://en.wikipedia.org/wiki/University%20of%20Cagayan%20Valley | University of Cagayan Valley | The University of Cagayan Valley (UCV) is a private non-sectarian university in Tuguegarao, Cagayan Valley, Philippines. It was formerly known as Cagayan Teachers College and then Cagayan Colleges Tuguegarao.
History
To sustain its university status and keep abreast of global requirements, the university built a four-story laboratory building in 2013 to meet the growing needs of its clientele.
On July 24, 2014, the university launched its bid for certification to the International Organization for Standardization (ISO) 9001:2008 Quality Management System standard by the British Standards Institution (BSI).
On October 7, 2015, BSI issued its Certificate of Registration for the ISO 9001:2008 Quality Management System (Certificate No. FS 636133) to the University of Cagayan Valley.
Accreditation
UCV's quality management system is ISO 9001:2015 certified by the British Standards Institution (BSI); the College of Maritime Education is also ISO certified by Det Norske Veritas (DNV) of Oslo, Norway; and several programs are accredited by the Philippine Association of Colleges and Universities Commission on Accreditation (PACUCOA).
The university is also accredited by the Technical Education and Skills Development Authority (TESDA), under which it offers Technical and Vocational Education and Training (TVET) programs, and by the Commission on Higher Education (CHED).
ISO certified by DNV-GL for maritime education
The College of Maritime Education, which offers the Bachelor of Science in Marine Engineering and the Bachelor of Science in Marine Transportation, is ISO 9001 certified by DNV (Det Norske Veritas) of Norway.
PACUCOA
Several of the university's programs are accredited by the Philippine Association of Colleges and Universities Commission on Accreditation (PACUCOA).
Curricular offerings
Graduate school programs
Doctor of Philosophy major in Human Resource Development, Educational Management
Doctor of Philosophy in Criminal Justice with specialization in Criminology
Doctor of Public Administration
Doctor of Jurisprudence/Juris Doctor/Doctor of Law
Master of Arts in Education
Master in Business Administration
Master in Public Administration
Master of Science in Criminology
Master of Science in Hospitality Management
Master of Engineering
Post baccalaureate program
Bachelor of Laws and Letters
Undergraduate programs
Bachelor of Arts major in English and Political Science
Bachelor of Public Administration
Bachelor of Science in Social Work
Bachelor of Physical Education
Bachelor of Science in Accounting Technology
Bachelor of Science in Business Administration major in Financial Management, Human Resource Management and Marketing Management
Bachelor of Science in Criminology
Bachelor of Science in Criminology through ETEEAP
Bachelor in Elementary Education major in Pre-school Education
Bachelor in Secondary Education major in English, Filipino, Math and Social Studies
Bachelor of Science in Hotel and Restaurant Management
Bachelor of Science in Computer Engineering
Bachelor of Science in Electrical Engineering
Bachelor of Science in Mechanical Engineering
Bachelor of Science in Marine Engineering
Bachelor of Science in Marine Transportation
Bachelor of Science in Nursing
Bachelor of Science in Midwifery
Bachelor of Science in Electrical Technology
Bachelor of Science in Electronics Technology
Bachelor of Science in Industrial Technology
Bachelor of Science in Information Technology
Vocational programs
Ladderized education programs
2-year Consumer Electronics Servicing NC II leading to Bachelor of Science in Electronics Technology
2-year Food and Beverage NC II leading to Bachelor of Science in Hotel & Restaurant Management
2-year Computer Hardware Servicing NC II leading to Bachelor of Science in Information Technology
Terminal Programs
Diploma in Midwifery
Automotive Servicing NC II
Consumer Electronics Servicing NC II
Housekeeping NC II
Building Wiring Installation NC II
Electrical Installation & Maintenance NC II
Basic education program
High School Program
Senior High School (Grades 11–12)
Academic Tracks:
Accountancy, Business and Management (ABM) Strand; Humanities and Social Sciences (HumSS) Strand; Science, Technology, Engineering and Mathematics (STEM) Strand; General Academic (GAS) Strand; Pre-Baccalaureate Maritime Specialization
Technical-Vocational Livelihood Tracks:
Home Economics Strand (Housekeeping and Food & Beverage Services), ICT Strand (Computer Programming), Industrial Arts Strand (Automotive Servicing NC II, Consumer Electronics Servicing and Electrical Installation & Maintenance)
Sports Track
Junior High School (Grades 7–10)
Grade School Program (Grades 1 - 6)
Pre-School Program
Kinder
Preparatory
UCV officials
Top management
Susan Esther N. Perez-Mari, MSc-FM, MD, Ph.D. - University President
Mellita Perez-Quimpo, R.Ph. - Senior Executive Vice President
Antonio B. Talamayan, Ph.D. - VP for Academic Affairs
Atty. Venancio C. Del Rosario Jr., JD, - VP for Legal Affairs
Freddie B. Corsino, Ph.D. - VP for Research, Publications, Planning and Development
Gregoria GJ Gocal, DPA - VP for Administration
Manrico B. Baricaua, CPA, MBA - VP for Finance
Directors and heads of offices
Vina V. Temanel - Director, Human Resource Development
Julius Bernabe Reyes - Director, Press Relations
Orlando C. Turingan Jr. - Chair of the Institutional Review Board (IRB), Recording Secretary of the BOD and Concierge Supervisor
Sarah Jane F. Maggay, RN, MSN - Director, Health and Food Services
John Mauro Manuel, LLB, JD, Ph.D. - Director, Community Extension and Services and Linkages
Mary Rose F. Esteban, DPA - Director, Student Affairs
Hector A. Palattao - Director, Student Information and Assessment System
Rizalina Maguddayao, RL, MLIS - Director, Library System
Engr. Kern Mantrix Mabazza - Planning Engineer/Management Trainee
Engr. Nikko Roland L. Manuel - Director, Physical Plant and Maintenance/Management Trainee
Jonasan A. Delos Santos - Electrician Supervisor
Crisencia L. Orcales - Director, Properties and Laboratories
Michelle Lauigan, RGC, RPm, MAGC - Director, Guidance Center
Jonel A. Agna, MIT - Director, Educational Management Information System and System Development
Editha M. Muhallin, DPA - Director, National Service Training Program
Joan Q. Tumacay - Director, Physical Education and Sports Development
Lolita B. Cabalza, Ph.D. - Director, Quality Assurance
Hartbert S. Perez - Chief Audit Executive and Financial & Business Development Officer and Mechanic
Joselito Dominic A. Perez - Procurement Head
Maryann S. Perez - Liaison Officer, College of Health
Adarito V. Corsino, LLB, MBA - University Registrar
Artemio C. Lasquero - Manager, University Shop
Iris Mariae L. Tesani, CPA - University Accountant
Kristen-Loi M. Callueng, CPA - Internal Auditor
Shirley M. Perez - Chief Cashier
Atty. Glenn M. Macababbad - Legal Counsel
Katrina Lorraine S. Perez - Chief Security
Engr. Benedict Loste - Director, Physical Plant
Academic deans
Atty. Venancio C. Del Rosario Jr., JD - School of Law
Engr. Emmanuel Miguel, ME - School of Engineering
Norma C. Guillermo, Ph.D. - School of Liberal Arts and School of Teacher Education
Shirley S. Domingo, RCrim, DPA, Ph.D. - School of Criminology
Randy R. Peralta, DBM, Ph.D.- School of Business Administration and Governance
Bernadette Kate D. Gammad, MSHM - School of Hospitality Management
Adriane Gabrielle S. Perez, RN, RM, MSN - College of Health
3/Engr. Marlowe Cris B. Mencero - College of Maritime Education
Nathaline Bauit, RSW, MSSW - College of Social Work
Alvin T. Talay, MIT, DIT - College of Information Technology
Mario T. Soriano, Ph.D.- College of Technology
Samuel Simeon C. Bunagan, Ph.D. - Basic Education
Program heads, coordinators and principals
Dominic A. Ong, LPT, MST - Principal, Senior High School
Geraldine S. Baluyan, LPT, MAEd - Asst. Principal, Senior High School
Maria Rosario M. Co, LPT, MAEd - Principal, Junior High School
Rebecca M. Pastores - OIC Principal, Elementary Laboratory
3/Engr. Marlowe Mencero- Shipboard Training Officer, College of Maritime Education
Nilda T. Crejado, MBA, MSHM - Program Head, School of Hospitality Management
Maricel S. Palo, RCrim, MSCrim - Program Head, School of Criminology
Antonette P. Angeles, MIT- Learner Information System Coordinator, Senior High School
Corazon Estavillo, LPT - Department Head, Physical Education
Subject chairs
Felix M. Calubaquib, Ph.D. - Natural and Biological Sciences Department
Larry S. Villanueva, Ph.D. - Language Department and Program Head for Liberal Arts
Jeffrey T. Battung, MAEd- Social and Behavioral Science Department
References
Universities and colleges in Cagayan
Education in Tuguegarao |
30901402 | https://en.wikipedia.org/wiki/2011%20Canadian%20government%20hackings | 2011 Canadian government hackings | In February 2011, news sources revealed that the Government of Canada suffered cyber attacks by foreign hackers using IP addresses from China. The hackers managed to infiltrate three departments within the government and transmit classified information back to them. The attacks resulted in the government cutting off internet access in the departments affected and various responses from both the Canadian government and the Chinese government.
History
In May 2010, a 2009 memo by the Canadian Security Intelligence Service (CSIS) was released to the public; it warned that cyber attacks on Canadian government, university, and industry computers had grown in 2009 and that the threat of cyber attacks was "one of the fastest growing and most complicated issues" facing CSIS. Minister of Public Safety Vic Toews stated in January 2011 that cyber attacks are a serious threat to Canada and that attacks on government computers had grown "quite substantial." In the fall of 2010 the federal government began to strategize ways to prevent cyber attacks and create response plans, including $90 million over five years to combat cyber threats.
Auditor General Sheila Fraser has previously warned that the federal government's computer systems risk being breached. In 2002 she stated that the government's internet security was not adequate and warned that it had "weaknesses in the system" and urged improving security to deal with the vulnerabilities. In 2005 she said the government still has to "translate its policies and standards into consistent, cost-effective practices that will result in a more secure IT environment in departments and agencies."
Cyber attack
The cyber attack was first detected in January 2011 and was carried out as a phishing scheme. Emails with seemingly innocuous attachments were sent, purportedly from known public servants. The attachments contained malware that infected the computer and exfiltrated key information such as passwords. This information, once sent back to the hackers, could then be used to remotely access the computer and forward the email (with its infected attachment) to others, propagating the malware.
Affected departments included Treasury Board and the federal Finance Department, as well as a DND agency advising the Canadian armed forces on science and technology. Once detected, Canadian cybersecurity officials shut down all internet access from these departments in order to halt the exfiltration of information from hijacked computers. This left thousands of public servants without internet access.
While the cyber attacks were traced back to Chinese IP addresses, there is "no way of knowing whether the hackers are Chinese, or some other nationality routing their cybercrimes through China to cover their tracks".
Response
When the attacks were detected, internet access in the affected departments was shut down to prevent stolen information from being sent back to the hackers. The Prime Minister's Office said only that the hackers had made an "attempt to access" servers and did not comment further. A spokesman for Treasury Board Minister Stockwell Day said there were no indications that any data related to Canadians was compromised. CSIS officials advised the government not to name China as the attacker and not to discuss the attacks, while a government official stated that Chinese espionage had become a problem for Canada and other countries.
On February 17, Prime Minister Stephen Harper stated that the government has "a strategy in place to try and evolve our systems as those who would attack them become more sophisticated" and that cyber attacks are "a growing issue of importance, not just in this country, but across the world." The same day, Stockwell Day also stated that the attacks weren't "the most aggressive [attack] but it was a significant one, significant that they were going after financial records."
The Chinese government has denied involvement in the attacks. Foreign Ministry Spokesman Ma Zhaoxu said at a press conference on February 17 that the Chinese government opposes hacking and other criminal acts, saying that "the allegation that China supports hacking is groundless."
See also
Cyber attack during the Paris G20 Summit
References
2011 in Canada
Canada–China relations
Internet in Canada
Cyberwarfare in China |
959147 | https://en.wikipedia.org/wiki/Richard%20Blumenthal | Richard Blumenthal | Richard Blumenthal (; born February 13, 1946) is an American lawyer and politician serving as the senior United States senator from Connecticut, a seat he has held since 2011. A member of the Democratic Party, he is one of the wealthiest members of the Senate, with a net worth over $100 million. He served as Attorney General of Connecticut from 1991 to 2011.
Born in Brooklyn, New York, Blumenthal attended Riverdale Country School, a private school in the Bronx. He graduated from Harvard College, where he was editor-in-chief of The Harvard Crimson. He studied for a year at Trinity College, Cambridge, in England before attending Yale Law School, where he was editor-in-chief of the Yale Law Journal. At Yale, he was a classmate of Bill and Hillary Clinton. From 1970 to 1976, Blumenthal served in the United States Marine Corps Reserve, attaining the rank of sergeant.
After law school, Blumenthal passed the bar and served as administrative assistant and law clerk for several Washington, D.C. figures. From 1977 to 1981, he was United States Attorney for the District of Connecticut. In the early 1980s he worked in private law practice, including as volunteer counsel for the NAACP Legal Defense Fund.
Blumenthal served one term in the Connecticut House of Representatives from 1985 to 1987; in 1986 he was elected to the Connecticut Senate and began service in 1987. He was elected Attorney General of Connecticut in 1990 and served for 20 years. During this period political observers speculated about him as a contender for governor of Connecticut, but he never pursued the office.
Blumenthal announced his 2010 run for U.S. Senate after incumbent Senator Chris Dodd announced his retirement. He faced Linda McMahon, a professional wrestling magnate, in the 2010 election, winning with 55% of the vote. He was sworn in on January 5, 2011. After Joe Lieberman retired in 2013, Blumenthal became Connecticut's senior senator. He was reelected in 2016 with 63.2% of the vote, becoming the first person to receive more than a million votes in a statewide election in Connecticut.
Early life and education
Blumenthal was born into a Jewish family in Brooklyn, New York, the son of Jane (née Rosenstock) and Martin Blumenthal. At age 17, Martin Blumenthal immigrated to the United States from Frankfurt, Germany; Jane was raised in Omaha, Nebraska, graduated from Radcliffe College, and became a social worker. Martin Blumenthal had a career in financial services and became president of a commodities trading firm. Jane's father, Fred "Fritz" Rosenstock, raised cattle, and as youths Blumenthal and his brother often visited their grandfather's farm. Blumenthal's brother David Blumenthal is a doctor and health care policy expert who became president of the Commonwealth Fund.
Blumenthal attended Riverdale Country School in the Riverdale section of the Bronx. He then attended Harvard College, from which he graduated in 1967 with an A.B. degree magna cum laude and membership in Phi Beta Kappa. As an undergraduate, he was editorial chairman of The Harvard Crimson. Blumenthal was a summer intern reporter for The Washington Post in the London Bureau. He was selected for a Fiske Fellowship, which allowed him to study at the University of Cambridge in England for one year after graduation from Harvard.
In 1973, Blumenthal received his J.D. degree from Yale Law School, where he was editor-in-chief of the Yale Law Journal. At Yale, he was classmates with future President Bill Clinton and future Secretary of State Hillary Clinton. One of his co-editors of the Yale Law Journal was future United States Secretary of Labor Robert Reich. He was also a classmate of future Supreme Court Justice Clarence Thomas and radio host Michael Medved.
Military service
Blumenthal received five draft deferments during the Vietnam War, first educational deferments, then deferments based on his occupation. In April 1970 Blumenthal enlisted in the United States Marine Corps Reserve which, The New York Times reported, "virtually guaranteed that he would not be sent to Vietnam". He served in units in Washington, D.C., and Connecticut from 1970 to 1976, attaining the rank of sergeant.
During his 2010 Senate campaign, news report videos that showed Blumenthal claiming he had served in Vietnam created a controversy. Blumenthal denied having intentionally misled voters, but acknowledged having occasionally "misspoken" about his service record. He later apologized to voters for remarks about his military service which he said had not been "clear or precise".
Early political career
Blumenthal served as administrative assistant to Senator Abraham A. Ribicoff, as aide to Daniel P. Moynihan when Moynihan was Assistant to President Richard Nixon, and as a law clerk to Judge Jon O. Newman, U.S. District Court of the District of Connecticut, and to Supreme Court Justice Harry A. Blackmun.
Before becoming attorney general, Blumenthal was a partner in the law firm of Cummings & Lockwood, and subsequently in the law firm of Silver, Golub & Sandak. In December 1982, while still at Cummings & Lockwood, he created and chaired the Citizens Crime Commission of Connecticut, a private, nonprofit organization. From 1981 to 1986, he was a volunteer counsel for the NAACP Legal Defense Fund.
At age 31, Blumenthal was appointed United States Attorney for the District of Connecticut, serving from 1977 to 1981. As the chief federal prosecutor of that state, he successfully prosecuted many major cases involving drug traffickers, organized crime, white collar criminals, civil rights violators, consumer fraud, and environmental pollution.
In 1982, he married Cynthia Allison Malkin. She is the daughter of real estate investor Peter L. Malkin. Her maternal grandfather was lawyer and philanthropist Lawrence Wien.
In 1984, when he was 38, Blumenthal was elected to the Connecticut House of Representatives, representing the 145th district. In 1987, he won a special election to fill a vacancy in the 27th district of the Connecticut Senate, at age 41. Blumenthal resided in Stamford, Connecticut.
In the 1980s, Blumenthal testified in the state legislature in favor of abolishing Connecticut's death penalty statute. He did so after representing Joseph Green Brown, a Florida death row inmate who was found to have been wrongly convicted. Blumenthal succeeded in staving off Brown's execution just 15 hours before it was scheduled to take place, and gained a new trial for Brown.
Attorney General of Connecticut
Blumenthal was elected the 23rd Attorney General of Connecticut in 1990 and reelected in 1994, 1998, 2002 and 2006. On October 10, 2002, he was awarded the Raymond E. Baldwin Award for Public Service by the Quinnipiac University School of Law.
Tenure
Pequot land annexation bid
In May 1995, Blumenthal and the state of Connecticut filed lawsuits challenging a decision by the Department of the Interior to approve a bid by the federally recognized Mashantucket Pequot for annexation of 165 acres of land in the towns of Ledyard, North Stonington and Preston. The Pequot were attempting to have the land placed in a federal trust, a legal designation to provide them with land for their sovereign control, as long years of colonization had left them landless. Blumenthal argued that the Interior Department's decision in support of this action was "fatally, legally flawed, and unfair" and that "it would unfairly remove land from the tax rolls of the surrounding towns and bar local control over how the land is used, while imposing [a] tremendous burden." The tribe announced the withdrawal of the land annexation petition in February 2002.
Interstate air pollution
In 1997, Blumenthal and Governor John G. Rowland petitioned the United States Environmental Protection Agency (EPA) to address interstate air pollution problems created from Midwest and southeastern sources. The petition was filed in accordance with Section 126 of the Clean Air Act, which allows a state to request pollution reductions from out-of-state sources that contribute significantly to its air quality problems.
In 2003, Blumenthal and the attorneys general of eight other states (New York, Maine, Maryland, Massachusetts, New Hampshire, New Jersey, Rhode Island, and Vermont) filed a federal lawsuit against the Bush administration for "endangering air quality by gutting a critical component of the federal Clean Air Act." The suit alleged that changes in the act would have exempted thousands of industrial air pollution sources from the act's New Source Review provision and that the new rules and regulations would lead to an increase in air pollution.
Tobacco
While attorney general, Blumenthal was one of the leaders of a 46-state lawsuit against the tobacco industry, which alleged that the companies involved had deceived the public about the dangers of smoking. He argued that the state of Connecticut should be reimbursed for Medicaid expenses related to smoking. In 1998, the tobacco companies reached a $246 billion national settlement, giving the 46 states involved 25 years of reimbursement payments. Connecticut's share of the settlement was estimated at $3.6 billion.
In December 2007, Blumenthal filed suit against RJ Reynolds, alleging that a 2007 Camel advertising spread in Rolling Stone magazine used cartoons in violation of the master tobacco settlement, which prohibited the use of cartoons in cigarette advertising because they entice children and teenagers to smoke. The company paid the state of Connecticut $150,000 to settle the suit and agreed to end the advertising campaign.
Microsoft lawsuit
In May 1998, Blumenthal and the attorneys general of 19 other states and the District of Columbia filed an anti-trust lawsuit against Microsoft, accusing it of abusing its monopoly power to stifle competition. The suit, which centered on Microsoft's Windows 98 operating system and its contractual restrictions imposed on personal computer manufacturers to tie the operating system to its Internet Explorer browser, was eventually merged with a federal case brought by the United States Department of Justice (DOJ) under Attorney General Janet Reno.
A 2000 landmark federal court decision ruled that Microsoft had violated antitrust laws, and the court ordered that the company be broken up. In 2001, the federal appeals court agreed, but rather than break up the company, it sent the case to a new judge to hold hearings and determine appropriate remedies. Remedies were later proposed by Blumenthal and eight other attorneys general; these included requiring that Microsoft license an unbundled version of Windows in which middleware and operating system code were not commingled.
In 2001, the Bush administration's DOJ settled with Microsoft in an agreement criticized by many states and other industry experts as insufficient. In November 2002, a federal court ruling imposed those same remedies. In August 2007, Blumenthal and five other states and the District of Columbia filed a report alleging that the federal settlement with Microsoft and court-imposed Microsoft remedies had failed to adequately reduce Microsoft's monopoly.
Stanley Works
On May 10, 2002, Blumenthal and Connecticut State Treasurer Denise L. Nappier helped to stop the proposed reincorporation in Bermuda of New Britain-based Stanley Works, a major Connecticut employer, by filing a lawsuit alleging that the shareholder vote of May 9 approving the move was "rife with voting irregularities." The agreement to temporarily halt the move was signed by New Britain Superior Court Judge Marshall Berger. On June 3 Blumenthal referred the matter to the U.S. Securities and Exchange Commission (SEC) for further investigation, and on June 25 he testified before the U.S. House Committee on Ways and Means that "Longtime American corporations with operations in other countries can dodge tens of millions of dollars in federal taxes by the device of reincorporating in another country" by "simply [filing] incorporation papers in a country with friendly tax laws, open a post-office box and hold an annual meeting there" and that Stanley Works, along with "Cooper Industries, Seagate Technologies, Ingersoll-Rand and PricewaterhouseCoopers Consulting, to name but a few, have also become pseudo-foreign corporations for the sole purpose of saving tax dollars." Blumenthal said, "Corporations proposing to reincorporate to Bermuda, such as Stanley, often tell shareholders that there is no material difference in the law", but said that this was not the case and was misleading to their shareholders. To rectify this situation he championed the Corporate Patriot Enforcement Act to close tax loopholes.
Tomasso Group and Rowland corruption
Blumenthal was involved in a series of lawsuits against associates of Connecticut Governor Rowland and the various entities of the Tomasso Group over Tomasso's bribing of state officials, including Rowland, in exchange for the awarding of lucrative state contracts. Blumenthal subpoenaed Tomasso Brothers Inc.; Tomasso Brothers Construction Co.; TBI Construction Co. LLC; Tunxis Plantation Country Club; Tunxis Management Co.; Tunxis Management Co. II; and Tenergy Water LLC (all part of the Tomasso Group). Lawyers for the Tomasso Group argued that the attorney general had no special power to look into the operations of private firms under whistleblower law as no actual whistleblowers had come forward and all incriminating testimony was in related federal cases. Connecticut law requires the attorney general to both be the attorney for the state and investigate the state government's misdeeds, and the rules governing the office did not adequately address this inherent conflict of interest. The state's case against the Tomasso Group failed but federal investigations ended in prison sentences for the Group's president, for Rowland, and for a number of his associates. The Tomasso Group stopped bidding on state contracts to avoid a substantial legal challenge from Blumenthal under newly written compliance statutes.
Charter schools lawsuit
In September 1999, Blumenthal announced a lawsuit against Robin Barnes, the president and treasurer of New Haven-based charter school the Village Academy, for serious financial mismanagement of the state-subsidized charitable organization. Citing common law, the suit sought to recover money misspent and serious damages resulting from Barnes's alleged breach of duty.
In a Connecticut Supreme Court decision, Blumenthal v. Barnes (2002), a unanimous court determined that the state attorney general could act using only the powers specifically authorized by the state legislature, and that since the attorney general's jurisdiction is defined by statute rather than common law, Blumenthal lacked the authority to cite common law as the basis for filing suit against Barnes. Despite this ruling, Blumenthal announced that he intended to pursue a separate 2000 lawsuit against the school's trustees filed on behalf of the State Department of Education.
Regional transmission organization
In 2003 Blumenthal, former Massachusetts Attorney General Tom Reilly, Rhode Island Attorney General Patrick C. Lynch, and consumer advocates from Connecticut, Maine, and New Hampshire opposed "the formation of a regional transmission organization (RTO) that would merge three Northeast and mid-Atlantic power operators, called Independent Service Operators (ISOs), into a single super-regional RTO." In a press release, he said, "This fatally flawed RTO proposal will raise rates, reduce accountability and reward market manipulation. It will increase the power and profits of transmission operators with an immediate $40 million price tag for consumers." The opposition was due to a report authored by Synapse Energy Economics, Inc., a Cambridge-based energy consulting firm, that alleged that consumers would be worse off under the merger.
Gina Kolb lawsuit
In 2004, Blumenthal sued Computer Plus Center (CPC) of East Hartford and its owner, Gina Kolb, on behalf of the state, alleging that CPC overcharged $50 per computer, $500,000 in total, on a three-year, $17.2 million contract to supply computers to the state. Blumenthal sued for $1.75 million. Kolb was arrested in 2004 and charged with first-degree larceny. Kolb later countersued, claiming the state had grossly abused its power. Kolb was initially awarded $18.3 million in damages, but Blumenthal appealed the decision and the damages were reduced to $1.83 million. Superior Court judge Barry Stevens described the jury's initial award of $18.3 million as a "shocking injustice" and said it was "influenced by partiality or mistake."
Big East and ACC
Blumenthal played a pivotal role in one of the biggest college athletics stories of the decade, the expansion of the Atlantic Coast Conference and the departures of Boston College, Miami, and Virginia Tech from the Big East. He led efforts by the Big East football schools (Virginia Tech, Rutgers, Pittsburgh, and West Virginia) in legal proceedings against the Atlantic Coast Conference, the University of Miami and Boston College, accusing them of improper disclosure of confidential information and of conspiring to dismantle the Big East. According to Blumenthal, the case was pursued because "the future of the Big East Conference was at risk—the stakes huge for both state taxpayers and the university's good name." The suits cost the schools involved $2.2 million in the first four months of litigation. The lawsuit against the ACC was initially dismissed on jurisdictional grounds but was subsequently refiled. A declaratory judgment by the Supreme Judicial Court of Massachusetts exonerated Boston College in the matter. Virginia Tech accepted an invitation from the ACC and withdrew from the suit to remove itself from the awkward position of suing its new conference. An out-of-court $5 million settlement was eventually reached, which included a $1 million exit fee that Boston College was required to pay the Big East under the league's constitution.
Some have speculated that the lawsuit was one of the biggest reasons that the University of Connecticut was not sought after by the ACC during its 2011 additions of then-Big East members Syracuse and Pittsburgh. UConn is a member of the less lucrative American Athletic Conference, the successor to the original Big East.
Interstate 84
On October 2, 2006, Blumenthal launched an investigation of a botched reconstruction project of Interstate 84 in Waterbury and Cheshire. The original contractor for the job, L.G. DeFelice, went out of business and it was later revealed that hundreds of storm drains had been improperly installed. Blumenthal subsequently announced lawsuits against L.G. DeFelice and the Maguire Group, the engineering firm that inspected the project. United States Fidelity & Guaranty, the insurer behind the performance bond for the I-84 construction, agreed to pay $17.5 million to settle the claims. Under the agreement, the state of Connecticut retained the right to sue L.G. DeFelice for additional funds. In 2009, the bonding company agreed to pay an additional $4.6 million settlement, bringing the total award to $22.1 million ($30,000 more than the repair costs).
Lyme disease guidelines investigation
In November 2006, Blumenthal tried, as Paul A. Offit described it, "to legislate a disease, Chronic Lyme, into existence". He launched an antitrust investigation into the Infectious Diseases Society of America's (IDSA's) 2006 guidelines regarding the treatment of Lyme disease. Responding to concerns from chronic Lyme disease advocacy groups, Blumenthal claimed the IDSA guidelines would "severely constrict choices and legitimate diagnosis and treatment options for patients." While the medical validity of the IDSA guidelines was not challenged, a journalist writing in Nature Medicine suggested some IDSA members may not have disclosed potential conflicts of interest, and a Forbes piece described Blumenthal's investigation as "intimidation" of scientists by an elected official with close ties to Lyme advocacy groups. The Journal of the American Medical Association described the decision as an example of the "politicization of health policy" that went against the weight of scientific evidence and could have a chilling effect on future decisions by medical associations. In 2008, Blumenthal ended the investigation after the IDSA agreed to conduct a review of the guidelines. In 2010, an eight-member independent review panel unanimously agreed that the original 2006 guideline recommendations were "medically and scientifically justified" in light of the evidence. The committee did not change any of the earlier recommendations but did alter some of the language in an executive summary of the findings. Blumenthal said he would review the final report.
Internet pornography, prostitution, and sexual predators
MySpace/Facebook
In March 2006, Blumenthal noted that more than seven incidents of sexual assault in Connecticut had been linked directly to MySpace contacts. Earlier that year, Blumenthal and attorneys general in at least five other states were involved in discussions with MySpace that resulted in the implementation of technological changes aimed at protecting children from pornography and child predators on the company's website. At Blumenthal's urging, MySpace installed a link to free blocking software ("K9 Web Protection"), but in May 2006, Blumenthal announced that the site had failed to make the program easy to find and that it was not clearly labeled. He also urged MySpace to take further steps to safeguard children, including purging deep links to pornography and inappropriate material, tougher age verification, and banning users under 16.
Blumenthal was co-chair, along with North Carolina Attorney General Roy Cooper, of the State Attorney General Task Force on Social Networking. In 2008, the attorneys general commissioned the Internet Safety Technical Task Force report, which researched "ways to help squash the onslaught of sexual predators targeting younger social-networking clients".
Blumenthal's office subpoenaed MySpace for information about the number of registered sex offenders on its site. In 2009, MySpace revealed that over a two-year span it had roughly 90,000 members who were registered sex offenders (nearly double what MySpace officials had originally estimated one year earlier). Blumenthal accused MySpace of having "monstrously inadequate counter-measures" to prevent sex offenders from creating MySpace profiles.
Blumenthal and Cooper secured agreements from MySpace and Facebook to make their sites safer. Both implemented dozens of safeguards, including finding better ways to verify users' ages, banning convicted sex offenders, and limiting the ability of older users to search for members under 18.
Craigslist
In March 2008, Blumenthal issued a letter to Craigslist attorneys demanding that the website cease allowing postings for erotic services, which he claimed promoted prostitution, and accused the site of "turning a blind eye" to the problem. He worked with Craigslist and a group of 40 attorneys general to create new measures on the site designed to thwart ads for prostitution and other illegal sexual activities. In April 2009, Craigslist came under the scrutiny of law enforcement agencies following the arrest of Philip Markoff (the "Craigslist Killer"), suspected of killing a 25-year-old masseuse he met through Craigslist at a Boston hotel. Blumenthal subsequently called for a series of specific measures to fight prostitution and pornography on Craigslist—including steep financial penalties for rule breaking, and incentives for reporting wrongdoing. He said, "Craigslist has the means—and moral obligation—to stop the pimping and prostituting in plain sight."
Leading a coalition of 39 states, in May 2010 Blumenthal subpoenaed Craigslist as part of an investigation into whether the site was taking sufficient action to curb prostitution ads and whether it was profiting from them. He said that prostitution ads remained on the site despite previous assurances that they would be removed. The subpoena sought documents related to Craigslist's processes for reviewing potentially objectionable ads, as well as documents detailing the revenue gained from ads sold to Craigslist's erotic services and adult services categories. In August 2010, Blumenthal called on Craigslist to shut the section down permanently and take steps to eradicate prostitution ads from other parts of the site. He also called on Congress to alter a landmark communications law (the Communications Decency Act) that Craigslist has cited in defense of the ads.
Following continued pressure, Craigslist removed the adult services sections from its U.S. sites in September 2010 and from its international sites in December 2010. Blumenthal called the decision a victory against sexual exploitation of women and children, and against human trafficking connected to prostitution.
Blumenthal and other state attorneys general reached a settlement with Craigslist on the issue; the settlement called for the company to charge people via credit card for any ads that were suggestive in nature so the person could be tracked down if they were determined to be offering prostitution. But Blumenthal remarked that after the settlement, the ads continued to flourish using code words.
Terrorist surveillance program
In October 2007, Blumenthal and the attorneys general of four other states lobbied Congress to reject proposals to provide immunity from litigation to telecommunications firms that cooperated with the federal government's terrorist surveillance program following the September 11 attacks. In 2008 Congress passed and President George W. Bush signed into law a new terrorist surveillance bill including the telecom immunity provisions Blumenthal opposed.
Countrywide Financial
In August 2008, Blumenthal announced that Connecticut had joined California, Illinois and Florida in suing subprime mortgage lender Countrywide Financial (now owned by Bank of America) for fraudulent business practices. The suit alleged that Countrywide pushed consumers into "deceptive, unaffordable loans and workouts, and charged homeowners in default unjustified and excessive legal fees." According to Blumenthal, "Countrywide conned customers into loans that were clearly unaffordable and unsustainable, turning the American Dream of homeownership into a nightmare" and when consumers defaulted, "the company bullied them into workouts doomed to fail." He also claimed that Countrywide "crammed unconscionable legal fees into renegotiated loans, digging consumers deeper into debt" and "broke promises that homeowners could refinance, condemning them to hopelessly unaffordable loans." The lawsuit demanded that Countrywide make restitution to affected borrowers, give up improper gains, and rescind, reform or modify all mortgages that broke state laws. It also sought civil fines of up to $100,000 per violation of state banking laws and up to $5,000 per violation of state consumer protection laws.
In October 2008, Bank of America initially agreed to settle the states' suits for $8.4 billion, and in February 2010, Countrywide mailed payments of $3,452.54 to 370 Connecticut residents. The settlement forced Bank of America to establish a $150 million fund to help repay borrowers whose homes had been foreclosed upon, $1.3 million of which went to Connecticut.
Blumenthal commented in defense of U.S. Senator and Senate Banking Committee chair Christopher Dodd, who had been harshly criticized for accepting a VIP loan from Countrywide, "there's no evidence of wrongdoing on [Mr. Dodd's] part any more than victims who were misled or deceived by Countrywide." In August 2010, Dodd was cleared by the Senate Ethics Committee, which found "no credible evidence" that he knowingly tried to use his status as a U.S. senator to receive loan terms not available to the public.
Global warming
Blumenthal has been a vocal advocate of the scientific consensus on climate change, that human activity is responsible for rising global temperatures and that prompt action to reduce greenhouse gas emissions must be taken. He has urged the Environmental Protection Agency to declare carbon dioxide a dangerous air pollutant. "I urge the new Obama EPA to declare carbon dioxide a danger to human health and welfare so we can at last begin addressing the potentially disastrous threat global warming poses to health, the environment and our economy. We must make up for lost time before it's too late to curb dangerous warming threatening to devastate the planet and human society." He has brought suit against a number of electric utilities in the Midwest, arguing that coal-burning power plants are generating excess emissions. In 2009, the Second Circuit Court of Appeals agreed to allow Blumenthal's lawsuit to proceed. Blumenthal has said, "no reputable climate scientist disputes the reality of global warming. It is fact, plain and simple. Dithering will be disastrous."
Trump emoluments lawsuit
In June 2017, Blumenthal and Representative John Conyers Jr. led a group of 196 members of Congress in filing a federal lawsuit accusing President Trump of violating the emoluments clause of the U.S. Constitution.
Prospect of gubernatorial candidacy
Blumenthal was often considered a top prospect for the Democratic nomination for governor of Connecticut, but never ran for the office.
On March 18, 2007, Hartford Courant columnist Kevin Rennie reported that Blumenthal had become seriously interested in running for governor in 2010. On February 2, 2009, Blumenthal announced he would forgo a gubernatorial run and seek reelection that year as attorney general.
U.S. Senate
Elections
2010
After Chris Dodd announced on January 6, 2010, that he would retire from the Senate at the end of his term, Blumenthal told the Associated Press that he would run in the election for Dodd's seat in November 2010. Later that day, President Barack Obama and Vice President Joe Biden called Blumenthal to express their best wishes.
The same day, Public Policy Polling released a poll they took on the two preceding evenings, including races where Blumenthal was paired against each of the three most-mentioned Republicans contending for their party's nomination for the seat. He led by at least 30% in each hypothetical race: against Rob Simmons (59%–28%), against Linda McMahon (60%–28%), and against Peter Schiff (63%–23%), with a 4.3% margin of error cited. Rasmussen Reports also polled after Blumenthal announced his candidacy and found a somewhat more competitive race, but with Blumenthal holding a strong lead.
A February poll by Rasmussen found that Blumenthal held leads of 19 points over Simmons and 20 points over McMahon, and that Republicans had made up little ground since the initial Rasmussen poll taken after Blumenthal announced. On May 21, Blumenthal received the Democratic nomination by acclamation.
The New York Times reported that Blumenthal misspoke on at least one occasion by saying he had served with the military "in Vietnam". Video emerged of him speaking to a group of veterans and supporters in March 2008 in Norwalk, saying, in reference to supporting troops returning from Iraq and Afghanistan, "We have learned something important since the days that I served in Vietnam." On other occasions he described his military service accurately. At a 2008 ceremony in Shelton, Connecticut, he said, "I served during the Vietnam era... I remember the taunts, the insults, sometimes even physical abuse."
Blumenthal's commanding officer in 1974 and 1975, Larry Baldino of Woodbridge, addressed the controversy in a letter to the editor in the New Haven Register. Baldino wrote that the misleading statement was too "petty" to be the basis for supporting or not supporting Blumenthal. Baldino further called Blumenthal "good-natured" and "one of the best Marines with whom I ever worked".
Days after the nomination, Quinnipiac University Polling Institute polling indicated that Blumenthal held a 25-point lead over McMahon. The Cook Political Report changed its assessment of the race to Leans Democratic, making Blumenthal the favored candidate over McMahon.
Blumenthal won the November 2 election, defeating McMahon 55% to 43%.
2016
August Wolf, a bond salesman and former Olympian, was the only declared Republican candidate running against Blumenthal in the 2016 Senate election.
In August 2015, economist Larry Kudlow threatened to run against Blumenthal if Blumenthal voted in favor of the Iran Nuclear Deal.
According to a pair of Quinnipiac polls on October 15, 2015, Blumenthal had a 34-point lead over Kudlow and a 35-point lead over Wolf.
Blumenthal was reelected with 63% of the vote against Republican state representative Dan Carter, becoming the first person in Connecticut's history to receive over a million votes in a single election.
Tenure
Blumenthal was sworn into the 112th United States Congress on January 5, 2011. He announced plans to return to Connecticut every weekend to join a "listening tour" of his home state.
In March 2012, Blumenthal and New York Senator Chuck Schumer gained national attention after they called upon Attorney General Eric Holder and the Department of Justice to investigate practices by employers to require Facebook passwords for employee applicants and workers.
Blumenthal worked with Senator Mark Kirk to eliminate pensions for members of Congress who are convicted of felonies while serving in office.
In the wake of the 2021 storming of the United States Capitol, Blumenthal blamed Trump, saying that Trump "incited, instigated and supported" the attack. He called for Vice President Mike Pence to invoke the Twenty-fifth Amendment to the United States Constitution. Blumenthal also requested an investigation into the lack of response from law enforcement and the military.
In December 2021, Blumenthal gave a speech and presented awards at a ceremony hosted by the Connecticut People’s World Committee, an affiliate of the Communist Party USA.
Committee assignments
Committee on Armed Services
Subcommittee on Cybersecurity
Subcommittee on Readiness and Management Support
Subcommittee on Seapower
Committee on Commerce, Science, and Transportation
Subcommittee on Consumer Protection, Product Safety, and Data Security (Chairman)
Committee on the Judiciary
Subcommittee on Antitrust, Competition Policy and Consumer Rights
Subcommittee on Human Rights and the Law
Subcommittee on Immigration, Citizenship, and Border Safety
Subcommittee on the Constitution (Chair)
Committee on Veterans' Affairs
Special Committee on Aging
Caucus memberships
Expand Social Security Caucus
Senate Oceans Caucus
Senate Ukraine Caucus
Legislation sponsored
The following is an incomplete list of legislation that Blumenthal has sponsored:
Affordable College Textbook Act (S. 1864; 115th Congress)
Political positions
The American Conservative Union gave him a 3% lifetime conservative rating in 2019.
Guns
As of 2010, Blumenthal had an "F" rating from the National Rifle Association for his pro-gun-control voting record.
In response to the 2015 San Bernardino attack, Blumenthal gave his support for improved access to mental health resources and universal background checks.
In January 2016, Blumenthal was one of 18 senators to sign a letter to Thad Cochran and Barbara Mikulski requesting that the Labor, Health and Education subcommittee hold a hearing on whether to allow the Centers for Disease Control and Prevention (CDC) to fund a study of gun violence and "the annual appropriations rider that some have interpreted as preventing it" with taxpayer dollars. The senators noted their support for taking steps "to fund gun-violence research, because only the United States government is in a position to establish an integrated public-health research agenda to understand the causes of gun violence and identify the most effective strategies for prevention."
In the wake of the Orlando nightclub shooting, Blumenthal said, "The Senate's inaction on commonsense gun violence prevention makes it complicit in this public health crisis. Prayers and platitudes are insufficient. The American public is beseeching us to act on commonsense, sensible gun violence prevention measures, and we must heed that call."
In June 2016, Blumenthal participated in the Chris Murphy gun control filibuster, speaking in support of the Feinstein Amendment, which would have banned people known or suspected to be terrorists from buying guns. That same year, he stated his support for efforts to require toy or fake firearms to have orange parts so they could more easily be distinguished from real guns.
In response to the 2017 Las Vegas shooting, Blumenthal declared in an interview with Judy Woodruff, "we must break the grip of the NRA". He continued, "we can at least save lives. Would it have prevented the Las Vegas atrocity, that unspeakable tragedy? We will never know. But it might have, and we can definitely prevent such mass shootings by adopting these kinds of commonsense measures."
In 2018, Blumenthal was a cosponsor of the NICS Denial Notification Act, legislation developed in the aftermath of the Stoneman Douglas High School shooting that would require federal authorities to inform states within 24 hours when a person who fails the National Instant Criminal Background Check System attempts to buy a firearm.
In January 2019, Blumenthal was one of 40 senators to introduce the Background Check Expansion Act, a bill that would require background checks for the sale or transfer of all firearms, including by unlicensed sellers. Exceptions to the bill's background check requirement included transfers between members of law enforcement, temporary loans of firearms for hunting or sporting events, gifts of firearms to members of one's immediate family, firearms transferred as part of an inheritance, and firearms given to another person temporarily for immediate self-defense.
In June 2019, Blumenthal was one of four senators to cosponsor the Help Empower Americans to Respond (HEAR) Act, legislation that would ban the import, sale, manufacture, transfer, and possession of suppressors, establish a suppressor buyback program, and include certain exceptions for current and former law enforcement personnel and others. The bill was intended to respond to the Virginia Beach shooting, in which the perpetrator used a .45-caliber handgun with multiple extended magazines and a suppressor.
Antitrust, competition and corporate regulation
In June 2019, Blumenthal was one of six Democrats led by Amy Klobuchar to sign letters to the Federal Trade Commission (FTC) and the Department of Justice recounting that many of them had "called on both the FTC and the Justice Department to investigate potential anticompetitive activity in these markets, particularly following the significant enforcement actions taken by foreign competition enforcers against these same companies" and requesting that both agencies confirm whether they had opened antitrust investigations into each of the companies and that both agencies pledge to publicly release any such investigations' findings.
Aviation safety
Blumenthal called for the Federal Aviation Administration (FAA) to temporarily ground all Boeing 737 MAX 8 aircraft in the United States until an investigation into the cause of the crash of Ethiopian Airlines Flight 302 was complete.
Agriculture
In March 2019, Blumenthal was one of 38 senators to sign a letter to United States Secretary of Agriculture Sonny Perdue warning that dairy farmers "have continued to face market instability and are struggling to survive the fourth year of sustained low prices" and urging his department to "strongly encourage these farmers to consider the Dairy Margin Coverage program."
In May 2019, Blumenthal and eight other Democratic senators sent Perdue a letter criticizing the USDA for purchasing pork from JBS USA, writing that it was "counterproductive and contradictory" for companies to receive funding from "U.S. taxpayer dollars intended to help American farmers struggling with this administration's trade policy." The senators requested the department "ensure these commodity purchases are carried out in a manner that most benefits the American farmer’s bottom line—not the business interests of foreign corporations."
In June 2019, Blumenthal and 18 other Democratic senators sent a letter to USDA Inspector General (IG) Phyllis K. Fong requesting that the IG investigate USDA instances of retaliation and political decision-making and asserting that not to conduct an investigation would mean these "actions could be perceived as a part of this administration’s broader pattern of not only discounting the value of federal employees, but suppressing, undermining, discounting, and wholesale ignoring scientific data produced by their own qualified scientists."
Economy
In March 2019, Blumenthal led five Democratic senators in signing a letter to the Federal Trade Commission requesting it "use its rulemaking authority, along with other tools, in order to combat the scourge of non-compete clauses rigging our economy against workers" and arguing that non-compete clauses "harm employees by limiting their ability to find alternate work, which leaves them with little leverage to bargain for better wages or working conditions with their immediate employer." The senators added that the FTC had the responsibility of protecting both consumers and workers and needed to "act decisively" to address their concerns over "serious anti-competitive harms from the proliferation of non-competes in the economy."
Child care
In 2019, Blumenthal and 34 other senators introduced the Child Care for Working Families Act, a bill intended to create 770,000 new child care jobs and ensure that families earning under 75% of the state median income pay nothing for child care, with higher-earning families paying "their fair share for care on a sliding scale, regardless of the number of children they have." The legislation would also support universal access to high-quality preschool programs for all 3- and 4-year-olds and change child care workforce compensation and training to aid both teachers and caregivers.
Children's programming
In 2019, following the Federal Communications Commission's announcement of rules changes to children's programming by modifying the Children's Television Act of 1990, Blumenthal and eight other Democratic senators signed a letter to FCC Chairman Ajit Pai that expressed concern that the proposed changes "would limit the reach of educational content available to children and have a particular damaging effect on youth in low-income and minority communities" and asserted that the new rules would see a reduction in access to valuable educational content through over-the-air services.
Disaster relief
In April 2018, Blumenthal was one of five Democratic senators to sign a letter to FEMA administrator Brock Long calling on FEMA to enter an agreement with the United States Department of Housing and Urban Development that would "stand up the Disaster Housing Assistance Program and address the medium- and longer-term housing needs" of evacuees of Puerto Rico in the aftermath of Hurricane Maria. The senators wrote that "FEMA's refusal to use the tools at its disposal, including DHAP, to help these survivors is puzzling—and profoundly troubling" and that hundreds of hurricane survivors were susceptible to being left homeless in the event that FEMA and HUD continued to not work together.
Drug policy
In February 2017, Blumenthal and 30 other senators signed a letter to Kaléo Pharmaceuticals in response to the opioid-overdose-reversing device Evzio rising in price from $690 in 2014 to $4,500 and requested the company provide the detailed price structure for Evzio, the number of devices Kaléo Pharmaceuticals set aside for donation, and the totality of federal reimbursements Evzio received in the previous year.
In March 2017, Blumenthal was one of 21 senators to sign a letter led by Ed Markey to Senate Majority Leader Mitch McConnell that noted that 12% of adult Medicaid beneficiaries had some form of a substance use disorder and that one third of treatment administered for opioid and other substance use disorders in the U.S. was financed through Medicaid, and opined that the American Health Care Act could "very literally translate into a death spiral for those with opioid use disorders" due to inadequate coverage for substance use disorder treatment.
In April 2019, Blumenthal was one of 11 senators to sign a letter to Juul CEO Kevin Burns asserting that the company had "lost what little remaining credibility the company had when it claimed to care about the public health" and that they would not rest until Juul's "dangerous products are out of the hands of our nation's children." The senators requested Juul list each of its advertising buys and detail the steps the company has taken to ensure its advertisements are not seen by people under 21 in addition to asking if Juul had purchased any social media influencers for product promotion.
Blumenthal has a "C" rating from NORML for his voting history regarding cannabis-related causes.
Encryption
On March 3, 2020, Blumenthal cosponsored the EARN IT Act of 2020, a bill that would make it more difficult for people and organizations to use encryption.
Environment
In June 2019, Blumenthal was one of 44 senators to introduce the International Climate Accountability Act, legislation that would prevent Trump from using funds in an attempt to withdraw from the Paris Agreement and direct the Trump administration to instead develop a strategic plan that would allow the United States to meet its commitment under the Paris Agreement.
LGBT rights
In September 2014, Blumenthal was one of 69 members of the US House and Senate to sign a letter to then-Secretary of Health and Human Services Sylvia Burwell requesting that the FDA revise its policy banning donation of corneas and other tissues by men who have had sex with another man in the preceding 5 years.
In June 2019, Blumenthal was one of 18 senators to sign a letter to United States Secretary of State Mike Pompeo requesting an explanation of a State Department decision not to issue an official statement that year commemorating Pride Month or the annual cable outlining activities for embassies commemorating Pride Month. They also asked why the LGBTI special envoy position had remained vacant and asserted that "preventing the official flying of rainbow flags and limiting public messages celebrating Pride Month signals to the international community that the United States is abandoning the advancement of LGBTI rights as a foreign policy priority."
Health care
In February 2019, Blumenthal and 22 other Democratic senators introduced the State Public Option Act, a bill that would authorize states to form a Medicaid buy-in program for all residents and thereby grant all denizens of the state the ability to buy into a state-driven Medicaid health insurance plan if they wished. Brian Schatz, a bill cosponsor, said the legislation would "unlock each state’s Medicaid program to anyone who wants it, giving people a high-quality, low-cost public health insurance option" and that its goal was "to make sure that every single American has comprehensive health care coverage."
In June 2019, Blumenthal was one of eight senators to cosponsor the Territories Health Equity Act of 2019, legislation that would remove the cap on annual federal Medicaid funding and increase the federal matching rate for Medicaid expenditures in the territories, along with providing more funds for prescription drug coverage for low-income seniors, in an attempt to equalize funding for the American territories of Puerto Rico, the U.S. Virgin Islands, Guam, American Samoa and the Northern Mariana Islands with that of U.S. states.
In June 2019, Blumenthal and 14 other senators introduced the Affordable Medications Act, legislation intended to promote transparency by mandating that pharmaceutical companies disclose the amount of money going to research and development, marketing, and executives' salaries. The bill would also abolish the restriction that stops the federal Medicare program from using its buying power to negotiate lower drug prices for beneficiaries, and would curb drug company monopoly practices used to keep prices high and to prevent less expensive generics from entering the market.
In August 2019, Blumenthal was one of 19 senators to sign a letter to United States Secretary of the Treasury Steve Mnuchin and United States Secretary of Health and Human Services Alex Azar requesting data from the Trump administration in order to help states and Congress understand the potential consequences in the event that the Texas v. United States Affordable Care Act (ACA) lawsuit prevailed in courts, claiming that an overhaul of the present health care system would form "an enormous hole in the pocketbooks of the people we serve as well as wreck state budgets". That same month, Blumenthal, three other Senate Democrats, and Bernie Sanders signed a letter to Acting FDA Commissioner Ned Sharpless in response to Novartis falsifying data as part of an attempt to gain the FDA's approval for its new gene therapy Zolgensma, writing that it was "unconscionable that a drug company would provide manipulated data to federal regulators in order to rush its product to market, reap federal perks, and charge the highest amount in American history for its medication."
Housing
In April 2019, Blumenthal was one of 41 senators to sign a bipartisan letter to the housing subcommittee praising the United States Department of Housing and Urban Development's Section 4 Capacity Building program as authorizing "HUD to partner with national nonprofit community development organizations to provide education, training, and financial support to local community development corporations (CDCs) across the country" and expressing disappointment that Trump's budget "has slated this program for elimination after decades of successful economic and community development." The senators wrote of their hope that the subcommittee would support continued funding for Section 4 in Fiscal Year 2020.
Journalism
In July 2019, Blumenthal cosponsored the Fallen Journalists Memorial Act, a bill introduced by Ben Cardin and Rob Portman that would create a new memorial that would be privately funded and constructed on federal lands within Washington, D.C., in order to honor journalists, photographers, and broadcasters who died in the line of duty.
Government shutdown
In March 2019, Blumenthal and 38 other senators signed a letter to the Appropriations Committee opining that contractor workers and by extension their families "should not be penalized for a government shutdown that they did nothing to cause" while noting that there were bills in both chambers of Congress that would provide back pay to compensate contractor employees for lost wages, urging the Appropriations Committee "to include back pay for contractor employees in a supplemental appropriations bill for FY2019 or as part of the regular appropriations process for FY2020."
Infrastructure
In June 2019, Blumenthal was one of eight senators to sponsor the Made in America Act, legislation that would mandate that federal programs that had funded infrastructure projects not currently subject to Buy America standards use domestically produced materials. Bill cosponsor Tammy Baldwin said the bill would strengthen Buy America requirements and that she was hopeful both Democrats and Republicans would support "this effort to make sure our government is buying American products and supporting American workers."
Maternal mortality
In May 2019, Blumenthal was one of six senators to cosponsor the Healthy MOMMIES Act, legislation that would expand Medicaid coverage in an attempt to provide comprehensive prenatal, labor and postpartum care with an extension of the Medicaid pregnancy pathway from 60 days to a full year following birth to assure new mothers access to services unrelated to pregnancy. The bill also directed Medicaid and the Children's Health Insurance Program's Payment and Access Commission to report its data regarding doula care coverage under state Medicaid programs and develop strategies aimed at improving access to doula care.
Net neutrality
In May 2014, days before the FCC was scheduled to rewrite its net neutrality rules, Blumenthal was one of 11 senators to sign a letter to FCC Chairman Tom Wheeler charging Wheeler's proposal with destroying net neutrality instead of preserving it and urging the FCC to "consider reclassifying Internet providers to make them more like traditional phone companies, over which the agency has clear authority to regulate more broadly."
In March 2018, Blumenthal was one of ten senators to sign a letter spearheaded by Jeff Merkley lambasting a proposal from FCC Chairman Ajit Pai that would curb the scope of benefits from the Lifeline program during a period when roughly 6.5 million people in poor communities relied on Lifeline for access to high-speed internet, claiming that it was Pai's "obligation to the American public, as the Chairman of the Federal Communications Commission, to improve the Lifeline program and ensure that more Americans can afford access, and have means of access, to broadband and phone service." The senators also advocated ensuring that "Lifeline reaches more Americans in need of access to communication services."
Abortion
Blumenthal is pro-choice. He supports efforts to make it a crime for demonstrators to block access to health clinics. He opposed efforts by Walmart to ban the sale of emergency contraception, supports requirements that pharmacies fill birth control prescriptions, and supports federal funding for family planning clinics.
Immigration
In August 2018, Blumenthal was one of 17 senators to sign a letter spearheaded by Kamala Harris to United States Secretary of Homeland Security Kirstjen Nielsen demanding that the Trump administration take immediate action in attempting to reunite 539 migrant children with their families, citing each passing day of inaction as intensifying "trauma that this administration has needlessly caused for children and their families seeking humanitarian protection."
In April 2019, Blumenthal was one of six Democratic senators to sign a letter to Acting Defense Secretary Patrick M. Shanahan expressing concern over memos by Marine Corps General Robert Neller in which Neller critiqued deployments to the southern border and funding transfers under Trump's national emergency declaration as having posed an "unacceptable risk to Marine Corps combat readiness and solvency" and noted that other military officials had recently stated that troop deployment did not affect readiness. The senators requested Shanahan explain the inconsistencies and that he provide both "a staff-level briefing on this matter within seven days" and an explanation of how he would address Neller's concerns.
In June 2019, following the Housing and Urban Development Department's confirmation that DACA recipients were not eligible for federally backed loans, Blumenthal and 11 other senators introduced the Home Ownership Dreamers Act, legislation providing that the federal government could not deny mortgage loans backed by the Federal Housing Administration, Fannie Mae, Freddie Mac, or the Agriculture Department solely on the basis of an applicant's immigration status.
In June 2019, Blumenthal and six other Democratic senators led by Brian Schatz sent letters to the Government Accountability Office along with the suspension and debarment official and inspector general at the US Department of Health and Human Services citing recent reports that showed "significant evidence that some federal contractors and grantees have not provided adequate accommodations for children in line with legal and contractual requirements" and urging government officials to determine whether federal contractors and grantees were in violation of contractual obligations or federal regulations and should thus face financial consequences.
In July 2019, following reports that the Trump administration intended to end protections of spouses, parents and children of active-duty service members from deportation, Blumenthal was one of 22 senators to sign a letter led by Tammy Duckworth arguing that the program allowed service members the ability "to fight for the United States overseas and not worry that their spouse, children, or parents will be deported while they are away" and that the program's termination would cause personal hardship for service members in combat.
In July 2019, Blumenthal and 15 other Senate Democrats introduced the Protecting Sensitive Locations Act, which mandated that ICE agents get approval from a supervisor before engaging in enforcement actions at sensitive locations except in special circumstances and that agents receive annual training in addition to being required to annually report enforcement actions in those locations.
Central America
In April 2019, Blumenthal was one of 34 senators to sign a letter to Trump encouraging him "to listen to members of your own Administration and reverse a decision that will damage our national security and aggravate conditions inside Central America", asserting that Trump had "consistently expressed a flawed understanding of U.S. foreign assistance" since becoming president and that he was "personally undermining efforts to promote U.S. national security and economic prosperity" by preventing the use of Fiscal Year 2018 national security funding. The senators argued that foreign assistance to Central American countries created less migration to the U.S., citing the funding's help in improving conditions in those countries.
China
In April 2018, Blumenthal stated his support for "strong efforts to crack down on intellectual property theft and unfair trade practices by China or any other nation", but said that Trump was implementing "trade policy by tweet, reaction based on impulse and rash rhetoric that can only escalate tensions with all economic powers and lead to a trade war" and that U.S. actions through trade without a strategy or an endgame seemed "highly dangerous" to the American economy.
In June 2018, Blumenthal cosponsored a bipartisan bill that would reinstate penalties on ZTE for export control violations in addition to barring American government agencies from either purchasing or leasing equipment or services from ZTE or Huawei. The bill was offered as an amendment to the National Defense Authorization Act for 2019 and was in direct contrast to the Trump administration's announced intent to ease sanctions on ZTE.
In August 2018, Blumenthal and 16 other lawmakers urged the Trump administration to impose sanctions under the Global Magnitsky Act against Chinese officials who are responsible for human rights abuses against the Uyghur Muslim minority in the Xinjiang region. They wrote, "The detention of as many as a million or more Uyghurs and other predominantly Muslim ethnic minorities in 'political reeducation' centers or camps requires a tough, targeted, and global response."
In May 2019, Blumenthal was a cosponsor of the South China Sea and East China Sea Sanctions Act, a bipartisan bill reintroduced by Marco Rubio and Ben Cardin that was intended to disrupt China's consolidation or expansion of its claims of jurisdiction over both the sea and air space in disputed zones in the South China Sea.
In July 2019, Blumenthal was a cosponsor of the Defending America's 5G Future Act, a bill that would prevent Huawei from being removed from the "entity list" of the Commerce Department without an act of Congress and authorize Congress to block administration waivers for U.S. companies to do business with Huawei. The bill would also codify Trump's executive order from the previous May that empowered his administration to block foreign tech companies deemed a national security threat from conducting business in the United States.
Middle East
In March 2017, Blumenthal co-sponsored the Israel Anti-Boycott Act (S.720), which would have made it a federal crime, punishable by a maximum sentence of 20 years imprisonment, for Americans to encourage or participate in boycotts against Israel and Israeli settlements in the occupied Palestinian territories in protest of actions by the Israeli government.
In March 2019, Blumenthal was one of nine Democratic senators to sign a letter to King Salman of Saudi Arabia requesting the release of human rights lawyer Waleed Abu al-Khair and writer Raif Badawi, women's rights activists Loujain al-Hathloul and Samar Badawi, and Dr. Walid Fitaihi. The senators wrote, "Not only have reputable international organizations detailed the arbitrary detention of peaceful activists and dissidents without trial for long periods, but the systematic discrimination against women, religious minorities and mistreatment of migrant workers and others has also been well-documented."
Railroad safety
In June 2019, Blumenthal was one of ten senators to cosponsor the Safe Freight Act, a bill that would mandate all freight trains have one or more certified conductors and one certified engineer on board who can collaborate on how to protect both the train and people living near the tracks. The legislation was meant to correct a rollback of the Federal Railroad Administration on a proposed rule intended to establish safety standards.
SafeSport
In February 2022, on a Nightline program on criticisms of the United States Center for SafeSport titled "Sports misconduct watchdog faces crisis of confidence", Blumenthal said: "There is simply no way that SafeSport can be given a passing grade", that "these young athletes deserve better protection", and that SafeSport does not have his confidence and trust. He and Senator Jerry Moran said that they believe that more transparency is required from SafeSport, which does not make its investigative findings or arbitration decisions public, to protect young athletes, and that SafeSport must make its work public.
Special Counsel investigation
In March 2019, after Attorney General William Barr released a summary of the Mueller report, Blumenthal said the issue was about "obstruction of justice, no exoneration there, and the judgment by William Barr may have been completely improper" and that he did not "deeply respect and trust the Barr summary, which was designed to frame the message before the information was available." After the Justice Department publicly released the redacted version of the report the following month, Blumenthal said, "What's demonstrated in powerful and compelling detail in this report is nothing less than a national scandal. This report is far from the end of the inquiry that this country needs and deserves. It is the beginning of another chapter."
In April 2019, Blumenthal was one of 12 Democratic senators to sign a letter led by Mazie Hirono that questioned Barr's decision to offer "his own conclusion that the President’s conduct did not amount to obstruction of justice" and called for both the Justice Department's inspector general and the Office of Professional Responsibility to launch an investigation into whether Barr's summary of the Mueller report and his April 18 news conference were misleading.
Telecommunications
In April 2019, Blumenthal was one of seven senators to sponsor the Digital Equity Act of 2019, legislation establishing a $120 million grant program that would fund both the creation and implementation of "comprehensive digital equity plans" in each U.S. state along with providing a $120 million grant program to give support toward projects developed by individuals and groups. The bill also gave the National Telecommunications and Information Administration (NTIA) the role of evaluating and providing guidance toward digital equity projects.
Personal life
On June 27, 1982, Blumenthal married Cynthia Malkin, to whom he had become engaged during her senior year at Harvard. She is the daughter of Peter L. Malkin and maternal granddaughter of Lawrence Wien. They have four children. Their son Matt Blumenthal was elected to the Connecticut House of Representatives from the 147th district in 2018.
Blumenthal's wealth exceeds $100 million, making him one of the richest members of the Senate. His family's net worth is derived largely from his wife; the Malkins are influential real estate developers and property managers with holdings including an ownership stake in the Empire State Building.
Electoral history
Connecticut Legislature
Connecticut Attorney General
U.S. Senator
See also
List of Jewish members of the United States Congress
References
Further reading
Nerozzi, Timothy (December 14, 2021). "Sen. Blumenthal speaks at Communist award show, touts 'Build Back Better'". Fox News.
Altimari, Dave and Mahony, Edmund (January 30, 2010). Computer Firm Owner Awarded $18 Million In Countersuit Against State. Courant.com. Retrieved February 7, 2010.
Mosher, James (December 27, 2009). Don't outlaw our stoves, Eastern Connecticut farmers urge, Attorney general: Burning wood outside pollutes air. NorwichBulletin.com. Retrieved January 6, 2010.
Pesci, Donald (December 10, 2009). Blumenthal: worst Attorney General in U.S. RegisterCitizen.com. Retrieved January 6, 2010.
Baue, William (July 9, 2002). Connecticut Fights to Keep Stanley Works from Disappearing to Bermuda. Socialfunds.com. Retrieved September 5, 2004.
Connecticut Attorney General's Office (August 14, 1997). Governor, Attorney General Urge Tighter Restrictions on Air Pollution. Press release. Retrieved September 5, 2004.
Connecticut Attorney General's Office (October 15, 2001). Attorney General Submits Comments To FERC Opposing Formation Of Regional Transmission Organization. Press release. Retrieved September 5, 2004.
Connecticut Attorney General's Office (May 10, 2002). Lawsuit Filed By Blumenthal, Nappier Brings Halt To Stanley Works' Reincorporation Plans. Press release. Retrieved September 5, 2004.
Connecticut Attorney General's Office (June 3, 2002). Attorney General Asks SEC To Investigate Stanley Works Vote. Press release. Retrieved September 5, 2004.
Connecticut Attorney General's Office (September 30, 2003). Blumenthal, New England AGs And Consumer Advocates Warn That Proposed RTO Will Raise Rates, Without Consumer Benefit. Press release. Retrieved September 5, 2004.
Connecticut Attorney General's Office (October 27, 2003). Connecticut and 11 Other States File Suit to Prevent Weakening of the Clean Air Act. Press release. Retrieved September 5, 2004.
Patrick, Mike (October 10, 2003). Law School lauds Blumenthal with public service award. QUDaily. Retrieved September 5, 2004.
Sorry, Stanley (editorial) (May 9, 2003). The Wall Street Journal, cited from the article at The Center for Freedom and Prosperity. Retrieved September 5, 2004.
Peterson, Paul; White, David; Doolittle, Nick; & Roschelle, Amy of Synapse Energy Economics, Inc. (September 29, 2003). FERC's Transmission Pricing Policy: New England Cost Impacts. Report commissioned by Connecticut Attorney General's Office.
Subcommittee on Select Revenue Measures of the House Committee on Ways and Means (June 6, 2002). Statement of the Hon. Richard Blumenthal, Attorney General, Connecticut Attorney General's Office. Retrieved September 5, 2004.
Subcommittee on Select Revenue Measures of the House Committee on Ways and Means (June 25, 2002). Statement of the Hon. Richard Blumenthal, Attorney General, Connecticut Attorney General's Office, Hearing on Corporate Inversions. Retrieved September 5, 2004.
Plotz, David (September 15, 2000). "Richard Blumenthal: He was supposed to be president. So why is he only Connecticut's attorney general?". Slate.com. Retrieved January 6, 2010.
Titus, Elizabeth (January 13, 2013). "Blumenthal predicts Hagel will be confirmed". Politico. Re: Chuck Hagel's nomination as US Secretary of Defense; Blumenthal's seat on Armed Services noted; Blumenthal spoke on Fox News Sunday.
External links
Senator Richard Blumenthal official U.S. Senate website
Blumenthal for Senate
1946 births
20th-century American politicians
21st-century American politicians
Alumni of Trinity College, Cambridge
Jewish American military personnel
American people of German-Jewish descent
American prosecutors
Connecticut Attorneys General
Connecticut Democrats
Connecticut state senators
Democratic Party United States senators from Connecticut
The Harvard Crimson people
Jewish American state legislators in Connecticut
Jewish United States senators
Journalists from New York City
Law clerks of the Supreme Court of the United States
Living people
Members of the Connecticut House of Representatives
Military personnel from Connecticut
People from Brooklyn
People from Greenwich, Connecticut
United States Attorneys for the District of Connecticut
United States Marine Corps reservists
United States Marines
United States senators from Connecticut
Yale Law School alumni
Harvard College alumni
Riverdale Country School alumni
Wien family
21st-century American Jews
iMessage
iMessage is an instant messaging service developed by Apple Inc. and launched in 2011. iMessage functions exclusively on Apple platforms: macOS, iOS, iPadOS, and watchOS.
Core features of iMessage, available on all supported platforms, include sending text messages, images, videos, and documents; getting delivery and read statuses (read receipts); and end-to-end encryption, so that only the sender and recipient, and no one else (including Apple itself), can read the messages. The service also allows sending location data and stickers. On iOS and iPadOS, third-party developers can extend iMessage capabilities with custom extensions; one example is quick sharing of recently played songs.
Launched on iOS in 2011, iMessage arrived on macOS (then called OS X) in 2012. In 2020, Apple announced an entirely redesigned version of the macOS Messages app which adds some of the features previously unavailable on the Mac, including location sharing and message effects.
History
iMessage was announced by Scott Forstall at the WWDC 2011 keynote on June 6, 2011. A version of the Messages app for iOS with support for iMessage was included in the iOS 5 update on October 12, 2011. On February 16, 2012, Apple announced that a new Messages app replacing iChat would be part of OS X Mountain Lion. Mountain Lion was released on July 25, 2012.
On October 23, 2012, Apple CEO Tim Cook announced that Apple device users had sent 300 billion messages using iMessage and that Apple was delivering an average of 28,000 messages per second. In February 2016, Eddy Cue announced that the number of iMessages sent per second had grown to 200,000.
In May 2014, a lawsuit was filed against Apple over an issue in which, if a user switched from an Apple device to a non-Apple device, messages sent to them through iMessage would not reach their destination. In November 2014, Apple addressed the problem by providing instructions and an online tool to deregister iMessage. A federal court dismissed the suit in Apple's favor.
On March 21, 2016, a group of researchers from Johns Hopkins University published a report in which they demonstrated that an attacker in possession of iMessage ciphertexts could potentially decrypt photos and videos that had been sent via the service. The researchers published their findings after the vulnerability had been patched by Apple.
On June 13, 2016, Apple announced the addition of apps to the iMessage service, accessible via the Messages app. Apps can create and share content, add stickers, make payments, and more within iMessage conversations without having to switch to standalone apps. Developers can build standalone iMessage apps or extensions to existing iOS apps, and publishers can create standalone sticker apps without writing any code. According to Sensor Tower, as of March 2017 the iMessage App Store featured nearly 5,000 Message-enabled apps.
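Such apps are built on Apple's Messages framework. The following is a minimal sketch of an iMessage app extension that stages a song-sharing bubble in the active conversation; the Messages framework types are real, but the class name, method, asset name, and payload values are illustrative, not Apple sample code.

import Messages
import UIKit

// Minimal sketch of an iMessage app extension (iOS 10+). The
// Messages framework classes are real; the song payload is invented.
class SongShareViewController: MSMessagesAppViewController {

    // Build a message bubble and stage it in the conversation's
    // input field; the user still has to tap send.
    func shareSong(title: String, artist: String) {
        guard let conversation = activeConversation else { return }

        let layout = MSMessageTemplateLayout()
        layout.image = UIImage(named: "AlbumArtPlaceholder") // hypothetical asset
        layout.caption = title
        layout.subcaption = artist

        let message = MSMessage()
        message.layout = layout
        message.summaryText = "\(title) by \(artist)"

        conversation.insert(message) { error in
            if let error = error {
                print("Could not insert message: \(error)")
            }
        }
    }
}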
At the WWDC 2020 keynote on June 22, 2020, Apple previewed the next version of its macOS operating system, planned for release in late 2020. The new version, macOS Big Sur, ships with a redesigned version of Messages with features previously available only on iOS devices, such as message effects, memojis, stickers and location sharing.
Features
iMessage allows users to send texts, documents, photos, videos, contact information, and group messages over the Internet to other iOS or macOS users, serving as an alternative to SMS and MMS messaging for most users with devices running iOS 5 or later. The "Send as SMS" setting under Messages will cause the message to be sent via SMS if the sender does not have an active Internet connection. If the receiver has no Internet connection, the message is stored on a server until a connection is restored.
iMessage is accessible through the Messages app on an iPhone, iPad or iPod touch running iOS 5 or later, or on a Mac running OS X Mountain Lion or later. Owners of these devices can register one or more email addresses with Apple. Additionally, iPhone owners can register their phone numbers with Apple, provided their carrier is supported. When a message is sent to a mobile number, Messages will check with Apple if the mobile number is set up for iMessage. If it is not, the message will seamlessly transition from iMessage to SMS.
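This registration lookup, together with the "Send as SMS" setting described above, amounts to a simple routing rule. Below is a rough Swift sketch of that decision logic; every name in it is hypothetical, since Apple's actual implementation is private to the Messages app.

// Hypothetical sketch of the iMessage-vs-SMS routing decision
// described above; none of these names correspond to Apple API.
enum Transport {
    case iMessage       // delivered over the Internet via Apple
    case sms            // delivered over the carrier network
    case undeliverable
}

func selectTransport(recipientRegistered: Bool,  // result of Apple's lookup
                     senderOnline: Bool,
                     sendAsSMSEnabled: Bool) -> Transport {
    if recipientRegistered {
        // If only the recipient is offline, the message waits on
        // Apple's servers until their connection is restored.
        if senderOnline { return .iMessage }
        // Sender offline: the "Send as SMS" setting allows fallback.
        return sendAsSMSEnabled ? .sms : .undeliverable
    }
    // Number not set up for iMessage: send as a plain SMS.
    return .sms
}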
In Messages, the user's sent communication is aligned to the right, with replies from other people on the left. A user can see if the other iMessage user is typing a message. A pale gray ellipsis appears in the text bubble of the other user when a reply is started. It is also possible to start a conversation on one iOS device and continue it on another. On iPhones, green buttons and text bubbles indicate SMS-based communication; on all iOS devices, blue buttons and text bubbles indicate iMessage communication.
All iMessages are encrypted and can be tracked using delivery receipts. If the recipient enables Read Receipts, the sender will be able to see when the recipient has read the message. iMessage also allows users to set up chats with more than two people—a "group chat".
With the launch of iOS 10, users can send messages accompanied by a range of "bubble" or "screen" effects. Holding down the send button with a force press surfaces the range of effects, letting the user select one to send to the receiver.
With the launches of iOS 14 and macOS 11 Big Sur, users gained features such as the ability to pin individual conversations, mention other users, set an image for group conversations, and send inline replies. Additionally, more of the features from the Messages app on iOS and iPadOS were ported over to their macOS counterpart.
Technology
The iMessage protocol is based on the Apple Push Notification service (APNs), a proprietary, binary protocol. It sets up a keep-alive connection with the Apple servers. Every connection has its own unique code, which acts as an identifier for the route that should be used to send a message to a specific device. The connection is encrypted with TLS using a client-side certificate that is requested by the device when iMessage is activated.
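The details of Apple's wire protocol are proprietary, but the TLS-with-client-certificate setup described above can be illustrated in outline. The following C sketch uses OpenSSL to open a TLS connection authenticated by a client-side certificate; the hostname, port, and file names are placeholders, not Apple's actual endpoints or credential formats.

```c
#include <stdio.h>
#include <openssl/ssl.h>
#include <openssl/err.h>

int main(void) {
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
    if (ctx == NULL) return 1;

    /* Present a client-side certificate, analogous to the one the
       device obtains when the messaging service is activated. */
    SSL_CTX_use_certificate_file(ctx, "device-cert.pem", SSL_FILETYPE_PEM);
    SSL_CTX_use_PrivateKey_file(ctx, "device-key.pem", SSL_FILETYPE_PEM);

    BIO *bio = BIO_new_ssl_connect(ctx);
    SSL *ssl = NULL;
    BIO_get_ssl(bio, &ssl);
    SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY);

    /* Placeholder endpoint; a push service keeps this single connection
       open and routes all notifications for the device over it. */
    BIO_set_conn_hostname(bio, "push.example.com:443");
    if (BIO_do_connect(bio) <= 0) {
        ERR_print_errors_fp(stderr);
        return 1;
    }
    /* ... exchange keep-alive and message frames over `bio` ... */
    BIO_free_all(bio);
    SSL_CTX_free(ctx);
    return 0;
}
```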
Platforms
iMessage is available only on Apple operating systems: iOS, iPadOS, macOS, and watchOS. Unlike some other messaging apps, it is not compatible with Android or Microsoft Windows and has no web access or interface, so iMessage must be accessed through the app on a device running an Apple operating system.
Unofficial platforms
iMessage is only officially supported on Apple devices, but many apps exist that forward iMessages to devices that don't run Apple's operating system. The iMessage forwarding apps achieve this by creating an iMessage server on an iOS or macOS device that forwards the messages to a client on any other device, including Android, Windows, and Linux machines. The apps that use an iOS device as a server require the device to be jailbroken.
On November 23, 2012, Beast Soft released the first version of their Remote Messages jailbreak tweak for iOS 5. Remote Messages created an iMessage and SMS server on the iOS device that could be accessed by any other internet-enabled device through a web app. Remote Messages had the ability to send any attachments from the client device, as well as sending photos from the iOS server device through the web app. Beast Soft would continue to update Remote Messages through October 2015, supporting all iOS versions from iOS 5 through iOS 9.
On May 3, 2016, an independent open-source project named "PieMessage" was announced by app developer Eric Chee, consisting of code for OS X that communicates with iMessage and connects to an Android client, allowing the Android client to send and receive messages.
On October 16, 2017, following inactivity from Beast Soft as well as a monetary bounty requesting an iMessage tweak compatible with iOS 10, SparkDev released AirMessage. AirMessage was similar to Remote Messages in that the client was accessed through a web app; however, it was more limited in features and, unlike Remote Messages, did not support sending attachments. AirMessage also did not add support for any of the new iMessage features of iOS 10, such as tapback reactions or screen effects. AirMessage was updated through June 2020, ending with support for iOS 10 through iOS 13.
On December 10, 2017, 16-year-old developer Roman Scott released weMessage, the first publicly available Android app that forwarded iMessages from a macOS server device to an Android client. Scott released two substantial updates to weMessage, the first of which added iMessage screen effects and bug fixes and the second of which added SMS and MMS support, as well as fixes for contact syncing and server management. On November 11, 2018, citing his inability to spend more time on the project, Scott open-sourced weMessage.
On February 22, 2019, independent developer Cole Feuer released the AirMessage app for Android. Feuer's AirMessage coincidentally shares a name with SparkDev's iOS tweak, but AirMessage for Android is not in any way related to the AirMessage jailbreak tweak. AirMessage for Android includes code for a server running on OS X Yosemite and higher, and an Android client that runs on Android 6 and higher that can send and receive iMessages. Like weMessage, AirMessage has support for displaying, but not sending, screen effects, and AirMessage also has the ability to display tapback messages and send tapback notifications. In January 2020, Feuer released an update that added SMS and MMS capabilities, as well as web link previews, a photo gallery viewer, and the ability to send a location message.
On August 15, 2020, Ian Welker released SMServer as a free and open-source iOS jailbreak tweak for iOS 13 that uses a web app client. Welker maintains an API on his GitHub page with extensive documentation on how to use the IMCore and ChatKit libraries. SMServer was the first app to support iOS 14 and macOS Big Sur features of iMessage, such as group chat photos and displaying pinned conversations. It was also the first app to support remote sending of tapback messages and subject line text.
On August 21, 2020, Eric Rabil released a video showcasing his upcoming server and web app, MyMessage. MyMessage was the first app to showcase support for sending tapback messages and receiving digital touch and handwritten messages, which Rabil claimed to have achieved by writing code that directly communicated with the iMessage service rather than using AppleScript and reading the database. MyMessage is the only app to run its server on both macOS and iOS, but as of February 2021, only the server component of MyMessage has been released, with the web app frontend still in development for stability.
From August 2020 through October 2020, a free and open-source project called BlueBubbles was publicly released. BlueBubbles was built to address some of the difficulties and limitations of AirMessage for Android, such as the fact that AirMessage was closed source, required port forwarding, and had no native apps for operating systems such as Windows or Linux. BlueBubbles requires a server running macOS High Sierra or higher, and like AirMessage, it has some limitations on macOS Big Sur. In November and December 2020, BlueBubbles added the ability to send and receive typing indicators from the Android app, as well as the ability to send read receipts and tapback messages from Android.
On January 29, 2021, Aziz Hasanain released a free and open-source jailbreak tweak called WebMessage for iOS 12 through iOS 14. Hasanain used Welker's documentation of the IMCore and ChatKit libraries to assist his development of WebMessage, which is the first jailbreak tweak to use a downloaded app as the client instead of a web app.
Reception
On November 12, 2012, Chetan Sharma, a technology and strategy consulting firm, published the US Mobile Data Market Update Q3 2012, noting the decline of text messaging in the United States, and suggested the decline may be attributed to Americans using alternative free messaging services such as iMessage.
In 2017, Google announced it would compete with iMessage with its own messaging service, Android Messages.
Security and privacy
On November 4, 2014, the Electronic Frontier Foundation (EFF) listed iMessage on its "Secure Messaging Scorecard", giving it a score of 5 out of 7 points. It received points for having communications encrypted in transit, having communications encrypted with keys the provider doesn't have access to (end-to-end encryption), having past communications secure if the keys are stolen (forward secrecy), having its security designs well-documented, and having a recent independent security audit. It missed points because users cannot verify contacts' identities and because the source code is not open to independent review. In September 2015, Matthew Green noted that, because iMessage does not display key fingerprints for out-of-band verification, users are unable to verify that a man-in-the-middle attack has not occurred. The post also noted that iMessage uses RSA key exchange, meaning that, contrary to the EFF scorecard's claim, iMessage does not feature forward secrecy.
On August 7, 2019, researchers from Project Zero presented six "interaction-less" exploits in iMessage that could be used to take over control of a user's device. The six exploits were fixed in iOS 12.4, released on July 22, 2019; at the time, some related exploits remained undisclosed, to be patched in a future update.
The Project Pegasus revelations in July 2021 found that the Pegasus spyware used iMessage exploits.
See also
FaceTime, Apple's videotelephony service which also uses APNs
Signal (software), an end-to-end encrypted messenger with forward secrecy, available for the same platforms on which iMessage runs
Threema
WhatsApp
Facebook Messenger
WeChat
Line
Skype
Snapchat
References
Further reading
Instant messaging protocols |
29660380 | https://en.wikipedia.org/wiki/Sigil%20%28application%29 | Sigil (application) | Sigil is free, open-source editing software for e-books in the EPUB format.
As a cross-platform application, Sigil is distributed for the Windows, macOS, Haiku and Linux platforms under the GNU GPL license. Sigil supports code-based editing of EPUB files, as well as the import of HTML and plain text files. A companion application, PageEdit, allows WYSIWYG editing of EPUB files.
Sigil has been developed by Strahinja Val Marković and others since 2009. From July 2011 to June 2015 John Schember was the lead developer. In June 2015 development of Sigil was taken over by Kevin Hendricks and Doug Massay.
Features
Sigil's features include:
Full UTF-16 and EPUB 2 specification support
Multiple views: book, code and preview view
Table of contents generator with multi-level heading support
Metadata editor with full support for all metadata entries
Hunspell based spell checking with default and user configurable dictionaries
Full regular expression (PCRE) support for find and replace
Supports import of EPUB and HTML files, images, and style sheets
Integrated API to external HTML and graphics editors
FlightCrew validator for EPUB standard compliance validation (separate plugin)
Sigil has full EPUB 2 specification support, but only limited EPUB 3 support. Since version 0.9.3 of January 2016, the developers have been focusing on "improving Sigil's ability to work with and generate epub3 ebooks without losing any of its epub2 capabilities".
WYSIWYG editing in book view was discontinued in 2019 and moved to a separate application, PageEdit.
See also
Calibre (software)
List of free and open-source software packages
References
External links
Old code repository
Sigil development blog
Ebook editing with Sigil LWN.net, 2011
EPUB readers
Free application software
Cross-platform free software
MacOS text-related software
Windows text-related software
Linux text-related software
Typesetting software
Desktop publishing software
Software that uses Qt |
28518879 | https://en.wikipedia.org/wiki/Datalight | Datalight | Datalight was a privately held software company specializing in power failsafe and high performance software for preserving data integrity in embedded systems. The company was founded in 1983 by Roy Sherrill, and is headquartered in Bothell, Washington. As of 2019 the company is a subsidiary of Tuxera under the name of Tuxera US Inc.
Overview and history
Datalight was founded in 1983 by Roy Sherrill, a former Boeing engineer. Datalight's initial products were two DOS applications: the Datalight Small-C compiler and the Datalight C-Bug debugger. A full C compiler named Datalight C was available from Datalight between 1987 and 1993; Datalight C, developed by Walter Bright, evolved into Zortech C and is now Digital Mars C. Datalight C was also developed into an optimizing compiler called Datalight Optimum-C, which later became Zortech C++, the first native C++ compiler. In 1988, Datalight released C_thru_ROM, which provided embeddable C functions and C start-up code, allowing programs developed on DOS to run as standalone applications without DOS dependence. In 1989, ROM-DOS 1.0 was released.
CardTrick was announced in 1993 to support the flash memory being built into PCMCIA cards. CardTrick later evolved into the embedded flash memory manager FlashFX in 1995, moving Datalight into the raw flash memory market. The company grew rapidly in the late 1990s, receiving the WA Fast 50 award for the fastest growing companies in Washington state in 1997 and 1998.
The first of four patents to eventually be assigned to Datalight, "Method and apparatus for allocating storage in a flash memory", was awarded in 1999, followed up with an additional FlashFX-related patent, "Method and system for managing bad areas in flash memory", in 2001.
In 2003, Reliance, a reliable transactional embedded file system, was released; a related patent, "Reliable file system and method of providing the same", was awarded in 2007.
In 2013, another file system related patent, "Method and Apparatus for Fault-tolerant Memory Management" was issued.
In 2009, Datalight released FlashFX Tera to support the growing size and complexity of NAND flash arrays. That same year, Reliance Nitro was released, building upon Reliance and adding a tree-based architecture to improve performance for large files (>100 MB) and large numbers of files.
In June 2019, the Finnish storage software and networking technology company Tuxera signed an agreement to acquire Datalight.
Products
Reliance family
Reliance
First released in 2003, Reliance is an embedded file system designed for applications with high reliability requirements. Key features:
Provides immunity to file corruption, including after unexpected system interruption (e.g., power loss), via atomic transaction points.
Does not need to check disk integrity at start-up, meaning a shorter boot time.
Dynamic file system configuration for performance optimization.
Full data-exchangeability with Microsoft Windows, via the Reliance Windows Driver.
Reliance has a maximum volume size of 2 TB and a maximum file size of 4 GB.
Reliance Nitro
Released in 2009, Reliance Nitro is a file system developed from Reliance; it improved on the performance of original Reliance, primarily by adding a tree-based directory architecture facilitating faster look-ups. The maximum volume size on Reliance Nitro is 32 TB; maximum file size is constrained only by free space.
Reliance Windows Driver
Datalight provides Windows drivers for both Reliance (Reliance Windows Driver; RWD) and Reliance Nitro (Reliance Nitro Windows Driver; RNWD); they provide exchangeability between Reliance-formatted media and Microsoft Windows. Both support Windows Vista and Windows XP; an older version of RWD supports Windows 2000. The drivers are bundled with tools to format media and a utility to check file system integrity.
FlashFX
Introduced in 1995, FlashFX is a flash media manager which allows applications to access flash memory as if it were a hard drive, abstracting the complexity of flash media. FlashFX operates on either NAND or NOR flash and supports numerous flash devices. It can be used with any file system.
Versions:
FlashFX Pro: Supports around 200 flash chip part numbers and flash arrays up to 2 GB. Has pre-ported versions for Windows CE, VxWorks, Nucleus PLUS, and ThreadX. FlashFX Pro is available for Windows Mobile (FlashFX Tera is not).
FlashFX Tera: Supports around 300 flash chip part numbers and flash arrays up to 2 TB. Has pre-ported versions for Linux, Windows CE, and VxWorks. FlashFX Tera supports MLC NAND flash, while FlashFX Pro does not; another improvement is Tera's error correction, which is more robust than Pro's.
Products using FlashFX include Arcom's PC/104 computer, Curtis-Wright's Continuum Software Architecture, Teltronic's HTT-500 handset, and MCSI's PROMDISK disk emulator.
XCFiles
XCFiles, released in June 2010, is an exFAT-compatible file system aimed at consumer devices. It allows embedded systems to support SDXC, the SD Card Association standard for extended capacity storage cards. Marketed as "independent of the target platform", XCFiles is intended to be portable to any 32-bit platform which meets certain requirements (such as supporting semaphores and unsigned 64-bit integers).
XCFiles is marketed in Japan as 'exFiles' by A.I. Corporation; it was released there in April 2009.
ROM-DOS
ROM-DOS (sometimes called Datalight DOS) was introduced in 1989 as an MS-DOS compatible operating system designed for embedded systems. It includes backward compatibility build options allowing compatibility with specific versions of MS-DOS (e.g., DOS 5.01). ROM-DOS 7.1 added support for FAT32 and long file names. ROM-DOS includes a compact TCP/IP stack; and SOCKETS, a network socket API and connectivity package, is available as an optional add-on for ROM-DOS. The SDK comes with Borland C/C++ and Turbo Assembler.
System requirements:
Intel 80186 or compatible
10 KB of RAM
54–72 KB of ROM or disk space (depending on version)
Some devices which use or used ROM-DOS are the Canon PowerShot Pro70, Advantech's ADAM-4500, the Percon Falcon 325, several early PDAs (Tandy Zoomer, IBM Simon, HP OmniGo 100/120, Nokia 9000/9000i/9110/9110i), Casio Algebra FX series graphing calculators, MCSI's PROMDISK, and Arcom's PC/104 computer. Intel's Advanced RAID Configuration Utility (ARCU) is based on ROM-DOS, and, as of 2004, all Intel server board System Resource CDs included ROM-DOS. Symbol's FMT 3000 came with a copy of ROM-DOS.
Commands
The following list of commands is supported by ROM-DOS.
ATTRIB
BACKUP
BREAK
CALL
CD
CHDIR
CHKDSK
CHOICE
CLS
COMM
COMMAND
COPY
CTTY
DATE
DEL
DELTREE
DIR
DISK2IMG
DISKCOMP
DISKCOPY
DUMP
ECHO
EMM386
ERASE
EXE2BIN
EXIT
FDISK
FIND
FOR
FORMAT
GOTO
HELP
IF
KEYB
LABEL
LFNFOR
LOADHIGH
MD
MEM
MINICMD.COM
MKDIR
MODE
MORE
MOVE
MSCDEX
NED
PATH
PAUSE
POWER
PRINT
PROMPT
PROTO
RD
REM
REMDISK
REMQUIT
REMSERV
REN
RESTORE
RMDIR
RSZ
SERLINK
SERSERV
SET
SHARE
SHIFT
SMARTDRV
SORT
SUBST
SYS
TIME
TRANSFER
TREE
TRUENAME
TYPE
UMBLINK
VER
VERIFY
VOL
XCOPY
References
External links
2019 mergers and acquisitions
Computer storage companies
Defunct software companies of the United States
Software companies based in Washington (state)
Software companies established in 1983
Software companies disestablished in 2019
1983 establishments in Washington (state)
2019 disestablishments in Washington (state)
Software companies of the United States |
15810462 | https://en.wikipedia.org/wiki/Johnny%20Long | Johnny Long | Johnny Long, otherwise known as "j0hnny" or "j0hnnyhax", is a computer security expert, author, and public speaker in the United States.
Long is well known for his background in Google hacking, a process by which vulnerable servers on the Internet can be identified through specially constructed Google searches. He has gained fame as a prolific author and editor of numerous computer security books.
Career in computer security
Early in his career, in 1996, Long joined Computer Sciences Corporation and formed the corporation's vulnerability assessment team known as Strike Force. Following a short position at Ciphent as their chief scientist, Long now dedicates his time to the Hackers for Charity organization. He continues to provide talks at many well-publicized security events around the world. In recent years, Long has become a regular speaker at many annual security conferences including DEF CON, the Black Hat Briefings, ShmooCon, and Microsoft's BlueHat internal security conferences. Recently, his efforts to start the Hackers for Charity movement have gained notable press attention. His talks have ranged from Google hacking to how Hollywood portrays hackers in film.
Google hacking
Through his work with CSC's Strike Force, Johnny was an early pioneer in the field of Google hacking. Through specially crafted search queries it was possible to locate servers on the Internet running vulnerable software. It was equally possible to locate servers that held no security and were openly sharing personal identifiable information such as Social Security numbers and credit card numbers. These efforts grew into the creation of the Google Hacking Database, through which hundreds of Google hacking search terms are stored. The field of Google hacking has evolved over time to not just using Google to passively search for vulnerable servers, but to actually use Google search queries to attack servers.
Hackers for Charity
In his latest endeavor, Johnny Long has created the Hackers for Charity non-profit organization. Known by its byline, "I Hack Charities", the organization collects computer and office equipment to donate to underdeveloped countries. Along with coordinating the donation of goods and supplies, Johnny lived full-time in Uganda with his family for seven years, where they personally set up computer networks and helped build village infrastructure. In addition, they started a computer training center which provides free and low-cost technical training, a hackerspace, a restaurant, and a leather-working program, all based in Jinja, Uganda. Each of these projects is still running (as of May 2019), and each was funded by donations from the hacker community through fundraising efforts at various conferences.
Personal life
Long is known to publicly pronounce his faith in Christianity. He begins and ends each of his presentations with information regarding Hackers for Charity and regularly donates proceeds from his books to help HFC.
Published works
Long has contributed to the following published works:
Google Hacking for Penetration Testers, Syngress Publishing, 2004. (Author, book translated into five different languages)
Aggressive Network Self-Defense, Syngress Publishing, 2005. (Author, Chapter 4, "A VPN Victim's Story: Jack's Smirking Revenge" with Neil Archibald.)
InfoSec Career Hacking, Syngress Publishing, 2005. (Author, Chapter 6, "No Place Like /home – Create an Attack Lab")
Stealing the Network: How to Own an Identity, Syngress Publishing, 2005. (Technical Editor, Author, Chapter 7, "Death by a Thousand Cuts"; Chapter 10, "There's something else" with Anthony Kokocinski; and "Epilogue: The Chase")
OS X For Hackers at Heart, Syngress Publishing, 2005. (Author, Chapter 2, "Automation" and Chapter 5, "Mac OS X for Pen Testers")
Penetration Tester's Open Source Toolkit, Syngress Publishing, 2005. (Technical Editor, Author, "Running Nessus with Auditor")
Stealing the Network: How to Own a Shadow, Syngress Publishing, 2007.
Google Talking, Syngress Publishing, 2007. (Technical Editor and Contributor)
Techno Security's Guide to Managing Risks for IT Managers, Auditors and Investigators, Syngress Publishing, 2007. (Author, Chapter 8, "No-Tech Hacking")
Asterisk Hacking, Syngress Publishing, 2007. (Technical Editor)
Google Hacking for Penetration Testers, Volume 2, Syngress Publishing, 2007. (Author)
TechnoSecurity's Guide to E-Discovery and Digital Forensics, Elsevier Publishing, 2007 (Author, "Death by 1000 cuts").
No-Tech Hacking, Elsevier Publishing, 2008 (Author)
References
External links
Living people
Writers about computer security
Year of birth missing (living people) |
37336711 | https://en.wikipedia.org/wiki/Catherine%20G.%20Wolf | Catherine G. Wolf | Catherine Gody Wolf (May 25, 1947 – February 7, 2018) was an American psychologist and expert in human-computer interaction. She was the author of more than 100 research articles and held six patents in the areas of human-computer interaction, artificial intelligence, and collaboration. Wolf was known for her work at IBM's Thomas J. Watson Research Center in Yorktown Heights, NY, where she was a 19-year staff researcher.
In the late 1990s, Wolf was diagnosed with Amyotrophic lateral sclerosis (ALS), better known as Lou Gehrig's disease. Despite a rapid physical deterioration, Wolf was still able to communicate with the world via electronic sensory equipment, including a sophisticated brain-computer interface. Remarkably, with almost no voluntary physical functions remaining, she published novel research into the fine-scale abilities of ALS patients.
Education
Wolf completed her undergraduate degree at Tufts University, where she majored in psychology. In 1967 she met her future husband, Joel Wolf, then a student at the Massachusetts Institute of Technology. Both continued on to graduate school at Brown University, where Catherine focused her research on the way that children perceive speech. After Brown, Wolf completed additional postgraduate work at MIT before entering the workforce as a full-time researcher.
Career
Wolf's career focused on human-computer interaction. In 1977, she joined Bell Labs, where she became a human factors manager. Eight years later, she began her tenure as a research psychologist at the Thomas J. Watson Research Center, IBM's research headquarters. During her time at IBM, Wolf was particularly interested in learning how people interact with software in the workplace. In response to behaviors she observed, she designed and tested new interface systems in which speech and handwritten words could be converted to digital information. Among other technologies, Wolf worked on a system known as the Conversation Machine, which was the precursor of today's phone banking systems: users could access their accounts by conversing with an automated voice system. She also published papers on the sharing of information in the workplace and search in the context of technical support.
In all, Wolf held title to six patents and more than 100 research articles. In 1997, she was diagnosed with ALS, a.k.a. Lou Gehrig's disease, which eventually prevented her from performing her normal work duties. Wolf went on long-term disability leave in 2004 and officially retired from IBM in 2012. Even after losing almost all muscle function, however, Wolf still contributed to research on human-computer interaction. She also did work with the Wadsworth Center, part of the New York State Department of Health, as a tester of various systems. In 2009, Wolf also published a research article extending a scale commonly used to assess the progression of ALS (known as the ALSFRS-R) to more finely assess the abilities of people with advanced ALS. This paper added significantly to the understanding of what ALS patients might be capable of even after most of their muscle function has been lost.
Living with ALS
Wolf first felt symptoms of ALS in 1996, when her foot wouldn't flex properly. She was positively diagnosed with ALS a year later.
In 2001, Wolf decided to have a tracheotomy, a surgical procedure that permanently attached a breathing tube in her neck, allowing her to breathe without the use of her nose or mouth.
Wolf eventually lost the use of all of her muscles except a few in her face and eyes. To communicate, she used a computer system which translated movement of her eyebrows into text. She was adept at communicating in this way, even though she could only "type" out one or two words a minute. She wrote poetry, sent emails, conducted occasional interviews, and wrote articles for such outlets as Neurology Now. She was even able to stay active on Facebook.
Concurrent with the loss of her muscle control, Wolf increasingly became an expert on brain-computer interface (BCI) systems, and helped other researchers learn more about how such systems can work. She was aware that she might lose the ability to communicate with her eyebrows, so she worked with scientists on an EEG-based interface system for herself, if that day came. EEG (electroencephalography) measures voltage fluctuations along the scalp that result from neuron activity in the brain. With such a setup in place, Wolf hoped to communicate words simply by focusing her thoughts on one letter at a time. Wolf provided researchers with important feedback about BCIs, since the systems did not work flawlessly.
Personal
Wolf was married to Joel Wolf, a mathematician at IBM's TJ Watson Research Center. They had two daughters, Laura and Erika, and several grandchildren.
On April 26, 2003, Wolf was honored with a Distinguished Service Award from her alma mater, Tufts University, for "the ideal of citizenship and public service."
On February 7, 2018, Wolf died at her home in Katonah, New York at the age of 70.
References
External links
Beth Schwartzapfel. "I Will Be Heard!" Brown Alumni Monthly, March/April, 2009 (retrieved October 15, 2012)
Brooke Baldwin. "Writing emails with her mind." CNN, February 5, 2010 (retrieved November 19, 2012)
Catherine G. Wolf. "Wave of the Future." Neurology Now, November/December 2007. Volume 3(6). p 41-42 (retrieved November 19, 2012)
Marek Fuchs. "A Thing or Two to Say Before Dying." The New York Times, August 28, 2005 (retrieved November 19, 2012)
Cathy Wolf - Facebook
Hear Me Now - The Story of Catherine Wolf, Human-Computer Interaction Pioneer (retrieved May 6, 2018)
20th-century American scientists
21st-century American scientists
20th-century American women scientists
21st-century American women scientists
American computer scientists
American women computer scientists
Human–computer interaction researchers
Artificial intelligence researchers
American women psychologists
American psychologists
Tufts University alumni
Brown University alumni
IBM employees
People from Washington, D.C.
1947 births
2018 deaths
Deaths from motor neuron disease
Neurological disease deaths in New York (state) |
521507 | https://en.wikipedia.org/wiki/Priority%20inversion | Priority inversion | In computer science, priority inversion is a scheduling scenario in which a high-priority task is indirectly superseded by a lower-priority task, effectively inverting the assigned priorities of the tasks. This violates the priority model that high-priority tasks can only be prevented from running by higher-priority tasks. Inversion occurs when a low-priority task holds a contended resource and is then preempted by a medium-priority task.
Example
Consider two tasks H and L, of high and low priority respectively, either of which can acquire exclusive use of a shared resource R. If H attempts to acquire R after L has acquired it, then H becomes blocked until L relinquishes the resource. Sharing an exclusive-use resource (R in this case) in a well-designed system typically involves L relinquishing R promptly so that H (a higher priority task) does not stay blocked for excessive periods of time. Despite good design, however, it is possible that a third task M of medium priority (p(L) < p(M) < p(H), where p(x) represents the priority of task x) becomes runnable during L's use of R. At this point, M, being higher in priority than L, preempts L (since M does not depend on R), causing L to be unable to relinquish R promptly, in turn causing H (the highest priority task) to be unable to run; that is, H suffers unexpected blockage indirectly caused by lower priority tasks like M.
Consequences
In some cases, priority inversion can occur without causing immediate harm—the delayed execution of the high priority task goes unnoticed, and eventually the low priority task releases the shared resource. However, there are also many situations in which priority inversion can cause serious problems. If the high priority task is left starved of the resources, it might lead to a system malfunction or the triggering of pre-defined corrective measures, such as a watchdog timer resetting the entire system. The trouble experienced by the Mars Pathfinder lander in 1997 is a classic example of problems caused by priority inversion in realtime systems.
Priority inversion can also reduce the perceived performance of the system. Low priority tasks usually have a low priority because it is not important for them to finish promptly (for example, they might be a batch job or another non-interactive activity). Similarly, a high priority task has a high priority because it is more likely to be subject to strict time constraints—it may be providing data to an interactive user, or acting subject to realtime response guarantees. Because priority inversion results in the execution of a lower priority task blocking the high priority task, it can lead to reduced system responsiveness, or even the violation of response time guarantees.
A similar problem called deadline interchange can occur within earliest deadline first scheduling (EDF).
Solutions
The existence of this problem has been known since the 1970s. Lampson and Redell published one of the first papers to point out the priority inversion problem. Systems such as the UNIX kernel were already addressing the problem with the splx() primitive. There is no foolproof method to predict the situation. There are, however, many existing solutions, of which the most common ones are:
Disabling all interrupts to protect critical sections
When disabling interrupts is used to prevent priority inversion, there are only two priorities: preemptible, and interrupts disabled. With no third priority, inversion is impossible. Since there's only one piece of lock data (the interrupt-enable bit), misordering locking is impossible, and so deadlocks cannot occur. Since the critical regions always run to completion, hangs do not occur. Note that this only works if all interrupts are disabled. If only a particular hardware device's interrupt is disabled, priority inversion is reintroduced by the hardware's prioritization of interrupts. In early versions of UNIX, a series of primitives named splx(0) ... splx(7) disabled all interrupts up through the given priority. By properly choosing the highest priority of any interrupt that ever entered the critical section, the priority inversion problem could be solved without locking out all of the interrupts. Ceilings were assigned in rate-monotonic order, i.e. the slower devices had lower priorities.
In multiple CPU systems, a simple variation, "single shared-flag locking" is used. This scheme provides a single flag in shared memory that is used by all CPUs to lock all inter-processor critical sections with a busy-wait. Interprocessor communications are expensive and slow on most multiple CPU systems. Therefore, most such systems are designed to minimize shared resources. As a result, this scheme actually works well on many practical systems. These methods are widely used in simple embedded systems, where they are prized for their reliability, simplicity and low resource use. These schemes also require clever programming to keep the critical sections very brief. Many software engineers consider them impractical in general-purpose computers.
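As a rough illustration of the single-processor spl-style approach described above, the following C sketch models the idea with a global level variable; a real kernel would instead program the processor's interrupt priority mask, and the exact primitives vary by system.

```c
/* Toy model of the classic UNIX spl() primitives. A real kernel writes
   the CPU's interrupt priority mask; here a global variable stands in. */
static volatile int ipl;            /* current interrupt priority level */

static int splx(int level) {
    int old = ipl;
    ipl = level;                    /* real code: program the hardware mask */
    return old;
}

/* A critical section protected by raising the level to 7 (all interrupts
   masked), matching the splx(0)..splx(7) scheme described above. */
void update_shared_state(void) {
    int s = splx(7);
    /* ... touch shared data; this region always runs to completion ... */
    splx(s);                        /* restore the previous level */
}
```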
Priority ceiling protocol
With the priority ceiling protocol, the shared mutex process (that runs the operating system code) has a characteristic (high) priority of its own, which is assigned to the task locking the mutex. This works well, provided that no high-priority task that tries to access the mutex has a priority higher than the ceiling priority.
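POSIX threads expose this protocol directly. A minimal C sketch, assuming a system that implements the POSIX realtime threads options: the mutex is given a ceiling, and any thread holding it runs at least at that priority.

```c
#include <pthread.h>

pthread_mutex_t shared_lock;

/* Initialize a mutex using the priority ceiling protocol: every holder
   of shared_lock temporarily runs at least at `ceiling` priority. */
int init_ceiling_mutex(int ceiling) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    pthread_mutexattr_setprioceiling(&attr, ceiling);
    int rc = pthread_mutex_init(&shared_lock, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}
```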
Priority inheritance
Under the policy of priority inheritance, whenever a high-priority task has to wait for a resource shared with an executing low-priority task, the low-priority task is temporarily assigned the priority of the highest-priority waiting task for the duration of its own use of the shared resource. This keeps medium-priority tasks from preempting the (originally) low-priority task, and thereby from affecting the waiting high-priority task as well. Once the resource is released, the low-priority task continues at its original priority level.
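In POSIX threads, this policy is selected per mutex with PTHREAD_PRIO_INHERIT; a minimal sketch, again assuming the realtime threads options are available:

```c
#include <pthread.h>

pthread_mutex_t shared_lock;

/* Initialize a mutex with priority inheritance: a low-priority holder is
   boosted to the priority of the highest-priority thread blocked on it,
   so a medium-priority task cannot keep preempting the holder. */
int init_inheriting_mutex(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    int rc = pthread_mutex_init(&shared_lock, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}
```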
Random boosting
Ready tasks holding locks are randomly boosted in priority until they exit the critical section. This solution is used in Microsoft Windows.
Avoid blocking
Because priority inversion involves a low-priority task blocking a high-priority task, one way to avoid priority inversion is to avoid blocking, for example by using non-blocking algorithms such as read-copy-update.
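For example, a shared counter can be updated with C11 atomics instead of a mutex; since no task ever holds a lock, a preempted low-priority task can never block a high-priority one. This is a minimal sketch; real non-blocking structures such as read-copy-update are considerably more involved.

```c
#include <stdatomic.h>

atomic_long events = 0;   /* shared between tasks of any priority */

/* Lock-free update: there is no lock to hold across a preemption,
   so priority inversion on this counter is impossible. */
void record_event(void) {
    atomic_fetch_add_explicit(&events, 1, memory_order_relaxed);
}
```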
See also
Nice (Unix)
Non-blocking synchronization
Pre-emptive multitasking
Read-copy-update (RCU)
Resource starvation
References
External links
Description from FOLDOC
Citations from CiteSeer
IEEE Priority Inheritance Paper by Sha, Rajkumar, Lehoczky
Introduction to Priority Inversion by Michael Barr
Concurrency control
Software bugs
Embedded systems |
217282 | https://en.wikipedia.org/wiki/Paul%20Westphal | Paul Westphal | Paul Douglas Westphal (November 30, 1950 – January 2, 2021) was an American basketball player, head coach, and commentator.
Westphal played in the National Basketball Association (NBA) from 1972 to 1984. Playing the guard position, he won an NBA championship with the Boston Celtics in 1974. Westphal played in the NBA Finals again in 1976 as a member of the Phoenix Suns. His NBA career also included stints with the Seattle SuperSonics and the New York Knicks. In addition to being a five-time All-Star selection, Westphal earned three All-NBA First Team selections and one Second Team honor.
After his playing career ended, Westphal coached for Southwestern Baptist Bible College (now Arizona Christian University), Grand Canyon University, and Pepperdine University, and served also as head coach of the Phoenix Suns, Seattle SuperSonics, and Sacramento Kings in the NBA. Westphal coached the Suns to the NBA Finals in 1993.
In 2019, Westphal was inducted into the Naismith Basketball Hall of Fame.
Early life
Born in Torrance, California, Westphal attended Aviation High School in Redondo Beach, California, from 1966 to 1969. He then attended the University of Southern California (USC), where he played college basketball for the USC Trojans as a guard. The Trojans' 1971 win-loss record set a program record for winning percentage. Westphal was an All-American and team captain in 1972. Playing for USC from 1970 to 1972, he averaged 16.9 points per game and led the Trojans with 20.3 points per game in 1972.
Playing career
Boston Celtics (1972–1975)
The Boston Celtics selected Westphal with the tenth overall pick in the 1972 NBA draft. After three seasons in Boston, including a championship in 1974, the Celtics traded Westphal and two second round draft picks to the Phoenix Suns for Charlie Scott.
Phoenix Suns (1975–1980)
In his first season in Phoenix, Westphal helped the Suns reach their first NBA Finals, against the Celtics. In Game 5 of that series, often called "the greatest game ever played" in NBA history, he made several critical plays that pushed the game into triple overtime before Boston prevailed. Notably, Westphal exploited a loophole in NBA rules that effectively allowed the Suns to cede a point in exchange for the ball at half-court with two seconds remaining at the end of the second overtime; the Suns tied the game thanks to the loophole.
Westphal was sixth in the NBA in scoring average for the season at 25.2 points per game, and was also the first NBA All-Star Weekend H-O-R-S-E Competition champion. The following season, he was seventh in scoring average with 24.0 points per game.
Seattle SuperSonics (1980–1981)
In early June 1980, after the 1979–80 season, the Suns traded Westphal to the Seattle SuperSonics for Dennis Johnson; he played one season in Seattle.
New York Knicks (1982–1983)
After his season with the Sonics, Westphal signed with the New York Knicks as a free agent in late February 1982.
Return to Phoenix (1983–1984)
He signed a two-year contract with Phoenix in September 1983, and the Suns waived him in October 1984.
In his NBA career, Westphal scored a total of 12,809 points for an average of 15.6 points per game, with 3,591 assists for an average of 4.4 assists per game. He also had 1,580 rebounds, for an average of 1.9 per game. Westphal was a five-time All-Star, a three-time All-NBA first team selection, and a one-time second team All-NBA selection. He is Phoenix's fifth all-time leading scorer (9,564), averaging 20.6 points in six seasons (1975–80, 1983–84). His No. 44 was retired by the Suns, and he is a member of their Ring of Honor. Westphal was also inducted into the Naismith Basketball Hall of Fame as a player on September 6, 2019.
Coaching career
Westphal's coaching career started in 1985 at Southwestern Baptist Bible College (now Arizona Christian University), located in Phoenix. After compiling a 21–9 record in his lone season there, he moved on to Grand Canyon College, also in Phoenix, and after two seasons led them to the NAIA national title in 1988.
In 1988, after three years in the college ranks, Westphal became an assistant coach with the Phoenix Suns under head coach Cotton Fitzsimmons, and in 1992, he succeeded Fitzsimmons as head coach of the Suns. With players such as Kevin Johnson, Dan Majerle, rookie Richard Dumas, Charles Barkley, and Danny Ainge, the Suns made it to the NBA Finals in Westphal's first season as a coach, but lost to the Chicago Bulls in six games. While the Suns made the playoffs during each of Westphal's seasons as coach, they did not return to the Finals, and Westphal was let go during the 1995–96 season. He served as an assistant coach for a high school team in Arizona for two years before he returned to the NBA as a coach with the SuperSonics for the 1998–99 season. He coached in Seattle until he was fired 15 games into the 2000–01 season.
Westphal returned to the college ranks in April 2001 at Pepperdine University. In his first season, Westphal led the Waves to a 22–9 record and tied the nationally ranked Gonzaga Bulldogs for the WCC title. The team received an at-large berth to the NCAA Tournament, but lost 83–74 to Wake Forest in the first round, played at ARCO Arena in Sacramento. This was the only postseason berth during the rest of Westphal's five-year tenure and he finished with an overall record of 74–72. After a 7–20 season in 2005–06, Westphal was fired on March 15, 2006.
On June 28, 2007, the Dallas Mavericks announced they had hired Westphal as an assistant coach under head coach Avery Johnson. When Johnson was replaced by Rick Carlisle, Westphal left coaching to become executive vice-president of basketball operations (under Donnie Nelson) for the Mavericks in October 2008. On June 10, 2009, Westphal was named head coach of the Sacramento Kings. Westphal was fired from the Kings on January 5, 2012.
For the 2014–15 season, Westphal was hired by the Brooklyn Nets as an assistant to new head coach Lionel Hollins. Hollins had previously served as Westphal's assistant coach in Phoenix. When the Nets fired Hollins in January 2016, Westphal left the team.
Broadcasting career
Westphal also worked as a studio analyst for Fox Sports Net West/Prime Ticket for Los Angeles Clippers and Los Angeles Lakers games, first joining them during the Clippers' run in the 2006 NBA Playoffs.
Personal life
Westphal was married to Cindy Westphal and they had two children together. He was a Christian.
In August 2020, ESPN reported that he had been diagnosed with brain cancer, to which he succumbed in Scottsdale, Arizona, on January 2, 2021, at age 70.
Head coaching record
NBA
|-
| style="text-align:left;"|Phoenix
| style="text-align:left;"|
| 82||62||20|||| style="text-align:center;"|1st in Pacific||24||13||11||
| style="text-align:center;"|Lost in NBA Finals
|-
| style="text-align:left;"|Phoenix
| style="text-align:left;"|
| 82||56||26|||| style="text-align:center;"|2nd in Pacific||10||6||4||
| style="text-align:center;"|Lost in Conference Semifinals
|-
| style="text-align:left;"|Phoenix
| style="text-align:left;"|
| 82||59||23|||| style="text-align:center;"|1st in Pacific||10||6||4||
| style="text-align:center;"|Lost in Conference Semifinals
|-
| style="text-align:left;"|Phoenix
| style="text-align:left;"|
| 33||14||19|||| style="text-align:center;"|(fired)||—||—||—||—
| style="text-align:center;"|—
|-
| style="text-align:left;"|Seattle
| style="text-align:left;"|
| 50||25||25|||| style="text-align:center;"|5th in Pacific||—||—||—||—
| style="text-align:center;"|Missed playoffs
|-
| style="text-align:left;"|Seattle
| style="text-align:left;"|
| 82||45||37|||| style="text-align:center;"|4th in Pacific||5||2||3||
| style="text-align:center;"|Lost in First Round
|-
| style="text-align:left;"|Seattle
| style="text-align:left;"|
| 15||6||9|||| style="text-align:center;"|(fired)||—||—||—||—
| style="text-align:center;"|—
|-
| style="text-align:left;"|Sacramento
| style="text-align:left;"|
| 82||25||57|||| style="text-align:center;"|5th in Pacific||—||—||—||—
| style="text-align:center;"|Missed playoffs
|-
| style="text-align:left;"|Sacramento
| style="text-align:left;"|
| 82||24||58|||| style="text-align:center;"|5th in Pacific||—||—||—||—
| style="text-align:center;"|Missed playoffs
|-
| style="text-align:left;"|Sacramento
| style="text-align:left;"|
| 7||2||5|||| style="text-align:center;"|(fired)||—||—||—||—
| style="text-align:center;"|—
|-class="sortbottom"
| style="text-align:center;" colspan="2"|Career
| 597||318||279|||| ||49||27||22||||
College
Sources:
References
External links
Paul Westphal's bio on Phoenix Suns' website
Paul Westphal's official bio on NBA.com
1950 births
2021 deaths
All-American college men's basketball players
American Christians
American men's basketball coaches
American men's basketball players
American television sports announcers
Basketball coaches from California
Basketball players at the 1971 Pan American Games
Basketball players from Torrance, California
Boston Celtics draft picks
Boston Celtics players
Brooklyn Nets assistant coaches
College basketball announcers in the United States
College men's basketball head coaches in the United States
Dallas Mavericks assistant coaches
Deaths from brain tumor
Deaths from cancer in Arizona
Grand Canyon Antelopes men's basketball coaches
Naismith Memorial Basketball Hall of Fame inductees
National Basketball Association All-Stars
National Basketball Association players with retired numbers
New York Knicks players
Pan American Games competitors for the United States
Pepperdine Waves men's basketball coaches
Phoenix Suns assistant coaches
Phoenix Suns head coaches
Phoenix Suns players
Point guards
Sacramento Kings head coaches
Seattle SuperSonics head coaches
Seattle SuperSonics players
Shooting guards
Sportspeople from Redondo Beach, California
USC Trojans men's basketball players |
22018823 | https://en.wikipedia.org/wiki/LOBSTER | LOBSTER | LOBSTER was a European network monitoring system, based on passive monitoring of traffic on the internet. Its functions were to gather traffic information as a basis for improving internet performance, and to detect security incidents.
Objectives
To build an advanced pilot European Internet traffic monitoring infrastructure based on passive network monitoring sensors.
To develop novel performance and security monitoring applications, enabled by the availability of the passive network monitoring infrastructure, and to develop the appropriate data anonymisation tools for prohibiting unauthorised access to, or tampering with, the original traffic data.
History
The project originated from SCAMPI, a European project active in 2004–5, aiming to develop a scalable monitoring platform for the Internet. LOBSTER was funded by the European Commission and ceased in 2007. It fed into "IST 2.3.5 Research Networking testbeds", which aimed to contribute to improving internet infrastructure in Europe.
Thirty-six LOBSTER sensors were deployed in nine countries across Europe by several organisations. At any one time the system could monitor traffic across 2.3 million IP addresses. It was claimed that more than 400,000 Internet attacks were detected by LOBSTER.
Passive monitoring
LOBSTER was based on passive network traffic monitoring. Instead of collecting flow-level traffic summaries or actively probing the network, passive network monitoring records all IP packets (both headers and payloads) that flow through the monitored link. This enables passive monitoring methods to record complete information about the actual traffic of the network, which allows for tackling monitoring problems more accurately compared to methods based on flow-level statistics or active monitoring.
The passive monitoring applications running on the sensors were developed on top of MAPI (Monitoring Application Programming Interface), an expressive programming interface for building network monitoring applications, developed in the context of the SCAMPI and LOBSTER projects. MAPI enables application programmers to express complex monitoring needs, choose only the amount of information they are interested in, and therefore balance the monitoring overhead with the amount of the received information. Furthermore, MAPI gives the ability for building remote and distributed passive network monitoring applications that can receive monitoring data from multiple remote monitoring sensors.
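The MAPI library itself is specific to these projects and its exact calls are not reproduced in the source, so the sketch below instead uses the widely available libpcap interface in C to illustrate the same underlying idea: the application sees every packet on the monitored link, headers and payload alike, rather than flow summaries. The interface name is a placeholder.

```c
#include <stdio.h>
#include <pcap.h>

/* Invoked once per captured packet; both headers and payload are
   available, which is what distinguishes passive full-packet
   monitoring from flow-level accounting. */
static void on_packet(u_char *user, const struct pcap_pkthdr *h,
                      const u_char *bytes) {
    (void)user; (void)bytes;
    printf("captured %u of %u bytes\n", h->caplen, h->len);
}

int main(void) {
    char err[PCAP_ERRBUF_SIZE];
    /* "eth0" is a placeholder; snaplen 65535 records whole packets. */
    pcap_t *p = pcap_open_live("eth0", 65535, 1 /* promiscuous */, 1000, err);
    if (p == NULL) {
        fprintf(stderr, "pcap: %s\n", err);
        return 1;
    }
    pcap_loop(p, -1, on_packet, NULL);   /* -1: capture until interrupted */
    pcap_close(p);
    return 0;
}
```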
Developed applications
The LOBSTER sensors operated by the various organisations monitored the network traffic using different measurement applications. All applications were developed within the LOBSTER project using MAPI, according to the needs of each organisation.
Appmon, an application for accurate per-application network traffic classification.
Stager, a system for aggregating and presenting network statistics.
ABW, an application written on top of LOBSTER's DiMAPI (Distributed Monitoring Application Interface) and the tracklib library.
References
Computer networking |
43635062 | https://en.wikipedia.org/wiki/Linux%20Foundation%20Linux%20Certification | Linux Foundation Linux Certification | The Linux Foundation Linux Certification (LFLC) is a certification program for system administrators and engineers working with the Linux operating system, announced by the Linux Foundation in August 2014.
Linux Foundation Certifications are valid for 2 years. Candidates have the option to retake and pass the same exam to keep their Certification valid. The Certification will become valid for 2 years starting on the date the exam is retaken and passed.
Candidates may also keep their Certification valid by completing one of the renewal requirement options. Renewal requirements must be completed prior to the expiration of the Certification. If the renewal requirements are satisfied, the expiration of the Certification will be extended for 2 years. The date on which the extension takes effect becomes the Renewal Date for the Certification. Any renewal requirements completed before the Renewal Date of a Certification will not carry over to the new period.
Linux Foundation Certified System Administrator (LFCS)
The Linux Foundation Certified System Administrator (LFCS) certification provides assurance that the recipient is knowledgeable in the use of Linux, especially in relation to the use of the terminal.
Linux Foundation Certified Engineer (LFCE)
The Linux Foundation Certified Engineer (LFCE) certification provides assurance that the recipient is knowledgeable in the management and design of Linux systems.
Certified Kubernetes Security Specialist (CKS)
In July 2020, The Linux Foundation announced a new Kubernetes certification, the Certified Kubernetes Security Specialist. Obtaining the CKS will require a performance-based certification exam.
See also
Linux Professional Institute Certification
Ubuntu Professional Certification
References
External links
Information technology qualifications |
47510418 | https://en.wikipedia.org/wiki/OpenDataSoft | OpenDataSoft | Opendatasoft is a French company that offers data sharing software. Based in Paris, it also has a subsidiary in Boston (United States) and an office in Nantes (France).
Opendatasoft has developed a tool for sharing and reusing the data of companies and public administrations. Its software lets users organize, share, and visualize any type of data, and can be used by both private companies and public entities.
History
Opendatasoft was founded in 2011 by Jean-Marc Lazard (CEO), Franck Carassus (CEO North America), and David Thoumas (CTO).
In 2014, the company was integrated into the incubation program of the Open Data Institute to promote the sharing of public data. That same year, Opendatasoft was ranked by the Electronic Business Group as one of the ten most innovative startups in France.
In June 2015, Opendatasoft was awarded €1.5 million in seed funding from the venture capital fund Aurinvest.
In 2016, Opendatasoft was included in the "IDC Innovators: Smart City Open Data Platforms" report. In the same year, the company opened its North American headquarters in Boston, Massachusetts.
In October 2016, Opendatasoft was granted €5 million in Series A funding from Aster Capital, Salesforce Ventures, Aurinvest, and Caisse des Dépôts et Consignations, who were joined two years later by UL Ventures.
In November 2018, Opendatasoft launched "Data on Board," an annual summit for enthusiasts of data and data sharing. In 2018, 250 people attended the event. During the second edition, held at Campus Station F in Paris, 500 participants and 20 French and international speakers met to discuss the challenges surrounding the use and sharing of data.
The Opendatasoft solution
Opendatasoft is a SaaS (software as a service) solution that allows companies, local authorities, and public administrations to organize, share, and visualize any type of data (as tables and graphs), as well as make data available via APIs.
The company's founding principles are based on the sharing of public data as open data: historically, its customers include entities of the French government and local French communities. In recent years, Opendatasoft has broadened the scope of its activities and now works with all types of companies, both public and private, in France and around the world. Users of the software can share their data publicly as open data to create services for citizens, users of public transportation, companies, and associations. The data can also be published in a limited way within an organization or a group of organizations for use by partners or employees.
Opendatasoft's customers come from a variety of sectors: energy & utilities, local communities, transportation & mobility, government, banking & insurance, luxury goods, IoT, etc.
To date, Opendatasoft has more than 280 customers worldwide: in Europe (France, Belgium, United Kingdom, Germany, Switzerland, Spain, Portugal), North America (United States, Canada), Mexico, Australia, and the Middle East.
In particular, the company has developed the open data portals of the City of Paris (Paris Data), the SNCF Group, the GRDF Group, Kering, EDF, the Occitanie Region, Mexico City, Newark, Vancouver, and more.
References
Software companies of France
French companies established in 2011
Software companies established in 2011
Software companies based in Massachusetts
Software companies of the United States |
63065493 | https://en.wikipedia.org/wiki/Visionware | Visionware | Visionware Ltd was a British software company that developed and marketed products that helped integration of Microsoft Windows clients to Unix-based server applications. It was based in Leeds in West Yorkshire. The three products it was most known for were PC-Connect, XVision, and SQL-Retriever.
Visionware was created in June 1989 as a management buy-out from Systime Computers. The firm experienced substantial growth during its five and a half years of existence. Visionware was acquired by the Santa Cruz Operation (SCO) in December 1994.
Origins in Systime
Visionware has its origins in Leeds-based Systime Computers, which during the late 1970s and early 1980s had become the second largest British manufacturer of computers. Its success was based around selling systems built around OEM components from Digital Equipment Corporation (DEC), and it had grown to have some 1200 employees with turnover of around £40 million. It had then fallen on quite difficult times, in part due to lawsuits from DEC for intellectual property infringement and even more so due to running afoul of Cold War-era U.S. export restrictions regarding indirect sales to Eastern bloc countries. In 1985 what was left of Systime had been acquired by Control Data Corporation. Systime then focused on selling products built by its own engineers, and placed a greater emphasis on innovation in software technologies.
The Systime-Control Data arrangement did not prosper, and in June 1989, Control Data exited its position by splitting Systime into four separate companies, each funded by a management buyout with some venture capital funding attached.
Independent company
Visionware Ltd was one of these four management-buyout ventures, focusing on Windows-Unix connectivity products that had been developed at Systime. The two founders were former Systime technical development manager Tony Denson and former commercial manager Chris Holmes. Visionware reportedly started with 20 employees and initial annual revenues of $300,000. The new firm debuted at the European Unix Show in London in June 1989.
As one former SCO UK employee has succinctly summarised, "Visionware specialised in software that ran on Windows that made Unix easier to use." The core Visionware products were:
PC-Connect – in part a terminal emulator for Microsoft Windows, it was composed of implementation elements that ran on both Unix and Windows, and supported cut-and-paste between Windows, X Windows, and Unix character-mode applications. It was resold under Altos Computer Systems' name and was also part of several other redistribution agreements. Supported Unix platforms included Sun Solaris, IBM AIX, HP-UX, UnixWare, SCO Unix, Xenix, and various others. PC-Connect was first developed by Systime and released by them in 1987.
XVision – a Windows-based server for the Unix-oriented X Window System. It supported color graphics and maintained the look-and-feel of Windows within the X applications. Originally called PC-XVision when under development at Systime.
SQL-Retriever – an Open Database Connectivity (ODBC)-compliant database connectivity software product. It supported operation in conjunction with a number of database products, including Informix, Oracle, Uniplex, and Interbase, with the idea that Windows applications such as Microsoft Excel could pull data from a relational database and incorporate it into the application. Originally called SQL-Connect when first developed by Systime and in its initial releases by Visionware, the name was changed around 1991 to avoid a copyright issue with a large database vendor.
In the early 1990s, the market that Visionware was in – providing connectivity between Windows PCs and corporate applications – was an important and growing one. Overall, the goal of Visionware was expressed as the enablement of "seamless integration" between Windows-based PCs and Unix-based servers.
As of 1992, a majority of Visionware's revenues were coming from the European market. That same year, it set up a North American operation based in Menlo Park, California in the United States. By 1993, Visionware had revenues of around $6 million.
In 1994, Visionware had some $12 million in revenue – double that of the previous year – and 130 employees, most of whom were in Leeds. In addition to the North American operation, the firm also had smaller European ones in Bonn and Paris, where area marketing and communications staff were based, as well as one in Sydney, Australia.
By October of that year, there were industry rumours that Visionware was open to being acquired, a notion that the company denied.
Acquisition by SCO
On 12 December 1994, the Santa Cruz Operation announced that it had acquired Visionware for $14.75 million in cash and a small amount of stock.
SCO had worked with Visionware since 1993 on an optimised X server for Wintif, a version of Motif with a Windows look-and-feel that was made by an earlier, Cambridge-based SCO acquisition, IXI Limited. (And Visionware had collaborated with IXI going back to Systime days.)
The acquisition gave SCO a better foothold in the Windows client world and the ability to put a Windows-friendly front on its traditional OpenServer-based Unix product line, although there was some skepticism that SCO's traditional base of back-end transaction processing would see much need for desktop client access. Visionware co-founder Denson said that both Visionware's and SCO's customers would benefit from the acquisition.
The Visionware brand continued until 1995 when the company, now a business unit of SCO, was merged with IXI to form IXI Visionware, Ltd. Later that year the merged business unit was subsumed more fully into its parent and became the Client Integration Division of SCO, which put out both sets of products under the "Vision" branded family name. This division then developed and released the Tarantella terminal services application in 1997 and that became the core of Tarantella, Inc. in 2001. As a consequence, the Vision family received less investment going forward.
Fates
PC-Connect had evolved into the TermVision product under SCO, with 32-bit and Windows 95 support, but that product then faded away with the Vision product line. SQL-Retriever was dropped from the Vision line by Tarantella and had no more releases. However, the source code for SCO's XVision product was purchased by MKS Inc., an American company based in Fairfax, Virginia, and with further enhancements and a new name, became the basis for that company's ongoing MKS X/Server product.
Tarantella, Inc. struggled and following company-wide layoffs, the Cambridge development site closed in the summer of 2003. However the Leeds office stayed open, and became part of Sun Microsystems following its purchase of Tarantella and later became part of the Oracle Secure Global Desktop product team, moving to a facility in Lawnswood within Leeds.
References
Defunct software companies
Defunct companies based in Leeds
Software companies established in 1989
Software companies disestablished in 1994
Software companies of England
1989 establishments in England
1994 disestablishments in England |
1064216 | https://en.wikipedia.org/wiki/National%20Technical%20University%20of%20Athens | National Technical University of Athens | The National (Metsovian) Technical University of Athens (NTUA; National Metsovian Polytechnic), sometimes known as Athens Polytechnic, is among the oldest higher education institutions in Greece and its most prestigious engineering school. It is named in honor of its benefactors Nikolaos Stournaris, Eleni Tositsa, Michail Tositsas and Georgios Averoff, who came from the town of Metsovo in Epirus.
It was founded in 1837 as a part-time vocational school named Royal School of Arts which, as its role in the technical development of the fledgling state grew, developed into Greece's sole institution providing engineering degrees up until the 1950s, when polytechnics were established outside Athens. Its traditional campus, located in the center of Athens on Patision Avenue on a site donated by Eleni Tositsa, features a suite of magnificent neo-classical buildings by architect Lysandros Kaftantzoglou (1811–1885). A new campus, the Zografou Campus, was built in the 1980s.
NTUA is divided into nine academic schools, eight for the engineering disciplines, including architecture, and one for the applied sciences (mathematics and physics). Undergraduate studies have a duration of five years. Admission to NTUA is highly selective and can only be accomplished through achieving exceptional grades in the annual Panhellenic Exams. It is a widespread perception that the vast majority of each year's Panhellenic Exams top students interested in the sciences and technology opts to attend NTUA. The university comprises about 700 academic staff, 140 scientific assistants and 260 administrative and technical staff. It also has about 8,500 undergraduates and about 1,500 postgraduate students. Eight of the NTUA's Schools are housed at the Zografou Campus, while the School of Architecture is based at the Patision Complex.
History
NTUA was established by royal decree on December 31, 1836 (OS), January 21, 1837 (NS), under the name "Royal School of Arts" (Βασιλικό Σχολείο των Τεχνών). It began functioning as a part-time vocational school (only Sundays and holidays) to train craftsmen, builders and master craftsmen to cover the needs of the new Greek state. In 1840, due to its increasing popularity and the changing socio-economic conditions in the new state, NTUA was upgraded to a daily technical school which worked along with the Sunday school. The courses were expanded and the institution was housed in its own building in Pireos Street.
The restructuring
In 1843, a major restructuring took place. Three departments were created:
Part-Time Vocational School
Daily School
A new Higher School of Fine Arts
The new department's object was fine arts and engineering. The new department, which was later renamed School of Industrial and Fine Arts, rapidly evolved towards a major higher education institution. Tradition has it that arts referred to both technical professions and fine arts. Even today, the school maintains a school of architecture that is closely related to the School of Fine Arts, which later evolved to become a separate school.
The name Polytechnic came in 1862, with the introduction of several new technical courses. This restructuring continued steadily until 1873. By then, the school was overwhelmed by the number of students wanting to acquire advanced technical skills, which led to its move to a new campus.
The relocation
In 1873 it moved to its new campus on Patision Street and became known as Metsovion Polytechnion (Metsovian Polytechnic) after the birthplace of the benefactors who financed the construction of this campus. At the time, the campus on Patision Street was still partially incomplete, but high demand from students made the relocation urgent.
In 1887, the institution was partitioned into three schools of technical orientation, the schools of Structural Engineering, Architecture and Mechanical Engineering, all four-year degrees at the time. This is when the institution was recognized by the state as a technical education facility, a crucial step in its development, as its growth became tied to the needs of the developing country.
In 1914, new schools were created, the institution was officially named Ethnicon Metsovion Polytechnion (National Metsovian Polytechnic), and it came under the supervision of the Ministry of Public Works. New technical schools then began to be formed, a process completed three years later, in 1917, when the NTUA changed form: by special law, the old School of Industrial Arts was separated into the Higher Schools of Civil Engineering, Mechanical & Electrical Engineering, Chemical Engineering, Surveying Engineering and Architecture. Later, the schools of Naval Engineering and Mining and Metallurgical Engineering were formed, and the school of Mechanical & Electrical Engineering was split into two separate schools, Mechanical Engineering and Electrical and Computer Engineering, which is almost the arrangement of schools maintained today.
In 1923, the NTUA alumni formed the core of the Technical Chamber of Greece, the professional organization that serves as the official technical adviser of the Greek state and is responsible for awarding professional licenses to all practicing engineers in Greece.
In 1930, the Athens School of Fine Arts was established, acquiring its independence from the NTUA as a school focused exclusively on teaching the fine arts. This allowed the two schools to develop separately, as a technical school and an arts school respectively.
From 1941 to 1944, the National Technical University of Athens played an important role in the country's political life, with Greek students participating in the National Resistance under the German occupation. During the Axis occupation of Greece, NTUA, in addition to its function as an academic institution, became one of the most active resistance centers in Athens.
The uprising
The most important event of NTUA's history is the Athens Polytechnic uprising on November 17, 1973, which was the first step to overthrow Greece's military dictatorship.
On 14, 15 and 16 November 1973, students barricaded themselves inside the institute and began broadcasting a pirate radio transmission, calling on the people of Athens to rebel. On the evening of November 17, however, an AMX-30 military tank broke through the main gate and charged inside, on the orders of the dictators. About 23 people were killed in the events that followed, and the uprising ended. The junta, however, was irreparably damaged by the popular outcry. It fell in 1974, after the Turkish invasion of Cyprus, and since then November 17 has been celebrated as a day of freedom and democracy. All schools and universities in the country remain closed on that day.
Emblem
The emblem of the National Technical University of Athens is Prometheus.
Academic profile
Schools
The National Technical University of Athens is divided into nine academic schools, which are furthermore divided into 33 departments (Greek: τομείς):
School of Applied Mathematical and Physical Sciences
School of Electrical and Computer Engineering
School of Civil Engineering
School of Mechanical Engineering
School of Architecture
School of Chemical Engineering
School of Rural and Surveying Engineering
School of Mining and Metallurgical Engineering
School of Naval Architecture and Marine Engineering
School of Applied Mathematics and Physics
Department of Mathematics
Department of Physics
Department of Mechanics
Department of Humanities, Social Sciences and Law
School of Electrical and Computer Engineering
Department of Signals, Controls and Robotics
Department of Computer Science
Department of Electric Power
Department of Electromagnetics, Electrooptics and Electronic Materials
Department of Industrial Electric Devices and Decision Systems
Department of Communications, Electronics and Information Systems
Department of Information Transmission Systems and Material Technology
School of Civil Engineering
Department of Structural Engineering
Department of Water Resources, Hydraulic and Maritime Engineering
Department of Transportation Planning and Engineering
Department of Geotechnical Engineering
Department of Engineering Construction and Management
School of Mechanical Engineering
Department of Fluid Mechanics Engineering
Department of Thermal Engineering
Department of Nuclear Engineering
Department of Mechanical Constructions and Automatic Control
Department of Manufacturing Technology
Department of Industrial Management and Operational Research
School of Architecture
Department of Architectural Design
Department of Urban and Regional Planning
Department of Interior Design and Landscaping
Department of Building Technology-Structural Design and Mechanical Equipment
School of Chemical Engineering
Department of Chemical Sciences
Department of Process and Systems Analysis, Design and Development
Department of Materials Science and Engineering
Department of Synthesis and Development of Industrial Processes
School of Rural and Surveying Engineering
Department of Topography
Department of Geography and Regional Planning
Department of Infrastructure and Rural Development
School of Mining and Metallurgical Engineering
Department of Geological Sciences
Department of Mining Engineering
Department of Metallurgy and Materials Technology
School of Naval Architecture and Marine Engineering
Department of Ship Design & Maritime Transport
Department of Ship Hydrodynamics
Department of Marine Engineering
Department of Marine Structures
Studies
Undergraduate studies
The academic calendar of NTUA comprises 10 independent, integral academic semesters. Each semester lasts 18 weeks: 13 weeks of classes, a two-week break (Christmas and Easter holidays for the fall and spring semester respectively), and three weeks of semester exams. The tenth semester is devoted to the preparation of the diploma thesis. The diploma thesis has to be related to one of the courses of the faculty. The student has at his or her disposal at least a full academic semester to prepare the thesis. Upon completion of the thesis, the student must take part in an oral examination that can take place either in June, October or February, after the final examinations, provided that the student has passed all courses prescribed by the curriculum.
An important part of the studies in NTUA are the summer "training" projects which take place in Industrial and Production Units, in the period between the end of the spring semester and the beginning of the fall semester. They constitute an elective course for the Faculties of Civil Engineering, Survey Engineering (Surveying and Geodesy Camp) and Mining and Metallurgy Engineering (Mining Camp) and are partially subsidized by the European Union.
Postgraduate studies
There are currently 20 departmental or inter-departmental postgraduate courses, coordinated by NTUA Departments, leading to the respective Post Graduate Specialization Diploma, with a minimum duration of 17 months, including one in Business Administration (in collaboration with the Athens University of Economics and Business). Moreover, NTUA participates in nine post-graduate programs coordinated by other Greek Universities. After the acquisition of the Post Graduate Specialization Diploma, the student can proceed towards submitting a doctoral thesis.
Academic staff
Academic evaluation
An external evaluation of all academic departments in Greek universities will be conducted by the Hellenic Quality Assurance and Accreditation Agency (HQAA) in the following years.
School of Naval Architecture and Marine Engineering (2012)
School of Mechanical Engineering (2012)
School of Civil Engineering (2013)
School of Electrical and Computer Engineering (2013)
School of Mining and Metallurgical Engineering (2013)
School of Chemical Engineering (2013)
School of Applied Mathematical & Physical Science (2013)
School of Rural and Surveying Engineering (2014)
School of Architecture (2014)
Research and innovation
NTUA maintains high research activity, as research and education are both among its goals. Research is managed by administrative and education personnel, but can be conducted by graduate and sometimes undergraduate students as well. Research is administered by the following offices:
The Special Accounting for Research Office (ΕΛΚΕ)
The Liaison Office
The Innovation and Entrepreneurship Unit
The Internship Programme
The Office of Researchers
The Interdisciplinary Research Center
The Interdisciplinary Unit for Reusable Energy
Research is funded by the NTUA endowment, or often directly through public or private funds.
Ranking
The National Technical University of Athens is ranked 338th in the world, 116th in the European Union and third in Greece by the Webometrics Ranking of World Universities website. NTUA was ranked between 551 and 600 by the QS World University Rankings in 2012, with the corresponding faculty area ranks being 152nd for Engineering & Technology and 352nd for Natural Sciences respectively. The 2012 performance ranking of scientific papers for world universities released by the National Taiwan University (NTU Ranking) ranked NTUA as excellent. NTUA has the highest citation impact score (0.88) among Greek universities, based on a ranking prepared by the Directorate General for Science and Research of the European Commission in 2003 (updated 2004) that was compiled as part of the Third European Report on Science & Technology Indicators.
Campus
Patision Complex
The Averof building is one of the most important and elegant buildings of the Athenian Neoclassical period, located in the center of Athens, and the most important work of architect Lysandros Kaftanzoglou. It is also one of the most important creations of European Neoclassicism, its design directly influenced by the monuments of the Athenian Acropolis. Its construction began in 1862 and ended in 1878. After its completion, the building was in continuous use for more than 125 years, during which it suffered from several additions and alterations. The main building has at times housed the National Gallery and various exhibitions of Schliemann's archaeological findings and relics of the 1821 Greek revolution.
The Averof building eventually deteriorated and was in great need of restoration and modernization in order to continue operating as an educational establishment. The aim of the conservation project, namely for the Averof to be used again as an educational building, was successfully achieved when the building became operative at the beginning of 2010; the restoration won the grand prize of Europa Nostra in 2012.
Zografou Campus
The main campus is located in the Zografou area of Athens, housing all the schools of NTUA except architecture, which remains in its traditional location on Patision Avenue for historical reasons. The main campus spreads over an area of about 190 acres, 6 km from the center of Athens. Its buildings cover about 65 acres and include fully equipped lecture theaters, laboratories, libraries, gyms, a central library, a computer center and a medical center.
The School of Applied Mathematics and Physical Sciences is housed in the center of the campus. Right next to it is the Mining and Metallurgical Engineering School. The Civil Engineering School and the Rural and Surveying Engineering School are both housed on the south-west near the Zografou Gate. Mechanical Engineering, Chemical Engineering, Naval Engineering, and the new Electrical Engineering School are all housed near the middle of the campus, while the old electrical engineering buildings remain on the north-east.
Transportation
There are in-campus roads making all buildings accessible by bicycle and car. Internal buses allow for transportation within the facilities, driving around the perimeter of the campus and serving eight different bus stops. The campus is accessible through three main gates: the Katechaki and Kokkinopoulou Gates on the north, and the Zografou Gate on the west. There are 2,000 dedicated parking spots scattered throughout the campus, most near the major buildings. The campus lies near the Katechaki metro station, which makes it accessible within minutes from any area of Athens. Furthermore, several city bus routes serve the campus, including the 608 from Galatsi, the 230 from the Acropolis, the 242 from the Katechaki metro station and the 140 from Glifada.
Central Library
On the campus lies the NTUA Central Library, which has operated since 1914, and was the first library in Greece with a complete index. Today, it remains one of the largest technical libraries in the country, featuring a collection of over 215,000 books and 100,000 scientific issues. The library is available to the public at all times for studying, and available to students, faculty, and internal and external researchers for borrowing.
The central library building at the Zografou campus also houses the historical library of NTUA as a special collection. This scientific-technical library is unique in Greece and one of the most important in Europe, since it contains approximately 60,000 volumes and periodicals (1,096 titles) issued from the 17th century until 1950. The main bulk of NTUA's historical collection consists of old and rare books, pamphlets, maps, engravings and encyclopedias.
Other facilities
Lavrion Technological and Cultural Park (LTCP)
Lavrion Technological and Cultural Park (LTCP) is a body of scientific research, education, business and culture. It was founded in 1992 on the site of the old French Mining Company of Lavrion (Compagnie Francaise des Mines du Laurium), as a result of an initiative undertaken by the National Technical University of Athens.
LTCP aims to link scientific and technological research conducted in Athens with the needs and interests of the business world, to stage cultural events promoting the history and culture of the wider Lavreotiki area, and to preserve, through the maintenance of the premises, the history of the activities once carried out there. The LTCP site is a unique monument of industrial architecture and archaeology, and now houses facilities for business and research.
The services provided by LTCP, as well as its renovated facilities, continue to support research, education and technology. Today, LTCP is essentially the only technology park in Attica, and it specializes in key areas of modern applied technology, such as information technology, electronics, telecommunications, robotics, laser technology, environmental technology, energy, shipbuilding and marine technology.
Metsovion Interdisciplinary Research Center (MIRC)
The Metsovion Interdisciplinary Research Center (MIRC) of the National Technical University of Athens for the Protection and Development of Mountainous Environment and Local European Cultures was founded in 1993 by decision of the National Technical University of Athens Senate, following the proposal of the then Rector Professor Nikos Markatos.
The principal aim of MIRC is to contribute to the protection and development of the mountainous environment and local European cultures, and to provide continuing education. It also conducts research, studies, seminars and conferences relevant to its broader object, and creates a European network of related organizations under the aegis of the center, or participates in already existing networks. These activities are intended to be used by universities and by cultural, research and productive organizations, with the aim of helping Metsovon become a European center for the decentralized interdisciplinary educational, research, technological and cultural activities of NTUA.
Culture
Music Department
The NTUA Music Department was established in 1960 by Chancellor Alexander Pappas. The first president of the music department was composer Vassilis Makridis. It features a mixed choir, a string orchestra, and free lessons for various instruments, among others piano, guitar, bouzouki, and cello. The music department groups regularly perform publicly within the facilities of the university, but also elsewhere. The department president today is conductor and composer Michalis Economou.
Dancing Department
The Dancing Department was established in 1990. It is formed by students, and it features various groups, including a Greek traditional and Cretan folk dances group, a European and Latin ballroom dances group, and salsa and tango groups. The groups meet weekly and perform regularly inside and outside the facilities of the university. Attendance and dancing lessons are free for undergraduate and graduate students, alumni, faculty and even people not related to the university. The dancing department is housed near the center of the main campus.
Theatrical Group
The Theatrical Group was established in 1991. It is a self-managed group, which teaches the art of performance and often performs in public. Participation in the group is free for students. The theatrical group is housed near the center of the main campus. The theatrical group has also organized a separate percussion lessons group.
Sports
The main sports facilities of NTUA are housed in the Sports Center located to the south of the campus, taking up about 3,500 square meters. The campus sport facilities feature tennis and soccer courts, a field and track, a sauna, ping pong tables, and more. More than 40 sport teams exist, and the sports practiced include aerobics, yoga, Pilates, basketball, volleyball, soccer, handball, ping pong, tennis, martial arts inside the campus facilities and swimming, polo, rowing, yachting, rappelling, rafting, squash, wind surfing, and equestrianism outside.
Each year several inter-departmental championships are organized among the teams of the university faculties. NTUA student's teams have been distinguished and received many awards in Panhellenic University Games, as well as in university games abroad.
Open source
There is an open source students' group whose purpose is to promote the use of open source software throughout the university and beyond. Furthermore, NTUA officially supports open source software by using it in its laboratories and other facilities, and by hosting mirrors for all major open source projects, with a collection of over 2.5 terabytes of free and open source software.
Foreign languages
English, French, German and Italian are the four languages taught in NTUA. All non-exchange students have to choose one from those as a mandatory foreign language course. For foreign students, the NTUA Linguistic Service offers the option of attending Greek courses during the entire academic year, free of charge. These courses are intended to provide students with the basic linguistic tools, so that they can understand and communicate efficiently with people in Greece.
Participation in international organizations
CESAER – Conference of European Schools for advanced Engineering, Education and Research
EEGECS - Network on European Education in Geodetic Engineering, Cartography and Surveying
SEFI – Societe Europeene pour la Formation des Ingenieurs (European Society for Engineering Education)
TIME – Top Industrial Managers Europe
Student unions
NTUA Students' Formula Team
Society of Naval Architects and Marine Engineers, NTUA
Athens Local BEST Group
Electrical Engineering STudents' European Association Local Committee of Athens (EESTEC)
Euroavia Athens, NTUA
International Association for the Exchange of Students for Technical Experience (IAESTE)
American Institute of Chemical Engineers Student Chapter (AIChE)
Notable alumni
Nicholas Ambraseys - professor of engineering seismology at Imperial College London
Dimitris Anastassiou - developer of MPEG-2 algorithm for transmitting high quality audio and video over limited bandwidth and Columbia University professor of electrical engineering
John Argyris - one of the founders of the finite element method, professor at Imperial College London and University of Stuttgart
Costas Azariadis - professor at the Department of Economics, UCLA and Edward Mallinckrodt Distinguished University Professor in Arts & Sciences, Washington University in St. Louis
Dimitri Bertsekas - professor of engineering at MIT
Charalambos Bouras - historian, professor of History of Architecture and restoration architect
Georges Candilis - architect and urbanist, one of the founders of Team 10
Giorgio de Chirico - painter
Constantine Dafermos - Greek applied mathematician, professor at Brown University and recipient of Norbert Wiener Prize in Applied Mathematics
Constantinos Daskalakis - computer scientist, professor at MIT
Athos Dimoulas - poet
Eleftherios N. Economou - Professor of Physics, former Chairman of the Foundation for Research and Technology - Hellas
Georgios Gennimatas - former MP, minister and founding member of Panhellenic Socialist Movement
John Iliopoulos - recipient of the Dirac Medal
Paris Kanellakis - computer scientist, professor at Brown University
Linda P. B. Katehi - Chancellor of UC Davis
Alexander S. Kechris - mathematician, professor at Caltech
Emmanouil Korres - professor, writer, restoration architect
Georgios Lianis - Professor and first Minister of Research and Technology (1982)
Nikos Markatos - former Rector of the NTUA
Max Nikias - former President of the University of Southern California
Constantine Papadakis - former president of Drexel University
Christos Papadimitriou - computer scientist, laureate of the 2002 Knuth Prize for longstanding and seminal contributions to the foundations of computer science
Yannis Papathanasiou - politician, former Minister for Economy and Finance of Greece
Nicholas A. Peppas - professor in engineering, University of Texas at Austin
Dimitris Pikionis - architect
George Prokopiou - billionaire shipowner
Athanasios Roussopoulos - professor in applied statics and iron constructions at the National Technical University of Athens, where his work was mostly concerned with the development of the theory of aseismic structures, politician, member of the Greek Parliament and Minister of Public Works in 1966, he was also president of the Technical Chamber of Greece
Joseph Sifakis - computer scientist, laureate of the 2007 Turing Award for his work on model checking.
Michael Triantafyllou - professor of mechanical and ocean engineering at MIT
Alexis Tsipras - former Prime Minister of Greece
Ioannis Vardoulakis - professor of civil engineering at University of Minnesota and at NTUA, a pioneer of theoretical and experimental geomechanics
Iannis Xenakis - one of the most important post-war avant-garde composers, pioneer of the use of mathematical models in music and architect
Mihalis Yannakakis - computer scientist, laureate of the 2005 Knuth Prize for numerous ground-breaking contributions to theoretical computer science
Mihail Zervos - professor of financial mathematics at London School of Economics
See also
Athens Polytechnic uprising
Polytechnic (Greece)
List of universities in Greece
Top Industrial Managers for Europe
Open access in Greece
References
External links
Hellenic Quality Assurance and Accreditation Agency (HQAA)
School of Naval Architecture and Marine Engineering, HQAA Final Report, 2012
School of Mechanical Engineering, HQAA Final Report, 2012
School of Civil Engineering, HQAA Final Report, 2013
School of Electrical and Computer Engineering, HQAA Final Report, 2013
School of Applied Mathematical & Physical Science, HQAA Final Report, 2013
School of Chemical Engineering, HQAA Final Report, 2013
School of Rural & Surveying Engineering, HQAA Final Report, 2013
School of Mining & Metallurgical Engineering, HQAA Final Report, 2013
School of Architecture, HQAA Final Report, 2014
NTUA Council
"ATHENA" Plan for Higher Education, 2013
National Technical University of Athens (NTUA) - Official Website
NTUA Central Library
NTUA Network Management Center (NOC)
NTUA DASTA Office (Career Office & Innovation Unit)
NTUA Internship Programme
NTUA Innovation and Entrepreneurship Unit (IEU)
NTUA ERASMUS Office
Maps and images from NTUA's campuses.
NTUA Publications
NTUA Career Office
ESN NTUA Athens
Lavrion Technological and Cultural Park (LTCP) of NTUA
Metsovion Interdisciplinary Research Center (MIRC) of NTUA
NTUA Hydrological Observatory of Athens
Greek Research & Technology Network (GRNET)
okeanos (GRNET's cloud service)
NTUA Free and Open Source Community
IEEE NTUA Student Branch
Job Fair Athens
Hellenic Universities Rectors' Synod
Hellenic Universities Faculty Association
Universities in Greece
Educational institutions established in 1837
Technological educational institutions in Greece
Science and technology in Greece
Education in Athens
Engineering universities and colleges in Greece
1837 establishments in Greece
Exarcheia |
1338305 | https://en.wikipedia.org/wiki/Privacy%20policy | Privacy policy | A privacy policy is a statement or legal document (in privacy law) that discloses some or all of the ways a party gathers, uses, discloses, and manages a customer or client's data. Personal information can be anything that can be used to identify an individual, not limited to the person's name, address, date of birth, marital status, contact information, ID issue and expiry date, financial records, credit information, medical history, where one travels, and intentions to acquire goods and services. In the case of a business, it is often a statement that declares a party's policy on how it collects, stores, and releases personal information it collects. It informs the client what specific information is collected, and whether it is kept confidential, shared with partners, or sold to other firms or enterprises. Privacy policies typically represent a broader, more generalized treatment, as opposed to data use statements, which tend to be more detailed and specific.
The exact contents of a certain privacy policy will depend upon the applicable law and may need to address requirements across geographical boundaries and legal jurisdictions. Most countries have their own legislation and guidelines of who is covered, what information can be collected, and what it can be used for. In general, data protection laws in Europe cover the private sector, as well as the public sector. Their privacy laws apply not only to government operations but also to private enterprises and commercial transactions.
The Internet privacy requirements of the California Business and Professions Code (CalOPPA) mandate that websites collecting Personally Identifiable Information (PII) from California residents conspicuously post their privacy policy. (See also Online Privacy Protection Act)
History
In 1968, the Council of Europe began to study the effects of technology on human rights, recognizing the new threats posed by computer technology that could link and transmit data in ways not widely available before. In 1969, the Organisation for Economic Co-operation and Development (OECD) began to examine the implications of personal information leaving the country. All this led the council to recommend that policy be developed to protect personal data held by both the private and public sectors, resulting in the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (Convention 108), introduced in 1981. One of the first privacy laws ever enacted was the Swedish Data Act in 1973, followed by the West German Data Protection Act in 1977 and the French Law on Informatics, Data Banks and Freedoms in 1978.
In the United States, concern over privacy policy starting around the late 1960s and 1970s led to the passage of the Fair Credit Reporting Act. Although this act was not designed to be a privacy law, the act gave consumers the opportunity to examine their credit files and correct errors. It also placed restrictions on the use of information in credit records. Several congressional study groups in the late 1960s examined the growing ease with which automated personal information could be gathered and matched with other information. One such group was an advisory committee of the United States Department of Health and Human Services, which in 1973 drafted a code of principles called the Fair Information Practices. The work of the advisory committee led to the Privacy Act in 1974. The United States signed the Organisation for Economic Co-operation and Development guidelines in 1980.
In Canada, a Privacy Commissioner of Canada was established under the Canadian Human Rights Act in 1977. In 1982, the appointment of a Privacy Commissioner was part of the new Privacy Act. Canada signed the OECD guidelines in 1984.
Fair information practice
There are significant differences between the EU data protection and US data privacy laws. These standards must be met not only by businesses operating in the EU but also by any organization that transfers personal information collected concerning citizens of the EU. In 2001 the United States Department of Commerce worked to ensure legal compliance for US organizations under an opt-in Safe Harbor Program. The FTC has approved TRUSTe to certify streamlined compliance with the US-EU Safe Harbor.
Current enforcement
In 1995 the European Union (EU) introduced the Data Protection Directive for its member states. As a result, many organizations doing business within the EU began to draft policies to comply with this Directive. In the same year, the U.S. Federal Trade Commission (FTC) published the Fair Information Principles which provided a set of non-binding governing principles for the commercial use of personal information. While not mandating policy, these principles provided guidance of the developing concerns of how to draft privacy policies.
The United States does not have a specific federal regulation establishing universal implementation of privacy policies. Congress has, at times, considered comprehensive laws regulating the collection of information online, such as the Consumer Internet Privacy Enhancement Act and the Online Privacy Protection Act of 2001, but none have been enacted. In 2001, the FTC stated an express preference for "more law enforcement, not more laws" and promoted continued focus on industry self-regulation.
In many cases, the FTC enforces the terms of privacy policies as promises made to consumers using the authority granted by Section 5 of the FTC Act which prohibits unfair or deceptive marketing practices. The FTC's powers are statutorily restricted in some cases; for example, airlines are subject to the authority of the Federal Aviation Administration (FAA), and cell phone carriers are subject to the authority of the Federal Communications Commission (FCC).
In some cases, private parties enforce the terms of privacy policies by filing class action lawsuits, which may result in settlements or judgments. However, such lawsuits are often not an option, due to arbitration clauses in the privacy policies or other terms of service agreements.
Applicable law
United States
While no generally applicable law exists, some federal laws govern privacy policies in specific circumstances, such as:
The Children's Online Privacy Protection Act (COPPA) affects websites that knowingly collect information about, or targeted at, children under the age of 13. Any such website must post a privacy policy and adhere to enumerated information-sharing restrictions. COPPA includes a "safe harbor" provision to promote industry self-regulation.
The Gramm-Leach-Bliley Act requires institutions "significantly engaged" in financial activities to give "clear, conspicuous, and accurate statements" of their information-sharing practices. The Act also restricts the use and sharing of financial information.
The Health Insurance Portability and Accountability Act (HIPAA) privacy rules require written notice of the privacy practices of health care services; this requirement also applies if the health service is electronic.
The California Consumer Privacy Act (CCPA) gives consumers more control over the personal information that businesses collect about them and the CCPA regulations provide guidance on how to implement the law.
The California Privacy Rights Act of 2020 (CPRA) expands the privacy and information security obligations of most employers doing business in California.
Some states have implemented more stringent regulations for privacy policies. The California Online Privacy Protection Act of 2003 – Business and Professions Code sections 22575-22579 requires "any commercial websites or online services that collect personal information on California residents through a web site to conspicuously post a privacy policy on the site". Both Nebraska and Pennsylvania have laws treating misleading statements in privacy policies published on websites as deceptive or fraudulent business practices.
Canada
Canada's federal privacy law applicable to the private sector is formally referred to as the Personal Information Protection and Electronic Documents Act (PIPEDA). The purpose of the act is to establish rules to govern the collection, use, and disclosure of personal information by commercial organizations. An organization is allowed to collect, disclose and use only the amount of information needed for purposes that a reasonable person would consider appropriate in the circumstances.
The Act establishes the Privacy Commissioner of Canada as the Ombudsman for addressing any complaints that are filed against organizations. The Commissioner works to resolve problems through voluntary compliance, rather than heavy-handed enforcement. The Commissioner investigates complaints, conducts audits, promotes awareness of and undertakes research about privacy matters.
European Union
The right to privacy is a highly developed area of law in Europe. All the member states of the European Union (EU) are also signatories of the European Convention on Human Rights (ECHR). Article 8 of the ECHR provides a right to respect for one's "private and family life, his home and his correspondence", subject to certain restrictions. The European Court of Human Rights has given this article a very broad interpretation in its jurisprudence.
In 1980, in an effort to create a comprehensive data protection system throughout Europe, the Organization for Economic Co-operation and Development (OECD) issued its "Recommendations of the Council Concerning Guidelines Governing the Protection of Privacy and Trans-Border Flows of Personal Data". The seven principles governing the OECD’s recommendations for protection of personal data were:
Notice—data subjects should be given notice when their data is being collected;
Purpose—data should only be used for the purpose stated and not for any other purposes;
Consent—data should not be disclosed without the data subject's consent;
Security—collected data should be kept secure from any potential abuses;
Disclosure—data subjects should be informed as to who is collecting their data;
Access—data subjects should be allowed to access their data and make corrections to any inaccurate data; and
Accountability—data subjects should have a method available to them to hold data collectors accountable for not following the above principles.
The OECD guidelines, however, were nonbinding, and data privacy laws still varied widely across Europe. The US, while endorsing the OECD’s recommendations, did nothing to implement them within the United States. However, all seven principles were incorporated into the EU Directive.
In 1995, the EU adopted the Data Protection Directive, which regulates the processing of personal data within the EU. There were significant differences between the EU data protection rules and equivalent U.S. data privacy laws. These standards must be met not only by businesses operating in the EU but also by any organization that transfers personal information collected concerning a citizen of the EU. In 2001 the United States Department of Commerce worked to ensure legal compliance for US organizations under an opt-in Safe Harbor Program. The FTC has approved a number of US providers to certify compliance with the US-EU Safe Harbor. Since 2010, Safe Harbor has been criticised, especially by German publicly appointed privacy protection officers, because the FTC had not properly enforced its rules even after violations were revealed.
Effective 25 May 2018, the Data Protection Directive is superseded by the General Data Protection Regulation (GDPR), which harmonizes privacy rules across all EU member states. GDPR imposes more stringent rules on the collection of personal information belonging to EU data subjects, including a requirement for privacy policies to be more concise, clearly-worded, and transparent in their disclosure of any collection, processing, storage, or transfer of personally identifiable information. Data controllers must also provide the opportunity for their data to be made portable in a common format, and for it to be erased under certain circumstances.
Australia
The Privacy Act 1988 provides the legal framework for privacy in Australia. It includes thirteen national privacy principles. The Act oversees and regulates the collection, use and disclosure of people's private information, establishes who is responsible if there is a violation, and sets out the rights of individuals to access their information.
India
The Information Technology (Amendment) Act, 2008 made significant changes to the Information Technology Act, 2000, introducing Section 43A. This section provides compensation in the case where a corporate body is negligent in implementing and maintaining reasonable security practices and procedures and thereby causes wrongful loss or wrongful gain to any person. This applies when a corporate body possesses, deals or handles any sensitive personal data or information in a computer resource that it owns, controls or operates.
In 2011, the Government of India prescribed the Information Technology (Reasonable security practices and procedures and sensitive personal data or information) Rules, 2011 by publishing it in the Official Gazette. These rules require a body corporate to provide a privacy policy for handling of or dealing in personal information including sensitive personal data or information. Such a privacy policy should consist of the following information in accordance with the rules:
Clear and easily accessible statements of its practices and policies;
Type of personal or sensitive personal data or information collected;
Purpose of collection and usage of such information;
Disclosure of information including sensitive personal data or information;
Reasonable security practices and procedures.
The privacy policy should be published on the website of the body corporate, and be made available for view by providers of information who have provided personal information under lawful contract.
Online privacy certification programs
Online certification or "seal" programs are an example of industry self-regulation of privacy policies. Seal programs usually require implementation of fair information practices as determined by the certification program and may require continued compliance monitoring. TrustArc (formerly TRUSTe), the first online privacy seal program, included more than 1,800 members by 2007. Other online seal programs include the Trust Guard Privacy Verified program and WebTrust.
Technical implementation
Some websites also define their privacy policies using P3P or the Internet Content Rating Association (ICRA), allowing browsers to automatically assess the level of privacy offered by the site and to allow access only when the site's privacy practices are in line with the user's privacy settings. However, these technical solutions do not guarantee that websites actually follow the claimed privacy policies. These implementations also require users to have a minimum level of technical knowledge to configure their own browser privacy settings. These automated privacy policies have not been popular either with websites or their users. To reduce the burden of interpreting individual privacy policies, re-usable, certified policies available from a policy server have been proposed by Jøsang, Fritsch and Mahler.
Criticism
Many critics have attacked the efficacy and legitimacy of privacy policies found on the Internet.
Concerns exist about the effectiveness of industry-regulated privacy policies. For example, a 2000 FTC report Privacy Online: Fair Information Practices in the Electronic Marketplace found that while the vast majority of websites surveyed had some manner of privacy disclosure, most did not meet the standard set in the FTC Principles. In addition, many organizations reserve the express right to unilaterally change the terms of their policies. In June 2009 the EFF website TOSback began tracking such changes on 56 popular internet services, including monitoring the privacy policies of Amazon, Google and Facebook.
There are also questions about whether consumers understand privacy policies and whether they help consumers make more informed decisions. A 2002 report from the Stanford Persuasive Technology Lab contended that a website's visual designs had more influence than the website's privacy policy when consumers assessed the website's credibility. A 2007 study by Carnegie Mellon University claimed "when not presented with prominent privacy information..." consumers were "…likely to make purchases from the vendor with the lowest price, regardless of that site's privacy policies". However, the same study also showed that when information about privacy practices is clearly presented, consumers prefer retailers who better protect their privacy and some are willing to "pay a premium to purchase from more privacy protective websites". Furthermore, a 2007 study at the University of California, Berkeley found that "75% of consumers think as long as a site has a privacy policy it means it won't share data with third parties," confusing the existence of a privacy policy with extensive privacy protection. Based on the common nature of this misunderstanding, researcher Joseph Turow argued to the U.S. Federal Trade Commission that the term "privacy policy" thus constitutes a deceptive trade practice and that alternative phrasing like "how we use your information" should be used instead.
Privacy policies suffer generally from a lack of precision, especially when compared with the emerging form of the Data Use Statement. Where privacy statements provide a more general overview of data collection and use, data use statements represent a much more specific treatment. As a result, privacy policies may not meet the increased demand for transparency that data use statements provide.
Critics also question whether consumers even read privacy policies or can understand what they read. A 2001 study by the Privacy Leadership Initiative claimed only 3% of consumers read privacy policies carefully, and 64% briefly glanced at, or never read, privacy policies. Having read a privacy statement, the average website user may be more uncertain about the trustworthiness of the website than before. One possible issue is the length and complexity of policies. According to a 2008 Carnegie Mellon study, the average privacy policy is 2,500 words long and requires an average of 10 minutes to read. The study concluded that "privacy policies are hard to read" and, as a result, are "read infrequently". However, any effort to make the information more presentable simplifies it to the point that it does not convey the extent to which users' data is being shared and sold. This is known as the 'transparency paradox.'
References
Further reading
Gazaleh, Mark (2008) Online trust and perceived utility for consumers of web privacy statements, WBS London, 35pp.
Privacy law
Human rights
Identity management
Digital rights
Terms of service
Computing and society |
2346717 | https://en.wikipedia.org/wiki/Volume%20%28computing%29 | Volume (computing) | In computer data storage, a volume or logical drive is a single accessible storage area with a single file system, typically (though not necessarily) resident on a single partition of a hard disk. Although a volume might be different from a physical disk drive, it can still be accessed with an operating system's logical interface. However, a volume differs from a partition.
Differences from partition
A volume is not the same thing as a partition. For example, a floppy disk might be accessible as a volume, even though it does not contain a partition, as floppy disks cannot be partitioned with most modern computer software. Also, an OS can recognize a partition without recognizing any volume associated with it, as when the OS cannot interpret the filesystem stored there. This situation occurs, for example, when Windows NT-based OSes encounter disks with non-Microsoft OS partitions, such as the ext3 filesystem commonly used with Linux. Another example occurs in the Intel world with the "Extended Partition". While these are partitions, they cannot contain a filesystem directly. Instead, "logical drives" (also known as volumes) must be created within them. This is also the case with NetWare volumes residing inside a single partition. In short, volumes exist at the logical OS level, and partitions exist at the physical, media-specific level. Sometimes there is a one-to-one correspondence, but this is not guaranteed.
In Microsoft Windows Server 2008 and onward the term "volume" is used as a superset that includes "partition" as well.
It is not uncommon to see a volume packed into a single file. Examples include ISO 9660 disc images (CD/DVD images, commonly called "ISOs") and installer volumes for Mac OS X (DMGs). As these volumes are files which reside within another volume, they are certainly not partitions.
Example
This example concerns a Windows XP system with two physical hard disks. The first hard disk has two partitions, the second has only one. The first partition of the first hard disk contains the operating system. Mount points have been left at defaults.
In this example,
"C:", "D:", and "E:" are volumes.
Hard Disk 1 and Hard Disk 2 are physical disks.
Any of these can be called a "drive".
Nomenclature
In Linux systems, volumes are usually handled by the Logical Volume Manager or the Enterprise Volume Management System and manipulated using mount(8). In NT-based versions of Microsoft Windows, volumes are handled by the kernel and managed using the Disk Management MMC snap-in or the Diskpart command line tool.
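As an illustration of how mounted volumes appear at the logical OS level, the sketch below (Python, assuming a Linux host) reads the kernel's /proc/mounts table, which the mount tooling mentioned above also consults. The function name is ours, not part of any standard API:

    def list_mounted_volumes(table="/proc/mounts"):
        # Each line of /proc/mounts is: device, mount point, fs type, options, ...
        volumes = []
        with open(table) as f:
            for line in f:
                device, mount_point, fs_type = line.split()[:3]
                volumes.append((device, mount_point, fs_type))
        return volumes

    for device, mount_point, fs_type in list_mounted_volumes():
        print(device + " mounted at " + mount_point + " (" + fs_type + ")")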
Windows NT-based operating systems
Windows NT-based OSes do not have a single root directory. As a result, Windows assigns at least one path to each mounted volume, which can take one of two forms:
A drive letter, in the form of a single letter followed by a colon, such as "F:"
A mount-point on an NTFS volume having a drive letter, such as "C:\Music"
In these two examples, a file called "Track 1.mp3" stored in the root directory of the mounted volume could be referred to as "F:\Track 1.mp3" or "C:\Music\Track 1.mp3", respectively.
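Both path forms behave like ordinary Windows paths. The following is a minimal sketch using Python's pathlib (PureWindowsPath performs no I/O, so it runs on any platform), with the hypothetical file and mount point from the example above:

    from pathlib import PureWindowsPath

    # The same file, reachable through a drive letter or an NTFS mount point.
    via_letter = PureWindowsPath(r"F:\Track 1.mp3")
    via_mount_point = PureWindowsPath(r"C:\Music\Track 1.mp3")

    print(via_letter.drive)        # 'F:'
    print(via_mount_point.drive)   # 'C:'
    print(via_letter.name == via_mount_point.name)   # True: 'Track 1.mp3'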
In order to assign a mount point for a volume as a path within another volume, the following criteria must be met (the directory conditions are sketched in code after this list):
The mounted-to volume must be formatted NTFS.
A directory must exist at the root path. (As of Windows Vista, it can be any subdirectory in a volume)
That directory must be empty.
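A minimal sketch of the directory-side conditions only (Python; whether the mounted-to volume is formatted NTFS must be checked separately, and the path shown is hypothetical):

    import os

    def usable_as_mount_point(directory):
        # The target must exist, be a directory, and be empty.
        return os.path.isdir(directory) and not os.listdir(directory)

    print(usable_as_mount_point(r"C:\Music"))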
By default, Windows will assign drive letters to all drives, as follows:
"A:" and "B:" to floppy disk drives, whether present or not
"C:" and subsequent letters, as needed, to:
Hard disks
Removable disks, including optical media (e.g. CDs and DVDs)
Because of this legacy convention, the operating system startup drive is still most commonly assigned "C:", though this is not always the case. Since personal computers no longer include floppy drives, and optical disc and other removable drives typically still start at "D:", the letters A and B are available for manual assignment by a user with administrative privileges. This assignment will be remembered by the same OS on the same PC the next time a removable volume is inserted, as long as there are no conflicts, the removable drive has not been reformatted on another computer (which changes its volume serial number), and the OS has not been reinstalled on the computer.
On Windows XP, mount points may be managed through the Disk Management snap-in for the Microsoft Management Console. This can be most conveniently accessed through "Computer Management" in the "Administrative Tools" section of the Control Panel.
More than one drive letter can refer to a single volume, as when using the SUBST command.
Warning: removing drive letters or mount-points for a drive may break some programs, as some files may not be accessible under the known path. For example, if a program is installed at "D:\Program Files\Some Program", it may expect to find its data files at "D:\Program Files\Some Program\Data". If the logical disk previously called "D:" has its drive letter changed to "E:", "Some Program" won't be able to find its data at "D:\Program Files\Some Program\Data", since the drive letter "D:" no longer represents that volume.
Unix-like operating systems
In Unix-like operating systems, volumes other than the boot volume have a mount-point somewhere within the filesystem, represented by a path. Logically, the directory tree stored on the volume is grafted in at the mountpoint. By convention, mount-points will often be placed in a directory called '/mnt', though '/media' and other terms are sometimes used.
To use a given path as a mount-point for another volume, a directory (sometimes called a "folder") must exist there.
Unix-like operating systems use the mount command to manipulate mount points for volumes.
For example, if a CD-ROM drive containing a text file called 'info.txt' was mounted at '/mnt/cdrom', the text file would be accessible at '/mnt/cdrom/info.txt'.
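As a brief sketch of the same operation from a script (assuming root privileges and illustrative device and mount-point paths), the mount and umount commands can be invoked from Python:

```python
import subprocess

# Mount the CD-ROM's ISO 9660 filesystem at /mnt/cdrom (requires root).
subprocess.run(["mount", "-t", "iso9660", "/dev/cdrom", "/mnt/cdrom"],
               check=True)

# The volume's directory tree is now grafted in at the mount point.
with open("/mnt/cdrom/info.txt") as f:
    print(f.read())

# Detach the volume once it is no longer needed.
subprocess.run(["umount", "/mnt/cdrom"], check=True)
```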
Data management speed
Files within a volume can generally be moved to any other place within that volume by manipulating the filesystem, without moving the actual data. However, if a file is to be moved outside the volume, the data itself must be relocated, which is a much more expensive operation.
In order to better visualize this concept, one might consider the example of a large library. If a non-fiction work is originally classified as having the subject "plants", but then has to be moved to the subject "flora", one does not need to refile the book, whose position on the shelf would be static, but rather, one needs only to replace the index card. However, to move the book to another library, adjusting index cards alone is insufficient. The entire book must be moved.
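The difference is visible in ordinary file APIs. In the following sketch (with hypothetical paths, and assuming /mnt/usb is a separately mounted volume), a rename within one volume only rewrites filesystem metadata, while a move across volumes must fall back to copying the data:

```python
import os
import shutil

# Within one volume, a "move" is just a rename of filesystem metadata:
# the file's data blocks stay where they are.
os.rename("/home/user/report.txt", "/home/user/archive/report.txt")

# Across volumes, os.rename() fails with EXDEV ("Invalid cross-device
# link"), because the data itself would have to be relocated.
try:
    os.rename("/home/user/archive/report.txt", "/mnt/usb/report.txt")
except OSError:
    # shutil.move() falls back to copying the data and then deleting
    # the original -- a much more expensive operation for large files.
    shutil.move("/home/user/archive/report.txt", "/mnt/usb/report.txt")
```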
Labels and serial numbers
A volume label is the name given to a specific volume in a filesystem. In the FAT filesystem, the volume label was traditionally restricted to 11 characters (reflecting the 8.3 restrictions, but not divided into name and extension fields) even when long file names were enabled; it is stored as an entry within a disk's root directory with a special volume-label attribute bit set, and is also copied to an 11-byte field within the Extended BIOS Parameter Block of the disk's boot sector. The label is always stored as uppercase in FAT and VFAT filesystems, and cannot contain special characters that are also disallowed for regular filenames. In the NTFS filesystem, the volume label is restricted to 32 characters and can include lowercase characters and even Unicode. In the exFAT filesystem, the volume label is also restricted to 11 characters, but can include lowercase characters and Unicode. The label command is used to change the label in DOS, Windows, and OS/2. In GUI systems like Windows Explorer, F2 can be pressed while the volume is highlighted, or a right-click on the name will bring up a context menu that allows it to be renamed, either of which is the same process as for renaming a file. Changing the label in Windows will also change the volume creation timestamp to the current date and time for FAT filesystems. NTFS partitions have the System Volume Information directory, whose creation timestamp is set when Windows creates the partition, or when it first recognizes a repartitioning (the creation of a new volume) by a separate disk utility.
In contrast to the label, the volume serial number is generally unique and is not normally changed by the user; it thus acts as a more consistent and reliable indicator of whether a volume has been changed (as when a disk is removed and another inserted). Disk formatting changes the serial number, but relabeling does not. The concept originated in the 1950s in mainframe computer operating systems. In the OS/360 line, it is human-configurable, has a maximum length of six characters, is uppercase, must start with a letter, and uniquely identifies a volume to the system. For example, "SYSRES" is often used for a system residence volume. Operating systems may use the volume serial number as a mountpoint name.
A volume serial number is a serial number assigned to a disk volume or tape volume.
In FAT and NTFS file systems, the volume serial number is used to determine whether a disk is present in a drive and to detect whether it has been exchanged with another one. This identification system was created by Microsoft and IBM during their development of OS/2, and was introduced in MS-DOS 4.01 in 1988.
The volume serial number is a 32-bit number determined by the date and time on the real-time clock of the computer at the time of the disk's formatting. Previously, the OS determined whether a disk had been swapped by reading the drive's volume label. However, the volume label was not required to be unique and was optional, so many users gave disks no meaningful name and the old method often failed.
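As a minimal sketch of how a program might read both the label and the serial number of a mounted volume (assuming a Windows system; the vol command mentioned below shows the same information), the Win32 function GetVolumeInformationW can be called from Python via ctypes:

```python
import ctypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

def volume_label_and_serial(root="C:\\"):
    """Return the (label, serial) pair for the volume mounted at 'root'."""
    label = ctypes.create_unicode_buffer(261)
    fs_name = ctypes.create_unicode_buffer(261)
    serial = ctypes.c_uint32(0)
    ok = kernel32.GetVolumeInformationW(
        root, label, len(label), ctypes.byref(serial),
        None, None, fs_name, len(fs_name))
    if not ok:
        raise ctypes.WinError(ctypes.get_last_error())
    # The 32-bit serial is conventionally displayed as XXXX-XXXX in hex.
    return label.value, f"{serial.value >> 16:04X}-{serial.value & 0xFFFF:04X}"

print(volume_label_and_serial("C:\\"))
```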
The vol command can be used from the command line to display the current label and serial number of a volume.
References
External links
MSDN's article on Hard Links and Junctions
Disk file systems |
51599940 | https://en.wikipedia.org/wiki/Kent%20Smack | Kent Smack | Kent Smack (born March 1, 1975 in Flemington, New Jersey) is a former two-time member of the U.S. national rowing team who ultimately earned a spot competing for Team USA at the 2004 Athens Olympics. Smack now serves as Managing Director and President of ESM Software Group, where he is a certified expert in business improvement consulting, strategic planning, and developing Balanced Scorecards to manage and execute strategy.
Rowing career
Smack began his rowing career at Hobart and William Smith Colleges as a novice, joining the team only after learning he would sit on the bench for Hobart's lacrosse team. He went on to serve as crew captain, leading the Hobart Statesmen to the 1996 New York State Championship. Hobart was early to recognize his potential as a rower: following his graduation, Hobart Rowing established the Kent D. Smack Award, recognizing the oarsman who demonstrates the most improvement over his career.
After finishing his college career, Smack earned a master's degree in communications and information systems at Rutgers University in 1999. He never stopped rowing: in 1999, he decisively won the US men's club singles event, beating his future Olympic teammate Brett Wilkinson. He went on to earn his place on the U.S. national rowing team in 2001, competing in the 2001 World Rowing Championships in Lucerne and finishing 13th overall in the world. He later competed in the 2004 Summer Olympics in Athens in the men's quadruple sculls, earning a second-place finish in the Olympic Qualification Regatta and an 11th-place finish at the Olympic Games.
Balanced Scorecard
During his time training as an oarsman, Smack also developed his professional career. Working with Robert S. Kaplan and David P. Norton, co-founders of ESM Software Group, he trained and consulted on the Balanced Scorecard methodology for over a decade, publishing multiple articles through the Harvard Business Review. Specializing in SaaS product development and holding an expert certification in business improvement consulting, he now supports the development of software for Balanced Scorecard reporting in conjunction with strategic management. He is cited as an innovator of strategy management software that maximizes the value of cloud computing applications to end users, and serves as Managing Director and President of ESM Software Group.
Publications
Kaplan, Robert S., et al. Balanced Scorecard Report Volume 5, Number 4. Harvard Business Review, July 2003.
Kaplan, Robert S., et al. Balanced Scorecard Report Volume 4, Number 6. Harvard Business Review, Nov. 2002.
Palazzolo, Christopher and Kent D. Smack. "Four Steps to BSC Software Selection." Harvard Business Review, Nov. 2002.
Smack, Kent D. "How-to's of BSC Reporting Part 1." Harvard Business Review, July 2003.
Smack, Kent D. "How-to's of BSC Reporting Part II - The Reporting Meeting." Harvard Business Review, Nov. 2003.
References
1975 births
Living people
People from Flemington, New Jersey
Rowers at the 2004 Summer Olympics
Olympic rowers of the United States
American male rowers |
20167253 | https://en.wikipedia.org/wiki/Mformation | Mformation | Mformation Software Technologies was a software company founded in 1999. Founder Dr. Rakesh Kushwaha served as the company's CTO, and Kevin A. Wood was the CEO. Mformation was a member of the Open Mobile Alliance (OMA), which works to support, extend and improve the standards for remote device management. On 17 September 2015, Alcatel-Lucent announced that it was acquiring Mformation for an undisclosed amount to strengthen its Customer Experience Management solution.
Locations
Mformation was headquartered in Woodbridge, New Jersey, with offices and subsidiaries around the globe including Bangalore, India.
Patent lawsuit
Mformation sued Research In Motion (RIM) in 2008, accusing it of infringement of two patents. In July 2012, jurors in federal court in San Francisco determined that RIM's software, which lets companies manage workers' BlackBerry devices remotely, infringed Mformation's patents. RIM was found liable for $147.2 million in damages, but the judge later overturned the ruling.
References
External links
Mformation Web site
Mobile device management
Software companies based in New Jersey
Software companies established in 1999
1999 establishments in New Jersey
Telecommunications companies established in 1999
Software companies of the United States |
196649 | https://en.wikipedia.org/wiki/E-government | E-government | E-government (short for electronic government) is the use of technological communications devices, such as computers and the Internet, to provide public services to citizens and other persons in a country or region. E-government offers new opportunities for more direct and convenient citizen access to government, and for government provision of services directly to citizens.
The term covers the digital interactions between a citizen and their government (C2G), between governments and other government agencies (G2G), between government and citizens (G2C), between government and employees (G2E), and between government and businesses (G2B); these categories also serve as the delivery models of e-government. The interaction consists of citizens communicating with all levels of government (city, state/province, national, and international), facilitating citizen involvement in governance through information and communication technology (ICT) (such as computers and websites) and business process re-engineering (BPR). Brabham and Guth (2017) interviewed the third-party designers of e-government tools in North America about the ideals of user interaction that they build into their technologies, which include progressive values, ubiquitous participation, geolocation, and education of the public.
Other definitions stray from the idea that technology is an object, defining e-government simply as a facilitator or instrument and focusing on specific changes in public administration. The internal transformation of a government is the definition established by the specialist technologist Mauro D. Ríos. In his paper "In Search of a Definition of Electronic Government", he says: "Digital government is a new way of organization and management of public affairs, introducing positive transformational processes in management and the structure itself of the organization chart, adding value to the procedures and services provided, all through the introduction and continued appropriation of information and communication technologies as a facilitator of these transformations."
Terminology
E-government is also known as e-gov, electronic government, Internet governance, digital government, online government, or connected government. As of 2014, the OECD still uses the term digital government and distinguishes it from e-government in the recommendation produced for the Network on E-Government of the Public Governance Committee. Several governments have started to use the term digital government to cover a wide range of services involving contemporary technology, such as big data, automation and predictive analytics.
E-gov strategy (or digital government) is defined as "the employment of the Internet and the world-wide-web for delivering government information and services to the citizens" (United Nations, 2006; AOEMA, 2005). Electronic government (or e-government) essentially refers to the "utilization of Information Technology (IT), Information and Communication Technologies (ICTs), and other web-based telecommunication technologies to improve and/or enhance the efficiency and effectiveness of service delivery in the public sector". E-government promotes and improves broad stakeholders' contribution to national and community development, and deepens the governance process.
In electronic government systems, government operations are supported by web-based services, using information technology, specifically the Internet, to facilitate communication between the government and its citizens.
Transformational government
Transformational government, also called transformational e-government, is the use of computer-based information and communications technologies (ICT) to change the way governments work. The term is commonly used to describe a government reform strategy which attempts to radically change the way people understand government, especially those working within government. For example, it is often associated with a whole-of-government viewpoint, which tries to foster cross-department collaboration and provide one-stop-shop convenience in the delivery of services to citizens.
The term transformational government is usually used aspirationally, as denoting the highest level of what e-government can achieve:
presence, where ICT, and usually websites, are used to provide information;
interaction, where government interacts with citizens, and departments interact with each other, online especially by email;
transaction, where such things as paying taxes or licenses are carried out online;
transformation, which involves a reinvention of government functions and how they operate. In relation to developing countries, it is often associated with hopes of reducing corruption, and in relation to developed countries, with attempts to increase the involvement of the private and voluntary sectors in government activity.
Government 2.0
Government 2.0 or Gov 2.0 refers to government policies that aim to harness collaborative technologies and interactive Internet tools to create an open-source computing platform in which government, citizens, and innovative companies can improve transparency and efficiency. Put simply, Gov 2.0 is about "putting government in the hands of citizens". Gov 2.0 combines interactive Web 2.0 fundamentals with e-government and increases citizen participation by using open-source platforms, which allow development of innovative apps, websites, and widgets. The government's role is to provide open data, web services, and platforms as an infrastructure.
E-governance
E-government should enable anyone visiting a city website to communicate and interact with city employees via the Internet through graphical user interfaces (GUI) and instant messaging (IM), to learn about government issues through audio/video presentations, and to do so in ways more sophisticated than sending a simple email to the address provided at the site.
The essence of e-governance is "The enhanced value for stakeholders through transformation" and "the use of technology to enhance the access to and delivery of government services to benefit citizens, business partners and employees". The focus should be on:
The use of information and communication technologies, and particularly the Internet, as a tool to achieve better government.
The use of information and communication technologies in all facets of the operations of a government organization.
The continuous optimization of service delivery, constituency participation, and governance by transforming internal and external relationships through technology, the Internet and new media.
Whilst e-government has traditionally been understood as being centered around the operations of government, e-governance is understood to extend the scope by including citizen engagement and participation in governance. As such, following in line with the OECD definition of e-government, e-governance can be defined as the use of ICTs as a tool to achieve better governance.
Non-internet e-government
While e-government is often thought of as "online government" or "Internet-based government," many non-Internet "electronic government" technologies can be used in this context. Some non-Internet forms include telephone, fax, PDA, SMS text messaging, MMS, wireless networks and services, Bluetooth, CCTV, tracking systems, RFID, biometric identification, road traffic management and regulatory enforcement, identity cards, smart cards and other near field communication applications; polling station technology (where non-online e-voting is being considered), TV and radio-based delivery of government services (e.g., CSMW), email, online community facilities, newsgroups and electronic mailing lists, online chat, and instant messaging technologies.
History
One of the first references to the term "electronic government" occurred alongside the term electronic democracy in 1992. During the last two decades, governments around the world have invested in ICT with the aim of increasing the quality and decreasing the cost of public services. But over that time, as even the least developed countries have moved to websites, e-services and e-government strategies, it has become increasingly clear that e-government has not delivered all the benefits that were hoped for. One study found that 35% of e-government projects in developing countries resulted in total failures, and that 50% were partial failures.
In reaction to these poor outcomes, there has been a shift of perspective to transformational government, aiming beyond purely technical aspects of better enabling e-government processes towards addressing the cultural and organisational barriers which have hindered public service benefits realisation. Researchers have defined the rationale for transformational government as "the exploitation of e-government such that benefits can be realized".
In 2010 the Organization for the Advancement of Structured Information Standards (OASIS) published a report which identified a wide range of common pitfalls which have hampered many governments in achieving significant impacts through their technology investments. However, OASIS also noted that:
"… an increasing number [of governments] are now getting to grips with the much broader and complex set of cultural and organizational changes which are needed for ICT to deliver significant benefits to the public sector. This new approach is generally referred to as Transformational Government."
OASIS cites the UK and Australia as two of the leaders in this area:
"Transformational Government…. encompasses a new "virtual" business layer within government which allows an integrated, government-wide, citizen-focused service to be presented to citizens across all channels, but at no extra cost and without having to restructure government to do so. Two very good examples of this new approach are South Australia’s "Ask Just Once" portal and the UK Government’s DirectGov portal, and the approach is explained in very good detail in the CS Transform’s white paper entitled "Citizen Service Transformation – a manifesto for change in the delivery of public services".
UN e-Government Development Index
The Division for Public Administration and Development Management (DPADM) of the United Nations Department of Economic and Social Affairs (UN-DESA) conducts a biennial e-government survey which includes a section titled the e-Government Development Index (EGDI). It is a comparative ranking of 193 countries of the world according to three primary indicators: (i) the Online Service Index (OSI), which measures the online presence of the government in terms of service delivery; (ii) the Telecommunication Infrastructure Index (TII); and (iii) the Human Capital Index (HCI). Constructing a model for the measurement of digitized services, the survey assesses the 193 member states of the UN according to a quantitative composite index of e-government readiness based on website assessment, telecommunication infrastructure, and human resource endowment.
A diverse group of 100 online volunteer researchers from across the globe engaged with the United Nations Department of Economic and Social Affairs (UN DESA) to process 386 research surveys carried out across 193 UN member states for the 2016 UN E-Government Survey. The diversity of nationalities and languages of the online volunteers (more than 65 languages and 15 nationalities, half of them from developing countries) mirrors the mission of the survey.
The survey has been criticized for not including an index of digital inclusion levels.
Delivery models and activities of e-government
The primary delivery models of e-government can be divided into:
Government-to-citizen or government-to-consumer (G2C) approaches such as setting up websites where citizens can download forms, government information, etc.
This model applies the strategy of customer relationship management (CRM) from the business world: by managing its "customer" (citizen) relationships, the "business" (government) can provide the products and services required to fulfill the needs of the customer (citizen).
In the United States, the National Partnership for Reinventing Government (NPR) has been implemented since 1993.
Government-to-business (G2B)
Government-to-government (G2G)
Government-to-employees (G2E)
Within each of these interaction domains, four kinds of activities take place:
pushing information over the Internet, e.g.: regulatory services, general holidays, public hearing schedules, issue briefs, notifications, etc.
two-way communications between the agency and the citizen, a business, or another government agency. In this model, users can engage in dialogue with agencies and post problems, comments, or requests to the agency.
conducting transactions, e.g.: lodging tax returns, applying for services and grants.
governance, e.g. enabling the citizen to transition from passive information access to active participation by:
Informing the citizen
Representing the citizen
Encouraging the citizen to vote
Consulting the citizen
Involving the citizen
Examples of online transactional services, employed in e-governments include:
Applying for a birth certificate
Applying for a building permit
Applying for a business license
Applying for a death certificate
Applying for a driver's license
Applying for environmental permits
Applying for government vacancies online
Applying for land title registration
Applying for a marriage certificate
Applying for a personal identity card
Applying for social protection programs
Applying for a visa
Declaring to police
Paying fines
Paying for utilities (water, gas, electricity)
Registering a business
Registering a motor vehicle
Looking up land registration info
Looking up address and telephone number info in online telephone directory
Submitting a change of address
Submitting income taxes
Submitting Value Added Tax
Controversies
Disadvantages
The main disadvantages of e-government are the digital divide and digital inequalities that bar certain people from accessing the full benefits of digitization. When e-government is presented as the only option for accessing an essential service, those who do not have public access to computers and the Internet, or who lack adequate knowledge of how to use them, suffer.
Other disadvantages include the reliability of information on the web and issues that could influence and bias public opinions. There are many considerations and potential implications of implementing and designing e-government, including disintermediation of the government and its citizens, digital self-determination of citizens in a global internet network, impacts on economic, social, and political factors, vulnerability to cyber attacks, and disturbances to the status quo in these areas.
The political nature of public-sector reform is also cited as a disadvantage of e-government systems.
Cost
Although "a prodigious amount of money has been spent" on the development and implementation of e-government, some say it has yielded only a mediocre result. The outcomes and effects of trial Internet-based government services are often difficult to gauge or users find them unsatisfactory.
According to Gartner, worldwide IT spending was estimated to total $3.6 trillion in 2011, a 5.1% increase from 2010 ($3.4 trillion).
Development
Because e-government is in the early stages of development in many countries and jurisdictions, it is hard to apply to forms of government that have long been institutionalized. Age-old bureaucratic practices delivered through new media or new technologies can lead to problems of miscommunication.
An example of such a practice was the automation of the Indiana welfare program that began in 2006. An audit commissioned by then Indiana Governor Mitch Daniels in 2005 found that several Family and Social Services Administration (FSSA) employees and welfare recipients were committing welfare fraud. The bureaucratic nature of Indiana's welfare system allowed people to cheat the system and cost the state large amounts of money. Daniels characterized the system as "irretrievably broken", stating that it had reached a state where employees could not fix it on their own. He cited many issues that tie directly into the fact that the system had not been automated.
In hopes of reaping the many benefits of e-government, Daniels signed into law a bill privatizing and automating the enrollment service for Indiana's welfare programs. Daniels aimed to streamline benefits applications, privatize casework, and identify fraud. It was believed that moving away from face-to-face casework and toward electronic communication would fix the aforementioned problems and improve efficiency.
Indiana's welfare enrollment facilities were replaced with online applications and call centers operated by IBM. These ran into issues almost immediately: the mainly face-to-face and personalized practice was modernized essentially overnight, blindsiding many people who relied on those features. The automated system took a one-size-fits-all approach that attributed errors to the recipient before anything else. Problems that were previously solvable through a single phone call with a recipient's caseworker became increasingly complicated because the private call-center workers were not adequately trained.
Welfare recipients were denied their benefits due to lack of direct help, system errors outside their control, or simply an inability to use the technology meant to speed up the process. The transition overwhelmed not only recipients but also employees. In October 2009, Daniels himself admitted that the project was flawed and problematic, cancelling the contract with IBM. Indiana began rolling out a hybrid system in 2010, including caseworkers and some automation where appropriate.
False sense of transparency and accountability
Opponents of e-government argue that online governmental transparency is dubious because it is maintained by the governments themselves. Information can be added or removed from the public eye. To this day, very few organizations monitor and provide accountability for these modifications. Those that do, like the United States' OMBWatch and the Government Accountability Project, are often staffed by nonprofit volunteers. Even the governments themselves do not always keep track of the information they insert and delete.
Hyper-surveillance
Increased electronic contact and data exchange between government and its citizens goes both ways. As e-government technologies become more sophisticated, citizens will likely be encouraged to interact electronically with the government for more transactions, since e-services are much less costly than brick-and-mortar service offices (physical buildings) staffed by civil servants. This could potentially lead to a decrease in privacy for civilians as the government obtains more and more information about their activities. Without safeguards, government agencies might share information on citizens. In a worst-case scenario, with so much information being passed electronically between government and civilians, a totalitarian-like system could develop. When the government has easy access to countless pieces of information about its citizens, personal privacy is lost.
Inaccessibility
An e-government website that provides government services often does not offer the "potential to reach many users including those who live in remote areas [without Internet access], are homebound, have low literacy levels, exist on poverty line incomes." Homeless people, people in poverty and elderly people may not have access.
Trust
Trust in e-governance is highly dependent on its performance and execution, which can be measured through the effectiveness of current actions. This is much riskier and more prone to fluctuation than a system of trust based on reputation, because performance does not consider past actions.
With the automation of institutionalized government services, trust can go both ways: both in the trust that people have for the government, and the trust the government places in its people. In the case of Indiana’s automated welfare system, the less skilled call center workers defaulted their decisions to the automated system and favored solutions that best fit the system rather than the people. When too much trust is put in e-governance, errors and mistakes are not caught.
A crucial part of the Indiana welfare system was the relationship between caseworkers and their clients. It was the main way for Hoosiers to interact with this public institution and get the help they needed. However, Daniels and many others saw in it a potential invitation to fraud. There were indeed instances of welfare fraud occurring between caseworkers and clients, such as a December 2009 case in Marion County. But the motivation to automate was an attempt to catch people taking advantage of the system rather than to get services to as many people as possible. Welfare recipients were treated as criminals rather than as people in need. Such treatment of the poor is similar to that of the poorhouses of the 19th and 20th centuries: both developed flawed systems with an intent to punish, creating more burdens than the initially marketed benefits.
Advantages
The ultimate goal of the e-government is to be able to offer an increased portfolio of public services to citizens in an efficient and cost-effective manner. E-government allows for government transparency. Government transparency is important because it allows the public to be informed about what the government is working on as well as the policies they are trying to implement.
Simple tasks may be easier to perform through electronic government access. Many changes, such as marital status or address changes can be a long process and take a lot of paperwork for citizens. E-government allows these tasks to be performed efficiently with more convenience to individuals.
E-government is an easy way for the public to be more involved in political campaigns. It could increase voter awareness, which could lead to an increase in citizen participation in elections.
It is convenient and cost-effective for businesses, and the public benefits by getting easy access to the most current information available without having to spend time, energy and money to get it.
E-government helps simplify processes and makes government information more easily accessible for public sector agencies and citizens. For example, the Indiana Bureau of Motor Vehicles simplified the process of certifying driver records to be admitted in county court proceedings. Indiana became the first state to allow government records to be digitally signed, legally certified and delivered electronically by using Electronic Postmark technology. In addition to its simplicity, e-democracy services can reduce costs. The Alabama Department of Conservation & Natural Resources, Wal-Mart and NIC developed an online hunting and fishing license service utilizing an existing computer system to automate the licensing process. More than 140,000 licenses were purchased at Wal-Mart stores during the first hunting season, and the agency estimates it will save $200,000 annually from the service.
The anticipated benefits of e-government include efficiency, improved services, better accessibility of public services, sustainable community development and more transparency and accountability.
Democratization
One goal of some e-government initiatives is greater citizen participation. Through the Internet's Web 2.0 interactive features, people from all over the country can provide input to politicians or public servants and make their voices heard. Blogging and interactive surveys allow politicians or public servants to see the views of the people on any issue. Chat rooms can place citizens in real-time contact with elected officials or their office staff or provide them with the means to interact directly with public servants, allowing voters to have a direct impact and influence in their government. These technologies can create a more transparent government, allowing voters to immediately see how and why their representatives in the capital are voting the way they are. This helps voters decide whom to vote for in the future or how to help the public servants become more productive.
A government could theoretically move more towards a true democracy with the proper application of e-government. Government transparency will give insight to the public on how decisions are made and hold elected officials or public servants accountable for their actions. The public could become a direct and prominent influence in government legislature to some degree.
Environmental bonuses
Proponents of e-government argue that online government services lessen the need for hard-copy paper forms. Due to pressure from environmentalist groups, the media, and the public, some governments and organizations have turned to the Internet to reduce paper use. The United States government utilizes the USA.gov forms website to provide "internal government forms for federal employees" and thus "produce significant savings in paper". As well, if citizens can apply for government services or permits online, they may not need to drive to a government office, which could lead to less air pollution from gasoline- and diesel-fuelled vehicles.
Speed, efficiency, and convenience
E-government allows citizens to interact with computers to achieve objectives at any time and any location and eliminates the necessity for physical travel to government agents sitting behind desks and windows. Many e-government services are available to citizens with computers and Internet access 24 hours a day, seven days a week, in contrast to brick-and-mortar government offices, which tend to be open only during business hours (notable exceptions are police stations and hospitals, which are usually open 24 hours a day so that staff can deal with emergencies).
Improved accounting and record-keeping can be noted through computerization, and information and forms can be easily accessed by citizens with computers and Internet access, which may enable quicker processing of applications and information searches. On the administrative side, files and linked information can now be stored in and retrieved from electronic databases rather than hard copies (paper copies) kept in various locations. Individuals with disabilities or conditions that affect their mobility no longer have to be mobile to be active in government and can access public services in the comfort of their own homes (as long as they have a computer, Internet access, and any accessibility equipment they may need).
Public approval
Recent trials of e-government have been met with acceptance and eagerness from the public. Citizens participate in online discussions of political issues with increasing frequency, and young people, who traditionally display minimal interest in government affairs, are drawn to electronic voting procedures.
Although Internet-based governmental programs have been criticized for lack of reliable privacy policies, studies have shown that people value prosecution of offenders over personal confidentiality. Ninety percent of United States adults approve of Internet tracking systems of criminals, and 57% are willing to forgo some of their personal internet privacy if it leads to the prosecution of criminals or terrorists.
Technology-specific e-government
There are also some technology-specific sub-categories of e-government, such as m-government (mobile government), u-government (ubiquitous government), and g-government (GIS/GPS applications for e-government).
Previous concerns about technological developments in e-government stem from citizens' limited use of online platforms for political participation at the local level.
The primary delivery models of e-government are classified by who benefits. In the development of public sector or private sector portals and platforms, a system is created that benefits all constituents. Citizens needing to renew their vehicle registration have a convenient way to accomplish it while already engaged in meeting the regulatory inspection requirement. On behalf of a government partner, a business can provide what has traditionally been managed solely by the government, and can use this service to generate profit or attract new customers. Government agencies are relieved of the cost and complexity of having to process the transactions.
To develop these public sector portals or platforms, governments have the choice to internally develop and manage, outsource, or sign a self-funding contract. The self-funding model creates portals that pay for themselves through convenience fees for certain e-government transactions, known as self-funding portals.
Social Media Usage
Social networking services and websites are an emerging area for e-democracy. The social networking entry point is within the citizens’ environment and the engagement is on the citizens’ terms. Proponents of e-government perceive the government's use of social networking as a medium to help the government act more like the public it serves. Examples can be found at almost every state government portal through Facebook, Twitter, and YouTube widgets.
Government and its agents also have the opportunity to follow citizens to monitor satisfaction with the services they receive. Through ListServs, RSS feeds, mobile messaging, micro-blogging services and blogs, government and its agencies can share information with citizens who hold common interests and concerns. Government is also beginning to use Twitter: in the state of Rhode Island, Treasurer Frank T. Caprio offers daily tweets of the state's cash flow.
E-Signature
Several local governments in the United States have allowed online e-signatures for candidate nominating petitions and signature requirements for ballot initiatives. In 2012, Arizona launched a prototype system called E-qual, which allowed statewide candidates running for office to collect signatures online and share the link on social media. E-qual was expanded in 2016 to cover candidates in local elections within the state, but was not used at the local level before the 2020 state election. The city of Boulder, Colorado implemented a similar system in 2020 to collect signatures for city ballot questions.
By country
Africa
Kenya
Following the transition from the longstanding Kenya African National Union government to the National Rainbow Coalition government in December 2002, a Directorate of e-Government was established in January 2004 after an executive (cabinet) session. The newly created department had the duty of drawing up the plan of action for future ICT implementations.
Like many other African nations, Kenya has embraced the high mobile penetration rate within its population. Even people living in remote areas without access to traditional telecommunications networks can now communicate with ease. This has had, and continues to have, a great impact on the government's strategies for reaching out to its citizens.
Given that about 70% of the population owns mobile phones, leading mobile network operators such as Safaricom have taken great steps in offering services that meet citizens' demands. Such services include Kipokezi (which allows subscribers to chat online and exchange electronic mail via standard mobile phones) and M-Pesa (which allows subscribers to send and receive electronic cash). These services appeal to the majority of Kenyans because they also allow unbanked members of society to conduct ordinary and secure business via M-Pesa; a recent IMF report revealed that M-Pesa transactions in Kenya exceeded those carried out by Western Union worldwide.
Website: Open Kenya | Transparent Africa
Eurasia
Armenia
Armenian e-government was established in 2004. It brings together all the tools and databases created by Armenian state agencies and provides a user-friendly online environment comprising more than twenty services and tools. Under this initiative, "Interactive Budget" and "State Non-Commercial Organisations' Financing" sections are available for the first time. There are also twenty other tools, including search engines for finding the Government's and the Prime Minister's decisions, the agenda of the next cabinet sitting, information on state purchases, the electronic tax reporting system, the online application system of the Intellectual Property Agency, the information search system of the Intellectual Property Agency, and the Electronic Signature and Electronic Visa (e-visa) sections. The Electronic Signature is used in several other services when a user wants to submit an application or receive information; it is a universal system used both by state officials and by citizens and legal entities.
E-License: This system allows companies to submit applications for obtaining or terminating licenses for various activities (pharmaceuticals, banking, construction, transport, etc.). It also provides other services related to already obtained licenses.
System of reports on licensed activities: The Report Acceptance System enables licensed persons to submit any report (annual, monthly or quarterly) on licensed activities.
E-Payments: The Electronic Payment System processes online payments. The application is designed specifically for charging state fees, local fees, administrative penalties, and payments for services provided by state and local government bodies. Payments can be made by Visa, Mastercard, PayPal and the local Arca or Mobidram systems.
E-Cadastre: The system enables users to submit applications to the property cadastre and receive information on landowners, the surface area of a plot of land, and the legal status of any property. The state electronic payment system is integrated into this tool. Online applications for the registration of rights and restrictions, together with related documents, may be submitted by users who have a digital signature.
E-Draft: In 2016, the Ministry of Justice of Armenia developed the Legal Drafts Database, designed for the publication of any draft initiated by the government or a member of Parliament. The database can be accessed through a website which presents legal drafts to the public, organizes online discussions and, as a consequence, enables the active participation of civil society representatives in the law-making process. The website enables users to search legal drafts, follow their progress, and become familiar with the suggestions presented. Registered users can present suggestions and review the "summary paper" of suggestions on a draft, including the adopted suggestions and the reasoning concerning those not adopted.
E-Register: The system enables the registration of legal entities, such as limited liability companies, joint-stock companies, foundations, and self-employed entrepreneurs. On average, it takes twenty minutes to register a company, depending on the entity's type. The state fee can be paid through the E-Payments system. The system also allows users to track submitted applications, search existing companies, and purchase full information about any company, including information about shareholders.
Datalex: This system allows users to find cases, search the laws of Armenia, and follow the schedule of court hearings.
E-Announcement: The system is designed for public announcements. The state authorities are obliged to make public announcements under certain circumstances stipulated by law.
E-Tax: This tool simplifies the tax declaration process for both taxpayers and tax authorities. Any natural person or legal entity can submit a tax declaration, verifying it by electronic signature.
E-IP: Online submission of patent and trademark applications using electronic signature.
E-Visa: This application enables the process of obtaining a visa through an electronic application. Visas are issued within two days.
E-Signature: The system verifies the identity of the user and protects submitted applications. Any resident of Armenia, whether a natural person or a legal entity, can obtain an electronic signature and use it when applying through e-government systems.
Azerbaijan
The "e-government" framework was established in accordance with the "National Strategy on Information-Communication Technologies in the Development of the Republic of Azerbaijan (2003–2012)" and implemented in the framework of the "E-Azerbaijan" Program. The project is aimed to increase the convenience and efficiency of the activity of state agencies, simplify interactions between population, businesses, and government agencies, contribute to creating new citizen-official relations framework and ensure transparency and free flow of information.
The main components of the e-government infrastructure are integrated network infrastructure for state bodies, e-government portal, e-government gateway, State register of information resources and systems, e-signature, e-document circulation and e-government data center (under preparation).
The state portal www.e-gov.az was established to help citizens benefit from e-services provided by government agencies on a "single window" principle with a combination of services. Through the e-government portal, citizens can use more than 140 e-services of 27 state agencies. In addition, a gateway between government agencies was established to ensure the mutual exchange of information, and most state agencies are connected to this infrastructure. The gateway allows efficient use of existing government information systems and safe contact between them, issuing requests and rendering e-services, and spares citizens from providing the same information or documents that are already available in information databases.
On 14 March 2018, the E-Government Development Center was launched. It is a public legal entity subordinated to the State Agency for Public Service and Social Innovations under the President of the Republic of Azerbaijan. The center seeks to utilize digital technologies and establish e-government so that state services operate more efficiently, public services are widely available, and the living standards of the country's citizens improve. It is a government-to-citizen (G2C) type of e-governance.
Bangladesh
The eGovernment web portal has been developed to provide more convenient access to various government services and information through one window. Services can now be delivered to people at their convenience and, more importantly, with much greater emphasis on the transparency and accountability of public services.
India
The e-governance initiatives and programs in India are undertaken by the Ministry of Electronics and Information Technology (MeitY, www.meity.gov.in).
The current umbrella program for e-governance of the Government of India is known as "Digital India" (www.digitalindia.gov.in).
The Indian government has launched many e-governance initiatives, including a portal for public grievances, the MCA21 Mission Mode Project, e-filing of income tax, the e-gazette, Project Nemmadi, and the overall Digital India policy.
Indonesia
E-government in Indonesia is developing, especially in central and regional/local government offices. E-government was officially introduced into public administration by Presidential Directive No. 6/2001 on Telematics, which states that the government of Indonesia has to use telematics technology to support good governance; furthermore, e-government was to be introduced for different purposes in government offices. As one of the ISO member countries, Indonesia pays particular attention to facilitating standardization activities; among the facilities provided are the National Information System of Standardization (SISTANAS) and the Indonesia Standardization Information Network (INSTANET). Ministries, institutions and local governments of Indonesia used to run separate e-government systems, which as of 2017 are integrated into a centrally based system. In 2017, the government also undertook programs for the digitization of SMEs and the informal sector. Many cities across Indonesia, including Jakarta, Bandung, Surabaya, and Makassar, are implementing the Smart City concept, consisting of e-government, e-health, e-education, e-logistics and e-procurement as priority areas.
Iran
In 2002, Iran published a detailed report named TAKFA (Barnameye Tose-e va Karborde Fanavaie Etela'at) in which it was predicted that most government bodies would try to virtualize their services as soon as possible. However, based on reports by UN bodies, Iran has in recent years failed to meet the average standards of e-government. In 2008, the Supreme Council of Information released a report criticizing the government for its poor progress in employing new communication technologies for administrative purposes.
In 2016, Iran launched the National Information Network and improved the quality and speed of Internet access. In 2017, Iran introduced phase one of e-government, including E-Tax, E-Customs, E-Visa, the E-Government Portal, and a mobile application to modernize Iran's government services.
The Iranian government plans to introduce other phases of E-gov soon.
Iraq
The Iraqi e-government citizen program was established to "eliminate bribery and favoritism and end the citizens' suffering in going back repeatedly to directories". The interface lets citizens send requests and complaints; it can also be used for issuing identity cards, driving licenses and passports.
Jordan
Jordan established its e-government program in 2002, and many governmental services are provided online.
Kazakhstan
The e-government portal egov.kz was launched in 2012 as part of Kazakhstan's effort to modernize how citizens access government services and information. The egov.kz mobile app was recognized as best app in the GovTechioneers competition at the 2017 World Government Summit in Dubai.
Korea
Announced in 2013 with "an ambitious plan to allow wider public access to government data to improve the transparency of state affairs", this initiative includes citizen-centered government innovation; the core values of openness, sharing, communication and collaboration in all areas of governing; and customized services to individual citizens, which are intended to create jobs and support the creative economy.
Malaysia
In Malaysia, the e-government efforts are undertaken by the Malaysian government under the umbrella of the Multimedia Super Corridor (MSC) and e-government flagships, launched in mid-1996 by Dr Mahathir Mohamad, then Prime Minister of Malaysia (1981–2003) (Jeong & Nor Fadzlina, 2007).
Electronic government is an initiative aimed at reinventing how the government works. It seeks to improve how the government operates, as well as how it delivers services to the people (Ibrahim Ariff & Goh Chen Chuan, 2000).
Myanmar
The Yangon City Development Committee (Burmese: ရန်ကုန်မြို့တော်စည်ပင်သာယာရေးကော်မတီ) (YCDC) is the administrative body of Yangon, the largest city and former capital of Myanmar (Burma). The committee consists of 20 departments, with its headquarters in Yangon City Hall. The committee's chairman is also the city's mayor.
In 2003, YCDC was organized to provide e-government for Yangon City. The main purposes of the city's e-government program are to provide easy access between the government and the city's citizens via the Internet, to reduce paper usage, to reduce the city budget, to build the city's fiber ring, to provide timely public information, to store public data and to develop and expand G2G, G2C, G2B, and G2E programs.
In January 2013 responsibility for e-government was divided between the e-Government Administration Committee and the e-Government Processing Committee. The e-Government Administration Committee includes the Mayor of Yangon City as Chief, the General Secretary of Yangon City as Sub-Chief, and the other 20 head of department officers as chairmen. The e-Government Processing Committee includes the Head of Public Relation and Information Department as Chief and the other 20 deputy head of department officers as chairmen.
The official web-portal is www.ycdc.gov.mm.
Mandalay is the second-largest city and the last royal capital of Myanmar (Burma). In 2014, the Mandalay Region Government developed www.mdyregion.gov.mm to inform people about the regional government and its activities.
Mandalay Region Government organized the e-Government Steering Committee on 23 June 2016.
The committee's chairman was U Sai Kyaw Zaw, Minister of Ethnic Affairs.
On 21 July 2017, the www.emandalay.gov.mm web portal was opened by Dr. Zaw Myint Maung, Prime Minister of the Mandalay Region Government. The portal includes two e-services and 199 topics from 70 agencies.
The committee is also developing a regional data center, which was scheduled to open in 2018.
Nepal
The e-government planning and conceptual framework was presented to Nepal with extensive support from the Government of Korea (KIPA). The e-government vision is "The Value Networking Nepal" through:
Citizen-centered service
Transparent service
Networked government
Knowledge-based society
Nepal's E-government mission statement is "Improve the quality of people’s lives without any discrimination, transcending regional and racial differences, and realize socio-economic development by building a transparent government and providing value-added quality services through ICT."
E-government adoption and practice have been slow in Nepal. However, local government bodies now have dedicated teams of ICT volunteers working towards implementing e-government in the country through extensive ICT for Local Bodies initiatives.
Pakistan
In 2014, the Government of Pakistan created the National Information Technology Board (NITB) under the Ministry of Information Technology & Telecom to enable a digital ecosystem for government services to the citizens of Pakistan. NITB was formed as a result of a merger between the Pakistan Computer Bureau (PCB) and the Electronic Government Directorate (EGD).
The key functions identified by the NITB are:
Provide technical guidance for the introduction of e-governance in the federal government.
Suggest efficient and cost-effective implementation of e-government programs in the federal ministries and divisions.
Carry out training needs assessments and design and implement the identified IT capacity-building programs for the employees of federal ministries and divisions.
Review the status of e-government readiness regularly to ensure sustainable, accelerated digitization and relevant human resource development.
Identify the areas where IT interventions can be helpful and suggest measures for the automation of these areas through business process re-engineering (BPR).
NITB rolled out an e-Office Suite across various ministries in the Government of Pakistan. While it clearly pursued efficiency gains and improved transparency, it also hoped to deliver "efficient and cost-effective public services to citizens of Pakistan." The suite primarily comprised five modules, deployed across all the ministries:
Internal Communication Module
HR Management Module
Inventory & Procurement Management Module
Project Management Module
Finance Budget Module
NITB released a high-level diagram that describes the process of transforming federal government agencies and ministries to e-office environments.
Criticism: NITB's rollout of the e-Office Suite across almost all federal agencies is not only overly ambitious but also likely to fail. It lumps together lofty organizational efficiency goals with a set of delivery or citizen-facing targets. In practice, most of the services NITB has provided have been largely conceptual and not sufficiently concrete. The process outlined in the adoption diagram seems devoid of any user-centric design or value-proposition formulation. Instead of creating many MVPs (minimum viable products) and taking advantage of an iterative, validated-learning process, the e-Office Suite seems to incorporate all the features and functions that various ministries and divisions may need or use. It focuses more on the needs of bureaucrats and government agencies than on the needs of the end users (the citizens of Pakistan) and the services they would need from a ministry or division.
Russia
Russia's electronic government (or e-government) was created on the basis of the Federal Law "On Providing State and Municipal Services" (2010), the strategy on development of the Information Society in the Russian Federation approved by the President (2008), the federal target programme "Electronic Russia" (2002–2010) approved by the government (2002), the State Programme "Information Society" (2010), the procedure for developing and approving administrative regulations for the execution of public functions (public services) approved by the government (2005), the concept of administrative reform in the Russian Federation in 2006–2010 approved by the government (2005), and other orders, resolutions, and acts of the Russian Federation.
The main aims of e-government are to provide equal opportunities for all Russians regardless of where they live or what they earn, and to make the system of public administration more effective. E-government is thus intended to produce a system of public management that accommodates the individual interests of every citizen through participation in public policy-making via ICTs.
Russian e-government currently includes the following systems:
1. The unified interagency interaction system, used for providing state and municipal services, exchanging information and data between participants in interagency cooperation, and quickly approving state and municipal decisions.
2. The unified authentication and authorization system, which verifies the rights of all participants in e-government.
3. The unified portal of state and municipal services and functions, a "single window" for all information and services provided by state and municipal authorities.
The portal of public services is one of the key elements of the project to create an "electronic government" in the country. The portal provides a single point of access to all references on state and municipal services through the Internet and provides citizens and organizations the opportunity to receive these services electronically. Monthly visits by users of the public services portal range between 200,000 and 700,000. For example, citizens are now able to get or exchange a driver's license through this portal.
4. The head system for the use of electronic signatures.
Other systems are hosted on cloud services, as researchers have found cloud computing to be a useful tool for e-government.
Today, Russian e-government elements are in demand in the spheres of e-governance, e-services (e-health, e-education, e-libraries, etc.), e-commerce, and e-democracy (web elections, the Russian Public Initiative). According to the United Nations E-Government Survey 2012: E-Government for the People, Russia became one of the seven emerging leaders in e-government development, took 9th place in the rating of e-government development among the most populous countries, and ranked 8th among the top e-participation leaders, after Norway, Sweden, and Chile. Advancing 32 positions in the world rankings, the Russian Federation became the leader in e-government in Eastern Europe; the evolution of ICT in the Russian Federation raised Russia to 27th place in the e-government development index.
Saudi Arabia
In 2015, the Ministry of Interior of Saudi Arabia launched the e-service application known as Absher. The application allows the people of the Kingdom to access more than 279 different government services from their smartphones, without the need to queue or contend with bureaucratic inefficiencies.
Some e-services that can be completed by way of the application include:
Passport Services
Traffic Services
Expatriate Affairs Services
Civil Affairs Services
Authorizations
General Directorate of Prisons
Public Prosecution
Public Security
MOI Services (Ministry of Interior)
Ministry of Hajj
General Services
Information Services
Another application that has been launched is Tawakkalna. This application was created by the Saudi Data and Artificial Intelligence Authority (SDAIA) to help the government counter Covid-19. The application was initially created to issue permits to those who were required to commute to work during lockdown. It is now used for travel; for entering commercial buildings, hospitals, and schools within the Kingdom; for setting vaccine appointments; and for Covid-19 tracing.
Sri Lanka
Sri Lanka has taken some initial steps to provide the benefits of e-government to its citizens.
Thailand
To implement the principles of e-government, the Ministry of Information and Communication Technology of Thailand developed a plan for creating a modern e-services system during 2009–2014.
The next stage was a five-year digital government project, which began in 2016 and was scheduled for completion in 2021. The project envisages that within five years, more than 80% of Thai government agencies will use electronic documents for identification.
The Unified State Portal of e-Government of Thailand was developed by the Ministry of Information and Communication Technology in 2008.
In 2018, Thailand ranked 73rd in the UN e-government ranking.
Turkey
E-government in Turkey is the use of digital technology to improve the efficiency and effectiveness of public services in the country.
As of December 2020, 700 government agencies offered 5,338 applications to 51,757,237 users. The mobile application offers 2,850 services.
Ukraine
The main coordinating government body in matters of e-government is the Ministry of Digital Transformation, established in 2019. In 2020, it launched the Diia app and web portal, which allow Ukrainians to use various kinds of documents (including ID cards and passports) via their smartphones and to access various government services, with plans to make all government services available this way by 2023.
United Arab Emirates
In the United Arab Emirates, the Emirates eGovernment initiative is responsible for e-government operations.
UK
Transformational Government: Enabled by Technology, 2005: "The future of public services has to use technology to give citizens choice, with personalised services designed around their needs not the needs of the provider"
North America
Canada
The current Clerk of the Privy Council, the head of the federal public service, has made workplace renewal a pillar of overall public service renewal. The key to workplace renewal is the adoption of collaborative networked tools. An example of such a tool is GCPEDIA, a wiki platform for federal public servants. Other tools include GCconnex, a social networking tool, and GCforums, a discussion board system.
Report of the Auditor General of Canada: Chapter 1 Information Technology: Government On-Line 2003: "One of the key principles of Government On-Line is that programs and services will be transformed to reflect the needs and expectations of clients and citizens. From the government’s perspective, the overall objective of the GOL initiative is full service transformation – to fundamentally change the way the government operates and to deliver better services to Canadians."
United States
The election of Barack Obama as President of the United States became associated with the effective use of Internet technologies during his campaign and in the implementation of his new administration in 2009. On January 21, 2009, the President signed one of his first memorandums – the Memorandum for the Heads of Executive Departments and Agencies on Transparency and Open Government. The memo called for an unprecedented level of openness in government, asking agencies to "ensure the public trust and establish a system of transparency, public participation, and collaboration." The memo further "directs the Chief Technology Officer, in coordination with the Director of the Office of Management and Budget (OMB) and the Administrator of General Services (GSA), to coordinate the development by appropriate executive departments and agencies [and] to take specific actions implementing the principles set forth in the memorandum."
President Obama's memorandum centered on the idea of increasing transparency across federal departments and agencies. By enabling public websites like recovery.gov and data.gov to distribute more information to the American population, the administration believes it will gain greater citizen participation.
In 2009 the U.S. federal government launched Data.gov to make more government data available to the public. With data from Data.Gov, the public can build apps, websites, and mashups. Although "Gov 2.0", as a concept and as a term, had been in existence since the mid-2000s, it was the launch of Data.gov that made it "go viral".
In August 2009 the City of San Francisco launched DataSF.org with more than a hundred datasets. Just weeks after the DataSF.org launch, new apps and websites were developed. Using data feeds available on DataSF.org, civic-minded developers built programs to display public transportation arrival and departure times, where to recycle hazardous materials, and crime patterns. Since the launch of DataSF.org there have been more than seventy apps created with San Francisco's data.
In March 2009, former San Francisco Mayor Gavin Newsom was at Twitter headquarters for a conversation about technology in government. During the town hall, Newsom received a tweet about a pothole. He turned to Twitter co-founders Biz Stone and Evan Williams and proposed finding a way for people to tweet their service requests directly to San Francisco's 311 customer service center. Three months later, San Francisco launched the first Twitter 311 service, called @SF311, allowing residents to tweet, text, and send photos of potholes and other requests directly to the city. Working with Twitter and using the CoTweet platform, the city turned @SF311 into reality. The software procurement process for something like this would normally have taken months, but in this case it took less than three months, and @SF311 is saving the city money in call center costs. Earlier, the United States Congress had passed the E-Government Act of 2002 to promote better use of the Internet and information technology, and to improve government services for citizens, internal government operations, and opportunities for citizen participation in government.
The Presidential Innovation Fellows program, in which "teams of government experts and private-sector doers take a user-centric approach to issues at the intersection of people, processes, products, and policy to achieve lasting impact", launched in 2012. 18F, a new digital government delivery service, was formed in early 2014, and the United States Digital Service (USDS) was launched later in 2014.
South America
Brazil
The goal defined in the Digital Government Strategy is to achieve full digitization of government services by the end of 2022.
"The main objective of the digital government is to bring citizens closer to the State. Technologies allow us to see each Brazilian better, including those who feel excluded, to direct public policies in a much more agile and efficient way and to reach mainly those who need it most ", emphasizes the Digital Government secretary of the Ministry of Economy, Luís Felipe Monteiro .
International initiatives
The early pioneering work by some governments is now being picked up by a range of global organizations which offer support to governments in moving to a transformational government approach. For example:
The World Bank has set up an eTransform Initiative (ETI) with support from global IT partners such as Gemalto, IBM, L-1 Identity Solutions, Microsoft and Pfizer. "The eTransform Initiative is about tapping information technology, expertise and experiences", said Mohsen Khalil, Director of the World Bank Group's Global Information and Communication Technologies Department. "Government transformation is about change management facilitated by technology. This initiative will facilitate the exchange of lessons and experiences among various governments and industry players, to maximize impact and lower risks of ICT-enabled government transformation."
A number of private sector organizations working in this area have published white papers which pull together global best practices on government transformation.
OASIS launched (September 2010) a new Technical Committee tasked with producing a new global best-practice standard, the Transformational Government Framework. The Framework is expressed as a number of "Pattern Languages", each providing a detailed set of guidance notes and conformance clauses on how to deliver the required changes in practice.
See also
AI mayor
Center for Electronic Governance at UNU-IIST
Collaborative e-democracy
Cyberocracy
Digital era governance
Digital 5
Digital Government Society of North America
E-democracy
E-Government Unit
E-participation
E-procurement
E-government factsheets
eGovernment in Europe
Electronic voting
eRulemaking
Government by algorithm
Information society
International Conference on Theory and Practice of Electronic Governance
Issue tracking systems in government
National Center for Digital Governance
Online consultation
Online petition
Online deliberation
Online volunteering
Open-source governance
Open government
Project Cybersyn
Teleadministration
References
Further reading
Jane Fountain, Building the Virtual State: Information Technology and Institutional Change (2001)
Encyclopedia of Digital Government. Edited by Ari-Veikko Anttiroiko and Matti Mälkiä. Idea Group Reference.
Cordella, A (2007), E-government: towards the e-bureaucratic form?, Journal of Information Technology, 22, 265–274.
Foundations of e-Government. Edited by Ashok Agarwal and V Ventaka Ramana. ICEG'07 5th International Conference on e-Governance
West, Darrell. State and Federal Electronic Government in the United States. The Brookings Institution. 2008-08-26. Retrieved on 2008-09-16.
West, Darrell. Improving Technology Utilization in Electronic Government Around the World. The Brookings Institution. 2008-08-26. Retrieved on 2008-09-16.
von Lucke, Jörn; Reinermann, Heinrich (2000). Speyer Definition of Electronic Government. Forschungsinstitut für öffentliche Verwaltung, Speyer. Retrieved on 2020-07-01.
Ríos, Mauro D. In search of a definition of e-government (in Spanish). NovaGob. 2014.
OASIS Transformational Government Framework Technical Committee (September 2010)
ACT-IAC (October 2010)
External links
2020 E-Government Development Index country listings
Politics and technology
Public administration
Public services |
18717593 | https://en.wikipedia.org/wiki/Accel-KKR | Accel-KKR | Accel-KKR is an American technology-focused private equity firm with over $10 billion in total assets under management. The firm invests primarily in middle-market software and technology-enabled services businesses, providing capital for buyouts and growth investments across a range of opportunities including recapitalizations, divisional carve-outs, and going-private transactions. The company has offices in Menlo Park, California, (headquarters), Atlanta, Georgia (opened in 2006), Mexico City (opened in 2018), and London (opened in 2013).
History
The firm was founded in February 2000 as a partnership between the venture capital firm Accel Partners and Kohlberg Kravis Roberts, one of the oldest and largest leveraged buyout firms. Since the mid-2000s, the firm has operated independently of its original backers. Today the firm is run by Co-Managing Partners Tom Barnds and Rob Palumbo with headquarters in the San Francisco Bay Area.
Accel-KKR’s second fund closed in 2006 with over $300 million in capital commitments and its third fund closed in 2008 with $600 million, a third over its $450 million target. In 2012, the firm closed its fourth fund with $750 million, exceeding its target of $700 million.
In 2015, Accel-KKR raised $1.3 billion for its fifth buyout fund, including $100 million from the firm’s general partners, making it the largest general partner commitment to date. The fund also drew a third of its capital from outside the US.
Since the firm’s inception through 2015, Accel-KKR had reported a 32% annualized return on investment.
In May 2017, Goldman Sachs Asset Management's Petershill unit took a minority stake of less than 10% in Accel-KKR.
In 2019 and 2020, Inc. named Accel-KKR one of the “50 Best Private Equity Firms for Entrepreneurs” while the American Chamber of Commerce in New Zealand recognized the firm as “Investor of the Year from the USA” in 2019 for its investment in Seequent Ltd. Accel-KKR was also named the "2019 GP-Led Deal of The Year in the Americas" by Private Equity International in recognition of Accel-KKR Capital Partners CV III fund, which closed in September 2019.
In January 2020, Accel-KKR closed on $276.7 million in commitments for Accel-KKR Credit Partners LP – Series 1, its private lending vehicle, which targets maturing software startups whose collateral may fall short of what bank lenders require for loans. Also in 2020, the firm closed on its first fund intended specifically to invest in smaller companies. The $640 million Accel-KKR Emerging Buyout Partners LP fund is focused on software and technology companies with enterprise values of $70 million or less.
As of 2020, Accel-KKR has raised over $10 billion of investor commitments and invested in or acquired over 250 software and technology-enabled services companies, with the majority of such companies in the United States.
Of those, Accel-KKR has completed 79 international transactions, including 48 in Europe, 11 in Canada, 12 in Australia/NZ/Asia-Pacific, 6 in Latin America and 2 in South Africa.
Current Portfolio Companies
Accel-KKR’s current portfolio companies include:
Abrigo
Cendyn
Cielo
ClickDimensions
Delta Data
ESG
ESO Solutions
FastSpring
FM Systems
Forcura
HumanForce
IMED
IntegriChain
ITC
Kerridge Commercial Systems
Kimble Applications
Lemontech
Navtor
OrthoFi
Paymentus
Partnerize
Pegasus
Peppermint Technology
Reapit
Recurly
SafeGuard
Salsa
Sandata
Seequent
Siigo
Smart Communications
SugarCRM
Surgical Information Systems
Team Software
TELCOR
Tools Group
TravelTripper
TrueCommerce
Vistex
Vitu
Selected Previous Investments
Previous investments from which Accel-KKR has since exited include:
Vyne sold to TJC
Abila sold to Community Brands for over $280 MM in 2017
Cielo sold to Permira in 2019
One.com sold to Cinven in 2018
RiseSmart sold to Randstad for $100 MM in 2015
Zinc Ahead sold to Veeva for $130 MM in 2015
On Center Software sold to Roper Industries for $157 MM in 2015
Acumatica sold to EQT Partners in 2019
PrismHR sold to Summit Partners in 2017
Clavis Insight sold to Ascential plc in 2017
Applied Predictive Technologies sold to MasterCard for $600 MM in 2015
iTradeNetwork sold to Roper Industries for $525 MM in 2010
Endurance International Group sold to Warburg Pincus for $1 billion in 2011
Saber Corporation sold to Electronic Data Systems for $463 MM in 2007
IntrinsiQ sold to AmerisourceBergen for $35 MM in 2011
CRS Retail Systems sold to Epicor for $121 MM in 2005
Savista sold in two transactions to Torex and Accenture for $100 MM in 2006
Systems & Software, Inc. sold to Constellation Software in 2007
Alias Systems Corporation sold to Autodesk for $197 MM in 2006
Kana Software sold to Verint Systems for $514 MM in 2014
N-able sold to SolarWinds for $127 MM in 2013
HighJump sold to Korber for $725 MM in 2017
Jaggaer sold to Cinven for over $1.5 billion in 2019
Episerver sold to Insight Venture Partners for $1.16 billion in 2018
Model N, which went public on the NYSE under the ticker symbol MODN in 2013
References
Further reading
KKR and Accel Open Atlanta Office. New York Times, July 31, 2006
External links
Accel-KKR (company website)
Private equity firms of the United States
Venture capital firms of the United States
Financial services companies established in 2000
Kohlberg Kravis Roberts
Companies based in Menlo Park, California
fi:Accel Partners |
12600716 | https://en.wikipedia.org/wiki/Rage%20%28video%20game%29 | Rage (video game) | Rage is a first-person shooter video game developed by id Software and published by Bethesda Softworks, released in November 2010 for iOS, in October 2011 for Microsoft Windows, the PlayStation 3, and the Xbox 360, and in February 2012 for OS X. It was first shown as a tech demo at the 2007 Apple Worldwide Developers Conference and was announced at the QuakeCon. Rage uses id Software's id Tech 5 game engine and was the final game released by the company under the supervision of founder John Carmack.
Rage is set in a post-apocalyptic near future, following the impact of the asteroid 99942 Apophis on Earth. Players take control of Nicholas Raine, a soldier put into hibernation in an underground shelter who emerges into the wasteland a century later and finds himself wanted by an oppressive organization known as The Authority. The game has been described as similar to the movie Mad Max 2 and to video games such as Fallout and Borderlands.
Rage received mainly positive reviews, with reviewers praising the game's combat mechanics, gameplay and graphics while criticizing the lack of story, characters, and direction. The iOS version, titled Rage: Mutant Bash TV, was released in 2010. A sequel, Rage 2, was released on May 14, 2019.
Gameplay
The game primarily consists of first-person shooter and driving segments, with the player using their vehicle to explore the world and travel between missions.
Combat is undertaken from a first-person perspective; the player is armed with a variety of upgradeable firearms, as well as a crossbow, and boomerang-like weapons called "wingsticks" which can be used for stealthy attacks. There are several types of ammunition available for each weapon, to allow the player to further customize their play style. As an example, the crossbow's primary ammunition is metal bolts, but it also can shoot electrified bolts, explosive bolts, and more. There are two standard varieties of enemies: enemies with firearms which will take cover and exchange fire with the player, and melee enemies that will charge the player and attack with melee weapons.
There are a variety of vehicular events for the player to take part in, including races and checkpoint rallies. Racing events may or may not have opponents, and some are armed races while others are not. Players can augment their cars with various items and upgrades gained by completing events. Rage features some role-playing elements, including an inventory system, a looting system, and different types of ammo. Players have the option to customize their weapons and vehicles, as well as build a wide assortment of items using collected recipes. Vehicles can be used for racing and for traveling from one location to another, with occasional attacks from enemy vehicles. There are side missions and several other minor exploratory elements.
Multiplayer
Rage has two multiplayer modes: "Road Rage" and "Wasteland Legends". In Road Rage, up to four players compete in a free-for-all match that takes place in an arena designed to make use of the vehicles. The goal is to collect rally points that appear around the arena while killing one's opponents and stealing their points. Wasteland Legends is a series of two-player co-op missions based on stories heard throughout the single-player campaign. There are a total of nine missions in this game type.
Plot
On August 23, 2029, the asteroid 99942 Apophis collides with Earth, destroying human civilization and turning the world into a wasteland. Survivors come together to form settlements around oases and other practical or habitable locations, while the wastes are plagued by various bandit clans and by mutants, who attack all normal humans in voracious hordes.
In 2135, former U.S. Marine Lieutenant Nicholas Raine emerges from an underground shelter called an Ark, 106 years after being put into stasis. These underground shelters are the direct result of the Eden Project, a massive international undertaking in which hundreds of Arks, containing cryogenic pods, were sealed under the surface of the Earth to preserve enough of the human population to rebuild civilization after the asteroid collision. The Eden Project was far less successful than hoped, as Raine's Ark, in particular, was heavily damaged, with all of its other residents dead and equipment destroyed, and he wakes up alone with no specific goal in mind.
Raine reaches the surface, where he is immediately attacked by bandits but is saved by Dan Hagar (voiced by actor John Goodman), a local wasteland settler who brings Raine to his settlement. Hagar informs him that a powerful, technologically advanced organization known as the Authority, which considers itself the one true government of the wasteland, is hunting for Ark survivors for an unknown purpose. Raine briefly aids Hagar's settlement and others in the local area by completing a few small jobs, and during this time it is revealed that the nanotrites injected into Raine's blood before he was sent into hibernation have granted him superhuman abilities to help him survive the harsh environment, but have made him valuable to the Authority. Hagar believes Raine's continued presence is too dangerous for the settlement and sends him to the nearby town of Wellspring instead.
In Wellspring, Raine helps the town with various problems such as fighting off bandits, mutants, and ferrying supplies. Eventually, he comes into contact with Dr. Kvasir, an elderly scientist who previously worked for the Authority, who tells Raine about the inhumane experiments they were responsible for, such as the creation of the mutants. Kvasir puts him into contact with the Resistance, an armed anti-Authority group, where he is tasked with rescuing their leader, Captain Marshall, who has recently been imprisoned by the Authority. Raine again attracts attention from the Authority, forcing him to flee Wellspring and join the Resistance at their headquarters in Subway Town, where he earns the trust of the town and its tyrannical mayor, Redstone. He also learns what had happened in the past century from Captain Marshall, who is an Ark survivor himself. General Martin Cross, who was in charge of the Eden Project, sabotaged the operation shortly before 99942 Apophis struck the Earth by ensuring that only the Arks with people loyal to him were opened on schedule, with this first wave of Ark survivors eventually forming the Authority. The remaining Arks were supposed to stay underground forever in hibernation, including Raine's Ark, which surfaced only because its systems were damaged and it automatically rose to protect any surviving inhabitants.
With the Authority beginning to forcefully expand its influence on the wasteland settlements, the Resistance is forced to act with the help of Raine who can recover data that shows the location of every Ark on the planet. Captain Marshall plans to use this data to activate all the Arks and form an army that can defeat the Authority, but the only way to do this is to transmit the data from Capital Prime, the main headquarters for the Authority. Alone, Raine fights his way through Capital Prime to transmit the Ark activation code, and the game concludes with all the remaining Arks simultaneously becoming active and surfacing.
Development
According to design director Matt Hooper, the game's origins were in the concept of muscle cars within a desert setting, which was expanded upon by the creation of a post-apocalyptic world. A team of around 60 core developers worked on the title, which was intended to be the first release of an ongoing franchise.
Rage was intended to have a 'Teen' rating but ended up receiving an 'M' instead. The Windows PC and Xbox 360 versions ship on three dual-layer DVDs, while the PlayStation 3 version ships on one Blu-ray Disc. John Carmack has revealed that an uncompressed software build of Rage is one terabyte in size. The PS3, Windows and OS X versions use OpenGL as the graphics API. While a Linux version was speculated, there has been no confirmation of an official build. Timothee Besset had stated that he would try to make Linux builds for Rage much as he had done in the past, with a port expected sometime in 2012, but he resigned his position at id Software. John Carmack has since revealed on Twitter that there are "no plans for a native Linux client". However, the game is playable on Linux via the Wine compatibility layer.
Id announced its decision to partner with Electronic Arts for publication of Rage. In March 2009, the company's CEO Todd Hollenshead said "No, it won't be out this year," when asked about a possible release date. A trailer and several screenshots were released on August 13, 2009, at QuakeCon where it showcased various locations, racing and first-person gameplay, and a brief insight into the storyline of the game. During Gamescom in Cologne, Germany, Electronic Arts released four new screenshots for Rage.
In 2009, John Carmack stated id Software was not planning to support dedicated servers for the Windows version, and instead would use a matchmaking system like console games. ZeniMax Media, which had acquired developer id Software in June 2009, announced that it had picked up the publishing rights to Rage and that EA would not be involved in the sales or marketing of the title. The announcement also noted that the development of Rage had not been affected by the new deal. Creative director Tim Willits confirmed that the game would not be released in 2010 and would instead launch in 2011. Willits later accepted awards from IGN Media for "Best Game" and "Best First Person Shooter" at E3. Additionally, the game was awarded Best First-Person Shooter, Best New IP, Best Xbox 360, Windows, and PlayStation 3 game, as well as Game of the Show of E3 2010, by GameTrailers.
In his keynote speech at QuakeCon 2010 on August 12, 2010, Carmack announced that id was developing a Rage-related game for Apple's iOS. He later described the mobile Rage as a "little slice of Rage ... [about] 'Mutant Bash TV', a post-apocalyptic combat game show in the Rage wasteland", and separately hinted that he might try to port Rage Mobile to Android, although he later stated no id titles would be coming to Android due to lack of financial viability.
At QuakeCon 2011, Carmack offered many technical insights into the development and the differences between the three main platforms (Windows, Xbox 360, PlayStation 3), noting that it was not easy to develop an engine optimized enough to run smoothly on consoles while still producing the best-looking game artistically on those platforms. He also affirmed that the PC platform at the time was as much as 10 times faster than the then-current generation of gaming consoles, though this did not translate into 10x the performance because of the extra layers of abstraction found in PC-compatible operating systems. On September 16, 2011, Bethesda announced Rage had gone gold.
Marketing and release
Bethesda vice president of public relations Pete Hines initially said that a demo of the game was not likely, although one was later released on the Xbox Live Marketplace. Those who pre-ordered the game received an automatic upgrade to the Anarchy Edition of the game, which included four exclusive in-game items. Tim Willits claimed modding tools would be available a couple of days after release, although this proved not to be true.
Rage appeared in the fourth-season episodes "Problem Dog" and "Hermanos" of Breaking Bad, both broadcast in 2011, as a video game that Jesse Pinkman plays to try to shake off killing Gale Boetticher. The inclusion had come from marketing opportunity discussions between id and AMC, both being fans of each other's work. Looking for video game material to include, id suggested the use of Rage. id took mostly pre-existing game areas (the Well area), but worked with the production of the show to include allusions and references to Gale's murder to tie into Jesse's narrative within the game, such as Gale's name written on walls. From there, they provided a good deal of pre-recorded game footage to AMC to work with. While the show depicts Jesse playing Rage with a light gun, this was not a feature of the finished game. In return, id included several Breaking Bad references in Rage on release, such as a version of the acrylic cube containing Tuco's teeth grill that Hank Schrader receives as a reward for killing him, from the episode "Bit by a Dead Bee".
A viral campaign featured Los Angeles Clippers power forward Blake Griffin performing stunts, such as dunking over a tiger, to impress the developers and get himself into the game.
The game was released on October 4, 2011. On February 2, 2012, Rage was released for OS X through digital distribution, lacking multiplayer content.
Editions
Rage was available for pre-order in three retail versions: the Anarchy Edition and two region-dependent Collector's Editions. Those who pre-ordered the standard edition of Rage automatically got their copy upgraded to Anarchy Edition. Two Collector's Editions of the game were also available; one through EB Games in Australia, the other through Game and Gamestation in the UK.
The Anarchy Edition adds Crimson Elite armor, a double-barreled shotgun, "fists of rage" (an upgrade that attaches metal blades to the character's gloves for use in melee combat), and a buggy called the Rat Rod.
The Australian version of the Collector's Edition (officially called Rage Exclusive EB Games Edition) contains everything from the Anarchy Edition, an exclusive Wingstick prop, six exclusive Rage badges and an exclusive poster of the game.
The British version of the Collector's Edition (officially called Rage Collector's Pack) also contains all the content of the Anarchy Edition, the three-issue Dark Horse comics based on Rage and a 'Making Of' DVD.
The version released for Mac OS X was called Rage: Campaign Edition. This version contains all content of the Anarchy Edition and the Wasteland Sewer Missions DLC pack. Multiplayer is not present in this version; only the single-player campaign is available, hence the name of the edition.
Mods
The modding tools for Rage were originally going to be released with the game itself but instead were released on February 8, 2013, on Steam. Titled RAGE Tool Kit or simply id Studio, the tools were used to create the game as well as the DLC.
Downloadable content
Downloadable content (DLC) was planned for all platforms. The Wasteland Sewer Missions DLC pack, an integral part of the Campaign Edition, was released on October 4, 2011, providing access to the sewer systems. A code for the DLC was given away as a pre-order bonus with the Anarchy Edition, allowing early and free access to the DLC. The player character is tasked by the people of various cities with ridding their sewers of mutant infestation.
The Anarchy Edition add-on DLC was released on February 15, 2012, containing all the content of the Anarchy Edition except the free Wasteland Sewer Missions Pack DLC code. The package upgrades the standard edition of Rage to the Anarchy Edition.
A new Rage DLC release called The Scorchers was released on December 18, 2012, for Windows, Xbox 360 and PlayStation 3. The plot focused on 'The Scorchers', a bandit clan cut from the final release of the main game and previously encountered only in vehicle combat missions. The Scorchers are hatching a plan to end all life by destroying the Wasteland, and it is up to the main character to save humanity. The DLC added an 'Ultra Nightmare' difficulty level and the ability to keep playing even after the main questline is completed. The pack features new characters, six new areas, new minigames, new enemies and a new weapon called the Nailgun, which features three distinct ammunition types. The DLC also fixes some bugs in the game.
Related media
In November 2010, id Software released Rage: Mutant Bash TV for iOS devices as a demo showcasing its gameplay. An HD version called Rage HD was released for all iOS devices. John Carmack hinted that he intended to release another iPhone app based on the Rage universe, focusing on the racing aspect of the game.
In March 2011, Bethesda and Dark Horse Comics announced a three-issue comic book series based on Rage. The original miniseries was written by Arvid Nelson and penciled by Andrea Mutti, with cover art by Glenn Fabry. The comic series, developed with the direct participation of Rage's creative director, Tim Willits, presents a new twist on the post-apocalyptic near future as one woman discovers that the survival of humankind does not necessarily mean the survival of humanity. The Earth has been devastated by a collision with an asteroid, with a tiny fraction of the population surviving in life-sustaining Arks buried deep below its surface. Those who survive emerge to find a wasteland controlled by a global military dictatorship called the Authority. But a rescued scientist learns that the Authority has lied to her and the other survivors about how this new world came to be.
That same month, Bethesda announced that they would team up with Del Rey Books to create a novel based on Rage. The novel was written by Matthew J. Costello, who also wrote the video game's script. It was released on August 30, 2011.
Reception
Pre-release
The game received a great deal of recognition before its release. It won the Game Critics Awards of E3 2010 for "Best Console Game", and "Best Action Game", along with the "Special Commendation for Graphics". IGN awarded it their "Best Overall Game" and "Best Shooter" in their E3 2010 awards. It also won many of GameTrailers' E3 2010 awards, including "Best New IP", "Best First Person Shooter", "Best PS3 Game", "Best Xbox 360 Game", "Best PC Game", and "Game of the Show".
Post-release
Rage received generally positive reviews on all platforms except the iOS version, which received average reviews, according to the aggregate review site Metacritic. The game received praise for its graphics, gameplay and combat mechanics, and criticism mostly aimed towards the game's story and poor out-of-the-box PC compatibility.
EGMNow praised the Xbox 360 version and stated it features impressive visuals, brutal and satisfying combat, fluid animations and advanced enemy AI, many entertaining side-missions, and an addictive multiplayer component. The one complaint they had with Rage was that the final boss fight was unsatisfying compared to the rest of the game's impressive combat scenarios. GameZone gave the same console version 8.5 out of 10 and called it "a great experience that's coupled with some intensely fun gunplay and some incredibly impressive graphics. It's not quite the Borderlands meets Fallout experience that gamers were expecting. It isn't very long, and it does skimp out on character development, but it focuses more on what id knows best--shooting things in the face. This is one post-apocalyptic wasteland that you'll definitely want to venture into." Ars Technica gave a more negative review of the Xbox 360 version, criticizing lack of story, undeveloped characters, uninteresting quests and a "broken save system" (autosave checkpoints being too far apart, forcing frequent manual saves which are slow on the Xbox 360), while acknowledging the quality of the visuals. Edge gave it seven out of ten and called it "a stunningly rendered FPS, but one that seems caught between a desire to innovate and the desire to be true to the template its creators defined."
Game Informer said that "while most people will rave about Rage's technology, this game's most impressive component is its gunplay ... the mutated hostiles of the wastes ... crawl out of the woodwork, scamper along walls, and create a sense of absolute terror", and "the challenge posed to the player is to put them down quickly or pray that every close range shotgun blast takes a large chunk of flesh." The soundtrack was described as "appropriately moody", and the animation system as one of the most "impressive" ever made. However, the review also argued that "the driving sections are no more than optional diversions" and "the lack of content in the overworld is disappointing". In conclusion, the story and overworld were described as "dated", but the "pulse-pounding gunplay" was hailed as "a nice change of pace" that "stands out in a crowded market". IGN praised the game's graphics, calling them some of the best, but criticized the game's story and forgettable characters. In Japan, where the game was ported on October 6, 2011 (the same release date as Australia's), Famitsu gave the PS3 and Xbox 360 versions a score of two nines, one eight, and one nine for a total of 35 out of 40.
411Mania gave the PS3 version a score of eight out of ten and said it was "by no means a bad game. Id has a solid shooter with great graphics and solid controls. However, plenty of other games have done the same things now. Rage is still a good game and will tide shooter fans over until other shooters release later this year. Just don't expect anything groundbreaking here." Digital Spy gave the same console version four stars out of five and called it "a triumphant mix of vintage shooter mechanics and high-octane driving segments. The end result is a title that captures the essence of its genre-defining predecessors while offering fans something new. id's Tech 5 engine ensures that this is the studio's best-looking release to date, and the sheer volume of features on offer make it one of the most rewarding." Softpedia gave the Xbox 360 version a score of four stars out of five and called it "a must-have, must-play game." The Digital Fix gave the same console version eight out of ten and called it "a sound purchase". The Guardian gave the same console version a similar score of four stars out of five and called it "a decidedly mixed affair. It isn't perfect, some of it feels quite antiquated, and it is by no means the high-water moment in the FPS genre that Doom and Quake were in their day. But it is still a very eye-catching and incredibly fun shooter, and in its best moments, it can't be matched for pure entertainment value."
However, The Daily Telegraph gave the Xbox 360 version three-and-a-half stars out of five and called it "a game that would have benefitted from being streamlined, with additional FPS levels replacing the awkward driving. It should have been an id game. Instead, it occupies this weird halfway-house between Borderlands, MotorStorm, and Doom, not quite an RPG, not quite a racer and not quite an FPS." The A.V. Club gave the same console version a C+ and said that its "huge swaths ... are lifted wholesale from Fallout 3, Borderlands, and BioShock, making Rage forever veer between loving homage and blatant plagiarism. In the end, Rage is an insecure, overly busy game that tries too hard to be too many things, and winds up with a greasy sheen of flop-sweat on its brow."
Accolades
Rage was also recognized in several 2011 end-of-year award ceremonies. It was nominated for "Best Graphics" and "Best New Franchise" in Xbox Achievements' Game of the Year 2011 Awards. GameTrailers nominated it for "Best First Person Shooter" and "Best New IP". At the 2011 Spike Video Game Awards, it was nominated for "Best Graphics" and "Best Shooter". Separately, technical issues with the PC version led to articles explaining to users how to "fix" Rage's problems. AMD released drivers that attempted to fix some of the issues, and on October 10, 2011, patches for the Windows version were released which added various graphical options to the game and fixed some driver-related graphical issues.
Sequel
A sequel, Rage 2, which is a joint development between id Software and Avalanche Studios, was released on May 14, 2019, for Microsoft Windows, PlayStation 4 and Xbox One.
Notes
References
External links
2010 video games
Aspyr games
Bethesda Softworks games
First-person adventure games
First-person shooters
Games for Windows certified games
Id Software games
Id Tech games
IOS games
Impact event video games
Multiplayer and single-player video games
Fiction about near-Earth asteroids
Open-world video games
MacOS games
PlayStation 3 games
Post-apocalyptic video games
Vehicular combat games
Video games developed in the United States
Video games scored by Rod Abernethy
Video games set in the 22nd century
Windows games
Xbox 360 games
Cancelled Linux games |
1186137 | https://en.wikipedia.org/wiki/Geode%20%28processor%29 | Geode (processor) | Geode was a series of x86-compatible system-on-a-chip microprocessors and I/O companions produced by AMD, targeted at the embedded computing market.
The series was originally launched by National Semiconductor as the Geode family in 1999. The original Geode processor core itself is derived from the Cyrix MediaGX platform, which was acquired in National's merger with Cyrix in 1997. AMD bought the Geode business from National in August 2003 to augment its existing line of embedded x86 processor products. AMD expanded the Geode series to two classes of processor: the MediaGX-derived Geode GX and LX, and the modern Athlon-derived Geode NX.
Geode processors are optimized for low power consumption and low cost while still remaining compatible with software written for the x86 platform. The MediaGX-derived processors lack modern features such as SSE and a large on-die L1 cache but these are offered on the more recent Athlon-derived Geode NX. Geode processors tightly integrate some of the functions normally provided by a separate chipset, such as the northbridge. Whilst the processor family is best suited for thin client, set top box and embedded computing applications, it can be found in unusual applications such as the Nao robot and the Win Enterprise IP-PBX.
The One Laptop per Child project used the GX series Geode processor in OLPC XO-1 prototypes, but moved to the Geode LX for production. The Linutop (rebranded Artec ThinCan DBE61C or rebranded FIC ION603A) is also based on the Geode LX. 3Com Audrey was powered by a 200 MHz Geode GX1.
The SCxxxx range of Geode devices are a single-chip version, comparable to the SiS 552, VIA CoreFusion or Intel's Tolapai, which integrate the CPU, memory controller, graphics and I/O devices into one package. Single processor boards based on these processors are manufactured by Artec Group, PC Engines (WRAP), Soekris, and Win Enterprises.
AMD discontinued all Geode processors in 2019.
Features
CPU features table
National Semiconductor Geode
Geode GXm
Rebranded Cyrix MediaGXm; returns "CyrixInstead" as the CPUID vendor string (see the sketch after this list).
0.35 μm four-layer metal CMOS
MMX instructions
Core speed: 180, 200, 233, 266 MHz
3.3 V I/O, 2.9 V core
16 KB four-way set associative write-back unified (I&D) L1 cache, 2 or 4 KB of which can be reserved as I/O scratchpad RAM for use by the integrated graphics core (e.g. for bitblits)
30-33 MHz PCI bus interconnect with CPU bus
64-bit SDRAM interface
Fully static design
CS5530 companion chip (implements sound and video functions)
VSA architecture
1280×1024×8 or 1024×768×16 display
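As an illustrative aside (a minimal sketch, not from the original article): the vendor string noted in the first item above comes from CPUID leaf 0, whose result is packed into the EBX, EDX, and ECX registers. The following C program uses the __get_cpuid helper provided by GCC and Clang; on a GXm-family part it would be expected to print "CyrixInstead".

    /* Minimal sketch: print the x86 CPUID vendor string.
       Build with GCC or Clang on x86; a Geode GXm/GXLV/GX1 reports
       "CyrixInstead", reflecting its Cyrix MediaGX heritage. */
    #include <stdio.h>
    #include <string.h>
    #include <cpuid.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;
        char vendor[13];

        if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx)) {
            fprintf(stderr, "CPUID not supported\n");
            return 1;
        }
        /* The 12-byte vendor string is packed into EBX, EDX, ECX. */
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);
        vendor[12] = '\0';

        printf("CPUID vendor: %s\n", vendor); /* e.g. "CyrixInstead" */
        return 0;
    }

Later National- and AMD-branded Geode generations report different vendor strings, so software probing only for "CyrixInstead" would not match the whole family.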
Geode GXLV
Die-shrunk GXm
0.25 μm four-layer metal CMOS
Core speed: 166, 180, 200, 233, 266 MHz
3.3 V I/O, 2.2, 2.5, 2.9 V core
Typical power: 1.0 W at 2.2 V/166 MHz, 2.5 W at 2.9 V/266 MHz
Geode GX1
Die-shrunk GXLV
0.18 μm four-layer metal CMOS
Core speed: 200, 233, 266, 300, 333 MHz
3.3 V I/O, 1.8, 2.0, 2.2 V core
Typical power: 0.8 W at 1.8 V/200 MHz, 1.4 W at 2.2 V/333 MHz
64-bit SDRAM interface, up to 111 MHz
CS5530A companion chip
60 Hz VGA refresh rate
National Semiconductor/AMD SC1100 is based on the Geode GX1 core and the CS5530 support chip.
Geode GX2
Announced by National Semiconductor Corporation in October 2001 at the Microprocessor Forum; first demonstrated at COMPUTEX Taipei in June 2002.
0.15 μm process technology
MMX and 3DNow! instructions
16 KB Instruction and 16 KB Data L1 cache
GeodeLink architecture, 6 GB/s on-chip bandwidth, up to 2 GB/s memory bandwidth
Integrated 64-bit PC133 SDRAM and DDR266 controller
Clockrate: 266, 333, and 400 MHz
33 MHz PCI bus interconnect with CPU bus
3 PCI masters supported
1600×1200 24-bit display with video scaling
CRT DACs and a UMA DSTN/TFT controller.
Geode CS5535 or CS5536 companion chip
Geode SCxx00 Series
Developed by National Semiconductor's Tel Aviv design center (NSTA), based on IP from Longmont and other sources.
Applications:
The SC3200 was used in the Tatung TWN-5213 CU.
AMD Geode
After acquiring the Geode business, AMD introduced the Geode GX series, a re-branding of the National Semiconductor GX2. This was quickly followed by the Geode LX, running at up to 667 MHz. The LX brought many improvements, such as higher-speed DDR support, a re-designed instruction pipeline, and a more powerful display controller. The upgrade from the CS5535 I/O companion to the CS5536 brought higher-speed USB.
Geode GX and LX processors are typically found in devices such as thin clients and industrial control systems. However, they have come under competitive pressure from VIA on the x86 side, and ARM processors from various vendors taking much of the low-end business.
Because of the limited performance, albeit high performance-per-watt, of the GX and LX core design, AMD introduced the Geode NX, an embedded version of the K7 Athlon processor. The Geode NX uses the Thoroughbred core and is quite similar to the Athlon XP-M parts that use this core. The Geode NX includes 256 KB of level 2 cache and runs fanless at up to 1 GHz in the NX1500@6W version. The NX2001 part runs at 1.8 GHz, the NX1750 at 1.4 GHz, and the NX1250 at 667 MHz.
The Geode NX, with its strong FPU, is particularly suited for embedded devices with graphical performance requirements, such as information kiosks and casino gaming machines, such as video slots.
However, it was reported that the Geode design team in Longmont, Colorado, had been closed, with 75 employees relocated to the new development facility in Fort Collins, Colorado. The Geode line of processors was expected to be updated less frequently following the closure of the Geode design center.
In 2009, comments by AMD indicated that there were no plans for any future microarchitecture upgrades to the processor and that there would be no successor; however, the processors remained available, with the planned availability of the Geode LX extending through 2015. In 2016, AMD updated its product roadmap, extending last-time-buy and shipment dates for the LX series to 2019. In early 2018, hardware manufacturer congatec announced an agreement with AMD for a further extension of the availability of congatec's Geode-based platforms.
Geode GX
Geode LX
Features:
Low power.
Full x86 compatibility.
Processor functional blocks:
CPU Core
GeodeLink Control Processor
GeodeLink Interface Units
GeodeLink Memory Controller
Graphics Processor
Display Controller
Video Processor
Video Input Port
GeodeLink PCI Bridge
Security Block
128-bit Advanced Encryption Standard (AES), CBC and ECB modes
True Random Number Generator
Specification:
Processor frequency up to 600 MHz (LX900), 500 MHz (LX800) and 433 MHz (LX700).
Power management: ACPI, lower power, wakeup on SMI/INTR.
64K Instruction / 64K Data L1 cache and 128K L2 cache
Split Instruction/Data cache/TLB.
DDR Memory 400 MHz (LX 800), 333 MHz (LX 700)
Integrated FPU with MMX and 3DNow!
9 GB/s internal GeodeLink Interface Unit (GLIU)
Simultaneous, high-res CRT and TFT (High and standard definition). VESA 1.1 and 2.0 VIP/VDA support
Manufactured on a 0.13 micrometre process
481-terminal PBGA (Plastic Ball grid array)
GeodeLink active hardware power management
Applications:
OLPC XO-1
Geode NX
Features:
7th-generation core (based on the Athlon XP-M).
Power management: AMD PowerNow!, ACPI 1.0b and ACPI 2.0.
3DNow!, MMX and SSE instruction sets
0.13 μm (130 nm) fabrication process
Pin compatibility between all NX family processors.
OS support: Linux, Windows CE, MS Windows XP.
Compatible with Socket A motherboards
Geode NX 2001
In 2007, a Geode NX 2001 model was on sale which was in fact a relabelled Athlon XP 2200+ Thoroughbred. The processors, with part numbers AANXA2001FKC3G or ANXA2001FKC3D, run at a 1.8 GHz clock speed with a 1.65 V core operating voltage and a power consumption of 62.8 W. There are no official references to this processor beyond officials explaining that the batch of CPUs was "being shipped to specific customers". It is clearly a desktop Athlon XP core rather than the Athlon XP-M-derived Thoroughbred cores of the other Geode NX CPUs, and thus lacks their embedded-application-specific thermal envelope, power consumption, and power management features. This kind of "badge engineering" is understandable when an OEM requests a desktop-class chip but wants to maintain brand recognition and association with the Geode NX CPUs in its products, and the end-product application does not require the advanced power and thermal optimization of the Geode NX: relabelling a part in a product catalog is practically free, and the processors share the same CPU socket (Socket A).
Chipsets for Geode
NSC Geode CS5530A Southbridge for Geode GX1.
NSC/AMD Geode CS5535 Southbridge for Geode GX(2) and Geode LX (USB 1.1). Integrates four USB ports, one ATA-66 UDMA controller, one Infrared communication port, one AC'97 controller, one SMBUS controller, one LPC port, as well as GPIO, Power Management, and legacy functional blocks.
AMD Geode CS5536 Southbridge for Geode GX and Geode LX (USB 2.0). Power consumption: 1.9 W (433 MHz) and 2.4 W (500 MHz). This chipset is also used on the Amy'05 PowerPC board.
Geode NX processors are "100 percent socket and chipset compatible" with AMD's Socket A Athlon XP processors: SIS741CX Northbridge and SIS 964 Southbridge, VIA KM400 Northbridge and VIA VT8235 Southbridge, VIA KM400A Northbridge and VIA VT8237R Southbridge and other Socket A chipsets.
See also
ALIX
Cyrix Cx5x86
Wireless Router Application Platform – WRAP
3Com Audrey
Koolu
Linutop
Netbook
MediaGX
Soekris
Sony eVilla
ThinCan
Virgin Webplayer
PC/104
Intel Atom
VIA Nano
References
External links
AMD pages for Geode
AMD Geode LX800 product information
AMD Geode LX Processors Data Book
National Semiconductor press release: Cyrix -> VIA, MediaGX -> Geode
National Semiconductor press release: Geode sold to AMD
CPU-INFO: Cyrix MediaGX, indepth processor history
Voltage and P State information for Geode NX
Quixant QX-10 Geode NX Motherboard for gaming applications
Soekris Engineering sells embedded boards with Geode processors
PC Engines ALIX another embedded board with Geode LX processor
CM-iGLX the smallest Computer On Module based on Geode LX
Fit-PC full-featured end-user product based on the CM-iGLX PC-on-module above
Artec Group manufactures products based on the Geode such as the ThinCan.
Troika NG PowerPC board using CS5536.
Linux on Geode
Installing Linux on Geode-based Single-Board Computers
DEvoSL - DSL on Evo T20 HowTo
Compaq Evo T20 Notes
Installing Linux onto the IBM Netvista N2200
Linux on CASIO Cassiopeia Fiva
Linux with Cyrix MediaGXm, NSC/AMD Geode GX
Linux Development on the Pepper Pad 3
Patching linux with OCF to hook into Geode's AES Security Block
Pus-pus is a compact Debian-based distribution to run onto the IBM Netvista N2200
Zeroshell router/firewall appliance
NetBSD on Geode
Wasabi Systems Certified NetBSD port and NAS software
Advanced Micro Devices x86 microprocessors
Embedded microprocessors
National Semiconductor microprocessors |
271390 | https://en.wikipedia.org/wiki/GEDCOM | GEDCOM | GEDCOM (an acronym standing for Genealogical Data Communication) is an open de facto specification for exchanging genealogical data between different genealogy software. GEDCOM was developed by The Church of Jesus Christ of Latter-day Saints (LDS Church) as an aid to genealogical research.
A GEDCOM file is plain text (usually either UTF-8 or ASCII) containing genealogical information about individuals, and metadata linking these records together. Most genealogy software supports importing from and exporting to GEDCOM format. However, some programs incorporate proprietary extensions to the format, such as the GEDCOM 5.5 EL (Extended Locations) specification, which are not always recognized by other genealogy programs.
While GEDCOM X and several other specifications have been suggested as replacements, the current 2019 version, based on the draft from 1999, remains the industry standard 20 years on.
GEDCOM model
GEDCOM uses a lineage-linked data model. This data model is based on the nuclear family and the individual. This contrasts with evidence-based models, where data is structured to reflect the supporting evidence. In the GEDCOM lineage-linked data model, all data is structured to reflect the believed reality, that is, actual (or hypothesized) nuclear families and individuals.
GEDCOM file structure
A GEDCOM file consists of a header section, records, and a trailer section. Within these sections, records represent people (INDI records), families (FAM records), sources of information (SOUR records), and other miscellaneous records, including notes. Every line of a GEDCOM file begins with a level number; all top-level records (HEAD, TRLR, SUBN, and each INDI, FAM, OBJE, NOTE, REPO, SOUR, and SUBM) begin with a line at level 0, while subordinate lines use positive integer levels.
Although it is theoretically possible to write a GEDCOM file by hand, the format was designed to be used with software and thus is not especially human-friendly. A GEDCOM validator that can be used to validate the structure of a GEDCOM file is included as part of the PhpGedView project, though it is not meant to be a standalone validator. For standalone validation, "The Windows GEDCOM Validator" or the older, unmaintained Gedcheck from the LDS Church can be used.
During 2001, the GEDCOM TestBook Project evaluated how well four popular genealogy programs conformed to the GEDCOM 5.5 standard using the Gedcheck program. Findings showed that a number of problems existed and that "The most commonly found fault leading to data loss was the failure to read the NOTE tag at all the possible levels at which it may appear." In 2005, the Genealogical Software Report Card (evaluated by Bill Mumford, who had participated in the original GEDCOM TestBook Project) again included testing against the GEDCOM 5.5 standard using the Gedcheck program.
Example
The following is a sample GEDCOM file.
The header (HEAD) includes the source program and version (Personal Ancestral File, 5.0), the GEDCOM version (5.5), the character encoding (ANSEL), and a link to information about the submitter of the file.
The individual records (INDI) define John Smith (ID I1), Elizabeth Stansfield (ID I2), and James Smith (ID I3).
The family record (FAM) links the husband (HUSB), wife (WIFE), and child (CHIL) by their ID numbers.
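A minimal reconstruction of the file described above is shown below; tag usage follows GEDCOM 5.5, but incidental details (such as the submitter record) are illustrative placeholders.

0 HEAD
1 SOUR PAF
2 NAME Personal Ancestral File
2 VERS 5.0
1 GEDC
2 VERS 5.5
2 FORM LINEAGE-LINKED
1 CHAR ANSEL
1 SUBM @U1@
0 @I1@ INDI
1 NAME John /Smith/
1 SEX M
1 FAMS @F1@
0 @I2@ INDI
1 NAME Elizabeth /Stansfield/
1 SEX F
1 FAMS @F1@
0 @I3@ INDI
1 NAME James /Smith/
1 FAMC @F1@
0 @F1@ FAM
1 HUSB @I1@
1 WIFE @I2@
1 CHIL @I3@
0 @U1@ SUBM
1 NAME Submitter
0 TRLR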
Versions
The current version of the specification is GEDCOM 5.5.1, released on 15 November 2019. The draft GEDCOM 5.5.1 specification had been issued in 1999, introducing nine new tags, including WWW, EMAIL and FACT, and adding UTF-8 as an approved character encoding. ANSEL is still defined as a valid character encoding, though it is now uncommon and no longer needed. The 2019 release makes only minor corrections to the draft. Although the draft was never formally approved, its provisions were adopted in part by a number of genealogy programs and are used by FamilySearch.org. While PAF 5.2 supports GEDCOM 5.5, it uses UTF-8 as its internal character set, a feature introduced in the GEDCOM 5.5.1 draft, and can output a UTF-8 GEDCOM.
On 23 January 2002, a draft (beta) version of GEDCOM 6.0 was released for developer study only, as it was not a complete specification, and developers were recommended to not begin implementation in their software. For example, descriptions of the meaning and expected contents of tags were not included. GEDCOM 6.0 was to be the first version to store data in XML format, and was to change the preferred character set from ANSEL to Unicode.
Lineage-linked GEDCOM is the deliberate de facto common denominator. Although version 5.5 of the GEDCOM standard was first published in 1996, many genealogical software suppliers have yet to support the multilingual Unicode text (in place of the ANSEL character set) introduced with that version of the specification. Uniform use of Unicode would allow the use of international character sets, for example the storage of East Asian names in their original Chinese, Japanese and Korean (CJK) characters, without which they can be ambiguous and of little use for genealogical or historical research.
Release history
Limitations
Support for multi-person events and sources
A GEDCOM file can contain information on events such as births, deaths, census records, ship's records, marriages, etc.; a rule of thumb is that an event is something that took place at a specific time, at a specific place (even if time and place are not known). GEDCOM files can also contain attributes such as physical description, occupation, and total number of children; unlike events, attributes generally cannot be associated with a specific time or place.
The GEDCOM specification requires that each event or attribute be associated with exactly one individual or family. This causes redundancy for events such as censuses, where the actual census entry often contains information on multiple individuals: in the GEDCOM file, a separate census (CENS) event must be added for each individual referenced. Some genealogy programs, such as Gramps and The Master Genealogist, have elaborate database structures for sources that are used, among other things, to represent multi-person events. When databases are exported from these programs to GEDCOM, such structures cannot be represented, so the event or source information, including all of the relevant citation reference information, must be duplicated in each place it is used. This duplication makes it difficult for the user to maintain the information related to sources.
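A hypothetical sketch of this redundancy: a single 1850 census entry covering a couple must be encoded as two separate CENS events, one per individual, each repeating the date, place and source citation (names, place and source pointer are illustrative).

0 @I1@ INDI
1 NAME John /Smith/
1 CENS
2 DATE 1850
2 PLAC Hudson County, New Jersey
2 SOUR @S1@
0 @I2@ INDI
1 NAME Elizabeth /Smith/
1 CENS
2 DATE 1850
2 PLAC Hudson County, New Jersey
2 SOUR @S1@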
In the GEDCOM specification, events associated with a family, such as a marriage, are stored only once, as part of the family (FAM) record, and both spouses are linked to that single family record.
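A minimal sketch of such a shared family event (names and date are illustrative):

0 @F1@ FAM
1 HUSB @I1@
1 WIFE @I2@
1 CHIL @I3@
1 MARR
2 DATE 12 JUN 1825
2 PLAC Newark, New Jersey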
Ambiguity in the specification
The GEDCOM specification was made purposefully flexible to support many ways of encoding data, particularly in the area of sources. This flexibility has led to a great deal of ambiguity, and has produced the side effect that some genealogy programs which import GEDCOM do not import all of the data from a file.
Support for varying definitions of families and relationships
GEDCOM does not explicitly support data representation of many types of close interpersonal relationships, such as same-sex marriages, domestic partnerships, cohabitation, polyamory or polygamy. Such relationships can only be represented using the generic ASSO tag used for any type of relationship.
Ordering of events that do not have dates
The GEDCOM specification does not offer explicit support for keeping a known order of events. In particular, the order of relationships (FAMS) for a person and the order of the children within a relationship (FAM) can be lost. In many cases the sequence of events can be derived from the associated dates, but dates are not always known, particularly when dealing with data from centuries ago. For example, a person may have had two relationships, both with unknown dates, where written descriptions nevertheless make clear which one came second. The order in which these FAMS are recorded in GEDCOM's INDI record will depend on the exporting program; in Aldfaer, for instance, the sequence depends on the ordering of the data by the user (alphabetical, chronological, reference, etc.). The proposed XML GEDCOM standard does not address this issue either.
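In the following sketch, the fact that @F1@ appears before @F2@ is the only hint at which relationship came first, and nothing in the standard obliges an exporting program to preserve that line order:

0 @I1@ INDI
1 NAME John /Smith/
1 FAMS @F1@
1 FAMS @F2@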
Lesser-known features
GEDCOM has many features that are not commonly used. Some software packages do not support all the features that the GEDCOM standard allows.
Multimedia
The GEDCOM standard supports the inclusion of multimedia objects (for example, photos of individuals). Such multimedia objects can be either included in the GEDCOM file itself (called the "embedded form") or in an external file where the name of the external file is specified in the GEDCOM file (called the "linked form"). Embedding multimedia directly in the GEDCOM file makes transmission of data easier, in that all of the information (including the multimedia data) is in one file, but the resulting file can be enormous. Linking multimedia keeps the size of the GEDCOM file under control, but then when transmitting the file, the multimedia objects must either be transmitted separately or archived together with the GEDCOM into one larger file. Support for embedding media directly was dropped in the draft 5.5.1 standard.
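Since the embedded form was dropped, a minimal sketch of the linked form using the 5.5.1 draft record layout (file name and title are illustrative):

0 @I1@ INDI
1 NAME John /Smith/
1 OBJE @M1@
0 @M1@ OBJE
1 FILE portrait.jpg
2 FORM jpeg
2 TITL Portrait photograph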
Conflicting information
The GEDCOM standard allows for the specification of multiple opinions or conflicting data, simply by specifying multiple records of the same type. For example, if an individual's birth date was recorded as 10 January 1800 on the birth certificate, but 11 January 1800 on the death certificate, two BIRT records for that individual would be included, the first with the 10 January 1800 date and giving the birth certificate as the source, and the second with the 11 January 1800 date and giving the death certificate as the source. The preferred record is usually listed first.
This example encoded in GEDCOM might look like this:
0 @I1@ INDI
1 NAME John /Doe/
1 BIRT
2 DATE 10 JAN 1800
2 SOUR @S1@
3 DATA
4 TEXT Transcription from birth certificate would go here
3 NOTE This birth record is preferred because it comes from the birth certificate
3 QUAY 2
1 BIRT
2 DATE 11 JAN 1800
2 SOUR @S2@
3 DATA
4 TEXT Transcription from death certificate would go here
3 QUAY 2
Conflicting data may also be the result of user errors. The standard does not specify in any way that the contents must be consistent. A birth date like "10 APR 1819" might mistakenly have been recorded as "10 APR 1918" long after the person's death. The only way to reveal such inconsistencies is by rigorous validation of the content data.
Internationalization
The GEDCOM standard supports internationalization in several ways. First, newer versions of the standard allow data to be stored in Unicode (or, more recently, UTF-8), so text in any language can be stored. Secondly, in the same way that multiple events can be recorded for a person, GEDCOM allows multiple names for a person, so names can be stored in multiple languages (although there is no standardized way to indicate which instance is in which language). Finally, in the latest version (5.5.1, not yet in widespread use), the NAME field also supports a phonetic variation (FONE) and a romanized variation (ROMN) of the name.
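A sketch of a 5.5.1-style name with phonetic and romanized variants (the name shown is illustrative):

0 @I1@ INDI
1 NAME 山田 /太郎/
2 FONE やまだ /たろう/
3 TYPE kana
2 ROMN Yamada /Taro/
3 TYPE romaji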
GEDCOM X
In February 2012 at the RootsTech 2012 conference, FamilySearch outlined a major new project around genealogical standards called GEDCOM X, and invited collaboration.
The project includes software developed under the Apache open-source license, data formats that facilitate basing family trees on sources and records (both physical and digital artifacts), support for sharing and linking data online, and an API.
In August 2012 FamilySearch employee and GEDCOM X project leader Ryan Heaton dropped the claim that GEDCOM X is the new industry standard, and repositioned GEDCOM X as another FamilySearch open source project.
Alternatives to GEDCOM
Commsoft, the authors of the Roots series of genealogy software and Ultimate Family Tree, defined a version called Event-Oriented GEDCOM (also known as "Event GEDCOM" and originally called InterGED), which included events as first class (zero-level) items. Although it is event based, it is still a model built on assumed reality rather than evidence. Event GEDCOM was more flexible, as it allowed some separation between believed events and the participants. However, Event GEDCOM was not widely adopted by other developers due to its semantic differences. With Roots and Ultimate Family Tree no longer available, very few people today are using Event GEDCOM.
Gramps XML is an XML-based open format created by the open source genealogy project Gramps and used also by PhpGedView.
The Family History Information Standards Organisation was established in 2012 with the aim of developing international standards for family history and genealogical information. One of its standards is a continuation of GEDCOM, called Extended Legacy Format (ELF), which will begin with compatibility with GEDCOM 5.5(.1) but include an extensibility mechanism. This is designed to assist software with a financial commitment to GEDCOM and to prevent it from being left behind as further standards evolve.
See also
FamilySearch
Ancestral File Number
International Genealogical Index
GENDEX – Genealogical index
Genealogical numbering systems
GNTP – Genealogy Network Transfer Protocol
Tiny Tafel Format – encoded "ancestor table"
References
External links
General
GEDCOM Standard
FamilySearch GEDCOM Guide
GEDCOM X Project
on LDS Church's Adoption of the XML Standard
THE GEDCOM STANDARD Release 5.5.1, released 15 November 2019
Computer-related introductions in 1984
Computer file formats
Genealogy and The Church of Jesus Christ of Latter-day Saints
Genealogy software |
43325520 | https://en.wikipedia.org/wiki/Thread%20%28network%20protocol%29 | Thread (network protocol) | Thread is an IPv6-based, low-power mesh networking technology for Internet of things (IoT) products, intended to be secure and future-proof. The Thread protocol specification is available at no cost; however, this requires agreement and continued adherence to an End-User License Agreement (EULA), which states that "Membership in Thread Group is necessary to implement, practice, and ship Thread technology and Thread Group specifications." Membership of the Thread Group is subject to an annual membership fee, except for the "Academic" tier.
In July 2014, the "Thread Group" alliance was formed as a working group to aid Thread becoming an industry standard by providing Thread certification for products. Initial members were ARM Holdings, Big Ass Solutions, NXP Semiconductors/Freescale, Google-subsidiary Nest Labs, OSRAM, Samsung, Silicon Labs, Somfy, Tyco International, Qualcomm, and the Yale lock company. In August 2018 Apple Inc. joined the group and released its first Thread product, the HomePod Mini, in late 2020.
Thread uses 6LoWPAN, which, in turn, uses the IEEE 802.15.4 wireless protocol with mesh communication, as do Zigbee and other systems. However, Thread is IP-addressable, with cloud access and AES encryption. A BSD-licensed open-source implementation of Thread, called "OpenThread", has been released by Google.
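As a sketch of how a Thread network is typically formed with OpenThread's command-line interface (command names as documented for recent OpenThread builds; exact output and syntax may vary by version): the operator creates and commits an operational dataset, brings up the interface, and starts Thread, after which the first node promotes itself to Leader.

> dataset init new
Done
> dataset commit active
Done
> ifconfig up
Done
> thread start
Done
> state
leader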
In 2019, the Connected Home over IP project (later renamed "Matter"), led by Zigbee, Google, Amazon and Apple, announced a broad collaboration to create a royalty-free standard and open-source code base to promote interoperability in home connectivity, leveraging Thread, as well as Wi-Fi and Bluetooth Low Energy.
Selling points and key features
Thread uses 6LoWPAN, which is based on the use of a connecting router, called an edge router. Thread calls its edge routers Border Routers. Unlike other proprietary networks, 6LoWPAN, like any network with edge routers, does not maintain any application-layer state, because such networks forward datagrams at the network layer. This means that 6LoWPAN remains unaware of application protocols and changes, which lowers the processing burden on edge routers. It also means that Thread does not need to maintain an application layer. Thread states that multiple application layers can be supported, as long as they are low-bandwidth and able to operate over IPv6.
Thread touts that there is no single point of failure in its system. However, if the network is set up with only one edge router, that router can be a single point of failure. The edge router or another router can assume the role of Leader for certain functions; if the Leader fails, another router or edge router takes its place. This is the main way that Thread guarantees no single point of failure.
Thread promises a high level of security. Only devices that are specifically authenticated can join the network. All communications through the network are secured with a network key.
Competing IoT protocols
Competing Internet of things (IoT) protocols include Bluetooth Low Energy (including Bluetooth Mesh), Zigbee, Z-Wave, Wi-Fi HaLow, Bluetooth 5, Wirepas, MiraOS and VEmesh.
See also
Home automation
Wi-Fi Direct
Wi-Fi EasyMesh
DASH7
KNX (standard)
LonWorks (standard)
BACnet (standard)
References
External links
– official site
OpenThread
Home automation
Building automation
Personal area networks
Mesh networking
IEEE 802
IPv6
Computer-related introductions in 2014 |
283853 | https://en.wikipedia.org/wiki/New%20Jersey%20Institute%20of%20Technology | New Jersey Institute of Technology | New Jersey Institute of Technology (NJIT) is a public research university in Newark, New Jersey with a degree-granting satellite campus in Jersey City. Founded in 1881 with the support of local industrialists and inventors especially Edward Weston, NJIT opened as Newark Technical School in 1885 with 88 students. The school grew into a classic engineering college – Newark College of Engineering – and then, with the addition of a School of Architecture in 1973, into a polytechnic university that now hosts five colleges and one school. As of fall 2020, the university enrolls about 11,600 students, 2,000 of whom live on campus.
NJIT offers 52 undergraduate (Bachelor of Science/Arts) majors and 67 graduate (Masters and PhD) programs. Via its Honors College it also offers professional programs in Healthcare and Law in collaboration with nearby institutions including Rutgers Medical School and Seton Hall Law School. Cross-registration with Rutgers University-Newark which borders its campus is also available. NJIT is classified among "R1: Doctoral Universities – Very high research activity". It operates the Big Bear Solar Observatory, the Owens Valley Radio Observatory (both in California) and a suite of automated observatories across Antarctica, South America and the US.
As of May 2021, the school's founders, faculty and alumni include a Turing Award winner (2011), a Dannie Heineman Prize for Mathematical Physics winner (2015), 9 members of the National Academy of Engineering, 2 members of the National Inventors Hall of Fame, 1 member of the National Academy of Sciences, an astronaut, a National Medal of Technology and Innovation winner, a Congressional Gold Medal winner, a William Bowie Medal winner, multiple IEEE medalists, and 15 members of the National Academy of Inventors including 5 senior members. Over the past 20 years NJIT graduates have won fourteen Goldwaters, five Fulbrights, two Boren Scholarships, a Truman, four Gilmans, three DAADs, a Tau Beta Pi graduate Fellowship, a Humanity in Action Fellowship, two Whitakers, and sixteen NSF Graduate Research Fellowships.
NJIT is a member of the Association of Public and Land-grant Universities, a Sea grant college, a Space grant college, and a member of the Association of Collegiate Schools of Architecture. It has participated in the McNair Scholars Program since 1999. With 20 varsity teams, the NCAA Division I "Highlanders" mainly compete in the America East Conference.
History
Founding and early years
The New Jersey Institute of Technology has a history dating back to the 19th century. Originally introduced from Essex County on March 24, 1880 and revised with input from the Newark Board of Trade in 1881, an act of the New Jersey State Legislature essentially drew up a contest to determine which municipality would become home to the state's urgently needed technical school. The challenge was straightforward: the state would stake "at least $3,000 and not more than $5,000" and the municipality that matched the state's investment would earn the right to establish the new school.
The Newark Board of Trade, working jointly with the Newark City Council, launched a campaign to win the new school. Dozens of the city's industrialists, along with other private citizens, eager for a work force resource in their home town, threw their support behind the fund-raiser. By 1884, the collaboration of the public and private sectors produced success. Newark Technical School was ready to open its doors.
The first 88 students, mostly evening students, attended classes in a rented building at 21 West Park Street. Soon the facility became inadequate to house an expanding student body. To meet the needs of the growing school, a second fund-raiser—the institution's first capital campaign—was launched to support the construction of a dedicated building for Newark Technical School. In 1886, under the leadership of the school's dynamic first director, Dr. Charles A. Colton, the cornerstone was laid at the intersection of High Street and Summit Place for the three-story building later to be named Weston Hall, in honor of the institution's early benefactor. A laboratory building, later to be called Colton Hall, was added to the campus in 1911.
Becoming Newark College of Engineering
Dr. Allan R. Cullimore led the institution from 1920 to 1949, transforming Newark Technical School into Newark College of Engineering (name adopted in 1930). Campbell Hall was erected in 1925, but due to the Depression and World War II, only the former Newark Orphan Asylum, now Eberhardt Hall, was purchased and renovated by the college in the succeeding decades. Cullimore left an unpublished history of the institution dated 1955.
As of 1946, about 75% of the freshman class had served in the U.S. Armed Forces. Cullimore Hall was built in 1958 and two years later the old Weston Hall was razed and replaced with the current seven-story structure. Doctoral-level programs were introduced and six years later, in 1966, a four-building expansion was completed.
Becoming New Jersey Institute of Technology
With the addition of the New Jersey School of Architecture in 1973, the institution had evolved into a technological university, emphasizing a broad range of graduate and undergraduate degrees and dedicated to significant research and public service. President William Hazell, Jr., felt that the name of the school should clearly communicate this dynamic evolution. Alumni were solicited for suggestions to rename the institution, with the winning suggestion coming from Joseph M. Anderson '25.
Anderson's suggestion – New Jersey Institute of Technology – cogently emphasized the increasing scope of educational and research initiatives at a preeminent New Jersey university. The Board of Trustees approved the transition to the new name in September 1974, and Newark College of Engineering officially became New Jersey Institute of Technology on January 1, 1975. Anderson received the personal congratulations of President Hazell. At that time, the Newark College of Engineering name was retained for NJIT's engineering school.
The establishment of a residential campus and the opening of NJIT's first dormitory (Redwood Hall) in 1979 began a period of steady growth that continues today under the Landscape Master Plan. Two new schools were established at the university during the 1980s, the College of Science and Liberal Arts in 1982 and the School of Industrial Management in 1988. The Albert Dorman Honors College was established in 1994, and the newest school, the College of Computing Sciences, was created in 2001.
Recent history
On May 2, 2003 Robert A. Altenkirch was inaugurated as president. He succeeded Saul K. Fenster, who was named the university's sixth president in 1978. Altenkirch retired in 2011 and on January 9, 2012, NJIT Trustees named Joel Bloom president.
In 2003 the opening of the new Campus Center on the site of the former Hazell Hall centralized campus social events. Construction of a new Atrium, Bookstore, Information Desk, Dining Hall, computer lab, and new student organization offices continued into 2004. In 2005 a row of automobile chop shops adjacent to campus were demolished. In 2006 construction of a new off-campus residence hall by American Campus Communities commenced in the chop shops' location. The new hall, which opened in 2007, is dubbed the University Centre.
Also in 2005, Eberhardt Hall was fully renovated and re-inaugurated as the Alumni Center and the symbolic front door to the university. Its restored tower was the logo of the former Newark College of Engineering and was designed by Kevin Boyajian and Scott Nelson. A rebranding campaign with the current slogan, "NJIT – New Jersey's Science and Technology University – The Edge in Knowledge", was launched to emphasize NJIT's unique position as New Jersey's preeminent science-and-technology-focused research university.
Recently, the school changed its accredited management school into an AACSB-accredited business school. The business school focuses on utilizing technology to serve business needs. The school benefits from its proximity to New York City; in particular, Wall Street is just twenty-five minutes away. The school also has a strong academic collaboration with the nearby Rutgers business school. In 2008 NJIT began a program with the Heritage Institute of Technology (HIT) in West Bengal, India under which 20 HIT students come to NJIT for summer internships.
In 2009 the New Jersey School of Architecture was reorganized as the College of Architecture and Design (CoAD). Within the college, the New Jersey School of Architecture continues, and it was joined by the newly established School of Art + Design.
In June 2010, NJIT officially completed its purchase of the old Central High School building which sits between the NJIT and Rutgers–Newark campuses. With the completion of the purchase, Summit Street, from Warren Street to New Street, was converted into a pedestrian walkway. Subsequently, the Central High School building was extensively renovated, preserved, and updated per the Campus Master Plan, which included tearing down Kupfrian Hall to create more greenery.
Facilities added in 2016-18 include: a 209,000 sq.ft., multi-purpose Wellness and Events Center (WEC) which features a retractable-seating arena that can accommodate 3,500 spectators or 4,000 event participants; a 24,000 sq. ft. Life Sciences and Engineering Center; a 10,000 sq. ft. Makerspace, and a parking garage with spaces for 933 cars.
The university awarded 2,951 degrees in 2017, including 1512 bachelor's, 1281 master's, and 59 PhDs. Enrollment, currently at 11,423, is projected to reach 12,200 by 2020.
Academics
Admissions
The admission criteria consist of:
High school academic record
Standardized test scores (SAT or ACT scores)
Class rank
Portfolio: Applicants to the Architecture, Digital Design, Industrial Design, and Interior Design majors are required to submit a portfolio of their creative work.
The average SAT score (math + verbal only) for enrolling freshmen in fall 2018 is 1288 (662 Math, 626 Verbal).
The average SAT score (math + verbal only) for enrolling freshmen in the Honors College in fall 2018 is 1470.
The minimum SAT score (math + verbal only) for enrolling freshmen in the accelerated BS/MD program – run in combination with New Jersey Medical School (Rutgers) – is 1450.
The male-to-female student ratio is about 3.2 to 1, and the student-to-faculty ratio is 20 to 1.
Rankings
In November 2021 in an entrepreneurship ranking by Princeton Review NJIT was ranked 34th in the US out of nearly 300 schools with entrepreneurship offerings.
In July 2021, in an article entitled "The Best New Tech Talent May Not Be Where You Think: A Guide to Hiring from Universities in 2021" published by HackerRank, NJIT was ranked #1 in four of five programming-language skills based on certification pass rates.
In the 2021 edition of the QS World University Ranking USA, NJIT was ranked 90th (2-way tie) out of the 352 US Institutions listed (more than 750 considered).
In the 2020 edition of the QS World University Rankings: USA, NJIT was ranked 74th. The ranking listed 302 US institutions.
In April 2019 NJIT's undergraduate Biomedical engineering program was ranked 6th in the country by BestValueSchools.com.
In April 2018 Forbes ranked NJIT #1 in the country in upward mobility defined in terms of moving students from the bottom fifth of the income distribution to the top fifth.
In U.S. News 2018 online rankings, four of NJIT's suite of on-line graduate programs were ranked among the best 100 in the country, including its information technology programs, which were ranked 17th.
In Payscale's 2017 College ROI Report, which covers 1833 institutions, NJIT ranked 27th and 42nd for return on investment, based on in-state and out-of-state tuition respectively.
In 2016 NJIT was ranked #3 on a list of "11 Public Colleges Where Grads Make Six Figures" published by "Money Magazine".
NJIT was ranked 133rd out of 662 universities in the US in R&D expenditures in 2016 by the National Science Foundation (NSF).
In 2015, NJIT was ranked #3 on Business Insider's list of "The 50 most underrated colleges in America" (high employment upon graduation and high average salary)
In 2015, NJIT was ranked in the top 25 colleges for earning six figures before attaining a graduate degree in Time's Money list.
In 2013, NJIT was ranked the #1 college "value" in the country (based on cost vs. starting salary of recent graduates), by BuzzFeed.
NJIT was ranked 434th out of around 20,000 colleges and universities in the world by Webometrics in January 2011.
NJIT was ranked among the top 100 world universities in Computer Science in 2009 and in 2010 by Academic Ranking of World Universities.
Colleges and schools
Comprising five colleges and one school, the university is organized into 21 departments, three of which, Biological Sciences, History, and Theater Arts, are federated with Rutgers-Newark whose campus abuts NJIT's.
With a student population that is 15% international, NJIT is among the most ethnically diverse national universities in the country.
It has multiple study abroad options along with extensive co-op, internship, and service opportunities.
Newark College of Engineering (NCE)
Newark College of Engineering, which was established in 1919, is one of the oldest and largest professional engineering schools in the United States. It offers 13 undergraduate degree programs, 16 master's and 10 doctoral degree programs. Undergraduate enrollment is more than 2,500, and more than 1,100 are enrolled in graduate study. The 150-member faculty includes engineers and scholars who are widely recognized in their fields. An estimated one in four professional engineers in the State of New Jersey are NCE alumni. NCE has more than 40,000 living alumni.
College of Science and Liberal Arts (CSLA)
The College of Science and Liberal Arts was formed in 1982. It was originally known as the Third College, having been preceded by Newark College of Engineering and the New Jersey School of Architecture. In 1986 its name was changed to the College of Science and Liberal Arts as a result of a more sharply defined mission and direction. Growing steadily ever since, CSLA has spawned two of NJIT's colleges: the Albert Dorman Honors College, which evolved out of the Honors Program founded in CSLA in 1985, and the College of Computing Sciences, which developed out of CSLA's Computer and Information Science Department.
Today the college consists of six academic departments:
Biological Sciences
Chemistry and Environmental Science
Federated History
Humanities
Mathematical Sciences
Physics
CSLA also houses:
Department of Aerospace Studies
Rutgers/NJIT Theatre Arts Program
Interdisciplinary Program in Materials Science
Center for Applied Mathematics and Statistics
Center for Solar Research
Big Bear Solar Observatory
Owens Valley Solar Array
J. Robert and Barbara A. Hillier College of Architecture and Design (HCAD)
The College of Architecture and Design houses the School of Architecture (SoA) and the School of Art and Design. The college offers undergraduate degrees in architecture, digital design, industrial design, and interior design as well as graduate degrees in architecture, infrastructure planning, and urban systems. HCAD is the only college at NJIT to have its own designated library. The library contains materials related to the majors offered in HCAD in the form of periodicals, reference materials, rare books, visual materials (i.e. architectural drawings, prints, postcards, maps, etc.), digital databases, and a materials library.
The college offers a pre-college summer program for high school students.
Albert Dorman Honors College (ADHC)
Ying Wu College of Computing Sciences (YWCC)
The Computer Science department, part of the Ying Wu College of Computing Sciences, is the largest at NJIT, comprising more than one fifth of the student population. It is also the largest computer science department among all research universities in the New York metropolitan area.
The department offers a full range of degree programs in computer science (BA/BS, MS and PhD), in addition to emerging interdisciplinary programs: Telecommunication (MS), Bioinformatics (BS/MS), and Computing and Business (BS/MS). The Bioinformatics degree is also available in a pre-med option.
In December 2019, the school opened a satellite site in Jersey City that will focus on financial technology training for those working in the financial industry on Wall Street and in Jersey City.
Martin Tuchman School of Management (MTSM)
The Martin Tuchman School of Management was established in 1988 and was accredited by the Association to Advance Collegiate Schools of Business in 1997. It offers programs in finance, accounting, marketing, management information systems, international business, technological entrepreneurship, and corporate communications in conjunction with Rutgers University.
Degrees available include a Bachelor of Science program (four years, 124 credits), a Master of Science in management program (30 credits), and two Master of Business Administration (MBA) programs: One regular (48 credits; two years for full-time students, three or four years for part-time students) and the other an accelerated 18-month Executive MBA program for managers and professionals. MTSM also offers a Ph.D. degree in Business Data Science. Research areas include fintech, innovation management, and the advancement of technologies in the business domain including deep learning and distributed ledgers.
MTSM hosts entrepreneurship programs for the regional community, including the NSF I-Corps, the New Venture Assistance Program, and the Greater Newark–Jersey City Regional Business Model Competition.
Research
NJIT's R&D expenditures were $142 million in 2017 and $162 million in 2018. Areas of focus include applied mathematics, materials science, biomedical engineering, cybersecurity, and solar-terrestrial physics – of which the Center for Solar-Terrestrial Research is a world leader. A key agent in regional economic development, NJIT hosts VentureLink, formerly the Enterprise Development Center (EDC), an on-campus business incubator that houses over 90 start-ups, and the New Jersey Innovation Institute (NJII), which offers R&D services to business.
The university has performed research in nanotechnology, solar-terrestrial physics, polymer science, and the development of a smart gun technology. The university research centers include the National Center for Transportation and Industrial Productivity and SmartCampus. The university hosts the Metro New York FIRST Robotics office. The university also hosts the Center for Solar-Terrestrial Research which owns and operates the Big Bear Solar Observatory, the world's largest solar observatory, located in Big Bear Lake, California, and operates the Owens Valley Solar Array, near Bishop, California.
In the past, NJIT was home to the Computerized Conferencing and Communications Center (CCCC), a research center that specialized in computer-mediated communication. The systems that resulted from this research were the Electronic Information Exchange System (EIES) and its continuations, the Electronic Information Exchange System 2 (EIES2) and the Tailorable Electronic Information Exchange System (TEIES). One of the foremost developments of EIES was the "Virtual Classroom", a term coined by Dr. Starr Roxanne Hiltz. This was the first e-learning platform in the world, and it was unique in that it evolved onto an existing communications system, rather than having a system created specifically for it. Their missions completed, the CCCC and EIES were terminated in the mid-90s.
The university currently operates a Class-10 cleanroom and a Class-1000 cleanroom on campus for academic and research purposes including counter-bioterrorism research.
The university also maintains an advanced 67-node supercomputer cluster in its Mathematics Department for research purposes.
NJIT conducts cybersecurity research in a number of areas including cross-domain information sharing, data security and privacy, data mining for malware detection, geospatial information security, secure social networks, and secure cloud computing. The university is designated a National Center of Academic Excellence (CAE) in Cyber Defense Education through the 2020 academic year by the National Security Agency and Department of Homeland Security.
Libraries and archives supporting research
NJIT's Main Library, The Robert W. Van Houten Library, is located in the Central Avenue Building, a facility for quiet and group study, researching, and browsing print and online sources. Since 1997 the Van Houten Information Commons has housed 120 computer workstations.
The Barbara and Leonard Littman Library for Architecture and Design is located in Weston Hall. It houses a core collection that includes print and electronic books, journals, maps, drawings, models, e-images, materials samples, and over 70,000 slides.
Included among NJIT's information resources are the university's historical archive including items developed and manufactured by Edward Weston, a scientist, prolific inventor, and a founding member of the board of trustees of the university. Dr. Weston's collection of artifacts and rare books is housed in the Van Houten Library and is available to scholars interested in the history of science and technology.
Residence life
Living: on-campus
About 80% of NJIT students commute to campus. The Residence Life (on-campus) community currently includes a little over 2,200 students.
There are five residence halls on the NJIT campus. Redwood Hall, constructed in 1978, was the first, followed by Cypress, Oak, and Laurel (constructed in 1997 and extended in 1999). Cypress and Redwood are primarily used for freshman students, while Laurel and Oak house upperclassmen. The fifth, Warren Street Village, which opened in the fall of 2013, provides housing for Dorman Honors College students and several Greek houses which together provide space for about 600 students. The Warren Street Village also houses the Albert Dorman Honors College itself.
Living: off-campus
A new, almost-on-campus residence hall known as University Centre (run by American Campus Communities) was completed in 2007. Located near NJIT's Guttenberg Information Technologies Center (GITC) building, it houses students from NJIT, Rutgers–Newark, New Jersey Medical School (Rutgers), and Seton Hall University. Many students from local institutions find housing in nearby neighborhoods and towns including Harrison, Kearny, Fairmount and East Orange.
Athletics
Sports/Teams
NJIT sponsors 20 varsity sports teams, including 19 NCAA Division I teams and 1 ACHA Division II team. It also sponsors 6 club-level sports. Its teams are called the Highlanders, and the school colors are red and white with a blue accent. NJIT's teams compete at the NCAA Division I level primarily as members of the America East Conference (AEC). Several teams have affiliations outside the AEC: men's volleyball competes in the Eastern Intercollegiate Volleyball Association (EIVA), the men's fencing team is a member of the Mid-Atlantic Collegiate Fencing Association (MACFA), and women's and men's tennis compete in the Southland Conference (SLC). As of 2016, the women's fencing team is independent.
On 6 December 2014 NJIT's basketball team, unranked and independent at the time, made headlines in national sports reports when they defeated the nationally ranked (#17) Michigan Wolverines.
NCAA Division I sports at NJIT are:
(M) Baseball
(M) (W) Basketball
(M) (W) Cross country
(M) (W) Fencing, men compete in MACFA, women compete as an independent
(M) Lacrosse
(M) (W) Soccer
(M) Swimming & diving
(M) (W) Tennis, compete in SLC
(M) (W) Track & field (indoor)
(M) (W) Track & field (outdoor)
(M) (W) Volleyball, men compete in EIVA
ACHA Division II sports:
(M) Ice Hockey, compete in CSCHC
Club-level sports:
Archery, Bowling, Cricket, Ice Hockey, Mixed Martial Arts, Ultimate Frisbee
Facilities
In recent years NJIT has extensively added to and upgraded its sports and recreation facilities. In 2017 it opened the Wellness and Events Center (WEC), a major facility that includes a 3500-seat Basketball/Volleyball arena which can be converted into an event space capable of accommodating 3,000 attendees. In 2019 a new Soccer/Lacrosse field was opened. The WEC replaced the Estelle & Zoom Fleisher Athletic Center.
Notable alumni
Since its founding in 1881, NJIT has issued degrees to more than 77,000 graduates. NJIT alumni have gone on to pursue distinguished careers in many sectors.
Faculty and administrators at other universities
A. Michael Noll (class of 1961), dean at University of Southern California
Judea Pearl (class of 1961), professor at University of California, Los Angeles; winner of Turing Award (Nobel Prize of Computing) in 2011 (co-listed under Science and Engineering)
John Pelesko (class of 1997), who earned his PhD at NJIT; a professor and an associate dean at the University of Delaware
Pierre Ramond (class of 1965), distinguished professor of physics at University of Florida
Victor J. Stenger (class of 1956), professor of physics at University of Hawaii who authored nine books
Yuriy Tarnawsky (class of 1956), professor of Ukrainian literature and culture at Columbia University
Charles Speziale (class of 1970), scientist at NASA Langley Research Center and professor at Boston University
Business and industry
Albert Dorman (class of 1945, Hon ScD 1999) founder and chairman (ret) of AECOM Technology Corp., a member of the National Academy of Engineering, a fellow of the American Institute of Architects, and a distinguished member of the American Society of Civil Engineers.
Ying Wu (MSEE 1988) Telecommunications engineer and entrepreneur, Chairman of the China Capital group. Founder and ex-CEO of UTStarcom (China) Ltd.
Ehsan Bayat (born 1963, class of 1986), chairman and founder of Afghan Wireless Communication Company, Ariana Television and Radio, Bayat Foundation, Bayat Energy
Robert S. Dow (born 1945, class of 1969), senior partner, former managing partner of Lord Abbett, and Olympic fencer (He is also listed under Sports.)
Frederick Eberhardt (class of 1884), president of Gould & Eberhardt, a Newark-based machine tool manufacturer, and one of 88 in NJIT's inaugural class
Vince Naimoli (class of 1962), owner of the Tampa Bay Devil Rays
Victor Pelson (deceased, class of 1959), American executive at AT&T Corporation.
Jim Stamatis (class of 1985), vice president at Louis Berger Group
Dick Sweeney (class of 1981), co-founder of Keurig
Military, politics and government
Harry L. Ettlinger (born 1926, class of 1950), one of the Monuments Men; awarded the Congressional Gold Medal in 2015.
Ellen M. Pawlikowski (class of 1978), 4-Star General of the United States Air Force (retired August 2018); elected to the National Academy of Engineering in 2014.
Paul Sarlo (born 1968, BS 1992, MS 1995), politician who has served in the New Jersey Senate since 2003, where he represents the 36th Legislative District.
Funsho Williams (MSc 1974), Nigerian civil servant and politician.
Science and engineering
Sara Del Valle, (class of 2001), mathematical epidemiologist at the Los Alamos National Laboratory.
Judea Pearl (class of 1961), prominent researcher in superconducting electronic components and artificial intelligence; winner of the Turing Award
Pierre Ramond (class of 1965), theoretical physicist who made significant contributions to string theory; winner of the Dannie Heineman Prize for Mathematical Physics (2015).
Gerard J. Foschini (class of 1961), prominent telecommunications engineer at Bell Labs; winner of the IEEE Alexander Graham Bell Medal. He is a member of the National Academy of Engineering.
Beatrice Hicks (1919–1979, class of 1939), founder of the Society of Women Engineers and member of the National Academy of Engineering.
Paul Charles Michaelis (BSEE and MS Physics), researcher of magnetic bubble memory; received the IEEE Morris N. Liebmann Memorial Award in 1975.
John J. Mooney (MSc 1960), co-inventor of the three-way catalytic converter, winner of National Medal of Technology in 2002.
T. J. O'Malley (class of 1936), aerospace engineer, winner of the NASA Distinguished Public Service Medal, 1969, 1974.
John Sawruk (1946–2008), mechanical engineer, Boss Kettering Award winner for his work on the GM 2.4L 4 cylinder engine
Wally Schirra (1923–2007), astronaut, only person to fly in all of America's first three space programs (Mercury, Gemini and Apollo)
Victor J. Stenger (1935–2014, BSEE class of 1956), noted particle physicist, philosopher, and religious skeptic; author of 13 books for the general reader and numerous essays, many of which relate to the existence of God.
Entertainment
Rashia Fisher, rapper who is known as Rah Digga and a member of Flipmode Squad (attended, but did not graduate)
Sports
Raymond E. Blum (class of 1950), speed skater who competed in the 1948 Winter Olympics in St. Moritz, Switzerland
Robert Dow (class of 1969), fencer; competed in the team sabre event at the 1972 Summer Olympics.
Hernan "Chico" Borja (deceased, class of 1980), soccer player and coach; first NJIT men's player to be named an All-American. He played for several professional teams, including the New York Cosmos, and was a member of the US national team from 1982 to 1988.
Isaiah Wilkerson (class of 2012), professional basketball player.
Chris Flores (class of 2013), professional basketball player
Mark Leiter Jr., (class of 2016) professional baseball player
Damon Lynn (class of 2017), professional basketball player (NBA G League)
Notable faculty
University presidents
Charles A. Colton, 1st president, 1881–1918
Daniel Hodgdon, 2nd president, 1918–1920
Allan Cullimore, 3rd president, 1920–1947
Robert Van Houten (class of 1930), 4th president, 1947–1970
William Hazell, 5th president, 1970–1975
Paul H. Newell Jr, interim appointment, 1975–1976
Charles R. Bergman, Interim appointment, 1977
Saul Fenster, 6th president, 1978–2002
Robert Altenkirch, 7th president, 2003–2011
Joel Bloom, 8th president, 2012–2022
Teik C. Lim, 9th president, 2022-present
Faculty and administrators at NJIT
Ali Akansu, professor of electrical and computer engineering. He is an IEEE fellow.
Julie Ancis, professor of Cyberpsychology. She is an American Psychological Association (APA) Fellow.
David Bader, Distinguished Professor, Department of Computer Science in the Ying Wu College of Computing. He is an IEEE, AAAS, SIAM and ACM Fellow.
Yeheskel Bar-Ness, professor emeritus of electrical and computer engineering. He is an IEEE Fellow.
Denis Blackmore, professor of mathematics.
Kevin Belfield, dean of NJIT's College of Science and Liberal Arts. He was elected a fellow of the American Chemical Society in 2020 and a fellow of the Royal Society of Chemistry in 2022.
Jeannette Brown (deceased), chemist, historian and writer; elected a fellow of the Association for Women in Science in 2007.
Ian Gatley, professor of physics.
Erol Gelenbe, professor of computer science at NJIT, dean at the University of Central Florida and professor at Imperial College London.
Lillian Gilbreth (deceased), professor at NJIT, 1941–43, and first female member of the National Academy of Engineering.
Philip R. Goode, Distinguished research professor of physics. He is a Fellow of the American Physical Society, the American Geophysical Union and the American Astronomical Society.
Craig Gotsman, Dean of the Ying Wu College of Computing; a member of Academia Europaea and of the National Academy of Inventors.
Starr Roxanne Hiltz, professor emerita of information systems, recipient of Electronic Frontier Foundation Pioneer Award (1994), co-author of 'The Network Nation' with her husband Murry Turoff.
Michael Hinchey, professor of computer science at NJIT and professor at the University of Limerick; a member of Academia Europaea.
Moshe Kam, Dean of the Newark College of Engineering, and professor of electrical and computer engineering. 49th President and CEO of IEEE.
Burt Kimmelman, poet and professor of English. He has published nine books of poetry and two book-length literary criticisms.
Gregory Kriegsmann (deceased), professor of mathematics, elected as a Fellow of the Society for Industrial and Applied Mathematics (SIAM) in 1994.
David Kristol, professor emeritus of biomedical engineering.
Louis J. Lanzerotti, researcher and engineer involved in numerous satellite programs including Voyager, Cassini, and Galileo among others. He is a member of the National Academy of Engineering and an IEEE fellow.
Paul Magriel, mathematics professor at NJIT, and leading backgammon player.
Donald Pederson (deceased), prominent electrical engineer who led the development of SPICE, a very widely used program for computer-aided circuit design. A lecturer of engineering at NJIT, Pederson was a member of the National Academy of Engineering, the National Academy of Sciences, and the American Academy of Arts and Sciences. He was awarded several IEEE medals, including the Medal of Honor.
Brandon 'Scoop B' Robinson, professor of humanities.
Omowunmi Sadik, distinguished professor of chemistry and environmental science. She is a fellow of the American Institute for Medical and Biological Engineering, and a fellow of the Royal Society of Chemistry.
Sunil Saigal, distinguished professor of civil engineering; a Fellow of both the American Society of Mechanical Engineers and the American Society of Civil Engineers.
Kamalesh Sirkar, professor of chemical engineering. Sirkar holds 25 US patents.
Murray Turoff, professor emeritus of computer and information systems, recipient of Electronic Frontier Foundation Pioneer Award (1994), co-author of 'The Network Nation' with his wife Starr Roxanne Hiltz.
Guiling (Grace) Wang, Founding Director of the AI Center for Research at NJIT; an IEEE Fellow.
Leslie Kanes Weisman, professor of architecture.
Edward Weston (deceased), prominent member of the founding board of trustees; co-founder of the Weston Electric Light Company; holder of 334 US patents. Awards include the Elliott Cresson Medal (1910), the Franklin Medal (1924), and the IEEE Lamme Medal.
Mengchu Zhou, professor of electrical and computer engineering. He is a fellow of several organizations, including the IEEE, the American Association for the Advancement of Science and the Chinese Association of Automation.
Karl W. Schweizer, professor of history; author/editor of 20 books; fellow of the British Royal Historical Society and the Royal Society of Arts
See also
The Vector – student newspaper
NJIT Capstone Program
2007–08 NJIT Highlanders men's basketball team
Arnold Air Society
Footnotes
References
External links
NJIT Highlanders Athletics website
Universities and colleges in Newark, New Jersey
Engineering universities and colleges in New Jersey
Technological universities in the United States
Architecture schools in New Jersey
Business schools in New Jersey
Business incubators of the United States
Computer science departments in the United States
Research institutes in the United States
Educational institutions established in 1881
1881 establishments in New Jersey
Public universities and colleges in New Jersey |
957781 | https://en.wikipedia.org/wiki/Computer-assisted%20reporting | Computer-assisted reporting | Computer-assisted reporting describes the use of computers to gather and analyze the data necessary to write news stories.
The spread of computers, software and the Internet changed how reporters work. Reporters routinely collect information in databases, analyze public records with spreadsheets and statistical programs, study political and demographic change with geographic information system mapping, conduct interviews by e-mail, and research background for articles on the Web.
Collectively this has become known as computer-assisted reporting, or CAR. It is closely tied to "precision" or analytic journalism, which refer specifically to the use of techniques of the social sciences and other disciplines by journalists.
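As an illustrative sketch of the spreadsheet-and-statistics side of CAR (not drawn from any particular newsroom), a reporter holding a public salary roster as a CSV file might summarize it with a few lines of Python; the file name and column names here are hypothetical:

# Minimal computer-assisted reporting sketch: summarize public pay records.
# Assumes a hypothetical file "salaries.csv" with columns: agency, name, salary.
import pandas as pd

records = pd.read_csv("salaries.csv")

# Median and maximum pay per agency, sorted to surface outliers worth a follow-up call.
by_agency = (
    records.groupby("agency")["salary"]
    .agg(["count", "median", "max"])
    .sort_values("median", ascending=False)
)
print(by_agency.head(10))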
History and development
One researcher argues the "age of computer-assisted reporting" began in 1952, when CBS television used a UNIVAC I computer to analyze returns from the U.S. presidential election. One of the earliest examples came in 1967, after riots in Detroit, when Philip Meyer of the Detroit Free Press used a mainframe computer to show that people who had attended college were as likely to have rioted as high school dropouts.
Since the 1950s, computer-assisted reporting developed to the point that databases became central to the journalist's work by the 1980s. In his book Precision Journalism, the first edition of which was written in 1969, Philip Meyer argues that a journalist must make use of databases and surveys, both computer-assisted. In the 2002 edition, he goes even further and states that "a journalist has to be a database manager".
In 2001, computers had reached a critical mass in American newsrooms in terms of general computer use, online research, non-specialist content searching, and daily frequency of online use, showing that CAR has become ubiquitous in the United States.
Tools and techniques
The techniques expanded from polling and surveying to a new opportunity for journalists: using the computer to analyze huge volumes of government records. The first example of this type may have been Clarence Jones of The Miami Herald, who in 1969 worked with a computer to find patterns in the criminal justice system. Other notable early practitioners included David Burnham of The New York Times, who in 1972 used a computer to expose discrepancies in crime rates reported by the police; Elliot Jaspin of The Providence Journal, who in 1986 matched databases to expose school bus drivers with bad driving histories and criminal records; and Bill Dedman of The Atlanta Journal-Constitution, who received the Pulitzer Prize for his 1988 investigation, The Color of Money, which dealt with mortgage lending discrimination and redlining in middle-income black neighborhoods.
Professional organizations
Since the 1990s, journalism organizations such as the National Institute for Computer-Assisted Reporting (NICAR, a program of Investigative Reporters and Editors) and the Danish International Center for Analytical Reporting (DICAR) have been created solely to promote the use of CAR in newsgathering. Many other organizations, such as the Society of Professional Journalists, the Canadian Association of Journalists and the University of King's College in Halifax, Nova Scotia, offer CAR training or workshops. Journalists have also created mailing lists to share ideas about CAR, including NICAR-L, CARR-L and JAGIS-L.
See also
Automated journalism
Data-driven journalism
Notes
Journalism |
48932601 | https://en.wikipedia.org/wiki/Information%20technology%20in%20Sri%20Lanka | Information technology in Sri Lanka | Information Technology in Sri Lanka refers to business process outsourcing, knowledge process outsourcing, software development, IT services, and IT education in Sri Lanka. Sri Lanka has consistently been ranked among the top 50 outsourcing destinations by AT Kearney, and Colombo has been ranked among the "Top 20 Emerging Cities" by Global Services Magazine. The export revenue of this industry grew from USD 213 million in 2007 to USD 1,089 million in 2019.
History
To develop IT in Sri Lanka, the Computer Society of Sri Lanka was founded in 1976. Sri Lanka's IT and KPO/BPO industry has a short history, beginning around 2000. The IT/BPO sector has been identified as a priority sector for economic development in the country.
Business process outsourcing
Sri Lanka is an offshore development center and joint-venture hub for several Fortune 500 companies from North America, the UK, Australia, Sweden, Norway and Japan. Well-known customers of the Sri Lankan BPO industry include Google, J.P. Morgan & Co, Microsoft, Emirates, Infor and Qatar Airways.
Sri Lanka's BPO costs are 30% lower than those of other offshore destinations.
Sri Lanka also won the Outsourcing Destination of the Year award from the National Outsourcing Association.
Rankings
Government Bodies
The prominent government body related to IT in Sri Lanka is the Ministry of Technology. In addition, the Ministries of Education and Skills Development work on developing IT education, while the Ministry of Industry and Commerce handles industry-level activities.
Revenue Statistics
Legislation
IT in Sri Lanka is governed under the Information and Communication Technology Act No. 27 of 2003.
Other Related Acts
Electronic Transactions Act, No. 19 of 2006
National Digital Policy for Sri Lanka
Data Protection Bill
Telecommunication Levy Act, No. 21 of 2011
Telecommunication Levy (Amendment) Act, No. 8 of 2013
Agencies
The Information and Communication Technology Agency (Sri Lanka)
Sri Lanka Computer Emergency Readiness Team (SLCERT)
Telecommunications Regulatory Commission of Sri Lanka (TRCSL)
Organisations
The Sri Lanka Association of Software and Service Companies (SLASSCOM), similar to India's NASSCOM, is another agency working on the development of business, education and employment.
The Federation of Information Technology Industry Sri Lanka (FITIS)
The ICT Industry Skills Council (ICTISC)
Recent development
Many global IT services companies have established operations in Sri Lanka, including HSBC, IFS, Intel, Motorola, WNS, RR Donnelley, Virtusa, Pearson and Accenture.
IT parks
Sri Lanka has a few government-owned and privately managed IT parks.
Orion City IT Park, established in 2009, is a privately owned IT park situated in the Dematagoda area of Colombo. The park is spread over 16 acres and currently has 800,000 square feet of developed space, housing Virtusa, IFS AB, WNS Global Services and several other IT and non-IT companies.
In 2011, a full-featured IT park was proposed to be built at Hambantota as a government project; the project was approved by the Cabinet in 2012.
TRACE Expert City is a similar park, developed by the Urban Development Authority in cooperation with the Ministry of Defense, and situated in Maradana.
In February 2016, India's External Affairs Minister Sushma Swaraj announced that India would offer an IT park to Sri Lanka, in view of the IT industry's importance to the country.
Employment
According to the National ICT Workforce Survey 2013, positive domestic developments and the gradual recovery of the global economy created a conducive environment for growth of the industry's workforce, a trend projected to continue. The overall workforce had grown to 75,107 by 2013, and 63% of the workforce held graduate or postgraduate qualifications. The IT industry has become one of Sri Lanka's largest sources of employment, creating thousands of IT job vacancies. Notably, many foreign IT companies open production offices in Sri Lanka due to the wide availability of high-quality skilled resources and relatively low operational costs.
IT education
Secondary Education
In the national curriculum, the first computer-related subject taught at public schools is Information and Communication Technology, an elective subject for the GCE Ordinary Level in Sri Lanka. For the GCE Advanced Level in Sri Lanka, a compulsory subject and examination called General Information Technology was introduced in view of the need for IT literacy among all students. Under the technology stream introduced for the GCE Advanced Level in recent years, a new main IT subject, Information and Communication Technology, was added.
Higher Education
With the rapid development of the IT industry and job demand in the 1990s, steps were taken by both the government and the private sector to improve IT education in the country.
Criticism
CEPA Controversy
In January 2016, the Sri Lankan government announced that the Indo-Sri Lankan Comprehensive Economic Partnership Agreement between India and Sri Lanka would be finalized. It is feared that the agreement would open the Sri Lankan IT job market to Indian workers, causing unemployment among Sri Lankans.
Future
Sri Lanka's IT industry's goal is to achieve USD 5 billion in exports by 2022 while creating 200,000 jobs and uplifting 1,000 tech start-ups.
In 2012, Sri Lanka's computer literacy rate was 38%.
See also
Economy of Sri Lanka
Knowledge economy
Outsourcing
Sri Lanka Institute of Information Technology
References |
448557 | https://en.wikipedia.org/wiki/Tokyo%20subway%20sarin%20attack | Tokyo subway sarin attack | The Tokyo subway sarin attack was an act of domestic terrorism perpetrated on 20 March 1995, in Tokyo, Japan, by members of the cult movement Aum Shinrikyo. In five coordinated attacks, the perpetrators released sarin on three lines of the Tokyo Metro (then Teito Rapid Transit Authority) during rush hour, killing 14 people, severely injuring 50 (some of whom later died), and causing temporary vision problems for nearly 1,000 others. The attack was directed against trains passing through Kasumigaseki and Nagatachō, where the Diet (Japanese parliament) is headquartered in Tokyo.
The group, led by Shoko Asahara, had already carried out several assassinations and terrorist attacks using sarin, including the Matsumoto sarin attack nine months earlier. They had also produced other nerve agents, including VX, attempted to produce botulinum toxin, and perpetrated several failed acts of bioterrorism. Asahara had been made aware of a police raid scheduled for 22 March and planned the Tokyo subway attack in order to hinder police investigations into the cult and perhaps spark the apocalypse the group believed in. He also wanted to start a Third World War.
In the raid following the attack, police arrested many senior members of the cult. Police activity continued throughout the summer, and over 200 members were arrested, including Asahara. Thirteen of the senior Aum management, including Asahara himself, were sentenced to death and later executed; many others were given prison sentences of up to life. The attack remains the deadliest terrorist incident in Japan as defined by modern standards.
Background
Aum Shinrikyo
Origins
Aum Shinrikyo was founded in 1984 as a yoga and meditation class by pharmacist Chizuo Matsumoto. The group believed in a doctrine revolving around a syncretic mixture of Indian and Tibetan Buddhism, as well as Christian and Hindu beliefs, especially relating to the Hindu god Shiva. They believed Armageddon to be inevitable in the form of a global war involving the United States and Japan; that non-members were doomed to eternal hell, but could be saved if killed by cult members; and that only members of the cult would survive the apocalypse, afterwards building the Kingdom of Shambhala. In 1987, the group rebranded as Aum Shinrikyo and established a New York branch; the following year, it opened a headquarters in Fujinomiya. Around this time, the mental health of Matsumoto (now going by the name Shoko Asahara) deteriorated – he developed health anxiety and expressed suicidal views.
In August 1989, the group was granted official religious corporation status by the Tokyo Metropolitan Government, giving it privileges such as tax breaks and freedom from governmental oversight. This recognition caused dramatic growth, including an increase in net worth from less than 430 million yen to over 100 billion yen (approximately $5.6m to $1.1b in 2017 dollars) over the next six years, as well as an increase in membership from around 20 members to around 20,000 by 1992.
The drastically increasing popularity of the group also saw an increase in violent behaviour from its members. In the year preceding its recognition by the Tokyo government, a member of the cult – Terayuki Majima – had accidentally drowned during a ritual; his body was cremated, with the remaining bones ground up and scattered over a nearby lake. Majima's friend – a fellow member of the group – was murdered by members acting under Asahara's orders, after he became disillusioned and tried to leave.
Three months after recognition, six Aum Shinrikyo members were involved in the murder of a lawyer, Tsutsumi Sakamoto, who had been working on a class-action lawsuit against the cult, as well as his wife and 1-year-old son. Asahara had previously advanced the concept of poa: a doctrine which stated not only that people with bad karma were doomed to an eternity in hell (unless they were 'rebirthed' through intervention by 'enlightened people'), but that it was acceptable to kill those at risk of bad karma to save them from hell.
Early attempts to seize power
Asahara had experienced delusions of grandeur as early as 1985 – while meditating, he claimed that the god Shiva had been revealed to him and had appointed him 'Abiraketsu no Mikoto' ('the god of light who leads the armies of the gods'), who was to build the Kingdom of Shambhala, a utopian society made up of those who had developed 'psychic powers'.
In 1990, Asahara announced that the group would field 25 candidates in that year's election to the Japanese Diet. Despite showing confidence in its ability to gain seats in the Diet, the party received only 1,783 votes; the failure to achieve power legitimately, blamed by Asahara on an external conspiracy propagated by "Freemasons and Jews", caused him to order the cult to produce botulinum toxin and phosgene in order to overthrow the Japanese government. As members became disillusioned with the group (following contact with the outside world made during the election campaign) and defected, an attitude took hold among the remaining members that 'the unenlightened' did not deserve salvation.
Attempts to stockpile botulinum toxin proved unsuccessful. Seiichi Endo – one of the members tasked with acquiring botulinum toxin – collected soil samples from the Ishikari River and attempted to produce the toxin using three large fermenters. In total, around 50 batches of a crude broth were produced; however, the cult did not attempt to purify the broth, which would mostly have consisted of bacterial cultivation media. One member even fell into one of the fermenters and nearly drowned, but otherwise suffered no ill effects.
Despite mouse bioassays run by Tomomasu Nakagawa (another cult member assisting Endo) showing no toxic effects, in April 1990 the crude broth was loaded into three trucks equipped with custom spray devices, which were to spray it at two US naval bases, Narita airport, the Diet building, the Imperial Palace, and the headquarters of a rival religious group.
Simultaneously, Asahara announced that the coming apocalyptic war could not save people outside of the cult, and that members should attend a three-day seminar in Ishigakijima in order to seek shelter. The spraying attacks failed to cause any ill effects among the population, but 1,270 people attended the seminar, many of them becoming devout monks.
With the intention of building a compound incorporating facilities such as a phosgene plant (as well as facilities to manufacture VX and chlorine gas), Aum Shinrikyo used 14 dummy companies to purchase land in Namino (now part of Aso city) and began construction. However, public attitudes towards the cult had become very negative amid suspicions of illegal activity, and worsened once it was revealed to the surrounding community that the group had acted illegally. A police investigation in October resulted in the arrests of several Aum members, causing Asahara to fear a police raid – he hence ordered the destruction of all biological and chemical weapon stockpiles, and for the cult to focus only on legitimate, non-violent strategies.
Restarting violent activity
After the destruction of the illegal weapon stockpiles, the cult relied on 'mainstream' methods to attract other members – this included frequent television appearances by Asahara, as well as the setting up of the 'Aum Shinrikyo broadcasting' radio station in Russia in April 1992. However, starting in late 1992, Asahara's mental health deteriorated further – his suicidal feelings intensified, he began to complain of hallucinations and paranoia, and he withdrew from public appearances (except on Aum Shinrikyo Broadcasting), claiming society was preventing him from fulfilling his destiny as Christ. The concurrent replacement of the previously predominantly female group of top advisers with a more aggressive male group led to the gradual restarting of the violent campaign to seize power. At some point in 1992, Asahara published Declaring Myself the Christ, in which he identified with the "Lamb of God".
He outlined a doomsday prophecy, which included a Third World War, and described a final conflict culminating in a nuclear Armageddon, borrowing the term from the Book of Revelation. His purported mission was to take upon himself the sins of the world, and he claimed he could transfer spiritual power to his followers and take away their sins.
Asahara claimed to be able to see dark conspiracies everywhere promulgated by Jews, Freemasons, the Dutch, the British Royal Family, and rival Japanese religions.
The president of the Okamura ironworks, an industrial plant facing debt troubles, was a member of the cult who consulted with Asahara about a takeover strategy. In September 1992, Asahara was made president of the ironworks, after which 90% of the staff were dismissed or left due to the 'Aum-ification' of the plant; these workers were replaced with other members of the group. Over the course of 1993, the cult smuggled in AK-74 rifles and ammunition, and began to prototype rifles based on the AK-74 design.
Under the oversight of Endo, the biological weapons division of the cult resumed work – this time pursuing not only botulinum toxin but also anthrax, using improved drum fermenters at their Kameido facility.
Again, the group did not attempt to purify the resulting product, which resembled a foul-smelling brown slurry. Further attacks on individuals were attempted in 1993 and 1994 using botulinum – first using a homemade sprayer mounted to a car, and then by mixing it with juice – but neither had any effect. Five days before the sarin attack on the Tokyo subway, botulinum was dispersed in a failed attack on Kasumigaseki station – a dissident member had replaced the active compound with water, and in any case the cult had never acquired an active strain of C. botulinum.
Similarly, the Aum anthrax program was a failure – despite having access to a sympathiser outside of the group who could acquire anthrax spores, the strain received by the group was a Sterne vaccine strain incapable of causing harm. It was unclear why, despite having this knowledge, the group executed two attacks in 1993 using this vaccine strain – once from the roof of the headquarters building in Kameido, and once from a truck with a custom spraying device, aimed at the Diet building, Imperial Palace, and Tokyo Tower. Both attacks caused no effects other than a foul smell, reported by passers-by.
In the summer of 1993, Endo attempted a different strategy – by desiccating the slurry, the B. anthracis spores could be spread as a powder, rather than through spraying – this was achieved with a crude hot air dryer. Nakagawa has claimed that an attempt was made to spread this powder through the centre of Tokyo, but this, also, had no effects. The total failure of the biological weapons program had, by mid-1993, convinced Asahara to focus on the chemical weapons division under Masami Tsuchiya. While Endo would be promoted within the cult to 'health minister' in 1994 – reflecting his seniority – no further attacks using biological weapons were attempted.
Chemical weapon production
Tsuchiya had established a small laboratory in the Kamikuishiki complex in November 1992. After initial research (done at Tsukuba University, where he had previously studied chemistry), he suggested to Hideo Murai – a senior Aum advisor who had tasked him with researching chemical weapons in November 1992, out of fear that the cult would soon be attacked with them – that the most cost-effective substance to synthesize would be sarin.
He was subsequently ordered to produce a small amount; within a month, the necessary equipment had been ordered and installed, and a first batch of sarin had been produced via synthetic procedures derived from the five-step DHMP process as originally described by IG Farben in 1938, and as used by the Allies after World War II.
After this small quantity had been produced, Murai ordered Tsuchiya to scale up production massively; when Tsuchiya protested, noting that this level of scaling was not feasible in a research laboratory, a chemical plant was ordered to be built alongside the biological production facility in the Fujigamine district of Kamikuishiki, to be labeled Satyan-7 ('Truth'). The specialized equipment and the substantial quantities of chemicals needed to run the facility were purchased through shell companies under Hasegawa Chemical, a chemical company already owned by Aum. At the same time, in September 1993, Asahara and 24 other cult members traveled from Tokyo to Perth, Australia, bringing generators, tools, protective equipment (including gas masks and respirators), and chemicals to make sarin.
After repurchasing chemicals confiscated by customs, the group chartered aircraft from Perth to Banjawarn Station, where they searched for uranium deposits to make nuclear weapons and may have tested the efficacy of the synthesized sarin on animals. They remained in Australia for eight days and attempted to return in October of the same year, but were denied visas; Banjawarn Station was sold a year later.
The Satyan-7 facility was declared ready for occupancy by September 1993, designed to produce sarin in bulk and equipped with large mixing flasks within protective hoods; it eventually employed 100 Aum members, and the UN would later estimate the value of the building and its contents at $30 million.
Despite the safety features and often state-of-the-art equipment and practices, the operation of the facility was very unsafe – one analyst would later describe the cult as having a "high degree of book learning, but virtually nothing in the way of technical skill."
When the facility developed leaks, buckets were used to contain spills; several technicians inhaled fumes on repeated occasions, developing 'symptoms ranging from nosebleeds to convulsions', and toxic chemicals leaked from the site and into the soil. Citizens lodged complaints about foul smells several times, with the cult claiming that the US Army had assaulted the complex with poison gas. An accident at the plant in November 1994 would eventually force the suspension of the production of chemical agents.
By December, Tsuchiya had accumulated a substantial stock of sarin; from this, two separate assassination attempts were made on Daisaku Ikeda, leader of Soka Gakkai (a rival Japanese religious movement), in mid-1994. The first attack involved a truck with a spraying system, as previously used – the system malfunctioned, spraying sarin into the truck itself and mildly poisoning the operators. The second attack utilized a truck modified to include an evaporation system based on heating sarin over a gas stove fire; despite prior warnings from cult member Kazuyoshi Takizawa, the truck caught fire during the dissemination, severely poisoning the driver Tomomitsu Niimi and causing both Niimi and Murai – the operators – to flee. Niimi received an injection of atropine and pralidoxime iodide, saving his life.
Despite the failure of the attack, the members of Aum were convinced of sarin's efficacy, prompting Asahara to put Takizawa in charge of operations at Satyan-7; Tsuchiya was assigned to several other projects and would go on to manufacture several psychoactive drugs – LSD, PCP, methamphetamine, mescaline, and phenobarbital – to be used in cult activities and brainwashing; he would also manufacture small amounts of phosgene, VX, soman, cyclosarin, and gunpowder. These compounds would be used in several attacks and assassination attempts:
Matsumoto sarin attack
In June 1994, Asahara ordered the cult to assassinate the judges involved in deciding a commercial land dispute involving the cult, due to his belief that they would not deliver a favourable judgement. About a week later, on 27 June, a quantity of sarin was loaded onto a truck equipped with a fan, heater, and pump; six members, pre-administered with sarin antidotes and wearing improvised gas masks, began dispersing the sarin at around 10:40 pm, spraying for around 10–20 minutes.
As it was a warm evening, many residents had left their windows open while they slept; the first emergency call was made at 11:09 pm. Within an hour, a mass disaster caused by an unknown toxic gas had been declared. Fifty-eight people were hospitalised, of whom seven died in the immediate aftermath and an eighth 14 years later; an additional 253 people sought medical care at outpatient clinics.
Investigations after the Matsumoto attack were generally inconclusive, with the primary suspect being Yoshiyuki Kōno, whose wife had been left comatose by the attack. Blame would not be clearly attributed to Aum Shinrikyo until after the subway attack, despite tipoffs – in September 1994, two anonymous letters were sent to major media outlets in Japan – the first asserting that the group were responsible for the attack, and the second claiming that Matsumoto was an open-air 'experiment of sorts', noting that the results would have been much worse if sarin had been released indoors, such as in 'a crowded subway'.
Following an accident at Satyan-7 the next month (and complaints from the surrounding communities), a police investigation revealed methylphosphonic acid and isopropyl methylphosphonic acid – the former being a degradation product of sarin, and the latter a definitive signature of both sarin production and of failures in production. However, there was no law at the time prohibiting the production of nerve agents. The evidence was left unacted upon, but was leaked to the Yomiuri Shimbun in January 1995, alerting Asahara and the cult, and causing Nakagawa and Endo to begin destroying and/or hiding all nerve agents and biological weapons, a process which lasted until the end of February.
Preparation for the attack
Fingerprint evidence linking an Aum member to an earlier kidnapping, in addition to the sarin-contaminated soil samples, caused the police to set a raid date of 22 March. Asahara was made aware of the impending raid by two cult members inside the Self-Defense Forces, and ordered an attack on Tokyo subway lines close to the Metropolitan Police Department on the morning of 20 March – possibly as a desperate attempt to initiate the apocalypse.
To aid in this, Tsuchiya was ordered by Endo to produce sarin again on 18 March – due to a lack of the normal precursors as a result of the destruction process, the sarin produced was of lower quality and appeared brown instead of the normal colourless liquid. The chemical was manufactured and stored in a large container, from which it was decanted into plastic bags. Later forensic analysis found that the sarin utilised in the attack was roughly half as pure as that used in the Matsumoto attack.
Attack
On Monday, 20 March 1995, five members of Aum Shinrikyo launched a chemical attack on the Tokyo subway (on lines that are part of the present-day Tokyo Metro), one of the world's busiest commuter transport systems, at the peak of the morning rush hour. The chemical agent used, liquid sarin, was contained in plastic bags which each team then wrapped in newspaper. Each perpetrator carried two packets of sarin, except Yasuo Hayashi, who carried three. Aum originally planned to spread the sarin as an aerosol but did not follow through with it. Sarin is lethal in very small quantities; however, dispersal issues dramatically reduced its effectiveness.
Carrying their packets of sarin and umbrellas with sharpened tips, the perpetrators boarded their appointed trains. At prearranged stations, the sarin packets were dropped and punctured several times with the sharpened tip of the umbrella. Each perpetrator then got off the train and exited the station to meet his accomplice with a car. Leaving the punctured packets on the floor allowed the sarin to leak out into the train car and stations. This sarin affected passengers, subway workers, and those who came into contact with them. Sarin is the most volatile of the nerve agents, which means that it can quickly and easily evaporate from a liquid into a vapor and spread into the environment. People can be exposed to the vapor even if they do not come in contact with the liquid form of sarin. Because it evaporates so quickly, sarin presents an immediate but short-lived threat.
Chiyoda Line
The team of Ikuo Hayashi and Tomomitsu Niimi was assigned to drop and puncture two sarin packets on the Chiyoda Line. Hayashi was the perpetrator and Niimi his getaway driver. On the way to Sendagi Station, Niimi purchased newspapers in which to wrap the sarin packets: the Japan Communist Party's Akahata and the Sōka Gakkai's Seikyo Shimbun.
Hayashi eventually chose to use Akahata. Wearing a surgical mask of the kind commonly worn by the Japanese during cold and flu season, Hayashi boarded the first car of the southwest-bound 07:48 Chiyoda Line train number A725K. As the train approached Shin-Ochanomizu Station, in the central business district of Chiyoda, he punctured one of his two bags of sarin, leaving the other untouched, and exited the train at Shin-Ochanomizu.
The train proceeded down the line with the punctured bag of sarin leaking until four stops later, at Kasumigaseki Station. There, the bags were removed and eventually disposed of by station attendants, two of whom died. The train continued on to the next station, where it was completely stopped, evacuated and cleaned.
Marunouchi Line
Ogikubo-bound
Two men, Ken'ichi Hirose and Koichi Kitamura, were assigned to release two sarin packets on the westbound Marunouchi Line destined for Ogikubo Station. The pair left Aum headquarters in Shibuya at 6:00 am and drove to Yotsuya Station. There Hirose boarded a westbound Marunouchi Line train, then changed to a northbound JR East Saikyō Line train at Shinjuku Station and got off at Ikebukuro Station. He then bought a sports tabloid to wrap the sarin packets in and boarded the second car of Marunouchi Line train A777.
As he was about to release the sarin, Hirose believed the loud noises caused by the newspaper-wrapped packets had caught the attention of a schoolgirl. To avoid further suspicion, he got off the train at either Myogadani or Korakuen Station and moved to the third car instead of the second.
As the train approached Ochanomizu Station, Hirose dropped the newspapers to the floor, repeated an Aum mantra and punctured both sarin packets with so much force that he bent the tip of his sharpened umbrella. Both packets were successfully broken, and all of the sarin was released onto the floor of the train. Hirose then departed the train at Ochanomizu and left in Kitamura's car, which was waiting outside the station. Hirose's clumsy release of the sarin resulted in him accidentally poisoning himself, but he was able to administer an antidote stored in Kitamura's car.
At Nakano-sakaue Station, 14 stops later, two severely injured passengers were carried out of the train car, while station attendant Sumio Nishimura removed the sarin packets (one of these two passengers was the only fatality of this attack). The train continued with sarin still on the floor of the third car. Five stops later, at 8:38 am, the train reached Ogikubo Station, the end of the Marunouchi Line, with passengers continuing to board all the while. The train then continued eastbound until it was finally taken out of service at Shin-Kōenji Station two stops later. The entire ordeal resulted in one passenger's death, with 358 others seriously injured.
Ikebukuro-bound
Masato Yokoyama and his driver Kiyotaka Tonozaki were assigned to release sarin on the Ikebukuro-bound Marunouchi Line. On the way to Shinjuku Station, Tonozaki stopped to allow Yokoyama to buy a copy of Nihon Keizai Shimbun, to wrap the two sarin packets. When they arrived at the station, Yokoyama put on a wig and fake glasses and boarded the fifth car of the Ikebukuro-bound 07:39 Marunouchi Line train number B801. As the train approached Yotsuya Station, Yokoyama began poking at the sarin packets. When the train reached the next station, he fled the scene with Tonozaki, leaving the sarin packets on the train car. The packets were not fully punctured. During his drop, Yokoyama left one packet fully intact, while the other packet was only punctured once (and with a small hole), resulting in the sarin being released relatively slowly.
The train reached the end of the line, Ikebukuro, at 8:30 am where it would head back in the opposite direction. Before it departed the train was evacuated and searched, but the searchers failed to discover the sarin packets. The train departed Ikebukuro Station at 8:32 am as the Shinjuku-bound A801. Passengers soon became ill and alerted station attendants of the sarin-soaked newspapers at Kōrakuen Station. One station later, at Hongō-sanchōme, staff removed the sarin packets and mopped the floor, but the train continued on to Shinjuku. After arriving at 9:09 am, the train once again began to make its way back to Ikebukuro as the B901. The train was finally put out of service at Kokkai-gijidō-mae Station in Chiyoda at 9:27 am, one hour and forty minutes after Yokoyama punctured the sarin packet. The attack resulted in no fatalities, but over 200 people were left in serious condition.
Hibiya Line
Tōbu Dōbutsu Kōen-bound
Toru Toyoda and his driver Katsuya Takahashi were assigned to release sarin on the northeast-bound Hibiya Line.
The pair, with Takahashi driving, left Aum headquarters in Shibuya at 6:30 am. After purchasing a copy of Hochi Shimbun and wrapping his two sarin packets, Toyoda arrived at Naka-Meguro Station where he boarded the first car of northeast-bound 07:59 Hibiya Line train number B711T. Sitting close to the door, he set the sarin packets on the floor. When the train arrived at the next station, Ebisu, Toyoda punctured both packets and got off the train. He was on the train for a total of two minutes, by far the quickest sarin drop out of the five attacks that day.
Two stops later, at Roppongi Station, passengers in the train's first car began to feel the effects of the sarin and began to open the windows. By Kamiyacho Station, the next stop, the passengers in the car had begun panicking. The first car was evacuated and several passengers were immediately taken to a hospital. Still, with the first car empty, the train continued down the line for one more stop until it was completely evacuated at Kasumigaseki Station. This attack killed one person and seriously injured 532 others.
Naka-Meguro-bound
Yasuo Hayashi and Shigeo Sugimoto were the team assigned to drop sarin on the southwest-bound Hibiya Line departing Kita-Senju Station for Naka-Meguro Station. Unlike the rest of the attackers, Hayashi carried three sarin packets onto the train instead of two. Prior to the attack, Hayashi asked to carry a flawed leftover packet in addition to the two others in an apparent bid to allay suspicions and prove his loyalty to the group.
After Sugimoto escorted him to Ueno Station, Hayashi boarded the third car of southwest-bound 07:43 Hibiya Line train number A720S and dropped his sarin packets to the floor. Two stops later, at Akihabara Station, he punctured two of the three packets, left the train, and arrived back at Aum headquarters with Sugimoto by 8:30 am. Hayashi made the most punctures of any of the perpetrators. By the next stop, passengers in the third car began to feel the effects of the sarin. Noticing the large, liquid-soaked package on the floor and assuming it was the culprit, one passenger kicked the sarin packets out of the train and onto Kodenmachō Station's subway platform. Four people in the station died as a result.
A puddle of sarin remained on the floor of the passenger car as the train continued to the next station. At 8:10 am, after the train pulled out of Hatchōbori Station, a passenger in the third car pressed the emergency stop button. The train was in a tunnel at the time, and was forced to proceed to Tsukiji Station, where passengers stumbled out and collapsed on the station's platform and the train was taken out of service.
The attack was originally believed to be an explosion and was thus labeled as such in media reports. Eventually, station attendants realized that the attack was not an explosion, but rather a chemical attack. At 8:35 am, the Hibiya Line was completely shut down and all commuters were evacuated. Between the five stations affected in this attack, 10 people died and 275 were seriously injured.
Main perpetrators
Ten men were responsible for carrying out the attacks: five released the sarin, while the other five served as getaway drivers. The individual teams are described below.
Naoko Kikuchi, who was involved in producing the sarin gas, was arrested after a tipoff in June 2012.
Kikuchi was acquitted in 2015 on the grounds that she was unaware of the plot.
Katsuya Takahashi was arrested soon afterward. He was later convicted and given a life sentence.
Ikuo Hayashi
Prior to joining Aum, Hayashi was a senior medical doctor with "an active 'front-line' track record" at the Ministry of Science and Technology. The son of a doctor, Hayashi graduated from Keio University. He was a heart and artery specialist at Keio Hospital, which he left to become head of Circulatory Medicine at the National Sanatorium Hospital in Tokai, Ibaraki (north of Tokyo).
In 1990, he resigned his job and left his family to join Aum's monastic order Sangha, where he became one of Asahara's favorites and was appointed the group's Minister of Healing. In that role he was responsible for administering a variety of "treatments" to Aum members, including sodium pentothal and electric shocks to those whose loyalty was suspect; these treatments resulted in several deaths.
Hayashi later gave Japanese police investigators detailed information about the sarin attacks and Aum's activities after the Tokyo subway attack; his cooperation with the authorities resulted in numerous arrests and convictions, and he was given a life sentence instead of the death penalty. Tomomitsu Niimi, who was his getaway driver, was sentenced to death due to his involvement in other crimes perpetrated by Aum members. He was executed at the Osaka Detention House on 6 July 2018 together with six others of those principally involved.
Kenichi Hirose
Hirose was thirty years old at the time of the attacks. Holder of a postgraduate degree in physics from Waseda University, Hirose became an important member of the group's Chemical Brigade in their Ministry of Science and Technology. He was also involved in the group's Automatic Light Weapon Development scheme.
Hirose teamed up with getaway driver Kōichi Kitamura. After releasing the sarin, Hirose himself showed symptoms of sarin poisoning. He was able to inject himself with the antidote (atropine sulphate) and was rushed to the Aum-affiliated Shinrikyo Hospital in Nakano for treatment. Medical personnel at the hospital had not been given prior notice of the attack and were consequently unaware of what treatment Hirose needed. When Kitamura realized that he had driven Hirose to the hospital in vain, he drove instead to Aum's headquarters in Shibuya, where Ikuo Hayashi gave Hirose first aid.
Hirose was later sentenced to death for his role in the attack. His appeal against his death sentence was rejected by the Tokyo High Court on 28 July 2003 and the sentence was upheld by the Supreme Court of Japan on 6 November 2009. Hirose was executed at the Tokyo Detention House on 26 July 2018, along with five other cult members.
Kitamura, Hirose's getaway driver, was sentenced to life imprisonment.
Toru Toyoda
Toyoda was twenty-seven at the time of the attack. He studied Applied Physics at University of Tokyo's Science Department and graduated with honors. He also held a master's degree, and was about to begin doctoral studies when he joined Aum, where he belonged to the Chemical Brigade in their Ministry of Science and Technology.
Toyoda was sentenced to death. The appeal against his death sentence was rejected by the Tokyo High Court on July 28, 2003, and was upheld by the Supreme Court on November 6, 2009. Toyoda was executed at the Tokyo Detention House on 26 July 2018.
Katsuya Takahashi was Toru Toyoda's getaway driver. Takahashi was arrested in June 2012. In 2015, Takahashi was convicted for his role in the attack and was sentenced to life in prison. His appeal was rejected by the Tokyo High Court in September 2016.
Masato Yokoyama
Yokoyama was thirty-one at the time of the attack. He was a graduate in Applied Physics from Tokai University's Engineering Department. He worked for an electronics firm in Gunma Prefecture for three years after graduation before leaving to join Aum, where he became Undersecretary at the group's Ministry of Science and Technology. He was also involved in their Automatic Light Weapons Manufacturing scheme. Yokoyama was sentenced to death in 1999. His appeals were rejected, and he was executed at the Nagoya Detention House on 26 July 2018.
Kiyotaka Tonozaki, a high school graduate who joined the group in 1987, was a member of the group's Ministry of Construction, and served as Yokoyama's getaway driver. Tonozaki was sentenced to life imprisonment.
Yasuo Hayashi
Yasuo Hayashi was thirty-seven years old at the time of the attacks, and was the oldest person at the group's Ministry of Science and Technology. He studied Artificial Intelligence at Kogakuin University; after graduation he traveled to India where he studied yoga. He then became an Aum member, taking vows in 1988 and rising to the number three position in the group's Ministry of Science and Technology.
Asahara had at one time suspected Hayashi of being a spy. The extra packet of sarin he carried was part of a "ritual character test" set up by Asahara to prove his allegiance, according to the prosecution.
Hayashi went on the run after the attacks; he was arrested twenty-one months later, one thousand miles from Tokyo on Ishigaki Island. He was later sentenced to death. His appeal was rejected by the Tokyo High Court in 2008. Hayashi was executed at the Sendai Detention House on 26 July 2018.
Hayashi's getaway driver was Shigeo Sugimoto, whose lawyers argued he played only a minor role in the attack, but the argument was rejected and he was sentenced to life in prison.
Kōichi Kitamura
Kōichi Kitamura is a Japanese convicted domestic terrorist and member of the doomsday cult Aum Shinrikyo. In 1995, he served as getaway driver for one of the perpetrators of the Tokyo subway sarin attack, Kenichi Hirose. He was 27 years old when the attack was committed. He is currently serving a life sentence for the attack and other offenses.
Crimes and conviction
Kitamura is a native of Aichi Prefecture and joined Aum Shinrikyo in the late 1980s after reading a book written by leader Shoko Asahara.
During the Tokyo subway sarin attack he drove Kenichi Hirose to the Tokyo Metro Marunouchi Line, where Hirose boarded a train and punctured two bags of liquid sarin, causing the death of one person. The attack as a whole killed 13 people at the time and injured more than 5,300. Between March and April 1995, Kitamura also helped cult fugitive Takeshi Matsumoto, wanted for kidnapping, to hide from justice.
He remained a fugitive until November 1996, when he was arrested in Tokorozawa, Saitama. At his first trial in May 1997 he admitted to the crimes and reportedly renounced the cult, although he maintained the belief that Asahara had superpowers, and his lawyer said that he was still under the spell of the cult.
He was sentenced to life imprisonment in November 1999, with the presiding judge chastising him for playing an "indispensable role" in the attack and highlighting the self-righteous motive behind his crimes.
After the verdict was read, his lawyer said that Kitamura was still under Asahara's spell, which made him a victim of the cult as well. He also said that the court had dismissed this point, and added that he would discuss with Kitamura whether to appeal to the higher courts.
In January 2002, the Tokyo High Court upheld his sentence, rejecting his argument that it was "too harsh" given his role in the attack and citing his lack of remorse as grounds for upholding the sentence.
Aftermath
Following the attack, Japanese police raided Aum Shinrikyo facilities and arrested members. The cult's headquarters in Tokyo was raided by police on May 16, 1995. Due to fears that armed cult members might resist the raid, the 1st Airborne Brigade of the Japan Ground Self-Defense Force was stationed nearby to provide support if needed.
Injuries and deaths
On the day of the attack, ambulances transported 688 patients, and nearly five thousand people reached hospitals by other means. In total, 278 hospitals saw 5,510 patients – 17 of whom were deemed critical, 37 severe, and 984 moderately ill with vision problems. Most of those reporting to hospitals were the "worried well", who had to be distinguished from those who were genuinely ill. Under the categorization used, a moderate casualty had only miosis (excessive constriction of the pupil); a severe casualty was short of breath or had muscular twitching or gastrointestinal problems as well as miosis; and a severe or critical casualty required care in an intensive care unit.
Witnesses have said that subway entrances resembled battlefields. Several of those affected by sarin went to work in spite of their symptoms, not realizing that they had been exposed to sarin. Most of the victims sought medical treatment as the symptoms worsened and as they learned of the actual circumstances of the attacks via news broadcasts.
By mid-afternoon, the mildly affected victims had recovered from vision problems and were released from hospital. Most of the remaining patients were well enough to go home the following day, and within a week only a few critical patients remained in hospital. The death toll on the day of the attack was eight, with four more dying subsequently. Hospitals only became aware that sarin was involved after about two hours, and then started administering 2-PAM and atropine.
Several of those affected were exposed to sarin only by helping those who had been directly exposed. Among these were passengers on other trains, subway workers and health care workers.
A 2008 law enacted by the Japanese government authorized payments of damages to victims of the gas attack, because the attack was directed at the government of Japan. As of December 2009, 5,259 people have applied for benefits under the law. Of those, 47 out of 70 have been certified as disabled and 1,077 of 1,163 applications for serious injuries or illnesses have been certified.
Surveys of the victims in 1998 and 2001 showed that many were still suffering from post-traumatic stress disorder. In one survey, twenty percent of 837 respondents complained that they felt insecure when on a train, while ten percent answered that they tried to avoid any nerve-attack related news. Over sixty percent reported chronic eyestrain and said their vision had worsened.
Until 2008, 12 fatalities resulting from the attack had been officially acknowledged. However, in 2008 a survey of victims was conducted by the prefectural police department for the purpose of allocating compensation. This survey determined that a man who had died the day after the attack had also been killed by sarin inhalation, thereby increasing the officially recognised death toll to 13. On 10 March 2020, a further victim died, who had been bedridden for the 25 years since the attack. 56-year-old Sachiko Asakawa's cause of death was determined to be hypoxic encephalopathy caused by sarin poisoning, making her the attack's 14th fatality.
Emergency services
Emergency services, including police, fire and ambulance services, were criticised for their handling of the attack and the injured, as were the media (some of whom, though present at subway entrances and filming the injured, hesitated when asked to transport victims to the hospital) and the Subway Authority, which failed to halt several of the trains despite reports of passenger injury. Health services including hospitals and health staff were also criticised: one hospital refused to admit a victim for almost an hour, and many hospitals turned victims away.
Sarin poisoning was not well understood at the time, and many hospitals received information on diagnosis and treatment only because a professor at Shinshu University's school of medicine happened to see reports on television. Dr. Nobuo Yanagisawa had experience with treating sarin poisoning after the Matsumoto incident; he recognized the symptoms, had information on diagnosis and treatment collected, and led a team who sent the information to hospitals throughout Tokyo via fax.
St. Luke's International Hospital in Tsukiji was one of very few hospitals in Tokyo at that time to have the entire building wired and piped for conversion into a "field hospital" in the event of a major disaster. This proved to be a very fortunate coincidence as the hospital was able to take in most of the 600+ victims at Tsukiji Station, resulting in no fatalities at that station.
As there was a severe shortage of antidotes in Tokyo, stocks of sarin antidote, held in rural hospitals for treating herbicide and insecticide poisoning, were delivered to nearby Shinkansen stations, where they were collected by a Ministry of Health official on a train bound for Tokyo. An Osaka company that manufactured 2-PAM rushed emergency supplies to Tokyo unsolicited on hearing the news.
Defense offered by Japanese and American scholars
Aum had carefully cultivated the friendship of Japanese scholars of religion. After the sarin gas attack, some of them, including Shimada Hiromi, a professor at Tokyo's Japan Women's University, suggested Aum might be innocent. Shimada later apologized, claiming he had been deceived by Aum, but his and others' statements damaged the public image of scholars of religion in general in Japan, and Shimada eventually had to resign from his academic position.
In May 1995, Aum contacted an American group known as AWARE (Association of World Academics for Religious Education), founded by American scholar James R. Lewis, claiming that the human rights of its members were being violated. Lewis recruited human rights lawyer Barry Fisher, scholar of religion J. Gordon Melton, and chemical expert Thomas Banigan. They flew to Japan, with their travel expenses paid by Aum, and announced that they would investigate and report their findings through press conferences at the end of their trip.
In the press conferences, Fisher and Lewis announced that Aum could not have produced the sarin with which the attacks had been committed. They had determined this, Lewis said, with their technical expert, based on photos and documents provided by the group.
In fact, the Japanese police had already discovered, at Aum's main compound back in March, a sophisticated chemical weapons laboratory capable of producing thousands of kilograms of the poison a year. Later investigation showed that Aum had not only created the sarin used in the subway attacks, but had committed previous chemical and biological weapons attacks, including a previous attack with sarin that killed eight people and injured 144 ("Matsumoto sarin victim dies 14 years after attack", Yomiuri Shimbun, 6 August 2008).
British scholar of Japanese religions Ian Reader, in a detailed account of the incident, reported that Melton "had few doubts by the end of his visit to Japan of Aum's complicity" and eventually "concluded that Aum had in fact been involved in the attack and other crimes". The Washington Post account of the final press conference mentioned Lewis and Fisher but not Melton; a Christian anti-cult website called Apologetics Index nevertheless quoted the Washington Post article and implied that Melton had spoken at the press conference.
Lewis, on the other hand, maintained his opinion that Aum had been framed, and wrote that having the trip funded by Aum had been arranged "so that financial considerations would not be attached to our final report".
Reader concluded that, "The visit was well-intentioned, and the participants were genuinely concerned about possible violations of civil rights in the wake of the extensive police investigations and detentions of followers." However, it was ill-fated and detrimental to the reputation of those involved. While distinguishing between Lewis' and Melton's attitudes, Reader observed that Melton was criticized as well by both Japanese media and some fellow scholars. Using stronger words, Canadian scholar Stephen A. Kent chastised both Lewis and Melton for having put the reputation of the whole category of scholars of new religious movements at risk.
Murakami book
Popular contemporary novelist Haruki Murakami wrote Underground: The Tokyo Gas Attack and the Japanese Psyche (1997). He was critical of the Japanese media for focusing on sensational profiles of the attackers while ignoring the lives of the average citizens they victimized; the book contains extensive interviews with survivors in order to tell their stories. Murakami later added a second part to the work, The Place That Was Promised, which focuses on Aum Shinrikyo.
Aum/Aleph today
The sarin attack was the most serious attack upon Japan since World War II. Shortly after the attack, Aum lost its status as a religious organization, and many of its assets were seized. The Diet (Japanese parliament) rejected a request from government officials to outlaw the group. The National Public Safety Commission received increased funding to monitor the group. In 1999, the Diet gave the commission broad powers to monitor and curtail the activities of groups that have been involved in "indiscriminate mass murder" and whose leaders are "holding strong sway over their members", a bill custom-tailored to Aum Shinrikyo.
Asahara was sentenced to death by hanging on 27 February 2004, but lawyers immediately appealed the ruling. The Tokyo High Court postponed its decision on the appeal until results were obtained from a court-ordered psychiatric evaluation, which was issued to determine whether Asahara was fit to stand trial. In February 2006, the court ruled that Asahara was indeed fit to stand trial, and on 27 March 2006, rejected the appeal against his death sentence. Japan's Supreme Court upheld this decision on 15 September 2006. Two re-trial appeals were declined by the appellate court. In June 2012, Asahara's execution was postponed due to the further arrests of the two remaining Aum Shinrikyo members wanted in connection with the attack. Japan does not announce dates of executions, which are by hanging, in advance of them being carried out. On 6 July 2018, the Ministry of Justice announced that Asahara had been executed that morning with six others of those principally involved.
On 27 November 2004, all the Aum trials concluded, excluding Asahara's, as the death sentence of Seiichi Endo was upheld by Japan's Supreme Court. As a result, among a total of 189 members indicted, 13 were sentenced to death, five were sentenced to life in prison, 80 were given prison sentences of various lengths, 87 received suspended sentences, two were fined, and one was found not guilty.
In May and June 2012, the last two of the fugitives wanted in connection with the attack were arrested in the Tokyo and Kanagawa area. Of them, Katsuya Takahashi was taken into custody by police near a comic book cafe in Tokyo.
Asahara and twelve other Aum cultists were ultimately executed by hanging in July 2018, after all appeals had been exhausted.
The group reportedly still has about 2,100 members, and continues to recruit new members under the name "Aleph" as well as other names. Though the group has renounced its violent past, it still continues to follow Asahara's spiritual teachings. Members operate several businesses, though boycotts of known Aleph-related businesses, in addition to searches, confiscations of possible evidence and picketing by protest groups, have resulted in closures.
In popular culture
The Fine Art of Invisible Detection (2021) by Robert Goddard: the attack is a central plot element of the book, with the main character's husband being one of its fatal victims.
In the anime Mawaru Penguindrum, the fates of a handful of characters are explored and intertwined through a subway terrorist attack, and many references to the year 1995 appear as well.
In the short-lived TV series Now and Again, the main villain, "The Eggman", uses a chemical dispersed via broken eggs in a Japanese subway, clearly taking inspiration from the event.
See also
A, a documentary film made following the arrest of the leaders of Aum Shinrikyo
Banjawarn Station, a cattle station in Western Australia owned by Aum Shinrikyo
Capital punishment in Japan
List of executions in Japan
Me and the Cult Leader
Religion in Japan
Notes
References
Bibliography
"Survey: Subway sarin attack haunts more survivors" in Mainichi Online June 18, 2001.
Detailed information on each subway line, including names of perpetrators, times of attack, train numbers and numbers of casualties, as well as biographical details on the perpetrators, were taken from Underground: The Tokyo Gas Attack and the Japanese Psyche by Haruki Murakami.
Ataxia: The Chemical and Biological Terrorism Threat and the US Response , Chapter 3 – Rethinking the Lessons of Tokyo, Henry L. Stimson Centre Report No. 35, October 2000
Bonino, Stefano. Il Caso Aum Shinrikyo: Società, Religione e Terrorismo nel Giappone Contemporaneo, 2010, Edizioni Solfanelli, . Preface by Erica Baffelli.
External links
"Aum Shinrikyo", Dark History Podcast 2018-06-15
Aum Shinrikyo A history of Aum and list of Aum-related links
The Aum Supreme Truth Terrorist Organization – The Crime library Crime Library article about Aum
I got some pictures of sarin scattered on the metro floor – several pictures taken by one of the passengers at the scene (in Japanese)
"Homebrew chemical terror bombs, hype or horror?," The Register''
1995 crimes in Japan
1995 murders in Asia
1995 in Tokyo
1990s murders in Japan
Accidents and incidents involving Tokyo Metro
Aum Shinrikyo
Chemical weapons attacks
Heisei period
History of the Japan Ground Self-Defense Force
March 1995 crimes
March 1995 events in Asia
Mass murder in 1995
Massacres in Japan
Murder in Tokyo
Railway accidents and incidents in Japan
Religious terrorism
Terrorist incidents in Asia in 1995
Terrorist incidents in Japan in the 1990s
Terrorist incidents in Tokyo
Tokyo |
58010719 | https://en.wikipedia.org/wiki/Michael%20Kass | Michael Kass | Michael Kass is an American computer scientist best known for his work in computer graphics and computer vision. He has won an Academy Award and the SIGGRAPH Computer Graphics Achievement Award and is an ACM Fellow.
Kass, David Baraff, and Andrew Witkin shared an Academy Award for Scientific and Technical Achievement in 2005 for their work on clothing animation, including Kass's pioneering work on the cloth simulator used by Pixar in the short Geri's Game, winner of the Academy Award for Best Animated Short Film in 1997. He contributed a variety of technologies to Pixar's animated films, from A Bug's Life through Monsters University.
In 2009, Kass was honored by ACM SIGGRAPH for "his extensive and significant contributions to computer graphics, ranging from image processing to animation to modeling, and in particular for his introduction of optimization techniques as a fundamental tool in graphics." The award citation notes: "Michael is a graphics renaissance man: he's worked on animation, modeling, textures, image processing and even on graphics systems. In each area, he's made groundbreaking contributions."
Google Scholar counts over 30,000 citations to his work, including one of the top 20 most cited papers in computer science, "Snakes: Active Contour Models," authored with Andrew Witkin and Demetri Terzopoulos. The "Snakes" paper launched the active contour model, a framework for delineating an object outline from a possibly noisy 2D image, used in applications such as object tracking, shape recognition, segmentation, edge detection and stereo matching.
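For reference, the snake of the original paper is a parametric contour v(s) = (x(s), y(s)) that evolves to minimize an energy functional; a standard statement of it, following the paper's notation, is:

```latex
% Snake energy: internal smoothness term + image forces + external constraints.
E^{*}_{\mathrm{snake}}
  = \int_{0}^{1} E_{\mathrm{snake}}\bigl(\mathbf{v}(s)\bigr)\,ds
  = \int_{0}^{1} \Bigl[ E_{\mathrm{int}}\bigl(\mathbf{v}(s)\bigr)
      + E_{\mathrm{image}}\bigl(\mathbf{v}(s)\bigr)
      + E_{\mathrm{con}}\bigl(\mathbf{v}(s)\bigr) \Bigr]\,ds,
\qquad
E_{\mathrm{int}} = \tfrac{1}{2}\Bigl( \alpha(s)\,\lvert \mathbf{v}_{s}(s)\rvert^{2}
      + \beta(s)\,\lvert \mathbf{v}_{ss}(s)\rvert^{2} \Bigr)
```

Here α(s) and β(s) weight the contour's resistance to stretching and bending, and the image term typically rewards high edge strength, pulling the contour toward object boundaries.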
Kass developed the Hierarchical Z-Buffer with collaborators Ned Greene and Gavin Miller, a rendering technique that enables large increases in practical scene complexity compared to traditional Z-buffering. The algorithm, in variant forms, can be found in all modern graphics processing units (GPUs).
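The core of the technique is a z-pyramid: each coarser texel stores the farthest depth of the 2x2 texels beneath it, so a whole screen region can be rejected with a handful of reads. The sketch below shows only that culling query, under the convention that larger depth means farther away; the published algorithm adds octree scene traversal and temporal coherence, and all names here are illustrative rather than taken from the paper.

```python
# Minimal sketch of a hierarchical z-buffer occlusion query.
import numpy as np

def build_z_pyramid(zbuffer):
    """Build a z-pyramid from a square, power-of-two depth buffer.
    Each coarser texel keeps the FARTHEST (largest) depth of its
    2x2 children, so one coarse read bounds a whole region."""
    levels = [zbuffer.astype(float)]
    while levels[-1].shape[0] > 1:
        z = levels[-1]
        h, w = z.shape[0] // 2, z.shape[1] // 2
        levels.append(z.reshape(h, 2, w, 2).max(axis=(1, 3)))
    return levels

def is_occluded(levels, x0, y0, x1, y1, nearest_z):
    """Conservatively test the screen rectangle [x0,x1) x [y0,y1)
    whose nearest depth is nearest_z; True means provably hidden."""
    for lod, z in enumerate(levels):
        scale = 2 ** lod
        cx0, cy0 = x0 // scale, y0 // scale
        cx1, cy1 = (x1 - 1) // scale, (y1 - 1) // scale
        if cx1 - cx0 <= 1 and cy1 - cy0 <= 1:  # covers at most 2x2 texels
            region = z[cy0:cy1 + 1, cx0:cx1 + 1]
            # Hidden if the rectangle's nearest point lies behind the
            # farthest depth already stored everywhere under it.
            return nearest_z > region.max()
    return False

depth = np.full((8, 8), 1.0)   # far plane everywhere...
depth[:, :4] = 0.2             # ...except a close surface on the left
pyramid = build_z_pyramid(depth)
print(is_occluded(pyramid, 0, 0, 4, 4, nearest_z=0.9))  # True: hidden
print(is_occluded(pyramid, 4, 0, 8, 4, nearest_z=0.9))  # False: area is open
```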
Currently a Distinguished Engineer at NVIDIA, Kass is involved in a variety of projects related to augmented reality, virtual reality, and various types of content creation. Prior to NVIDIA, he was a Senior Principal Engineer at Intel, a Distinguished Fellow at Magic Leap, a Senior Research Scientist at Pixar, and a Principal Engineer at Apple Computer. His career in advanced technology began at the Schlumberger Artificial Intelligence Research Laboratory after he earned his Ph.D. from Stanford.
Kass has 28 issued U.S. patents and was honored in 2018 by the New York Intellectual Property Law Association as Inventor of the Year.
Kass is also a champion juggler, Argentine tango dancer, and an accomplished ice dancer.
Education
Kass received a B.A. summa cum laude in Artificial Intelligence (Independent Concentration) from Princeton University, an M.S. in Computer Science from Massachusetts Institute of Technology and a Ph.D. in Electrical Engineering from Stanford University.
Career
Michael Kass has been a Distinguished Engineer at NVIDIA since 2017. Prior to NVIDIA, he was a Senior Principal Engineer in the New Technology Group at Intel, a Distinguished Fellow at Magic Leap, a Senior Research Scientist at Pixar Animation Studios, and a Principal Engineer with the Advanced Technology Group at Apple Computer. He began working on computer graphics and computer vision at Schlumberger's Palo Alto Research Center following his Ph.D.
Honors, awards and achievements
Computer science
Academy Award for Technical Achievement (2005), for "pioneering work in physically-based computer-generated techniques used to simulate realistic cloth in motion pictures," with David Baraff and Andrew Witkin
SIGGRAPH Computer Graphics Achievement Award (2009), "for his extensive and significant contributions to computer graphics, ranging from image processing to animation to modeling, and in particular for his use of optimization for physical simulation and image segmentation"
ACM Fellow (2017), "for contributions to computer vision and computer graphics, particularly optimization and simulation"
New York Intellectual Property Law Association Inventor of the Year (2018), "for contributions to the field of computer graphics"
Most Cited Paper in Computer Science - Citeseer (19th), for "Snakes: Active contour models," International Journal of Computer Vision (1988), with Demetri Terzopoulos and Andrew Witkin
Helmholtz Award, International Conference on Computer Vision (2013), for "Snakes: Active Contour Models," ICCV 1987, with Demetri Terzopoulos and Andrew Witkin
List of Important Publications in Computer Science, in Computer Vision, for "Snakes: Active contour models," with Demetri Terzopoulos and Andrew Witkin.
Golden Nica, Prix Ars Electronica (1992), with Andrew Witkin, for the image "Reaction Diffusion Texture Buttons," Linz, Austria
Grand Prix, Pixel INA, Imagina (1991), for the animation Splash Dance, with G. Miller
Marr Prize Honorable Mention, 1st International Conference on Computer Vision, London (1987), for "Snakes: Active contour models," with Demetri Terzopoulos and Andrew Witkin
Best Paper in Perception-Vision, Association for the Advancement of Artificial Intelligence (1987), for "Energy constraints on deformable models: Recovering shape and non-rigid Motion," with Demetri Terzopoulos and Andrew Witkin
Honorable Mention, Prix Ars Electronica (1987), for the animation Knot Reel, with Andrew Witkin and K. Fleischer.
Nomination, Best Paper prize, Association for the Advancement of Artificial Intelligence (1986), for "Linear image features in stereopsis"
Grand Prix, Parigraph (1986), for the animation Knot Reel, with Andrew Witkin and K. Fleischer
Other
2nd place, U.S. Argentine Tango Stage Championships, 2012
U.S. Adult National Silver Medalist in ice dance, 2003
U.S. Juggling Champion, 1980
Notable publications
M. Kass, A. Witkin and D. Terzopoulos, "Snakes: Active contour models," International Journal of Computer Vision, 1(4): 321–331, January 1988
A. Witkin and M. Kass, "Spacetime constraints," Siggraph 1988: 159-168
D. Terzopoulos, A. Witkin and M. Kass, "Constraints on deformable models: Recovering 3d shape and nonrigid motion," Artificial Intelligence, 35, 1988.
M. Kass and A. Witkin, "Analyzing oriented patterns," Computer Vision, Graphics, and Image Processing, 37(3): 362–385, 1987
T. DeRose, M. Kass and T. Truong, "Subdivision surfaces in character animation," Siggraph 1998: 85–94.
N. Greene, M. Kass and G. Miller, "Hierarchical Z-buffer visibility," Siggraph 1993: 231-238
M. Halstead, M. Kass and T. DeRose, "Efficient, fair interpolation using Catmull-Clark surfaces," Siggraph 1993: 35-44
M. Kass and G. Miller, "Rapid, stable fluid dynamics for computer graphics," Siggraph 1990: 49–57.
D. Terzopoulos, A. Witkin and M. Kass, "Symmetry-seeking models and 3D object reconstruction," International Journal of Computer Vision, 1(3): 211–221, 1987.
A. Witkin and M. Kass, "Reaction-diffusion textures," Siggraph 1991: 299-308
A. Witkin, D. Terzopoulos and M. Kass, "Signal matching through scale space," International Journal of Computer Vision, 1(2): 133–144, 1987.
D. Baraff, A. Witkin and M. Kass, "Untangling cloth," ACM Trans. Graph. 22(3): 862-870 (2003).
Pixar film credits
Geri's Game (short)
A Bug's Life
Toy Story 2
Monsters, Inc.
Finding Nemo
The Incredibles
Cars
Ratatouille
WALL-E
Up
Toy Story 3
Monsters University
References
External links
Pixar People - Michael Kass
Disney Research - Michael Kass
Year of birth missing (living people)
Living people
American computer scientists
Pixar people
Academy Award for Technical Achievement winners
Stanford University School of Engineering alumni
MIT School of Engineering alumni
Princeton University alumni
Fellows of the Association for Computing Machinery |
34605821 | https://en.wikipedia.org/wiki/Andy%20Johnson-Laird | Andy Johnson-Laird | Andy Johnson-Laird (born February 1945) is an English-American computer scientist. He is the president of digital forensics firm Johnson-Laird Inc. in Portland, Oregon, where he lives with his wife, Kay Kitagawa.
Early life
Johnson-Laird was born in Sheffield in England in February 1945. He was educated at Culford School and then attended the Regent Street Polytechnic, now known as the University of Westminster. Johnson-Laird has also lived in Ferney-Voltaire (France), Toronto (Canada), and San Jose (Northern California).
Johnson-Laird's computer career started in 1963 at National Cash Register Company's London offices, where he worked as a computer operator and taught himself to program the NCR 315 mainframe computer during the night shift. He was then invited to teach as a lecturer in NCR's Computer Education department, teaching NCR customers how to program. Subsequently, he wrote system software for the NCR-Elliott 4100 mainframe computer. Johnson-Laird also worked as a systems programmer for Control Data Corporation in Ferney-Voltaire in support of supercomputer installations at CERN and various universities in Europe. He transferred to Control Data Corporation's Toronto Development Facility in 1977.
In the late 1970s, Johnson-Laird applied his knowledge of mainframe computers to the emerging hobbyist personal computer market. He purchased and hand-built a SOL-20 personal computer from Processor Technology, and a Cromemco Z-2 as test platforms.
Immigration
Johnson-Laird's 1979 immigration to the United States resulted in litigation over "a legal issue of first impression" concerning "the proper interpretation of section 101(a)(15)(L) of the Immigration and Nationality Act, 8 U.S.C. s 1101(a)(15)(L), which allows 'a firm or corporation or other legal entity' to petition for the granting of 'non-immigrant' status to employees which it wishes to transfer to corporate posts in this country". Johnson-Laird was successful in his challenge to the agency's interpretation of this rule to not permit a petition for an "L" visa by a sole proprietorship. Johnson-Laird was represented by Portland immigration attorney, Gerald H. Robinson Esq. United States District Court Judge James Redden ruled that "Congress intended that the legal status of the petitioning business not be a dispositive consideration in immigration proceedings".
On arriving in the US in 1979, Johnson-Laird wrote the software drivers to permit the CP/M Operating System to run on an Onyx computer—this was the first commercial CP/M microcomputer with a hard disk and a data cartridge tape drive.
Johnson-Laird is one of the early pioneers in the field of digital forensics. His specialty, developed in 1987, is forensic software analysis of computer and Internet-based evidence for copyright, patent, and trade-secret litigation. He is also an expert on software reverse engineering, software development, and developing software in a clean-room environment.
Johnson-Laird developed techniques for computerized source code analysis and the presentation of computer-based evidence that have helped to bring digital forensics into the courtroom. He has served as a Special Master to Federal District Court judges, and has served as an expert witness and provided litigation testimony in many intellectual property cases in the United States and Singapore. He also has published numerous articles on topics related to digital forensics and the legal challenges posed by emerging technologies.
Computer software expert witness
In addition to serving as a technical expert in high-profile and significant litigation, Johnson-Laird has had his published writings cited by the United States Court of Appeals for the Ninth Circuit, first in Sega Enterprises Ltd. v. Accolade Inc., No. 92-15665, D.C. No. CV-91-3871-BAC, as authority for the practical necessity of making intermediary copies to understand protected expression in software. Later the court cited Johnson-Laird's article "Software Reverse Engineering in the Real World", University of Dayton Law Review, Volume 19, Number 3, Spring 1994, in Sony v. Connectix, No. 99-15852, D.C. No. CV-99-00390-CAL, as authority for the need to reverse engineer when developing compatible products, and therefore that the intermediary copies created in such reverse engineering should be considered fair use under U.S. copyright law.
In 1994, the Honorable Marvin J. Garbis, in the U.S. District Court for the District of Maryland appointed Johnson-Laird as a court-appointed expert in the matter of Vaughn v. Amprey, Civil Action No. MJG-84-1911. Additionally, in 2007 Johnson-Laird was appointed as Special Master by Judge Stephen V. Wilson, Central District of California, in the MGM Studios, Inc. v. Grokster, Ltd. case. His appointment on remand encompassed recommending appropriate actions to impose by Permanent Injunction on Defendant StreamCast that would "cope with the copyright infringement" caused by peer-to-peer file sharing systems, while "preserving non-infringing uses" of the system. In 2010, he was appointed Special Master in DataSci v. Medidata, a case before the Honorable Marvin J. Garbis, in the U.S. District Court for the District of Maryland. His role was to resolve discovery disputes relating to the production of computer source code.
Role in the CP/M v. DOS dispute
Because of Johnson-Laird's experience with writing the software drivers for the Basic Input/Output System (BIOS) for various microcomputers, John Katsaros of Digital Research engaged him to create BIOS drivers for CP/M-86 for the first IBM personal computer. Working in conditions of considerable IBM-imposed secrecy in Digital Research's Pacific Grove offices, Johnson-Laird discovered a Microsoft employee's name, Bob O'Rear, in the boot sector of the PC DOS diskette. He reported this and the numerous similarities in the application programming interface of PC DOS and CP/M to Gary Kildall. Kildall was stunned to see the similarities. "There were some shallow changes, but it was essentially the same program," Johnson-Laird reported in an interview with BusinessWeek. It later turned out that Microsoft had licensed a program called 86-DOS from Seattle Computer Products. Tim Paterson had created 86-DOS, which he originally called QDOS, by copying the functional application programming interface from the CP/M manuals. 86-DOS became Microsoft's MS-DOS and IBM's PC DOS.
Photography and documentaries
Johnson-Laird is a photographer and a documentary filmmaker. In 2005 he implemented a variant of a technique known as streak photography, which used computer software to create computer-generated images. His techniques of computational photography create photographs that are compositions of color and line that appear realistic, but are not. In 2010, in collaboration with Kay Kitagawa and Dina Gomez, Johnson-Laird directed, produced, and edited "EMMA: Unplugged", a 90-minute documentary of the 2010 Emma International Collaboration, an artists' retreat in the Saskatchewan boreal forest hosted by the Saskatchewan Craft Council. He has also directed, produced, and edited other video projects.
Works authored
Published writings by Johnson-Laird include:
"Looking Forward, Legislating Backward?", 4 J. Small & Emerging Bus. L. 95, 101 (2000)
"The Discovery Of Computer Software In Patent Litigation", Federal Courts Law Review (an on-line law journal), March 1998
"A House Divided: Internet Technology From The Ground Up", A. Johnson-Laird and Niels Johnson-Laird, Journal of Internet Law, Volume 1, Number 1, July 1997.
"The Anatomy Of The Internet Meets The Body Of The Law", University of Dayton Law Review, Volume 22, Number 3, Spring 1997.
"Detecting and Demonstrating Plagiarism in Digital Images", co-written with Ewan Croft, The Multimedia Strategist, Volume 1, Number 9, July 1995.
"Smoking Guns and Spinning Disks: The preservation, production, and forensic analysis of computer-based evidence", The Computer Lawyer, Volume 11, Number 8, August 1994.
"Reverse Engineering of Software: Separating Legal Mythology from Actual Technology". The Software Law Journal, Volume V, Number 2, April 1992.
"Using a Computer Expert to Analyze Computer-Based Evidence", The Computer Law Association Bulletin, Volume 7, Number 1, 1992.
"Ingeniería Regresiva en Software: Separando la Mitología Legal de la Tecnología Real", Derecho De La Alta Tecnologia, Volume III, Number 34/35, June/July 1991.
"Reverse Engineering of Software: Separating Legal Mythology from Modern Day Technology", TekBriefs, Number 5, January/February 1991.
"Software Development and 'Reverse Engineering'" Eleventh Annual Computer Law Institute, Cosponsored by the Computer Law Association and University of Southern California Law Center, May 1990.
"Neural Networks: The next intellectual property nightmare?" The Computer Lawyer, Volume 7, Number 3, March 1990.
"The Programmer's CP/M Handbook" Osborne/McGraw Hill, 1983 (). CP/M was the first de facto standard operating systems for microcomputers and was the base from which Microsoft's MS-DOS and IBM's PC DOS came.
Johnson-Laird also serves on the Editorial Board for the Federal Courts Law Review, an on-line journal for Federal Judges.
References
External links
1945 births
Living people
People from Sheffield
English computer scientists
British forensic scientists
People educated at Culford School
Scientists from Portland, Oregon |
556700 | https://en.wikipedia.org/wiki/Computer%20World | Computer World | Computer World () is the eighth studio album by German electronic band Kraftwerk, released on 10 May 1981.
The album deals with the rise of computers within society. In keeping with the album's concept, Kraftwerk showcased their music on an ambitious world tour. The compositions are credited to Ralf Hütter, Florian Schneider, and Karl Bartos. As with the two previous albums, Computer World was released in both German- and English-language editions.
Artwork
The cover shows a computer terminal (apparently based on one made by the Hazeltine Corporation) displaying the heads of the four band members.
The inner sleeve artwork, created by Emil Schult and photographed by Günter Fröhling, depicts four slightly robotic-looking mannequins (representing the band members engaged in studio activities: performing, recording, mixing), similar to the artwork of the previous album, The Man-Machine, also created by Fröhling. In two photos, the mannequin representing Karl Bartos is seen playing a Stylophone, an instrument which is featured on the track "Pocket Calculator".
Release
Computer World charted on the UK Albums Chart. It was certified silver by the British Phonographic Industry (BPI) on 12 February 1982 for shipments in excess of 60,000 copies.
The track "Computer Love" was released as a seven-inch single in the UK in July 1981, backed with "The Model" from the group's previous album The Man-Machine. The single charted in the UK. In November 1981 the two songs were reissued as a double A-side twelve-inch single, which reached number one on the UK Singles Chart in February 1982, although "The Model" received the most airplay.
"Pocket Calculator" was released as a seven-inch single in the US by Warner Brothers in 1981, pressed on a fluorescent yellow/lime vinyl, matching the color of the album cover. The flip side featured the Japanese version of "Pocket Calculator," "Dentaku".
"Computerwelt" was remixed in 1982 as a dance version with additional bass and percussion sounds. It was released in January 1982 as a twelve-inch vinyl single only in Germany. The original track was nominated for a Grammy Award for Best Rock Instrumental Performance in 1982. "Computer World" was also chosen by the BBC for use in the titles of their UK computer literacy project, The Computer Programme.
Kraftwerk issued several different versions of the single "Pocket Calculator" in different languages: namely, German ("Taschenrechner"), French ("Mini Calculateur"), Japanese ("Dentaku", or 電卓), and Italian ("Mini Calcolatore").
Critical reception
Computer World was ranked the second best album of 1981 by NME.
In 2012, Slant Magazine included Computer World on its list of the 100 best albums of the 1980s. In 2018, Computer World was listed by Pitchfork as the 18th best album of the 1980s. Pitchfork also listed the track "Computer Love" as the 53rd best song of the 1980s. Rolling Stone named Computer World the 10th greatest EDM album of all time in 2012.
Legacy
Computer World maintains a distinct influence over subsequent releases across a multitude of genres; this influence is particularly noticeable in early and contemporary hip-hop and rap.
In 1982, American DJ and rapper Afrika Bambaataa wrote the song "Planet Rock" and recorded chords inspired by Trans-Europe Express. The song's lyrics also included the Japanese number counting "Ichi Ni San Shi" from Kraftwerk's "Numbers".
Cybotron's 1983 release "Clear," from the album Enter, contains multiple auditory elements of Computer World: the musical refrain closely resembles parts of "Home Computer" and "It's More Fun to Compute;" additionally, the track contains musical allusions to other Kraftwerk tracks.
Señor Coconut y su Conjunto, an electronic project of German musician Uwe Schmidt which initially covered Kraftwerk's songs, published a merengue-styled version of "It's More Fun to Compute" on their first LP El Baile Alemán, wrongly labeled as "Homecomputer" on the sleeve.
Coldplay used the main riff from "Computer Love" in their song "Talk" from their 2005 album X&Y. La Roux used the main riff from "Computer Love" in their song "I'm Not Your Toy" from their debut album.
Ricardo Villalobos' track "Lugom-IX" from the 2006 album Salvador prominently uses the riff from "Computer World".
Fergie's track "Fergalicious," from her 2006 debut album The Dutchess, borrows heavily from Computer World in two ways: it uses the opening synth line from "It's More Fun to Compute," and it takes its rhythmic component from J.J. Fad's "Supersonic," whose beat is itself based upon the Computer World track "Numbers." Arabian Prince, the co-producer of "Supersonic," has been vocal about his admiration of Kraftwerk.
"Home Computer" is used as background music in the Young Sheldon episode "A Computer, a Plastic Pony, and a Case of Beer".
LCD Soundsystem sampled "Home Computer" throughout the track "Disco Infiltrator".
DJ Hooligan (Da Hool) sampled the version of "Home Computer" from The Mix for the Underground and Cursed remix of the song "Scatman's World" by Scatman John.
Track listing
Personnel
The original 1981 sleeve notes are relatively unspecific regarding roles, merely listing all the equipment suppliers and technicians under the heading "Hardware" and the various other people involved, such as photographers, as "Software". By contrast, the 2009 remastered edition notes list the performer credits as the following:
Kraftwerk
Ralf Hütter – album concept, artwork reconstruction, cover, electronics, keyboards, mixing, Orchestron, production, recording, Synthanorma Sequenzer, synthesiser, vocoder, voice
Florian Schneider – album concept, cover, electronics, mixing, production, recording, speech synthesis, synthesiser, vocoder
Karl Bartos – electronic percussion
Wolfgang Flür – electronic percussion
Emil Schult – cover
Additional personnel
Günter Fröhling – photography
Johann Zambryski – artwork reconstruction
Charts
Weekly charts
Year-end charts
Certifications
References
External links
1981 albums
Albums produced by Florian Schneider
Albums produced by Ralf Hütter
Concept albums
EMI Records albums
German-language albums
Kraftwerk albums
Warner Records albums |
12067215 | https://en.wikipedia.org/wiki/Moshe%20Vardi | Moshe Vardi | Moshe Ya'akov Vardi () is an Israeli mathematician and computer scientist. He is a Professor of Computer Science at Rice University, United States. He is University Professor, the Karen Ostrum George Professor in Computational Engineering, Distinguished Service Professor, and Director of the Ken Kennedy Institute for Information Technology. His interests focus on applications of logic to computer science, including database theory, finite-model theory, knowledge in multi-agent systems, computer-aided verification and reasoning, and teaching logic across the curriculum. He is an expert in model checking, constraint satisfaction and database theory, common knowledge (logic), and theoretical computer science.
Moshe Y. Vardi is the author of over 600 technical papers as well as the editor of several collections. He has authored the books Reasoning About Knowledge with Ronald Fagin, Joseph Halpern, and Yoram Moses, and Finite Model Theory and Its Applications with Erich Grädel, Phokion G. Kolaitis, Leonid Libkin, Maarten Marx, Joel Spencer, Yde Venema, and Scott Weinstein. He is senior editor of Communications of the ACM, after serving as its editor-in-chief for a decade.
Background
He chaired the Computer Science Department at Rice University from January 1994 until June 2002. Prior to joining Rice in 1993, he was at the IBM Almaden Research Center, where he managed the Mathematics and Related Computer Science Department. Dr Vardi received his Ph.D. from the Hebrew University of Jerusalem in 1981.
He lives with his wife, Pamela Geyer, in Bellaire, Texas. His stepson, Aaron Hertzmann, is a computer scientist at Adobe Research with expertise in computer vision, computer graphics, human-computer interaction, and machine learning.
Awards
Vardi is the recipient of three IBM Outstanding Innovation Awards, a co-winner of the 2000 Gödel Prize (for work on temporal logic with finite automata), winner of the 2021 Knuth Prize, a co-winner of the 2005 ACM Paris Kanellakis Theory and Practice Award, and a co-winner of the LICS 2006 Test-of-Time Award. He is also the recipient of the 2008 and 2017 ACM Presidential Awards, the 2008 Blaise Pascal Medal in computational science from the European Academy of Sciences, the 2010 Distinguished Service Award from the Computing Research Association, the Institute of Electrical and Electronics Engineers (IEEE) Computer Society's 2011 Harry H. Goode Memorial Award, and the 2018 Alonzo Church Award for Outstanding Contributions to Logic and Computation (with Tomas Feder), sponsored jointly by the ACM Special Interest Group on Logic and Computation (SIGLOG), the European Association for Theoretical Computer Science (EATCS), the European Association for Computer Science Logic (EACSL), and the Kurt Gödel Society (KGS).
He holds honorary doctorates from eight universities: Saarland University, Germany; the University of Orléans and Université Grenoble Alpes in France; UFRGS in Brazil; the University of Liège in Belgium; the Technical University of Vienna, Austria; the University of Edinburgh in Scotland; and the University of Gothenburg in Sweden. Professor Vardi is an editor of several international journals and the president of the International Federation of Computational Logicians. He is a Guggenheim Fellow, as well as a Fellow of the Association for Computing Machinery, the American Association for the Advancement of Science, and the American Association for Artificial Intelligence. He was designated a Highly Cited Researcher by the Institute for Scientific Information, and was elected as a member of the US National Academy of Engineering, the National Academy of Sciences, the European Academy of Sciences, and the Academia Europaea. He was named to the American Academy of Arts and Sciences in 2010. He was included in the 2019 class of fellows of the American Mathematical Society "for contributions to the development and use of mathematical logic in computer science". He has also co-chaired the ACM Task Force on Job Migration.
References
1954 births
Living people
Hebrew University of Jerusalem alumni
IBM employees
Israeli computer scientists
Israeli editors
Israeli mathematicians
Israeli science writers
Fellows of the Association for the Advancement of Artificial Intelligence
Fellows of the Association for Computing Machinery
Fellows of the American Association for the Advancement of Science
Fellows of the American Mathematical Society
Formal methods people
Gödel Prize laureates
Knuth Prize laureates
Mathematical logicians
Members of Academia Europaea
Members of the United States National Academy of Engineering
Members of the United States National Academy of Sciences
IBM Research computer scientists
People from Haifa
Rice University faculty |
41738913 | https://en.wikipedia.org/wiki/Data%20Intercept%20Technology%20Unit | Data Intercept Technology Unit | The Data Intercept Technology Unit (DITU, pronounced DEE-too) is a unit of the Federal Bureau of Investigation (FBI) of the United States, which is responsible for intercepting telephone calls and e-mail messages of terrorists and foreign intelligence targets inside the US. It is not known when DITU was established, but the unit already existed in 1997.
DITU is part of the FBI's Operational Technology Division (OTD), which is responsible for all technical intelligence collection, and is located at Marine Corps Base Quantico in Virginia, which is also the home of the FBI's training academy. By 2010, DITU had organized its activities into seven regions.
Internet wiretapping
Interception at Internet service providers
In the late 1990s, DITU managed an FBI program codenamed Omnivore, which was established in 1997. This program was able to capture the e-mail messages of a specific target from the e-mail traffic that travelled through the network of an Internet service provider (ISP). The e-mail that was filtered out could be saved on a tape-backup drive or printed in real-time.
In 1999, Omnivore was replaced by three new tools from the DragonWare Suite: Carnivore, Packeteer and CoolMiner. Carnivore consisted of Microsoft Windows workstations with packet-sniffing software, physically installed at an Internet service provider (ISP) or another location where they could "sniff" traffic on a LAN segment to look for e-mail messages in transit. Between 1998 and 2000, Carnivore was used about 25 times.
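The following is a minimal sketch of what such targeted e-mail capture on a LAN segment could look like, in the spirit of the systems described above. It is a hypothetical stand-in, not the actual (non-public) FBI software; it assumes the scapy packet library and capture privileges, and the target address is invented for illustration.

```python
# Hypothetical sketch of Carnivore-style targeted e-mail filtering.
# Requires scapy (pip install scapy) and root privileges to sniff.
from scapy.all import sniff, TCP, Raw

TARGET = b"suspect@example.com"  # invented, court-authorized target

def inspect(pkt):
    # Keep only SMTP packets that mention the target address in the
    # envelope, headers or body; everything else is ignored.
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        if TARGET in payload:
            with open("capture.log", "ab") as log:
                log.write(payload + b"\n---\n")

# Passive tap on SMTP traffic (TCP port 25); packets are observed,
# never modified, and nothing is kept in memory (store=False).
sniff(filter="tcp port 25", prn=inspect, store=False)
```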
By 2005, Carnivore had been replaced by commercial software such as NarusInsight. A report in 2007 described this successor system as being located "inside an Internet provider's network at the junction point of a router or network switch" and capable of indiscriminately storing data flowing through the provider's network.
The raw data collected by these systems are decoded and reassembled by a tool called Packeteer, and can be viewed using a custom-made software interface called CoolMiner. FBI field offices have CoolMiner workstations that can access the collected data, which are stored on the Storage Area Network (SAN) of one of the seven DITU regions.
In August 2013, CNet reported that DITU helped develop custom "port reader" software that enables the FBI to collect metadata from internet traffic in real time. This software copies internet communications as they flow through a network and then extracts only the requested metadata. The CNet report says that the FBI is quietly pressing telecom carriers and Internet service providers to install this software on their networks, so it can be used in cases where the carriers' own lawful interception equipment cannot fully provide the data the Bureau is looking for.
According to the FBI, the 2001 Patriot Act authorizes the collection of internet metadata without a specific warrant; it can also be done with a pen register and trap-and-trace order, for which it is only required that the results will likely be "relevant" to an investigation. A specific warrant is needed, though, for the interception of the content of internet communications (such as e-mail bodies, chat messages and streaming voice and video), both for criminal investigations and for those under the Foreign Intelligence Surveillance Act.
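To make the metadata-versus-content distinction concrete, a "port reader"-style tap in the spirit of the software described above might record only envelope data (addresses, ports, sizes, timestamps) and discard payloads entirely. The sketch below is a hypothetical illustration, since the real software is not publicly documented.

```python
# Hypothetical sketch of metadata-only collection with scapy:
# record who talked to whom, when, on which ports, and how much,
# while discarding all payload content.
import time
from scapy.all import sniff, IP, TCP, UDP

def record_metadata(pkt):
    if not pkt.haslayer(IP):
        return
    ip = pkt[IP]
    l4 = pkt[TCP] if pkt.haslayer(TCP) else pkt[UDP] if pkt.haslayer(UDP) else None
    print({
        "ts": time.time(),                    # when the packet was seen
        "src": ip.src, "dst": ip.dst,         # who contacts whom
        "sport": getattr(l4, "sport", None),  # service endpoints
        "dport": getattr(l4, "dport", None),
        "bytes": len(pkt),                    # size only; payload dropped
    })

sniff(prn=record_metadata, store=False)
```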
Assisting NSA collection
Since the NSA set up the PRISM program in 2007, it is DITU that actually picks up the data at the various internet companies, such as Facebook, Microsoft, Google and Yahoo, before passing them on to the NSA for further processing, analysis and storage.
DITU also works closely with the three biggest American telecommunications providers (AT&T, Verizon, and Sprint) to "ensure its ability to intercept the telephone and Internet communications of its domestic targets, as well as the NSA's ability to intercept electronic communications transiting through the United States on fiber-optic cables".
The latter is probably related to the NSA's collection of domestic telephony metadata, for which the FBI petitioned the Foreign Intelligence Surveillance Court to order the biggest American telecommunication carriers, like for example Verizon Business Network Services, to hand over all the call records of their customers to the NSA.
An NSA document disclosed by the Snowden leaks gives the example of DITU "working with Microsoft to understand an additional feature in Outlook.com which allows users to create email aliases, which may affect our tasking processes."
See also
Communications Assistance for Law Enforcement Act (CALEA)
External links
Meet the Spies Doing the NSA's Dirty Work
Internet Wiretapping - Government and Law Enforcement Use
References
Federal Bureau of Investigation
Mass surveillance |
2265543 | https://en.wikipedia.org/wiki/Virtual%20globe | Virtual globe | A virtual globe is a three-dimensional (3D) software model or representation of Earth or another world. A virtual globe provides the user with the ability to freely move around in the virtual environment by changing the viewing angle and position. Compared to a conventional globe, virtual globes have the additional capability of representing many different views on the surface of Earth. These views may be of geographical features, man-made features such as roads and buildings, or abstract representations of demographic quantities such as population.
On November 20, 1997, Microsoft released an offline virtual globe in the form of Encarta Virtual Globe 98, followed by Cosmi's 3D World Atlas in 1999. The first widely publicized online virtual globes were NASA WorldWind (released in mid-2004) and Google Earth (mid-2005).
Types
Virtual globes may be used for study or navigation (by connecting to a GPS device), and their design varies considerably according to their purpose. Those intended to portray a visually accurate representation of the Earth often use satellite image servers and support not only rotation but also zooming and sometimes horizon tilting. Such virtual globes frequently aim to provide as true a representation of the world as possible, with worldwide coverage down to a very detailed level. In these cases the interface often offers simplified graphical overlays to highlight man-made features, since these are not necessarily obvious from a photographic aerial view. Another issue raised by this level of detail is security: some governments have raised concerns about the ease of access to detailed views of sensitive locations such as airports and military bases.
Another type of virtual globe exists whose aim is not the accurate representation of the planet but instead a simplified graphical depiction. Most early computerized atlases were of this type and, while displaying less detail, these simplified interfaces are still widespread since they are faster to use because of the reduced graphics content and the speed with which the user can understand the display.
List of virtual globe software
As more and more high-resolution satellite imagery and aerial photography become accessible for free, many of the latest online virtual globes are built to fetch and display these images. They include:
ArcGIS Explorer, a lightweight client for ArcGIS Server; supports WMS and many other GIS file formats.
Bing Maps 3D interface runs inside Internet Explorer and Firefox, and uses NASA Blue Marble: Next Generation.
Bhuvan is an India-specific virtual globe.
Earth3D, a program that visualizes the Earth in a real-time 3D view. It uses data from NASA, USGS, the CIA and the city of Osnabrück. Earth3D is free software (GPL).
EarthBrowser, an Adobe Flash/AIR-based virtual globe with real-time weather forecasts, earthquakes, volcanoes, and webcams.
Google Earth, satellite and aerial photos dataset (including commercial DigitalGlobe images) with international road dataset, the first popular virtual globe along with NASA World Wind.
MapJack is a Flash-based map covering areas in Canada, France, Latvia, Macau, Malaysia, Puerto Rico, Singapore, Sweden, Thailand, and the United States.
Marble, part of KDE, with data provided by OpenStreetMap, as well as NASA Blue Marble: Next Generation and others. Marble is free and open-source software (LGPL).
NASA World Wind, USGS topographic maps and several satellite and aerial image datasets, the first popular virtual globe along with Google Earth. World Wind is open-source software (NOSA).
NORC is a street view web service for Central and Eastern Europe.
OpenWebGlobe, a virtual globe SDK written in JavaScript using WebGL. OpenWebGlobe is free and open-source software (MIT).
WorldWide Telescope features an Earth mode with emphasis on data import/export, time-series support and a powerful Tour authoring environment.
As well as the availability of satellite imagery, online public domain factual databases such as the CIA World Factbook have been incorporated into virtual globes.
History
The use of virtual globe software was widely popularized by (and may have been first described in) Neal Stephenson's famous science fiction novel Snow Crash. In the metaverse in Snow Crash there is a piece of software called Earth made by the Central Intelligence Corporation. The CIC uses their virtual globe as a user interface for keeping track of all their geospatial data, including maps, architectural plans, weather data, and data from real-time satellite surveillance.
Virtual globes (along with all hypermedia and virtual reality software) are distant descendants of the Aspen Movie Map project, which pioneered the concept of using computers to simulate distant physical environments (though the Movie Map's scope was limited to the city of Aspen, Colorado).
Many of the functions of virtual globes were envisioned by Buckminster Fuller who in 1962 envisioned the creation of a Geoscope that would be a giant globe connected by computers to various databases. This would be used as an educational tool to display large scale global patterns related to topics such as economics, geology, natural resource use, etc.
See also
Digital Earth
Geovisualization
Geoweb
Macroscope (science concept)
Orbiter
Planetarium software
Science On a Sphere
Terragen
References
External links
VirtualGlobes@Benneten – screenshots of many virtual globes
Atlases
Map types
Virtual reality |
23944828 | https://en.wikipedia.org/wiki/Network%20intelligence | Network intelligence | Network intelligence (NI) is a technology that builds on the concepts and capabilities of Deep Packet Inspection (DPI), Packet Capture and Business Intelligence (BI). It examines, in real time, IP data packets that cross communications networks by identifying the protocols used and extracting packet content and metadata for rapid analysis of data relationships and communications patterns. Also, sometimes referred to as Network Acceleration or piracy.
NI is used as middleware to capture and feed information to network operator applications for bandwidth management, traffic shaping, policy management, charging and billing (including usage-based and content billing), service assurance, revenue assurance, market research mega panel analytics, lawful interception and cyber security. It is currently being incorporated into a wide range of applications by vendors who provide technology solutions to Communications Service Providers (CSPs), governments and large enterprises. NI extends network controls, business capabilities, security functions and data mining for new products and services needed since the emergence of Web 2.0 and wireless 3G and 4G technologies.
Background
The evolution and growth of Internet and wireless technologies offer possibilities for new types of products and services, as well as opportunities for hackers and criminal organizations to exploit weaknesses and perpetrate cyber crime. Network optimization and security solutions therefore need to address the exponential increases in IP traffic, methods of access, types of activity and volume of content generated. Traditional DPI tools from established vendors have historically addressed specific network infrastructure applications such as bandwidth management, performance optimization and quality of service (QoS).
DPI focuses on recognizing different types of IP traffic as part of a CSP's infrastructure. NI provides more granular analysis. It enables vendors to create an information layer with metadata from IP traffic to feed multiple applications for more detailed and expansive visibility into network-based activity.
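As a toy illustration of the protocol recognition at the heart of DPI, a signature-based classifier matches the first bytes of a payload against known protocol fingerprints. The byte signatures below are simplified examples, not a production rule set.

```python
# Minimal signature-based protocol classifier (the DPI building block
# that NI extends with metadata extraction and correlation).
SIGNATURES = {
    b"GET ": "HTTP",
    b"POST ": "HTTP",
    b"\x16\x03": "TLS",  # TLS record header: handshake, version 3.x
    b"SSH-": "SSH",
    b"EHLO": "SMTP",
    b"HELO": "SMTP",
}

def classify(payload: bytes) -> str:
    """Match a flow's first payload bytes against known fingerprints."""
    for magic, proto in SIGNATURES.items():
        if payload.startswith(magic):
            return proto
    return "unknown"

print(classify(b"GET /index.html HTTP/1.1\r\n"))  # -> HTTP
print(classify(b"SSH-2.0-OpenSSH_8.9\r\n"))       # -> SSH
```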
NI technology goes beyond traditional DPI, since it not only recognizes protocols but also extracts a wide range of valuable metadata. NI's value-add to solutions traditionally based on DPI has attracted the attention of industry analysts who specialize in DPI market research. For example, Heavy Reading now includes NI companies on its Deep Packet Inspection Semi-Annual Market Tracker.
Business Intelligence for data networks
In much the same way that BI technology synthesizes business application data from a variety of sources for business visibility and better decision-making, NI technology correlates network traffic data from a variety of data communication vehicles for network visibility, enabling better cyber security and IP services. With ongoing changes in communications networks and how information can be exchanged, people are no longer linked exclusively to physical subscriber lines. The same person can communicate in multiple ways – FTP, Webmail, VoIP, instant messaging, online chat, blogs, social networks – and from different access points via desktops, laptops and mobile devices.
NI provides the means to quickly identify, examine and correlate interactions involving Internet users, applications, and protocols whether or not the protocols are tunneled or follow the OSI model. The technology enables a global understanding of network traffic for applications that need to correlate information such as who contacts whom, when, where and how, or who accesses what database, when, and the information viewed. When combined with traditional BI tools that examine service quality and customer care, NI creates a powerful nexus of subscriber and network data.
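A toy example of this kind of correlation reduces raw flow metadata to a "who contacts whom, when and how" view; the field names and sample records below are hypothetical.

```python
# Correlate flow metadata into per-relationship summaries.
from collections import defaultdict

flows = [  # invented sample records
    {"ts": "2023-01-01T10:00", "src": "10.0.0.5", "dst": "203.0.113.9",
     "app": "webmail", "bytes": 18200},
    {"ts": "2023-01-01T10:02", "src": "10.0.0.5", "dst": "203.0.113.9",
     "app": "webmail", "bytes": 940},
    {"ts": "2023-01-01T10:07", "src": "10.0.0.7", "dst": "198.51.100.3",
     "app": "voip", "bytes": 52000},
]

# Group flows by (source, destination, application) so patterns of
# communication emerge, rather than isolated packets.
contacts = defaultdict(lambda: {"sessions": 0, "bytes": 0, "times": []})
for f in flows:
    key = (f["src"], f["dst"], f["app"])
    contacts[key]["sessions"] += 1
    contacts[key]["bytes"] += f["bytes"]
    contacts[key]["times"].append(f["ts"])

for (src, dst, app), stats in contacts.items():
    print(f"{src} -> {dst} via {app}: "
          f"{stats['sessions']} session(s), {stats['bytes']} bytes")
```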
Use in telecommunications
Telcos, Internet Service Providers (ISPs) and Mobile Network Operators (MNOs) are under increasing competitive pressures to move to smart pipe business models. The cost savings and revenue opportunities driving smart pipe strategies also apply to Network Equipment Providers, Software Vendors and Systems Integrators that serve the industry.
Because NI captures detailed information from the hundreds of IP applications that cross mobile networks, it provides the required visibility and analysis of user demand to create and deliver differentiating services, as well as manage usage once deployed.
NI as enabling technology for smart pipe applications
Customer metrics are especially important for telecom companies to understand consumer behaviors and create personalized IP services. NI enables faster and more sophisticated Audience Measurement, User Behavior Analysis, Customer Segmentation, and Personalized Services.
Real-time network metrics are equally important for companies to deliver and manage services. NI classifies protocols and applications from layers 2 through 7, generates metadata for communication sessions, and correlates activity between all layers, applicable for bandwidth & resource optimization, QoS, Content-Based Billing, quality of experience, VoIP Fraud Monitoring and regulatory compliance.
Use in cloud computing
The economics and deployment speed of cloud computing are fueling rapid adoption by companies and government agencies. Among the concerns, however, are risks of information security, e-discovery, regulatory compliance and auditing. NI mitigates the risks by providing Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) vendors with real-time situational awareness of network activity, and the critical transparency needed to allay the fears of potential customers. A vendor can demonstrate hardened network security to prevent data leakage or data theft, and an irrefutable audit trail of all network transactions – communication and content – related to a customer's account, assuming compliance with regulations and standards.
Use in government
NI extracts and correlates information such as who contacts whom, when, where and how, providing situational awareness for Lawful Interception and Cyber Security. Real-time data capture, extraction and analysis allow security specialists to take preventive measures and protect network assets in real time, as a complement to post-mortem analysis after an attack.
Use in business
Because NI combines real-time network monitoring with IP metadata extraction, it enhances the effectiveness of applications for Database Security, Database Auditing and Network Protection. The network visibility afforded by NI can also be used to build enhancements and next-generation solutions for Network Performance Management, WAN Optimization, Customer Experience Management, Content Filtering, and internal billing of networked applications.
References
Packets (information technology)
Deep packet inspection
Network analyzers
Networking hardware
Business intelligence |
170770 | https://en.wikipedia.org/wiki/Kid%20A | Kid A | Kid A is the fourth studio album by the English rock band Radiohead, released on 2 October 2000 by Parlophone. It was recorded with producer Nigel Godrich in Paris, Copenhagen, Gloucestershire and their hometown of Oxford, England.
After the stress of promoting Radiohead's acclaimed 1997 album OK Computer, songwriter Thom Yorke wanted to diverge from rock music. Drawing influence from electronic music, ambient music, krautrock, jazz, and 20th-century classical music, Radiohead used instruments such as modular synthesisers, ondes Martenot, brass and strings. They processed guitar sounds, incorporated samples and loops, and manipulated their recordings with software such as Pro Tools and Cubase. Yorke wrote impersonal and abstract lyrics, cutting up phrases and assembling them at random. Radiohead considered releasing the material as a double album, but decided it was too dense; a second album of material from the sessions, Amnesiac, was released eight months later.
Kid A was widely anticipated. In a departure from industry practice, Radiohead released no singles or music videos and conducted few interviews and photoshoots. Instead, they became one of the first major acts to use the internet as a promotional tool; Kid A was made available to stream and was promoted with short animated films featuring music and artwork. Bootlegs of early performances were shared on filesharing services, and the album was leaked before release. In 2000, Radiohead toured Europe in a custom-built tent without corporate logos.
Kid A debuted at the top of the UK Albums Chart, and became Radiohead's first number-one album in the United States, where it sold over 207,000 copies in its first week. It has been certified platinum in Australia, Canada, France, Japan, the US and the UK. Like OK Computer, it won the Grammy Award for Best Alternative Album and was nominated for the Grammy Award for Album of the Year. Its departure from Radiohead's earlier sound divided fans and critics, and some dismissed it as pretentious, deliberately obscure, or derivative. However, it later attracted wide acclaim; at the end of the decade, Rolling Stone, Pitchfork and the Times ranked Kid A the greatest album of the 2000s, and in 2020 Rolling Stone ranked it number 20 on its updated list of the "500 Greatest Albums of All Time". Kid A Mnesia, an anniversary reissue compiling Kid A, Amnesiac and previously unreleased material, was released in 2021.
Background
Following the critical and commercial success of their 1997 album OK Computer, the members of Radiohead suffered burnout. Songwriter Thom Yorke became ill, describing himself as "a complete fucking mess ... completely unhinged". Troubled by new acts he felt were imitating Radiohead, he believed his music had become part of a constant background noise he described as "fridge buzz", and he became hostile to the music media. He told The Guardian: "I always used to use music as a way of moving on and dealing with things, and I sort of felt like that the thing that helped me deal with things had been sold to the highest bidder and I was simply doing its bidding. And I couldn't handle that."
Yorke suffered from writer's block, and could not finish writing songs on guitar. He became disillusioned with the "mythology" of rock music, feeling the genre had "run its course". He began to listen almost exclusively to the electronic music of Warp artists such as Boards of Canada, Aphex Twin and Autechre, and said: "It was refreshing because the music was all structures and had no human voices in it. But I felt just as emotional about it as I'd ever felt about guitar music." He liked the idea of his voice being used as an instrument rather than having a leading role, and wanted to focus on sounds and textures instead of traditional songwriting.
Yorke bought a house in Cornwall and spent his time walking the cliffs and drawing, restricting his musical activity to playing the grand piano he had recently bought. "Everything in Its Right Place" was the first song he wrote. He described himself as a "shit piano player", with little knowledge of electronic instruments: "I remember this Tom Waits quote from years ago, that what keeps him going as a songwriter is his complete ignorance of the instruments he's using. So everything's a novelty. That's one of the reasons I wanted to get into computers and synths, because I didn't understand how the fuck they worked. I had no idea what ADSR meant."
Guitarist Ed O'Brien had hoped Radiohead's fourth album would comprise short, melodic guitar songs, but Yorke stated: "There was no chance of the album sounding like that. I'd completely had it with melody. I just wanted rhythm. All melodies to me were pure embarrassment." Bassist Colin Greenwood said: "We felt we had to change everything. There were other guitar bands out there trying to do similar things. We had to move on."
Recording
Radiohead began work on Kid A in Paris in January 1999 with OK Computer producer Nigel Godrich and no deadline. Yorke, who had the greatest control, was still facing writer's block. His new songs were incomplete, and some consisted of little more than sounds or rhythms; few had clear verses or choruses. Yorke's lack of lyrics created problems, as these had provided points of reference and inspiration for his bandmates in the past.
The group struggled with Yorke's new direction; according to Godrich, Yorke did not communicate much, and according to Yorke, Godrich "didn't understand why, if we had such a strength in one thing, we would want to do something else". Multi-instrumentalist Jonny Greenwood feared "awful art-rock nonsense just for its own sake"; his brother, Colin, did not enjoy Yorke's Warp influences, finding them "really cold". The other band members were unsure of how to contribute, and considered leaving. O'Brien said: "It's scary – everyone feels insecure. I'm a guitarist and suddenly it's like, well, there are no guitars on this track, or no drums."
Radiohead experimented with electronic instruments including modular synthesisers and the ondes Martenot, an early electronic instrument similar to a theremin, and used software such as Pro Tools and Cubase to edit and manipulate their recordings. The group found it difficult to use electronic instruments and techniques collaboratively; according to Yorke, "We had to develop ways of going off into corners and build things on whatever sequencer, synthesiser or piece of machinery we would bring to the equation and then integrate that into the way we would normally work." O'Brien began using sustain units, which allow guitar notes to be sustained infinitely, combined with looping and delay effects to create synthesiser-like sounds.
In March, Radiohead moved to Medley Studios in Copenhagen for two weeks, which were unproductive. The sessions produced about 50 reels of tape, each containing 15 minutes of music, with nothing finished. In April, Radiohead resumed recording in a mansion in Batsford Park, Gloucestershire. The lack of deadline and the number of incomplete ideas made it hard to focus, and the group held tense meetings; they agreed to disband if they could not agree on an album worth releasing.
In July, O'Brien began keeping an online diary of Radiohead's progress, and Radiohead moved to their new studio in their hometown of Oxford. In November, Radiohead held a live webcast from their studio, featuring a performance of new music and a DJ set. By 2000, six songs were complete. In January, at Godrich's suggestion, Radiohead split into two groups: one would generate a sound or sequence without acoustic instruments such as guitars or drums, and the other would develop it. Though the experiment produced no finished songs, it helped convince O'Brien of the potential of electronic instruments.
On 19 April 2000, Yorke wrote on Radiohead's website that they had finished recording. Having completed over 20 songs, Radiohead considered a double album, but felt the material was too dense. Instead, they saved half the songs for their next album, Amnesiac, released the following year. Yorke said Radiohead split the work into two albums because "they cancel each other out as overall finished things. They come from two different places." He observed that deciding the track list was not just a matter of choosing the best songs, as "you can put all the best songs in the world on a record and they'll ruin each other". He cited the later Beatles albums as examples of effective sequencing: "How in the hell can you have three different versions of 'Revolution' on the same record and get away with it? I thought about that sort of thing." Agreeing on the track list created arguments, and O'Brien said the band came close to breaking up: "That felt like it could go either way, it could break ... But we came in the next day and it was resolved." The album was mastered by Chris Blair at Abbey Road Studios, London.
Tracks
Radiohead worked on the first track, "Everything in Its Right Place", in a conventional band arrangement in Copenhagen and Paris, but without results. In Gloucestershire, Yorke and Godrich transferred the song to a Prophet-5 synthesiser, and Yorke's vocals were processed in Pro Tools using a scrubbing tool. O'Brien and drummer Philip Selway said the track helped them accept that not every song needed every band member to play on it. O'Brien recalled: "To be genuinely sort of delighted that you'd been working for six months on this record and something great has come out of it, and you haven't contributed to it, is a really liberating feeling." Jonny Greenwood described it as a turning point for the album: "We knew it had to be the first song, and everything just followed after it."
Yorke wrote an early version of "The National Anthem" when the band was still in school. In 1997, Radiohead recorded drums and bass for the song, intending to develop it as a B-side for OK Computer, but decided to keep it for their next album. For Kid A, Greenwood added ondes Martenot and sounds sampled from radio stations, and Yorke's vocals were processed with a ring modulator. In November 1999, Radiohead recorded a brass section inspired by the "organised chaos" of Town Hall Concert by the jazz musician Charles Mingus, instructing the musicians to sound like a "traffic jam".
The strings on "How To Disappear Completely" were performed by the Orchestra of St John's and recorded in Dorchester Abbey, a 12th-century church about five miles from Radiohead's Oxfordshire studio. Radiohead chose the orchestra as they had performed pieces by Penderecki and Messiaen. Jonny Greenwood, the only Radiohead member trained in music theory, composed the string arrangement by multitracking his ondes Martenot. According to Godrich, when the orchestra members saw Greenwood's score "they all just sort of burst into giggles, because they couldn't do what he'd written, because it was impossible – or impossible for them, anyway". The orchestra leader John Lubbock encouraged them to experiment and work with Greenwood's ideas. Concerts director Alison Atkinson said the session was "more experimental" than the orchestra's usual bookings.
"Idioteque" samples two computer music pieces, Paul Lansky's "Mild Und Leise" and Arthur Kreiger's "Short Piece". Both samples were taken from Electronic Music Winners, a 1976 experimental music LP which Jonny Greenwood stumbled upon while the band was working on Kid A. The track was built from a drum machine pattern Greenwood created with a modular synthesiser and a sample from "Mild und Leise". He gave the 50-minute recording to Yorke, who took a short section of it and used it to write the song. Yorke also referred to electronic dance music when talking about "Idioteque", and said that the song was "an attempt to capture that exploding beat sound where you're at the club and the PA's so loud, you know it's doing damage".
"Motion Picture Soundtrack" was written before Radiohead's debut single "Creep", and a version of it was recorded on piano during the OK Computer sessions. Yorke recorded it on a pedal organ, influenced by songwriter Tom Waits; the other band members added sampled harp and double bass, attempting to emulate the soundtracks of 1950s Disney films. Radiohead also worked on several songs that were not completed until recording sessions for future albums, including "Nude", "Burn the Witch" and "True Love Waits".
Music
Style and influences
Kid A incorporates influences from electronic artists on Warp Records such as 1990s IDM artists Autechre and Aphex Twin; 1970s Krautrock bands such as Can; the jazz of Charles Mingus, Alice Coltrane and Miles Davis; and abstract hip hop from the Mo'Wax label, including Blackalicious and DJ Krush. Yorke cited Remain in Light (1980) by Talking Heads as a "massive reference point". Björk was another major influence, particularly her 1997 album Homogenic, as was the Beta Band. Radiohead attended an Underworld concert which helped renew their enthusiasm in a difficult moment.
The string orchestration for "How to Disappear Completely" was influenced by Polish composer Krzysztof Penderecki. Jonny Greenwood's use of the ondes Martenot on this and several other Kid A songs was inspired by Olivier Messiaen, who popularised the instrument and was one of Greenwood's teenage heroes. Greenwood described his interest in mixing old and new music technology, and during the recording sessions Yorke read Ian MacDonald's Revolution in the Head, which chronicles the Beatles' recordings with George Martin during the 1960s. The band also sought to combine electronic manipulations with jam sessions in the studio, stating their model was the German group Can.
Kid A has been described as a work of electronica, experimental rock, post-rock, alternative rock, post-prog, ambient, electronic rock, art rock, and art pop. Though guitar is less prominent than on previous Radiohead albums, guitars were still used on most tracks. "Treefingers", an instrumental ambient track, was created by digitally processing O'Brien's guitar loops. Many of Yorke's vocals are heavily modified by digital effects; for example, his vocals on the title track were simply spoken, then vocoded with the ondes Martenot to create the melody.
Lyrics
Yorke's lyrics on Kid A are less personal than on earlier albums, instead heavily incorporating abstract and surreal themes. He cut up phrases and assembled them at random, combining cliches and banal observations; for instance, "Morning Bell" features repetitions of contrasting lines such as "Where'd you park the car?" and "Cut the kids in half". He cited David Byrne's approach on the 1980 Talking Heads album Remain in Light as an influence: "When they made that record, they had no real songs, just wrote it all as they went along. Byrne turned up with pages and pages, and just picked stuff up and threw bits in all the time. And that's exactly how I approached Kid A." Radiohead used Yorke's lyrics "like pieces in a collage ... [creating] an artwork out of a lot of different little things". The lyrics are not included in the liner notes, as Radiohead felt they could not be considered independently of the music, and Yorke said he did not want listeners to focus on them.
Yorke wrote "Everything in Its Right Place" about the depression he experienced on the OK Computer tour, feeling he could not speak. The refrain of "How to Disappear Completely" was inspired by R.E.M. singer Michael Stipe, who advised Yorke to relieve tour stress by repeating to himself: "I'm not here, this isn't happening". The refrain of "Optimistic" ("try the best you can / the best you can is good enough") was an assurance by Yorke's partner, Rachel Owen, when Yorke was frustrated with the band's progress. The title Kid A came from a filename on one of Yorke's sequencers. Yorke said he liked its "non-meaning", saying: "If you call [an album] something specific, it drives the record in a certain way."
Artwork
The Kid A artwork and packaging was created by Yorke with Stanley Donwood, who has worked with Radiohead since their 1994 EP My Iron Lung. Donwood painted on large canvases with knives and sticks, then photographed the paintings and manipulated them with Photoshop. While working on the artwork, Yorke and Donwood became "obsessed" with the Worldwatch Institute website, which was full of "scary statistics about ice caps melting, and weather patterns changing"; this inspired them to use an image of a mountain range as the cover art. Donwood said he saw the mountains as "some sort of cataclysmic power".
Donwood was inspired by a photograph taken during the Kosovo War depicting a square metre of snow full of the "detritus of war", such as military equipment and cigarette stains. He said: "I was upset by it in a way war had never upset me before. It felt like it was happening in my street." The red swimming pool on the album spine and disc was inspired by the 1988 graphic novel Brought to Light by Alan Moore and Bill Sienkiewicz, in which the number of people killed by state terrorism is measured in swimming pools filled with blood. Donwood said this image "haunted" him during the recording of the album, calling it "a symbol of looming danger and shattered expectations". Yorke and Donwood cited an exhibition of paintings by David Hockney they attended in Paris, before recording started, as another influence.
Yorke and Donwood made many versions of the album cover, with different pictures and different titles in different typefaces. Unable to decide, they taped them to the kitchen cupboards and went to bed. According to Donwood, the choice the next day "was obvious". In October 2021, Yorke and Donwood curated an exhibition of Kid A artwork at Christie's headquarters in London.
Promotion
Radiohead minimised their involvement in promotion for Kid A, conducting few interviews or photoshoots. They released no singles, though "Optimistic" and promotional copies of other tracks received radio play. Yorke said the decision not to release singles was to avoid the stress of publicity, which he had struggled with on OK Computer, rather than for artistic reasons.
No advance copies of Kid A were circulated, but it was played under controlled conditions for critics and fans. Radiohead were careful to present it as a cohesive work rather than a series of separate tracks; rather than give EMI executives copies to consider individually, they had them listen to the album in its entirety on a bus from Hollywood to Malibu. Rob Gordon, vice president of marketing at Capitol Records, the American subsidiary of Radiohead's label EMI, praised the album but said promoting it would be a "business challenge". Promotional copies of Kid A came with stickers prohibiting broadcast before 19 September. At midnight, it was played in its entirety by the London radio station Xfm. MTV2, KROQ, and WXRK also played the album in its entirety.
Rather than agree to a standard magazine photoshoot for Q, Radiohead supplied digitally altered portraits, with their skin smoothed, their irises recoloured, and Yorke's drooping eyelid removed. Q editor Andrew Harrison described the images as "aggressively weird to the point of taking the piss ... all five of Radiohead had been given the aspect of gawking aliens". Yorke told Q: "I'd like to see them try to put these pictures on a poster." Q projected them onto the Houses of Parliament, placed them on posters and billboards in the London Underground and on the Old Street Roundabout, and had them printed on key rings, mugs and mouse mats, to "turn Radiohead back into a product".
Instead of releasing traditional music videos for Kid A, Radiohead commissioned dozens of 10-second videos featuring Donwood artwork they called "blips", which were aired on music channels and distributed online. Pitchfork described them as "context-free animated nightmares that radiated mystery", with "arch hints of surveillance". Five of the videos were serviced as exclusives to MTV, and "helped play into the arty mystique that endeared Radiohead to its core audience", according to Billboard.
Internet
Though Radiohead had experimented with internet promotion for OK Computer in 1997, by 2000 online music promotion was not widespread; record labels were still reliant on MTV and radio. The "iBlip", a Java applet, could be embedded in fan sites and allowed users to preorder and stream the album; it was used by over 1000 sites and the album was streamed more than 400,000 times. The iBlip also included artwork, photos and links to order the album on the online retailer Amazon. Capitol also streamed the album through Amazon, MTV.com and heavy.com, and ran a campaign with the peer-to-peer filesharing service Aimster, allowing users to swap iBlips and Radiohead-branded Aimster skins.
Donwood wrote that EMI was not interested in the Radiohead website, and left him and the band to update it with "discursive and random content", including images of an "angry-looking" teddy bear with pointed teeth. The bears originated in stories Donwood made for his young children about teddy bears who came to life and ate the "grown-ups" who had abandoned them, and became a staple of the Kid A website and promotional material.
Three weeks before release, Kid A was leaked online and shared on the peer-to-peer service Napster. Asked whether he believed Napster had damaged sales, Capitol president Ray Lott likened the situation to unfounded concern about home taping in the 1980s and said: "I'm trying to sell as many Radiohead albums as possible. If I worried about what Napster would do, I wouldn't sell as many albums." Yorke said Napster "encourages enthusiasm for music in a way that the music industry has long forgotten to do".
Tour
Radiohead rearranged the Kid A songs to perform them live. O'Brien said, "You couldn't do Kid A live and be true to the record. You would have to do it like an art installation ... When we played live, we put the human element back into it." Selway said they "found some new life" in the songs when they came to perform them.
In mid-2000, months before Kid A was released, Radiohead toured the Mediterranean, performing Kid A and Amnesiac songs for the first time. Fans shared concert bootlegs online. Colin Greenwood said: "We played in Barcelona and the next day the entire performance was up on Napster. Three weeks later when we got to play in Israel the audience knew the words to all the new songs and it was wonderful." Later that year, Radiohead toured Europe in a custom-built tent without corporate logos, playing mostly new songs. The tour included a homecoming show in South Park, Oxford, with supporting performances by Humphrey Lyttelton (who performed on Amnesiac), Beck and Sigur Rós. According to journalist Alex Ross, the show may have been the largest public gathering in Oxford history.
Radiohead also performed three concerts in North American theatres, their first in nearly three years. The small venues sold out rapidly, attracting celebrities, and fans camped overnight. In October, Radiohead performed on the American TV show Saturday Night Live; the performance shocked viewers expecting rock songs, with Jonny Greenwood playing electronic instruments, the house brass band improvising over "The National Anthem", and Yorke dancing erratically to "Idioteque". Rolling Stone described the Kid A tour as "a revelation, exposing rock and roll humanity" in the songs. In November 2001, Radiohead released I Might Be Wrong: Live Recordings, comprising performances from the Kid A and Amnesiac tours.
Sales
Kid A reached number one on Amazon's sales chart, with more than 10,000 pre-orders. It debuted at number one on the UK Albums Chart, selling 55,000 copies in its first day – the biggest first-day sales of the year and more than every other album in the top ten combined. Kid A also debuted at number one on the US Billboard 200, selling more than 207,000 copies in its first week. It was Radiohead's first US top-20 album, and the first US number one in three years for any British act. Kid A also debuted at number one in Canada, where it sold more than 44,000 copies in its first week, and in France, Ireland and New Zealand. European sales slowed on 2 October 2000, the day of release, when EMI recalled 150,000 faulty CDs. By June 2001, Kid A had sold 310,000 copies in the UK, less than a third of OK Computer sales. It is certified platinum in the UK, Australia, Canada, France, Japan and the US.
Critical reception
Kid A was widely anticipated; Spin described it as the most anticipated rock record since Nirvana's In Utero. According to Andrew Harrison, then editor of Q, journalists expected it to provide more of the "rousing, cathartic, lots-of-guitar, Saturday-night-at-Glastonbury big future rock moments" of OK Computer. Months before its release, Melody Maker wrote: "If there's one band that promises to return rock to us, it's Radiohead."
After Kid A had been played for critics, many bemoaned the lack of guitar, obscured vocals, and unconventional song structures, and some called the album "a commercial suicide note". The Guardian wrote of the "muted electronic hums, pulses and tones", predicting that it would confuse listeners. Mojo wrote that "upon first listen, Kid A is just awful ... Too often it sounds like the fragments that they began the writing process with – a loop, a riff, a mumbled line of text, have been set in concrete and had other, lesser ideas piled on top." Guardian critic Adam Sweeting wrote that "even listeners raised on krautrock or Ornette Coleman will find Kid A a mystifying experience", and that it pandered to "the worst cliches" about Radiohead's "relentless miserabilism".
Several critics felt Kid A was pretentious or deliberately obscure. The Irish Times bemoaned the lack of conventional song structures and panned the album as "deliberately abstruse, wilfully esoteric and wantonly unfathomable ... The only thing challenging about Kid A is the very real challenge to your attention span." In the New Yorker, novelist Nick Hornby wrote that it was "morbid proof that this sort of self-indulgence results in a weird kind of anonymity rather than something distinctive and original". Melody Maker critic Mark Beaumont called it "tubby, ostentatious, self-congratulatory, look-ma-I-can-suck-my-own-cock whiny old rubbish ... About 60 songs were started that no one had a bloody clue how to finish." Alexis Petridis of the Guardian described it as "self-consciously awkward and bloody-minded, the noise made by a band trying so hard to make a 'difficult' album that they felt it beneath them to write any songs". Rolling Stone published a piece by Michael Krugman and Jason Cohen mocking Kid A as humourless, derivative and lacking in songs; they wrote: "Because it was decided that Radiohead were Important and Significant last time around, no one can accept the album as the crackpot art project it so obviously is."
Some critics felt the electronic elements were unoriginal. In the New York Times, Howard Hampton dismissed Radiohead as a "rock composite" and wrote that Kid A "recycles Pink Floyd's dark-side-of-the-moon solipsism to Me-Decade perfection". Beaumont said Radiohead were "simply ploughing furrows dug by DJ Shadow and Brian Eno before them"; the Irish Times felt the ambient elements were inferior to Eno's 1978 album Music For Airports and its "scary" elements inferior to Scott Walker's 1995 album Tilt. Select wrote: "What do they want for sounding like the Aphex Twin circa 1993, a medal?"
Rob Sheffield of Rolling Stone later wrote that the "mastery of Warp-style electronic effects" had appeared "clumsy and dated". In an NME editorial, James Oldham wrote that the electronic influences were "mired in compromise", with Radiohead still operating as a rock band, and concluded: "Time will judge it. But right now, Kid A has the ring of a lengthy, over-analysed mistake." Warp co-founder Rob Mitchell felt Kid A represented "an honest interpretation of [Warp] influences" and was not gratuitously electronic. He predicted it might one day be seen in the same way as David Bowie's 1977 album Low, which alienated some Bowie fans but was later acclaimed.
AllMusic gave Kid A a favourable review, but wrote that it "never is as visionary or stunning as OK Computer, nor does it really repay the intensive time it demands in order for it to sink in". The NME review was also positive, but described some songs as "meandering" and "anticlimactic", and concluded: "For all its feats of brinkmanship, the patently magnificent construct called Kid A betrays a band playing one-handed just to prove they can, scared to commit itself emotionally." In Rolling Stone, David Fricke called Kid A "a work of deliberately inky, often irritating obsession ... But this is pop, a music of ornery, glistening guile and honest ache, and it will feel good under your skin once you let it get there."
Spin said Kid A was "not the act of career suicide or feat of self-indulgence it will be castigated as", and predicted that fans would recognise it as Radiohead's "best and bravest" album. Billboard described it as "an ocean of unparalleled musical depth" and "the first truly groundbreaking album of the 21st century". Robert Christgau wrote that Kid A was "an imaginative, imitative variation on a pop staple: sadness made pretty". The Village Voice called it "oblique oblique oblique ... Also incredibly beautiful." Brent DiCrescenzo of Pitchfork gave Kid A a perfect score, calling it "cacophonous yet tranquil, experimental yet familiar, foreign yet womb-like, spacious yet visceral, textured yet vaporous, awakening yet dreamlike". He concluded that Radiohead "must be the greatest band alive, if not the best since you know who". The piece was one of the first Kid A reviews posted online; shared widely by Radiohead fans, it helped popularise Pitchfork and became notorious for its "obtuse" writing.
At Metacritic, which aggregates ratings from critics, Kid A has a score of 80 based on 24 reviews, indicating "generally favourable reviews". It was named one of the best albums of 2000 by publications including the Los Angeles Times, Spin, Melody Maker, Mojo, NME, Pitchfork, Q, the Times, Uncut, and the Wire. At the 2001 Grammy Awards, Kid A was nominated for Album of the Year and won for Best Alternative Album.
Legacy
In the years following its release, Kid A attracted acclaim. In 2005, Pitchfork wrote that it had "challenged and confounded" Radiohead's audience, and subsequently "transformed into an intellectual symbol of sorts ... Owning it became 'getting it'; getting it became 'anointing it'." In 2015, Rob Sheffield of Rolling Stone likened Radiohead's change in style to Bob Dylan's controversial move to rock music, writing that critics now hesitated to say they had disliked it at the time. He described Kid A as the "defining moment in the Radiohead legend". A year later, Billboard argued that Kid A was the first album since Bowie's Low to have moved "rock and electronic music forward in such a mature fashion". In an article for Kid A's 20th anniversary, the Quietus suggested that the negative reviews had been motivated by rockism, the tendency among music critics to venerate rock music over other genres.
Some critics still disliked the album. In a 2011 Guardian article about his critical Melody Maker review, Beaumont wrote that though his opinion had not changed, "Kid A's status as a cultural cornerstone has proved me, if not wrong, then very much in the minority ... People whose opinions I trust claim it to be their favourite album ever." In 2014, Brice Ezell of PopMatters wrote that Kid A is "more fun to think and write about than it is to actually listen to" and a "far less compelling representation of the band's talents than The Bends and OK Computer". In 2016, Dorian Lynskey wrote in the Guardian: "At times, Kid A is dull enough to make you fervently wish that they'd merged the highlights with the best bits of the similarly spotty Amnesiac ... Yorke had given up on coherent lyrics so one can only guess at what he was worrying about."
Radiohead denied that they had set out to create "difficult" music. Jonny Greenwood argued that the tracks were short and melodic, and suggested that "people basically want their hands held through 12 'Mull Of Kintyre's". Yorke said: "We're actually trying to communicate but, somewhere along the line, we just seemed to piss off a lot of people ... What we're doing isn't that radical." He recalled that the band had been "white as a sheet" before early performances on the Kid A tour, thinking they had been "absolutely trashed". At the same time, the reaction motivated them: "There was a sense of a fight to convince people, which was actually really exciting." He regretted having released no singles, feeling it meant much of the early judgement of the album came from critics.
Grantland credited Kid A for pioneering the use of internet to stream and promote music, writing: "For many music fans of a certain age and persuasion, Kid A was the first album experienced primarily via the internet – it's where you went to hear it, read the reviews, and argue about whether it was a masterpiece ... Listen early, form an opinion quickly, state it publicly, and move on to the next big record by the official release date. In that way, Kid A invented modern music culture as we know it." In his 2005 book Killing Yourself to Live, critic Chuck Klosterman interpreted Kid A as a prediction of the September 11 attacks.
Speaking at Radiohead's induction into the Rock and Roll Hall of Fame in 2019, David Byrne of Talking Heads, one of Radiohead's formative influences, said: "What was really weird and very encouraging was that [Kid A] was popular. It was a hit! It proved to me that the artistic risk paid off and music fans sometimes are not stupid." In 2020, Billboard wrote that the success of Kid A, despite its "challenging" content, established Radiohead as "heavy hitters in the business for the long run".
Accolades
In 2020, Rolling Stone ranked Kid A number 20 on its updated "500 Greatest Albums of All Time" list, describing it as "a new, uniquely fearless kind of rock record for a new, increasingly fearful century ... [It] remains one of the more stunning sonic makeovers in music history." In previous versions of the list, Kid A ranked at number 67 (2012) and number 428 (2003). In 2005, Stylus and Pitchfork named Kid A the best album of the previous five years, with Pitchfork calling it "the perfect record for its time: ominous, surreal, and impossibly millennial".
In 2006, Time named Kid A one of the 100 best albums of all time, calling it "the opposite of easy listening, and the weirdest album to ever sell a million copies, but ... also a testament to just how complicated pop music can be". At the end of the decade, Rolling Stone, Pitchfork and the Times ranked Kid A the greatest album of the 2000s. The Guardian ranked it second best, calling it "a jittery premonition of the troubled, disconnected, overloaded decade to come. The sound of today, in other words, a decade early."
In 2011, Rolling Stone named "Everything in Its Right Place" the 24th best song of the 2000s, describing it as "oddness at its most hummable". "Idioteque" was named one of the best songs of the decade by Pitchfork and Rolling Stone, and Rolling Stone ranked it number 33 on its 2018 list of the "greatest songs of the century so far". In 2021, Pitchfork readers voted Kid A the greatest album of the previous 25 years.
Reissues
After a period of being out of print on vinyl, EMI reissued a double LP of Kid A on 19 August 2008 along with OK Computer, Amnesiac and Hail to the Thief as part of the "From the Capitol Vaults" series. In August 2009, EMI reissued Kid A in a two-CD "Collector's Edition" and a "Special Collector's Edition" containing an additional DVD. Both versions feature live tracks, taken mostly from television performances. Radiohead, who left EMI in 2007, had no input into the reissue and the music was not remastered. The "Collector's Editions" were discontinued after Radiohead's back catalogue was transferred to XL Recordings in 2016. In May 2016, XL reissued Kid A on vinyl, along with the rest of Radiohead's back catalogue.
An early demo of "The National Anthem" was included in the special edition of the 2017 OK Computer reissue OKNOTOK 1997 2017. In February 2020, Radiohead released an extended version of "Treefingers", previously released on the soundtrack for the 2000 film Memento, to digital platforms. On November 5, 2021, Radiohead released Kid A Mnesia, an anniversary reissue compiling Kid A and Amnesiac. It includes a third album, Kid Amnesiae, comprising previously unreleased material from the sessions. Radiohead promoted the reissue with two digital singles, the previously unreleased tracks "If You Say the Word" and "Follow Me Around". Kid A Mnesia Exhibition, an interactive experience with music and artwork from the albums, was released on November 18 for PlayStation 5, macOS and Windows.
Track listing
Notes
"Idioteque" contains two samples from the Odyssey record First Recordings – Electronic Music Winners (1976): Paul Lansky's "Mild und Leise" and Arthur Kreiger's "Short Piece".
Personnel
Credits adapted from liner notes.
Production
Nigel Godrich – production, engineering, mixing
Radiohead – production
Gerard Navarro – production assistance, additional engineering
Graeme Stewart – additional engineering
Stanley – artwork
Tchock – artwork
Chris Blair – mastering
Additional musicians
Orchestra of St John's – strings
John Lubbock – conducting
Jonny Greenwood – scoring
Horns on "The National Anthem"
Andy Bush – trumpet
Steve Hamilton – alto saxophone
Martin Hathaway – alto saxophone
Andy Hamilton – tenor saxophone
Mark Lockheart – tenor saxophone
Stan Harrison – baritone saxophone
Liam Kerkman – trombone
Mike Kearsey – bass trombone
Henry Binns – rhythm sampling on "The National Anthem"
Charts
Weekly charts
Year-end charts
Certifications and sales
Notes
References
Further reading
Ed's Diary: Ed O'Brien's studio diary from Kid A/Amnesiac recording sessions, 1999–2000 (archived at Green Plastic)
Marzorati, Gerald. "The Post-Rock Band". The New York Times. 1 October 2000. Retrieved on 4 November 2010.
"All Things Reconsidered: The 10th Anniversary of Radiohead's 'Kid A'" (a collection of articles). PopMatters. November 2010. Retrieved on 4 November 2010.
External links
2000 albums
Albums produced by Nigel Godrich
Ambient albums by English artists
Capitol Records albums
Electronic albums by English artists
Grammy Award for Best Alternative Music Album
Parlophone albums
Post-rock albums by English artists
Radiohead albums |
955698 | https://en.wikipedia.org/wiki/Computer%20Systems%20Research%20Group | Computer Systems Research Group | CSRG may also refer to the China South Locomotive and Rolling Stock Industry (Group) Corporation or the Chauchat-Sutter-Ribeyrolles-Gladiator, a World War I-era machine gun.
The Computer Systems Research Group (CSRG) was a research group at the University of California, Berkeley that was dedicated to enhancing the AT&T Unix operating system and was funded by the Defense Advanced Research Projects Agency.
History
Professor Bob Fabry of Berkeley acquired a UNIX source license from AT&T in 1974. His group started to modify UNIX, and distributed their version as the Berkeley Software Distribution (BSD). In April 1980, Fabry signed a contract with DARPA to develop UNIX even further and accommodate the specific requirements of the ARPAnet. With this funding, Fabry created the Computer Systems Research Group.
The Berkeley Sockets API and the Berkeley Fast File System are among the group's most significant innovations. The sockets interface solved the problem of supporting multiple protocols (e.g. XNS and TCP/IP) and extended UNIX's "everything is a file" notion to these network protocols. The Fast File System increased the block allocation size from 512 bytes to 4096 bytes (or larger), improving disk transfer performance, while also allowing "micro-blocks" as small as 128 bytes, which improved disk use. Another noteworthy contribution was job control signals, which allowed a user to suspend a job with a key-press (control-Z) and then continue running the job in the background under the C shell.
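The sockets abstraction survives essentially unchanged in modern systems. As a minimal illustrative sketch (the host and request below are placeholders), Python's standard socket module exposes the Berkeley API, and makefile() shows the "everything is a file" idea applied to a network connection:

    import socket

    # Open a TCP connection through the Berkeley sockets API and wrap it
    # in a file object, so the network stream reads like an ordinary file.
    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        f = sock.makefile("rb")                # the socket as a file
        print(f.readline().decode().strip())   # e.g. "HTTP/1.0 200 OK"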
Noteworthy versions of BSD were 3.0 BSD (the first version of BSD to support virtual memory), 4.0 BSD (which included the job-control CTRL-Z functionality to suspend and restart a running job), the special interim 4.1a BSD release, which incorporated BBN's TCP/IP implementation, and 4.2 BSD (which included a full TCP/IP stack, FFS, and NFS support).
By the early 1980s, CSRG was the best-known non-commercial Unix developer, and a majority of Unix sites used at least some Berkeley software. AT&T included some CSRG work in Unix System V. During the 1970s and 1980s, AT&T/USL raised the commercial licensing fee for UNIX from $20,000 to $100,000–$200,000. This became a big problem for small research labs and companies who used BSD, and the CSRG decided to replace all the source code that originated from AT&T. The effort largely succeeded with the Net/2 release of 1991, but AT&T disputed the result and took Berkeley to court. After the lawsuit was settled in 1994, CSRG distributed its last version, called 4.4BSD-Lite2.
The group was disbanded in 1995, leaving a significant legacy: FreeBSD, OpenBSD, NetBSD, and DragonFly BSD are based on the 4.4BSD-Lite distribution and continue to play an important role in the open-source UNIX community today, including through the Kernel Normal Form (KNF) C coding style documented in the BSD style man page.
Alongside the Free Software Foundation and Linux, the CSRG laid the foundations of the open source community.
Former members include Keith Bostic, Bill Joy, Marshall Kirk McKusick, Samuel J. Leffler, Özalp Babaoğlu and Michael J. Karels, among others. The corporations Sun Microsystems, Berkeley Software Design and Sleepycat Software (later acquired by Oracle) can be considered spin-off companies of CSRG. Berkeley Software Design was headed by Robert Kolstad, who had led the development of BSD Unix for supercomputers at Convex Computer.
See also
Mach (kernel)
References
External links
The Computer Systems Research Group 1979 — 1993
A more detailed article
Berkeley Software Distribution
Science and technology in the San Francisco Bay Area
University of California, Berkeley
Unix history
Research groups |
35388541 | https://en.wikipedia.org/wiki/Robert%20Lafore | Robert Lafore | Robert W. Lafore (born March 11, 1938) is a computer programmer, systems analyst and entrepreneur. He coined the term "interactive fiction", and was an early software developer in this field.
Career
Lafore worked as a systems analyst for the Lawrence Berkeley National Laboratory. In the early days of microcomputing, he wrote programs in BASIC for the TRS-80 and founded his own software company.
Lafore has written a number of text adventure games, for which he coined the term "interactive fiction", for the company Adventure International.
Lafore has authored a number of books on the subject of computer programming, including Soul of CP/M and Assembly Language Primer for the IBM PC and XT. Later books included C++ Interactive Course, Object-Oriented Programming in C++, Turbo C Programming for the IBM, and C Programming Using Turbo C++. At one time he was an editor for the Waite Group publishers.
References
External links
http://www.informit.com/authors/bio.aspx?a=E8178A8C-D171-4B68-A507-127DE6FF7B9C
http://www.pearsoned.co.in/web/authors/3304/Robert_Lafore.aspx
1938 births
Living people
American computer programmers
Lawrence Berkeley National Laboratory people |
2609907 | https://en.wikipedia.org/wiki/Microsoft%20Home | Microsoft Home | Microsoft Home was a line of software applications and personal hardware products published by Microsoft. Microsoft Home software titles first appeared in the middle of 1993. These applications were designed to bring multimedia to Microsoft Windows and Macintosh personal computers. With more than 60 products available under the Microsoft Home brand by 1994, the company's push into the consumer market took off. Microsoft Plus!, an add-on enhancement package for Windows, continued until the Windows XP era. The range of home software catered for many different consumer interests from gaming with Microsoft Arcade and Entertainment Packs to reference titles such as Microsoft Encarta, Bookshelf and Cinemania. Shortly after the release of Microsoft Windows 95, the company began to reduce the price of Microsoft Home products and by the rise of the World Wide Web by 1998, Microsoft began to phase out the line of software.
Titles
Microsoft Home produced software for all different home uses and environments. The products are divided into five categories: Reference & Exploration, Entertainment, Kids, Home Productivity, and Sights, Sounds & Gear. The category in which the product was divided is identifiable by the packaging. Generally, Reference & Exploration products have a purple base color, Entertainment has a black base color, Kids has a yellow base color, Home Productivity has a green color and Sights, Sounds & Gear products have a grey or red base color. Note that many applications were developed in conjunction with other reputable software and reference companies. For example, Microsoft Musical Instruments was developed with Dorling Kindersley.
Reference and exploration software
Microsoft Home Reference products brought information to Multimedia Personal Computers - it was an effective way of presenting and exploring information before the World Wide Web became mainstream. These products were embellished with hyperlink navigation systems, which were relatively new at this time. Most of these products were released on CD-ROM, giving the software the ability to display high-resolution graphics and animations, and play high-quality waveforms and MIDI files. These products proved that personal computers would revolutionize the way information is found and explored.
Entertainment
In the early 1990s, games on personal computers generally ran on the now obsolete MS-DOS operating system. However, with the introduction of Microsoft Windows 3.1x in 1992, Microsoft Home published several entertainment applications that implemented the new technologies of Microsoft Windows such as DirectX. Furthermore, these applications encouraged the computer gamers of the time to migrate from MS-DOS to Microsoft Windows. This transition permitted better use of computer graphics, revolutionized game programming and resulted in a more realistic gaming experience, compared to DOS gaming. For example, Microsoft Windows Entertainment Pack Games have remained a classic for computer gamers, ever since their development in the early 1990s.
Kids
The Microsoft Kids division produced educational software aimed at children in 1993. Their products feature a purple-skinned character named McZee who wears wacky attire and leads children through the fictional town of Imaginopolis, where each building or room is a unique interface to a different part of the software. He is accompanied by a different partner in each software title.
Tying in with the TV series, The Magic School Bus, developed with Scholastic, was a highly successful series that continued to be sold after Microsoft Home's kids range of software turned into a subsidiary called Microsoft Kids.
Home productivity software
Sights, Sounds & Gear
Legacy
Current products
Microsoft Publisher remains available as part of Microsoft Office.
Microsoft Flight Simulator development was discontinued with the closure of ACES Game Studio. A replacement, Microsoft Flight, was later developed but subsequently also discontinued. However, the last version of the original software was later made available via Steam. In 2020, a brand-new Flight Simulator version was released with updated DirectX graphics technology.
Microsoft Picture It! eventually became Microsoft Digital Image and was discontinued after the release of Windows Vista. Windows Photo Gallery, itself later discontinued, and its successor Photos include similar features.
Discontinued products
Microsoft AutoMap later became Microsoft MapPoint and Microsoft Streets & Trips. This can be confirmed by "AutoMap" registry entries installed by these products. Both were discontinued in 2014.
Microsoft Works was replaced by Office Starter 2010 which is available to OEMs for installation on new PCs only and does not include a replacement for the Works Database program. Office Starter 2010 was discontinued before Office 2013, which does not offer a similar edition, was released.
Microsoft Encarta and Microsoft Money were discontinued in 2009, and no replacement products have been announced or released.
References
Microsoft Home Software Catalog Winter/Spring 1995 (from Microsoft The Ultimate Frank Lloyd Wright) 1194 Part No. 098-56862
Microsoft Knowledge Base
Home
Microsoft franchises |
49040455 | https://en.wikipedia.org/wiki/Tartan%20Laboratories | Tartan Laboratories | Tartan Laboratories, Inc., later known as Tartan, Inc., was an American software company founded in 1981 and based in Pittsburgh, Pennsylvania, that specialized in language compilers, especially for the Ada programming language. It was based on work initially done at Carnegie Mellon University and gradually shifted from a focus on research and contract work to being more product-oriented. It was sold to Texas Instruments in 1996 with part of it subsequently being acquired by DDC-I in 1998.
Company founding and initial history
Tartan was founded in 1981 by husband and wife William A. Wulf and Anita K. Jones, both Carnegie Mellon University computer science professors, with the goal of specializing in optimizing compilers. He was chair, president, and CEO while she was vice president of engineering. The professors left the university as part of this action, but still kept a reference to it, as "Tartan" is the name associated with Carnegie Mellon's athletics teams and school newspaper. A third founder from CMU was John Nestor, a visiting professor who had previously worked at Intermetrics on the "Red" candidate, a finalist in the Ada language design competition.
Initial funding for the company was provided by a New York-based venture capital firm, but a second round came from the Pittsburgh-based PNC Financial Corporation. The Cleveland-based Morgenthaler Ventures was another early investor, with David Morgenthaler serving on Tartan's board of directors.
The company's offices were initially located in a former industrial warehouse on Melwood Avenue in the Oakland neighborhood of Pittsburgh. In 1983 the company hosted a visit by Governor of Pennsylvania Dick Thornburgh as part of a meeting of the Pittsburgh High Technology Council, an organization seeking to help change Pittsburgh from its former reliance on an industrial base of steel production to one that included an emphasis on high-technology.
Tartan's initial engineering focus was to commercialize use of the Production Quality Compiler-Compiler Project approach towards building optimizing compilers that Wulf had worked on at Carnegie Mellon. This involved having optimizing code generators semi-automatically produced from architecture descriptions. Tartan made native Ada compilers for VAX/VMS and Sun-3/SunOS, and embedded system Ada cross-compilers, hosted on those platforms, to the MIL-STD-1750A, Motorola 680x0, and later Intel i960 architectures.
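The flavor of this table-driven approach can be sketched in a few lines of Python. Everything below is invented for illustration — the IR operations, operand kinds and mnemonics are not Tartan's actual machine descriptions — but it shows the idea of deriving a code generator from a declarative architecture table:

    # A toy "architecture description": a table mapping an IR operation and
    # its operand kinds to a target instruction template.
    ARCH_TABLE = {
        ("load",  "reg", "mem"): "L    {d},{s}",
        ("add",   "reg", "reg"): "AR   {d},{s}",
        ("store", "mem", "reg"): "ST   {s},{d}",
    }

    def make_codegen(table):
        """Semi-automatically 'produce' a code generator from the table."""
        def emit(op, dst_kind, src_kind, dst, src):
            return table[(op, dst_kind, src_kind)].format(d=dst, s=src)
        return emit

    emit = make_codegen(ARCH_TABLE)
    print(emit("load", "reg", "mem", "R1", "X"))    # L    R1,X
    print(emit("add",  "reg", "reg", "R1", "R2"))   # AR   R1,R2

Retargeting then amounts to supplying a new table rather than rewriting the code generator by hand.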
In addition, in March 1982, the company received a contract to maintain and enhance the DIANA intermediate representation that was intended as the cornerstone to various Ada tools.
Tartan also produced compilers for the C programming language and Modula-2 programming language. Among the C compiler implementers there were Guy L. Steele Jr. and Samuel P. Harbison, who combined to publish C: A Reference Manual (1984) to provide a precise description of the language, which Tartan was trying to implement on a wide range of systems. Both authors participated in the ANSI C standardization process; several revisions of the book were subsequently issued to reflect the new standard.
After a while Wulf assumed the role of chairman and senior vice president for development. By early 1985, Tartan had some 60 employees, a payroll over $2 million, and had seen over $9 million invested by venture capital firms. The company was considered one of the Pittsburgh area's foremost high-technology firms and part of, as The Pittsburgh Press put it, "changing [the city's] image as a smokestack wasteland". Tartan hosted an ACM SIGAda conference in Pittsburgh in July 1986.
In 1987, Tartan and integrated development environment maker Rational began a collaboration in producing a joint product for the 1750A, using Tartan's code generators.
Indeed, the Ada 1750A product generated very efficient code and established a strong reputation in the industry.
By 1987, the company had received $11 million in venture capital funding.
Both of the key founders would then leave Tartan: Jones in 1987 and Wulf followed in 1988. Both went on to distinguished further careers in government settings and at the University of Virginia.
Change of emphasis and name
Starting in 1985, Tartan had developed a relationship with Texas Instruments, and then in 1988 a main focus of the company became the development of Ada cross-compilers for digital signal processing (DSP) chips. These were for the Texas Instruments TMS320 series, specifically the C3x and C4x lines of processors. Within a few years, Tartan would become the first company to validate Ada compilers for DSPs.
As the 1980s came to a close there were manifest problems at Tartan Laboratories. Delays in getting products ready, or trouble in selling them if they were, had caused revenue shortfalls and most revenue came from contract development work. By 1989 Tartan had consumed some $15 million in venture funding but had never posted a profitable quarter. The Pittsburgh Post-Gazette characterized Tartan as a "promising young startup that never really got off the ground." Donald Evans, president and CEO, said that developing compilers for multiple languages had likely been a bad strategy and said that the company would now focus on selling Ada compilers to the government, military, and related sectors. At the same time, the company relocated its offices in 1989 out of the city to Monroeville, Pennsylvania. (The Oakland facility subsequently became the home of Pittsburgh Filmmakers.)
Following the retirement of Evans, in 1990 Lee B. Ehrlichman became president and CEO. He changed the name of the company to Tartan, Inc., saying that the old name suggested a research organization rather than a for-profit enterprise. He reduced engineering headcount and increased those for marketing and sales, and vowed that the company would focus on three major compiler lines, including ones for the C3x and i960 where there were no immediate competitors. By this point, Tartan had around 70 employees and an estimated annual revenue of $7–8 million.
Tartan had staff members who were prominent in the Ada language definition and standardization world, including Erhard Ploedereder and Joyce L. Tokar. The company also had a lead role in the U.S. Air Force-sponsored Common Ada Runtime System (CARTS) project towards providing standard interfaces into Ada runtime environments.
By the mid-1990s Tartan employed over 80 professional staff. Ehrlichman stayed as CEO until 1995, after which he was followed by Jaime Ellertson.
Sale and later history
Tartan was sold to Texas Instruments in 1996. The deal focused on Tartan's role in developing applications for the Texas Instruments DSPs.
In March 1998, DDC-I acquired from Texas Instruments the development and sales and marketing rights to the Tartan Ada cross-compilers for the MIL-STD-1750A, Motorola 680x0, and Intel i960 architectures. These were compilers for processors that Texas Instruments had become less interested in.
DDC-I kept the Tartan Ada compilers as a listed product into the 2010s.
Texas Instruments initially kept the Ada cross-compilers for the DSP architectures. In 2003 it closed down the Monroeville facility, which by that time had under 50 employees, and relocated the work to several of its offices around the world.
Subsequently, Texas Instruments licensed the remaining Ada compilers, for Texas Instruments C3x/C4x DSPs, to Tartan Software, Inc. doing business in Fombell, Pennsylvania. Then in 2018, the Tartan Ada product line for C3x/C4x was acquired by Tartan Ada LLC, doing business in New Kensington, Pennsylvania, which offers support maintenance and runtime licensing for the products.
References
External links
Tartan Ada cross-compiler systems for standard architectures at DDC-I, Inc.
Tartan Ada cross-compiler systems for DSP architectures at Tartan Ada LLC
American companies established in 1981
American companies disestablished in 1996
Software companies established in 1981
Software companies disestablished in 1996
Software companies based in Pennsylvania
Defunct software companies of the United States
Companies based in Pittsburgh
Ada (programming language) |
3648184 | https://en.wikipedia.org/wiki/Audacious%20%28software%29 | Audacious (software) | Audacious is a free and open-source audio player software with a focus on low resource use, high audio quality, and support for a wide range of audio formats. It is designed primarily for use on POSIX-compatible Unix-like operating systems, with limited support for Microsoft Windows. Audacious is the default audio player in Lubuntu and Ubuntu Studio.
History
Audacious began as a fork of Beep Media Player, which itself is a fork of XMMS. Ariadne "kaniini" Conill decided to fork Beep Media Player after the original development team announced that they were stopping development in order to create a next-generation version called BMPx. According to the Audacious home page, Conill and others "had [their] own ideas about how a player should be designed, which [they] wanted to try in a production environment."
Since version 2.1, Audacious includes both the Winamp-like interface known from previous versions and a new, GTK-based interface known as GTKUI, which resembles foobar2000 to some extent. GTKUI became the default interface in Audacious 2.4.
Change to C++ and Qt
Before version 3.0, Audacious used the GTK 2.x toolkit by default. Partial support for GTK3 was added in version 2.5, and Audacious 3.0 has full support for GTK3 and uses it by default. However, dissatisfied with the evolution of GTK3, the Audacious team chose to revert to GTK2 starting with the 3.6 release, with long-term plans of porting to Qt.
Since August 8, 2018, the official website has had HTTPS enabled site-wide, and GTK3 support has been dropped completely.
As of version 4.0, Audacious uses Qt as its primary toolkit, but GTK 2.x support is still available.
Features
Audacious contains built-in gapless playback.
Default codec support
MP3 using libmpg123
Advanced Audio Coding (AAC and AAC+)
Vorbis
FLAC
Wavpack
Shorten (SHN)
Musepack
TTA (codec)
Windows Media Audio (WMA)
Apple Lossless (ALAC)
150 different module formats
Several chiptune formats: AY, GBS, GYM, HES, KSS, NSF, NSFE, SAP, SPC, VGM, VGZ, VTX
PlayStation Audio: Portable Sound Format (PSF and PSF2)
Nintendo DS Sound Format: 2SF
Ad-lib chiptunes via AdPlug library
WAV and related formats provided by the libsndfile plug-in.
MIDI via native OS synthesizer control or FluidSynth.
CD Audio
Plug-ins
Audacious owes a large portion of its functionality to plug-ins, including all codecs. More features are available via third-party plug-ins.
Current versions of the Audacious core classify plug-ins as follows (some are low level and not user-visible at this time):
Decoder plug-ins, which contain the actual codecs used for decoding content.
Transport plug-ins, which are low-level and implemented by the VFS layer.
General plug-ins, which provide user-added services to the player (such as sending tracks with AudioScrobbler)
Output plug-ins, which provide the audio system backend of the player.
Visualization plug-ins, which provide visualizations based on fast Fourier transforms of the wave data.
Effect plug-ins, which provide various sound processing on the decoded audio stream
Container plug-ins, which provide support for playlists and other similar structures.
Low-level plug-ins, which provide miscellaneous services to the player core and are not categorized into any of the other plug-ins.
Output plug-ins:
PulseAudio output
OSS4 output
ALSA output
Sndio output
SDL output
FileWriter plug-in – no sound is played; the output is instead redirected into a new file. The plug-in supports the output file formats WAV, MP3, Ogg Vorbis and FLAC, and can be used to transcode a file and also to rip a CD
JACK output
Skins
Audacious has full support for Winamp 2 skins, and as of version 1.2, some free-form skinning is possible. Winamp .wsz skin files, a type of Zip archive, can be used directly, or can be unarchived to individual directories. The program can use Windows Bitmap (.bmp) graphics from the Winamp archive, although native skins for Linux are usually rendered in Portable Network Graphics (.png) format. Audacious 1.x allows the user to adjust the RGB color balance of any skin, effectively making a basic white skin equivalent to a host of colorized skins without editing them manually.
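Because a .wsz skin file is an ordinary Zip archive, it can be inspected or unpacked with standard tools. A minimal sketch using Python's zipfile module (the file name is a placeholder):

    import zipfile

    # A Winamp .wsz skin is a renamed Zip archive of bitmap and config files.
    with zipfile.ZipFile("classic.wsz") as skin:
        print(skin.namelist()[:5])          # peek at the bundled files
        skin.extractall("classic_skin/")    # unpack into a skin directory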
Clients
Audacious is intended to be a standalone media player, not a server (unlike XMMS2), though it accepts connections from client software such as Conky.
Connections to Audacious for remote control can be made over plain D-Bus, by using an MPRIS-compatible client, or with the official audtool utility created for this purpose.
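As a sketch of the MPRIS route, the snippet below assumes Audacious is running and registered under the bus name org.mpris.MediaPlayer2.audacious (the standard MPRIS2 naming scheme), and that the third-party pydbus package is installed; the method and property names come from the MPRIS2 specification, not from Audacious itself:

    from pydbus import SessionBus

    bus = SessionBus()
    player = bus.get("org.mpris.MediaPlayer2.audacious",  # assumed bus name
                     "/org/mpris/MediaPlayer2")
    player.PlayPause()                  # org.mpris.MediaPlayer2.Player method
    meta = player.Metadata              # standard MPRIS2 metadata property
    print(meta.get("xesam:title", "unknown title"))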
See also
Comparison of free software for audio § Players
References
Further reading
External links
Bug tracker
Audio player software that uses GTK
Audio software that uses Qt
Free audio software
Free media players
Free software programmed in C
Free software programmed in C++
Linux CD ripping software
Linux media players
Software forks
Software that uses FFmpeg
Software that was ported from GTK to Qt
Software that was rewritten in C++ |
20893911 | https://en.wikipedia.org/wiki/Password%20Authenticated%20Key%20Exchange%20by%20Juggling | Password Authenticated Key Exchange by Juggling | The Password Authenticated Key Exchange by Juggling (or J-PAKE) is a password-authenticated key agreement protocol, proposed by Feng Hao and Peter Ryan. This protocol allows two parties to establish private and authenticated communication solely based on their shared (low-entropy) password without requiring a Public Key Infrastructure. It provides mutual authentication to the key exchange, a feature that is lacking in the Diffie–Hellman key exchange protocol.
Description
Two parties, Alice and Bob, agree on a group $G$ with generator $g$ of prime order $q$ in which the discrete log problem is hard. Typically a Schnorr group is used. In general, J-PAKE can use any prime order group that is suitable for public key cryptography, including Elliptic curve cryptography. Let $s$ be their shared (low-entropy) secret, which can be a password or a hash of a password ($s \neq 0$). The protocol executes in two rounds.
Round 1 Alice selects $x_1 \in_R [0, q-1]$, $x_2 \in_R [1, q-1]$ and sends out $g^{x_1}$, $g^{x_2}$, together with the Zero-knowledge proofs (using for example Schnorr non-interactive zero-knowledge proof as specified in RFC 8235) for the proof of the exponents $x_1$ and $x_2$. Similarly, Bob selects $x_3 \in_R [0, q-1]$, $x_4 \in_R [1, q-1]$ and sends out $g^{x_3}$, $g^{x_4}$, together with the Zero-knowledge proofs for the proof of the exponents $x_3$ and $x_4$. The above communication can be completed in one round as neither party depends on the other. When it finishes, Alice and Bob verify the received Zero-knowledge proofs and also check $g^{x_2} \neq 1$ and $g^{x_4} \neq 1$.
Round 2 Alice sends out $A = (g^{x_1} g^{x_3} g^{x_4})^{x_2 s}$ and a Zero-knowledge proof for the proof of the exponent $x_2 s$. (Note Alice actually derives a new public key using $g^{x_1} g^{x_3} g^{x_4}$ as the generator). Similarly, Bob sends out $B = (g^{x_1} g^{x_2} g^{x_3})^{x_4 s}$ and a Zero-knowledge proof for the proof of the exponent $x_4 s$.
After Round 2, Alice computes $K = (B / g^{x_2 x_4 s})^{x_2} = g^{(x_1 + x_3) x_2 x_4 s}$. Similarly, Bob computes $K = (A / g^{x_2 x_4 s})^{x_4} = g^{(x_1 + x_3) x_2 x_4 s}$. With the same keying material $K$, Alice and Bob can derive a session key using a Cryptographic hash function: $\kappa = H(K)$.
The two-round J-PAKE protocol is completely symmetric. This helps significantly simplify the security analysis. For example, the proof that one party does not leak any password information in the data exchange must hold true for the other party based on the symmetry. This reduces the number of the needed security proofs by half.
In practice, it is more likely to implement J-PAKE in three flows since one party shall normally take the initiative. This can be done trivially without loss of security. Suppose Alice initiates the communication by sending to Bob: $g^{x_1}, g^{x_2}$ and Zero-knowledge proofs. Then Bob replies with: $g^{x_3}, g^{x_4}, B = (g^{x_1} g^{x_2} g^{x_3})^{x_4 s}$ and Zero-knowledge proofs. Finally, Alice sends to Bob: $A = (g^{x_1} g^{x_3} g^{x_4})^{x_2 s}$ and a Zero-knowledge proof. Both parties can now derive the same session key.
Depending on the application requirement, Alice and Bob may perform an optional key confirmation step. There are several ways to do it. A simple method described in SPEKE works as follows: Alice sends to Bob $H(H(K))$, and then Bob replies with $H(K)$. Alternatively, Alice and Bob can realize explicit key confirmation by using the newly constructed session key to encrypt a known value (or a random challenge). EKE, Kerberos and Needham-Schroeder all attempt to provide explicit key confirmation by exactly this method.
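The arithmetic of the two rounds can be illustrated end to end in a short Python sketch. This is a toy only: the Schnorr group parameters $p = 227$, $q = 113$, $g = 4$ are deliberately tiny values chosen for readability, the mandatory zero-knowledge proofs and validity checks are omitted, and SHA-256 stands in for the hash function $H$; a real implementation needs a large standardized group and the Schnorr NIZK proofs of RFC 8235. (Requires Python 3.8+ for the modular inverse via pow(x, -1, p).)

    import hashlib
    import secrets

    p, q, g = 227, 113, 4   # toy Schnorr group: q divides p - 1, g has order q
    s = 7                   # shared low-entropy secret, s != 0 mod q

    # Round 1: both sides pick ephemeral exponents and exchange g^x values.
    x1, x2 = secrets.randbelow(q), secrets.randbelow(q - 1) + 1    # Alice
    x3, x4 = secrets.randbelow(q), secrets.randbelow(q - 1) + 1    # Bob
    g1, g2 = pow(g, x1, p), pow(g, x2, p)    # Alice -> Bob
    g3, g4 = pow(g, x3, p), pow(g, x4, p)    # Bob -> Alice
    assert g2 != 1 and g4 != 1

    # Round 2: A = (g^x1 g^x3 g^x4)^(x2 s),  B = (g^x1 g^x2 g^x3)^(x4 s)
    A = pow(g1 * g3 * g4 % p, x2 * s % q, p)    # Alice -> Bob
    B = pow(g1 * g2 * g3 % p, x4 * s % q, p)    # Bob -> Alice

    # Both sides independently reach K = g^((x1 + x3) x2 x4 s).
    # Alice: (B / g^(x2 x4 s))^x2, using g^(x2 x4 s) = (g^x4)^(x2 s)
    Ka = pow(B * pow(pow(g4, x2 * s % q, p), -1, p) % p, x2, p)
    # Bob:   (A / g^(x2 x4 s))^x4, using g^(x2 x4 s) = (g^x2)^(x4 s)
    Kb = pow(A * pow(pow(g2, x4 * s % q, p), -1, p) % p, x4, p)
    assert Ka == Kb

    session_key = hashlib.sha256(str(Ka).encode()).hexdigest()    # kappa = H(K)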
Security properties
Given that the underlying Schnorr non-interactive zero-knowledge proof is secure, the J-PAKE protocol is proved to satisfy the following properties:
Off-line dictionary attack resistance - It does not leak any password verification information to a passive/active attacker.
Forward secrecy - It produces session keys that remain secure even when the password is later disclosed.
Known-key security - It prevents a disclosed session key from affecting the security of other sessions.
On-line dictionary attack resistance - It limits an active attacker to test only one password per protocol execution.
In 2015, Abdalla, Benhamouda and MacKenzie conducted an independent formal analysis of J-PAKE to prove its security in a random oracle model assuming algebraic adversaries.
The protocol design
The J-PAKE protocol is designed by combining random public keys in such a structured way to achieve a vanishing effect if both parties supplied exactly the same passwords. This is somewhat similar to the Anonymous veto network protocol design. The essence of the idea, however, can be traced back to David Chaum's original Dining Cryptographers network protocol, where binary bits are combined in a structured way to achieve a vanishing effect.
The implementation
J-PAKE has been implemented in OpenSSL and OpenSSH as an experimental authentication protocol. It was removed from the OpenSSH source code at the end of January 2014. It has also been implemented in Smoke Crypto Chat Messenger and in NSS, and was used by Firefox Sync version 1.1 but discontinued in version 1.5, which uses a different key exchange and storage method. Mozilla's J-PAKE server was shut down along with the Sync 1.1 storage servers on 30 September 2015. Pale Moon continues to use J-PAKE as part of its Sync service. Since February 2013, J-PAKE has been added to the lightweight API in Bouncycastle (1.48 and onwards). J-PAKE is also used in the Thread network protocol.
Standardization
J-PAKE has been included in ISO/IEC 11770-4 (2017) as an international standard. It is also published in RFC 8236.
References
External links
J-PAKE draft
A prototype demo of J-PAKE in C
A prototype demo of J-PAKE in Java
An example of implementing J-PAKE using Elliptic Curve
J-PAKE: From Dining Cryptographers to Jugglers
Cryptography
Cryptographic protocols |
51088515 | https://en.wikipedia.org/wiki/Libraries.io | Libraries.io | Libraries.io is an open source web service that lists software development project dependencies and alerts developers to new versions of the software libraries they are using.
Libraries.io is written by Andrew Nesbitt, who has also used the code as the basis for DependencyCI, a service that tests project dependencies. A key feature is that the service checks for software license compliance.
As of 30 November 2016, the web service monitors 1,930,496 open source libraries and supports 33 different package managers. To gather the information on libraries, it uses the dominant package manager for each programming language that is supported. The website organizes them by programming language, package manager, license (such as GPL or MIT), and by keyword.
On November 14, 2017, Libraries.io announced its acquisition by Tidelift, an open-source software support company, with an intention to continue to develop and operate the service.
The code that runs the web service is available on GitHub under the GNU Affero General Public License.
External links
Libraries.io source code
References
Free software websites
Software metrics
Code search engines
Internet properties established in 2015 |
27488254 | https://en.wikipedia.org/wiki/The%20Computer%20Paper | The Computer Paper | The Computer Paper (sometimes referred to as TCP, for a time HUB, and then HUB-The Computer Paper) was a monthly computer magazine published in Canada (both in print and online) from February 1988 until November 2008. The magazine was originally published by Canada Computer Paper Inc. It was purchased in 1997 by Hebdo Mag International of Paris, France, and then sold to Piccolo Publishing Ltd of Toronto in 2003. Publication ceased in November 2008 due to declining ad revenues.
Overview
The Computer Paper, which billed itself as "Canada's Computer Information Source" and "Canada's Largest Computer Monthly", provided reviews and previews of computer hardware and software for home users and information technology professionals. The intention was to provide a Canadian view of the rapidly changing computer marketplace. Articles were written by journalists and technology specialists in a wide range of fields. As the computer market changed, the publication was broadened to include coverage of printers, PDAs, digital cameras, video cameras, smart phones, personal music players and other consumer electronics. Each issue would have a focus article, usually featured on the cover of the magazine. Examples included Canadian accounting software, payroll programs, desktop publishing and telecommunications. Regular columns were devoted to specific topics such as shareware software. The Computer Paper also included wire stories from the Newsbytes News Network.
Similar in style to American regional magazines such as Computer Currents, Micro Times and Computer User, The Computer Paper was printed on newsprint on a monthly basis and was distributed free to readers as it was entirely advertising supported. At its peak The Computer Paper was distributed from six offices across Canada: Vancouver, Calgary, Winnipeg, Toronto, Ottawa and Montreal with a circulation of 365,000 copies a month in five separate, regional editions. Distribution was done largely through computer retail outlets, free street boxes and other high volume locations.
Commencing in January 1995, The Computer Paper launched TCP Labs, to provide benchmarking of computers, printers and other hardware. The goal was to provide Canadian purchasers with an unbiased overview of products available in the Canadian market. Winners of the hardware survey each month would be selected for an "Editor's Choice Award". The testing Lab was located in the Toronto offices of The Computer Paper. The first lab tests featured benchmark testing on a number of Canadian and internationally manufactured Pentium and 486 computers. The second lab featured laptops and color inkjet printers.
Competitors
Throughout the 1990s, The Computer Paper had competitors in most regional markets, including Our Computer Player in Vancouver, The Computer Post in Winnipeg, Toronto Computes! and later We Compute in Toronto and Monitor & M2 in Ottawa. The national distribution of The Computer Paper meant that most national level advertisers (IBM, Microsoft, Dell etc.) would select it over these other regional publications. According to an article in the Globe and Mail (June 1994), "Advertisers like the broad exposure. Ottawa software developer Corel Corp. says the paper is one of its 'priority' Canadian publications, as does IBM Canada Ltd. of Markham, Ont., which had 3 pages of ads in last month's issue."
Expansion
In addition to expanding from the BC market, across Canada, Canada Computer Paper Inc, owners of The Computer Paper, also purchased a number of competitive publications, as well as launching other titles based on these acquisitions. The BC Edition of The Computer Paper was launched in February 1988. The Alberta Edition was launched in June 1989, with the two Alberta partners being bought out in June 1990. In December 1990, a Prairie edition was launched in Winnipeg, Regina, and Saskatoon.
A Toronto edition launched in March 1992. In February 1994, Canada Computer Paper Inc., negotiated to purchase its major Toronto competitor Toronto Computes! from publisher David Carter of Context Publishing. In December 1994, Vancouver Computes! was launched from the editorial provided by Toronto Computes!. By owning two publications in both Toronto and Vancouver, Canada Computer Paper Inc., was able to effectively be bi-weekly in the two largest Canadian markets.
In February 1994, the Eastern Edition of The Computer Paper was launched for Ottawa, Montreal and a number of Atlantic cities. This was a zoned publication. It began with 75,000 circulation split three ways. In August 1994, in response to advertisers' requests, Montreal's circulation was increased to 50,000. Ottawa was also adjusted to 30,000 circulation.
In 1996, Our Computer Player was purchased in the Vancouver market and rebranded as Vancouver Computes!. A French-language version of the Computes! brand was launched in Montreal called Quebec Micro!. Also in 1996, Government Computer, a publication focused on purchasing of hardware and software by government and located in Ottawa, was purchased.
Editors and Writers
Editors, regular writers and contributors to The Computer Paper:
Douglas Alder (Founder, Publisher/Editor-in-Chief)
Kathryn Alexander Alder (Co-Publisher and Consulting Partner)
Graeme Bennett (Managing Editor)
Sean Carruthers (Test Lab Editor)
Jeff Evans (Technical Editor) (Toronto)
Megan Johnston (Editor)
James MacFarlane
Geoff Martin (Editor-in-Chief)
Andrew Moore-Crispin (Editor-in-Chief)
Dorian Nicholson
Linda L. Richards
Keith Schengili-Roberts
David Tanaka (Editor-in-Chief)
Geof Wheelwright
Rod Lamirand
References
July 2001, For computer industry watchers, Masthead Magazine
August 15, 1997, Domain Name Raids, Business in Vancouver
July 1997, Computer Marketing Vehicles in Canada, IDC Report
May 1997, Computer Publications go to the mat, Silicon Valley North
May 1997, Black Papers Targeted, The Georgia Straight Weekly
April 2, 1997, Technology's OK, but what about the stress?, Lethbridge Herald
November 13, 1994 Exploring Online Mags, Vancouver Province Newspaper
June 6, 1994, The Entrepreneurs, A journey of spirit and circulation, The Globe and Mail
October 1993, Letters to the Editor, Boardwatch Magazine
March 15, 1991, 40 under 40, Business in Vancouver
May 2006 Masthead Online Lifestyle mags to use new attitudes towards consumer electronics
Sample Issues
The Computer Paper Online Edition via the waybackmachine.org
November 1996
December 1996
January 1997
February 1997
March 1997
July 1997
November 1997
January 1998
July 1998
June 1999
"The Computer Paper" sample PDFs on Issuu.com
"The Computer Paper" full PDF issues on Archive.org
Defunct computer magazines
Defunct magazines published in Canada
Magazines established in 1988
Magazines disestablished in 2008
Magazines published in Toronto
Monthly magazines published in Canada |
32361249 | https://en.wikipedia.org/wiki/History%20of%20supercomputing | History of supercomputing | The term supercomputing arose in the late 1920s in the United States in response to the IBM tabulators at Columbia University. The CDC 6600, released in 1964, is sometimes considered the first supercomputer. However, some earlier computers were considered supercomputers for their day, such as the 1960 UNIVAC LARC; the IBM 7030 Stretch and the Manchester Atlas, both in 1962 and of comparable power; and the 1954 IBM NORC.
While the supercomputers of the 1980s used only a few processors, in the 1990s, machines with thousands of processors began to appear both in the United States and in Japan, setting new computational performance records.
By the end of the 20th century, massively parallel supercomputers with thousands of "off-the-shelf" processors similar to those found in personal computers were constructed and broke through the teraflop computational barrier.
Progress in the first decade of the 21st century was dramatic and supercomputers with over 60,000 processors appeared, reaching petaflop performance levels.
Beginnings: 1950s and 1960s
The term "Super Computing" was first used in the New York World in 1929 to refer to large custom-built tabulators that IBM had made for Columbia University.
In 1957, a group of engineers left Sperry Corporation to form Control Data Corporation (CDC) in Minneapolis, Minnesota. Seymour Cray left Sperry a year later to join his colleagues at CDC. In 1960, Cray completed the CDC 1604, one of the first generation of commercially successful transistorized computers and, at the time of its release, the fastest computer in the world. However, the fully transistorized Harwell CADET had been operational since 1955, and IBM delivered its commercially successful transistorized IBM 7090 in 1959.
Around 1960, Cray decided to design a computer that would be the fastest in the world by a large margin. After four years of experimentation along with Jim Thornton, and Dean Roush and about 30 other engineers Cray completed the CDC 6600 in 1964. Cray switched from germanium to silicon transistors, built by Fairchild Semiconductor, that used the planar process. These did not have the drawbacks of the mesa silicon transistors. He ran them very fast, and the speed of light restriction forced a very compact design with severe overheating problems, which were solved by introducing refrigeration, designed by Dean Roush. The 6600 outperformed the industry's prior recordholder, the IBM 7030 Stretch, by a factor of three. With performance of up to three megaFLOPS, it was dubbed a supercomputer and defined the supercomputing market when two hundred computers were sold at $9 million each.
The 6600 gained speed by "farming out" work to peripheral computing elements, freeing the CPU (Central Processing Unit) to process actual data. The Minnesota FORTRAN compiler for the machine was developed by Liddiard and Mundstock at the University of Minnesota and with it the 6600 could sustain 500 kiloflops on standard mathematical operations. In 1968, Cray completed the CDC 7600, again the fastest computer in the world. At 36 MHz, the 7600 had 3.6 times the clock speed of the 6600, but ran significantly faster due to other technical innovations. They sold only about 50 of the 7600s, not quite a failure. Cray left CDC in 1972 to form his own company. Two years after his departure CDC delivered the STAR-100 which at 100 megaflops was three times the speed of the 7600. Along with the Texas Instruments ASC, the STAR-100 was one of the first machines to use vector processing - the idea having been inspired around 1964 by the APL programming language.
In 1956, a team at Manchester University in the United Kingdom, began development of MUSE — a name derived from microsecond engine — with the aim of eventually building a computer that could operate at processing speeds approaching one microsecond per instruction, about one million instructions per second. Mu (the name of the Greek letter µ) is a prefix in the SI and other systems of units denoting a factor of 10−6 (one millionth).
At the end of 1958, Ferranti agreed to collaborate with Manchester University on the project, and the computer was shortly afterwards renamed Atlas, with the joint venture under the control of Tom Kilburn. The first Atlas was officially commissioned on 7 December 1962, nearly two years before the CDC 6600 was introduced, as one of the world's first supercomputers. It was considered at the time of its commissioning to be the most powerful computer in the world, equivalent to four IBM 7094s. It was said that whenever Atlas went offline half of the United Kingdom's computer capacity was lost. The Atlas pioneered virtual memory and paging as a way to extend its working memory by combining its 16,384 words of primary core memory with an additional 96K words of secondary drum memory. Atlas also pioneered the Atlas Supervisor, "considered by many to be the first recognizable modern operating system".
The Cray era: mid-1970s and 1980s
Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, and it became the most successful supercomputer in history. The Cray-1 used integrated circuits with two gates per chip and was a vector processor that introduced a number of innovations, such as chaining, in which scalar and vector registers generate interim results that can be used immediately, avoiding additional memory references that would reduce computational speed. The Cray X-MP (designed by Steve Chen) was released in 1982 as a 105 MHz shared-memory parallel vector processor with better chaining support and multiple memory pipelines. All three floating point pipelines on the X-MP could operate simultaneously. By 1983 Cray and Control Data were supercomputer leaders; despite its lead in the overall computer market, IBM was unable to produce a profitable competitor.
The Cray-2, released in 1985, was a four-processor liquid-cooled computer totally immersed in a tank of Fluorinert, which bubbled as it operated. It reached 1.9 gigaflops, making it the world's fastest supercomputer and the first to break the gigaflop barrier. The Cray-2 was a totally new design; it did not use chaining and had a high memory latency, but relied on heavy pipelining and was ideal for problems that required large amounts of memory. The software costs in developing a supercomputer should not be underestimated, as evidenced by the fact that in the 1980s the cost of software development at Cray came to equal what was spent on hardware. That trend was partly responsible for a move away from the in-house Cray Operating System to UNICOS, based on Unix.
The Cray Y-MP, also designed by Steve Chen, was released in 1988 as an improvement of the X-MP and could have eight vector processors at 167 MHz with a peak performance of 333 megaflops per processor. In the late 1980s, Cray's experiment on the use of gallium arsenide semiconductors in the Cray-3 did not succeed. Seymour Cray began to work on a massively parallel computer in the early 1990s, but died in a car accident in 1996 before it could be completed. Cray Research did, however, produce such computers.
Massive processing: the 1990s
The Cray-2 which set the frontiers of supercomputing in the mid to late 1980s had only 8 processors. In the 1990s, supercomputers with thousands of processors began to appear. Another development at the end of the 1980s was the arrival of Japanese supercomputers, some of which were modeled after the Cray-1.
The SX-3/44R was announced by NEC Corporation in 1989 and a year later earned the fastest in the world title with a 4 processor model. However, Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994. It had a peak speed of 1.7 gigaflops per processor. The Hitachi SR2201 on the other hand obtained a peak performance of 600 gigaflops in 1996 by using 2048 processors connected via a fast three-dimensional crossbar network.
In the same timeframe the Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations, and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected processors via a high-speed two-dimensional mesh, allowing processes to execute on separate nodes and communicate via the Message Passing Interface. By 1995, Cray was also shipping massively parallel systems, e.g. the Cray T3E with over 2,000 processors, using a three-dimensional torus interconnect.
The Paragon architecture soon led to the Intel ASCI Red supercomputer in the United States, which held the top supercomputing spot to the end of the 20th century as part of the Advanced Simulation and Computing Initiative. This was also a mesh-based MIMD massively-parallel system with over 9,000 compute nodes and well over 12 terabytes of disk storage, but used off-the-shelf Pentium Pro processors that could be found in everyday personal computers. ASCI Red was the first system ever to break through the 1 teraflop barrier on the MP-Linpack benchmark in 1996; eventually reaching 2 teraflops.
Petascale computing in the 21st century
Significant progress was made in the first decade of the 21st century. The efficiency of supercomputers continued to increase, but not dramatically so. The Cray C90 used 500 kilowatts of power in 1991, while by 2003 the ASCI Q used 3,000 kW while being 2,000 times faster, increasing the performance per watt 300 fold.
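The 300-fold figure follows directly from the two quoted numbers; a quick back-of-the-envelope check:

```python
c90_watts, asci_q_watts = 500e3, 3000e3   # quoted power draws in watts
speedup = 2000                            # ASCI Q vs. Cray C90
print(speedup / (asci_q_watts / c90_watts))   # ~333, i.e. roughly the 300-fold figure cited
```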
In 2004, the Earth Simulator supercomputer built by NEC at the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) reached 35.9 teraflops, using 640 nodes, each with eight proprietary vector processors. By comparison, as of 2020, a single NVidia RTX 3090 graphics card can deliver comparable performance at 35 TFLOPS per card.
The IBM Blue Gene supercomputer architecture found widespread use in the early part of the 21st century, and 27 of the computers on the TOP500 list used that architecture. The Blue Gene approach is somewhat different in that it trades processor speed for low power consumption so that a larger number of processors can be used at air cooled temperatures. It can use over 60,000 processors, with 2048 processors "per rack", and connects them via a three-dimensional torus interconnect.
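The torus topology mentioned here (also used by the Cray T3E above) wraps around in all three dimensions, so every node has exactly six neighbors regardless of position. A small sketch with illustrative dimensions, not Blue Gene's actual configuration:

```python
def torus_neighbors(x, y, z, dims):
    # Coordinates wrap in every dimension: that wrap-around is what
    # distinguishes a torus from a plain mesh, giving every node 6 links.
    X, Y, Z = dims
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    return [((x + dx) % X, (y + dy) % Y, (z + dz) % Z) for dx, dy, dz in steps]

print(torus_neighbors(0, 0, 0, (8, 8, 32)))   # even a "corner" node has 6 neighbors
```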
Progress in China has been rapid, in that China placed 51st on the TOP500 list in June 2003, then 14th in November 2003, and 10th in June 2004 and then 5th during 2005, before gaining the top spot in 2010 with the 2.5 petaflop Tianhe-I supercomputer.
In July 2011, the 8.1 petaflop Japanese K computer became the fastest in the world using over 60,000 SPARC64 VIIIfx processors housed in over 600 cabinets. The fact that K computer is over 60 times faster than the Earth Simulator, and that the Earth Simulator ranks as the 68th system in the world seven years after holding the top spot demonstrates both the rapid increase in top performance and the widespread growth of supercomputing technology worldwide. By 2014, the Earth Simulator had dropped off the list and by 2018 K computer had dropped out of the top 10. By 2018, Summit had become the world's most powerful supercomputer, at 200 petaFLOPS. In 2020, the Japanese once again took the top spot with the Fugaku supercomputer, capable of 442 PFLOPS.
Historical TOP500 table
This is a list of the computers which appeared at the top of the Top500 list since 1993. The "Peak speed" is given as the "Rmax" rating.
Export controls
The CoCom and its later replacement, the Wassenaar Arrangement, legally regulated (requiring licensing, approval, and record-keeping) or banned outright the export of high-performance computers (HPCs) to certain countries. Such controls have become harder to justify, leading to a loosening of these regulations. Some have argued these regulations were never justified.
See also
Linpack
TOP500
Green 500
FLOPS
Instructions per second
Quasi-opportunistic supercomputing
Supercomputer architecture
Supercomputing in China
Supercomputing in Europe
Supercomputing in India
Supercomputing in Japan
Supercomputing in Pakistan
References
External links
Supercomputers (1960s-1980s) at the Computer History Museum
Supercomputers
History of computing hardware
History of Silicon Valley |
67523190 | https://en.wikipedia.org/wiki/Qalculate%21 | Qalculate! | Qalculate! is an arbitrary precision cross-platform software calculator.
Features
Qalculate! supports common mathematical functions and operations, multiple bases, autocompletion, complex numbers, infinite numbers, arrays and matrices, variables, mathematical and physical constants, user-defined functions, symbolic derivation and integration, solving of equations involving unknowns, uncertainty propagation using interval arithmetic, plotting using Gnuplot, unit and currency conversion, and dimensional analysis. It also provides a periodic table of elements, as well as several functions for computer science, such as character encoding and bitwise operations.
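One of the listed features, uncertainty propagation using interval arithmetic, is easy to illustrate in a few lines of Python. This is a concept sketch only, not Qalculate!'s actual implementation or API:

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product's bounds come from the extremes of the corner products.
        corners = [self.lo * other.lo, self.lo * other.hi,
                   self.hi * other.lo, self.hi * other.hi]
        return Interval(min(corners), max(corners))

# A measurement of 10 +/- 0.1 multiplied by one of 2 +/- 0.05:
x, y = Interval(9.9, 10.1), Interval(1.95, 2.05)
print(x * y)   # Interval(lo=19.305, hi=20.705): the propagated uncertainty
```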
It provides three interfaces: a GUI using GTK for graphical usage (qalculate-gtk), a library for use in other programs (libqalculate), and a CLI program for use in a terminal (qalc).
Qalculate! (GTK+ and Qt GUI): qalculate-gtk and qalculate-qt
Qalculate! (CLI): qalc (usually provided by the libqalculate package)
Qalculate! (Library): libqalculate
Use in academic research
Bartel, Alexandre. "DOS Software Security: Is there Anyone Left to Patch a 25-year old Vulnerability?."
"In our example of Figure 7, we choose to execute /usr/bin/qalculate-gtk, a calculator. Since the stack of the DOSBox process is non-executable, we cannot directly inject our shellcode on it."
"The Gnome calculator was used to perform these calculations and the results were verified using the Qalculate! calculator and WolframAlpha (15) since spreadsheets are unable to perform these calculations."
See also
Mathematical software
List of arbitrary-precision arithmetic software
Comparison of software calculators
References
External links
Qalculate! - the ultimate desktop calculator at GitHub
Qalculate! - downloads at GitHub
Qalculate/qalculate-gtk GUI at GitHub
Qalculate! Manual at GitHub
QALC man page at GitHub
Ubuntu – Details of package qalculate in bionic
Ubuntu – Details of package qalculate in focal
Qalculate! code review by PVS-Studio
Free educational software
GNOME Applications
Software calculators |
194608 | https://en.wikipedia.org/wiki/IAS%20machine | IAS machine | The IAS machine was the first electronic computer to be built at the Institute for Advanced Study (IAS) in Princeton, New Jersey. It is sometimes called the von Neumann machine, since the paper describing its design was edited by John von Neumann, a mathematics professor at both Princeton University and IAS. The computer was built from late 1945 until 1951 under his direction.
The general organization is called von Neumann architecture, even though it was both conceived and implemented by others. The computer is in the collection of the Smithsonian National Museum of American History but is not currently on display.
History
Julian Bigelow was hired as chief engineer in May 1946.
Hewitt Crane, Herman Goldstine, Gerald Estrin, Arthur Burks, George W. Brown and Willis Ware also worked on the project.
The machine was in limited operation in the summer of 1951 and fully operational on June 10, 1952. It was in operation until July 15, 1958.
Description
The IAS machine was a binary computer with a 40-bit word, storing two 20-bit instructions in each word. The memory was 1,024 words (5.1 kilobytes). Negative numbers were represented in two's complement format. It had two general-purpose registers available: the Accumulator (AC) and Multiplier/Quotient (MQ). It used 1,700 vacuum tubes (triodes of types 6J6, 5670, and 5687; a few 6AL5 diodes; 150 pentodes to drive the memory CRTs; and 41 type-5CP1A CRTs: 40 used as Williams tubes for memory plus one more to monitor the state of a memory tube). The memory was originally designed for about 2,300 RCA Selectron vacuum tubes. Problems with the development of these complex tubes forced the switch to Williams tubes.
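The word layout lends itself to a short illustration. The following Python sketch packs two 20-bit instructions into a 40-bit word and shows the two's-complement reading of a full word; placing the first instruction in the high-order bits is an assumption made for the example:

```python
MASK20 = (1 << 20) - 1

def pack(left: int, right: int) -> int:
    # Two 20-bit instructions in one 40-bit word, left one in the high bits.
    return ((left & MASK20) << 20) | (right & MASK20)

def unpack(word: int):
    return (word >> 20) & MASK20, word & MASK20

def to_signed(word: int) -> int:
    # Two's-complement reading of a 40-bit word used as a number.
    return word - (1 << 40) if word & (1 << 39) else word

w = pack(0xABCDE, 0x12345)
assert unpack(w) == (0xABCDE, 0x12345)
print(to_signed((1 << 40) - 1))   # -1: all forty bits set means minus one
```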
It weighed about .
It was an asynchronous machine, meaning that there was no central clock regulating the timing of the instructions. One instruction started executing when the previous one finished. The addition time was 62 microseconds and the multiplication time was 713 microseconds.
Although some claim the IAS machine was the first design to mix programs and data in a single memory, that had been implemented four years earlier by the 1948 Manchester Baby. The Soviet MESM also became operational prior to the IAS machine.
Von Neumann showed how the combination of instructions and data in one memory could be used to implement loops, for example by modifying branch instructions when a loop was completed. The requirement that instructions, data and input/output be accessed via the same bus later came to be known as the Von Neumann bottleneck.
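The loop idiom described here can be mimicked with a toy stored-program interpreter in which instructions and data share one memory, so an instruction can overwrite another. The instruction set below is invented purely for illustration:

```python
# Toy stored-program machine: instructions and data share one memory,
# so running code can overwrite an instruction (here, a jump becomes HALT).
memory = [
    ["DEC", 4],            # 0: decrement the counter stored at address 4
    ["JNZ", 4, 0],         # 1: while the counter is non-zero, jump back to 0
    ["SET", 1, ["HALT"]],  # 2: self-modify: replace the branch at address 1
    ["JMP", 1],            # 3: re-enter the (now rewritten) instruction
    3,                     # 4: data: the loop counter
]

pc = 0
for _ in range(100):       # step limit as a safety net
    op, *args = memory[pc] if isinstance(memory[pc], list) else ["HALT"]
    if op == "DEC":
        memory[args[0]] -= 1; pc += 1
    elif op == "JNZ":
        pc = args[1] if memory[args[0]] != 0 else pc + 1
    elif op == "SET":
        memory[args[0]] = args[1]; pc += 1
    elif op == "JMP":
        pc = args[0]
    elif op == "HALT":
        break
print("counter:", memory[4])   # 0: the loop ran, then halted via modified code
```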
IAS machine derivatives
Plans for the IAS machine were widely distributed to any schools, businesses, or companies interested in computing machines, resulting in the construction of several derivative computers referred to as "IAS machines," although they were not software compatible.
Some of these "IAS machines" were:
AVIDAC (Argonne National Laboratory)
BESK (Stockholm)
BESM (Moscow)
Circle Computer (Hogan Laboratories, Inc.), 1954
CYCLONE (Iowa State University)
DASK (Regnecentralen, Copenhagen 1958)
GEORGE (Argonne National Laboratory)
IBM 701 (19 installations)
ILLIAC I (University of Illinois at Urbana–Champaign)
MUSASINO-1 (Musashino, Tokyo, Japan)
JOHNNIAC (RAND)
MANIAC I (Los Alamos National Laboratory)
MISTIC (Michigan State University)
ORACLE (Oak Ridge National Laboratory)
ORDVAC (Aberdeen Proving Ground)
PERM (Munich)
SARA (SAAB)
SEAC (Washington, D.C.)
SILLIAC (University of Sydney)
SMIL (Lund University)
WEIZAC (Weizmann Institute)
See also
Von Neumann architecture
List of vacuum tube computers
References
Further reading
Gilchrist, Bruce, "Remembering Some Early Computers, 1948-1960", Columbia University EPIC, 2006, pp. 7–9. (archived 2006) Contains some autobiographical material on Gilchrist's use of the IAS computer beginning in 1952.
Dyson, George, Turing's Cathedral, 2012, Pantheon, A book about the history of the Institute of Advanced Study around the making of this computer. Chapters 6 onward deal with this computer specifically.
External links
Oral history interviews concerning the Institute for Advanced Study—see also individual interviews with Willis H. Ware, Arthur Burks, Herman Goldstine, Martin Schwarzschild, and others. Charles Babbage Institute, University of Minnesota.
First Draft of a Report on the EDVAC – Copy of the original draft by John von Neumann
Photos: JvN standing in front of IAS machine and another view of IAS machine from
IAS machine
Vacuum tube computers |
22393853 | https://en.wikipedia.org/wiki/Shri%20Ramdeobaba%20College%20of%20Engineering%20and%20Management | Shri Ramdeobaba College of Engineering and Management | Shri Ramdeobaba College of Engineering and Management (RCOEM), formerly Shri Ramdeobaba Kamla Nehru Engineering College (SRKNEC), is a college in Nagpur, Maharashtra, India.
It is an ISO 9001:2015 certified institution and is NAAC-accredited with an 'A' grade. The college was established in 1984 by the Shri Ramdeobaba Sarvajanik Samiti trust. The college was granted academic autonomy from the 2011–12 session. All engineering programs of the college are NBA accredited (Washington Accord). A new sports complex was inaugurated in 2017 at Mohali village, outside Nagpur.
The institute is approved by AICTE, New Delhi and Government of Maharashtra and is permanently affiliated to Rashtrasant Tukadoji Maharaj Nagpur University (RTMNU). It is accredited by National Board of Accreditation (AICTE) for all eligible branches. It is a recognized centre for research for Ph.D. and M.E. (by research) by RTMNU.
Notable alumni
Alumni of the college are called Rcoemians.
Saiju Kurup, Prominent Malayalam cinema actor, has a cult following for his role as Arackal Abu in the 2015 comedy movie Aadu
Rajneesh Gurbani, Indian cricketer who plays for Vidarbha.
Departments
Civil Engineering
Computer Science and Engineering
Electrical Engineering
Electronics and Communication Engineering
Electronics Design Technology
Electronics Engineering
Industrial Engineering
Information Technology
Mechanical Engineering
Computer Science and Engineering (AI & ML)
Computer Science and Engineering (Cyber Security)
Computer Science and Engineering (Data Science)
Biomedical Engineering
Computer Applications (Masters)
Business Administration (Masters)
Rankings
The National Institutional Ranking Framework (NIRF) ranked it 119 among engineering colleges in 2021.
References
Engineering colleges in Nagpur
Rashtrasant Tukadoji Maharaj Nagpur University |
68744855 | https://en.wikipedia.org/wiki/Higinio%20Ochoa | Higinio Ochoa | Higinio Ochoa, also known as w0rmer, is an American hacker. In 2012, while associated with the hacker group CabinCr3w, he was arrested by the US Federal Bureau of Investigation (FBI) and ultimately served two years in federal prison for hacking. Ochoa is a member of the white-hat hacker group Sakura Samurai.
Career
Ochoa is a member of Sakura Samurai, a white-hat hacking group known for its large-scale breaches of governmental groups and corporations. Ochoa and others in Sakura Samurai were responsible for 2021 vulnerability disclosures pertaining to John Deere software.
Early hacking and conviction
In February 2012, Ochoa hacked protected computers including those of the Texas Department of Public Safety, Alabama Department of Public Safety, West Virginia Chiefs of Police Association and Houston County, Alabama. After accessing the systems, Ochoa downloaded and shared confidential and personal information from the systems, erased data, and vandalized websites. At the time, Ochoa was associated with CabinCr3w, a hacker group that had grown out of Anonymous.
Ochoa was arrested by the FBI specifically in relation to his access of Alabama Department of Public Safety computers, which had for some reason been connected with an FBI criminal database. Ochoa replaced the FBI database with his self-proclaimed trademark, a photo of a woman in a bikini, holding a sign reading "PwNd by w0rmer & CabinCr3w, <3 u BiTch's!" The woman in the photo had taken the picture with an iPhone that had location services enabled. Through this, the FBI traced the photo back to her exact coordinates, discovered her identity, and found her Facebook page, which revealed Ochoa as her fiancé. The FBI arrested Ochoa on March 20, 2012, in Galveston, Texas.
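The geotag extraction that led investigators to Ochoa can be reproduced in principle with any EXIF reader. A sketch using the Pillow library (Pillow 9.3 or later is assumed for the tag enums, and photo.jpg is a hypothetical geotagged image):

```python
from PIL import Image, ExifTags   # Pillow 9.3+ assumed for ExifTags.IFD / ExifTags.GPS

img = Image.open("photo.jpg")                       # hypothetical geotagged photo
gps = img.getexif().get_ifd(ExifTags.IFD.GPSInfo)   # GPS sub-directory, if present
if gps:
    lat = gps.get(ExifTags.GPS.GPSLatitude), gps.get(ExifTags.GPS.GPSLatitudeRef)
    lon = gps.get(ExifTags.GPS.GPSLongitude), gps.get(ExifTags.GPS.GPSLongitudeRef)
    print("latitude:", lat, "longitude:", lon)      # degrees/minutes/seconds + hemisphere
```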
On June 25, 2012, Ochoa was charged by the FBI with hacking into law enforcement systems and publishing personal information of officers, including phone numbers and home addresses, in what he and CabinCr3w called "Operation Pig Roast". Ochoa was sentenced to two years in prison and ordered to pay restitution for unauthorized access to the agencies' computers.
Media
In 2016, Ochoa was featured in Season 1, Episode 4 of the Showtime series Dark Net, in an episode titled "CTRL".
In 2020, Ochoa was featured in episode 63 of the podcast Darknet Diaries, in an episode titled "w0rmer".
References
Ethical hackers
Hackers
Living people
People convicted of cybercrime
People from Galveston, Texas
Sakura Samurai
Year of birth missing (living people) |
54457615 | https://en.wikipedia.org/wiki/Indian%20Institute%20of%20Information%20Technology%2C%20Surat | Indian Institute of Information Technology, Surat | The Indian Institute of Information Technology Surat (IIIT Surat) is one of the Indian Institutes of Information Technology, established by MHRD in PPP mode and located in Surat, Gujarat. The institute was conferred the status of Institute of National Importance (INI) on 5 February 2020. IIIT Surat is operating from its temporary premises at Sardar Vallabhbhai National Institute of Technology (SVNIT). The institute is mentored by SVNIT for an initial period of 2–3 years until the construction of the new campus. IIIT Surat is built on a public-private partnership (PPP) model, jointly funded by the State Government and industry partners Gujarat Narmada Valley Fertilisers & Chemicals, Gujarat Gas and Gujarat Informatics.
History
To address the challenges faced by the Indian IT industry and the growth of the domestic IT market, the Ministry of Human Resource Development (MHRD), Government of India, established twenty Indian Institutes of Information Technology (IIIT) on a not-for-profit public-private partnership basis. As part of this, IIIT Surat was planned and started with initial mentoring by SVNIT, Surat, until the new campus of IIIT is ready in Surat. A memorandum of understanding and a memorandum of association have been signed between the President of India, the Governor of the State of Gujarat and industry partners, namely Gujarat Narmada Fertilizer Corporation (GNFC), Gujarat Informatics Limited (GIL) and Gujarat Gas Limited (GGL). After a series of meetings at MHRD, the Directorate of Technical Education (Gujarat State) and SVNIT Surat planned various academic activities. The State Government has requested the Collector of Surat to allocate 50 acres of land in Surat, and the allocation is in progress. B.Tech. courses in Electronics & Communication Engineering and Computer Science & Engineering began on the SVNIT campus in July 2017.
Campus
Although the institute operates from its temporary campus at SVNIT Surat, campus life is comfortable compared with the other IIITs. The students of IIIT Surat enjoy most of the same campus facilities as SVNIT students.
The SVNIT campus is located in a posh locale in the heart of Surat, at a distance of only 8 km from Surat International Airport and 10 km from Surat Railway Station. The campus is near the various entertainment hubs of Surat. Being greener than most other NITs, IIITs, and IITs, it is also a favored place for joggers and walkers, who come regularly to the campus in the morning and evening. There are also many peacocks on the campus.
The campus also provides ample opportunities for sports enthusiasts. There are indoor badminton courts, outdoor basketball courts, volleyball courts, lawn tennis courts, table tennis court etc. The campus also has a large Student Activity Ground which is ideal for playing football and cricket. There is also a state-of-the-art gym with all sorts of modern equipment.
There is also a state-of-the-art library, which provides access to digital magazines and books.
Academics
Academic programmes
The institute currently offers only B.Tech courses in Computer Science Engineering and Electronics & Communication Engineering. The academic activities of IIIT-Surat are in time-synchronization with those of SVNIT.
The curriculum of both the branches is designed by keeping in mind the latest standards of the industry. Latest topics in the field of Information Technology like Machine Learning, Natural Language Processing, Artificial Intelligence etc. are also prescribed in the syllabus of both the branches. Advanced courses and a total of seven electives in later years enable students to specialize in signal processing, robotics, embedded systems, and other streams. The syllabus is also updated periodically in order to cater to the needs of the industry. The core subjects of the branches are introduced from the first year itself.
Admissions
Admission to these courses is through the JEE Main entrance exam. Counselling and seat allotment are conducted by the Joint Seat Allocation Authority (JoSAA).
References
External links
IIIT Surat
Surat
Universities and colleges in Gujarat
Education in Surat
2017 establishments in Gujarat
Educational institutions established in 2017 |
37415478 | https://en.wikipedia.org/wiki/Technologies%20in%202001%3A%20A%20Space%20Odyssey | Technologies in 2001: A Space Odyssey | The 1968 science fiction film 2001: A Space Odyssey featured numerous fictional future technologies, which have proven prescient in light of subsequent developments around the world. Before the film's production began, director Stanley Kubrick sought technical advice from over fifty organizations, and a number of them submitted their ideas to Kubrick of what kind of products might be seen in a movie set in the year 2001. The film is also praised for its accurate portrayal of spaceflight and vacuum.
Science
Accuracy
2001 is, according to four NASA engineers who based their nuclear-propulsion spacecraft design in part on the film's Discovery One, "perhaps the most thoroughly and accurately researched film in screen history with respect to aerospace engineering". Several technical advisers were hired for 2001, some of whom were recommended by co-screenwriter Arthur C. Clarke, who himself had a background in aerospace. Advisors included Marshall Spaceflight Center engineer Frederick I. Ordway III, who worked on the film for two years, and I. J. Good, whom Kubrick consulted on supercomputers because of Good's authorship of treatises such as "Speculations Concerning the First Ultraintelligent Machine" and "Logic of Man and Machine".
Dr. Marvin Minsky, of MIT, was the main artificial intelligence adviser for the film.
2001 accurately presents outer space as not allowing the propagation of sound, in sharp contrast to other films with space scenes in which explosions or sounds of passing spacecraft are heard. The film's portrayal of weightlessness in spaceships and outer space is also more realistic. Tracking shots inside the rotating wheel providing artificial gravity contrast with the weightlessness outside the wheel during the repair and Hal disconnection scenes. (Scenes of the astronauts in the Discovery pod bay, along with earlier scenes involving shuttle flight attendants, depict walking in zero gravity with the help of velcro-equipped shoes labeled "Grip Shoes".) Other aspects that contribute to the film's realism are the depiction of the time delay in conversations between the astronauts and Earth due to the extreme distance between the two (which the BBC announcer explains has been edited out of the broadcast), the attention to small details such as the sound of breathing inside the spacesuits, the conflicting spatial orientation of astronauts inside a zero-gravity spaceship, and the enormous size of Jupiter in relation to the spaceship.
The general approach to how space travel is engineered is highly accurate; in particular, the design of the ships was based on actual engineering considerations rather than attempts to look aesthetically "futuristic". Many other science-fiction films give spacecraft an aerodynamic shape, which is superfluous in outer space (except for craft such as the Pan Am shuttle that are designed to function both in atmosphere and in space). Kubrick's science advisor, Frederick Ordway, notes that in designing the spacecraft "We insisted on knowing the purpose and functioning of each assembly and component, down to the logical labeling of individual buttons and the presentation on screens of plausible operating, diagnostic and other data." Onboard equipment and panels on various spacecraft have specific purposes such as alarm, communications, condition display, docking, diagnostic, and navigation, the designs of which relied heavily on NASA's input. Aerospace specialists were also consulted on the design of the spacesuits and space helmets. The space dock at Moon base Clavius shows multiple underground layers which could sustain high levels of air pressure typical of Earth. The lunar craft design takes into account the lower gravity and lighting conditions on the Moon. The Jupiter-bound Discovery is meant to be powered by a nuclear reactor at its rear, separated from the crew area at the front by hundreds of feet of fuel storage compartments. Although difficult to be recognized as such, actual nuclear reactor control panel displays appear in the astronaut's control area.
The suspended animation of three of the astronauts on board is accurately portrayed as worked out by consulting medical authorities. Such hibernation would likely be necessary to conserve resources on a flight of this kind, as Clarke's novelization implies.
A great deal of effort was made to get the look of the lunar landscape right, based on detailed lunar photographs taken from observatory telescopes. The depiction of early hominids was based on the writings of anthropologists such as Louis Leakey.
Inaccuracy
The film is scientifically inaccurate in minor but revealing details; some due to the technical difficulty involved in producing a realistic effect, and others simply being examples of artistic license.
The appearance of outer space is problematic, both in terms of lighting and the alignment of astronomical bodies. In the vacuum of outer space, stars do not twinkle, and light does not become diffuse and scattered as it does in air. The side of the Discovery spacecraft unlit by the sun, for example, would appear virtually pitch-black in space. The stars would not appear to move in relation to Discovery as it traveled towards Jupiter, unless it was changing direction. Proportionally, the Sun, Moon and Earth would not visually line up at the size ratios shown in the opening shot, nor would the Galilean moons of Jupiter align as in the shot just before Bowman enters the Star Gate. Kubrick himself was aware of this latter point. (Due to the perfect Laplace resonance of the orbits of the four large moons of Jupiter, the first three never align, and the third moon, Ganymede, is always exactly 90 degrees away from the other two whenever the two innermost moons are in perfect alignment.). Similarly, during the scene in the Dawn of Man, where the Sun is seen above the monolith, a crescent moon is depicted close by in the sky. During this phase of the lunar cycle the Moon would be "new" and therefore be invisible. Finally, the edge of Earth appears sharp in the movie, when in reality it is slightly diffuse due to the scattering of the sunlight by the atmosphere, as is seen in many photos of Earth taken from space since the film's release.
The sequence in which Bowman re-enters Discovery shows him holding his breath just before ejecting from the pod into the emergency airlock. Doing this before exposure to a vacuum—instead of exhaling—would, in reality, rupture the lungs. In an interview on the 2007 DVD release of the film, Clarke states that had he been on the set the day they filmed this, he would have caught this error. In the same scene, the blown pod hatch simply and inexplicably vanishes while concealed behind a puff of smoke.
When spacecraft land on the Moon in the film, dust is shown billowing as it would in air, not moving in a sheet as it would in the vacuum of the Lunar surface, as can be seen in Apollo Moon landing footage. While on the Moon, all actors move as if in normal Earth gravity, not as they would in the 1/6 gravity of the Moon. Similarly, the behavior of Dave and Frank in the weightless pod bay is not fully consistent with a zero-G environment. Although the astronauts are wearing zero-G 'grip shoes' in order to walk normally, they are oddly leaning on the table while testing the AE-35 unit as if held down by gravity. Finally, in an environment with a radius as small as the main quarters, the simulated gravity would vary significantly from the center of the crew quarters to the 'floor', even varying between feet, waist, and head. The rotation speed of the crew quarters was meant to be only fast enough to generate an approximation of the Moon's gravity, not that of the Earth. However, Clarke felt this was enough to prevent the physical atrophy that would result from complete weightlessness.
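The gravity gradient is straightforward to estimate from the centripetal relation a = ω²r. Assuming a floor radius of about 5.5 m, roughly matching the film's centrifuge set, and the lunar-level gravity the quarters were meant to simulate:

```python
import math

g_moon = 1.62          # m/s^2, the level the quarters were meant to simulate
r_floor = 5.5          # m, assumed radius of the rotating quarters
omega = math.sqrt(g_moon / r_floor)    # spin rate that gives g_moon at the floor
for label, r in [("feet", r_floor), ("waist", r_floor - 1.0), ("head", r_floor - 1.7)]:
    print(f"{label:>5}: {omega**2 * r:.2f} m/s^2")
# head-level "gravity" comes out roughly 30% weaker than at the feet
```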
The first two appearances of the monolith, one on Earth and one on the Moon, conclude with the Sun at its zenith over the top of the monolith. While this could happen in an African veldt anywhere between the Tropic of Cancer and the Tropic of Capricorn, it could not happen anywhere near the crater Tycho (where the monolith is found) as it is 45 degrees south of the lunar equator. Also implausible is the Sun reaching its zenith so soon after a lunar sunrise, and the appearance of a crescent Earth near the Sun is in complete discontinuity with all previous appearances of Earth, whose position from any spot on the Moon varies only slightly due to libration.
During Floyd's approach to the space station, parts of the spinning wheel appear to be under construction, consisting of nothing more than bare internal structure. Geophysicist Dr. David Stephenson in the Canadian TV documentary 2001 and Beyond notes that "Every engineer that saw it [the space station] had a fit. You do not spin on a wheel that is not fully built. You have to finish it before you spin it or else you have real problems".
There are other problems that might be more appropriately described as continuity errors, such as the back-and-forth horizontal switching of Earth's lit side when viewed from Clavius, and the schematic of the space station on the Pan Am spaceplane's monitors continuing to rotate after the plane has synchronized its motion with the station. The latter is due to the position readout actually being a rear-projected film shown in a continuous loop, and being out of sync with other visual elements. The direction of the rotation of the Earth's image outside the space station window is clockwise when Floyd is greeted by a receptionist, but counterclockwise when he phones his daughter.
Imagining the future
Over fifty organizations contributed technical advice to the production, and a number of them submitted their ideas to Kubrick of what kind of products might be seen in a movie set in the year 2001. Much was made by MGM's publicity department of the film's realism, claiming in a 1968 brochure that "Everything in 2001: A Space Odyssey can happen within the next three decades, and...most of the picture will happen by the beginning of the next millennium." Although the predictions central to the plot—colonization of the Moon, manned interplanetary travel and artificial intelligence—did not materialize by that date, some of the film's other futuristic elements have indeed been realized.
Depiction of computers
As the central character of the "Jupiter Mission" segment of the film, HAL was shown by Kubrick to have as much intelligence as human beings, possibly more, while sharing their same "emotional potentialities". Kubrick agreed with computer theorists who believed that highly intelligent computers that can learn by experience will inevitably develop emotions such as fear, love, hate, and envy. Such a machine, he said, would eventually manifest human mental disorders as well, such as a nervous breakdown—as Hal did in the film.
Clarke noted that, contrary to popular rumor, it was a complete coincidence that each of the letters of Hal's name immediately preceded those of IBM in the alphabet. The meaning of HAL has been given both as "Heuristically programmed ALgorithmic computer" and as "Heuristic ALgorithmic computer". The former appears in Clarke's novel of 2001 and the latter in his sequel novel 2010. In computer science, a heuristic is a programmable procedure not necessarily based on fixed rules, producing informed guesses often using trial-and-error. The results can be false such as in predictions of stock market, sports scores, or the weather. Sometimes this can entail selecting on-the-fly one of several methods to solve a problem based on previous experience. On the other hand, an algorithm is a programmable procedure that produces reproducible results using invariant established methods (such as computing square roots).
A heuristic approach that usually works within a tolerable margin of error may be preferred over a perfect algorithm that requires a long time to run.
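The contrast can be made concrete with the classic square-root example: a fixed algorithm gives a reproducible, exact answer, while a heuristic cut off early gives an informed guess whose quality depends on how much work is spent. A sketch:

```python
import math

def sqrt_exact(n: float) -> float:
    # Algorithm: a fixed method with reproducible, machine-precision results.
    return math.sqrt(n)

def sqrt_guess(n: float, steps: int) -> float:
    # Heuristic: Newton iteration cut off early, trading accuracy for speed.
    x = n / 2 or 1.0
    for _ in range(steps):
        x = 0.5 * (x + n / x)
    return x

print(sqrt_exact(1e6))                      # 1000.0
for steps in (5, 10, 15):
    print(steps, round(sqrt_guess(1e6, steps), 3))
# 5 steps gives a rough guess; by 15 the guess matches the exact value
```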
During Apple and Samsung's patent war over consumer electronics design, in 2011 Samsung used a still image from the scene in which two astronauts are eating at a table with what appear to be tablet computers as an exhibit to counter Apple's patent claiming the original abstract design of tablet computers.
Common 21st-century computer technologies not depicted in the film include keyboards, mice, mobile phones, touch screens, and interfaces with windows, menus, and icons. Although there are devices that resemble tablet computers, they are only used in the film as portable video screens.
Depiction of spacecraft
All of the vehicles in 2001 were designed with extreme care in order for the small-scale models as well as full-scale interiors to appear realistic. The modeling team was led by Kubrick's two hirees from NASA, science advisor Fred Ordway and production designer Harry Lange, along with Anthony Masters who was responsible for turning Lange's 2-D sketches into models. Ordway and Lange insisted on knowing "the purpose and functioning of each assembly and component, down to the labeling of individual buttons and the presentation on screens of plausible operating, diagnostic and other data." Kubrick's team of thirty-five designers was often frustrated by script changes done after designs for various spacecraft had been created. Douglas Trumbull, chief special effects supervisor, writes "One of the most serious problems that plagued us throughout the production was simply keeping track of all ideas, shots, and changes and constantly re-evaluating and updating designs, storyboards, and the script itself. To handle all of this....a "control room"...was used to keep track of all progress on the film." Ordway (who worked on designing the station and the five principal space vehicles) has noted that U.S. industry had problems satisfying Kubrick with its equipment suggestions, while design aspects of the vehicles had to be updated often to accommodate rapid screenplay changes, one crew member resigning over an unspecified related issue. Eventually, conflicting ideas of what Kubrick had in mind, what Clarke was writing, and equipment and vehicular realities emerging from Ordway, Lange, Masters, and construction supervisor Dick Frift and his team were resolved, and coalesced into final designs and construction of the spacecraft before filming began in December 1965.
Other technologies
One futuristic device shown in the film already under development when the film was released in 1968 was voice-print identification; the first prototype was released in 1976. A credible prototype of a chess-playing computer already existed in 1968, even though it could be defeated by experts; computers did not defeat champions until the late 1980s. While 10-digit phone numbers for long-distance national dialing originated in 1951, longer phone numbers for international dialing became a reality in 1970. Installation of personal in-flight entertainment displays by major airlines began in the early-to-mid 1990s, offering video games, TV broadcasts and movies in a manner similar to that shown in the film. The film also shows flat-screen TV monitors, of which the first real-world prototype appeared in 1972 produced by Westinghouse, but was not used for broadcast television until 1998. Plane cockpit integrated system displays, known as "glass cockpits", were introduced in the 1970s (originally in NASA Langley's Boeing 737 Flying Laboratory). Today such cockpits appear not only in high-tech aircraft like the Boeing 777, but have also been employed in space shuttles, the first being Atlantis in 1985. Rudimentary voice-controlled computing began in the early 1980s with the SoftVoice Computer System and existed in more sophisticated form by the early 2000s, although not as sophisticated as depicted in the film. The first picture phone was demonstrated at the 1964 New York World's Fair; however, due to the bandwidth limitations of telephone lines, personal video communication did not succeed commercially and has only been practical over broadband internet connections. Personal (audio) wireless telephones were ubiquitous in 2001, and yet no one in the movie had a small personal communication device.
Some technologies portrayed as common in the film which had not materialized in the 2000s include commonplace civilian space travel, space stations with hotels, Moon colonization, suspended animation of humans, practical nuclear propulsion in spacecraft and strong artificial intelligence of the kind displayed by Hal.
Companies and countries
There are corporate logos and entities in the film that either didn't exist, no longer exist, or were broken up by anti-trust lawsuits. Still others changed their business model or represent countries that no longer exist.
The British Broadcasting Corporation never expanded to have a BBC-12. BBC Three and Four came into existence in 2003 and 2002 respectively and newer channels used names such as BBC News and BBC Parliament. The corporations IBM, Aeroflot, Howard Johnson's, Whirlpool Corporation and Hilton Hotels, visual references of which appear in the film, have survived beyond 2001, although by 2001 Howard Johnson's had switched its business focus to hotels, rather than the restaurants shown in the film. The film depicts a still-existing Pan Am (which went out of business in 1991) and a still-existing Bell System telephone company (which was broken up in 1984 as a result of an anti-monopoly lawsuit filed by the U.S. Justice Department). The Bell System logo seen in the film was modified in 1969 and dropped entirely in 1983.
See also
Technology in science fiction
References
External links
2001: A Space Odyssey Internet Resource Archive
The 2001: A Space Odyssey Collectibles Exhibit
The Alt.Movies.Kubrick FAQ many observations on the meaning of 2001
The Kubrick Site including many works on 2001
American Institute of Aeronautics, 40 Anniversary article in Houston Section, Horizons, April 2008
Fictional technology by work
Space Odyssey |
2844791 | https://en.wikipedia.org/wiki/OS%2010 | OS 10 | OS 10 or operating system 10 may refer to:
Apple
Mac OS X, the Apple Macintosh operating system succeeding Classic Mac OS
Mac OS X 10.0, the initial release of Mac OS X in 2001, succeeding Mac OS System 9
OS X Yosemite (OS X 10.10), the 11th major version of Mac OS X in 2014
iOS 10, the 10th major version of iPod and iPhone OS in 2016
tvOS 10, the 7th major version of the AppleTV OS in 2016, tvOS being a variant of iOS
Other uses
Version 10 Unix, released in 1989, the last version of the original Unix of Bell Labs
Android 10, Google Android OS 10 released in 2019
BlackBerry 10 (BBX, BB10), BlackBerry OS 10.0 based on QNX succeeding the preceding BlackBerry OS 7.1
SmartFabric OS10, the networking hardware management OS by Dell EMC
TOPS-10, the Digital Equipment Corporation operating system
Windows 10, the Microsoft Windows major release (v 10.0) succeeding Windows 8 (v 6.4)
Windows 10 Mobile, the Microsoft Windows OS (v 10.0) for mobile devices succeeding Windows Phone 8.1
See also
System 10 (disambiguation)
X Window System core protocol version 10 (X10), the predecessor to the popular and current X11 X/Windows
OS (disambiguation)
OS9 (disambiguation)
O10 (disambiguation)
S10 (disambiguation) |
18436744 | https://en.wikipedia.org/wiki/Henrik%20I.%20Christensen | Henrik I. Christensen | Henrik Iskov Christensen (born July 16, 1962 in Frederikshavn, Denmark) is a Danish roboticist and Professor of Computer Science in the Department of Computer Science and Engineering at the UC San Diego Jacobs School of Engineering. He is also the Director of the Contextual Robotics Institute at UC San Diego.
Prior to UC San Diego, he was a Distinguished Professor of Computer Science in the School of Interactive Computing at the Georgia Institute of Technology. At Georgia Tech, Christensen served as the founding director of the Institute for Robotics and Intelligent Machines (IRIM@GT) and the KUKA Chair of Robotics.
Previously, Christensen was the Founding Chairman of European Robotics Research Network (EURON) and an IEEE Robotics and Automation Society Distinguished Lecturer in Robotics.
Biography
Christensen received his Certificate of Apprenticeship in Mechanical Engineering from the Frederikshavn Technical School, Denmark in 1981. He received his M.Sc. and Ph.D. in Electrical Engineering from Aalborg University in 1987 and 1990, respectively. His doctoral thesis Aspects of Real Time Image Sequence Analysis was advised by Erik Granum.
After receiving his Ph.D., Christensen held teaching and research positions at Aalborg University, Oak Ridge National Laboratory, and the Royal Institute of Technology. In 2006, Christensen accepted a part-time position at the Georgia Institute of Technology as a Distinguished Professor of Computer Science and the KUKA Chair of Robotics, and transitioned to full-time in early 2007. At Georgia Tech, Christensen served as the founding director of the Center for Robotics and Intelligent Machines (RIM@GT), an interdepartmental research unit consisting of the College of Computing, the College of Engineering, and the Georgia Tech Research Institute (GTRI). During his tenure, RIM@GT experienced unprecedented growth, including (as of 2008) 36 faculty members as well as a dedicated interdisciplinary Ph.D. program in Robotics.
He joined UC San Diego in the fall of 2016 as director of the UC San Diego Contextual Robotics Institute. The institute does research on robots that empower people in their daily lives, across work, leisure, and domestic tasks. An important consideration is the context in which the robot is to perform its tasks.
Research
DARPA Urban Grand Challenge
In 2007, Christensen led Georgia Tech's team in the DARPA Urban Grand Challenge as the principal investigator. The 2007 UGC was the third installment of the DARPA Grand Challenges (following events in 2004 and 2005), and took place on November 3, 2007 at the site of the now-closed George Air Force Base (currently used as Southern California Logistics Airport) in Victorville, California. The event involved an urban-area course, to be completed in less than six hours while obeying all traffic regulations.
Professional activities
Associate Editor of Journal of Machine Vision and Applications, Springer Verlag (1996–2004).
Associate Editor of International Journal of Pattern Recognition and Artificial Intelligence, World Scientific Press (1997–2005)
Associate Editor of MIT Press series on "Intelligent Robotics and Autonomous Agents", (1997–)
Associate Editor of IEEE Transactions on Pattern Analysis and Machine Intelligence (1999–2003)
Associate Editor Robotics and Autonomous Systems journal, Elsevier, Competition Corner, (1999–2002)
Associate Editor of AAAI AI Magazine (2000–2007)
Associate Editor of Springer Series on "Springer Tracts in Advanced Robotics", (2001–)
Associate Editor of International Journal of Robotics Research, (2002–)
Associate Editor of Service Robotics, (2005–)
Patents
Position Estimation Method, H.I. Christensen & G. Zunino, World Patent (WO03062937), March 11, 2008
Förfarande för en anordning på hjul ("Method for a device on wheels"), G. Zunino & H.I. Christensen, Swedish Patent (SE0200197)
Mobile Robot, P. Jensfelt & H.I. Christensen, World Patent
Honors and awards
The Foundation Vision North 1991 Research Award, awarded in August 1991 for contributions to the advancement of research at the Laboratory of Image Analysis, Aalborg University
Elected Officer of International Foundation of Robotics Research (2003–)
IEEE RAS Distinguished Lecturer in Robotics (2004–2006)
Engelberger Award for Education (2011)
References
External links
Home page
UC San Diego announcement of Christensen's hire
1962 births
Living people
People from Frederikshavn
Roboticists
Control theorists
Georgia Tech faculty
University of California, San Diego faculty
Danish scientists
Computer vision researchers
Aalborg University alumni |
49077971 | https://en.wikipedia.org/wiki/PeopleDoc | PeopleDoc | PeopleDoc by Ultimate Software is a cloud-based HR service delivery and HR document management platform. It was named by Gartner as a Cool Vendor in Human Capital Management in 2014. PeopleDoc by Ultimate Software customers include American Express, Fast Retailing, Georgetown University, Match.com, Starbucks and Motorola. The company has also established partnerships with SAP SuccessFactors, Workday, Grant Thornton LLP, PwC and Accenture. In 2018, the France-based company was acquired by Ultimate Software, a U.S.-based HR technology company, for approximately $300 million. PeopleDoc continues under that name as a division of UKG, the successor company of Ultimate Software.
History
PeopleDoc started as Novapost in 2007, on the campus of the HEC Business School in Paris as an idea to provide a unified digital file management system.
In 2009, the HR community took notice of the consumer document vault and asked the co-founders to design a product specifically for HR. PeopleDoc then developed cloud technology to assist HR administrative staff.
In 2014, PeopleDoc expanded its human resources software service and raised a $17.5 million Series B round led by Accel Partners.
In August 2015, PeopleDoc sponsored a national HR survey, focused on the intersection of technology, HR, and how HR professionals are handling documents.
In September 2015, PeopleDoc raised a $28 million Series C round of investment led by Paris-based Eurazeo, with PeopleDoc's existing investors (Accel Partners, Alven Capital and Kernel Capital Partners) also participating.
In 2018, the France-based technology company was acquired by Ultimate Software for $300 million in cash and stock.
Software
PeopleDoc by Ultimate Software has four main components in its HR service delivery platform. Its HR Document Management system allows employee and HR documents to be centralized from multiple sources, including paper and existing HR systems, and stored in the cloud. Its Employee Portal and Case Management components enable companies to automate HR processes, ensure consistency and regulatory compliance, and improve employee satisfaction. Finally, its Employee Onboarding portal allows for electronic documents and electronic signatures via a company-branded portal.
References
Software companies established in 2007
American companies established in 2007
Software companies based in New York (state)
Document management systems |
17891242 | https://en.wikipedia.org/wiki/Jackpot247 | Jackpot247 | Jackpot247 (formerly Challenge Jackpot) is an interactive gambling website owned by Betsson Group, which previously had television segments on ITV (except ITV Channel Television) and Vox Africa. The "Challenge Jackpot" brand was dropped in September 2011 and was replaced by "Jackpot247" (Jackpot247.com). Challenge Jackpot was also a British interactive gaming channel owned by Living TV Group (later British Sky Broadcasting) and operated by NetPlay TV. Its final programme was in the early hours of 1 July 2019 on ITV.
In March 2017, Betsson Group acquired Jackpot247, having purchased Netplay TV for £26 million.
Business
NetPlay TV plc acquired the business assets of Two Way Gaming Ltd for £2 million in stock. NetPlay TV signed production and gaming agreements with Virgin Media for an initial period until 30 June 2013, and took over production of the Challenge Jackpot brand, including its website and television channel.
On 25 March 2010, NetPlay TV and Virgin Media Television agreed to the termination of the option agreement entered into on 7 April 2009 under which VMTV was granted options over 14.9m ordinary shares being 9.9 per cent of the share capital then in issue at a price of 18p per share (the "Option Agreement"). Under the revised agreement NetPlay TV took control of the Challenge Jackpot database and terminated the Option Agreement in exchange for a fixed cash payment of £1.82m. The database generated £2.9m of gross gaming margin from 12 May to 31 December 2009 and was subject to a revenue share agreement. Under the revised terms all revenues arising from this database will be retained by NetPlay TV, with VMTV receiving fixed monthly payments that reflected the value of its airtime.
Restrictions
Jackpot247 is not available to those who live in Northern Ireland or the Channel Islands.
Television channel
Challenge Jackpot was launched by Virgin Media Television, in collaboration with Two Way Media, on 1 July 2008 as a 24-hour television channel on Sky and Virgin Media. It was not available in Northern Ireland, Republic of Ireland or the Channel Islands due to "regulatory and legal restrictions". Games were overseen by Ofcom and, because Two Way's gaming division was based there, the Alderney Gambling Control Commission.
Publicis was appointed to handle the £3 million launch brief for Challenge Jackpot, creating a TV, print and online campaign with the strapline "your favourite place to play". The main programme, Roulette Nation (Live), aired between 10pm and 3am every day (previously live between 6pm and 4am), with Roulette Nation Express taking over between 3am and 10pm. Bingo Stars aired for 30 minutes, at around 12:30am.
Roulette Nation was also available on Freeview via Virgin1, later Channel One, between midnight and 3am. Bravo 2 broadcast the show between 1am and 3am (Roulette Nation Express took over for an hour, ending at 4am).
On 20 May 2010, ITV launched its new teleshopping strand, The Zone, which introduced NetPlay TV's new show Bingo Stars. Roulette Nation aired in the two-hour block for around 30 minutes. Since 2012, Jackpot247 has been listed as a stand-alone programme on ITV, airing after midnight until 3am (except on ITV Channel Television) and no longer using The Zone branding.
On 12 February 2010, the Scottish ITV licensee STV replaced STV Casino with Roulette Nation – airing between midnight and 1am every night, until STV aired the show for the last time on 18 August 2010, when interactive quiz show Brain Box took over the slot.
On Virgin Media's cable television service, an interactive application developed by Two Way Media enabled viewers to play along with live programming on the channel; alternatively, viewers were able to participate on the channel's website.
The channel closed down on 1 January 2011, shortly after the purchase of Virgin Media Television by BSkyB from Virgin Media, and was replaced with Aastha TV on Sky.
Presenters
TV advertisements
In April 2014, Jackpot247 singled out the Welsh village of Pwllgloyw in a TV commercial as one of the worst places in the UK for mobile internet reception. According to Ofcom, Powys has the poorest 3G reception in mainland England and Wales, and the area around Pwllgloyw falls in the worst 6% of the UK for 3G coverage across all network operators. The company claimed that, in places like Pwllgloyw, players are unable to win jackpots 24/7. The follow-up commercial showed a Jackpot247 employee, Terry, whose job is to eat a slice of cake every time someone wins.
Data breach
In January 2020, Jackpot247 sent emails to all of their customers advising them that they had suffered a data breach with the following message:
"We regret to inform you that Jackpot247 has suffered a security incident and some of your personal data has been revealed to an unauthorized person. We took various mitigating measures and the unauthorised person is no longer able to access your data. Rest assured that our investigations show that your credit card, payment information, password and copies of any documents sent to Jackpot247 have not been accessed and remain secure. After conducting detailed investigations into the incident, we can confirm that the unauthorised person has been able to access your username and name, email address, telephone number, residential address, date of registration and some internal activity classifications that are not of relevance to the unauthorized person.
It is our duty to report this data breach to you and inform you what data has been compromised."
Users were advised to reset passwords and be wary of phishing emails being received.
References
External links
Living TV Group channels
ITV (TV network) original programming
Television channels in the United Kingdom
Television channels and stations established in 2008 |
10794420 | https://en.wikipedia.org/wiki/Ho%20Chi%20Minh%20City%20International%20University | Ho Chi Minh City International University | VNUHCM - International University (Ho Chi Minh City International University, Vietnamese: Trường Đại học Quốc tế, Đại học Quốc gia Thành phố Hồ Chí Minh) is the first English-medium public research university in Vietnam. Established in 2003, it has become one of the leading research universities in the country. The university is affiliated with the Vietnam National University, Ho Chi Minh City (VNU-HCM).
The university conducts all its administrative, academic, and research activities in the Thu Duc university village, a 77-hectare joint land endowment between Ho Chi Minh City and Binh Duong Province. It is home to a Regional Centre of Expertise on Education for Sustainable Development, a non-profit organisation that works closely with the United Nations and 136 other RCEs to incorporate sustainable development into education.
The teaching is conducted in English. In addition to entrance exams, students must take an English language test or hold a TOEFL, TOEIC, IELTS or equivalent English certificate, as required by HCMIU and its partner universities.
HCMIU offers a wide variety of courses in business studies and engineering at both undergraduate and postgraduate levels.
Admission
HCMIU provides instruction primarily in English, except for some courses that are required to be conducted in Vietnamese.
The undergraduate programs include: business administration, finance & banking, logistics, biotechnology, biochemistry, food technology, electrical engineering, automation & control, information technology (computer science engineering), biomedical engineering, civil engineering, industrial systems engineering, space engineering, environmental engineering, financial engineering & risk management (applied mathematics) and English language.
Cooperative and Twinning Programs
The university has education partnerships with universities in the United States, the United Kingdom, Australia and New Zealand. Students may study two years at HCMIU and two or three further years overseas. The tuition fee is higher than at other Vietnamese universities. Diplomas are issued by HCMIU's partner universities.
United States:
Rutgers, The State University of New Jersey
Binghamton University
University of Houston
United Kingdom:
The University of Nottingham
University of the West of England
Australia:
The University of New South Wales
New Zealand:
Auckland University of Technology
Postgraduate Programs
Doctor of Biotechnology
Doctor of Business administration
Master of Business Administration
Master of Science in Biotechnology
Master of Science in Electrical Engineering
Master of Science in Information Management
Master of Science in Biomedical Engineering
Master of Science in Industrial Systems Engineering
Master of Science in Applied Mathematics
Master of Science in Leadership, joint program with Northeastern University, United States
Schools and Departments
Currently, International University has 4 schools and 6 departments as listed below:
School of Business
Programs offered:
Business Administration, with 4 specializations:
Business Management
Marketing
Hospitality - Tourism Management
International Business
Finance and Banking, with 2 specializations:
Corporate Finance
Banking and Financial Investment
School of Biotechnology
Programs offered:
Biotechnology, with 4 tracks:
BioMedical
Molecular
Industrial
Marine and Environmental Science
Food Technology, with 2 tracks:
Production Management
Technology-Engineering
Aquatic Resource Management, with 2 tracks:
Management
Technology
Chemical Biology
School of Electrical Engineering
Programs offered:
Electrical Engineering, with the following specializations:
Electronics and Embedded Systems
Telecommunication Networks
Signal Processing
RF Design
Automation and Control Engineering
School of Computer Science and Engineering
Programs offered:
Information Technology, with 3 specializations:
Network Engineering
Computer Engineering
Computer Science
Department of Biomedical Engineering
Programs offered:
Biomedical Engineering, with 4 tracks:
Medical Instrumentation
Biomedical Signal and Image Processing
Pharmaceutical Engineering
Regenerative Medicine
Department of Industrial Systems Engineering
Programs offered:
Industrial Systems Engineering
Logistics and Supply Chain Management
Department of Civil Engineering
The department offers the program of Civil Engineering.
Department of Physics
The department offers an undergraduate program in Space Engineering, which specialises in Image Analysis and Remote Sensing. It is responsible for all Physics courses. Its scientific research includes the fields of Galactic Astronomy and Plasma Physics.
Department of Mathematics
The department offers the program of Applied Mathematics with the specialization of Financial Engineering and Risk Management. It is also responsible for other mathematical-related courses.
Department of English
The English Department provides globally standardized language programs to assist students in their studies, which are conducted wholly in English.
Accreditation
The International University has received accreditation from the ASEAN University Network - Quality Assurance (AUN-QA) for the following programs:
Computer Science (2009)
Biotechnology (2011)
Business Administration (2012)
Electronics and Telecommunications (2013, AUN - DAAD)
Industrial Systems Engineering (2015)
Student life
Student organizations
Students at the International University run over 15 clubs and organizations, including volunteer groups, academic clubs and common-interest teams. Most organizations are funded and governed by the Youth Union and the Students Union, while a few others are directly run and supported by Schools and Departments.
Youth Union:
Event Department
External Relations Department
Information Department
Science and Technology Department
Students Union:
Social Work Team (SWT)
English Teaching Volunteers (ETV)
Arts Team (Arteam)
Enactus IU (formerly SIFE IU)
Soft Skills Club (SSC)
ISE Art club (IAC)
Guitar Club (GC)
English Club (IEC)
French Club (FC)
Sports Club (SC)
Japan Club
Korean Club (KYG)
Other organizations:
Securities Exchange Club (SEC)
Marketing Club (Martic)
IT Club (ITC)
English Speaking Club (ESC)
IU Buddy (Exchange student support group)
Student accommodation
Students can register for housing services at the following dormitories:
VNU-HCMC Dormitory
VNU-HCMC Guest House
Dormitories in city center: Phan Liem and Ly Van Phuc streets.
Campuses
IU Main Campus, Thu Duc
Ground floor, VNU-HCMC Central Library
IU City Campus, 234 Pasteur, District 3 (old address: 33 Ly Tu Trong, District 1)
Board of Rectors
Rector: A/Prof. Dr. Trần Tiến Khoa, PhD.
Vice Rectors:
A/Prof. Lê Văn Cảnh, PhD.
Dr. Hồ Nhựt Quang, PhD.
A/Prof. Đinh Đức Anh Vũ, PhD.
References
External links
Official Website
Universities in Ho Chi Minh City
Educational institutions established in 2003
2003 establishments in Vietnam |
49542962 | https://en.wikipedia.org/wiki/FBI%E2%80%93Apple%20encryption%20dispute | FBI–Apple encryption dispute | The FBI–Apple encryption dispute concerns whether and to what extent courts in the United States can compel manufacturers to assist in unlocking cell phones whose data are cryptographically protected. There is much debate over public access to strong encryption.
In 2015 and 2016, Apple Inc. received and objected to or challenged at least 11 orders issued by United States district courts under the All Writs Act of 1789. Most of these seek to compel Apple "to use its existing capabilities to extract data like contacts, photos and calls from locked iPhones running on operating systems iOS 7 and older" in order to assist in criminal investigations and prosecutions. A few requests, however, involve phones with more extensive security protections, which Apple has no current ability to break. These orders would compel Apple to write new software that would let the government bypass these devices' security and unlock the phones.
The most well-known instance of the latter category was a February 2016 court case in the United States District Court for the Central District of California. The Federal Bureau of Investigation (FBI) wanted Apple to create and electronically sign new software that would enable the FBI to unlock a work-issued iPhone 5C it recovered from one of the shooters who, in a December 2015 terrorist attack in San Bernardino, California, killed 14 people and injured 22. The two attackers later died in a shootout with police, having first destroyed their personal phones. The work phone was recovered intact but was locked with a four-digit password and was set to eliminate all its data after ten failed password attempts (a common anti-theft measure on smartphones). Apple declined to create the software, and a hearing was scheduled for March 22. However, a day before the hearing was supposed to happen, the government obtained a delay, saying it had found a third party able to assist in unlocking the iPhone. On March 28, the government announced that the FBI had unlocked the iPhone and withdrew its request. In March 2018, the Los Angeles Times reported that "the FBI eventually found that Farook's phone had information only about work and revealed nothing about the plot."
In another case in Brooklyn, a magistrate judge ruled that the All Writs Act could not be used to compel Apple to unlock an iPhone. The government appealed the ruling, but then dropped the case on April 22, 2016, after it was given the correct passcode.
Background
In 1993, the National Security Agency (NSA) introduced the Clipper chip, an encryption device with an acknowledged backdoor for government access, that NSA proposed be used for phone encryption. The proposal touched off a public debate, known as the Crypto Wars, and the Clipper chip was never adopted.
It was revealed as part of the 2013 mass surveillance disclosures by Edward Snowden that the NSA and the British Government Communications Headquarters (GCHQ) had access to the user data on iPhone, BlackBerry, and Android phones and could read almost all smartphone information, including SMS messages, location data, emails, and notes. The leak also stated that Apple had been a part of the government's surveillance program since 2012; however, according to an Apple spokesman at the time, the company "had never heard of it".
According to The New York Times, Apple developed new encryption methods for its iOS operating system, versions 8 and later, "so deep that Apple could no longer comply with government warrants asking for customer information to be extracted from devices." Throughout 2015, prosecutors advocated for the U.S. government to be able to compel decryption of iPhone contents.
In September 2015, Apple released a white paper detailing the security measures in its then-new iOS 9 operating system. The iPhone 5C model can be protected by a four-digit PIN code. After ten consecutive incorrect attempts to unlock the phone, the contents of the phone are rendered inaccessible by erasing the AES encryption key that protects its stored data. According to the Apple white paper, iOS includes a Device Firmware Upgrade (DFU) mode, and "[r]estoring a device after it enters DFU mode returns it to a known good state with the certainty that only unmodified Apple-signed code is present."
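The effect of these measures can be illustrated with a short sketch. The following Python fragment is illustrative only, not Apple's implementation; the check_pin callback and the example PIN are invented for the illustration. It shows why a four-digit PIN (10,000 possibilities) is trivial to search exhaustively, and how erasing the key after ten failures defeats that search.

# Illustrative sketch only, not Apple's implementation: a four-digit PIN
# has just 10,000 possible values, so an unrestricted electronic search
# is trivial; erasing the encryption key after ten failures defeats it.

PIN_SPACE = 10 ** 4        # PINs 0000 through 9999
MAX_ATTEMPTS = 10          # failures allowed before the AES key is erased

def brute_force(check_pin, max_attempts=MAX_ATTEMPTS):
    """Guess PINs in order; stop (data wiped) once the limit is reached."""
    for attempt, candidate in enumerate(range(PIN_SPACE), start=1):
        guess = f"{candidate:04d}"
        if check_pin(guess):
            return guess
        if attempt >= max_attempts:
            return None    # key erased; remaining data is unrecoverable

secret = "4831"            # hypothetical PIN, for illustration only
print(brute_force(lambda p: p == secret))             # None: wiped at ten
print(brute_force(lambda p: p == secret, PIN_SPACE))  # '4831'

Under the ten-attempt limit, only 0.1% of the PIN space can ever be tried, which is why the FBI's later request centred on disabling that limit.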
Apple ordered to assist the FBI
The FBI recovered an Apple iPhone 5C—owned by the San Bernardino County, California government—that had been issued to its employee, Syed Rizwan Farook, one of the shooters involved in the December 2015 San Bernardino attack. The attack killed 14 people and seriously injured 22. The two attackers died four hours after the attack in a shootout with police, having previously destroyed their personal phones. Authorities were able to recover Farook's work phone, but could not unlock its four-digit passcode, and the phone was programmed to automatically delete all its data after ten failed password attempts.
On February 9, 2016, the FBI announced that it was unable to unlock the county-owned phone it recovered, due to its advanced security features, including encryption of user data. The FBI first asked the National Security Agency to break into the phone, but the NSA was unable to do so, as it only had the capability to break into devices commonly used by criminals, not iPhones. As a result, the FBI asked Apple Inc. to create a new version of the phone's iOS operating system, referred to as "GovtOS", that could be installed and run in the phone's random access memory to disable certain security features. Apple declined, citing its policy of never undermining the security features of its products. The FBI responded by successfully applying to a United States magistrate judge, Sheri Pym, to issue a court order, mandating Apple to create and provide the requested software. The order was not a subpoena, but rather was issued under the All Writs Act of 1789. The court order, called In the Matter of the Search of an Apple iPhone Seized During the Execution of a Search Warrant on a Black Lexus IS300, California License Plate 35KGD203, was filed in the United States District Court for the Central District of California.
The use of the All Writs Act to compel Apple to write new software was unprecedented and, according to legal experts, it was likely to prompt "an epic fight pitting privacy against national security." It was also pointed out that the implications of the legal precedent that would be established by the success of this action against Apple would go far beyond issues of privacy.
Technical details of the order
The court order specified that Apple provide assistance to accomplish the following:
"it will bypass or disable the auto-erase function whether or not it has been enabled" (this user-configurable feature of iOS 8 automatically deletes keys needed to read encrypted data after ten consecutive incorrect attempts)
"it will enable the FBI to submit passcodes to the SUBJECT DEVICE for testing electronically via the physical device port, Bluetooth, Wi-Fi, or other protocol available"
"it will ensure that when the FBI submits passcodes to the SUBJECT DEVICE, software running on the device will not purposefully introduce any additional delay between passcode attempts beyond what is incurred by Apple hardware"
The order also specifies that Apple's assistance may include providing software to the FBI that "will be coded by Apple with a unique identifier of the phone so that the [software] would only load and execute on the SUBJECT DEVICE"
There has been much research and analysis of the technical issues presented in the case since the court order was made available to the public.
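The third requirement matters because, once software-imposed delays are removed, only the hardware-incurred cost of each attempt limits an electronic search. The back-of-the-envelope arithmetic below uses an assumed figure, not one from the court order: Apple's iOS security documentation has described passcode key derivation as taking roughly 80 milliseconds per attempt.

# Rough arithmetic with assumed figures: with auto-erase and software
# delays gone, only the hardware-incurred key-derivation time (~80 ms
# per attempt, an assumption here) limits a search of a four-digit PIN.

HARDWARE_DELAY_S = 0.080    # assumed per-attempt cost
PIN_SPACE = 10 ** 4

worst_case = PIN_SPACE * HARDWARE_DELAY_S
print(f"worst case: {worst_case / 60:.1f} minutes")      # ~13.3 minutes
print(f"average:    {worst_case / 2 / 60:.1f} minutes")  # ~6.7 minutes

In other words, the software the order demanded would have reduced the protection of a four-digit PIN to minutes of automated guessing.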
Apple's opposition to the order
The February 16, 2016 order issued by Magistrate Judge Pym gave Apple five days to apply for relief if Apple believed the order was "unreasonably burdensome". Apple announced its intent to oppose the order, citing the security risks that the creation of a backdoor would pose towards customers. It also stated that no government had ever asked for similar access. The company was given until February 26 to fully respond to the court order.
On the same day the order was issued, chief executive officer Tim Cook released an online statement to Apple customers, explaining the company's motives for opposing the court order. He also stated that while they respect the FBI, the request they made threatens data security by establishing a precedent that the U.S. government could use to force any technology company to create software that could undermine the security of its products. He said in part:
In response to the opposition, on February 19, the U.S. Department of Justice filed a new application urging a federal judge to compel Apple to comply with the order. The new application stated that the company could install the software on the phone in its own premises, and after the FBI had hacked the phone via remote connection, Apple could remove and destroy the software. Apple hired attorneys Ted Olson and Theodore J. Boutrous Jr. to fight the order on appeal.
The same day, Apple revealed that in early January it had discussed with the FBI four methods to access data in the iPhone, but, as was revealed by a footnote in the February 19 application to the court, one of the more promising methods was ruled out by a mistake during the investigation of the attack. After the shooter's phone had been recovered, the FBI asked San Bernardino County, the owner of the phone, to reset the password to the shooter's iCloud account in order to acquire data from the iCloud backup. However, this rendered the phone unable to backup recent data to iCloud unless its pass-code was entered. This was confirmed by the U.S. Department of Justice, which then added that any backup would have been "insufficient" because they would not have been able to recover enough information from it.
Legal arguments
The government cited as precedent United States v. New York Telephone Co., in which the Supreme Court ruled in 1977 that the All Writs Act gave courts the power to demand reasonable technical assistance from the phone company in accessing phone calling records. Apple responded that New York Telephone was already collecting the data in question in the course of its business, something the Supreme Court took note of in its ruling. Apple also asserts that being compelled to write new software "amounts to compelled speech and viewpoint discrimination in violation of the First Amendment. ... What is to stop the government from demanding that Apple write code to turn on the microphone in aid of government surveillance, activate the video camera, surreptitiously record conversations, or turn on location services to track the phone's user?" Apple argued that the FBI had not made use of all of the government's tools, such as employing the resources of the NSA. A hearing on the case was scheduled for March 22, 2016.
San Bernardino County District Attorney Michael Ramos filed a brief stating the iPhone may contain evidence of a "lying dormant cyber pathogen" that could have been introduced into the San Bernardino County computer network, as well as identification of a possible third gunman who was alleged to have been seen at the scene of the attack by eyewitnesses. The following day, Ramos told the Associated Press that he did not know whether the shooters had compromised the county's infrastructure, but the only way to know for sure was by gaining access to the iPhone. This statement has been criticized by cyber-security professionals as being improbable.
Tim Cook's statements
In an interview for a Time magazine cover story, Cook said that the issue is not "privacy versus security ... it's privacy and security or privacy and safety versus security." Cook also said, "[T]his is the golden age of surveillance that we live in. There is more information about all of us, so much more than ten years ago, or five years ago. It's everywhere. You are leaving digital footprints everywhere."
In a March 21, 2016, Apple press conference, Cook talked about the ongoing conflict with the FBI, saying, "[W]e have a responsibility to protect your data and your privacy. We will not shrink from this responsibility."
FBI withdrawal of request
On March 21, 2016, the government requested and was granted a delay, saying a third party had demonstrated a possible way to unlock the iPhone in question and the FBI needed more time to determine if it will work. On March 28, 2016, the FBI said it had unlocked the iPhone with the third party's help, and an anonymous official said that the hack's applications were limited; the Department of Justice withdrew the case. The lawyer for the FBI has stated that they are using the extracted information to further investigate the case.
On April 7, 2016, FBI Director James Comey said that the tool used can only unlock an iPhone 5C like that used by the San Bernardino shooter, as well as older iPhone models lacking the Touch ID sensor. Comey also confirmed that the tool was purchased from a third party but would not reveal the source, later indicating the tool cost more than $1.3 million and that they did not purchase the rights to technical details about how the tool functions. Although the FBI was able to use other technological means to access the cellphone data from the San Bernardino shooter's iPhone 5C, without the aid of Apple, law enforcement still expresses concern over the encryption controversy.
Some news outlets, citing anonymous sources, identified the third party as Israeli company Cellebrite. However, The Washington Post reported that, according to anonymous "people familiar with the matter", the FBI had instead paid "professional hackers" who used a zero-day vulnerability in the iPhone's software to bypass its ten-try limitation, and did not need Cellebrite's assistance. In April 2021, The Washington Post reported that the Australian company Azimuth Security, a white hat hacking firm, had been the one to help the FBI.
Other All Writs Act cases involving iPhones
Apple had previously challenged the U.S. Department of Justice's authority to compel it to unlock an iPhone 5S in a drug case in the United States District Court for the Eastern District of New York in Brooklyn (In re Order Requiring Apple Inc. to Assist in the Execution of a Search Warrant Issued by the Court, case number 1:15-mc-01902), after the magistrate judge in the case, James Orenstein, requested Apple's position before issuing an order. On February 29, 2016, Judge Orenstein denied the government's request, saying the All Writs Act cannot be used to force a company to modify its products: "The implications of the government's position are so far-reaching – both in terms of what it would allow today and what it implies about Congressional intent in 1789 – as to produce impermissibly absurd results." Orenstein went on to criticize the government's stance, writing, "It would be absurd to posit that the authority the government sought was anything other than obnoxious to the law." The Justice Department appealed the ruling to District Court Judge Margot Brodie. Apple requested a delay while the FBI attempted to access the San Bernardino iPhone without Apple's help. On April 8, after the FBI succeeded, the Justice Department told the Brooklyn court it intended to press forward with its demand for assistance there, but on April 22, the government withdrew its request, telling the court "an individual" (the suspect, according to press reports) had provided the correct passcode.
In addition to the San Bernardino case and the Brooklyn case, Apple has received at least nine different requests from federal courts under the All Writs Act for iPhone or iPad products. Apple has objected to these requests. This fact was revealed by Apple in court filings in the Brooklyn case made at the request of the judge in that case. Most of these requests call upon Apple "to use its existing capabilities to extract data like contacts, photos and calls from locked iPhones running on operating systems iOS7 and older" (as in the Brooklyn case), while others "involve phones with more extensive encryption, which Apple cannot break" and presumably seek to order Apple to "design new software to let the government circumvent the device's security protocols and unlock the phone" (as in the San Bernardino case).
Reactions
National reactions to Apple's opposition of the order were mixed. A CBS News poll that sampled 1,022 Americans found that 50% of the respondents supported the FBI's stance, while 45% supported Apple's stance. In a separate survey of 1,002 Americans who own smartphones, respondents were similarly divided: 51% were against Apple's decision, while 38% supported it.
Support for Apple
The Reform Government Surveillance coalition, which includes major tech firms Microsoft, Facebook, Yahoo!, Twitter, and LinkedIn, has indicated its opposition to the order. By March 3, the deadline, a large number of amicus curiae briefs were filed with the court, with numerous technology firms supporting Apple's position, including a joint brief from Amazon.com, Box, Cisco Systems, Dropbox, Evernote, Facebook, Google, Lavabit, Microsoft, Mozilla, Nest Labs, Pinterest, Slack Technologies, Snapchat, WhatsApp, and Yahoo!. Briefs from the American Civil Liberties Union, the Electronic Frontier Foundation, Access Now, and the Center for Democracy and Technology also supported Apple.
The think tank Niskanen Center has suggested that the case is a door-in-the-face technique designed to gain eventual approval for encryption backdoors and is viewed as a revival of the Crypto Wars.
U.S. Representative Mike Honda, a Democrat who represented the Silicon Valley region, voiced his support for Apple.
On February 23, 2016, a series of pro-Apple protests organized by Fight for the Future were held outside of Apple's stores in over 40 locations.
Zeid Raad al-Hussein, the United Nations High Commissioner for Human Rights, warned the FBI of the potential for "extremely damaging implications" on human rights and that they "risk unlocking a Pandora's box" through their investigation.
General Michael Hayden, former director of the NSA and the Central Intelligence Agency, in a March 7 interview with Maria Bartiromo on the Fox Business Network, supported Apple's position, noting that the CIA considers cyber-attacks the number one threat to U.S. security and saying that "this may be a case where we've got to give up some things in law enforcement and even counter terrorism in order to preserve this aspect, our cybersecurity."
Salihin Kondoker, whose wife was shot in the attack but survived, filed a friend of the court brief siding with Apple; his brief said that he "understand[s] that this software the government wants them to use will be used against millions of other innocent people. I share their fear."
Edward Snowden said that the FBI already has the technical means to unlock Apple's devices and said, "The global technological consensus is against the FBI."
McAfee founder and Libertarian Party presidential primary candidate John McAfee had publicly volunteered to decrypt the iPhone used by the San Bernardino shooters, avoiding the need for Apple to build a backdoor. He later indicated that the method he would employ, extracting the unique ID from inside the A7 processor chip, is difficult and risks permanently locking the phone, and that he was seeking publicity.
Ron Wyden, Democratic senator for Oregon and a noted privacy and encryption advocate, questioned the FBI's honesty concerning the contents of the phone. He said in a statement, "There are real questions about whether [the FBI] has been straight with the public on [the Apple case]."
Support for FBI
Some families of the victims and survivors of the attack indicated they would file a brief in support of the FBI.
The National Sheriffs' Association has suggested that Apple's stance is "putting profit over safety" and "has nothing to do with privacy." The Federal Law Enforcement Officers Association, the Association of Prosecuting Attorneys, and the National Sheriffs' Association filed a brief supporting the FBI.
"With Apple's privacy policy for the customers there is no way of getting into a phone without a person's master password. With this policy there will be no backdoor access on the phone for the law enforcement to access the person's private information. This has caused a great dispute between the FBI and Apple's encryption. Apple has closed this backdoor for the law enforcement because they believe that by creating this backdoor it would make it easier for law enforcement, and also make it easier for criminal hackers to gain access to people's personal data on their phone." Former FBI director James Comey says that "We are drifting to a place in this country where there will be zones that are beyond the reach of the law." He believes that this backdoor access is crucial to investigations, and without it many criminals will not be convicted.
Senator Dianne Feinstein of California, a Democrat and vice chairman of the Senate Intelligence Committee, has voiced her opposition to Apple. All candidates for the Republican nomination for the 2016 U.S. presidential election who had not dropped out of the race before February 19, 2016 supported the FBI's position, though several expressed concerns about adding backdoors to mobile phones.
On February 23, 2016, the Financial Times reported that Bill Gates, founder of Microsoft, has sided with the FBI in the case. However, Gates later said in an interview with Bloomberg News "that doesn't state my view on this." He added that he thought the right balance and safeguards need to be found in the courts and in Congress, and that the debate provoked by this case is valuable.
San Bernardino Police Chief Jarrod Burguan said in an interview:
Manhattan District Attorney Cyrus Vance Jr., said that he wants Apple to unlock 175 iPhones that his office's Cyber-Crime Lab has been unable to access, adding, "Apple should be directed to be able to unlock its phones when there is a court order by an independent judge proving and demonstrating that there's relevant evidence on that phone necessary for an individual case."
FBI Director Comey, testifying before the House Judiciary Committee, compared Apple's iPhone security to a guard dog, saying, "We're asking Apple to take the vicious guard dog away and let us pick the lock."
Apple's iOS 8 and later have encryption mechanisms that make it difficult for the government to get through. Apple provides no backdoor for surveillance without the company's consent. However, Comey stated that he did not want a backdoor method of surveillance: "We want to use the front door, with clarity and transparency, and with clear guidance provided by law." He believes that special access is required in order to stop criminals such as "terrorists and child molesters". Companies such as Apple have declined to give the U.S. government such access, citing their policies on user confidentiality.
Calls for compromise
Both 2016 Democratic presidential candidates—former Secretary of State Hillary Clinton and Senator Bernie Sanders—suggested some compromise should be found.
U.S. Defense Secretary Ashton Carter called for Silicon Valley and the federal government to work together. "We are squarely behind strong data security and strong encryption, no question about it," he said. Carter also added that he is "not a believer in back doors."
In an address to the 2016 South by Southwest conference on March 11, President Barack Obama stated that while he could not comment on the specific case, "You cannot take an absolutist view on [encryption]. If your view is strong encryption no matter what, and we can and should create black boxes, that does not strike the balance that we've lived with for 200 or 300 years. And it's fetishizing our phones above every other value. That can't be the right answer."
Proposed legislation
On April 13, 2016 U.S. Senators Richard Burr and Dianne Feinstein, the Republican Chair and senior Democrat on the Senate Intelligence Committee, respectively, released draft legislation that would authorize state and federal judges to order "any person who provides a product or method to facilitate a communication or the processing or storage of data" to provide data in intelligible form or technical assistance in unlocking encrypted data and that any such person who distributes software or devices must ensure they are capable of complying with such an order.
Freedom of Information Act lawsuit
In September 2016, the Associated Press, Vice Media, and Gannett (the owner of USA Today) filed a Freedom of Information Act (FOIA) lawsuit against the FBI, seeking to compel the agency to reveal who it hired to unlock Farook's iPhone, and how much was paid. On September 30, 2017, a federal court ruled against the media organizations and granted summary judgment in the government's favor. The court ruled that the company that hacked the iPhone and the amount paid to it by the FBI were national security secrets and "intelligence sources or methods" that are exempt from disclosure under FOIA; the court additionally ruled that the amount paid "reflects a confidential law enforcement technique or procedure" that also falls under a FOIA exemption.
Inspector General Investigation
Background
On August 31, 2016, Amy Hess, the FBI's Executive Assistant Director, raised concerns with the Office of Inspector General, alleging a disagreement between units of the Operational Technology Division (OTD), namely the Cryptographic and Electronic Analysis Unit (CEAU) and the Remote Operations Unit (ROU), over their capability to access Farook's iPhone. She also alleged that some OTD officials were indifferent to FBI leadership (herself included) possibly giving misleading testimony to Congress and in court orders that they had no such capability.
Findings
Ultimately, the Inspector General's March 2018 report found no evidence that the OTD had withheld knowledge of the ability to unlock Farook's iPhone at the time of Director Comey's congressional testimony of February 9 and March 1, 2016. However, the report also found that poor communication and coordination between the CEAU and ROU meant that "not all relevant personnel had been engaged at the outset".
The ROU Chief (identified by Vice as Eric Chuang) said he only became aware of the access problem after a February 11 meeting of the Digital Forensics and Analysis Section (DFAS), of which the ROU is not a member. While the OTD directors were in frequent contact during the investigation, including discussions about Farook's iPhone, Assistant Director Stephen Richardson and the Chief of DFAS, John F. Bennett, believed at the time that a court order was their only alternative.
Chuang claimed the CEAU Chief did not ask for the ROU's help because of a "line in the sand" against using classified security tools in domestic criminal cases. The CEAU Chief denied that such a line existed, saying that not using classified techniques was merely a preference. Nevertheless, the perception of this line resulted in the ROU not getting involved until after John Bennett's February 11 meeting asking "anyone" in the bureau to help.
Once Chuang "got the word out", he soon learned that a trusted vendor was "almost 90 percent of the way" to a solution after "many months" of work, and asked that they prioritize its completion. The unnamed vendor came forward with its solution on March 16, 2016 and successfully demonstrated it to FBI leadership on March 20. The US Attorneys Office was informed the next day, and it withdrew its court action against Apple on March 28.
When asked why the ROU was not involved earlier, the Chief of the Technical Surveillance Section (TSS), Eric Chuang's superior, initially said it was not in his "lane" and was handled exclusively by the DFAS because "that is their mandate". He later claimed that Farook's phone was discussed from the outset, but that he did not instruct his unit chiefs to contact outside vendors until after February 11. In either event, neither he nor the ROU was asked to request help from their vendors until mid-February. By the time the Attorneys Office filed its February 16 court order, the ROU had only just begun contacting its vendors.
The CEAU Chief was unable to say with certainty whether the ROU had been consulted beforehand, or whether the February 11 meeting was a final "mop-up" before a court action was filed. The CEAU's search for solutions within the FBI was undocumented and was handled informally by a senior engineer whom the CEAU Chief personally trusted to have checked with "everybody".
On the other hand, it is possible that Hess's questions are what prompted the February 11 "mop-up" meeting. During the CEAU's search, Hess became concerned that she was not getting straight answers from the OTD and that unit chiefs did not know one another's capabilities. The Inspector General stated further:
... the CEAU Chief may not have been interested in researching all possible solutions and instead focused only on unclassified techniques that could readily be disclosed in court that OTD and its partner agencies already had in-hand.
Both Hess and Chuang stated the CEAU Chief seemed not to want to use classified techniques and appeared to have an agenda in pursuing a favorable ruling against Apple. Chuang described the CEAU Chief as "definitely not happy" that they undermined his legal case against Apple and had vented his frustration with him.
Hess said the CEAU Chief wanted to use the case as a "poster child" to resolve the larger problem of encrypted devices, known as the "Going Dark challenge". The challenge is defined by the FBI as "changes in technology [that] hinder law enforcement's ability to exercise investigative tools and follow critical leads". As the Los Angeles Times reported in March 2018, the FBI was unable to access data from 7,775 seized devices in its investigations. The unidentified method used to unlock Farook's phone, which cost more than $1 million to obtain, stopped working once Apple updated its operating system.
Conclusion
The Inspector General's report found that statements in the FBI's testimony before Congress were accurate, but relied on assumptions that the OTD units had been coordinating effectively from the beginning. It also concluded that the miscommunication delayed finding a technical solution for accessing Farook's iPhone. The FBI disputed this, since the vendor had been working on the project independently "for some time". However, according to Chuang, who described himself as a "relationship holder" for the vendor, the vendor was not actively working to complete the solution and moved it to the "front burner" at his request; the TSS Chief agreed with this account.
In response to the Inspector General's report, the FBI intended to add a new OTD section to consolidate resources to address the Going Dark problem and to improve coordination between units.
Notes
See also
Bernstein v. United States, a case on software as speech
Key disclosure law
Riley v. California, holding unconstitutional the warrantless search of a cellphone during an arrest, noting cellphones' unique privacy implications
United States v. New York Telephone Co., holding that law enforcement officials may obtain a court order forcing telephone companies to install pen registers in order to record the numbers called from a particular telephone
Universal City Studios, Inc. v. Reimerdes, another case holding software is a form of speech
Crypto wars
References
External links
Burr Encryption Bill Discussion Draft (leaked version)
Burr Encryption Bill Discussion Draft (official version)
Apple FAQ on the controversy
FBI director's comments on the 2016 dispute
Online source for legal filings in the 2016 dispute at Cryptome
PR Statement of United States Attorney Eileen M. Decker on Government Request to Vacate Order Directing Apple to Help Access iPhone
Hardware hack defeats iPhone passcode security
Apple Inc. litigation
Federal Bureau of Investigation operations
History of cryptography
History of telecommunications
United States computer case law
United States district court cases
United States Free Speech Clause case law
United States Fourth Amendment case law
2016 in United States case law
Digital forensics
IPhone
Data laws
Encryption debate |
2567707 | https://en.wikipedia.org/wiki/Formal%20specification | Formal specification | In computer science, formal specifications are mathematically based techniques whose purpose is to help with the implementation of systems and software. They are used to describe a system, to analyze its behavior, and to aid in its design by verifying key properties of interest through rigorous and effective reasoning tools. These specifications are formal in the sense that they have a precisely defined syntax, their semantics fall within one well-understood domain, and they can be used to infer useful information.
Motivation
With each passing decade, computer systems have become more powerful and, as a result, more impactful to society. Because of this, better techniques are needed to assist in the design and implementation of reliable software. Established engineering disciplines use mathematical analysis as the foundation of creating and validating product designs. Formal specifications are one way to bring such mathematical rigor to software engineering, although they have not achieved the widespread adoption once predicted; other methods, such as testing, are more commonly used to enhance code quality.
Uses
Given such a specification, it is possible to use formal verification techniques to demonstrate that a system design is correct with respect to its specification. This allows incorrect system designs to be revised before any major investments have been made into an actual implementation. Another approach is to use provably correct refinement steps to transform a specification into a design, which is ultimately transformed into an implementation that is correct by construction.
A formal specification is not an implementation; rather, it may be used to develop an implementation. Formal specifications describe what a system should do, not how the system should do it.
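One way to make this distinction concrete is to phrase a specification as predicates that any implementation must satisfy. The sketch below uses Python rather than a real specification language such as Z or VDM, and its names are invented for the illustration.

# Illustrative sketch: the "specification" of sorting is a pair of
# predicates (what must hold), deliberately silent on the algorithm
# (how to achieve it).

from collections import Counter

def is_sorted(ys):
    """Postcondition 1: output is in nondecreasing order."""
    return all(a <= b for a, b in zip(ys, ys[1:]))

def is_permutation(xs, ys):
    """Postcondition 2: output is a rearrangement of the input."""
    return Counter(xs) == Counter(ys)

def satisfies_sort_spec(sort_fn, xs):
    """Check one implementation against the specification on one input."""
    ys = sort_fn(list(xs))
    return is_sorted(ys) and is_permutation(xs, ys)

# Quicksort, mergesort, and Python's built-in all satisfy the same spec;
# the specification does not care which is used.
assert satisfies_sort_spec(sorted, [3, 1, 2, 1])

Any implementation passing such checks on all inputs is "correct with respect to" this specification, in the sense discussed under Limitations below.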
A good specification must have some of the following attributes: adequate, internally consistent, unambiguous, complete, satisfiable, and minimal.
A good specification will have:
Constructability, manageability and evolvability
Usability
Communicability
Powerful and efficient analysis
One of the main reasons there is interest in formal specifications is that they will provide an ability to perform proofs on software implementations. These proofs may be used to validate a specification, verify correctness of design, or to prove that a program satisfies a specification.
Limitations
A design (or implementation) cannot ever be declared "correct" on its own. It can only ever be "correct with respect to a given specification". Whether the formal specification correctly describes the problem to be solved is a separate issue. It is also a difficult issue to address, since it ultimately concerns the problem of constructing abstracted formal representations of an informal concrete problem domain, and such an abstraction step is not amenable to formal proof. However, it is possible to validate a specification by proving "challenge" theorems concerning properties that the specification is expected to exhibit. If correct, these theorems reinforce the specifier's understanding of the specification and its relationship with the underlying problem domain. If not, the specification probably needs to be changed to better reflect the domain understanding of those involved with producing (and implementing) the specification.
Formal methods of software development are not widely used in industry. Most companies do not consider it cost-effective to apply them in their software development processes. This may be for a variety of reasons, some of which are:
Time
High initial start up cost with low measurable returns
Flexibility
Many software companies use agile methodologies that focus on flexibility, and producing a formal specification of the whole system up front is often perceived as the opposite of flexible. However, there is some research into the benefits of using formal specifications with "agile" development.
Complexity
They require a high level of mathematical expertise and the analytical skills to understand and apply them effectively
A solution to this would be to develop tools and models that allow for these techniques to be implemented but hide the underlying mathematics
Limited scope
They do not capture properties of interest for all stakeholders in the project
They do not do a good job of specifying user interfaces and user interaction
Not cost-effective
This is not entirely true; by limiting their use to only the core parts of critical systems, formal methods have been shown to be cost-effective.
Other limitations:
Isolation
Low-level ontologies
Poor guidance
Poor separation of concerns
Poor tool feedback
Paradigms
Formal specification techniques have existed in various domains and on various scales for quite some time. Implementations of formal specifications will differ depending on what kind of system they are attempting to model, how they are applied and at what point in the software life cycle they have been introduced. These types of models can be categorized into the following specification paradigms:
History-based specification
behavior based on system histories
assertions are interpreted over time
State-based specification
behavior based on system states
series of sequential steps, (e.g. a financial transaction)
languages such as Z, VDM or B rely on this paradigm
Transition-based specification
behavior based on transitions from state-to-state of the system
best used with a reactive system
languages such as Statecharts, PROMELA, STeP-SPL, RSML or SCR rely on this paradigm
Functional specification
specify a system as a structure of mathematical functions
OBJ, ASL, PLUSS, LARCH, HOL or PVS rely on this paradigm
Operational Specification
early languages such as Paisley, GIST, Petri nets or process algebras rely on this paradigm
In addition to the above paradigms, certain heuristics can be applied to improve the creation of these specifications. Research in this area discusses heuristics for designing specifications, for example by applying a divide-and-conquer approach.
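As a concrete illustration of the transition-based paradigm, the sketch below specifies a toy system purely by its allowed state transitions and a checked invariant. It is written in Python for readability; the account example and all names are invented, not drawn from Statecharts, PROMELA or the other languages listed above.

# Toy transition-based specification: behavior is defined by which
# state-to-state transitions are legal, and an invariant is checked
# after every step. All names here are illustrative.

STATES = {"open", "frozen", "closed"}
ALLOWED = {                          # the specification proper
    ("open", "freeze"): "frozen",
    ("frozen", "thaw"): "open",
    ("open", "close"): "closed",
    ("frozen", "close"): "closed",
}

def step(state, event):
    """Apply an event, rejecting anything the specification forbids."""
    if (state, event) not in ALLOWED:
        raise ValueError(f"{event!r} is not allowed from state {state!r}")
    new_state = ALLOWED[(state, event)]
    assert new_state in STATES       # invariant: stay in the state space
    return new_state

state = "open"
for event in ("freeze", "thaw", "close"):
    state = step(state, event)       # open -> frozen -> open -> closed

A separate implementation (for example, a database-backed account service) could then be checked, transition by transition, against this specification.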
Software tools
The Z notation is an example of a leading formal specification language. Others include the Specification Language (VDM-SL) of the Vienna Development Method and the Abstract Machine Notation (AMN) of the B-Method. In the Web services area, formal specification is often used to describe non-functional properties (Web services quality of service).
Some tools are:
Algebraic
Larch
OBJ
Lotos
Model-based
Z
B
VDM
CSP
Petri Nets
TLA+
Examples
For implementation examples, refer to the links in the software tools section.
See also
Algebraic specification
Formal methods
Model-based specification
Software engineering
Specification language
Specification (technical standard)
References
External links
A Case for Formal Specification (Technology) by Coryoth 2005-07-30
Formal Specification
Formal methods
Formal specification |
40734 | https://en.wikipedia.org/wiki/ARJ | ARJ | ARJ (Archived by Robert Jung) is a software tool designed by Robert K. Jung for creating high-efficiency compressed file archives. ARJ is currently on version 2.86 for MS-DOS and 3.20 for Microsoft Windows and supports 16-bit, 32-bit and 64-bit Intel architectures.
ARJ was one of many file compression utilities for MS-DOS and Microsoft Windows during the early and mid-1990s. Parts of ARJ were covered by a patent, which has since expired. ARJ is well-documented and includes over 150 command line switches.
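As an illustrative sketch of its command-line usage (a, l and x are the conventional add, list and extract operations; exact switch behaviour varies between versions):

    arj a backup.arj *.txt     Add the named files to backup.arj
    arj l backup.arj           List the contents of the archive
    arj x backup.arj           Extract files with their full pathnames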
File format support in other software
ARJ archives can be unpacked with various tools other than the ARJ software. There exists a free software re-implementation of the tool. A number of software utilities, including 7-Zip, Zipeg, and WinRAR can also unpack .arj files. For macOS, standalone utilities, such as DeArj and UnArjMac, are available.
See also
List of archive formats
Comparison of file archivers
References
External links
ARJ Software
Open-source ARJ
Archive formats
File archivers
Windows compression software
Data compression software
Data compression |
65687612 | https://en.wikipedia.org/wiki/Nancy%20R.%20Mead | Nancy R. Mead | Nancy Rose Mead (born 1942) is an American computer scientist. She is known for her contributions to security, software engineering education, and requirements engineering.
Background and education
Mead spent her childhood in New Jersey, growing up in a second-generation Armenian immigrant family. She had an early interest in mathematics, which led her to major in mathematics at New York University, from which she received a BA in mathematics and French (Honors) in 1963 and an MS in mathematics in 1967. Mead received her PhD in mathematics in 1983 from the Polytechnic Institute of New York (now the NYU Tandon School of Engineering). Her thesis, "Complexity Measures for System Design", was supervised by Stanley Preiser.
Career
Mead spent a significant part of her career at IBM (1966-1990), in software development and management of large real-time systems, software engineering technology, and software engineering education. She was named a Senior Technical Staff Member at IBM in 1988.
Mead’s research at Carnegie Mellon University's Software Engineering Institute (1990-2018) was primarily in the study of software engineering and cybersecurity engineering, particularly software and security requirements, and the development of software assurance and software engineering curricula. She was the Principal Investigator and primary developer of the SQUARE method for security requirements engineering, and led the Software Assurance Curriculum Project, which included the Master of Software Assurance Reference Curriculum recognized by IEEE and ACM. At present, her interests are in threat modeling and supply chain risk management.
Awards and Honors
She was named a Fellow of the IEEE in 2006, and a Distinguished Educator of the ACM in 2009.
The Nancy Mead Award for Excellence in Software Engineering Education, given by the IEEE Conference on Software Engineering Education & Training (CSEE&T) since 2010, is named for her in honor of her leading role in establishing that conference. She was named a Fellow of the Software Engineering Institute (SEI) in 2013. In 2015 she received the Distinguished Education Award from the IEEE Computer Society Technical Council on Software Engineering (TCSE). In 2019 she was awarded the Parnas Fellowship by Lero, the Irish Software Research Centre. In 2020 she was selected for the IEEE Distinguished Visitor Program.
Publications
Mead has more than 150 publications. She has co-authored two books:
Software Security Engineering (Addison-Wesley 2008)
Cyber Security Engineering (Addison-Wesley 2016).
The following are a selection of her most-cited papers:
R Wieringa, N Maiden, N Mead, C Rolland. "Requirements engineering paper classification and evaluation criteria: a proposal and a discussion", Requirements engineering 11 (1), 102-107
NR Mead, T Stehney, "Security quality requirements engineering (SQUARE) methodology", ACM SIGSOFT Software Engineering Notes 30 (4), 1-7
RJ Ellison, DA Fisher, RC Linger, HF Lipson, TA Longstaff, NR Mead, "Survivability: Protecting your critical systems", IEEE Internet Computing 3 (6), 55-63
RJ Ellison, RC Linger, T Longstaff, NR Mead, "Survivable network system analysis: A case study", IEEE software 16 (4), 70-77
References
External links
Nancy Mead at Carnegie Mellon University
1942 births
Living people
IBM employees
American software engineers
American computer scientists
New York University alumni
Carnegie Mellon University faculty
Fellow Members of the IEEE |
31692117 | https://en.wikipedia.org/wiki/Raspberry%20Pi | Raspberry Pi | Raspberry Pi () is a series of small single-board computers (SBCs) developed in the United Kingdom by the Raspberry Pi Foundation in association with Broadcom. The Raspberry Pi project originally leaned towards the promotion of teaching basic computer science in schools and in developing countries. The original model became more popular than anticipated, selling outside its target market for uses such as robotics. It is widely used in many areas, such as weather monitoring, because of its low cost, modularity, and open design. It is typically used by computer and electronics hobbyists, due to its adoption of the HDMI and USB standards.
After the release of the second board type, the Raspberry Pi Foundation set up a new entity, named Raspberry Pi Trading, and installed Eben Upton as CEO, with the responsibility of developing technology. The Foundation was rededicated as an educational charity for promoting the teaching of basic computer science in schools and developing countries. Most Pis are made in a Sony factory in Pencoed, Wales, while others are made in China and Japan.
Series and generations
There are three series of Raspberry Pi, and several generations of each have been released. Raspberry Pi SBCs feature a Broadcom system on a chip (SoC) with an integrated ARM-compatible central processing unit (CPU) and on-chip graphics processing unit (GPU), while the Raspberry Pi Pico has an RP2040 system on chip with an integrated ARM-compatible CPU.
Raspberry Pi
The first generation (Raspberry Pi Model B) was released in February 2012, followed by the simpler and cheaper Model A in 2013.
In 2014, the Foundation released boards with an improved design, the Raspberry Pi Model B+ and Model A+. These first-generation boards feature ARM11 processors, are approximately credit-card sized, and represent the standard mainline form factor. A "Compute Module" was released in April 2014 for embedded applications.
The Raspberry Pi 2 was released in February 2015 and initially featured a 900 MHz 32-bit quad-core ARM Cortex-A7 processor with 1 GB RAM. Revision 1.2 featured a 900 MHz 64-bit quad-core ARM Cortex-A53 processor (the same as that in the Raspberry Pi 3 Model B, but underclocked to 900 MHz).
Raspberry Pi 3 Model B was released in February 2016 with a 1.2 GHz 64-bit quad core ARM Cortex-A53 processor, on-board 802.11n Wi-Fi, Bluetooth and USB boot capabilities.
On Pi Day 2018, the Raspberry Pi 3 Model B+ was launched with a faster 1.4 GHz processor, a three-times faster gigabit Ethernet (throughput limited to ca. 300 Mbit/s by the internal USB 2.0 connection), and 2.4 / 5 GHz dual-band 802.11ac Wi-Fi (100 Mbit/s). Other features are Power over Ethernet (PoE) (with the add-on PoE HAT), USB boot and network boot (an SD card is no longer required).
Raspberry Pi 4 Model B was released in June 2019 with a 1.5 GHz 64-bit quad core ARM Cortex-A72 processor, on-board 802.11ac Wi-Fi, Bluetooth 5, full gigabit Ethernet (throughput not limited), two USB 2.0 ports, two USB 3.0 ports, 2-8 GB of RAM, and dual-monitor support via a pair of micro HDMI (HDMI Type D) ports for up to 4K resolution. The version with 1 GB RAM has been abandoned and the prices of the 2 GB version have been reduced. The 8 GB version has a revised circuit board. The Pi 4 is also powered via a USB-C port, enabling additional power to be provided to downstream peripherals when used with an appropriate PSU; however, the Pi can only be operated at 5 volts, not 9 or 12 volts like other mini computers of this class. The initial Raspberry Pi 4 board has a design flaw whereby third-party e-marked USB cables, such as those used on Apple MacBooks, incorrectly identify it and refuse to provide power. Tom's Hardware tested 14 different cables and found that 11 of them turned on and powered the Pi without issue. The design flaw was fixed in revision 1.2 of the board, released in late 2019. In mid-2021, Pi 4 B models appeared with the improved Broadcom BCM2711C0 chip. The manufacturer now uses this chip for the Pi 4 B and Pi 400; however, the clock frequency of the Pi 4 B was not increased at the factory.
Raspberry Pi 400 was released in November 2020. It features a custom board that is derived from the existing Raspberry Pi 4, specifically remodelled with a keyboard attached. The case was derived from that of the Raspberry Pi Keyboard. A robust cooling solution (i.e. a broad metal plate) and an upgraded switched-mode power supply allow the Raspberry Pi 400's Broadcom BCM2711C0 processor to be clocked at 1.8 GHz, which is slightly higher than the Raspberry Pi 4 it's based on. The keyboard-computer features 4 GB of LPDDR4 RAM.
Raspberry Pi Zero
A Raspberry Pi Zero with smaller size and reduced input/output (I/O) and general-purpose input/output (GPIO) capabilities was released in November 2015 for US$5.
On 28 February 2017, the Raspberry Pi Zero W was launched, a version of the Zero with Wi-Fi and Bluetooth capabilities, for US$10.
On 12 January 2018, the Raspberry Pi Zero WH was launched, a version of the Zero W with pre-soldered GPIO headers.
On 28 October 2021, the Raspberry Pi Zero 2 W was launched, a version of the Zero W with a system in a package (SiP) designed by Raspberry Pi and based on the Raspberry Pi 3. In contrast to its predecessors, the Zero 2 W is 64-bit capable. The price is around US$15.
Raspberry Pi Pico
Raspberry Pi Pico was released in January 2021 with a retail price of $4. It was Raspberry Pi's first board based upon a single microcontroller chip: the RP2040, which was designed by Raspberry Pi in the UK. The Pico has 264 KB of RAM and 2 MB of flash memory. It is programmable in MicroPython, CircuitPython, C and Rust. Raspberry Pi has partnered with Vilros, Adafruit, Pimoroni, Arduino and SparkFun to build accessories for the Raspberry Pi Pico and a variety of other boards using the RP2040 silicon platform. Rather than performing the role of a general purpose computer (like the others in the range), it is designed for physical computing, similar in concept to an Arduino.
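As an illustrative sketch, a minimal MicroPython program (assuming the onboard LED of the original, non-W Pico, which is wired to GPIO 25) blinks the LED once per second:

    # Illustrative MicroPython sketch; assumes the onboard LED of the
    # original (non-W) Pico, which is wired to GPIO 25.
    from machine import Pin
    import time

    led = Pin(25, Pin.OUT)
    while True:
        led.toggle()             # invert the LED state
        time.sleep(1)            # wait one second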
Model comparison
As of 4 May 2021, the Foundation is committed to manufacturing most Pi models until at least January 2026. Even the 1 GB Pi 4 B can still be specially ordered.
Hardware
The Raspberry Pi hardware has evolved through several versions that feature variations in the type of the central processing unit, amount of memory capacity, networking support, and peripheral-device support.
The block layout is common to models B, B+, A and A+. The Pi Zero models are similar, but lack the Ethernet and USB hub components. The Ethernet adapter is internally connected to an additional USB port. In Model A, A+, and the Pi Zero, the USB port is connected directly to the system on a chip (SoC). On the Pi 1 Model B+ and later models the USB/Ethernet chip contains a five-port USB hub, of which four ports are available, while the Pi 1 Model B only provides two. On the Pi Zero, the USB port is also connected directly to the SoC, but it uses a micro USB (OTG) port. Unlike all other Pi models, the 40-pin GPIO connector is omitted on the Pi Zero, with solderable through-holes only in the pin locations. The Pi Zero WH remedies this.
Processor speed ranges from 700 MHz to 1.4 GHz for the Pi 3 Model B+ or 1.5 GHz for the Pi 4; on-board memory ranges from 256 MB to 8 GB random-access memory (RAM), with only the Raspberry Pi 4 having more than 1 GB. Secure Digital (SD) cards in MicroSDHC form factor (SDHC on early models) are used to store the operating system and program memory; however, some models also come with onboard eMMC storage, and the Raspberry Pi 4 can also make use of USB-attached SSD storage for its operating system. The boards have one to five USB ports. For video output, HDMI and composite video are supported, with a standard 3.5 mm tip-ring-sleeve jack for audio output. Lower-level output is provided by a number of GPIO pins, which support common protocols like I²C. The B-models have an 8P8C Ethernet port and the Pi 3, Pi 4 and Pi Zero W have on-board Wi-Fi 802.11n and Bluetooth.
Processor
The Broadcom BCM2835 SoC used in the first generation Raspberry Pi includes a 700 MHz ARM1176JZF-S processor, VideoCore IV graphics processing unit (GPU), and RAM. It has a level 1 (L1) cache of 16 KB and a level 2 (L2) cache of 128 KB. The level 2 cache is used primarily by the GPU. The SoC is stacked underneath the RAM chip, so only its edge is visible. The ARM1176JZ(F)-S is the same CPU used in the original iPhone, although at a higher clock rate, and mated with a much faster GPU.
The earlier V1.1 model of the Raspberry Pi 2 used a Broadcom BCM2836 SoC with a 900 MHz 32-bit, quad-core ARM Cortex-A7 processor, with 256 KB shared L2 cache. The Raspberry Pi 2 V1.2 was upgraded to a Broadcom BCM2837 SoC with a 1.2 GHz 64-bit quad-core ARM Cortex-A53 processor, the same one which is used on the Raspberry Pi 3, but underclocked (by default) to the same 900 MHz CPU clock speed as the V1.1. The BCM2836 SoC is no longer in production as of late 2016.
The Raspberry Pi 3 Model B uses a Broadcom BCM2837 SoC with a 1.2 GHz 64-bit quad-core ARM Cortex-A53 processor, with 512 KB shared L2 cache. The Pi 3 Models A+ and B+ run at 1.4 GHz.
The Raspberry Pi 4 uses a Broadcom BCM2711 SoC with a 1.5 GHz (later models: 1.8 GHz) 64-bit quad-core ARM Cortex-A72 processor, with 1 MB shared L2 cache. Unlike previous models, which all used a custom interrupt controller poorly suited for virtualisation, the interrupt controller on this SoC is compatible with the ARM Generic Interrupt Controller (GIC) architecture 2.0, providing hardware support for interrupt distribution when using ARM virtualisation capabilities.
The Raspberry Pi Zero and Zero W use the same Broadcom BCM2835 SoC as the first generation Raspberry Pi, although now running at 1 GHz CPU clock speed.
The Raspberry Pi Zero 2 W uses the RP3A0-AU CPU, a 1 GHz 64-bit ARM Cortex-A53, with 512 MB of SDRAM. Documentation states this "system-on-package" is a Broadcom BCM2710A1 package, using a BCM2837 Broadcom chip as its core, which is a quad-core ARMv8. The Pi 3 also uses the BCM2837, but clocked at 1.2 GHz, whereas the Zero 2 W runs it at 1 GHz.
The Raspberry Pi Pico uses the RP2040 running at 133 MHz.
Performance
While operating at 700 MHz by default, the first generation Raspberry Pi provided a real-world performance roughly equivalent to 0.041 GFLOPS. On the CPU level the performance is similar to a 300 MHz Pentium II of 1997–99. The GPU provides 1 Gpixel/s or 1.5 Gtexel/s of graphics processing or 24 GFLOPS of general purpose computing performance. The graphical capabilities of the Raspberry Pi are roughly equivalent to the performance of the Xbox of 2001.
Raspberry Pi 2 V1.1 included a quad-core Cortex-A7 CPU running at 900 MHz and 1 GB RAM. It was described as 4–6 times more powerful than its predecessor. The GPU was identical to the original. In parallelised benchmarks, the Raspberry Pi 2 V1.1 could be up to 14 times faster than a Raspberry Pi 1 Model B+.
The Raspberry Pi 3, with a quad-core Cortex-A53 processor, is described as having ten times the performance of a Raspberry Pi 1. Benchmarks showed the Raspberry Pi 3 to be approximately 80% faster than the Raspberry Pi 2 in parallelised tasks.
The Raspberry Pi 4, with a quad-core Cortex-A72 processor, is described as having three times the performance of a Raspberry Pi 3.
Overclocking
Most Raspberry Pi systems-on-chip could be overclocked to 800 MHz, and some to 1000 MHz. There are reports that the Raspberry Pi 2 can be similarly overclocked, in extreme cases even to 1500 MHz (discarding all safety features and over-voltage limitations). In Raspberry Pi OS, the overclocking options can be set at boot by running "sudo raspi-config", without voiding the warranty. In those cases the Pi automatically shuts the overclocking down if the chip temperature reaches a set limit, but it is possible to override automatic over-voltage and overclocking settings (voiding the warranty); an appropriately sized heat sink is needed to protect the chip from serious overheating.
Newer versions of the firmware contain the option to choose between five overclock ("turbo") presets that, when used, attempt to maximise the performance of the SoC without impairing the lifetime of the board. This is done by monitoring the core temperature of the chip and the CPU load, and dynamically adjusting clock speeds and the core voltage. When the demand is low on the CPU or it is running too hot, the performance is throttled, but if the CPU has much to do and the chip's temperature is acceptable, performance is temporarily increased with clock speeds of up to 1 GHz, depending on the board version and on which of the turbo settings is used.
The overclocking modes are:
none; 700 MHz ARM, 250 MHz core, 400 MHz SDRAM, 0 overvolting,
modest; 800 MHz ARM, 250 MHz core, 400 MHz SDRAM, 0 overvolting,
medium; 900 MHz ARM, 250 MHz core, 450 MHz SDRAM, 2 overvolting,
high; 950 MHz ARM, 250 MHz core, 450 MHz SDRAM, 6 overvolting,
turbo; 1000 MHz ARM, 500 MHz core, 600 MHz SDRAM, 6 overvolting,
Pi 2; 1000 MHz ARM, 500 MHz core, 500 MHz SDRAM, 2 overvolting,
Pi 3; 1100 MHz ARM, 550 MHz core, 500 MHz SDRAM, 6 overvolting. In system information the CPU speed appears as 1200 MHz. When idling, speed lowers to 600 MHz.
In the highest (turbo) mode the SDRAM clock speed was originally 500 MHz, but this was later changed to 600 MHz because of occasional SD card corruption. Simultaneously, in high mode the core clock speed was lowered from 450 to 250 MHz, and in medium mode from 333 to 250 MHz.
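In config.txt terms, a manual overclock roughly mirroring the "turbo" preset above might look like the following sketch (values are illustrative; forcing settings beyond the presets can void the warranty):

    # /boot/config.txt - illustrative manual overclock, roughly the
    # first-generation "turbo" preset described above
    arm_freq=1000        # ARM CPU clock in MHz
    core_freq=500        # GPU core clock in MHz
    sdram_freq=600       # SDRAM clock in MHz
    over_voltage=6       # core voltage step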
The CPU of the first and second generation Raspberry Pi board did not require cooling with a heat sink or fan, even when overclocked, but the Raspberry Pi 3 may generate more heat when overclocked.
RAM
The early designs of the Raspberry Pi Model A and B boards included only 256 MB of random access memory (RAM). Of this, the early beta Model B boards allocated 128 MB to the GPU by default, leaving only 128 MB for the CPU. On the early 256 MB releases of models A and B, three different splits were possible. The default split was 192 MB for the CPU, which should be sufficient for standalone 1080p video decoding, or for simple 3D processing. 224 MB was for Linux processing only, with only a 1080p framebuffer, and was likely to fail for any video or 3D. 128 MB was for heavy 3D processing, possibly also with video decoding. In comparison, the Nokia 701 uses 128 MB for the Broadcom VideoCore IV.
The later Model B with 512 MB RAM, was released on 15 October 2012 and was initially released with new standard memory split files (arm256_start.elf, arm384_start.elf, arm496_start.elf) with 256 MB, 384 MB, and 496 MB CPU RAM, and with 256 MB, 128 MB, and 16 MB video RAM, respectively. But about one week later, the foundation released a new version of start.elf that could read a new entry in config.txt (gpu_mem=xx) and could dynamically assign an amount of RAM (from 16 to 256 MB in 8 MB steps) to the GPU, obsoleting the older method of splitting memory, and a single start.elf worked the same for 256 MB and 512 MB Raspberry Pis.
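Under the newer scheme, the split is therefore set with a single config.txt entry; for example, the following (illustrative) line assigns 128 MB of RAM to the GPU:

    # /boot/config.txt - give the GPU 128 MB of the system RAM
    gpu_mem=128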
The Raspberry Pi 2 has 1 GB of RAM.
The Raspberry Pi 3 has 1 GB of RAM in the B and B+ models, and 512 MB of RAM in the A+ model. The Raspberry Pi Zero and Zero W have 512 MB of RAM.
The Raspberry Pi 4 is available with 2, 4 or 8 GB of RAM. A 1 GB model was originally available at launch in June 2019 but was discontinued in March 2020, and the 8 GB model was introduced in May 2020.
Networking
The Model A, A+ and Pi Zero have no Ethernet circuitry and are commonly connected to a network using an external user-supplied USB Ethernet or Wi-Fi adapter. On models with a built-in Ethernet port, the port is provided by a USB Ethernet adapter using the SMSC LAN9514 chip. The Raspberry Pi 3 and Pi Zero W (wireless) are equipped with 2.4 GHz WiFi 802.11n and Bluetooth 4.1 based on the Broadcom BCM43438 FullMAC chip with no official support for monitor mode (though it was implemented through unofficial firmware patching), and the Pi 3 also has a 10/100 Mbit/s Ethernet port. The Raspberry Pi 3B+ features dual-band IEEE 802.11b/g/n/ac WiFi, Bluetooth 4.2, and Gigabit Ethernet (limited to approximately 300 Mbit/s by the USB 2.0 bus between it and the SoC). The Raspberry Pi 4 has full gigabit Ethernet (throughput is not limited, as it is not funnelled via the USB chip).
Special-purpose features
The RPi Zero, RPi1A, RPi3A+ and RPi4 can be used as a USB device or "USB gadget", plugged into another computer via USB. It can be configured in multiple ways, for example to show up as a serial device or an Ethernet device. Although originally requiring software patches, this was added to the mainline Raspbian distribution in May 2016.
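On recent Raspbian/Raspberry Pi OS images, gadget mode is commonly enabled with the dwc2 controller overlay plus a gadget driver; a typical (illustrative) configuration sketch is:

    # Added to /boot/config.txt: enable the USB device-mode (dwc2) controller
    dtoverlay=dwc2
    # Appended to the single line in /boot/cmdline.txt: load the
    # Ethernet gadget driver at boot
    modules-load=dwc2,g_ether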
Raspberry Pi models with a newer chipset can boot from USB mass storage, such as from a flash drive. Booting from USB mass storage is not available in the original Raspberry Pi models, the Raspberry Pi Zero, the Raspberry Pi Pico, the Raspberry Pi 2 A models and in Raspberry Pi 2 B models with a lower version than 1.2.
Peripherals
Although often pre-configured to operate as a headless computer, the Raspberry Pi may also optionally be operated with any generic USB computer keyboard and mouse. It may also be used with USB storage, USB to MIDI converters, and virtually any other device/component with USB capabilities, depending on the installed device drivers in the underlying operating system (many of which are included by default).
Other peripherals can be attached through the various pins and connectors on the surface of the Raspberry Pi.
Video
The video controller can generate standard modern TV resolutions, such as HD and Full HD, and higher or lower monitor resolutions as well as older NTSC or PAL standard CRT TV resolutions. As shipped (i.e., without custom overclocking) it can support the following resolutions: 640×350 EGA; 640×480 VGA; 800×600 SVGA; 1024×768 XGA; 1280×720 720p HDTV; 1280×768 WXGA variant; 1280×800 WXGA variant; 1280×1024 SXGA; 1366×768 WXGA variant; 1400×1050 SXGA+; 1600×1200 UXGA; 1680×1050 WXGA+; 1920×1080 1080p HDTV; 1920×1200 WUXGA.
Higher resolutions, up to 2048×1152, may work or even 3840×2160 at 15 Hz (too low a frame rate for convincing video). Allowing the highest resolutions does not imply that the GPU can decode video formats at these resolutions; in fact, the Raspberry Pis are known to not work reliably for H.265 (at those high resolutions), commonly used for very high resolutions (however, most common formats up to Full HD do work).
Although the Raspberry Pi 3 does not have H.265 decoding hardware, the CPU is more powerful than its predecessors, potentially fast enough to allow the decoding of H.265-encoded videos in software. The GPU in the Raspberry Pi 3 runs at higher clock frequencies of 300 MHz or 400 MHz, compared to previous versions which ran at 250 MHz.
The Raspberry Pis can also generate 576i and 480i composite video signals, as used on old-style (CRT) TV screens and less-expensive monitors, through standard connectors, either RCA or a 3.5 mm phono connector depending on the model. The television signal standards supported are PAL-B/G/H/I/D, PAL-M, PAL-N, NTSC and NTSC-J.
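With the legacy firmware, the video mode can be pinned in config.txt; an illustrative sketch selecting 1080p HDMI output, or PAL composite output, looks like this:

    # /boot/config.txt (legacy firmware options; values illustrative)
    hdmi_group=1     # 1 = CEA (TV) timings, 2 = DMT (monitor) timings
    hdmi_mode=16     # CEA mode 16 = 1920x1080 at 60 Hz
    sdtv_mode=2      # composite standard: 0 = NTSC, 2 = normal PAL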
Real-time clock
When booting, the time defaults to being set over the network using the Network Time Protocol (NTP). The source of time information can be another computer on the local network that does have a real-time clock, or an NTP server on the internet. If no network connection is available, the time may be set manually or configured to assume that no time passed during the shutdown. In the latter case, the time is monotonic (files saved later in time always have later timestamps) but may be considerably earlier than the actual time. For systems that require a built-in real-time clock, a number of small, low-cost add-on boards with real-time clocks are available.
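Such boards typically attach via I²C and are enabled with a device-tree overlay; an illustrative config.txt sketch for a common DS3231 module is:

    # /boot/config.txt: enable I2C and a DS3231 real-time clock module
    dtparam=i2c_arm=on
    dtoverlay=i2c-rtc,ds3231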
The RP2040 microcontroller has a built-in real-time clock, but this cannot be set automatically without some form of user entry or an added network facility.
Connectors
Pi Pico
Pi Compute Module
Pi Zero
Model A
Model B
General purpose input-output (GPIO) connector
Raspberry Pi 1 Models A+ and B+, Pi 2 Model B, Pi 3 Models A+, B and B+, Pi 4, and Pi Zero, Zero W, and Zero WH have a 40-pin GPIO (J8) pinout. Raspberry Pi 1 Models A and B have only the first 26 pins.
In the Pi Zero and Zero W, the 40 GPIO pins are unpopulated, having the through-holes exposed for soldering instead. The Zero WH (Wireless + Header) has the header pins preinstalled.
Model B rev. 2 also has a pad (called P5 on the board and P6 on the schematics) of 8 pins offering access to an additional 4 GPIO connections. These GPIO pins were freed when the four board version identification links present in revision 1.0 were removed.
Models A and B provide GPIO access to the ACT status LED using GPIO 16. Models A+ and B+ provide GPIO access to the ACT status LED using GPIO 47, and the power status LED using GPIO 35.
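As an illustrative sketch of driving the connector from software (using the widely used RPi.GPIO Python library; the pin number is an arbitrary example, not a requirement of the connector), the following blinks an LED wired to BCM GPIO 18:

    # Illustrative sketch using the RPi.GPIO library.
    import time
    import RPi.GPIO as GPIO

    GPIO.setmode(GPIO.BCM)       # use Broadcom (BCM) pin numbering
    GPIO.setup(18, GPIO.OUT)     # configure GPIO 18 as an output
    try:
        for _ in range(10):      # blink ten times
            GPIO.output(18, GPIO.HIGH)
            time.sleep(0.5)
            GPIO.output(18, GPIO.LOW)
            time.sleep(0.5)
    finally:
        GPIO.cleanup()           # release the pins on exit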
Specifications
Simplified Model B Changelog
Software
Operating systems
The Raspberry Pi Foundation provides Raspberry Pi OS (formerly called Raspbian), a Debian-based Linux distribution for download, as well as third-party Ubuntu, Windows 10 IoT Core, RISC OS, LibreELEC (a specialised media centre distribution) and specialised distributions for the Kodi media centre and classroom management. It promotes Python and Scratch as the main programming languages, with support for many other languages. The default firmware is closed source, while an unofficial open-source implementation is available. Many other operating systems can also run on the Raspberry Pi. The formally verified microkernel seL4 is also supported. There are several ways of installing multiple operating systems on one SD card.
Other operating systems (not Linux- nor BSD-based)
Broadcom VCOS – Proprietary operating system which includes an abstraction layer designed to integrate with existing kernels, such as ThreadX (which is used on the VideoCore4 processor), providing drivers and middleware for application development. In the case of the Raspberry Pi, this includes an application to start the ARM processor(s) and provide the publicly documented API over a mailbox interface, serving as its firmware. An incomplete source of a Linux port of VCOS is available as part of the reference graphics driver published by Broadcom.
Haiku – an open source BeOS clone that has been compiled for the Raspberry Pi and several other ARM boards. Work on Pi 1 began in 2011, but only the Pi 2 will be supported.
HelenOS – a portable microkernel-based multiserver operating system; has basic Raspberry Pi support since version 0.6.0
Plan 9 from Bell Labs and Inferno (in beta)
RISC OS Pi (a special cut-down version, RISC OS Pico, for 16 MB cards and larger for all models of Pi 1 & 2, has also been made available).
Ultibo Core – an OS-less unikernel Run Time Library based on Free Pascal, with the Lazarus IDE (Windows, with third-party ports to Linux and macOS). Most Pi models are supported.
Windows 10 IoT Core – a zero-price edition of Windows 10 offered by Microsoft that runs natively on the Raspberry Pi 2.
Other operating systems (Linux-based)
Alpine Linux – a Linux distribution based on musl and BusyBox, "designed for power users who appreciate security, simplicity and resource efficiency".
Android Things – an embedded version of the Android operating system designed for IoT device development.
Arch Linux ARM, a port of Arch Linux for ARM processors, and Arch-based Manjaro Linux ARM
Ark OS – designed for website and email self-hosting.
Batocera - a buildroot based Linux OS that uses Emulation Station as its frontend for RetroArch and other emulators plus auxiliary scripts. Instead of a classic Linux distribution with package managers handling individual software updates, Batocera is crafted to behave more like a video game console firmware with all tools and emulators included and updated as a single package during software updates.
CentOS for Raspberry Pi 2 and later
Devuan
emteria.OS – an embedded, managed version of the Android operating system for professional fleet management
Fedora (supports Pi 2 and later since Fedora 25, Pi 1 is supported by some unofficial derivatives) and RedSleeve (a RHEL port) for Raspberry Pi 1
Gentoo Linux
Kali Linux – a Debian-derived distro designed for digital forensics and penetration testing.
openEuler – an open-source Linux OS.
openSUSE, SUSE Linux Enterprise Server 12 SP2 and Server 12 SP3 (Commercial support)
OpenWrt – a highly extensible Linux distribution for embedded devices (typically wireless routers). It supports Pi 1, 2, 3, 4 and Zero W.
postmarketOS - distribution based on Alpine Linux, primarily developed for smartphones.
RetroPie – an offshoot of Raspbian that uses Emulation Station as its frontend for RetroArch and other emulators such as Mupen64 for retro gaming. Hardware like Freeplay tech kits can help replace Game Boy internals with RetroPie emulation.
Sailfish OS with Raspberry Pi 2 (due to its use of an ARM Cortex-A7 CPU; the Raspberry Pi 1 uses the older ARMv6 architecture, and Sailfish requires ARMv7).
Slackware ARM – version 13.37 and later runs on the Raspberry Pi without modification. The 128–496 MB of available memory on the Raspberry Pi is at least twice the minimum requirement of 64 MB needed to run Slackware Linux on an ARM or i386 system. (Whereas the majority of Linux systems boot into a graphical user interface, Slackware's default user environment is the textual shell / command line interface.) The Fluxbox window manager running under the X Window System requires an additional 48 MB of RAM.
SolydXK – a light Debian-derived distro with Xfce.
Tiny Core Linux – a minimal Linux operating system focused on providing a base system using BusyBox and FLTK. Designed to run primarily in RAM.
Ubuntu-based: Lubuntu and Xubuntu
Void Linux – a rolling release Linux distribution which was designed and implemented from scratch, provides images based on musl or glibc.
Other operating systems (BSD-based)
FreeBSD
NetBSD
OpenBSD (only on 64-bit platforms, such as Raspberry Pi 3)
Driver APIs
The Raspberry Pi can use a VideoCore IV GPU via a binary blob, which is loaded into the GPU at boot time from the SD card, together with additional software that initially was closed source. This part of the driver code was later released. However, much of the actual driver work is done using the closed source GPU code. Application software makes calls to closed source run-time libraries (OpenMax, OpenGL ES or OpenVG), which in turn call an open source driver inside the Linux kernel, which then calls the closed source VideoCore IV GPU driver code. The API of the kernel driver is specific to these closed libraries. Video applications use OpenMAX, 3D applications use OpenGL ES, and 2D applications use OpenVG; the latter two in turn use EGL. OpenMAX and EGL use the open source kernel driver in turn.
Vulkan driver
The Raspberry Pi Foundation first announced it was working on a Vulkan driver in February 2020. On 20 June 2020, a working Vulkan driver running Quake 3 at 100 frames per second on a 3B+ was revealed by a graphics engineer who had been working on it as a hobby project. On 24 November 2020 the Raspberry Pi Foundation announced that their driver for the Raspberry Pi 4 is Vulkan 1.0 conformant. On 26 October 2021 Raspberry Pi Trading announced that their driver for the Raspberry Pi 4 is Vulkan 1.1 conformant.
Firmware
The official firmware is a freely redistributable binary blob that is proprietary software. A minimal proof-of-concept open source firmware is also available, mainly aimed at initialising and starting the ARM cores and performing the minimal startup required on the ARM side. It is also capable of booting a very minimal Linux kernel, with patches to remove the dependency on the mailbox interface being responsive. It is known to work on Raspberry Pi 1, 2 and 3, as well as some variants of Raspberry Pi Zero.
Third-party application software
AstroPrint – AstroPrint's wireless 3D printing software can be run on the Pi 2.
C/C++ Interpreter Ch – Released 3 January 2017, C/C++ interpreter Ch and Embedded Ch are released free for non-commercial use for Raspberry Pi, ChIDE is also included for the beginners to learn C/C++.
Minecraft – Released 11 February 2013, a modified version that allows players to directly alter the world with computer code.
RealVNC – Since 28 September 2016, Raspbian includes RealVNC's remote access server and viewer software. This includes a new capture technology which allows directly rendered content (e.g. Minecraft, camera preview and omxplayer) as well as non-X11 applications to be viewed and controlled remotely.
UserGate Web Filter – On 20 September 2013, Florida-based security vendor Entensys announced porting UserGate Web Filter to Raspberry Pi platform.
Steam Link – On 13 December 2018, Valve released official Steam Link game streaming client for the Raspberry Pi 3 and 3 B+.
Software development tools
Arduino IDE – for programming an Arduino.
Algoid – for teaching programming to children and beginners.
BlueJ – for teaching Java to beginners.
Greenfoot – Greenfoot teaches object orientation with Java. Create 'actors' which live in 'worlds' to build games, simulations, and other graphical programs.
Julia – an interactive and cross-platform programming language/environment, that runs on the Pi 1 and later. IDEs for Julia, such as Visual Studio Code, are available. See also Pi-specific GitHub repository JuliaBerry.
Lazarus – a Free Pascal RAD IDE
LiveCode – an educational RAD IDE descended from HyperCard using English-like language to write event-handlers for WYSIWYG widgets runnable on desktop, mobile and Raspberry Pi platforms.
Ninja-IDE – a cross-platform integrated development environment (IDE) for Python.
Processing – an IDE built for the electronic arts, new media art, and visual design communities with the purpose of teaching the fundamentals of computer programming in a visual context.
Scratch – a cross-platform teaching IDE using visual blocks that stack like Lego, originally developed by MIT's Lifelong Kindergarten group. The Pi version is very heavily optimised for the limited computer resources available and is implemented in the Squeak Smalltalk system. The latest version compatible with the Pi 2 Model B is 1.6.
Squeak Smalltalk – a full-scale open Smalltalk.
TensorFlow – an artificial intelligence framework developed by Google. The Raspberry Pi Foundation worked with Google to simplify the installation process through pre-built binaries.
Thonny – a Python IDE for beginners.
V-Play Game Engine – a cross-platform development framework that supports mobile game and app development with the V-Play Game Engine, V-Play apps, and V-Play plugins.
Xojo – a cross-platform RAD tool that can create desktop, web and console apps for Pi 2 and Pi 3.
C-STEM Studio – a platform for hands-on integrated learning of computing, science, technology, engineering, and mathematics (C-STEM) with robotics.
Erlang – a functional language for building concurrent systems with light-weight processes and message passing.
LabVIEW Community Edition – a system-design platform and development environment for a visual programming language from National Instruments.
Accessories
Gertboard – A Raspberry Pi Foundation sanctioned device, designed for educational purposes, that expands the Raspberry Pi's GPIO pins to allow interface with and control of LEDs, switches, analogue signals, sensors and other devices. It also includes an optional Arduino compatible controller to interface with the Pi.
Camera – On 14 May 2013, the foundation and the distributors RS Components & Premier Farnell/Element 14 launched the Raspberry Pi camera board alongside a firmware update to accommodate it. The camera board is shipped with a flexible flat cable that plugs into the CSI connector which is located between the Ethernet and HDMI ports. In Raspbian, the user must enable the use of the camera board by running Raspi-config and selecting the camera option. The camera module costs €20 in Europe (9 September 2013). It uses the OmniVision OV5647 image sensor and can produce 1080p, 720p and 640x480p video. The dimensions are . In May 2016, v2 of the camera came out, and is an 8 megapixel camera using a Sony IMX219.
Infrared Camera – In October 2013, the foundation announced that they would begin producing a camera module without an infrared filter, called the Pi NoIR.
Official Display – On 8 September 2015, the Foundation and the distributors RS Components & Premier Farnell/Element 14 launched the Raspberry Pi Touch Display.
HAT (Hardware Attached on Top) expansion boards – Together with the Model B+, and inspired by the Arduino shield boards, the interface for HAT boards was devised by the Raspberry Pi Foundation. Each HAT board carries a small EEPROM (typically a CAT24C32WI-GT3) containing the relevant details of the board, so that the Raspberry Pi's OS is informed of the HAT and of its technical details relevant to the OS using the HAT. Mechanical details of a HAT board, which uses the four mounting holes in their rectangular formation, are available online.
High Quality Camera – In May 2020, the 12.3 megapixel Sony IMX477 sensor camera module was released with support for C- and CS-mount lenses. The unit initially retailed for US$50, with interchangeable lenses starting at US$25.
e-CAM130_CURB – In Nov 2020, the 13 megapixel ON Semiconductor AR1335 sensor camera module was released with support for S-mount lenses. The unit initially retailed for US$99.
Vulnerability to flashes of light
In February 2015, a switched-mode power supply chip, designated U16, of the Raspberry Pi 2 Model B version 1.1 (the initially released version) was found to be vulnerable to flashes of light, particularly the light from xenon camera flashes and green and red laser pointers. The U16 chip has WL-CSP packaging, which exposes the bare silicon die. The Raspberry Pi Foundation blog recommended covering U16 with opaque material (such as Sugru or Blu-Tak) or putting the Raspberry Pi 2 in a case. This issue was not discovered before the release of the Raspberry Pi 2 because it is not standard or common practice to test susceptibility to optical interference, while commercial electronic devices are routinely subjected to tests of susceptibility to radio interference.
Reception and use
Technology writer Glyn Moody described the project in May 2011 as a "potential ", not by replacing machines but by supplementing them. In March 2012 Stephen Pritchard echoed the BBC Micro successor sentiment in ITPRO. Alex Hope, co-author of the Next Gen report, is hopeful that the computer will engage children with the excitement of programming. Co-author Ian Livingstone suggested that the BBC could be involved in building support for the device, possibly branding it as the BBC Nano. The Centre for Computing History strongly supports the Raspberry Pi project, feeling that it could "usher in a new era". Before release, the board was showcased by ARM's CEO Warren East at an event in Cambridge outlining Google's ideas to improve UK science and technology education.
Harry Fairhead, however, suggests that more emphasis should be put on improving the educational software available on existing hardware, using tools such as Google App Inventor to return programming to schools, rather than adding new hardware choices. Simon Rockman, writing in a ZDNet blog, was of the opinion that teens will have "better things to do", despite what happened in the 1980s.
In October 2012, the Raspberry Pi won T3's Innovation of the Year award, and futurist Mark Pesce cited a (borrowed) Raspberry Pi as the inspiration for his ambient device project MooresCloud. In October 2012, the British Computer Society reacted to the announcement of enhanced specifications by stating, "it's definitely something we'll want to sink our teeth into."
In June 2017, Raspberry Pi won the Royal Academy of Engineering MacRobert Award. The citation for the award to the Raspberry Pi said it was "for its inexpensive credit card-sized microcomputers, which are redefining how people engage with computing, inspiring students to learn coding and computer science and providing innovative control solutions for industry."
Clusters of hundreds of Raspberry Pis have been used for testing programs destined for supercomputers.
Community
The Raspberry Pi community was described by Jamie Ayre of FLOSS software company AdaCore as one of the most exciting parts of the project. Community blogger Russell Davis said that the community strength allows the Foundation to concentrate on documentation and teaching. The community developed a fanzine around the platform called The MagPi which in 2015, was handed over to the Raspberry Pi Foundation by its volunteers to be continued in-house. A series of community Raspberry Jam events have been held across the UK and around the world.
Education
Enquiries about the board in the United Kingdom have been received from schools in both the state and private sectors, with around five times as much interest from the latter. It is hoped that businesses will sponsor purchases for less advantaged schools. The CEO of Premier Farnell said that the government of a country in the Middle East had expressed interest in providing a board to every schoolgirl, to enhance her employment prospects.
In 2014, the Raspberry Pi Foundation hired a number of its community members including ex-teachers and software developers to launch a set of free learning resources for its website. The Foundation also started a teacher training course called Picademy with the aim of helping teachers prepare for teaching the new computing curriculum using the Raspberry Pi in the classroom.
In 2018, NASA launched the JPL Open Source Rover Project, which is a scaled down version of Curiosity rover and uses a Raspberry Pi as the control module, to encourage students and hobbyists to get involved in mechanical, software, electronics, and robotics engineering.
Home automation
There are a number of developers and applications using the Raspberry Pi for home automation. These programmers are making an effort to turn the Raspberry Pi into an affordable solution for energy monitoring and power management. Because of its relatively low cost, the Raspberry Pi has become a popular and economical alternative to more expensive commercial solutions.
Industrial automation
In June 2014, Polish industrial automation manufacturer TECHBASE released ModBerry, an industrial computer based on the Raspberry Pi Compute Module. The device has a number of interfaces, most notably RS-485/232 serial ports, digital and analogue inputs/outputs, CAN and economical 1-Wire buses, all of which are widely used in the automation industry. The design allows the use of the Compute Module in harsh industrial environments, leading to the conclusion that the Raspberry Pi is no longer limited to home and science projects, but can be widely used as an Industrial IoT solution and achieve goals of Industry 4.0.
In March 2018, SUSE announced commercial support for SUSE Linux Enterprise on the Raspberry Pi 3 Model B to support a number of undisclosed customers implementing industrial monitoring with the Raspberry Pi.
In January 2021, TECHBASE announced a Raspberry Pi Compute Module 4 cluster for AI accelerator, routing and file server use. The device contains one or more standard Raspberry Pi Compute Module 4s in an industrial DIN rail housing, with some versions containing one or more Coral Edge tensor processing units.
Commercial products
The Organelle is a portable synthesizer, a sampler, a sequencer, and an effects processor designed and assembled by Critter & Guitari. It incorporates a Raspberry Pi computer module running Linux.
OTTO is a digital camera created by Next Thing Co. It incorporates a Raspberry Pi Compute Module. It was successfully crowd-funded in a May 2014 Kickstarter campaign.
Slice is a digital media player which also uses a Compute Module as its heart. It was crowd-funded in an August 2014 Kickstarter campaign. The software running on Slice is based on Kodi.
Numerous commercial thin client computer terminals use the Raspberry Pi.
COVID-19 pandemic
In Q1 of 2020, during the coronavirus pandemic, Raspberry Pi computers saw a large increase in demand primarily due to the increase in working from home, but also because of the use of many Raspberry Pi Zeros in ventilators for COVID-19 patients in countries such as Colombia, which were used to combat strain on the healthcare system. In March 2020, Raspberry Pi sales reached 640,000 units, the second largest month of sales in the company's history.
Astro Pi and Proxima
A project was launched in December 2014 at an event held by the UK Space Agency. The Astro Pi was an augmented Raspberry Pi that included a sensor HAT with a visible-light or infrared camera. The Astro Pi competition, called Principia, was officially opened in January and was open to all primary and secondary school aged children who were residents of the United Kingdom. During his mission, British ESA astronaut Tim Peake deployed the computers on board the International Space Station. He loaded the winning code while in orbit, collected the data generated and then sent this to Earth, where it was distributed to the winning teams. Covered themes during the competition included spacecraft sensors, satellite imaging, space measurements, data fusion and space radiation.
The organisations involved in the Astro Pi competition include the UK Space Agency, UKspace, Raspberry Pi, ESERO-UK and ESA.
In 2017, the European Space Agency ran another competition, called Proxima, open to all students in the European Union. The winning programs were run on the ISS by Thomas Pesquet, a French astronaut. In December 2021, the Dragon 2 spacecraft launched by NASA carried a pair of Astro Pi units.
History
In 2006, early concepts of the Raspberry Pi were based on the Atmel ATmega644 microcontroller. Its schematics and PCB layout are publicly available. Foundation trustee Eben Upton assembled a group of teachers, academics and computer enthusiasts to devise a computer to inspire children. The computer is inspired by Acorn's BBC Micro of 1981. The Model A, Model B and Model B+ names are references to the original models of the British educational BBC Micro computer, developed by Acorn Computers. The first ARM prototype version of the computer was mounted in a package the same size as a USB memory stick. It had a USB port on one end and an HDMI port on the other.
The Foundation's goal was to offer two versions, priced at US$25 and $35. They started accepting orders for the higher priced Model B on 29 February 2012, the lower cost Model A on 4 February 2013, and the even lower cost (US$20) A+ on 10 November 2014. On 26 November 2015, the cheapest Raspberry Pi yet, the Raspberry Pi Zero, was launched at US$5 or £4. According to Upton, the name "Raspberry Pi" was chosen with "Raspberry" as an ode to a tradition of naming early computer companies after fruit, and "Pi" as a reference to the Python programming language.
Pre-launch
August 2011 – 50 alpha boards are manufactured. These boards were functionally identical to the planned Model B, but they were physically larger to accommodate debug headers. Demonstrations of the board showed it running the LXDE desktop on Debian, Quake 3 at 1080p, and Full HD MPEG-4 video over HDMI.
October 2011 – A version of RISC OS was demonstrated in public, and following a year of development the port was released for general consumption in November 2012.
December 2011 – Twenty-five Model B Beta boards were assembled and tested from one hundred unpopulated PCBs. The component layout of the Beta boards was the same as on production boards. A single error was discovered in the board design where some pins on the CPU were not held high; it was fixed for the first production run. The Beta boards were demonstrated booting Linux, playing a 1080p movie trailer and the Rightware Samurai OpenGL ES benchmark.
Early 2012 – During the first week of the year, the first 10 boards were put up for auction on eBay. One was bought anonymously and donated to the museum at The Centre for Computing History in Cambridge, England. The ten boards (with a total retail price of £220) together raised over £16,000, with the last to be auctioned, serial number No. 01, raising £3,500. In advance of the anticipated launch at the end of February 2012, the Foundation's servers struggled to cope with the load placed by watchers repeatedly refreshing their browsers.
Launch
19 February 2012 – The first proof of concept SD card image that could be loaded onto an SD card to produce a preliminary operating system is released. The image was based on Debian 6.0 (Squeeze), with the LXDE desktop and the Midori browser, plus various programming tools. The image also runs on QEMU allowing the Raspberry Pi to be emulated on various other platforms.
29 February 2012 – Initial sales commenced on 29 February 2012 at 06:00 UTC. At the same time, it was announced that the Model A, originally to have had 128 MB of RAM, was to be upgraded to 256 MB before release. The Foundation's website also announced: "Six years after the project's inception, we're nearly at the end of our first run of development – although it's just the beginning of the Raspberry Pi story." The web-shops of the two licensed manufacturers selling Raspberry Pis within the United Kingdom, Premier Farnell and RS Components, had their websites stalled by heavy web traffic immediately after the launch (RS Components briefly going down completely). Unconfirmed reports suggested that there were over two million expressions of interest or pre-orders. The official Raspberry Pi Twitter account reported that Premier Farnell sold out within a few minutes of the initial launch, while RS Components took over 100,000 pre-orders on day one. Manufacturers were reported in March 2012 to be taking a "healthy number" of pre-orders.
March 2012 – Shipping delays for the first batch were announced in March 2012, as the result of installation of an incorrect Ethernet port, but the Foundation expected that manufacturing quantities of future batches could be increased with little difficulty if required. "We have ensured we can get them [the Ethernet connectors with magnetics] in large numbers and Premier Farnell and RS Components [the two distributors] have been fantastic at helping to source components," Upton said. The first batch of 10,000 boards was manufactured in Taiwan and China.
8 March 2012 – Release Raspberry Pi Fedora Remix, the recommended Linux distribution, developed at Seneca College in Canada.
March 2012 – The Debian port is initiated by Mike Thompson, former CTO of Atomz. The effort was largely carried out by Thompson and Peter Green, a volunteer Debian developer, with some support from the Foundation, who tested the resulting binaries that the two produced during the early stages (neither Thompson nor Green had physical access to the hardware, as boards were not widely accessible at the time due to demand). While the preliminary proof of concept image distributed by the Foundation before launch was also Debian-based, it differed from Thompson and Green's Raspbian effort in a couple of ways. The POC image was based on then-stable Debian Squeeze, while Raspbian aimed to track then-upcoming Debian Wheezy packages. Aside from the updated packages that would come with the new release, Wheezy was also set to introduce the armhf architecture, which became the raison d'être for the Raspbian effort. The Squeeze-based POC image was limited to the armel architecture, which was, at the time of Squeeze's release, the latest attempt by the Debian project to have Debian run on the newest ARM embedded-application binary interface (EABI). The armhf architecture in Wheezy intended to make Debian run on the ARM VFP hardware floating-point unit, while armel was limited to emulating floating point operations in software. Since the Raspberry Pi included a VFP, being able to make use of the hardware unit would result in performance gains and reduced power use for floating point operations. The armhf effort in mainline Debian, however, was orthogonal to the work surrounding the Pi and only intended to allow Debian to run on ARMv7 at a minimum, which would mean the Pi, an ARMv6 device, would not benefit. As a result, Thompson and Green set out to build the 19,000 Debian packages for the device using a custom build cluster.
Post-launch
16 April 2012 – Reports appear from the first buyers who had received their Raspberry Pi.
20 April 2012 – The schematics for the Model A and Model B are released.
18 May 2012 – The Foundation reported on its blog about a prototype camera module they had tested. The prototype used a module.
22 May 2012 – Over 20,000 units had been shipped.
July 2012 – Release of Raspbian.
16 July 2012 – It was announced that 4,000 units were being manufactured per day, allowing Raspberry Pis to be bought in bulk.
24 August 2012 – Hardware accelerated video (H.264) encoding became available after it became known that the existing licence also covered encoding. Formerly it was thought that encoding would be added with the release of the announced camera module. However, no stable software exists for hardware H.264 encoding. At the same time the Foundation released two additional codecs that can be bought separately, MPEG-2 and Microsoft's VC-1. It was also announced that the Pi would implement CEC, enabling it to be controlled with the television's remote control.
5 September 2012 – The Foundation announced a second revision of the Raspberry Pi Model B, a revision 2.0 board with a number of minor corrections and improvements.
6 September 2012 – Announcement that in future the bulk of Raspberry Pi units would be manufactured in the UK, at Sony's manufacturing facility in Pencoed, Wales. The Foundation estimated that the plant would produce 30,000 units per month, and would create about 30 new jobs.
15 October 2012 – It is announced that new Raspberry Pi Model Bs are to be fitted with 512 MB instead of 256 MB RAM.
24 October 2012 – The Foundation announces that "all of the VideoCore driver code which runs on the ARM" had been released as free software under a BSD-style licence, making it "the first ARM-based multimedia SoC with functional, vendor-provided (as opposed to partial, reverse engineered) fully open-source drivers", although this claim has not been universally accepted. On 28 February 2014, they also announced the release of full documentation for the VideoCore IV graphics core, and a complete source release of the graphics stack under a 3-clause BSD licence.
October 2012 – It was reported that some customers of one of the two main distributors had been waiting more than six months for their orders. This was reported to be due to difficulties in sourcing the CPU and conservative sales forecasting by this distributor.
17 December 2012 – The Foundation, in collaboration with IndieCity and Velocix, opens the Pi Store, as a "one-stop shop for all your Raspberry Pi (software) needs". Using an application included in Raspbian, users can browse through several categories and download what they want. Software can also be uploaded for moderation and release.
3 June 2013 – "New Out of Box Software" or NOOBS is introduced. This makes the Raspberry Pi easier to use by simplifying the installation of an operating system. Instead of using specific software to prepare an SD card, a file is unzipped and the contents copied over to a FAT formatted (4 GB or bigger) SD card. That card can then be booted on the Raspberry Pi and a choice of six operating systems is presented for installation on the card. The system also contains a recovery partition that allows for the quick restoration of the installed OS, tools to modify the config.txt and an online help button and web browser which directs to the Raspberry Pi Forums.
October 2013 – The Foundation announces that the one millionth Pi has been manufactured in the United Kingdom.
November 2013 – The Foundation announces that the two millionth Pi shipped between 24 and 31 October.
28 February 2014 – On the day of the second anniversary of the Raspberry Pi, Broadcom, together with the Raspberry Pi foundation, announced the release of full documentation for the VideoCore IV graphics core, and a complete source release of the graphics stack under a 3-clause BSD licence.
7 April 2014 – The official Raspberry Pi blog announced the Raspberry Pi Compute Module, a device in a 200-pin DDR2 SO-DIMM-configured memory module (though not in any way compatible with such RAM), intended for consumer electronics designers to use as the core of their own products.
June 2014 – The official Raspberry Pi blog mentioned that the three millionth Pi shipped in early May 2014.
14 July 2014 – The official Raspberry Pi blog announced the Raspberry Pi Model B+, "the final evolution of the original Raspberry Pi. For the same price as the original Raspberry Pi model B, but incorporating numerous small improvements people have been asking for".
10 November 2014 – The official Raspberry Pi blog announced the Raspberry Pi Model A+. It is the smallest and cheapest Raspberry Pi so far and has the same processor and RAM as the Model A. Like the A, it has no Ethernet port and only one USB port, but it does have the other innovations of the B+, such as lower power consumption, a micro-SD-card slot, and 40-pin HAT-compatible GPIO.
2 February 2015 – The official Raspberry Pi blog announced the Raspberry Pi 2. Looking like a Model B+, it has a 900 MHz quad-core ARMv7 Cortex-A7 CPU, twice the memory (for a total of 1 GB) and complete compatibility with the original generation of Raspberry Pis.
14 May 2015 – The price of Model B+ was decreased from US$35 to $25, purportedly as a "side effect of the production optimizations" from the Pi 2 development. Industry observers have sceptically noted, however, that the price drop appeared to be a direct response to the CHIP, a lower-priced competitor discontinued in April 2017.
29 September 2015 – A new version of the Raspbian operating system, based on Debian Jessie, is released.
26 November 2015 – The Raspberry Pi Foundation launched the Raspberry Pi Zero, the smallest and cheapest member of the Raspberry Pi family yet, at 65 mm × 30 mm and US$5. The Zero is similar to the Model A+ but without the camera and LCD connectors; it is smaller and uses less power. It was given away with issue No. 40 of the Raspberry Pi magazine The MagPi, distributed in the UK and US that day; The MagPi sold out at almost every retailer internationally due to the freebie.
29 February 2016 – Raspberry Pi 3 is launched, with a BCM2837 1.2 GHz 64-bit quad-core processor based on the ARMv8 Cortex-A53, built-in Wi-Fi (BCM43438, 802.11n, 2.4 GHz) and Bluetooth 4.1 Low Energy (BLE). It starts with a 32-bit Raspbian version, with a 64-bit version to come later if "there is value in moving to 64-bit mode". In the same announcement it was said that a new BCM2837-based Compute Module was expected to be introduced a few months later.
February 2016 – The Raspberry Pi Foundation announces that they had sold eight million devices (for all models combined), making it the best-selling UK personal computer, ahead of the Amstrad PCW. Sales reached ten million in September 2016.
25 April 2016 – Raspberry Pi Camera v2.1 announced, with 8 megapixels, in normal and NoIR (infrared-sensitive) versions. The camera uses the Sony IMX219 sensor; to make use of the new resolution, the software has to be updated.
10 October 2016 – NEC Display Solutions announces that select models of commercial displays to be released in early 2017 will incorporate a Raspberry Pi 3 Compute Module.
14 October 2016 – Raspberry Pi Foundation announces their co-operation with NEC Display Solutions. They expect that the Raspberry Pi 3 Compute Module will be available to the general public by the end of 2016.
25 November 2016 – 11 million units sold.
16 January 2017 – Compute Module 3 and Compute Module 3 Lite are launched.
28 February 2017 – Raspberry Pi Zero W, with Wi-Fi and Bluetooth via a chip-scale antenna, is launched.
17 August 2017 – The Raspbian operating system is upgraded to a new version, based on Debian Stretch.
14 March 2018 – On Pi Day, the Raspberry Pi Foundation introduces the Raspberry Pi 3 Model B+, with improved performance over the Raspberry Pi 3B, an updated version of the Broadcom application processor, better Wi-Fi and Bluetooth performance, and the addition of the 5 GHz band.
15 November 2018 – Raspberry Pi 3 Model A+ launched.
28 January 2019 – Compute Module 3+ (CM3+/Lite, CM3+/8 GB, CM3+/16 GB and CM3+/32 GB) launched.
24 June 2019 – Raspberry Pi 4 Model B launched, along with a new version of the Raspbian operating system based on Debian Buster.
10 December 2019 – 30 million units sold; sales are about 6 million per year.
28 May 2020 – An 8 GB version of the Raspberry Pi 4 is announced for $75. Raspberry Pi OS is split off from Raspbian and now includes a beta of a 64-bit version that allows programs to use more than 4 GB of RAM.
19 October 2020 – Compute Module 4 launched.
2 November 2020 – Raspberry Pi 400 launched. It is a keyboard that incorporates a Raspberry Pi 4; the GPIO pins remain accessible.
21 January 2021 – Raspberry Pi Pico launched. It is the first microcontroller-class product from Raspberry Pi, based on the RP2040 microcontroller developed by the company.
11 May 2021 – 40 million units sold.
30 October 2021 – Raspberry Pi OS (formerly Raspbian) is updated to version 11, based on Debian Bullseye. With this release, the default clock speed for revision 1.4 of the Raspberry Pi 4 is increased to 1.8 GHz.
Sales
According to the Raspberry Pi Foundation, more than 5 million Raspberry Pis were sold by February 2015, making it the best-selling British computer. By November 2016 they had sold 11 million units, and 12.5 million by March 2017, making it the third best-selling "general purpose computer". In July 2017, sales reached nearly 15 million, climbing to 19 million in March 2018. By December 2019, a total of 30 million devices had been sold.
See also
Single-board computer
Plug computer
References
Further reading
Raspberry Pi For Dummies; Sean McManus and Mike Cook; 2013.
Getting Started with Raspberry Pi; Matt Richardson and Shawn Wallace; 2013.
Raspberry Pi User Guide; Eben Upton and Gareth Halfacree; 2014.
Hello Raspberry Pi!; Ryan Heitz; 2016.
External links
Raspberry Pi, Department of Computer Science and Technology, University of Cambridge
Raspberry Pi Wiki, supported by the RPF
The MagPi Magazine
"Raspberry Pi pinout" board GPIO pinout
"Raspberry Pi component map"
"RaspberryPi Boards: Hardware versions/revisions"
ARM1176JZF-S (ARM11 CPU Core) Technical Reference Manual, ARM Ltd.
2012 establishments in the United Kingdom
ARM architecture
British brands
Computers designed in the United Kingdom
British inventions
Computer science education in the United Kingdom
Educational hardware
Linux-based devices
Products introduced in 2012
Single-board computers |
40679118 | https://en.wikipedia.org/wiki/HealthCare.gov | HealthCare.gov | HealthCare.gov is a health insurance exchange website operated by the United States federal government under the provisions of the Affordable Care Act (ACA, often referred to as 'Obamacare'), which currently serves the residents of the U.S. states that have opted not to create their own state exchanges. The exchange facilitates the sale of private health insurance plans to residents of the United States and offers subsidies to those who earn between one and four times the federal poverty line, but not to those earning less than the federal poverty line. The website also assists persons who are eligible for Medicaid to sign up, and has a separate marketplace for small businesses.
The October 1, 2013 roll-out of HealthCare.gov went through as planned, despite the concurrent partial government shutdown. However, the launch was marred by serious technological problems, making it difficult for the public to sign up for health insurance. The deadline to sign up for coverage that would begin January 2014 was December 23, 2013, by which time the problems had largely been fixed. The open enrollment period for 2016 coverage ran from November 1, 2015 to January 31, 2016. State exchanges also have had the same deadlines; their performance has been varied.
The design of the website was overseen by the Centers for Medicare and Medicaid Services and built by a number of federal contractors, most prominently CGI Group of Canada. The original budget for CGI was $93.7 million, but this grew to $292 million prior to launch of the website. Estimates put the overall cost of building the website at over $500 million prior to launch, and in early 2014 HHS Secretary Sylvia Mathews Burwell said spending would reach "approximately $834 million on Marketplace-related IT contracts and interagency agreements". The Office of Inspector General released a report in August 2014 finding that the total cost of the HealthCare.gov website had reached $1.7 billion, and a month later, including costs beyond "computer systems," Bloomberg News estimated it at $2.1 billion. On July 30, 2014, the Government Accountability Office released a non-partisan study that concluded the administration did not provide "effective planning or oversight practices" in developing the HealthCare.gov website.
Background and functionality
The site functions as a clearing house to allow Americans to compare prices on health insurance plans in their states, to begin enrollment in a chosen plan, and to simultaneously find out if they qualify for government healthcare subsidies. Visitors first sign up and create a user account, providing some personal information, before receiving detailed information about what is available in their area. Designed to assist the millions of uninsured Americans, the comparison-shopping features use a visual format somewhat analogous to websites such as Amazon.com and Etsy.
HealthCare.gov also details Medicaid options for individuals; this relates to the expansion of that long-running program undertaken under the PPACA. The Congressional Budget Office (CBO) projected that an estimated seven million Americans would use the exchange to obtain coverage during the first year after its launch; current estimates suggest that the combined figure is slightly above eight million.
Development and history
President Barack Obama signed the Affordable Care Act (ACA) into law on March 23, 2010 in the East Room before a select audience of nearly 300. He stated that the health reform effort, designed to expand health insurance coverage and enacted after a long and acrimonious debate in the face of fierce opposition in the U.S. Congress, was based on "the core principle that everybody should have some basic security when it comes to their health care". The primary purpose of the ACA was to increase coverage of the American people, through either public or private insurance, and to control healthcare costs. The Congressional Budget Office (CBO) estimated that the ACA would reduce the number of uninsured by 32 million, increasing coverage of non-elderly citizens from 83 to 94 per cent. Insurers were not allowed to deny insurance to applicants with pre-existing conditions. The Sunlight Foundation has stated that at least forty-seven private company contractors have been involved with the ACA in some capacity as of fall 2013, with the measure causing a wide variety of policy changes. Journalists writing for The New York Times have called the ACA "the most expansive social legislation enacted in decades".
A report by Reuters described HealthCare.gov itself as the "key" to the reform measure. Development of the website's interface, as well as of the supporting back-end services that let people compare health insurance plans, was outsourced to private companies. The front-end of the website was developed by the startup Development Seed. The back-end work was contracted out to CGI Federal Inc., a subsidiary of the Canadian IT multinational CGI Group, which subcontracted the work to other companies, as is common on large government contracts. CGI was also responsible for building some of the state-level healthcare exchanges, with varying levels of success (some did not open on schedule).
According to author and journalist John J. Xenakis, CGI Federal's attempt in Massachusetts was a complete failure. In Xenakis's view, although the Massachusetts connector was the type of website that a small team of five to ten people could create in a few months on a $10 million budget, a team of around 300 with a $200 million budget failed. Xenakis claims CGI Federal was likely to have hired many incompetent programmers, leading Massachusetts to transfer the development contract to another firm, Optum Inc. The software created by CGI was of poor quality and unusable by Optum, which had to start from scratch. CGI has also been accused of submitting fraudulent tests and reports to those in charge of oversight. Similar problems occurred in many other states.
According to John J. Xenakis, the Obama administration allocated far too much money to create the federal and individual state websites, which led to large, unmanageable teams containing many incompetent programmers. It also encouraged fraud and overspending by programmers.
Specifically, aspects of HealthCare.gov relating to digital identity authentication were assigned to Experian. Quality Software Services, Inc. (QSSI) also played a role. The total number of companies enlisted in the website's creation, and their names, has not been disclosed by the Department of Health and Human Services. The whole effort was officially coordinated by the Centers for Medicare and Medicaid Services (CMS), an agency that commentators such as journalists David Perera and Sean Gallagher have speculated was ill-suited to that task. Social activist and technologist Clay Johnson later said that the federal government ran into trouble because it "leans towards a write-down-all-the-requirements-then-build-to-those-requirements type of methodology", which is not well suited to current IT, especially when government contractors are focused on maximizing profits.
"The firms that typically get contracts are the firms that are good at getting contracts, not typically good at executing on them," Alex Howard, a fellow at the Harvard Ash Center for Democratic Governance and Innovation, remarked to The Verge as he evaluated the back-end of the project. In contrast, the web-magazine's journalist Adrianne Jeffries praised the successful use of an "innovative" startup business for the front-end. However, she found the overall rollout "bone-headed".
The Obama administration repeatedly modified regulations and policies until summer 2013, meaning contractors had to deal with changing requirements. However, changing requirements are by no means unusual in a large, expensive custom software project; they are a well-known factor in historical project failures, and methodologies such as agile software development have been developed to cope with them. Unfortunately, regulations pertaining to large government contracts in many countries, including the United States, are not a good match for agile software development.
Statistics
Analysis by the Reuters news agency in mid-October 2013 found that the total contract-based cost of building HealthCare.gov had swelled threefold from its initial estimate of $93.7 million to about $292 million. In August 2014, the Office of Inspector General released a report finding that the cost of the HealthCare.gov website had reached $1.7 billion. As commentators such as Mark Steyn later pointed out, the CGI company had previously been embroiled in a mid-2000s controversy over contract payments: while it was devising the Canadian Firearms Registry, estimated costs of $2 million ballooned to about $2 billion.
On March 25, 2019, the Centers for Medicare and Medicaid Services reported that 11.4 million Americans had selected a plan or were automatically re-enrolled in Exchange coverage during the 2019 Open Enrollment Period.
Concerns about the website
Issues during launch
The HealthCare.gov website was launched on the scheduled date of October 1, 2013. The government shutdown began on the same day, but HealthCare.gov was one of the federal government websites that remained open through the events. Although it appeared to be up and running normally, visitors quickly encountered numerous technical problems, and, by some estimates, only 1% of interested people were able to enroll through the site in its first week of operation. Even for those who did manage to enroll, insurance providers later reported instances of applications submitted through the site with required information missing.
In Bloomberg Businessweek, journalist Paul Ford summed up the issue by remarking, "Regardless of your opinions on the health-care law, this is the wrong way to make software." He also wrote, "In the meantime, it’s clear that tens of millions of dollars have been spent to launch something broken." A ConsumerReports.org article reiterated previous advice, with the group recommending that people stay "away from HealthCare.gov for at least another month". The group also stated, "Hopefully that will be long enough for its software vendors to clean up the mess they've made."
In its third week of operations, technical problems continued. A CNN.com article highlighted the "maddeningly long wait times" as an issue. Other problems included, for example, broken pull-down menus that worked only intermittently.
Todd Park, the U.S. chief technology officer, initially said on October 6 that the glitches were caused by unexpectedly high volume, when the site drew 250,000 simultaneous users instead of the 50,000–60,000 expected. He claimed that the site would have worked with fewer simultaneous users. More than 8.1 million people visited the site from October 1 to 4. White House officials subsequently conceded that it was not just an issue of volume, but involved software and systems design issues. For example, consumers were required to create an account before being able to compare plans, and the registration process may have created a bottleneck that led to the long wait times. Also, stress tests done by contractors one day before the launch date revealed that the site became too slow with only 1,100 simultaneous users, nowhere near even the 50,000–60,000 expected.
Despite later comments, concerns about the readiness of the exchanges had been raised in March 2013 by Henry Chao, the deputy chief information officer at the Centers for Medicare and Medicaid Services (CMS), who had said, "let's just make sure it's not a third-world experience". A colleague of his, Gary Cohen, had also remarked, "Everyone recognizes that day one will not be perfect." Even by 2011, when the CMS awarded its private-sector contracts, most of the PPACA regulations and implementation measures were still in flux.
The New York Times and The Washington Post reported in November 2013 that the Obama administration brought in consulting firm McKinsey & Company to assess the website. Their report, delivered in March 2013, warned that the effort to build the HealthCare.gov site was falling behind and was at risk of failure unless immediate steps were taken to correct the problems.
On October 21, 2013, President Barack Obama addressed the technical problems and other issues in a thirty-minute press conference at the White House Rose Garden, saying that there was "no excuse" for them. He remarked, "There's no sugar coating: the website has been too slow, people have been getting stuck during the application process and I think it's fair to say that nobody's more frustrated by that than I am." He also stated that a "tech surge" was underway to fix the problems. The President additionally pointed out that people could instead apply through a call center or in person.
White House Press Secretary Jay Carney said more time was needed to get the website working properly. Carney also hinted that if the problems remained unresolved long enough to prevent people from meeting their legal obligation to obtain insurance by the February deadline, the legal penalty for not obtaining insurance would not apply, because the Obamacare law states that if affordable care is not available, the penalty is not payable.
Shortly after HealthCare.gov's launch, then, the problems did not affect the legal requirement for Americans to have health insurance by December 15, which remained on the books as stated. However, on October 23, the effective legal deadline for applying for health insurance via HealthCare.gov without incurring a penalty under the individual mandate was extended to March 31, 2014, possibly because of the problems with HealthCare.gov and some of the state healthcare exchanges (though no official explanation was given).
The Obama administration appointed a contractor, Quality Software Services, Inc. (QSSI), to coordinate the fixing of the website's problems. The company had already worked on the website's back-end before the site went live. As stated before, prior to the launch the Centers for Medicare and Medicaid Services (CMS) had played the coordinating role, but critics charged that it was ill-suited to such a systems-integration role. The administration appointed Jeffrey Zients to act as its adviser in the matter.
On October 25, Zients promised, in a conference call with the media, that the site would be working well "for the vast majority of users" by the end of November. He also claimed that 90% of visitors were by then able to complete the account-creation process and actually use HealthCare.gov to compare plans. Perhaps the largest remaining issue, as he acknowledged in the call, was the error-riddled reports given to insurers, which often garbled basic details such as an individual's gender.
Problems with HealthCare.gov persisted for weeks after the launch. For example, a networking failure at the related data-services hub killed the website's functionality again on October 28, the day after Health and Human Services head Kathleen Sebelius had highlighted the design of that data hub as a government success. State-based exchanges, however, mostly worked well in registering individuals during this period, with CNN.com describing them as "largely error free".
A large number of technical fixes took place through October and November, with an NPR.org report later remarking that the website seemed to be "working more smoothly." Yet, on November 13, the Obama administration revealed that fewer than 27,000 people had signed up to private health insurance through the site. By November 30, more than 137,000 people had obtained health insurance through the federal website. That figure represented a strong increase, but enrollment figures were still vastly below past U.S. government forecasts.
Accenture was chosen to replace CGI Group as the lead contractor for the website in January 2014.
A major issue for future enrollments was the accuracy of HealthCare.gov information sent to insurance companies. According to an NPR.org article citing "continuing problems" with HealthCare.gov, about one in ten enrollment notices contained a significant error.
2015 open enrollment period
Enrollment for the 2015 year through the federal government website, which serves the 37 states without their own enrollment websites, started at midnight on November 15, 2014, and ended on February 15, 2015.
The United States Department of Health and Human Services reported a relatively smooth experience for users, although there were scattered reports of problems such as blocked login access and long wait times. In one case a call-center worker told a reporter that resolving their issue might take five to seven business days.
In January, USA Today reported that more health plans were offered in about 75% of counties for 2015, and that while average premium increases were smaller than the roughly 10% annual jumps typical of plans before the Affordable Care Act, there were still some very large increases.
State and federal health care exchanges enrolled more than 9.5 million people, but the numbers varied by state. Florida accounted for almost a seventh of all people who selected plans on the exchanges, while Texas, which has the largest share of uninsured adults, lagged in enrollments.
2016 open enrollment period
The open enrollment period for 2016 began on November 1, 2015 and ended on January 31, 2016.
Fake websites
Before HealthCare.gov went online, there was concern about misleading or fake websites at the state or local level. In early December 2013, a third fake health insurance site was shut down in Kentucky. California Democratic Party politicians condemned a California Republican Party-created website which resembled the state's official Obamacare sign-up website but provided political criticisms of the law instead of insurance coverage.
Privately-operated exchanges
Partly in response to the HealthCare.gov outages, a number of privately operated services launched to provide tools for consumers to calculate subsidy eligibility and to research, compare, and enroll in plans; examples include HealthSherpa, Stride Health and HealthPocket. In November 2013, HealthSherpa was launched by a team of coders in San Francisco and received media attention for its comparative ease of use. Critics pointed out that by focusing only on providing information, the HealthSherpa site did not resolve some of the most difficult problems, including allowing consumers to actually enroll in a plan. Stride Health launched in 2014 and focused on simplifying health care enrollment by recommending plans to users based on their data and offering a full-service phone team to help users enroll in plans. The company saw early success through partnerships with a number of large companies. By March 2014, HealthSherpa had become a full-service broker allowing users to enroll directly on the HealthSherpa site.
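The subsidy rule these services implement is, at bottom, the range check described earlier in this article: subsidies go to those earning between one and four times the federal poverty line, and not to those below it. The following C sketch shows only that core check; the poverty-line figure is a placeholder for a single-person household, and real eligibility also depends on household size, location and other factors not covered here.

```c
/* subsidy_check.c - minimal sketch of the ACA subsidy range check
 * described in this article: subsidies go to those earning between
 * one and four times the federal poverty line (FPL).
 */
#include <stdio.h>
#include <stdbool.h>

/* Placeholder figure for a single-person household; the real
 * guideline varies by year and household size. */
#define FPL_SINGLE 11670.0

static bool subsidy_eligible(double income, double fpl)
{
    /* Eligible if income falls within [1x FPL, 4x FPL]. */
    return income >= fpl && income <= 4.0 * fpl;
}

int main(void)
{
    const double incomes[] = { 9000.0, 25000.0, 60000.0 };

    for (int i = 0; i < 3; i++) {
        printf("income %8.0f -> %s\n", incomes[i],
               subsidy_eligible(incomes[i], FPL_SINGLE)
                   ? "within subsidy range"
                   : "outside subsidy range");
    }
    return 0;
}
```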
Security
In July 2014 a hacker broke into a test server for HealthCare.gov and uploaded malicious software. By the end of 2014 the site's developers had apparently rewritten a large portion of the site, moving important functions server-side instead of executing them client-side in the user's web browser.
Data privacy
The initial launch of HealthCare.gov was plagued with security concerns and led to information security experts publicly testifying before the Congressional Committee on Science, Space and Technology, and others speaking to the government about security vulnerabilities in HealthCare.gov. David Kennedy was able to locate 70,000 health records that were supposed to be private but were publicly available via a Google dork (a crafted search-engine query).
There are concerns that personal information put into the website may not be secure in the way that users expect. On January 24, 2015, Kevin Counihan, the CEO of HealthCare.gov, addressed concerns about privacy on the federal website. He said a review had been launched of its privacy policies, contracts for third-party tools, and URL construction. He also said that HealthCare.gov had encrypted a URL that contained data on users' income and age, and whether they were pregnant.
On January 20, 2015, the Associated Press reported, in an article titled "Government health care website quietly sharing personal data", that HealthCare.gov was providing access to enrollees' personal data to private companies that specialize in advertising. The data may include age, income, ZIP code, whether a person smokes, and whether a person is pregnant. It may also include a computer's Internet address, which can identify a person's name or address when combined with other information collected by data brokers and online advertising firms.
There is no evidence that this data has been misused, but connections to dozens of third-party tech firms were documented. Some of these companies were also collecting highly specific information.
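The mechanism behind this kind of sharing is ordinary web plumbing: if personal details are placed in a page's URL as query parameters, any third-party resource embedded in that page (an ad or an analytics script) can receive the full URL in the HTTP Referer header when the browser fetches it. The C sketch below only illustrates the shape of the problem; the URL scheme and parameter names are entirely hypothetical and do not reflect HealthCare.gov's actual URLs.

```c
/* referer_leak_demo.c - sketch of how data embedded in a page URL can
 * reach third parties. When a browser on such a page fetches an
 * embedded third-party resource, it typically sends the page URL,
 * query string included, in the HTTP Referer header.
 */
#include <stdio.h>

int main(void)
{
    /* Hypothetical parameters, for illustration only. */
    int age = 40;
    int smoker = 0;
    int zip = 85601;
    char page_url[256];

    snprintf(page_url, sizeof(page_url),
             "https://marketplace.example.gov/results?age=%d&smoker=%d&zip=%05d",
             age, smoker, zip);

    /* Everything in page_url, personal details included, is what an
     * embedded advertiser's server would see as the Referer. */
    printf("Referer: %s\n", page_url);
    return 0;
}
```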
Copyright infringement
In October 2013, The Weekly Standard reported that the site was violating the copyrights of SpryMedia, a UK-based technology company, by using its software with the copyright notices removed. The software was DataTables, a free and open-source plugin for jQuery designed to improve the presentation of data, dual-licensed under the GNU GPL version 2 and a modified 3-clause BSD license. HealthCare.gov subsequently rectified the license violation by providing appropriate attribution, license and copyright notices.
Reception and possible ramifications
The technical problems were heavily criticized, and Republican representatives sent President Obama a list of questions, demanding explanations for what went wrong. Some Republicans called for the Secretary of Health and Human Services, Kathleen Sebelius, to be fired, because she oversaw the planning for the site launch. Former White House Press Secretary Robert Gibbs described the technical problems as "excruciatingly embarrassing", and he said that some people should be fired. Scott Amey of the Project on Government Oversight pointed to the development cost ceiling being raised from $93.7 million to $292 million, and he asked: "Where was the contract oversight?"
American conservative commentators such as National Review writers Jonah Goldberg and Mark Steyn have argued that the website's launch was a disaster that presages larger problems throughout the entire law, with Goldberg asserting that "the Republicans who insisted that this monstrosity had to be delayed are looking just a little bit more reasonable with every passing tick." In a statement responding to President Obama's comments, the Republican National Committee (RNC) said the "tech surge" was code for a "spending surge" and would waste millions of dollars. The statement also read, "The federal bureaucracy has proven itself too slow, too bloated, too incompetent, and too outdated to manage America’s health care."
Speaker of the House John Boehner, a Republican Representative from Ohio, told reporters that throughout November "more Americans are going to lose their health care than are going to sign up." Ohio Governor John Kasich stated on NBC's Meet the Press program on October 27 that the roll-out had "got everybody just shaking their heads". He also added that it seemed highly likely that most Ohioans would pay more under the HealthCare.gov plans. Kentucky Governor Steve Beshear counter-argued that while the website did not yet work well, it soon would, since HealthCare.gov represents "the future of health care"; he also commented, "You know, the advice I would give the news media and the critics up here is take a deep breath."
The Daily Show host Jon Stewart notably lampooned the HealthCare.gov controversy during an interview with Sebelius. He jokingly challenged her to an online race: "I'm going to try and download every movie ever made, and you're going to try to sign up for Obamacare, and we'll see which happens first." She also faced grilling over the Obama administration's opposition to an extended individual mandate delay.
Sebelius later said in response to criticism, "The majority of people calling for me to resign I would say are people who I don't work for and who do not want this program to work in the first place". She also said, "I have had frequent conversations with the President and I have committed to him that my role is to get the program up and running, and we will do just that." According to University of Kansas political science professor Burdett Loomis, her popularity in her native Kansas, where she previously served as governor, spurred her to stay on.
U.S. House Minority Leader Nancy Pelosi, a Democratic Representative from California, commented about the controversy that she feels optimistic about things being fixed, saying "I have faith in technology" as well as "while there are glitches, there are solutions, as well." Democrats in Congress have accused Republican critics of HealthCare.gov of acting in bad faith. "We want the process to improve, but we're not interested in torpedoing the process," said Representative Xavier Becerra, another Democrat from California and chairman of the House Democratic Caucus.
Republican Senator Marco Rubio has drafted legislation as a result of the controversy to delay the individual mandate. The proposed legislation has drawn scattered Democratic support. Professor and author Victor Lombardi commented to Bloomberg Businessweek that the website's issues "don’t sound catastrophic", and he added that history "may judge this project as the catalyst that revolutionized the United States health-care system" such that "no one will remember a few hiccups at launch."
Although the law that decreed the creation of HealthCare.gov has been divisive and political speculation has taken place, polling by the Gallup organization around the time of the difficult roll-out still found that a majority of Americans supported keeping at least some aspects of Obamacare. Specifically, just 29% of the public favored a complete repeal. However, a joint The Washington Post and ABC News survey found that 56% of respondents considered the website's problems a harbinger of other problems with the health care measure.
On October 29, 2013, Rep. Lee Terry (R, NE-2) introduced the Exchange Information Disclosure Act (H.R. 3362; 113th Congress). The bill would require the United States Department of Health and Human Services to submit weekly reports to Congress about how many people are using HealthCare.gov and signing up for health insurance. These reports would be due every Monday until March 31, 2015, and would be available to the public. The bill would "require weekly updates on the number of unique website visitors, new accounts, and new enrollments in a qualified health plan, as well as the level of coverage," separating the data by state. The bill would also require reports on efforts to fix the broken portions of the website. The House was scheduled to vote on it on January 10, 2014.
Kathleen Sebelius resigned as Secretary of Health and Human Services on April 10, 2014. She was replaced by Sylvia Mathews Burwell on June 9.
See also
Centers for Medicare and Medicaid Services
eHealthInsurance
Health care in the United States
Health care reform in the United States
Health care reforms proposed during the Obama administration
Health insurance coverage in the United States
Health insurance marketplace
List of failed and overbudget custom software projects
The Mythical Man-Month
Provisions of the Patient Protection and Affordable Care Act
United States Department of Health and Human Services
References
Further reading
External links
HHS.gov/HealthCare
HealthCare.gov Sends Personal Data to Dozens of Tracking Websites Electronic Frontier Foundation, 2015
Computer-related introductions in 2013
CGI Group
Custom software projects
Healthcare reform in the United States
HealthCare.gov
Health insurance in the United States
Government services web portals in the United States
United States Department of Health and Human Services
2013 in American politics
2014 in American politics
Obama administration controversies
Online marketplaces of the United States |
1315350 | https://en.wikipedia.org/wiki/Meanings%20of%20minor%20planet%20names%3A%2031001%E2%80%9332000 | Meanings of minor planet names: 31001–32000 |
31001–31100
|-id=012
| 31012 Jiangshiyang || || Jiang Shiyang (born 1936) has made significant contributions to studies of pulsating variable stars and the development of astronomical instruments in China. He shared two National Science and Technology Progress Awards of China for participating in building the Xinglong telescopes and coudé and radial-velocity spectrometers. ||
|-id=015
| 31015 Boccardi || || Giovanni Boccardi, director of the Turin Observatory from 1900 until 1923 ||
|-id=020
| 31020 Skarupa || || Valerie Skarupa, American AMOS program manager ||
|-id=028
| 31028 Cerulli || || Vincenzo Cerulli, Italian astronomer ||
|-id=031
| 31031 Altiplano || || The Altiplano in the central Andes lies mostly within Bolivia and Peru, and hosts the cities of Puno, Potosi, Cuzco and La Paz. ||
|-id=032
| 31032 Scheidemann || || Heinrich Scheidemann (c. 1595–1663), a composer ||
|-id=037
| 31037 Mydon || || Mydon, a Paeonian charioteer fighting for the Trojans, was killed by Achilles near the Skamander river. ||
|-id=043
| 31043 Sturm || 1996 LT || Charles-François Sturm, 19th-century Swiss-French mathematician ||
|-id=061
| 31061 Tamao || || Tamao Nakamura, Japanese actress ||
|-id=065
| 31065 Beishizhang || || Shi-Zhang Bei, Chinese biophysicist, member of the Chinese Academy of Sciences, on the occasion of his 100th birthday ||
|-id=086
| 31086 Gehringer || || Tom Gehringer, American teacher † ||
|-id=087
| 31087 Oirase || || Oirase, the name of a gorge which runs through Towada, a city in Aomori Prefecture. ||
|-id=092
| 31092 Carolowilhelmina || || Carolowilhelmina is named in honor of the Dukes Karl and Wilhelm of Braunschweig. In 1745 they founded the alma mater of Carl-Friedrich Gauss, the Collegium Carolinum, nowadays known as the Technische Universität Braunschweig. ||
|-id=095
| 31095 Buneiou || 1997 DH || King Muryeong (462–523), known in Japanese as Buneiou, was the 25th king of Baekje, an ancient kingdom located in the southwest of the Korean peninsula. ||
|-id=097
| 31097 Nucciomula || || Alfonso Mula (born 1956), Italian art critic, poet and writer, founder of the Empedocles International Academy of Culture and Philosophical Investigation, recipient of the 1994 Premio Telemone for literature ||
|-id=098
| 31098 Frankhill || || Frank Hill, American astronomer and helioseismologist ||
|}
31101–31200
|-id=104
| 31104 Annanetrebko || || Anna Netrebko (born 1971) is an Austrian soprano of Russian origin. She has an ample and powerful voice, allowing her to sing a broad repertoire ranging from Italian opera to Mozart to Wagner. ||
|-id=105
| 31105 Oguniyamagata || || Oguni, a town situated in the southwestern part of Yamagata Prefecture, Japan ||
|-id=109
| 31109 Janpalouš || || Jan Palouš, Czech astronomer at the Astronomický Ústav (Astronomical Institute) of the Akademie věd České republiky (Czech Academy of Sciences), instrumental in negotiating the entry of the Czech Republic into the European Southern Observatory ||
|-id=110
| 31110 Clapas || || Clapàs, an Occitan word meaning "pile of rock debris", now the nickname of the Montpellier area of France ||
|-id=113
| 31113 Stull || 1997 QC || John Stull, American telescope maker, builder of the observatory at Alfred University ||
|-id=122
| 31122 Brooktaylor || 1997 SD || Brook Taylor, 17th–18th-century British mathematician. ||
|-id=124
| 31124 Slavíček || || Karel Slavíček, Jesuit missionary and scientist, was the first Czech sinologist. ||
|-id=129
| 31129 Langyatai || || Langyatai, a three-story platform with a perimeter of several kilometers, was built along the Langya Mountain and beside the Yellow Sea with rammed earth more than 2,200 years ago. ||
|-id=134
| 31134 Zurria || || Giuseppe Zurria, professor of mathematics at the University of Catania ||
|-id=139
| 31139 Garnavich || || Peter M. Garnavich, American observational astrophysicist and associate professor at the University of Notre Dame, Indiana ||
|-id=147
| 31147 Miriquidi || || A synonym for the Erzgebirge, a 10th-century Old Saxon word meaning "an impenetrable great dark forest" ||
|-id=151
| 31151 Sajichugaku || || Saji chugaku is a junior high school in Saji with an astronomical observatory. ||
|-id=152
| 31152 Daishinsai || || The Great East Japan earthquake (Higashi nihon daishinsai; the Tōhoku earthquake and tsunami) caused widespread destruction in eastern Japan and killed about 20,000 people in March 2011. ||
|-id=153
| 31153 Enricaparri || || Enrichetta Parri (born 1935) is a mathematician who graduated from the University of Florence in 1965. She is the wife of the first discoverer. ||
|-id=174
| 31174 Rozelot || || Jean Pierre Rozelot (born 1942) is a solar astronomer who has worked at Pic du Midi Observatory and at CERGA, which he directed between 1982 and 1988. He has been active in teaching astronomy and in popularizing it through support for various societies and astronomy clubs in the southeast of France. ||
|-id=175
| 31175 Erikafuchs || || Erika Fuchs (1906–2005), a translator of Disney stories ||
|-id=179
| 31179 Gongju || || Gongju, a city located in South Chungcheong province of Korea ||
|-id=189
| 31189 Tricomi || || Francesco Giacomo Tricomi, 20th-century Italian mathematician ||
|-id=190
| 31190 Toussaint || || Roberta Marie Toussaint, American experimental physicist ||
|-id=192
| 31192 Aigoual || || Mont Aigoual, highest (1567 m) mountain of the Cévennes of southern France ||
|-id=196
| 31196 Yulong || || Yulong (meaning "jade dragon") is the only Naxi autonomous county in China ||
|}
31201–31300
|-
| 31201 Michellegrand || || Michel Legrand (1932–2019) was a prolific French composer, arranger, conductor and jazz pianist. He composed the music for many well-known movies such as The Umbrellas of Cherbourg and The Young Girls of Rochefort. He won three Oscars and many other awards worldwide. ||
|-id=203
| 31203 Hersman || || Chris Becker Hersman, American spacecraft systems engineer for the New Horizons Pluto Kuiper Belt mission ||
|-id=230
| 31230 Tuyouyou || || Tu Youyou (born 1930), a Chinese pharmacologist and Nobel laureate. ||
|-id=231
| 31231 Uthmann || 1998 CA || Barbara Uthmann, 16th-century German businesswoman, said to have introduced the art of lace-making in the Erzgebirge Mountains of Saxony ||
|-id=232
| 31232 Slavonice || 1998 CF || Slavonice, Czech Republic ||
|-id=234
| 31234 Bea || || Beata Tomsza (born 1971) is a Polish nurse, a medical rescuer, and also a teacher at medical schools in Sosnowiec and Tychy. Bea has been a pen pal of the first discoverer for years. ||
|-id=238
| 31238 Kroměříž || || Kroměříž, Moravia, Czech Republic, whose gardens and castle are a UNESCO World Heritage Site † ||
|-id=239
| 31239 Michaeljames || || Michael James, American high-school teacher of English ||
|-id=240
| 31240 Katrianne || || Katrin Susanne Lehmann, German teacher of physics and astronomy, and wife of the discoverer ||
|-id=249
| 31249 Renéefleming || || Renée Fleming (born 1959) is a well-known American lyric soprano who has made her mark with roles in classical operas by Richard Strauss, Mozart, Handel, Verdi and Dvořák, as well as more modern pieces, such as "Le Temps l'horloge" by Henri Dutilleux. Name suggested by Natalie Dessay. ||
|-id=266
| 31266 Tournefort || || Joseph Pitton de Tournefort (1656–1708), French botanist ||
|-id=267
| 31267 Kuldiga || || Kuldīga, Latvia ||
|-id=268
| 31268 Welty || 1998 FA || Sandra Welty, American high-school teacher of English ||
|-id=271
| 31271 Nallino || || Carlo Alfonso Nallino (1872–1938), Italian orientalist ||
|-id=272
| 31272 Makosinski || || Ann Stasia Makosinski (born 1997) was awarded second place in the 2014 Intel International Science and Engineering Fair for her electrical and mechanical engineering project. ||
|-id=276
| 31276 Calvinrieder || || Calvin James Rieder (born 1997) was awarded second place in the 2014 Intel International Science and Engineering Fair for his environmental management project. ||
|-id=281
| 31281 Stothers || || Duncan Bayard Stothers (born 1997) was awarded first place in the 2014 Intel International Science and Engineering Fair for his electrical and mechanical engineering project. ||
|-id=282
| 31282 Nicoleticea || || Nicole Sabina Ticea (born 1998) was awarded second place in the 2014 Intel International Science and Engineering Fair for her medicine and health sciences project. ||
|-id=283
| 31283 Wanruomeng || || Wan Ruomeng (born 1996) was awarded first place in the 2014 Intel International Science and Engineering Fair for her plant sciences project. ||
|-id=291
| 31291 Yaoyue || || Yao Yue (born 1997) was awarded best of category and first place in the 2014 Intel International Science and Engineering Fair for his computer science project, and also received the European Union Contest for Young Scientists Award. ||
|-id=298
| 31298 Chantaihei || || Chan Tai Hei (born 1996) was awarded best of category and first place in the 2014 Intel International Science and Engineering Fair for his chemistry team project, and also received the Philip V. Streich Memorial Award. ||
|}
31301–31400
|-id=312
| 31312 Fangerhai || || Fang Er Hai (born 1996) was awarded best of category and first place in the 2014 Intel International Science and Engineering Fair for his chemistry team project, and also received the Philip V. Streich Memorial Award. ||
|-id=313
| 31313 Kanwingyi || || Kan Wing Yi (born 1998) was awarded second place in the 2014 Intel International Science and Engineering Fair for her environmental management project. ||
|-id=319
| 31319 Vespucci || || Amerigo Vespucci (1454–1512), an Italian explorer, navigator and cartographer. In 1507 the geographer Martin Waldseemüller published the first map of the Mundus Novus ("New World"), associating the name America with Amerigo Vespucci ||
|-id=323
| 31323 Lysá hora || || Lysá hora, highest (1323 m) mountain of the Beskids (Beskydy) mountain range, the Czech Republic ||
|-id=324
| 31324 Jiřímrázek || || Jiří Mrázek, 20th-century Czech geophysicist, TV and radio popularizer of astronautics, astronomy, computer science and related subjects ||
|-id=336
| 31336 Chenyuhsin || || Chen Yu-Hsin (born 1996) was awarded best of category and first place in the 2014 Intel International Science and Engineering Fair for her earth science project, and also received the European Union Contest for Young Scientists Award. ||
|-id=338
| 31338 Lipperhey || || Hans Lipperhey (1570–1619), Dutch lensmaker, inventor of Dutch perspective glass, and first to design and seek a patent for a practical telescope ||
|-id=344
| 31344 Agathon || || Agathon, son of Priam and prince of Troy, is mentioned in Homer's Iliad as being one of the last surviving princes during the Trojan War. ||
|-id=349
| 31349 Uria-Monzon || 1998 SV || Béatrice Uria-Monzon (born 1963) is a French mezzo-soprano. She studied music and singing at the University of Bordeaux and at the lyric art school of the Paris opera. She has a broad repertoire but is noted for her many interpretations of the role of Carmen. ||
|-id=360
| 31360 Huangyihsuan || || Huang Yi-Hsuan (born 1996) was awarded best of category and first place in the 2014 Intel International Science and Engineering Fair for his plant sciences project, and also received the Dudley R. Herschbach SIYSS Award. ||
|-id=363
| 31363 Shulga || || Valery Mikhailovich Shulga, Ukrainian radio astronomer ||
|-id=374
| 31374 Hruskova || || Aranka Hruskova (born 1995) was awarded second place in the 2014 Intel International Science and Engineering Fair for her mathematical sciences project. ||
|-id=375
| 31375 Krystufek || || Robin Krystufek (born 1995) was awarded second place in the 2014 Intel International Science and Engineering Fair for his biochemistry project. ||
|-id=376
| 31376 Leobauersfeld || || Leonard Bauersfeld (born 1997) was awarded second place in the 2014 Intel International Science and Engineering Fair for his physics and astronomy team project. ||
|-id=377
| 31377 Kleinwort || || Lennart Julian Kleinwort (born 1998) was awarded best of category and first place in the 2014 Intel International Science and Engineering Fair for his mathematical sciences project, and also received the Intel Foundation Young Scientist Award. ||
|-id=378
| 31378 Neidinger || || Leonard Bauersfeld (born 1997) was awarded second place in the 2014 Intel International Science and Engineering Fair for his physics and astronomy team project. ||
|-id=380
| 31380 Hegyesi || || Hegyesi Donat Sandor (born 1995) was awarded second place in the 2014 Intel International Science and Engineering Fair for his electrical and mechanical engineering project. ||
|-id=387
| 31387 Lehoucq || || Roland Lehoucq (born 1965) is a French astrophysicist working on cosmic topology. He is also very active in public outreach and is well known for his books on science-fiction novels and movies, such as Making Science with Star Wars. Since 2012, he has been the president of the annual sci-fi convention "les Utopiales". ||
|-id=389
| 31389 Alexkaplan || || Alexandre Kaplan (1901–1973) was an electrical engineer with a passion for astronautics and astronomy. A member of the groupe de Lorraine of the Société Astronomique de France, he built an observatory for the use of astronomy clubs around the city of Nancy. ||
|-id=399
| 31399 Susorney || || Hannah Susorney (born 1991) is a former postdoctoral researcher at the University of British Columbia and a Marie Skłodowska-Curie Fellow at the University of Bristol. She studies the topography of asteroids and the role of impact cratering in the surface evolution of planetary bodies. ||
|-id=400
| 31400 Dakshdua || || Daksh Dua (born 1997) was awarded best of category and first place in the 2014 Intel International Science and Engineering Fair for his animal sciences team project, and also received the Intel Foundation Cultural and Scientific Visit to China Award. ||
|}
31401–31500
|-id=402
| 31402 Negishi || 1999 AR || Hiroyuki Negishi (born 1964), a Japanese amateur astronomer. ||
|-id=414
| 31414 Rotarysusa || || Rotary Club, Val Susa, Italy † ||
|-id=416
| 31416 Peteworden || || Pete Worden (born 1949), director of NASA's Ames Research Center. He was influential in many projects like the Clementine space mission, and indirectly in programs like ODAS, which allowed this asteroid to be discovered. An innovator and space enthusiast, he is a man of vision ||
|-id=418
| 31418 Sosaoyarzabal || || Andrea Sosa Oyarzabal (born 1968) is a professor of the Centro Universitario Regional del Este at the Universidad de la Republica de Uruguay. She specializes in the study of the dynamical and physical properties of the minor bodies of the Solar System. ||
|-id=426
| 31426 Davidlouapre || || David Louapre (born 1978) is a French physicist whose thesis was on loop quantum gravity. Now working in industry, he is a very well-known French scientific YouTuber. ||
|-id=429
| 31429 Diegoazzaro || || Diego Azzaro (1925–2014) was an Italian amateur astronomer. A popularizer of astronomy, he was president of the association ASTRIS and Astronomical Observatory of Cervara di Roma. ||
|-id=431
| 31431 Cabibbo || || Nicola Cabibbo, Italian physicist ||
|-id=435
| 31435 Benhauck || || Ben Hauck (born 1978) has been an amateur astronomer for most of his life and is heavily involved in astronomy education and outreach. He is a passionate activist in the fight against climate change. ||
|-id=437
| 31437 Verma || || Abhishek Verma (born 1999) was awarded best of category and first place in the 2014 Intel International Science and Engineering Fair for his animal sciences team project. ||
|-id=438
| 31438 Yasuhitohayashi || || Yasuhito Hayashi (born 1996) was awarded second place in the 2014 Intel International Science and Engineering Fair for his animal sciences project. ||
|-id=439
| 31439 Mieyamanaka || || Mie Yamanaka (born 1996) was awarded second place in the 2014 Intel International Science and Engineering Fair for her energy and transportation project. ||
|-id=442
| 31442 Stark || || Lawrence W. Stark, American professor emeritus of physiological optics and engineering ||
|-id=450
| 31450 Stevepreston || || US occultation observer and skilled mathematician Steve Preston (born 1956) introduced asteroidal occultation predictions of unprecedented accuracy in 2001, significantly increasing observation rates worldwide. He was elected president of the International Occultation Timing Association in 2014. ||
|-id=451
| 31451 Joenickell || || Joe Nickell, the senior research fellow of the Committee for Skeptical Inquiry ||
|-id=453
| 31453 Arnaudthiry || || Arnaud Thiry (born 1988), a French photographer and science popularizer, mainly known for his French-language YouTube channel "Astronogeek". His channel features videos debunking false science and UFO-related stories. ||
|-id=458
| 31458 Delrosso || || Renzo Del Rosso (born 1957), an Italian amateur astronomer, astrophotographer, lecturer and writer of astronomical software ||
|-id=460
| 31460 Jongsowfei || || Faye Jong-Sow Fei (born 1998) was awarded best of category and first place in the 2014 Intel International Science and Engineering Fair for her environmental management project, and also received the European Union Contest for Young Scientists Award. ||
|-id=461
| 31461 Shannonlee || || Shannon Xinjing Lee (born 1996) was awarded best of category and first place in the 2014 Intel International Science and Engineering Fair for her energy and transportation project. ||
|-id=462
| 31462 Brchnelova || || Michaela Brchnelova (born 1996) was awarded first place in the 2014 Intel International Science and Engineering Fair for her physics and astronomy project. ||
|-id=463
| 31463 Michalgeci || || Michal Geci (born 1995) was awarded second place in the 2014 Intel International Science and Engineering Fair for his physics and astronomy team project. ||
|-id=464
| 31464 Liscinsky || || Martin Liscinsky (born 1995) was awarded second place in the 2014 Intel International Science and Engineering Fair for his physics and astronomy team project. ||
|-id=465
| 31465 Piyasiri || || Namal Udara Piyasiri (born 1996) was awarded second place in the 2014 Intel International Science and Engineering Fair for his electrical and mechanical engineering project. ||
|-id=466
| 31466 Abualhassan || || Hayat Abdulredha Abu Alhassan (born 1997) was awarded second place in the 2014 Intel International Science and Engineering Fair for her environmental sciences team project. ||
|-id=468
| 31468 Albastaki || || Hayat Abdulredha Abu Alhassan (born 1997) was awarded second place in the 2014 Intel International Science and Engineering Fair for her environmental sciences team project. ||
|-id=469
| 31469 Aizawa || || Ken Aizawa (born 1996) was awarded best of category and first place in the 2014 Intel International Science and Engineering Fair for his biochemistry project. ||
|-id=470
| 31470 Alagappan || || Perry Alagappan (born 1997) was awarded best of category and first place in the 2014 Intel International Science and Engineering Fair for his environmental sciences project. ||
|-id=471
| 31471 Sallyalbright || || Sally Albright (born 1999) was awarded first place in the 2014 Intel International Science and Engineering Fair for her environmental management team project. ||
|-id=473
| 31473 Guangning || || Guangning An (born 1996) was awarded second place in the 2014 Intel International Science and Engineering Fair for his biochemistry project. ||
|-id=474
| 31474 Advaithanand || || Advaith Anand (born 1997) was awarded second place in the 2014 Intel International Science and Engineering Fair for his materials and bioengineering project. ||
|-id=475
| 31475 Robbacchus || || Robert M. Bacchus (born 1996) was awarded second place in the 2014 Intel International Science and Engineering Fair for his medicine and health sciences project. ||
|-id=476
| 31476 Bocconcelli || || Carlo Bocconcelli (born 1996) was awarded second place in the 2014 Intel International Science and Engineering Fair for his cellular and molecular biology project. ||
|-id=477
| 31477 Meenakshi || || Meenakshi Bose (born 1997) was awarded second place in the 2014 Intel International Science and Engineering Fair for her medicine and health sciences project. ||
|-id=479
| 31479 Botello || || Christopher Rafael Botello (born 1998) was awarded second place in the 2014 Intel International Science and Engineering Fair for his energy and transportation project. ||
|-id=480
| 31480 Jonahbutler || || Jonah Zachariah Butler (born 1997) was awarded second place in the 2014 Intel International Science and Engineering Fair for his energy and transportation project. ||
|-id=482
| 31482 Caddell || || John Chapman Alexander Caddell (born 1998) was awarded best of category and first place in the 2014 Intel International Science and Engineering Fair for his physics and astronomy project. ||
|-id=483
| 31483 Caulfield || || Sarayu Caulfield (born 1997) was awarded second place in the 2014 Intel International Science and Engineering Fair for her behavioral and social sciences team project. ||
|-id=487
| 31487 Parthchopra || || Parth Chopra (born 1996) was awarded second place in the 2014 Intel International Science and Engineering Fair for his computer science project. ||
|-id=489
| 31489 Matthewchun || || Matthew Leong Chun (born 1996) was awarded second place in the 2014 Intel International Science and Engineering Fair for his chemistry project. ||
|-id=490
| 31490 Swapnavdeka || || Swapnav Deka (born 1997) was awarded second place in the 2014 Intel International Science and Engineering Fair for his microbiology project. ||
|-id=491
| 31491 Demessie || || Bluye DeMessie (born 1997) was awarded second place in the 2014 Intel International Science and Engineering Fair for his environmental management project. ||
|-id=492
| 31492 Jennarose || || Jenna Rose DiRito (born 1996) was awarded second place in the 2014 Intel International Science and Engineering Fair for her medicine and health sciences team project. ||
|-id=493
| 31493 Fernando-Peiris || || Achal James Fernando-Peiris (born 1997) was awarded first place in the 2014 Intel International Science and Engineering Fair for his physics and astronomy project. He also received the Innovation Exploration Award. ||
|-id=494
| 31494 Emmafreedman || || Emma R. Freedman (born 1999) was awarded second place in the 2014 Intel International Science and Engineering Fair for her environmental management project. ||
|-id=495
| 31495 Sarahgalvin || || Sarah Nicole Galvin (born 1996) was awarded best of category and first place in the 2014 Intel International Science and Engineering Fair for her electrical and mechanical engineering project. ||
|-id=496
| 31496 Glowacz || || Julian Stefan Glowacz (born 1998) was awarded second place in the 2014 Intel International Science and Engineering Fair for his plant sciences team project. ||
|-id=500
| 31500 Grutzik || || Petra Luna Grutzik (born 1996) was awarded second place in the 2014 Intel International Science and Engineering Fair for her behavioral and social sciences project. ||
|}
31501–31600
|-
| 31501 Williamhang || || William C. Hang (born 1997) was awarded second place in the 2014 Intel International Science and Engineering Fair for his computer science project. ||
|-id=502
| 31502 Hellerstein || || Joshua Kopel Hellerstein (born 1996) was awarded first place in the 2014 Intel International Science and Engineering Fair for his electrical and mechanical engineering project. ||
|-id=503
| 31503 Jessicahong || || Jessica Hong (born 1997) was awarded second place in the 2014 Intel International Science and Engineering Fair for her chemistry team project. ||
|-id=504
| 31504 Jaisonjain || || Jaison Jain (born 1998) was awarded second place in the 2014 Intel International Science and Engineering Fair for his plant sciences team project. ||
|-id=507
| 31507 Williamjin || || William Huang Jin (born 1995) was awarded second place in the 2014 Intel International Science and Engineering Fair for his microbiology project. ||
|-id=508
| 31508 Kanevsky || || Ariel Benjamin Kanevsky (born 1997) was awarded first place in the 2014 Intel International Science and Engineering Fair for his computer science project. ||
|-id=510
| 31510 Saumya || || Ariel Benjamin Kanevsky (born 1997) was awarded first place in the 2014 Intel International Science and Engineering Fair for his computer science project. ||
|-id=511
| 31511 Jessicakim || || Jessica Kim (born 1997) was awarded first place in the 2014 Intel International Science and Engineering Fair for her energy and transportation team project. ||
|-id=512
| 31512 Koyyalagunta || || Divya Koyyalagunta (born 1995) was awarded first place in the 2014 Intel International Science and Engineering Fair for her behavioral and social sciences project. ||
|-id=513
| 31513 Lafazan || || Justin Chase Lafazan (born 1996) was awarded second place in the 2014 Intel International Science and Engineering Fair for his behavioral and social sciences project. ||
|-id=516
| 31516 Leibowitz || || Michal Leibowitz (born 1996) was awarded second place in the 2014 Intel International Science and Engineering Fair for her environmental management team project. ||
|-id=517
| 31517 Mahoui || || Iman Mahoui (born 1998) was awarded second place in the 2014 Intel International Science and Engineering Fair for her cellular and molecular biology project. ||
|-id=519
| 31519 Mimamarquez || || Michelle Marie Marquez (born 1999) was awarded best of category and first place in the 2014 Intel International Science and Engineering Fair for her behavioral and social sciences project. ||
|-id=522
| 31522 McCutchen || || Jonathan James McCutchen (born 1999) was awarded first place in the 2014 Intel International Science and Engineering Fair for his environmental sciences project. ||
|-id=523
| 31523 Jessemichel || || Jesse Martin Michel (born 1997) was awarded second place in the 2014 Intel International Science and Engineering Fair for his mathematical sciences project. ||
|-id=525
| 31525 Nickmiller || || Nicholas Paul Miller (born 1996) was awarded second place in the 2014 Intel International Science and Engineering Fair for his microbiology project. ||
|-id=531
| 31531 ARRL || || American Radio Relay League, the largest membership organization of radio amateurs in the United States ||
|-id=555
| 31555 Wheeler || || John Archibald Wheeler, American theoretical physicist ||
|-id=556
| 31556 Shatner || || William Shatner (born 1931), a Canadian actor. ||
|-id=557
| 31557 Holleybakich || || Holley Bakich (born 1969) is an artist who has created graphics for astronomy books, websites, and magazine articles. She also pencils, inks, and colors the popular web comic Outer Space Pals. She has earned degrees in Fine Art and Interior Design and incorporates astronomical subjects in her work whenever possible. ||
|-id=559
| 31559 Alonmillet || || Alon Millet (born 1998) was awarded second place in the 2014 Intel International Science and Engineering Fair for his plant sciences project. ||
|-id=569
| 31569 Adriansonka || || Adrian Sonka (born 1977) is a Romanian astronomer at the Astronomical Institute (Bucharest) whose research contributions include astrometry and photometry of near-Earth objects, with dedication toward communicating astronomy to the public in Romania. ||
|-id=573
| 31573 Mohanty || || Ahneesh Jayant Mohanty (born 1997) was awarded second place in the 2014 Intel International Science and Engineering Fair for his medicine and health sciences project. ||
|-id=574
| 31574 Moshova || || Andrew Moshova (born 1997) was awarded first place in the 2014 Intel International Science and Engineering Fair for his energy and transportation team project. ||
|-id=575
| 31575 Nikhilmurthy || || Nikhil Murthy (born 1999) was awarded second place in the 2014 Intel International Science and Engineering Fair for his chemistry project. ||
|-id=576
| 31576 Nandigala || || Vipul Nandigala (born 1997) was awarded second place in the 2014 Intel International Science and Engineering Fair for his physics and astronomy project. ||
|-id=580
| 31580 Bridgetoei || || Bridget Ann Oei (born 1995) was awarded second place in the 2014 Intel International Science and Engineering Fair for her environmental sciences project. ||
|-id=581
| 31581 Onnink || || Carly Onnink (born 1998) was awarded second place in the 2014 Intel International Science and Engineering Fair for her animal sciences team project. ||
|-id=582
| 31582 Miraeparker || || Mirae Leigh Parker (born 1995) was awarded second place in the 2014 Intel International Science and Engineering Fair for her electrical and mechanical engineering team project. ||
|-id=584
| 31584 Emaparker || || Ema Linnea Parker (born 1998) was awarded second place in the 2014 Intel International Science and Engineering Fair for her electrical and mechanical engineering team project. ||
|-id=588
| 31588 Harrypaul || || Harry Paul (born 1996) was awarded best of category and first place in the 2014 Intel International Science and Engineering Fair for his materials and bioengineering project. ||
|-id=592
| 31592 Jacobplaut || || Jacob Mitchell Plaut (born 1996) was awarded second place in the 2014 Intel International Science and Engineering Fair for his environmental management team project. ||
|-id=593
| 31593 Romapradhan || || Roma Vivek Pradhan (born 1996) was awarded second place in the 2014 Intel International Science and Engineering Fair for her computer science project. ||
|-id=594
| 31594 Drewprevost || || Drew Prevost (born 1998) was awarded second place in the 2014 Intel International Science and Engineering Fair for his electrical and mechanical engineering project. ||
|-id=595
| 31595 Noahpritt || || Noah Christian Pritt (born 1996) was awarded second place in the 2014 Intel International Science and Engineering Fair for his computer science project. ||
|-id=596
| 31596 Ragavender || || Ritesh Narayan Ragavender (born 1996) was awarded first place in the 2014 Intel International Science and Engineering Fair for his mathematical sciences project. ||
|-id=597
| 31597 Allisonmarie || || Allison Marie Raines (born 1998) was awarded first place in the 2014 Intel International Science and Engineering Fair for her environmental management team project. ||
|-id=598
| 31598 Danielrudin || || Daniel Rudin (born 1996) was awarded second place in the 2014 Intel International Science and Engineering Fair for his environmental management team project. ||
|-id=599
| 31599 Chloesherry || || Chloe Sherry (born 1996) was awarded second place in the 2014 Intel International Science and Engineering Fair for her animal sciences project. ||
|-id=600
| 31600 Somasundaram || || Sriram Somasundaram (born 1997) was awarded second place in the 2014 Intel International Science and Engineering Fair for his biochemistry project. ||
|}
31601–31700
|-id=605
| 31605 Braschi || || Nicoletta Braschi, Italian actress ||
|-id=617
| 31617 Meeraradha || || Meera Radha Srinivasan (born 1997) was awarded second place in the 2014 Intel International Science and Engineering Fair for her environmental sciences project. ||
|-id=618
| 31618 Tharakan || || Serena Margaret Tharakan (born 1996) was awarded second place in the 2014 Intel International Science and Engineering Fair for her medicine and health sciences team project. ||
|-id=619
| 31619 Jodietinker || || Jodie Leigh Tinker (born 1996) was awarded first place in the 2014 Intel International Science and Engineering Fair for her cellular and molecular biology project. ||
|-id=627
| 31627 Ulmera || || Alexandra Ulmer (born 1996) was awarded second place in the 2014 Intel International Science and Engineering Fair for her behavioral and social sciences team project. ||
|-id=628
| 31628 Vorperian || || Sevahn Kayaneh Vorperian (born 1996) was awarded second place in the 2014 Intel International Science and Engineering Fair for her biochemistry project. ||
|-id=630
| 31630 Jennywang || || Jenny Lynn Wang (born 1997) was awarded second place in the 2014 Intel International Science and Engineering Fair for her computer science project. ||
|-id=631
| 31631 Abbywilliams || || Abigail Anne Williams (born 1997) was awarded second place in the 2014 Intel International Science and Engineering Fair for her animal sciences team project. ||
|-id=632
| 31632 Stephaying || || Stephanie Ying (born 1996) was awarded second place in the 2014 Intel International Science and Engineering Fair for her microbiology project. ||
|-id=633
| 31633 Almonte || || Carolyn Marie Almonte (born 2003), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for her medicine and health sciences project. ||
|-id=635
| 31635 Anandarao || || Pranav Kumar Anandarao (born 2001), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for his energy project. ||
|-id=637
| 31637 Bhimaraju || || Manasa Hari Bhimaraju (born 2003), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for her electrical and mechanical engineering project. ||
|-id=639
| 31639 Bodoni || || Evelyn Ariana Bodoni (born 2002), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for her behavioral and social sciences project. ||
|-id=640
| 31640 Johncaven || || John Blake Caven (born 2003), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for his computer science and software engineering project. ||
|-id=641
| 31641 Cevasco || || Hannah Olivia Cevasco (born 2000), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for her medicine and health sciences project. ||
|-id=642
| 31642 Soyounchoi || || Soyoun Choi (born 1999), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for her behavioral and social sciences project. ||
|-id=643
| 31643 Natashachugh || || Natasha Chugh (born 2001), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for her electrical and mechanical engineering project. ||
|-id=648
| 31648 Pedrosada || || Pedro Antonio Valdes Sada (born 1973) is a professor of physics and mathematics at the Universidad de Monterrey (Mexico), whose research includes astrometry and photometry of asteroids as well as stellar occultations. ||
|-id=650
| 31650 Frýdek-Místek || 1999 HW || Frýdek-Místek, twin cities on the Silesia-Moravia border, Czech Republic, the discoverer's childhood home town ||
|-id=655
| 31655 Averyclowes || || Avery Parker Clowes (born 2002), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for his electrical and mechanical engineering project. ||
|-id=660
| 31660 Maximiliandu || || Maximilian Junqi Du (born 2002), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for his chemistry project. ||
|-id=661
| 31661 Eggebraaten || || Andrew John Eggebraaten (born 2001), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for his electrical and mechanical engineering project. ||
|-id=664
| 31664 Randiiwessen || || Randii Wessen, American program engineer at JPL ||
|-id=665
| 31665 Veblen || || Oswald Veblen, early 20th-century American mathematician ||
|-id=671
| 31671 Masatoshi || || Masatoshi Nakamura, Japanese actor and singer ||
|-id=677
| 31677 Audreyglende || || Audrey Glende (born 2003), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for her microbiology and biochemistry project. ||
|-id=679
| 31679 Glenngrimmett || || Glenn Manuel Grimmett (born 2001), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for his animal science project. ||
|-id=680
| 31680 Josephuitt || || Joseph Arthur Huitt (born 2000), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for his animal science project. ||
|-id=682
| 31682 Kinsey || || Elizabeth Grace Kinsey (born 2001), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for her environmental and earth sciences project. ||
|-id=684
| 31684 Lindsay || || Mikayla Ann Lindsay (born 2002), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for her physics project. ||
|-id=688
| 31688 Bryantliu || || Bryant Michael Liu (born 2001), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for his plant science project. ||
|-id=689
| 31689 Sebmellen || || Sebastian Lucas Mellen (born 2001), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for his computer science and software engineering project. ||
|-id=690
| 31690 Nayamenezes || || Naya Kiren Menezes (born 2001), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for her physics project. ||
|-id=696
| 31696 Rohitmital || || Rohit Rahul Mital (born 2002), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for his environmental and earth sciences project. ||
|-id=697
| 31697 Isaiahoneal || || Isaiah Logan O'Neal (born 2001), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for his plant science project. ||
|-id=698
| 31698 Nikolaiortiz || || Nikolai Victorovitch Ortiz (born 2002), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for his environmental and earth sciences project. He attends the Seashore Middle Academy in Corpus Christi, Texas. ||
|-id=700
| 31700 Naperez || || Nicholas Antonio Perez (born 2001), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for his materials and bioengineering project. ||
|}
31701–31800
|-
| 31701 Ragula || || Kanishka Ragula (born 2001), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for his energy project. ||
|-id=706
| 31706 Singhani || || Anish Singhani (born 2002), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for his electrical and mechanical engineering project. ||
|-id=711
| 31711 Suresh || || Sriyaa Suresh (born 2001), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for her animal science project. ||
|-id=716
| 31716 Matoonder || || Madison Alise Toonder (born 2001), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for her environmental and earth sciences project. ||
|-id=719
| 31719 Davidyue || || David Yue (born 2001), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for his computer science and software engineering project. ||
|-id=725
| 31725 Anushazaman || || Anusha Zaman (born 2001), a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students, for her cellular/molecular biology and biochemistry project. ||
|-id=727
| 31727 Amandalewis || || Amanda Lewis, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=728
| 31728 Rhondah || || Rhonda Hendrickson, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=729
| 31729 Scharmen || || Chris Scharmen, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=731
| 31731 Johnwiley || || John Wiley, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=737
| 31737 Carriecoombs || || Carrie Coombs, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=744
| 31744 Shimshock || || Nicole Shimshock, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=767
| 31767 Jennimartin || || Jennifer Martin, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=770
| 31770 Melivanhouten || || Melissa Van Houten, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=771
| 31771 Kirstenwright || || Kirsten Wright, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=772
| 31772 Asztalos || || Melissa Asztalos, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=774
| 31774 Debralas || || Amy Winegar, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=777
| 31777 Amywinegar || || Amy Winegar, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=778
| 31778 Richardschnur || || Richard Schnur, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=787
| 31787 Darcylawson || || Darcy Lawson, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|}
31801–31900
|-id=807
| 31807 Shaunalennon || || Shauna Lennon, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=823
| 31823 Viète || || François Viète, 16th-century French lawyer and mathematician, inventor of the modern algebraic notation ||
|-id=824
| 31824 Elatus || || Elatus, mythological centaur, killed during a battle with Hercules by a poisoned arrow that passed through his arm and continued to wound Chiron in the knee ||
|-id=836
| 31836 Poshedly || || Kenneth T. Poshedly (born 1949) is the tireless publisher and editor-in-chief of the Journal of the Association of Lunar and Planetary Observers. In 2010 he won the Peggy Haas Service Award for his work with that organization. ||
|-id=838
| 31838 Angelarob || || Angela Robinson, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=839
| 31839 Depinto || || Alyssa DePinto, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=840
| 31840 Normnegus || || Norm Negus, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=844
| 31844 Mattwill || || Matthew L. Will (born 1957) is an amateur astronomer and longtime secretary and treasurer of the Association of the Lunar and Planetary Observers. In 2003 he was presented with the Peggy Haas Service Award for his work with that organization. ||
|-id=846
| 31846 Elainegillum || || Elaine Gillum, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=848
| 31848 Mikemattei || || Michael Mattei (born 1940) worked at Harvard College Observatory's Agassiz Station as a young man before moving into optics with various institutions and companies, culminating in his work with M.I.T. Lincoln Labs on projects ranging from microscope optics to space telescopes. ||
|-id=853
| 31853 Rahulmital || || Rahul Mital, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=854
| 31854 Darshanashah || || Darshana Shah, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=858
| 31858 Raykanipe || || Raymond Kanipe, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=859
| 31859 Zemaitis || || Valerie Zemaitis, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=861
| 31861 Darleshimizu || || Darlene Shimizu, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=862
| 31862 Garfinkle || || Robert A. Garfinkle (born 1947) is a Fellow of the Royal Astronomical Society and author of best-selling observational astronomy books and many articles. He is also the British Astronomical Association Lunar Section Historian and the Association of Lunar and Planetary Observers Book Review Editor. ||
|-id=863
| 31863 Hazelcoffman || || Hazel Coffman, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=872
| 31872 Terkán || || Lajos Terkán, early 20th-century member of the staff of the Konkoly Obszervatórium (Konkoly Observatory), who proposed and initiated the photographic observation of comets and minor planets there ||
|-id=873
| 31873 Toliou || || Athanasia (Sissy) Toliou (born 1988) is a postdoctoral researcher at the Luleå University of Technology (Sweden) whose studies include the orbital dynamics of near-Earth objects and primordial asteroids and comets. ||
|-id=875
| 31875 Saksena || || Hitu Saksena, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=876
| 31876 Jenkens || || Robert Jenkens (born 1962), Deputy Project Manager for the OSIRIS-REx Asteroid Sample Return Mission. ||
|-id=877
| 31877 Davideverett || || David Everett (born 1964), Project Systems Engineer for the OSIRIS-REx Asteroid Sample Return Mission. ||
|-id=883
| 31883 Susanstern || || Susan Stern, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=885
| 31885 Greggweger || || Gregg Weger, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=886
| 31886 Verlisak || || Verlisa Kennedy, a mentor of a finalist in the 2015 Broadcom MASTERS, a math and science competition for middle school students. ||
|-id=888
| 31888 Polizzi || || Cristian David Polizzi (born 1996) was awarded second place in the 2015 Intel International Science and Engineering Fair for his energy team project. ||
|-id=893
| 31893 Rodriguezalvarez || || Agustin Rodriguez Alvarez (born 1996) was awarded second place in the 2015 Intel International Science and Engineering Fair for his energy team project. ||
|-id=896
| 31896 Gaydarov || || Petar Milkov Gaydarov (born 1996) was awarded second place in the 2015 Intel International Science and Engineering Fair for his math project. ||
|-id=897
| 31897 Brooksdasilva || || Candace Rose Brooks-Da Silva (born 1999) was awarded second place in the 2015 Intel International Science and Engineering Fair for her engineering mechanics project. ||
|-id=899
| 31899 Adityamohan || || Aditya Anand Mohan (born 1997) was awarded first place in the 2015 Intel International Science and Engineering Fair for his biomedical and health sciences project. ||
|}
31901–32000
|-
| 31901 Amitscheer || || Amit Scheer (born 1998) was awarded second place in the 2015 Intel International Science and Engineering Fair for his biomedical and health sciences project. ||
|-id=902
| 31902 Raymondwang || || Raymond Wang (born 1998) was awarded best of category and first place in the 2015 Intel ISEF for his engineering mechanics project. He also received the Gordon E. Moore Award and the Cultural and Scientific Visit to China Award. ||
|-id=903
| 31903 Euniceyou || || Eunice Linh You (born 1996) was awarded first place in the 2015 Intel International Science and Engineering Fair for her microbiology project. ||
|-id=904
| 31904 Haoruochen || || Hao Ruochen (born 1997) was awarded best of category and first place in the 2015 Intel ISEF for his physics and astronomy project. He also received the London International Youth Science Forum's Philip V. Streich Memorial Award. ||
|-id=905
| 31905 Likinpong || || Li Kin Pong Michael (born 1997) was awarded second place in the 2015 Intel International Science and Engineering Fair for his materials science team project. ||
|-id=907
| 31907 Wongsumming || || Wong Sum Ming Simon (born 1997) was awarded second place in the 2015 Intel International Science and Engineering Fair for his materials science team project. ||
|-id=909
| 31909 Chenweitung || || Chen Wei-Tung (born 1998) was awarded first place in the 2015 Intel International Science and Engineering Fair for his embedded systems project. ||
|-id=910
| 31910 Moustafa || || Yasmine Yehya Moustafa (born 1998) was awarded first place in the 2015 Intel International Science and Engineering Fair for her earth and environmental sciences project. ||
|-id=911
| 31911 Niklasfauth || || Niklas Fauth (born 1997) was awarded best of category and first place in the 2015 Intel ISEF for his embedded systems project. He also received the Intel Foundation Cultural and Scientific Visit to China Award. ||
|-id=912
| 31912 Lukasgrafner || || Lukas Grafner (born 1997) was awarded second place in the 2015 Intel International Science and Engineering Fair for his engineering mechanics team project. ||
|-id=916
| 31916 Arnehensel || || Arne Hensel (born 1996) was awarded best of category and first place in the 2015 Intel ISEF for his chemistry project. He also received the Dudley R. Herschbach SIYSS Award. ||
|-id=917
| 31917 Lukashohne || || Lukas Hohne (born 1997) was awarded second place in the 2015 Intel International Science and Engineering Fair for his engineering mechanics team project. ||
|-id=918
| 31918 Onkargujral || || Onkar Singh Gujral (born 1996) was awarded second place in the 2015 Intel International Science and Engineering Fair for his systems software project. ||
|-id=919
| 31919 Carragher || || Christopher Carragher (born 1996) was awarded second place in the 2015 Intel International Science and Engineering Fair for his computational biology and bioinformatics project. ||
|-id=920
| 31920 Annamcevoy || || Anna Maria McEvoy (born 1996) was awarded second place in the 2015 Intel International Science and Engineering Fair for her plant sciences project. ||
|-id=922
| 31922 Alsharif || || Shaima Lutfi Al-Sharif (born 1999) was awarded second place in the 2015 Intel International Science and Engineering Fair for her behavioral and social sciences project. ||
|-id=925
| 31925 Krutovskiy || || Roman Krutovskiy (born 1998) was awarded second place in the 2015 Intel International Science and Engineering Fair for his math project. ||
|-id=926
| 31926 Alhamood || || Abdul Jabbar Abdulrazaq Alhamood (born 1996) was awarded best of category and first place in the 2015 Intel ISEF for his plant sciences project. He also received the Dudley R. Herschbach SIYSS Award. ||
|-id=928
| 31928 Limzhengtheng || || Lim Zheng Theng (born 1998) was awarded second place in the 2015 Intel International Science and Engineering Fair for his environmental engineering team project. ||
|-id=931
| 31931 Sipiera || || Paul P. Sipiera, American planetary geologist and meteoriticist ||
|-id=933
| 31933 Tanyizhao || || Tan Yi Zhao (born 1998) was awarded second place in the 2015 Intel International Science and Engineering Fair for his environmental engineering team project. ||
|-id=934
| 31934 Benjamintan || || Tan Kye Jyn Benjamin (born 1998) was awarded second place in the 2015 Intel International Science and Engineering Fair for his environmental engineering team project. ||
|-id=935
| 31935 Midgley || || Anna Illing Midgley (born 1999) was awarded second place in the 2015 Intel International Science and Engineering Fair for her plant sciences project. ||
|-id=936
| 31936 Bernardsmit || || Bernard Adriaan Smit (born 1997) was awarded second place in the 2015 Intel International Science and Engineering Fair for his microbiology project. ||
|-id=937
| 31937 Kangsunwoo || || Kang Sun Woo (born 1997) was awarded second place in the 2015 Intel International Science and Engineering Fair for her earth and environmental sciences project. ||
|-id=938
| 31938 Nattapong || || Nattapong Chueasiritaworn (born 2000) was awarded best of category and first place in the 2015 Intel ISEF for his animal sciences team project. He also received the European Union Contest for Young Scientists Award. ||
|-id=939
| 31939 Thananon || || Thananon Hiranwanichchakorn (born 1998) was awarded best of category and first place in the 2015 Intel ISEF for his animal sciences team project. He also received the European Union Contest for Young Scientists Award. ||
|-id=940
| 31940 Sutthiluk || || Sutthiluk Rakdee (born 1999) was awarded best of category and first place in the 2015 Intel International Science and Engineering Fair for her animal sciences team project. ||
|-id=943
| 31943 Tahsinelmas || || Tahsin Elmas (born 1996) was awarded second place in the 2015 Intel International Science and Engineering Fair for his chemistry team project. ||
|-id=944
| 31944 Seyitherdem || || Seyit Alp Herdem (born 1995) was awarded second place in the 2015 Intel International Science and Engineering Fair for his chemistry team project. ||
|-id=946
| 31946 Sahilabbi || || Sahil Abbi (born 1999) was awarded second place in the 2015 Intel International Science and Engineering Fair for his systems software team project. ||
|-id=951
| 31951 Alexisallen || || Alexis Sue Allen (born 1998) was awarded first place in the 2015 Intel International Science and Engineering Fair for her animal sciences project. ||
|-id=952
| 31952 Bialtdecelie || || Meghan Dong Duo Bialt-DeCelie (born 1997) was awarded second place in the 2015 Intel International Science and Engineering Fair for her plant sciences team project. ||
|-id=953
| 31953 Bontha || || Naveena Bontha (born 1999) was awarded second place in the 2015 Intel International Science and Engineering Fair for her engineering mechanics project. ||
|-id=954
| 31954 Georgiebotev || || Georgie Botev (born 1998) was awarded second place in the 2015 Intel International Science and Engineering Fair for his embedded systems project. ||
|-id=956
| 31956 Wald || || Abraham Wald, 20th-century American statistician ||
|-id=957
| 31957 Braunstein || || Simone Braunstein (born 1997) was awarded second place in the 2015 Intel International Science and Engineering Fair for her robotics and intelligent machines project. ||
|-id=959
| 31959 Keianacave || || Keiana Ashli Cavé (born 1998) was awarded second place in the 2015 Intel International Science and Engineering Fair for her earth and environmental sciences project. ||
|-id=969
| 31969 Yihuachen || || Yi Hua Chen (born 1998) was awarded second place in the 2015 Intel International Science and Engineering Fair for her cellular and molecular biology project. ||
|-id=971
| 31971 Beatricechoi || || Seung Hye (Beatrice) Choi (born 1998) was awarded second place in the 2015 Intel International Science and Engineering Fair for her chemistry project. ||
|-id=972
| 31972 Carlycrump || || Carly Elizabeth Crump (born 1996) was awarded best of category and first place in the 2015 Intel ISEF for her microbiology project. She also received the Dudley R. Herschbach SIYSS Award. ||
|-id=973
| 31973 Ashwindatta || || Ashwin Nivas Datta (born 1998) was awarded first place in the 2015 Intel International Science and Engineering Fair for his engineering mechanics project. ||
|-id=975
| 31975 Johndean || || John L. Dean (born 1997) was awarded second place in the 2015 Intel International Science and Engineering Fair for his physics and astronomy project. ||
|-id=976
| 31976 Niyatidesai || || Niyati Ketan Desai (born 1997) was awarded second place in the 2015 Intel International Science and Engineering Fair for her physics and astronomy project. ||
|-id=977
| 31977 Devalapurkar || || Sanath Kumar Devalapurkar (born 2000) was awarded best of category and first place in the 2015 Intel ISEF for his math project. He also received the European Union Contest for Young Scientists Award. ||
|-id=978
| 31978 Jeremyphilip || || Jeremy Philip D'Silva (born 1999) was awarded second place in the 2015 Intel International Science and Engineering Fair for his biomedical and health sciences project. ||
|-id=980
| 31980 Axelfeldmann || || Axel Stephan Feldmann (born 1997) was awarded second place in the 2015 Intel International Science and Engineering Fair for his computational biology and bioinformatics project. ||
|-id=982
| 31982 Johnwallis || || John Wallis, 17th-century British mathematician, inventor of the symbol ∞ for infinity ||
|-id=984
| 31984 Unger || || Adam Unger (born 1919), a basket maker by profession, was heavily involved in the construction of the Starkenburg Observatory. ||
|-id=985
| 31985 Andrewryan || || Andrew J. Ryan (born 1988) is a postdoctoral research associate at the University of Arizona and works on the OSIRIS-REx mission to asteroid (101955) Bennu. He is an expert in the thermal conductivity of planetary materials. ||
|-id=988
| 31988 Jasonfiacco || || Jason Christopher Fiacco (born 1998) was awarded second place in the 2015 Intel International Science and Engineering Fair for his biochemistry team project. ||
|-id=991
| 31991 Royghosh || || Roy Ghosh (born 2000) was awarded second place in the 2015 Intel International Science and Engineering Fair for his biomedical and health sciences project. ||
|-id=996
| 31996 Goecknerwald || || Claire Goeckner-Wald (born 1996) was awarded second place in the 2015 Intel International Science and Engineering Fair for her physics and astronomy team project. ||
|}
References
031001-032000 |
1336514 | https://en.wikipedia.org/wiki/Mostek | Mostek | Mostek was an integrated circuit manufacturer, founded in 1969 by L. J. Sevin, Louay E. Sharif, Richard L. Petritz and other ex-employees of Texas Instruments. Initially its products were manufactured in Worcester, Massachusetts, but by 1974 most of its manufacturing was done in the Carrollton, Texas facility on Crosby Road. At its peak in the late 1970s, Mostek held an 85% share of the worldwide dynamic random-access memory (DRAM) chip market, until being eclipsed by Japanese DRAM manufacturers who offered equivalent chips at lower prices by dumping memory on the market.
In 1979, soon after its market peak, Mostek was purchased by United Technologies Corporation for $345M. In 1985, after several years of red ink and declining market share, UTC sold Mostek for $71M to the French electronics firm Thomson SA, later part of STMicroelectronics. Mostek's intellectual property portfolio, which included rights to the Intel x86 microprocessor family as well as many foundational patents in DRAM technology, provided a large windfall of royalty payments for STMicroelectronics in the 1990s.
Early calculator business
Mostek's first contract, a $400 job for circuit design, came from Burroughs.
The first design to be produced in their newly set-up MOS fab in Worcester was the MK1001, a simple shifter chip. This was followed by a 1K PMOS aluminum-gate DRAM, the MK4006, which was manufactured in their Carrollton facility. Mostek had been working with Sprague Electric to develop the ion implantation process, which provided a tremendous gain in the control of doping profiles. Using ion implantation, Mostek became an early leader in MOS manufacturing technology while their competition was still mostly using the older bipolar technology. The resulting increased speed and lower cost of the MK4006 memory chip made it the runaway favorite of IBM and other mainframe and minicomputer manufacturers (cf. BUNCH, Digital Equipment Corporation).
In 1970 Busicom, a Japanese adding machine manufacturer, approached Intel and Mostek with a proposal to introduce a new electronic calculator line. Intel responded first, providing them with the Intel 4004, which they used in a line of desktop calculators. Mostek's device took longer to develop but was a single-chip calculator solution, the MK6010. Busicom used the Mostek design in a new handheld line, the Busicom LE-120A, which went on the market in 1971 and was the smallest calculator available for some time. Hewlett-Packard also contracted with Mostek for mask development and production of chips for their HP-35 and HP-45 lines.
World leader in DRAM
Mostek co-founder Robert Proebsting invented DRAM address multiplexing with the MK4096, a 4096 × 1-bit DRAM introduced in 1973. Address multiplexing was a revolutionary approach which reduced cost and board space by fitting a 4K DRAM into a 16-pin package, while competitors took the evolutionary approach, which led to a bulky and relatively expensive 22-pin package. Competitors derided the Mostek approach as unnecessarily complex, but Proebsting understood that the future roadmap for DRAM memories would benefit greatly if only one new pin were needed for every 4X increase in memory size, instead of the two pins per 4X for the evolutionary approach. Computer manufacturers found address multiplexing to be a compelling feature, as they saw that a future 64K DRAM chip would save 8 pins if implemented with address multiplexing, and subsequent generations even more. Per-pin costs are a major cost driver in integrated circuits, and the multiplexed approach used less silicon area, which further reduced chip cost. The MK4096 was produced using an NMOS aluminum-gate process with an added interconnect layer of polysilicon (dubbed the SPIN process).
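The roadmap arithmetic is easy to verify. The Python sketch below, an illustration based only on the figures in this article rather than on Mostek data sheets, counts the address pins an N × 1-bit DRAM needs with and without multiplexing; the multiplexed count grows by one pin per 4X capacity step, and the 64K row reproduces the 8-pin saving described above.

```python
import math

def address_pins(capacity_bits: int, multiplexed: bool) -> int:
    """Address pins needed for a capacity x 1-bit DRAM."""
    lines = math.ceil(math.log2(capacity_bits))  # e.g. 4096 -> 12 address bits
    if multiplexed:
        # Row and column halves share the same pins, strobed in turn
        # by *RAS and *CAS, so only half the lines reach the package.
        return math.ceil(lines / 2)
    return lines

for capacity in (4 * 1024, 16 * 1024, 64 * 1024, 256 * 1024):
    direct = address_pins(capacity, multiplexed=False)
    muxed = address_pins(capacity, multiplexed=True)
    print(f"{capacity // 1024:>3}K x 1: {direct} direct vs {muxed} multiplexed "
          f"address pins (saves {direct - muxed})")
```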
The fear, uncertainty and doubt spread by the competition regarding address multiplexing was dispelled by the actual performance of the MK4096, which proved solid and robust in all types of computer memory designs.
In 1976 Mostek introduced the silicon-gate MK4027 (an improved version of the metal-gate MK4096) and the new MK4116 16K double-poly silicon-gate DRAM. They were designed by Paul Schroeder, who later left Mostek to co-found Inmos. From this point until the late 1970s, Mostek was a continual leader in the DRAM field, holding as much as 85% of the world market for DRAM. The MK4027 and MK4116 were reverse-engineered by Mosaid and successfully cloned by many companies.
The 64K generation of DRAMs required a transition from 12 volt (plus +5 and −5 volt) supplies to 5 volt-only operation, in order to free the +12 and −5 volt pins for use as addresses (the +5 volt and ground pins were assigned to pins 8 and 16, respectively, rather than the 16-pin TTL DIP standard of pin 8 for ground and pin 16 for +5 volts). While most competitors took a conservative approach by scaling the basic MK4116 design and process, Mostek undertook a major redesign which incorporated forward-looking features (such as controlled precharge current) that were not necessary at the 64K level, and the redesign delayed their entry into the market. Mostek's 256K DRAM was further delayed by a then-ambitious two-layer metallization design. When the price for 64K DRAMs collapsed in 1985, Mostek's 256K device was not ready for volume production, and the company suffered heavy losses.
Mostek's DRAM legacy is exemplified in the MK4116, MK4164 and MK41256, each of which was successfully cloned by competitors, both US-based and overseas. "By four" DRAM was a simple adaptation of the MK4116/MK4164/MK41256 technology, utilizing a larger package to accommodate the additional data bits and multiplexing the data in/out pins as well; the basic *RAS, *CAS, *WRITE and multiplexed address bus concept was retained intact.
World leader in telecommunications products
Mostek enjoyed many years of mastery of the international market for telecommunications products. Their product line included telephone tone and pulse dialers, touchtone decoders, counters, top-octave generators (used by Hammond, Baldwin, and others), CODECs, watch circuits, and a host of custom products for a variety of customers. The custom products used the simple PMOS process and helped maintain Mostek's cash flow through a smoky fire that closed the fab for six months, intense DRAM competition, and other semiconductor market pressures. During the fab shutdown, production of the PMOS products was shifted to several external fabs. Several employees played a key role in the Telecommunications and Industrial Products Department. Robert Paluck headed the department, assisted by Michael Callahan, Charles Johnson, William H. Bradley, Robert C. Jones, Robert Banks, Ted Lewis, Darin Kincaid, William Cummings, and a host of other employees. Ted Lewis and William H. Bradley were designated as key employees after the United Technologies purchase. Paluck left Mostek to work with Sevin-Rosen Partners and Convex Computer. Bradley designed all of the custom products based on the single-chip-calculator platform, as well as the code for the wristwatch devices produced by Mostek for Bulova and other customers. For a short while, Paluck headed a joint venture called Mostek Hong Kong, a collaboration with Bulova for the production of high-end wristwatches based on Mostek designs. Bradley was an employee of that joint venture. As Mostek's focus shifted to its DRAM products, the industrial and telecommunications products were neglected and their market share vanished.
Microprocessor second sourcing deals
With this foundation in calculator chips and high-volume DRAM production, Mostek gained a reputation as a leading semiconductor "fabrication house" (fab) in the early 1970s. In 1974 they introduced the 5065, an 8-bit PMOS microprocessor with 51 instructions whose execution times ranged from 3 to 16 μs. Architectural features included multiple nested indirect addressing and three register sets (each consisting of an accumulator, a program counter and a carry/link bit) which could be used for interrupt processing or for subroutines. Bill Mensch was one of the designers of the Mostek 5065. A more popular product was the Mostek 3870, introduced in 1977, which combined the two-chip Fairchild F8 (3850 + 3851) into a single chip. William H. Bradley designed a host of custom products based on the MK3870. Fairchild later licensed the 3870 back from Mostek. Mostek also produced ROM chips on demand, as well as the chips powering the Hammond electronic organ.
Mostek cut a deal with a startup, Zilog, in which Mostek provided fab resources to manufacture the Zilog microprocessors in return for second sourcing rights to the Zilog family. Mostek produced the MK3880 (its version of the Zilog Z80) and a series of Z80 support chips until Zilog built their own fab. The Z80 eventually became the most popular microcomputer family, as it was used in millions of embedded devices as well as in many home computers and computers using the de facto standard CP/M operating system, such as the Osborne, Kaypro, and TRS-80 models.
When Vin Prothro, president, and L. J. Sevin, chairman of the board, discovered that Zilog had modified the recipe for Z80 chips to keep the yields low, thereby buying Zilog time to build their own fabs, Mostek sought a new microprocessor partner. They negotiated a deal with Intel to gain second sourcing rights to the Intel 8086 microprocessor family and the future x86 designs. After Sevin left to become a venture capitalist (funding Compaq and many other companies), Prothro signed another pivotal deal with Motorola to gain the rights to the Motorola 68000 and VME computers.
The Intel x86 microprocessors would go on to become the brains for the IBM PC, while the Motorola 68000 would become the heart of the Apple Macintosh line. Mostek had secured the rights to every microprocessor family that would be important for the next 25 years. However, Mostek chose not to aggressively follow up its entry into microprocessors—instead maintaining its concentration on the highly competitive (and thereby unprofitable) DRAM business.
Decline in the face of Asian competition
Mostek merged with United Technologies in 1979 to prevent an unfriendly takeover by Gould at the 10th anniversary of the company's founding, when a large block of stock options controlled by Sprague became vested. United Technologies would go on to lose its investment. The leadership chosen for the semiconductor division did not appreciate the up-front investment required or the long time to return on investment. They sacrificed Mostek's leadership position in some markets, concentrating everything in the DRAM basket. They spent hundreds of millions trying to keep the company going during the various semiconductor and video game crashes of the early 1980s, and eventually gave up and sold it to Thomson Semiconductor in 1985 for $71 million.
Unfortunately, the DRAM marketplace was the beachhead where Japanese firms would make their successful assault on the global semiconductor market. Mostek was unable to match the Japanese firms' extremely aggressive pricing, and succumbed during a particularly brutal price war when Korean firms (including Samsung, now the world's largest electronics conglomerate) tried to beat the Japanese at their own game. Micron Technology (one of several Mostek spinoffs) would later bring suit to prove the Japanese memory manufacturers guilty of price dumping, but the ruling would be too late to save Mostek.
Thomson proceeded to lay off 80% of the workforce in an attempt to return the company to profitability. The next year they merged with SGS-ATES to become SGS-Thomson, later renamed STMicroelectronics and based in Geneva, Switzerland. Although by this time most of Mostek's designs were no longer commercially viable, their DRAM patents turned out to be valuable, and STM started a series of lawsuits to collect royalties. Between 1987 and 1993 STM made $450 million on these licenses alone.
Spinoffs
Jerry Rogers founded Cyrix in 1988 to capitalize on the Mostek second source agreement that allowed any 80x86 processor to be legally copied, which Intel attempted to stop via lawsuits. Eventually, after losing many legal battles, Intel simply named the successor of the 80486 the Pentium rather than the 80586; unlike a bare part number, the trademarked name could be protected, effectively ending the agreement.
Micron Technology was a very successful spinoff founded by a handful of Mostek employees, including Ward Parkinson, Dennis Wilson, and Doug Pitman.
Sevin Rosen Funds was co-founded by L. J. Sevin, who was also a co-founder and CEO of Mostek. Sevin Rosen funded Compaq Computer, Cyrix, Convex Computer and more.
References
"Oral History of Charles Phipps", Computer History Museum, May 28, 2009, Interviewer: Rosemary Remacle.
"Mostek Prospectus" March 17, 1972
External links
MK4116 DRAM (Smithsonian Chip Collection)
Mostek (Antique Tech)
American companies established in 1969
American companies disestablished in 1985
Computer companies established in 1969
Computer companies disestablished in 1985
Computer memory companies
Defunct computer companies of the United States
Electronics companies of the United States
Texas Instruments spinoffs |
4066459 | https://en.wikipedia.org/wiki/UnixWorld | UnixWorld | UnixWorld (subtitled McGraw-Hill's Magazine of Open Systems Computing) was a magazine about Unix systems, published from May 1984 until December 1995.
References
Defunct computer magazines published in the United States
Magazines established in 1984
Magazines disestablished in 1995
Magazines published in California
Unix history |
61325885 | https://en.wikipedia.org/wiki/PURB%20%28cryptography%29 | PURB (cryptography) | In cryptography, a padded uniform random blob or PURB is a discipline for encrypted data formats designed to minimize unintended information leakage either from its encryption format metadata or from its total length.
Properties of PURBs
When properly created, a PURB's content is indistinguishable from a uniform random bit string to any observer without a relevant decryption key. A PURB therefore leaks no information through headers or other cleartext metadata associated with the encrypted data format. This leakage minimization "hygiene" practice contrasts with traditional encrypted data formats such as Pretty Good Privacy, which include cleartext metadata encoding information such as the application that created the data, the data format version, the number of recipients the data is encrypted for, the identities or public keys of the recipients, and the ciphers or suites that were used to encrypt the data. While such encryption metadata was considered non-sensitive when these encrypted formats were designed, modern attack techniques have found numerous ways to employ such incidentally-leaked metadata in facilitating attacks, such as by identifying data encrypted with weak ciphers or obsolete algorithms, fingerprinting applications to track users or identify software versions with known vulnerabilities, or traffic analysis techniques such as identifying all users, groups, and associated public keys involved in a conversation from an encrypted message observed between only two of them.
In addition, a PURB is padded to a constrained set of possible lengths, in order to minimize the amount of information the encrypted data could potentially leak to observers via its total length. Without padding, encrypted objects such as files or bit strings up to M bits in length can leak up to O(log M) bits of information to an observer - namely the number of bits required to represent the length exactly. A PURB is padded to a length representable in a floating point number whose mantissa is no longer (i.e., contains no more significant bits) than its exponent. This constraint limits the maximum amount of information a PURB's total length can leak to O(log log M) bits, a significant asymptotic reduction and the best achievable in general for variable-length encrypted formats whose multiplicative overhead is limited to a constant factor of the unpadded payload size. This asymptotic leakage is the same as one would obtain by padding encrypted objects to a power of some base, such as a power of two. Allowing some significant mantissa bits in the length's representation rather than just an exponent, however, significantly reduces the overhead of padding. For example, padding to the next power of two can impose up to 100% overhead by nearly doubling the object's size, while a PURB's padding imposes overhead of at most 12% for small strings, decreasing gradually (to 6%, 3%, etc.) as objects get larger.
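The Padmé padding function defined in the PURB paper (see Tradeoffs and limitations below) is one concrete realization of this constraint: it keeps only about log2(log2(L)) significant bits of the length and rounds everything below them up to zero. A minimal Python sketch, assuming the algorithm as published:

```python
import math

def padme(length: int) -> int:
    """Return the Padme-padded size, in bytes, for a payload of `length` >= 2 bytes."""
    e = int(math.log2(length))      # exponent: position of the highest set bit
    s = int(math.log2(e)) + 1       # number of significant mantissa bits allowed
    last_bits = e - s               # low-order bits that must be zeroed
    mask = (1 << last_bits) - 1
    return (length + mask) & ~mask  # round up to the next representable size

# Overhead is at most ~12% for small payloads and shrinks as objects grow.
for n in (100, 1000, 9000, 1_000_000):
    padded = padme(n)
    print(f"{n} -> {padded} (+{100 * (padded - n) / n:.1f}%)")
```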
Experimental evidence indicates that on data sets comprising objects such as files, software packages, and online videos, leaving objects unpadded or padding to a constant block size often leaves them uniquely identifiable by total length alone. Padding objects to a power of two or to a PURB length, in contrast, ensures that most objects are indistinguishable from at least some other objects and thus have a nontrivial anonymity set.
Encoding and decoding PURBs
Because a PURB is a discipline for designing encrypted formats and not a particular encrypted format, there is no single prescribed method for encoding or decoding PURBs. Applications may use any encryption and encoding scheme, provided that it produces a bit string that appears uniformly random to an observer without an appropriate key (under appropriate hardness assumptions) and that the PURB is padded to one of the allowed lengths. Correctly-encoded PURBs therefore do not identify the application that created them in their ciphertext. A decoding application, therefore, cannot readily tell before decryption whether a PURB was encrypted for that application or its user, other than by trying to decrypt it with any available decryption keys.
Encoding and decoding a PURB presents technical efficiency challenges, in that traditional parsing techniques are not applicable because a PURB by definition has no metadata markers that a traditional parser could use to discern the PURB's structure before decrypting it. Instead, a PURB must be decrypted first obliviously to its internal structure, and then parsed only after the decoder has used an appropriate decryption key to find a suitable cryptographic entrypoint into the PURB.
Encoding and decoding PURBs intended to be decrypted by several different recipients, public keys, and/or ciphers presents the additional technical challenge that each recipient must find a different entrypoint at a distinct location in the PURB non-overlapping with those of the other recipients, but the PURB presents no cleartext metadata indicating the positions of those entrypoints or even the total number of them. The paper that proposed PURBs also included algorithms for encrypting objects to multiple recipients using multiple cipher suites. With these algorithms, recipients can find their respective entrypoints into the PURB with only a logarithmic number of trial decryptions using symmetric-key cryptography and only one expensive public-key operation per cipher suite.
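As an illustration of the recipient side, the sketch below trial-decrypts fixed-size candidate entrypoint slots at doubling offsets until one authenticates, which is the pattern that yields a logarithmic number of symmetric trial decryptions. The concrete layout here (64-byte slots at offsets 0, 64, 128, 256, and so on, AES-GCM with an all-zero nonce) is invented for this example and is not the paper's actual hash-table layout; it assumes the PyCA cryptography package is installed.

```python
from typing import Optional

from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

ENTRYPOINT_LEN = 64    # hypothetical slot size: 48-byte payload + 16-byte GCM tag
NONCE = b"\x00" * 12   # illustration only; a real design derives unique nonces

def find_entrypoint(purb: bytes, key: bytes) -> Optional[bytes]:
    """Trial-decrypt candidate slots; an authenticated success is our entrypoint."""
    aead = AESGCM(key)
    offset = 0
    while offset + ENTRYPOINT_LEN <= len(purb):
        slot = purb[offset : offset + ENTRYPOINT_LEN]
        try:
            # An authenticated decryption means this slot was meant for us;
            # its plaintext would locate and key the payload inside the PURB.
            return aead.decrypt(NONCE, slot, None)
        except InvalidTag:
            pass  # not our slot; keep searching
        offset = ENTRYPOINT_LEN if offset == 0 else offset * 2  # 0, 64, 128, 256, ...
    return None
```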
A third technical challenge is representing the public-key cryptographic material that needs to be encoded into each entrypoint in a PURB, such as the ephemeral Diffie-Hellman public key a recipient needs to derive the shared secret, in an encoding indistinguishable from uniformly random bits. Because the standard encodings of elliptic-curve points are readily distinguishable from random bits, for example, special indistinguishable encoding algorithms must be used for this purpose, such as Elligator and its successors.
Tradeoffs and limitations
The primary privacy advantage that PURBs offer is a strong assurance that correctly-encrypted data leaks nothing incidental via internal metadata that observers might readily use to identify weaknesses in the data or software used to produce it, or to fingerprint the application or user that created the PURB. This privacy advantage can translate into a security benefit for data encrypted with weak or obsolete ciphers, or by software with known vulnerabilities that an attacker might exploit based on trivially-observable information gleaned from cleartext metadata.
A primary disadvantage of the PURB encryption discipline is the complexity of encoding and decoding, because the decoder cannot rely on conventional parsing techniques before decryption. A secondary disadvantage is the overhead that padding adds, although the padding scheme proposed for PURBs incurs at most only a few percent overhead for objects of significant size.
The Padme padding scheme proposed in the PURB paper produces files of only certain distinctive sizes. An encrypted file may therefore often be identified as PURB-encrypted with high confidence, because the probability of any other file having exactly one of those padded sizes is low. Another padding problem arises with very short messages, where the padding cannot effectively hide the size of the content.
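The Padme function itself is short; a Python transcription of the algorithm described in the paper, which zeroes the low-order bits of the length so that only about log2(L) - log2(log2(L)) high-order bits remain significant, might look as follows:

```python
def padme(length: int) -> int:
    """Return the Padme-padded size for content of `length` bytes."""
    if length < 2:
        return length
    e = length.bit_length() - 1     # floor(log2(length))
    s = e.bit_length()              # floor(log2(e)) + 1
    low_bits = e - s                # low-order length bits forced to zero
    mask = (1 << low_bits) - 1
    return (length + mask) & ~mask  # round up to the next allowed size

print(padme(9000))  # 9216: about 2.4% overhead for this length
```

Very short lengths receive little or no padding under this rule, which makes the short-message limitation noted above concrete.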
One critique of incurring the complexity and overhead costs of PURB encryption is that the context in which a PURB is stored or transmitted may often leak metadata about the encrypted content anyway, and such metadata is outside the encryption format's purview or control and thus cannot be addressed by the encryption format alone. For example, an application's or user's choice of filename and directory in which to store a PURB on disk may allow an observer to infer the application that likely created it and for what purpose, even if the PURB's data content itself does not. Similarly, encrypting an E-mail's body as a PURB instead of with the traditional PGP or S/MIME format may eliminate the encryption format's metadata leakage, but cannot prevent information leakage from the cleartext E-mail headers, or from the endpoint hosts and E-mail servers involved in the exchange. Nevertheless, separate but complementary disciplines are typically available to limit such contextual metadata leakage, such as appropriate file naming conventions or the use of pseudonymous E-mail addresses for sensitive communications.
References
Cryptography
Padding algorithms |
6018283 | https://en.wikipedia.org/wiki/Path%20MTU%20Discovery | Path MTU Discovery | Path MTU Discovery (PMTUD) is a standardized technique in computer networking for determining the maximum transmission unit (MTU) size on the network path between two Internet Protocol (IP) hosts, usually with the goal of avoiding IP fragmentation. PMTUD was originally intended for routers in Internet Protocol Version 4 (IPv4). However, all modern operating systems use it on endpoints. In IPv6, this function has been explicitly delegated to the end points of a communications session.
As an extension to the standard path MTU discovery, a technique called Packetization Layer Path MTU Discovery works without support from ICMP.
Implementation
For IPv4 packets, Path MTU Discovery works by setting the Don't Fragment (DF) flag bit in the IP headers of outgoing packets. Then, any device along the path whose MTU is smaller than the packet will drop it, and send back an Internet Control Message Protocol (ICMP) Fragmentation Needed (Type 3, Code 4) message containing its MTU, allowing the source host to reduce its Path MTU appropriately. The process is repeated until the MTU is small enough to traverse the entire path without fragmentation.
IPv6 routers do not perform fragmentation, so IPv6 has no Don't Fragment option; every IPv6 packet is effectively treated as though DF were set. For IPv6, Path MTU Discovery works by initially assuming the path MTU is the same as the MTU on the link-layer interface where the traffic originates. Then, similarly to IPv4, any device along the path whose MTU is smaller than the packet will drop the packet and send back an ICMPv6 Packet Too Big (Type 2) message containing its MTU, allowing the source host to reduce its Path MTU appropriately. The process is repeated until the MTU is small enough to traverse the entire path without fragmentation.
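As a minimal illustration of the endpoint side, the following Linux-specific Python sketch asks the kernel to set the DF bit on a TCP socket and then reads back the kernel's learned path MTU; the constants are given numeric fallbacks from <linux/in.h>, since Python does not expose all of them on every platform:

```python
import socket

# Values from <linux/in.h>, looked up defensively in case the running
# Python build does not expose them.
IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
IP_PMTUDISC_DO = getattr(socket, "IP_PMTUDISC_DO", 2)
IP_MTU = getattr(socket, "IP_MTU", 14)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Set DF on all outgoing packets so that an undersized link makes a
# router drop the packet and send ICMP Fragmentation Needed back.
s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
s.connect(("example.com", 80))
s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
# After some traffic has flowed, read the kernel's current estimate
# of the path MTU for this connected socket.
print("path MTU estimate:", s.getsockopt(socket.IPPROTO_IP, IP_MTU))
s.close()
```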
If the Path MTU changes after the connection is set up and is lower than the previously determined Path MTU, the first large packet will cause an ICMP error and the new, lower Path MTU will be found. Conversely, if PMTUD finds that the path allows a larger MTU than is possible on the lower link, the OS will periodically reprobe to see if the path has changed and now allows larger packets. On both Linux and Windows this timer is set by default to ten minutes.
Problems
Many network security devices block all ICMP messages for perceived security benefits, including the errors that are necessary for the proper operation of PMTUD. This can result in connections that complete the TCP three-way handshake correctly, but then hang when data are transferred. This state is referred to as a black hole connection.
Some implementations of PMTUD attempt to prevent this problem by inferring that large payload packets have been dropped due to MTU rather than because of link congestion. However, in order for the Transmission Control Protocol (TCP) to operate most efficiently, ICMP Unreachable messages (type 3) should be permitted. A robust method for PMTUD that relies on TCP or another protocol to probe the path with progressively larger packets has been standardized in RFC 4821.
A workaround used by some routers is to change the maximum segment size (MSS) of all TCP connections passing through links with MTU lower than the Ethernet default of 1500. This is known as MSS clamping.
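On Linux routers this rewriting is typically done in the firewall (for example, with the iptables TCPMSS target). Purely as an illustration of what such a rule does, a Python sketch using the third-party scapy library might rewrite the MSS option of forwarded SYN packets like this:

```python
from scapy.all import IP, TCP  # third-party packet-crafting library

def clamp_mss(pkt, path_mtu=1400):
    """Rewrite the MSS option in a TCP SYN so neither endpoint sends
    segments too large for the narrowest link on the path."""
    if pkt.haslayer(TCP) and pkt[TCP].flags & 0x02:  # SYN flag set
        new_opts = []
        for name, value in pkt[TCP].options:
            if name == "MSS":
                # 40 bytes = 20-byte IPv4 header + 20-byte TCP header
                value = min(value, path_mtu - 40)
            new_opts.append((name, value))
        pkt[TCP].options = new_opts
        del pkt[IP].chksum, pkt[TCP].chksum  # force checksum recompute
    return pkt
```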
References
Computer network analysis
Internet protocols |
8131170 | https://en.wikipedia.org/wiki/Single-level%20store | Single-level store | Single-level storage (SLS) or single-level memory is a computer storage term which has had two meanings. The two meanings are related in that in both, pages of memory may be in primary storage (RAM) or in secondary storage (disk), and that the physical location of a page is unimportant to a process.
The term originally referred to what is now usually called virtual memory, which was introduced in 1962 by the Atlas system at Manchester.
In modern usage, the term usually refers to the organization of a computing system in which there are no files, only persistent objects (sometimes called segments), which are mapped into processes' address spaces (which consist entirely of a collection of mapped objects). The entire storage of the computer is thought of as a single two-dimensional plane of addresses (segment, and address within segment).
The persistent object concept was first introduced by Multics in the mid-1960s, in a project shared by MIT, General Electric and Bell Labs. It also was implemented as virtual memory, with the actual physical implementation including a number of levels of storage types. (Multics, for instance, had three levels: originally, main memory, a high-speed drum, and disks.)
IBM holds patents to single-level storage as implemented in the IBM i operating system on IBM Power Systems and its predecessors as far back as the System/38 that was released in 1978.
Design
With a single-level storage the entire storage of a computer is thought of as a single two-dimensional plane of addresses, pointing to pages. Pages may be in primary storage (RAM) or in secondary storage (disk); however, the current location of an address is unimportant to a process. The operating system takes on the responsibility of locating pages and making them available for processing. If a page is in primary storage, it is immediately available. If a page is on disk, a page fault occurs and the operating system brings the page into primary storage. No explicit I/O to secondary storage is done by processes: instead, reads from secondary storage are done as the result of page faults; writes to secondary storage are done when pages that have been modified since being read from secondary storage into primary storage are written back to their location in secondary storage.
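A loose user-space analogy (a Python sketch using memory-mapped files, not an implementation of a single-level store) shows the key property: the program performs loads and stores, never explicit I/O, and the operating system moves pages between RAM and disk behind the scenes:

```python
import mmap
import os

fd = os.open("objects.dat", os.O_RDWR | os.O_CREAT, 0o644)
os.ftruncate(fd, 4096)          # ensure the file is at least one page
view = mmap.mmap(fd, 4096)      # map the "object" into the address space

view[0:5] = b"hello"            # a store into memory, not a write() call
print(view[0:5])                # a load; may page-fault the data in
view.flush()                    # writeback otherwise happens lazily

view.close()
os.close(fd)
```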
System/38 and IBM i design
IBM's design of the single-level storage was originally conceived and pioneered by Frank Soltis and Glenn Henry in the late 1970s as a way to build a transitional implementation for computers with 100% solid-state memory. The thinking at the time was that disk drives would become obsolete and be replaced entirely with some form of solid-state memory. System/38 was designed to be independent of the form of hardware memory used for secondary storage. This has not come to be, however: while solid-state memory has become exponentially cheaper, disk drives have become cheaper at a similar rate, so disk drives continue to offer much higher capacity at much lower cost than solid-state memory, at the price of much slower access.
In IBM i, the operating system believes it has access to an almost unlimited storage array of 'real memory' (i.e., primary storage). An address translator maps the available real memory to physical memory, residing on disk drives (either 'spinning' or solid-state), or on a SAN server (such as the V7000). The operating system simply places an object at an address in its memory space. The OS "doesn't know" (or care) if the object is physically in memory or on a slower disk-storage device. The Licensed Internal Code, atop which the OS runs, handles page faults on object pages not in physical memory, reading the page into an available page frame in primary storage.
With the IBM i implementation of single-level storage, page faults are divided into two categories. These are database faults and non-database faults. Database faults occur when a page associated with a relational database object like a table, view or index is not currently in primary storage. Non-database faults occur when any other type of object is not currently in primary storage.
IBM i treats all secondary storage as a single pool of data, rather than as a collection of multiple pools (file systems), as is usually done on other operating systems such as Unix-like systems and Microsoft Windows. It intentionally scatters the pages of all objects across all disks so that the objects can be stored and retrieved much more rapidly. As a result, an IBM i server rarely becomes disk bound. Single-level storage operating systems also allow CPU, memory and disk resources to be freely substituted for each other at run time to smooth out performance bottlenecks.
See also
System/38
IBM i
Extremely Reliable Operating System
Memory-mapped file
References
IBM i
AS/400 |
26569 | https://en.wikipedia.org/wiki/Return%20to%20Castle%20Wolfenstein | Return to Castle Wolfenstein | Return to Castle Wolfenstein is a first-person shooter video game published by Activision, released on November 19, 2001, for Microsoft Windows and subsequently for PlayStation 2, Xbox, Linux and Macintosh. The game serves as a remake of the Wolfenstein series. It was developed by Gray Matter Interactive and Nerve Software developed its multiplayer mode. id Software, the creators of Wolfenstein 3D, oversaw the development and were credited as executive producers. The multiplayer side eventually became the most popular part of the game, and was influential in the genre. Splash Damage created some of the maps for the Game of the Year edition. A sequel, titled Wolfenstein, was released on August 18, 2009.
Gameplay
The game is played from the first-person perspective; the player's tasks include retrieval missions, sabotage, and assassinations. Players are armed with typical WW2 weaponry but can also use fictional weapons such as a German-made minigun or a Tesla gun. Players can also use stealth to eliminate enemies, with some missions strictly requiring stealth. Enemies vary from standard soldiers to undead and experimental creatures. Health is replenished by collecting health packs and food; armor can also be collected for additional protection.
Multiplayer
Multiplayer is an objective-based game mode in which players are split into two teams—Axis and Allies. Each team has a set of objectives to complete: the Allies must usually destroy or steal some Axis asset, while the Axis must defend it. Objectives fall into two categories, primary and secondary. Primary objectives must be completed for victory, such as stealing secret documents or destroying a radar array. Secondary objectives are optional, but completing them can aid the appropriate team, such as blowing out a door to open a tunnel that shortens travel time or allows less-noticeable infiltration of the enemy base.
Each team has access to a slightly different set of weapons, matching those used by each side in World War II. Players can choose from four different classes: Soldier, Medic, Engineer and Lieutenant. Soldiers can carry heavy weapons, such as the Panzerfaust, Venom Cannon or Flamethrower, which are not available to other classes. Medics can revive and heal other teammates. Engineers can breach obstacles and arm and defuse dynamite. Lieutenants can supply ammo to teammates and are able to call in air strikes.
Each class specializes in a certain aspect of the game, and an effective team balances players across all four classes: a soldier to blast through enemy defenses, a medic to support the team and keep it alive (the soldier supplying the firepower that medics lack, the medic supplying the healing that soldiers lack), a lieutenant to resupply teammates—especially soldiers—with ammunition, and engineers to complete the objective, their way cleared by the soldier and sustained by the lieutenant. Return to Castle Wolfenstein was the first multiplayer FPS game to utilize classes with varying equipment and abilities, similar to gameplay seen in role-playing games.
There are three different modes of play, each allowing for a different experience: objective, stopwatch, and checkpoint. Stopwatch calls for the Allied side to complete a set of objectives within a predefined time limit; the teams then swap sides, and the new Allied team must complete the same objectives in a shorter time. Checkpoint is a mode in which teams capture flags, more commonly known as Capture the Flag (CTF); whichever team is first to control all the flags at once wins. The soldier class can take one of several subclasses depending upon the special or heavy weapon selected. The multiplayer demo includes a beachhead assault map similar to Omaha Beach.
Plot
In 1943, assigned to the Office of Secret Actions (OSA) from the military, US Army Ranger William "B.J." Blazkowicz and British operative Agent One are sent into Egypt to investigate activity of the German SS Paranormal Division. The duo find themselves witness to the SS releasing an ancient curse around the dig site, resurrecting scores of zombies from their slumber. Pushing through the mummies and Nazis, B.J. and Agent One are led to an airfield and a location to follow. As they tail the SS, the two are shot down near Austria and captured by the Nazis. Agent One and Blazkowicz are imprisoned in Castle Wolfenstein, a remote, medieval castle that serves as a stronghold, prison, and research station. During their incarceration, Agent One is tortured for information and dies from electrocution. B.J., however, manages to escape Castle Wolfenstein's dungeon and fights his way out of the castle, using a cable car to leave the area and meet up with Kessler, a member of the German resistance in a nearby village.
Meanwhile, the SS Paranormal Division, under Oberführer Helga von Bülow, has long since moved from Egypt and has been excavating the catacombs and crypts of an ancient church within the village itself in search of the resting place of a "Dark Knight". The Division's sloppy precautions have led to the release of an ancient curse and the awakening of hordes of undead creatures, this time including Saxon knights. The majority of the SS finally realize the dangers and seal off the entrance into the catacombs, leaving many soldiers trapped inside. B.J. descends regardless and fights both Nazis and undead until he arrives at the ancient house of worship, the Defiled Church, where Nazi scientist Professor Zemph is conducting a 'life essence extraction' on the corpse of a Dark Knight, which, thanks to some Nazi technology, succeeds. Shortly before B.J.'s arrival, Zemph tries to talk the impatient Helga von Bulow out of retrieving an ancient Thulian artifact, the "Dagger of Warding" from a nearby altar in an isolated area of the church, but she shoots him and proceeds. This final blunder awakens another monster, Olaric, which kills and dismembers her. Blazkowicz, after a heated battle against spirits and demon attacks, defeats Olaric, and then is airlifted out with Zemph's notes and the dagger.
With the Helga lead having come to a close, the OSA begins to shift its focus to one of Germany's leading scientific researchers and Head of the SS Special Projects Division, Oberführer Wilhelm "Deathshead" Strasse. Their investigation leads the OSA to realize that Deathshead is preparing to launch an attack on London. He intends to use a V-2 rocket fitted with an experimental biological warhead, launching it from his base near Katamarunde in the Baltics. Due to the stealthy nature in which the OSA needs to act, Blazkowicz is parachuted some distance from the missile base and separated from his equipment. After collecting his gear, he smuggles himself into a supply truck bound for the base. Once inside, Blazkowicz destroys the V-2 on its launchpad and fights his way out of the facility towards an airbase filled with experimental jet aircraft. There, he commandeers a "Kobra" rocket-plane and flies to safety in Malta.
Eager to know more about Deathshead and his secret projects, the OSA sends Blazkowicz to the bombed-out city of Kugelstadt, where he is assisted by members of the German Kreisau Circle resistance group in breaking into a ruined factory and exfiltrating a defecting scientist. It is there he discovers the blueprints (and prototype) of the Reich's latest weapon, an electrically operated hand-held minigun dubbed the Venom Gun. Blazkowicz eventually breaks into Deathshead's underground research complex, the Secret Weapons Facility. There he encounters the horrific fruits of Deathshead's labors: creatures, malformed, and twisted through surgery and mechanical implants. The creatures escape from their containments and go on a rampage. Blazkowicz fights his way through the facility, only to see Deathshead escape the chaos by U-boat, and learns of his destination by interrogating a captured German officer.
Blazkowicz is then parachuted into Norway, close to Deathshead's mysterious "X-Labs." After breaking into the facility, which has been overrun by the twisted creatures he encountered in Kugelstadt (dubbed 'Lopers'), Blazkowicz retrieves Deathshead's journal, which links Deathshead's research to the rest of the SS Paranormal's occult activity. Finally catching up with Deathshead, Blazkowicz comes face to face with a completed and fully armored Übersoldat, and kills the researchers who have developed it. After the Übersoldat is destroyed, Deathshead escapes in a Kobra rocket-plane and disappears for the rest of the game.
After studying the documents captured by Blazkowicz, the OSA has become aware of a scheme codenamed 'Operation: Resurrection', a plan to resurrect Heinrich I, a legendary and powerful Saxon warlock-king from 943 AD. Despite the skepticism of senior Allied commanders, the OSA parachutes Blazkowicz back near Castle Wolfenstein, at the Bramburg Dam, where he fights his way until he arrives at the village town of Paderborn. After assassinating all the senior officers of the SS Paranormal Division present there for the resurrection, Blazkowicz fights his way through Chateau Schufstaffel and into the grounds beyond. After fighting two more Übersoldaten, Blazkowicz enters an excavation site near Castle Wolfenstein.
Inside the excavation site, Blazkowicz fights Nazi guards and prototype Übersoldaten, and makes his way to a boarded-up entrance to Castle Wolfenstein's underground crypts. There, he finds that the ruined and decaying sections of the castle has become infested with undead creatures, which are attacking the castle's garrison. After fighting his way through the underworkings of the castle, Blazkowicz arrives too late at the site of a dark ceremony to prevent the resurrection of Heinrich I. At the ceremony, SS psychic and Oberführerin Marianna Blavatsky conjures up dark spirits, which transform three of Deathshead's Übersoldaten into Dark Knights, Heinrich's lieutenants. She ultimately raises Heinrich I, who turns her into his undead slave. Blazkowicz destroys the three Dark Knights, the undead Marianna Blavatsky, and eventually Heinrich I. In the distance, Reichsführer-SS Heinrich Himmler remarks how matters have been ruined as he leaves for Berlin to face an expectant Hitler.
Back in the OSA, Operation Resurrection is closed and Blazkowicz is off on some "R&R" — shooting Nazis.
Development
Return to Castle Wolfenstein includes a story-based single-player campaign, as well as a team-based networked multiplayer mode.
In the campaign, Allied agents from the fictional "Office of Secret Actions" (OSA) are sent to investigate rumors surrounding one of Heinrich Himmler's personal projects, the SS Paranormal Division (also see Ahnenerbe). The agents are, however, captured before completing their mission and are imprisoned in Castle Wolfenstein. Taking the role of Blazkowicz, the player must escape the castle. The player soon investigates the activities of the SS Paranormal Division, which include research on resurrecting corpses and biotechnology, while also sabotaging weapons of mass destruction such as V-2 rockets and biological warheads. During the game the player battles Waffen SS soldiers, elite Fallschirmjäger (paratroopers) known as Black Guards, undead creatures, and Übersoldaten (supersoldiers) formed from a blend of surgery and chemical engineering conducted by Wilhelm "Deathshead" Strasse. The end boss is an undead Saxon warrior-prince named Heinrich I.
The cable car in the castle is based on the 1968 movie Where Eagles Dare, where a U.S. Army Brigadier General is captured and taken prisoner to the Schloß Adler, a fortress high in the Alps above the town of Werfen, reachable only by cable car, and the headquarters of the German Secret Service in southern Bavaria. The supernatural element is based on the story of Castle Wewelsburg, a 17th-century castle occupied by the Germans under Heinrich Himmler's control, and used for occult rituals and practices.
One of the multiplayer maps (also released individually as the multiplayer demo) depicts Omaha Beach in Operation Overlord, and is inspired by the opening scene of Saving Private Ryan. This put Return to Castle Wolfenstein in competition with another id Tech 3-powered World War II-themed first person shooter, Medal of Honor: Allied Assault which also features its own take on Omaha Beach.
The German version of the game avoids making direct reference to the Nazi Party and the "Third Reich", in order to comply with strict laws in Germany. The player is not battling Nazis but a secret sect called the "Wolves", led by Heinrich Höller, whose name is a pun on the original character Himmler (Himmler roughly translates as "Heavener", Höller as "Heller"). The Nazi swastika is also not present: the German forces use a Wolfenstein logo which is a combination of a stylized double-headed eagle prominent in most Nazi symbolism, a "W" (standing for Wolfenstein), and the Quake III: Team Arena "QIII" logo (the game engine and network code that Return to Castle Wolfenstein is based upon). The "W" eagle logo is prominently seen on the cover art for the American version.
Music pieces such as Beethoven's Moonlight Sonata and Für Elise are used in the single-player campaign.
Some sound effects in the game are excerpts heard in the 1968 movie 2001: A Space Odyssey. A radar station and the X-Labs levels of the game feature these sounds prominently to give the effect of working scientific equipment at a research facility.
The game is powered by a modified version of the id Tech 3 engine, with changes made to support large outdoor areas. The Return to Castle Wolfenstein engine was subsequently used as the foundation for Wolfenstein: Enemy Territory (Splash Damage/Activision), Trinity: The Shatter Effect (Gray Matter Interactive/Activision) (shown at E3 in 2003, but assumed cancelled) and Call of Duty (Infinity Ward/Activision).
There are many different releases of Return to Castle Wolfenstein. The original release, version 1.0, came in a game box featuring a book-like flap. A Collector's Edition, packaged in a metal case, was released at the same time. The contents of the Collector's Edition changed depending on when it was purchased and could include a poster and fabric patch, a poster and a bonus CD, or just the bonus CD. The Game of the Year Edition (2002 - v.1.33) came with the original Wolfenstein 3D, game demos, and seven new multiplayer maps (Trenchtoast, Tram Siege, Ice, Chateau, Keep, The Damned, and Rocket.) The Platinum Edition (2004 - v.1.41) included Wolfenstein: Enemy Territory, a stand-alone multiplayer expansion, and Wolfenstein 3D. Return to Castle Wolfenstein: Tides of War also came with the original Wolfenstein 3D as an unlockable after beating the campaign, and included some enhancements like surround sound.
Ports
The game was released for Linux and Macintosh platforms in 2002, with the Linux port done in-house by Timothee Besset and the Mac port done by Aspyr Media. In 2003, the game was ported to the PlayStation 2 and Xbox video game consoles and subtitled as Operation Resurrection and Tides of War, respectively.
Console version differences
Both console versions include an additional single-player prequel mission, set in the fictional town of Ras El-Hadid in Egypt. The latter half of the level features an extensive underground burial site with many undead enemies, as does the original first mission. This prequel level is likely closer to the developers' true intentions for the story, as indicated by the distinctly Egyptian design of the burial site, including the presence of sand, traps, mummies and hieroglyphs on the walls in some areas (in the original storyline, this site is found in the middle of a German village during the second mission). By contrast, the single-player storyline in the Windows version starts at Castle Wolfenstein.
The PS2 version has a bonus feature that allows players to purchase items at the end of each level by finding secrets. In the Xbox version, a Secret Bonus is awarded after every level in which all the secret areas have been found. It also has several new equippable items and weapons, as well as new enemies. The two-player co-op mode is exclusive to the Xbox and allows the second player to play as Agent One, altering the story so that Agent One was never killed and plays through the missions to the end. The Xbox version also had downloadable content, system link play, and online multiplayer via Xbox Live before Live play was disabled for original Xbox games. A Platinum Hits edition of the game was also released for the Xbox. The PlayStation 2 version does not support online multiplayer.
Completing the Xbox version unlocks a further bonus: the original Wolfenstein 3D.
Source code release
The source code for Return to Castle Wolfenstein and Enemy Territory was released under the GNU General Public License v3.0 or later on August 12, 2010. The ioquake3 developers at icculus.org announced the start of respective engine projects soon after.
Community mods
On 15 October 2020, a community overhaul mod, RealRTCW, was released on Steam as a free modification for the original game. It features a new renderer, an expanded arsenal, rebalanced gunplay, and new high-quality models, textures, and sounds.
Film
A Return to Castle Wolfenstein film was announced in 2002 with Rob Cohen attached to direct. Little information has been available since, however, with the exception of a July 20, 2005 IGN interview. The interview discussed the Return to Castle Wolfenstein film with id employees. In the interview, Todd Hollenshead indicated that the movie was in the works, though still in the early stages.
On August 3, 2007, Variety confirmed a Return to Castle Wolfenstein film, to be written and directed by Roger Avary and produced by Samuel Hadida. On November 2, 2012, it was reported that Avary had signed on to write and direct the film, which was described as a mix of Inglourious Basterds and Captain America.
Reception
Sales
Return to Castle Wolfenstein debuted at #3 on NPD Intelect's computer game sales chart for the November 18–24 period, at an average retail price of $57. It fell to position 7 in its second week. By the end of 2001, the game's domestic sales totaled 253,852 units, for revenues of $13.1 million.
In the United States, Return to Castle Wolfenstein sold 350,000 copies and earned $17 million by August 2006. It was the country's 48th-best-selling computer game between January 2000 and August 2006. Combined sales of all Wolfenstein computer games released between January 2000 and August 2006 had reached 660,000 units in the United States by the latter date. Return to Castle Wolfenstein received a "Silver" sales award from the Entertainment and Leisure Software Publishers Association (ELSPA), indicating sales of at least 100,000 copies in the United Kingdom. By January 2002, Activision reported that shipments of Return to Castle Wolfenstein to retailers had surpassed one million units. The game sold 2 million copies by January 2004.
Reviews
Return to Castle Wolfenstein received favorable reviews from critics. At Metacritic, it scores 88/100 (based on 32 reviews), and on GameRankings it scores 87% (based on 50 reviews). Eurogamer's Tom Bramwell called Return to Castle Wolfenstein "a worthy addition to the stable of id Software affiliated shoot 'em ups. The single-player game is average to good and takes quite a while to finish, but the game really earns its salt by shipping with a first class multiplayer element."
GameSpot named Return to Castle Wolfenstein: Tides of War the best Xbox game of May 2003.
Controversy
In March 2008, the United States Department of State published a report to Congress, "Contemporary Global Anti-Semitism", that described Return to Castle Wolfenstein as an "anti-Semitic video game" with no qualifications. The report picked up on an article originally written in 2002 by Jonathan Kay of The New York Times regarding the recent introduction of "Nazi protagonists" in the online gaming market (referring specifically to Day of Defeat and Wolfenstein). The article was published just 19 days before the release of Medal of Honor: Allied Assault, which shares many similar features, including Nazis as playable characters in multiplayer.
Todd Hollenshead, chief executive of id Software at the time of the original article stated:
The trend you're seeing with new games is, to some extent, a reflection of what's going in the culture ... For instance, you've now got games with terrorists and counterterrorists. And World War II games such as Return to Castle Wolfenstein and Day of Defeat reflect what you see in popular movies ... I don't doubt there are going to be people that go out and distort what the multiplayer gaming experience is and say, "Oh, I can't believe you guys did this." There are a lot of critics of the game industry, and they look for things to criticize.
Awards
The game was nominated by Sherman Archibald, John Carmack, and Ryan Feltrin for the "Excellence in Programming" category at the 2002 Game Developers Choice Awards.
PC Gamer US awarded Return to Castle Wolfenstein its 2001 "Best Multiplayer Game" prize. The editors wrote: "No other FPS rewards this level of teamplay, sports this kind of graphics, or is this blissfully free of cheaters."
Sequels
A multiplayer-only spinoff of the series, Wolfenstein: Enemy Territory, was originally planned as a full-fledged expansion pack for Return to Castle Wolfenstein developed by Splash Damage. The single-player component of the game was never completed and thus was removed entirely. The developers at that point decided the multiplayer part would be released as a free, downloadable standalone game. Enemy Territory is a team-based networked multiplayer game which involves completing objectives through teamwork using various character classes.
This gameplay was later reutilized in a full-fledged commercial game, Enemy Territory: Quake Wars, set in id Software's Quake universe. A semi-sequel called Wolfenstein, developed by Raven Software and id Software and published by Activision, was released on August 18, 2009. A successor to Wolfenstein titled Wolfenstein: The New Order and a standalone prequel expansion titled Wolfenstein: The Old Blood were released in 2014 and 2015, respectively. The Old Blood references Return to Castle Wolfenstein through similarly named characters and a mention of the X-Labs.
The New Order storyline was followed up in Wolfenstein II: The New Colossus which was released in late 2017.
References
External links
Official id Software website
2001 video games
Activision games
AROS software
Aspyr games
Commercial video games with freely available source code
Cooperative video games
Experimental medical treatments in fiction
First-person shooters
Gray Matter Interactive games
Id Software games
Id Tech games
Interactive Achievement Award winners
Linux games
MacOS games
MorphOS games
Multiplayer online games
PlayStation 2 games
Splash Damage games
Video games about Nazi Germany
Video games scored by Bill Brown
Video games set in Egypt
Video games set in Germany
Video games set in Norway
Video game reboots
Video games developed in the United States
Windows games
Wolfenstein
World War II first-person shooters
Xbox games
Video games about zombies
Works set in castles |
214804 | https://en.wikipedia.org/wiki/Porting | Porting | In software engineering, porting is the process of adapting software for the purpose of achieving some form of execution in a computing environment that is different from the one that a given program (meant for such execution) was originally designed for (e.g., different CPU, operating system, or third party library). The term is also used when software or hardware is changed to make it usable in a different environment.
Software is portable when the cost of porting it to a new platform is significantly less than the cost of writing it from scratch. The lower the cost of porting software relative to its implementation cost, the more portable it is said to be.
Etymology
The term "port" is derived from the Latin portāre, meaning "to carry". When code is not compatible with a particular operating system or architecture, the code must be "carried" to the new system.
The term is not generally applied to the process of adapting software to run with less memory on the same CPU and operating system, nor is it applied to the rewriting of source code in a different language (i.e. language conversion or translation).
Software developers often claim that the software they write is portable, meaning that little effort is needed to adapt it to a new environment. The amount of effort actually needed depends on several factors, including the extent to which the original environment (the source platform) differs from the new environment (the target platform), the experience of the original authors in knowing which programming language constructs and third party library calls are unlikely to be portable, and the amount of effort invested by the original authors in only using portable constructs (platform specific constructs often provide a cheaper solution).
History
The number of significantly different CPUs and operating systems used on the desktop today is much smaller than in the past. The dominance of the x86 architecture means that most desktop software is never ported to a different CPU. In that same market, the choice of operating systems has effectively been reduced to three: Microsoft Windows, macOS, and Linux. However, in the embedded systems and mobile markets, portability remains a significant issue, with the ARM being a widely used alternative.
International standards, such as those promulgated by the ISO, greatly facilitate porting by specifying details of the computing environment in a way that helps reduce differences between different standards-conforming platforms. Writing software that stays within the bounds specified by these standards represents a practical although nontrivial effort. Porting such a program between two standards-compliant platforms (such as POSIX.1) can be just a matter of loading the source code and recompiling it on the new platform. However, practitioners often find that various minor corrections are required, due to subtle platform differences. Most standards suffer from "gray areas" where differences in interpretation of standards lead to small variations from platform to platform.
There also exists an ever-increasing number of tools to facilitate porting, such as the GNU Compiler Collection, which provides consistent programming languages on different platforms, and Autotools, which automates the detection of minor variations in the environment and adapts the software accordingly before compilation.
The compilers for some high-level programming languages (e.g. Eiffel, Esterel) gain portability by outputting source code in another high level intermediate language (such as C) for which compilers for many platforms are generally available.
Two activities related to (but distinct from) porting are emulating and cross-compiling.
Porting compilers
Instead of translating directly into machine code, modern compilers translate to a machine independent intermediate code in order to enhance portability of the compiler and minimize design efforts.
The intermediate language defines a virtual machine that can execute all programs written in the intermediate language (a machine is defined by its language and vice versa). The intermediate code instructions are translated into equivalent machine code sequences by a code generator to create executable code. It is also possible to skip the generation of machine code by implementing an interpreter or JIT compiler for the virtual machine.
The use of intermediate code enhances portability of the compiler, because only the machine dependent code (the interpreter or the code generator) of the compiler itself needs to be ported to the target machine. The remainder of the compiler can be imported as intermediate code and then further processed by the ported code generator or interpreter, thus producing the compiler software or directly executing the intermediate code on the interpreter.
The machine independent part can be developed and tested on another machine (the host machine). This greatly reduces design efforts, because the machine independent part needs to be developed only once to create portable intermediate code.
An interpreter is less complex and therefore easier to port than a code generator, because it cannot perform code optimizations owing to its limited view of the program (it sees only one instruction at a time, while optimization requires examining a sequence). Some interpreters are extremely easy to port, because they make only minimal assumptions about the instruction set of the underlying hardware; as a result, the virtual machine can be even simpler than the target CPU.
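To make the portability argument concrete, here is a minimal sketch of such a virtual machine in Python; the opcode names are invented for the example, and the point is that nothing below depends on the host CPU's instruction set:

```python
def run(program):
    """Interpret a tiny stack-machine program, one instruction at a
    time -- exactly the narrow, sequence-free view described above."""
    stack, pc = [], 0
    while pc < len(program):
        op = program[pc]
        pc += 1
        if op == "PUSH":            # push the literal that follows
            stack.append(program[pc])
            pc += 1
        elif op == "ADD":           # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "PRINT":         # pop and display the top of stack
            print(stack.pop())
        elif op == "HALT":
            break

run(["PUSH", 2, "PUSH", 40, "ADD", "PRINT", "HALT"])  # prints 42
```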
Writing the compiler sources entirely in the programming language the compiler is supposed to translate makes the following approach, better known as compiler bootstrapping, feasible on the target machine:
Port the interpreter. This needs to be coded in assembly code, using an already present assembler on the target.
Adapt the source of the code generator to the new machine.
Execute the adapted source using the interpreter with the code generator source as input. This will generate the machine code for the code generator.
The difficult part of coding the optimization routines is done using the high-level language instead of the assembly language of the target.
According to the designers of the BCPL language, interpreted code (in the BCPL case) is more compact than machine code; typically by a factor of two to one. Interpreted code however runs about ten times slower than compiled code on the same machine.
The designers of the Java programming language try to take advantage of the compactness of interpreted code, because a Java program may need to be transmitted over the Internet before execution can start on the target's Java Virtual Machine.
Porting of video games
Porting is also the term used when a video game designed to run on one platform, be it an arcade, video game console, or personal computer, is converted to run on a different platform, perhaps with some minor differences. From the beginning of video games through to the 1990s, "ports", at the time often known as "conversions", were often not true ports, but rather reworked versions of the games due to limitations of different systems. For example, the 1982 game The Hobbit, a text adventure augmented with graphic images, has significantly different graphic styles across the range of personal computers that its ports were developed for. However, many 21st century video games are developed using software (often in C++) that can output code for one or more consoles as well as for a PC without the need for actual porting (instead relying on the common porting of individual component libraries).
Porting arcade games to home systems with inferior hardware was difficult. The ported version of Pac-Man for the Atari 2600 omitted many of the visual features of the original game to compensate for the lack of ROM space, and the hardware struggled when multiple ghosts appeared on the screen, creating a flickering effect. The poor performance of the Atari 2600 Pac-Man is cited by some scholars as a cause of the video game crash of 1983.
Many early ports suffered significant gameplay quality issues because computers greatly differed. Richard Garriott stated in 1984 at Origins Game Fair that Origin Systems developed computer games for the Apple II series first then ported them to Commodore 64 and Atari 8-bit, because the latter machines' sprites and other sophisticated features made porting from them to Apple "far more difficult, perhaps even impossible". Reviews complained of ports that suffered from "Apple conversionitis", retaining the Apple's "lousy sound and black-white-green-purple graphics"; after Garriott's statement, when Dan Bunten asked "Atari and Commodore people in the audience, are you happy with the Apple rewrites?" the audience shouted "No!" Garriott responded, "[otherwise] the Apple version will never get done. From a publisher's point of view that's not money wise".
Others worked differently. Ozark Softscape, for example, wrote M.U.L.E. for the Atari first because it preferred to develop for the most advanced computers, removing or altering features as necessary during porting. Such a policy was not always feasible; Bunten stated that "M.U.L.E. can't be done for an Apple", and that the non-Atari versions of The Seven Cities of Gold were inferior. Compute!'s Gazette wrote in 1986 that when porting from Atari to Commodore the original was usually superior. The latter's games' quality improved when developers began creating new software for it in late 1983, the magazine stated.
In porting arcade games, the terms "arcade perfect" or "arcade accurate" were often used to describe how closely the gameplay, graphics, and other assets on the ported version matched the arcade version. Many arcade ports in the early 1980s were far from arcade perfect as home consoles and computers lacked the sophisticated hardware in arcade games, but games could still approximate the gameplay. Notably, Space Invaders on the Atari VCS became the console's killer app despite its differences, while the later Pac-Man port was notorious for its deviations from the arcade version. Arcade-accurate games became more prevalent starting in the 1990s, starting with the Neo Geo system from SNK, which offered both a home console and an arcade system with more advanced versions of the home console's main hardware. This allowed near-arcade perfect games to be played at home. Further consoles such as the PlayStation and Dreamcast, also based on arcade hardware, made arcade-perfect games a reality.
A "console port" is a game that was originally made for a console before an identical version is created which can be played on a personal computer. This term has been widely used by the gaming community. The process of porting a game from a console to a PC is often regarded negatively due to the higher levels of performance that computers generally have being underutilized, partially due to console hardware being fixed throughout their run (with games being developed for console specs), while PCs become more powerful as hardware evolves, but also due to ported games sometimes being poorly optimized for PCs, or lazily ported. While broadly similar, architectural differences may exist such as the use of unified memory on a console.
See also
Console emulator
Cross-platform
Language binding
List of system quality attributes
Poshlib
Program transformation
Software portability
Source port
Write once, compile anywhere
Meaning of unported
Notes
References
Interoperability
Source code
de:Portierung |
1765838 | https://en.wikipedia.org/wiki/Comprehensive%20School%20Mathematics%20Program | Comprehensive School Mathematics Program | Comprehensive School Mathematics Program (CSMP) stands for both the name of a curriculum and the name of the project that was responsible for developing curriculum materials in the United States.
Two major curricula were developed as part of the overall CSMP project: the Comprehensive School Mathematics Program (CSMP), a K–6 mathematics program for regular classroom instruction, and the Elements of Mathematics (EM) program, a grades 7–12 mathematics program for gifted students. EM treats traditional topics rigorously and in-depth, and was the only curriculum that strictly adhered to Goals for School Mathematics: The Report of the Cambridge Conference on School Mathematics (1963). As a result, it includes much of the content generally required for an undergraduate mathematics major. These two curricula are unrelated to one another, but certain members of the CSMP staff contributed to the development of both projects. Additionally, some staff of the Elements of Mathematics program were also involved with the Secondary School Mathematics Curriculum Improvement Study. What follows is a description of the K–6 program that was designed for a general, heterogeneous audience.
The CSMP project was established in 1966, under the direction of Burt Kaufman, who remained director until 1979, succeeded by Clare Heidema. It was originally affiliated with Southern Illinois University in Carbondale, Illinois. After a year of planning, CSMP was incorporated into the Central Midwest Regional Educational Laboratory (later CEMREL, Inc.), one of the national educational laboratories funded at that time by the U.S. Office of Education. In 1984, the project moved to the Mid-continental Research for Learning (McREL) Institute's Comprehensive School Reform program, which supported the program until 2003. Heidema remained director to its conclusion. By 1984, the curriculum was in use in 150 school districts in 42 states, reaching about 55,000 students.
Overview
The CSMP project employs four non-verbal languages for the purpose of posing problems and representing mathematical concepts: the Papy Minicomputer (mental computation), Arrows (relations), Strings (classification), and Calculators (patterns). It was designed to teach mathematics as a problem-solving activity rather than simply teaching arithmetic skills, and uses the Socratic method, guiding students to figure out concepts on their own rather than directly lecturing or demonstrating the material. The curriculum uses a spiral structure and philosophy, providing students chances to learn materials at different times and rates. By giving students repeated exposure to a variety of content – even if all students may not initially fully understand – students may experience, assimilate, apply, and react to a variety of mathematical experiences, learning to master different concepts over time, at their own paces, rather than being presented with a single topic to study until mastered.
The curriculum introduced many basic concepts such as fractions and integers earlier than normal. Later in the project's development, new content in probability and geometry was introduced. The curriculum contained a range of supporting material including story books with mathematical problems, with lessons often posed in a story, designed to feature both real world and fantasy situations. One character in these books was Eli the Elephant, a pachyderm with a bag of magic peanuts, some representing positive integers and some negative. Another lesson was titled "Nora's Neighborhood," which taught taxicab geometry.
Minicomputer
One device used throughout the program was the Papy Minicomputer, named after Frédérique Papy-Lenger – the most influential figure to the project – and her husband Georges Papy. A Minicomputer is a 2 by 2 grid of squares, with the quarters representing the numbers 1, 2, 4, and 8. Checkers can be placed on the grid to represent different numbers in a similar fashion to the way the binary numeral system is used to represent numbers in a computer.
The Minicomputer is laid out as follows: a white square in the lower right corner with a value of 1, a red square in the lower-left with a value of 2, a purple square in the upper right with a value of 4, and a brown square in the upper left with a value of 8. Each Minicomputer is designed to represent a single decimal digit, and multiple Minicomputers can be used together to represent multiple-digit numbers. Each successive board's values are increased by a power of ten. For example, a second Minicomputer's squares – placed to the left of the first – will represent 10, 20, 40, and 80; a third, 100, 200, 400, and 800, and so on. Minicomputers to the right of a vertical bar (placed to the right of the first board, representing a decimal point) may be used to represent decimal numbers.
Students are instructed to represent values on the Minicomputers by adding checkers to the proper squares. To do this only requires a memorization of representations for the digits zero through nine, although non-standard representations are possible since squares can hold more than one checker. Each checker is worth the value of the square it is in, and the sum of the checkers on the board(s) determine the overall value represented. Most checkers used by students are a solid color – any color is fine. The only exception is checkers marked with a caret (^), which are negative.
An example of representing a number: 9067 requires four boards. The leftmost board has two checkers in the 8 and 1 squares (8000 + 1000). The second board has none, as the value has zero hundreds. The third board has checkers in the 4 and 2 squares (40 + 20), and the rightmost board has checkers in the 4, 2, and 1 squares (4 + 2 + 1). Together, these 7 values (8000 + 1000 + 40 + 20 + 4 + 2 + 1) total up to 9067. This would be considered a standard way to represent the number as it involves the fewest checkers possible without involving negatives. It would require fewer checkers to replace the last board with a positive checker in the 8 and a negative checker in the 1, but this is not taught as the standard.
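The standard representation is easy to automate; a short Python sketch (with invented helper names) reproduces the worked example above by placing checkers greedily on each digit's board:

```python
SQUARES = [8, 4, 2, 1]  # values of the four squares on one board

def digit_to_checkers(d):
    """Place the fewest positive checkers for one decimal digit."""
    placement = []
    for value in SQUARES:
        if d >= value:
            placement.append(value)
            d -= value
    return placement

def number_to_boards(n):
    """One board per decimal digit, most significant board first."""
    return [digit_to_checkers(int(ch)) for ch in str(n)]

print(number_to_boards(9067))
# [[8, 1], [], [4, 2], [4, 2, 1]] -- matching the example above
```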
Arithmetic can be performed on the Minicomputer by combining two numbers' representations into a single board and performing simplification techniques. One such technique is to replace checkers from the 8 and 2 squares of one board with a checker on the 1 square of the adjacent board to the left. Another technique is to replace a pair of checkers in the same square with one checker in the next higher square, such as two 4s with an 8.
Study results
The program received extensive evaluation, with over 50 studies. These studies showed that CSMP and non-CSMP students achieved broadly similar results in computation, concepts, and applications; however, CSMP students showed a marked improvement when assessed with The Mathematics Applied to Novel Situations (MANS) tests, which were introduced to measure students' ability to solve problems in novel situations.
Copyright
Copyright is currently held by McREL International.
Current curriculum use
Burt Kaufman, a mathematics curriculum specialist, headed the team at Southern Illinois University writing CSMP. In July 1993, he started the Institute for Mathematics and Computer Science (IMACS) with his son and two colleagues. IMACS uses elements of the EM and CSMP programs in their "Mathematics Enrichment" program. For instance, Minicomputers and "Eli the Elephant" are present in the IMACS material. IMACS is a private education business focusing on the instruction of students from first grade through high school. Including online courses, IMACS currently serves over 4,000 students across the U.S. and in over ten countries.
CSMP is also used by some homeschooling families either as a core math program or for enrichment exercises.
References
External links
CSMP Preservation Project and archived materials at Buffalo State
CSMP Final Evaluation Report
Report to the Program Effectiveness Panel
Goals for School Mathematics: The Report of the Cambridge Conference on School Mathematics.
Institute for Mathematics and Computer Science
McREL
Mathematics education in the United States |
618552 | https://en.wikipedia.org/wiki/Heathkit | Heathkit | Heathkit is the brand name of kits and other electronic products produced and marketed by the Heath Company. The products over the decades have included electronic test equipment, high fidelity home audio equipment, television receivers, amateur radio equipment, robots, electronic ignition conversion modules for early model cars with point style ignitions, and the influential Heath H-8, H-89, and H-11 hobbyist computers, which were sold in kit form for assembly by the purchaser.
Heathkit manufactured electronic kits from 1947 until 1992. After closing that business, the Heath Company continued with its educational products and motion-sensor lighting controls. The lighting control business was sold around 2000. The company announced in 2011 that it was reentering the kit business after a 20-year hiatus, but filed for bankruptcy in 2012 and, under new ownership, began restructuring in 2013. The company maintains a live website with newly designed products, services, vintage kits, and replacement parts for sale.
Founding
The Heath Company was founded as an aircraft company in 1911 by Edward Bayard Heath with the purchase of Bates Aeroplane Co, soon renamed to E.B. Heath Aerial Vehicle Co. Starting in 1926 it sold a light aircraft, the Heath Parasol, in kit form. Heath died during a 1931 test flight. The company reorganized and moved from Chicago to Niles, Michigan.
In 1935, Howard Anthony purchased the then-bankrupt Heath Company, and focused on selling accessories for small aircraft. After World War II, Anthony decided that entering the electronics industry was a good idea, and bought a large stock of surplus wartime electronic parts with the intention of building kits with them. In 1947, Heath introduced its first electronic kit, the O1 oscilloscope that sold for US$50 () – the price was unbeatable at the time, and the oscilloscope went on to be a huge seller.
Heathkit product concept
After the success of the oscilloscope kit, Heath went on to produce dozens of Heathkit products. Heathkits were influential in shaping two generations of electronic hobbyists. The Heathkit sales premise was that by investing the time to assemble a Heathkit, the purchasers could build something comparable to a factory-built product at a significantly lower cash cost and, if it malfunctioned, could repair it themselves. During those decades, the premise was basically valid.
Commercial factory-built electronic products were constructed from generic, discrete components such as vacuum tubes, tube sockets, capacitors, inductors, and resistors, mostly hand-wired and assembled using point-to-point construction technology. The home kit-builder could perform these labor-intensive assembly tasks himself, and if careful, attain at least the same standard of quality. In the case of Heathkit's most expensive product at the time, the Thomas electronic organ, building the kit version represented substantial savings.
One category in which Heathkit enjoyed great popularity was amateur radio. Ham radio operators had frequently been forced to build their equipment from scratch before the advent of kits, with the difficulty of procuring all the parts separately and relying on often-experimental designs. Kits brought the convenience of all parts being supplied together, with the assurance of a predictable finished product; many Heathkit model numbers became well known in the ham radio community. The HW-101 HF transceiver became so ubiquitous that even today the "Hot Water One-Oh-One" can be found in use, or purchased as used equipment at hamfests, decades after it went out of production.
In the case of electronic test equipment, Heathkits often filled a low-end entry-level niche, giving hobbyists access at an affordable price.
The instruction books were regarded as among the best in the kit industry, being models of clarity, beginning with basic lessons on soldering technique, and proceeding with explicit step-by-step directions, illustrated with numerous line drawings; the drawings could be folded out to be visible next to the relevant text (which might be bound several pages away) and were aligned with the assembler's viewpoint. Also in view was a checkbox to mark with a pencil as each task was accomplished. The instructions usually included complete schematic diagrams, block diagrams depicting different subsystems and their interconnections, and a "Theory of Operation" section that explained the basic function of each section of the electronics.
Heathkits as education
No knowledge of electronics was needed to assemble a Heathkit. The assembly process itself did not teach much about electronics, but provided a great deal of what could have been called basic "electronics literacy", such as the ability to identify tube pin numbers or to read a resistor color code. Many hobbyists began by assembling Heathkits, became familiar with the appearance of components like capacitors, transformers, resistors, and tubes, and were motivated to understand just what these components actually did. For those builders who had a deeper knowledge of electronics (or for those who wanted to be able to troubleshoot/repair the product in the future), the assembly manuals usually included a detailed "Theory of Operation" chapter, which explained the functioning of the kit's circuitry, section by section.
Heath developed a business relationship with electronics correspondence schools (e.g., NRI and Bell & Howell), and supplied electronic kits to be assembled as part of their courses, with the schools basing their texts and lessons around the kits. In the 1960s, Heathkit marketed a line of its electronic instruments which had been modified for use in teaching physics at the high school (Physical Science Study Committee, PSSC) and college levels (Berkeley Physics Course).
Heathkits could teach deeper lessons. "The kits taught Steve Jobs that products were manifestations of human ingenuity, not magical objects dropped from the sky", writes a business author, who goes on to quote Jobs as saying "It gave a tremendous level of self-confidence, that through exploration and learning one could understand seemingly very complex things in one's environment."
Diversification
After the death of Howard Anthony in a 1954 airplane crash, his widow sold the company to Daystrom Company, a management holding company that also owned several other electronics companies. Daystrom was absorbed by oilfield service company Schlumberger Limited in 1962, and the Daystrom/Schlumberger days were to be among Heathkit's most successful.
Those years saw some "firsts" in the general consumer market. The early 1960s saw the introduction of the AA-100 integrated amplifier. The early 1970s saw Heath introduce the AJ-1510, an FM tuner using digital synthesis, the GC-1005 digital clock, and the GR-2000 color television set. In 1974, Heathkit started "Heathkit Educational Systems", which expanded their manuals into general electronics and computer training materials. Heathkit also expanded their expertise into digital and, eventually, computerized equipment, producing among other things digital clocks and weather stations with the new technology.
Kits were compiled in small batches mostly by hand, using roller conveyor lines. These lines were put up and taken down as needed. Some kits were sold completely "assembled and tested" in the factory. These models were differentiated with a "W" suffix after the model number, indicating that they were factory-wired.
For much of Heathkit's history, there were competitors. In electronic kits: Allied Radio, an electronic parts supply house, had its KnightKits, Lafayette Radio offered some kits, Radio Shack made a few forays into this market with its Archerkit line, Dynaco made its audio products available in kit form (Dynakits), as did H. H. Scott, Inc., Fisher, and Eico; and later such companies as Southwest Technical Products and the David Hafler Company.
Personal computers
In 1978 Heathkit introduced the Heathkit H8 computer. The H8 was very successful, as were the H19 and H29 terminals, and the H89 "All in One" computer. The H8 and H89 ran their custom operating system, HDOS, and the popular CP/M operating system. The H89 contained two Zilog Z80 8-bit processors, one for the computer and one for the built-in H-19 terminal. The H11, a low-end DEC LSI-11 16-bit computer, was less successful, probably because it was substantially more expensive than the 8-bit computer line.
Seeing the potential in personal computers, Zenith Radio Company bought Heath Company from Schlumberger in 1979 for $63 million, renaming the computer division Zenith Data Systems (ZDS). Zenith purchased Heath for the flexible assembly line infrastructure at the nearby St. Joseph facility as well as the R&D assets.
Heath/Zenith was in the vanguard of companies selling personal computers to small businesses. The H-89 kit was re-branded as the Zenith Z-89/Z-90, an assembled all-in-one system with a monitor and a floppy disk drive. They had agreements with Peachtree Software to sell a customized "turn-key" version of their accounting, CPA, and real estate management software. Shortly after the release of the Z-90, they released a 5 MB hard disk unit and double-density external floppy disk drives, which were much more practical for business data storage than punched paper tapes.
While the H11 was popular with hard-core hobbyists, Heath engineers realized that DEC's low-end PDP-11 microprocessors would not be able to get Heath up the road to more powerful systems at an affordable price. Heath/Zenith then designed a dual Intel 8085/8088-based system dubbed the H100 (or Z-100, in preassembled form, sold by ZDS). The machine featured advanced (for the day) bit-mapped video that allowed up to 640 × 512 pixels of 8-color graphics. The H100 was interesting in that it could run either the CP/M operating system or Z-DOS, Heath/Zenith's OEM version of MS-DOS, which were the two leading business PC operating systems at the time. Although the machine had to be rebooted to change modes, the competing operating systems could read each other's disks.
In 1982 Heath introduced the Hero-1 robot kit to teach principles of industrial robotics. The robot included a Motorola 6808 processor, ultrasonic sensor, and optionally a manipulator arm; the complete robot could be purchased assembled for $2495 or a basic kit without the arm purchased for $999. This was the first in a popular series of HeathKit robot kits sold to educational and hobbyist users.
Kit era comes to a close
While Heath/Zenith's computer business was successful, the growing popularity of home computers as a hobby hurt the company because many customers began writing computer programs instead of assembling Heathkits. Also, while their assembly was still an interesting and educational hobby, kits were no longer less expensive than preassembled products; BYTE reported in 1984 that the kit version of the Z-150 IBM PC compatible cost $100 more than the preassembled computer from some dealers, but needed about 20 hours and soldering skills to assemble. The continuation of the integration trend (printed circuit boards, integrated circuits, etc.), and mass production of electronics (especially computer manufacturing overseas and plug-in modules) eroded the basic Heathkit business model. Assembling a kit might still be fun, but it could no longer save much money. The switch to surface mount components and LSI ICs finally made it impossible for the home assembler to construct an electronic device for significantly less money than assembly line factory products.
As sales of its kits dwindled during the decade, Heath relied on its training materials and a new venture in home automation and lighting products to stay afloat. When Zenith eventually sold ZDS to Groupe Bull in 1989, Heathkit was included in the deal.
In March 1992, Heath announced that it was discontinuing electronic kits after 45 years. The company had been the last sizable survivor of a dozen kit manufacturers from the 1960s. In 1995, Bull sold Heathkit to a private investor group called HIG, which then sold it to another investment group in 1998. Wanting to concentrate only on the educational products, this group sold the Heath/Zenith name and products to DESA International, a maker of specialty tools and heaters. In late 2008, Heathkit Educational Systems sold a large portion of its physical collection of legacy kit schematics and manuals, along with permission to make reproductions, to Don Peterson, though it still retained the copyrights and trademarks, and had pointers to people who could help with the older equipment.
DESA filed for bankruptcy in December 2008. The Heathkit company existed for a few years as Heathkit Educational Systems, located in Saint Joseph, Michigan, concentrating on the educational market. The Heathkit company filed for bankruptcy in 2012.
Revival
In May 2013, Heathkit's corporate restructuring was announced on their website. An extensive FAQ accessible from their homepage stated clearly that Heathkit was back, and that they would resume electronic kit production and sales.
On October 8, 2015, Heathkit circulated an email to its "insiders", who had indicated an interest in the company's progress by completing its online marketing survey. It had now secured the rights to all Heathkit designs and trademarks; secured several new patents; established new offices, warehouse space, and a factory in Santa Cruz, California; and had introduced the renewed company's first new electronic kit in decades. Since then, Heathkit has announced and sold further kits in its new lineup of products. In addition, limited repair service on vintage products, reprints of manuals and schematics, remaining inventories of original parts, and upgrades of some vintage models are available.
Amateur radio
Heathkit made amateur radio kits almost from the beginning. In addition to their low prices compared with commercially manufactured equipment, Heathkits appealed to amateurs who had an interest in building their own equipment, but did not necessarily have the expertise or desire to design it and obtain all the parts themselves. They expanded and enhanced their line of amateur radio gear through nearly four decades. By the late 1960s, Heathkit had as large a selection of ham equipment as any company in the field.
Beginnings
They entered the market in 1954 with the AT-1, a simple, three tube, crystal controlled transmitter. It was capable of operating CW on the six most popular amateur short wave bands, and sold for $29.50.
The 39-page catalog contained only two pages of “ham gear”. An antenna coupler was the only other piece of equipment specifically intended for amateur radio use. The other two items were a general coverage short wave receiver, the AR-2, and an impedance meter. A VFO for the AT-1, the model VF-1, came out the following year.
Early DX-series transmitters
The company's first full featured transmitter, the DX-100, appeared in 1956. It filled two facing catalog pages, indicating Heathkit's seriousness in building kits for amateurs. The description noted that it was “amateur designed” – meant to convey expertise in designing specifically for the amateur radio operator – not the usual sense of the term amateur. And it stated that “amateurs in the field are enthusiastic about praising its performance under actual operating conditions”, indicating that it had been through what we would call beta testing today.
Heathkit had been including schematic diagrams of nearly every major kit in its catalog since 1954. In addition, the DX-100's listing contained two interior pictures and a block diagram. The 15-tube design could transmit either CW or AM (voice) with 100 to 140 watts output on all seven short wave amateur bands. It had a built-in power supply and VFO, and weighed 100 pounds. Priced at $189.50, it was expensive for the time, yet undercut other amateur transmitters having similar features. It became quite popular.
The following year they introduced two scaled-down transmitters: the CW-only DX-20 model, meant for beginners, and the DX-35, capable of both CW and AM phone. Both models covered six bands, only lacking the DX-100's coverage of the 160m (1.8 MHz) band. Although they resembled the DX-100 in appearance, they lacked many of its features. But at $35.95 and $56.95, they were much more affordable. The DX-35 was superseded a year later by the improved DX-40.
The DX-100 was upgraded in 1959 to the DX-100B (there apparently was no DX-100A) and sold for the same price. By 1960, the catalog advertised it as the “best watts per dollar value” and called the 5-year-old design “classic”.
Heathkit tribes
Apache, Mohawk, Chippewa, Seneca
In 1959, a year before the last DX-100 was sold, a new deluxe line of amateur equipment was introduced. The TX-1 Apache transmitter and the RX-1 Mohawk receiver were about the same size and weight as the DX-100 but had updated styling and a new cabinet (to which the DX-100 also changed). The transmitter had many more features than its predecessor, and the RX-1 was Heathkit's first full featured amateur band receiver.
Both units used a "slide rule dial" with a scale on a rotating drum that changed with the band selection, and provided more accurate tuning. Together, Heath's top-of-the-line pair sold for $504.45.
The SB-10 SSB adapter was introduced in 1959 to enable both the Apache and DX-100 to operate on the new mode. The next year, a matching kilowatt linear amplifier, the KL-1 Chippewa, was added to the line. Completing the line, the model VHF-1 Seneca covered the 6 meter (50 MHz) and 2 meter (144 MHz) bands.
Cheyenne, Comanche
The MT-1 Cheyenne transmitter and MR-1 Comanche receiver were considerably smaller and lighter than the Apache-Mohawk pair. Used with either an AC or DC external power supply, they could be operated in fixed or mobile service. Without transceive capability, this pair was probably challenging to operate while driving. A year later these units were reborn as the HX-20 transmitter and HR-20 receiver (and were no longer given names), capable of SSB operation.
Marauder, Warrior
The HX-10 Marauder was a redesigned replacement for the Apache, operating on SSB without an external adapter. It appeared in the 1962–63 catalog along with a new linear amplifier, the HA-10 Warrior.
VHF
The last new entry in the tribes generation was the HX-30 transmitter and HA-20 linear amplifier, both capable of SSB operation on the six meter (50 MHz) band.
Heathkit also brought out a pair of single band, low power, CW and AM phone VHF transceivers – the HW-10 and HW-20 for the 6 meter and 2 meter bands, respectively. Designed primarily for mobile use, they were much smaller than the tribes but bore a strong family resemblance down to their chrome knobs.
In 1961 they also brought out a distinctive set of low cost, compact, single band transceivers for 6 and 2 meters, the HW-29 and HW-30, also called the Sixer and Twoer. Completely self-contained, with a built-in speaker and a matching microphone, they could operate from AC or DC power. Somewhat limited in features, they were designed for AM phone operation only, and the transmit frequency was crystal controlled.
These portable transceivers looked distinctly different from other Heathkit gear. Tan and brown rather than the pervasive green, they were roughly rectangular shaped with rounded corners and had a handle on top. That particular shape and appearance would lead to them being dubbed the “Benton Harbor Lunchboxes” in the 1966 catalog.
New novice station
To succeed the DX-series that started in the 1950s, Heathkit designed an entirely new novice station consisting of the DX-60 transmitter, HR-10 receiver, and HG-10 VFO. These matching units were smaller and lighter than the tribes, covered five bands, and were much lower priced. They would go through incremental improvement and sell for more than a decade. In 1969 Heathkit added the HW-16 to its beginner-level line – a transceiver designed specifically for the Novice licensee. It covered the three HF Novice bands, CW only, and was crystal controlled but could be used with the HG-10 VFO.
SB-series and HW-series
By the early 1960s, a large majority of amateurs had adopted SSB as their primary mode of voice communication on the HF bands. This led to the development of equipment that was specifically designed for transceive operation on SSB, and also much smaller and lighter than the previous generation of ham gear.
As with other manufacturers, such as Drake and Collins, Heathkit began in 1964 by introducing a transceiver. It covered only one band and came in three models: The HW-12, -22, and -32, covering the 20m (14 MHz), 40m (7 MHz) and 75m (3.8 MHz) bands, respectively.
Influenced heavily by the S/Line from Collins, Heathkit designed the SB-series to become their top-line set of amateur radio equipment. Like the S/Line, these new products were designed to operate together in various combinations as a system. The first models appeared in the 1965 catalog, displacing the large, heavy units of the tribes generation (except for the Marauder and Warrior, and the 6 meter units which remained for one year).
When used together, the SB-300 receiver and SB-400 transmitter could transceive and had many other features of the S/Line, including crystal bandwidth filters and 1 kHz tuning dial resolution. They could also operate separately (if the optional crystal pack was installed in the transmitter), giving the operator more flexibility in working foreign ("DX") stations, which were often not authorized to transmit within the same frequency ranges as U.S. stations. The S/Line influence was also easy to see in the cabinet styling, tuning mechanism and knobs. But by designing them as kits and using less expensive construction, Heathkit could offer these units at much lower prices: the pair sold for $590 that same year. The matching SB-200 1,200-watt input/700–800-watt output linear amplifier completed the line for 1965.
The following year two more units were added: the SB-110 transceiver for the 6 meter band and the HA-14 “Kompact Kilowatt”, a smaller kilowatt linear amplifier based on the SB-200. The HA-14 also used grounded grid 572Bs, but with external AC and DC power supplies. At 7 lbs the amplifier itself was very small, matching the HW mono-banders in style, and usable in both mobile and desktop service. Like the SB-200 from which it was derived, its input was designed to match the 100-watt output of the Heathkit SB and HW series, as well as comparable equipment from Collins and others.
In a last-minute, four-page center insert to the 1966 catalog titled “New Product News”, Heathkit announced the SB-100 five-band SSB transceiver. Like the other transceivers of this time, the SB-100 (and later improved models SB-101 and SB-102) would become one of Heathkit's best selling amateur radio products. This included a scaled-back, lower-priced version of the SB-100 called the HW-100 (later updated to the HW-101) introduced in 1969. While the SB series were affectionately nicknamed the "Sugar Baker" series, the HW series were affectionately nicknamed the "Hot Water" series.
In the next three years, Heathkit brought out several more SB-series accessories, including a 2 kilowatt input/1.4 kilowatt output linear amplifier, the SB-220. The SB-400 transmitter was slightly updated as the SB-401. The final model in the original SB-series was the SB-303 receiver, a solid state replacement (in a smaller case) for the SB-301 and its earlier sibling, the SB-300. The 2000 film Frequency, starring Dennis Quaid and Jim Caviezel, featured a Heathkit SB-301 receiver used, with artistic license, as a transceiver, although the SB-301 contained no transmitter stages. An SB-302 receiver was never produced (no reason was ever given for skipping the model number), and some hams who worked at Heath hinted that there was talk of a solid state SB-103 transceiver, but it never made it past the proposal stage.
The SB-series would continue to be improved and sell well until 1974 and the arrival of solid state and digital design with the SB-104 transceiver, its accessories, and a new generation of amateur radio gear. Though somewhat redesigned physically, it had a similar appearance to the earlier SB-series generation. The SB-670 was a short-lived antenna tuner that matched the SB-104; only a few were produced, and those were considered prototypes. Technical issues with the first production run of the SB-104 forced Heath to quickly update it as the SB-104A. By that time, amateurs could buy overseas-made transceivers for the same money with more features (including the AM and FM modes that the Heathkit SB and HW series lacked) and no assembly required.
In 1983 Heathkit came out with their last ham radio kit, the HW-5400 transceiver. It was all solid state with 100 watts output on 160 through 10 meters including the newly available WARC bands. Also available was a matching power supply.
Solid state and digital
In the late 1970s Heathkit redesigned the line again, bringing out a series of transceivers and separates with more advanced digital features and new styling (abandoning the green motif, a distinguishing feature of Heathkits for more than two decades).
During the 1980s, with increasing competition primarily from Japanese equipment makers, wide use of automated manufacturing techniques, and increasingly complex designs, it became much more difficult to produce kits that were both easy to construct and feature-rich at a competitive price. Heathkit began to introduce models that were unavailable in kit form, such as the SS-9000, an all solid-state, synthesized transceiver covering 160 through 10 meters (including the WARC bands) with 100 watts output. A total of 375 were produced, according to the Yahoo Heath user group. This continued until they left the electronic kit business in 1992. The Heathkit company has since been revived, and is offering both newly designed and vintage products for the amateur radio market.
See also
HERO (robot)
History of personal computers
Homebuilt aircraft
Vintage amateur radio
References
Further reading
External links
Heathkit Virtual Museum
Heathkit, at RigReference.com
Heathkit Information, at Nostalgic Kits Central
Heathkit 1967 vintage catalog; (archive copy)
Amateur radio companies
Audio equipment manufacturers of the United States
Electronic kit manufacturers
Electronic calculator companies
Companies based in Michigan
Electronics companies established in 1926
Companies disestablished in 2012
American brands
1926 establishments in Illinois
2012 disestablishments in Michigan |
36189845 | https://en.wikipedia.org/wiki/SingingCoach | SingingCoach | SingingCoach is a downloadable, learn-to-sing software program from Electronic Learning Products, Inc.
Development and early years
Brother and sister Carlo and Alix Franzblau of Tampa, Florida, founded Electronic Learning Products, Inc. in January 2000. In 2003, their first product, a learn-to-sing program was released with the name of Carry-a-Tune. The name was changed to SingingCoach in 2004.
Versions and releases
The original software disk version of Carry-a-Tune was released in 2003. In 2004, the trademark “Carry-a-Tune” was replaced with “SingingCoach” in an effort to clarify the software’s specific function, as the software was gaining publicity and recognition from publications such as PC Magazine. SingingCoach was sold by the brick-and-mortar retail giants Best Buy, CompUSA, and Target. In 2012, Franzblau moved to e-commerce distribution, offering a downloadable version of SingingCoach that customers could obtain directly from the website instead of buying the software in a store.
Pitch recognition technology
SingingCoach utilizes pitch recognition technology that was developed by Carlo Franzblau and engineered by his team of programmers. The technology uses a white tracking line on screen to record and display the pitch of the user's voice in reference to the "in tune" bars of the song. This allows users to see whether they are singing in tune, and to adjust their voices accordingly. A score is given after completion of a song, based on the percentage of time the singer was in tune.
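The article does not document SingingCoach's internal algorithm; the following is a hypothetical Python sketch of the general idea, comparing detected pitch frame by frame against the song's target notes and scoring the fraction of frames within a tolerance. The function names and the 50-cent tolerance are invented for the example.

import math

def cents_off(detected_hz, target_hz):
    # Distance between two pitches in cents (100 cents = one semitone).
    return abs(1200.0 * math.log2(detected_hz / target_hz))

def in_tune_score(detected, targets, tolerance_cents=50.0):
    # Percentage of frames in which the singer is within tolerance of the target.
    frames = list(zip(detected, targets))
    hits = sum(1 for d, t in frames if d > 0 and cents_off(d, t) <= tolerance_cents)
    return 100.0 * hits / len(frames)

# Detected pitch per frame (Hz, 0 = silence) versus the melody's target notes.
detected = [220.0, 222.0, 0.0, 246.9, 262.0]
targets = [220.0, 220.0, 220.0, 246.9, 261.6]
print(f"{in_tune_score(detected, targets):.0f}% in tune")  # prints "80% in tune"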
Development of Tunein to Reading
In 2006, Franzblau realized that struggling readers who used SingingCoach also became more effective readers – an unintended benefit of using the software. Franzblau approached literacy researchers from the University of South Florida in Tampa, who performed research studies over a period of five years that documented the benefits of the singing approach on all reader levels. In 2011, white papers were published describing the mechanism that made the singing approach so effective. Researchers dubbed the instructional method "melodic learning": the combination of music with visual imagery in the learning process. Melodic learning is the simultaneous use of traditional multi-sensory/multi-modality learning in which the auditory and kinesthetic modes embody significant rhythmic and tonal elements.
Patents
In September 2007, Franzblau was issued patent No. 7,271,329, covering a "computer-aided learning system that employs a pitch tracking line". Other patents protecting related inventions are currently pending.
See also
University of South Florida
List of music software
References
Music software |
3849008 | https://en.wikipedia.org/wiki/MSC%20Software | MSC Software | MSC Software Corporation is an American simulation software technology company based in Newport Beach, California, that specializes in simulation software.
In February 2017, the company was acquired by Swedish technology company Hexagon AB for $834 million. It operates as an independent subsidiary.
History
MSC Software Corporation was formed in 1963 under the name MacNeal-Schwendler Corporation (MSC), by Dr. Richard H. MacNeal and Robert Schwendler. The company developed its first structural analysis software called SADSAM (Structural Analysis by Digital Simulation of Analog Methods) at that time and was deeply involved in the early efforts of the aerospace industry to improve early finite element analysis technology.
A key milestone was responding to a 1965 request for proposal (RFP) from the National Aeronautics and Space Administration (NASA) for a general-purpose structural analysis program that would eventually become Nastran (NASA Structural Analysis). The company subsequently pioneered many of the technologies that are now relied upon by industry to analyze and predict stress and strain, vibration and dynamics, acoustics, and thermal analysis. In 1971, the company released a commercial version of Nastran, named MSC/Nastran.
Two years after MSC began marketing MSC/Nastran, the company established its first overseas office in Munich, Germany. Three years after entering Europe, MSC moved eastward and opened an office in Tokyo, Japan. In 1983, MSC made its debut as a public company, and a year later the stock migrated to the American Stock Exchange. The company expanded in 1992 by adding a subsidiary in Moscow, Russia. In 1995, it further expanded its growth by adding an office in Brazil. In June 1999, MSC's stockholders voted to change the company's name to MSC.Software Corporation.
In July 2009, MSC Software was acquired by the private equity firm Symphony Technology Group. The "dot" was dropped from the company's name in 2011 and the company's name is currently MSC Software Corporation.
The company is headquartered in Newport Beach, California and employs approximately 1,400 people in 20 countries. 2016 revenue was US$230 million. In February 2017, the company was acquired by Swedish engineering services conglomerate Hexagon.
Acquisitions
September 1994 - PDA Engineering
December 1998 - Knowledge Revolution
May 1999 - MSC Software acquired MARC Analysis Research Corporation to add software that tests complex designs and materials.
June 1999 - Universal Analytics Incorporation (UAI)
November 1999 - Computerized Structural Analysis Research Corporation (CSAR)
May 2001 - Advanced Enterprise Solutions Inc. (AES).
March 2002 - MSC Software acquired Mechanical Dynamics Inc. to increase its client base to over 10,000 companies.
January 2008 - MSC Software acquired thermal analysis company Network Analysis Inc. to solidify its ability to serve the thermal management market.
September 2011 - MSC Software acquired Free Field Technologies S.A. (FFT) to extend its solutions to acoustic simulation.
September 2012 - MSC Software acquired e-Xstream engineering, a leader in composite material simulation.
February 2015 - MSC Software acquired Simufact Engineering, a leader in the simulation of metal forming and joining processes.
December 2016 - MSC Software acquired Software Cradle Co., Ltd., a leader in CFD simulation.
May 2017 - MSC acquired Vires Simulationstechnologie GmbH, a leader in autonomous vehicle simulation.
See also
MSC Adams
Actran
MSC Nastran
References
Notes
"Competition, Defense Industry Cuts Hurt Price of MacNeal-Schwendler Corp. Stock," Los Angeles Business Journal, June 4, 1990, p. 32.
Deady, Tim, "Revenge of the Nerd," Los Angeles Business Journal, April 29, 1996, p. 13.
"MacNeal-Schwendler Corp.," Machine Design, November 26, 1992, p. 103.
Teague, Paul E., "Pioneer in Engineering Analysis: Dick MacNeal Conceived One of the Most Widely Used Finite Element Analysis Codes in the World," Design News, July 10, 1995, p. 50.
External links
MSC Software
Business software
3D graphics software
Computer-aided design software
Computer-aided manufacturing software
Computer-aided engineering software
Product lifecycle management
Mesh generators |
233956 | https://en.wikipedia.org/wiki/Anti-pattern | Anti-pattern | An anti-pattern is a common response to a recurring problem that is usually ineffective and risks being highly counterproductive. The term, coined in 1995 by computer programmer Andrew Koenig, was inspired by the book Design Patterns, which highlights a number of design patterns in software development that its authors considered to be highly reliable and effective.
The term was popularized three years later by the book AntiPatterns, which extended its use beyond the field of software design to refer informally to any commonly reinvented but bad solution to a problem. Examples include analysis paralysis, cargo cult programming, death march, groupthink and vendor lock-in.
Definition
According to the authors of Design Patterns, there are two key elements to an anti-pattern that distinguish it from a bad habit, bad practice, or bad idea:
The anti-pattern is a commonly-used process, structure or pattern of action that, despite initially appearing to be an appropriate and effective response to a problem, has more bad consequences than good ones
Another solution to the problem the anti-pattern is attempting to address exists, which is documented, repeatable and proven to be effective where the anti-pattern is not
Documenting anti-patterns can be an effective way to analyze a problem space and to capture expert knowledge.
Examples
Social and business operations
Organizational
Analysis paralysis: A project that has stalled in the analysis phase of development, and is unable to achieve support for any of the potential plans of its implementation
Bicycle shed: Giving disproportionate weight to trivial issues
Bleeding edge: Operating with cutting-edge technologies that are still untested or unstable, leading to cost overruns, under-performance or delayed delivery of the product
Bystander apathy: The phenomenon in which people are less likely to or do not offer help to a person in need when others are present
Cash cow: A profitable legacy product that often leads to complacency about new products
Design by committee: The result of having many contributors to a design, but no unifying vision
Escalation of commitment: Failing to revoke a decision when it proves wrong
Groupthink: A collective state where group members begin, often unknowingly, to think alike and reject differing viewpoints
Management by objectives (SAFe): Management operating with the exclusive focus on quantitative management criteria, such as number of sales, when these are non-essential or cost too much to acquire
Micromanagement: Ineffective results stemming from excessive observation, supervision, or other hands-on involvement from management
Moral hazard: Insulating a decision-maker from the consequences of their decision
Mushroom management: Keeping employees "in the dark and fed manure" (also "left to stew and finally canned") about decisions being taken by management
Peter principle: Continually promoting otherwise well-performing employees up to a position they are unsuited for, with responsibilities they are incompetent at completing, where they remain indefinitely
Seagull management: Management in which managers only interact with employees when a problem arises, when they "fly in, make a lot of noise, dump on everyone, do not solve the problem, then fly out"
Stovepipe or Silos: An organizational structure of isolated or semi-isolated teams, in which too many communications take place up and down the hierarchy, rather than directly with other teams across the organization
Typecasting: Locking successful employees into overly-safe, narrowly defined, predictable roles based on their past successes rather than their potential
Vendor lock-in: Making a system excessively dependent on an externally supplied component
Project management
Cart before the horse: Focusing too many resources on a stage of a project out of its sequence
Death march: A project whose staff, while expecting it to fail, are compelled to continue, often with much overwork, by management in denial of the project's possible failure
Ninety-ninety rule: Tendency to underestimate the amount of time to complete a project when it is "nearly done"
Overengineering: Spending resources making a project more robust and complex than is needed
Scope creep: Uncontrolled changes or continuous growth in a project's scope, or adding new features to the project after the original requirements have been drafted and accepted (also known as requirement creep and feature creep)
Smoke and mirrors: Demonstrating unimplemented functions as if they were already implemented
Brooks's law: Adding more resources to a project to increase velocity, when the project is already slowed by coordination overhead
Gold plating: Continuing to work on a task or project well past the point at which extra effort is not adding value
Software engineering
Software design
Abstraction inversion: Not exposing implemented functionality required by callers of a function/method/constructor, so that the calling code awkwardly re-implements the same functionality in terms of those calls
Ambiguous viewpoint: Presenting a model (usually Object-oriented analysis and design (OOAD)) without specifying its viewpoint
Big ball of mud: A system with no recognizable structure
Database-as-IPC: Using a database as the message queue for routine interprocess communication where a much more lightweight mechanism would be suitable
Inner-platform effect: A system so customizable as to become a poor replica of the software development platform
Input kludge: Failing to specify and implement the handling of possibly invalid input
Interface bloat: Making an interface so powerful that it is extremely difficult to implement
Magic pushbutton: A form with no dynamic validation or input assistance, such as dropdowns
Race hazard (or race condition): Failing to see the consequences of events that can sometimes interfere with each other.
Stovepipe system: A barely maintainable assemblage of ill-related components
Object-oriented programming
Anemic domain model: The use of the domain model without any business logic. The domain model's objects cannot guarantee their correctness at any moment, because their validation and mutation logic is placed somewhere outside (most likely in multiple places). Martin Fowler considers this to be an anti-pattern, but some disagree that it is always an anti-pattern.
Call super: Requiring subclasses to call a superclass's overridden method
Circle–ellipse problem: Subtyping variable-types on the basis of value-subtypes
Circular dependency: Introducing unnecessary direct or indirect mutual dependencies between objects or software modules
Constant interface: Using interfaces to define constants
God object: Concentrating too many functions in a single part of the design (class); see the sketch after this list
Object cesspool: Reusing objects whose state does not conform to the (possibly implicit) contract for re-use
Object orgy: Failing to properly encapsulate objects permitting unrestricted access to their internals
Poltergeists: Objects whose sole purpose is to pass information to another object
Sequential coupling: A class that requires its methods to be called in a particular order
Singleton pattern: Reliance on a single, globally accessible instance introduces hidden coupling and global state, and is widely considered a poor solution
Yo-yo problem: A structure (e.g., of inheritance) that is hard to understand due to excessive fragmentation
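To make one of the items above concrete, here is a hedged Python illustration of the god object anti-pattern, together with the decomposition that avoids it; all class and method names are invented for the example.

# Anti-pattern: one class that knows and does everything.
class GameGodObject:
    def load_level(self): ...
    def render_frame(self): ...
    def play_sound(self): ...
    def save_high_score(self): ...
    def connect_to_server(self): ...

# Remedy: split responsibilities into focused collaborators.
class Renderer:
    def render_frame(self):
        print("drawing frame")

class AudioPlayer:
    def play_sound(self, name):
        print(f"playing {name}")

class Game:
    # Coordinates narrow components instead of doing their work itself.
    def __init__(self, renderer, audio):
        self.renderer = renderer
        self.audio = audio

    def tick(self):
        self.renderer.render_frame()
        self.audio.play_sound("step")

Game(Renderer(), AudioPlayer()).tick()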
Programming
Accidental complexity: Programming tasks that could be eliminated with better tools (as opposed to essential complexity inherent in the problem being solved)
Action at a distance: Unexpected interaction between widely separated parts of a system
Boat anchor: Retaining a part of a system that no longer has any use
Busy waiting: Consuming CPU while waiting for something to happen, usually by repeated checking instead of messaging
Caching failure: Forgetting to clear a cache that holds a negative result (error) after the error condition has been corrected
Cargo cult programming: Using patterns and methods without understanding why
Coding by exception: Adding new code to handle each special case as it is recognized
Error hiding: Catching an error message before it can be shown to the user and either showing nothing or showing a meaningless message. This anti-pattern is also named Diaper Pattern. Also can refer to erasing the Stack trace during exception handling, which can hamper debugging.
Hard code: Embedding assumptions about the environment of a system in its implementation
Lasagna code: Programs whose structure consists of too many layers of inheritance
Lava flow: Retaining undesirable (redundant or low-quality) code because removing it is too expensive or has unpredictable consequences
Loop-switch sequence: Encoding a set of sequential steps using a switch within a loop statement
Magic numbers: Including unexplained numbers in algorithms; illustrated in the sketch after this list
Magic strings: Implementing presumably unlikely input scenarios, such as comparisons with very specific strings, to mask functionality.
Repeating yourself: Writing code that contains repetitive patterns and copy-pasted fragments; avoided with the once and only once principle (abstraction principle)
Shooting the messenger: Throwing exceptions from the scope of a plugin or subscriber in response to legitimate input, especially when this causes the outer scope to fail.
Shotgun surgery: A single change to the codebase, such as adding a feature, that requires many small edits scattered across a multiplicity of implementors or implementations
Soft code: Storing business logic in configuration files rather than source code
Spaghetti code: Programs whose structure is barely comprehensible, especially because of misuse of code structures
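A short hedged illustration of the magic numbers item above, using an invented storage-pricing rule; both versions compute the same result, but only the second can be audited at a glance.

# Anti-pattern: what do 86400, 604800 and 0.2 mean?
def price_after_storage(days_stored, base_price):
    return base_price + base_price * 0.2 * (days_stored * 86400 / 604800)

# Remedy: name the constants so the intent is visible.
SECONDS_PER_DAY = 86400
SECONDS_PER_WEEK = 604800
WEEKLY_STORAGE_SURCHARGE = 0.2  # 20% of base price per week stored

def price_after_storage_clear(days_stored, base_price):
    weeks = days_stored * SECONDS_PER_DAY / SECONDS_PER_WEEK
    return base_price + base_price * WEEKLY_STORAGE_SURCHARGE * weeks

assert price_after_storage(7, 100.0) == price_after_storage_clear(7, 100.0)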
Methodological
Copy and paste programming: Copying (and modifying) existing code rather than creating generic solutions
Golden hammer: Assuming that a favorite solution is universally applicable (See: Silver bullet)
Invented here: The tendency towards dismissing any innovation or less than trivial solution originating from inside the organization, usually because of lack of confidence in the staff
Not invented here (NIH) syndrome: The tendency towards reinventing the wheel (failing to adopt an existing, adequate solution)
Premature optimization: Coding early-on for perceived efficiency, sacrificing good design, maintainability, and sometimes even real-world efficiency
Programming by permutation (or "programming by accident", or "programming by coincidence"): Trying to approach a solution by successively modifying the code to see if it works
Reinventing the square wheel: Failing to adopt an existing solution and instead adopting a custom solution that performs much worse than the existing one
Silver bullet: Assuming that a favorite technical solution can solve a larger process or problem
Tester-driven development: Software projects in which new requirements are specified in bug reports
Configuration management
Dependency hell: Problems with versions of required products
DLL hell: Inadequate management of dynamic-link libraries (DLLs), specifically on Microsoft Windows
Extension conflict: Problems with different extensions to classic Mac OS attempting to patch the same parts of the operating system
JAR hell: Overutilization of multiple JAR files, usually causing versioning and location problems because of misunderstanding of the Java class loading model
See also
Code smell – symptom of unsound programming
Design smell
Dark pattern
List of software development philosophies – approaches, styles, maxims and philosophies for software development
List of tools for static code analysis
Software rot
Software Peter principle
Capability Immaturity Model
ISO/IEC 29110: Software Life Cycle Profiles and Guidelines for Very Small Entities (VSEs)
The Innovator's Dilemma
References
Further reading
External links
Anti-pattern at WikiWikiWeb
Anti-patterns catalog
AntiPatterns.com Web site for the AntiPatterns book
Patterns of Toxic Behavior
C Pointer Antipattern
Email Anti-Patterns book
Patterns of Social Domination
Anti-patterns
Software architecture
Design
Industrial and organizational psychology
Organizational behavior
Anti-social behaviour
Workplace |
56693975 | https://en.wikipedia.org/wiki/BrickerBot | BrickerBot | BrickerBot was malware that attempted to permanently destroy ("brick") insecure Internet of Things devices. BrickerBot logged into poorly-secured devices and ran harmful commands to disable them. It was first discovered by Radware after it attacked their honeypot in April 2017. On December 10, 2017, BrickerBot was retired.
Discovery
BrickerBot.1 and BrickerBot.2
The BrickerBot family of malware was first discovered by Radware on April 20, 2017, when BrickerBot attacked their honeypot 1,895 times over four days. BrickerBot's method of attack was to brute-force the telnet password, then run commands using BusyBox to corrupt MMC and MTD storage, delete all files, and disconnect the device from the Internet. Less than an hour after the initial attack, bots began sending a slightly different set of malicious commands, indicating a new version, BrickerBot.2. BrickerBot.2 used the Tor network to hide its location, did not rely on the presence of busybox on the target, and was able to corrupt more types of storage devices.
BrickerBot.3 and BrickerBot.4
BrickerBot.3 was detected on May 20, 2017, one month after the initial discovery of BrickerBot.1. On the same day, one device was identified as a BrickerBot.4 bot. No other instances of BrickerBot.4 were seen since.
Shutdown and impact
According to Janit0r, the author of BrickerBot, it destroyed more than ten million devices before Janit0r announced the retirement of BrickerBot on December 10, 2017. In an interview with Bleeping Computer, Janit0r stated that BrickerBot was intended to prevent devices from being infected by Mirai. US-CERT released an alert regarding BrickerBot on April 12, 2017.
References
IoT malware |
9232391 | https://en.wikipedia.org/wiki/Risk-based%20testing | Risk-based testing | Risk-based testing (RBT) is a type of software testing that functions as an organizational principle used to prioritize the tests of features and functions in software, based on the risk of failure, the function of their importance and likelihood or impact of failure. In theory, there are an infinite number of possible tests. Risk-based testing uses risk (re-)assessments to steer all phases of the test process, i.e., test planning, test design, test implementation, test execution and test evaluation. This includes for instance, ranking of tests, and subtests, for functionality; test techniques such as boundary-value analysis, all-pairs testing and state transition tables aim to find the areas most likely to be defective.
Assessing risks
Comparing the changes between two releases or versions is key to assessing risk.
Evaluating critical business modules is a first step in prioritizing tests, but it does not include the notion of evolutionary risk. This is then expanded using two methods: change-based testing and regression testing.
Change-based testing allows test teams to assess changes made in a release and then prioritize tests towards modified modules.
Regression testing ensures that a change, such as a bug fix, did not introduce new faults into the software under test. One of the main reasons for regression testing is to determine whether a change in one part of the software has any effect on other parts of the software.
These two methods permit test teams to prioritize tests based on risk, change, and criticality of business modules. Certain technologies can make this kind of test strategy very easy to set up and to maintain with software changes.
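A minimal Python sketch of such a prioritization, combining likelihood, impact and change information into one risk score; the fields, weights and the doubling applied to changed code are illustrative assumptions, not part of any published RBT method.

from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    failure_likelihood: float   # 0..1, e.g. estimated from code churn
    failure_impact: float       # 0..1, e.g. criticality of the business module
    touches_changed_code: bool  # flagged by change-based analysis

def risk_score(tc):
    # Classic risk exposure: likelihood x impact, boosted for recent changes.
    score = tc.failure_likelihood * tc.failure_impact
    return score * 2.0 if tc.touches_changed_code else score

suite = [
    TestCase("login", 0.2, 0.9, False),
    TestCase("checkout", 0.6, 0.9, True),
    TestCase("report_export", 0.4, 0.3, False),
]

# Run the riskiest tests first.
for tc in sorted(suite, key=risk_score, reverse=True):
    print(f"{tc.name}: {risk_score(tc):.2f}")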
Types of risk
Risk can be identified as the probability that an undetected software bug may have a negative impact on the user of a system.
The methods assess risks along a variety of dimensions:
Business or operational
High use of a subsystem, function or feature
Criticality of a subsystem, function or feature, including the cost of failure
Technical
Geographic distribution of development team
Complexity of a subsystem or function
External
Sponsor or executive preference
Regulatory requirements
E-business failure-mode related
Static content defects
Web page integration defects
Functional behavior-related failure
Service (Availability and Performance) related failure
Usability and Accessibility-related failure
Security vulnerability
Large scale integration failure
References
Software testing |
8273596 | https://en.wikipedia.org/wiki/Full%20virtualization | Full virtualization | In computer science, virtualization is a modern technique developed in late 1990s and is different from simulation and emulation. Virtualization employs techniques used to create instances of an environment, as opposed to simulation, which models the environment; or emulation, which replicates the target environment such as certain kinds of virtual machine environments. Full virtualization requires that every salient feature of the hardware be reflected into one of several virtual machines – including the full instruction set, input/output operations, interrupts, memory access, and whatever other elements are used by the software that runs on the bare machine, and that is intended to run in a virtual machine. In such an environment, any software capable of execution on the raw hardware can be run in the virtual machine and, in particular, any operating systems. The obvious test of full virtualization is whether an operating system intended for stand-alone use can successfully run inside a virtual machine.
The cornerstone of full virtualization, or type-1 virtualization, is a hypervisor (sometimes described as a "super operating system") that operates at a higher privilege level than the OS. The hypervisor requires two key features to provision and protect virtualized environments:
OS-Independent Storage Management to provision resources for all supported Virtual Environments such as Linux, Microsoft Windows or embedded environments and to protect those environments from unauthorized access and,
Switching of Virtualized environments to allocate physical computing resources to Virtual Environments.
See Intel VT-x or AMD-V for a detailed description of the privilege levels for hypervisor, OS and user modes, the VMCS, VM-Exit and VM-Entry. This form of virtualization is not to be confused with the IBM virtual machine implementations of the late 1960s and early 1970s, as the IBM systems architecture of that era supported only two modes, supervisor and program, which provided no security or separation between virtual machines.
Other forms of platform virtualization allow only certain or modified software to run within a virtual machine. The concept of full virtualization is well established in the literature, but it is not always referred to by this specific term; see platform virtualization for terminology.
An important example of virtual machines, not to be confused with virtualization implemented by emulation, was that provided by the control program of IBM's CP/CMS operating system. It was first demonstrated with IBM's CP-40 research system in 1967, then distributed via open source in CP/CMS in 1967–1972, and re-implemented in IBM's VM family from 1972 to the present. Each CP/CMS user was provided a simulated, stand-alone computer. Each such virtual machine had the complete capabilities of the underlying machine, and (for its user) the virtual machine was indistinguishable from a private system. This simulation was comprehensive, and was based on the Principles of Operation manual for the hardware. It thus included such elements as an instruction set, main memory, interrupts, exceptions, and device access. The result was a single machine that could be multiplexed among many users.
Full virtualization is possible only with the right combination of hardware and software elements. For example, it was not possible with most of IBM's System/360 series, the exception being the IBM System/360-67; nor was it possible with IBM's early System/370 system. IBM added virtual memory hardware to the System/370 series in 1972, but this is not the same as the extra privilege level that Intel VT-x provides, which allows a hypervisor to properly control virtual machines that themselves require full access to supervisor and program (user) modes.
Similarly, full virtualization was not quite possible with the x86 platform until the 2005–2006 addition of the AMD-V and Intel VT-x extensions (see x86 virtualization). Many platform hypervisors for the x86 platform came very close and claimed full virtualization even prior to the AMD-V and Intel VT-x additions. Examples include Adeos, Mac-on-Linux, Parallels Desktop for Mac, Parallels Workstation, VMware Workstation, VMware Server (formerly GSX Server), VirtualBox, Win4BSD, and Win4Lin Pro. VMware, for instance, employs a technique called binary translation to automatically modify x86 software on-the-fly to replace instructions that "pierce the virtual machine" with a different, virtual machine safe sequence of instructions; this technique provides the appearance of full virtualization.
A key challenge for full virtualization is the interception and simulation of privileged operations, such as I/O instructions. The effects of every operation performed within a given virtual machine must be kept within that virtual machine – virtual operations cannot be allowed to alter the state of any other virtual machine, the control program, or the hardware. Some machine instructions can be executed directly by the hardware, since their effects are entirely contained within the elements managed by the control program, such as memory locations and arithmetic registers. But other instructions that would "pierce the virtual machine" cannot be allowed to execute directly; they must instead be trapped and simulated. Such instructions either access or affect state information that is outside the virtual machine.
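A toy model of this trap-and-simulate dispatch, written in Python; it is not real hypervisor code (the instruction names and per-VM state are invented, and "direct execution" is itself simulated here), but it shows how privileged operations are intercepted so that their effects stay inside one virtual machine.

SAFE_OPS = {"add", "load", "store"}    # effects confined to the virtual machine
PRIVILEGED_OPS = {"out", "set_timer"}  # would "pierce" the virtual machine

class VirtualMachine:
    def __init__(self, name):
        self.name = name
        self.registers = {"r0": 0}
        self.pending_io = []

def dispatch(vm, op, arg):
    if op in SAFE_OPS:
        # On real hardware these run natively; here we merely model the effect.
        vm.registers["r0"] += arg
    elif op in PRIVILEGED_OPS:
        # Trap: the control program simulates the effect inside this VM only.
        vm.pending_io.append(f"{op}({arg})")
    else:
        raise ValueError(f"unknown instruction {op!r}")

vm_a, vm_b = VirtualMachine("A"), VirtualMachine("B")
dispatch(vm_a, "add", 5)
dispatch(vm_a, "out", 1)  # trapped and simulated; vm_b is unaffected
print(vm_a.registers, vm_a.pending_io, vm_b.pending_io)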
Full virtualization has proven highly successful for:
sharing a computer system among multiple users;
isolating users from each other (and from the control program);
emulating new hardware to achieve improved reliability, security and productivity.
See also
Comparison of platform virtualization software
CP/CMS
Hardware-assisted virtualization
Hyperjacking
Hypervisor
I/O virtualization
LPAR
Operating system-level virtualization
Paravirtualization
Platform virtualization
Popek and Goldberg Virtualization Requirements
PR/SM
Virtual machine
References
See specific sources listed under platform virtualization and (for historical sources) CP/CMS.
External links
Compatibility is Not Transparency: VMM Detection Myths and Realities
Hardware virtualization |
40883 | https://en.wikipedia.org/wiki/Closed%20captioning | Closed captioning | Closed captioning (CC) and subtitling are both processes of displaying text on a television, video screen, or other visual display to provide additional or interpretive information. Both are typically used as a transcription of the audio portion of a program as it occurs (either verbatim or in edited form), sometimes including descriptions of non-speech elements. Other uses have included providing a textual alternative language translation of a presentation's primary audio language that is usually burned-in (or "open") to the video and unselectable.
HTML5 defines subtitles as a "transcription or translation of the dialogue when sound is available but not understood" by the viewer (for example, dialogue in a foreign language) and captions as a "transcription or translation of the dialogue, sound effects, relevant musical cues, and other relevant audio information when sound is unavailable or not clearly audible" (for example, when audio is muted or the viewer is deaf or hard of hearing).
Terminology
The term "closed" (versus "open") indicates that the captions are not visible until activated by the viewer, usually via the remote control or menu option. On the other hand, "open", "burned-in", "baked on", "hard-coded", or simply "hard" captions are visible to all viewers as they are embedded in the video.
In the United States and Canada, the terms "subtitles" and "captions" have different meanings. Subtitles assume the viewer can hear but cannot understand the language or accent, or the speech is not entirely clear, so they transcribe only dialogue and some on-screen text. Captions aim to describe to the deaf and hard of hearing all significant audio content — spoken dialogue and non-speech information such as the identity of speakers and, occasionally, their manner of speaking — along with any significant music or sound effects, using words or symbols. The term "closed caption" has also come to refer to the North American EIA-608 encoding that is used with NTSC-compatible video.
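In the EIA-608 scheme, caption data is carried as pairs of bytes in which each byte holds a 7-bit character plus an odd-parity bit in the most significant position. The Python sketch below shows just that parity encoding, leaving out the rest of the line 21 framing.

def with_odd_parity(ch):
    # Encode a 7-bit character as an EIA-608-style byte with odd parity in bit 7.
    value = ord(ch)
    if value > 0x7F:
        raise ValueError("EIA-608 characters are 7-bit")
    ones = bin(value).count("1")
    parity_bit = 0 if ones % 2 == 1 else 1  # force an odd number of set bits
    return value | (parity_bit << 7)

# Caption bytes are sent two per video field.
pair = [with_odd_parity(c) for c in "Hi"]
print([hex(b) for b in pair])  # ['0xc8', '0xe9']
for b in pair:
    assert bin(b).count("1") % 2 == 1  # every transmitted byte has odd parity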
The United Kingdom, Ireland, and most other countries do not distinguish between subtitles and closed captions and use "subtitles" as the general term. The equivalent of "captioning" is usually referred to as "subtitles for the hard of hearing". Their presence is referenced on screen by notation which says "Subtitles", or previously "Subtitles 888" or just "888" (the latter two are in reference to the conventional videotext channel for captions), which is why the term subtitle is also used to refer to the Ceefax-based videotext encoding that is used with PAL-compatible video. The term subtitle has been replaced with caption in a number of markets — such as Australia and New Zealand — that purchase large amounts of imported US material, with much of that video having had the US CC logo already superimposed over the start of it. In New Zealand, broadcasters superimpose an ear logo with a line through it that represents subtitles for the hard of hearing, even though they are currently referred to as captions. In the UK, modern digital television services have subtitles for the majority of programs, so it is no longer necessary to highlight which have subtitling/captioning and which do not.
Remote control handsets for TVs, DVDs, and similar devices in most European markets often use "SUB" or "SUBTITLE" on the button used to control the display of subtitles/captions.
History
Open captioning
Regular open-captioned broadcasts began on PBS's The French Chef in 1972. WGBH began open captioning of the programs Zoom, ABC World News Tonight, and Once Upon a Classic shortly thereafter.
Technical development of closed captioning
Closed captioning was first demonstrated in the United States at the First National Conference on Television for the Hearing Impaired in Nashville, Tennessee, in 1971. A second demonstration of closed captioning was held at Gallaudet College (now Gallaudet University) on February 15, 1972, where ABC and the National Bureau of Standards demonstrated closed captions embedded within a normal broadcast of The Mod Squad.
At the same time, in the UK, the BBC was demonstrating its Ceefax text-based broadcast service, which it was already using as a foundation for the development of a closed caption production system. It was working with Professor Alan Newell of the University of Southampton, who had been developing prototypes in the late 1960s.
The closed captioning system was successfully encoded and broadcast in 1973 with the cooperation of PBS station WETA. As a result of these tests, the FCC in 1976 set aside line 21 for the transmission of closed captions. PBS engineers then developed the caption editing consoles that would be used to caption prerecorded programs.
The BBC in the UK was the first broadcaster to include closed captions (subtitles in the UK) in 1979 based on the Teletext framework for pre-recorded programming.
Real-time captioning, a process for captioning live broadcasts, was developed by the National Captioning Institute in 1982. In real-time captioning, stenotype operators who are able to type at speeds of over 225 words per minute provide captions for live television programs, allowing the viewer to see the captions within two to three seconds of the words being spoken.
Major US producers of captions are WGBH-TV, VITAC, CaptionMax and the National Captioning Institute. In the UK and Australasia, Ai-Media, Red Bee Media, itfc, and Independent Media Support are the major vendors.
Improvements in speech recognition technology mean that live captioning may be fully or partially automated. BBC Sport broadcasts use a "respeaker": a trained human who repeats the running commentary (with careful enunciation and some simplification and markup) for input to the automated text generation system. This is generally reliable, though errors are not unknown.
Full-scale closed captioning
The National Captioning Institute was created in 1979 in order to get the cooperation of the commercial television networks.
The first use of regularly scheduled closed captioning on American television occurred on March 16, 1980. Sears had developed and sold the Telecaption adapter, a decoding unit that could be connected to a standard television set. The first programs seen with captioning were a Disney's Wonderful World presentation of the film Son of Flubber on NBC, an ABC Sunday Night Movie airing of Semi-Tough, and Masterpiece Theatre on PBS.
Since 2010, the BBC has provided a 100% broadcast captioning service across all seven of its main broadcast channels: BBC One, BBC Two, BBC Three, BBC Four, CBBC, CBeebies and BBC News (TV channel).
BBC iPlayer launched in 2008 as the first captioned video-on-demand service from a major broadcaster to meet levels of captioning comparable to those provided on its broadcast channels.
Legislative development in the U.S.
Until the passage of the Television Decoder Circuitry Act of 1990, television captioning was performed by a set-top box manufactured by Sanyo Electric and marketed by the National Captioning Institute (NCI). (At that time a set-top decoder cost about as much as a TV set itself, approximately $200.) Through discussions with the manufacturer it was established that the appropriate circuitry integrated into the television set would be less expensive than the stand-alone box, and Ronald May, then a Sanyo employee, provided the expert witness testimony on behalf of Sanyo and Gallaudet University in support of the passage of the bill. On January 23, 1991, the Television Decoder Circuitry Act of 1990 was passed by Congress. This Act gave the Federal Communications Commission (FCC) power to enact rules on the implementation of closed captioning. It required all analog television receivers with screens of 13 inches or greater, whether sold or manufactured, to have the ability to display closed captioning by July 1, 1993.
Also, in 1990, the Americans with Disabilities Act (ADA) was passed to ensure equal opportunity for persons with disabilities. The ADA prohibits discrimination against persons with disabilities in public accommodations or commercial facilities. Title III of the ADA requires that public facilities—such as hospitals, bars, shopping centers and museums (but not movie theaters)—provide access to verbal information on televisions, films or slide shows.
The Federal Communications Commission requires all providers of programs to caption material that has audio in English or Spanish, with certain exceptions specified in Section 79.1(d) of the Commission's rules. These exceptions apply to new networks; programs in languages other than English or Spanish; networks for which captioning costs would exceed 2% of income; networks with less than US$3,000,000 in revenue; and certain local programs; among others. Those who are not covered by the exceptions may apply for a hardship waiver.
The Telecommunications Act of 1996 expanded on the Decoder Circuitry Act to place the same requirements on digital television receivers by July 1, 2002. All TV programming distributors in the U.S. have been required to provide closed captions for Spanish-language video programming since January 1, 2010.
A bill, H.R. 3101, the Twenty-First Century Communications and Video Accessibility Act of 2010, was passed by the United States House of Representatives in July 2010. A similar bill, S. 3304, with the same name, was passed by the United States Senate on August 5, 2010, by the House of Representatives on September 28, 2010, and was signed by President Barack Obama on October 8, 2010. The Act requires, in part, that ATSC-decoding set-top box remotes have a button to turn the closed captioning in the output signal on or off. It also requires broadcasters to provide captioning for television programs redistributed on the Internet.
On February 20, 2014, the FCC unanimously approved the implementation of quality standards for closed captioning, addressing accuracy, timing, completeness, and placement. This is the first time the FCC has addressed quality issues in captions.
Philippines
Under RA 10905, all TV networks in the Philippines are required to provide closed captions. As of 2018, the three major TV networks in the country were testing the closed captioning system on their transmissions. ABS-CBN added closed captions to its daily 3 O'Clock Habit in the afternoon. 5 began implementing closed captions on its live noon and nightly news programs. GMA once began captioning its nightly and late-night news programs, but has since stopped. Only select Korean dramas, local or foreign movies, and the programs Biyahe ni Drew (English: Drew's Explorations) and Idol sa Kusina (English: Kitchen Idol) air with proper closed captioning.
Closed captioning in Filipino films is included at the discretion of film production companies, typically to serve viewers who do not understand the language or cannot make out the dialogue, or when a film is distributed internationally. Since 2016, Filipino-language films, and some titles on streaming services such as iWant, have included English subtitles in some showings. The law concerning this, passed by Gerald Anthony Gullas Jr., a lawmaker from Cebu City, implemented regulations standardizing both official languages of the Philippines, given that many people had not mastered English vocabulary fluently.
Legislative development in Australia
The government of Australia provided seed funding in 1981 for the establishment of the Australian Caption Centre (ACC) and the purchase of equipment. Captioning by the ACC commenced in 1982 and a further grant from the Australian government enabled the ACC to achieve and maintain financial self-sufficiency. The ACC, now known as Media Access Australia, sold its commercial captioning division to Red Bee Media in December 2005. Red Bee Media continues to provide captioning services in Australia today.
Funding development in New Zealand
In 1981, TVNZ held a telethon to raise funds for Teletext-encoding equipment used for the creation and editing of text-based broadcast services for the deaf. The service came into use in 1984, with caption creation and importing paid for as part of the public broadcasting fee until the creation of the NZ On Air taxpayer fund, which is used to provide captioning for NZ On Air content, TVNZ news shows, and conversion of EIA-608 US captions to the preferred EBU STL format for TVNZ 1, TV 2 and TV 3 only, with archived captions available to FOUR and select Sky programming. During the second half of 2012, TV3 and FOUR began providing non-Teletext DVB image-based captions on their HD service and used the same format on the satellite service, which has since caused major timing issues in relation to server load and the loss of captions from most SD DVB-S receivers, such as the ones Sky Television provides its customers. As of April 2, 2013, only the Teletext page 801 caption service remains in use, the informational non-caption Teletext content having been discontinued.
Application
Closed captions were created for deaf and hard of hearing individuals to assist in comprehension. They can also be used as a tool by those learning to read, learning to speak a non-native language, or in an environment where the audio is difficult to hear or is intentionally muted. Captions can also be used by viewers who simply wish to read a transcript along with the program audio.
In the United States, the National Captioning Institute noted that English as a foreign or second language (ESL) learners were the largest group buying decoders in the late 1980s and early 1990s before built-in decoders became a standard feature of US television sets. This suggested that the largest audience of closed captioning was people whose native language was not English. In the United Kingdom, of 7.5 million people using TV subtitles (closed captioning), 6 million have no hearing impairment.
Closed captions are also used in public environments, such as bars and restaurants, where patrons may not be able to hear over the background noise, or where multiple televisions are displaying different programs. In addition, online videos may be transcribed automatically by speech-processing algorithms, which can produce whole chains of errors. When a video is truly and accurately transcribed, the closed-captioning publication serves a useful purpose, and the content is available for search engines to index and make available to users on the internet.
Some television sets can be set to automatically turn captioning on when the volume is muted.
Television and video
For live programs, spoken words comprising the television program's soundtrack are transcribed by a human operator (a speech-to-text reporter) using stenotype or stenomask machines, whose phonetic output is instantly translated into text by a computer and displayed on the screen. This technique was developed in the 1970s as an initiative of the BBC's Ceefax teletext service. In collaboration with the BBC, a university student took on the research project of writing the first phonetics-to-text conversion program for this purpose. Sometimes, the captions of live broadcasts, like news bulletins, sports events, and live entertainment shows, fall behind by a few seconds. This delay arises because the system cannot know what the person is going to say next, so the caption appears only after the sentence has been spoken. Automatic computer speech recognition works well when trained to recognize a single voice, so since 2003 the BBC has done live subtitling by having someone re-speak what is being broadcast. Live captioning is also a form of real-time text. Meanwhile, sports events on ESPN use court reporters with special (steno) keyboards and individually constructed "dictionaries".
In some cases, the transcript is available beforehand, and captions are simply displayed during the program after being edited. For programs that have a mix of pre-prepared and live content, such as news bulletins, a combination of techniques is used.
For prerecorded programs, commercials, and home videos, audio is transcribed and captions are prepared, positioned, and timed in advance.
For all types of NTSC programming, captions are "encoded" into line 21 of the vertical blanking interval - a part of the TV picture that sits just above the visible portion and is usually unseen. For ATSC (digital television) programming, three streams are encoded in the video: two are backward compatible "line 21" captions, and the third is a set of up to 63 additional caption streams encoded in EIA-708 format.
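To make the line 21 framing concrete, the following minimal Python sketch classifies a two-byte group as either a control command or a pair of printable characters, and notes the resulting character rate. The constant and function names are illustrative assumptions, not reference code from the CEA-608 specification.

    # Minimal sketch of EIA-608 line 21 framing (illustrative names).
    # Each frame of one field carries exactly two caption bytes.
    CONTROL_RANGE = range(0x10, 0x20)  # first byte of a control pair

    def classify_pair(b1: int, b2: int) -> str:
        if (b1 & 0x7F) in CONTROL_RANGE:       # strip the parity bit first
            return "control command (mode, position, color, ...)"
        return "characters: " + chr(b1 & 0x7F) + chr(b2 & 0x7F)

    # Two bytes per frame at NTSC's ~29.97 frames/s gives roughly
    # 60 characters per second for one field's caption service.
    print(classify_pair(0xC8, 0x49))   # -> characters: HI
    print(classify_pair(0x94, 0x20))   # -> control command (...)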
Captioning is modulated and stored differently in PAL and SECAM 625-line, 25-frame countries, where teletext is used rather than EIA-608, but the methods of preparation and the line 21 field used are similar. For home Betamax and VHS videotapes, a shift down of this line 21 field must be done due to the greater number of VBI lines used in 625-line PAL countries, though only a small minority of European PAL VHS machines support this (or any) format for closed caption recording. Like all teletext fields, teletext captions cannot be stored by a standard 625-line VHS recorder (due to the lack of field-shifting support); they are available on all professional S-VHS recordings because all fields are recorded. Recorded teletext caption fields also suffer from a higher number of caption errors, due to the increased number of bits and a low SNR, especially on low-bandwidth VHS. This is why teletext captions used to be stored separately, on floppy disk, from the analogue master tape. DVDs have their own system for subtitles and captions, which are digitally inserted in the data stream and decoded on playback into video.
For older televisions, a set-top box or other decoder is usually required. In the US, since the passage of the Television Decoder Circuitry Act, manufacturers of most television receivers sold have been required to include closed captioning display capability. High-definition TV sets, receivers, and tuner cards are also covered, though the technical specifications are different (high-definition display screens, as opposed to high-definition TVs, may lack captioning). Canada has no similar law but receives the same sets as the US in most cases.
During transmission, single byte errors can be replaced by a white space, which can appear at the beginning of the program. Multiple byte errors during EIA-608 transmission can affect the screen momentarily, causing it to default to a real-time mode such as the "roll-up" style, type random letters on screen, and then revert to normal. Uncorrectable byte errors within the teletext page header will cause whole captions to be dropped. EIA-608, because it uses only two characters per video frame, sends captions ahead of time, storing them in a second buffer awaiting a command to display them; teletext sends these in real time.
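The white-space substitution described above follows from EIA-608's odd parity: every byte must contain an odd number of 1 bits, so a single flipped bit is detectable. A hedged sketch of that check (the helper names are our own, not decoder source code):

    # Each EIA-608 byte carries a 7-bit character plus an odd-parity bit.
    def has_odd_parity(byte: int) -> bool:
        return bin(byte & 0xFF).count("1") % 2 == 1

    def decode_byte(byte: int) -> str:
        if not has_odd_parity(byte):
            return " "              # single-byte error: blank it out
        return chr(byte & 0x7F)     # drop the parity bit

    assert decode_byte(0xC8) == "H"   # 0x48 with the parity bit set
    assert decode_byte(0x48) == " "   # same character, parity violated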
The use of capitalization varies among caption providers. Most caption providers capitalize all words while others such as WGBH and non-US providers prefer to use mixed-case letters.
There are two main styles of line 21 closed captioning:
Roll-up or scroll-up or paint-on or scrolling: Real-time words sent in paint-on or scrolling mode appear from left to right, up to one line at a time; when a line is filled in roll-up mode, the whole line scrolls up to make way for a new line, and the line on top is erased. The lines usually appear at the bottom of the screen, but can actually be placed on any of the 14 screen rows to avoid covering graphics or action. This method is used when captioning video in real-time such as for live events, where a sequential word-by-word captioning process is needed or a pre-made intermediary file isn't available. This method is signaled on EIA-608 by a two-byte caption command or in Teletext by replacing rows for a roll-up effect and duplicating rows for a paint-on effect. This allows for real-time caption line editing.
Pop-on or pop-up or block: A caption appears on any of the 14 screen rows as a complete sentence, which can be followed by additional captions. This method is used when captions come from an intermediary file (such as the Scenarist or EBU STL file formats) for pre-taped television and film programming, commonly produced at captioning facilities. This method of captioning can be aided by digital scripts or voice recognition software, and if used for live events, would require a video delay to avoid a large delay in the captions' appearance on-screen, which occurs with Teletext-encoded live subtitles.
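A minimal sketch of the two display models just described, assuming a toy in-memory screen (the class names are illustrative): roll-up scrolls completed lines upward within a small window, while pop-on composes the next caption in a hidden buffer and swaps it in whole, which is what EIA-608's end-of-caption command triggers.

    class RollUp:
        """Roll-up: new lines push old ones off the top of the window."""
        def __init__(self, rows: int = 2):
            self.rows, self.lines = rows, []

        def add_line(self, text: str) -> None:
            self.lines.append(text)
            if len(self.lines) > self.rows:
                self.lines.pop(0)          # top line scrolls away

    class PopOn:
        """Pop-on: compose off screen, then swap buffers to display."""
        def __init__(self):
            self.offscreen, self.onscreen = [], []

        def write(self, text: str) -> None:
            self.offscreen.append(text)    # invisible until the swap

        def end_of_caption(self) -> None:  # display the composed caption
            self.onscreen, self.offscreen = self.offscreen, []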
Caption formatting
TVNZ Access Services and Red Bee Media for BBC and Australia example:
I got the machine ready.
ENGINE STARTING
(speeding away)
UK IMS for ITV and Sky example:
(man) I got the machine ready. (engine starting)
US WGBH Access Services example:
MAN: I got the machine ready. (engine starting)
US National Captioning Institute example:
- I GOT THE MACHINE READY.
US other provider example:
I GOT THE MACHINE READY.
[engine starting]
US in-house real-time roll-up example:
>> Man: I GOT THE MACHINE READY.
[engine starting]
Non-US in-house real-time roll-up example:
MAN: I got the machine ready.
(ENGINE STARTING)
US CaptionMax example:
[MAN]
I got the machine ready.
[engine starting]
Syntax
For real-time captioning done outside of captioning facilities, the following syntax is used:
'>>' (two prefixed greater-than signs) indicates a change in single speaker.
Sometimes appended with the speaker's name in alternate case, followed by a colon.
'>>>' (three prefixed greater-than signs) indicates a change in news story or multiple speakers.
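Applied literally, these two markers turn a stream of cues into display text like the in-house roll-up examples shown earlier. A small illustrative helper (hypothetical names, not any vendor's code):

    def prefix(text: str, new_speaker=False, new_story=False, name=None):
        """Prefix a real-time caption cue per the '>>'/'>>>' convention."""
        if new_story:
            return ">>> " + text
        if new_speaker:
            return ">> " + (f"{name}: " if name else "") + text
        return text

    print(prefix("I GOT THE MACHINE READY.", new_speaker=True, name="Man"))
    # >> Man: I GOT THE MACHINE READY.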
Styles of syntax that are used by various captioning producers:
Capitals indicate main on-screen dialogue and the name of the speaker.
Legacy EIA-608 home caption decoder fonts had no descenders on lowercase letters.
Outside North America, capitals with background coloration indicate a song title or sound effect description.
Outside North America, capitals with black or no background coloration indicates when a word is stressed or emphasized.
Descenders indicate background sound description and off-screen dialogue.
Most modern caption producers, such as WGBH-TV, use mixed case for both on-screen and off-screen dialogue.
'-' (a prefixed dash) indicates a change in single speaker (used by CaptionMax).
Words in italics indicate when a word is stressed or emphasized and when real world names are quoted.
Italics and bold type are only supported by EIA-608.
Some North American providers use this for narrated dialogue.
Italics are also applied when a word is spoken in a foreign language.
Text coloration indicates captioning credits and sponsorship.
Used by music videos in the past, but generally has declined due to system incompatibilities.
In Ceefax/Teletext countries, it indicates a change in single speaker in place of '>>'.
Some Teletext countries use coloration to indicate when a word is stressed or emphasized.
Coloration is limited to white, green, blue, cyan, red, yellow and magenta.
The UK order of use for text is white, green, cyan, yellow; and for backgrounds it is black, red, blue, magenta, white.
The US order of use for text is white, yellow, cyan, green; and for backgrounds it is black, blue, red, magenta, white.
Square brackets or parentheses indicate a song title or sound effect description.
Parentheses indicate speaker's vocal pitch e.g., (man), (woman), (boy) or (girl).
Outside North America, parentheses indicate a silent on-screen action.
A pair of eighth notes is used to bracket a line of lyrics to indicate singing.
A pair of eighth notes on a line of no text are used during a section of instrumental music.
Outside North America, a single number sign is used on a line of lyrics to indicate singing.
An additional musical notation character is appended to the end of the last line of lyrics to indicate the song's end.
As the symbol is unsupported by Ceefax/Teletext, a number sign - which resembles a musical sharp - is substituted.
Technical aspects
There were many shortcomings in the original Line 21 specification from a typographic standpoint, since, for example, it lacked many of the characters required for captioning in languages other than English. Since that time, the core Line 21 character set has been expanded to include quite a few more characters, handling most requirements for languages common in North and South America such as French, Spanish, and Portuguese, though those extended characters are not required in all decoders and are thus unreliable in everyday use. The problem has been almost eliminated with a market-specific full set of Western European characters and a privately adopted Norpak extension for the South Korean and Japanese markets. The full EIA-708 standard for digital television has worldwide character set support, but there has been little use of it because EBU Teletext dominates DVB countries, which have their own extended character sets.
Captions are often edited to make them easier to read and to reduce the amount of text displayed onscreen. This editing can range from very minor, with only a few occasional unimportant missed lines, to severe, where virtually every line spoken by the actors is condensed. The measure used to guide this editing is words per minute, commonly varying from 180 to 300, depending on the type of program. Offensive words are also captioned, but if the program is censored for TV broadcast, the broadcaster might not have arranged for the captioning to be edited or censored as well. The "TV Guardian", a television set-top box, is available to parents who wish to censor offensive language of programs: the video signal is fed into the box, and if it detects an offensive word in the captioning, the audio signal is bleeped or muted for that period of time.
Caption channels
The Line 21 data stream can consist of data from several data channels multiplexed together. Odd field 1 can have four data channels: two separate synchronized caption channels (CC1, CC2) and two channels for caption-related text, such as website URLs (T1, T2). Even field 2 can have five additional data channels: two separate synchronized caption channels (CC3, CC4), two caption-related text channels (T3, T4), and Extended Data Services (XDS) for Now/Next EPG details. The XDS data structure is defined in CEA-608.
As CC1 and CC2 share bandwidth, if there is a lot of data in CC1, there will be little room for CC2 data, and CC1 is generally only used for the primary audio captions. Similarly, CC3 and CC4 share the second, even field of line 21. Since some early caption decoders supported only single-field decoding of CC1 and CC2, captions for SAP in a second language were often placed in CC2. This led to bandwidth problems, and the U.S. Federal Communications Commission (FCC) recommendation is that bilingual programming should have the second caption language in CC3. Many Spanish television networks, such as Univision and Telemundo, for example, provide English subtitles for many of their Spanish programs in CC3. Canadian broadcasters use CC3 for French-translated SAPs, a practice also found in South Korea and Japan.
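In code form, the channel layout described above might be tabulated as follows; this is a plain summary table in Python, with the usage notes taken from the text rather than from any standard's reference material.

    LINE21_SERVICES = {
        ("field 1", "CC1"): "primary synchronized captions",
        ("field 1", "CC2"): "often second-language captions; shares bandwidth with CC1",
        ("field 1", "T1"):  "caption-related text, e.g. website URLs",
        ("field 1", "T2"):  "caption-related text",
        ("field 2", "CC3"): "FCC-recommended slot for a second caption language",
        ("field 2", "CC4"): "additional captions",
        ("field 2", "T3"):  "caption-related text",
        ("field 2", "T4"):  "caption-related text",
        ("field 2", "XDS"): "Extended Data Services (Now/Next EPG details)",
    }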
Ceefax and Teletext can have a larger number of captions for other languages due to the use of multiple VBI lines. However, only European countries used a second subtitle page for second-language audio tracks where either NICAM dual mono or Zweikanalton was used.
Digital television interoperability issues
The US ATSC digital television system originally specified two kinds of closed captioning datastream standards: the original analog-compatible captions (available via Line 21) and the more modern digital-only CEA-708 format, both delivered within the video stream. The US FCC mandates that broadcasters deliver (and generate, if necessary) both datastream formats, with the CEA-708 format merely a conversion of the Line 21 format. The Canadian CRTC has not mandated that broadcasters broadcast both datastream formats or one format exclusively. To avoid large conversion cost outlays, most broadcasters and networks simply provide EIA-608 captions along with a transcoded CEA-708 version encapsulated within CEA-708 packets.
Incompatibility issues with digital TV
Many viewers find that when they acquire a digital television or set-top box they are unable to view closed caption (CC) information, even though the broadcaster is sending it and the TV is able to display it.
Originally, CC information was included in the picture ("line 21") via a composite video input, but there is no equivalent capability in digital video interconnects (such as DVI and HDMI) between the display and a "source". A "source", in this case, can be a DVD player or a terrestrial or cable digital television receiver. When CC information is encoded in the MPEG-2 data stream, only the device that decodes the MPEG-2 data (a source) has access to the closed caption information; there is no standard for transmitting the CC information to a display monitor separately. Thus, if there is CC information, the source device needs to overlay the CC information on the picture prior to transmitting to the display over the interconnect's video output.
The responsibility of decoding the CC information and overlaying onto the visible video image has been taken away from the TV display and put into the "source" of DVI and HDMI digital video interconnects.
Because the TV handles "mute" while, when using DVI and HDMI, a different device handles turning CC on and off, the "captions come on automatically when the TV is muted" feature no longer works.
That source device, such as a DVD player or set-top box, must "burn" the image of the CC text into the picture data carried by the HDMI or DVI cable; there is no other way for the CC text to be carried over the HDMI or DVI cable.
Many source devices do not have the ability to overlay CC information, and controlling the CC overlay can be complicated. For example, the Motorola DCT-5xxx and -6xxx cable set-top receivers have the ability to decode CC information located in the MPEG-2 stream and overlay it on the picture, but turning CC on and off requires turning off the unit and going into a special setup menu (it is not on the standard configuration menu and it cannot be controlled using the remote). Historically, DVD players, VCRs and set-top tuners did not need to do this overlaying, since they simply passed this information on to the TV, and they are not mandated to perform this overlaying.
Many modern digital television receivers can be directly connected to cables, but often cannot receive scrambled channels that the user is paying for. Thus, the lack of a standard way of sending CC information between components, along with the lack of a mandate to add this information to a picture, results in CC being unavailable to many hard-of-hearing and deaf users.
The EBU Ceefax-based teletext systems are the source for closed captioning signals, thus when teletext is embedded into DVB-T or DVB-S the closed captioning signal is included. However, for DVB-T and DVB-S, it is not necessary for a teletext page signal to also be present (ITV1, for example, does not carry analogue teletext signals on Sky Digital, but does carry the embedded version, accessible from the "Services" menu of the receiver, or more recently by turning them off/on from a mini menu accessible from the "help" button).
The BBC's Subtitle (Captioning) Editorial Guidelines were born out of the capabilities of Teletext, but are now used by multiple European broadcasters as the editorial and design best-practice guide.
New Zealand
In New Zealand, captions use an EBU Ceefax-based teletext system on DVB broadcasts via satellite and cable television, with the exception of MediaWorks New Zealand channels, which completely switched to DVB RLE subtitles in 2012 on both Freeview satellite and UHF broadcasts. This decision was based on the TVNZ practice of using the format on DVB UHF broadcasts only (aka Freeview HD). It made TVs connected via composite video incapable of decoding the captions on their own. Also, these pre-rendered subtitles use classic caption-style opaque backgrounds with an overly large font size, and obscure the picture more than the more modern, partially transparent backgrounds.
Digital television standard captioning improvements
The CEA-708 specification provides for dramatically improved captioning:
An enhanced character set with more accented letters and non-Latin letters, and more special symbols
Viewer-adjustable text size (called the "caption volume control" in the specification), allowing individuals to adjust their TVs to display small, normal, or large captions
More text and background colors, including both transparent and translucent backgrounds to optionally replace the big black block
More text styles, including edged or drop shadowed text rather than the letters on a solid background
More text fonts, including monospaced and proportional spaced, serif and sans-serif, and some playful cursive fonts
Higher bandwidth, to allow more data per minute of video
More language channels, to allow the encoding of more independent caption streams
As of 2009, most closed captioning for digital television environments is done using tools designed for analog captioning (working to the CEA-608 NTSC specification rather than the CEA-708 ATSC specification). The captions are then run through transcoders made by companies like EEG Enterprises or Evertz, which convert the analog Line 21 caption format to the digital format. This means that none of the CEA-708 features are used unless they were also contained in CEA-608.
Uses in other media
DVDs and Blu-ray Discs
NTSC DVDs may carry closed captions in data packets of the MPEG-2 video streams inside the VIDEO_TS folder. Once played out of the analog outputs of a set-top DVD player, the caption data is converted to the Line 21 format. It is output by the player on composite video (or an available RF connector) for a connected TV's built-in decoder or a set-top decoder as usual. It cannot be output on S-Video or component video outputs due to the lack of a colorburst signal on line 21. (Actually, regardless of this, if the DVD player is in interlaced rather than progressive mode, closed captioning will be displayed on the TV over component video input if the TV captioning is turned on and set to CC1.) When viewed on a personal computer, caption data can be viewed by software that can read and decode the caption data packets in the MPEG-2 streams of the DVD-Video disc. Windows Media Player in Windows Vista (before Windows 7) supported only closed caption channels 1 and 2 (not 3 or 4). Apple's DVD Player does not have the ability to read and decode Line 21 caption data recorded on a DVD made from an over-the-air broadcast, though it can display some movie DVD captions.
In addition to Line 21 closed captions, video DVDs may also carry subtitles, generally rendered from the EIA-608 captions as a bitmap overlay that can be turned on and off via a set-top DVD player or DVD player software, just like the textual captions. This type of captioning is usually carried in a subtitle track labeled either "English for the hearing impaired" or, more recently, "SDH" (Subtitles for the Deaf and Hard of hearing). Many popular Hollywood DVD-Videos can carry both subtitles and closed captions (e.g. the Stepmom DVD by Columbia Pictures). On some DVDs, the Line 21 captions may contain the same text as the subtitles; on others, only the Line 21 captions include the additional non-speech information (sometimes even song lyrics) needed for deaf and hard-of-hearing viewers. European Region 2 DVDs do not carry Line 21 captions, and instead list the subtitle languages available; English is often listed twice, once as the representation of the dialogue alone, and again as a second subtitle set that carries additional information for the deaf and hard-of-hearing audience. (Many deaf/hard-of-hearing subtitle files on DVDs are reworkings of original teletext subtitle files.)
Blu-ray media cannot carry any VBI data such as Line 21 closed captioning, owing to the design of the DVI-based High-Definition Multimedia Interface (HDMI) specifications, which were only extended for synchronized digital audio, replacing older analog standards such as VGA, S-Video, component video, and SCART. Both Blu-ray and DVD can use either PNG bitmap subtitles or 'advanced subtitles' to carry SDH-type subtitling, the latter being an XML-based textual format that includes font, styling and positioning information as well as a Unicode representation of the text. Advanced subtitling can also include additional media accessibility features such as "descriptive audio".
Movies
There are several competing technologies used to provide captioning for movies in theaters. Cinema captioning falls into the categories of open and closed. The definition of "closed" captioning in this context is different from television, as it refers to any technology that allows as few as one member of the audience to view the captions.
Open captioning in a film theater can be accomplished through burned-in captions, projected text or bitmaps, or (rarely) a display located above or below the movie screen. Typically, this display is a large LED sign. In a digital theater, open caption display capability is built into the digital projector. Closed caption capability is also available, with the ability for 3rd-party closed caption devices to plug into the digital cinema server.
Probably the best known closed captioning option for film theaters is the Rear Window Captioning System from the National Center for Accessible Media. Upon entering the theater, viewers requiring captions are given a panel of flat translucent glass or plastic on a gooseneck stalk, which can be mounted in front of the viewer's seat. In the back of the theater is an LED display that shows the captions in mirror image. The panel reflects captions for the viewer but is nearly invisible to surrounding patrons. The panel can be positioned so that the viewer watches the movie through the panel, and captions appear either on or near the movie image. A company called Cinematic Captioning Systems has a similar reflective system called Bounce Back. A major problem for distributors has been that these systems are each proprietary, and require separate distributions to the theater to enable them to work. Proprietary systems also incur license fees.
For film projection systems, Digital Theater Systems, the company behind the DTS surround sound standard, has created a digital captioning device called the DTS-CSS (Cinema Subtitling System). It is a combination of a laser projector which places the captioning (words, sounds) anywhere on the screen and a thin playback device with a CD that holds many languages. If the Rear Window Captioning System is used, the DTS-CSS player is also required for sending caption text to the Rear Window sign located in the rear of the theater.
Special effort has been made to build accessibility features into digital projection systems (see digital cinema). Through SMPTE, standards now exist that dictate how open and closed captions, as well as hearing-impaired and visually impaired narrative audio, are packaged with the rest of the digital movie. This eliminates the proprietary caption distributions required for film, and the associated royalties. SMPTE has also standardized the communication of closed caption content between the digital cinema server and 3rd-party closed caption systems (the CSP/RPL protocol). As a result, new, competitive closed caption systems for digital cinema are now emerging that will work with any standards-compliant digital cinema server. These newer closed caption devices include cupholder-mounted electronic displays and wireless glasses which display caption text in front of the wearer's eyes. Bridge devices are also available to enable the use of Rear Window systems. As of mid-2010, the remaining challenge to the wide introduction of accessibility in digital cinema is the industry-wide transition to SMPTE DCP, the standardized packaging method for very high quality, secure distribution of digital movies.
Sports venues
Captioning systems have also been adopted by most major league and high-profile college stadiums and arenas, typically through dedicated portions of their main scoreboards or as part of balcony fascia LED boards. These screens display captions of the public address announcer and other spoken content, such as those contained within in-game segments, public service announcements, and lyrics of songs played in-stadium. In some facilities, these systems were added as a result of discrimination lawsuits. Following a lawsuit under the Americans with Disabilities Act, FedExField added caption screens in 2006. Some stadiums utilize on-site captioners while others outsource them to external providers who caption remotely.
Video games
The infrequent appearance of closed captioning in video games became a problem in the 1990s as games began to commonly feature voice tracks, which in some cases contained information which the player needed in order to know how to progress in the game. Closed captioning of video games is becoming more common. One of the first video game companies to feature closed captioning was Bethesda Softworks in their 1990 release of Hockey League Simulator and The Terminator 2029. Infocom also offered Zork Grand Inquisitor in 1997. Many games since then have at least offered subtitles for spoken dialog during cutscenes, and many include significant in-game dialog and sound effects in the captions as well; for example, with subtitles turned on in the Metal Gear Solid series of stealth games, not only are subtitles available during cut scenes, but any dialog spoken during real-time gameplay will be captioned as well, allowing players who can't hear the dialog to know what enemy guards are saying and when the main character has been detected. Also, in many of developer Valve's video games (such as Half-Life 2 or Left 4 Dead), when closed captions are activated, dialog and nearly all sound effects either made by the player or from other sources (e.g. gunfire, explosions) will be captioned.
Video games do not offer Line 21 captioning, decoded and displayed by the television itself, but rather a built-in subtitle display, more akin to that of a DVD. The game systems themselves have no role in the captioning either; each game must have its subtitle display programmed individually.
Reid Kimball, a game designer who is hearing impaired, is attempting to educate game developers about closed captioning for games. Reid started the Games[CC] group to closed caption games and serve as a research and development team to aid the industry. Kimball designed the Dynamic Closed Captioning system, writes articles and speaks at developer conferences. Games[CC]'s first closed captioning project called Doom3[CC] was nominated for an award as Best Doom3 Mod of the Year for IGDA's Choice Awards 2006 show.
Online video streaming
Internet video streaming service YouTube offers captioning services in videos. The author of a video can upload a SubViewer (*.SUB), SubRip (*.SRT) or *.SBV file. As a beta feature, the site also added the ability to automatically transcribe and generate captioning on videos, with varying degrees of success based upon the content of the video. However, on August 30, 2020, the company announced that community captions would end on September 28. The automatic captioning is often inaccurate on videos with background music or exaggerated emotion in speaking. Variations in volume can also result in nonsensical machine-generated captions. Additional problems arise with strong accents, sarcasm, differing contexts, or homonyms.
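For illustration, a SubRip (.SRT) file of the kind YouTube accepts is plain text: a cue number, a "start --> end" time line, then the caption text. A minimal, hedged parsing sketch follows; the regular expression and sample cues are our own construction, not YouTube's code.

    import re

    CUE = re.compile(
        r"(\d+)\s*\n"
        r"(\d{2}:\d{2}:\d{2},\d{3}) --> (\d{2}:\d{2}:\d{2},\d{3})\s*\n"
        r"(.*?)(?:\n\n|\Z)", re.S)

    sample = "\n".join([
        "1",
        "00:00:01,000 --> 00:00:03,000",
        "I got the machine ready.",
        "",
        "2",
        "00:00:04,000 --> 00:00:05,500",
        "[engine starting]",
    ])

    for num, start, end, text in CUE.findall(sample):
        print(num, start, "->", end, "|", text.strip())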
On June 30, 2010, YouTube announced a new "YouTube Ready" designation for professional caption vendors in the United States. The initial list included twelve companies that passed a caption-quality evaluation administered by the Described and Captioned Media Project, had a website and a YouTube channel where customers could learn more about their services, and agreed to post rates for the range of services that they offer for YouTube content.
Flash video also supports captions using the Distribution Exchange profile (DFXP) of W3C timed text format. The latest Flash authoring software adds free player skins and caption components that enable viewers to turn captions on/off during playback from a web page. Previous versions of Flash relied on the Captionate 3rd party component and skin to caption Flash video. Custom Flash players designed in Flex can be tailored to support the timed-text exchange profile, Captionate .XML, or SAMI file (e.g. Hulu captioning). This is the preferred method for most US broadcast and cable networks that are mandated by the U.S. Federal Communications Commission to provide captioned on-demand content. The media encoding firms generally use software such as MacCaption to convert EIA-608 captions to this format. The Silverlight Media Framework also includes support for the timed-text exchange profile for both download and adaptive streaming media.
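As a hedged sketch of what the timed-text exchange profile looks like on the wire: DFXP/TTML is XML, with each caption in a p element carrying begin and end times. The tiny document below uses the current W3C TTML namespace; older DFXP files use a different namespace, and real-world files carry styling and layout that this ignores.

    import xml.etree.ElementTree as ET

    TTML = (
        '<tt xmlns="http://www.w3.org/ns/ttml"><body><div>'
        '<p begin="00:00:01.000" end="00:00:03.000">I got the machine ready.</p>'
        '<p begin="00:00:04.000" end="00:00:05.500">[engine starting]</p>'
        '</div></body></tt>'
    )

    ns = {"tt": "http://www.w3.org/ns/ttml"}
    for p in ET.fromstring(TTML).findall(".//tt:p", ns):
        print(p.get("begin"), "->", p.get("end"), "|", p.text)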
Windows Media Video can support closed captions for both video on demand streaming or live streaming scenarios. Typically Windows Media captions support the SAMI file format but can also carry embedded closed caption data.
The EBU-TT-D distribution format supports multiple players across multiple platforms (see EBU-TT-D Subtitling (Captions) Distribution Format under External links).
QuickTime video supports raw 608 caption data via a proprietary closed caption track, which consists of EIA-608 byte pairs wrapped in a QuickTime packet container with different IDs for the two line 21 fields. These captions can be turned on and off and appear in the same style as TV closed captions, with all the standard formatting (pop-on, roll-up, paint-on), and can be positioned and split anywhere on the video screen. QuickTime closed caption tracks can be viewed in Macintosh or Windows versions of QuickTime Player, iTunes (via QuickTime), iPod Nano, iPod Classic, iPod Touch, iPhone, and iPad.
Theatre
Live plays can be open-captioned by a captioner who displays lines from the script, including non-speech elements, on a large display screen near the stage. Software is also now available that automatically generates the captioning and streams it to individuals sitting in the theater, who view it using heads-up glasses or on a smartphone or computer tablet.
Telephones
A captioned telephone is a telephone that displays real-time captions of the current conversation. The captions are typically displayed on a screen embedded into the telephone base.
Video conferencing
Some online video conferencing services, such as Google Meet, offer the ability to display captions in real time of the current conversation.
Media monitoring services
In the United States especially, most media monitoring services capture and index closed captioning text from news and public affairs programs, allowing them to search the text for client references. The use of closed captioning for television news monitoring was pioneered by Universal Press Clipping Bureau (Universal Information Services) in 1992, and later in 1993 by Tulsa-based NewsTrak of Oklahoma (later known as Broadcast News of Mid-America, acquired by video news release pioneer Medialink Worldwide Incorporated in 1997). US patent 7,009,657 describes a "method and system for the automatic collection and conditioning of closed caption text originating from multiple geographic locations" as used by news monitoring services.
Conversations
Software programs are available that automatically generate a closed-captioning of conversations. Examples of such conversations include discussions in conference rooms, classroom lectures, or religious services.
Non-linear video editing systems and closed captioning
In 2010, Vegas Pro, the professional non-linear editor, was updated to support importing, editing, and delivering CEA-608 closed captions. Vegas Pro 10, released on October 11, 2010, added several enhancements to the closed captioning support. TV-like CEA-608 closed captioning can now be displayed as an overlay when played back in the Preview and Trimmer windows, making it easy to check placement, edits, and timing of CC information. CEA-708-style closed captioning is automatically created when the CEA-608 data is created. Line 21 closed captioning is now supported, as well as HD-SDI closed captioning capture and print from AJA and Blackmagic Design cards. Line 21 support provides a workflow for existing legacy media. Other improvements include increased support for multiple closed captioning file types, as well as the ability to export closed caption data for DVD Architect, YouTube, RealPlayer, QuickTime, and Windows Media Player.
In mid-2009, Apple released Final Cut Pro version 7 and began support for inserting closed caption data into SD and HD tape masters via FireWire and compatible video capture cards. Up until this time, it was not possible for video editors to insert caption data with both CEA-608 and CEA-708 to their tape masters. The typical workflow included first printing the SD or HD video to a tape and sending it to a professional closed caption service company that had a stand-alone closed caption hardware encoder.
This new closed captioning workflow known as e-Captioning involves making a proxy video from the non-linear system to import into a third-party non-linear closed captioning software. Once the closed captioning software project is completed, it must export a closed caption file compatible with the non-linear editing system. In the case of Final Cut Pro 7, three different file formats can be accepted: a .SCC file (Scenarist Closed Caption file) for Standard Definition video, a QuickTime 608 closed caption track (a special 608 coded track in the .mov file wrapper) for standard-definition video, and finally a QuickTime 708 closed caption track (a special 708 coded track in the .mov file wrapper) for high-definition video output.
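The .SCC format mentioned above is itself plain text: a "Scenarist_SCC V1.0" header, then lines pairing a timecode with hex-encoded EIA-608 byte pairs. A minimal, hedged reader follows; the sample payload and loop are our own construction, not Apple's or any vendor's documentation.

    scc_lines = [
        "Scenarist_SCC V1.0",
        "",
        # RCL, "HI THERE", then EOC (every byte carries odd parity)
        "00:00:02:00\t9420 9420 c849 2054 c845 5245 942f 942f",
    ]

    for line in scc_lines:
        if "\t" not in line:
            continue                      # skip the header and blank lines
        timecode, payload = line.split("\t", 1)
        pairs = [(int(w[:2], 16) & 0x7F, int(w[2:], 16) & 0x7F)
                 for w in payload.split()]   # strip parity from each byte
        print(timecode, pairs)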
Alternatively, Matrox video systems devised another mechanism for inserting closed caption data by allowing the video editor to include CEA-608 and CEA-708 in a discrete audio channel on the video editing timeline. This allows real-time preview of the captions while editing and is compatible with Final Cut Pro 6 and 7.
Other non-linear editing systems indirectly support closed captioning only in Standard Definition line-21. Video files on the editing timeline must be composited with a line-21 VBI graphic layer known in the industry as a "blackmovie" with closed caption data. Alternately, video editors working with the DV25 and DV50 FireWire workflows must encode their DV .avi or .mov file with VAUX data which includes CEA-608 closed caption data.
Logo
The current and most familiar logo for closed captioning consists of two Cs (for "closed captioned") inside a television screen. It was created at WGBH. The other logo, trademarked by the National Captioning Institute, is that of a simple geometric rendering of a television set merged with the tail of a speech balloon; two such versions exist – one with a tail on the left, the other with a tail on the right.
See also
Speech-to-text reporter (captioner), an occupation
Fansub
Same Language Subtitling
Synchronized Accessible Media Interchange (SAMI) file format
Sign language on television
Subtitle (captioning)
Surtitles
Synchronized Multimedia Integration Language (SMIL) file format
References
Sources
Realtime Captioning... The VITAC Way by Amy Bowlen and Kathy DiLorenzo (no ISBN)
BBC Subtitles (Captions) Editorial Guidelines
Closed Captioning: Subtitling, Stenography, and the Digital Convergence of Text with Television by Gregory J. Downey
The Closed Captioning Handbook by Gary D. Robson
Alternative Realtime Careers: A Guide to Closed Captioning and CART for Court Reporters by Gary D. Robson
A New Civil Right: Telecommunications Equality for Deaf and Hard of Hearing Americans by Karen Peltz Strauss
Enabling The Disabled by Michael Karagosian (no ISBN)
External links
Closed Captioning of Video Programming - 47 C.F.R. 79.1 - From the Federal Communications Commission Consumer & Governmental Affairs Bureau
FCC Consumer Facts on Closed Captioning
Alan Newell, Inventor of Closed Captioning, Teletext for the Deaf, 1982
Closed Captioned TV: A Resource for ESL Literacy Education - From the Education Resources Information Center Clearinghouse for ESL Literacy Education, Washington D.C.
Bill Kastner: The Man Behind Closed Captioning
First Sears Telecaption adapter advertised in 1980 Sears catalog
BBC Best Practice Guidelines for Captioning and Subtitling (UK)
EBU-TT-D Subtitling (Captions) Distribution Format
Subtitling
Assistive technology
Deafness
Television terminology
High-definition television
Transcription (linguistics)
de:Untertitel#Technische Ausführungen |
30857665 | https://en.wikipedia.org/wiki/F%C3%B3rum%20Internacional%20Software%20Livre | Fórum Internacional Software Livre | Fórum Internacional de Software Livre (FISL) (International Free Software Forum) is an event sponsored by Associação SoftwareLivre.org (Free Software Association), a Brazilian NGO that, among other goals, seeks the promotion and adoption of free software. It takes place every year in Porto Alegre, the capital of Rio Grande do Sul.
The event is meant as a "get-together" of students, researchers, social movements for freedom of information, entrepreneurs, information technology (IT) enterprises, governments, and other interested people. It is considered one of the world's largest free software events, hosting technical, political and social debates in an integrated way. It gathers discussions, talks, personalities and novelties, both national and international, from the free software world.
Event history
On 30 July 1999, a group of public servants, professors, students and members of the academic community, members of user groups and other interested people, joined efforts to start the PSL-RS (Free Software Project of Rio Grande do Sul).
The project's goals were:
To set up a network of labs in companies and universities devoted to the study of Linux and other free software;
To build a consortium to publish books and manuals about Free Software, Free Programming languages and such;
To massively promote free software.
After much difficulty and many delays, the first FISL took place on 4–5 May 2000 in the Noble Hall of UFRGS (Rio Grande do Sul's Federal University). The event was attended by 2,120 people and also included the first WSL (Free Software Workshop), with 19 works presented during the event.
Among others, the first FISL had such notable attendees as Richard Stallman, author of the GNU Manifesto and founder of the Free Software Foundation.
Since then, the event has grown considerably in terms of visibility, influence and number of attendees.
FISL 18
FISL 18 took place from July 11 to 14, 2018, in the PUCRS Convention Center, Porto Alegre.
FISL 17
FISL 17 took place from July 13 to 16, 2016, in the PUCRS Convention Center, Porto Alegre. There were 3,937 attendees.
FISL 16
FISL 16 took place from July 8 to 11, 2015, in the PUCRS Convention Center, Porto Alegre. There were 5,281 attendees.
FISL 15
FISL 15 took place from May 7 to 10, 2014, in the PUCRS Convention Center, Porto Alegre. There were 6,017 attendees.
FISL 14
FISL 14 took place from July 3 to 6, 2013, in the PUCRS Convention Center, Porto Alegre. There were 7,217 attendees.
FISL 13
FISL 13 took place from July 25 to 28, 2012, in the PUCRS Convention Center, Porto Alegre. There were 7,709 attendees.
FISL 12
The 12th edition of the International Free Software Forum took place from June 29 to July 2, 2011, and its main subject was "net neutrality", reflecting the belief that content must be equally accessible to every person, with no interference in online traffic. In keeping with this edition's subject, a contest was held to choose the event's logo for FISL's graphic work; Raphael Greque's design was chosen in an election among ASL's associates. Freedom as a subject was also present in various discussions, in an effort to contribute to the technological and socio-cultural evolution of the country. The edition also aimed to offer participants the opportunity to get up to date on the use of free software in many different areas. The event offered talks and workshops in areas such as mobility, free software in education, robotics, free networks, free culture, digital inclusion and women in IT.
Representatives of both the Rio Grande do Sul and national public sectors attend FISL every year and strengthen their commitment to free software; hence the participation of the then Brazilian President Luiz Inácio Lula da Silva in 2009. In this edition, FISL hosted the Minister of Science and Technology, Aloisio Mercadante; the Rio Grande do Sul Governor, Tarso Genro; the Porto Alegre Mayor, José Fortunati; the President of the Federal Data Processing Service, Marcos Mazoni; the Manager of Technical Innovations of the Ministry of Planning, Corinto Meffe; the Secretary of Logistics and IT of the Ministry of Planning, Delfino Natal de Souza; the President of the Rio Grande do Sul Data Processing Company (Procergs), Carlson Aquistapasse; the President of the Data Processing Company of Porto Alegre (PROCEMPA), André Imar Kulczynski; federal deputies Paulo Pimenta and Manuela D'Avila; and other authorities.
Public hearing: the Minister of Science and Technology, Aloisio Mercadante, participated in a public hearing with FISL's organizers, people from the Hacker Transparency movement, and developers from Ubuntu, Debian, Slackware and ODF, among other groups, talking about how hackers can help increase government transparency and technological development.
FISL12 in numbers
Participants: 6,914
Brazilian states: 25 + Federal District
Countries: 13
User groups: 40
Convoys: 58
Tracks: 21
Proposal submissions: 581
Speeches: 352
Speakers: 521
Total traffic: 391 GB
Wireless traffic: 104 GB
Traffic peak: 151 Mbit/s
Max simultaneous hosts: 267
Page views: 311,922
Sponsors: 28
Solutions shown: 21
Supporters: 6
Exhibitors: 26
Organization team: 217 people
FISL 11
This edition took place from 21 to 24 July 2010. It hosted a total of 7,511 participants [2], who circulated between the Free Solutions and Business Fair and FISL itself, attending talks, workshops and debates. FISL 11 offered more than 500 activities, surpassing all previous editions. Innovation began with talk selection, which relied on the participation of registered attendees in choosing the program. This was also the first edition with translation into Libras (Brazilian sign language), aimed at the hearing-impaired. The activities of the Free Culture Festival received a larger room, in an effort to broaden debates about knowledge sharing in other areas of activity as well. Besides the record number of participants, 84 people helped build the largest free software event in Latin America together with the organization teams. The presence of the traditional convoys and user groups from various parts of Brazil and abroad confirmed the success, year after year. Up to the last day of the Forum, FISL 11's website had more than 2 million page views. Specialists in free software gave lectures at FISL 11, including Linux International's Jon "maddog" Hall, developer Jon Phillips, Mozilla Foundation executive director Mark Surman, and Git specialist Scott Chacon.
FISL 10
This edition took place from 24 to 27 June 2009, with names such as Jon "maddog" Hall and Peter Sunde, creator of the famous torrent-sharing site The Pirate Bay. With 8,232 registered participants, it was the largest edition ever [3]. The picture of President Lula hugging Peter Sunde, shot by Mariel Zasso, made the rounds of the blogosphere. [4]
Note: in this year the event's version-style naming was abandoned, and a normal running-number scheme was adopted.
FISL 9.0
The 2008 edition had more than 7,417 registered participants from 21 countries [5]. It was noted for the presence of companies such as Sun Microsystems, Google and Intel, and of personalities such as Linux International's Jon "maddog" Hall and Louis Suarez-Potts of OpenOffice.org. The event was held at the PUCRS Convention Center.
FISL 8.0
The 2007 edition hosted companies such as Sun Microsystems, IBM and Intel, and people like Jon "maddog" Hall of Linux International, X.Org's Keith Packard, Sun's Simon Phipps and Louis Suarez-Potts of OpenOffice.org. It gathered 5,363 participants [6] and took place at the FIERGS Convention Center. Nineteen activities mobilizing for fisl 8.0, including free software events, speeches, promotions, courses and institutional visits, were carried out in various Brazilian cities.
Among the outstanding attractions was the participation of Léa Fagundes, educational coordinator of the OLPC (One Laptop per Child) project at the UFRGS Laboratory of Cognitive Studies, who introduced a prototype of the laptop developed for public school children. Paul Singer, National Secretary of Solidarity Economy at the Ministry of Work and Employment, and Edgar Piccino, of the Institute of IT in the cabinet of the President of the Republic, gave a speech drawing parallels between the solidarity economy and the development model of free software. In the panel "Digital Communication and the Building of the Commons: viral networks, open spectra and the new ways of regulation", the sociologist Sérgio Amadeu da Silveira, the journalist Gustavo Gindre, Intervozes member João Brant and Jon "maddog" Hall debated the deep changes that digital technologies impose on the telecommunications sector.
FISL 7.0
The 2006 edition took place from 19 to 22 April and was marked by the presence of Richard Stallman and discussions around the GPLv3. Also notable were the discussions about digital inclusion, the technical speeches about software usage, and the community meetings about communication and popular participation. More information can be found at the fisl 7.0 website.
A total of 3,385 people participated (83.82% men and 12.67% women), along with 445 speakers, 119 journalists and 550 exhibitors. Speakers from 10 countries took part [7].
FISL 6.0
The 2005 edition had more than 4,300 registered participants, according to the official website. New modules, called GREVE and PAPERS, were integrated into the event management system.
In this year the event's organization decided to abandon the Roman numeral system and adopt the "version" scheme. To help indigenous and maroon communities of Rio Grande do Sul, a fee of R$3.00 (about US$1.62 as of December 2011) was charged at registration; the funds gathered were used to buy food that was donated to these communities.
The event's programme included technical speeches, panels and case studies in the banking, health, education, municipal management, hardware, networking and security areas, showing the strong evolution of applications over the preceding years and analysing trends for the coming years.
A total of 2,715 people participated (82.96% men and 13.54% women), along with 222 speakers, 298 scholars, 81 journalists and 500 exhibitors, and about 4,400 people visited the fair. 23 countries were represented [8].
V FISL
V FISL took place from 2 to 5 June 2004 at PUCRS. The PAPERS system came up as an update to the old management system. The 2004 edition gathered representatives from more than 35 countries and from all the Brazilian states plus the Federal District, with more than 300 speakers, 1,014 companies and institutions, and 4,854 participants.
IV FISL
IV FISL took place at the PUCRS Convention Center in Porto Alegre from 5 to 7 June 2003.
With more than 4,000 people registered and the presence of famous speakers such as Sérgio Amadeu (then president of the ITI, under the Presidency of Brazil) and Miguel de Icaza (GNOME Foundation), the event had as its main subject the use of free software in the private and public sectors. It was the first edition to use a dedicated event management system (called YES! Eventos).
III FISL
The 2002 edition took place from 2 to 4 May at the PUCRS Convention Center. The main subjects were the use of free software in the public sector, testimonies and case studies about free software use in companies, technical tutorials, and debates about the diffusion of free software in universities.
II FISL
The second edition of FISL took place from the 29th to the 31st, 2001, at the Federal University of Rio Grande do Sul. The official opening ceremony was led by the Governor of Rio Grande do Sul and attended by the President of the RS Legislative Assembly, Unesco representatives, mayors, universities and NGOs. The main speech, about software licensing and freedom, was given by Timothy Ney, a Free Software Foundation executive from the USA. Other subjects were private and corporate security, free software use in the armed forces, and popular computers.
I FISL
The first edition of fisl took place on 4 and 5 May 2000 at the Federal University of Rio Grande do Sul (UFRGS). The main activities were case studies of free software use and presentations of free software concepts.
Speakers
Founders and important members of major free software projects that have spoken at FISL include, among others:
Ralf Nolden: Maintainer of KDE's KDevelop IDE
Amir Taaki: Bitcoin developer and founder of Bitcoin Consultancy
Jon "maddog" Hall: Executive Director of Linux International
Larry Wall: Creator of the Perl programming language
Peter Salus: Author of "A Quarter Century of Unix" and "The Daemon, The Gnu and The Penguin"
Rik van Riel: Linux kernel developer
Timothy Ney: GNOME Foundation Executive Director
The seventh Fórum Internacional Software Livre took place from the 19th to the 22nd of April; more than 5,000 attendees registered.
The speakers included:
Keith Packard
Miguel de Icaza
Marcelo Tosatti
Jim Gettys
Jim McQuillan
Richard Stallman
Georg Greve
Aaron Seigo
Zack Rusin
The 2nd international GPLv3 conference was held in conjunction with FISL on April 21 and 22.
References
Further reading
External links
Associação Software Livre
Projeto Software Livre RS
Associação SoftwareLivre.org
Videos of the conference
Official Twitter @fisl_oficial
Official Identi.ca @fisl
Free-software events
Free-software conferences |
19788381 | https://en.wikipedia.org/wiki/3D%20Indiana | 3D Indiana | 3D Indiana is commercial educational software for teaching and research on human anatomy. The name is an acronym for Three-Dimensional Interactive Digital Anatomy. The software is based on the principles of volumetric anatomy, which uses three intersecting coordinate planes to locate the organs of the human body through mathematical calculation. This contrasts with the traditional method of describing the location of the organs in relation to one another; the gall bladder, for example, is traditionally described thus: 'It is situated on the right side of the body closely in contact with the inferior surface of the liver, along the right edge of the quadrate lobe in a shallow fossa extending from the right edge of the porta hepatis to the inferior lobe of the liver'.
History
This patent-pending software was designed by the Kalister Foundation led by Jerome Kalister, an alumnus of TD Medical College, Alapuzha, Kerala, India. A team of about fifteen professionals of the Foundation worked for nearly three years to create the software. It was formally unveiled at TD Medical College by Jerome Kalister on 13 October 2008 before an audience of more than a thousand students and staff of various medical colleges in Kerala.
Features
According to its creators, 3D Indiana is a virtual human body whose use would be complementary to the conventional mode of anatomy studies; eventually the software may replace human cadavers in anatomical studies. Without any additional programming, a researcher could elicit normal body responses from this virtual body, giving the package potential as a tool for conducting clinical trials of drugs and chemicals within it.
Every named structure in the body is digitally recreated in detail and deployed in its true anatomical position. By clicking on any part of the body with the mouse, the user can display the name of that part and can zoom in on, rotate or isolate it for further detail.
Endorsements
The 3D Indiana package has already been endorsed by the Anatomical Society of India. The Kalister Foundation is in discussion with the National Rural Health Mission about endorsing 3D Indiana as educational software suitable for use in medical colleges.
See also
Visible Human Project
Primal Pictures
References
External links
3D Indiana
National Rural Health Mission
Anatomical simulation |
29593019 | https://en.wikipedia.org/wiki/AirPrint | AirPrint | AirPrint is a feature in Apple Inc.'s macOS and iOS operating systems for printing without installing printer-specific drivers. Connection is via a wireless LAN (Wi-Fi), either directly to AirPrint-compatible printers, or to non-compatible shared printers by way of a computer running Microsoft Windows, Linux, or macOS. AirPrint was originally intended for iOS devices and connected via a Wi-Fi network only, and thus required a Wi-Fi access point. However, with the introduction of AirPrint to the macOS desktop platform in 2012, Macs connected to the network via Ethernet could also print using the AirPrint protocol, not just those connected via Wi-Fi. Direct Wi-Fi connection between the device and the printer is not supported by default, but has appeared as the 'HP ePrint Wireless Direct AirPrint' feature.
History and printer compatibility
Following the iPad's introduction in 2010, user concerns were raised about the product's inability to print, at least through a supported Apple solution. Apple founder and CEO Steve Jobs reportedly replied "It will come" in May 2010 to a user request for printing.
AirPrint's Fall 2010 introduction, as part of iOS 4.2, gave iPhones and iPads printing capability for the first time. AirPrint for Mac computers was introduced in the Mac OS X Lion release.
At launch, twelve printers were AirPrint compatible, all of them from the HP Photosmart Plus e-All-in-One series. As of July 2020, that number had grown to about 6,000 compatible printer models from two dozen different manufacturers. The current list can be found on Apple's support site.
Legacy printer support
A number of software solutions allow for non-AirPrint printers to be used with iOS devices, by configuring support on an intermediary system accessible via Wi-Fi, connected to the printer. Since AirPrint is driverless, such a configuration compensates for the printer's lack of native AirPrint support by using the drivers on the intermediary system instead.
The simplest solution for all platforms is to create a new Bonjour service that leads iOS clients to believe they are talking to an AirPrint device. Many blog posts and commercial software products exist to accomplish this, as well as open-source solutions for Linux. This works in many cases because AirPrint is an extension of the Internet Printing Protocol (IPP), which many printers already support either directly or as a result of being shared through an intermediary system (typically CUPS, the Mac/Linux printing system). This approach is limited, however, as the AirPrint-specific components of the protocol are missing, which can lead to compatibility issues and unexpected results. Some software packages address this fully by translating between the two dialects of IPP, avoiding compatibility issues, while most simply re-share printers using the AirPrint service name.
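As a rough illustration of the re-sharing approach, the sketch below uses the third-party python-zeroconf library to advertise an existing shared CUPS/IPP queue under an AirPrint-style Bonjour record. The queue path, printer name, IP address and TXT values are invented placeholders, and real deployments typically need additional TXT keys; this is a sketch of the discovery step only, not a complete AirPrint bridge.

```python
# Sketch: advertise an existing shared IPP/CUPS queue over Bonjour so that
# AirPrint clients can discover it. Assumes the third-party "zeroconf"
# package (python-zeroconf). All names, addresses and TXT values below are
# illustrative placeholders, not values mandated by AirPrint.
import socket
from zeroconf import ServiceInfo, Zeroconf

txt = {
    "txtvers": "1",
    "rp": "printers/legacy",             # resource path of the CUPS queue
    "ty": "Legacy Office Printer",       # human-readable printer name
    "pdl": "application/pdf,image/urf",  # formats offered to iOS clients
}

info = ServiceInfo(
    "_ipp._tcp.local.",
    "Legacy Office Printer._ipp._tcp.local.",
    addresses=[socket.inet_aton("192.168.1.10")],  # host sharing the queue
    port=631,                                      # standard IPP port
    properties=txt,
)

zc = Zeroconf()
zc.register_service(info)  # announce the queue on the local network
try:
    input("Announcing printer; press Enter to stop.\n")
finally:
    zc.unregister_service(info)
    zc.close()
```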
For Microsoft Windows, there are free and paid solutions.
On macOS, a Bonjour service exists that enables AirPrint support for legacy printers. Commercial macOS software for this purpose includes Netputing handyPrint and Ecamm Printopia.
In most Linux distributions, AirPrint support is automatic with the default CUPS printing subsystem since version 1.4.6 (shipped, for example, with Trisquel 5 and Ubuntu 11.04). CUPS servers before version 1.4.6 can also be configured manually, by adding DNS-SD printer service discovery records to a name server that supports DNS-based Service Discovery.
Apps and utilities
There are a number of third-party solutions, available on the Apple App Store and elsewhere, that allow printing to legacy printers directly or via an application helper. One example is Netgear Genie, available for both Mac OS X 10.6 or above and Windows XP, Vista, 7 and 8. Genie permits any shared, network-attached printer to be made accessible via AirPrint; the application is free for customers of current Netgear routers. Printopia Pro is a commercial solution designed to allow AirPrint to work on large business and education networks. It offers features useful to large organizations, including centralized management and directory integration, and allows AirPrint to operate across subnets. It requires a server running Mac OS X 10.7 or later, and one server can potentially serve an entire organization.
See also
Google Cloud Print
Internet Printing Protocol
Mopria Alliance
References
IOS
Computer printing
Printing protocols |
33820549 | https://en.wikipedia.org/wiki/PhpList | PhpList | phpList is open-source software for managing mailing lists.
It is designed for the dissemination of information, such as newsletters, news and advertising, to a list of subscribers. It is written in PHP and uses a MySQL database to store the information. phpList is free and open-source software subject to the terms of the Affero General Public License (AGPL).
Overview
phpList can manage lists of subscribers and send e-mail messages to large numbers of them.
Subscription management, registration, changes to personal data, and unsubscribe requests are automated.
Subscriptions to one or more lists are made through a subscription page that can be integrated into a website. The information requested during registration, for example country of residence, language, date of birth, or favorite food, is determined by the list administrators and can be modified at any time. This information can then be used for targeted messaging, i.e. sending may be limited to subscribers who meet certain criteria, such as country or city of residence. The public interface is available in 35 languages. The documentation detailing the functions and features of the software is available in English and is partially translated into Spanish, French, and Dutch.
This software is comparable to GNU Mailman or Sympa, which also manage large-scale mailing lists, but there are two major differences. First, phpList is only for direct mail, not for discussion: people who subscribe to a list receive messages from the list but cannot reply to it. Second, unlike discussion-list software, phpList can send messages to only some of the subscribers in a list, based on complex criteria determined by the administrator.
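As a rough sketch of such criteria-based targeting (using an invented toy schema, not phpList's actual database structure), a send might select only the subscribers matching administrator-defined attributes:

```python
# Sketch: criteria-based selection of subscribers, in the spirit of
# phpList's targeted sending. The table layout and data are an invented
# toy example, not phpList's real schema.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE subscriber (email TEXT, country TEXT, language TEXT)")
db.executemany(
    "INSERT INTO subscriber VALUES (?, ?, ?)",
    [
        ("a@example.org", "BR", "pt"),
        ("b@example.org", "FR", "fr"),
        ("c@example.org", "BR", "en"),
    ],
)

# Target only subscribers in Brazil whose preferred language is Portuguese.
targets = db.execute(
    "SELECT email FROM subscriber WHERE country = ? AND language = ?",
    ("BR", "pt"),
).fetchall()
print([email for (email,) in targets])  # ['a@example.org']
```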
The software is useful for anyone who wants to manage a subscriber database that is more than just a collection of e-mail addresses. phpList allows targeted sending based on criteria that can be very complex, making it a useful tool for information campaigns by large companies, associations, or small projects with a wide audience.
Operating systems supported
The software was developed for the Linux, Apache and MySQL stack (see LAMP), but it is also compatible with OpenBSD, FreeBSD, OS X, and Windows.
Database connectivity
phpList is designed to connect to a MySQL database. With its support for ADOdb it is possible to extend the connectivity to other databases such as PostgreSQL, Microsoft SQL Server, SQLite, Sybase, IBM DB2, and Oracle.
See also
List of mailing list software
LAMP (software bundle)
References
External links
phpList on SitePoint
Free mailing list software
Mailing list software for Linux
Free software programmed in PHP
Email marketing software
Software using the GNU AGPL license |
20018805 | https://en.wikipedia.org/wiki/Wind%20resource%20assessment | Wind resource assessment | Wind resource assessment is the process by which wind power developers estimate the future energy production of a wind farm. Accurate wind resource assessments are crucial to the successful development of wind farms.
History
Modern wind resource assessments have been conducted since the first wind farms were developed in the late 1970s. The methods used were pioneered by developers and researchers in Denmark, where the modern wind power industry first developed.
Wind resource maps
High resolution mapping of wind power resource potential has traditionally been carried out at the country level by government or research agencies, in part due to the complexity of the process and the intensive computing requirements involved. However, in 2015 the Technical University of Denmark, under framework of the Clean Energy Ministerial, launched the Global Wind Atlas (version 1.0) to provide freely available data on wind resource potential globally. The Global Wind Atlas was relaunched in November 2017 (version 2.0) in partnership with the World Bank, with wind resource maps now available for all countries at 250m resolution.
Another similar international example is the European Wind Atlas, which is in the process of being updated under the New European Wind Atlas project funded by the European Union.
Examples of country wind resource maps include the Canadian Wind Atlas, the Wind Resource Atlas of the United States, and a series of wind maps published by the World Bank under an initiative launched by ESMAP in 2013 focused on developing countries. This followed a previous initiative of the United Nations Environment Program, the Solar and Wind Energy Resource Assessment (SWERA) project, which was launched in 2002 with funding from the Global Environment Facility. However, these country wind resource maps have been largely superseded by the Global Wind Atlas in terms of data quality, methodology, and output resolution.
The above global and country mapping outputs, and many others, are also available via the Global Atlas for Renewable Energy developed by the International Renewable Energy Agency (IRENA), an effort that brings together publicly available GIS data on wind and other renewable energy resources.
Wind prospecting can begin with the use of such maps, but the lack of accuracy and fine detail make them useful only for preliminary selection of sites for collecting wind speed data. With increasing numbers of ground-based measurements from specially installed anemometer stations, as well as operating data from commissioned wind farms, the accuracy of wind resource maps in many countries has improved over time, although coverage in most developing countries is still patchy. In addition to the publicly available sources listed above, maps are available as commercial products through specialist consultancies, or users of GIS software can make their own using publicly available GIS data such as the US National Renewable Energy Laboratory's High Resolution Wind Data Set.
Although the accuracy has improved, it is unlikely that wind resource maps, whether public or commercial, will eliminate the need for on-site measurements for utility-scale wind generation projects. However, mapping can help speed up the process of site identification and the existence of high quality, ground-based data can shorten the amount of time that on-site measurements need to be collected.
In addition to 'static' wind resource atlases which average estimates of wind speed and power density across multiple years, tools such as Renewables.ninja provide time-varying simulations of wind speed and power output from different wind turbine models at an hourly resolution.
Measurements
To estimate the energy production of a wind farm, developers must first measure the wind on site. Meteorological towers equipped with anemometers, wind vanes, and sometimes temperature, pressure, and relative humidity sensors are installed. Data from these towers must be recorded for at least one year to calculate an annually representative wind speed frequency distribution.
Since onsite measurements are usually only available for a short period, data is also collected from nearby long-term reference stations (usually at airports). This data is used to adjust the onsite measured data so that the mean wind speeds are representative of a long-term period for which onsite measurements are not available. Versions of these maps can be seen and used with software applications such as WindNavigator.
Calculations
The following calculations are needed to accurately estimate the energy production of a proposed wind farm project:
Correlations between onsite meteorological towers:
Multiple meteorological towers are usually installed on large wind farm sites. For each tower there will be periods when data is missing but has been recorded at another onsite tower. Least-squares linear regression and other, more wind-specific regression methods can be used to fill in the missing data. These correlations are more accurate if the towers are located near each other (a few kilometers apart), the sensors on the different towers are of the same type, and the sensors are mounted at the same height above the ground.
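As a minimal sketch of this gap-filling step, the snippet below fits an ordinary least-squares line on the timestamps where both towers report, then fills tower A's gaps from tower B; the array names and wind speed values are invented for illustration.

```python
# Sketch: fill gaps in tower A's wind-speed record using tower B.
# A least-squares line is fitted on concurrent data, then applied where
# only tower B reports. All values are illustrative.
import numpy as np

tower_a = np.array([5.1, 4.8, np.nan, 6.0, np.nan, 5.5])  # m/s, with gaps
tower_b = np.array([5.4, 5.0, 6.2, 6.1, 4.9, 5.6])        # m/s, complete

both = ~np.isnan(tower_a) & ~np.isnan(tower_b)
slope, intercept = np.polyfit(tower_b[both], tower_a[both], 1)

filled = tower_a.copy()
gaps = np.isnan(tower_a) & ~np.isnan(tower_b)
filled[gaps] = slope * tower_b[gaps] + intercept
print(filled)
```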
Correlations between long term weather stations and onsite meteorological towers:
Because the wind varies from year to year, and the power produced is related to the cube of the wind speed, short-term (fewer than 5 years) onsite measurements can result in highly inaccurate energy estimates. Therefore, wind speed data from nearby longer-term weather stations (usually located at airports) are used to adjust the onsite data. Least-squares linear regression is usually used, although several other methods exist as well.
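The same regression idea extends to this long-term adjustment, often called measure-correlate-predict (MCP): fit on the concurrent period, then apply the fit to the reference station's long-term statistics. A minimal sketch with invented numbers:

```python
# Sketch of a simple measure-correlate-predict (MCP) adjustment. Fit
# onsite vs. reference over the concurrent period, then estimate the
# long-term onsite mean from the reference station's long-term mean.
# All values are illustrative.
import numpy as np

onsite = np.array([6.2, 5.9, 6.5, 6.1])  # concurrent onsite means (m/s)
ref = np.array([5.8, 5.5, 6.1, 5.7])     # concurrent reference means (m/s)
ref_longterm_mean = 5.6                  # e.g. a 20-year airport mean (m/s)

slope, intercept = np.polyfit(ref, onsite, 1)
onsite_longterm_mean = slope * ref_longterm_mean + intercept
print(f"Estimated long-term onsite mean: {onsite_longterm_mean:.2f} m/s")
```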
Vertical shear to extrapolate measured wind speeds to turbine hub height:
The hub heights of modern wind turbines are usually 80 m or greater, but developers are often reluctant to install measurement towers taller than 60 m because of cost and, in the US, the need for FAA permitting. The power law and log law vertical shear profiles are the most common methods of extrapolating measured wind speeds to hub height.
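Both profiles are simple closed-form expressions. The sketch below extrapolates a 60 m measurement to an assumed 80 m hub height; the shear exponent and roughness length are illustrative values that in practice are derived from site measurements.

```python
# Sketch: extrapolate a measured wind speed to hub height using the
# power law and the log law. Parameter values are illustrative only.
import math

v_meas, z_meas, z_hub = 6.0, 60.0, 80.0   # m/s, m, m

alpha = 0.14                              # power-law shear exponent (assumed)
v_power = v_meas * (z_hub / z_meas) ** alpha

z0 = 0.03                                 # roughness length in m (open grassland)
v_log = v_meas * math.log(z_hub / z0) / math.log(z_meas / z0)

print(f"power law: {v_power:.2f} m/s, log law: {v_log:.2f} m/s")
```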
Wind flow modeling to extrapolate wind speeds across a site:
Wind speeds can vary considerably across a wind farm site if the terrain is complex (hilly) or there are changes in roughness (the height of vegetation or buildings). Wind flow modeling software, based on either the traditional WAsP linear approach or the newer CFD approach, is used to calculate these variations in wind speed.
Energy production using a wind turbine manufacturer's power curve:
When the long-term hub-height wind speeds have been calculated, the manufacturer's power curve is used to calculate the gross electrical energy production of each turbine in the wind farm.
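For example, gross annual energy can be estimated by weighting an interpolated power curve by the hub-height wind speed frequency distribution. In the sketch below, both the power curve points and the frequency distribution are invented for illustration rather than taken from any real turbine.

```python
# Sketch: gross annual energy from a power curve and a hub-height
# wind-speed frequency distribution. Curve and frequencies are invented.
import numpy as np

curve_ws = np.array([3, 5, 7, 9, 11, 13, 15, 25])                 # m/s
curve_kw = np.array([0, 150, 550, 1200, 1900, 2000, 2000, 2000])  # kW

bins_ws = np.arange(0, 26)                        # 1 m/s wind-speed bins
freq = np.exp(-((bins_ws - 7.5) ** 2) / 18.0)     # toy frequency distribution
freq /= freq.sum()                                # normalise to sum to 1

power_kw = np.interp(bins_ws, curve_ws, curve_kw, left=0.0, right=0.0)
gross_mwh = (power_kw * freq).sum() * 8760 / 1000.0  # 8760 hours per year
print(f"Gross energy per turbine: {gross_mwh:.0f} MWh/yr")
```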
Application of energy loss factors:
To calculate the net energy production of a wind farm, the following loss factors are applied to the gross energy production (a minimal calculation sketch follows the list):
wind turbine wake loss
wind turbine availability
electrical losses
blade degradation from ice/dirt/insects
high/low temperature shutdown
high wind speed shutdown
curtailments due to grid issues
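Assuming the losses are independent and multiplicative (a common simplification), net energy is the gross energy multiplied by the product of one minus each loss fraction. The individual loss values below are placeholder assumptions:

```python
# Sketch: apply multiplicative loss factors to gross energy production.
# The individual loss fractions are placeholder assumptions.
gross_mwh = 6500.0
losses = {
    "wake": 0.08,
    "availability": 0.03,
    "electrical": 0.02,
    "blade degradation": 0.01,
    "temperature shutdown": 0.005,
    "high-wind shutdown": 0.005,
    "grid curtailment": 0.01,
}

net_mwh = gross_mwh
for fraction in losses.values():
    net_mwh *= 1.0 - fraction
print(f"Net energy: {net_mwh:.0f} MWh/yr")
```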
Software applications
Wind power developers use various types of software applications to assess wind resources.
Wind data management
Wind data management software assists the user in gathering, storing, retrieving, analyzing, and validating wind data. Typically the wind data sets are collected directly from a data logger, located at a meteorological monitoring site, and are imported into a database. Once the data set is in the database it can be analyzed and validated using tools built into the system or it can be exported for use in external wind data analysis software, wind flow modeling software, or wind farm modeling software.
Many data logger manufacturers offer wind data management software that is compatible with their logger. These software packages will typically only gather, store, and analyze data from the manufacturer's own loggers.
Third party data management software and services exist that can accept data from a wide variety of loggers and offer more comprehensive analysis tools and data validation.
Wind data analysis
Wind data analysis software assists the user in removing measurement errors from wind data sets and performs specialized statistical analyses.
Atmospheric simulation modeling
Wind flow modeling methods calculate very high-resolution maps of wind flow, often at horizontal resolutions finer than 100 m. To avoid exceeding the available computing resources when modeling at fine resolution, the typical model domains used by these small-scale models span a few kilometers in the horizontal direction and several hundred meters in the vertical direction. Models with such a small domain are not capable of capturing the mesoscale atmospheric phenomena that often drive wind patterns. To overcome this limitation, nested modeling is sometimes used.
Wind flow modeling
Wind flow modeling software aims to predict important characteristics of the wind resource at locations where measurements are not available. The most commonly used such software application is WAsP, created at Risø National Laboratory in Denmark. WAsP uses a potential flow model to predict how wind flows over the terrain at a site. Meteodyn WT and WindStation are similar applications that use computational fluid dynamics (CFD) calculations instead, which are potentially more accurate, particularly for complex terrains.
Wind farm modeling
Wind farm modeling software aims to simulate the behavior of a proposed or existing wind farm, most importantly to calculate its energy production. The user can usually input wind data, height and roughness contour lines, wind turbine specifications, background maps, and define objects that represent environmental restrictions. This information is then used to design a wind farm that maximizes energy production while taking restrictions and construction issues into account. There are several wind farm modeling software applications available, including ZephyCFD, Meteodyn WT, Openwind, Windfarmer, WindPRO, WindSim, and WAsP.
Medium scale wind farm modelling
In recent years a new breed of wind farm development has grown from the increased need for distributed generation of electricity from local wind resources. This type of wind project is mostly driven by landowners with high energy requirements, such as farmers and industrial site managers. A particular requirement from a wind modelling point of view is the inclusion of all local features, such as trees, hedges and buildings, as turbine hub heights range from as little as 10 m to 50 m. Wind modelling approaches need to include these features, but very few of the available commercial wind modelling packages provide this capability. Several working groups have been set up around the world to look into this modelling requirement; companies including Digital Engineering Ltd (UK), NREL (USA) and DTU Wind Energy (Denmark) are at the forefront of development in this area and are looking at the application of meso-CFD wind modelling techniques for this purpose.
References
Wind power |
3069269 | https://en.wikipedia.org/wiki/Opera%20Mini | Opera Mini | Opera Mini is a mobile web browser developed by Opera. It was primarily designed for the Java ME platform, as a low-end sibling for Opera Mobile, but it is now developed exclusively for Android. It was previously developed for iOS, Windows 10 Mobile, Windows Phone 8.1, BlackBerry, Symbian, and Bada. As of 2018, the Android build is the only version still under active development.
Opera Mini was derived from the Opera web browser. Opera Mini requests web pages through Opera Software's compression proxy server. The compression server processes and compresses requested web pages before sending them to the mobile phone. The compression ratio is 90% and the transfer speed is increased by two to three times as a result. The pre-processing increases compatibility with web pages not designed for mobile phones. However, interactive sites which depend upon the device processing JavaScript do not work properly.
In July 2012, Opera Software reported that Opera Mini had 168.8 million users as of March 2012. In February 2013, Opera reported 300 million unique Opera Mini active users and 150 billion page views served during that month. This represented an increase of 25 million users from September 2012.
History
Origin
Opera Mini was derived from the Opera web browser for personal computers, which has been publicly available since 1996. Opera Mini was originally intended for use on mobile phones not capable of running a conventional Web browser. It was introduced on 10 August 2005, as a pilot project in cooperation with the Norwegian television station TV 2, and only available to TV 2 customers.
A beta version was made available in Sweden, Denmark, Norway, and Finland on 20 October 2005. After the final version was launched in Germany on 10 November 2005, and quietly released to all countries through the Opera Mini website in December, the browser was officially launched worldwide on 24 January 2006.
On 3 May 2006, Opera Mini 2.0 was released. It included new features such as the ability to download files, new custom skins, more search engine options on the built-in search bar, a speed dial option, new search engines, and improved navigation.
On 1 November 2006, Opera Mini 3 beta introduced secure browsing, RSS feeds, photo uploading and content folding into its list of features and capabilities. Content folding works by folding long lists such as navigation bars into a single line that can be expanded as needed. A second beta was released on 22 November, and on 28 November, the final version of Opera Mini 3 was released.
Opera Mini 4
On 7 November 2007, Opera Mini 4 was released. According to Johan Schön, technical lead of Opera Mini development, the entire code was rewritten. Opera Mini 4 includes the ability to view web pages similarly to a desktop based browser by introducing Overview and Zoom functions, and a landscape view setting. In Overview mode, the user can scroll a zoomed-out version of certain web pages. Using a built-in pointer, the user can zoom into a portion of the page to provide a clearer view; this is similar to the functionality of Opera's Nintendo-based web browsers. This version also includes the ability to synchronise with Opera on a personal computer.
Prior to Opera Mini 4, the browser was offered in two editions: Opera Mini Advanced for high-memory MIDP 2 phones, and Opera Mini Basic for low-memory MIDP 1 phones. Opera Mini 4 replaced Opera Mini Advanced. Originally, Google was the default search engine on Opera Mini. On 8 January 2007, Opera Software and Yahoo! announced a partnership to make Yahoo! search the default instead. On 27 February 2008, Opera Software announced that Google would henceforth be the default search engine for Opera Mini and Opera Mobile.
A version for the Android operating system was announced on 10 April 2008. Rather than port the code to Android, a wrapper was created to translate Java ME API calls to Android API calls.
Later versions
On 16 August 2009, Opera Software released Opera Mini 5.0 beta, which included tabbed browsing, a password manager, improved touch screen support, and a new interface, with a visual Speed Dial similar to the one introduced by Opera Software in their desktop browser.
The browser's use of compression and encrypted proxy-based technology to reduce traffic and speed page display has the side effect of allowing it to circumvent several approaches to Internet censorship. Since 20 November 2009, there have been reports from Chinese users that when they use Opera Mini, they are redirected to an error page leading them to download Opera Mini China version. This is almost certainly due to the Chinese government being concerned that users are using Opera Mini to bypass the Great Firewall of China. Opera agreed to route all of their traffic through government servers.
In 2009–10, a press release announcing that Indonesia's Smart Telecom had chosen Opera Mini for its devices said that Opera Mini was the world's most popular mobile browser, and that Russia and Indonesia had its largest user bases.
An iPhone version was approved for distribution by the Apple App Store on 13 April 2010.
On 3 September 2014, Opera started taking registrations for the beta version of Opera Mini for Windows Phone. Opera Mini was released for Windows Phone six days later, on 9 September 2014, as a public beta. This marked Opera's return to Microsoft's mobile platform since the demise of Windows Mobile.
Functionality
Opera Mini uses a server to translate HTML, CSS and JavaScript into a more compact format. It can also shrink images to fit the handset's screen. These steps make Opera Mini fast.
Most Opera Mini versions use only the server-based compression method, with maximal compression but some issues with interactive web apps. Opera Mini can operate in three compression modes: "mini" (or "extreme" on Android versions), "turbo" (or "high" on Android versions) and uncompressed. The turbo and the mini modes reduce the amount of data transferred, and increase speed on the slower connections.
The functionality of the Mini mode is somewhat different from a conventional Web browser, with the amount of data which has to be transferred much reduced, but with some loss to functionality. Unlike straightforward web browsers, Opera Mini fetches all content through a proxy server, renders it using the Presto layout engine, and reformats web pages into a format more suitable for small screens. A page is compressed, then delivered to the phone in a markup language called Opera Binary Markup Language (OBML), which Opera Mini can interpret. According to Opera Software, the data compression makes transfer time about two to three times faster, and the pre-processing improves the display of web pages not designed for small screens. The turbo mode was added later and is similar to Mini mode but bypasses compression for interactive functionality, at the expense of less extreme data compression. The turbo and uncompressed modes use the "WebView" on Android and the WebKit layout engine on iOS.
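As a purely conceptual sketch of the data-saving idea behind such a compression proxy (Opera's actual OBML format and page transforms are proprietary and not reproduced here), the snippet below fetches a page server-side and reports how much smaller a compressed payload would be:

```python
# Conceptual sketch only: a proxy fetches a page, compresses it, and would
# forward the smaller payload to the handset. This illustrates the general
# data-saving idea; Opera Mini's real OBML format, page reformatting and
# transport are proprietary and far more elaborate.
import urllib.request
import zlib

with urllib.request.urlopen("https://example.com/") as resp:
    raw = resp.read()

compressed = zlib.compress(raw, 9)
saving = 100.0 * (1.0 - len(compressed) / len(raw))
print(f"{len(raw)} bytes -> {len(compressed)} bytes ({saving:.0f}% saved)")
```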
The Java ME and Windows Phone versions only have access to the mini compression mode. Other versions can switch between the various modes, gaining functionality at the cost of lower or no compression. Opera Software claims that Opera Mini reduces the amount of data transmitted by up to 90% in the mini (extreme) mode; in the turbo (high) mode, it reduces the amount by up to 60%, similar to Google Chrome's Reduced Data mode.
When a user browses the web using Opera Mini, the request is sent via whatever connectivity the device has available at that moment (mobile broadband, Wi-Fi, or any other option the device provides) to one of the Opera Software company's proxy servers, which retrieves the web page, processes and compresses it, and sends it back to the client (the user's mobile phone).
By default, Opera Mini opens one connection to the proxy servers, which it keeps open and re-uses as required. This improves transfer speed and enables the servers to quickly synchronize changes to bookmarks stored on the Opera Mini server.
The Opera Software company maintains over 100 proxy servers to handle Opera Mini traffic. They run Linux and are massively parallel and massively redundant.
Standard support
From 16 March 2015, Opera Mini's extreme compression mode uses an upgraded version of the Presto layout engine that is included in Opera 12. Consequently, Opera Mini supports most of the web standards supported in Opera 12. Presto's development has continued for Opera Mini and further support was added for HTML5 input types, CSS Flexbox model, CSS rem units and ECMAScript 5. However, unlike the desktop edition of Opera, frames are flattened because of client limitations, and dotted or dashed borders are displayed as solid borders due to bandwidth and memory issues. As Opera Mini reformats web pages, it does not pass the Acid2 standards compliance test. Opera Mini supports bi-directional text, meaning that it can correctly display right-to-left scripts such as Arabic and Hebrew in addition to languages written left-to-right. However, it will not display right-to-left text if the font size is set to small or very small. Indic and Chinese scripts are supported only if an appropriate font is installed on the device as the default system font.
Small-Screen Rendering
For devices with screens 128 pixels wide or smaller, the default rendering mode is Small-Screen Rendering (SSR). In this mode, the page is reformatted into a single vertical column so that it need only be scrolled vertically. Long lists and navigation bars are automatically collapsed (hiding most of the list or bar) by a feature known as "content folding". A plus (+) sign is displayed next to the collapsed content; when clicked, it toggles content folding. Web developers can turn on SSR on the desktop edition of Opera to see how their websites will be displayed on mobile editions of Opera. In SSR mode images are scaled down to no more than 70% of the screen size in either direction.
Complex script rendering
Opera Mini can send content in bitmap image form if a required font is not available on the device, which is useful for Indic scripts. Hindi, Bengali and a few other non-Latin character sets are supported.
JavaScript support
When browsing the Web in Opera Mini mode, JavaScript is processed by the proxy server, and is merely rendered on the device. This limits interactivity. Scripts cannot be run in the background on the device. If a script is paused (on the server), the browser must communicate with the server to unpause it. JavaScript will only run for a couple of seconds on the Mini server before pausing, due to resource constraints. On Opera Mini, before the page is sent to the mobile device, its onLoad events are fired and all scripts are allowed a maximum of two seconds to execute. The setInterval and setTimeout functions are disabled, so scripts designed to wait a certain amount of time before executing will not execute at all. After the scripts have finished or the timeout is reached, all scripts are stopped and the page is compressed and sent to the mobile device. Once on the device, only a handful of events are allowed to trigger scripts:
onUnload: Fires when the user navigates away from a page
onSubmit: Fires when a form is submitted
onChange: Fires when the value of an input control is changed
onClick: Fires when an element is clicked
When one of these events is triggered, it sends a request to the proxy server to process the event. The proxy server then executes the JavaScript and returns the revised page to the mobile device. Pop-ups, if not blocked by the JavaScript restrictions, replace the web page being viewed. Opera has published Web content authoring guidelines to assist authors.
Opera Mini can run in Turbo and Uncompressed modes, in addition to Mini mode. In Turbo mode, the amount of data transferred is still much reduced by compression, but, unlike Mini mode, JavaScript is not intercepted by the server and works properly.
Privacy and security
Opera Mini encrypts the connection between the mobile device and the Opera proxy server for security. The encryption key is obtained on the first start by requesting random keys a certain number of times. Opera Mini supports recent versions of the Transport Layer Security (TLS) protocol, and it also supports modern secure ciphers such as AES-GCM and ECC.
However, Opera Mini's Extreme mode does not offer true end-to-end security when visiting HTTPS-encrypted websites, as a trade-off for data saving. In Extreme/Mini mode, when the user visits an encrypted web page, Opera Mini's servers first decrypt the page, compress it to save data, re-encrypt it themselves, and finally forward it to the destination phone.
While browsing a secure site in High/Turbo mode or Uncompressed mode, the connection is not intercepted by the Opera Mini server, so these modes do not break end-to-end integrity.
Features
Opera Mini uses cloud acceleration and data compression technology. Opera Mini servers act as a proxy which compresses and renders the data of web pages before sending it to users. This process helps to load web content faster.
The display may be toggled between portrait and landscape mode by keystrokes, or will switch automatically on phones with orientation sensors. The default orientation can be changed.
The image quality may be set to "Low", "Medium", or "High". Page load times are affected by the chosen image quality setting.
Opera Mini supports only one font, which can be set to "Small", "Medium", or "Large" size. If a web page uses Courier or a generic monospaced font, the one font is still used, but the characters are spaced out so that each character takes up the same amount of space.
Browsing tools
Opera Mini's address bar is capable of using several pre-configured search engines. The user can add more search engines. The default search engines are Google and Wikipedia.
Opera Mini features an ad blocker. When activated, Opera Mini servers try to filter out advertisement before rendering the page and sending it to the client phone.
Opera Mini features an AI-powered news aggregator serving personalised news, a night mode, and private browsing. It can save bookmarks, download files, stream media, and save web pages for offline reading, and it remembers the user's browsing history.
Opera synchronization
By signing into an Opera account, saved bookmarks, Speed Dials and open tabs can be backed up and synchronized between different phones, or with the Opera browser on computers, using the Opera Sync service; they can also be accessed through a web interface.
Market adoption
The overall share of the Opera family in the mobile Web browser market was about 5.01% in June 2018.
Data centers
Opera Mini relies on data centers processing the Web page before sending it back to the phone in a compressed binary form. Opera Software operates data centers in the United States, Norway, China, Korea, Poland and Iceland.
Network operators
Several mobile network companies pre-install Opera Mini on their mobile phones, including Telenor, AT&T, Vodafone, T-Mobile, KDDI, Omnitel, Pannon GSM, Telefónica Móviles de España and TMN.
Devices
The following devices came with Opera Mini pre-installed. Some listed devices only included Opera Mini when bought from certain network operators.
Motorola V980, E2, L7, i1
Nokia Nokia Asha series, 2610, 2700 classic, 2730 classic, 3110 classic, 3120 classic, 3500 classic, 3600, 3600 slide, 3710 fold, 3720 classic, 5000, 5070, 5130, 5230, 5310, 5500 Sport, 5610, 6080, 6085, 6103, 6131, 6233, 6288, 6300, 6303 classic, 6600 slide, 7373, 8800 Arte, Nokia C2-01, Nokia C3, E65, N71, N73, N95, and 3310 (2017)
Sony Ericsson K310i, K530i, K550, W200i, W205, W760i, W910i, Z530i, Z550i, Z780i
Samsung X160, E570, E420, F480, X510, X650, E900, E250, U700, ZV60, D900i
LG K880, KU250, KE970, and KU311
SAGEM My411x and P9521
BenQ-Siemens EL71 and EF81
BenQ E71
Orange Rio (ZTE-G X991)
Although Opera Mini is not officially supported on Chrome OS, Vlad Filippov published a guide that explains how to run it inside the Chromium browser.
See also
Opera (web browser)
Opera Mobile
UC Browser
History of the web browser
List of web browsers
References
External links
Mini
Cross-platform software
Java device platform
Java platform software
IOS software
Android web browsers
Symbian software
BlackBerry software
Windows Mobile software
Mobile web browsers
Client/server split web browsers
Zeebo |
22709668 | https://en.wikipedia.org/wiki/Lean%20IT | Lean IT | Lean IT is the extension of lean manufacturing and lean services principles to the development and management of information technology (IT) products and services. Its central concern, applied in the context of IT, is the elimination of waste, where waste is work that adds no value to a product or service.
Although lean principles are generally well established and have broad applicability, their extension from manufacturing to IT is only just emerging. Lean IT poses significant challenges for practitioners while raising the promise of no less significant benefits. And whereas Lean IT initiatives can be limited in scope and deliver results quickly, implementing Lean IT is a continuing and long-term process that may take years before lean principles become intrinsic to an organization's culture.
Extension to IT
As lean manufacturing has become more widely implemented, the extension of lean principles is beginning to spread to IT (and other service industries). Industry analysts have identified many similarities or analogues between IT and manufacturing. For example, whereas the manufacturing function manufactures goods of value to customers, the IT function “manufactures” business services of value to the parent organization and its customers. Similar to manufacturing, the development of business services entails resource management, demand management, quality control, security issues, and so on.
Moreover, the migration by businesses across virtually every industry sector towards greater use of online or e-business services suggests a likely intensified interest in Lean IT as the IT function becomes intrinsic to businesses’ primary activities of delivering value to their customers. Already, even today, IT's role in business is substantial, often providing services that enable customers to discover, order, pay, and receive support. IT also provides enhanced employee productivity through software and communications technologies and allows suppliers to collaborate, deliver, and receive payment.
Consultants and evangelists for Lean IT identify an abundance of waste across the business service “production line”, including legacy infrastructure and fractured processes. By reducing waste through application of lean Enterprise IT Management (EITM) strategies, CIOs and CTOs in companies such as Tesco, Fujitsu Services, and TransUnion are driving IT from the confines of a back-office support function to a central role in delivering customer value.
Types of waste
Lean IT promises to identify and eradicate waste that otherwise contributes to poor customer service, lost business, higher than necessary business costs, and lost employee productivity. To these ends, Lean IT targets eight elements within IT operations that add no value to the finished product or service or to the parent organization (see Table 1).
Whereas each element in the table can be a significant source of waste in itself, linkages between elements sometimes create a cascade of waste (the so-called domino effect). For example, a faulty load balancer (waste element: Defects) that increases web server response time may cause a lengthy wait for users of a web application (waste element: Waiting), resulting in excessive demand on the customer support call center (waste element: Excess Motion) and, potentially, subsequent visits by account representatives to key customers' sites to quell concerns about the service availability (waste element: Transportation). In the meantime, the company's most likely responses to this problem (for example, introducing additional server capacity and/or redundant load-balancing software, and hiring extra customer support agents) may contribute yet more waste elements (Overprovisioning and Excess Inventory).
Principles
Value streams
In IT, value streams are the services provided by the IT function to the parent organization for use by customers, suppliers, employees, investors, regulators, the media, and any other stakeholders. These services may be further differentiated into:
Business services (primary value streams). Examples: point-of-sale transaction processing, ecommerce, and supply chain optimization
IT services (secondary value streams). Examples: application performance management, data backup, and service catalog
The distinction between primary and secondary value streams is meaningful. Given Lean IT's objective of reducing waste, where waste is work that adds no value to a product or service, IT services are secondary (i.e. subordinate or supportive) to business services. In this way, IT services are tributaries that feed and nourish the primary business service value streams. If an IT service is not contributing value to a business service, it is a source of waste. Such waste is typically exposed by value-stream mapping.
Value-stream mapping
Lean IT, like its lean manufacturing counterpart, involves a methodology of value-stream mapping — diagramming and analyzing services (value streams) into their component process steps and eliminating any steps (or even entire value streams) that do not deliver value.
Flow
Flow relates to one of the fundamental concepts of Lean as formulated within the Toyota Production System — namely, mura. A Japanese word that translates as “unevenness,” mura is eliminated through just-in-time systems that are tightly integrated. For example, a server provisioning process may carry little or no inventory (a waste element in Table 1 above) with labor and materials flowing smoothly into and through the value stream.
A focus on mura reduction and flow may bring benefits that would be otherwise missed by focus on muda (the Japanese word for waste) alone. The former necessitates a system-wide approach whereas the latter may produce suboptimal results and unintended consequences. For example, a software development team may produce code in a language familiar to its members and which is optimal for the team (zero muda). But if that language lacks an API standard by which business partners may access the code, a focus on mura will expose this otherwise hidden source of waste.
Pull/demand system
Pull (also known as demand) systems are themselves closely related to the aforementioned flow concept. They contrast with push or supply systems. In a pull system, a pull is a service request. The initial request is from the customer or consumer of the product or service. For example, a customer initiates an online purchase. That initial request in turn triggers a subsequent request (for example, a query to a database to confirm product availability), which in turn triggers additional requests (input of the customer's credit card information, credit verification, processing of the order by the accounts department, issuance of a shipping request, replenishment through the supply-chain management system, and so on).
Push systems differ markedly. Unlike the “bottom-up,” demand-driven, pull systems, they are “top-down,” supply-driven systems whereby the supplier plans or estimates demand. Push systems typically accumulate large inventory stockpiles in anticipation of customer need. In IT, push systems often introduce waste through an over-abundance of “just-in-case” inventory, incorrect product or service configuration, version control problems, and incipient quality issues.
Implementation
Implementation begins with identification and description of one or more IT value streams. For example, aided by use of interviews and questionnaires, the value stream for a primary value stream such as a point-of-sale business service may be described as shown in Table 2.
Table 2 suggests that the Executive Vice President (EVP) of Store Operations is ultimately responsible for the point-of-sale business service, and he/she assesses the value of this service using metrics such as CAPEX, OPEX, and check-out speed. The demand pulls or purposes for which the EVP may seek these metrics might be to conduct a budget review or undertake a store redesign. Formal service-level agreements (SLAs) for provision of the business service may monitor transaction speed, service continuity, and implementation speed. The table further illustrates how other users of the point-of-sale service — notably, cashiers and shoppers — may be concerned with other value metrics, demand pulls, and SLAs.
Having identified and described a value stream, implementation usually proceeds with construction of a value stream map — a pictorial representation of the flow of information, beginning with an initial demand request or pull and progressing up the value stream. Although value streams are not as readily visualizable as their counterparts in lean manufacturing, where the flow of materials is more tangible, systems engineers and IT consultants are practiced in the construction of schematics to represent information flow through an IT service. To this end, they may use productivity software such as Microsoft Visio and computer-aided design (CAD) tools. However, alternatives to these off-the-shelf applications may be more efficient (and less wasteful) in the mapping process.
One alternative is use of a configuration management database (CMDB), which describes the authorized configuration of the significant components of an IT environment. Workload automation software, which helps IT organizations optimize real-time performance of complex business workloads across diverse IT infrastructures, and other application dependency mapping tools can be an additional help in value stream mapping.
After mapping one or more value streams, engineers and consultants analyze the stream(s) for sources of waste. The analysis may adapt and apply traditional efficiency techniques such as time-and-motion studies as well as more recent lean techniques developed for the Toyota Production System and its derivatives. Among likely outcomes are methods such as process redesign, the establishment of “load-balanced” workgroups (for example, cross-training of software developers to work on diverse projects according to changing business needs), and the development of performance management “dashboards” to track project and business performance and highlight trouble spots.
Trends
Recessionary pressure to reduce costs
The onset of economic recession in December 2007 was marked by a decrease in individuals’ willingness to pay for goods and services — especially in face of uncertainty about their own economic futures. Meanwhile, tighter business and consumer credit, a steep decline in the housing market, higher taxes, massive lay-offs, and diminished returns in the money and bond markets have further limited demand for goods and services.
When an economy is strong, most business leaders focus on revenue growth. During periods of weakness, when demand for goods and services is curbed, the focus shifts to cost-cutting. In keeping with this tendency, recessions initially provoke aggressive (and sometimes panic-ridden) actions such as deep discounting, fire sales of excess inventory, wage freezes, short-time working, and abandonment of former supplier relationships in favor of less costly supplies. Although such actions may be necessary and prudent, their impact may be short-lived. Lean IT can expect to garner support during economic downturns as business leaders seek initiatives that deliver more enduring value than is achievable through reactive and generalized cost-cutting.
Proliferation of online transactions
IT has traditionally been a mere support function of business, in common with other support functions such as shipping and accounting. More recently, however, companies have moved many mission-critical business functions to the Web. This migration is likely to accelerate still further as companies seek to leverage investments in service-oriented architectures, decrease costs, improve efficiency, and increase access to customers, partners, and employees.
The prevalence of web-based transactions is driving a convergence of IT and business. In other words, IT services are increasingly central to the mission of providing value to customers. Lean IT initiatives are accordingly becoming less of a peripheral interest and more of an interest that is intrinsic to the core business.
Green IT
Though not born of the same motivations, lean IT initiatives are congruent with a broad movement towards conservation and waste reduction, often characterized as green policies and practices. Green IT is one part of this broad movement.
Waste reduction directly correlates with reduced energy consumption and carbon generation. Indeed, IBM asserts that IT and energy costs can account for up to 60% of an organization's capital expenditures and 75% of operational expenditures. In this way, identification and streamlining of IT value streams supports the measurement and improvement of carbon footprints and other green metrics. For instance, implementation of Lean IT initiatives is likely to save energy through adoption of virtualization technology and data center consolidation.
Challenges
Value-stream visualization
Unlike lean manufacturing, from which the principles and methods of Lean IT derive, Lean IT depends upon value streams that are digital and intangible rather than physical and tangible. This renders difficult the visualization of IT value streams and hence the application of Lean IT. Whereas practitioners of lean manufacturing can apply visual management systems such as the kanban cards used in the Toyota Production System, practitioners of Lean IT must use enterprise IT management tools to help visualize and analyze the more abstract context of IT value streams.
Reference implementations
As an emerging area in IT management (see Deployment and commercial support), lean IT has relatively few reference implementations. Moreover, whereas much of the supporting theory and methodology is grounded in the more established field of lean manufacturing, the adaptation of such theory and methodology to the digital, service-oriented processes of IT is likewise only just beginning. This scarcity makes implementation challenging, as evidenced by the problems experienced with the March 2008 opening of Heathrow Terminal 5. Airport operator BAA and airline British Airways, which has exclusive use of the new terminal, used process methodologies adapted from the motor industry to speed development and achieve cost savings in developing and integrating systems at the new terminal. However, the opening was marred by baggage handling backlogs, staff parking problems, and cancelled flights.
Resistance to change
The conclusions or recommendations of lean IT initiatives are likely to demand organizational, operational, and/or behavioral changes. Whether driven by a fear of job losses, a belief that existing work practices are superior, or some other concern, such changes may meet with resistance from workers, managers, and even senior executives.
For example, a lean IT recommendation to introduce flexible staffing whereby application development and maintenance managers share personnel is often met with resistance by individual managers who may have relied on certain people for many years. Also, existing incentives and metrics may not align with the proposed staff sharing.
Fragmented IT departments
Even though business services and the ensuing flow of information may span multiple departments, IT organizations are commonly structured in a series of operational or technology-centric silos, each with its own management tools and methods to address perhaps just one particular aspect of waste. Unfortunately, fragmented efforts at Lean IT contribute little benefit because they lack the integration necessary to manage cumulative waste across the value chain.
Integration of lean production and lean consumption
Related to the aforementioned issue of fragmented IT departments is the lack of integration across the entire supply chain, including not only all business partners but also consumers. To this end, lean IT consultants have recently proposed so-called lean consumption of products and services as a complement to lean production. In this regard, the processes of provision and consumption are tightly integrated and streamlined to minimize total cost and waste and to create new sources of value.
Deployment and commercial support
Deployment of lean IT has been predominantly limited to application development and maintenance (ADM). This focus reflects the cost of ADM. Despite a trend towards increased ADM outsourcing to lower-wage economies, the cost of developing and maintaining applications can still consume more than half of the total IT budget. In this light, the potential of Lean IT to increase productivity by as much as 40% while improving the quality and speed of execution makes ADM a primary target within the IT department.
Opportunity to apply Lean IT exists in multiple other areas of IT besides ADM. For example, service catalog management is a Lean IT approach to provisioning IT services. When, say, a new employee joins a company, the employee's manager can log into a web-based catalog and select the services needed. This particular employee may need a CAD workstation as well as standard office productivity software and limited access to the company's extranet. On submitting this request, provisioning of all hardware and software requirements would then be automatic through a lean value stream. In another example, a Lean IT approach to application performance monitoring would automatically detect performance issues at the customer-experience level, triage them, notify support personnel, and collect data to assist in root-cause analysis. Research suggests that IT departments may achieve sizable returns from investing in these and other areas of the IT function.
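The following sketch illustrates the service-catalog idea in the paragraph above: a request is mapped to automated fulfilment steps rather than manual hand-offs. The service names and fulfilment actions are entirely hypothetical; a real implementation would call an organization's own provisioning APIs.

# Minimal sketch of a catalog request flowing through an automated
# provisioning value stream. All names here are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class CatalogRequest:
    employee: str
    services: list = field(default_factory=list)

def fulfil(request):
    # Each selected service maps to one automated fulfilment step,
    # removing the manual hand-offs that create lean "waiting" waste.
    steps = {
        "cad_workstation": "order and image CAD workstation",
        "office_suite": "assign productivity-software license",
        "extranet_limited": "grant limited extranet access",
    }
    for service in request.services:
        print(f"{request.employee}: {steps.get(service, 'route to manual review')}")

fulfil(CatalogRequest("new.hire", ["cad_workstation", "office_suite", "extranet_limited"]))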
Among notable corporate examples of lean IT adopters is UK-based grocer Tesco, which has entered into strategic partnerships with many of its suppliers, including Procter & Gamble, Unilever, and Coca-Cola, eventually succeeding in replacing weekly shipments with continuous deliveries throughout the day. By eliminating stock held at the back of the store or in high-bay storage, Tesco has moved markedly closer to a just-in-time pull system (see Pull/demand system).
Lean IT is also attracting public-sector interest, in keeping with the waste-reduction aims of the lean government movement. One example is the City of Cape Coral, Florida, where several departments have deployed lean IT. The city's police records department, for instance, reviewed its processing of the roughly 20,000 traffic tickets written by police officers each year, halving the time needed for an officer to write a ticket and saving $2 million. Comparable benefits have been achieved in other departments such as public works, finance, fire, and parks and recreation.
Complementary methodologies
Although Lean IT typically entails particular principles and methods such as value streams and value-stream mapping, Lean IT is, on a higher level, a philosophy rather than a prescribed metric or process methodology. In this way, Lean IT is pragmatic and agnostic. It seeks incremental waste reduction and value enhancement, but it does not require a grand overhaul of an existing process, and is complementary rather than alternative to other methodologies.
Agile, Scrum and lean software development
Agile software development is a set of software development methods that originated as a response to the indiscriminate use of CMMI, RUP, and PMBOK, which produced heavyweight, slow development processes that typically increased lead time, work in progress, and the ratio of non-value-added to value-added activities on projects. Agile software development methods include XP, Scrum, FDD, AUP, DSDM, Crystal, and others.
Scrum is one of the better-known agile methods for project management and draws some of its concepts from lean thinking. Scrum also organizes work in a cross-functional, multidisciplinary work cell. It uses a form of kanban system to visualize and limit work in progress, and follows the plan-do-check-act (PDCA) cycle of continuous improvement that is the basis of lean.
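The core mechanism of such a kanban system is the work-in-progress (WIP) limit: a column refuses new work once full, so overload becomes visible instead of accumulating in a hidden queue. The sketch below shows only that idea; the column name, limit, and task names are illustrative.

# Minimal sketch of kanban WIP limiting; all names and limits are examples.
class KanbanColumn:
    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.items = []

    def pull(self, item):
        # Enforce the WIP limit: excess work stays upstream so that
        # overload is visible rather than hidden in a growing queue.
        if len(self.items) >= self.wip_limit:
            print(f"'{self.name}' at WIP limit ({self.wip_limit}); '{item}' must wait")
            return False
        self.items.append(item)
        print(f"'{self.name}' pulled '{item}'")
        return True

doing = KanbanColumn("In progress", wip_limit=2)
for task in ["story-101", "story-102", "story-103"]:
    doing.pull(task)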
Six Sigma
Whereas Lean IT focuses on customer satisfaction and reducing waste, Six Sigma focuses on removing the causes of defects (errors) and the variation (inconsistency) in manufacturing and business processes using quality management and, especially, statistical methods. Six Sigma also differs from Lean methods by introducing a special infrastructure of personnel (e.g. so-called “Green Belts” and “Black Belts”) in the organization. Six Sigma is more oriented around two particular methods (DMAIC and DMADV), whereas Lean IT employs a portfolio of tools and methods. These differences notwithstanding, Lean IT may be readily combined with Six Sigma such that the latter brings statistical rigor to measurement of the former's outcomes.
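A small example of the statistical rigor Six Sigma brings is the defects-per-million-opportunities (DPMO) metric and the sigma level derived from it. The defect counts below are invented for the example; the 1.5-sigma shift added at the end is the conventional Six Sigma adjustment for long-term process drift.

# DPMO and sigma level from assumed sample data.
from statistics import NormalDist

defects, units, opportunities = 45, 10_000, 3   # illustrative figures
dpmo = defects / (units * opportunities) * 1_000_000
# Sigma level: normal quantile of the yield, plus the conventional 1.5 shift.
sigma_level = NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5
print(f"DPMO: {dpmo:,.0f}  sigma level: {sigma_level:.2f}")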
Capability Maturity Model Integration (CMMI)
The Capability Maturity Model Integration (CMMI) from the Software Engineering Institute of Carnegie Mellon University (Pittsburgh, Pennsylvania) is a process improvement approach applicable to a single project, a division, or an entire organization. It helps integrate traditionally separate organizational functions, set process improvement goals and priorities, provide guidance for quality processes, and provide a benchmark or point of reference for assessing current processes. However, unlike Lean IT, CMMI (and other process models) does not directly address sources of waste such as a lack of alignment between business units and the IT function or unnecessary architectural complexity within a software application.
ITIL
ITIL contains concepts, policies, and recommended practices on a broad range of IT management topics. These are again entirely compatible with the objectives and methods of Lean IT. Indeed, as another best-practice framework, ITIL may be considered alongside the CMMI for process improvement and COBIT for IT governance.
Universal Service Management Body of Knowledge (USMBOK)
The Universal Service Management Body of Knowledge (USMBOK) is a single book published by Service Management 101 and endorsed by numerous professional trade associations as the definitive reference for service management. The USMBOK contains a detailed specification of a service system and organization and leverages the rich history of service management as defined within the product management and marketing professions. The service organization specification describes seven key knowledge domains, equivalent to roles, and forty knowledge areas, representing areas of practice and skills. Amongst these, within the Service Value Management knowledge domain, are a number of lean-relevant skills, including lean thinking and value mapping. The USMBOK also provides detailed information on how problem management and lean thinking are combined with outside-in (customer-centric) thinking in the design of a continuous improvement program.
COBIT
Control Objectives for Information and Related Technology – better known as COBIT – is a framework or set of best practices for IT management created by the Information Systems Audit and Control Association (ISACA), and the IT Governance Institute (ITGI). It provides managers, auditors, and IT users a set of metrics, processes, and best practices to assist in maximizing the benefits derived through the use of IT, achieving compliance with regulations such as Sarbanes-Oxley, and aligning IT investments with business objectives. COBIT also aims to unify global IT standards, including ITIL, CMMI, and ISO 17799.
Notes
References
Bell, Steve (2012), Run Grow Transform: Integrating Business and Lean IT, Productivity Press.
Bell, Steve and Orzen, Mike (2010), Lean IT: Enabling and Sustaining Your Lean Transformation, Productivity Press. Shingo Prize Research Award, 2011.
Bell, Steve (2006), Lean Enterprise Systems: Using IT for Continuous Improvement, John Wiley & Sons.
Monden, Yasuhiro (1998), Toyota Production System: An Integrated Approach to Just-In-Time, third edition, Norcross, GA: Engineering & Management Press.
Womack, James P., Jones, Daniel T., and Roos, Daniel (2007), The Machine That Changed the World, Free Press.
Womack, James P. and Jones, Daniel T. (2005), “Lean Consumption”, Harvard Business Review.
Information technology management |
1277761 | https://en.wikipedia.org/wiki/Acronis | Acronis | Acronis International GmbH, simply referred to as Acronis, is a global technology company with its corporate headquarters in Schaffhausen, Switzerland and global headquarters in Singapore. Acronis develops on-premises and cloud software for backup, disaster recovery, secure file sync and share, and data access. Acronis has 18 offices worldwide. Its R&D centers, Acronis Labs, are based in Bulgaria, the United States and Singapore. Acronis has 37 cloud data centers around the world, including in the United States, France, Singapore, Japan, and Germany.
History
Acronis was founded by Serguei Beloussov, Ilya Zubarev, Stanislav Protassov and Max Tsyplyaev in 2001 as a separate business unit within SWsoft. In 2003, Acronis was spun off as a separate company. The company moved from a focus on disk partitioning and boot loader software to a focus on backup and disaster recovery software based on disk imaging technology.
In 2006, SWsoft partnered with Acronis to resell Acronis True Image Server for SWsoft Plesk 8.1 control panel software. The software is standalone and works with other control panels, which enables service providers to offer backup and recovery capabilities with dedicated hosting packages.
In September 2012, Acronis acquired GroupLogic, Inc., gaining the software that became Acronis Access Advanced and enabling Acronis to integrate mobile devices, including Apple devices, into enterprise environments. The acquisition expanded Acronis’ data protection offering to mobile devices, while GroupLogic gained access to Acronis’ customer base.
In May 2013, co-founder and Board Director, Serguei Beloussov, returned as CEO after working on other ventures. In December that same year, Acronis announced the launch of an official research and development wing, Acronis Labs.
Acronis won the Mobility Product of the Year award at the Network Computing Awards in 2014. In 2014, Acronis acquired BackupAgent, a cloud backup company, and nScaled, a disaster recovery software company.
Acronis launched its Global Partner Program in March 2015. The program gives partner companies access to the Acronis AnyData Engine. Also in 2015, Acronis won the ChannelPro Readers’ Choice Award for Best Backup and Disaster Recovery Vendor. In July 2015, Acronis announced a partnership with ProfitBricks to make Acronis Backup Cloud available on ProfitBricks' cloud computing platform. In 2019, it was announced that Acronis would sponsor the ROKiT Williams F1 Team and SportPesa Racing Point. Acronis won the Gold Stevie Award in the Innovation in Technology Development category at the 2019 Asia-Pacific Stevie Awards.
In 2019, the company acquired 5nine Software, a cloud management and security company. In 2020, it acquired the endpoint data loss prevention vendor DeviceLock. Acronis chief officers (Steiner in 2019, Magdanurov in 2020) confirmed plans to integrate the capabilities of both software product lines into the Acronis Cyber Cloud Solutions portal over time.
In July 2021, founder and CEO Serguei Beloussov voluntarily stepped down from his role and was replaced by Patrick Pulvermueller, a former president of GoDaddy. In October 2021, Acronis partnered with Addigy, provider of a cloud-based Apple device management platform, to integrate with Acronis Cyber Protect Cloud.
Software
Backup and security
Acronis develops backup, disaster recovery, secure file access, sync and share, and partitioning software for the home, and small to medium-sized enterprises. Its backup software includes Acronis Cyber Protect Home Office, which uses full-system image backup technology. Acronis Cyber Cloud includes backups, disaster recovery, and file sync and share for a set of cloud applications which are delivered by service providers. Acronis Cyber Protect is a disk-based backup and recovery program.
System administration
Aside from backup software, Acronis also offers Acronis Disk Director, a shareware application that partitions a machine and allows it to run multiple operating systems; Acronis Snap Deploy, which creates a standard configuration for provisioning new machines; Acronis Files Advanced, which secures access to files that are synced between devices; and Acronis Access Connect, which is designed for Mac clients and runs on Windows servers. The Acronis Data Protection Platform includes backup, disaster recovery, and secure file sync and share.
See also
Comparison of disk cloning software
List of disk cloning software
List of disk partitioning software
References
External links
Technology companies established in 2001
Software companies of Switzerland
Companies of Singapore
Schaffhausen |
7093937 | https://en.wikipedia.org/wiki/Metacomputing | Metacomputing | Metacomputing is computing and computing-oriented activity that applies computing knowledge (science and technology) to the research, development, and application of different types of computing. It may also deal with numerous types of computing applications, such as industry, business, management, and human-related management. Newly emerging fields of metacomputing focus on the methodological and technological aspects of the development of large computer networks/grids, such as the Internet, intranets, and other territorially distributed computer networks for special purposes.
Uses
In computer science
Metacomputing, as the computing of computing, includes the organization of large computer networks, the choice of design criteria (for example, a peer-to-peer versus a centralized solution), and the development of metacomputing software (middleware, metaprogramming). In specific domains, the term metacomputing describes software meta-layers: networked platforms for developing user-oriented calculations, for example in computational physics and bioinformatics.
Here, serious scientific problems of system and network complexity emerge, related not only to domain-dependent complexities but also to the systemic meta-complexity of computer network infrastructures.
Metacomputing is also a useful descriptor for self-referential programming systems. Often these systems function as fifth-generation computer languages, which require an underlying metaprocessor software operating system in order to operate. Metacomputing typically occurs in an interpreted or dynamically compiled system, since the changing nature of information during processing may produce an unpredictable compute state throughout the existence of the metacomputer (the information state operated upon by the metacomputing platform).
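As a generic illustration of the metaprogramming idea invoked above (not a feature of any specific metacomputing platform), the sketch below constructs a new class at run time, so that part of the program did not exist before the program ran. The function and class names are illustrative.

# A program that generates code (a class) at run time.
def make_accessor_class(name, fields):
    # Build the class dynamically from a field list; the resulting
    # "program" is created during execution, not fixed in advance.
    def __init__(self, **kwargs):
        for f in fields:
            setattr(self, f, kwargs.get(f))
    return type(name, (object,), {"__init__": __init__})

Point = make_accessor_class("Point", ["x", "y"])
p = Point(x=1, y=2)
print(type(p).__name__, p.x, p.y)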
In socio-cognitive engineering
From the human and social perspectives, metacomputing focuses especially on human-computer software and cognitive interrelations/interfaces, on the possibilities of developing intelligent computer grids for cooperation among human organizations, and on ubiquitous computing technologies. In particular, it relates to the development of software infrastructures for the computational modeling and simulation of cognitive architectures for various decision support systems.
In systemics and from philosophical perspective
Metacomputing refers to the general problems of the computationality of human knowledge, that is, the limits of transforming human knowledge and individual thinking into the form of computer programs. These and similar questions are also of interest to mathematical psychology.
See also
Complex system
Computer
Distributed computing
High-performance computing
Meta-
Meta-knowledge
Meta-mathematics
Metacomputing software
Metaprogramming
Parallel computing
Supercomputing
References
Further reading
Special Issue on Metacomputing: From Workstation Clusters to Internet computing, Future Generation Computer Systems, Gentzsch W. (editor), No. 15, North Holland (1999)
Metacomputing Project- with DARPA contribution
The Grid: International Efforts in Global Computing, Mark Baker, Rajkumar Buyya and Domenico Laforenza (2005)
Toward the Identification of the Real-World Meta-Complexity, (2004) NEST-IDEA Interdisciplinary Research
Journal of Mathematical Psychology
Classes of computers
Systems theory |
3263092 | https://en.wikipedia.org/wiki/EMC%20NetWorker | EMC NetWorker | EMC NetWorker (formerly Legato NetWorker) is an enterprise-level data protection software product from Dell EMC that unifies and automates backup to tape, disk-based, and flash-based storage media across physical and virtual environments, supporting both granular recovery and disaster recovery.
Description
Cross-platform support is provided for Linux, Windows, macOS, NetWare, OpenVMS and Unix environments. Deduplication of backup data is provided by integration with Dell EMC Data Domain products.
A central NetWorker server manages backup clients and NetWorker storage nodes that access the backup media. Platforms supported by the core NetWorker server are: AIX, HP-UX PA-RISC and Itanium, Linux (fully featured on x86, x86-64, Itanium, client only on PowerPC and IBM Z), macOS (client only), Solaris SPARC and x64, SGI IRIX (client only), Tru64 and Windows.
The Java-based NetWorker Management Console (NMC) software, which is bundled with the NetWorker distribution, provides a user interface for functions such as client configuration, policy settings, schedules, monitoring, reports, and daily operations for deduplicated and non-deduplicated backups.
The core NetWorker software backs up client file systems and operating system environments. Add-on database and application modules provide backup services for products such as Oracle, DB2, SAP, Lotus, Informix, and Sybase, as well as Microsoft Exchange Server, SharePoint, and SQL Server. Client backup data can be sent to a remote NetWorker storage node or stored on a locally attached device by the use of a dedicated storage node. Additionally, NetWorker supports Client Direct backups, allowing clients to back up directly to shared devices, bypassing the storage node processes.
NetWorker Snapshot Management automates the generation of point-in-time data snapshots and cloning on supported storage arrays such as EMC VNX, XtremIO, and Symmetrix. The NDMP module can be used for client-less backups of NAS Filers like Isilon, EMC VNX or Netapp.
VMware virtual machines can be directly backed up either by installing the NetWorker client on the virtual machine or through the NetWorker VMware Protection to perform application consistent image and filesystem backups.
NetWorker also supports backup and recovery of Microsoft Windows Server and Hyper-V virtual servers by using the Volume Shadow Copy Services interface. This support protects the parent and child partitions (guests) as well as applications running within the virtual machines.
History
Legato Systems, Inc., the original developer of NetWorker, was founded in 1988 by four individuals who worked together at Sun Microsystems: Jon Kepecs, Bob Lyon, Joe Moran and Russell Sandberg.
NetWorker for UNIX was first introduced in 1990.
The company, located in Palo Alto, California, had its initial public offering in July 1995.
Although the offering comprised a relatively modest 2 million shares, it was considered part of the run-up to the dot-com bubble.
Litigation in 2002 alleged that Legato had inflated its revenues in 1999, after the company restated its earnings.
NetWorker came to EMC through the acquisition of Legato in October 2003, for an estimated $1.3 billion.
See also
List of backup software
References
External links
EMC NetWorker Product Page
NetWorker FAQ/Wiki
EMC NetWorker Online Community
Mailing List
EMC Legato NetWorker Commands Reference
EMC Legato NetWorker Links
Backup software
Dell EMC
Backup software for Linux
Backup software for macOS
Backup software for Windows |