236964
https://en.wikipedia.org/wiki/Utah%20teapot
Utah teapot
The Utah teapot, or the Newell teapot, is a 3D test model that has become a standard reference object and an in-joke within the computer graphics community. It is a mathematical model of an ordinary Melitta-brand teapot that appears solid with a nearly rotationally symmetrical body. Using a teapot model is considered the 3D equivalent of a "Hello, World!" program: a way to create an easy 3D scene with a somewhat complex model serving as the basic geometry, together with a light setup. Some programming libraries, such as the OpenGL Utility Toolkit, even have functions dedicated to drawing teapots. The teapot model was created in 1975 by early computer graphics researcher Martin Newell, a member of the pioneering graphics program at the University of Utah. It was one of the first models created using Bézier curves rather than precise measurement.

History
For his work, Newell needed a simple mathematical model of a familiar object. His wife, Sandra Newell, suggested modelling their tea set, since they were sitting down for tea at the time. He sketched the teapot free-hand using graph paper and a pencil. Following that, he went back to the computer laboratory and edited the Bézier control points on a Tektronix storage tube, again by hand. The teapot shape contained a number of elements that made it ideal for the graphics experiments of the time: it was round, contained saddle points, had a genus greater than zero because of the hole in the handle, could project a shadow on itself, and could be displayed accurately without a surface texture. Newell made the mathematical data that described the teapot's geometry (a set of three-dimensional coordinates) publicly available, and soon other researchers began to use the same data for their computer graphics experiments. These researchers needed something with roughly the same characteristics as Newell's model, and using the teapot data meant they did not have to laboriously enter geometric data for some other object. Although technical progress means that rendering the teapot is no longer the challenge it was in 1975, the teapot has continued to be used as a reference object for increasingly advanced graphics techniques. Over the following decades, editions of computer graphics journals (such as the ACM SIGGRAPH's quarterly) regularly featured versions of the teapot: faceted or smooth-shaded, wireframe, bumpy, translucent, refractive, even leopard-skin and furry teapots were created. The original teapot model was never intended to be seen from below and had no surface to represent its base; later versions of the data set fixed this. The real teapot is 33% taller (a ratio of 4:3) than the computer model. Jim Blinn stated that he scaled the model on the vertical axis during a demo in the lab to demonstrate that they could manipulate it; the team preferred the appearance of the squashed version and saved the file out of that preference. Versions of the teapot model, or sample scenes containing it, are distributed with or freely available for nearly every current rendering and modelling program and even many graphics APIs, including AutoCAD, Houdini, Lightwave 3D, MODO, POV-Ray, 3ds Max, and the OpenGL and Direct3D helper libraries. Some RenderMan-compliant renderers support the teapot as a built-in geometry by calling RiGeometry("teapot", RI_NULL). Along with the expected cubes and spheres, the GLUT library even provides the function glutSolidTeapot() as a graphics primitive, as does its Direct3D counterpart D3DX (D3DXCreateTeapot()).
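As a minimal sketch of how little code the GLUT primitive requires, the following Python program opens a window and renders the model with a single library call. It assumes the third-party PyOpenGL bindings and a system GLUT implementation such as freeglut are installed; the window title and teapot size are arbitrary.

from OpenGL.GL import *
from OpenGL.GLUT import *

def display():
    # clear the colour and depth buffers, draw the built-in model, show the frame
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    glutSolidTeapot(0.5)   # the whole teapot is one call
    glutSwapBuffers()

glutInit()
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH)
glutCreateWindow(b"Utah teapot")
glEnable(GL_DEPTH_TEST)
glEnable(GL_LIGHTING)      # the default light 0 is enough to reveal the surface
glEnable(GL_LIGHT0)
glutDisplayFunc(display)
glutMainLoop()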
While D3DX for Direct3D 11 no longer provides this functionality, it is supported in the DirectX Tool Kit. Mac OS X Tiger and Leopard also include the teapot as part of Quartz Composer; Leopard's teapot supports bump mapping. BeOS included a small demo of a rotating 3D teapot, intended to show off the platform's multimedia facilities, and the same demo later appeared in Haiku. Teapot scenes are commonly used for renderer self-tests and benchmarks.

Original teapot model
The original, physical teapot was purchased from ZCMI (a department store in Salt Lake City) in 1974. It was donated to the Boston Computer Museum in 1984, where it was on display until 1990. It now resides in the ephemera collection at the Computer History Museum in Mountain View, California, where it is catalogued as "Teapot used for Computer Graphics rendering" and bears the catalogue number X00398.1984. The original teapot the Utah teapot was based on is still available from Friesland Porzellan, once part of the German Melitta group. Originally it was given a rather plain name meaning 'household teapot'; the company only found out about its product's "fame" in 2017, whereupon it officially renamed the product the "Utah Teapot". It is available in three different sizes and various colours; the one Martin Newell used is the white "1,4L Utah Teapot".

Appearances
One famous ray-traced image, by James Arvo and David Kirk in 1987, shows six stone columns, five of which are surmounted by the Platonic solids (tetrahedron, cube, octahedron, dodecahedron, icosahedron). The sixth column supports a teapot. The image is titled "The Six Platonic Solids", with Arvo and Kirk calling the teapot "the newly discovered Teapotahedron". This image appeared on the covers of several books and computer graphics journals. The Utah teapot sometimes appears in the "Pipes" screensaver shipped with Microsoft Windows, but only in versions prior to Windows XP, and has been included in the "polyhedra" XScreenSaver hack since 2008. Jim Blinn (in one of his "Project MATHEMATICS!" videos) proves an amusing (but trivial) version of the Pythagorean theorem: construct a (2D) teapot on each side of a right triangle, and the area of the teapot on the hypotenuse is equal to the sum of the areas of the teapots on the other two sides; this holds for any figure whose area scales with the square of its linear dimensions. Loren Carpenter's 1980 CGI film Vol Libre features the teapot, appearing briefly at the beginning and end of the film in the foreground with a fractal-rendered mountainscape behind it. The Vulkan and OpenGL graphics APIs feature the Utah teapot along with the Stanford dragon and the Stanford bunny on their badges. With the advent of the first computer-generated short films, and later full-length feature films, it has become an in-joke to hide the Utah teapot in films' scenes. For example, in the movie Toy Story, the Utah teapot appears in a short tea-party scene. The teapot also appears in The Simpsons episode "Treehouse of Horror VI", in which Homer discovers the "third dimension". In The Sims 2, a picture of the Utah teapot is one of the paintings available to buy in-game, titled "Handle and Spout". An origami version of the teapot, folded by Tomohiro Tachi, was shown at the Tikotin Museum of Japanese Art in Israel in a 2007–2008 exhibit.

OBJ conversion
Although the original tea set by Newell can be downloaded directly, it is specified as a set of Bézier patches in a custom format, which can be difficult to import directly into many popular 3D modeling applications.
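Converting the data therefore means evaluating each patch over a grid of (u, v) parameters and writing the samples out as polygons. The Python sketch below does this for a single bicubic patch; the function names are invented for illustration, and a real converter would loop over all 32 patches of the teapot data set with their shared control points.

from math import comb

def bernstein(i, t):
    # cubic Bernstein basis polynomial B(i,3) evaluated at t
    return comb(3, i) * t**i * (1 - t)**(3 - i)

def eval_patch(P, u, v):
    # P is a 4x4 grid of (x, y, z) control points for one bicubic patch
    point = [0.0, 0.0, 0.0]
    for i in range(4):
        for j in range(4):
            w = bernstein(i, u) * bernstein(j, v)
            for k in range(3):
                point[k] += w * P[i][j][k]
    return point

def patch_to_obj(P, n=8):
    # sample an (n+1) x (n+1) grid of surface points, then emit quad faces
    lines = ["v %.6f %.6f %.6f" % tuple(eval_patch(P, i / n, j / n))
             for i in range(n + 1) for j in range(n + 1)]
    for i in range(n):
        for j in range(n):
            a = i * (n + 1) + j + 1   # OBJ vertex indices are 1-based
            lines.append("f %d %d %d %d" % (a, a + 1, a + n + 2, a + n + 1))
    return "\n".join(lines)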
A tessellated conversion of the data set along these lines, into the popular OBJ file format, can therefore be useful. One such conversion of the complete Newell tea set is available on the University of Utah website.

3D printing
Through 3D printing, the Utah teapot has come full circle from being a computer model based on an actual teapot to being an actual teapot based on the computer model. It is widely available in many renderings in different materials, from small plastic knick-knacks to fully functional ceramic teapots. It is sometimes intentionally rendered as a low-poly object to celebrate its origin as a computer model. In 2009, a Belgian design studio, Unfold, 3D printed the Utah teapot in ceramic with the objective of returning the iconic teapot to its roots as a piece of functional dishware while showing its status as an icon of the digital world. In 2015, the California-based company Emerging Objects followed suit, but this time printed the teapot, along with teacups and teaspoons, out of actual tea.

Gallery

See also
3D modeling
Stanford bunny
Stanford dragon
Suzanne (3D model)
Cornell box
List of common 3D test models
List of filmmaker's signatures
Lenna

References

External links
Image of Utah teapot at the Computer History Museum
Newell's teapot sketch at the Computer History Museum
S.J. Baker's History of the teapot, including patch data
Teapot history and images, from A Critical History of Computer Graphics and Animation (Wayback Machine copy)
WebGL teapot demonstration
History of the Teapot video from Udacity's online Interactive 3D Graphics course
The World's Most Famous Teapot - Tom Scott explains the story of Martin Newell's digital creation (YouTube)

3D graphics models Test items Teapots In-jokes
1187720
https://en.wikipedia.org/wiki/Barbarian%3A%20The%20Ultimate%20Warrior
Barbarian: The Ultimate Warrior
Barbarian: The Ultimate Warrior is a video game first released for Commodore 64 personal computers in 1987; the title was developed and published by Palace Software and ported to other computers in the following months. The developers licensed the game to Epyx, who published it as Death Sword in the United States. Barbarian is a fighting game that gives players control of sword-wielding barbarians. In the game's two-player mode, players pit their characters against each other. Barbarian also has a single-player mode, in which the player's barbarian braves a series of challenges set by an evil wizard to rescue a princess. Instead of using painted artwork for the game's box, Palace Software used photos of hired models. The photos, also used in advertising campaigns, featured Michael Van Wijk (who would later become famous as 'Wolf' in the TV series Gladiators) as the hero and bikini-clad Maria Whittaker, a model then associated with The Sun tabloid's Page 3 topless photo shoots. Palace Software's marketing strategy provoked controversy in the United Kingdom, with protests focused on the sexual aspects of the packaging rather than the decapitations and other violence within the game. The ensuing controversy boosted Barbarian's profile, helping to make it a commercial success. Game critics were impressed with its fast, furious combat and dashes of humour. The game was a critical hit for Palace Software; boosted by Barbarian's success, the company expanded its operations and started publishing other developers' work. In 1988, the company released a sequel, Barbarian II: The Dungeon of Drax.

Gameplay
Barbarian: The Ultimate Warrior is a fighting game that supports one or two players. Players assume the roles of sword-wielding barbarians, who battle in locales such as a forest glade and a "fighting pit". The game's head-to-head mode lets a player fight against another player or the computer in time-limited matches. The game also features a single-player story mode, which comprises a series of plot-connected challenges. Using joysticks or the keyboard, players move their characters around the arena, jumping to dodge low blows and rolling to dodge or trip the opponent. By holding down the fire button and moving the controller, players direct the barbarians to kick, headbutt, or attack with their swords. Each barbarian has 12 life points, represented as six circles in the top corners of the interface. A successful attack on a barbarian takes away one of his life points (half a circle), and the character dies when his life points are reduced to zero. Alternatively, a well-timed blow to the neck decapitates the barbarian, killing him instantly, upon which a goblin enters the arena, kicks the head, and drags the body away. If the players do not input any commands for a time, the game attempts a self-referential action to draw their attention: the barbarians turn to face the players, shrug their shoulders, and say "C'mon". The game awards points for successful attacks; the more complex the move, the higher the score awarded. A scoreboard displays the highest scores achieved in the game.

Single-player story mode
In the single-player story mode, the player controls a nameless barbarian who is on a quest to defeat the evil wizard Drax. Princess Mariana has been kidnapped by Drax, who is protected by eight barbarian warriors. The protagonist engages each of the other barbarians in a fight to the death and, after overcoming them, faces the wizard.
After the barbarian has killed Drax, Mariana falls at her saviour's feet and the screen fades to black. The United States version of the game names the protagonist Gorth.

Development
In 1985, Palace Software hired Steve Brown as a game designer and artist. He thought up the concept of pitting a broom-flying witch against a monster pumpkin, and created Cauldron and Cauldron II: The Pumpkin Strikes Back. The two games were commercial successes, and Brown was given free rein for his third work. He was inspired by Frank Frazetta's fantasy paintings to create a sword-fighting game that was "brutal and as realistic as possible". Brown based the game and its characters on the Conan the Barbarian series, having read all of Robert E. Howard's stories of the eponymous warrior. He conceptualised 16 moves and practised them with wooden swords, filming his sessions as references for the game's animation. One move, the Web of Death, was copied from the 1984 sword and sorcery film Conan the Destroyer; spinning the sword like a propeller, Brown "nearly took [his] eye out" when he practised the move. Playing back the videos, the team traced each frame of action onto clear plastic sheets laid over the television screen. The tracings were transferred to a grid that helped the team map the swordplay images, pixel by pixel, to a digital form. Brown refused to follow the convention of using small sprites to represent the fighters in the game, forcing the coders to conceive a method to animate larger blocks of graphics: Palace Software's co-founder Richard Leinfellner said they "multiplexed the sprites and had different look-up tables for different frames." Feeling that most of the artwork on game boxes at that time was "pretty poor", Brown suggested that "iconic fantasy imagery with real people would be a great hook for the publicity campaign." His superiors agreed and arranged a photo shoot, hiring models Michael Van Wijk and Maria Whittaker to pose as the barbarian and princess. Whittaker was a topless model who frequently appeared on Page 3 of the tabloid The Sun. She wore a tiny bikini for the shoot, while Van Wijk, wearing only a loincloth, posed with a sword. Palace Software also packaged a poster of Whittaker in costume with the game. Just before release, the company discovered that fellow developer Psygnosis was producing a game also titled Barbarian, albeit of the platform genre. After several discussions, Palace Software appended the subtitle "The Ultimate Warrior" to differentiate the two products. The sounds of the characters were taken from the 1985 film Red Sonja, most notably the "EEY-ECH!" sound that plays when the player attempts to decapitate an opponent. This particular sound can be heard near the beginning of the film, when Arnold Schwarzenegger's character is ambushed after pulling an arrow out of a woman's back.

Releases
Barbarian was released in 1987 for the Commodore 64 and, in the months that followed, for most other home computers. These machines varied in their capabilities, and the software ported to them was modified accordingly. One 8-bit version is mostly monochromatic, displaying the outlines of the barbarians against single-colour backgrounds, and its sounds are recorded at a lower sampling rate. Conversely, the version for the Atari ST, which has 16- and 32-bit buses, presents a greater variety of backgrounds and slightly higher-quality graphics than the original version. Its story mode also pits ten barbarians against the player instead of the usual eight.
Digitised sound samples are used in the Atari ST and 32-bit Amiga versions; the latter also features digitised speech. Each fight begins with the announcement "Prepare to die!", and metallic-sounding thuds and clangs ring out as swords clash. After the initial releases, Barbarian was re-released several times; budget label Kixx published these versions without Whittaker on the covers. Across the Atlantic, video game publisher Epyx acquired the license to Barbarian and released it under the title Death Sword as part of their "Maxx Out!" video game series.

Reception and legacy
Barbarian's advertisements, showing a scantily dressed model known for topless poses, triggered significant outcries of moral indignation. Electron User magazine received letters from readers and religious bodies, who called the image "offensive and particularly insulting to women" and an "ugly pornographic advertisement". Chris Jager, a writer for PC World, considered the cover "a trashy controversy-magnet featuring a glamour-saucepot" and a "big bloke [in leotard]". According to Leinfellner, the controversy did not negatively affect Barbarian but boosted the game's sales and profile tremendously. Video game industry observers Rusel DeMaria and Johnny Wilson commented that the United Kingdom public were more concerned over the scantily clad Whittaker than the gory contents of the game. Conversely, Barbarian was banned in Germany by the Bundesprüfstelle für jugendgefährdende Medien for its violent content. The ban forbade promotion of the game and its sale to customers under the age of 18. A censored version of the game, which changed the colour of the blood to green, was later permitted to be freely sold in the country. Barbarian's mix of sex and violence was such that David Houghton, a writer for GamesRadar, said the game would be rated "Mature" by the Entertainment Software Rating Board if it were published in 2009. Reviewers were impressed with Barbarian's gory gameplay. Zzap!64's Steve Jarratt appreciated the "fast and furious" action, and his colleague Ciaran Brennan said Barbarian, with its many sword fights and decapitations, should have been the licensed video game of the fantasy action film Highlander. Amiga Computing's Brian Chappell enjoyed "hacking the foe to bits, especially when a well aimed blow decapitates him", and several other reviewers expressed the same satisfaction in chopping the heads off their foes. Although shocked at the game's violence, Antic's reviewer said the "sword fight game is the best available on the ST." According to Jarratt, Barbarian represented "new heights in bloodsports". Equally pleasing to the reviewers at Zzap!64 and to Amiga User International's Tony Horgan was the simplicity of the game; they observed that almost anyone could quickly familiarise themselves with the game mechanics, making the two-player mode a fun and quick pastime. Although the barbarian characters use the same basic blocky sprites, they impressed reviewers at Zzap!64 and Amiga Computing with their smooth animation and lifelike movements. Reviewers of the Amiga version, however, expressed disappointment with the port for failing to exploit the computer's greater graphics capability and implement more detailed character sprites. Its digitised sounds, however, won praise from Commodore User's Gary Penn. Advanced Computer Entertainment's reviewers had similar thoughts about the Atari ST port.
Reviewing for Computer and Video Games, Paul Boughton was impressed by the game's detailed gory effects, such as the aftermath of a decapitation, calling them "hypnotically gruesome". It was these little touches that "[makes] the game worthwhile", according to Richard Eddy in Crash. Watching "the head [fall] to the ground [as blood spurts from the] severed neck, accompanied by a scream and satisfying thud as the torso tumbles" proved to be "wholesome stuff" for Chappell, and the scene was a "great retro gaming moment" for Retro Gamer's staff. The cackling goblin that drags off the bodies endeared itself to some reviewers; the team at Retro Gamer regretted that the creature did not have his own game. The barbarian's actions also impressed them enough to nominate him as one of their top 50 characters of video gaming's first three decades. Your Sinclair reviewed the ZX Spectrum version in 1990, giving it a 90% score. Barbarian proved to be a big hit, and Palace started planning a line of sequels; Leinfellner said he received royalty cheques for approximately seven years, the first of which was for £20,000. Barbarian II: The Dungeon of Drax was released in 1988, and Barbarian III was in the works. Van Wijk and Whittaker were hired again to grace the box cover and advertisements. After the success of Barbarian, Palace Software began to expand its portfolio by publishing games created by other developers. Barbarian, however, remained its most popular game, best remembered for its violent sword fights and for Maria Whittaker. In 2011, the French publisher Anuman Interactive launched a remake of the game, adapted to mobile devices and computers: Barbarian – The Death Sword.

References

External links
Images of Commodore 64 version of Death Sword box, manual and screen shots at C64Sets.com

1987 video games Amiga games Amstrad CPC games Apple II games Atari ST games BBC Micro and Acorn Electron games Commodore 64 games DOS games Fighting games Multiplayer and single-player video games Obscenity controversies in video games Video games about death games Video games scored by Richard Joseph Video games developed in the United Kingdom ZX Spectrum games Epyx games
56088533
https://en.wikipedia.org/wiki/Linguamatics
Linguamatics
Linguamatics, headquartered in Cambridge, England, with additional offices in the United States, is a provider of text mining systems through software licensing and services, primarily for pharmaceutical and healthcare applications. Founded in 2001, the company was purchased by IQVIA in January 2019.

Technology
The company develops enterprise search tools for the life sciences sector. The core natural language processing engine, I2E, uses a federated architecture to incorporate data from third-party resources. Initially developed to be used interactively through a graphical user interface, the core software also has an application programming interface that can be used to automate searches. LabKey, Penn Medicine, Atrius Health and Mercy all use Linguamatics software to extract electronic health record data into data warehouses. Linguamatics software is used by 17 of the top 20 global pharmaceutical companies, the US Food and Drug Administration, and healthcare providers.

Software community
The core software, I2E, is used by a number of companies either to extend their own software or to publish their data. Copyright Clearance Center uses I2E to produce searchable indexes of material that would otherwise be unsearchable due to copyright. Thomson Reuters produces Cortellis Informatics Clinical Text Analytics, which depends on I2E to make clinical data accessible and searchable. Pipeline Pilot can integrate I2E as part of a workflow. ChemAxon can be used alongside I2E to allow named-entity recognition of chemicals within unstructured data. Data sources include MEDLINE, ClinicalTrials.gov, FDA Drug Labels, PubMed Central, and Patent Abstracts.

See also
List of academic databases and search engines

References

Companies based in Cambridge Companies established in 2001 Computer companies of the United Kingdom Data mining and machine learning software
25652303
https://en.wikipedia.org/wiki/Computer%20architecture
Computer architecture
In computer engineering, computer architecture is a set of rules and methods that describe the functionality, organization, and implementation of computer systems. The architecture of a system refers to its structure in terms of separately specified components of that system and their interrelationships. Some definitions of architecture define it as describing the capabilities and programming model of a computer but not a particular implementation. In other definitions, computer architecture involves instruction set architecture design, microarchitecture design, logic design, and implementation.

History
The first documented computer architecture was in the correspondence between Charles Babbage and Ada Lovelace, describing the analytical engine. When building the computer Z1 in 1936, Konrad Zuse described in two patent applications for his future projects that machine instructions could be stored in the same storage used for data, i.e., the stored-program concept. Two other early and important examples are John von Neumann's 1945 paper, First Draft of a Report on the EDVAC, which described an organization of logical elements, and Alan Turing's more detailed Proposed Electronic Calculator for the Automatic Computing Engine, also from 1945, which cited von Neumann's paper. The term "architecture" in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks, Jr., members of the Machine Organization department in IBM's main research center in 1959. Johnson had the opportunity to write a proprietary research communication about the Stretch, an IBM-developed supercomputer for Los Alamos National Laboratory (at the time known as Los Alamos Scientific Laboratory). To describe the level of detail for discussing the luxuriously embellished computer, he noted that his description of formats, instruction types, hardware parameters, and speed enhancements was at the level of "system architecture", a term that seemed more useful than "machine organization". Subsequently Brooks, a Stretch designer, opened Chapter 2 of a book called Planning a Computer System: Project Stretch by stating, "Computer architecture, like other architecture, is the art of determining the needs of the user of a structure and then designing to meet those needs as effectively as possible within economic and technological constraints." Brooks went on to help develop the IBM System/360 (now called the IBM zSeries) line of computers, in which "architecture" became a noun defining "what the user needs to know". Later, computer users came to use the term in many less explicit ways. The earliest computer architectures were designed on paper and then directly built into the final hardware form. Later, computer architecture prototypes were physically built in the form of a transistor–transistor logic (TTL) computer, such as the prototypes of the 6800 and the PA-RISC, then tested and tweaked before committing to the final hardware form. As of the 1990s, new computer architectures are typically "built", tested, and tweaked inside some other computer architecture in a computer architecture simulator, or inside an FPGA as a soft microprocessor, or both, before committing to the final hardware form.

Subcategories
The discipline of computer architecture has three main subcategories:
Instruction set architecture (ISA): defines the machine code that a processor reads and acts upon, as well as the word size, memory address modes, processor registers, and data types.
Microarchitecture: also known as "computer organization", this describes how a particular processor will implement the ISA. The size of a computer's CPU cache, for instance, is an issue that generally has nothing to do with the ISA.
Systems design: includes all of the other hardware components within a computing system, such as data processing other than the CPU (e.g., direct memory access), virtualization, and multiprocessing.
There are other technologies in computer architecture. The following technologies are used in bigger companies like Intel, and were estimated in 2002 to account for 1% of all of computer architecture:
Macroarchitecture: architectural layers more abstract than microarchitecture.
Assembly instruction set architecture: a smart assembler may convert an abstract assembly language common to a group of machines into slightly different machine languages for different implementations.
Programmer-visible macroarchitecture: higher-level language tools such as compilers may define a consistent interface or contract to the programmers using them, abstracting differences between underlying ISAs, UISAs, and microarchitectures. For example, the C, C++, and Java standards define different programmer-visible macroarchitectures.
Microcode: microcode is software that translates instructions to run on a chip. It acts like a wrapper around the hardware, presenting a preferred version of the hardware's instruction set interface. This instruction translation facility gives chip designers flexible options: for example, a new, improved version of the chip can use microcode to present exactly the same instruction set as the old chip version, so all software targeting that instruction set will run on the new chip without needing changes; alternatively, microcode can present a variety of instruction sets for the same underlying chip, allowing it to run a wider variety of software.
UISA: User Instruction Set Architecture, one of three subsets of the RISC CPU instructions provided by PowerPC RISC processors. The UISA subset comprises those RISC instructions of interest to application developers; the other two subsets are VEA (Virtual Environment Architecture) instructions, used by virtualisation system developers, and OEA (Operating Environment Architecture) instructions, used by operating system developers.
Pin architecture: the hardware functions that a microprocessor should provide to a hardware platform, e.g., the x86 pins A20M, FERR/IGNNE or FLUSH, as well as messages that the processor should emit so that external caches can be invalidated (emptied). Pin architecture functions are more flexible than ISA functions because external hardware can adapt to new encodings, or change from a pin to a message. The term "architecture" fits, because the functions must be provided for compatible systems, even if the detailed method changes.

Roles
Definition
Computer architecture is concerned with balancing the performance, efficiency, cost, and reliability of a computer system. The case of instruction set architecture can be used to illustrate the balance of these competing factors. More complex instruction sets enable programmers to write more space-efficient programs, since a single instruction can encode some higher-level abstraction (such as the x86 LOOP instruction). However, longer and more complex instructions take longer for the processor to decode and can be more costly to implement effectively. The increased complexity from a large instruction set also creates more room for unreliability when instructions interact in unexpected ways.
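To make the density trade-off concrete, here is a toy sketch in Python of two invented instruction sets: a "complex" machine whose single loop instruction decrements a register and branches in one step, against a "simple" machine that needs separate decrement and branch instructions. Nothing here corresponds to a real ISA; the opcodes and programs are made up for illustration.

def run(program):
    # a tiny interpreter for a made-up register machine;
    # returns how many fetch-decode-execute steps the program took
    regs = {"r1": 0}
    pc = 0
    steps = 0
    while pc < len(program):
        op, *args = program[pc]
        steps += 1
        if op == "li":                      # load immediate
            regs[args[0]] = args[1]; pc += 1
        elif op == "dec":                   # decrement register
            regs[args[0]] -= 1; pc += 1
        elif op == "bnez":                  # branch if register is non-zero
            pc = args[1] if regs[args[0]] != 0 else pc + 1
        elif op == "loop":                  # "complex": decrement and branch in one instruction
            regs[args[0]] -= 1
            pc = args[1] if regs[args[0]] != 0 else pc + 1
    return steps

simple_isa  = [("li", "r1", 5), ("dec", "r1"), ("bnez", "r1", 1)]   # three instructions
complex_isa = [("li", "r1", 5), ("loop", "r1", 1)]                  # two instructions

assert run(simple_isa) == 11 and run(complex_isa) == 6

The denser program performs the same work in fewer fetch-decode steps, which is exactly the trade-off described above: shorter programs, but a decoder that must do more per instruction.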
The implementation involves integrated circuit design, packaging, power, and cooling. Optimization of the design requires familiarity with topics ranging from compilers and operating systems to logic design and packaging.

Instruction set architecture
An instruction set architecture (ISA) is the interface between the computer's software and hardware, and can also be viewed as the programmer's view of the machine. Computers do not understand high-level programming languages such as Java, C++, or most of the programming languages in common use. A processor only understands instructions encoded in some numerical fashion, usually as binary numbers. Software tools, such as compilers, translate those high-level languages into instructions that the processor can understand. Besides instructions, the ISA defines items in the computer that are available to a program, e.g., data types, registers, addressing modes, and memory. Instructions locate these available items with register indexes (or names) and memory addressing modes. The ISA of a computer is usually described in a small instruction manual, which describes how the instructions are encoded. It may also define short, vaguely mnemonic names for the instructions. The names can be recognized by a software development tool called an assembler: a computer program that translates a human-readable form of the ISA into a computer-readable form. Disassemblers are also widely available, usually in debuggers and similar software tools for isolating and correcting malfunctions in binary computer programs. ISAs vary in quality and completeness. A good ISA compromises between programmer convenience (how easy the code is to understand), size of the code (how much code is required for a specific action), cost of the computer to interpret the instructions (more complexity means more hardware needed to decode and execute the instructions), and speed of the computer (with more complex decoding hardware comes longer decode time). Memory organization defines how instructions interact with memory, and how different parts of memory interact with each other. During design emulation, emulators can run programs written in a proposed instruction set, and modern emulators can measure size, cost, and speed to determine whether a particular ISA is meeting its goals.

Computer organization
Computer organization helps optimize performance-based products. For example, software engineers need to know the processing power of processors. They may need to optimize software in order to gain the most performance for the lowest price. This can require quite a detailed analysis of the computer's organization. For example, in an SD card, the designers might need to arrange the card so that the most data can be processed in the fastest possible way. Computer organization also helps plan the selection of a processor for a particular project. Multimedia projects may need very rapid data access, while virtual machines may need fast interrupts. Sometimes certain tasks need additional components as well. For example, a computer capable of running a virtual machine needs virtual memory hardware so that the memory of different virtual computers can be kept separated. Computer organization and features also affect power consumption and processor cost.

Implementation
Once an instruction set and microarchitecture have been designed, a practical machine must be developed. This design process is called the implementation. Implementation is usually not considered architectural design, but rather hardware design engineering.
Implementation can be further broken down into several steps:
Logic implementation designs the circuits required at a logic-gate level.
Circuit implementation does transistor-level designs of basic elements (e.g., gates, multiplexers, latches) as well as of some larger blocks (ALUs, caches etc.) that may be implemented at the logic-gate level, or even at the physical level if the design calls for it.
Physical implementation draws physical circuits. The different circuit components are placed in a chip floorplan or on a board, and the wires connecting them are created.
Design validation tests the computer as a whole to see if it works in all situations and all timings. Once the design validation process starts, the design at the logic level is tested using logic emulators. However, this is usually too slow to run a realistic test, so after making corrections based on the first test, prototypes are constructed using field-programmable gate arrays (FPGAs). Most hobby projects stop at this stage. The final step is to test prototype integrated circuits, which may require several redesigns.
For CPUs, the entire implementation process is organized differently and is often referred to as CPU design.

Design goals
The exact form of a computer system depends on the constraints and goals. Computer architectures usually trade off standards, power versus performance, cost, memory capacity, latency (the amount of time it takes for information from one node to travel to the source) and throughput. Sometimes other considerations, such as features, size, weight, reliability, and expandability, are also factors. The most common scheme does an in-depth power analysis and figures out how to keep power consumption low while maintaining adequate performance.

Performance
Modern computer performance is often described in instructions per cycle (IPC), which measures the efficiency of the architecture at any clock frequency; a faster IPC rate means the computer is faster. Older computers had IPC counts as low as 0.1, while modern processors easily reach near 1. Superscalar processors may reach three to five IPC by executing several instructions per clock cycle. Counting machine-language instructions would be misleading because they can do varying amounts of work in different ISAs. The "instruction" in the standard measurements is not a count of the ISA's machine-language instructions, but a unit of measurement, usually based on the speed of the VAX computer architecture. Many people used to measure a computer's speed by the clock rate (usually in MHz or GHz). This refers to the cycles per second of the main clock of the CPU. However, this metric is somewhat misleading, as a machine with a higher clock rate may not necessarily have greater performance. As a result, manufacturers have moved away from clock speed as a measure of performance. Other factors influence speed, such as the mix of functional units, bus speeds, available memory, and the type and order of instructions in the programs. There are two main types of speed: latency and throughput. Latency is the time between the start of a process and its completion. Throughput is the amount of work done per unit time. Interrupt latency is the guaranteed maximum response time of the system to an electronic event (like when the disk drive finishes moving some data). Performance is affected by a very wide range of design choices; for example, pipelining a processor usually makes latency worse, but makes throughput better.
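As a back-of-the-envelope illustration of why clock rate alone misleads, consider the following sketch; the figures are invented for the example and do not describe any real processor.

def execution_time(instructions, ipc, clock_hz):
    # time = instruction count / (instructions per cycle * cycles per second)
    return instructions / (ipc * clock_hz)

program = 1_000_000_000                                                 # dynamic instruction count
wide_slow_clock   = execution_time(program, ipc=3.0, clock_hz=2.0e9)    # ~0.17 s
narrow_fast_clock = execution_time(program, ipc=0.9, clock_hz=4.0e9)    # ~0.28 s
# the 2 GHz machine finishes first: its higher IPC outweighs the other's clock rate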
Computers that control machinery usually need low interrupt latencies. These computers operate in a real-time environment and fail if an operation is not completed in a specified amount of time. For example, computer-controlled anti-lock brakes must begin braking within a predictable and limited time period after the brake pedal is sensed, or else the brake will fail. Benchmarking takes all these factors into account by measuring the time a computer takes to run through a series of test programs. Although benchmarking shows strengths, it should not be the sole basis for choosing a computer. Often the measured machines split on different measures. For example, one system might handle scientific applications quickly, while another might render video games more smoothly. Furthermore, designers may target and add special features to their products, through hardware or software, that permit a specific benchmark to execute quickly but don't offer similar advantages to general tasks.

Power efficiency
Power efficiency is another important measurement in modern computers. Higher power efficiency can often be traded for lower speed or higher cost. The typical measurement when referring to power consumption in computer architecture is MIPS/W (millions of instructions per second per watt). Modern circuits require less power per transistor as the number of transistors per chip grows, but total power consumption still rises, because each transistor put on a new chip requires its own power supply and new pathways built to power it. However, the number of transistors per chip is starting to increase at a slower rate. Therefore, power efficiency is starting to become as important as, if not more important than, fitting more and more transistors into a single chip. Recent processor designs have shown this emphasis as they put more focus on power efficiency rather than cramming as many transistors into a single chip as possible. In the world of embedded computers, power efficiency has long been an important goal next to throughput and latency.

Shifts in market demand
Increases in clock frequency have grown more slowly over the past few years, compared to power reduction improvements. This has been driven by the end of Moore's Law and demand for longer battery life and reductions in size for mobile technology. This change in focus from higher clock rates to power consumption and miniaturization can be shown by the significant reductions in power consumption, as much as 50%, that were reported by Intel in their release of the Haswell microarchitecture, where they dropped their power consumption benchmark from 30–40 watts down to 10–20 watts. Comparing this to the processing speed increase of 3 GHz to 4 GHz (2002 to 2006), it can be seen that the focus of research and development is shifting away from clock frequency and moving towards consuming less power and taking up less space.

See also
Comparison of CPU architectures
Computer hardware
CPU design
Floating point
Harvard architecture (Modified)
Dataflow architecture
Transport triggered architecture
Reconfigurable computing
Influence of the IBM PC on the personal computer market
Orthogonal instruction set
Software architecture
von Neumann architecture
Flynn's taxonomy

References

Sources
Barton, Robert S., "Functional Design of Computers", Communications of the ACM 4(9): 405 (1961).
Barton, Robert S., "A New Approach to the Functional Design of a Digital Computer", Proceedings of the Western Joint Computer Conference, May 1961, pp. 393–396.
About the design of the Burroughs B5000 computer.
Bell, C. Gordon, and Newell, Allen (1971). Computer Structures: Readings and Examples, McGraw-Hill.
Blaauw, G.A., and Brooks, F.P., Jr., "The Structure of System/360, Part I: Outline of the Logical Structure", IBM Systems Journal, vol. 3, no. 2, pp. 119–135, 1964.

External links
ISCA: Proceedings of the International Symposium on Computer Architecture
Micro: IEEE/ACM International Symposium on Microarchitecture
HPCA: International Symposium on High Performance Computer Architecture
ASPLOS: International Conference on Architectural Support for Programming Languages and Operating Systems
ACM Transactions on Architecture and Code Optimization
IEEE Transactions on Computers
The von Neumann Architecture of Computer Systems

Central processing unit
19235380
https://en.wikipedia.org/wiki/Vouch%20by%20Reference
Vouch by Reference
Vouch by Reference (VBR) is a protocol used in Internet mail systems for implementing sender certification by third-party entities. Independent certification providers vouch for the reputation of senders by verifying the domain name that is associated with transmitted electronic mail. VBR information can be used by a message transfer agent, a mail delivery agent, or an email client. The protocol is intended to become a standard for email sender certification, and is described in RFC 5518.

Operation
Email sender
A user of a VBR email certification service signs its messages using DomainKeys Identified Mail (DKIM) and includes a VBR-Info field in the signed header. The sender may also use the Sender Policy Framework (SPF) to authenticate its domain name. The VBR-Info: header field contains the domain name that is being certified, typically the responsible domain in a DKIM signature (the d= tag), the type of content in the message, and a list of one or more vouching services, that is, the domain names of the services that vouch for the sender for that kind of content:

VBR-Info: md=domain.name.example; mc=type; mv=vouching.example:vouching2.example

Email receiver
An email receiver can authenticate the message's domain name using DKIM or SPF, thus finding the domains that are responsible for the message. It then obtains the name of a vouching service that it trusts, either from among the set supplied by the sender or from a locally configured set of preferred vouching services. Using the Domain Name System, the receiver can verify whether a vouching service actually vouches for a given domain (a minimal lookup sketch in Python is given at the end of this entry). To do so, the receiver queries a TXT resource record for the name composed as:

domain.name.example._vouch.vouching.example

The returned data, if any, is a space-delimited list of all the types for which the service vouches, given as lowercase ASCII; these should match the message's self-asserted content type. The defined types are transaction, list, and all. Auditing the message may allow the receiver to establish whether its content actually corresponds to the asserted type. The result of the authentication can be saved in a new header field, according to RFC 6212, like so:

Authentication-Results: receiver.example; vbr=pass header.mv=vouching.example header.md=domain.name.example

Implementations and variations
OpenDKIM and the MDaemon Messaging Server by Alt-N Technologies have been among the first software implementations of VBR. OpenDKIM provides a milter as well as a standalone library. Roaring Penguin Software's CanIt anti-spam filter supports VBR as of version 7.0.8, released on 2010-11-09. Spamhaus has released The Spamhaus Whitelist, which includes a domain-based whitelist, the DWL, where a domain name can be queried as, e.g., dwltest.com._vouch.dwl.spamhaus.org. Although the standard only specifies TXT resource records, following a long-established DNSBL practice Spamhaus has also assigned A resource records with values in 127.0.2.0/24 for whitelist return codes. Being able to query an address record may allow easier deployment of existing code. However, their techfaq recommends checking the domain (the value of the d= tag) of a valid DKIM-Signature by querying the corresponding TXT record, and their howto gives details about inserting VBR-Info header fields in messages signed by whitelisted domains. By 2013, one of the protocol authors considered the protocol a flop.

References

Email authentication Cryptographic protocols Spam filtering
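As referenced above, the receiver-side TXT lookup can be sketched in a few lines of Python. This is a hedged illustration, not a production implementation: it assumes the third-party dnspython package, reuses the example domain names from this entry, and the function name is invented.

import dns.resolver  # third-party "dnspython" package

def vouches_for(domain, vouching_service, content_type="transaction"):
    # Build the query name described above: <domain>._vouch.<service>
    name = "%s._vouch.%s" % (domain, vouching_service)
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False   # the service does not vouch for this domain
    for rdata in answers:
        # each TXT record holds a space-delimited, lowercase list of vouched types
        types = b" ".join(rdata.strings).decode("ascii").lower().split()
        if content_type in types or "all" in types:
            return True
    return False

# e.g. vouches_for("domain.name.example", "vouching.example", "list")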
6776471
https://en.wikipedia.org/wiki/NTT%20Communications
NTT Communications
NTT Communications Corporation, or NTT Com, is a Japanese telecommunications company that operates its network services on a global scale in over 190 countries and regions, has locations in more than 70 countries and regions, and employs approximately 5,500 people (NTT Communications Group: 11,500 employees) as of March 2020. Its headquarters are located in the Otemachi Place West tower in Otemachi, Chiyoda, Tokyo. NTT Communications Corporation was founded in July 1999 as a wholly owned subsidiary of Nippon Telegraph and Telephone (NTT), one of the largest and best-known telecom companies in Japan and worldwide. Currently, NTT Communications offers network management, telecommunication services such as VPN, and information and communications technology (ICT) solutions, including cloud, consulting, and managed services, to companies and governments as well as individual customers.

History
1996–2005: Founding and early years
In 1996 several new policies were issued under the Telecommunications Law, and as a result of these policy changes NTT Communications Corporation was established in July 1999; since then NTT has served as the parent company, controlling NTT Communications, which is responsible for long-distance and worldwide telephone services, and two other local telecom companies. In 2000, the firm launched a new international service called "0033 SAMURAI Mobile", allowing users to make international phone calls with reasonable calling fees. It also began data center services, both within Japan and overseas, to support e-business conducted by corporations. On 1 March 2001, NTT Com entered a license agreement with InterWise, a major provider of live eLearning software and solutions for enterprises. Under the agreement, NTT Communications could offer the InterWise system to its customers as a cross-corporation solution, enabling the firm to expand its eLearning market by using cutting-edge eLearning technology developed by InterWise to encourage information sharing and corporate development. In December 2003, NTT Communications decided to take over operations from Crosswave Communications Inc. (CWC), a major data communications services provider that had gone bankrupt, reportedly agreeing to acquire CWC for approximately 10 billion yen. On 3 October 2005, the company won Best Customer Care at the World Communication Awards 2005, held in London, becoming the first Asian company to earn the award in the communications field.

2006–2015: Expansion
In 2006, NTT Communications started a new Open Computer Network (OCN) Hosting Service, offered mainly to small and medium-sized enterprises in Japan. In May 2011, NTT purchased 70% of Frontline Systems, an Australian IT services provider, and in October 2013 it merged Frontline Systems with NTT Australia to form NTT ICT. On 17 July 2013, the world's first 100 Gbit/s Ethernet technology on a cable system linking Japan and the United States was deployed by NTT Com, increasing the design capacity of the company's system by 2.5 times. On 2 December 2014, the company, with NTT Group's ICT solutions and associated international communications business, won Best Global Operator at the World Communications Awards (WCA), a prize generally given to those who innovate and provide great customer experiences.

2016–present
In October 2017, Gartner Inc.
positioned NTT Communications Corp. in the Leaders quadrant of the "Magic Quadrant for Managed Hybrid Cloud Hosting (MHCH), Asia/Pacific" for the third consecutive year. On 28 May 2020, the firm announced that information might have been leaked as a result of unauthorized access, although no information related to consumer customers was disclosed. According to the official website, "on May 28 that some information—although no information on consumer customers—was possibly leaked externally on May 11 due to unauthorized access to NTT Com facilities by attackers on May 7."

Corporate governance
NTT Communications Corporation's board of directors as of September 2020:
CEO: Toru Maruoka
Senior executive vice presidents: Hidemune Sugahara, Hiroki Kuriyama
Executive vice president: Tomohiro Ando
Senior vice presidents: Shuichi Sasakura, Hiromasa Takaoka, Junichi Kudo, Mamoru Watanabe, Hidetaka Nishikawa, Toshio Kanai, Katsushige Kojima, Shuji Inaba, Masayuki Oikawa, Sachiko Oonishi, Yoshiyuki Kobayashi, Hiraku Otsuchi, Satoshi Daimon, Takashi Ohira
Audit and Supervisory Board members: Kazuhiko Aramoto, Sakuo Sakamoto, Ikuo Izutsu

Products and services
API gateway
A system that enables both individual and corporate clients to directly control data operation and maintenance of their services as part of the business processes associated with application procedures. The gateway also allows customers, especially corporate clients, to access NTT Com services for their global business activities more easily and at higher speed.

DDoS Protection Services
The company has expanded its Distributed Denial of Service (DDoS) Protection Services, which allow Global IP Network customers worldwide to access the network, customize their level of service, and find suitable support. NTT Communications developed the protection services because DDoS attacks can occur at any time, potentially harming network infrastructure, a firm's performance, and the accessibility of a website or other IP system; the impact of an attack may lead to noticeable losses in revenue.

Machine-to-Machine (M2M)
A secure, globally available mobile service called M2M was started in Hong Kong and Thailand on 30 January 2015, and the service has since been expanded gradually to other global markets. Today, the Machine-to-Machine network is available in around 200 countries and regions.

The Internet of Things (IoT)
IoT enables all sorts of physical objects to have network connectivity for the purpose of collecting data and information. NTT Com believes that IoT technology will produce leading innovations, which is why the corporation provides an IoT Platform to its customers to encourage digital transformation of their businesses.

UDDI
Since September 2003, NTT Communications has offered UDDI, a de facto basic registry for internet services; the registry allows users to find and search company names, their major businesses, and service descriptions with greater ease.

WebRTC Platform
NTT Com and two departments within the NTT Group, including the ICT solutions and international communications services, have launched a new service, Enterprise Cloud WebRTC Platform (known as ECL WebRTC). WebRTC enables the same-day development of real-time communication via online platforms, specifically voice and video communications and data sharing, on multiple devices including smartphones, tablets and web-based applications.
NTT Group
The holding company, Nippon Telegraph and Telephone Corporation (NTT), is one of the major players in the Japanese telecommunications industry and was founded in 1952. Initially the parent company operated as a public telecom provider in the country; since then, the corporation has developed its innovation strategy and expanded its services and network management. As of March 31, 2020, 319,050 people work for NTT Group as a whole, and consolidated operating revenues and operating income are ¥11,899.4 billion and ¥1,562.2 billion respectively. The Nippon Telegraph and Telephone group currently contains multiple corporations operating in different technological fields in order to meet customer demands:

Mobile communications business
NTT DOCOMO, INC.: one of the largest mobile communications companies in Japan, which expanded its domestic mobile internet market by adopting strategies based on community management principles.

Regional communications business (mostly regional telecom operations in Japan and related business)
NTT East Corporation: the firm provides long-distance and international communication services, cable TV business, etc.
NTT West Corporation: generally, NTT West Corporation offers services similar to NTT East Corporation's, but in the other part of the country; moreover, NTT West has entered overseas markets.

Long-distance and global or cross-border communications business (both of the following firms provide solutions services as well as long-distance communication services, including cloud services, network services, data center services, and so forth)
NTT Ltd.: the global operating company
NTT Communications Corporation: the Japan operating company

Data communications business
NTT Data: NTT Group established a data communications bureau in 1967, which is now known as NTT DATA Corporation; currently the firm offers its services both in Japan and overseas. Its main business activity is system integration, as well as network system services, and its strength is considered to be the development of wide-ranging custom systems.

International market
Israel
In January 2002, NTT Communications invested around $1 million in a unique technology developed in Israel in order to expand its international services. The new technology fascinated NTT Communications because it had the ability to transfer audio and video files while connecting people in different locations.

India
The Israeli technology was utilized for a new service targeting corporate customers, and NTT Communications started offering the service in India two months later. In June 2015, NTT Com's subsidiary NTT Communications India Private Limited (known as NTT Com India) opened two more branch offices, in Ahmedabad, Gujarat, in order to deliver ICT solutions to its customers.

Germany
The article "NTT Communications: PoP" describes that "NTT Communications ... announced it has deepened its network connectivity in Germany with a new Point of Presence (PoP) on its Tier-1 Global IP Network in Munich." The PoPs offer Tier-1 network connectivity, allowing the company to deliver high-speed, low-latency network connections to its customers.

China and Hong Kong
Financial Data Center Tower 2 (FDC2), the company's first data center of this kind in the region, was established by NTT Communications with the purpose of minimizing data center costs and enhancing the efficiency of data center energy use in an eco-friendly way.
As examples of its energy-efficient innovative technologies, cooling walls, batteries, and water-side economization are included; FDC2 has utilized renewable energy sources such as solar power, installing solar panels on the data center, and has incorporated other ecological facilities, such as smart lighting systems, for energy saving. In fact, FDC2 has reduced energy consumption by 60% every year, contributing to NTT Group's eco strategies.

Malaysia, the Philippines, Singapore, and Hong Kong
On 20 August 2012, NTT Com launched a new operation called Asia Submarine-cable Express (ASE), an undersea cable connecting multiple big Asian cities at 40 Gbit/s. The company has invested heavily in ASE, which was built in cooperation with several Asian firms such as Telekom Malaysia, the Philippines-based PLDT, and StarHub, based in Singapore. The landing points for the cable system were constructed in Japan, Malaysia, the Philippines, and Singapore, with Hong Kong added in early 2013. The direct connection between the countries allows customers in the five countries to utilize the data centers, cloud services, and network provided by NTT Com.

Subsidiaries
NTT Com consists of the following major subsidiary companies:
Arkadin
Emerio
Netmagic Solutions
NTT Brazil
NTT Com Asia: the East Asia headquarters of NTT Communications. It is responsible for the Hong Kong, Macau, Taiwan and Korea markets, as well as managing its wholly owned subsidiary HKNet. It employs over 380 staff.
NTT Com Security (ex Integralis)
NTT Com Thailand
NTT Communications India Private Limited
NTT Communications Russia
NTT Europe Ltd: founded in 1988 and headquartered in London, UK, with branch offices in many European locations
NTT ICT (Australia)
NTT MSC
NTT Resonant
NTT Singapore
NTTCom Managed Services
OCN, or Open Computer Network
Verio
Virtela

Partnerships
Dimension Data
NTT Communications contracted with Dimension Data to create a "cloud powerhouse", allowing the companies to provide their clients a solution with hybrid IT. The alliance will also encourage access to worldwide software-based network services, connecting more than 190 countries and approximately 140 global data centers.

Mitsui Chemicals Inc.
The two companies presented a new prediction technology utilizing deep-learning-based artificial intelligence (AI) developed by NTT Communications. According to the article "Deep-learning-based", "The predictions are produced in just 20 minutes after sampling process data, by modeling the relationships between process data and raw material, and furnace conditions, using deep-learning-based artificial intelligence (AI) developed by NTT Com."

Geminare
NTT Communications expanded its "Disaster Recovery as a Service" (DRaaS) solution across the European market, allowing corporate customers to develop their disaster recovery business by utilizing the enterprise cloud platform. NTT Com's DRaaS solution was already available in the United States, and the number of corporations incorporating disaster recovery services was expected to exceed those using conventional recovery services; in other words, demand for disaster recovery services was likely to increase. Thus, NTT Communications decided to launch its DRaaS solution in Europe with the support of Geminare.
Arkadin In the 2010s, NTT Communications and the worldwide collaboration services provider Arkadin decided to expand their partnership in order to offer video conferencing to Japanese organizations and businesses, especially multinational clients based in the country. NTT Communications believed that Arkadin's cloud-based video conferencing services would allow the company to deliver high-quality video experiences to customers, with HD video and simple one-click access from any internet-connected device, such as a laptop or mobile phone. Sponsorships Sports The corporation has sponsored the rugby team NTT Communications Shining Arcs (commonly called the Shining Arcs), which presently plays in the Japan Rugby Top League. Theme park NTT Communications has sponsored Peter Pan's Flight, one of the rides in Tokyo Disneyland, which is themed around the world of Peter Pan. The company has also presented a ride in Tokyo DisneySea called Jasmine's Flying Carpets, an attraction based on the famous Disney movie Aladdin. Event NTT Communications has sponsored NANOG, a network operators' group in North America, and NTT Com agreed to support three specific NANOG events: NANOG 70, held in Bellevue, Washington, from 5 to 7 June 2017; NANOG 71, held in San Jose, California, on 2–4 October; and NANOG 72, which took place from 5 to 7 February in Atlanta, Georgia. See also NTT Europe Online NTT Ltd. Open Computer Network Telegraph Telephone Verio References Nippon Telegraph and Telephone Telecommunications companies based in Tokyo Tier 1 networks Telecommunications companies established in 1999 Mobile virtual network operators Japanese companies established in 1999
8817944
https://en.wikipedia.org/wiki/Mall%20Madness
Mall Madness
Mall Madness is a shopping-themed board game released by Milton Bradley (later versions are titled Electronic Mall Madness). The original game was released in 1988, and an electronic talking version was sold starting in 1989. Milton Bradley updated the game in 1996 with a new design, and another updated version was released in 2004. A redesigned version was released in 2020. Marketing The game was designed for players aged 9 and above, and was mainly targeted towards young teenage girls. Milton Bradley made several commercials for the game. In one from 1990, the camera showed alternating shots of four girls shopping in a real shopping mall and playing the game at home. After one girl moves her pawn to the game board's parking lot (see Gameplay), she exclaims: "I win!" The other three demonstrate dismay at having lost. The commercial's last line is "Mall Madness, it's the mall with it all!" Two further versions have since been released: a Hannah Montana special edition and a "Littlest Pet Shop Edition". The Hannah Montana version was the first to picture a male on the front of the box. Game contents Mall Madness was sold with the following pieces: box, game board, electronic computer, instruction manual, four rubber pads to prevent wall pieces from slipping, six plastic wall pieces, four cardboard shopping lists, two sale signs, one clearance sign, eight plastic pawns (two for each colour - red, blue, yellow and green - one female, the other male), forty plastic pegs (used to mark shopping lists), paper money (resembling U.S. currency, except that each bill denomination is colour-coded for the game), four cardboard credit cards, and 29 pieces of cardboard which held the game board together. The board The board is a three-dimensional field representing a mall with two stories. The bank and the speaker are located in the center. Some of the stores and locations are on the second floor and can only be reached by stairs or elevator. Electronic computer The game featured an electronic computer that dictated gameplay. Its colour varied from game to game but was almost always peach or grey. The computer uses four AA alkaline batteries. All computers in the early version of the game were manufactured in the United States, and Milton Bradley copyrighted the computer in 1989. The computers complied with Part 15 of the FCC's rules. The top of the computer featured three buttons: one to start or reset gameplay, one to begin and end turns, and one to repeat the last announcement. The computer has two voices, one female, the other male. There are two slots on the computer's top, both designed for the credit cards that accompanied the game. One slot was used to buy items, the other to use the banking feature. The 2004 version uses only a female voice. Money The game had two components of currency: paper cash and credit cards. These were used together to accomplish the game's objective. Four credit cards were included, one for each player. The names of the credit cards are: Fast Cash (from Good Cents Bank), Quick Draw (from Dollar Daze Bank), MEGAmoney (from Big Bucks Bank), and Easy Money (from Cash n' Carry Bank). In the 2004 edition, they were simply known as "cash cards". Stores Mall Madness featured eighteen stores:
I.M. Coughin Drug Store
Suits Me Fine Men's Shop
2 Left Feet Shoes
Short Circuit Electronics
Yuppy Puppies Pets
Scratchy's Records
Novel Idea Books
Frump's Fashion Boutique
The Write Stuff Card Shop
Fork It Over Kitchen Store
Hokus Focus Cameras
Sweaty's Sports
Made in the Shade Sunglasses
Chip's Computers
Ruby's Jewelry
DingaLing Phones
M.T Wallet's Department Store
Tinker's Toys
Players could also visit four other areas:
Conehead's Ice Cream
Restrooms
Vidiots Arcade
Aunt Chovie's Pizza
A limited variety of items could be purchased, the most inexpensive being pizza, at five dollars, and the most expensive a regularly priced exotic parrot, at two hundred dollars. Object of the game The object of the game is to be the first player to purchase six items on the player's shopping list and get back to the parking lot, or to reach a final destination, depending on the version of the game. For more challenging gameplay, the goal could be increased to up to ten items in the 1989 and 1996 editions. Gameplay The game takes place on a board representing a two-story shopping mall. The game is designed for two to four players. Each player receives $150 ($200 in previous versions) from one player who is designated to be the banker. The banker dispenses cash in the following manner: one $50 bill (two in previous versions), three $20 bills, three $10 bills, and two $5 bills. The first player presses the computer's gameplay button, which directs the player to move a random number of spaces. Players can move horizontally (across) or vertically (up and down), but not diagonally. Players do not have to move the full count to enter a store, and can only move into a store through its doors, not its walls. When arriving at a store, a player can make purchases with a cardboard credit card by inserting it into the computer's "buy" slot, and the computer tracks the gameplay. After the player purchases an item with the credit card (signified by a cash register sound), the player pays the banker the appropriate amount of cash and then uses a peg to mark that item off on their shopping list. Once a player buys an item from a particular store, they cannot shop at that store again. At the start of each turn, an electronic voice announces a clearance at one store and sales at two others. Players can use these sales to their advantage, since visiting the ATM for more cash takes up a turn. At random intervals, a player may be given a clearance or a sale at a store that does not currently have one. Other times, a player may have to pay an additional $5 fee for an item. Sometimes, the game will refuse a sale, or will refuse to dispense more cash. Occasionally, the game will randomly instruct players to move to the ATM, the arcade, the restrooms, or various stores. Once a player gets six of the items on their list, they must be the first to reach their respective parking lot (1989, 1996, and 2020 editions) or final destination (which may change at any time; 2004 and "Littlest Pet Shop" editions). The first person to accomplish this wins the game. Legacy Milton Bradley released a line of electronic voice board games following Mall Madness. In 1990, Milton Bradley, under the Parker Brothers brand, released an updated version of the 1984 Mystery Mansion board game, adding an electronic voice device. Then, in 1992, they released Omega Virus, set on a space station infected by an extraterrestrial computer virus.
Unlike Milton Bradley's previous electronic voice games, Omega Virus had a countdown timer that would end the game if it was not completed before time ran out. Michael Gray, who created Mall Madness, also designed Omega Virus as well as another electronic game called "Dream Phone." External links Mall Madness 1996 instructions at Hasbro.com Mall Madness 2004 instructions at Hasbro.com Mall Madness 2020 instructions at Hasbro.com Mall Madness: "The Littlest Pet Shop" instructions at Hasbro.com Board games introduced in 1988 Children's board games Milton Bradley Company games Roll-and-move board games Electronic board games
8181214
https://en.wikipedia.org/wiki/MIT/GNU%20Scheme
MIT/GNU Scheme
MIT/GNU Scheme is a programming language, a dialect and implementation of the language Scheme, which is a dialect of Lisp. It can produce native binary files for the x86 (IA-32, x86-64) processor architecture. It supports the R7RS-small standard. It is free and open-source software released under a GNU General Public License (GPL). It was first released by the developers at the Massachusetts Institute of Technology (MIT) in 1986, as free software even before the GNU General Public License existed. It is now part of the GNU Project. It features a rich runtime software library, a powerful source-code-level debugger, a native-code compiler and a built-in Emacs-like editor named Edwin. The books Structure and Interpretation of Computer Programs and Structure and Interpretation of Classical Mechanics include software that can be run on MIT/GNU Scheme. Edwin Edwin is a built-in Emacs-like editor that comes with MIT/GNU Scheme. Edwin normally displays the *scheme* data buffer, the mode line, and the mini-buffer when it starts. As in Emacs, the mode line gives information such as the name of the buffer above it and whether that buffer is read-only, modified, or unmodified. References External links MIT/GNU Scheme page at MIT's AI Lab Scheme (programming language) compilers Scheme (programming language) interpreters Scheme (programming language) implementations GNU Project software
10673204
https://en.wikipedia.org/wiki/Check%20Point%20IPSO
Check Point IPSO
Check Point IPSO is the operating system for the Check Point firewall appliance and other security devices, based on FreeBSD, with numerous hardening features applied. The IP in IPSO refers to Ipsilon Networks, a company specialising in IP switching acquired by Nokia in 1997. In 2009, Check Point acquired the Nokia security appliance business, including IPSO, from Nokia. Variations IPSO, now at version 6.2, is a fork of FreeBSD 6. There were two other systems, called IPSO-SX and IPSO-LX, that were Linux-based: IPSO SX was Nokia's first release of a Linux-based IPSO, and was deployed in 2002 on the now-defunct Message Protector, and briefly thereafter on a short-lived appliance version of the "Nokia Access Mobilizer", acquired from Eizel. It had a partitioning scheme somewhat reminiscent of IPSO SB, a LILO configuration and boot manager also somewhat inspired by IPSO SB, and a software package installer that made RPM packaging look more familiar to a Nokia IPSO administrator. It did not, however, include a full configuration database or Voyager web interface, the two things that normally define IPSO. IPSO LX is a nearly vanilla Gentoo-based Linux OS, and is used on Nokia appliances sold with Sourcefire 3D. It includes a full Voyager and database implementation; in fact, the Voyager look and feel in IPSO SB 4.0 onwards was based on that implemented for IPSO LX. Check Point offers three lines of security appliances: one based on IPSO 6.x, one based on an operating system called SecurePlatform, and the latest based on the Gaia platform (RHEL4-based). Features Notable IPSO features or firsts include:
Effective firewall load-balancing (in conjunction with Check Point synchronization), derived from Network Alchemy clustering technology, predating, and still developed independently from, Check Point's ClusterXL
The first commercial IPv6 router out of beta-testing (ahead of Cisco and Juniper Networks)
Firewall Flows, for putting Check Point security rule implementation into the dedicated network processor circuitry on the fly (though this has now largely evolved into Check Point's SecureXL)
Versions IPSO SB was originally derived by Ipsilon Networks from FreeBSD 2.1-STABLE and cross-compiled on FreeBSD 2.2.6-RELEASE and 3.5-RELEASE platforms. Its major components are:
A configuration database held in memory by the "xpand" daemon, which creates legacy UNIX configuration in /etc on the fly
A partitioning scheme which places a mini-IPSO in a separate boot manager partition for recovery
A partition-slicing scheme which segregates read-only and read-write content
A software packaging scheme which requires all packages to remain in a single location under /opt
A web interface, Voyager, which was closely integrated with the configuration database (it has now diverged somewhat)
IPSO versions up to 2.x were sold by Ipsilon Networks as part of the ATM tag-switching solutions that they originally pioneered. IPSO 3.0 onwards was designed to host Check Point FireWall-1 and other third-party packages. IPSO 3.0 to 3.9 spanned from 1999 to 2005 and, while adding many features and significant performance and hardware refinements, remained recognizably the same to the administrator. IPSO 4.0 was not designed as a major update and was internally numbered as IPSO 3.10. However, Check Point software was unable to process a two-digit dot version; the release also included a refresh of the Voyager HTML interface.
Up to that point, JavaScript and frames had been avoided in order to facilitate the use of Lynx as a command-line interface. Together, these changes resulted in the release being renumbered as 4.0. IPSO 4.1 and IPSO 4.2 are incremental releases. IPSO 4.2 will gain source-based routing as its last scheduled new feature. All new development will continue on IPSO 6.x. IPSO 5.0 build 056 was released in 2009 for VSX R65 support on IP Appliance. IPSO 6.0 was announced by Nokia in relation to the IP2450 and IP690 hardware. It is based on FreeBSD 6.x. Its primary advantages over IPSO 4.x are improved memory management, performance, scheduling, threading, POSIX compliance, and other operating system features. IPSO 6.0.7 was released in 2009 for the IP690 and IP2450 with CoreXL (multi-core) support. IPSO 6.1 contains other enhancements from FreeBSD 6.x, but without CoreXL support. Because of the step change, Nokia advertised that IPSO 4.2, 6.0.7 and 6.1 would run alongside each other for a period of time. When Check Point acquired the Nokia IP appliance business, the 6.0.7 and 6.1 development branches were merged and combined into 6.2. The most recent version is IPSO 6.2MR6, released in February 2017. For a while, Nokia offered IPSO 7, which was actually IPSO LX. It was discontinued after 7.2, in 2008. After acquiring the Nokia IP appliance business, Check Point announced project Gaia to combine both IPSO and SecurePlatform; the first release was expected in 2011. References External links FreeBSD 2.2.6 base manual pages Check Point firewall packages Other packages and directory Nokia platforms
1364072
https://en.wikipedia.org/wiki/Xsan
Xsan
Xsan is Apple Inc.'s storage area network (SAN) or clustered file system for macOS. Xsan enables multiple Mac desktop and Xserve systems to access shared block storage over a Fibre Channel network. With the Xsan file system installed, these computers can read and write to the same storage volume at the same time. Xsan is a complete SAN solution that includes the metadata controller software, the file system client software, and integrated setup, management and monitoring tools. Xsan has all the normal features to be expected in an enterprise shared disk file system, including support for large files and file systems, multiple mounted file systems, metadata controller failover for fault tolerance, and support for multiple operating systems. Interoperability Xsan is based on the StorNext File System made by Quantum Corporation. The StorNext File System and the Xsan file system share the same file system layout and the same protocol when talking to the metadata server. They also appear to share a common code base, or at least very close development, based on the new features developed for both file systems. The Xsan website claims complete interoperability with the StorNext File System: "And because Xsan is completely interoperable with Quantum’s StorNext File System, you can even provide clients on Windows, Linux, and other UNIX platforms with direct Fibre Channel block-level access to the data in your Xsan-managed storage pool." Quantum Corporation claims: "Complete interoperability with Apple’s Xsan and Promise RAID and Allows Xsan and Xserve RAID to support AIX, HP-UX, IRIX, Red Hat Linux, SuSE Linux, Mac OS X, Solaris, and Windows clients, including support for 64 Bit Windows and Windows Vista." Some of the command-line tools for Xsan begin with the letters cv, which stand for CentraVision, the original name for the file system. Xsan clients use TCP ports 49152–65535, with TCP/63146 frequently showing in log files. Data representation The Xsan file system uses several logical stores to distribute information. Two main classes of information appear in Xsan: user data (such as files) and file system metadata (such as folders, file names, file allocation information and so on). Most configurations use separate stores for data and metadata. The file system supports dynamic expansion and distribution of both data and metadata areas. History On January 4, 2005, Apple announced shipping of Xsan. In May 2006, Apple released Xsan 1.2 with support for volume sizes of nearly 2 petabytes. On August 7, 2006, Apple announced Xsan 1.4, which is available for Intel-based Macintosh computers as a Universal binary and supports file system access control lists. On December 5, 2006, Apple released Xsan 1.4.1. On October 18, 2007, Apple released Xsan 1.4.2, which resolved several reliability and compatibility issues. On February 19, 2008, Apple released Xsan 2, the first major update, which introduced MultiSAN and completely redesigned administration tools. Version 2.1 was introduced on June 10, 2008; 2.1.1 on October 15, 2008; and 2.2 on September 14, 2009. On July 20, 2011, Apple released Xsan 2.3, included in Mac OS X Lion. This was the first version of Xsan included with macOS. On August 25, 2011, Apple released Xsan 2.2.2, which brought several reliability fixes. On July 25, 2012, Apple released Xsan 3, included in OS X Mountain Lion. On October 17, 2014, Apple released Xsan 4 with OS X Yosemite. On September 20, 2016, Apple released Xsan 5 with macOS Sierra and macOS Server 5.2.
On November 12, 2020, Apple released Xsan 7 with macOS Big Sur. References Krypted.com Xsan Tutorials and Documentation External links Apple's Xsan page Shared disk file systems Apple Inc. file systems Apple Inc. software
3960131
https://en.wikipedia.org/wiki/Electronic%20data%20capture
Electronic data capture
An electronic data capture (EDC) system is a computerized system designed for the collection of clinical data in electronic format for use mainly in human clinical trials. EDC replaces the traditional paper-based data collection methodology to streamline data collection and expedite the time to market for drugs and medical devices. EDC solutions are widely adopted by pharmaceutical companies and contract research organizations (CROs). Typically, EDC systems provide:
a graphical user interface component for data entry
a validation component to check user data
a reporting tool for analysis of the collected data
EDC systems are used by life sciences organizations, broadly defined as the pharmaceutical, medical device and biotechnology industries, in all aspects of clinical research, but are particularly beneficial for late-phase (phase III-IV) studies and for pharmacovigilance and post-market safety surveillance. EDC can increase data accuracy and decrease the time taken to collect data for studies of drugs and medical devices. The trade-off that many drug developers encounter in deploying an EDC system to support their drug development is a relatively high start-up cost, followed by significant benefits over the duration of the trial. As a result, for EDC to be economical, the savings over the life of the trial must be greater than the set-up costs. This is often aggravated by two conditions: the initial design of the study in EDC does not facilitate the decrease in costs over the life of the study, owing to poor planning or inexperience with EDC deployment; and initial set-up costs are higher than anticipated, for the same reasons. The net effect is to increase both the cost and the risk to the study, with insignificant benefits. However, with the maturation of today's EDC solutions, many of the earlier burdens of study design and set-up have been alleviated through technologies that allow for point-and-click and drag-and-drop design modules. With little to no programming required, and with reusability from global libraries and standardized forms such as CDISC's CDASH, deploying EDC can now rival paper processes in terms of study start-up time. As a result, even earlier-phase studies have begun to adopt EDC technology. History EDC is often cited as having its origins in remote data entry (RDE) software, which surfaced in the life sciences market in the late 1980s and early 1990s. However, its origins might be traced to a contract research organization known then as the Institute for Biological Research and Development (IBRD). Drs. Nichol, Pickering, and Bollert offered "a controlled system for post-marketing surveillance (PMS) of newly approved (NDA) pharmaceutical products," with surveillance data being "entered into an electronic data base on site" at least as early as 1980. Clinical research data (patient data collected during the investigation of a new drug or medical device) is collected by physicians, nurses, and research study coordinators in medical settings (offices, hospitals, universities) throughout the world. Historically, this information was collected on paper forms which were then sent to the research sponsor (e.g., a pharmaceutical company) for data entry into a database and subsequent statistical analysis.
However, this process had a number of shortcomings:
data are copied multiple times, which produces errors
errors that are generated are not caught until weeks later
visibility into the medical status of patients by sponsors is delayed
To address these and other concerns, RDE systems were invented so that physicians, nurses, and study coordinators could enter the data directly at the medical setting. By moving data entry out of the sponsor site and into the clinic or other facility, a number of benefits could be derived:
data checks could be implemented during data entry (in real time), preventing some errors altogether and immediately prompting for resolution of other errors
data could be transmitted nightly to sponsors, thereby improving the sponsor's ability to monitor the progress and status of the research study and its patients
These early RDE systems used "thick client" software, installed locally on a laptop computer, to collect the patient data. The system could then use a modem connection over an analog phone line to periodically transmit the data back to the sponsor, and to collect questions from the sponsor that the medical staff would need to answer. Though effective, RDE brought with it several shortcomings as well. The most significant was that hardware (e.g., a laptop computer) needed to be deployed, installed, and supported at every investigational (medical) site. This became expensive for sponsors and complicated for medical staff. Usability and space constraints led to considerable dissatisfaction among medical practitioners. With the rise of the internet in the mid-1990s, the obvious solution to some of these issues was the adoption of web-based software that could be accessed using existing computers at the investigational sites. EDC represents this new class of software. Current landscape The EDC landscape has continued to evolve since it grew out of RDE in the late 1990s. Today, the market consists of a variety of new and established software providers. Many of these providers offer specialized solutions targeting certain customer profiles or study phases. Modern EDC systems include features such as cloud data storage, role-based permissions, and case report form designers, as well as clinical trial analytics, interactive dashboards, and electronic medical record integration. Future In 2013, the U.S. Food and Drug Administration (FDA) introduced its eSource guidance, which suggests methods of capturing clinical trial data electronically from the very beginning and moving it to the cloud, as opposed to EDC's more traditional method of capturing data initially on paper and transcribing it into the EDC system. Adoption of eSource was initially slow, with the FDA producing a webinar in July 2015 to further promote the guidance. Efforts like the TransCelerate eSource Initiative (in 2016) have been founded "to facilitate the understanding of the eSource landscape and the optimal use of electronic data sources in the industry to improve global clinical science and global clinical trial execution for stakeholders." A 2017 study by the Tufts Center for the Study of Drug Development suggested that within the following three years a "majority of [surveyed clinical information] companies" (growing from 38 percent to 84 percent) planned to incorporate eSource data.
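The validation component mentioned at the top of this article is, at its core, a set of programmed edit checks that fire at the moment of data entry. The following minimal sketch illustrates the idea; the field names, ranges, and query message are invented for illustration and do not correspond to any particular vendor's system.

    # Hypothetical sketch of a real-time EDC edit check (illustrative only).
    RANGES = {
        "systolic_bp": (70, 200),   # mmHg, plausible-range limits
        "heart_rate": (30, 200),    # beats per minute
    }

    def check_field(field, value):
        """Return a query message if the entered value fails its range check."""
        low, high = RANGES[field]
        if not low <= value <= high:
            return f"Query: {field}={value} is outside the expected range {low}-{high}"
        return None  # value passes; no query is raised

    # The check fires during entry, so the error surfaces immediately
    # rather than weeks later during central data review:
    print(check_field("systolic_bp", 320))

Whether data are transcribed from paper, keyed into an EDC screen, or captured directly as eSource, checks of this kind are what move error detection from weeks after a visit to the moment of entry.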
With 87 percent of research sites (2017) stating that eSource would be "helpful" or "very helpful" if integrated with today's EDC, a shift away from EDC (or EDC taking a more complementary role) may be possible. See also Clinical data acquisition Clinical data management system (CDMS) Case report form (CRF) Remote data entry (RDE) Remote data capture (RDC) Patient-reported outcome (PRO) Electronic patient-reported outcome (ePRO) Title 21 CFR Part 11 Mobile forms References Clinical research Telemetry Clinical data management
1962747
https://en.wikipedia.org/wiki/HCL%20Technologies
HCL Technologies
HCL Technologies is an Indian multinational information technology (IT) services and consulting company headquartered in Noida, Uttar Pradesh, India. It is a subsidiary of HCL Enterprise. Originally a research and development division of HCL, it emerged as an independent company in 1991 when HCL entered the software services business. The company has offices in 50 countries, including the United Kingdom, the United States, France, and Germany, with a worldwide network of R&D facilities, "innovation labs" and "delivery centers", and over 187,000 employees. Its customers include 250 of the Fortune 500 and 650 of the Global 2000 companies. It operates across sectors including aerospace and defense, automotive, banking, capital markets, chemical and process industries, energy and utilities, healthcare, hi-tech, industrial manufacturing, consumer goods, insurance, life sciences, manufacturing, media and entertainment, mining and natural resources, oil and gas, retail, telecom, and travel, transportation, logistics & hospitality. HCL Technologies is on the Forbes Global 2000 list. It is among the top 20 largest publicly traded companies in India, with a market capitalisation of $50 billion as of September 2021. As of July 2020, the company, along with its subsidiaries, had a consolidated annual revenue of ₹71,265 crore (US$10 billion). History of HCL HCL Enterprise HCL Enterprise was founded in 1976. The first three subsidiaries of parent HCL Enterprise were:
HCL Technologies - originally HCL's R&D division, it emerged as a subsidiary in 1991
HCL Infosystems
HCL Healthcare
The company tried to stay focused on hardware but, via HCL Technologies, software and services became a main focus. Revenues for 2007 were US$4.9 billion. Revenues for 2017 were US$6.5 billion, and HCL employed over 105,000 professionals in 31 countries. Revenues for 2018 were US$9 billion, and HCL employed over 110,000 professionals in 31 countries. A unit named HCL Enterprise Solutions (India) Limited was formed in July 2001. HCL Technologies is currently a subsidiary of Vamasundari (Delhi) through a chain of intermediate entities. Vamasundari (Delhi) is owned by Shiv Nadar and in turn holds the majority of shares in most HCL group companies. On 1 July 2019, HCL Technologies acquired a select few products of IBM, taking full ownership of research and development, sales, marketing, delivery, and support for AppScan, BigFix, Commerce, Connections, Digital Experience (Portal and Content Manager), Notes Domino, and Unica. Formation and early years In 1976, a group of six engineers, all former employees of Delhi Cloth & General Mills, led by Shiv Nadar, started a company that would make personal computers. Initially floated as Microcomp Limited, Nadar and his team (which also included Arjun Malhotra, Ajai Chowdhry, D.S. Puri, Yogesh Vaidya and Subhash Arora) started selling teledigital calculators to gather capital for their main product. On 11 August 1976, the company was renamed Hindustan Computers Limited (HCL). On 12 November 1991, a company called HCL Overseas Limited was incorporated as a provider of technology development services. It received the certificate of commencement of business on 10 February 1992, after which it began its operations. Two years later, in July 1994, the company name was changed to HCL Consulting Limited and eventually to HCL Technologies Limited in October 1999. HCL Technologies is one of the four companies under HCL Corporation, the second company being HCL Infosystems.
In February 2014, HCL launched HCL Healthcare. HCL TalentCare is the fourth and latest venture of HCL Corporation. HCL Technologies began as the R&D division of HCL Enterprise, a company which contributed to the development and growth of the IT and computer industry in India. HCL Enterprise developed an indigenous microcomputer in 1978, and a networking OS and client-server architecture in 1983. On 12 November 1991, HCL Technologies was spun off as a separate unit to provide software services. HCL Technologies was originally incorporated as HCL Overseas Limited. The name was changed to HCL Consulting Limited on 14 July 1994. On 6 October 1999, the company was renamed 'HCL Technologies Limited' for "a better reflection of its activities." Between 1991 and 1999, the company expanded its software development capacities to the US, European and APAC markets. IPO and subsequent expansion The company went public on 10 November 1999, with an issue of 1.42 crore (14.2 million) shares, valued at ₹4 each. During 2000, the company set up an offshore development centre in Chennai, India, for KLA-Tencor Corporation. In 2002, it acquired Gulf Computers Inc. In March 2021, HCL Technologies expanded its partnership with Google Cloud to bring HCL Software's Digital Experience (DX) and Unica Marketing cloud-native platforms to Google Cloud. Acquisitions Acquired In July 2018, US-based Actian was acquired by HCL Technologies and Sumeru Equity Partners for $330 million. Joint venture On 23 July 2015, CSC (NYSE: CSC) and HCL Technologies (BSE: HCLTECH) announced a joint venture agreement to form a banking software and services company, Celeriti FinTech. In October 2017, IBM struck a "strategic partnership" with HCL Technologies that had the latter firm take over development of the IBM Lotus Software's Notes, Domino, Sametime and Verse collaboration tools. In May 2018, HCL Technologies announced that it had joined the Blockchain in Transport Alliance (BiTA), known for incorporating blockchain in the transportation industry, to implement blockchain technology. Partnership On 9 June 2015, PC maker Dell announced a strategic distribution partnership with HCL Infosystems. In October 2018, TransGrid signed a 5-year managed services deal with HCL Technologies for IT services delivery and outsourcing support, with the outsourcing teams to be based in Australia. HCL Technologies also signed a seven-year exclusive partnership with Temenos: the exclusive strategic agreement covers non-financial services enterprises, with HCL granted a license to develop, market, and support the Temenos multi-experience development platform (MXDP). The agreement will help provide HCL's non-financial services clients with leading technology and higher levels of service and support. Operations HCL Technologies operates in 50 countries, including its headquarters in Noida, India. It has establishments in Australia, China, Hong Kong, India, Indonesia, Israel, Japan, Malaysia, New Zealand, Saudi Arabia, Singapore, South Africa, the United Arab Emirates and Qatar. In Europe it covers Belgium, Bulgaria, the Czech Republic, Denmark, Estonia, Finland, France, Germany, Italy, Lithuania, the Netherlands, Norway, Poland, Romania, Sweden, Switzerland, Portugal, and the United Kingdom. In the Americas, the company has offices in Brazil, Canada, Mexico, Puerto Rico, Guatemala, and the United States.
Business lines
Applications Services and Systems Integrations
BPO/Business Services: this division has "delivery centres" in India, the Philippines, Latin America, the USA, HCL BPO Northern Ireland, and Europe.
Engineering and R&D Services (ERS)
Infrastructure Management Services (IMS)
IoT WoRKS
DRYiCE
Digital & Analytics and e-publishing
Cybersecurity and GRC Services
Financial Risk & Compliance Solutions
Infrastructure Services Division A subsidiary of HCL Technologies, HCL Infrastructure Services Division (ISD) is an IT services company. Headquartered in Delhi NCR, India, HCL ISD was instituted in 1993 with the objective of addressing the demand for cost-effective management of technology infrastructure across geographically dispersed locations. HCL ISD, also known as HCL Comnet Systems and Services Ltd. in India, diversified to provide enterprise IT infrastructure globally in 1993, winning the first order to establish India's first floorless stock exchange. United Kingdom and Ireland On 7 September 2005, HCL Technologies expanded its operations base in County Armagh and Belfast in Northern Ireland. At the 2006 UK Trade and Investment India Business Awards in New Delhi, the then UK Prime Minister Tony Blair announced the expansion, which was aimed at creating more IT and BPO jobs in the area. HCL had acquired the Armagh-based AnswerCall Direct earlier in 2005. HCL BPO services in Ireland are carried out through its main delivery centres in Armagh and Belfast. In November 2011, after HCL revealed an expansion plan in County Kilkenny in Ireland, its Business Process Outsourcing (BPO) division in Northern Ireland won a contract for back-office services from the Department of Health. It was aimed at increasing the number of jobs and other employment opportunities in the region. Sri Lanka HCL announced on 16 June 2020 that it had commenced operations in Sri Lanka. The company plans to create 1,500 jobs in the country within the first 18 months of operations. HCL Infosystems HCL Enterprise's Infosystems subsidiary, as of 2015, was still active. This part of HCL was formed in 1976 to produce calculators. See also List of IT consulting firms List of public listed software companies of India Software industry in Telangana Information technology in India List of Indian IT companies References External links Indian companies established in 1991 Companies based in Noida International information technology consulting firms Information technology consulting firms of India Multinational companies headquartered in India Software companies of India Indian brands Business process outsourcing companies of India Consulting firms established in 1991 Software companies established in 1991 Outsourcing in India Business process outsourcing companies 1999 initial public offerings Companies based in Uttar Pradesh NIFTY 50 BSE SENSEX 1991 establishments in Uttar Pradesh Companies listed on the National Stock Exchange of India Companies listed on the Bombay Stock Exchange
30423710
https://en.wikipedia.org/wiki/List%20of%20disk%20operating%20systems%20called%20DOS
List of disk operating systems called DOS
This is a list of disk operating systems in which the acronym DOS is used to form their names. Many of these are simply referred to as "DOS" within their respective communities.
MS-DOS / IBM PC DOS compatible systems
MS-DOS (since 1981), Microsoft operating system based on 86-DOS for x86-based personal computers
IBM PC DOS (since 1981), OEM version of MS-DOS for the IBM Personal Computer and compatibles, manufactured and sold by IBM from the 1980s to the 2000s
DR-DOS (since 1988), MS-DOS-compatible operating system originally developed by Digital Research
ROM-DOS (1989), MS-DOS clone by Datalight
PTS-DOS (since 1993), MS-DOS clone developed in Russia by PhysTechSoft
FreeDOS (since 1998), open source MS-DOS clone
Other x86 disk operating systems with "DOS" in the name
86-DOS (a.k.a. QDOS, created 1980), an operating system developed by Seattle Computer Products for its 8086-based S-100 computer kit, heavily inspired by CP/M
Concurrent DOS (a.k.a. CDOS, Concurrent PC DOS and CPCDOS) (since 1983), a CP/M-86 and MS-DOS 2.11 compatible multiuser, multitasking DOS, based on Concurrent CP/M-86 developed by Digital Research
DOS Plus (since 1985), a PC DOS and CP/M-86 compatible multitasking operating system for early x86-based personal computers, based on Concurrent PC DOS 4.1/5.0 by Digital Research
Multiuser DOS (a.k.a. MDOS), a PC DOS and CP/M-86 compatible multiuser multitasking operating system based on Concurrent DOS by Digital Research
NetWare PalmDOS, a successor of DR DOS 6.0 specifically tailored for early mobile and palmtop PCs by Novell
Novell DOS, a multitasking successor of DR DOS 6.0 by Novell
OpenDOS, a successor of Novell DOS by Caldera
Disk operating systems for other platforms
AmigaDOS, the disk operating system portion of AmigaOS
AMSDOS, for Amstrad CPC compatibles
ANDOS, operating system for the Russian Electronika BK computers
Apple DOS, operating system for the Apple II series from late 1978 through early 1983
Apple ProDOS, name for both ProDOS 8 for the Apple II and ProDOS 16 for the Apple IIGS
Atari DOS, for the Atari 8-bit family of home computers
Commodore DOS, for Commodore's 8-bit computers
Cromemco DOS (CDOS), a CP/M-like operating system
CSI-DOS, for the Soviet Elektronika BK computers
DOS (Diskette Operating System), a small OS for 16-bit Data General Nova computers, a cut-down version of their RDOS
DEC BATCH-11/DOS-11, the first operating system to run on the PDP-11 minicomputer
Delta DOS, third-party option from Premier Microsystems for the Dragon 32/64
DIP DOS, the operating system of the Atari Portfolio
DOS/360, 1966 IBM System/360 mainframe computer Disk Operating System
DOS XL, Atari 8-bit family DOS from Optimized Systems Software
DragonDOS, for the Dragon 32/64
GEMDOS, one of the components of the Atari TOS
IS-DOS, for Russian ZX Spectrum clones, developed in 1990 or 1991
MasterDOS, replacement DOS for the SAM Coupé
MDOS, Myarc Disk Operating System for the Geneve 9640
MSX-DOS, a cross between MS-DOS 1.0 and CP/M developed by Microsoft for the MSX computer standard
MyDOS, third-party Atari 8-bit family DOS
NewDos/80, Apparat's feature-rich alternative to TRSDOS for the TRS-80
Oric DOS, for the Oric-1 home computer
PTDOS, for the 1970s Sol-20 from Processor Technology
SAMDOS, original DOS for the SAM Coupé
Sinclair QDOS, for the Sinclair QL
RDOS, a real-time operating system released in 1972 for the Data General Nova and Eclipse minicomputers
SK*DOS, for Motorola 68000-based systems
SmartDOS, third-party Atari 8-bit family DOS
SpartaDOS, third-party Atari 8-bit family DOS
TOP-DOS, third-party Atari 8-bit family DOS
TR-DOS, for the ZX Spectrum
TRSDOS, for the Tandy TRS-80 line of 8-bit Zilog Z80 microcomputers
Xtal DOS, for the Tatung Einstein
See also DOS (disambiguation) DOS 1 (disambiguation) DOS 2 (disambiguation) DOS 3 (disambiguation) DOS 4 (disambiguation) DOS 5 (disambiguation) DOS 6 (disambiguation) DOS 7 (disambiguation) DOS 8 (disambiguation) DOS 10 (disambiguation) DOS 20 (disambiguation) DOS 30 References
171148
https://en.wikipedia.org/wiki/Jomo%20Kenyatta
Jomo Kenyatta
Jomo Kenyatta (c. 1897 – 22 August 1978) was a Kenyan anti-colonial activist and politician who governed Kenya as its Prime Minister from 1963 to 1964 and then as its first President from 1964 to his death in 1978. He was the country's first indigenous head of government and played a significant role in the transformation of Kenya from a colony of the British Empire into an independent republic. Ideologically an African nationalist and conservative, he led the Kenya African National Union (KANU) party from 1961 until his death. Kenyatta was born to Kikuyu farmers in Kiambu, British East Africa. Educated at a mission school, he worked in various jobs before becoming politically engaged through the Kikuyu Central Association. In 1929, he travelled to London to lobby for Kikuyu land affairs. During the 1930s, he studied at Moscow's Communist University of the Toilers of the East, University College London, and the London School of Economics. In 1938, he published an anthropological study of Kikuyu life before working as a farm labourer in Sussex during the Second World War. Influenced by his friend George Padmore, he embraced anti-colonialist and Pan-African ideas, co-organising the 1945 Pan-African Congress in Manchester. He returned to Kenya in 1946 and became a school principal. In 1947, he was elected President of the Kenya African Union, through which he lobbied for independence from British colonial rule, attracting widespread indigenous support but animosity from white settlers. In 1952, he was among the Kapenguria Six arrested and charged with masterminding the anti-colonial Mau Mau Uprising. Although protesting his innocence—a view shared by later historians—he was convicted. He remained imprisoned at Lokitaung until 1959 and was then exiled to Lodwar until 1961. On his release, Kenyatta became President of KANU and led the party to victory in the 1963 general election. As Prime Minister, he oversaw the transition of the Kenya Colony into an independent republic, of which he became president in 1964. Desiring a one-party state, he transferred regional powers to his central government, suppressed political dissent, and prohibited KANU's only rival—Oginga Odinga's leftist Kenya People's Union—from competing in elections. He promoted reconciliation between the country's indigenous ethnic groups and its European minority, although his relations with the Kenyan Indians were strained and Kenya's army clashed with Somali separatists in the North Eastern Province during the Shifta War. His government pursued capitalist economic policies and the "Africanisation" of the economy, prohibiting non-citizens from controlling key industries. Education and healthcare were expanded, while UK-funded land redistribution favoured KANU loyalists and exacerbated ethnic tensions. Under Kenyatta, Kenya joined the Organisation of African Unity and the Commonwealth of Nations, espousing a pro-Western and anti-communist foreign policy amid the Cold War. Kenyatta died in office and was succeeded by Daniel arap Moi. Kenyatta was a controversial figure. Prior to Kenyan independence, many of its white settlers regarded him as an agitator and malcontent, although across Africa he gained widespread respect as an anti-colonialist. During his presidency, he was given the honorary title of Mzee and lauded as the Father of the Nation, securing support from both the black majority and the white minority with his message of reconciliation.
Conversely, his rule was criticised as dictatorial, authoritarian, and neo-colonial, for favouring Kikuyu over other ethnic groups, and for facilitating the growth of widespread corruption. Early life Childhood A member of the Kikuyu people, Kenyatta was born with the name Kamau in the village of Nginda. Birth records were not then kept among the Kikuyu, and Kenyatta's date of birth is not known. One biographer, Jules Archer, suggested he was likely born in 1890, although a fuller analysis by Jeremy Murray-Brown suggested a birth circa 1897 or 1898. Kenyatta's father was named Muigai, and his mother Wambui. They lived in a homestead near the River Thiririka, where they raised crops and bred sheep and goats. Muigai was sufficiently wealthy that he could afford to keep several wives, each living in a separate nyūmba (woman's hut). Kenyatta was raised according to traditional Kikuyu custom and belief, and was taught the skills needed to herd the family flock. When he was ten, his earlobes were pierced to mark his transition from childhood. Wambui subsequently bore another son, Kongo, shortly before Muigai died. In keeping with Kikuyu tradition, Wambui then married her late husband's younger brother, Ngengi. Kenyatta then took the name of Kamau wa Ngengi ("Kamau, son of Ngengi"). Wambui bore her new husband a son, whom they also named Muigai. Ngengi was harsh and resentful toward the three boys, and Wambui decided to take her youngest son to live with her parental family further north. It was there that she died, and Kenyatta—who was very fond of the younger Muigai—travelled to collect his infant half-brother. Kenyatta then moved in with his grandfather, Kongo wa Magana, and assisted the latter in his role as a traditional healer. In November 1909, Kenyatta left home and enrolled as a pupil at the Church of Scotland Mission (CSM) at Thogoto. The missionaries were zealous Christians who believed that bringing Christianity to the indigenous peoples of Eastern Africa was part of Britain's civilizing mission. While there, Kenyatta stayed at the small boarding school, where he learnt stories from the Bible, and was taught to read and write in English. He also performed chores for the mission, including washing the dishes and weeding the gardens. He was soon joined at the mission dormitory by his brother Kongo. The longer the pupils stayed, the more they came to resent the patronising way many of the British missionaries treated them. Kenyatta's academic progress was unremarkable, and in July 1912 he became an apprentice to the mission's carpenter. That year, he professed his dedication to Christianity and began undergoing catechism. In 1913, he underwent the Kikuyu circumcision ritual; the missionaries generally disapproved of this custom, but it was an important aspect of Kikuyu tradition, allowing Kenyatta to be recognized as an adult. Asked to take a Christian name for his upcoming baptism, he first chose both John and Peter after Jesus' apostles. Forced by the missionaries to choose just one, he chose Johnstone, the -stone chosen as a reference to Peter. Accordingly, he was baptized as Johnstone Kamau in August 1914. After his baptism, Kenyatta moved out of the mission dormitory and lived with friends. Having completed his apprenticeship to the carpenter, Kenyatta requested that the mission allow him to be an apprentice stonemason, but they refused. He then requested that the mission recommend him for employment, but the head missionary refused because of an allegation of minor dishonesty.
Nairobi: 1914–1922 Kenyatta moved to Thika, where he worked for an engineering firm run by the Briton John Cook. In this position, he was tasked with fetching the company wages from a bank in Nairobi, some distance away. Kenyatta left the job when he became seriously ill; he recuperated at a friend's house in the Tumutumu Presbyterian mission. At the time, the British Empire was engaged in the First World War, and the British Army had recruited many Kikuyu. One of those who joined was Kongo, who disappeared during the conflict; his family never learned of his fate. Kenyatta did not join the armed forces, and like other Kikuyu he moved to live among the Maasai, who had refused to fight for the British. Kenyatta lived with the family of an aunt who had married a Maasai chief, adopting Maasai customs and wearing Maasai jewellery, including a beaded belt known as kinyata in the Kikuyu language. At some point, he took to calling himself "Kinyata" or "Kenyatta" after this garment. In 1917, Kenyatta moved to Narok, where he was involved in transporting livestock to Nairobi, before relocating to Nairobi to work in a store selling farming and engineering equipment. In the evenings, he took classes in a church mission school. Several months later he returned to Thika before obtaining employment building houses for the Thogoto Mission. He also lived for a time in Dagoretti, where he became a retainer for a local sub-chief, Kioi; in 1919 he assisted Kioi in putting the latter's case in a land dispute before a Nairobi court. Desiring a wife, Kenyatta entered a relationship with Grace Wahu, who had attended the CMS School in Kabete; she initially moved into Kenyatta's family homestead, although she joined Kenyatta in Dagoretti when Ngengi drove her out. On 20 November 1920 she gave birth to Kenyatta's son, Peter Muigai. In October 1920, Kenyatta was called before the Thogoto Kirk Session and suspended from taking Holy Communion; the suspension was in response to his drinking and his relations with Wahu out of wedlock. The church insisted that a traditional Kikuyu wedding would be inadequate, and that he must undergo a Christian marriage; this took place on 8 November 1922. Kenyatta had initially refused to cease drinking, but in July 1923 officially renounced alcohol and was allowed to return to Holy Communion. In April 1922, Kenyatta began working as a stores clerk and meter reader for Cook, who had been appointed water superintendent for Nairobi's municipal council. He earned 250 shillings a month, a particularly high wage for a native African, which brought him financial independence and a growing sense of self-confidence. Kenyatta lived in the Kilimani neighbourhood of Nairobi, although he financed the construction of a second home at Dagoretti; he referred to this latter hut as the Kinyata Stores, as he used it to hold general provisions for the neighbourhood. He had sufficient funds that he could lend money to European clerks in the offices, and could enjoy the lifestyle offered by Nairobi, which included cinemas, football matches, and imported British fashions. Kikuyu Central Association: 1922–1929 Anti-imperialist sentiment was on the rise among both native and Indian communities in Kenya following the Irish War of Independence and the Russian October Revolution. Many indigenous Africans resented having to carry kipande identity certificates at all times, being forbidden from growing coffee, and paying taxes without political representation.
Political upheavals occurred in Kikuyuland—the area inhabited largely by the Kikuyu—following World War I, among them the campaigns of Harry Thuku and the East African Association, resulting in the government massacre of 21 native protesters in March 1922. Kenyatta had not taken part in these events, perhaps so as not to disrupt his lucrative employment prospects. Kenyatta's interest in politics stemmed from his friendship with James Beauttah, a senior figure in the Kikuyu Central Association (KCA). Beauttah took Kenyatta to a political meeting in Pumwani, although this led to no firm involvement at the time. In either 1925 or early 1926, Beauttah moved to Uganda, but remained in contact with Kenyatta. When the KCA wrote to Beauttah and asked him to travel to London as their representative, he declined, but recommended that Kenyatta—who had a good command of English—go in his place. Kenyatta accepted, probably on the condition that the Association matched his pre-existing wage. He thus became the group's secretary. It is likely that the KCA purchased a motorbike for Kenyatta, which he used to travel around Kikuyuland and neighbouring areas inhabited by the Meru and Embu, helping to establish new KCA branches. In February 1928, he was part of a KCA party that visited Government House in Nairobi to give evidence in front of the Hilton Young Commission, which was then considering a federation between Kenya, Uganda, and Tanganyika. In June, he was part of a KCA team which appeared before a select committee of the Kenyan Legislative Council to express concerns about the recent introduction of Land Boards. Introduced by the British Governor of Kenya, Edward Grigg, these Land Boards would hold all land in native reserves in trust for each tribal group. Both the KCA and the Kikuyu Association opposed these Land Boards, which treated Kikuyu land as collectively-owned rather than recognising individual Kikuyu land ownership. Also in February, his daughter, Wambui Margaret, was born. By this point he was increasingly using the name "Kenyatta", which had a more African appearance than "Johnstone". In May 1928, the KCA launched a Kikuyu-language magazine, Muĩgwithania (roughly translated as "The Reconciler" or "The Unifier"), in which it published news, articles, and homilies. Its purpose was to help unify the Kikuyu and raise funds for the KCA. Kenyatta was listed as the publication's editor, although Murray-Brown suggested that he was not the guiding hand behind it and that his duties were largely confined to translating into Kikuyu. Aware that Thuku had been exiled for his activism, Kenyatta took a cautious approach to campaigning, and in Muĩgwithania he expressed support for the churches, district commissioners, and chiefs. He also praised the British Empire, stating that: "The first thing [about the Empire] is that all people are governed justly, big or small—equally. The second thing is that nobody is regarded as a slave, everyone is free to do what he or she likes without being hindered." This did not prevent Grigg from writing to the authorities in London requesting permission to shut the magazine down. Overseas London: 1929–1931 After the KCA raised sufficient funds, in February 1929 Kenyatta sailed from Mombasa to Britain. Grigg's administration could not stop Kenyatta's journey but asked London's Colonial Office not to meet with him. He initially stayed at the West African Students' Union premises in West London, where he met Ladipo Solanke.
He then lodged with a prostitute; both this and Kenyatta's lavish spending brought concern from the Church Mission Society. His landlord subsequently impounded his belongings due to unpaid debt. In the city, Kenyatta met with W. McGregor Ross at the Royal Empire Society, Ross briefing him on how to deal with the Colonial Office. Kenyatta became friends with Ross' family, and accompanied them to social events in Hampstead. He also contacted anti-imperialists active in Britain, including the League Against Imperialism, Fenner Brockway, and Kingsley Martin. Grigg was in London at the same time and, despite his opposition to Kenyatta's visit, agreed to meet with him at the Rhodes Trust headquarters in April. At the meeting, Kenyatta raised the land issue and Thuku's exile, the atmosphere between the two being friendly. In spite of this, following the meeting, Grigg convinced Special Branch to monitor Kenyatta. Kenyatta developed contacts with radicals to the left of the Labour Party, including several communists. In the summer of 1929, he left London and travelled via Berlin to Moscow before returning to London in October. Kenyatta was strongly influenced by his time in the Soviet Union. Back in England, he wrote three articles on the Kenyan situation for the Communist Party of Great Britain's newspapers, the Daily Worker and Sunday Worker. In these, his criticism of British imperialism was far stronger than it had been in Muĩgwithania. These communist links concerned many of Kenyatta's liberal patrons. In January 1930, Kenyatta met with Drummond Shiels, the Under-Secretary of State for the Colonies, at the House of Commons. Kenyatta told Shiels that he was not affiliated with communist circles and was unaware of the nature of the newspaper which published his articles. Shiels advised Kenyatta to return home to promote Kikuyu involvement in the constitutional process and discourage violence and extremism. After eighteen months in Europe, Kenyatta had run out of money. The Anti-Slavery Society advanced him funds to pay off his debts and return to Kenya. Although Kenyatta enjoyed life in London and feared arrest if he returned home, he sailed back to Mombasa in September 1930. On his return, his prestige among the Kikuyu was high because of his time spent in Europe. In his absence, female genital mutilation (FGM) had become a topic of strong debate in Kikuyu society. The Protestant churches, backed by European medics and the colonial authorities, supported the abolition of this traditional practice, but the KCA rallied to its defence, claiming that its abolition would damage the structure of Kikuyu society. Anger between the two sides had heightened, with several churches expelling KCA members from their congregations, and it was widely believed that the January 1930 killing of an American missionary, Hulda Stumpf, had been due to the issue. As Secretary of the KCA, Kenyatta met with church representatives. He expressed the view that although personally opposing FGM, he regarded its legal abolition as counter-productive, and argued that the churches should focus on eradicating the practice through educating people about its harmful effects on women's health. The meeting ended without compromise, and John Arthur—the head of the Church of Scotland in Kenya—later expelled Kenyatta from the church, citing what he deemed dishonesty during the debate. In 1931, Kenyatta took his son out of the church school at Thogoto and enrolled him in a KCA-approved, independent school.
Return to Europe: 1931–1933
In May 1931, Kenyatta and Parmenas Mockerie sailed for Britain, intent on representing the KCA at a Joint Committee of Parliament on the future of East Africa. Kenyatta would not return to Kenya for fifteen years. In Britain, he spent the summer attending an Independent Labour Party summer school and Fabian Society gatherings. In June, he visited Geneva, Switzerland, to attend a Save the Children conference on African children. In November, he met the Indian independence leader Mohandas Gandhi while in London. That month, he enrolled in the Woodbrooke Quaker College in Birmingham, where he remained until the spring of 1932, attaining a certificate in English writing. In Britain, Kenyatta befriended an Afro-Caribbean Marxist, George Padmore, who was working for the Soviet-run Comintern. Over time, he became Padmore's protégé. In late 1932, he joined Padmore in Germany. Before the end of the year, the duo relocated to Moscow, where Kenyatta studied at the Communist University of the Toilers of the East. There he was taught arithmetic, geography, natural science, and political economy, as well as Marxist-Leninist doctrine and the history of the Marxist-Leninist movement. Many Africans and members of the African diaspora were attracted to the institution because it offered free education and the opportunity to study in an environment where they were treated with dignity, free from the institutionalised racism present in the United States and the British Empire. Kenyatta complained about the food, accommodation, and poor quality of English instruction. There is no evidence that he joined the Communist Party of the Soviet Union, and one of his fellow students later characterised him as "the biggest reactionary I have ever met". Kenyatta also visited Siberia, probably as part of an official guided tour. The emergence of Germany's Nazi government shifted political allegiances in Europe; the Soviet Union pursued formal alliances with France and Czechoslovakia, and thus reduced its support for the movement against British and French colonial rule in Africa. As a result, the Comintern disbanded the International Trade Union Committee of Negro Workers, with which both Padmore and Kenyatta were affiliated. Padmore resigned from the Soviet Communist Party in protest, and was subsequently vilified in the Soviet press. Both Padmore and Kenyatta left the Soviet Union, the latter returning to London in August 1933. The British authorities were highly suspicious of Kenyatta's time in the Soviet Union, suspecting that he was a Marxist-Leninist, and following his return the MI5 intelligence service intercepted and read all his mail. Kenyatta continued writing articles, reflecting Padmore's influence. Between 1931 and 1937 he wrote several articles for the Negro Worker and joined the newspaper's editorial board in 1933. He also produced an article for a November 1933 issue of Labour Monthly, and in May 1934 had a letter published in The Manchester Guardian. He also wrote the entry on Kenya for Negro, an anthology edited by Nancy Cunard and published in 1934. In these, he took a more radical position than he had in the past, calling for complete self-rule in Kenya. In doing so he was virtually alone among Kenyan political figures; figures like Thuku and Jesse Kariuki were far more moderate in their demands. The pro-independence sentiments that he was able to express in Britain would not have been permitted in Kenya itself.
University College London and the London School of Economics: 1933–1939
Between 1935 and 1937, Kenyatta worked as a linguistic informant for the Phonetics Department at University College London (UCL); his Kikuyu voice recordings assisted Lilias Armstrong's production of The Phonetic and Tonal Structure of Kikuyu. The book was published under Armstrong's name, although Kenyatta claimed he should have been listed as co-author. He enrolled at UCL as a student, taking an English course between January and July 1935 and then a phonetics course from October 1935 to June 1936. Enabled by a grant from the International African Institute, he also took a social anthropology course under Bronisław Malinowski at the London School of Economics (LSE). Kenyatta lacked the qualifications normally required to join the course, but Malinowski was keen to support the participation of indigenous peoples in anthropological research. For Kenyatta, acquiring an advanced degree would bolster his status among Kenyans and display his intellectual equality with white Europeans in Kenya. Over the course of his studies, Kenyatta and Malinowski became close friends. Fellow course-mates included the anthropologists Audrey Richards, Lucy Mair, and Elspeth Huxley. Another of his fellow LSE students was Prince Peter of Greece and Denmark, who invited Kenyatta to stay with him and his mother, Princess Marie Bonaparte, in Paris during the spring of 1936. Kenyatta returned to his former dwellings at 95 Cambridge Street, but did not pay his landlady for over a year, owing over £100 in rent. This angered Ross and contributed to the breakdown of their friendship. He then rented a Camden Town flat with his friend Dinah Stock, whom he had met at an anti-imperialist rally in Trafalgar Square. Kenyatta socialised at the Student Movement House in Russell Square, which he had joined in the spring of 1934, and befriended Africans in the city. To earn money, he worked as one of 250 black extras in the film Sanders of the River, filmed at Shepperton Studios in the autumn of 1934. Several other Africans in London criticised him for doing so, arguing that the film degraded black people. Appearing in the film also allowed him to meet and befriend its star, the African-American Paul Robeson. In 1935, Italy invaded Ethiopia (Abyssinia), incensing Kenyatta and other Africans in London; he became the honorary secretary of the International African Friends of Abyssinia, a group established by Padmore and C. L. R. James. When Ethiopia's monarch Haile Selassie fled to London in exile, Kenyatta personally welcomed him at Waterloo station. This group developed into a wider pan-Africanist organisation, the International African Service Bureau (IASB), of which Kenyatta became one of the vice-chairs. Kenyatta began giving anti-colonial lectures across Britain for groups like the IASB, the Workers' Educational Association, the Indian National Congress of Great Britain, and the League of Coloured Peoples. In October 1938, he gave a talk to the Manchester Fabian Society in which he described British colonial policy as fascism and compared the treatment of indigenous people in East Africa to the treatment of Jews in Nazi Germany. In response to these activities, the British Colonial Office reopened their file on him, although it could not find any evidence that he was engaged in anything sufficiently seditious to warrant prosecution. Kenyatta assembled the essays on Kikuyu society written for Malinowski's class and published them as Facing Mount Kenya in 1938.
Featuring an introduction written by Malinowski, the book reflected Kenyatta's desire to use anthropology as a weapon against colonialism. In it, Kenyatta challenged the Eurocentric view of history, presenting an image of a golden African past and emphasising the perceived order, virtue, and self-sufficiency of Kikuyu society. Utilising a functionalist framework, he promoted the idea that traditional Kikuyu society had a cohesion and integrity that was better than anything offered by European colonialism. In this book, Kenyatta made clear his belief that the rights of the individual should be downgraded in favour of the interests of the group. The book also reflected his changing views on female genital mutilation; where he had once opposed it, he now unequivocally supported the practice, downplaying the medical dangers that it posed to women. The book's jacket featured an image of Kenyatta in traditional dress, wearing a skin cloak over one shoulder and carrying a spear. The book was published under the name "Jomo Kenyatta", the first time he had used it; the term Jomo was close to a Kikuyu word describing the removal of a sword from its scabbard. Facing Mount Kenya was a commercial failure, selling only 517 copies, but was generally well received; an exception was among white Kenyans, whose assumption that the Kikuyu were primitive savages in need of European civilisation it challenged. Murray-Brown later described it as "a propaganda tour de force. No other African had made such an uncompromising stand for tribal integrity." Bodil Folke Frederiksen, a scholar of development studies, referred to it as "probably the most well-known and influential African scholarly work of its time", while for fellow scholar Simon Gikandi, it was "one of the major texts in what has come to be known as the invention of tradition in colonial Africa".
World War II: 1939–1945
After the United Kingdom entered World War II in September 1939, Kenyatta and Stock moved to the Sussex village of Storrington. Kenyatta remained there for the duration of the war, renting a flat and a small plot of land to grow vegetables and raise chickens. He settled into rural Sussex life, and became a regular at the village pub, where he gained the nickname "Jumbo". In August 1940, he took a job at a local farm as an agricultural worker—allowing him to evade military conscription—before working in the tomato greenhouses at Lindfield. He attempted to join the local Home Guard, but was turned down. On 11 May 1942 he married an English woman, Edna Grace Clarke, at Chanctonbury Registry Office. In August 1943, their son, Peter Magana, was born. Intelligence services continued monitoring Kenyatta, noting that he was politically inactive between 1939 and 1944. In Sussex, he wrote an essay for the United Society for Christian Literature, My People of Kikuyu and the Life of Chief Wangombe, in which he called for his tribe's political independence. He also began—although never finished—a novel partly based on his life experiences. He continued to give lectures around the country, including to groups of East African soldiers stationed in Britain. He became frustrated by the distance between him and Kenya, telling Edna that he felt "like a general separated by 5000 miles from his troops". In his absence, the Kenyan authorities banned the KCA in 1940. Kenyatta and other senior IASB members began planning the fifth Pan-African Congress, held in Manchester in October 1945.
They were assisted by Kwame Nkrumah, from the Gold Coast (present-day Ghana), who had arrived in Britain earlier that year. Kenyatta spoke at the conference, although he made no particular impact on the proceedings. Much of the debate that took place centred on whether indigenous Africans should continue pursuing a gradual campaign for independence or whether they should seek the military overthrow of the European imperialists. The conference ended with a statement declaring that while delegates desired a peaceful transition to African self-rule, Africans "as a last resort, may have to appeal to force in the effort to achieve Freedom". Kenyatta supported this resolution, although was more cautious than other delegates and made no open commitment to violence. He subsequently authored an IASB pamphlet, Kenya: The Land of Conflict, in which he blended political calls for independence with romanticised descriptions of an idealised pre-colonial African past.
Return to Kenya
Presidency of the Kenya African Union: 1946–1952
After British victory in World War II, Kenyatta received a request to return to Kenya, and sailed back in September 1946. He decided not to bring Edna—who was pregnant with a second child—with him, aware that if they joined him in Kenya their lives would be made very difficult by the colony's racial laws. On his arrival in Mombasa, Kenyatta was greeted by his first wife, Grace Wahu, and their children. He built a bungalow at Gatundu, near where he was born, and began farming his 32-acre estate. Kenyatta met with the new Governor of Kenya, Philip Euen Mitchell, and in March 1947 accepted a post on an African Land Settlement Board, holding the post for two years. He also met with Mbiyu Koinange to discuss the future of the Koinange Independent Teachers' College in Githungui, Koinange appointing Kenyatta as its Vice-Principal. In May 1947, Koinange moved to England, leaving Kenyatta to take full control of the college. Under Kenyatta's leadership, additional funds were raised for the construction of school buildings and the number of boys in attendance rose from 250 to 900. The college was nevertheless beset with problems, including a decline in standards and teachers' strikes over non-payment of wages. Gradually, the number of enrolled pupils fell. Kenyatta built a friendship with Koinange's father, a Senior Chief, who gave Kenyatta one of his daughters to take as his third wife. She bore him another child, but later died in childbirth. In 1951, he married his fourth wife, Ngina, who was one of the few female students at his college; she then gave birth to a daughter. In August 1944, the Kenya African Union (KAU) had been founded; at that time it was the only active political outlet for indigenous Africans in the colony. At its June 1947 annual general meeting, KAU's President James Gichuru stepped down and Kenyatta was elected as his replacement. Kenyatta began to draw large crowds wherever he travelled in Kikuyuland, and the Kikuyu press began describing him as the "Saviour", "Great Elder", and "Hero of Our Race". He was nevertheless aware that to achieve independence, KAU needed the support of other indigenous tribes and ethnic groups. This was made difficult by the fact that many Maasai and Luo—tribes traditionally hostile to the Kikuyu—regarded him as an advocate of Kikuyu dominance. He insisted on intertribal representation on the KAU executive and ensured that party business was conducted in Swahili, the lingua franca of indigenous Kenyans.
To attract support from Kenya's Indian community, he made contact with Jawaharlal Nehru, the first Prime Minister of newly independent India. Nehru's response was supportive, sending a message to Kenya's Indian minority reminding them that they were the guests of the indigenous African population. Relations with the white minority remained strained; for most white Kenyans, Kenyatta was their principal enemy, an agitator with links to the Soviet Union who had the impertinence to marry a white woman. They too increasingly called for further Kenyan autonomy from the British government, but wanted continued white-minority rule and closer links to the white-minority governments of South Africa, Northern Rhodesia, and Southern Rhodesia; they viewed Britain's newly elected Labour government with great suspicion. The white Electors' Union put forward a "Kenya Plan" which proposed greater white settlement in Kenya, bringing Tanganyika into the British Empire, and incorporating it within their new British East African Dominion. In April 1950, Kenyatta was present at a joint meeting of KAU and the East African Indian National Congress in which they both expressed opposition to the Kenya Plan. By 1952, Kenyatta was widely recognised as a national leader, both by his supporters and by his opponents. As KAU leader, he was at pains to oppose all illegal activity, including workers' strikes. He called on his supporters to work hard, and to abandon laziness, theft, and crime. He also insisted that in an independent Kenya, all racial groups would be safeguarded. Kenyatta's gradualist and peaceful approach contrasted with the growth of the Mau Mau Uprising, as armed guerrilla groups began targeting the white minority and members of the Kikuyu community who did not support them. By 1959, the Mau Mau had killed around 1,880 people. For many young Mau Mau militants, Kenyatta was regarded as a hero, and they included his name in the oaths they gave to the organisation; such oathing was a Kikuyu custom by which individuals pledged allegiance to another. Kenyatta publicly distanced himself from the Mau Mau. In April 1952, he began a speaking tour in which he denounced the Mau Mau to assembled crowds, insisting that independence must be achieved through peaceful means. In August he attended a much-publicised mass meeting in Kiambu where—in front of 30,000 people—he said that "Mau Mau has spoiled the country. Let Mau Mau perish forever. All people should search for Mau Mau and kill it." Despite Kenyatta's vocal opposition to the Mau Mau, KAU had moved towards a position of greater militancy. At its 1951 AGM, more militant African nationalists had taken senior positions and the party officially announced its call for Kenyan independence within three years. In January 1952, KAU members formed a secret Central Committee devoted to direct action, organised along a cell structure. Whatever Kenyatta's views on these developments, he had little ability to control them. He was increasingly frustrated, and—without the intellectual companionship he had experienced in Britain—felt lonely.
Trial: 1952–1953
In October 1952, Kenyatta was arrested and driven to Nairobi, where he was taken aboard a plane and flown to Lokitaung in northwest Kenya, one of the most remote locations in the country. From there he wrote to his family to let them know of his situation. Kenya's authorities believed that detaining Kenyatta would help quell civil unrest.
Many white settlers wanted him exiled, but the government feared this would turn him into a martyr for the anti-colonialist cause. They thought it better that he be convicted and imprisoned, although at the time they had nothing to charge him with, and so began searching his personal files for evidence of criminal activity. Eventually, they charged him and five senior KAU members with masterminding the Mau Mau, a proscribed group. The historian John M. Lonsdale stated that Kenyatta had been made a "scapegoat", while the historian A. B. Assensoh later suggested that the authorities "knew very well" that Kenyatta was not involved in the Mau Mau, but that they were nevertheless committed to silencing his calls for independence. The trial took place in Kapenguria, a remote area near the Ugandan border that the authorities hoped would not attract crowds or attention. Together, Kenyatta, Bildad Kaggia, Fred Kubai, Paul Ngei, Achieng Oneko and Kung'u Karumba—the "Kapenguria Six"—were put on trial. The defendants assembled an international and multiracial team of defence lawyers, including Chaman Lall, H. O. Davies, F. R. S. De Souza, and Dudley Thompson, led by the British barrister and Member of Parliament Denis Nowell Pritt. Pritt's involvement brought much media attention; during the trial he faced government harassment and was sent death threats. The judge selected, Ransley Thacker, had recently retired from the Supreme Court of Kenya; the government knew he would be sympathetic to their case and gave him £20,000 to oversee it. The trial lasted five months: Rawson Macharia, the main prosecution witness, turned out to have perjured himself; the judge had only recently been awarded an unusually large pension and maintained secret contact with the then colonial Governor, Evelyn Baring. The prosecution failed to produce any strong evidence that Kenyatta or the other accused had any involvement in managing the Mau Mau. In April 1953, Judge Thacker found the defendants guilty. He sentenced them to seven years' hard labour, to be followed by indefinite restriction preventing them from leaving a given area without permission. In addressing the court, Kenyatta stated that he and the others did not recognise the judge's findings; they claimed that the government had used them as scapegoats as a pretext to shut down KAU. The historian Wunyabari O. Maloba later characterised it as "a rigged political trial with a predetermined outcome". The government followed the verdict with a wider crackdown, banning KAU in June 1953, and closing down most of the independent schools in the country, including Kenyatta's. It appropriated his land at Gatundu and demolished his house. Kenyatta and the others were returned to Lokitaung, where they resided on remand while awaiting the results of the appeal process. Pritt pointed out that Thacker had been appointed magistrate for the wrong district, a technicality voiding the whole trial; the Supreme Court of Kenya concurred and Kenyatta and the others were freed in July 1953, only to be immediately re-arrested. The government took the case to the East African Court of Appeal, which reversed the Supreme Court's decision in August. The appeals process resumed in October 1953, and in January 1954 the Supreme Court upheld the convictions against all but Oneko. Pritt finally took the case to the Privy Council in London, which refused his petition without providing an explanation.
He later noted that this was despite the fact that his case was one of the strongest he had ever presented during his career. According to Murray-Brown, it is likely that political, rather than legal, considerations informed the decision to reject the case.
Imprisonment: 1954–1961
During the appeal process, a prison had been built at Lokitaung, where Kenyatta and the four others were then interned. The others were made to break rocks in the hot sun but Kenyatta, because of his age, was instead appointed their cook, preparing a daily diet of beans and posho. In 1955, P. de Robeck became the District Officer, after which Kenyatta and the other inmates were treated more leniently. In April 1954, they had been joined by a captured Mau Mau commander, Waruhiu Itote; Kenyatta befriended him, and gave him English lessons. By 1957, the inmates had formed into two rival cliques, with Kenyatta and Itote on one side and the other KAU members—now calling themselves the "National Democratic Party"—on the other. In one incident, one of his rivals made an unsuccessful attempt to stab Kenyatta at breakfast. Kenyatta's health had deteriorated in prison; manacles had caused problems for his feet and he had eczema across his body. Kenyatta's imprisonment transformed him into a political martyr for many Kenyans, further enhancing his status. A Luo anti-colonial activist, Jaramogi Oginga Odinga, was the first to publicly call for Kenyatta's release, an issue that gained growing support among Kenya's anti-colonialists. In 1955, the British writer Montagu Slater—a socialist sympathetic to Kenyatta's plight—released The Trial of Jomo Kenyatta, a book which raised the profile of the case. In 1958, Rawson Macharia, the key witness in the state's prosecution of Kenyatta, signed an affidavit swearing that his evidence against Kenyatta had been false; this was widely publicised. By the late 1950s, the imprisoned Kenyatta had become a symbol of African nationalism across the continent. His sentence served, Kenyatta was released from Lokitaung in April 1959. The administration then placed a restriction order on him, forcing him to reside in the remote area of Lodwar, where he had to report to the district commissioner twice a day. There, he was joined by his wife Ngina. In October 1961 she bore him another son, Uhuru, and later a daughter, Nyokabi, and a further son, Muhoho. Kenyatta spent two years in Lodwar. The Governor of Kenya, Patrick Muir Renison, insisted that it was necessary; in a March 1961 speech, he described Kenyatta as an "African leader to darkness and death" and stated that if he were released, violence would erupt. This indefinite detention was widely interpreted internationally as a reflection of the cruelties of British imperialism. Calls for his release came from the Chinese government, India's Nehru, and Tanganyika's Prime Minister Julius Nyerere. Kwame Nkrumah—whom Kenyatta had known since the 1940s and who was now President of a newly independent Ghana—personally raised the issue with British Prime Minister Harold Macmillan and other UK officials, with the Ghanaian government offering Kenyatta asylum in the event of his release. Resolutions calling for his release were produced at the All-African Peoples' Conferences held in Tunis in 1960 and Cairo in 1961. Internal calls for his release came from Kenyan Asian activists in the Kenya Indian Congress, while a poll commissioned by the colonial government revealed that most of Kenya's indigenous Africans wanted this outcome.
By this point, it was widely accepted that Kenyan independence was inevitable, the British Empire having been dismantled throughout much of Asia and Macmillan having made his "Wind of Change" speech. In January 1960, the British government made apparent its intention to grant Kenya independence. It invited representatives of Kenya's anti-colonial movement to discuss the transition at London's Lancaster House. An agreement was reached that an election would be called for a new 65-seat Legislative Council, with 33 seats reserved for black Africans, 20 for other ethnic groups, and 12 as 'national members' elected by a pan-racial electorate. It was clear to all concerned that Kenyatta was going to be the key to the future of Kenyan politics. After the Lancaster House negotiations, the anti-colonial movement had split into two parties, the Kenya African National Union (KANU), which was dominated by Kikuyu and Luo, and the Kenya African Democratic Union (KADU), which was led largely by members of smaller ethnic groups like the Kalenjin and Maasai. In May 1960, KANU nominated Kenyatta as its president, although the government vetoed the appointment, insisting that he had been an instigator of the Mau Mau. KANU then declared that it would refuse to take part in any government unless Kenyatta was freed. KANU campaigned on the issue of Kenyatta's detention in the February 1961 election, where it gained a majority of votes. KANU nevertheless refused to form a government, which was instead created through a KADU-led coalition of smaller parties. Kenyatta had kept abreast of these developments, although he had refused to back either KANU or KADU, instead insisting on unity between the two parties.
Preparing for independence: 1961–1963
Renison decided to release Kenyatta before Kenya achieved independence. He thought public exposure to Kenyatta prior to elections would make the populace less likely to vote for a man Renison regarded as a violent extremist. In April 1961, the government flew Kenyatta to Maralal, where he maintained his innocence of the charges but told reporters that he bore no grudges. He reiterated that he had never supported violence or the illegal oathing system used by the Mau Mau, and denied having ever been a Marxist, stating: "I shall always remain an African Nationalist to the end". In August, he was moved to Gatundu in Kikuyuland, where he was greeted by a crowd of 10,000. There, the colonial government had built him a new house to replace the one it had demolished. Now a free man, he travelled to cities like Nairobi and Mombasa to make public appearances. After his release, Kenyatta set about trying to ensure that he was the only realistic option as Kenya's future leader. In August he met with Renison at Kiambu, and was interviewed by the BBC's Face to Face. In October 1961, Kenyatta formally joined KANU and accepted its presidency. In January 1962 he was elected unopposed as KANU's representative for the Fort Hall constituency in the legislative council after its sitting member, Kariuki Njiiri, resigned. Kenyatta travelled elsewhere in Africa, visiting Tanganyika in October 1961 and Ethiopia in November at the invitation of their governments. A key issue facing Kenya was a border dispute with Somalia over the North East Province. Ethnic Somalis inhabited this region and claimed it should be part of Somalia, not Kenya. Kenyatta disagreed, insisting the land remain Kenyan, and stated that Somalis in Kenya should "pack up [their] camels and go to Somalia".
In June 1962, Kenyatta travelled to Mogadishu to discuss the issue with the Somali authorities, but the two sides could not reach an agreement. Kenyatta sought to gain the confidence of the white settler community. In 1962, the white minority had produced 80% of the country's exports and were a vital part of its economy, yet between 1962 and 1963 they were emigrating at a rate of 700 a month; Kenyatta feared that this white exodus would cause a brain drain and skills shortage that would be detrimental to the economy. He was also aware that the confidence of the white minority would be crucial to securing Western investment in Kenya's economy. Kenyatta made it clear that when in power, he would not sack any white civil servants unless there were competent black individuals capable of replacing them. He was sufficiently successful that several prominent white Kenyans backed KANU in the subsequent election. In 1962 he returned to London to attend one of the Lancaster House conferences. There, KANU and KADU representatives met with British officials to formulate a new constitution. KADU desired a federalist state organised on a system they called Majimbo, with six largely autonomous regional authorities, a two-chamber legislature, and a central Federal Council of Ministers who would select a rotating chair to serve as head of government for a one-year term. Renison's administration and most white settlers favoured this system as it would prevent a strong central government implementing radical reform. KANU opposed Majimbo, believing that it served entrenched interests and denied equal opportunities across Kenya; they also insisted on an elected head of government. At Kenyatta's prompting, KANU conceded some of KADU's demands; he was aware that he could amend the constitution once in office. The new constitution divided Kenya into six regions, each with a regional assembly, but also featured a strong central government and both an upper and a lower house. It was agreed that a temporary coalition government would be established until independence, several KANU politicians being given ministerial posts. Kenyatta accepted a minor position, that of Minister of State for Constitutional Affairs and Economic Planning. The British government considered Renison too ill at ease with indigenous Africans to oversee the transition to independence and thus replaced him with Malcolm MacDonald as Governor of Kenya in January 1963. MacDonald and Kenyatta developed a strong friendship; MacDonald referred to him as "the wisest and perhaps strongest as well as most popular potential Prime Minister of the independent nation to be". MacDonald sped up plans for Kenyan independence, believing that the longer the wait, the greater the opportunity for radicalisation among African nationalists. An election was scheduled for May, with self-government in June, followed by full independence in December.
Leadership
Premiership: 1963–1964
The May 1963 general election pitted Kenyatta's KANU against KADU, the Akamba People's Party, and various independent candidates. KANU was victorious with 83 seats out of 124 in the House of Representatives; a KANU majority government replaced the pre-existing coalition. On 1 June 1963, Kenyatta was sworn in as prime minister of the autonomous Kenyan government. Kenya remained a monarchy, with Queen Elizabeth II as its head of state.
In November 1963, Kenyatta's government introduced a law making it a criminal offence to disrespect the Prime Minister, with exile as the punishment. Kenyatta's personality became a central aspect of the creation of the new state. In December, Nairobi's Delamere Avenue was renamed Kenyatta Avenue, and a bronze statue of him was erected beside the country's National Assembly. Photographs of Kenyatta were widely displayed in shop windows, and his face was also printed on the new currency. In 1964, Oxford University Press published a collection of Kenyatta's speeches under the title Harambee!. Kenya's first cabinet included not only Kikuyu but also members of the Luo, Kamba, Kisii, and Maragoli tribal groups. In June 1963, Kenyatta met with Julius Nyerere and the Ugandan Prime Minister Milton Obote in Nairobi. The trio discussed the possibility of merging their three nations (plus Zanzibar) into a single East African Federation, agreeing that this would be accomplished by the end of the year. Privately, Kenyatta had reservations about the arrangement, and by the start of 1964 the federation had not come to pass. Many radical voices in Kenya urged him to pursue the project; in May 1964, Kenyatta rejected a back-benchers' resolution calling for speedier federation. He publicly stated that talk of a federation had always been a ruse to hasten the pace of Kenyan independence from Britain, but Nyerere denied that this was true. Continuing to emphasise good relations with the white settlers, in August 1963 Kenyatta met with 300 white farmers at Nakuru. He reassured them that they would be safe and welcome in an independent Kenya, and more broadly talked of forgiving and forgetting the conflicts of the past. Despite his attempts at wooing white support, he did not do the same with the Indian minority. Like many indigenous Africans in Kenya, Kenyatta bore a sense of resentment towards this community, despite the role that many Indians had played in securing the country's independence. He also encouraged the remaining Mau Mau fighters to leave the forests and settle in society. Throughout Kenyatta's rule, many of these individuals remained out of work, unemployment being one of the most persistent problems facing his government. A celebration to mark independence was held in a specially constructed stadium on 12 December 1963. During the ceremony, Prince Philip, Duke of Edinburgh—representing the British monarchy—formally handed over control of the country to Kenyatta. Also in attendance were leading figures from the Mau Mau. In a speech, Kenyatta described it as "the greatest day in Kenya's history and the happiest day in my life." He had flown Edna and Peter over for the ceremony, and in Kenya they were welcomed into Kenyatta's family by his other wives. Disputes with Somalia over the Northern Frontier District (NFD) continued; for much of Kenyatta's rule, Somalia remained the major threat to his government. To deal with sporadic violence in the region by Somali shifta guerrillas, Kenyatta sent soldiers into the region in December 1963 and gave them broad powers of arrest and seizure in the NFD in September 1964. British troops were assigned to assist the Kenyan Army in the region. Kenyatta also faced domestic opposition: in January 1964, sections of the army launched a mutiny in Nairobi, and Kenyatta called on the British Army to put down the rebellion. Similar armed uprisings had taken place that month in neighbouring Uganda and Tanganyika. Kenyatta was outraged and shaken by the mutiny.
He publicly rebuked the mutineers, emphasising the need for law and order in Kenya. To prevent further military unrest, he brought in a review of the salaries of the army, police, and prison staff, leading to pay rises. Kenyatta also wanted to contain parliamentary opposition; at his prompting, KADU officially dissolved in November 1964 and its representatives joined KANU. Two of the senior members of KADU, Ronald Ngala and Daniel arap Moi, subsequently became some of Kenyatta's most loyal supporters. Kenya therefore became a de facto one-party state.
Presidency: 1964–1978
In December 1964, Kenya was officially proclaimed a republic. Kenyatta became its executive president, combining the roles of head of state and head of government. Over the course of 1965 and 1966, several constitutional amendments enhanced the president's power. For instance, a May 1966 amendment gave the president the ability to order the detention of individuals without trial if he thought the security of the state was threatened. Seeking the support of Kenya's second-largest ethnic group, the Luo, Kenyatta appointed a Luo, Oginga Odinga, as his vice president. The Kikuyu—who made up around 20 percent of the population—still held most of the country's important government and administrative positions. This contributed to a perception among many Kenyans that independence had simply seen the dominance of a British elite replaced by the dominance of a Kikuyu elite. Kenyatta's calls to forgive and forget the past were a keystone of his government. He preserved some elements of the old colonial order, particularly in relation to law and order. The police and military structures were left largely intact. White Kenyans were left in senior positions within the judiciary, civil service, and parliament, with the white Kenyans Bruce McKenzie and Humphrey Slade being among Kenyatta's top officials. Kenyatta's government nevertheless rejected the idea that the European and Asian minorities could be permitted dual citizenship, expecting these communities to offer total loyalty to the independent Kenyan state. His administration pressured whites-only social clubs to adopt multi-racial entry policies, and in 1964 schools formerly reserved for European pupils were opened to Africans and Asians. Kenyatta's government believed it necessary to cultivate a united Kenyan national culture. To this end, it made efforts to assert the dignity of indigenous African cultures which missionaries and colonial authorities had belittled as "primitive". An East African Literature Bureau was created to publish the work of indigenous writers. The Kenya Cultural Centre supported indigenous art and music, and hundreds of traditional music and dance groups were formed; Kenyatta personally insisted that such performances were held at all national celebrations. Support was given to the preservation of historic and cultural monuments, while streets named after colonial figures were renamed and symbols of colonialism—like the statue of British settler Hugh Cholmondeley, 3rd Baron Delamere in Nairobi city centre—were removed. The government encouraged the use of Swahili as a national language, although English remained the main medium for parliamentary debates and the language of instruction in schools and universities.
The historian Robert M. Maxon nevertheless suggested that "no national culture emerged during the Kenyatta era", most artistic and cultural expressions reflecting particular ethnic groups rather than a broader sense of Kenyanness, while Western culture remained heavily influential over the country's elites.
Economic policy
Independent Kenya had an economy heavily moulded by colonial rule; agriculture dominated while industry was limited, and there was a heavy reliance on exporting primary goods while importing capital and manufactured goods. Under Kenyatta, the structure of this economy did not fundamentally change, remaining externally oriented and dominated by multinational corporations and foreign capital. Kenyatta's economic policy was capitalist and entrepreneurial, with no serious socialist policies being pursued; its focus was on achieving economic growth as opposed to equitable redistribution. The government passed laws to encourage foreign investment, recognising that Kenya needed foreign-trained specialists in scientific and technical fields to aid its economic development. Under Kenyatta, Western companies regarded Kenya as a safe and profitable place for investment; between 1964 and 1970, large-scale foreign investment and industry in Kenya nearly doubled. In contrast to his economic policies, Kenyatta publicly claimed he would create a democratic socialist state with an equitable distribution of economic and social development. In 1965, when Thomas Mboya was minister for economic planning and development, the government issued a sessional paper titled "African Socialism and its Application to Planning in Kenya", in which it officially declared its commitment to what it called an "African socialist" economic model. The paper proposed a mixed economy with an important role for private capital, with Kenyatta's government specifying that it would consider nationalisation only where national security was at risk. Left-wing critics highlighted that the image of "African socialism" portrayed in the document provided for no major shift away from the colonial economy. Kenya's agricultural and industrial sectors were dominated by Europeans and its commerce and trade by Asians; one of Kenyatta's most pressing issues was to bring the economy under indigenous control. There was growing black resentment towards the Asian domination of the small business sector, with Kenyatta's government putting pressure on Asian-owned businesses, intending to replace them with African-owned counterparts. The 1965 sessional paper promised an "Africanization" of the Kenyan economy, with the government increasingly pushing for "black capitalism". The government established the Industrial and Commercial Development Corporation to provide loans for black-owned businesses, and secured a 51% share in the Kenya National Assurance Company. In 1965, the government established the Kenya National Trading Corporation to ensure indigenous control over the trade in essential commodities, while the Trade Licensing Act of 1967 prohibited non-citizens from involvement in the rice, sugar, and maize trade. During the 1970s, this expanded to cover the trade in soap, cement, and textiles. Many Asians who had retained British citizenship were affected by these measures. Between late 1967 and early 1968, growing numbers of Kenyan Asians migrated to Britain; in February 1968 large numbers migrated quickly before a legal change revoked their right to do so.
Kenyatta was not sympathetic to those leaving: "Kenya's identity as an African country is not going to be altered by the whims and malaises of groups of uncommitted individuals." Under Kenyatta, corruption became widespread throughout the government, civil service, and business community. Kenyatta and his family were tied up with this corruption as they enriched themselves through the mass purchase of property after 1963. Their acquisitions in the Central, Rift Valley, and Coast Provinces aroused great anger among landless Kenyans. His family used his presidential position to circumvent legal or administrative obstacles to acquiring property. The Kenyatta family also invested heavily in the coastal hotel business, Kenyatta personally owning the Leonard Beach Hotel. Other businesses they were involved with included ruby mining in Tsavo National Park, the casino business, the charcoal trade—which was causing significant deforestation—and the ivory trade. The Kenyan press, which was largely loyal to Kenyatta, did not delve into this issue; it was only after his death that publications appeared revealing the scale of his personal enrichment. Kenyan corruption and Kenyatta's role in it were better known in Britain, although many of his British friends—including MacDonald and Brockway—chose to believe Kenyatta was not personally involved.
Land, healthcare, and education reform
The question of land ownership had deep emotional resonance in Kenya, having been a major grievance against the British colonialists. As part of the Lancaster House negotiations, Britain's government agreed to provide Kenya with £27 million with which to buy out white farmers and redistribute their land among the indigenous population. To ease this transition, Kenyatta made Bruce McKenzie, a white farmer, the Minister of Agriculture and Land. Kenyatta's government encouraged the establishment of private land-buying companies that were often headed by prominent politicians. The government sold or leased lands in the former White Highlands to these companies, which in turn subdivided them among individual shareholders. In this way, the land redistribution programmes favoured the ruling party's chief constituency. Kenyatta himself expanded the land that he owned around Gatundu. Kenyans who made claims to land on the basis of ancestral ownership often found the land given to other people, including Kenyans from different parts of the country. Voices began to condemn the redistribution; in 1969, the MP Jean-Marie Seroney censured the sale of historically Nandi lands in the Rift to non-Nandi, describing the settlement schemes as "Kenyatta's colonization of the rift". Fuelled in part by high rural unemployment, rural-to-urban migration grew under Kenyatta's government. This exacerbated urban unemployment and housing shortages, with squatter settlements and slums growing up and urban crime rates rising. Kenyatta was concerned by this, and promoted the reversal of rural-to-urban migration, but without success. Kenyatta's government was eager to control the country's trade unions, fearing their ability to disrupt the economy. To this end it emphasised social welfare schemes over traditional industrial institutions, and in 1965 transformed the Kenya Federation of Labour into the Central Organization of Trade Unions (COTU), a body which came under strong government influence. No strikes could be legally carried out in Kenya without COTU's permission.
There were also measures to Africanise the civil service, which by mid-1967 had become 91% African. During the 1960s and 1970s the public sector grew faster than the private sector. The growth in the public sector contributed to the significant expansion of the indigenous middle class in Kenyatta's Kenya. The government oversaw a massive expansion in education facilities. In June 1963, Kenyatta ordered the Ominde Commission to determine a framework for meeting Kenya's educational needs. Its report set out the long-term goal of universal free primary education in Kenya but argued that the government's emphasis should be on secondary and higher education to facilitate the training of indigenous African personnel to take over the civil service and other jobs requiring such an education. Between 1964 and 1966, the number of primary schools grew by 11.6%, and the number of secondary schools by 80%. By the time of Kenyatta's death, Kenya's first universities—the University of Nairobi and Kenyatta University—had been established. Although Kenyatta died without having attained the goal of free, universal primary education in Kenya, the country had made significant advances in that direction, with 85% of Kenyan children in primary education, and within a decade of independence had trained sufficient numbers of indigenous Africans to take over the civil service. Another priority for Kenyatta's government was improving access to healthcare services. It stated that its long-term goal was to establish a system of free, universal medical care. In the short term, its emphasis was on increasing the overall number of doctors and registered nurses while decreasing the number of expatriates in those positions. In 1965, the government introduced free medical services for out-patients and children. By Kenyatta's death, the majority of Kenyans had access to significantly better healthcare than they had had in the colonial period. Before independence, the average life expectancy in Kenya was 45, but by the end of the 1970s it was 55, the second-highest in Sub-Saharan Africa. This improved medical care had resulted in declining mortality rates while birth rates remained high, resulting in a rapidly growing population; from 1962 to 1979, Kenya's population grew by just under 4% a year, the highest rate in the world at the time. This put a severe strain on social services; Kenyatta's government promoted family planning projects to stem the birth rate, but these had little success.
Foreign policy
In part due to his advanced years, Kenyatta rarely travelled outside Eastern Africa. Under Kenyatta, Kenya was largely uninvolved in the affairs of other states, including those in the East African Community. Despite his reservations about any immediate East African Federation, in June 1967 Kenyatta signed the Treaty for East African Co-operation, and in December he attended a meeting with Tanzanian and Ugandan representatives to form the East African Economic Community, reflecting his cautious approach toward regional integration. He also took on a mediating role during the Congo Crisis, heading the Organisation of African Unity's Conciliation Commission on the Congo. Facing the pressures of the Cold War, Kenyatta officially pursued a policy of "positive non-alignment". In reality, his foreign policy was pro-Western and in particular pro-British. Kenya became a member of the British Commonwealth, using this as a vehicle to put pressure on the white-minority apartheid regimes in South Africa and Rhodesia.
Britain remained one of Kenya's foremost sources of foreign trade; British aid to Kenya was among the highest in Africa. In 1964, Kenya and the UK signed a Memorandum of Understanding, one of only two military alliances Kenyatta's government made; the British Special Air Service trained Kenyatta's own bodyguards. Commentators argued that Britain's relationship with Kenyatta's Kenya was a neo-colonial one, with the British having exchanged their position of political power for one of influence. The historian Poppy Cullen nevertheless noted that there was no "dictatorial neo-colonial control" in Kenyatta's Kenya. Although many white Kenyans accepted Kenyatta's rule, he remained opposed by white far-right activists; while in London for the July 1964 Commonwealth Conference, he was assaulted by Martin Webster, a British neo-Nazi. Kenyatta's relationship with the United States was also warm; the United States Agency for International Development played a key role in responding to a maize shortage in Kambaland in 1965. Kenyatta also maintained a warm relationship with Israel, including when other East African nations endorsed Arab hostility to the state; for instance, he permitted Israeli jets to refuel in Kenya on their way back from the Entebbe raid. In turn, in 1976 the Israelis warned him of a plot by the Palestinian Liberation Army to assassinate him, a threat he took seriously. Kenyatta and his government were anti-communist, and in June 1965 he warned that "it is naive to think that there is no danger of imperialism from the East. In world power politics the East has as much designs upon us as the West and would like to serve their own interests. That is why we reject Communism." His governance was often criticised by communists and other leftists, some of whom accused him of being a fascist. When the Chinese Communist official Zhou Enlai visited Dar es Salaam, his statement that "Africa is ripe for revolution" was widely understood to be aimed largely at Kenya. In 1964, Kenyatta impounded a secret shipment of Chinese armaments that passed through Kenyan territory on its way to Uganda; Obote personally visited Kenyatta to apologise. In June 1967, Kenyatta declared the Chinese Chargé d'Affaires persona non grata in Kenya and recalled the Kenyan ambassador from Peking. Relations with the Soviet Union were also strained; Kenyatta shut down the Lumumba Institute—an educational organisation named after the Congolese independence leader Patrice Lumumba—on the basis that it was a front for Soviet influence in Kenya.
Dissent and the one-party state
Kenyatta made clear his desire for Kenya to become a one-party state, regarding this as a better expression of national unity than a multi-party system. In the first five years of independence, he consolidated control of the central government, removing the autonomy of Kenya's provinces to prevent the entrenchment of ethnic power bases. He argued that centralised control of the government was needed to deal with the growth in demands for local services and to assist quicker economic development. In 1966, his government launched a commission to examine reforms to local government operations, and in 1969 passed the Transfer of Functions Act, which terminated grants to local authorities and transferred major services from provincial to central control. A major focus for Kenyatta during the first three and a half years of Kenya's independence was the divisions within KANU itself.
Opposition to Kenyatta's government grew, particularly following the assassination of Pio Pinto in February 1965. Kenyatta condemned the assassination of the prominent leftist politician, although UK intelligence agencies believed that his own bodyguard had orchestrated the murder. Relations between Kenyatta and Odinga were strained, and at the March 1966 party conference, Odinga's post—that of party vice president—was divided among eight different politicians, greatly limiting his power and ending his position as Kenyatta's automatic successor. Between 1964 and 1966, Kenyatta and other KANU conservatives had been deliberately trying to push Odinga to resign from the party. Under growing pressure, in 1966 Odinga stepped down as state vice president, claiming that Kenya had failed to achieve economic independence and needed to adopt socialist policies. Backed by several other senior KANU figures and trade unionists, he became head of the new Kenya People's Union (KPU). In its manifesto, the KPU stated that it would pursue "truly socialist policies" like the nationalisation of public utilities; it claimed Kenyatta's government "want[ed] to build a capitalist system in the image of Western capitalism but are too embarrassed or dishonest to call it that." The KPU was legally recognised as the official opposition, thus restoring the country's two-party system. The new party was a direct challenge to Kenyatta's rule, and he regarded it as a communist-inspired plot to oust him. Soon after the KPU's creation, the Kenyan Parliament amended the constitution to ensure that the defectors—who had originally been elected on the KANU ticket—could not automatically retain their seats and would have to stand for re-election. This resulted in the by-elections of June 1966. The Luo increasingly rallied around the KPU, which experienced localised violence that hindered its ability to campaign, although Kenyatta's government officially disavowed this violence. KANU retained the support of all national newspapers and the government-owned radio and television stations. Of the 29 defectors, only nine were re-elected on the KPU ticket; Odinga was among them, having retained his Central Nyanza seat with a high majority. Odinga was replaced as vice president by Joseph Murumbi, who in turn would be replaced by Moi. In July 1969, Mboya—a prominent and popular Luo KANU politician—was assassinated by a Kikuyu. Kenyatta had reportedly been concerned that Mboya, with U.S. backing, could remove him from the presidency, and across Kenya there were suspicions voiced that Kenyatta's government was responsible for Mboya's death. The killing sparked tensions between the Kikuyu and other ethnic groups across the country, with riots breaking out in Nairobi. In October 1969, Kenyatta visited Kisumu, located in Luo territory, to open a hospital. On being greeted by a crowd shouting KPU slogans, he lost his temper. When members of the crowd started throwing stones, Kenyatta's bodyguards opened fire on them, killing and wounding several. In response to the rise of the KPU, Kenyatta had introduced oathing, a Kikuyu cultural tradition in which individuals came to Gatundu to swear their loyalty to him. Journalists were discouraged from reporting on the oathing system, and several were deported when they tried to do so. Many Kenyans were pressured or forced to swear oaths, something condemned by the country's Christian establishment.
In response to the growing condemnation, the oathing was terminated in September 1969, and Kenyatta invited leaders from other ethnic groups to a meeting in Gatundu. Kenyatta's government resorted to undemocratic measures to restrict the opposition. It used laws on detention and deportation to perpetuate its political hold. In 1966, it passed the Public Security (Detained and Restricted Persons) Regulations, allowing the authorities to arrest and detain anyone "for the preservation of public security" without putting them on trial. In October 1969 the government banned the KPU, and arrested Odinga before placing him in indefinite detention. With the organised opposition eliminated, from 1969 Kenya was once again a de facto one-party state. The December 1969 general election—in which all candidates were from the ruling KANU—resulted in Kenyatta's government remaining in power, but many members of his government lost their parliamentary seats to rivals from within the party. Over the coming years, many other political and intellectual figures considered hostile to Kenyatta's rule were detained or imprisoned, including Seroney, Flomena Chelagat, George Anyona, Martin Shikuku, and Ngũgĩ wa Thiong'o. Other political figures who were critical of Kenyatta's administration, including Ronald Ngala and Josiah Mwangi Kariuki, were killed in incidents that many speculated were government assassinations.
Illness and death
For many years, Kenyatta had suffered health problems. He had a mild stroke in 1966, and a second in May 1968. He suffered from gout and heart problems, all of which he sought to keep hidden from the public. By 1970, he was increasingly feeble and senile, and by 1975 Kenyatta had—according to Maloba—"in effect ceased to actively govern". Four Kikuyu politicians—Koinange, James Gichuru, Njoroge Mungai, and Charles Njonjo—formed his inner circle of associates, and he was rarely seen in public without one of them present. This clique faced opposition from KANU back-benchers spearheaded by Josiah Mwangi Kariuki. In March 1975 Kariuki was kidnapped, tortured, and murdered, and his body was dumped in the Ngong Hills. After Kariuki's murder, Maloba noted, there was a "noticeable erosion" of support for Kenyatta and his government. Thenceforth, when the president spoke to crowds, they no longer applauded his statements. In 1977, Kenyatta had several further strokes or heart attacks. On 22 August 1978, he died of a heart attack at State House, Mombasa. The Kenyan government had been preparing for Kenyatta's death since at least his 1968 stroke; it had requested British assistance in organising his state funeral as a result of the UK's longstanding experience in this area. McKenzie had been employed as a go-between, and the structure of the funeral was orchestrated to deliberately imitate that of the late British Prime Minister Winston Churchill. In doing so, senior Kenyans sought to project an image of their country as a modern nation-state rather than one beholden to tradition. The funeral took place at St. Andrew's Presbyterian Church, six days after Kenyatta's death. Britain's heir to the throne, Charles, Prince of Wales, attended the event, a symbol of the value that the British government perceived in its relationship with Kenya. African heads of state also attended, including Nyerere, Idi Amin, Kenneth Kaunda, and Hastings Banda, as did India's Morarji Desai and Pakistan's Muhammad Zia-ul-Haq.
His body was buried in a mausoleum in the grounds of the Parliament Buildings in Nairobi. Kenyatta's succession had been an issue of debate since independence, for Kenyatta had never clearly designated a successor. The Kikuyu clique surrounding him had sought to amend the constitution to prevent vice president Moi—who was from the Kalenjin people rather than the Kikuyu—from automatically becoming acting president, but their attempts failed amid sustained popular and parliamentary opposition. After Kenyatta's death, the transition of power proved smooth, surprising many international commentators. As vice president, Moi was sworn in as acting president for a 90-day interim period. In October he was unanimously elected KANU President and subsequently declared President of Kenya. Moi emphasised his loyalty to Kenyatta—"I followed and was faithful to him until his last day, even when his closest friends forsook him"—and there was much expectation that he would continue the policies inaugurated by Kenyatta. He nevertheless criticised the corruption, land grabbing, and capitalistic ethos that had characterised Kenyatta's period and expressed populist tendencies by emphasising a closer link to the poor. In 1982 he would amend the Kenyan constitution to create a de jure one-party state.

Political ideology
Kenyatta was an African nationalist, and was committed to the belief that European colonial rule in Africa must end. Like other anti-colonialists, he believed that under colonialism, the human and natural resources of Africa had been used not for the benefit of Africa's population but for the enrichment of the colonisers and their European homelands. For Kenyatta, independence meant not just self-rule, but an end to the colour bar and to the patronising attitudes and racist slang of Kenya's white minority. According to Murray-Brown, Kenyatta's "basic philosophy" throughout his life was that "all men deserved the right to develop peacefully according to their own wishes". Kenyatta expressed this in his statement that "I have stood always for the purposes of human dignity in freedom, and for the values of tolerance and peace." This approach was similar to the Zambian President Kenneth Kaunda's ideology of "African humanism". Murray-Brown noted that "Kenyatta had always kept himself free from ideological commitments", while the historian William R. Ochieng observed that "Kenyatta articulated no particular social philosophy". Similarly, Assensoh noted that Kenyatta was "not interested in social philosophies and slogans". Several commentators and biographers described him as being politically conservative, an ideological viewpoint likely bolstered by his training in functionalist anthropology. He pursued, according to Maloba, "a conservatism that worked in concert with imperial powers and was distinctly hostile to radical politics". Kenyatta biographer Guy Arnold described the Kenyan leader as "a pragmatist and a moderate", noting that his only "radicalism" came in the form of his "nationalist attack" on imperialism. Arnold also noted that Kenyatta "absorbed a great deal of the British approach to politics: pragmatism, only dealing with problems when they become crises, [and] tolerance as long as the other side is only talking". Donald Savage noted that Kenyatta believed in "the importance of authority and tradition", and that he displayed "a remarkably consistent view of development through self-help and hard work".
Kenyatta was also an elitist and encouraged the emergence of an elite class in Kenya. He wrestled with a contradiction between his conservative desire for a renewal of traditional custom and his reformist urges to embrace Western modernity. He also faced a tension between his attachment to Kikuyu ethics and tribal identity and his need to create a non-tribalised Kenyan nationalism.

Views on Pan-Africanism and socialism
While in Britain, Kenyatta made political alliances with individuals committed to Marxism and to radical Pan-Africanism, the idea that African countries should politically unify; some commentators have posthumously characterised Kenyatta as a Pan-Africanist. Maloba observed that during the colonial period Kenyatta had embraced "radical Pan African activism" which differed sharply from the "deliberate conservative positions, especially on the question of African liberation" that he espoused while Kenya's leader. As leader of Kenya, Kenyatta published two collected volumes of his speeches: Harambee and Suffering Without Bitterness. The material included in these publications was carefully selected so as to avoid mention of the radicalism he exhibited while in Britain during the 1930s. Kenyatta had been exposed to Marxist-Leninist ideas through his friendship with Padmore and the time spent in the Soviet Union, but had also been exposed to Western forms of liberal democratic government through his many years in Britain. He appears to have had no further involvement with the communist movement after 1934. As Kenya's leader, Kenyatta rejected the idea that Marxism offered a useful framework for analysing his country's socio-economic situation. The academics Bruce J. Berman and John M. Lonsdale argued that Marxist frameworks for analysing society influenced some of his beliefs, such as his view that British colonialism had to be destroyed rather than simply reformed. Kenyatta nevertheless disagreed with the Marxist attitude that tribalism was backward and retrograde; his positive attitude toward tribal society frustrated some of Kenyatta's Marxist Pan-Africanist friends in Britain, among them Padmore, James, and T. Ras Makonnen, who regarded it as parochial and un-progressive. Assensoh suggested that Kenyatta initially had socialist inclinations but "became a victim of capitalist circumstances"; conversely, Savage stated that "Kenyatta's direction was hardly towards the creation of a radical new socialist society", and Ochieng called him "an African capitalist". When in power, Kenyatta displayed a preoccupation with individual and mbari land rights that was at odds with any socialist-oriented collectivisation. According to Maloba, Kenyatta's government "sought to project capitalism as an African ideology, and communism (or socialism) as alien and dangerous".

Personality and personal life
Kenyatta was a flamboyant character, with an extroverted personality. According to Murray-Brown, he "liked being at the centre of life", and was always "a rebel at heart" who enjoyed "earthly pleasures". One of Kenyatta's fellow LSE students, Elspeth Huxley, referred to him as "a showman to his finger tips; jovial, a good companion, shrewd, fluent, quick, devious, subtle, [and] flesh-pot loving". Kenyatta liked to dress elaborately; throughout most of his adult life, he wore finger rings, and while studying at university in London took to wearing a fez and cloak and carrying a silver-topped black cane.
He adopted his surname, "Kenyatta", after the name of a beaded belt he often wore in early life. As President he collected a variety of expensive cars. Murray-Brown noted that Kenyatta had the ability to "appear all things to all men", also displaying a "consummate ability to keep his true purposes and abilities to himself", for instance concealing his connections with communists and the Soviet Union both from members of the British Labour Party and from Kikuyu figures at home. This deviousness was sometimes interpreted as dishonesty by those who met him. Referring to Kenyatta's appearance in 1920s Kenya, Murray-Brown stated the leader presented himself to Europeans as "an agreeable if somewhat seedy 'Europeanized' native" and to indigenous Africans as "a sophisticated man-about-town about whose political earnestness they had certain reservations". Simon Gikandi argued that Kenyatta, like some of his contemporaries in the Pan-African movement, was an "Afro-Victorian", someone whose identity had been shaped "by the culture of colonialism and colonial institutions", especially those of the Victorian era. During the 1920s and 1930s, Kenyatta cultivated the image of a "colonial gentleman"; in England, he displayed "pleasant manners" and a flexible attitude in adapting to urban situations dissimilar to the lands he had grown up in. A. R. Barlow, a member of the Church of Scotland Mission at Kikuyu, met with Kenyatta in Britain, later relating that he was impressed by how Kenyatta could "mix on equal terms with Europeans and to hold his end up in spite of his handicaps, educationally and socially." The South African Peter Abrahams met Kenyatta in London, noting that of all the black men involved in the city's Pan-Africanist movement, he was "the most relaxed, sophisticated and 'westernized' of the lot of us". As President, Kenyatta often reminisced nostalgically about his time in England, referring to it as "home" on several occasions. Berman and Lonsdale described his life as being preoccupied with "a search for the reconciliation of the Western modernity he embraced and an equally valued Kikuyuness he could not discard". Gikandi argued that Kenyatta's "identification with Englishness was much more profound than both his friends and enemies have been willing to admit". Kenyatta has also been described as a talented orator, author, and editor. He had dictatorial and autocratic tendencies, as well as a fierce temper that could emerge as rage on occasion. Murray-Brown noted that Kenyatta could be "quite unscrupulous, even brutal" in using others to get what he wanted, but he never displayed any physical cruelty or nihilism. Kenyatta had no racist impulses regarding white Europeans, as can, for instance, be seen through his marriage to a white English woman. He told his daughter "the English are wonderful people to live with in England." He welcomed white support for his cause, so long as it was generous and unconditional, and spoke of a Kenya in which indigenous Africans, Europeans, Arabs, and Indians could all regard themselves as Kenyans, working and living alongside each other peacefully. Despite this, Kenyatta exhibited a general dislike of Indians, believing that they exploited indigenous Africans in Kenya. Kenyatta was a polygamist. He viewed monogamy through an anthropological lens as an interesting Western phenomenon but did not adopt the practice himself, instead having sexual relations with a wide range of women throughout his life. 
Murray-Brown characterized Kenyatta as an "affectionate father" to his children, but one who was frequently absent. Kenyatta had two children from his first marriage with Grace Wahu: son Peter Muigai Kenyatta (born 1920), who later became a deputy minister; and daughter Margaret Kenyatta (born 1928). Margaret served as mayor of Nairobi between 1970 and 1976 and then as Kenya's ambassador to the United Nations from 1976 to 1986. Of these children, it was Margaret who was Kenyatta's closest confidante. During his trial, Kenyatta described himself as a Christian, saying, "I do not follow any particular denomination. I believe in Christianity as a whole." Arnold stated that in England, Kenyatta's adherence to Christianity was "desultory". While in London, Kenyatta had taken an interest in the atheist speakers at Speakers' Corner in Hyde Park, while an Irish Muslim friend had unsuccessfully urged Kenyatta to convert to Islam. During his imprisonment, Kenyatta read up on Islam, Hinduism, Buddhism, and Confucianism through books supplied to him by Stock. The Israeli diplomat Asher Naim visited him in this period, noting that although Kenyatta was "not a religious man, he was appreciative of the Bible". Despite portraying himself as a Christian, he found the attitudes of many European missionaries intolerable, in particular their readiness to see everything African as evil. In Facing Mount Kenya, he challenged the missionaries' dismissive attitude toward ancestor veneration, which he instead preferred to call "ancestor communion". In that book's dedication, Kenyatta invoked "ancestral spirits" as part of "the Fight for African Freedom."

Legacy
Within Kenya, Kenyatta came to be regarded as the "Father of the Nation", and was given the unofficial title of Mzee, a Swahili term meaning "grand old man". From 1963 until his death, a cult of personality surrounded him in the country, one which deliberately interlinked Kenyan nationalism with Kenyatta's own personality. This use of Kenyatta as a popular symbol of the nation itself was furthered by the similarities between their names. He came to be regarded as a father figure not only by Kikuyu and Kenyans, but by Africans more widely. After 1963, Maloba noted, Kenyatta became "about the most admired post-independence African leader" on the world stage, one whom Western countries hailed as a "beloved elder statesman." His opinions were "most valued" both by conservative African politicians and by Western leaders. On becoming Kenya's leader, his anti-communist positions gained favour in the West, and some pro-Western governments gave him awards; in 1965, for instance, he received medals from both Pope Paul VI and the South Korean government. In 1974, Arnold referred to Kenyatta as "one of the outstanding African leaders now living", someone who had become "synonymous with Kenya". He added that Kenyatta had been "one of the shrewdest politicians" on the continent, regarded as "one of the great architects of African nationalist achievement since 1945". Kenneth O. Nyangena characterised him as "one of the greatest men of the twentieth century", having been "a beacon, a rallying point for suffering Kenyans to fight for their rights, justice and freedom" whose "brilliance gave strength and aspiration to people beyond the boundaries of Kenya". In 2018, Maloba described him as "one of the legendary pioneers of modern African nationalism".
In their examination of his writings, Berman and Lonsdale described him as a "pioneer" for being one of the first Kikuyu to write and publish; "his representational achievement was unique".

Domestic influence and posthumous assessment
Maxon noted that in the areas of health and education, Kenya under Kenyatta "achieved more in a decade and a half than the colonial state had accomplished in the preceding six decades." By the time of Kenyatta's death, Kenya had a higher life expectancy than most of Sub-Saharan Africa. There had been an expansion in primary, secondary, and higher education, and the country had taken what Maxon called "giant steps" toward achieving its goal of universal primary education for Kenyan children. Another significant success had been the peaceful dismantling, with minimal disruption, of the colonial-era system of racial segregation in schools, public facilities, and social clubs. During much of his life, Kenya's white settlers had regarded Kenyatta as a malcontent and an agitator; for them, he was a figure of hatred and fear. As noted by Arnold, "no figure in the whole of British Africa, with the possible exception of [Nkrumah], excited among the settlers and the colonial authorities alike so many expressions of anger, denigration and fury as did Kenyatta." As the historian Keith Kyle put it, for many whites Kenyatta was "Satan Incarnate". This white animosity reached its apogee between 1950 and 1952. By 1964, this image had largely shifted, and many white settlers referred to him as "Good Old Mzee". Murray-Brown expressed the view that for many, Kenyatta's "message of reconciliation, 'to forgive and forget', was perhaps his greatest contribution to his country and to history." To Ochieng, Kenyatta was "a personification of conservative social forces and tendencies" in Kenya. Towards the end of his presidency, many younger Kenyans—while respecting Kenyatta's role in attaining independence—regarded him as a reactionary. Those desiring a radical transformation of Kenyan society often compared Kenyatta's Kenya unfavourably with its southern neighbour, Julius Nyerere's Tanzania. The criticisms that leftists like Odinga made of Kenyatta's leadership were similar to those that the intellectual Frantz Fanon had made of post-colonial leaders throughout Africa. Drawing upon Marxist theory, Jay O'Brien, for instance, argued that Kenyatta had come to power "as a representative of a would-be bourgeoisie", a coalition of "relatively privileged petty bourgeois African elements" who wanted simply to replace the British colonialists and "Asian commercial bourgeoisie" with themselves. He suggested that the British supported Kenyatta in this, seeing him as a bulwark against growing worker and peasant militancy who would ensure continued neo-colonial dominance. Providing a similar leftist critique, the Marxist writer Ngũgĩ wa Thiong'o stated that "here was a black Moses who had been called by history to lead his people to the promised land of no exploitation, no oppression, but who failed to rise to the occasion". Ngũgĩ saw Kenyatta as a "twentieth-century tragic figure: he could have been a Lenin, a Mao Tse-Tung, or a Ho Chi Minh; but he ended up being a Chiang Kai-Shek, a Park-Chung Hee, or a Pinochet." Ngũgĩ was among Kenyan critics who claimed that Kenyatta treated Mau Mau veterans dismissively, leaving many of them impoverished and landless while seeking to remove them from the centre stage of national politics.
Kenyatta's government also faced criticism in other areas; for instance, it made little progress in advancing women's rights in Kenya. Assensoh argued that in his life story, Kenyatta had a great deal in common with Ghana's Kwame Nkrumah. Simon Gikandi noted that Kenyatta, like Nkrumah, was remembered for "initiating the discourse and process that plotted the narrative of African freedom", but at the same time both were "often remembered for their careless institution of presidential rule, one party dictatorship, ethnicity and cronyism. They are remembered both for making the dream of African independence a reality and for their invention of postcolonial authoritarianism." In 1991, the Kenyan lawyer and human rights activist Gibson Kamau Kuria noted that in abolishing the federal system, banning independent candidates from standing in elections, setting up a unicameral legislature, and relaxing restrictions on the use of emergency powers, Kenyatta had laid "the groundwork" for Moi to further advance dictatorial power in Kenya during the late 1970s and 1980s. Kenyatta was accused by Kenya's Truth, Justice and Reconciliation Commission in its 2013 report of using his authority as president to allocate large tracts of land to himself and his family across Kenya. The Kenyatta family is among Kenya's biggest landowners. During the 1990s, there was still much frustration among tribal groups, particularly in the Nandi, Nakuru, Uasin-Gishu, and Trans-Nzoia Districts, where under Kenyatta's government they had not regained the land taken by European settlers and more of it had been sold to those regarded as "foreigners"—Kenyans from other tribes. Among these groups there were widespread calls for restitution, and in 1991 and 1992 there were violent attacks against many of those who had obtained land through Kenyatta's patronage in these areas. The violence continued sporadically until 1996, with an estimated 1,500 killed and 300,000 displaced in the Rift Valley.
4066308
https://en.wikipedia.org/wiki/Multiple%20sequence%20alignment
Multiple sequence alignment
Multiple sequence alignment (MSA) may refer to the process or the result of sequence alignment of three or more biological sequences, generally protein, DNA, or RNA. In many cases, the input set of query sequences is assumed to have an evolutionary relationship by which they share a linkage and are descended from a common ancestor. From the resulting MSA, sequence homology can be inferred and phylogenetic analysis can be conducted to assess the sequences' shared evolutionary origins. Visual depictions of an alignment illustrate mutation events such as point mutations (single amino acid or nucleotide changes) that appear as differing characters in a single alignment column, and insertion or deletion mutations (indels or gaps) that appear as hyphens in one or more of the sequences in the alignment. Multiple sequence alignment is often used to assess sequence conservation of protein domains, tertiary and secondary structures, and even individual amino acids or nucleotides. Computational algorithms are used to produce and analyse the MSAs due to the difficulty and intractability of manually processing the sequences given their biologically relevant length. MSAs require more sophisticated methodologies than pairwise alignment because they are more computationally complex. Most multiple sequence alignment programs use heuristic methods rather than global optimization because identifying the optimal alignment between more than a few sequences of moderate length is prohibitively computationally expensive. On the other hand, heuristic methods generally fail to give guarantees on the solution quality, with heuristic solutions often shown to be far below the optimal solution on benchmark instances.

Problem statement
Given $n$ sequences $S_1, S_2, \ldots, S_n$, similar to the form below:

$$S := \begin{cases} S_1 = (S_{11}, S_{12}, \ldots, S_{1m_1}) \\ S_2 = (S_{21}, S_{22}, \ldots, S_{2m_2}) \\ \vdots \\ S_n = (S_{n1}, S_{n2}, \ldots, S_{nm_n}) \end{cases}$$

A multiple sequence alignment of this set of sequences is obtained by inserting any number of gaps into each sequence $S_i$ until the modified sequences $S_i'$ all conform to a common length $L \geq \max\{m_i \mid i = 1, \ldots, n\}$ and no column of the resulting set consists only of gaps. The mathematical form of an MSA of the above sequence set is shown below:

$$S' := \begin{cases} S_1' = (S_{11}', S_{12}', \ldots, S_{1L}') \\ S_2' = (S_{21}', S_{22}', \ldots, S_{2L}') \\ \vdots \\ S_n' = (S_{n1}', S_{n2}', \ldots, S_{nL}') \end{cases}$$

To return from each particular sequence $S_i'$ to $S_i$, remove all gaps.

Graphing approach
A general approach when calculating multiple sequence alignments is to use graphs to identify all of the different alignments. When finding alignments via graphs, a complete alignment is created in a weighted graph that contains a set of vertices and a set of edges. Each of the graph edges has a weight based on a certain heuristic that helps to score each alignment or subset of the original graph.

Tracing alignments
When determining the best suited alignments for each MSA, a trace is usually generated. A trace is a set of realized, or corresponding and aligned, vertices that has a specific weight based on the edges that are selected between corresponding vertices. When choosing traces for a set of sequences, it is necessary to choose a trace with a maximum weight to get the best alignment of the sequences.

Alignment methods
There are various alignment methods used within multiple sequence alignment to maximize scores and correctness of alignments. Each is usually based on a certain heuristic with an insight into the evolutionary process. Most try to replicate evolution to get the most realistic alignment possible to best predict relations between sequences.

Dynamic programming
A direct method for producing an MSA uses the dynamic programming technique to identify the globally optimal alignment solution.
For proteins, this method usually involves two sets of parameters: a gap penalty and a substitution matrix assigning scores or probabilities to the alignment of each possible pair of amino acids based on the similarity of the amino acids' chemical properties and the evolutionary probability of the mutation. For nucleotide sequences, a similar gap penalty is used, but a much simpler substitution matrix, wherein only identical matches and mismatches are considered, is typical. The scores in the substitution matrix may be either all positive or a mix of positive and negative in the case of a global alignment, but must be both positive and negative in the case of a local alignment. For n individual sequences, the naive method requires constructing the n-dimensional equivalent of the matrix formed in standard pairwise sequence alignment. The search space thus increases exponentially with increasing n and is also strongly dependent on sequence length. Expressed with the big O notation commonly used to measure computational complexity, a naïve MSA takes O(Length^Nseqs) time to produce. To find the global optimum for n sequences this way has been shown to be an NP-complete problem. In 1989, building on the Carrillo–Lipman algorithm, Altschul introduced a practical method that uses pairwise alignments to constrain the n-dimensional search space. In this approach, pairwise dynamic programming alignments are performed on each pair of sequences in the query set, and only the space near the n-dimensional intersection of these alignments is searched for the n-way alignment. The MSA program optimizes the sum of all of the pairs of characters at each position in the alignment (the so-called sum-of-pairs score) and has been implemented in a software program for constructing multiple sequence alignments. In 2019, Hosseininasab and van Hoeve showed that by using decision diagrams, MSA may be modeled in polynomial space complexity.

Progressive alignment construction
The most widely used approach to multiple sequence alignments uses a heuristic search known as the progressive technique (also known as the hierarchical or tree method), developed by Da-Fei Feng and Doolittle in 1987. Progressive alignment builds up a final MSA by combining pairwise alignments, beginning with the most similar pair and progressing to the most distantly related. All progressive alignment methods require two stages: a first stage in which the relationships between the sequences are represented as a tree, called a guide tree, and a second step in which the MSA is built by adding the sequences sequentially to the growing MSA according to the guide tree. The initial guide tree is determined by an efficient clustering method such as neighbor-joining or UPGMA, and may use distances based on the number of identical two-letter sub-sequences (as in FASTA) rather than a dynamic programming alignment. Progressive alignments are not guaranteed to be globally optimal. The primary problem is that when errors are made at any stage in growing the MSA, these errors are then propagated through to the final result. Performance is also particularly bad when all of the sequences in the set are rather distantly related. Most modern progressive methods modify their scoring function with a secondary weighting function that assigns scaling factors to individual members of the query set in a nonlinear fashion based on their phylogenetic distance from their nearest neighbors. This corrects for non-random selection of the sequences given to the alignment program.
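Progressive methods are assembled from exactly these pairwise dynamic-programming alignments. The Python sketch below is a minimal Needleman–Wunsch global aligner, the pairwise building block; the match, mismatch, and gap constants are illustrative choices rather than any published scoring scheme:

# Minimal Needleman-Wunsch global alignment (illustrative scoring constants).
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    # score[i][j] = best score for aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    # Trace back from the bottom-right corner to recover one optimal alignment.
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch):
            out_a.append(a[i-1]); out_b.append(b[j-1]); i, j = i - 1, j - 1
        elif i > 0 and score[i][j] == score[i-1][j] + gap:
            out_a.append(a[i-1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j-1]); j -= 1
    return ''.join(reversed(out_a)), ''.join(reversed(out_b)), score[n][m]

print(needleman_wunsch("GATTACA", "GCATGCU"))

The naïve n-sequence generalization fills an n-dimensional table in exactly the same way, which is why its cost grows as O(Length^Nseqs) and why exact methods instead constrain the search space.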
Progressive alignment methods are efficient enough to implement on a large scale for many (100s to 1000s) sequences. Progressive alignment services are commonly available on publicly accessible web servers so users need not locally install the applications of interest. The most popular progressive alignment method has been the Clustal family, especially the weighted variant ClustalW, to which access is provided by a large number of web portals including GenomeNet, EBI, and EMBNet. Different portals or implementations can vary in user interface and make different parameters accessible to the user. ClustalW is used extensively for phylogenetic tree construction, in spite of the author's explicit warnings that unedited alignments should not be used in such studies, and as input for protein structure prediction by homology modeling. The current version of the Clustal family is ClustalW2. EMBL-EBI announced that ClustalW2 would be retired in August 2015. It recommends Clustal Omega, which builds protein alignments using seeded guide trees and HMM profile–profile techniques, and offers other MSA tools for progressive DNA alignments. One of them is MAFFT (Multiple Alignment using Fast Fourier Transform). Another common progressive alignment method called T-Coffee is slower than Clustal and its derivatives but generally produces more accurate alignments for distantly related sequence sets. T-Coffee calculates pairwise alignments by combining the direct alignment of the pair with indirect alignments that align each sequence of the pair to a third sequence. It uses the output from Clustal as well as another local alignment program, LALIGN, which finds multiple regions of local alignment between two sequences. The resulting alignment and phylogenetic tree are used as a guide to produce new and more accurate weighting factors. Because progressive methods are heuristics that are not guaranteed to converge to a global optimum, alignment quality can be difficult to evaluate and their true biological significance can be obscure. A semi-progressive method that improves alignment quality and does not use a lossy heuristic while still running in polynomial time has been implemented in the program PSAlign.

Iterative methods
A set of methods to produce MSAs while reducing the errors inherent in progressive methods are classified as "iterative" because they work similarly to progressive methods but repeatedly realign the initial sequences as well as adding new sequences to the growing MSA. One reason progressive methods are so strongly dependent on a high-quality initial alignment is the fact that these alignments are always incorporated into the final result — that is, once a sequence has been aligned into the MSA, its alignment is not considered further. This approximation improves efficiency at the cost of accuracy. By contrast, iterative methods can return to previously calculated pairwise alignments or sub-MSAs incorporating subsets of the query sequence as a means of optimizing a general objective function such as finding a high-quality alignment score. A variety of subtly different iteration methods have been implemented and made available in software packages; reviews and comparisons have been useful but generally refrain from choosing a "best" technique. The software package PRRN/PRRP uses a hill-climbing algorithm to optimize its MSA alignment score and iteratively corrects both alignment weights and locally divergent or "gappy" regions of the growing MSA.
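As a generic illustration of this hill-climbing style of refinement (a sketch of the idea only, not PRRN's actual algorithm), the following Python fragment repeatedly nudges gap placements in a random sequence and keeps only the moves that improve a simple column-agreement objective:

import random

def column_agreement(msa):
    """Toy objective: for each column, count residues matching the column's
    most common residue (gaps excluded)."""
    total = 0
    for column in zip(*msa):
        residues = [c for c in column if c != '-']
        if residues:
            total += max(residues.count(r) for r in set(residues))
    return total

def refine(msa, steps=2000, seed=0):
    """Hill climbing: swap a gap with a neighbouring residue in a random
    sequence; keep the move only when the objective improves."""
    rng = random.Random(seed)
    rows = [list(s) for s in msa]
    best = column_agreement(rows)
    for _ in range(steps):
        r = rng.randrange(len(rows))
        gaps = [i for i, c in enumerate(rows[r]) if c == '-']
        if not gaps:
            continue
        g = rng.choice(gaps)
        t = g + rng.choice((-1, 1))
        if not 0 <= t < len(rows[r]) or rows[r][t] == '-':
            continue
        rows[r][g], rows[r][t] = rows[r][t], rows[r][g]
        score = column_agreement(rows)
        if score > best:
            best = score
        else:  # undo a non-improving move
            rows[r][g], rows[r][t] = rows[r][t], rows[r][g]
    return [''.join(row) for row in rows], best

print(refine(["GATTACA-", "GA-TTACA", "-GATTACA"]))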
PRRP performs best when refining an alignment previously constructed by a faster method. Another iterative program, DIALIGN, takes an unusual approach of focusing narrowly on local alignments between sub-segments or sequence motifs without introducing a gap penalty. The alignment of individual motifs is then achieved with a matrix representation similar to a dot-matrix plot in a pairwise alignment. An alternative method that uses fast local alignments as anchor points or "seeds" for a slower global-alignment procedure is implemented in the CHAOS/DIALIGN suite. A third popular iteration-based method called MUSCLE (multiple sequence alignment by log-expectation) improves on progressive methods with a more accurate distance measure to assess the relatedness of two sequences. The distance measure is updated between iteration stages (although, in its original form, MUSCLE contained only 2–3 iterations depending on whether refinement was enabled).

Consensus methods
Consensus methods attempt to find the optimal multiple sequence alignment given multiple different alignments of the same set of sequences. There are two commonly used consensus methods, M-COFFEE and MergeAlign. M-COFFEE uses multiple sequence alignments generated by seven different methods to generate consensus alignments. MergeAlign is capable of generating consensus alignments from any number of input alignments generated using different models of sequence evolution or different methods of multiple sequence alignment. The default option for MergeAlign is to infer a consensus alignment using alignments generated using 91 different models of protein sequence evolution.

Hidden Markov models
Hidden Markov models are probabilistic models that can assign likelihoods to all possible combinations of gaps, matches, and mismatches to determine the most likely MSA or set of possible MSAs. HMMs can produce a single highest-scoring output but can also generate a family of possible alignments that can then be evaluated for biological significance. HMMs can produce both global and local alignments. Although HMM-based methods have been developed relatively recently, they offer significant improvements in computational speed, especially for sequences that contain overlapping regions. Typical HMM-based methods work by representing an MSA as a form of directed acyclic graph known as a partial-order graph, which consists of a series of nodes representing possible entries in the columns of an MSA. In this representation, a column that is absolutely conserved (that is, that all the sequences in the MSA share a particular character at a particular position) is coded as a single node with as many outgoing connections as there are possible characters in the next column of the alignment. In the terms of a typical hidden Markov model, the observed states are the individual alignment columns and the "hidden" states represent the presumed ancestral sequence from which the sequences in the query set are hypothesized to have descended. An efficient search variant of the dynamic programming method, known as the Viterbi algorithm, is generally used to successively align the growing MSA to the next sequence in the query set to produce a new MSA. This is distinct from progressive alignment methods because the alignment of prior sequences is updated at each new sequence addition.
However, like progressive methods, this technique can be influenced by the order in which the sequences in the query set are integrated into the alignment, especially when the sequences are distantly related. Several software programs are available in which variants of HMM-based methods have been implemented and which are noted for their scalability and efficiency, although properly using an HMM method is more complex than using more common progressive methods. The simplest is POA (Partial-Order Alignment); a similar but more generalized method is implemented in the packages SAM (Sequence Alignment and Modeling System) and HMMER. SAM has been used as a source of alignments for protein structure prediction to participate in the CASP structure prediction experiment and to develop a database of predicted proteins in the yeast species S. cerevisiae. HHsearch is a software package for the detection of remotely related protein sequences based on the pairwise comparison of HMMs. A server running HHsearch (HHpred) was by far the fastest of the 10 best automatic structure prediction servers in the CASP7 and CASP8 structure prediction competitions.

Phylogeny-aware methods
Most multiple sequence alignment methods try to minimize the number of insertions/deletions (gaps) and, as a consequence, produce compact alignments. This causes several problems if the sequences to be aligned contain non-homologous regions, or if gaps are informative in a phylogenetic analysis. These problems are common in newly produced sequences that are poorly annotated and may contain frame-shifts, wrong domains or non-homologous spliced exons. The first such method was developed in 2005 by Löytynoja and Goldman. The same authors released a software package called PRANK in 2008. PRANK improves alignments when insertions are present. Nevertheless, it runs slowly compared to progressive and/or iterative methods that have been developed over several years. In 2012, two new phylogeny-aware tools appeared. One is called PAGAN, developed by the same team as PRANK. The other is ProGraphMSA, developed by Szalkowski. Both software packages were developed independently but share common features, notably the use of graph algorithms to improve the recognition of non-homologous regions, and code improvements that make them faster than PRANK.

Motif finding
Motif finding, also known as profile analysis, is a method of locating sequence motifs in global MSAs that is both a means of producing a better MSA and a means of producing a scoring matrix for use in searching other sequences for similar motifs. A variety of methods for isolating the motifs have been developed, but all are based on identifying short highly conserved patterns within the larger alignment and constructing a matrix similar to a substitution matrix that reflects the amino acid or nucleotide composition of each position in the putative motif. The alignment can then be refined using these matrices. In standard profile analysis, the matrix includes entries for each possible character as well as entries for gaps. Alternatively, statistical pattern-finding algorithms can identify motifs as a precursor to an MSA rather than as a derivation. In many cases when the query set contains only a small number of sequences or contains only highly related sequences, pseudocounts are added to normalize the distribution reflected in the scoring matrix. In particular, this corrects zero-probability entries in the matrix to values that are small but nonzero.
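As a concrete illustration of the profile idea, the Python sketch below builds a position-specific log-odds matrix from a toy set of equal-length motif sequences, with an add-one pseudocount and a uniform background; both are illustrative assumptions, not the scheme of any particular profile tool:

import math

def position_weight_matrix(motif_alignment, alphabet="ACGT", pseudocount=1.0):
    """Build per-position log-odds scores from equal-length motif sequences."""
    length = len(motif_alignment[0])
    background = 1.0 / len(alphabet)          # uniform background assumption
    pwm = []
    for pos in range(length):
        column = [seq[pos] for seq in motif_alignment]
        denom = len(column) + pseudocount * len(alphabet)
        scores = {}
        for ch in alphabet:
            freq = (column.count(ch) + pseudocount) / denom   # never zero
            scores[ch] = math.log2(freq / background)
        pwm.append(scores)
    return pwm

def score_window(pwm, window):
    """Score a candidate site against the matrix."""
    return sum(col[ch] for col, ch in zip(pwm, window))

pwm = position_weight_matrix(["TATAAT", "TATGAT", "TAAAAT", "TATACT"])
print(round(score_window(pwm, "TATAAT"), 2))

The pseudocount keeps every matrix entry finite, which is exactly the zero-probability correction described above.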
Blocks analysis is a method of motif finding that restricts motifs to ungapped regions in the alignment. Blocks can be generated from an MSA or they can be extracted from unaligned sequences using a precalculated set of common motifs previously generated from known gene families. Block scoring generally relies on the spacing of high-frequency characters rather than on the calculation of an explicit substitution matrix. The BLOCKS server provides an interactive method to locate such motifs in unaligned sequences. Statistical pattern-matching has been implemented using both the expectation-maximization algorithm and the Gibbs sampler. One of the most common motif-finding tools, known as MEME, uses expectation maximization and hidden Markov methods to generate motifs that are then used as search tools by its companion MAST in the combined suite MEME/MAST.

Non-coding multiple sequence alignment
Non-coding DNA regions, especially TFBSs, are rather more conserved and not necessarily evolutionarily related, and may have converged from non-common ancestors. Thus, the assumptions used to align protein sequences and DNA coding regions are inherently different from those that hold for TFBS sequences. Although it is meaningful to align DNA coding regions for homologous sequences using mutation operators, alignment of binding site sequences for the same transcription factor cannot rely on evolutionarily related mutation operations. Similarly, the evolutionary operator of point mutations can be used to define an edit distance for coding sequences, but this has little meaning for TFBS sequences because any sequence variation has to maintain a certain level of specificity for the binding site to function. This becomes especially important when trying to align known TFBS sequences to build supervised models to predict unknown locations of the same TFBS. Hence, multiple sequence alignment methods need to adjust the underlying evolutionary hypothesis and the operators used, as in the published EDNA method, which incorporates neighbouring-base thermodynamic information to align binding sites by searching for the lowest-thermodynamic-energy alignment that conserves the specificity of the binding site.

Optimization

Genetic algorithms and simulated annealing
Two standard optimization techniques in computer science — both inspired by, but not directly reproducing, physical processes — have also been used in an attempt to more efficiently produce quality MSAs. One such technique, genetic algorithms, has been used for MSA production in an attempt to broadly simulate the hypothesized evolutionary process that gave rise to the divergence in the query set. The method works by breaking a series of possible MSAs into fragments and repeatedly rearranging those fragments with the introduction of gaps at varying positions. A general objective function is optimized during the simulation, most generally the "sum of pairs" maximization function introduced in dynamic programming-based MSA methods. A technique for protein sequences has been implemented in the software program SAGA (Sequence Alignment by Genetic Algorithm) and its equivalent in RNA is called RAGA. In the technique of simulated annealing, an existing MSA produced by another method is refined by a series of rearrangements designed to find better regions of alignment space than the one the input alignment already occupies. Like the genetic algorithm method, simulated annealing maximizes an objective function like the sum-of-pairs function.
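Both formulations need such an objective to optimize. A minimal sum-of-pairs scorer might look like the following Python sketch; the match, mismatch, and gap constants, and the convention of scoring gap-gap pairs as zero, are illustrative choices:

def sum_of_pairs(msa, match=1, mismatch=-1, gap=-2):
    """Toy sum-of-pairs objective: score every pair of residues in every
    column. Gap-gap pairs score 0, a common (but not universal) convention."""
    total = 0
    for column in zip(*msa):              # iterate over alignment columns
        for i in range(len(column)):
            for j in range(i + 1, len(column)):
                x, y = column[i], column[j]
                if x == '-' and y == '-':
                    continue
                elif x == '-' or y == '-':
                    total += gap
                else:
                    total += match if x == y else mismatch
    return total

print(sum_of_pairs(["GATT-ACA", "GA-TTACA", "G-TTAC-A"]))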
Simulated annealing uses a metaphorical "temperature factor" that determines the rate at which rearrangements proceed and the likelihood of each rearrangement; typical usage alternates periods of high rearrangement rates with relatively low likelihood (to explore more distant regions of alignment space) with periods of lower rates and higher likelihoods to more thoroughly explore local minima near the newly "colonized" regions. This approach has been implemented in the program MSASA (Multiple Sequence Alignment by Simulated Annealing).

Mathematical programming and exact solution algorithms
Mathematical programming, and in particular mixed-integer programming models, are another approach to solve MSA problems. The advantage of such optimization models is that they can be used to find the optimal MSA solution more efficiently compared to the traditional DP approach. This is due in part to the applicability of decomposition techniques for mathematical programs, where the MSA model is decomposed into smaller parts and iteratively solved until the optimal solution is found. Example algorithms used to solve mixed-integer programming models of MSA include branch and price and Benders decomposition. Although exact approaches are computationally slow compared to heuristic algorithms for MSA, they are guaranteed to reach the optimal solution eventually, even for large-size problems.

Simulated quantum computing
In January 2017, D-Wave Systems announced that its qbsolv open-source quantum computing software had been successfully used to find a faster solution to the MSA problem.

Alignment visualization and quality control
The necessary use of heuristics for multiple alignment means that for an arbitrary set of proteins, there is always a good chance that an alignment will contain errors. For example, an evaluation of several leading alignment programs using the BAliBase benchmark found that at least 24% of all pairs of aligned amino acids were incorrectly aligned. These errors can arise because of unique insertions into one or more regions of sequences, or through some more complex evolutionary process leading to proteins that do not align easily by sequence alone. As the number of sequences and their divergence increases, many more errors will be made simply because of the heuristic nature of MSA algorithms. Multiple sequence alignment viewers enable alignments to be visually reviewed, often by inspecting the quality of alignment for annotated functional sites on two or more sequences. Many also enable the alignment to be edited to correct these (usually minor) errors, in order to obtain an optimal 'curated' alignment suitable for use in phylogenetic analysis or comparative modeling. However, as the number of sequences increases, and especially in genome-wide studies that involve many MSAs, it is impossible to manually curate all alignments. Furthermore, manual curation is subjective. Finally, even the best expert cannot confidently align the more ambiguous cases of highly diverged sequences. In such cases it is common practice to use automatic procedures to exclude unreliably aligned regions from the MSA. For the purpose of phylogeny reconstruction (see below) the Gblocks program is widely used to remove alignment blocks suspected of low quality, according to various cutoffs on the number of gapped sequences in alignment columns.
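A minimal version of such a gap-based cutoff is sketched below in Python; the 50% threshold is an arbitrary illustration, not Gblocks' actual criteria:

def filter_gappy_columns(msa, max_gap_fraction=0.5):
    """Drop alignment columns whose gap fraction exceeds the cutoff."""
    n = len(msa)
    kept = [i for i, column in enumerate(zip(*msa))
            if column.count('-') / n <= max_gap_fraction]
    return [''.join(seq[i] for i in kept) for seq in msa]

# Columns where more than half the sequences carry a gap are removed.
print(filter_gappy_columns(["GATT-ACA", "GA---ACA", "GAT--ACA"]))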
However, these criteria may excessively filter out regions with insertion/deletion events that may still be aligned reliably, and these regions might be desirable for other purposes such as detection of positive selection. A few alignment algorithms output site-specific scores that allow the selection of high-confidence regions. Such a service was first offered by the SOAP program, which tests the robustness of each column to perturbation in the parameters of the popular alignment program CLUSTALW. The T-Coffee program uses a library of alignments in the construction of the final MSA, and its output MSA is colored according to confidence scores that reflect the agreement between different alignments in the library regarding each aligned residue. Its extension, TCS (Transitive Consistency Score), uses T-Coffee libraries of pairwise alignments to evaluate any third-party MSA. Pairwise projections can be produced using fast or slow methods, thus allowing a trade-off between speed and accuracy. Another alignment program that can output an MSA with confidence scores is FSA, which uses a statistical model that allows calculation of the uncertainty in the alignment. The HoT (Heads-Or-Tails) score can be used as a measure of site-specific alignment uncertainty due to the existence of multiple co-optimal solutions. The GUIDANCE program calculates a similar site-specific confidence measure based on the robustness of the alignment to uncertainty in the guide tree that is used in progressive alignment programs. An alternative, more statistically justified approach to assess alignment uncertainty is the use of probabilistic evolutionary models for joint estimation of phylogeny and alignment. A Bayesian approach allows calculation of posterior probabilities of estimated phylogeny and alignment, which is a measure of the confidence in these estimates. In this case, a posterior probability can be calculated for each site in the alignment. Such an approach was implemented in the program BAli-Phy. There are free programs available for visualization of multiple sequence alignments, for example Jalview and UGENE.

Phylogenetic use
Multiple sequence alignments can be used to create a phylogenetic tree. This is possible for two reasons. The first is that functional domains that are known in annotated sequences can be used for alignment in non-annotated sequences. The other is that conserved regions known to be functionally important can be found. This makes it possible for multiple sequence alignments to be used to analyze and find evolutionary relationships through homology between sequences. Point mutations and insertion or deletion events (called indels) can be detected. Multiple sequence alignments can also be used to identify functionally important sites, such as binding sites, active sites, or sites corresponding to other key functions, by locating conserved domains. When examining multiple sequence alignments, it is useful to consider several aspects of the sequences being compared. These aspects include identity, similarity, and homology. Identity means that the sequences have identical residues at their respective positions. Similarity, by contrast, refers to the compared sequences having quantifiably similar residues. For example, in terms of nucleotide sequences, pyrimidines are considered similar to each other, as are purines. Similarity ultimately leads to homology, in that the more similar sequences are, the closer they are to being homologous.
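These notions can be made concrete for a pair of aligned nucleotide sequences. In the Python sketch below, "similar" means both bases belong to the same class (both purines or both pyrimidines), one simple quantitative reading of similarity:

PURINES, PYRIMIDINES = set("AG"), set("CT")

def identity_and_similarity(a, b):
    """Percent identity and percent similarity for two aligned sequences.
    Gap positions are skipped; identical bases also count as similar."""
    identical = similar = compared = 0
    for x, y in zip(a, b):
        if x == '-' or y == '-':
            continue
        compared += 1
        if x == y:
            identical += 1
        if (x in PURINES and y in PURINES) or (x in PYRIMIDINES and y in PYRIMIDINES):
            similar += 1
    return 100.0 * identical / compared, 100.0 * similar / compared

print(identity_and_similarity("GATTACA", "GACTAC-"))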
This similarity in sequences can then go on to help find common ancestry.

See also
Alignment-free sequence analysis
Cladistics
Generalized tree alignment
Multiple sequence alignment viewers
PANDIT, a biological database covering protein domains
Phylogenetics
Sequence alignment software
Structural alignment

External links
ExPASy sequence alignment tools
Archived Multiple Alignment Resource Page — from the Virtual School of Natural Sciences
Tools for Multiple Alignments — from Pôle Bioinformatique Lyonnais
An entry point to clustal servers and information
An entry point to the main T-Coffee servers
An entry point to the main MergeAlign server and information
European Bioinformatics Institute servers:
ClustalW2 — general purpose multiple sequence alignment program for DNA or proteins.
Muscle — MUltiple Sequence Comparison by Log-Expectation
T-coffee — multiple sequence alignment.
MAFFT — Multiple Alignment using Fast Fourier Transform
KALIGN — a fast and accurate multiple sequence alignment algorithm.
Lecture notes, tutorials, and courses
Multiple sequence alignment lectures — from the Max Planck Institute for Molecular Genetics
Lecture Notes and practical exercises on multiple sequence alignments at the EMBL
Molecular Bioinformatics Lecture Notes
Molecular Evolution and Bioinformatics Lecture Notes
51546259
https://en.wikipedia.org/wiki/C.%20Pandu%20Rangan
C. Pandu Rangan
Chandrasekaran Pandurangan (born September 20, 1955) is a computer scientist and professor in the Computer Science and Engineering Department at the Indian Institute of Technology Madras (IITM). He mainly focuses on the design of pragmatic algorithms, graph theory and cryptography.

Early life
Pandu Rangan was born on September 20, 1955, to S.R. Chandrasekharan in Madras, India. He is married and has two children.

Education
Pandu Rangan completed his B.Sc. at the University of Madras in 1975 and received his M.Sc. from the same university in 1977. He completed his PhD at IISc, Bangalore in 1984.

Research interests
Pandu Rangan has published over two hundred research papers in the following areas of computer science and engineering:
Restricting the problem domain
Approximate algorithm design
Randomized algorithms
Parallel and VLSI algorithms
Applied cryptography
Secure multi-party computation
Game theory and graph theory
Problems of practical interest in graph theory, combinatorics and computational geometry were his main research interests. In cryptology, his current focus is on secure message transmission and the provable security of cryptographic protocols and primitives.

Awards and honours
In 2018, he was appointed Institute Chair Professor at IIT Madras.
Fellow, Indian National Academy of Engineering (2006).
Member of the Board of Directors of the International Association for Cryptologic Research (IACR) (2002-2005).
Member, Board of Directors, Society for Electronics Transaction and Security (SETS) (2005-2007).
Member, Editorial Board, Lecture Notes in Computer Science Series (LNCS Series), Springer-Verlag, Germany (2005-2008).
Member, Editorial Board, Journal of Parallel and Distributed Computing (2005-2008).

Bibliography
K. Srinathan, M. V. N. Ashwin Kumar, C. Pandu Rangan: Asynchronous Secure Communication Tolerating Mixed Adversaries. Advances in Cryptology - ASIACRYPT 2002, 8th International Conference on the Theory and Application of Cryptology and Information Security, Queenstown, New Zealand, 1–5 December 2002: Pages 224-242
K. Srinathan, Arvind Narayanan, C. Pandu Rangan: Optimal Perfectly Secure Message Transmission. Advances in Cryptology - CRYPTO 2004, 24th Annual International Cryptology Conference, Santa Barbara, California, USA, 15–19 August 2004: Pages 545-561
Kannan Srinathan, N. R. Prasad, C. Pandu Rangan: On the Optimal Communication Complexity of Multiphase Protocols for Perfect Communication. 2007 IEEE Symposium on Security and Privacy (S&P 2007), 20–23 May 2007, Oakland, California, USA, 2007: Pages 311-320
Kannan Srinathan, Arpita Patra, Ashish Choudhary, C. Pandu Rangan: Probabilistic Perfectly Reliable and Secure Message Transmission - Possibility, Feasibility and Optimality. Progress in Cryptology - INDOCRYPT 2007, 8th International Conference on Cryptology in India, Chennai, India, 9–13 December 2007: Pages 101-122
S. Sharmila Deva Selvi, S. Sree Vivek, Deepanshu Shukla, C. Pandu Rangan: Efficient and Provably Secure Certificateless Multi-receiver Signcryption. Provable Security, Second International Conference, ProvSec 2008, Shanghai, China, 30 October – 1 November 2008: Pages 52–67
Bhavani Shankar, Prasant Gopal, Kannan Srinathan, C. Pandu Rangan: Unconditionally reliable message transmission in directed networks. Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2008, San Francisco, California, USA, 20–22 January 2008. SIAM: Pages 1048-1055
Arpita Patra, Ashish Choudhary, C. Pandu Rangan: Round Efficient Unconditionally Secure Multiparty Computation Protocol. Progress in Cryptology - INDOCRYPT 2008, 9th International Conference on Cryptology in India, Kharagpur, India, 14–17 December 2008: Pages 185-199
S. Sharmila Deva Selvi, S. Sree Vivek, C. Pandu Rangan: Breaking and Fixing of an Identity Based Multi-Signcryption Scheme. Provable Security, Third International Conference, ProvSec 2009, Guangzhou, China, 11–13 November 2009: Pages 61–75
S. Sharmila Deva Selvi, S. Sree Vivek, J. Shriram, S. Kalaivani, C. Pandu Rangan: Identity Based Aggregate Signcryption Schemes. Progress in Cryptology - INDOCRYPT 2009, 10th International Conference on Cryptology in India, New Delhi, India, 13–16 December 2009: Pages 378-397
Arpita Patra, Ashish Choudhary, C. Pandu Rangan: Round Efficient Unconditionally Secure MPC and Multiparty Set Intersection with Optimal Resilience. Progress in Cryptology - INDOCRYPT 2009, 10th International Conference on Cryptology in India, New Delhi, India, 13–16 December 2009: Pages 398-417
Arpita Patra, Ashish Choudhary, Tal Rabin, C. Pandu Rangan: The Round Complexity of Verifiable Secret Sharing Revisited. Advances in Cryptology - CRYPTO 2009, 29th Annual International Cryptology Conference, Santa Barbara, CA, USA, 16–20 August 2009: Pages 487-504
Ranjit Kumaresan, Arpita Patra, C. Pandu Rangan: The Round Complexity of Verifiable Secret Sharing: The Statistical Case. Advances in Cryptology - ASIACRYPT 2010 - 16th International Conference on the Theory and Application of Cryptology and Information Security, Singapore, 5–9 December 2010: Pages 431-447
S. Sharmila Deva Selvi, S. Sree Vivek, C. Pandu Rangan: Identity Based Public Verifiable Signcryption Scheme. Provable Security - 4th International Conference, ProvSec 2010, Malacca, Malaysia, 13–15 October 2010: Pages 244-260
S. Sharmila Deva Selvi, S. Sree Vivek, Dhinakaran Vinayagamurthy, C. Pandu Rangan: ID Based Signcryption Scheme in Standard Model. Provable Security - 6th International Conference, ProvSec 2012, Chengdu, China, 26–28 September 2012: Pages 35–52
S. Sree Vivek, S. Sharmila Deva Selvi, Layamrudhaa Renganathan Venkatesan, C. Pandu Rangan: Efficient, Pairing-Free, Authenticated Identity Based Key Agreement in a Single Round. Provable Security - 7th International Conference, ProvSec 2013, Melaka, Malaysia, 23–25 October 2013: Pages 38–58
Arpita Patra, Ashish Choudhury, C. Pandu Rangan: Efficient Asynchronous Verifiable Secret Sharing and Multiparty Computation. Journal of Cryptology Volume 28, Number 1, January 2015: Pages 49–109 (2015)
Priyanka Bose, Dipanjan Das, Chandrasekaran Pandu Rangan: Constant Size Ring Signature Without Random Oracle. Information Security and Privacy - 20th Australasian Conference, ACISP 2015, Brisbane, QLD, Australia, 29 June – 1 July 2015: Pages 230-247
Suvradip Chakraborty, Goutam Paul, C. Pandu Rangan: Forward-Secure Authenticated Symmetric Key Exchange Protocol: New Security Model and Secure Construction. Provable Security - 9th International Conference, ProvSec 2015, Kanazawa, Japan, 24–26 November 2015: Pages 149-166
Sree Vivek Sivanandam, S. Sharmila Deva Selvi, Akshayaram Srinivasan, Chandrasekaran Pandu Rangan: Stronger public key encryption system withstanding RAM scraper like attacks. Security and Communication Networks Volume 9, Number 12, January 2016: Pages 1650-1662
Kunwar Singh, C. Pandu Rangan, A. K. Banerjee: Lattice-based identity-based resplittable threshold public key encryption scheme. International Journal of Computer Mathematics, Volume 93, Number 2, 2016: Pages 289-307
K. Srinathan, C. Pandu Rangan, Moti Yung: Progress in Cryptology - INDOCRYPT 2007, 8th International Conference on Cryptology in India, Chennai, India, 9–13 December 2007, Proceedings. Lecture Notes in Computer Science 4859, Springer 2007.
C. Pandu Rangan, Cunsheng Ding: Progress in Cryptology - INDOCRYPT 2001, Second International Conference on Cryptology in India, Chennai, India, 16–20 December 2001, Proceedings. Lecture Notes in Computer Science 2247, Springer 2001.
C. Pandu Rangan, Venkatesh Raman, Ramaswamy Ramanujam: Foundations of Software Technology and Theoretical Computer Science, 19th Conference, Chennai, India, 13–15 December 1999, Proceedings. Lecture Notes in Computer Science 1738, Springer 1999.
Alok Aggarwal, C. Pandu Rangan: Algorithms and Computation, 10th International Symposium, ISAAC '99, Chennai, India, 16–18 December 1999, Proceedings. Lecture Notes in Computer Science 1741, Springer 1999.
41211324
https://en.wikipedia.org/wiki/Global%20surveillance%20disclosures%20%282013%E2%80%93present%29
Global surveillance disclosures (2013–present)
Ongoing news reports in the international media have revealed operational details about the Anglophone cryptographic agencies' global surveillance of both foreign and domestic nationals. The reports mostly emanate from a cache of top secret documents leaked by ex-NSA contractor Edward Snowden, which he obtained whilst working for Booz Allen Hamilton, one of the largest contractors for defense and intelligence in the United States. In addition to a trove of U.S. federal documents, Snowden's cache reportedly contains thousands of Australian, British and Canadian intelligence files that he had accessed via the exclusive "Five Eyes" network. In June 2013, the first of Snowden's documents were published simultaneously by The Washington Post and The Guardian, attracting considerable public attention. The disclosure continued throughout 2013, and a small portion of the estimated full cache of documents was later published by other media outlets worldwide, most notably The New York Times (United States), the Canadian Broadcasting Corporation, the Australian Broadcasting Corporation, Der Spiegel (Germany), O Globo (Brazil), Le Monde (France), L'espresso (Italy), NRC Handelsblad (the Netherlands), Dagbladet (Norway), El País (Spain), and Sveriges Television (Sweden). These media reports have shed light on the implications of several secret treaties signed by members of the UKUSA community in their efforts to implement global surveillance. For example, Der Spiegel revealed how the German Federal Intelligence Service (Bundesnachrichtendienst; BND) transfers "massive amounts of intercepted data to the NSA", while Sveriges Television revealed that the National Defence Radio Establishment (FRA) provided the NSA with data from its cable collection under a secret treaty signed in 1954 for bilateral cooperation on surveillance. Other security and intelligence agencies involved in the practice of global surveillance include those in Australia (ASD), Britain (GCHQ), Canada (CSE), Denmark (PET), France (DGSE), Germany (BND), Italy (AISE), the Netherlands (AIVD), Norway (NIS), Spain (CNI), Switzerland (NDB) and Singapore (SID), as well as Israel (ISNU), which receives raw, unfiltered data on U.S. citizens shared by the NSA. On June 14, 2013, United States prosecutors charged Edward Snowden with espionage and theft of government property. In late July 2013, he was granted one year of temporary asylum by the Russian government, contributing to a deterioration of Russia–United States relations. Towards the end of October 2013, British Prime Minister David Cameron warned The Guardian not to publish any more leaks, or it would receive a DA-Notice. In November 2013, a criminal investigation of the disclosure was being undertaken by Britain's Metropolitan Police Service. In December 2013, The Guardian editor Alan Rusbridger said: "We have published I think 26 documents so far out of the 58,000 we've seen." The extent to which the media reports have responsibly informed the public is disputed. In January 2014, U.S. President Barack Obama said that "the sensational way in which these disclosures have come out has often shed more heat than light", and critics such as Sean Wilentz have noted that many of the released Snowden documents do not concern domestic surveillance. The U.S. and British defense establishments weigh the strategic harm of the disclosures more heavily than their civic benefit to the public. In its first assessment of these disclosures, the Pentagon concluded that Snowden committed the biggest "theft" of U.S.
secrets in the history of the United States. Sir David Omand, a former director of GCHQ, described Snowden's disclosure as the "most catastrophic loss to British intelligence ever".
Background
Barton Gellman, a Pulitzer Prize–winning journalist, led The Washington Post's coverage of Snowden's disclosures. The disclosure revealed specific details of the NSA's close cooperation with U.S. federal agencies such as the Federal Bureau of Investigation (FBI) and the Central Intelligence Agency (CIA), in addition to the agency's previously undisclosed financial payments to numerous commercial partners and telecommunications companies, as well as its previously undisclosed relationships with international partners such as Britain, France and Germany, and its secret treaties with foreign governments, recently established for sharing intercepted data of each other's citizens. The disclosures were made public over the course of several months from June 2013 onwards, by the press in several nations, from the trove leaked by the former NSA contractor Edward J. Snowden, who obtained it while working for Booz Allen Hamilton. George Brandis, the Attorney-General of Australia, asserted that Snowden's disclosure was the "most serious setback for Western intelligence since the Second World War."
Global surveillance
As of 2013, global surveillance programs disclosed include PRISM, XKeyscore, Tempora, MUSCULAR, Dishfire and Stateroom, among others. The NSA was also getting data directly from telecommunications companies code-named Artifice, Lithium, Serenade, SteelKnight, and X. The real identities of the companies behind these code names were not included in the Snowden document dump because they were protected as Exceptionally Controlled Information, which prevents wide circulation even to those (like Snowden) who otherwise have the necessary security clearance.
Disclosures
Although the exact size of Snowden's disclosure remains unknown, the following estimates have been put forward by various government officials:
At least 15,000 Australian intelligence files, according to Australian officials
At least 58,000 British intelligence files, according to British officials
About 1.7 million U.S. intelligence files, according to U.S. Department of Defense talking points
As a contractor of the NSA, Snowden was granted access to U.S. government documents along with top secret documents of several allied governments, via the exclusive Five Eyes network. Snowden claims that he currently does not physically possess any of these documents, having surrendered all copies to journalists he met in Hong Kong. According to his lawyer, Snowden has pledged not to release any documents while in Russia, leaving the responsibility for further disclosures solely to journalists. As of 2014, the following news outlets have accessed some of the documents provided by Snowden: Australian Broadcasting Corporation, Canadian Broadcasting Corporation, Channel 4, Der Spiegel, El País, El Mundo, L'espresso, Le Monde, NBC, NRC Handelsblad, Dagbladet, O Globo, South China Morning Post, Süddeutsche Zeitung, Sveriges Television, The Guardian, The New York Times, and The Washington Post.
Historical context
In the 1970s, NSA analyst Perry Fellwock (under the pseudonym "Winslow Peck") revealed the existence of the UKUSA Agreement, which forms the basis of the ECHELON network, whose existence was revealed in 1988 by Lockheed employee Margaret Newsham.
Months before the September 11 attacks and in their aftermath, further details of the global surveillance apparatus were provided by various individuals, such as the former MI5 official David Shayler and the journalist James Bamford, who were followed by:
NSA employees William Binney and Thomas Andrews Drake, who revealed that the NSA was rapidly expanding its surveillance
GCHQ employee Katharine Gun, who revealed a plot to bug UN delegates shortly before the Iraq War
British Cabinet Minister Clare Short, who revealed in 2004 that the UK had spied on UN Secretary-General Kofi Annan
NSA employee Russ Tice, who triggered the NSA warrantless surveillance controversy after revealing that the Bush Administration had spied on U.S. citizens without court approval
Journalist Leslie Cauley of USA Today, who revealed in 2006 that the NSA was keeping a massive database of Americans' phone calls
AT&T employee Mark Klein, who revealed in 2006 the existence of Room 641A of the NSA
Activists Julian Assange and Chelsea Manning, who revealed in 2011 the existence of the mass surveillance industry
Journalist Michael Hastings, who revealed in 2012 that protesters of the Occupy Wall Street movement were kept under surveillance
In the aftermath of Snowden's revelations, the Pentagon concluded that Snowden committed the biggest theft of U.S. secrets in the history of the United States. In Australia, the coalition government described the leaks as the most damaging blow dealt to Australian intelligence in history. Sir David Omand, a former director of GCHQ, described Snowden's disclosure as the "most catastrophic loss to British intelligence ever".
Timeline
In April 2012, NSA contractor Edward Snowden began downloading documents. That year, Snowden made his first contact with journalist Glenn Greenwald, then employed by The Guardian; he contacted documentary filmmaker Laura Poitras in January 2013.
2013
May
In May 2013, Snowden went on temporary leave from his position at the NSA, citing the pretext of receiving treatment for his epilepsy. Towards the end of May, he traveled to Hong Kong.
June
After the U.S.-based editor of The Guardian, Janine Gibson, held several meetings in New York City, she decided that Greenwald, Poitras and the Guardian's defence and intelligence correspondent Ewen MacAskill would fly to Hong Kong to meet Snowden. On June 5, in the first media report based on the leaked material, The Guardian exposed a top secret court order showing that the NSA had collected phone records from over 120 million Verizon subscribers. Under the order, the numbers of both parties on a call, as well as the location data, unique identifiers, time of call, and duration of call, were handed over to the FBI, which turned the records over to the NSA. According to The Wall Street Journal, the Verizon order is part of a controversial data program which seeks to stockpile records on all calls made in the U.S., but which does not collect information directly from T-Mobile US and Verizon Wireless, in part because of their foreign ownership ties. On June 6, 2013, the second media disclosure, the revelation of the PRISM surveillance program (which collects the e-mail, voice, text and video chats of foreigners and an unknown number of Americans from Microsoft, Google, Facebook, Yahoo, Apple and other tech giants), was published simultaneously by The Guardian and The Washington Post.
Der Spiegel revealed NSA spying on multiple diplomatic missions of the European Union and the United Nations Headquarters in New York. During specific episodes within a four-year period, the NSA hacked several Chinese mobile-phone companies, the Chinese University of Hong Kong and Tsinghua University in Beijing, and the Asian fiber-optic network operator Pacnet. Only Australia, Canada, New Zealand and the UK are explicitly exempted from NSA attacks; the agency's main target within the European Union is Germany. A method of bugging encrypted fax machines used at an EU embassy is codenamed Dropmire. During the 2009 G-20 London summit, the British intelligence agency Government Communications Headquarters (GCHQ) intercepted the communications of foreign diplomats. In addition, GCHQ has been intercepting and storing mass quantities of fiber-optic traffic via Tempora. Two principal components of Tempora are called "Mastering the Internet" (MTI) and "Global Telecoms Exploitation". The data is preserved for three days, while metadata is kept for thirty days. Data collected by GCHQ under Tempora is shared with the National Security Agency (NSA) of the United States. From 2001 to 2011, the NSA collected vast amounts of metadata records detailing the email and internet usage of Americans via Stellar Wind, which was later terminated due to operational and resource constraints. It was subsequently replaced by newer surveillance programs such as ShellTrumpet, which "processed its one trillionth metadata record" by the end of December 2012. The NSA follows specific procedures to target non-U.S. persons and to minimize data collection from U.S. persons. These court-approved policies allow the NSA to: keep data that could potentially contain details of U.S. persons for up to five years; retain and make use of "inadvertently acquired" domestic communications if they contain usable intelligence, information on criminal activity, threat of harm to people or property, are encrypted, or are believed to contain any information relevant to cybersecurity; preserve "foreign intelligence information" contained within attorney–client communications; and access the content of communications gathered from "U.S. based machine[s]" or phone numbers in order to establish whether targets are located in the U.S., for the purposes of ceasing further surveillance. According to Boundless Informant, over 97 billion pieces of intelligence were collected over a 30-day period ending in March 2013. Of those 97 billion sets of information, about 3 billion data sets originated from U.S. computer networks and around 500 million metadata records were collected from German networks. In August 2013, it was revealed that the Bundesnachrichtendienst (BND) of Germany transfers massive amounts of metadata records to the NSA. Der Spiegel disclosed that, of all 27 member states of the European Union, Germany is the most targeted, owing to the NSA's systematic monitoring and storage of Germany's telephone and Internet connection data. According to the magazine, the NSA stores data from around half a billion communications connections in Germany each month. This data includes telephone calls, emails, mobile-phone text messages and chat transcripts.
July
The NSA gained massive amounts of information captured from monitored data traffic in Europe. For example, in December 2012 the NSA gathered, on an average day, metadata from some 15 million telephone connections and 10 million Internet datasets.
The NSA also monitored the European Commission in Brussels, and monitored EU diplomatic facilities in Washington and at the United Nations by placing bugs in offices as well as by infiltrating computer networks. As part of its UPSTREAM data collection program, the U.S. government made deals with companies to ensure that it had access to, and hence the capability to surveil, the undersea fiber-optic cables that deliver e-mails, Web pages, other electronic communications and phone calls from one continent to another at the speed of light. According to the Brazilian newspaper O Globo, the NSA spied on millions of emails and calls of Brazilian citizens, while Australia and New Zealand have been involved in the joint operation of the NSA's global analytical system XKeyscore. Among the numerous allied facilities contributing to XKeyscore are four installations in Australia and one in New Zealand:
Pine Gap near Alice Springs, Australia, which is partly operated by the U.S. Central Intelligence Agency (CIA)
The Shoal Bay Receiving Station near Darwin, Australia, operated by the Australian Signals Directorate (ASD)
The Australian Defence Satellite Communications Station near Geraldton, Australia, operated by the ASD
HMAS Harman outside Canberra, Australia, operated by the ASD
Waihopai Station near Blenheim, New Zealand, operated by New Zealand's Government Communications Security Bureau (GCSB)
O Globo released an NSA document titled "Primary FORNSAT Collection Operations", which revealed the specific locations and codenames of the FORNSAT intercept stations in 2002. According to Edward Snowden, the NSA has established secret intelligence partnerships with many Western governments. The Foreign Affairs Directorate (FAD) of the NSA is responsible for these partnerships, which, according to Snowden, are organized such that foreign governments can "insulate their political leaders" from public outrage in the event that these global surveillance partnerships are leaked. In an interview published by Der Spiegel, Snowden accused the NSA of being "in bed together with the Germans". The NSA granted the German intelligence agencies BND (foreign intelligence) and BfV (domestic intelligence) access to its controversial XKeyscore system. In return, the BND turned over copies of two systems named Mira4 and Veras, reported to exceed the NSA's SIGINT capabilities in certain areas. Every day, massive amounts of metadata records are collected by the BND and transferred to the NSA via the Bad Aibling Station near Munich, Germany. In December 2012 alone, the BND handed over 500 million metadata records to the NSA. In a document dated January 2013, the NSA acknowledged the efforts of the BND to undermine privacy laws. According to an NSA document dated April 2013, Germany has now become the NSA's "most prolific partner", and under a section of a separate document leaked by Snowden, titled "Success Stories", the NSA acknowledged the efforts of the German government to expand the BND's international data sharing with partners. In addition, the German government was well aware of the PRISM surveillance program long before Edward Snowden made details public. According to Angela Merkel's spokesman Steffen Seibert, there are two separate PRISM programs: one is used by the NSA and the other is used by NATO forces in Afghanistan. The two programs are "not identical".
The Guardian revealed further details of the NSA's XKeyscore tool, which allows government analysts to search through vast databases containing the emails, online chats and browsing histories of millions of individuals, without prior authorization. Microsoft "developed a surveillance capability to deal" with the interception of encrypted chats on Outlook.com within five months after the service went into testing. The NSA had access to Outlook.com emails because "Prism collects this data prior to encryption." In addition, Microsoft worked with the FBI to enable the NSA to gain access to its cloud storage service SkyDrive. An internal NSA document dated August 3, 2012, described the PRISM surveillance program as a "team sport". Even if there is no reason to suspect U.S. citizens of wrongdoing, the National Counterterrorism Center is allowed to examine federal government files for possible criminal behavior. Previously, the center was barred from doing so unless a person was a terror suspect or related to an investigation. Snowden also confirmed that Stuxnet was cooperatively developed by the United States and Israel. In a report unrelated to Edward Snowden, the French newspaper Le Monde revealed that France's DGSE was also undertaking mass surveillance, which it described as "illegal and outside any serious control".
August
Documents leaked by Edward Snowden and seen by Süddeutsche Zeitung (SZ) and Norddeutscher Rundfunk revealed that several telecom operators have played a key role in helping the British intelligence agency Government Communications Headquarters (GCHQ) tap into worldwide fiber-optic communications. The telecom operators are:
Verizon Business (codenamed "Dacron")
BT (codenamed "Remedy")
Vodafone Cable (codenamed "Gerontic")
Global Crossing (codenamed "Pinnage")
Level 3 (codenamed "Little")
Viatel (codenamed "Vitreous")
Interoute (codenamed "Streetcar")
Each of them was assigned a particular area of the international fiber-optic network for which it was individually responsible. The following networks have been infiltrated by GCHQ: TAT-14 (EU-UK-US), Atlantic Crossing 1 (EU-UK-US), Circe South (France-UK), Circe North (Netherlands-UK), Flag Atlantic-1, Flag Europa-Asia, SEA-ME-WE 3 (Southeast Asia-Middle East-Western Europe), SEA-ME-WE 4 (Southeast Asia-Middle East-Western Europe), Solas (Ireland-UK), UK-France 3, UK-Netherlands 14, ULYSSES (EU-UK), Yellow (UK-US) and Pan European Crossing (EU-UK). The telecommunication companies that participated were "forced" to do so and had "no choice in the matter". Some of the companies were subsequently paid by GCHQ for their participation in the infiltration of the cables. According to the SZ, GCHQ has access to the majority of internet and telephone communications flowing throughout Europe: it can listen to phone calls, read emails and text messages, and see which websites internet users from all around the world are visiting. It can also retain and analyse nearly the entire European internet traffic. GCHQ is collecting all data transmitted to and from the United Kingdom and Northern Europe via the undersea fibre-optic telecommunications cable SEA-ME-WE 3. The Security and Intelligence Division (SID) of Singapore co-operates with Australia in accessing and sharing communications carried by the SEA-ME-WE 3 cable.
The Australian Signals Directorate (ASD) is also in a partnership with British, American and Singaporean intelligence agencies to tap undersea fibre-optic telecommunications cables that link Asia, the Middle East and Europe and carry much of Australia's international phone and internet traffic. The U.S. runs a top-secret surveillance program known as the Special Collection Service (SCS), which is based in over 80 U.S. consulates and embassies worldwide. The NSA hacked the United Nations' video conferencing system in the summer of 2012, in violation of a UN agreement. The NSA is not just intercepting the communications of Americans who are in direct contact with foreigners targeted overseas; it is also searching the contents of vast amounts of e-mail and text communications into and out of the country by Americans who mention information about foreigners under surveillance. It also spied on Al Jazeera and gained access to its internal communications systems. The NSA has built a surveillance network that has the capacity to reach roughly 75% of all U.S. Internet traffic. U.S. law-enforcement agencies use tools employed by computer hackers to gather information on suspects. An internal NSA audit from May 2012 identified 2,776 incidents, i.e. violations of the rules or court orders for surveillance of Americans and foreign targets in the U.S., in the period from April 2011 through March 2012, though U.S. officials stressed that any mistakes were not intentional. The FISA Court, which is supposed to provide critical oversight of the U.S. government's vast spying programs, has limited ability to do so and must trust the government to report when it improperly spies on Americans. A legal opinion declassified on August 21, 2013, revealed that the NSA had intercepted, for three years, as many as 56,000 electronic communications a year of Americans not suspected of having links to terrorism, before the FISA court that oversees surveillance found the operation unconstitutional in 2011. Under the Corporate Partner Access project, major U.S. telecommunications providers receive hundreds of millions of dollars each year from the NSA. Voluntary cooperation between the NSA and the providers of global communications took off during the 1970s under the cover name BLARNEY. A letter drafted by the Obama administration, specifically to inform Congress of the government's mass collection of Americans' telephone communications data, was withheld from lawmakers by leaders of the House Intelligence Committee in the months before a key vote affecting the future of the program. The NSA paid GCHQ over £100 million between 2009 and 2012; in exchange for these funds, GCHQ "must pull its weight and be seen to pull its weight." Documents referenced in the article explain that the weaker British laws regarding spying are "a selling point" for the NSA. GCHQ is also developing the technology to "exploit any mobile phone at any time." Under a legal authority, the NSA has a secret backdoor into its databases gathered from large Internet companies, enabling it to search for U.S. citizens' email and phone calls without a warrant. The Privacy and Civil Liberties Oversight Board urged U.S. intelligence chiefs to draft stronger surveillance guidelines on domestic spying, after finding that several of those guidelines had not been updated in up to 30 years. U.S.
intelligence analysts have deliberately broken rules designed to prevent them from spying on Americans, choosing to ignore so-called "minimisation procedures" aimed at protecting privacy, and have used the agency's enormous eavesdropping power to spy on love interests. After the U.S. Foreign Intelligence Surveillance Court ruled in October 2011 that some of the NSA's activities were unconstitutional, the agency paid millions of dollars to major internet companies to cover the extra costs incurred in their involvement with the PRISM surveillance program. "Mastering the Internet" (MTI) is part of the Interception Modernisation Programme (IMP) of the British government, which involves the insertion of thousands of DPI (deep packet inspection) "black boxes" at various internet service providers, as revealed by the British media in 2009. In 2013, it was further revealed that the NSA had made a £17.2 million financial contribution to the project, which is capable of vacuuming up signals from up to 200 fibre-optic cables at all physical points of entry into Great Britain.
September
The Guardian and The New York Times reported on secret documents leaked by Snowden showing that the NSA has been in "collaboration with technology companies" as part of "an aggressive, multipronged effort" to weaken the encryption used in commercial software, and that GCHQ has a team dedicated to cracking "Hotmail, Google, Yahoo and Facebook" traffic. Germany's domestic security agency, the Bundesverfassungsschutz (BfV), systematically transfers the personal data of German residents to the NSA, CIA and seven other members of the United States Intelligence Community, in exchange for information and espionage software. Israel, Sweden and Italy are also cooperating with American and British intelligence agencies. Under a secret treaty codenamed "Lustre", French intelligence agencies transferred millions of metadata records to the NSA. The Obama administration secretly won permission from the Foreign Intelligence Surveillance Court in 2011 to reverse restrictions on the National Security Agency's use of intercepted phone calls and e-mails, permitting the agency to search deliberately for Americans' communications in its massive databases. The searches take place under a surveillance program Congress authorized in 2008 under Section 702 of the Foreign Intelligence Surveillance Act. Under that law, the target must be a foreigner "reasonably believed" to be outside the United States, and the court must approve the targeting procedures in an order good for one year. A warrant for each target would thus no longer be required, meaning that communications with Americans could be picked up without a court first determining that there is probable cause that the people they were talking to were terrorists, spies or "foreign powers". The FISC extended the length of time that the NSA is allowed to retain intercepted U.S. communications from five years to six years, with an extension possible for foreign intelligence or counterintelligence purposes. Both measures were taken without public debate or any specific authority from Congress. A special branch of the NSA called "Follow the Money" (FTM) monitors international payments, banking and credit card transactions, and later stores the collected data in the NSA's own financial databank, "Tracfin". The NSA monitored the communications of Brazil's president Dilma Rousseff and her top aides.
The agency also spied on Brazil's oil firm Petrobras, as well as on French diplomats, and gained access to the private network of the Ministry of Foreign Affairs of France and to the SWIFT network. In the United States, the NSA uses the analysis of the phone call and e-mail logs of American citizens to create sophisticated graphs of their social connections that can identify their associates, their locations at certain times, their traveling companions and other personal information. The NSA routinely shares raw intelligence data with Israel without first sifting it to remove information about U.S. citizens. In an effort codenamed GENIE, computer specialists can control foreign computer networks using "covert implants", a form of remotely transmitted malware installed on tens of thousands of devices annually. As worldwide sales of smartphones began exceeding those of feature phones, the NSA decided to take advantage of the smartphone boom. This is particularly advantageous because the smartphone combines a myriad of data that would interest an intelligence agency, such as social contacts, user behavior, interests, location, photos, credit card numbers and passwords. An internal NSA report from 2010 stated that the spread of the smartphone has been occurring "extremely rapidly", a development that would "certainly complicate traditional target analysis". According to the document, the NSA has set up task forces assigned to several smartphone manufacturers and operating systems, including Apple Inc.'s iPhone and iOS operating system, as well as Google's Android mobile operating system. Similarly, Britain's GCHQ assigned a team to study and crack the BlackBerry. Under the heading "iPhone capability", the document notes that there are smaller NSA programs, known as "scripts", that can perform surveillance on 38 different features of the iOS 3 and iOS 4 operating systems. These include the mapping feature, voicemail and photos, as well as Google Earth, Facebook and Yahoo! Messenger. On September 9, 2013, an internal NSA presentation on iPhone Location Services was published by Der Spiegel. One slide shows scenes from Apple's 1984-themed television commercial alongside the words "Who knew in 1984..."; another shows Steve Jobs holding an iPhone, with the text "...that this would be big brother..."; and a third shows happy consumers with their iPhones, completing the question with "...and the zombies would be paying customers?"
October
On October 4, 2013, The Washington Post and The Guardian jointly reported that the NSA and GCHQ had made repeated attempts to spy on anonymous Internet users who have been communicating in secret via the anonymity network Tor. Several of these surveillance operations involved the implantation of malicious code into the computers of Tor users who visit particular websites. The NSA and GCHQ had partly succeeded in blocking access to the anonymous network, diverting Tor users to insecure channels. The government agencies were also able to uncover the identity of some anonymous Internet users. The Communications Security Establishment (CSE) has been using a program called Olympia to map the communications of Brazil's Mines and Energy Ministry, targeting the metadata of phone calls and emails to and from the ministry. The Australian Federal Government knew about the PRISM surveillance program months before Edward Snowden made details public. The NSA gathered hundreds of millions of contact lists from personal e-mail and instant messaging accounts around the world.
The agency did not target individuals. Instead, it collected contact lists in large numbers that amount to a sizable fraction of the world's e-mail and instant messaging accounts. Analysis of that data enables the agency to search for hidden connections and to map relationships within a much smaller universe of foreign intelligence targets. The NSA monitored the public email account of former Mexican president Felipe Calderón (thus gaining access to the communications of high-ranking cabinet members), the emails of several high-ranking members of Mexico's security forces, and the text and mobile-phone communications of current Mexican president Enrique Peña Nieto. The NSA tries to gather cellular and landline phone numbers, often obtained from American diplomats, for as many foreign officials as possible. The contents of the phone calls are stored in computer databases that can regularly be searched using keywords. The NSA has been monitoring the telephone conversations of 35 world leaders. The U.S. government's first public acknowledgment that it tapped the phones of world leaders was reported on October 28, 2013, by The Wall Street Journal, after an internal U.S. government review turned up NSA monitoring of some 35 world leaders. GCHQ has tried to keep its mass surveillance program a secret because it feared a "damaging public debate" on the scale of its activities, which could lead to legal challenges against them. The Guardian revealed that the NSA had been monitoring telephone conversations of 35 world leaders after being given the numbers by an official in another U.S. government department. A confidential memo revealed that the NSA encouraged senior officials in such departments as the White House, State and the Pentagon to share their "Rolodexes" so the agency could add the telephone numbers of leading foreign politicians to its surveillance systems. Reacting to the news, German Chancellor Angela Merkel, arriving in Brussels for an EU summit, accused the U.S. of a breach of trust, saying: "We need to have trust in our allies and partners, and this must now be established once again. I repeat that spying among friends is not at all acceptable against anyone, and that goes for every citizen in Germany." In 2010 the NSA collected data on ordinary Americans' cellphone locations, but later discontinued the program because it had no "operational value". Under Britain's MUSCULAR programme, the NSA and GCHQ have secretly broken into the main communications links that connect Yahoo and Google data centers around the world, thereby gaining the ability to collect metadata and content at will from hundreds of millions of user accounts. The mobile phone of German Chancellor Angela Merkel might have been tapped by U.S. intelligence. According to Der Spiegel, this monitoring goes back to 2002 and ended in the summer of 2013, while The New York Times reported that Germany has evidence that the NSA's surveillance of Merkel began during George W. Bush's tenure. After learning from Der Spiegel that the NSA had been listening in on her personal mobile phone, Merkel compared the snooping practices of the NSA with those of the Stasi. In March 2014, Der Spiegel reported that Merkel had also been placed on an NSA surveillance list alongside 122 other world leaders. On October 31, 2013, Hans-Christian Ströbele, a member of the German Bundestag, met Snowden in Moscow and revealed the former intelligence contractor's readiness to brief the German government on NSA spying.
A highly sensitive signals intelligence collection program known as Stateroom involves the interception of radio, telecommunications and internet traffic. It is operated out of the diplomatic missions of the Five Eyes (Australia, Britain, Canada, New Zealand, the United States) in numerous locations around the world. The program conducted at U.S. diplomatic missions is run jointly by the U.S. intelligence agencies NSA and CIA through a joint venture group called the "Special Collection Service" (SCS), whose members work undercover in shielded areas of American embassies and consulates, where they are officially accredited as diplomats and as such enjoy special privileges. Under diplomatic protection, they are able to look and listen unhindered. The SCS, for example, used the American Embassy near the Brandenburg Gate in Berlin to monitor communications in Germany's government district, home to the parliament and the seat of the government. Under the Stateroom surveillance programme, Australia operates clandestine surveillance facilities to intercept phone calls and data across much of Asia. In France, the NSA targeted people belonging to the worlds of business, politics or French state administration. The NSA monitored and recorded the content of telephone communications and the connection history of each target, i.e. the metadata. The actual surveillance operation was performed by French intelligence agencies on behalf of the NSA. The cooperation between France and the NSA was confirmed by the Director of the NSA, Keith B. Alexander, who asserted that foreign intelligence services collected phone records in "war zones" and "other areas outside their borders" and provided them to the NSA. The French newspaper Le Monde also disclosed new PRISM and Upstream slides (see pages 4, 7 and 8) from the "PRISM/US-984XN Overview" presentation. In Spain, the NSA intercepted the telephone conversations, text messages and emails of millions of Spaniards, and spied on members of the Spanish government. Between December 10, 2012, and January 8, 2013, the NSA collected metadata on 60 million telephone calls in Spain. According to documents leaked by Snowden, the surveillance of Spanish citizens was jointly conducted by the NSA and the intelligence agencies of Spain.
November
The New York Times reported that the NSA carries out an eavesdropping effort, dubbed Operation Dreadnought, against the Iranian leader Ayatollah Ali Khamenei. During his 2009 visit to Iranian Kurdistan, the agency collaborated with GCHQ and the U.S. National Geospatial-Intelligence Agency, collecting radio transmissions between aircraft and airports, examining Khamenei's convoy with satellite imagery, and enumerating military radar stations. According to the story, an objective of the operation is "communications fingerprinting": the ability to distinguish Khamenei's communications from those of other people in Iran. The same story revealed an operation code-named Ironavenger, in which the NSA intercepted e-mails sent between a country allied with the United States and the government of "an adversary". The ally was conducting a spear-phishing attack: its e-mails contained malware. The NSA gathered documents and login credentials belonging to the enemy country, along with knowledge of the ally's capabilities for attacking computers.
According to the British newspaper The Independent, the British intelligence agency GCHQ maintains a listening post on the roof of the British Embassy in Berlin that is capable of intercepting mobile phone calls, wi-fi data and long-distance communications all over the German capital, including adjacent government buildings such as the Reichstag (seat of the German parliament) and the Chancellery (seat of Germany's head of government) clustered around the Brandenburg Gate. Operating under the code name "Quantum Insert", GCHQ set up a fake website masquerading as LinkedIn, a social website used for professional networking, as part of its efforts to install surveillance software on the computers of the telecommunications operator Belgacom. In addition, the headquarters of the oil cartel OPEC were infiltrated by GCHQ as well as the NSA, which bugged the computers of nine OPEC employees and monitored the General Secretary of OPEC. For more than three years, GCHQ has been using an automated monitoring system code-named "Royal Concierge" to infiltrate the reservation systems of at least 350 prestigious hotels in many different parts of the world, in order to target, search and analyze reservations to detect diplomats and government officials. First tested in 2010, the aim of "Royal Concierge" is to track down the travel plans of diplomats, and it is often supplemented with surveillance methods related to human intelligence (HUMINT). Other covert operations include the wiretapping of room telephones and fax machines used in targeted hotels, as well as the monitoring of computers hooked up to the hotel network. In November 2013, the Australian Broadcasting Corporation and The Guardian revealed that the Australian Signals Directorate (then known as the Defence Signals Directorate, DSD) had attempted to listen to the private phone calls of the president of Indonesia and his wife. The Indonesian foreign minister, Marty Natalegawa, confirmed that he and the president had contacted the ambassador in Canberra. Natalegawa said any tapping of Indonesian politicians' personal phones "violates every single decent and legal instrument I can think of—national in Indonesia, national in Australia, international as well". Other high-ranking Indonesian politicians targeted by the DSD include:
Boediono (Vice President)
Jusuf Kalla (former Vice President)
Dino Patti Djalal (Ambassador to the United States)
Andi Mallarangeng (government spokesperson)
Hatta Rajasa (State Secretary)
Sri Mulyani Indrawati (former Finance Minister and current managing director of the World Bank)
Widodo Adi Sutjipto (former Commander-in-Chief of the military)
Sofyan Djalil (senior government advisor)
Carrying the title "3G impact and update", a classified presentation leaked by Snowden revealed the attempts of the ASD/DSD to keep pace with the rollout of 3G technology in Indonesia and across Southeast Asia. The ASD/DSD motto placed at the bottom of each page reads: "Reveal their secrets—protect our own." Under a secret deal approved by British intelligence officials, the NSA has been storing and analyzing the internet and email records of British citizens since 2007. The NSA also proposed in 2005 a procedure for spying on the citizens of the UK and the other Five Eyes nations, even where the partner government has explicitly denied the U.S. permission to do so. Under the proposal, partner countries would be informed neither about this particular type of surveillance nor about the procedure for carrying it out.
Towards the end of November, The New York Times released an internal NSA report outlining the agency's efforts to expand its surveillance abilities. The five-page document asserts that the law of the United States has not kept up with the needs of the NSA to conduct mass surveillance in the "golden age" of signals intelligence, but contends that there are grounds for optimism. The report, titled "SIGINT Strategy 2012–2016", also said that the U.S. will try to influence the "global commercial encryption market" through "commercial relationships", and emphasized the need to "revolutionize" the analysis of its vast data collection to "radically increase operational impact". On November 23, 2013, the Dutch newspaper NRC Handelsblad reported that the Netherlands was targeted by U.S. intelligence agencies in the immediate aftermath of World War II. This period of surveillance lasted from 1946 to 1968, and also included the interception of the communications of other European countries, including Belgium, France, West Germany and Norway. The Dutch newspaper also reported that the NSA has infected more than 50,000 computer networks worldwide, often covertly and sometimes in cooperation with local authorities, with malicious spy software designed to steal sensitive information.
December
According to classified documents leaked by Snowden, the Australian Signals Directorate (ASD), formerly known as the Defence Signals Directorate, had offered to share intelligence information it had collected with the other intelligence agencies of the UKUSA Agreement. Data shared with foreign countries included "bulk, unselected, unminimized metadata" the ASD had collected. The ASD provided such information on the condition that no Australian citizens were targeted. At the time, the ASD assessed that "unintentional collection [of metadata of Australian nationals] is not viewed as a significant issue". If a target was later identified as an Australian national, the ASD was required to be contacted to ensure that a warrant could be sought. Consideration was given as to whether "medical, legal or religious information" would automatically be treated differently from other types of data; however, a decision was made that each agency would make such determinations on a case-by-case basis. The leaked material does not specify where the ASD had collected the intelligence information from; however, Section 7(a) of the Intelligence Services Act 2001 (Commonwealth) states that the ASD's role is "...to obtain intelligence about the capabilities, intentions or activities of people or organizations outside Australia...". As such, it is possible that the ASD's metadata intelligence holdings were focused on foreign intelligence collection and were within the bounds of Australian law. The Washington Post revealed that the NSA has been tracking the locations of mobile phones from all over the world by tapping into the cables that connect mobile networks globally and that serve U.S. cellphones as well as foreign ones. In the process, the NSA collects more than five billion records of phone locations on a daily basis. This enables NSA analysts to map cellphone owners' relationships by correlating their patterns of movement over time with those of the thousands or millions of other phone users who cross their paths.
The Washington Post also reported that both GCHQ and the NSA make use of location data and advertising tracking files generated through normal internet browsing (via cookies operated by Google, known as "Pref") to pinpoint targets for government hacking and to bolster surveillance. The Norwegian Intelligence Service (NIS), which cooperates with the NSA, has gained access to Russian targets on the Kola Peninsula and other civilian targets. In general, the NIS provides information to the NSA about "Politicians", "Energy" and "Armament". A top secret NSA memo lists the following years as milestones of the Norway–United States of America SIGINT agreement, or NORUS Agreement:
1952 – informal starting year of cooperation between the NIS and the NSA
1954 – formalization of the agreement
1963 – extension of the agreement for coverage of foreign instrumentation signals intelligence (FISINT)
1970 – extension of the agreement for coverage of electronic intelligence (ELINT)
1994 – extension of the agreement for coverage of communications intelligence (COMINT)
The NSA considers the NIS to be one of its most reliable partners. Both agencies also cooperate to crack the encryption systems of mutual targets. According to the NSA, Norway has made no objections to its requests. On December 5, Sveriges Television reported that the National Defence Radio Establishment (FRA) has been conducting a clandestine surveillance operation in Sweden targeting the internal politics of Russia. The operation was conducted on behalf of the NSA, which received data handed over to it by the FRA. The Swedish-American surveillance operation also targeted Russian energy interests, as well as the Baltic states. As part of the UKUSA Agreement, a secret treaty was signed in 1954 by Sweden with the United States, the United Kingdom, Canada, Australia and New Zealand regarding collaboration and intelligence sharing. As a result of Snowden's disclosures, the notion of Swedish neutrality in international politics was called into question. In an internal document dating from 2006, the NSA acknowledged that its "relationship" with Sweden is "protected at the TOP SECRET level because of that nation's political neutrality." Specific details of Sweden's cooperation with members of the UKUSA Agreement include:
The FRA has been granted access to XKeyscore, an analytical database of the NSA.
Sweden updated the NSA on changes in Swedish legislation that provided the legal framework for information sharing between the FRA and the Swedish Security Service.
Since January 2013, a counterterrorism analyst of the NSA has been stationed in the Swedish capital of Stockholm.
The NSA, GCHQ and the FRA signed an agreement in 2004 that allows the FRA to collaborate directly with the NSA without having to consult GCHQ.
About five years later, the Riksdag passed a controversial legislative change, briefly allowing the FRA to monitor both wireless and cable-bound signals passing the Swedish border without a court order, while also introducing several provisions designed to protect the privacy of individuals, according to the original proposal. This legislation was amended 11 months later, in an effort to strengthen the protection of privacy by making court orders a requirement and by imposing several limits on the intelligence-gathering.
According to documents leaked by Snowden, the Special Source Operations division of the NSA has been sharing information containing "logins, cookies, and GooglePREFID" with the Tailored Access Operations division of the NSA, as well as with Britain's GCHQ. During the 2010 G-20 Toronto summit, the U.S. embassy in Ottawa was transformed into a security command post during a six-day spying operation that was conducted by the NSA and closely coordinated with the Communications Security Establishment Canada (CSEC). The goal of the spying operation was, among other things, to obtain information on international development and banking reform, and to counter trade protectionism, in support of "U.S. policy goals". On behalf of the NSA, the CSEC has set up covert spying posts in 20 countries around the world. In Italy, the Special Collection Service of the NSA maintains two separate surveillance posts, in Rome and Milan. According to a secret NSA memo dated September 2010, the Italian embassy in Washington, D.C. has been targeted by two NSA spy operations:
Under the codename "Bruneau", which refers to the mission "Lifesaver", the NSA sucks out all the information stored in the embassy's computers and creates electronic images of hard disk drives.
Under the codename "Hemlock", which refers to the mission "Highlands", the NSA gains access to the embassy's communications through physical "implants".
Due to concerns that terrorist or criminal networks may be secretly communicating via computer games, the NSA, GCHQ, CIA and FBI have been conducting surveillance and scooping up data from the networks of many online games, including massively multiplayer online role-playing games (MMORPGs) such as World of Warcraft, virtual worlds such as Second Life, and the Xbox gaming console. The NSA has cracked the most commonly used cellphone encryption technology, A5/1. According to a classified document leaked by Snowden, the agency can "process encrypted A5/1" even when it has not acquired an encryption key. In addition, the NSA uses various types of cellphone infrastructure, such as the links between carrier networks, to determine the location of a cellphone user tracked by Visitor Location Registers. Richard Leon, a U.S. district court judge for the District of Columbia, declared on December 16, 2013, that the mass collection of the metadata of Americans' telephone records by the National Security Agency probably violates the Fourth Amendment's prohibition of unreasonable searches and seizures. Leon granted a request for a preliminary injunction blocking the collection of phone data for two private plaintiffs (Larry Klayman, a conservative lawyer, and Charles Strange, father of a cryptologist killed in Afghanistan when his helicopter was shot down in 2011) and ordered the government to destroy any of their records that had been gathered. But the judge stayed action on his ruling pending a government appeal, recognizing in his 68-page opinion the "significant national security interests at stake in this case and the novelty of the constitutional issues". However, federal judge William H. Pauley III in New York City ruled that the U.S. government's global telephone data-gathering system is needed to thwart potential terrorist attacks, and that it can only work if everyone's calls are swept in. Judge Pauley also ruled that Congress legally set up the program and that it does not violate anyone's constitutional rights.
The judge also concluded that the telephone data being swept up by the NSA did not belong to telephone users, but to the telephone companies. He further ruled that when the NSA obtains such data from the telephone companies, and then probes into it to find links between callers and potential terrorists, this further use of the data was not even a search under the Fourth Amendment. He also concluded that the controlling precedent is Smith v. Maryland: "Smith's bedrock holding is that an individual has no legitimate expectation of privacy in information provided to third parties," Judge Pauley wrote. The American Civil Liberties Union declared on January 2, 2014, that it would appeal Judge Pauley's ruling that the NSA's bulk phone record collection is legal. "The government has a legitimate interest in tracking the associations of suspected terrorists, but tracking those associations does not require the government to subject every citizen to permanent surveillance," deputy ACLU legal director Jameel Jaffer said in a statement. In recent years, American and British intelligence agencies conducted surveillance on more than 1,100 targets, including the office of an Israeli prime minister, heads of international aid organizations, foreign energy companies, and a European Union official involved in antitrust battles with American technology businesses. A catalog of high-tech gadgets and software developed by the NSA's Tailored Access Operations (TAO) was leaked by the German news magazine Der Spiegel. Dating from 2008, the catalog revealed the existence of special gadgets modified to capture computer screenshots, USB flash drives secretly fitted with radio transmitters to broadcast stolen data over the airwaves, and fake base stations intended to intercept mobile phone signals, as well as many other secret devices and software implants. The Tailored Access Operations division also intercepted the shipping deliveries of computers and laptops in order to install spyware and physical implants on electronic gadgets. This was done in close cooperation with the FBI and the CIA. NSA officials responded to the Spiegel reports with a statement, which said: "Tailored Access Operations is a unique national asset that is on the front lines of enabling NSA to defend the nation and its allies. [TAO's] work is centred on computer network exploitation in support of foreign intelligence collection." In a separate disclosure unrelated to Snowden, the French Trésor public, which runs a certificate authority, was found to have issued fake certificates impersonating Google in order to facilitate spying on French government employees via man-in-the-middle attacks.
2014
January
The NSA is working to build a powerful quantum computer capable of breaking all types of encryption. The effort is part of a US$79.7 million research program known as "Penetrating Hard Targets". It involves extensive research carried out in large, shielded rooms known as Faraday cages, which are designed to prevent electromagnetic radiation from entering or leaving. Currently, the NSA is close to producing basic building blocks that will allow the agency to gain "complete quantum control on two semiconductor qubits". Once a quantum computer is successfully built, it would enable the NSA to unlock the encryption that protects data held by banks, credit card companies, retailers, brokerages, governments and health care providers.
According to The New York Times, the NSA is monitoring approximately 100,000 computers worldwide with spy software named Quantum. Quantum enables the NSA to conduct surveillance on those computers on the one hand, and can also create a digital highway for launching cyberattacks on the other. Among the targets are the Chinese and Russian militaries, but also trade institutions within the European Union. The NYT also reported that the NSA can access and alter computers that are not connected to the internet, using a secret technology in use by the NSA since 2008. The prerequisite is the physical insertion of radio frequency hardware by a spy, a manufacturer or an unwitting user. The technology relies on a covert channel of radio waves that can be transmitted from tiny circuit boards and USB cards inserted surreptitiously into the computers. In some cases, the signals are sent to a briefcase-size relay station that intelligence agencies can set up miles away from the target. The technology can also transmit malware back to the infected computer. Channel 4 and The Guardian revealed the existence of Dishfire, a massive NSA database that collects hundreds of millions of text messages on a daily basis. GCHQ has been given full access to the database, which it uses to obtain personal information about Britons by exploiting a legal loophole. Each day, the database receives and stores the following amounts of data:
Geolocation data from more than 76,000 text messages and other travel information
Over 110,000 names, gathered from electronic business cards
Over 800,000 financial transactions, either gathered from text-to-text payments or from linking credit cards to phone users
Details of 1.6 million border crossings, based on the interception of network roaming alerts
Over 5 million missed-call alerts
About 200 million text messages from around the world
The database is supplemented with an analytical tool known as the Prefer program, which processes SMS messages to extract other types of information, including contacts from missed-call alerts. The Privacy and Civil Liberties Oversight Board's report on mass surveillance was released on January 23, 2014. It recommends ending the bulk collection of telephone metadata (i.e. bulk phone records: the phone numbers dialed and the times and durations of calls, but not call content); creating a "Special Advocate" to be involved in some cases before the FISA court judge; and releasing future and past FISC decisions "that involve novel interpretations of FISA or other significant questions of law, technology or compliance". According to a joint disclosure by The New York Times, The Guardian, and ProPublica, the NSA and GCHQ had begun working together to collect and store data from dozens of smartphone applications by 2007 at the latest. A 2008 GCHQ report, leaked by Snowden, asserts that "anyone using Google Maps on a smartphone is working in support of a GCHQ system". The NSA and GCHQ have traded recipes for various purposes, such as grabbing location data and journey plans that are made when a target uses Google Maps, and vacuuming up address books, buddy lists, phone logs and geographic data embedded in photos posted on the mobile versions of numerous social networks such as Facebook, Flickr, LinkedIn, Twitter and other services. In a separate 20-page report dated 2012, GCHQ cited the popular smartphone game "Angry Birds" as an example of how an application could be used to extract user data.
Taken together, such forms of data collection would allow the agencies to collect vital information about a user's life, including his or her home country, current location (through geolocation), age, gender, ZIP code, marital status, income, ethnicity, sexual orientation, education level, number of children, etc. A GCHQ document dated August 2012 provided details of the Squeaky Dolphin surveillance program, which enables GCHQ to conduct broad, real-time monitoring of various social media features and social media traffic – such as YouTube video views, the Like button on Facebook, and Blogspot/Blogger visits – without the knowledge or consent of the companies providing those social media features. The "Squeaky Dolphin" program can collect, analyze and utilize YouTube, Facebook and Blogger data in specific situations in real time. The program also collects the addresses of the billions of videos watched daily, as well as some user information. During the 2009 United Nations Climate Change Conference in Copenhagen, the NSA and its Five Eyes partners monitored the communications of delegates of numerous countries. This was done to give their own policymakers a negotiating advantage. The Communications Security Establishment Canada (CSEC) has been tracking Canadian air passengers via free Wi-Fi services at a major Canadian airport. Passengers who exited the airport terminal continued to be tracked as they showed up at other Wi-Fi locations across Canada. In a CSEC document dated May 2012, the agency described how it had gained access to two communications systems with over 300,000 users in order to pinpoint a specific imaginary target. The operation was executed on behalf of the NSA as a trial run to test a new technology capable of tracking down "any target that makes occasional forays into other cities/regions." This technology was subsequently shared with Canada's Five Eyes partners – Australia, New Zealand, Britain, and the United States. February According to research by Süddeutsche Zeitung and TV network NDR, the mobile phone of former German chancellor Gerhard Schröder was monitored from 2002 onwards, reportedly because of his government's opposition to military intervention in Iraq. The source of the latest information is a document leaked by Edward Snowden. The document, containing information about the National Sigint Requirement List (NSRL), had previously been interpreted as referring only to Angela Merkel's mobile. However, Süddeutsche Zeitung and NDR claim to have confirmation from NSA insiders that the surveillance authorisation pertains not to the individual, but to the political post – which in 2002 was still held by Schröder. According to research by the two media outlets, Schröder was placed as number 388 on the list, which contains the names of persons and institutions to be put under surveillance by the NSA. GCHQ launched a cyber-attack on the activist network "Anonymous", using denial-of-service (DoS) attacks to shut down a chatroom frequented by the network's members and to spy on them. The attack, dubbed Rolling Thunder, was conducted by a GCHQ unit known as the Joint Threat Research Intelligence Group (JTRIG). The unit successfully uncovered the true identities of several Anonymous members. The NSA's Section 215 bulk telephony metadata program, which seeks to stockpile records on all calls made in the U.S.,
is collecting less than 30 percent of all Americans' call records because of an inability to keep pace with the explosion in cellphone use, according to The Washington Post. The controversial program permits the NSA, after obtaining a warrant from the secret Foreign Intelligence Surveillance Court, to record the numbers, length and location of every call from the participating carriers. The Intercept reported that the U.S. government is using primarily NSA surveillance to target people for drone strikes overseas. In its report, The Intercept details the flawed methods used to locate targets for lethal drone strikes, which have resulted in the deaths of innocent people. According to The Washington Post, NSA analysts and collectors – i.e., NSA personnel who control electronic surveillance equipment – use the NSA's sophisticated surveillance capabilities to track individual targets geographically and in real time, while drones and tactical units aim their weaponry against those targets to take them out. An unnamed US law firm, reported to be Mayer Brown, was targeted by Australia's ASD. According to Snowden's documents, the ASD had offered to hand over these intercepted communications to the NSA. This allowed government authorities to be "able to continue to cover the talks, providing highly useful intelligence for interested US customers". NSA and GCHQ documents revealed that the anti-secrecy organization WikiLeaks and other activist groups were targeted for government surveillance and criminal prosecution. In particular, the IP addresses of visitors to WikiLeaks were collected in real time, and the US government urged its allies to file criminal charges against the founder of WikiLeaks, Julian Assange, due to his organization's publication of the Afghanistan war logs. The WikiLeaks organization was designated as a "malicious foreign actor". Quoting an unnamed NSA official in Germany, Bild am Sonntag reported that whilst President Obama's order to stop spying on Merkel was being obeyed, the focus had shifted to bugging other leading government and business figures, including Interior Minister Thomas de Maizière, a close confidant of Merkel. Caitlin Hayden, a security adviser to President Obama, was quoted in the newspaper report as saying, "The US has made clear it gathers intelligence in exactly the same way as any other states." The Intercept reveals that government agencies are infiltrating online communities and engaging in "false flag operations" to discredit targets, among them people who have nothing to do with terrorism or national security threats. The two main tactics currently used are the injection of false material onto the internet in order to destroy the reputation of targets, and the use of social sciences and other techniques to manipulate online discourse and activism to generate outcomes considered desirable. The Guardian reported that Britain's surveillance agency GCHQ, with aid from the National Security Agency, intercepted and stored the webcam images of millions of internet users not suspected of wrongdoing. The surveillance program codenamed Optic Nerve collected still images of Yahoo webcam chats (one image every five minutes) in bulk and saved them to agency databases. The agency discovered "that a surprising number of people use webcam conversations to show intimate parts of their body to the other person", estimating that between 3% and 11% of the Yahoo webcam imagery harvested by GCHQ contains "undesirable nudity".
March The NSA has built an infrastructure which enables it to covertly hack into computers on a mass scale, using automated systems that reduce the level of human oversight in the process. The NSA relies on an automated system codenamed TURBINE, which in essence enables the automated management and control of a large network of implants (a form of remotely transmitted malware installed on selected individual computer devices or in bulk on tens of thousands of devices). As quoted by The Intercept, TURBINE is designed to "allow the current implant network to scale to large size (millions of implants) by creating a system that does automated control implants by groups instead of individually." The NSA has shared many of its files on the use of implants with its counterparts in the so-called Five Eyes surveillance alliance – the United Kingdom, Canada, New Zealand, and Australia. Among other things, TURBINE and its control over the implants make the NSA capable of:
breaking into targeted computers and siphoning out data from foreign Internet and phone networks
infecting a target's computer and exfiltrating files from a hard drive
covertly recording audio from a computer's microphone and taking snapshots with its webcam
launching cyberattacks by corrupting and disrupting file downloads or denying access to websites
exfiltrating data from removable flash drives that connect to an infected computer
The TURBINE implants are linked to, and rely upon, a large network of clandestine surveillance "sensors" that the NSA has installed at locations across the world, including the agency's headquarters in Maryland and eavesdropping bases used by the agency in Misawa, Japan and Menwith Hill, England. Codenamed TURMOIL, the sensors operate as a sort of high-tech surveillance dragnet, monitoring packets of data as they are sent across the Internet. When TURBINE implants exfiltrate data from infected computer systems, the TURMOIL sensors automatically identify the data and return it to the NSA for analysis. And when targets are communicating, the TURMOIL system can be used to send alerts or "tips" to TURBINE, enabling the initiation of a malware attack. To identify surveillance targets, the NSA uses a series of data "selectors" as traffic flows across Internet cables. These selectors can include email addresses, IP addresses, or the unique "cookies" containing a username or other identifying information that are sent to a user's computer by websites such as Google, Facebook, Hotmail, Yahoo, and Twitter, unique Google advertising cookies that track browsing habits, unique encryption key fingerprints that can be traced to a specific user, and computer IDs that are sent across the Internet when a Windows computer crashes or updates. The CIA was accused by U.S. Senate Intelligence Committee Chairwoman Dianne Feinstein of spying on a stand-alone computer network established for the committee in its investigation of allegations of CIA abuse in a George W. Bush-era detention and interrogation program. A voice interception program codenamed MYSTIC began in 2009. Along with RETRO, short for "retrospective retrieval" (RETRO is a voice audio recording buffer that allows retrieval of captured content up to 30 days into the past), the MYSTIC program is capable of recording "100 percent" of a foreign country's telephone calls, enabling the NSA to rewind and review conversations from up to 30 days in the past, along with the related metadata.
With the capability to store up to 30 days of recorded conversations, MYSTIC enables the NSA to pull an instant history of a person's movements, associates and plans. On March 21, Le Monde published slides from an internal presentation of the Communications Security Establishment Canada, which attributed a piece of malicious software to French intelligence. The CSEC presentation concluded that the list of malware victims matched French intelligence priorities and found French cultural references in the malware's code, including the name Babar, a popular French children's character, and the developer name "Titi". The French telecommunications corporation Orange S.A. shares its call data with the French intelligence agency DGSE, which hands over the intercepted data to GCHQ. The NSA has spied on the Chinese technology company Huawei. Huawei is a leading manufacturer of smartphones, tablets, mobile phone infrastructure, and WLAN routers, and also installs fiber optic cable. According to Der Spiegel this "kind of technology […] is decisive in the NSA's battle for data supremacy." The NSA, in an operation named "Shotgiant", was able to access Huawei's email archive and the source code for Huawei's communications products. The US government has had longstanding concerns that Huawei may not be independent of the People's Liberation Army and that the Chinese government might use equipment manufactured by Huawei to conduct cyberespionage or cyberwarfare. The goals of the NSA operation were to assess the relationship between Huawei and the PLA, to learn more about the Chinese government's plans, and to use information from Huawei to spy on Huawei's customers, including Iran, Afghanistan, Pakistan, Kenya, and Cuba. Former Chinese President Hu Jintao, the Chinese Trade Ministry, banks, as well as telecommunications companies were also targeted by the NSA. The Intercept published a document of an NSA employee discussing how to build a database of IP addresses, webmail, and Facebook accounts associated with system administrators so that the NSA can gain access to the networks and systems they administer. At the end of March 2014, Der Spiegel and The Intercept published, based on a series of classified files from the archive provided to reporters by NSA whistleblower Edward Snowden, articles related to espionage efforts by GCHQ and the NSA in Germany. The British GCHQ targeted three German internet firms for information about Internet traffic passing through internet exchange points, important customers of the German internet providers, their technology suppliers, as well as future technical trends in their business sector and company employees. On March 7, 2013, the Foreign Intelligence Surveillance Court granted the NSA the authority for blanket surveillance of Germany, its people and institutions, regardless of whether those affected are suspected of having committed an offense or not, without an individualized court order. In addition, Germany's chancellor Angela Merkel was listed in a surveillance search engine and database named Nymrod, along with 121 other foreign leaders. As The Intercept wrote: "The NSA uses the Nymrod system to 'find information relating to targets that would otherwise be tough to track down,' according to internal NSA documents. Nymrod sifts through secret reports based on intercepted communications as well as full transcripts of faxes, phone calls, and communications collected from computer systems.
More than 300 'cites' for Merkel are listed as available in intelligence reports and transcripts for NSA operatives to read." April Towards the end of April, Edward Snowden said that the United States surveillance agencies spy on Americans more than anyone else in the world, contrary to what the government had said up to that point. May An article published by Ars Technica shows NSA's Tailored Access Operations (TAO) employees intercepting a Cisco router. The Intercept and WikiLeaks revealed information about which countries were having their communications collected as part of the MYSTIC surveillance program. On May 19, The Intercept reported that the NSA is recording and archiving nearly every cell phone conversation in the Bahamas with a system called SOMALGET, a subprogram of MYSTIC. The mass surveillance has been occurring without the Bahamian government's permission. Aside from the Bahamas, The Intercept reported NSA interception of cell phone metadata in Kenya, the Philippines, Mexico and a fifth country it did not name due to "credible concerns that doing so could lead to increased violence." WikiLeaks released a statement on May 23 claiming that Afghanistan was the unnamed nation. In a statement responding to the revelations, the NSA said "the implication that NSA's foreign intelligence collection is arbitrary and unconstrained is false." Through its global surveillance operations, the NSA exploits the flood of images included in emails, text messages, social media, videoconferences and other communications to harvest millions of images. These images are then used by the NSA in sophisticated facial recognition programs to track suspected terrorists and other intelligence targets. June Vodafone revealed that there were secret wires that allowed government agencies direct access to their networks. This access does not require warrants, and the direct access wire is often equipment in a locked room. In six countries where Vodafone operates, the law requires telecommunication companies to install such access or allows governments to do so. Vodafone did not name these countries in case some governments retaliated by imprisoning their staff. Shami Chakrabarti of Liberty said "For governments to access phone calls at the flick of a switch is unprecedented and terrifying. Snowden revealed the internet was already treated as fair game. Bluster that all is well is wearing pretty thin – our analogue laws need a digital overhaul." Vodafone published its first Law Enforcement Disclosure Report on June 6, 2014. Vodafone group privacy officer Stephen Deadman said "These pipes exist, the direct access model exists. We are making a call to end direct access as a means of government agencies obtaining people's communication data. Without an official warrant, there is no external visibility. If we receive a demand we can push back against the agency. The fact that a government has to issue a piece of paper is an important constraint on how powers are used." Gus Hosein, director of Privacy International, said "I never thought the telcos would be so complicit. It's a brave step by Vodafone and hopefully the other telcos will become more brave with disclosure, but what we need is for them to be braver about fighting back against the illegal requests and the laws themselves." Above-top-secret documentation of a covert surveillance program named Overseas Processing Centre 1 (OPC-1) (codenamed "CIRCUIT") by GCHQ was published by The Register.
Based on documents leaked by Edward Snowden, GCHQ taps into undersea fiber optic cables via secret spy bases near the Strait of Hormuz and Yemen. BT and Vodafone are implicated. The Danish newspaper Dagbladet Information and The Intercept revealed on June 19, 2014, the NSA mass surveillance program codenamed RAMPART-A. Under RAMPART-A, "third party" countries tap into fiber optic cables carrying the majority of the world's electronic communications and secretly allow the NSA to install surveillance equipment on these cables. The NSA's foreign partners turn over massive amounts of data, such as the content of phone calls, faxes, e-mails, internet chats, data from virtual private networks, and calls made using Voice over IP software like Skype. In return, these partners receive access to the NSA's sophisticated surveillance equipment so that they too can spy on the mass of data that flows in and out of their territory. Among the partners participating in the NSA mass surveillance program are Denmark and Germany. July During the week of July 4, a 31-year-old male employee of Germany's intelligence service BND was arrested on suspicion of spying for the United States. The employee is suspected of spying on the German Parliamentary Committee investigating the NSA spying scandal. Former NSA official and whistleblower William Binney spoke at a Centre for Investigative Journalism conference in London. According to Binney, "at least 80% of all audio calls, not just metadata, are recorded and stored in the US. The NSA lies about what it stores." He also stated that the majority of fiber optic cables run through the U.S., which "is no accident and allows the US to view all communication coming in." The Washington Post released a review of a cache provided by Snowden containing roughly 160,000 text messages and e-mails intercepted by the NSA between 2009 and 2012. The newspaper concluded that nine out of ten account holders whose conversations were recorded by the agency "were not the intended surveillance targets but were caught in a net the agency had cast for somebody else." In its analysis, The Post also noted that many of the account holders were Americans. On July 9, a soldier working within Germany's Federal Ministry of Defence (BMVg) fell under suspicion of spying for the United States. As a result of the July 4 case and this one, the German government expelled the CIA station chief in Germany on July 17. On July 18, former State Department official John Tye released an editorial in The Washington Post, highlighting concerns over data collection under Executive Order 12333. Tye's concerns are rooted in classified material he had access to through the State Department, though he has not publicly released any classified materials. August The Intercept reported that the NSA is "secretly providing data to nearly two dozen U.S. government agencies with a 'Google-like' search engine" called ICREACH. The database, The Intercept reported, is accessible to domestic law enforcement agencies including the FBI and the Drug Enforcement Administration and was built to contain more than 850 billion metadata records about phone calls, emails, cellphone locations, and text messages. 2015 February Based on documents obtained from Snowden, The Intercept reported that the NSA and GCHQ had, by 2010 at the latest, broken into the internal computer network of Gemalto and stolen the encryption keys used in SIM cards.
The company is the world's largest manufacturer of SIM cards, making about two billion cards a year. With the keys, the intelligence agencies could eavesdrop on cell phones without the knowledge of mobile phone operators or foreign governments. March The New Zealand Herald, in partnership with The Intercept, revealed that the New Zealand government used XKeyscore to spy on candidates for the position of World Trade Organization director general and also members of the Solomon Islands government. April In January 2015, the DEA revealed that it had been collecting metadata records for all telephone calls made by Americans to 116 countries linked to drug trafficking. The DEA's program was separate from the telephony metadata programs run by the NSA. In April, USA Today reported that the DEA's data collection program began in 1992 and included all telephone calls between the United States and Canada and Mexico. Current and former DEA officials described the program as the precursor of the NSA's similar programs. The DEA said its program was suspended in September 2013 after a review of the NSA's programs, and that it was "ultimately terminated." September Snowden provided journalists at The Intercept with GCHQ documents regarding another secret program, "Karma Police", which the documents describe as "the world's biggest" data mining operation, formed to create profiles of every visible Internet user's browsing habits. By 2009 it had stored over 1.1 trillion web browsing sessions, and by 2012 was recording 50 billion sessions per day. 2016 January NSA documents show the US and UK spied on Israeli military drones and fighter jets. August A group called The Shadow Brokers said it had infiltrated the NSA's Equation Group and teased files, including some named in documents leaked by Edward Snowden. Among the products affected by the leaked material were Cisco PIX and ASA VPN devices. 2017 In March 2017, WikiLeaks published more than 8,000 documents on the CIA. The confidential documents, codenamed Vault 7, dated from 2013 to 2016, included details on the CIA's hacking capabilities, such as the ability to compromise cars, smart TVs, web browsers (including Google Chrome, Microsoft Edge, Firefox, and Opera), and the operating systems of most smartphones (including Apple's iOS and Google's Android), as well as other operating systems such as Microsoft Windows, macOS, and Linux. WikiLeaks did not name the source, but said that the files had "circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive." 2021 In May 2021, it was reported that the Danish Defence Intelligence Service had collaborated with the NSA to wiretap fellow EU members and leaders, leading to a wide backlash among EU countries and demands for explanations from the Danish and American governments. Reaction Reactions of citizens The disclosures provided impetus for the creation of social movements against mass surveillance, such as Restore the Fourth, and actions like Stop Watching Us and The Day We Fight Back. On the legal front, the Electronic Frontier Foundation joined a coalition of diverse groups filing suit against the NSA. Several human rights organizations have urged the Obama administration not to prosecute, but protect, "whistleblower Snowden": Amnesty International, Human Rights Watch, Transparency International, and the Index on Censorship, among others.
On the economic front, several consumer surveys registered a drop in online shopping and banking activity as a result of the Snowden revelations. Reactions of political leaders United States Domestically, President Barack Obama claimed that there is "no spying on Americans", and White House Press Secretary Jay Carney asserted that the surveillance programs revealed by Snowden had been authorized by Congress. On the international front, U.S. Attorney General Eric Holder stated that "we cannot target even foreign persons overseas without a valid foreign intelligence purpose." United Kingdom Prime Minister David Cameron warned journalists that "if they don't demonstrate some social responsibility it will be very difficult for government to stand back and not to act." Deputy Prime Minister Nick Clegg emphasized that the media should "absolutely defend the principle of secrecy for the intelligence agencies". Foreign Secretary William Hague claimed that "we take great care to balance individual privacy with our duty to safeguard the public and UK national security." Hague defended the Five Eyes alliance and reiterated that the British–U.S. intelligence relationship must not be endangered because it "saved many lives". Australia Former Prime Minister Tony Abbott stated that "every Australian governmental agency, every Australian official at home and abroad, operates in accordance with the law". Abbott criticized the Australian Broadcasting Corporation for being unpatriotic due to its reporting on the documents provided by Snowden, whom Abbott described as a "traitor". Foreign Minister Julie Bishop also denounced Snowden as a traitor and accused him of "unprecedented" treachery. Bishop defended the Five Eyes alliance and reiterated that the Australian–U.S. intelligence relationship must not be endangered because it "saves lives". Germany In July 2013, Chancellor Angela Merkel defended the surveillance practices of the NSA, and described the United States as "our truest ally throughout the decades". After the NSA's surveillance of Merkel was revealed, however, the Chancellor compared the NSA with the Stasi. According to The Guardian, Berlin is using the controversy over NSA spying as leverage to enter the exclusive Five Eyes alliance. Interior Minister Hans-Peter Friedrich stated that "the Americans take our data privacy concerns seriously." Testifying before the German Parliament, Friedrich defended the NSA's surveillance, and cited five terrorist plots on German soil that were prevented because of the NSA. However, in April 2014, another German interior minister criticized the United States for failing to provide sufficient assurances to Germany that it had reined in its spying tactics. Thomas de Maizière, a close ally of Merkel, told Der Spiegel: "U.S. intelligence methods may be justified to a large extent by security needs, but the tactics are excessive and over-the-top." Sweden Minister for Foreign Affairs Carl Bildt defended the FRA and described its surveillance practices as a "national necessity". Minister for Defence Karin Enström said that Sweden's intelligence exchange with other countries is "critical for our security" and that "intelligence operations occur within a framework with clear legislation, strict controls and under parliamentary oversight." Netherlands Interior Minister Ronald Plasterk apologized for incorrectly claiming that the NSA had collected 1.8 million records of metadata in the Netherlands.
Plasterk acknowledged that it was in fact Dutch intelligence services who collected the records and transferred them to the NSA. Denmark The Danish Prime Minister Helle Thorning-Schmidt has praised the American intelligence agencies, claiming they have prevented terrorist attacks in Denmark, and expressed her personal belief that the Danish people "should be grateful" for the Americans' surveillance. She later claimed that the Danish authorities had no basis for assuming that American intelligence agencies had performed illegal spying activities against Denmark or Danish interests. Review of intelligence agencies Germany In July 2013, the German government announced an extensive review of Germany's intelligence services. United States In August 2013, the U.S. government announced an extensive review of U.S. intelligence services. United Kingdom In October 2013, the British government announced an extensive review of British intelligence services. Canada In December 2013, the Canadian government announced an extensive review of Canada's intelligence services. Criticism and alternative views In January 2014, U.S. President Barack Obama said that "the sensational way in which these disclosures have come out has often shed more heat than light" and critics such as Sean Wilentz claimed that "the NSA has acted far more responsibly than the claims made by the leakers and publicized by the press." In Wilentz' view, "The leakers have gone far beyond justifiably blowing the whistle on abusive programs. In addition to their alarmism about [U.S.] domestic surveillance, many of the Snowden documents released thus far have had nothing whatsoever to do with domestic surveillance." Edward Lucas, former Moscow bureau chief for The Economist, agreed, asserting that "Snowden's revelations neatly and suspiciously fits the interests of one country: Russia" and citing Masha Gessen's statement that "The Russian propaganda machine has not gotten this much mileage out of a US citizen since Angela Davis's murder trial in 1971." Bob Cesca objected to The New York Times failing to redact the name of an NSA employee and the specific location where an al Qaeda group was being targeted in a series of slides the paper made publicly available. Russian journalist Andrei Soldatov argued that Snowden's revelations had had negative consequences for internet freedom in Russia, as Russian authorities increased their own surveillance and regulation on the pretext of protecting the privacy of Russian users. Snowden's name was invoked by Russian legislators who supported measures forcing platforms such as Google, Facebook, Twitter, Gmail and YouTube to locate their servers on Russian soil or install SORM black boxes on their servers so that Russian authorities could control them. Soldatov also contended that as a result of the disclosures, international support had grown for having national governments take over the powers of the organizations involved in coordinating the Internet's global architectures, which could lead to a Balkanization of the Internet that would restrict free access to information. The Montevideo Statement on the Future of Internet Cooperation, issued in October 2013 by ICANN and other organizations, warned against "Internet fragmentation at a national level" and expressed "strong concern over the undermining of the trust and confidence of Internet users globally due to recent revelations". In late 2014, Freedom House said "[s]ome states are using the revelations of widespread surveillance by the U.S.
National Security Agency (NSA) as an excuse to augment their own monitoring capabilities, frequently with little or no oversight, and often aimed at the political opposition and human rights activists." See also Global surveillance whistleblowers Communications Assistance for Law Enforcement Act Harris Corporation PositiveID References External links Collections "Global Surveillance" – an annotated and categorized overview of the revelations following the leaks by the whistleblower Edward Snowden, with links to comments and follow-ups, by Oslo University Library. NSA Spying Scandal – Der Spiegel. Six months of revelations on the NSA, by The Washington Post's Kennedy Elliott and Terri Rupar, December 23, 2013. A collection of documents relating to surveillance. Part 2 of the above. Part 3 of the above. Documents relating to the surveillance of Dilma Rousseff and Enrique Peña Nieto. The NSA Archive by the American Civil Liberties Union (ACLU) – all documents released since June 5, 2013, both by the media and the U.S. government, are housed in this database established and operated by the ACLU. "Introducing the ACLU's NSA Documents Database", an ACLU article by Emily Weinrebe, April 3, 2014, on the above NSA Archive. NSA Primary Sources – a list of all leaks and links to media articles related to the disclosures based on the material of Edward Snowden, sorted by date, document and media outlet, established and operated by the Electronic Frontier Foundation. Snowden disclosures at the Internet Archive. Global surveillance 2010s in politics 2010s in international relations News leaks Edward Snowden 2010s scandals Articles containing video clips Cover-ups Surveillance scandals Works about security and surveillance
22563011
https://en.wikipedia.org/wiki/Juniper%20MX-Series
Juniper MX-Series
The Juniper MX-Series is a family of Ethernet routers and switches designed and manufactured by Juniper Networks. In 2006, Juniper released the first of the MX-series, the MX960, MX240, and MX480. The second generation routers, called MX "3D", were first released in 2009 and featured a new Trio chipset and IPv6 support. In 2013, the MX routers were improved to increase their bandwidth, and a virtualized MX 3D router, the vMX 3D, was released in 2014. Third-party software can be integrated into the routers using the Juniper Extension Toolkit (JET). History Early releases On October 18, 2006, the MX Series was publicly announced. Before its release, Ethernet aggregation was a missing component of Juniper's edge network products, which was causing it to lose market share to Alcatel. The MX Series was late to market, but it was well received by analysts and customers. It was part of a trend at the time to incorporate additional software features in routers and switches. The first product release of the MX series was the MX960, a 14-slot, 480 Gbit/s switch and router. In late 2006, Juniper introduced the MX240 and MX480, which are smaller versions of the 960. They had a throughput of 240 Gbit/s and 480 Gbit/s respectively. Further development In 2009 a new line of MX "3D" products was introduced, using Juniper's programmable Trio chipset. Trio is a proprietary semiconductor technology with custom network instructions. It provides a cross between network processing units and ASICs. IPv6 features were added and the MX80, a smaller 80 Gbit/s router, was introduced the following year. In 2011 new switch fabric cards increased the capacity of MX 3D routers. In May 2011 Juniper introduced several new products including the MX5, MX10 and MX40 3D routers, which have a throughput of 20, 40 and 60 Gbit/s respectively and can each be upgraded to an MX80. A collection of features called MobileNext was introduced in 2011 at Mobile World Congress, then discontinued in August 2013. According to Network World, it allowed MX 3D products to serve as a mobile "gateway, an authentication and management control plan for 2G/3G and LTE mobile packet cores and as a policy manager for subscriber management systems." In October 2012, Juniper introduced the MX2020 and 2010 3D Universal Edge Routers, with throughputs of 80 Tbit/s and 40 Tbit/s respectively. Juniper also released a video caching system for the MX family and a suite of software applications that include parental control, firewall and traffic monitoring. New "Virtual Chassis" features allowed network operators to manage multiple boxes as though they were a single router or switch. Recent developments In 2013, Juniper introduced new line cards for the MX series and a new switch fabric module, intended to upgrade the MX series for higher bandwidth needs and for software-defined networking applications. The capacity of the MX240, 480 and 960 was increased by double or more. A new Multiservice Modular Interface Card (MS-MIC) was incorporated that supports up to 9 Gbit/s for services like tunneling software. In March 2013, Juniper released the EX9200 switch, which is not part of the MX Series, but uses the same software and Trio chipset. A virtualized MX series 3D router, the vMX 3D, was introduced in November 2014. A suite of updates was announced in late 2015. New MPC line cards were introduced, which have a throughput of up to 1.6 Tbit/s. Simultaneously, the Juniper Extension Toolkit (JET) was announced.
JET is a programming interface for integrating third-party applications that automate provisioning, maintenance and other tasks. The Junos Telemetry Interface was also announced at the same time. It reports data to applications and other equipment to automate changes to the network in response to faults or in order to optimize performance. Current products and specifications According to Juniper's website, Juniper's current MX Series products include the following: References External links Official Site Juniper Networks Routers (computing)
13154296
https://en.wikipedia.org/wiki/Yorick%20Wilks
Yorick Wilks
Yorick Wilks FBCS (born 27 October 1939), a British computer scientist, is Emeritus Professor of Artificial Intelligence at the University of Sheffield, Visiting Professor of Artificial Intelligence at Gresham College (a post created especially for him), Former Senior Research Fellow at the Oxford Internet Institute, Senior Scientist at the Florida Institute for Human and Machine Cognition, and a member of the Epiphany Philosophers. Biography Wilks was educated at Torquay Boys' Grammar School, followed by Pembroke College, Cambridge, where he read Philosophy, joined the Epiphany Philosophers and obtained his Doctor of Philosophy degree (1968) under Professor R. B. Braithwaite for the thesis 'Argument and Proof'; he was an early pioneer in meaning-based approaches to the understanding of natural language content by computers. His main early contribution in the 1970s was called "Preference Semantics" (Wilks, 1973; Wilks and Fass, 1992), an algorithmic method for assigning the "most coherent" interpretation to a sentence in terms of having the maximum number of internal preferences of its parts (normally verbs or adjectives) satisfied. That early work was hand-coded with semantic entries (of the order of some hundreds) as was normal at the time, but since then has led to the empirical determinations of preferences (chiefly of English verbs) in the 1980s and 1990s. A key component of the notion of preference in semantics was that the interpretation of an utterance is not a well- or ill-formed notion, as was argued in Chomskyan approaches, such as those of Jerry Fodor and Jerrold Katz. It was rather that a semantic interpretation was the best available, even though some preferences might not be satisfied. So, in "The machine answered the question with a low whine" the agent of "answer" does not satisfy that verb's preference for a human answerer—which would cause it to be deemed ill-formed by Fodor and Katz—but is accepted as sub-optimal or metaphorical, and, now, conventional. The function of the algorithm is not to determine well-formedness at all but to make the optimal selection of word-senses to participate in the overall interpretation. Thus, in "The Pole answered..." the system will always select the human sense of the agent and not the inanimate one if it gives a more coherent interpretation overall. Preference Semantics is thus some of the earliest computational work—with programs run at Systems Development Corporation in Santa Monica in 1967 in LISP on an IBM360—in the now established field of word sense disambiguation. This approach was used in the first operational machine translation system based principally on meaning structures and built by Wilks at Stanford Artificial Intelligence Laboratory in the early 1970s (Wilks, 1973) at the same time and place as Roger Schank was applying his "Conceptual Dependency" approach to machine translation. The LISP code of Wilks' system was in The Computer Museum, Boston. Yorick Wilks has been elected a fellow of the American and European Associations for Artificial Intelligence, of the British Computer Society, a member of the UK Computing Research Committee, and a permanent member of ICCL, the International Committee on Computational Linguistics. He is professor of artificial intelligence at the University of Sheffield and a senior research fellow at the Oxford Internet Institute. 
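The sense-selection mechanism of Preference Semantics described above can be sketched in a few lines of modern code. The following Python toy is illustrative only – its two-sense lexicon and type labels are invented for the example and are not a reconstruction of Wilks' original LISP system:

# Toy Preference Semantics: pick the combination of word senses that
# satisfies the most selectional preferences. Zero satisfied preferences
# is still accepted (as sub-optimal or metaphorical), never rejected.
SENSES = {"Pole": [("person from Poland", "HUMAN"),
                   ("long rod", "PHYSICAL_OBJECT")]}
PREFERENCES = {"answer": {"agent": "HUMAN"}}  # "answer" prefers a human agent

def best_interpretation(agent_word, verb):
    prefs = PREFERENCES[verb]
    def satisfied(sense):
        _, semantic_type = sense
        return sum(1 for wanted in prefs.values() if wanted == semantic_type)
    # max() picks the reading with the most satisfied preferences,
    # i.e. the "most coherent" overall interpretation.
    return max(SENSES[agent_word], key=satisfied)

print(best_interpretation("Pole", "answer"))
# -> ('person from Poland', 'HUMAN')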
In 1991 he received a Defense Advanced Research Projects Agency grant on interlingual pragmatics-based machine translation, and in 1994 he received a grant from the Engineering and Physical Sciences Research Council to investigate the field of large-scale information extraction (LaSIE); in the following years he would obtain further grants to continue exploring the field of information extraction (AVENTINUS, ECRAN, PASTA...). In the 1990s Wilks also became interested in modelling human-computer dialogue, and the team led by David Levy, with Wilks as chief researcher, won the Loebner Prize in 1997. He was the founding director of the EU-funded Companions Project on creating long-term computer companions for people. At his Festschrift in 2007 at the British Computer Society in London, a volume of his own papers was presented along with a volume of essays in his honour. He was awarded the Antonio Zampolli prize in honour of his lifetime work at the LREC'2008 conference on 28 May 2008, and the Lifetime Achievement Award at the ACL'2008 conference on 18 June 2008. In 2009, he was awarded the British Computer Society's Lovelace Medal, its annual award for research achievement, and was awarded the Fellowship of the Association for Computing Machinery. In 1998, Wilks became head of the Department of Computer Science of the University of Sheffield, where he had started working in 1993 as professor of artificial intelligence, a post he still holds. In 1993 he became the founding director of the Institute of Language, Speech and Hearing (ILASH). Wilks also set up the Natural Language Processing Group of the University of Sheffield. In 1994 he (along with Rob Gaizauskas and Hamish Cunningham) designed GATE, an advanced NLP architecture that has been widely distributed. National Life Stories conducted an oral history interview (C1672/24) with Yorick Wilks in 2016 for its Science and Religion collection held by the British Library. Relevant data Awards Yorick Wilks has received many awards:
(2009) Elected Fellow of the Association for Computing Machinery
(2009) Lovelace Medal by the British Computer Society
(2008) Zampolli Prize (ELRA, awarded at LREC in Marrakech, Morocco)
(2008) Lifetime Achievement Award (Association for Computational Linguistics, in Columbus)
(2006) Visiting Professor, University of Oxford
(2004) Elected to UK Computing Research Committee
(2004) Elected Fellow, British Computer Society
(2003) Visiting Fellow, Oxford Internet Institute
(1998) Elected Fellow of European Association for Artificial Intelligence
(1997) Elected Fellow, EPSRC College of Computing
(1991) Visiting Fellow, Trinity Hall, Cambridge
(1991) Elected Fellow of the American Association for Artificial Intelligence
(1983) Royal Society Travel Fellowship
(1983) Commonwealth of Australia Visiting Professor
(1981) Visiting Sloan Fellow, University of California, Berkeley
(1980) Invited Participant in the Nobel Symposium on Language, Stockholm
(1979) NATO Senior Scientist Fellowship
(1979) Visiting Sloan Fellow, Yale University
(1975) SRC Senior Visiting Fellowship, University of Edinburgh
Membership Yorick Wilks is an active member of the following associations:
Association for Computational Linguistics
Society for the Study of AI and Simulation of Behaviour
Association for Computing Machinery
Cognitive Science Society
British Society for the Philosophy of Science
American Association for Artificial Intelligence
Aristotelian Society
Selected works Books Wilks, Y.
(2019) Artificial Intelligence: Modern Magic or Dangerous Future? Icon Books.
Wilks, Y. (2015) Machine Translation: its scope and limits. Springer.
Wilks, Y. (ed.) (2010) Close Engagements with Artificial Companions: Key Social, Psychological and Design issues. John Benjamins: Amsterdam.
Wilks, Y., Brewster, C. (2009) Natural Language Processing as a Foundation of the Semantic Web. Now Press: London.
Wilks, Y. (2007) Words and Intelligence I, Selected papers by Yorick Wilks. In K. Ahmad, C. Brewster & M. Stevenson (eds.), Springer: Dordrecht.
Wilks, Y. (ed., and with introduction and commentaries). (2006) Language, cohesion and form: selected papers of Margaret Masterman. Cambridge: Cambridge University Press.
Wilks, Y., Nirenburg, S., Somers, H. (eds.) (2003) Readings in Machine Translation. Cambridge, MA: MIT Press.
Wilks, Y. (ed.). (1999) Machine Conversations. Kluwer: New York.
Wilks, Y., Slator, B., Guthrie, L. (1996) Electric Words: dictionaries, computers and meanings. Cambridge, MA: MIT Press.
Ballim, A., Wilks, Y. (1991) Artificial Believers. Norwood, NJ: Erlbaum.
Wilks, Y. (ed.). (1990) Theoretical Issues in Natural Language Processing. Norwood, NJ: Erlbaum.
Wilks, Y., Partridge, D. (eds., plus three YW chapters and an introduction). (1990) The Foundations of Artificial Intelligence: a sourcebook. Cambridge: Cambridge University Press.
Wilks, Y., Sparck Jones, K. (eds.). (1984) Automatic Natural Language Processing, paperback edition. New York: Wiley. Originally published by Ellis Horwood.
Wilks, Y., Charniak, E. (eds. and principal authors). (1976) Computational Semantics – an Introduction to Artificial Intelligence and Natural Language Understanding. Amsterdam: North-Holland. Reprinted in Russian, in the series Progress in Linguistics, Moscow, 1981.
Wilks, Y. (1972) Grammar, Meaning and the Machine Analysis of Language. London and Boston: Routledge.
See also Artificial intelligence Computational linguistics Natural language processing References External links Yorick Wilks' profile at the University of Sheffield DCS Yorick Wilks' profile at Gresham College Yorick Wilks' subsite at the Oxford Internet Institute, University of Oxford Yorick Wilks video on Voices from Oxford (VOA) Second VOA video Yorick Wilks demonstrating a computer companion on YouTube: https://www.youtube.com/watch?v=-Xx5hgjD-Mw Yorick Wilks lecture at Gresham College, London, on Artificial Intelligence and Religion: https://www.gresham.ac.uk/lectures-and-events/ai-religion A seminar by Yorick Wilks at Brandeis University (Department of Computer Science) Lecture by Professor Yorick Wilks Alumni of Pembroke College, Cambridge Artificial intelligence researchers British computer scientists Fellows of the Association for the Advancement of Artificial Intelligence Fellows of the British Computer Society Fellows of the Association for Computing Machinery Living people People educated at Torquay Boys' Grammar School 1939 births Florida Institute for Human and Machine Cognition people Fellows of the European Association for Artificial Intelligence Natural language processing researchers Computer scientists
29584982
https://en.wikipedia.org/wiki/St.%20Nicholas%20Monastery%20Church%2C%20Mesopotam
St. Nicholas Monastery Church, Mesopotam
St. Nicholas Monastery Church is the katholikon of the abandoned Orthodox monastery of Saint George in Mesopotam, Vlorë County, Albania. History The monastery is thought to have been built in 1224 or 1225. It was once enclosed by a circular wall which is today only partly preserved. Its double apse makes it unique in its genre, and it is thought that this was due to the monastery being used by two religious rites (Catholic and Orthodox). The East–West Schism of 1054 appears not to have deterred the Catholic and Orthodox believers in Mesopotam, who were evidently successful in finding a compromise that enabled them to work together in the construction of the monastery. It is designated as a Cultural Monument of Albania and is a protected heritage site, although the church and temple building is in need of restoration, held against collapse with wooden props and scaffolding. The Orthodox monastery was built on the walls of a much older temple. An Albanian Heritage Foundation team, directed by architect Reshat Gega, conducted research on the monastery, performing excavations and restoration over a period of 20 years. Evidence the team found included Hellenic stones from the 3rd-4th centuries BC, confirming the connection with the capital of the Epirote League at Phoenice (Finik), located 3 km from the monastery. One of the decorative stones bears the inscription "Menelau", presumed to be a reference to the Spartan king Menelaus, whose brother Agamemnon led the assault during the Trojan War. The nearby area includes the ancient site of Buthrotum, which the Roman writer Virgil says was founded by Trojan descendants of Priam who settled in the area after the Trojan War. The original openings in the temple walls have been used as either alcoves or windows by the builders of the monastery. An accompanying photograph shows such an alcove with an icon of Saint Nicholas, the patron saint of the monastery. Agios Nikolaos, Shën Kollit or Saint Nicholas As is immediately evident to speakers of the languages, the Greek "Agios Nikolaos", Serbian "Sveti Nikola", and Albanian "Shën Kollit" all refer to the same saint, anglicised as Saint Nicholas. Saint Nicholas was contemporary with Saint Spyridon, the patron saint of the island of Corfu, which lies 20 km west of Mesopotam. Bishops Nicholas and Spyridon both attended the First Council of Nicaea in the year 325. The bodies of both saints were "rescued" and sent by ship to Italy during the Fall of Constantinople in 1453. The Corfiots successfully petitioned for the body of Saint Spyridon to be relocated to the capital of Corfu. The remains of Saint Nicholas are revered at Bari's great Basilica di San Nicola in Italy, in Venice, and at Myra in Turkey. The Dragon Icons The temple walls contain several legendary icons, including a lion, a serpent dragon with a knot in its tail, and a serpent dragon with its tail coiled around its neck and back. The following information may be coincidental: (a) The Iliad refers to Agamemnon as "the Lion", and numerous Roman and Greek authors write that Laocoön, the priest of Troy, and his two sons were strangled by a pair of serpent dragons. (b) The site of Butrint (Roman Buthrotum) has a great gate, referred to as the Lion Gate, whose headstone depicts a bull, possibly Paris, being wrestled to the ground by a lion, possibly Agamemnon. (c) In 297 BC, the occupants of Butrint and the builders of the temple walls forming Mesopotam Monastery were neighbors.
(d) According to the Roman writer Virgil, Butrint's legendary founder was the seer Helenus, a son of king Priam of Troy, who moved west after the fall of Troy with Neoptolemus and his concubine Andromache. Access to the site The monastery site is normally closed with fencing and locked gates, but the Papas of the modern Orthodox church in Mesopotam has the keys, and visits can be arranged if planned and coordinated in advance. In May each year the community of Mesopotam village holds an "open day" and festival to celebrate Saint Nicholas's saint's day at the site of the monastery. References Cultural Monuments of Albania Buildings and structures in Finiq 13th-century Eastern Orthodox church buildings Byzantine church buildings in Albania Churches in Vlorë County
34609626
https://en.wikipedia.org/wiki/Modern%20elementary%20mathematics
Modern elementary mathematics
Modern elementary mathematics is the theory and practice of teaching elementary mathematics according to contemporary research and thinking about learning. This can include pedagogical ideas, mathematics education research frameworks, and curricular material. In practicing modern elementary mathematics, teachers may use new and emerging media and technologies like social media and video games, as well as applying new teaching techniques based on the individualization of learning, in-depth study of the psychology of mathematics education, and integrating mathematics with science, technology, engineering and the arts. General practice Areas of mathematics Making all areas of mathematics accessible to young children is a key goal of modern elementary mathematics. Author and academic Liping Ma calls for "profound understanding of fundamental mathematics" by elementary teachers and parents of learners, as well as learners themselves.
Algebra: Early algebra covers the approach to elementary mathematics which helps children generalize number and set ideas.
Probability and statistics: Modern technologies make probability and statistics accessible to elementary learners with tools such as computer-assisted data visualization.
Geometry: Specially developed physical and virtual manipulatives, as well as interactive geometry software, can make geometry (beyond basic sorting and measuring) available to elementary learners.
Calculus: Innovations such as Don Cohen's map to calculus, which was developed using children's work and level of understanding, are making calculus accessible to elementary learners.
Problem solving: Creative problem solving, which contrasts with exercises in arithmetic, such as adding or multiplying numbers, is now a major part of elementary mathematics.
Other areas of mathematics, such as logical reasoning and paradoxes, which used to be reserved for advanced groups of learners, are now being integrated into more mainstream curricula. Use of psychology Psychology in mathematics education is an applied research domain, with many recent developments relevant to elementary mathematics. A major aspect is the study of motivation; while most young children enjoy some mathematical practices, by the age of seven to ten many lose interest and begin to experience mathematical anxiety. Constructivism and other learning theories consider the ways young children learn mathematics, taking child developmental psychology into account. Both practitioners and researchers focus on children's memory, mnemonic devices, and computer-assisted techniques such as spaced repetition. There is an ongoing discussion of relationships between memory, procedural fluency with algorithms, and conceptual understanding of elementary mathematics. Sharing songs, rhymes, visuals and other mnemonics is popular in teacher social networks. The understanding that young children benefit from hands-on learning is more than a century old, going back to the work of Maria Montessori. However, there are modern developments of the theme. Traditional manipulatives are now available on computers as virtual manipulatives, with many offering options not available in the physical world, such as zoom or cross-sections of geometric shapes. Embodied mathematics, such as studies of numerical cognition or gestures in learning, are growing research topics in mathematics education. Accommodating individual students Modern tools such as computer-based expert systems allow higher individualization of learning.
Students do mathematical work at their own pace, providing for each student's learning style, and scaling the same activity for multiple levels. Special education and gifted education in particular require level and style accommodations, such as using different presentation and response options. Changing some aspects of the environment, such as giving an auditory learner headphones with quiet music, can help children concentrate on mathematical tasks. Modern learning materials, both computer and physical, accommodate learners through the use of multiple representations, such as graphs, pictures, words, animations, symbols, and sounds. For example, recent research suggests that sign language is not only a means of communication for those who are deaf, but also a visual approach to learning, appealing to many other students and particularly helping with mathematics. Another aspect of individual education is child-led learning, which is called unschooling when it encompasses most of the child's experiences. Child-led learning means incorporating mathematically rich projects that stem from personal interests and passions. Educators who support child-led learning need to provide tasks that are open to interpretation, and be ready to improvise, rather than prepare lessons ahead of time. This modern approach often involves seizing opportunities for discovery, and learning as the child's curiosity demands. This departure from conventional structured learning leaves the child free to explore his/her innate desires and curiosities. Child-led learning taps into the child's intrinsic love of learning. Problem solving can be an intensely individualized activity, with students working in their own ways and also sharing insights and results within groups. There are many means to one end, emphasizing the importance of creative approaches. Promoting discourse and focusing on language are important concepts for helping each student participate in problem solving meaningfully. Data-based assessment and comparison of learning methods, and of the ways children learn, are another major aspect of modern elementary mathematics. Use of emerging technologies Computation technology Modern computation technologies change elementary mathematics in several ways. Technology reduces the amount of attention, memory, and computation required by users, making higher mathematical topics accessible to young children. However, the main opportunity technology provides is not in making traditional mathematical tasks more accessible, but in introducing children to novel activities that are not possible without computers. For example, computer modeling allows children to change parameters in virtual systems created by educators and observe emergent mathematical behaviors, or remix and create their own models. The pedagogical approach of constructionism describes how creating algorithms, programs and models on computers promotes deep mathematical thinking. Technology allows children to experience these complex concepts in a more visual manner. Computer algebra systems are software environments that support and scaffold working with symbolic expressions, as the sketch below illustrates. Some computer algebra systems have intuitive, child-friendly interfaces and therefore can be used in early algebra. Interactive geometry software supports the creation and manipulation of geometric constructions. Both computer algebra systems and interactive geometry software help with several cognitive limitations of young children, such as attention and memory.
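A minimal sketch of what such symbolic support looks like in practice, using the open-source SymPy computer algebra system (the expressions are arbitrary illustrations, not taken from any specific curriculum):

# A computer algebra system keeps symbols exact and can show a learner
# several equivalent forms of the same expression.
from sympy import symbols, expand, factor

x = symbols('x')
print(expand((x + 1)**2))   # x**2 + 2*x + 1
print(factor(x**2 - 1))     # (x - 1)*(x + 1)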
The software scaffolds step-by-step procedures, helping children focus attention. It has "undo" capabilities, lowering frustration when errors happen, and promoting creativity and exploration. Also, such software supports metacognition by making all steps in a problem or a construction visible and editable, so children can reflect on individual steps or the whole journey. Social media Online communities and forums allow educators, researchers and students to share, discuss and remix elementary mathematical content they find or create. Sometimes, traditional media such as texts, pictures and movies are digitized and turned into online social objects, such as open textbooks. Other times, web-native mathematical objects are created, remixed and shared within integrated authoring and discussion environments, such as applets made with Scratch or GeoGebra constructions. Rich media, including video, virtual manipulatives, interactive models and mobile applications, are a characteristic feature of online mathematical communication. Some global collaboration projects between teachers or groups of students with teachers use the web mostly for communication, but others happen in virtual worlds, such as Whyville. Professional development for elementary mathematics educators uses social media in the form of online courses, discussion forums, webinars, and web conferences. This supports teachers in forming PLNs (Personal Learning Networks). Some communities include both students and teachers, such as Art of Problem Solving. Teaching mathematics in context Games and play Learning through play is not new, but the themes of computer and mobile games are relatively more modern. Most teachers now use games in elementary classrooms, and most children in developed countries play learning games at home. Computer games with intrinsically mathematical game mechanics can help children learn novel topics. More extrinsic game mechanics and gamification can be used for time and task management, fluency, and memorization. Sometimes it is not obvious what mathematics children learn by "just playing," but basic spatial and numerical skills gained in free play help with mathematical concepts. Some abstract games such as chess can benefit learning mathematics by developing systems thinking, logic, and reasoning. Roleplaying games invite children to become a character who uses mathematics in daily life or epic adventures, and often use mathematical storytelling. Sandbox games, also called open-world games, such as Minecraft, help children explore patterns, improvise, be mathematically artistic, and develop their own algorithms. Board games can have all of the above aspects, and also promote communication about mathematics in small groups. Teachers working with disadvantaged children note especially large mathematical skill gains after using games in the classroom, possibly because these children do not play such games at home. Many teachers, parents and students design their own games or create versions of existing games. Designing mathematically rich games is one of the staple tasks in constructionism. There is a concern that children who use computer games and technology in general may be more stressed when exposed to pen-and-paper tests. Family mathematics and everyday mathematics While learning mathematics in daily life, such as cooking and shopping, cannot be considered modern, social media provides new twists.
Online networks help parents and teachers share tips on how to integrate daily routines and more formal mathematical learning for children. For example, the "Let's play math" blog hosts carnivals for sharing family mathematics ideas, such as using egg cartons for quick mathematical games. School tasks may involve families collecting data and aggregating it online for mathematical explorations. Pastimes such as geocaching involve families sharing mathematically rich sporting activities that depend on GPS systems or mobile devices. Museums, clubs, stores, and other public places provide blended learning opportunities, with visiting families accessing science and mathematics activities related to the place on their mobile devices. Sciences, social sciences, and the arts In the last several decades, many prominent mathematicians and mathematics enthusiasts have embraced mathematical arts, from popular fractal art to origami. Likewise, elementary mathematics is becoming more artistic. Some popular topics for children include tessellation, computer art, symmetry, patterns, transformations and reflections. The discipline of ethnomathematics studies relationships between mathematics and cultures, including arts and crafts. Some hands-on activities, such as creating tilings, can help children and grown-ups see mathematical art all around them. Project-based learning approaches help students explore mathematics together with other disciplines. For example, children's robotics projects and competitions include mathematical tasks. Some elementary mathematical topics, such as measurement, apply to tasks in many professions and subject areas. Unit studies centered on such concepts contrast with project-based learning, where students use many concepts to achieve the project's goal. See also Natural math References Mathematics education
45001424
https://en.wikipedia.org/wiki/Braina
Braina
Braina is an intelligent personal assistant application for Microsoft Windows marketed by Brainasoft. Braina uses a natural language interface and speech recognition to interact with its users and allows them to use natural language sentences to perform various tasks on their computer. The application can find information on the internet, search for and play songs and videos of the user's choice, take dictation, find and open files, set alarms and reminders, perform math calculations, and control windows and programs. Braina's Android and iOS apps can be used to interact with the system remotely over a Wi-Fi network. The name Braina is a short form of Brain Artificial. The software adapts to the user's behavior over time to better anticipate needs. The software also allows users to type commands using the keyboard instead of saying them. Future plc's TechRadar recognized Braina as one of the top 10 essential free programs of 2015. Braina comes in two versions: the freeware Lite and the paid Pro. References Windows software Artificial intelligence applications Natural language processing software Speech recognition software
880860
https://en.wikipedia.org/wiki/Content%20delivery%20network
Content delivery network
A content delivery network, or content distribution network (CDN), is a geographically distributed network of proxy servers and their data centers. The goal is to provide high availability and performance by distributing the service spatially relative to end users. CDNs came into existence in the late 1990s as a means for alleviating the performance bottlenecks of the Internet as the Internet was starting to become a mission-critical medium for people and enterprises. Since then, CDNs have grown to serve a large portion of the Internet content today, including web objects (text, graphics and scripts), downloadable objects (media files, software, documents), applications (e-commerce, portals), live streaming media, on-demand streaming media, and social media sites. CDNs are a layer in the internet ecosystem. Content owners such as media companies and e-commerce vendors pay CDN operators to deliver their content to their end users. In turn, a CDN pays Internet service providers (ISPs), carriers, and network operators for hosting its servers in their data centers. CDN is an umbrella term spanning different types of content delivery services: video streaming, software downloads, web and mobile content acceleration, licensed/managed CDN, transparent caching, and services to measure CDN performance, load balancing, multi-CDN switching, analytics and cloud intelligence. CDN vendors may cross over into other industries like security, with DDoS protection and web application firewalls (WAF), and WAN optimization. Technology CDN nodes are usually deployed in multiple locations, often over multiple Internet backbones. Benefits include reducing bandwidth costs, improving page load times, and increasing the global availability of content. The number of nodes and servers making up a CDN varies depending on the architecture; some reach thousands of nodes with tens of thousands of servers on many remote points of presence (PoPs), while others build a global network with a small number of geographical PoPs. Requests for content are typically algorithmically directed to nodes that are optimal in some way. When optimizing for performance, locations that are best for serving content to the user may be chosen. This may be measured by choosing locations that are the fewest hops or the lowest number of network seconds away from the requesting client, or that have the highest availability in terms of server performance (both current and historical), so as to optimize delivery across local networks. When optimizing for cost, the least expensive locations may be chosen instead. In an optimal scenario, these two goals tend to align, as edge servers that are close to the end user at the edge of the network may have an advantage in performance or cost. Most CDN providers will provide their services over a varying, defined set of PoPs, depending on the coverage desired, such as United States, International or Global, Asia-Pacific, etc. These sets of PoPs can be called "edges", "edge nodes", "edge servers", or "edge networks" as they would be the closest edge of CDN assets to the end user.
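To make the idea of latency-optimal node selection concrete, here is a toy sketch in Python of reactive probing: measure the TCP connect time to each candidate PoP and pick the fastest. The hostnames are hypothetical, and real CDNs combine many more signals (hop counts, historical availability, cost):

import socket, time

POPS = ["pop-us-east.example.net", "pop-eu-west.example.net", "pop-ap-south.example.net"]

def rtt(host, port=80, timeout=2.0):
    # Return the TCP connect time to host:port in seconds, or None on failure.
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def closest_pop(pops):
    # Reactive probing: pick the PoP with the lowest measured round-trip time.
    reachable = [(t, h) for t, h in ((rtt(h), h) for h in pops) if t is not None]
    return min(reachable)[1] if reachable else None

print(closest_pop(POPS))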
Security and privacy CDN providers profit either from direct fees paid by content providers using their network, or from the user analytics and tracking data collected as their scripts are being loaded onto customers' websites inside their browser origin. As such, these services have been pointed out as a potential privacy intrusion for the purpose of behavioral targeting, and solutions are being created to restore single-origin serving and caching of resources. CDNs serving JavaScript have also been targeted as a way to inject malicious content into the pages using them. The Subresource Integrity mechanism was created in response, to ensure that the page loads a script whose content is known and constrained to a hash referenced by the website author. Content networking techniques The Internet was designed according to the end-to-end principle. This principle keeps the core network relatively simple and moves the intelligence as much as possible to the network end-points: the hosts and clients. As a result, the core network is specialized, simplified, and optimized to only forward data packets. Content Delivery Networks augment the end-to-end transport network by distributing on it a variety of intelligent applications employing techniques designed to optimize content delivery. The resulting tightly integrated overlay uses web caching, server-load balancing, request routing, and content services. Web caches store popular content on servers that have the greatest demand for the content requested. These shared network appliances reduce bandwidth requirements, reduce server load, and improve the client response times for content stored in the cache. Web caches are populated based on requests from users (pull caching) or based on preloaded content disseminated from content servers (push caching). Server-load balancing uses one or more techniques, including service-based (global load balancing) or hardware-based (i.e. layer 4–7 switches, also known as a web switch, content switch, or multilayer switch), to share traffic among a number of servers or web caches. Here the switch is assigned a single virtual IP address. Traffic arriving at the switch is then directed to one of the real web servers attached to the switch. This has the advantage of balancing load, increasing total capacity, improving scalability, and providing increased reliability by redistributing the load of a failed web server and providing server health checks. A content cluster or service node can be formed using a layer 4–7 switch to balance load across a number of servers or a number of web caches within the network. Request routing directs client requests to the content source best able to serve the request. This may involve directing a client request to the service node that is closest to the client, or to the one with the most capacity. A variety of algorithms are used to route the request. These include Global Server Load Balancing, DNS-based request routing, Dynamic metafile generation, HTML rewriting, and anycasting. Proximity—choosing the closest service node—is estimated using a variety of techniques including reactive probing, proactive probing, and connection monitoring. CDNs use a variety of methods of content delivery including, but not limited to, manual asset copying, active web caches, and global hardware load balancers.
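Returning to the Subresource Integrity mechanism described under Security and privacy above: a page that loads a script from a CDN can pin that script to a cryptographic digest, so a tampered copy is refused by the browser. A minimal sketch follows; the URL and the digest value are placeholders, not real values:

<!-- The integrity attribute carries the base64-encoded SHA-384 digest of the
     expected file; the browser refuses to execute the script if the digest
     of what the CDN actually served differs. -->
<script src="https://cdn.example.com/library.min.js"
        integrity="sha384-dGVzdF9wbGFjZWhvbGRlcl9oYXNoX3ZhbHVlX29ubHk"
        crossorigin="anonymous"></script>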
Content service protocols Several protocol suites are designed to provide access to a wide variety of content services distributed throughout a content network. The Internet Content Adaptation Protocol (ICAP) was developed in the late 1990s to provide an open standard for connecting application servers. A more recently defined and robust solution is provided by the Open Pluggable Edge Services (OPES) protocol. This architecture defines OPES service applications that can reside on the OPES processor itself or be executed remotely on a Callout Server. Edge Side Includes (ESI) is a small markup language for edge-level dynamic web content assembly. It is fairly common for websites to have generated content, whether because of changing content like catalogs or forums, or because of personalization. This creates a problem for caching systems. To overcome this problem, a group of companies created ESI.
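For example, an edge cache can assemble a mostly static page around a personalized fragment fetched separately. A minimal sketch using the ESI include element follows; the URLs are hypothetical:

<!-- The surrounding page is cached at the edge; only the fragment is fetched
     per user. "alt" names a fallback source, and onerror="continue" tells the
     edge to render the rest of the page even if the fragment fails. -->
<esi:include src="http://example.com/user/greeting.html"
             alt="http://example.com/generic/greeting.html"
             onerror="continue"/>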
Peer-to-peer CDNs In peer-to-peer (P2P) content-delivery networks, clients provide resources as well as use them. This means that, unlike client–server systems, content-centric networks can actually perform better as more users begin to access the content (especially with protocols such as BitTorrent that require users to share). This property is one of the major advantages of using P2P networks because it makes the setup and running costs very small for the original content distributor. Private CDNs If content owners are not satisfied with the options or costs of a commercial CDN service, they can create their own CDN. This is called a private CDN. A private CDN consists of PoPs (points of presence) that serve content only for their owner. These PoPs can be caching servers, reverse proxies or application delivery controllers. It can be as simple as two caching servers, or large enough to serve petabytes of content. Large content distribution networks may even build and set up their own private network to distribute copies of content across cache locations. Such private networks are usually used in conjunction with public networks as a backup option in case the capacity of the private network is not enough or a failure leads to capacity reduction. Since the same content has to be distributed across many locations, a variety of multicasting techniques may be used to reduce bandwidth consumption. Over private networks, it has also been proposed to select multicast trees according to network load conditions to more efficiently utilize available network capacity. CDN trends Emergence of telco CDNs The rapid growth of streaming video traffic requires large capital expenditures by broadband providers in order to meet this demand and to retain subscribers by delivering a sufficiently good quality of experience. To address this, telecommunications service providers (TSPs) have begun to launch their own content delivery networks as a means to lessen the demands on the network backbone and to reduce infrastructure investments. Telco CDN advantages Because they own the networks over which video content is transmitted, telco CDNs have advantages over traditional CDNs. They own the last mile and can deliver content closer to the end user because it can be cached deep in their networks. This deep caching minimizes the distance that video data travels over the general Internet and delivers it more quickly and reliably. Telco CDNs also have a built-in cost advantage, since traditional CDNs must lease bandwidth from them and build the operator's margin into their own cost model. In addition, by operating their own content delivery infrastructure, telco operators have better control over the utilization of their resources. Content management operations performed by CDNs are usually applied without (or with very limited) information about the network (e.g., topology, utilization, etc.) of the telco operators with which they interact or have business relationships. These operations pose a number of challenges for the telco operators, which have a limited sphere of action in the face of the impact of these operations on the utilization of their resources. In contrast, the deployment of telco CDNs allows operators to implement their own content management operations, which enables them to have better control over the utilization of their resources and, as such, to provide better quality of service and experience to their end users. Federated CDNs In June 2011, StreamingMedia.com reported that a group of TSPs had founded an Operator Carrier Exchange (OCX) to interconnect their networks and compete more directly against large traditional CDNs like Akamai and Limelight Networks, which have extensive PoPs worldwide. This way, telcos are building a federated CDN offering, which is more interesting for a content provider willing to deliver its content to the aggregated audience of this federation. It is likely that in the near future other telco CDN federations will be created. They will grow by the enrollment of new telcos joining the federation and bringing network presence and their Internet subscriber bases to the existing ones. Improving CDN performance using the EDNS0 option Traditionally, CDNs have used the IP address of the client's recursive DNS resolver to geo-locate the client. While this is a sound approach in many situations, it leads to poor client performance if the client uses a non-local recursive DNS resolver that is far away. For instance, a CDN may route requests from a client in India to its edge server in Singapore, if that client uses a public DNS resolver in Singapore, causing poor performance for that client. Indeed, a recent study showed that in many countries where public DNS resolvers are in popular use, the median distance between the clients and their recursive DNS resolvers can be as high as a thousand miles. In August 2011, a global consortium of leading Internet service providers led by Google announced their official implementation of the edns-client-subnet IETF Internet-Draft, which is intended to accurately localize DNS resolution responses. The initiative involves a limited number of leading DNS service providers, such as Google Public DNS, and CDN service providers as well. With the edns-client-subnet EDNS0 option, CDNs can now utilize the IP address of the requesting client's subnet when resolving DNS requests. This approach, called end-user mapping, has been adopted by CDNs, and it has been shown to drastically reduce round-trip latencies and improve performance for clients who use public DNS or other non-local resolvers. However, the use of EDNS0 also has drawbacks, as it decreases the effectiveness of caching resolutions at the recursive resolvers, increases the total DNS resolution traffic, and raises a privacy concern by exposing the client's subnet.
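The effect of the option can be observed from the command line. The dig utility (BIND 9.10 or later) can attach a client subnet to a query; the hostname below is a placeholder and 203.0.113.0/24 is a documentation-reserved range:

# Ask a public resolver for a CDN-hosted name, announcing a client subnet;
# an ECS-aware CDN may return an edge address tailored to that subnet.
dig @8.8.8.8 cdn-hosted.example.com +subnet=203.0.113.0/24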
Virtual CDN (vCDN) Virtualization technologies are being used to deploy virtual CDNs (vCDNs) with the goal of reducing content provider costs and, at the same time, increasing elasticity and decreasing service delay. With vCDNs, it is possible to avoid traditional CDN limitations, such as performance, reliability and availability, since virtual caches are deployed dynamically (as virtual machines or containers) in physical servers distributed across the provider's geographical coverage. As the virtual cache placement is based on both the content type and server or end-user geographic location, the vCDNs have a significant impact on service delivery and network congestion. Image Optimization and Delivery (Image CDNs) In 2017, Addy Osmani of Google started referring to software solutions that could integrate naturally with the Responsive Web Design paradigm (with particular reference to the <picture> element) as Image CDNs. The expression referred to the ability of a web architecture to serve multiple versions of the same image through HTTP, depending on the properties of the browser requesting it, as determined by either the browser or the server-side logic. The purpose of Image CDNs was, in Google's vision, to serve high-quality images (or, better, images perceived as high-quality by the human eye) while preserving download speed, thus contributing to a great user experience (UX). Arguably, the Image CDN term was originally a misnomer, as neither Cloudinary nor Imgix (the examples quoted by Google in the 2017 guide by Addy Osmani) were, at the time, a CDN in the classical sense of the term. Shortly afterwards, though, several companies offered solutions that allowed developers to serve different versions of their graphical assets according to several strategies. Many of these solutions were built on top of traditional CDNs, such as Akamai, CloudFront, Fastly, Verizon Digital Media Services and Cloudflare. At the same time, other solutions that already provided an image multi-serving service joined the Image CDN definition by either offering CDN functionality natively (ImageEngine) or integrating with one of the existing CDNs (Cloudinary/Akamai, Imgix/Fastly). While providing a universally agreed-on definition of what an Image CDN is may not be possible, generally speaking, an Image CDN supports the following three components: A Content Delivery Network (CDN) for fast serving of images. Image manipulation and optimization, either on the fly through URL directives, in batch mode (through manual upload of images), or fully automatic (or a combination of these). Device detection (also known as device intelligence), i.e. the ability to determine the properties of the requesting browser and/or device through analysis of the User-Agent string, HTTP Accept headers, Client-Hints or JavaScript.
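As a sketch of the browser-side half of this mechanism, the HTML below asks an image CDN for differently sized renditions of one asset; the hostname and the ?w= resizing directive are hypothetical, as each service defines its own URL syntax:

<!-- The browser picks a candidate from srcset based on viewport and display
     density; the server generates each width on demand from one master image. -->
<picture>
  <source media="(min-width: 800px)"
          srcset="https://img.example.com/photo.jpg?w=1200 1200w,
                  https://img.example.com/photo.jpg?w=800 800w">
  <img src="https://img.example.com/photo.jpg?w=400" alt="A sample photo">
</picture>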
Notable content delivery service providers Free CDNs cdnjs BootstrapCDN Cloudflare JSDelivr PageCDN Coral Content Distribution Network (Defunct) Traditional commercial CDNs Akamai Technologies Amazon CloudFront Aryaka Azure CDN CacheFly CDN77 CDNetworks CenterServ ChinaCache Cloudflare Cotendo EdgeCast Networks Fastly Google Cloud CDN HP Cloud Services Incapsula Instart Internap LeaseWeb Lumen Technologies, formerly Level 3 Communications Limelight Networks MetaCDN NACEVI OnApp GoDaddy OVHcloud Rackspace Cloud Files Speedera Networks StackPath StreamZilla Wangsu Science & Technology Yottaa Telco CDNs AT&T Inc. Bharti Airtel Bell Canada BT Group China Telecom Chunghwa Telecom Deutsche Telekom KT KPN Lumen Technologies, formerly CenturyLink Megafon NTT Pacnet PCCW Qualitynet Singtel SK Broadband Spark New Zealand Tata Communications Telecom Argentina Telefonica Telenor TeliaSonera Telin Telstra Telus TIM Turk Telekom Verizon Commercial CDNs using P2P for delivery BitTorrent, Inc. Internap Pando Networks Rawflow Multi CDN MetaCDN Warpcache In-house CDN Netflix See also Application software Bel Air Circuit Comparison of streaming media systems Comparison of video services Content delivery network interconnection Content delivery platform Data center Digital television Dynamic site acceleration Edge computing Internet radio Internet television IPTV List of music streaming services List of streaming media systems Multicast NetMind Open Music Model Over-the-top content P2PTV Protection of Broadcasts and Broadcasting Organizations Treaty Push technology Software as a service Streaming media Webcast Web syndication Web television References Further reading Applications of distributed computing Cloud storage Computer networking Digital television Distributed algorithms Distributed data storage Distributed data storage systems File sharing File sharing networks Film and video technology Internet broadcasting Internet radio Streaming television Multimedia Online content distribution Peer-to-peer computing Peercasting Streaming Streaming media systems Video hosting Video on demand
13247184
https://en.wikipedia.org/wiki/SMS%20%28hydrology%20software%29
SMS (hydrology software)
SMS (Surface-water Modeling System) is a program from Aquaveo for building and simulating surface-water models. It features 1D and 2D modeling and a unique conceptual model approach. Currently supported models include ADCIRC, CMS-FLOW2D, FESWMS, TABS, TUFLOW, BOUSS-2D, CGWAVE, STWAVE, CMS-WAVE (WABED), GENESIS, PTM, and WAM. Version 9.2 introduced the use of XMDF (eXtensible Model Data Format), which is a compatible extension of HDF5. XMDF files are smaller and allow faster access times than ASCII files. History SMS was initially developed by the Engineering Computer Graphics Laboratory at Brigham Young University (renamed in September 1998 to the Environmental Modeling Research Laboratory, or EMRL) in the late 1980s on Unix workstations. The development of SMS was funded primarily by the United States Army Corps of Engineers, and it is still known as the Department of Defense Surface-water Modeling System, or DoD SMS. It was ported to Windows platforms in the mid-1990s, and support for HP-UX, IRIX, OSF/1, and Solaris platforms was discontinued. In April 2007, the main software development team at EMRL entered private enterprise as Aquaveo LLC, which continues to develop SMS and other software products, such as WMS (Watershed Modeling System) and GMS (Groundwater Modeling System). Examples of SMS Implementation SMS modeling was used to "determine flooded areas in case of failure or revision of a weir in combination with a coincidental 100-year flood event" (Gerstner, Belzner, and Thorenz, 975). Furthermore, "concerning the water level calculations in case of failure of a weir, the Bavarian Environmental Agency provided the Federal Waterways Engineering and Research Institute with those two-dimensional depth-averaged hydrodynamic models, which are covering the whole Bavarian part of the river Main. The models were created with the software Surface-Modeling System (SMS) of Aquaveo LLC" (Gerstner, Belzner, and Thorenz, 976). Another study "describes the mathematical formulation, numerical implementation, and input specifications of rubble mound structures in the Coastal Modeling System (CMS) operated through the Surface-water Modeling System (SMS)" (Li, et al., 1). Describing the input specifications, the authors write, "Working with the SMS interface, users can specify rubble mound structures in the CMS by creating datasets for different structure parameters. Five datasets are required for this application" (Li, et al., 3) and "users should refer to Aquaveo (2010) for generating a XMDF dataset (*.h5 file) under the SMS" (Li, et al., 5). A further study examined the "need of developing mathematical models for determining and predicting water quality of 'river-type' systems. It presents a case study for determining the pollutant dispersion for a section of the River Prut, Ungheni town, which was filled with polluted water with oil products from its tributary river Delia" (Marusic and Ciufudean, 177). "The obtained numerical models were developed using the program Surface-water Modeling System (SMS) v.10.1.11, which was designed by experts from Aquaveo company. The hydrodynamics of the studied sector, obtained using the SMS module named RMA2 [13], served as input for the RMA module 4, which determined the pollutant dispersion" (Marusic and Ciufudean, 178–179).
This study focused on finding "recommendations for optimization" of the "Chusovskoy water intake located in the confluence zone of two rivers with essentially different hydrochemical regimes and in the backwater zone of the Kamskaya hydroelectric power station" (Lyubimova, et al., 1). "A two-dimensional (in a horizontal plane) model for the examined region of the water storage basin was constructed by making use of the software product SMS v.10 of the American company AQUAVEO LLC" (Lyubimova, et al., 2). Evaluations of the SMS-derived, two-dimensional model as well as a three-dimensional model yielded the discovery that "the selective water intake from the near-surface layers can essentially reduce hardness of potable water consumed by the inhabitants of Perm" (Lyubimova, et al., 6). References External links US Army Corps of Engineers – DoD SMS white paper SMS Documentation Wiki Scientific simulation software Hydrology software
50148729
https://en.wikipedia.org/wiki/Wendy%20Hui%20Kyong%20Chun
Wendy Hui Kyong Chun
Wendy Hui Kyong Chun (born 1969) is the Canada 150 Research Chair in New Media in the School of Communication at Simon Fraser University. Previously, she was Professor and Chair of Modern Culture and Media at Brown University. Her theoretical and critical approach to digital media draws from her training in both systems design engineering and English literature. Chun leads the Digital Democracies Institute at Simon Fraser University as a member of its directorial team, a collective of people involved with the university's School of Communication. She is the author of several books, including Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition (MIT Press, 2021), as well as a trilogy that includes Updating to Remain the Same: Habitual New Media (MIT Press, 2016), Programmed Visions: Software and Memory (MIT Press, 2011), and Control and Freedom: Power and Paranoia in the Age of Fiber Optics (MIT Press, 2006). She has also written and co-authored various articles pertaining to the digital media field. Her research spans the fields of digital media, new media, software studies, comparative media studies, critical race studies, and critical theory. Life Chun holds a B.S. in Systems Design Engineering and English Literature from the University of Waterloo (1992) and a Ph.D. in English from Princeton University. She is a Guggenheim Fellow (2017), American Academy in Berlin Fellow (2017), and ACLS Fellow (2016). She has been a member of the Institute for Advanced Study in Princeton, NJ, a fellow at Harvard's Radcliffe Institute for Advanced Study, and Wriston Fellow at Brown University. Chun has been the Velux Visiting Professor of Management, Politics, and Philosophy at the Copenhagen Business School (2015–16), the Wayne Morse Chair for Law and Politics at the University of Oregon (2014–15), Gerald LeBoff Visiting Scholar at NYU (2014), as well as Visiting Professor at the University of St. Gallen (Switzerland, 2014), Leuphana University (Germany, 2013–14), the Folger Institute (2013), and Visiting Associate Professor in Harvard's History of Science Department. Work and influence Chun's work has both set and questioned the terms of theory and criticism in new and digital media studies. She co-edited New Media, Old Media: A History and Theory Reader with Thomas Keenan. Chun's introduction to the book is skeptical of the phrase "new media" and the emerging area of study it named, starting in the early 1990s. In "On Software, or the Persistence of Visual Knowledge" (2005), Chun links the emergence of software to shifts in labor that replaced the feminized function of the "computer" in science labs with the electronic computer. In the 1940s, early computers such as the ENIAC were largely programmed by women, under the direction of primarily male managers. As programming was professionalized, this work, which had been viewed as clerical, "sought to become an engineering and academic field in its own right" (32). The professionalization of programming grew as successive layers of code distanced programmers from machine language, eventually allowing software to exist separate from the programmer as a commodity that could travel between machines. Women's work as the first computer programmers was, by contrast, closer to the physical machine, and potentially more difficult.
Chun's first book, Control and Freedom: Power and Paranoia in the Age of Fiber Optics (2006), deconstructs the promises by which the early Internet, "one of the most compromising media to date" (144), was sold as an empowering technology of freedom. The book explores how freedom has become inextricable from control and how this conflation undermines the democratic potential of the Internet. Chun uses several approaches to analyze the relationship between control and freedom: the freedom that the Internet enables contrasted with the paranoia and control the technology can exert over us, the link between software and networks, and society's expectations of technology. The book draws on a wide variety of texts—U.S. court decisions on cyberporn, hardware specifications, software interfaces, cyberpunk novels—to examine how digital technologies remap forms of social control and produce new experiences of race and sexuality. In her second book, Programmed Visions: Software and Memory (2011), Chun argues that cycles of obsolescence and renewal (e.g. mobile mobs, Web 3.0, cloud computing) are byproducts of new media's logic of "programmability". The book asks how computers have become organizing metaphors for understanding our neoliberal, networked moment (Updating to Remain the Same, 19). Seb Franklin, writing for The English Association's The Year's Work in Critical and Cultural Theory, observes that the methodology developed in Control and Freedom, "in which archives of critical theory and the history of technology meet close analyses of software and hardware rooted in Chun's training as a systems design engineer, is refined and extended in Programmed Visions, providing a basis for a detailed inquiry into the ways in which software and governmentality are historically and logically intertwined." Casey Collan writes in a review of Programmed Visions for Rhizome, "'programmability,' the logic of computers, has come to reach beyond screens into both the systems of government and economics and the metaphors we use to make sense of the world. 'Without [computers, human and mechanical],' writes Chun, 'there would be no government, no corporations, no schools, no global marketplace, or, at the very least, they would be difficult to operate...Computers, understood as networked software and hardware machines, are—or perhaps more precisely set the grounds for—neoliberal governmental technologies...not simply through the problems (population genetics, bioinformatics, nuclear weapons, state welfare, and climate) they make it possible to both pose and solve, but also through their very logos, their embodiment of logic.'" In Updating to Remain the Same: Habitual New Media (2016), Chun argues that "our media matter most when they seem not to matter at all" (1). When they are no longer new but habitual, they become automatic and unconscious. Chun speaks of what she refers to as "creepy" instruments of social habituation that are nonetheless also sold as deeply personal, marking the distinction between public and private, memory and storage, individual action and social control. The book moves forward from her earlier work and proposes a theory of habituation, dealing with notions of new media and how people reorient themselves as new media continually updates.
Zara Dinnen's review of Chun frames the book in two important sections: the first regarding the imagined potential of the networks that make up the Internet, and the second dealing with what Chun refers to as the "YOUs" that make up the Internet. A recent work by Chun proposes the term "net-munity" to discuss the COVID-19 pandemic and the meaning of neighbor and community during times of uncertainty, describing notions of neighborly and social responsibility through the lens of the pandemic and the ways contact tracing has raised questions of community and responsibility. Awards and recognitions Visiting Scholar, Annenberg School of Communications, University of Pennsylvania, Fall 2018 Selected works New Media, Old Media: A History and Theory Reader (co-edited with Thomas Keenan, Routledge, 2005) Control and Freedom: Power and Paranoia in the Age of Fiber Optics (MIT Press, 2006) Programmed Visions: Software and Memory (MIT Press, 2013) New Media, Old Media: A History and Theory Reader, 2nd edition (co-edited with Anna Watkins Fisher and Thomas Keenan, Routledge, 2015) Updating to Remain the Same: Habitual New Media (MIT Press, 2016) Chun also co-edited several journal special issues: "New Media and American Literature," American Literature (with Tara McPherson and Patrick Jagoda, 2013) "Race and/as Technology," Camera Obscura (with Lynne Joyrich, 2009) References 1969 births Living people Brown University faculty University of Waterloo alumni Princeton University alumni Simon Fraser University faculty
1197057
https://en.wikipedia.org/wiki/CellML
CellML
CellML is an XML-based markup language for describing mathematical models. Although it could theoretically describe any mathematical model, it was originally created with the Physiome Project in mind, and hence is used primarily to describe models relevant to the field of biology. This is reflected in its name CellML, although this is simply a name, not an abbreviation. CellML is growing in popularity as a portable description format for computational models, and groups throughout the world are using CellML for modelling or developing software tools based on CellML. CellML is similar to the Systems Biology Markup Language (SBML) but provides greater scope for model modularity and reuse, and is not specific to descriptions of biochemistry. History The CellML language grew from a need to share models of cardiac cell dynamics among researchers at a number of sites across the world. The original working group formed in 1998 consisted of David Bullivant, Warren Hedley, and Poul Nielsen; all three were at that time members of the Department of Engineering Science at the University of Auckland. The language was an application of the XML specification developed by the World Wide Web Consortium – the decision to use XML was based on late 1998 recommendations from Warren Hedley and André (David) Nickerson. Existing XML-based languages were leveraged to describe the mathematics (content MathML), metadata (RDF), and links between resources (XLink). The CellML working group first became aware of the SBML effort in late 2000, when Warren Hedley attended the 2nd workshop on Software Platforms for Systems Biology in Tokyo. The working group collaborated with a number of researchers at Physiome Sciences Inc. (particularly Melanie Nelson, Scott Lett, Mark Grehlinger, Prasad Ramakrishna, Jeremy Rice, Adam Muzikant, and Kam-Chuen Jim) to draft the initial CellML 1.0 specification, which was published on 11 August 2001. This first draft was followed by specifications for CellML Metadata and an update to CellML to accommodate structured nesting of models with the addition of the <import> element. Physiome Sciences Inc. also produced the first CellML-capable software. The National Resource for Cell Analysis and Modeling (NRCAM) at the University of Connecticut Health Center also produced early CellML-capable software called Virtual Cell. In 2002 the CellML 1.1 specification was written, in which imports were added. Imports provide the ability to incorporate external components into a model, enabling modular modelling. This specification was frozen in early 2006. Work has continued on metadata and other specifications. In July 2009 the CellML website was completely revamped, and an initial version of the new CellML repository software (PMR2) was released. The structure of a CellML model A CellML model consists of a number of components, each described in its own component element. A component can be an entirely conceptual entity created for modelling convenience, or it can have some real physical interpretation (for example, it could represent the cell membrane). Each component contains a number of variables, which must be declared by placing a variable element inside the component. For example, a component representing a cell membrane may have a variable called V representing the potential difference (voltage) across the cell membrane. Mathematical relationships between variables are expressed within components, using MathML.
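A minimal sketch of this structure is shown below; the model, component, and variable names are hypothetical, and the units block defines millivolt in terms of the built-in SI unit volt:

<model name="toy_membrane" xmlns="http://www.cellml.org/cellml/1.0#">
  <units name="millivolt">
    <unit prefix="milli" units="volt"/>
  </units>
  <component name="environment">
    <variable name="time" units="second" public_interface="out"/>
  </component>
  <component name="membrane">
    <variable name="time" units="second" public_interface="in"/>
    <variable name="V" units="millivolt" initial_value="-80"/>
  </component>
  <connection>
    <map_components component_1="environment" component_2="membrane"/>
    <map_variables variable_1="time" variable_2="time"/>
  </connection>
</model>

The connection element, described further below, declares that the two components' time variables are the same quantity.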
MathML is used to make declarative expressions (as opposed to procedural statements as in a computer programming language). However, most CellML processing software will only accept a limited range of mathematics (for example, some processing software requires equations with a single variable on one side of an equality). The choice of MathML makes CellML particularly suited to describing models containing differential equations. There is no mechanism for the expression of stochastic models or any other form of randomness. Components can be connected to other components using a connection element, which gives the names of the two components to be connected, and the variables in the first component which are mapped to variables in the second component. Such connections are a statement that a variable in one component is equivalent to a variable in another component. CellML models also allow relationships between components to be expressed. The CellML specification defines two types of relationship, encapsulation and containment; however, more can be defined by the user. The containment relationship is used to express that one component is physically within another. The encapsulation relationship is special because it is the only relationship that affects the interpretation of the rest of the model. The effect of encapsulation is that components encapsulated beneath other components are private and cannot be accessed except by the component directly above in the encapsulation tree. The modeller is free to use encapsulation as a conceptual tool, and it does not necessarily have any physical interpretation. Specifications CellML is defined by core specifications as well as additional specifications for metadata, used to annotate models and specify simulations. CellML 1.0 CellML 1.0 was the first final specification, and is used to describe many of the models in the CellML Model Repository. CellML 1.0 has some biochemistry-specific elements for describing the role of variables in a reaction model. CellML 1.1 CellML 1.1 introduced the ability to import components and units. In order to fully support this feature, variables in CellML 1.1 accept variable names as initial values. Metadata specifications CellML has several metadata specifications, used to annotate models or provide information for running and/or visualizing simulations of models. The metadata 1.0 specification is used to annotate models with a variety of information: relevant references, authorship information, the species the model is relevant to, and so on. Simulation metadata provides the information required to reproduce specific simulations using a CellML model. Graphing metadata provides information to specify particular visualizations of simulation output, for example to reproduce a particular graph from a paper. CellML.org CellML.org aims to provide a focal point for the CellML community. Members can submit, review, and update models and receive feedback and help from the community. A CellML discussion mailing list can be found at CellML-discussion mailing list. The scope of this mailing list includes everything related to the development and use of CellML. A repository of several hundred biological models encoded in CellML can be found on the CellML community website at CellML Model Repository.
These models are actively undergoing a curation process that aims to provide annotations with biological ontologies such as the Gene Ontology and to validate the models against standards of unit balance and biophysical constraints such as conservation of mass, charge, and energy. References External links CellML homepage IUPS Physiome Project Physiome JAPAN Project Interactive cell models Java versions of many CellML cardiac models. See also SBML BioPAX XML markup languages Mathematical markup languages XML-based standards
2413446
https://en.wikipedia.org/wiki/Standard%20%28warez%29
Standard (warez)
Standards in the warez scene are defined by groups of people who have been involved in its activities for several years and have established connections to large groups. These people form a committee, which creates drafts for approval by the large groups. Outside the warez scene, in what is often referred to as p2p, there are no global rules similar to those of the scene, although some groups and individuals may have their own internal guidelines they follow. In warez distribution, all releases must follow these predefined standards to become accepted material. The standards committee usually cycles through several drafts, decides which is best suited for the purpose, and then releases the draft for approval. Once the draft has been e-signed by several bigger groups, it becomes ratified and accepted as the current standard. There are separate standards for each category of releases. All groups are expected to know and follow the standards. What is defined There are rules for naming and organizing files, rules that dictate how a file must be packaged, and a requirement that an NFO file containing required information be added with the content. Format The first part of a standards document usually defines the format properties for the material, like codec, bitrate, resolution, file type and file size. Creators of the standard usually do comprehensive testing to find optimal codecs and settings for sound and video to maximize image quality in the selected file size. When choosing file size, the limiting factor is the size of the media to be used (such as 700 MB for CD-R). The standards are designed such that a certain amount of content will fit on each piece of media, with the best possible quality in terms of size. If more discs are required for sufficient quality, the standard will define the circumstances where it is acceptable to expand to a second or third disc. Newer video standards moved away from size constraints and replaced them with a quality-based alternative such as the use of CRF. New codecs are usually tested annually to check whether any offer a conclusive enhancement in quality or compression time. In general, quality is not sacrificed for speed, and the standards will usually opt for the highest quality possible, even if this takes much longer. For example, releases using the Xvid encoder must use the two-pass encoding method, which takes twice as long as a single pass but achieves much higher quality; similarly, DVD-R releases that must be re-encoded often use 6 or 8 passes to get the best quality. When choosing the file format, platform compatibility is important. Formats are chosen such that they can be used on any major platform with little hassle. Some formats such as CloneCD can only be used on Windows computers, and these formats are generally not chosen for use in the standards. Packaging Next, the standard usually talks about how to package the material. Allowed package formats today are limited to RAR and ZIP, of which the latter is used only in 0-day releases. The sizes of the archives within the distributed file vary from the traditional 3½" floppy disk (1.44 MB) or extra-high density disk (2.88 MB) to 5 MB, 15 MB (typical for CD images) or 20 MB (typical for CD images of console releases), 50 MB files (typical for DVD images), and 100 MB (for dual-layer DVD images).
These measurements are not equivalent to the traditional measurement of file size (which is 1024 KB to a MB, 1024 MB to a GB); in a typical DVD release, each RAR file is exactly 50,000,000 bytes, not 52,428,800 bytes (50 megabytes in binary prefix). Formerly, the size of volumes was limited by the RAR file naming scheme, which produced extensions .rar, .r00 and so on through .r99. This allowed for 101 volumes in a single release before the naming switched to s00, s01 and so on. For example, a DVD-R image (4.37 GiB), split into 101 pieces, produces volumes smaller than 50 MB. The new RAR naming format, name.part001.rar, removes the limit, although the individual split archives continue to be 50 MB for historical reasons and because the old RAR naming format is still widely used. Different compression levels are used for each type of material being distributed. The reason for this is that some material compresses much better than others. Movies and MP3 files are already compressed to near maximum capacity. Repacking them would just create larger files and increase decompression time. Ripped movies are still packaged due to the large file size, but compression is disallowed and the RAR format is used only as a container. Because of this, modern playback software can easily play a release directly from the packaged files, and even stream it as the release is downloaded (if the network is fast enough). MP3 and music video releases are an exception in that they are not packaged into a single archive like almost all other sections. These releases have content that is not further compressible without loss of quality, but also have small enough files that they can be transferred reliably without breaking them up. Since these releases rarely have large numbers of files, leaving them unpackaged is more convenient and allows for easier scripting. For example, scripts can read ID3 information from MP3s and sort releases based on those contents. Naming Rules for naming files and folders are an important part of the standards. Correctly named folders make it easier to maintain clean archives, and unique filenames allow dupecheck to work properly. There is a defined character set which can be used in the naming of folders. The selected character set is chosen to minimize problems due to the many platforms a release may encounter during its distribution. Since FTP servers, operating systems or file systems may not allow special characters in file or directory names, only a small set of characters is allowed. Substitutions are made where special characters would normally be used (e.g. ç replaced by c) or these characters are omitted, such as an apostrophe. This can happen automatically by site scripts. As a note, spaces are explicitly disallowed in current standards and are substituted with underscores or full stops. The ubiquitous character set includes the upper- and lower-case English alphabet, numerals, and several basic punctuation marks. It is outlined below:
ABCDEFGHIJKLMNOPQRSTUVWXYZ
abcdefghijklmnopqrstuvwxyz
0123456789-._()
A typical example of the folder name of a movie release would be:
Title.Of.The.Movie.YEAR.Source.Codec-GROUP
The Xvid scene does not allow the use of parentheses, and the BDR scene also does not allow the use of an underscore, while those are common with music releases. Dots are not used in the required naming scheme for music videos. Square brackets are not defined in any ruleset; however, they are used by p2p groups that do not follow these rules, of which the best-known example is aXXo.
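To illustrate the character-set rule, the following Python sketch checks a candidate directory name against the set quoted above; the release names are made up for the example:

import re

# Allowed characters per the scene naming rules above (spaces are disallowed).
ALLOWED = re.compile(r"^[A-Za-z0-9._()-]+$")

def valid_release_name(name):
    # Checks the character set only; individual sections impose stricter
    # schemes (e.g. no parentheses in the Xvid scene).
    return bool(ALLOWED.match(name))

print(valid_release_name("Title.Of.The.Movie.2009.DVDRip.XviD-GROUP"))  # True
print(valid_release_name("Title Of The Movie [2009]-GROUP"))            # False: spaces and brackets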
Date Standards documents often have a defined date when the rules take effect. The warez scene typically follows the UTC time standard. There is no formal record documenting correct times for all releases. Depending on geographical location and the timing of releases, release sites receive software releases at slightly different times. Release times in any single source may vary by as much as two weeks. Consequences If a group violates a standard, the release will be nuked. Another group will often proper the release. This proper usually requires a sample or a detailed explanation to prove the flaw in the material, unless the flaw was clear enough for the release to be nuked at releasing time. Flaws that are not immediately visible can be found during testing of the material, such as a broken crack or a bad serial. These sanctions are social in nature and can be initiated by anyone within the community. Video standards There are several standards for releasing movies, TV show episodes and other video material to the scene. VCD releases use the less efficient MPEG-1 format; they are low quality, but can be played back on most standalone DVD players. SVCD releases use MPEG-2 encoding, have half the video resolution of DVDs, and can also be played back on most DVD players. DVD-R releases use the same format as retail DVD-Videos, and are therefore larger in size. Finally, DivX, Xvid, H.264/MPEG-4 AVC and more recently HEVC releases use the much more efficient MPEG standards. Generally, only middle- to top-end DVD players can play back DivX or Xvid files, while Blu-ray players are required to handle H.264 files. There are many different formats because the standards have always been a function of players, codec development and the pursuit of the best possible quality in terms of size. This results in a series of evolutionary stages and improvements that have been introduced gradually. The only film format that has not changed since the early days is the DVDR. The scene still holds on to this format, but it is becoming less important due to Blu-rays being the main source for retail releases. VCD Scene rules require the releasing group to spread theatrical VCDs in .bin/.cue files that can be burned on a CD, although the number of CDs is often dictated by the length of the movie or video. One movie typically uses two CDs, although length may force the release to be a 3 or 4 CD release. The source of these theatrical releases is typically analog, such as CAM, telecine or telesync releases (movies recorded by a camera in theatres, often with external audio sources). VCDs from other sources such as DVD, VHS, TV, pay-per-view specials, porn or anime may also be released in the .mpg or .asf format. DVD and VHS rips are only allowed if no screener was released before. Scene VCDs popped up in 1998, but unlicensed digital versions of films had already appeared in early 1997 on private FTP networks. Eviliso, VCD-Europe, FTF and Immortal VCD are groups that have released VCD movies. In 1999 there were 15 to 20 groups. Because of their low quality, VCD releases declined in favor of SVCD and XviD. VCDs are often larger than these higher-quality files, making VCDs even less attractive. VCDs used for music videos got their own set of standards on October 1, 2002. SVCD Scene rules require the releasing group to spread SVCDs in .bin/.cue files that fit on 700 MiB CDs. One movie typically uses two CDs, although length may force the release to be a 3 or 4 CD release.
The content source is sometimes analog, such as cam, telecine or telesync releases; R5, DVDSCR or retail DVD sources are also used for SVCDs. The advantage of SVCD is that it can be played on any standalone DVD player, but as DivX-capable players took over the market and more bandwidth became available for downloading DVDRs, SVCD became obsolete. Around 2007, the stream of SVCD releases from the scene died out. Standard definition video Standard definition rips have a resolution that is lower than high-definition video. DivX and Xvid for retail and bootleg sources MPEG-4 release standards are set in the so-called TDX rules. The DivX codec originally gained popularity because it provided a good compromise between film quality and file size. Approximately 25% of the space occupied by a DVD is enough for a DivX encode to have DVD-quality output. The first standards were created through meetings and debates of Team DivX (TDX) in 2000. This group consisted of the leaders of the top 5 DivX releasing groups and topsite operators, along with rippers and encoders. It was formed because they thought "the new Div/X scene was a bit unmoderated, sloppy and pretty much a free-for-all." iSONEWS published the first standards on April 26. Earlier, on March 16, the database had started to carry a DivX section on its website. A week later Betanews noticed the popularity of the then recently released DivX codec throughout IRC channels and asked whether this was a new threat to DVD after the DeCSS utility. The 2001 revision of the standards was organized by different people from iSONEWS. It involved 15 groups and was signed by 18. This was the last one of the listed rulesets covering pornography. The once generally accepted TDX2002 ruleset requires movie releases to contain a DivX 3.11 or Xvid encoded video stream with an MP3 or AC3 encoded audio stream in an AVI container file. Movies are released in one, two or more 700 MiB files, so that they can be easily stored on CD-R. Two or four TV show episodes usually share one CD, hence 175 or 350 MiB releases are common. 233 MiB releases (three episodes per CD) are rarer but not forbidden, and are often used for full 30-minute programs with no adverts. 233 MiB is used more on whole-season rips from retail sources or on single episodes that have a longer runtime. In July 2002, around the release of the new TDX2K2 ruleset, Xvid releases started to pop up. DivX with SBC was retired. VCDVaULT was the pioneer in promoting Xvid to the scene. The TDX2002 ruleset was followed by TXD2005. Because all DivX codecs are banned in this new ruleset, TDX became TXD: The XviD Releasing Standards. There is a rebuttal against this revision, arguing it to be flawed in several aspects. Higher resolutions are not allowed. More efficient formats such as AVC and AAC have not been adopted yet, but are still being pushed by some release groups. There are also considerations to replace the old proprietary AVI file format with a modern container such as MP4 or MKV that can include multiple audio streams, subtitles and DVD-like menus. However, few standalone DVD players support these formats yet, and cross-platform playback is an important consideration. Nonetheless, the introduction of MPEG-4 playback capabilities in standalone DVD players was a result of the huge amount of TDX-compliant movie material available on the internet. The latest TXD revision is TXD2009. As with each revision, there are some major changes.
The latest TXD revision is TXD2009. As with each revision, there are some major changes. Multiple CD releases aren't necessary anymore, but most release groups keep following the tradition. The maximum width of a rip is lowered back to 640px for WS (widescreen) releases, and the rules relating movie length to file size and many other sections of the ruleset are redefined or extended. 91 release groups have signed the rules. As with the 2005 standards, there is a rebuttal, this one taking aim at "SOME of the fuckups and insanity in the 2009 ruleset". While the 2005 rebuttal made some valid points, this one is regarded as pointless by other sceners. The reason for lowering the resolution is that some cheap Xvid players don't fully support resolutions above 640px: the pixel aspect ratio is rendered incorrectly, making the movie unwatchable. Other points made in the rebuttal concern rules that are too hard to enforce, despite being backed by the releasing groups, or note that TXD is mainly meant for retail sources; not all rules can be enforced on non-retail sources. DivX and Xvid for television sources XviD used for standard definition English television releases had long been a ruleless world, although in 2002 a ruleset for VCD, SVCD and DivX/Xvid tried to clean up the mess a little. SDTV, PDTV, HDTV and their dupe rules were defined. Nuking had always been an issue in the TV scene. In 2007, a document was released that "intended to bring a level playing field to the TV-XviD scene and attempt to put down some rules to end some of the controversy that has plagued us in recent years", but it was only a draft. On January 1, 2011, a rule set written specifically for UK TV was released, taking into consideration various factors which differ from other regions. The introduction of HDTV and the availability of high-definition source material resulted in the release of video files that exceed the maximum resolution allowed by the TDX rules, which anticipated DVD-Video rips as the ultimate source. In the absence of a standard, these releases follow differing rules. They are usually tagged HR HDTV and use half the resolution of 1080i (960 × 540 px, vertically cropped to 528 or 544 px). Some releases also use a resolution of 1024 × 576 px to provide a proper 16:9 aspect ratio. Shows (usually animated shows) aired in standard definition (PDTV) are sometimes uploaded as HR (high resolution) PDTV using the H.264 codec, which offers much better compression than XviD, allowing a higher resolution in a file the same size as an XviD encode of a standard definition source. x264 for retail sources On October 17, 2013, the first standard definition ruleset for retail sources was released. A day later there was a revision that fixed some examples. The MKV container must be used. It is mandatory to support file streaming and playing from RARs. CRF must be used. A photograph as proof must be included. 81 groups signed the document. x264 for television sources On February 20, 2012, more than a year after the appearance of the first draft, the SD x264 TV Release Standards document was released with the goal of bringing quality control back to SD releases. According to the document, x264 had become the most advanced H.264 video encoder, and compared to XviD it is able to provide higher quality and compression at greater SD resolutions. It also allows better control of, and transparency over, encoding settings. With CRF (constant rate factor), diverse material can be given the most appropriate bitrate rather than arbitrary fixed file sizes. The video container must be MP4 and AAC is used for the audio.
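As an illustration of the CRF-based encoding these standards describe, an x264 encode targeting constant quality rather than a fixed size can be produced with a command along the following lines (the file names and CRF value are invented for the example; the actual rulesets prescribe their own settings):

    ffmpeg -i episode.ts -c:v libx264 -crf 20 -preset slow -c:a aac episode.mp4

Instead of hitting a predetermined file size, the -crf setting holds perceived quality roughly constant and lets the bitrate, and therefore the size, vary with the complexity of the material.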
Thirteen groups, ASAP, BAJSKORV, C4TV, D2V, DiVERGE, FTP, KYR, LMAO, LOL, MOMENTUM, SYS, TLA and YesTV, signed the document and began releasing TV shows in the new format. FQM and 2HD indicated they would keep releasing XviD. FQM said it is pointless to lose a lot of standalone compatibility for slightly higher quality when even better quality is already available. 2HD agreed, and a vocal minority of the torrent community was quite upset because the MP4 container isn't compatible with many DVD players and other devices, but most scene groups don't really care about BitTorrent. Softpedia writer Lucian Parfeni called this phenomenon "the angry pirate" and wrote that a lot of BitTorrent users were very disappointed about the move, though quite a few had no idea why it happened. A second reason FQM provided was that partial files can't be played back, but LOL wrote the next day that the streaming issues were solved. On March 29, an updated version of the rules was released. This time 22 groups supported the document. MP4Box became the recommended muxer because it supports file streaming and playing from RARs. For encoding audio, the FFmpeg and FAAC encoders are banned. 2HD announced on April 15, 2012, that they would be abandoning XviD as show seasons ended, but later this changed to only the seasons of the bigger shows. A month after 2HD's first announcement, FQM released their first x264 rip. On April 3, 2016, the SD x264 TV Release Standard was updated with a new revision that "aims to update the standards from 2012 to standards suitable for 2016 and the future. Adding clarity and patching loopholes to once again allow for consistent and quality releases, which was the aim of this standard back in 2012." The video container in this revision was changed from MP4 to MKV, which frustrated many users. Xvid and x264 sport rips On June 24, 2009, five groups released the first rule set specifically designed for x264 sport releases: TXSRS2K9. The idea was that the x264 encoder would be more suitable than Xvid. Some days after the rule set was previewed, a rebuttal was released with concerns about the decisions made and their conflict with the high-definition TV-x264 rule set. aAF called the rules unofficial nonsense and said that respected groups would not be following them. Only NOsegmenT and KICKOFF have released standard definition x264 sport rips under these rules. The following year, a rule set for Xvid sports releases appeared: TXSRS10. Its aim was to improve the overall quality of sports releases while retaining the compatibility that Xvid provides, bringing standardization to the hitherto ruleless world of TV-XviD. Twelve groups signed the standards, including two of the original five behind the x264 rule set. The SD x264 TV Releasing Standards 2012 also cover sport releases, making the previous standards obsolete. High definition video x264 for retail sources The latest High Definition x264 Standard is Revision 4.0 from 2011. This ruleset targets HD DVD and Blu-ray sourced 720p and 1080p movie and TV-show rips. The releases are made available in a Matroska .mkv container, using the x264 encoder. The file size must be a multiple of 1120 MiB. It has become quite normal for non-English-language movies to be tagged with their language tag, even when they contain English subtitles; this is different for Xvid releases. This practice has been accepted by all nukenets, but it was never written down in an addendum to the ruleset.
The inclusion of both Dutch and Flemish audio tracks in one release has also become common practice. There is a second ruleset, from 2008, for x264 releases that has many similarities to the previous one but concentrates on BD5 and BD9 releases. The purpose of these releases is that the initial mkv file can be burned as a Blu-ray image to a single or double-layer DVD-R. The mkv file accompanying this kind of release is 200–300 MB smaller than a similar release following the other ruleset, to allow for the overhead of the Blu-ray image that will be created. Around May 2012, the stream of BD9 releases came to a halt; 2011 saw only around thirty BD5 releases. x264 for anime On August 13, 2009, the first version of the standard was released. This standard is only a recommendation for anime from Blu-ray, and its purpose is to improve quality over the then current HDX standard. The document reads that anime was always something special for the video codec experts at Doom9, as without anime there wouldn't be VirtualDub or many other video-related tools. This is why they decided to put out a new standard to enjoy almost lossless anime quality. The French scene had rules for anime releases years earlier; the latest French ruleset for anime is from 2011. WMV Because of the x264 scene, many people consider WMV-HD redundant. The authors of the first WMV-HD document think this is not true because of the compatibility WMV-HD provides. They write that the only reason many people are against WMV-HD is that WMV comes from Microsoft. In 2007 they wrote that it can be played on the Xbox 360 and HTPCs while x264 is restricted to HTPCs. They point out that many movie studios use the VC-1 codec for their retail BD-ROMs. The changes in the 2008 ruleset were made because 1080p was getting more and more popular, and the authors felt it was necessary to lower the 720p bitrate minimum as well, to show x264 lovers that WMV delivers equal quality. The video size was no longer determined by the length of the movie, but by the minimum bitrate. In the 2009 standards, a nuke section was added to govern the WMV-HD section; all nukes based on any other rules are unacceptable. Because all the groups releasing WMV-HD agreed to and signed this rule set, they are the only ones who develop, implement, and mandate the rules governing the WMV-HD section. The groups are IGUANA, NOVO, BamHD and INSECTS; SMeG was added in later versions. A special section for animated/anime titles was added in version 3.5 of the rule set. Version 4.0 of the rule set added a large ASCII art and was the first to have no minimum or maximum file size requirement for the final WMV file; WMV-HD was from then on purely quality driven by minimum bitrate. Version 4.1 added the rule that the group that releases an episode of a show first has exclusive rights to the entire season of that show for a period of thirty days, meaning that during those 30 days no other group can release an episode of the same show and season without being nuked. The source for a WMV release must be HD DVD or Blu-ray. The audio has to be encoded as WMA 10 Pro and the video codec must be Windows Media 9 Advanced Profile (VC-1). A 720p resolution dupes 1080p, but 1080p does not dupe 720p. The WMV file must be stored in a RAR with a recovery record; compression is not allowed (an example command is shown below). In 2013, at least ten new movie releases were seen, all released by the group INSECTS. This category died out in favor of x264 MKV releases, a format that by 2016 was ubiquitous for non-pornographic ripped video.
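Packaging of the kind mandated above, a RAR set that stores the file uncompressed and carries a recovery record, can be produced with the standard rar command-line tool roughly as follows (the file names and volume size are invented for the example):

    rar a -m0 -rr -v50m Movie.Name.720p.WMV-GROUP.rar movie.wmv

Here -m0 stores the data without compression, -rr adds a recovery record that allows damaged volumes to be repaired, and -v50m splits the archive into 50 MB volumes.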
x264 for television sources The first ever scene TV-x264 release, The.Unit.S01E04.HD720p.x264-MiRAGETV, was made by Spatula in early 2006. In May 2007, more than a year later, the first ruleset appeared. SAiNTS refused to sign this ruleset because it did not ban segmenting, which according to them was the main reason for the crap releases in the HDTV scene. This first ruleset defined Matroska as the video container, with a fixed file size for the resulting .mkv based on the show's length. The 2011 standards first introduced CRF instead of 2-pass based encoding. In April 2016, a total rewrite of the ruleset was released, addressing all known issues and patching loopholes. In April 2012, QCF released the first 1080p x264 television ruleset, shortly after one of their releases got nuked. The ruleset was itself nuked afterwards by LocalNet for one.group.does.not.make.a.ruleset.make_try.inviting.some.others.to.contribute. In December 2012, SYS agreed: "One group doesn't decide." In September 2013, DEADPOOL also commented on the rules. In April 2014, BATV announced they were dropping the INTERNAL tag, on the grounds that one group does not get to decide how everyone releases 1080p TV. A day later, BATV released the first 1080p HDTV x265 encode, using that same episode for comparison. BATV thinks that 1080p is completely and utterly pointless unless done by a decent group. DIMENSION indicated they won't be using certain channels for 1080p captures due to insufficient quality. x264 for WEB sources In 2016, a new standard was introduced for web-sourced files, covering standard and high definition video. Web-based streaming and video-on-demand services had increased in popularity; initially used for missed broadcasts, they evolved into a legitimate, logo-free source of exclusive original content. DVD-R The scene requires DVD-Video releases to fit on a 4.7 GB DVD-R; hence many released movies are not 1:1 copies of the retail DVDs. The latest standards revision is TDRS2K10. This ruleset appeared only two months after the 2009 ruleset, which had an addendum released to clarify a rule after some confusion. The 2010 ruleset seems to have more in common with TDRS2K5 than with the preceding TDRS2K9 ruleset. According to the first nuke, the signing groups are crap; this resulted in a nukewar, and a few days later an addendum was released. According to XeoN-VorTeX, on October 31, 2002, a milestone in DVDR ripping was reached with the COMPLETE release of The Matrix. The DVD was generally regarded as the most complex DVD on the market, and previously only a movie-only rip had been available. The new rip included extras such as the white rabbit feature. Nowadays releases are DVD5 or DVD9 sized, have a menu available and are encoded with CCE. BD-R The scene requires BD-R releases to fit on a 25 GB single-layer Blu-ray Disc. Hence not all released movies are 1:1 copies of the retail Blu-rays, although such releases exist and are tagged COMPLETE.BLURAY. Music video The current Music video Council standard is version 6.0. x264 must be used in an MKV container in combination with an MP2, MP3, AC3, or DTS audio track. Pornography On November 15, 2012, the first XXX x264 SD standards were released. The movie file must not be split and an MP4 container must be used; the audio format is AAC. Xvid had been used for standard definition rips in the years before, just like DivX and SVCD, but did not have a ruleset. The groups Mirage and SMuT had their own list of rules they endorsed, visible in their nfos.
SMuT wrote: "We endorse the following XXX rules and encourage other groups/sites/scene members to insist they are followed also." Standards for DVDR, paysite videos and imagesets have been released before. Audio standards Both MP3 and FLAC releases can optionally include M3U playlist files. Lossy audio: MP3 At the start of the MP3 scene in 1995, there was little organization or standardization. Between 1999 and 2004, the predominant MP3 encoding quality was 192 kbit/s at 44.1 kHz, which was nominal for the hardware and software encoding available at the time. This improved as computers got faster and the LAME MP3 encoder developed into its later versions. Due to broad support in hardware devices, unauthorized audio material is usually released in MP3 files at VBR quality. In 2007, new rules recommended encoding all files with LAME 3.97 using the "-V2 --vbr-new" switch (an example invocation is given at the end of this section). Other formats such as AAC or Vorbis are currently not allowed. In 2009, new rules were introduced. Homemade releases are forbidden. Every release needs both an ID3 v1.1 and an ID3 v2 tag. Extra material that is available on the source material is allowed to be released. Flash storage media are allowed as sources, to accommodate some retail releases made exclusively in those formats. The early MP3 release groups, 1996–1997, were considered "lamers", bottom feeders. In 2000, the former leader of Rabid Neurosis, Al Capone, posted a letter to the scene on the RNS website, complaining about how the MP3 scene had become more like the warez scene during its first 4 years. Lossless audio: FLAC An early scene release came in 2004 when the group ARA released Metallica's fifth performance in Gothenburg as FLAC files. These lossless files could be bought on LiveMetallica.com, a service that allows fans to buy and download soundboard recordings. From 2007 on, some early FLAC releases came from justice, a group that had already used APE for lossless music in the years before. That same year the Polish group BFPMP3 set out to promote the FLAC standard with some internal releases. Single-purpose groups such as judge, FLACH or CDDA created only a handful of releases in the years before the first ruleset. On October 2, 2011, the scene introduced a FLAC category by releasing a first ruleset; eight days later an updated revision was released. According to the documents, the ruleset was created to satisfy audiophiles' long-standing demand for a music scene of higher quality than LAME MPEG compression, and because space and bandwidth could by then accommodate more. To avoid previously made mistakes in the music scene, a group of elder sceners gathered to decide upon the rules. A common understanding amongst all was that material from non-physical media can easily be of doubtful origin and hence of questionable quality; the rules consider only physical media a valid source and must be followed very strictly. A lot of releases are nuked for various cosmetic flaws. In early 2016, anonymous sceners voiced a concern that nukers lack the technical understanding needed to nuke improperly ripped vinyl sources, and showed examples of how the ruleset gets twisted or misinterpreted over minor issues. One such release was nuked by ZoNeNET within the hour after pre, with just invalid.proper given as the reason. In June of that year, four years after version 2.0, a new ruleset was made to address misinterpretations in the wording and to update some rules.
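As referenced in the MP3 section above, the 2007 recommendation of LAME 3.97 with the "-V2 --vbr-new" switch corresponds to a command line along these lines (the file names are invented for the example):

    lame -V2 --vbr-new input.wav output.mp3

The -V2 preset selects a high-quality VBR target, and --vbr-new enables the newer, faster VBR routine in that generation of LAME.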
In response to version 3 of the FLAC rules, a scene notice called the rules invalid because they were not created by leading groups in the section or by a council. It also pointed out that the ruleset's block on WEB and PROMO releases creates a void, given the digital-only distribution of many originals. The group CUSTODES mentioned in their farewell message that the FLAC ruleset isn't professional enough to archive music in the best quality possible. Software standards Applications Application releases are usually split into two different categories, 0-day and ISO apps. Categories that originated from, or are still put into, 0-day include for example PDA, EBOOK and XXX-IMAGESET. 0-day (pronounced zero day) refers to any copyrighted work that has been released on the same day as the original product, or sometimes even before. It is considered a mark of skill among warez distro groups to crack and distribute a program on the same day of its commercial release. 0-day applications are usually 150 MB or smaller, but can be 5 GB or larger as long as they are not CD/DVD images. The release format allows almost anything in the 0-day section, but often 0-day releases are cracks or keygens for various applications, or small games with sizes varying from 1–50 MB. Sometimes e-books, imagesets, fonts or mobile software are released as 0-day. Executable programs such as keygens and cracks are often compressed with the open source UPX packer. LineZer0 indicated in the nfo file of their 20,000th release that they would change their packing to RAR/SFV and put the old ZIP/RAR/DIZ packaging to rest by the first of April 2012, strongly encouraging other groups to do the same. After some threats to release information on individuals, Lz0 returned to the old packaging for the time being, until a new ruleset could be put in place. The Minor Update (MU) rule is unique to 0-day releases. It ensures that no more than one release of the same application is made each month. Major updates do not follow this rule, and an exception is made when a group motivates in its NFO why the changes should be considered a major update. For example, the group Unleashed chose to ignore the MU rule for a hotfix, making a less blurry game available two weeks sooner. PDA rules require folder naming to state which application and version the release contains. Also required are the CPU type, operating system and crack type. Optional information such as language is expected if the release is non-English. Packaging follows 0-day guidelines. Generally lax security, simple programming and small file sizes make mobile software an attractive target for infringers. In more recent times the category covers other portable devices such as iPod, iPhone and iPad, or systems running Android. ISO applications are usually either in BIN/CUE or ISO format. Allowed media are CD and DVD, but a release can be smaller than the media size. Applications are required to contain a working key or keygen to generate a valid serial. Patch cracking is also required, which is used to bypass hardware protection such as a serial or USB dongle. Some groups signed a Sample CD Scene Protocol for better quality sample discs. Game rips This is the scene for game releases that are changed to minimize the size of the distributed files. A first ten-point document was made by "The Faction" in 1998; the grouping that created the rules, and the rules themselves, were disbanded the following year. The NSA rules, or "the new rules", outline the code of conduct regarding game ripping.
Releasing can be done in two fields, games and applications, and in two ways: it is possible to release disc images, or groups can "rip". In the process of ripping, groups remove things such as introductory movies, multiple texture modes, big sound files and the like. Games The game must fit on CDs or DVDs, and the format should be BIN/CUE or ISO, respectively. Some sites allow CCD images too, as defined in the site's rules. Media descriptor files (MDF/MDS) seem to be permitted now as well. A draft version of Standard ISO Rules (S.I.R.) 2010 was included in TGSC #43. At the start of 2021 a new ruleset for PC games became active: after approximately 20 years without new written rules for the PC games section, the leading Game ISO groups assembled to collaborate on a long overdue modernization. A game must be authored into an ISO file when created for Microsoft Windows, but releases for other operating systems may use a .dmg Apple Disk Image file, or even skip the image file altogether before packaging in RARs. A limited-time exclusivity right for game updates is granted to the group that wins the race: digital distribution has increased the number of game updates considerably, resulting in little new data and a lot of duplicate content across updates, and during this 60-day window it is at the group's discretion to join these updates as they see fit. Outside the Scene, repacked games are in high demand. FitGirl, one of the leading names in this niche, often uses the scene release as a source to create a better-compressed version, saving considerable bandwidth. DOX DOX is an abbreviation of documents or documentation (manuals). This category includes video game add-ons such as No-CDs, cracked updates, keygens, covers, trainers or cheat codes. DOX releases are amongst the rarest releases in the scene, owing to their small size. In October 2007, TNT (The Nova Team) noted in the nfo of their 750th release that only the groups DEViANCE and FAiRLiGHT had managed to reach the same number of DOX releases. Console standards The console scene survived decades without rules. In 2009, a first set of rules for the PS2, Xbox 360 and the Wii was released. Notably, a release must be pred no later than 30 days after the retail date; apart from the 0-day standards, most other present-day rulesets have no such limitation. An example of a ruleset that did have such a limitation is the deprecated TDX 2000 ruleset, but in the subsequent ruleset (TDX2k1) the limitation was removed. There are no written standards for the other console scenes. The first games released on a given platform are often not playable because the console has not been cracked at the time. Nintendo 64 On January 25, 1997, the first game released for the N64, Super Mario 64, was put out by the group Anthrox and the console division of Swat. Games were released as one zipfile following the old traditional 8.3 naming convention; no folders were used. The ROM extensions ".v64" and ".z64" were used as naming conventions. Shortly before the closure of 64dd.net in January 2015, there were 883 releases numbered on the site. The last releases listed were done by the group Carrot in 2012. Dreamcast On June 23, 2000, the first ripped Dreamcast game, Dead or Alive 2, was released by Utopia; this was a CDRWIN ISO image (bin/cue), as in the PC game ISO scene. The day before, Utopia had released a Dreamcast BootCD that was capable of booting copies and imports on a non-chipped standard consumer model.
Less than two months later, when Kalisto released the first self-bootable game, Dynamite Cop, the game was a Padus DiscJuggler (CDI) image. Later that month, the first copy-protected game, Ultimate Fighting Championship, was released by Kalisto. Almost all releases that followed were released as a CDI image, which thus became the de facto standard. When Kalisto announced their retirement from the DC scene, they had released more than 66% of all Dreamcast releases. Two days later, a new group called Echelon picked up where Kalisto left off. This group released Evil Twin: Cyprien's Chronicles, their 188th and last Dreamcast game release, on April 30, 2002. On October 12, 2000, PARADOX, another big and respected scene group, released the first trainer for the Dreamcast. Two weeks after that, they released their first game for the Dreamcast console, Shadowman, with an intro "just to prove that we can do neat DC releases as well". Besides games and dox, emulators and Linux distros were also released in the DC scene. Xbox Xbox releases are by convention in the XISO format, a slight modification of the DVD ISO format. DVDRips of Xbox games were released so they could fit on a single CD. A lot of the first Xbox games were released by the group ProjectX on May 3, 2002. These first releases worked on a developer Xbox, but whether they would be playable on retail consoles was unknown at the time because no modchips existed yet. More than 4,400 Xbox releases have been put out in the scene. PlayStation 2 PlayStation 2 releases must be in standard DVD ISO format. PARADOX was the first group to do PS2 and PS2 DVD rips, but later on they were the ones motivating the scene to release full DVD ISOs. GameCube On June 12, 2003, the first game for the Nintendo GameCube, The Legend of Zelda: The Wind Waker, was released by STARCUBE. As of May 2016, more than 3,100 NGC releases have been put out in the scene. Xbox 360 On December 8, 2005, the first full game for the Xbox 360 was released in the scene by the warez group PI. Need for Speed: Most Wanted was the first of a batch of three games released that day by PI. A couple of minutes before that, they released an open source tool to extract Xbox 360 dumps. As of January 2017, there are more than 6,700 Xbox 360 releases in the scene. The image of an Xbox 360 game is a .iso with a .dvd file. The RARs are split into volumes of 50 MB for DVD5 disks or 100 MB for DVD9 disks, and must use compression. PlayStation 3 On November 25, 2006, PARADOX released the first PS3 ISO. The PS3 ISOs are now fully playable on a jailbroken PS3. As of January 2017, there were more than 4,800 PS3 releases in the scene. A first ruleset for the PlayStation 3 section was released on June 10, 2011. Shortly afterwards it was voided in classic scene style, "These rules dont mean shit you asshats.", and the original rules were nuked for inadequate.and.unnecessary.ruleset_not.signed.by.all.listed.groups_see.Response.to.The.Official.PlayStation3.Ruleset.2011.PS3. A new "VOID" ruleset was released the day after and was nuked for no.valid.rebuttal.given_not.all.grps.need.to.sign_follow.the.new.ruleset. A better response followed a couple of days later. Wii Wii releases must be in standard DVD ISO format, and the RAR archives must use compression. PARADOX released the first Wii image, Red Steel, on December 12, 2006. On April 14, 2008, BlaZe became the first group to release a Virtual Console title.
This emulated SNES title, Donkey Kong Country 2: Diddy's Kong Quest, only received a proper dump with its fourth release, more than a year later. These DLC releases are tagged VC or WiiWare and consist of a packed WAD file. A large number of these first releases were nuked, the main nuke reason being modified.ticket.info. Dupe or bad dump are other common reasons to receive a nuke, as is not being trucha signed, which makes the release impossible to install. By January 2017, more than 7,800 releases for the Wii had been put out in the scene. Wii U On May 3, 2013, the group VENOM released the first game for the Wii U: Marvel Avengers: Battle for Earth. The disk image is a .iso file in Wii U Optical Disc format (WUOD), Nintendo's proprietary disc format for the Wii U. Like other console scene firsts, the game wasn't playable on the console yet. This first scene release came a few days after the mod chip developer WiiKey claimed to have hacked the Wii U. Based on the file date of the RAR archives, VENOM had already created the ISO file more than a month before pre, the point in time when they first made the files available within the scene through their affiliated sites. Later, VENOM's release was found to be a bad dump, so the real first Wii U game dump was made by PoWeRUp, and has been confirmed as PROPER. Xbox One On November 19, 2013, a couple of days before the official console launch date, the group COMPLEX released the first game for the Xbox One: Call of Duty: Ghosts. The release contained a 42 GB .iso file without the security sectors. The file system format of the disk is XGD4 (Xbox Game Data 4). PlayStation 4 On May 31, 2014, the group WaYsTeD brought the first PlayStation 4 game to the scene: Watch Dogs. Just as with other console firsts, the 25 GB .iso file was not playable at the time. It has the same file structure as PlayStation 3 games. Within two months, some other groups started doing raw image dumps too. More than three years later, on September 27, 2017, the group KOTF (Knights of the Fallen) released Grand Theft Auto V as the first playable decrypted game dump. DARKSiDERS, of the PC games section, said that this KOTF release would not have been possible without them. As KOTF did not have a good enough topsite at the time, the site's IRC channel announced the pre as "DARKSiDERS RELEASED Grand.Theft.Auto.V.READNFO.PS4-KOTF". Because the game was released from the DARKSiDERS pre directory, the release was credited to DARKSiDERS on behalf of KOTF. It is unclear how these groups are related, but many in the Scene remember that pre announcement mentioning DARKSiDERS, to the confusion of many. Among the first releases were games such as Assassin's Creed IV and Far Cry 4, with RAR volumes of 250 MB and even 500 MB parts. The outdated firmware required to play these games was a major drawback. Handheld standards A handheld game console is a lightweight, portable electronic device with a built-in screen, game controls, speakers and replaceable and/or rechargeable batteries or a battery pack. Their small size allows people to carry and play them at any time and place; unlike video game consoles, the controls, screen and speakers are all part of a single unit. Game Boy Advance Game Boy Advance releases are in their native ROM format (.gba). However, like 0-day releases, due to their small size these are often compressed into RAR files and then into ZIP format, or otherwise simply compressed into ZIP format.
Nintendo DS The Nintendo DS scene started out as an extension of the Game Boy Advance scene, and carried forward with mostly the same set of rules. On May 31, 2010, a first ruleset was released with the goal of establishing a clear and concise listing of what should be expected of a valid Nintendo DS release: "Having an official list to reference should prevent needless nuking, and clean up what is at present a cluttered and confused scene." Seven groups signed the rules. The day after, a second "official" ruleset was released but was nuked later on, although it is deemed to be the most relevant according to scenerules.irc.gs; several of the 18 groups listed did not agree to sign the rules. A scene notice was released by "Concerned Retired NDS/GBA Scene Founding Members" concerning the issue of the two rulesets. Nintendo DS releases are in their native ROM format (.neo or .nds). The Scene had been doing .zip since the GBA days, but now releases need to be compressed into 5 MB split RAR volumes and contain a Nintendo DS title or a patch for a Nintendo DS title; an NFO file is also a must. A patch is a modification or tool such as a trainer, crack, language selector or save fix; the most common formats are .BDF and .IPS. The directory name must include the text "NDS" and the group name. DSi-related releases are regarded as NDS releases and must have the tag DSi. See also DS Scene and DS Piracy on pocketheaven.com for more background and history. Nintendo 3DS Nintendo 3DS releases use the .3ds or .cia file format. The group LEGACY (LGC) released the first three games on June 5, 2011, and included a picture of the dumper they used. The packaging is done with compressed RAR files and an SFV for verification. As of October 2017, there are more than 2,400 3DS releases. PSP Sony PSP releases are by convention specified as FULL UMD or UMD RIP, the latter meaning some parts were removed, either as unnecessary or to fit the release onto a certain-sized memory stick. An ISO can be played with custom firmware or with an emulator such as devhook. PARADOX released the first retail PSP game on May 4, 2005. In December 2006, the scene started releasing old PSX games that can be played with the official emulator on the PSP. These games are bought from the PlayStation Store with a PS3. Depending on the release group, they are tagged PSXPSP, PSX_PSP, PSX.To.PSP, PSX.FOR.PSP or PS1_For_PSP. On May 19, 2006, PARADOX returned to the PSP scene to release a +9 trainer, just to prove that trainers for Sony's handheld are possible. Since then, no other group or person has publicly released any trainers. See also PSP Scene and PSP Piracy on pocketheaven.com for more background and history. Unlike for the games, there are standards for how to release movies for the PSP: all releases must be in the MP4/THM format. Retail movies released for the PSP are tagged UMDMovie. When the first UMDMovie was released in September 2005, there wasn't a way to play it yet. Because Sony killed the format, UMDMovie releases came to a halt in May 2007; three years later the group ABSTRAKT released some more UMD movies. PlayStation Vita The first game release for the PlayStation Vita in the scene, Uncharted: Golden Abyss, was done by the group PSiCO on February 8, 2016, almost four years after the introduction of the PS Vita in Europe and North America. It was dumped from a PS Vita NVG game cartridge, and the PFS encryption layer was removed, since this data is not needed for the dump to become playable in the future.
Releases are tagged with PSV in the directory name. There had already been a handful of other PSV-tagged releases before, but these contained covers. The first releases were made to be used with the Cobra BlackFin dongle for the PlayStation Vita handheld, using the .psv (BlackFin) format for the data dump. After the release of the Vitamin dumper in August 2016, which followed the HENkaku homebrew enabler, there weren't many new releases at first because of issues with this new tool. A couple of weeks later the first ruleset was signed by four groups. The best tool to make a copy of the game cartridge data at that moment was MaiDumpTool, in combination with Vitamin 2.0 when needed. The file format used for these newer dumps is .vpk: a ZIP containing the decrypted files of the game folder. Other Other handheld platforms that have had games released by the Scene include Neo Geo Pocket, Neo Geo Pocket Color, WonderSwan, WonderSwan Color, Tapwave Zodiac, Gizmondo, Game Boy, Game Boy Color, and N-Gage. E-books The first traceable scene release of an e-book dates back to around the year 2000. In 2008, sUppLeX wrote in one of their NFOs that "ebooks do not really belong to 0day anymore, so we definitely need rules for that. Most countries got their own rules for the local ebook section in their country, but there's nothing similar for the whole world." sUppLeX were releasing their ebooks in RAR/SFV format at that time, but reverted two years later. Standards existed for the German and Polish ebook scenes. In 2009, a new ruleset for German e-books was created, which closely matches the general rules for English releases and which has since been adopted for most e-book releases. There was the option to choose between ZIP/DIZ or RAR/SFV to package a release. In 2012, international ebook rules were created: a unified agreement applicable to all groups to ensure high-quality ebook releases. DRM protection has to be removed prior to release and only the 0-day ZIP/DIZ packaging can be used. Some examples showing naming conventions:
Magazine.Name.Year.Month.LANGUAGE.SOURCE.eBOOk-GROUPTAG
Journal.Name.Vol.xx.No.xx.Month.Year.LANGUAGE.SOURCE.COMiC.eBOOk-GROUPTAG (also for comics)
Book.Title.xxth.Edition.Year.LANGUAGE.SOURCE.eBOOk-GROUPTAG
Here xx stands for a number (volume, issue number or edition), and SOURCE may be SCAN for scanned documents or RETAiL for commercially available e-documents. Allowed file formats are PDF, EPUB, Kindle (.azw, .kf8) and Mobipocket (.prc, .mobi). See also ARJ Rulesets External links Largest collection of known scene rules How to package a Scene release? Music rules of the now defunct What.CD torrent tracker
4732813
https://en.wikipedia.org/wiki/Omnis%20Studio
Omnis Studio
Omnis Studio is a rapid application development (RAD) tool that allows programmers and application developers to create enterprise, web, and mobile applications for Windows, Linux, and macOS personal computers and servers across all business sectors. The Omnis JavaScript Client allows developers to build all types of web and mobile applications by presenting a highly functional interface in the user's desktop web browser, or on tablet and mobile devices. The business logic and database access in such web and mobile applications are handled by the Omnis server. The Omnis server can also act as a hub between database servers, services based on Java and .NET, and clients like Adobe AIR and Flex, transferring data in the form of XML or Web services. Omnis history
1979: On August 1, Geoff Smith and Paul Wright founded Blyth Computer Services (later renamed Blyth Software Ltd, then Omnis Software) in Wenhaston, Suffolk, in the UK, which became the first Apple dealership in East Anglia. Paul Wright was the nephew of Peter Harold Wright.
1981: In December, Blyth released its first "OMNIS" product, a database application tool for the Apple II designed by David Seaman and written using Apple Pascal. OMNIS was also developed at the time using the UCSD Pascal environment, which enabled a simple port to other popular machines of the time. The company was later renamed Blyth Software.
1984: OMNIS 1, 2 and 3 were released together in April 1984 as a suite of Omnis products. Omnis 1 ("the file manager") was intended to be an easy-to-use way of handling simple, i.e. non-relational, data. Omnis 2 ("the information manager") was similar to the original Omnis but had more programmability. Omnis 3 ("the database manager") was designed for programmers and business owners to build their own customized applications. At about that time Blyth Software also produced the Blyth Accounting packages, based on the Omnis 3 engine, to provide accounting for small businesses. Omnis 3 was one of the first cross-platform database application tools for Apple computers and IBM compatibles running under MS-DOS.
1984: (May) Blyth Software Inc. was incorporated and opened offices in San Mateo, CA.
1985: Following the previous year's launch of the Apple Macintosh, "Omnis 3 for Macintosh" was released in May 1985, one of the first database generation tools for the Mac, initially as a textual product rather than a GUI. The UK headquarters moved to Mitford House in Benhall, Suffolk.
1986: "Omnis 3 Plus for Macintosh" released in May. The "Express" module was added in 1988 to allow non-programmers to create apps.
1986: Released "Blyth Craftware" in December, a set of off-the-shelf business packages for the management of mailing lists, personnel, assets and stock in small businesses.
1987: "Omnis Accounting" released in the UK in February.
1987: Released "Omnis Quartz", one of the first GUI databases for Microsoft Windows.
1987: Blyth Holdings Inc was created and floated on NASDAQ, raising $7m.
1988: Paul Wright was its Chairman and Chief Executive Officer.
1989: Released Omnis 5, one of the first cross-platform development tools for building applications under Windows and Mac.
1991/93/94: Released Omnis 7 v1, v2, and v3 in near-consecutive years, an integrated development environment providing client/server access to many industry-standard server databases such as Oracle, Sybase, and Informix. Omnis 7 version 1 for Mac was released in December 1991, and for Windows in early 1992.
Version 2 added an IDE shell, the so-called "dot notation" for referencing object attributes (properties and methods), support for a VCS and CMS, ODBC connectivity, and Apple DAL support.
1997: Released Omnis Studio v1, a cross-platform, object-oriented development environment for Windows and Mac OS. The company was renamed Omnis Software.
1998: Released Omnis Studio v2, a cross-platform, multi-database development environment for Windows and Mac OS.
1999: Released Omnis Studio v2.1, including the Omnis Web Client or "thin client" for browsing data and applications via the Web.
1999: Released Omnis Studio for Linux, making Omnis one of the first RAD tools available under Linux, Windows, and Mac.
2000: Released Omnis Studio v3. Later that year Omnis Software merged with PICK Systems to become Raining Data Corporation.
2004: Released Omnis Studio v4, including support for MySQL, JDBC, and Java Objects.
2005: Released Omnis Studio v4.1, including support for Unicode.
2006: Released Omnis Studio 4.2, including native support for Mac-Intel and introducing the Web Services Component.
2007: Released Omnis Studio 4.3, including Windows Vista and Mac OS X 10.5 (Leopard) support, and a component for accessing .NET objects.
2009: Released Omnis Studio 5.0, which includes application development for Windows Mobile-based devices, and Unicode support.
2010: Released Omnis Studio 5.1, which includes support for the iOS platform (iPhone, iPad).
2012: Released Omnis Studio 5.2, which includes a JavaScript-based client for rendering applications in a browser on desktop and mobile devices.
2013: Released Omnis Studio 6.0, which includes significant updates to the JavaScript Client, including new wrappers for creating standalone mobile apps, a new control for access to mobile device features, a new PDF printing device, enhanced JavaScript controls, and multi-tasking using SQL Worker objects.
2014: Released Omnis Studio 6.1, which includes native JavaScript components, a tool for adapting to the different resolutions of desktop and mobile devices, support for REST web services on server and client, 64-bit support, improved JavaScript performance, and error checking for client-side methods.
2016: Released Omnis Studio 8.0, which provides 64-bit and Cocoa support for Omnis Studio running on OS X, the ability to use HTML components in window classes for desktop apps, drag-and-drop capability for the JavaScript Client, a new Code Assistant available in the method editor to help developers write Omnis code, plus some enhancements in the Studio Browser to help new and existing developers.
October 2016: The Omnis business was purchased by OLS Holdings Ltd, a UK company owned by a number of Omnis developers and distributors.
August 2017: Released Omnis Studio 8.1, which provides Git support, JSON controls, a new Welcome intro, push notifications for mobile apps, responsive forms, a "headless" Linux server for deployment, and other enhancements.
January 2019: Released Omnis Studio 10, which provides a new free-type Method Editor and Code Assistant, support for the WCAG 2.0 accessibility standard, an Omnis datafile migration tool, new components for the JavaScript and fat clients, support for remote debugging, a new remote object class, and new Worker Objects supporting Node.js JavaScript, POP3, Crypto, Hash and FTP.
September 2019: Released Omnis Studio 10.1, with new and updated JavaScript components, new animations for desktop apps, further improvements to the Code Assistant (method name matching), a new variable panel, SQL Worker lists, enhancements in the management of web app sessions, improved user interaction with mobile apps through new "toast" messages, and better support for the FHIR standard for medical applications.
November 2020: Added support for JS client themes, SVG icons, position assistance (for alignment of visual objects), a web form WYSIWYG design view, a JS split button, and updates in the code editor including code folding. The Linux Headless Server can now be operated in MultiProcess Server (MPS) mode, utilizing the multi-core processor on the server. For fat client applications, it added a new token entry control, a breadcrumb control, side panels, toast messages, and updated drag and drop for system files. Support for OpenAPI 3.0.0 and Swagger 2.0 for Web Services was also added.
See also EurOmnis External links Omnis Blog Omnis Developer Mailing list & Archive European Omnis Developer Conference The DLA Group
5406474
https://en.wikipedia.org/wiki/Consensus%20%28computer%20science%29
Consensus (computer science)
A fundamental problem in distributed computing and multi-agent systems is to achieve overall system reliability in the presence of a number of faulty processes. This often requires coordinating processes to reach consensus, that is, to agree on some data value that is needed during computation. Example applications of consensus include agreeing on what transactions to commit to a database and in which order, state machine replication, and atomic broadcasts. Real-world applications that often require consensus include cloud computing, clock synchronization, PageRank, opinion formation, smart power grids, state estimation, control of UAVs (and multiple robots/agents in general), load balancing, blockchain, and others. Problem description The consensus problem requires agreement among a number of processes (or agents) on a single data value. Some of the processes (agents) may fail or be unreliable in other ways, so consensus protocols must be fault-tolerant or resilient. The processes must somehow put forth their candidate values, communicate with one another, and agree on a single consensus value. The consensus problem is a fundamental problem in the control of multi-agent systems. One approach to generating consensus is for all processes (agents) to agree on a majority value; a minimal sketch of this idea follows at the end of this section. In this context, a majority requires at least one more than half of the available votes (where each process is given one vote). However, one or more faulty processes may skew the resultant outcome such that consensus may not be reached, or may be reached incorrectly. Protocols that solve consensus problems are designed to deal with limited numbers of faulty processes. These protocols must satisfy a number of requirements to be useful. For instance, a trivial protocol could have all processes output the binary value 1. This is not useful; thus the requirement is modified such that the output must somehow depend on the input. That is, the output value of a consensus protocol must be the input value of some process. Another requirement is that a process may decide upon an output value only once, and this decision is irrevocable. A process is called correct in an execution if it does not experience a failure. A consensus protocol tolerating halting failures must satisfy the following properties. Termination: eventually, every correct process decides some value. Integrity: if all the correct processes proposed the same value v, then any correct process must decide v. Agreement: every correct process must agree on the same value. Variations on the definition of integrity may be appropriate, according to the application. For example, a weaker type of integrity would be for the decision value to equal a value that some correct process proposed, not necessarily all of them. The integrity condition is also known as validity in the literature. A protocol that can correctly guarantee consensus amongst n processes of which at most t fail is said to be t-resilient. In evaluating the performance of consensus protocols, two factors of interest are running time and message complexity. Running time is given in Big O notation in the number of rounds of message exchange as a function of some input parameters (typically the number of processes and/or the size of the input domain). Message complexity refers to the amount of message traffic generated by the protocol. Other factors may include memory usage and the size of messages.
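The majority-value approach mentioned above can be made concrete in a few lines. The sketch below (illustrative Python; the vote values are invented) counts the votes actually received and declares a winner only if it holds more than half of all n votes, so crashed processes, which contribute no vote, can leave the outcome undecided:

    from collections import Counter

    def majority_value(votes, n):
        # votes: values received from processes; n: total number of processes
        value, count = Counter(votes).most_common(1)[0]
        return value if count > n // 2 else None  # None: no majority reached

    print(majority_value([1, 1, 1, 0], n=5))  # 1, since 3 of 5 votes agree
    print(majority_value([1, 1, 0, 0], n=5))  # None: one crashed process blocks a majority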
Models of computation Varying models of computation may define the "consensus problem". Some models deal with fully connected graphs, while others deal with rings and trees. In some models message authentication is allowed, whereas in others processes are completely anonymous. Shared memory models, in which processes communicate by accessing objects in shared memory, are also an important area of research. Communication channels with direct or transferable authentication In most models of communication, protocol participants communicate through authenticated channels. This means that messages are not anonymous, and receivers know the source of every message they receive. Some models assume a stronger, transferable form of authentication, where each message is signed by the sender, so that a receiver knows not just the immediate source of every message, but the participant that initially created the message. This stronger type of authentication is achieved by digital signatures, and when this stronger form of authentication is available, protocols can tolerate a larger number of faults. The two different authentication models are often called the oral communication and written communication models. In an oral communication model, only the immediate source of information is known, whereas in the stronger, written communication models, at every step the receiver learns not just the immediate source of the message, but the communication history of the message. Inputs and outputs of consensus In the most traditional single-value consensus protocols such as Paxos, cooperating nodes agree on a single value such as an integer, which may be of variable size so as to encode useful metadata such as a transaction committed to a database. A special case of the single-value consensus problem, called binary consensus, restricts the input, and hence the output domain, to a single binary digit {0,1}. While not highly useful by themselves, binary consensus protocols are often useful as building blocks in more general consensus protocols, especially for asynchronous consensus. In multi-valued consensus protocols such as Multi-Paxos and Raft, the goal is to agree on not just a single value but a series of values over time, forming a progressively growing history. While multi-valued consensus may be achieved naively by running multiple iterations of a single-valued consensus protocol in succession, many optimizations and other considerations, such as reconfiguration support, can make multi-valued consensus protocols more efficient in practice. Crash and Byzantine failures There are two types of failures a process may undergo: a crash failure or a Byzantine failure. A crash failure occurs when a process abruptly stops and does not resume. Byzantine failures are failures in which absolutely no conditions are imposed. For example, they may occur as a result of the malicious actions of an adversary. A process that experiences a Byzantine failure may send contradictory or conflicting data to other processes, or it may sleep and then resume activity after a lengthy delay. Of the two types of failures, Byzantine failures are far more disruptive. Thus, a consensus protocol tolerating Byzantine failures must be resilient to every possible error that can occur. A stronger version of consensus tolerating Byzantine failures is given by strengthening the integrity constraint: Integrity: if a correct process decides v, then v must have been proposed by some correct process.
Asynchronous and synchronous systems The consensus problem may be considered in the case of asynchronous or synchronous systems. While real-world communications are often inherently asynchronous, it is more practical and often easier to model synchronous systems, given that asynchronous systems naturally involve more issues than synchronous ones. In synchronous systems, it is assumed that all communications proceed in rounds. In one round, a process may send all the messages it requires, while receiving all messages from other processes. In this manner, no message from one round may influence any messages sent within the same round. The FLP impossibility result for asynchronous deterministic consensus In a fully asynchronous message-passing distributed system, in which at least one process may have a crash failure, it has been proven in the famous FLP impossibility result that a deterministic algorithm for achieving consensus is impossible. This impossibility result derives from worst-case scheduling scenarios, which are unlikely to occur in practice except in adversarial situations such as an intelligent denial-of-service attacker in the network. In most normal situations, process scheduling has a degree of natural randomness. In an asynchronous model, some forms of failures can be handled by a synchronous consensus protocol. For instance, the loss of a communication link may be modeled as a process which has suffered a Byzantine failure. Randomized consensus algorithms can circumvent the FLP impossibility result by achieving both safety and liveness with overwhelming probability, even under worst-case scheduling scenarios such as an intelligent denial-of-service attacker in the network. Permissioned versus permissionless consensus Consensus algorithms traditionally assume that the set of participating nodes is fixed and given at the outset: that is, that some prior (manual or automatic) configuration process has permissioned a particular known group of participants who can authenticate each other as members of the group. In the absence of such a well-defined, closed group with authenticated members, a Sybil attack against an open consensus group can defeat even a Byzantine consensus algorithm, simply by creating enough virtual participants to overwhelm the fault tolerance threshold. A permissionless consensus protocol, in contrast, allows anyone in the network to join dynamically and participate without prior permission, but instead imposes a different form of artificial cost or barrier to entry to mitigate the Sybil attack threat. Bitcoin introduced the first permissionless consensus protocol using proof of work and a difficulty adjustment function, in which participants compete to solve cryptographic hash puzzles, and probabilistically earn the right to commit blocks and earn associated rewards in proportion to their invested computational effort. Motivated in part by the high energy cost of this approach, subsequent permissionless consensus protocols have proposed or adopted other alternative participation rules for Sybil attack protection, such as proof of stake, proof of space, and proof of authority. Equivalency of agreement problems Three agreement problems of interest are as follows. Terminating Reliable Broadcast A collection of processes, numbered from 0 to n − 1, communicate by sending messages to one another. Process 0 must transmit a value v to all processes such that: if process 0 is correct, then every correct process receives v; for any two correct processes, each process receives the same value. It is also known as the General's Problem.
Consensus Formal requirements for a consensus protocol may include: Agreement: all correct processes must agree on the same value. Weak validity: for each correct process, its output must be the input of some correct process. Strong validity: if all correct processes receive the same input value, then they must all output that value. Termination: all processes must eventually decide on an output value. Weak Interactive Consistency For n processes in a partially synchronous system (the system alternates between good and bad periods of synchrony), each process chooses a private value. The processes communicate with each other in rounds to determine a public value and generate a consensus vector with the following requirements: if a correct process sends v, then all correct processes receive either v or nothing (integrity property); all messages sent in a round by a correct process are received in the same round by all correct processes (consistency property). It can be shown that variations of these problems are equivalent, in that the solution for a problem in one type of model may be the solution for another problem in another type of model. For example, a solution to the Weak Byzantine General problem in a synchronous authenticated message-passing model leads to a solution for Weak Interactive Consistency. An interactive consistency algorithm can solve the consensus problem by having each process choose the majority value in its consensus vector as its consensus value. Solvability results for some agreement problems There is a t-resilient anonymous synchronous protocol which solves the Byzantine Generals problem if n > 3t, as well as the Weak Byzantine Generals case, where t is the number of failures and n is the number of processes. For systems with n processors, of which f are Byzantine, it has been shown that there exists no algorithm that solves the consensus problem for n ≤ 3f in the oral-messages model. The proof is constructed by first showing the impossibility for the three-node case and using this result to argue about partitions of processors. In the written-messages model there are protocols that can tolerate any number of failures. In a fully asynchronous system there is no consensus solution that can tolerate one or more crash failures, even when only requiring the non-triviality property. This result is sometimes called the FLP impossibility proof, named after the authors Michael J. Fischer, Nancy Lynch, and Mike Paterson, who were awarded a Dijkstra Prize for this significant work. The FLP result has been mechanically verified to hold even under fairness assumptions. However, FLP does not state that consensus can never be reached: merely that, under the model's assumptions, no algorithm can always reach consensus in bounded time. In practice it is highly unlikely to occur. Some consensus protocols The Paxos consensus algorithm by Leslie Lamport, and variants of it such as Raft, are used pervasively in widely deployed distributed and cloud computing systems. These algorithms are typically synchronous, dependent on an elected leader to make progress, and tolerate only crashes, not Byzantine failures. An example of a polynomial-time binary consensus protocol that tolerates Byzantine failures is the Phase King algorithm by Garay and Berman. The algorithm solves consensus in a synchronous message-passing model with n processes and up to f failures, provided n > 4f. In the Phase King algorithm, there are f + 1 phases, with 2 rounds per phase. Each process keeps track of its preferred output (initially equal to the process's own input value). In the first round of each phase, each process broadcasts its own preferred value to all other processes. It then receives the values from all processes and determines which value is the majority value and its count. In the second round of the phase, the process whose id matches the current phase number is designated the king of the phase. The king broadcasts the majority value it observed in the first round and serves as a tie breaker. Each process then updates its preferred value as follows: if the count of the majority value the process observed in the first round is greater than n/2 + f, the process changes its preference to that majority value; otherwise it uses the phase king's value. At the end of f + 1 phases, the processes output their preferred values. A single-machine simulation of this logic is sketched below.
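The following sketch (illustrative Python, simulating the message exchange on one machine, with Byzantine processes modeled as sending arbitrary, possibly different, values to each receiver) follows the description above; it is not a deployable implementation:

    import random
    from collections import Counter

    def phase_king(inputs, byzantine, f):
        # inputs: dict of process id -> initial binary preference
        n = len(inputs)
        assert n > 4 * f                     # resilience condition
        pref = dict(inputs)
        for phase in range(f + 1):           # f + 1 phases, 2 rounds each
            # Round 1: every process broadcasts its preference; Byzantine
            # processes may send a different arbitrary value to each receiver.
            received = {p: {q: (random.choice([0, 1]) if q in byzantine else pref[q])
                            for q in pref} for p in pref}
            majority, count = {}, {}
            for p in pref:
                majority[p], count[p] = Counter(received[p].values()).most_common(1)[0]
            # Round 2: the process whose id equals the phase number is king
            # and broadcasts the majority value it saw, acting as tie breaker.
            king = phase
            for p in pref:
                king_value = random.choice([0, 1]) if king in byzantine else majority[king]
                if count[p] > n // 2 + f:    # strong majority: keep own majority value
                    pref[p] = majority[p]
                else:                        # otherwise adopt the king's value
                    pref[p] = king_value
        return pref                          # preferences after f + 1 phases

    # With n = 9 and f = 1 (so n > 4f), all correct processes end up agreeing:
    decisions = phase_king({i: i % 2 for i in range(9)}, byzantine={4}, f=1)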
Google has implemented a distributed lock service library called Chubby. Chubby maintains lock information in small files which are stored in a replicated database to achieve high availability in the face of failures. The database is implemented on top of a fault-tolerant log layer which is based on the Paxos consensus algorithm. In this scheme, Chubby clients communicate with the Paxos master in order to access/update the replicated log; i.e., read/write to the files. Many peer-to-peer online real-time strategy games use a modified lockstep protocol as a consensus protocol in order to manage game state between players in a game. Each game action results in a game state delta broadcast to all other players in the game along with a hash of the total game state. Each player validates the change by applying the delta to their own game state and comparing the game state hashes. If the hashes do not agree then a vote is cast, and those players whose game state is in the minority are disconnected and removed from the game (known as a desync). Another well-known approach is the family of MSR-type algorithms, which have been used widely in fields from computer science to control theory. Permissionless consensus protocols Bitcoin uses proof of work, a difficulty adjustment function and a reorganization function to achieve permissionless consensus in its open peer-to-peer network. To extend Bitcoin's blockchain or distributed ledger, miners attempt to solve a cryptographic puzzle, where the probability of finding a solution is proportional to the computational effort expended in hashes per second. The node that first solves such a puzzle has its proposed version of the next block of transactions added to the ledger and eventually accepted by all other nodes. As any node in the network can attempt to solve the proof-of-work problem, a Sybil attack is infeasible in principle unless the attacker has over 50% of the computational resources of the network. Other cryptocurrencies (e.g. DASH, NEO, STRATIS, ...) use proof of stake, in which nodes compete to append blocks and earn associated rewards in proportion to stake, or existing cryptocurrency allocated and locked or staked for some time period. One advantage of a 'proof of stake' system over a 'proof of work' system is that it avoids the high energy consumption demanded by the latter, at least with current technology. As an example, Bitcoin mining (2018) was estimated to consume non-renewable energy sources at a rate similar to that of the entire nations of the Czech Republic or Jordan. 
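To make the hash-puzzle competition described above concrete, here is a toy Python sketch in which a "block" is valid only if its hash falls below a difficulty target, so the expected number of attempts, and hence a miner's success rate, is proportional to hashes tried. The function name, difficulty encoding and block format are illustrative simplifications, not Bitcoin's actual consensus rules:

import hashlib

def mine(block_data: bytes, difficulty_bits: int, max_tries: int = 10**7):
    # A smaller target means a harder puzzle: roughly 2**difficulty_bits
    # hash attempts are expected before one falls below it.
    target = 2 ** (256 - difficulty_bits)
    for nonce in range(max_tries):
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()  # a winning proof of work
    return None

print(mine(b"block with transactions", difficulty_bits=20))

Verifying a solution takes a single hash, which is what lets every node cheaply check the winner's claim while finding one remains expensive.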
Some cryptocurrencies, such as Ripple, use a system of validating nodes to validate the ledger. This system, used by Ripple and called the Ripple Protocol Consensus Algorithm (RPCA), works in rounds: Step 1: every server compiles a list of valid candidate transactions; Step 2: each server amalgamates all candidates coming from its Unique Nodes List (UNL) and votes on their veracity; Step 3: transactions passing the minimum threshold are passed to the next round; Step 4: the final round requires 80% agreement. Other participation rules used in permissionless consensus protocols to impose barriers to entry and resist Sybil attacks include proof of authority, proof of space, proof of burn, or proof of elapsed time. These alternatives are again largely motivated by the high amount of computational energy consumed by proof of work. Proof of space is used by cryptocoins such as Burstcoin. Contrasting with the above permissionless participation rules, all of which reward participants in proportion to the amount of investment in some action or resource, proof of personhood protocols aim to give each real human participant exactly one unit of voting power in permissionless consensus, regardless of economic investment. Proposed approaches to achieving one-per-person distribution of consensus power for proof of personhood include physical pseudonym parties, social networks, pseudonymized government-issued identities, and biometrics. Consensus number To solve the consensus problem in a shared-memory system, concurrent objects must be introduced. A concurrent object, or shared object, is a data structure which helps concurrent processes communicate to reach an agreement. Traditional implementations using critical sections face the risk of crashing if some process dies inside the critical section or sleeps for an intolerably long time. Researchers defined wait-freedom as the guarantee that the algorithm completes in a finite number of steps. The consensus number of a concurrent object is defined to be the maximum number of processes in the system which can reach consensus using the given object in a wait-free implementation. Objects with a consensus number of n can implement any object with a consensus number of n or lower, but cannot implement any objects with a higher consensus number. The consensus numbers form what is called Herlihy's hierarchy of synchronization objects. According to the hierarchy, read/write registers cannot solve consensus even in a 2-process system. Data structures like stacks and queues can only solve consensus between two processes. However, some concurrent objects are universal (they have an infinite consensus number), which means they can solve consensus among any number of processes and can simulate any other object through an operation sequence. See also Uniform consensus Quantum Byzantine agreement Byzantine fault tolerance References Further reading Distributed computing problems Fault-tolerant computer systems
6010
https://en.wikipedia.org/wiki/Computer%20worm
Computer worm
A computer worm is a standalone malware computer program that replicates itself in order to spread to other computers. It often uses a computer network to spread itself, relying on security failures on the target computer to access it. It will use this machine as a host to scan and infect other computers. Each newly infected computer is in turn used as a host to scan and infect further computers, and the process repeats. Computer worms use recursive methods to copy themselves without host programs and distribute themselves at a rate that follows exponential growth, thus controlling and infecting more and more computers in a short time. Worms almost always cause at least some harm to the network, even if only by consuming bandwidth, whereas viruses almost always corrupt or modify files on a targeted computer. Many worms are designed only to spread, and do not attempt to change the systems they pass through. However, as the Morris worm and Mydoom showed, even these "payload-free" worms can cause major disruption by increasing network traffic and other unintended effects. History The actual term "worm" was first used in John Brunner's 1975 novel, The Shockwave Rider. In the novel, Nicholas Haflinger designs and sets off a data-gathering worm in an act of revenge against the powerful men who run a national electronic information web that induces mass conformity. "You have the biggest-ever worm loose in the net, and it automatically sabotages any attempt to monitor it. There's never been a worm with that tough a head or that long a tail!" The first-ever computer worm was devised to be anti-virus software. Named Reaper, it was created by Ray Tomlinson to replicate itself across the ARPANET and delete the experimental Creeper program. On November 2, 1988, Robert Tappan Morris, a Cornell University computer science graduate student, unleashed what became known as the Morris worm, disrupting many computers then on the Internet, guessed at the time to be one tenth of all those connected. During the Morris appeal process, the U.S. Court of Appeals estimated the cost of removing the worm from each installation at between $200 and $53,000; this work prompted the formation of the CERT Coordination Center and the Phage mailing list. Morris himself became the first person tried and convicted under the 1986 Computer Fraud and Abuse Act. Features Independence Computer viruses generally require a host program. The virus writes its own code into the host program. When the program runs, the written virus program is executed first, causing infection and damage. A worm does not need a host program, as it is an independent program or code chunk. Therefore, it is not restricted by the host program, but can run independently and actively carry out attacks. Exploit attacks Because a worm is not limited by a host program, worms can take advantage of various operating system vulnerabilities to carry out active attacks. For example, the "Nimda" virus exploits such vulnerabilities to attack. Complexity Some worms are combined with web page scripts, and are hidden in HTML pages using VBScript, ActiveX and other technologies. When a user accesses a webpage containing a virus, the virus automatically resides in memory and waits to be triggered. There are also some worms that are combined with backdoor programs or Trojan horses, such as "Code Red". Contagiousness Worms are more infectious than traditional viruses. 
They not only infect local computers, but also all servers and clients on the network based on the local computer. Worms can easily spread through shared folders, e-mails, malicious web pages, and servers with a large number of vulnerabilities in the network. Harm Any code designed to do more than spread the worm is typically referred to as the "payload". Typical malicious payloads might delete files on a host system (e.g., the ExploreZip worm), encrypt files in a ransomware attack, or exfiltrate data such as confidential documents or passwords. Some worms may install a backdoor. This allows the computer to be remotely controlled by the worm author as a "zombie". Networks of such machines are often referred to as botnets and are very commonly used for a range of malicious purposes, including sending spam or performing DoS attacks. Some special worms attack industrial systems in a targeted manner. Stuxnet was primarily transmitted through LANs and infected thumb-drives, as its targets were never connected to untrusted networks, like the internet. The virus could destroy the core production control software used by chemical, power generation and power transmission companies in various countries around the world - in Stuxnet's case, Iran, Indonesia and India were hardest hit. It was used to "issue orders" to other equipment in the factory, and to hide those commands from being detected. Stuxnet used multiple vulnerabilities and four different zero-day exploits in Windows systems and Siemens SIMATIC WinCC systems to attack the embedded programmable logic controllers of industrial machines. Although these systems operate independently from the network, if the operator inserts a virus-infected drive into the system's USB interface, the virus will be able to gain control of the system without any other operational requirements or prompts. Countermeasures Worms spread by exploiting vulnerabilities in operating systems. Vendors with security problems supply regular security updates (see "Patch Tuesday"), and if these are installed on a machine, then the majority of worms are unable to spread to it. If a vulnerability is disclosed before the vendor releases a security patch, a zero-day attack is possible. Users need to be wary of opening unexpected emails, and should not run attached files or programs, or visit web sites that are linked to such emails. However, as with the ILOVEYOU worm, and with the increased growth and efficiency of phishing attacks, it remains possible to trick the end-user into running malicious code. Anti-virus and anti-spyware software are helpful, but must be kept up-to-date with new pattern files at least every few days. The use of a firewall is also recommended. Users can minimize the threat posed by worms by keeping their computers' operating system and other software up to date, avoiding opening unrecognized or unexpected emails and running firewall and antivirus software. Mitigation techniques include: ACLs in routers and switches, packet filters, TCP Wrapper/ACL-enabled network service daemons, and nullrouting. Infections can sometimes be detected by their behavior - typically scanning the Internet randomly, looking for vulnerable hosts to infect. In addition, machine learning techniques can be used to detect new worms, by analyzing the behavior of the suspected computer. 
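A minimal sketch of the behavioral detection idea mentioned above might flag hosts that contact an unusually large number of distinct addresses within a short window, a typical signature of random scanning. The event format, window and threshold here are illustrative assumptions, not taken from any particular intrusion detection system:

from collections import defaultdict

def flag_scanners(events, window=60.0, max_fanout=100):
    """events: iterable of (timestamp, src_ip, dst_ip), sorted by time."""
    recent = defaultdict(list)  # src_ip -> [(timestamp, dst_ip), ...]
    flagged = set()
    for ts, src, dst in events:
        # Keep only contacts still inside the sliding window, then add this one.
        contacts = [(t, d) for t, d in recent[src] if ts - t <= window]
        contacts.append((ts, dst))
        recent[src] = contacts
        # High fan-out to distinct destinations suggests scanning behavior.
        if len({d for _, d in contacts}) > max_fanout:
            flagged.add(src)
    return flagged

# A host probing 200 distinct addresses in 20 seconds gets flagged.
probe = [(i * 0.1, "10.0.0.5", f"192.168.{i // 256}.{i % 256}") for i in range(200)]
print(flag_scanners(probe))  # {'10.0.0.5'}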
Worms with good intent A helpful worm or anti-worm is a worm designed to do something that its author feels is helpful, though not necessarily with the permission of the executing computer's owner. Beginning with the first research into worms at Xerox PARC, there have been attempts to create useful worms. Those worms allowed John Shoch and Jon Hupp to test Ethernet principles on their network of Xerox Alto computers. Similarly, the Nachi family of worms tried to download and install patches from Microsoft's website to fix vulnerabilities in the host system by exploiting those same vulnerabilities. In practice, although this may have made these systems more secure, it generated considerable network traffic, rebooted the machine in the course of patching it, and did its work without the consent of the computer's owner or user. Regardless of their payload or their writers' intentions, security experts regard all worms as malware. One study proposed the first computer worm that operates on the second layer of the OSI model (Data Link Layer), utilizing topology information such as Content-addressable memory (CAM) tables and Spanning Tree information stored in switches to propagate and probe for vulnerable nodes until the enterprise network is covered. Anti-worms have been used to combat the effects of the Code Red, Blaster, and Santy worms. Welchia is an example of a helpful worm. Utilizing the same deficiencies exploited by the Blaster worm, Welchia infected computers and automatically began downloading Microsoft security updates for Windows without the users' consent. Welchia automatically reboots the computers it infects after installing the updates. One of these updates was the patch that fixed the exploit. Other examples of helpful worms are "Den_Zuko", "Cheeze", "CodeGreen", and "Millenium". See also BlueKeep Botnet Code Shikara (Worm) Computer and network surveillance Computer virus Email spam Father Christmas (computer worm) Self-replicating machine Technical support scam – unsolicited phone calls from a fake "tech support" person, claiming that the computer has a virus or other problems Timeline of computer viruses and worms Trojan horse (computing) Worm memory test XSS worm Zombie (computer science) References External links Malware Guide – Guide for understanding, removing and preventing worm infections on Vernalex.com. "The 'Worm' Programs – Early Experience with a Distributed Computation", John Shoch and Jon Hupp, Communications of the ACM, Volume 25 Issue 3 (March 1982), pp. 172–180. "The Case for Using Layered Defenses to Stop Worms", Unclassified report from the U.S. National Security Agency (NSA), 18 June 2004. Worm Evolution (archived link), paper by Jago Maniscalchi on Digital Threat, 31 May 2009. Computer worms Security breaches Types of malware
35142247
https://en.wikipedia.org/wiki/Hortonworks
Hortonworks
Hortonworks was a data software company based in Santa Clara, California that developed and supported open-source software (primarily around Apache Hadoop) designed to manage big data and associated processing. Hortonworks software was used to build enterprise data services and applications such as IoT (connected cars, for example), single view of X (such as customer, risk, patient), and advanced analytics and machine learning (such as next best action and real-time cybersecurity). Hortonworks had three interoperable product lines: Hortonworks Data Platform (HDP): based on Apache Hadoop, Apache Hive, Apache Spark Hortonworks DataFlow (HDF): based on Apache NiFi, Apache Storm, Apache Kafka Hortonworks DataPlane services (DPS): based on Apache Atlas and Cloudbreak and a pluggable architecture into which partners such as IBM can add their services. In January 2019, Hortonworks completed its merger with Cloudera. History Hortonworks was formed in June 2011 as an independent company, funded by $23 million in venture capital from Yahoo! and Benchmark Capital. Its first office was in Sunnyvale, California. The company employed contributors to the open source software project Apache Hadoop. The Hortonworks Data Platform (HDP) product included Apache Hadoop and was used for storing, processing, and analyzing large volumes of data. The platform was designed to deal with data from many sources and formats. The platform included Hadoop technology such as the Hadoop Distributed File System, MapReduce, Pig, Hive, HBase, ZooKeeper, and additional components. Eric Baldeschweiler (from Yahoo) was the initial chief executive, and Rob Bearden, formerly of SpringSource, was chief operating officer. Benchmark partner Peter Fenton was a board member. The company name refers to the character Horton the Elephant, since the elephant is the symbol for Hadoop. In October 2018, Hortonworks and Cloudera announced they would be merging in an all-stock merger of equals. After the merger, the Apache products of Hortonworks became Cloudera Data Platform. References External links Software companies based in the San Francisco Bay Area Companies based in Sunnyvale, California Companies based in Santa Clara, California Companies formerly listed on the Nasdaq Hadoop Apache Software Foundation Software companies established in 2011 2011 establishments in the United States 2011 establishments in California Big data companies 2014 initial public offerings 2019 mergers and acquisitions Software companies of the United States
34858249
https://en.wikipedia.org/wiki/TSS%20Manx%20Maid%20%281910%29
TSS Manx Maid (1910)
TSS (RMS) Manx Maid (I) No. 131765 - the first ship in the Company's history to be so named - was a packet steamer which was bought by the Isle of Man Steam Packet Company from the London and South Western Railway Company, and commenced service with the Steam Packet in 1923. Dimensions Constructed for the London and South Western Railway Company and named Caesarea, the vessel was built by Cammell Laird at Birkenhead in 1910. Length 284'6"; beam 39'1"; depth 15'8". Caesarea was launched at Birkenhead on Wednesday 14 September 1910. Caesarea was a steel, triple-screw turbine vessel, which had a registered tonnage of . Powered by three directly coupled turbines producing 6,500 i.h.p., Caesarea had double-ended circular return-type boilers with a working steam pressure of 160 p.s.i. This gave Caesarea a service speed of 20 knots. Service life London and South Western Railway Company Caesarea entered service with the London and South Western Railway Company in 1910, which employed her on the Southampton - Channel Islands service. On 7 July 1923, in thick fog, Caesarea struck a rock off Corbière while making passage from Jersey. Water began to enter the stokehold and engine room, the stern began to fill, and the ship started to founder. She was able to turn round and almost made it back to St Helier Harbour, but sank just outside the pierheads. Caesarea was stuck fast for almost two weeks, but was refloated on 20 July on a spring tide. Nobody was injured. Following her salvage, Caesarea was taken under tow to Southampton for initial repair, and from there to Birkenhead, where her repairs were completed and she was acquired by the Isle of Man Steam Packet Company. Isle of Man Steam Packet Company Purchased by the Isle of Man Steam Packet Company in December 1923 for an initial price of £9,000 and renamed Manx Maid, she was refitted at a cost of £22,500 and converted to oil burning for a further £7,000, resulting in a total cost to the Company of £38,500. Manx Maid was fired by six furnaces for each boiler and at 18 knots would consume 84 tons of oil in 24 hours - or 36 tons at 12 knots. Manx Maid entered service with the Steam Packet fleet in time for the 1924 tourist season. She was employed operating to the numerous destinations then served by the Company, and continued to give reliable service to and from the Island, until, with the dark clouds of war beginning to gather, she was requisitioned by the Admiralty on 27 August 1939 as an ABV - an armed boarding vessel. War service Manx Maid saw service in both World Wars. In 1914, she was requisitioned and served throughout World War I under her original name, Caesarea. In World War II she was requisitioned in August 1939, and served as an ABV, an armed boarding vessel. As other Steam Packet ships were attending the Evacuation of Dunkirk, Manx Maid took no part in Operation Dynamo, as she was undergoing repairs at the time. However, once her repairs were completed, she was ordered to Southampton and made two crossings into the war zone as the retreat moved westwards along the French coast. Her first mission took her to St Malo, but by the time she arrived the port was already under German occupation. She escaped after being unable to go inshore, and returned to England. She then made passage to Brest, and in one trip brought out nearly 3,000 troops, roughly twice her allowable passenger complement. Manx Maid pulled out in a heavy swell followed by the , and a cross-channel railway steamer. 
Manx Maid was almost two feet below her marks, and consequently developed condenser trouble, meaning she had to heave to for nearly three hours some distance off the French coast with the main enemy force approximately 30 miles from the port. Even so, she finally reached Plymouth safely. In October 1941 she became a 'Special Duties' vessel and was renamed H.M.S. Bruce by the Royal Navy. From the end of March 1942 she became a Fleet Air Arm target vessel, continuing those duties until March 1945. She was paid off at Ardrossan on 21 March 1945 (minus her mainmast), and returned to the Isle of Man Steam Packet Company that day. Post-war service and disposal Following her war service, Manx Maid returned to the Isle of Man. After a refit, she resumed her duties within the Steam Packet fleet, where she once again worked on the peak traffic routes, until, with the introduction of the , , and , the decision was made to put her up for disposal. Manx Maid was towed to Barrow-in-Furness for breaking up in November 1950. Gallery References Bibliography Ships of the Isle of Man Steam Packet Company 1910 ships World War I merchant ships of the United Kingdom World War II merchant ships of the Isle of Man Ferries of the Isle of Man Steamships of the United Kingdom Maritime incidents in 1923 Steamships Merchant ships of the United Kingdom World War II merchant ships of the United Kingdom Ships built on the River Mersey
47475393
https://en.wikipedia.org/wiki/John%20Richard%20Thackeray
John Richard Thackeray
John Richard Thackeray (17 May 1772 – 19 August 1846) was an English churchman and member of the Thackeray literary family. Early life Thackeray was born on 17 May 1772, the fourth son of Thomas Thackeray (1736–1806), surgeon, of Cambridge, and grandson of Thomas Thackeray DD (1693–1760). He attended Rugby School. He received his BA from Pembroke College, University of Cambridge, in 1794 and his MA in 1797. Clerical career Thackeray was the vicar of Broxted, Essex, from 1810, and the rector of Downham Market and vicar of Wiggenhall St Mary Magdalen, both in Norfolk, from 1811. He was the rector of the parish of Monken Hadley, north of Chipping Barnet, from 1819. Family Thackeray had brothers Elias (1790), William M. (1788), Frederick (1800), Joseph (1802) and Martin (1802). On 13 December 1810, at Hatfield, he married Marianne Franks, daughter of William Franks of Beech Hill Park, Hadley Wood, and Fitzroy Square. The couple had a son, Richard W. (1833), and two daughters. Marianne died 23 March 1855. Death Thackeray died on 19 August 1846 at Hadley after a short illness and was buried on 24 August in a vault underneath the south transept of his own church, St Mary the Virgin, Monken Hadley. References 1772 births 1846 deaths Monken Hadley Alumni of Pembroke College, Cambridge 19th-century English Anglican priests People educated at Rugby School St Mary the Virgin, Monken Hadley
64401060
https://en.wikipedia.org/wiki/List%20of%20unnumbered%20minor%20planets%3A%202001%20P%E2%80%93R
List of unnumbered minor planets: 2001 P–R
This is a partial list of unnumbered minor planets for principal provisional designations assigned between 1 August and 15 September 2001. , a total of 614 bodies remain unnumbered for this period. Objects for this year are listed on the following pages: A–E · Fi · Fii · G–O · P–R · S · T · U · V–W and X–Y. Also see previous and next year. P |- id="2001 PC" bgcolor=#d6d6d6 | 0 || 2001 PC || MBA-O || 16.30 || 3.1 km || multiple || 2001–2021 || 03 Dec 2021 || 413 || align=left | Disc.: NEAT || |- id="2001 PJ" bgcolor=#FFC2E0 | 6 || 2001 PJ || AMO || 21.3 || data-sort-value="0.20" | 200 m || single || 57 days || 30 Sep 2001 || 42 || align=left | Disc.: AMOS || |- id="2001 PX3" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.84 || 1.3 km || multiple || 1997–2021 || 03 May 2021 || 189 || align=left | Disc.: NEAT || |- id="2001 PA4" bgcolor=#FA8072 | 1 || || MCA || 17.9 || data-sort-value="0.78" | 780 m || multiple || 2001–2018 || 20 Dec 2018 || 72 || align=left | Disc.: NEAT || |- id="2001 PS4" bgcolor=#fefefe | 0 || || MBA-I || 16.99 || 1.2 km || multiple || 2001–2022 || 25 Jan 2022 || 394 || align=left | Disc.: AMOS || |- id="2001 PK5" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.36 || 3.0 km || multiple || 2001–2021 || 14 Apr 2021 || 91 || align=left | Disc.: NEAT || |- id="2001 PE8" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.6 || 1.4 km || multiple || 2001–2020 || 11 May 2020 || 97 || align=left | Disc.: NEAT || |- id="2001 PH9" bgcolor=#FFC2E0 | 1 || || AMO || 21.3 || data-sort-value="0.20" | 200 m || multiple || 2001–2019 || 04 Nov 2019 || 74 || align=left | Disc.: NEAT || |- id="2001 PS9" bgcolor=#FA8072 | 1 || || MCA || 17.97 || data-sort-value="0.76" | 760 m || multiple || 2001–2021 || 05 Jun 2021 || 46 || align=left | Disc.: NEAT || |- id="2001 PU9" bgcolor=#FFC2E0 | 1 || || AMO || 19.43 || data-sort-value="0.46" | 460 m || multiple || 2001–2021 || 28 May 2021 || 74 || align=left | Disc.: AMOS || |- id="2001 PO10" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.41 || 1.4 km || multiple || 2001–2021 || 09 Apr 2021 || 186 || align=left | Disc.: AMOSAlt.: 2010 VV89 || |- id="2001 PO13" bgcolor=#FA8072 | 0 || || MCA || 19.01 || data-sort-value="0.47" | 470 m || multiple || 2001–2021 || 01 Nov 2021 || 180 || align=left | Disc.: AMOS || |- id="2001 PU13" bgcolor=#FA8072 | 1 || || MCA || 18.0 || data-sort-value="0.75" | 750 m || multiple || 2001–2020 || 22 Jan 2020 || 88 || align=left | Disc.: NEATAlt.: 2015 XU261 || |- id="2001 PF14" bgcolor=#FFC2E0 | 2 || || AMO || 19.5 || data-sort-value="0.45" | 450 m || multiple || 2001–2007 || 25 Apr 2007 || 77 || align=left | Disc.: NEAT || |- id="2001 PG14" bgcolor=#FFC2E0 | 7 || || APO || 22.5 || data-sort-value="0.11" | 110 m || single || 30 days || 13 Sep 2001 || 68 || align=left | Disc.: NEAT || |- id="2001 PB15" bgcolor=#FA8072 | 0 || || HUN || 18.79 || data-sort-value="0.52" | 520 m || multiple || 2001–2021 || 17 Apr 2021 || 51 || align=left | Disc.: NEATAlt.: 2013 BQ70 || |- id="2001 PC15" bgcolor=#E9E9E9 | 2 || || MBA-M || 18.1 || 1.0 km || multiple || 2001–2019 || 03 Jan 2019 || 108 || align=left | Disc.: AMOS || |- id="2001 PR15" bgcolor=#fefefe | 0 || || MBA-I || 17.54 || data-sort-value="0.92" | 920 m || multiple || 2001–2021 || 10 May 2021 || 138 || align=left | Disc.: NEATAlt.: 2005 VK46, 2014 HQ130, 2015 VV131 || |- id="2001 PN16" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.00 || 1.7 km || multiple || 2001–2021 || 03 May 2021 || 179 || align=left | Disc.: NEAT || |- id="2001 PX16" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.1 || 3.4 km || multiple || 2001–2019 || 20 Dec 2019 || 
105 || align=left | Disc.: NEATAlt.: 2011 GA24 || |- id="2001 PA17" bgcolor=#FA8072 | 0 || || MCA || 17.87 || data-sort-value="0.79" | 790 m || multiple || 2001–2021 || 05 Jan 2021 || 181 || align=left | Disc.: NEATAlt.: 2005 UJ30 || |- id="2001 PH17" bgcolor=#fefefe | 1 || || MBA-I || 19.0 || data-sort-value="0.47" | 470 m || multiple || 2001–2020 || 07 Dec 2020 || 100 || align=left | Disc.: NEATAlt.: 2013 HU89 || |- id="2001 PG23" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.47 || 2.1 km || multiple || 2001–2021 || 10 Apr 2021 || 169 || align=left | Disc.: NEAT || |- id="2001 PN23" bgcolor=#fefefe | 0 || || MBA-I || 17.7 || data-sort-value="0.86" | 860 m || multiple || 2001–2021 || 18 Jan 2021 || 192 || align=left | Disc.: AMOSAlt.: 2012 SM11 || |- id="2001 PL24" bgcolor=#E9E9E9 | 1 || || MBA-M || 17.5 || 1.3 km || multiple || 2001–2019 || 24 Dec 2019 || 41 || align=left | Disc.: AMOSAlt.: 2019 TR28 || |- id="2001 PY27" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.9 || 1.8 km || multiple || 2001–2020 || 11 Dec 2020 || 70 || align=left | Disc.: AMOSAlt.: 2015 XV85 || |- id="2001 PH29" bgcolor=#FA8072 | – || || MCA || 17.8 || data-sort-value="0.82" | 820 m || single || 5 days || 20 Aug 2001 || 62 || align=left | Disc.: AMOS || |- id="2001 PJ29" bgcolor=#FFC2E0 | 4 || || APO || 23.0 || data-sort-value="0.089" | 89 m || single || 15 days || 30 Aug 2001 || 67 || align=left | Disc.: AMOS || |- id="2001 PX30" bgcolor=#E9E9E9 | – || || MBA-M || 17.6 || data-sort-value="0.90" | 900 m || single || 27 days || 16 Aug 2001 || 12 || align=left | Disc.: NEAT || |- id="2001 PB31" bgcolor=#fefefe | 0 || || MBA-I || 18.26 || data-sort-value="0.66" | 660 m || multiple || 2001–2021 || 09 Apr 2021 || 78 || align=left | Disc.: NEAT || |- id="2001 PM31" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.07 || 1.1 km || multiple || 1993–2021 || 07 Jul 2021 || 149 || align=left | Disc.: NEAT || |- id="2001 PA32" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.25 || 1.1 km || multiple || 2001–2021 || 08 May 2021 || 115 || align=left | Disc.: NEATAlt.: 2017 HB12 || |- id="2001 PM34" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.4 || 1.4 km || multiple || 2001–2018 || 11 Jul 2018 || 114 || align=left | Disc.: NEAT || |- id="2001 PY35" bgcolor=#d6d6d6 | – || || MBA-O || 17.4 || 1.8 km || single || 56 days || 06 Oct 2001 || 15 || align=left | Disc.: NEAT || |- id="2001 PE36" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.41 || 2.2 km || multiple || 2001–2021 || 15 May 2021 || 280 || align=left | Disc.: NEAT || |- id="2001 PR36" bgcolor=#E9E9E9 | 1 || || MBA-M || 17.4 || 1.4 km || multiple || 1997–2014 || 15 Dec 2014 || 87 || align=left | Disc.: NEATAlt.: 2010 VV197 || |- id="2001 PT42" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.5 || 2.8 km || multiple || 2001–2020 || 19 Dec 2020 || 121 || align=left | Disc.: NEATAlt.: 2015 UJ76 || |- id="2001 PE43" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.4 || 1.4 km || multiple || 2001–2018 || 08 Aug 2018 || 104 || align=left | Disc.: AMOS || |- id="2001 PY43" bgcolor=#FA8072 | 0 || || MCA || 18.7 || data-sort-value="0.54" | 540 m || multiple || 2001–2018 || 14 Jun 2018 || 64 || align=left | Disc.: AMOS || |- id="2001 PK44" bgcolor=#fefefe | 0 || || MBA-I || 16.9 || 1.2 km || multiple || 2000–2021 || 04 Jan 2021 || 150 || align=left | Disc.: AMOSAlt.: 2009 VO90, 2015 FG340 || |- id="2001 PK47" bgcolor=#C2E0FF | 4 || || TNO || 7.32 || 176 km || multiple || 2001–2020 || 09 Dec 2020 || 34 || align=left | Disc.: Mauna Kea Obs.LoUTNOs, cubewano (hot) || |- id="2001 PA48" bgcolor=#FA8072 | 0 || || MCA || 18.3 || data-sort-value="0.65" | 650 m || 
multiple || 2001–2019 || 25 Nov 2019 || 121 || align=left | Disc.: NEATAlt.: 2017 BS91 || |- id="2001 PK48" bgcolor=#d6d6d6 | 0 || || MBA-O || 15.4 || 4.6 km || multiple || 1999–2021 || 18 Jan 2021 || 193 || align=left | Disc.: AMOSAlt.: 2011 GU36, 2014 YM34, 2016 ER89 || |- id="2001 PP49" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.3 || 3.1 km || multiple || 2001–2020 || 22 Dec 2020 || 56 || align=left | Disc.: NEAT || |- id="2001 PR51" bgcolor=#fefefe | 0 || || MBA-I || 17.32 || 1.0 km || multiple || 2001–2022 || 24 Jan 2022 || 140 || align=left | Disc.: AMOSAlt.: 2010 CF208 || |- id="2001 PX51" bgcolor=#d6d6d6 | 2 || || MBA-O || 17.1 || 2.1 km || multiple || 2001–2017 || 23 Oct 2017 || 35 || align=left | Disc.: AMOSAlt.: 2017 NN3 || |- id="2001 PF52" bgcolor=#fefefe | 0 || || MBA-I || 17.52 || data-sort-value="0.93" | 930 m || multiple || 2001–2021 || 10 Jun 2021 || 135 || align=left | Disc.: AMOSAlt.: 2008 QS43 || |- id="2001 PQ53" bgcolor=#E9E9E9 | 1 || || MBA-M || 17.5 || 1.3 km || multiple || 2001–2018 || 17 Aug 2018 || 55 || align=left | Disc.: NEAT || |- id="2001 PA55" bgcolor=#E9E9E9 | 1 || || MBA-M || 17.6 || data-sort-value="0.90" | 900 m || multiple || 2001–2021 || 11 Jun 2021 || 38 || align=left | Disc.: AMOSAlt.: 2021 GO113 || |- id="2001 PN55" bgcolor=#fefefe | 0 || || MBA-I || 17.8 || data-sort-value="0.82" | 820 m || multiple || 2001–2019 || 21 Nov 2019 || 172 || align=left | Disc.: AMOSAlt.: 2012 RJ20 || |- id="2001 PO55" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.59 || 2.7 km || multiple || 2001–2021 || 12 May 2021 || 50 || align=left | Disc.: AMOSAlt.: 2012 PX11 || |- id="2001 PS56" bgcolor=#fefefe | 1 || || MBA-I || 17.9 || data-sort-value="0.78" | 780 m || multiple || 2001–2019 || 05 Nov 2019 || 68 || align=left | Disc.: AMOS || |- id="2001 PL57" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.9 || 1.8 km || multiple || 2001–2020 || 23 Jan 2020 || 142 || align=left | Disc.: AMOS || |- id="2001 PJ60" bgcolor=#E9E9E9 | 1 || || MBA-M || 17.2 || 1.5 km || multiple || 2001–2020 || 25 Jan 2020 || 157 || align=left | Disc.: AMOSAlt.: 2010 LM53 || |- id="2001 PK60" bgcolor=#FA8072 | 0 || || MCA || 18.01 || data-sort-value="0.74" | 740 m || multiple || 2001–2020 || 31 Jan 2020 || 173 || align=left | Disc.: AMOSAlt.: 2006 AY1 || |- id="2001 PT60" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.5 || 2.1 km || multiple || 2001–2020 || 23 Jan 2020 || 142 || align=left | Disc.: AMOSAlt.: 2014 QO192 || |- id="2001 PJ61" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.67 || 1.4 km || multiple || 2001–2021 || 09 Jul 2021 || 317 || align=left | Disc.: AMOSAlt.: 2011 AD16 || |- id="2001 PJ65" bgcolor=#FA8072 | 2 || || MCA || 18.8 || data-sort-value="0.52" | 520 m || multiple || 2001–2013 || 05 May 2013 || 81 || align=left | Disc.: NEAT || |- id="2001 PV65" bgcolor=#fefefe | 1 || || MBA-I || 17.9 || data-sort-value="0.78" | 780 m || multiple || 2001–2019 || 27 Nov 2019 || 104 || align=left | Disc.: AMOS || |- id="2001 PT66" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.7 || 2.5 km || multiple || 2001–2018 || 09 Nov 2018 || 63 || align=left | Disc.: NEATAlt.: 2012 NH1 || |- id="2001 PO67" bgcolor=#E9E9E9 | 1 || || MBA-M || 17.7 || 1.6 km || multiple || 2001–2021 || 18 Jan 2021 || 74 || align=left | Disc.: AMOSAlt.: 2015 VJ125 || |- id="2001 PR67" bgcolor=#fefefe | 0 || || MBA-I || 17.6 || data-sort-value="0.90" | 900 m || multiple || 2001–2021 || 18 Jan 2021 || 110 || align=left | Disc.: NEATAlt.: 2005 UW502, 2016 WS26 || |- id="2001 PS67" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.65 || 2.0 km || multiple || 2001–2021 || 17 Apr 2021 || 241 
|| align=left | Disc.: NEATAlt.: 2012 BM130, 2013 GE86, 2014 NS58 || |- id="2001 PV67" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.99 || 1.7 km || multiple || 2001–2021 || 13 May 2021 || 165 || align=left | Disc.: AMOS || |- id="2001 PW67" bgcolor=#fefefe | 0 || || MBA-I || 18.19 || data-sort-value="0.68" | 680 m || multiple || 2001–2021 || 01 Nov 2021 || 160 || align=left | Disc.: AMOS || |- id="2001 PY67" bgcolor=#fefefe | 2 || || MBA-I || 18.7 || data-sort-value="0.54" | 540 m || multiple || 2001–2016 || 07 May 2016 || 29 || align=left | Disc.: NEAT || |- id="2001 PZ67" bgcolor=#fefefe | 0 || || MBA-I || 18.4 || data-sort-value="0.62" | 620 m || multiple || 2001–2019 || 15 Nov 2019 || 87 || align=left | Disc.: AMOS || |- id="2001 PB68" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.2 || 1.5 km || multiple || 2001–2019 || 28 Nov 2019 || 39 || align=left | Disc.: NEAT || |- id="2001 PC68" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.43 || data-sort-value="0.97" | 970 m || multiple || 2001–2021 || 28 Jul 2021 || 47 || align=left | Disc.: NEAT || |- id="2001 PF68" bgcolor=#fefefe | 2 || || MBA-I || 18.6 || data-sort-value="0.57" | 570 m || multiple || 2001–2019 || 06 Apr 2019 || 33 || align=left | Disc.: AMOSAdded on 9 March 2021 || |} back to top Q |- id="2001 QJ" bgcolor=#FFC2E0 | 1 || 2001 QJ || AMO || 21.2 || data-sort-value="0.20" | 200 m || multiple || 2001–2019 || 05 Nov 2019 || 51 || align=left | Disc.: LINEAR || |- id="2001 QP2" bgcolor=#FA8072 | 1 || || MCA || 18.1 || data-sort-value="0.71" | 710 m || multiple || 2001–2015 || 20 Jun 2015 || 80 || align=left | Disc.: LINEAR || |- id="2001 QS2" bgcolor=#fefefe | 2 || || MBA-I || 18.3 || data-sort-value="0.65" | 650 m || multiple || 2001–2019 || 25 Nov 2019 || 133 || align=left | Disc.: Prescott Obs.Alt.: 2016 QS32 || |- id="2001 QU2" bgcolor=#fefefe | 0 || || MBA-I || 17.6 || data-sort-value="0.90" | 900 m || multiple || 2001–2021 || 08 Jan 2021 || 84 || align=left | Disc.: LINEAR || |- id="2001 QF6" bgcolor=#C7FF8F | 0 || || CEN || 15.52 || 5.0 km || multiple || 1982–2022 || 22 Jan 2022 || 144 || align=left | Disc.: LINEAR, albedo: 0.040 || |- id="2001 QJ6" bgcolor=#E9E9E9 | 1 || || MBA-M || 17.68 || data-sort-value="0.87" | 870 m || multiple || 2001–2021 || 09 Jul 2021 || 53 || align=left | Disc.: LINEARAlt.: 2005 NG48 || |- id="2001 QJ9" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.20 || 2.4 km || multiple || 2001–2021 || 15 May 2021 || 402 || align=left | Disc.: LINEAR || |- id="2001 QA32" bgcolor=#FA8072 | 0 || || MCA || 18.72 || data-sort-value="0.54" | 540 m || multiple || 2001–2021 || 31 May 2021 || 72 || align=left | Disc.: LINEARAlt.: 2008 VF75 || |- id="2001 QR33" bgcolor=#FA8072 | 1 || || MCA || 19.4 || data-sort-value="0.39" | 390 m || multiple || 2001–2019 || 18 Nov 2019 || 103 || align=left | Disc.: NEAT || |- id="2001 QT33" bgcolor=#FA8072 | 6 || || MCA || 19.9 || data-sort-value="0.31" | 310 m || single || 60 days || 16 Oct 2001 || 40 || align=left | Disc.: NEAT || |- id="2001 QZ33" bgcolor=#FA8072 | 0 || || MCA || 17.53 || 1.7 km || multiple || 2001–2021 || 01 Jul 2021 || 97 || align=left | Disc.: LINEAR || |- id="2001 QA34" bgcolor=#FA8072 | 3 || || MCA || 19.0 || data-sort-value="0.67" | 670 m || multiple || 2001–2018 || 29 Sep 2018 || 48 || align=left | Disc.: NEAT || |- id="2001 QB34" bgcolor=#FFC2E0 | 0 || || AMO || 19.69 || data-sort-value="0.41" | 410 m || multiple || 2001–2014 || 23 Nov 2014 || 69 || align=left | Disc.: NEAT || |- id="2001 QC34" bgcolor=#FFC2E0 | 0 || || APO || 20.20 || data-sort-value="0.32" | 320 m || multiple || 
2001–2020 || 14 Jul 2020 || 609 || align=left | Disc.: NEATPotentially hazardous object || |- id="2001 QD34" bgcolor=#FFC2E0 | 7 || || AMO || 22.8 || data-sort-value="0.098" | 98 m || single || 10 days || 27 Aug 2001 || 30 || align=left | Disc.: LINEAR || |- id="2001 QE34" bgcolor=#FFC2E0 | 0 || || APO || 19.0 || data-sort-value="0.56" | 560 m || multiple || 2001–2020 || 16 Oct 2020 || 397 || align=left | Disc.: LINEAR || |- id="2001 QM44" bgcolor=#fefefe | 1 || || MBA-I || 17.9 || data-sort-value="0.78" | 780 m || multiple || 2001–2021 || 17 Jan 2021 || 68 || align=left | Disc.: LINEAR || |- id="2001 QC45" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.09 || 1.6 km || multiple || 2001–2021 || 09 Apr 2021 || 169 || align=left | Disc.: LINEAR || |- id="2001 QD50" bgcolor=#fefefe | 0 || || MBA-I || 17.5 || data-sort-value="0.94" | 940 m || multiple || 2001–2021 || 18 Jan 2021 || 114 || align=left | Disc.: LINEARAlt.: 2012 RP40 || |- id="2001 QN50" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.00 || 1.7 km || multiple || 2001–2021 || 09 Apr 2021 || 304 || align=left | Disc.: LINEAR || |- id="2001 QE71" bgcolor=#FFC2E0 | 6 || || APO || 24.4 || data-sort-value="0.047" | 47 m || single || 25 days || 13 Sep 2001 || 68 || align=left | Disc.: LINEAR || |- id="2001 QL72" bgcolor=#fefefe | 0 || || MBA-I || 18.4 || data-sort-value="0.62" | 620 m || multiple || 2001–2020 || 11 Oct 2020 || 108 || align=left | Disc.: SpacewatchAlt.: 2005 SU86 || |- id="2001 QO72" bgcolor=#FA8072 | 5 || || MCA || 19.1 || data-sort-value="0.45" | 450 m || single || 67 days || 15 Oct 2001 || 55 || align=left | Disc.: NEAT || |- id="2001 QO85" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.6 || 2.7 km || multiple || 2001–2021 || 17 Jan 2021 || 182 || align=left | Disc.: LINEARAlt.: 2006 WO196 || |- id="2001 QJ86" bgcolor=#FA8072 | 0 || || MCA || 18.69 || data-sort-value="0.54" | 540 m || multiple || 2001–2021 || 12 Jun 2021 || 97 || align=left | Disc.: NEAT || |- id="2001 QR87" bgcolor=#fefefe | 0 || || MBA-I || 17.8 || data-sort-value="0.82" | 820 m || multiple || 1998–2019 || 24 Dec 2019 || 103 || align=left | Disc.: AMOS || |- id="2001 QU87" bgcolor=#E9E9E9 | 0 || || MBA-M || 18.35 || data-sort-value="0.64" | 640 m || multiple || 2001–2021 || 07 Apr 2021 || 43 || align=left | Disc.: AMOS || |- id="2001 QC88" bgcolor=#fefefe | – || || MBA-I || 19.9 || data-sort-value="0.31" | 310 m || single || 12 days || 28 Aug 2001 || 14 || align=left | Disc.: Spacewatch || |- id="2001 QG88" bgcolor=#fefefe | 0 || || MBA-I || 18.5 || data-sort-value="0.59" | 590 m || multiple || 2001–2019 || 01 Jul 2019 || 40 || align=left | Disc.: SpacewatchAlt.: 2005 UY347 || |- id="2001 QP88" bgcolor=#d6d6d6 | 0 || || MBA-O || 17.7 || 1.6 km || multiple || 2001–2019 || 14 Jan 2019 || 50 || align=left | Disc.: Spacewatch || |- id="2001 QL89" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.8 || 1.3 km || multiple || 2001–2021 || 11 Jun 2021 || 205 || align=left | Disc.: NEAT || |- id="2001 QR92" bgcolor=#fefefe | 0 || || MBA-I || 17.59 || data-sort-value="0.90" | 900 m || multiple || 2001–2022 || 26 Jan 2022 || 264 || align=left | Disc.: LINEARAlt.: 2009 XW5 || |- id="2001 QU92" bgcolor=#E9E9E9 | 1 || || MBA-M || 18.15 || data-sort-value="0.70" | 700 m || multiple || 2001–2021 || 08 Jun 2021 || 32 || align=left | Disc.: LINEAR || |- id="2001 QB95" bgcolor=#d6d6d6 | 1 || || MBA-O || 16.9 || 2.3 km || multiple || 2001–2019 || 24 Dec 2019 || 38 || align=left | Disc.: Spacewatch || |- id="2001 QQ95" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.4 || 2.9 km || multiple || 1994–2020 || 24 Dec 2020 || 
99 || align=left | Disc.: Spacewatch || |- id="2001 QT95" bgcolor=#fefefe | 0 || || MBA-I || 18.5 || data-sort-value="0.59" | 590 m || multiple || 2001–2019 || 15 Nov 2019 || 92 || align=left | Disc.: Spacewatch || |- id="2001 QZ95" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.2 || 2.0 km || multiple || 2001–2020 || 19 Oct 2020 || 78 || align=left | Disc.: Spacewatch || |- id="2001 QA96" bgcolor=#fefefe | 0 || || MBA-I || 18.98 || data-sort-value="0.48" | 480 m || multiple || 2001–2021 || 02 Oct 2021 || 73 || align=left | Disc.: Spacewatch || |- id="2001 QF96" bgcolor=#FFC2E0 | 4 || || APO || 24.5 || data-sort-value="0.045" | 45 m || single || 26 days || 13 Sep 2001 || 59 || align=left | Disc.: Spacewatch || |- id="2001 QJ96" bgcolor=#FFC2E0 | 0 || || APO || 22.27 || data-sort-value="0.12" | 120 m || multiple || 2001–2021 || 03 Sep 2021 || 76 || align=left | Disc.: LONEOSAlt.: 2015 PK229 || |- id="2001 QB98" bgcolor=#FA8072 | 0 || || MCA || 17.8 || data-sort-value="0.82" | 820 m || multiple || 2001–2019 || 28 Dec 2019 || 115 || align=left | Disc.: LINEARAlt.: 2012 VX45 || |- id="2001 QZ99" bgcolor=#FA8072 | 0 || || MCA || 17.2 || 2.0 km || multiple || 2001–2021 || 18 Jan 2021 || 423 || align=left | Disc.: LINEAR || |- id="2001 QR100" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.7 || 1.9 km || multiple || 2001–2018 || 09 Aug 2018 || 136 || align=left | Disc.: NEAT || |- id="2001 QQ102" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.1 || 1.6 km || multiple || 2001–2020 || 26 Jan 2020 || 114 || align=left | Disc.: LINEAR || |- id="2001 QQ106" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.09 || 1.6 km || multiple || 2001–2021 || 07 Mar 2021 || 205 || align=left | Disc.: LINEAR || |- id="2001 QX106" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.86 || data-sort-value="0.80" | 800 m || multiple || 2001–2021 || 18 May 2021 || 47 || align=left | Disc.: LINEAR || |- id="2001 QD107" bgcolor=#FA8072 | 1 || || HUN || 17.5 || data-sort-value="0.94" | 940 m || multiple || 2001–2020 || 02 Jan 2020 || 310 || align=left | Disc.: LINEAR || |- id="2001 QL107" bgcolor=#fefefe | 1 || || HUN || 17.7 || data-sort-value="0.86" | 860 m || multiple || 2001–2021 || 16 Jan 2021 || 78 || align=left | Disc.: LINEAR || |- id="2001 QM107" bgcolor=#fefefe | 1 || || MBA-I || 17.6 || data-sort-value="0.90" | 900 m || multiple || 2001–2020 || 29 Jan 2020 || 103 || align=left | Disc.: LINEAR || |- id="2001 QN107" bgcolor=#fefefe | 0 || || HUN || 17.9 || data-sort-value="0.78" | 780 m || multiple || 1996–2021 || 09 Jan 2021 || 217 || align=left | Disc.: LINEARAlt.: 2012 VT82 || |- id="2001 QR108" bgcolor=#FA8072 | 0 || || MCA || 18.9 || data-sort-value="0.49" | 490 m || multiple || 2001–2020 || 20 Jul 2020 || 99 || align=left | Disc.: NEAT || |- id="2001 QS108" bgcolor=#FA8072 | 0 || || MCA || 19.05 || data-sort-value="0.46" | 460 m || multiple || 2001–2021 || 28 Nov 2021 || 169 || align=left | Disc.: LINEAR || |- id="2001 QT108" bgcolor=#FA8072 | 0 || || MCA || 18.0 || 1.1 km || multiple || 2001–2020 || 23 Jan 2020 || 163 || align=left | Disc.: LINEAR || |- id="2001 QV108" bgcolor=#FA8072 | 1 || || MCA || 19.4 || data-sort-value="0.39" | 390 m || multiple || 2001–2019 || 27 May 2019 || 86 || align=left | Disc.: LONEOS || |- id="2001 QP110" bgcolor=#E9E9E9 | 1 || || MBA-M || 17.3 || 2.0 km || multiple || 2001–2019 || 20 Dec 2019 || 97 || align=left | Disc.: Ondřejov Obs. 
|| |- id="2001 QW110" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.7 || 2.5 km || multiple || 1991–2020 || 29 Mar 2020 || 59 || align=left | Disc.: Ondřejov Obs.Alt.: 2001 QZ142 || |- id="2001 QE111" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.78 || 1.9 km || multiple || 2001–2020 || 27 Jan 2020 || 53 || align=left | Disc.: LINEAR || |- id="2001 QH111" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.07 || 1.1 km || multiple || 2001–2021 || 17 Apr 2021 || 114 || align=left | Disc.: NEAT || |- id="2001 QH112" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.8 || 1.2 km || multiple || 2001–2020 || 26 Jan 2020 || 77 || align=left | Disc.: LINEARAlt.: 2014 RG53 || |- id="2001 QN112" bgcolor=#FA8072 | 4 || || MCA || 18.2 || data-sort-value="0.68" | 680 m || single || 103 days || 05 Dec 2001 || 52 || align=left | Disc.: LINEAR || |- id="2001 QZ114" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.9 || 1.8 km || multiple || 2001–2018 || 17 Jul 2018 || 142 || align=left | Disc.: LINEAR || |- id="2001 QW116" bgcolor=#FA8072 | 1 || || MCA || 17.2 || 1.5 km || multiple || 2001–2020 || 25 Jan 2020 || 141 || align=left | Disc.: LINEAR || |- id="2001 QE119" bgcolor=#fefefe | 1 || || MBA-I || 18.0 || data-sort-value="0.75" | 750 m || multiple || 2001–2019 || 20 Dec 2019 || 193 || align=left | Disc.: LINEAR || |- id="2001 QS121" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.4 || 2.9 km || multiple || 2001–2019 || 20 Dec 2019 || 68 || align=left | Disc.: LINEAR || |- id="2001 QO126" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.5 || 1.5 km || multiple || 2001–2021 || 14 Jun 2021 || 129 || align=left | Disc.: LINEARAlt.: 2013 NO || |- id="2001 QP127" bgcolor=#fefefe | 0 || || MBA-I || 17.8 || data-sort-value="0.82" | 820 m || multiple || 2001–2020 || 18 Jul 2020 || 186 || align=left | Disc.: LINEAR || |- id="2001 QS129" bgcolor=#d6d6d6 | 2 || || MBA-O || 16.2 || 3.2 km || multiple || 2001–2020 || 29 Jan 2020 || 109 || align=left | Disc.: LINEARAlt.: 2007 RF243 || |- id="2001 QO132" bgcolor=#E9E9E9 | 2 || || MBA-M || 17.5 || 1.3 km || multiple || 2001–2018 || 13 Jul 2018 || 35 || align=left | Disc.: LINEAR || |- id="2001 QJ133" bgcolor=#FA8072 | 0 || || MCA || 18.15 || data-sort-value="0.70" | 700 m || multiple || 2001–2022 || 05 Jan 2022 || 177 || align=left | Disc.: LINEAR || |- id="2001 QF136" bgcolor=#FA8072 | 0 || || MCA || 18.53 || data-sort-value="0.58" | 580 m || multiple || 2001–2021 || 03 Dec 2021 || 284 || align=left | Disc.: LINEAR || |- id="2001 QY140" bgcolor=#FA8072 | 0 || || MCA || 18.9 || data-sort-value="0.49" | 490 m || multiple || 1995–2020 || 17 Oct 2020 || 127 || align=left | Disc.: LINEAR || |- id="2001 QJ141" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.53 || data-sort-value="0.93" | 930 m || multiple || 2001–2021 || 08 May 2021 || 130 || align=left | Disc.: LINEAR || |- id="2001 QG142" bgcolor=#FFC2E0 | 0 || || AMO || 18.2 || data-sort-value="0.81" | 810 m || multiple || 2001–2019 || 29 May 2019 || 238 || align=left | Disc.: LINEARNEO larger than 1 kilometer || |- id="2001 QJ142" bgcolor=#FFC2E0 | 2 || || APO || 23.7 || data-sort-value="0.065" | 65 m || multiple || 2001–2012 || 25 Nov 2012 || 91 || align=left | Disc.: LINEAR || |- id="2001 QM142" bgcolor=#FFC2E0 | 2 || || APO || 22.6 || data-sort-value="0.11" | 110 m || multiple || 2001–2017 || 18 Apr 2017 || 41 || align=left | Disc.: Spacewatch || |- id="2001 QN142" bgcolor=#FFC2E0 | 1 || || APO || 21.7 || data-sort-value="0.16" | 160 m || multiple || 2001–2012 || 29 May 2012 || 56 || align=left | Disc.: LINEARAlt.: 2012 HP2 || |- id="2001 QO142" bgcolor=#FFC2E0 | 2 || || APO || 19.3 || 
data-sort-value="0.49" | 490 m || multiple || 2001–2008 || 07 May 2008 || 57 || align=left | Disc.: LINEAR || |- id="2001 QP142" bgcolor=#FFC2E0 | 8 || || AMO || 24.1 || data-sort-value="0.054" | 54 m || single || 26 days || 20 Sep 2001 || 24 || align=left | Disc.: LINEAR || |- id="2001 QT144" bgcolor=#E9E9E9 | 2 || || MBA-M || 18.0 || data-sort-value="0.75" | 750 m || multiple || 2001–2021 || 10 Sep 2021 || 33 || align=left | Disc.: SpacewatchAdded on 30 September 2021 || |- id="2001 QC145" bgcolor=#d6d6d6 | 0 || || MBA-O || 17.3 || 1.9 km || multiple || 2001–2020 || 26 Jan 2020 || 56 || align=left | Disc.: SpacewatchAdded on 24 August 2020Alt.: 2018 TO19 || |- id="2001 QS145" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.62 || 2.0 km || multiple || 2001–2021 || 10 Apr 2021 || 159 || align=left | Disc.: SpacewatchAlt.: 2009 LP6, 2010 NP2 || |- id="2001 QW150" bgcolor=#FA8072 | 1 || || HUN || 18.3 || data-sort-value="0.65" | 650 m || multiple || 2001–2021 || 16 Jun 2021 || 47 || align=left | Disc.: LINEARAlt.: 2019 WA19 || |- id="2001 QZ152" bgcolor=#fefefe | 0 || || MBA-I || 18.32 || data-sort-value="0.64" | 640 m || multiple || 2001–2021 || 30 Oct 2021 || 96 || align=left | Disc.: Ondřejov Obs.Alt.: 2012 KQ6 || |- id="2001 QJ153" bgcolor=#FA8072 | 0 || || MCA || 17.0 || 2.3 km || multiple || 2001–2020 || 14 Feb 2020 || 256 || align=left | Disc.: NEAT || |- id="2001 QK153" bgcolor=#FFC2E0 | 5 || || APO || 20.6 || data-sort-value="0.27" | 270 m || single || 45 days || 11 Oct 2001 || 22 || align=left | Disc.: LINEAR || |- id="2001 QL153" bgcolor=#FFC2E0 | 1 || || AMO || 19.0 || data-sort-value="0.56" | 560 m || multiple || 2001–2008 || 17 Sep 2008 || 60 || align=left | Disc.: NEAT || |- id="2001 QV153" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.52 || data-sort-value="0.93" | 930 m || multiple || 2001–2021 || 26 Oct 2021 || 74 || align=left | Disc.: Ondřejov Obs. 
|| |- id="2001 QQ154" bgcolor=#FA8072 | 0 || || MCA || 16.8 || 1.8 km || multiple || 2001–2020 || 29 Jan 2020 || 230 || align=left | Disc.: LINEAR || |- id="2001 QL163" bgcolor=#FFC2E0 | 8 || || APO || 22.6 || data-sort-value="0.11" | 110 m || single || 13 days || 07 Sep 2001 || 19 || align=left | Disc.: LINEAR || |- id="2001 QM163" bgcolor=#FFC2E0 | 3 || || APO || 19.8 || data-sort-value="0.39" | 390 m || multiple || 2001–2008 || 26 Sep 2008 || 34 || align=left | Disc.: NEAT || |- id="2001 QT164" bgcolor=#fefefe | 0 || || MBA-I || 18.63 || data-sort-value="0.56" | 560 m || multiple || 2001–2022 || 26 Jan 2022 || 67 || align=left | Disc.: AMOS || |- id="2001 QX164" bgcolor=#fefefe | 0 || || MBA-I || 17.6 || data-sort-value="0.90" | 900 m || multiple || 2001–2021 || 03 Jan 2021 || 119 || align=left | Disc.: AMOSAlt.: 2005 UX485 || |- id="2001 QL169" bgcolor=#FA8072 | 2 || || MCA || 17.8 || 1.5 km || multiple || 2001–2012 || 13 Oct 2012 || 60 || align=left | Disc.: NEAT || |- id="2001 QE170" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.2 || 3.2 km || multiple || 2001–2020 || 31 Jan 2020 || 61 || align=left | Disc.: LINEARAlt.: 2013 WW50 || |- id="2001 QD172" bgcolor=#fefefe | 0 || || MBA-I || 17.97 || data-sort-value="0.76" | 760 m || multiple || 2001–2021 || 09 May 2021 || 120 || align=left | Disc.: LINEARAlt.: 2008 SN292 || |- id="2001 QS172" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.4 || 1.4 km || multiple || 2001–2019 || 03 Dec 2019 || 51 || align=left | Disc.: LINEAR || |- id="2001 QC173" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.30 || 1.5 km || multiple || 2001–2021 || 15 Apr 2021 || 129 || align=left | Disc.: LINEARAlt.: 2010 OU122, 2014 QS389 || |- id="2001 QG173" bgcolor=#fefefe | 1 || || MBA-I || 18.4 || data-sort-value="0.62" | 620 m || multiple || 2001–2021 || 16 Jan 2021 || 86 || align=left | Disc.: LINEAR || |- id="2001 QQ173" bgcolor=#FA8072 | 0 || || MCA || 18.8 || data-sort-value="0.52" | 520 m || multiple || 1998–2020 || 18 Jul 2020 || 89 || align=left | Disc.: LINEAR || |- id="2001 QN175" bgcolor=#fefefe | 0 || || MBA-I || 18.4 || data-sort-value="0.62" | 620 m || multiple || 2001–2020 || 10 Dec 2020 || 62 || align=left | Disc.: Spacewatch || |- id="2001 QA176" bgcolor=#fefefe | 2 || || MBA-I || 18.8 || data-sort-value="0.52" | 520 m || multiple || 2001–2016 || 05 Nov 2016 || 30 || align=left | Disc.: SpacewatchAdded on 21 August 2021Alt.: 2005 VE20 || |- id="2001 QJ176" bgcolor=#E9E9E9 | 0 || || MBA-M || 18.2 || data-sort-value="0.96" | 960 m || multiple || 2001–2018 || 13 Aug 2018 || 33 || align=left | Disc.: Spacewatch || |- id="2001 QB177" bgcolor=#FA8072 | 1 || || MCA || 18.98 || data-sort-value="0.48" | 480 m || multiple || 2001–2021 || 28 Nov 2021 || 60 || align=left | Disc.: Spacewatch || |- id="2001 QF178" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.25 || 1.5 km || multiple || 2001–2021 || 09 May 2021 || 140 || align=left | Disc.: Kvistaberg Obs. 
|| |- id="2001 QS182" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.0 || 1.2 km || multiple || 2001–2020 || 02 Feb 2020 || 99 || align=left | Disc.: NEAT || |- id="2001 QH185" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.1 || 1.6 km || multiple || 2001–2017 || 25 Feb 2017 || 133 || align=left | Disc.: LINEAR || |- id="2001 QN186" bgcolor=#d6d6d6 | 0 || || MBA-O || 17.9 || 1.5 km || multiple || 2001–2017 || 06 Nov 2017 || 90 || align=left | Disc.: Spacewatch || |- id="2001 QK187" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.2 || 3.2 km || multiple || 2001–2019 || 05 Feb 2019 || 283 || align=left | Disc.: NEAT || |- id="2001 QN187" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.2 || 2.0 km || multiple || 2001–2020 || 17 Dec 2020 || 180 || align=left | Disc.: AMOS || |- id="2001 QT188" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.3 || 1.9 km || multiple || 2001–2021 || 06 Jan 2021 || 125 || align=left | Disc.: Spacewatch || |- id="2001 QU188" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.0 || 1.7 km || multiple || 2001–2021 || 15 Jan 2021 || 120 || align=left | Disc.: Spacewatch || |- id="2001 QW189" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.29 || 1.5 km || multiple || 2001–2021 || 01 May 2021 || 143 || align=left | Disc.: LINEAR || |- id="2001 QU190" bgcolor=#E9E9E9 | 1 || || MBA-M || 17.2 || 1.5 km || multiple || 2001–2018 || 04 Dec 2018 || 104 || align=left | Disc.: LINEARAlt.: 2014 WS67 || |- id="2001 QK195" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.43 || data-sort-value="0.97" | 970 m || multiple || 2001–2021 || 30 Jul 2021 || 102 || align=left | Disc.: LINEAR || |- id="2001 QO196" bgcolor=#fefefe | 0 || || MBA-I || 17.5 || data-sort-value="0.94" | 940 m || multiple || 2001–2020 || 17 Dec 2020 || 212 || align=left | Disc.: AMOSAlt.: 2010 FK124 || |- id="2001 QQ196" bgcolor=#E9E9E9 | 1 || || MBA-M || 17.71 || data-sort-value="0.85" | 850 m || multiple || 2001–2021 || 15 May 2021 || 60 || align=left | Disc.: Spacewatch || |- id="2001 QL202" bgcolor=#fefefe | 0 || || MBA-I || 17.4 || data-sort-value="0.98" | 980 m || multiple || 1993–2020 || 15 Dec 2020 || 139 || align=left | Disc.: LONEOSAlt.: 2012 PV30 || |- id="2001 QP202" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.2 || 1.5 km || multiple || 2001–2020 || 04 Jan 2020 || 174 || align=left | Disc.: LONEOSAlt.: 2010 VO162 || |- id="2001 QK205" bgcolor=#FA8072 | 0 || || MCA || 18.2 || data-sort-value="0.68" | 680 m || multiple || 2001–2019 || 31 Oct 2019 || 130 || align=left | Disc.: LINEAR || |- id="2001 QL205" bgcolor=#E9E9E9 | 3 || || MBA-M || 17.9 || data-sort-value="0.78" | 780 m || multiple || 2001–2019 || 11 Jan 2019 || 26 || align=left | Disc.: LINEAR || |- id="2001 QW205" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.5 || 1.8 km || multiple || 2001–2021 || 09 Jan 2021 || 150 || align=left | Disc.: LONEOSAlt.: 2015 RO19 || |- id="2001 QB207" bgcolor=#d6d6d6 | 1 || || MBA-O || 17.4 || 1.8 km || multiple || 2001–2018 || 09 Nov 2018 || 66 || align=left | Disc.: LINEAR || |- id="2001 QD207" bgcolor=#d6d6d6 | 0 || || MBA-O || 15.30 || 4.8 km || multiple || 2001–2021 || 18 Apr 2021 || 119 || align=left | Disc.: LINEARAlt.: 2010 KE104, 2017 KN31 || |- id="2001 QM208" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.29 || 3.1 km || multiple || 2001–2021 || 15 May 2021 || 115 || align=left | Disc.: LONEOSAlt.: 2012 TV70 || |- id="2001 QO209" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.5 || 1.3 km || multiple || 2001–2019 || 02 Nov 2019 || 53 || align=left | Disc.: LONEOSAlt.: 2010 RB182 || |- id="2001 QR209" bgcolor=#fefefe | 1 || || MBA-I || 18.34 || data-sort-value="0.64" | 640 m || multiple || 2001–2021 || 06 Apr 2021 || 
45 || align=left | Disc.: LONEOSAlt.: 2015 TD263 || |- id="2001 QS209" bgcolor=#E9E9E9 | 1 || || MBA-M || 17.4 || data-sort-value="0.98" | 980 m || multiple || 2001–2014 || 21 Dec 2014 || 32 || align=left | Disc.: LONEOS || |- id="2001 QO210" bgcolor=#fefefe | 0 || || MBA-I || 17.1 || 1.1 km || multiple || 2001–2021 || 04 Jan 2021 || 272 || align=left | Disc.: Desert Eagle Obs. || |- id="2001 QP210" bgcolor=#fefefe | 0 || || MBA-I || 18.60 || data-sort-value="0.57" | 570 m || multiple || 2001–2021 || 09 Nov 2021 || 72 || align=left | Disc.: LONEOS || |- id="2001 QO212" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.58 || 1.3 km || multiple || 2001–2021 || 15 Apr 2021 || 152 || align=left | Disc.: LONEOS || |- id="2001 QE213" bgcolor=#fefefe | 0 || || MBA-I || 18.1 || data-sort-value="0.71" | 710 m || multiple || 2001–2019 || 13 Nov 2019 || 117 || align=left | Disc.: LONEOSAlt.: 2012 TW195 || |- id="2001 QD215" bgcolor=#E9E9E9 | 1 || || MBA-M || 17.2 || 1.5 km || multiple || 2001–2020 || 26 Jan 2020 || 222 || align=left | Disc.: LONEOS || |- id="2001 QX224" bgcolor=#fefefe | 2 || || MBA-I || 18.7 || data-sort-value="0.54" | 540 m || multiple || 2001–2013 || 26 Nov 2013 || 26 || align=left | Disc.: SpacewatchAdded on 22 July 2020Alt.: 2009 UA134 || |- id="2001 QY224" bgcolor=#fefefe | 0 || || MBA-I || 17.7 || data-sort-value="0.86" | 860 m || multiple || 2001–2021 || 04 Jan 2021 || 79 || align=left | Disc.: NEATAlt.: 2016 WX17 || |- id="2001 QA225" bgcolor=#FA8072 | 2 || || MCA || 18.3 || data-sort-value="0.92" | 920 m || multiple || 2001–2018 || 12 Nov 2018 || 55 || align=left | Disc.: NEAT || |- id="2001 QN225" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.46 || 1.4 km || multiple || 2001–2021 || 07 Apr 2021 || 102 || align=left | Disc.: LONEOSAlt.: 2014 QB405 || |- id="2001 QY226" bgcolor=#fefefe | 1 || || HUN || 18.1 || data-sort-value="0.71" | 710 m || multiple || 2001–2020 || 17 Nov 2020 || 213 || align=left | Disc.: LONEOS || |- id="2001 QY227" bgcolor=#E9E9E9 | 1 || || MBA-M || 17.3 || 1.5 km || multiple || 2001–2019 || 05 Nov 2019 || 52 || align=left | Disc.: LONEOS || |- id="2001 QC229" bgcolor=#d6d6d6 | 1 || || MBA-O || 16.6 || 2.7 km || multiple || 2001–2019 || 02 Jan 2019 || 69 || align=left | Disc.: LONEOS || |- id="2001 QB232" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.5 || 1.3 km || multiple || 2001–2020 || 26 Jan 2020 || 68 || align=left | Disc.: LONEOSAlt.: 2014 MX8 || |- id="2001 QG236" bgcolor=#fefefe | 2 || || MBA-I || 18.2 || data-sort-value="0.68" | 680 m || multiple || 2001–2020 || 06 Dec 2020 || 51 || align=left | Disc.: LINEAR || |- id="2001 QE237" bgcolor=#E9E9E9 | 2 || || MBA-M || 17.4 || data-sort-value="0.98" | 980 m || multiple || 2001–2015 || 22 Jan 2015 || 76 || align=left | Disc.: LINEAR || |- id="2001 QM252" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.7 || 1.4 km || multiple || 2001–2021 || 07 Jun 2021 || 139 || align=left | Disc.: LINEAR || |- id="2001 QL253" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.34 || 1.4 km || multiple || 2001–2021 || 12 May 2021 || 103 || align=left | Disc.: LINEARAlt.: 2014 SM247, 2018 PX1 || |- id="2001 QO253" bgcolor=#E9E9E9 | 0 || || MBA-M || 18.08 || data-sort-value="0.72" | 720 m || multiple || 2001–2021 || 10 May 2021 || 46 || align=left | Disc.: LINEARAlt.: 2014 WT277 || |- id="2001 QB262" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.2 || 1.5 km || multiple || 2001–2020 || 19 Jan 2020 || 104 || align=left | Disc.: LINEAR || |- id="2001 QV264" bgcolor=#fefefe | 0 || || MBA-I || 18.0 || data-sort-value="0.75" | 750 m || multiple || 2001–2021 || 18 Jan 2021 || 109 
|| align=left | Disc.: AMOSAlt.: 2005 VU38 || |- id="2001 QC265" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.15 || 2.8 km || multiple || 2001–2021 || 01 Feb 2021 || 267 || align=left | Disc.: LINEAR || |- id="2001 QM268" bgcolor=#E9E9E9 | 2 || || MBA-M || 18.1 || 1.0 km || multiple || 2001–2014 || 20 Nov 2014 || 69 || align=left | Disc.: NEAT || |- id="2001 QR269" bgcolor=#fefefe | 0 || || MBA-I || 18.4 || data-sort-value="0.62" | 620 m || multiple || 2001–2021 || 18 Jan 2021 || 64 || align=left | Disc.: Pic du Midi || |- id="2001 QZ269" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.69 || 1.4 km || multiple || 1997–2021 || 08 Aug 2021 || 213 || align=left | Disc.: LINEARAlt.: 2016 EW6 || |- id="2001 QY271" bgcolor=#fefefe | 0 || || MBA-I || 17.39 || data-sort-value="0.99" | 990 m || multiple || 2001–2021 || 06 Apr 2021 || 247 || align=left | Disc.: LINEARAlt.: 2007 ET125, 2012 UH74 || |- id="2001 QT273" bgcolor=#E9E9E9 | 1 || || MBA-M || 16.9 || 1.8 km || multiple || 2001–2018 || 24 Jun 2018 || 63 || align=left | Disc.: LINEARAlt.: 2014 LN25 || |- id="2001 QF275" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.9 || 2.3 km || multiple || 1996–2021 || 18 Jan 2021 || 143 || align=left | Disc.: LINEAR || |- id="2001 QW276" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.5 || data-sort-value="0.94" | 940 m || multiple || 2001–2021 || 11 Jun 2021 || 159 || align=left | Disc.: LINEAR || |- id="2001 QL278" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.5 || 1.3 km || multiple || 2001–2018 || 04 Nov 2018 || 90 || align=left | Disc.: LINEARAlt.: 2010 VN78, 2013 JX76 || |- id="2001 QG279" bgcolor=#fefefe | 0 || || MBA-I || 18.0 || data-sort-value="0.75" | 750 m || multiple || 2001–2019 || 29 Nov 2019 || 90 || align=left | Disc.: LINEAR || |- id="2001 QJ279" bgcolor=#fefefe | 0 || || MBA-I || 17.3 || 1.0 km || multiple || 2001–2021 || 13 Jan 2021 || 152 || align=left | Disc.: LINEAR || |- id="2001 QP281" bgcolor=#d6d6d6 | 0 || || MBA-O || 15.76 || 3.9 km || multiple || 2001–2021 || 18 Apr 2021 || 154 || align=left | Disc.: LINEAR || |- id="2001 QY282" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.93 || 2.3 km || multiple || 2001–2021 || 07 Jul 2021 || 180 || align=left | Disc.: Kvistaberg Obs. 
|| |- id="2001 QR286" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.5 || 2.8 km || multiple || 2001–2021 || 08 Jun 2021 || 109 || align=left | Disc.: NEAT || |- id="2001 QS286" bgcolor=#d6d6d6 | 1 || || MBA-O || 16.2 || 3.2 km || multiple || 2001–2018 || 09 Nov 2018 || 56 || align=left | Disc.: NEAT || |- id="2001 QG288" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.2 || 3.2 km || multiple || 2001–2018 || 13 Dec 2018 || 64 || align=left | Disc.: NEAT || |- id="2001 QS290" bgcolor=#fefefe | – || || MBA-I || 18.5 || data-sort-value="0.59" | 590 m || single || 12 days || 28 Aug 2001 || 9 || align=left | Disc.: LINEAR || |- id="2001 QO297" bgcolor=#C2E0FF | 3 || || TNO || 6.61 || 158 km || multiple || 2000–2020 || 13 Sep 2020 || 115 || align=left | Disc.: Cerro TololoLoUTNOs, cubewano (cold) || |- id="2001 QP297" bgcolor=#C2E0FF | 3 || || TNO || 6.84 || 142 km || multiple || 2001–2018 || 26 Nov 2018 || 61 || align=left | Disc.: Cerro TololoLoUTNOs, cubewano (cold) || |- id="2001 QQ297" bgcolor=#C2E0FF | 3 || || TNO || 7.0 || 132 km || multiple || 2001–2013 || 05 Oct 2013 || 18 || align=left | Disc.: Cerro TololoLoUTNOs, cubewano (cold) || |- id="2001 QR297" bgcolor=#C2E0FF | 3 || || TNO || 6.73 || 231 km || multiple || 2001–2021 || 08 Aug 2021 || 84 || align=left | Disc.: Cerro TololoLoUTNOs, cubewano (hot) || |- id="2001 QS297" bgcolor=#C2E0FF | E || || TNO || 5.4 || 285 km || single || 23 days || 12 Sep 2001 || 6 || align=left | Disc.: Cerro TololoLoUTNOs, cubewano? || |- id="2001 QU297" bgcolor=#C2E0FF | E || || TNO || 6.0 || 217 km || single || 23 days || 12 Sep 2001 || 6 || align=left | Disc.: Cerro TololoLoUTNOs, cubewano? || |- id="2001 QV297" bgcolor=#C2E0FF | E || || TNO || 6.3 || 189 km || single || 1 day || 21 Aug 2001 || 4 || align=left | Disc.: Cerro TololoLoUTNOs, cubewano? 
|| |- id="2001 QX297" bgcolor=#C2E0FF | 3 || || TNO || 6.3 || 183 km || multiple || 2000–2009 || 20 Nov 2009 || 31 || align=left | Disc.: Cerro TololoLoUTNOs, cubewano (cold) || |- id="2001 QZ297" bgcolor=#C2E0FF | 3 || || TNO || 6.9 || 139 km || multiple || 2000–2019 || 04 Sep 2019 || 30 || align=left | Disc.: Cerro TololoLoUTNOs, cubewano (cold) || |- id="2001 QA298" bgcolor=#C2E0FF | 2 || || TNO || 7.6 || 126 km || multiple || 2000–2014 || 19 Sep 2014 || 25 || align=left | Disc.: Cerro TololoLoUTNOs, other TNO || |- id="2001 QC298" bgcolor=#C2E0FF | 1 || || TNO || 6.8 || 235 km || multiple || 2000–2020 || 12 Aug 2020 || 74 || align=left | Disc.: Cerro TololoLoUTNOs, other TNO, albedo: 0.063; BR-mag: 1.03; taxonomy: BR; binary: 192 km || |- id="2001 QE298" bgcolor=#C2E0FF | 2 || || TNO || 7.68 || 105 km || multiple || 2001–2017 || 19 Nov 2017 || 114 || align=left | Disc.: Cerro TololoLoUTNOs, res4:7, BR-mag: 1.89; taxonomy: RR || |- id="2001 QH298" bgcolor=#C2E0FF | 3 || || TNO || 8.08 || 114 km || multiple || 2000–2017 || 22 Aug 2017 || 24 || align=left | Disc.: Cerro TololoLoUTNOs, plutino || |- id="2001 QU298" bgcolor=#E9E9E9 | 0 || || MBA-M || 18.15 || data-sort-value="0.99" | 990 m || multiple || 2001–2021 || 18 Jan 2021 || 36 || align=left | Disc.: Cerro TololoAdded on 22 July 2020Alt.: 2015 VG184 || |- id="2001 QZ298" bgcolor=#fefefe | 3 || || MBA-I || 19.1 || data-sort-value="0.45" | 450 m || multiple || 2001–2020 || 14 Feb 2020 || 26 || align=left | Disc.: Cerro TololoAdded on 22 July 2020 || |- id="2001 QJ299" bgcolor=#fefefe | 0 || || MBA-I || 18.6 || data-sort-value="0.57" | 570 m || multiple || 2001–2021 || 14 Apr 2021 || 46 || align=left | Disc.: Cerro TololoAdded on 11 May 2021Alt.: 2014 EZ213 || |- id="2001 QU299" bgcolor=#fefefe | 1 || || MBA-I || 18.74 || data-sort-value="0.53" | 530 m || multiple || 2001–2021 || 30 Nov 2021 || 95 || align=left | Disc.: Cerro TololoAdded on 22 July 2020 || |- id="2001 QA300" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.9 || 1.5 km || multiple || 2001–2020 || 14 Sep 2020 || 49 || align=left | Disc.: Cerro TololoAdded on 21 August 2021Alt.: 2006 QU74 || |- id="2001 QG300" bgcolor=#fefefe | 0 || || MBA-I || 19.0 || data-sort-value="0.47" | 470 m || multiple || 2001–2020 || 01 Feb 2020 || 25 || align=left | Disc.: Cerro TololoAdded on 30 September 2021Alt.: 2015 SW49 || |- id="2001 QV300" bgcolor=#d6d6d6 | 0 || || MBA-O || 17.5 || 1.8 km || multiple || 2001–2021 || 04 Oct 2021 || 33 || align=left | Disc.: Cerro Tololo Obs.Added on 29 January 2022 || |- id="2001 QY300" bgcolor=#fefefe | 0 || || MBA-I || 19.0 || data-sort-value="0.47" | 470 m || multiple || 2001–2020 || 12 Sep 2020 || 44 || align=left | Disc.: Cerro Tololo || |- id="2001 QL301" bgcolor=#d6d6d6 | 0 || || MBA-O || 17.88 || 1.5 km || multiple || 2001–2021 || 10 May 2021 || 32 || align=left | Disc.: Cerro Tololo || |- id="2001 QP301" bgcolor=#d6d6d6 | – || || MBA-O || 16.5 || 2.8 km || single || 32 days || 20 Aug 2001 || 7 || align=left | Disc.: Cerro Tololo || |- id="2001 QX301" bgcolor=#fefefe | 0 || || MBA-I || 18.8 || data-sort-value="0.52" | 520 m || multiple || 2001–2021 || 10 Apr 2021 || 34 || align=left | Disc.: Cerro TololoAdded on 17 June 2021 || |- id="2001 QD302" bgcolor=#d6d6d6 | 0 || || MBA-O || 17.7 || 1.6 km || multiple || 2001–2021 || 09 May 2021 || 97 || align=left | Disc.: Cerro TololoAlt.: 2010 GV60, 2021 CK21 || |- id="2001 QG302" bgcolor=#fefefe | – || || MBA-I || 19.3 || data-sort-value="0.41" | 410 m || single || 2 days || 21 Aug 2001 || 6 || align=left | Disc.: 
Cerro Tololo || |- id="2001 QS302" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.2 || 2.0 km || multiple || 2001–2019 || 25 Sep 2019 || 53 || align=left | Disc.: Cerro Tololo || |- id="2001 QW302" bgcolor=#fefefe | 3 || || MBA-I || 19.3 || data-sort-value="0.41" | 410 m || multiple || 2001–2020 || 16 Dec 2020 || 29 || align=left | Disc.: Cerro TololoAdded on 21 August 2021Alt.: 2020 XS18 || |- id="2001 QX302" bgcolor=#fefefe | 0 || || MBA-I || 18.10 || data-sort-value="0.71" | 710 m || multiple || 2001–2020 || 18 Jul 2020 || 33 || align=left | Disc.: Cerro TololoAdded on 21 August 2021Alt.: 2015 BJ387 || |- id="2001 QC303" bgcolor=#d6d6d6 | 0 || || MBA-O || 17.09 || 2.1 km || multiple || 2001–2021 || 10 May 2021 || 60 || align=left | Disc.: Cerro Tololo || |- id="2001 QF303" bgcolor=#fefefe | – || || MBA-I || 18.5 || data-sort-value="0.59" | 590 m || single || 2 days || 21 Aug 2001 || 6 || align=left | Disc.: Cerro Tololo || |- id="2001 QG303" bgcolor=#fefefe | – || || MBA-I || 20.0 || data-sort-value="0.30" | 300 m || single || 2 days || 21 Aug 2001 || 6 || align=left | Disc.: Cerro Tololo || |- id="2001 QL303" bgcolor=#fefefe | – || || MBA-I || 18.3 || data-sort-value="0.65" | 650 m || single || 2 days || 21 Aug 2001 || 6 || align=left | Disc.: Cerro Tololo || |- id="2001 QY303" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.89 || data-sort-value="0.79" | 790 m || multiple || 2001–2021 || 07 May 2021 || 56 || align=left | Disc.: Cerro Tololo || |- id="2001 QB304" bgcolor=#E9E9E9 | 0 || || MBA-M || 18.1 || 1.0 km || multiple || 2001–2020 || 27 Jan 2020 || 39 || align=left | Disc.: Cerro TololoAdded on 22 July 2020Alt.: 2016 CC92 || |- id="2001 QM304" bgcolor=#d6d6d6 | 2 || || MBA-O || 16.8 || 2.4 km || multiple || 2001–2019 || 25 Oct 2019 || 24 || align=left | Disc.: Cerro TololoAdded on 22 July 2020 || |- id="2001 QT304" bgcolor=#d6d6d6 | 1 || || MBA-O || 16.7 || 2.5 km || multiple || 2001–2018 || 30 Sep 2018 || 44 || align=left | Disc.: Cerro Tololo || |- id="2001 QX304" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.91 || 2.3 km || multiple || 2001–2021 || 07 Jul 2021 || 61 || align=left | Disc.: Cerro TololoAdded on 11 May 2021Alt.: 2015 FR306 || |- id="2001 QA305" bgcolor=#fefefe | 0 || || MBA-I || 18.21 || data-sort-value="0.68" | 680 m || multiple || 2001–2022 || 26 Jan 2022 || 60 || align=left | Disc.: Cerro Tololo || |- id="2001 QB306" bgcolor=#E9E9E9 | 0 || || MBA-M || 18.0 || data-sort-value="0.75" | 750 m || multiple || 2001–2020 || 12 Apr 2020 || 37 || align=left | Disc.: Cerro Tololo || |- id="2001 QE306" bgcolor=#fefefe | – || || MBA-I || 18.7 || data-sort-value="0.54" | 540 m || single || 2 days || 21 Aug 2001 || 6 || align=left | Disc.: Cerro Tololo || |- id="2001 QF306" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.5 || 1.8 km || multiple || 2001–2020 || 10 Dec 2020 || 48 || align=left | Disc.: Cerro Tololo || |- id="2001 QJ306" bgcolor=#fefefe | 1 || || MBA-I || 19.6 || data-sort-value="0.36" | 360 m || multiple || 2001–2019 || 23 Oct 2019 || 25 || align=left | Disc.: Cerro Tololo || |- id="2001 QL306" bgcolor=#E9E9E9 | – || || MBA-M || 18.8 || data-sort-value="0.73" | 730 m || single || 2 days || 21 Aug 2001 || 6 || align=left | Disc.: Cerro Tololo || |- id="2001 QM306" bgcolor=#fefefe | 1 || || MBA-I || 19.1 || data-sort-value="0.45" | 450 m || multiple || 2001–2020 || 20 Oct 2020 || 88 || align=left | Disc.: Cerro TololoAdded on 22 July 2020Alt.: 2014 WF566 || |- id="2001 QS306" bgcolor=#d6d6d6 | 6 || || MBA-O || 16.9 || 2.3 km || multiple || 2001–2021 || 06 Feb 2021 || 10 || align=left | Disc.: 
Cerro Tololo || |- id="2001 QV306" bgcolor=#d6d6d6 | 0 || || MBA-O || 17.52 || 1.7 km || multiple || 2001–2021 || 12 Nov 2021 || 71 || align=left | Disc.: Cerro Tololo || |- id="2001 QA307" bgcolor=#d6d6d6 | – || || MBA-O || 17.6 || 1.7 km || single || 2 days || 21 Aug 2001 || 6 || align=left | Disc.: Cerro Tololo || |- id="2001 QE307" bgcolor=#fefefe | 0 || || MBA-I || 18.4 || data-sort-value="0.62" | 620 m || multiple || 2001–2020 || 05 Nov 2020 || 58 || align=left | Disc.: Cerro TololoAdded on 17 January 2021Alt.: 2020 TZ14 || |- id="2001 QG307" bgcolor=#E9E9E9 | – || || MBA-M || 17.9 || 1.5 km || single || 2 days || 21 Aug 2001 || 6 || align=left | Disc.: Cerro Tololo || |- id="2001 QJ307" bgcolor=#fefefe | 0 || || MBA-I || 18.3 || data-sort-value="0.65" | 650 m || multiple || 2001–2020 || 10 Dec 2020 || 44 || align=left | Disc.: Cerro TololoAdded on 22 July 2020 || |- id="2001 QK307" bgcolor=#d6d6d6 | E || || MBA-O || 16.9 || 2.3 km || single || 2 days || 21 Aug 2001 || 6 || align=left | Disc.: Cerro Tololo || |- id="2001 QM307" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.7 || 1.6 km || multiple || 2000–2019 || 28 Aug 2019 || 170 || align=left | Disc.: Cerro Tololo || |- id="2001 QO307" bgcolor=#d6d6d6 | – || || MBA-O || 16.8 || 2.4 km || single || 2 days || 21 Aug 2001 || 6 || align=left | Disc.: Cerro Tololo || |- id="2001 QR307" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.9 || 2.3 km || multiple || 2001–2021 || 08 Apr 2021 || 77 || align=left | Disc.: Cerro TololoAdded on 22 July 2020Alt.: 2005 EP285 || |- id="2001 QS307" bgcolor=#d6d6d6 | – || || MBA-O || 16.6 || 2.7 km || single || 2 days || 21 Aug 2001 || 6 || align=left | Disc.: Cerro Tololo || |- id="2001 QU307" bgcolor=#d6d6d6 | E || || MBA-O || 17.2 || 2.0 km || single || 2 days || 21 Aug 2001 || 6 || align=left | Disc.: Cerro Tololo || |- id="2001 QJ308" bgcolor=#d6d6d6 | E || || MBA-O || 18.0 || 1.4 km || single || 2 days || 21 Aug 2001 || 6 || align=left | Disc.: Cerro Tololo || |- id="2001 QN308" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.2 || 3.2 km || multiple || 2001–2021 || 07 Feb 2021 || 94 || align=left | Disc.: Cerro TololoAdded on 11 May 2021Alt.: 2015 BY558 || |- id="2001 QP308" bgcolor=#fefefe | 2 || || MBA-I || 19.4 || data-sort-value="0.39" | 390 m || multiple || 2001–2019 || 24 Oct 2019 || 22 || align=left | Disc.: Cerro Tololo || |- id="2001 QV308" bgcolor=#fefefe | 0 || || MBA-I || 18.5 || data-sort-value="0.59" | 590 m || multiple || 2001–2020 || 15 Oct 2020 || 35 || align=left | Disc.: Cerro TololoAdded on 17 January 2021 || |- id="2001 QW308" bgcolor=#fefefe | 0 || || MBA-I || 17.5 || data-sort-value="0.94" | 940 m || multiple || 2001–2019 || 22 Sep 2019 || 61 || align=left | Disc.: Cerro Tololo || |- id="2001 QA309" bgcolor=#fefefe | 2 || || MBA-I || 19.4 || data-sort-value="0.39" | 390 m || multiple || 2001–2019 || 28 Aug 2019 || 21 || align=left | Disc.: Cerro TololoAdded on 21 August 2021Alt.: 2019 PU64 || |- id="2001 QG309" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.7 || 2.5 km || multiple || 2001–2020 || 21 Apr 2020 || 43 || align=left | Disc.: Cerro TololoAdded on 21 August 2021Alt.: 2007 VC83 || |- id="2001 QO309" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.9 || 1.1 km || multiple || 2001–2018 || 11 Aug 2018 || 31 || align=left | Disc.: Cerro TololoAdded on 22 July 2020 || |- id="2001 QQ309" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.4 || 1.8 km || multiple || 2001–2020 || 24 Nov 2020 || 31 || align=left | Disc.: Cerro TololoAdded on 17 January 2021 || |- id="2001 QT309" bgcolor=#d6d6d6 | 0 || || MBA-O || 17.12 || 2.1 km || 
multiple || 1996–2021 || 05 Aug 2021 || 72 || align=left | Disc.: Cerro Tololo || |- id="2001 QK310" bgcolor=#E9E9E9 | E || || MBA-M || 17.7 || data-sort-value="0.86" | 860 m || single || 3 days || 20 Aug 2001 || 7 || align=left | Disc.: Cerro Tololo || |- id="2001 QN310" bgcolor=#E9E9E9 | 0 || || MBA-M || 18.3 || data-sort-value="0.92" | 920 m || multiple || 2001–2020 || 02 Feb 2020 || 56 || align=left | Disc.: Cerro Tololo || |- id="2001 QO310" bgcolor=#E9E9E9 | 0 || || MBA-M || 18.0 || 1.1 km || multiple || 2001–2019 || 02 Nov 2019 || 53 || align=left | Disc.: Cerro TololoAdded on 22 July 2020Alt.: 2013 HZ112 || |- id="2001 QZ310" bgcolor=#d6d6d6 | 0 || || MBA-O || 17.24 || 2.0 km || multiple || 2001–2021 || 15 Apr 2021 || 50 || align=left | Disc.: Cerro TololoAlt.: 2007 VY22, 2015 DK21 || |- id="2001 QA311" bgcolor=#fefefe | 0 || || MBA-I || 18.75 || data-sort-value="0.53" | 530 m || multiple || 2001–2021 || 11 Oct 2021 || 45 || align=left | Disc.: Cerro TololoAlt.: 2016 GC288 || |- id="2001 QB311" bgcolor=#fefefe | 1 || || MBA-I || 18.66 || data-sort-value="0.55" | 550 m || multiple || 2001–2022 || 07 Jan 2022 || 51 || align=left | Disc.: Cerro Tololo || |- id="2001 QN311" bgcolor=#fefefe | 0 || || MBA-I || 19.08 || data-sort-value="0.45" | 450 m || multiple || 2001–2021 || 29 Oct 2021 || 58 || align=left | Disc.: Cerro TololoAdded on 5 November 2021Alt.: 2021 PQ105 || |- id="2001 QS311" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.71 || 2.7 km || multiple || 2001–2021 || 02 Apr 2021 || 152 || align=left | Disc.: Cerro TololoAlt.: 2010 KA66 || |- id="2001 QU311" bgcolor=#fefefe | 0 || || MBA-I || 18.3 || data-sort-value="0.65" | 650 m || multiple || 2001–2020 || 15 Oct 2020 || 89 || align=left | Disc.: Cerro TololoAdded on 19 October 2020Alt.: 2005 SH145 || |- id="2001 QY311" bgcolor=#d6d6d6 | 0 || || MBA-O || 17.95 || 1.4 km || multiple || 2001–2021 || 08 Sep 2021 || 50 || align=left | Disc.: Cerro TololoAlt.: 2021 NY17 || |- id="2001 QB312" bgcolor=#d6d6d6 | 3 || || MBA-O || 17.2 || 2.0 km || multiple || 2001–2020 || 27 Jan 2020 || 25 || align=left | Disc.: Cerro TololoAdded on 22 July 2020 || |- id="2001 QV312" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.90 || 1.5 km || multiple || 2001–2021 || 30 Nov 2021 || 61 || align=left | Disc.: Cerro TololoAdded on 22 July 2020 || |- id="2001 QX312" bgcolor=#d6d6d6 | 1 || || MBA-O || 17.2 || 2.0 km || multiple || 2001–2018 || 14 Aug 2018 || 22 || align=left | Disc.: Cerro Tololo || |- id="2001 QU313" bgcolor=#d6d6d6 | 6 || || MBA-O || 18.0 || 1.4 km || multiple || 2001–2016 || 13 Mar 2016 || 7 || align=left | Disc.: Cerro Tololo || |- id="2001 QX313" bgcolor=#fefefe | 1 || || MBA-I || 18.9 || data-sort-value="0.49" | 490 m || multiple || 2001–2020 || 10 Dec 2020 || 50 || align=left | Disc.: Cerro Tololo || |- id="2001 QP314" bgcolor=#fefefe | 1 || || MBA-I || 18.65 || data-sort-value="0.55" | 550 m || multiple || 2001–2021 || 14 Nov 2021 || 87 || align=left | Disc.: Cerro Tololo || |- id="2001 QL315" bgcolor=#E9E9E9 | 0 || || MBA-M || 18.13 || 1.3 km || multiple || 2001–2020 || 22 Oct 2020 || 79 || align=left | Disc.: Cerro TololoAdded on 9 March 2021 || |- id="2001 QZ315" bgcolor=#d6d6d6 | 3 || || MBA-O || 16.9 || 2.3 km || multiple || 2001–2019 || 28 Dec 2019 || 23 || align=left | Disc.: Cerro TololoAdded on 21 August 2021Alt.: 2015 BU268 || |- id="2001 QC316" bgcolor=#E9E9E9 | 1 || || MBA-M || 17.0 || 2.2 km || multiple || 2001–2020 || 11 Dec 2020 || 65 || align=left | Disc.: Cerro Tololo || |- id="2001 QU316" bgcolor=#d6d6d6 | 1 || || MBA-O || 17.37 
|| 1.9 km || multiple || 2001–2020 || 11 Apr 2020 || 29 || align=left | Disc.: Cerro TololoAdded on 11 May 2021Alt.: 2017 SN228 || |- id="2001 QB317" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.8 || 2.4 km || multiple || 2001–2020 || 12 Dec 2020 || 38 || align=left | Disc.: Cerro TololoAdded on 21 August 2021Alt.: 2011 GV || |- id="2001 QO317" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.8 || 1.5 km || multiple || 2001–2019 || 25 Sep 2019 || 38 || align=left | Disc.: Cerro TololoAdded on 21 August 2021Alt.: 2006 UZ205 || |- id="2001 QP317" bgcolor=#d6d6d6 | 1 || || MBA-O || 17.9 || 1.5 km || multiple || 2001–2020 || 26 Jan 2020 || 28 || align=left | Disc.: Cerro TololoAlt.: 2007 TQ284 || |- id="2001 QR317" bgcolor=#fefefe | 0 || || MBA-I || 19.32 || data-sort-value="0.41" | 410 m || multiple || 2001–2021 || 08 Nov 2021 || 84 || align=left | Disc.: Cerro TololoAdded on 21 August 2021Alt.: 2005 YM67 || |- id="2001 QL318" bgcolor=#d6d6d6 | 2 || || MBA-O || 17.3 || 1.9 km || multiple || 2001–2019 || 03 Dec 2019 || 26 || align=left | Disc.: Cerro Tololo || |- id="2001 QY318" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.91 || 2.3 km || multiple || 1999–2022 || 27 Jan 2022 || 48 || align=left | Disc.: Cerro Tololo || |- id="2001 QG319" bgcolor=#fefefe | 0 || || MBA-I || 18.72 || data-sort-value="0.54" | 540 m || multiple || 2001–2021 || 30 Nov 2021 || 52 || align=left | Disc.: Cerro Tololo || |- id="2001 QJ319" bgcolor=#d6d6d6 | 0 || || MBA-O || 17.2 || 2.0 km || multiple || 2001–2021 || 07 Jun 2021 || 38 || align=left | Disc.: Cerro TololoAdded on 22 July 2020 || |- id="2001 QK319" bgcolor=#d6d6d6 | 0 || || MBA-O || 17.2 || 2.0 km || multiple || 1999–2020 || 11 Dec 2020 || 47 || align=left | Disc.: Cerro TololoAlt.: 2010 CP39 || |- id="2001 QP319" bgcolor=#d6d6d6 | 1 || || MBA-O || 16.8 || 2.4 km || multiple || 2001–2019 || 08 Oct 2019 || 23 || align=left | Disc.: Cerro TololoAdded on 22 July 2020 || |- id="2001 QT319" bgcolor=#d6d6d6 | – || || MBA-O || 17.8 || 1.5 km || single || 23 days || 12 Sep 2001 || 6 || align=left | Disc.: Cerro Tololo || |- id="2001 QY319" bgcolor=#d6d6d6 | – || || MBA-O || 18.1 || 1.3 km || single || 23 days || 12 Sep 2001 || 6 || align=left | Disc.: Cerro Tololo || |- id="2001 QC321" bgcolor=#d6d6d6 | – || || MBA-O || 17.4 || 1.8 km || single || 30 days || 19 Sep 2001 || 7 || align=left | Disc.: Cerro Tololo || |- id="2001 QG321" bgcolor=#E9E9E9 | 0 || || MBA-M || 18.55 || data-sort-value="0.58" | 580 m || multiple || 2001–2021 || 08 Aug 2021 || 41 || align=left | Disc.: Cerro TololoAdded on 19 October 2020 || |- id="2001 QP321" bgcolor=#fefefe | 1 || || MBA-I || 18.56 || data-sort-value="0.58" | 580 m || multiple || 2001–2021 || 08 Apr 2021 || 53 || align=left | Disc.: Cerro TololoAdded on 11 May 2021Alt.: 2014 GA20 || |- id="2001 QQ321" bgcolor=#fefefe | 0 || || MBA-I || 18.4 || data-sort-value="0.62" | 620 m || multiple || 2001–2016 || 08 Jun 2016 || 37 || align=left | Disc.: Cerro TololoAdded on 22 July 2020 || |- id="2001 QV321" bgcolor=#E9E9E9 | E || || MBA-M || 18.3 || 1.2 km || single || 2 days || 21 Aug 2001 || 6 || align=left | Disc.: Cerro Tololo || |- id="2001 QA322" bgcolor=#fefefe | – || || MBA-I || 19.5 || data-sort-value="0.37" | 370 m || single || 2 days || 21 Aug 2001 || 6 || align=left | Disc.: Cerro Tololo || |- id="2001 QB322" bgcolor=#FA8072 | 0 || || MCA || 18.4 || data-sort-value="0.62" | 620 m || multiple || 2001–2019 || 28 Nov 2019 || 84 || align=left | Disc.: Cerro TololoAlt.: 2017 EW9 || |- id="2001 QD322" bgcolor=#fefefe | 0 || || MBA-I || 18.7 || 
data-sort-value="0.54" | 540 m || multiple || 2001–2020 || 11 Nov 2020 || 29 || align=left | Disc.: Cerro Tololo || |- id="2001 QG322" bgcolor=#E9E9E9 | 0 || || MBA-M || 18.13 || data-sort-value="0.99" | 990 m || multiple || 2001–2021 || 31 May 2021 || 63 || align=left | Disc.: Cerro Tololo || |- id="2001 QQ322" bgcolor=#C2E0FF | 3 || || TNO || 6.56 || 120 km || multiple || 2000–2020 || 21 Sep 2020 || 112 || align=left | Disc.: Cerro TololoLoUTNOs, cubewano (cold), binary: 109 km || |- id="2001 QR322" bgcolor=#C2E0FF | 2 || || TNO || 8.12 || 132 km || multiple || 2001–2020 || 09 Dec 2020 || 99 || align=left | Disc.: Cerro TololoLoUTNOs, NT, albedo: 0.058 || |- id="2001 QS322" bgcolor=#C2E0FF | 2 || || TNO || 6.96 || 186 km || multiple || 2001–2018 || 26 Nov 2018 || 72 || align=left | Disc.: Cerro TololoLoUTNOs, cubewano (cold), albedo: 0.095 || |- id="2001 QW322" bgcolor=#C2E0FF | 4 || || TNO || 7.9 || 128 km || multiple || 2001–2019 || 04 Sep 2019 || 55 || align=left | Disc.: Mauna Kea Obs.LoUTNOs, cubewano (cold), albedo: 0.093; binary: 126 km || |- id="2001 QX322" bgcolor=#C2E0FF | 2 || || TNO || 6.49 || 190 km || multiple || 2001–2021 || 01 Dec 2021 || 89 || align=left | Disc.: La Palma Obs.LoUTNOs, SDO, BR-mag: 1.46; taxonomy: IR || |- id="2001 QB326" bgcolor=#d6d6d6 | 0 || || MBA-O || 17.16 || 2.1 km || multiple || 2001–2021 || 15 Apr 2021 || 64 || align=left | Disc.: Cerro TololoAdded on 22 July 2020 || |- id="2001 QM326" bgcolor=#fefefe | 1 || || MBA-I || 19.2 || data-sort-value="0.43" | 430 m || multiple || 2001–2020 || 16 Aug 2020 || 33 || align=left | Disc.: Cerro TololoAdded on 21 August 2021Alt.: 2017 RN127 || |- id="2001 QO326" bgcolor=#d6d6d6 | E || || HIL || 16.1 || 3.4 km || single || 2 days || 21 Aug 2001 || 6 || align=left | Disc.: Cerro Tololo || |- id="2001 QH327" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.5 || 2.1 km || multiple || 2001–2021 || 18 Jan 2021 || 234 || align=left | Disc.: AMOSAlt.: 2008 AP130, 2014 KG94 || |- id="2001 QU327" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.2 || 3.2 km || multiple || 1984–2019 || 03 Dec 2019 || 145 || align=left | Disc.: NEAT || |- id="2001 QK328" bgcolor=#fefefe | 0 || || MBA-I || 18.34 || data-sort-value="0.64" | 640 m || multiple || 2001–2020 || 18 Dec 2020 || 110 || align=left | Disc.: NEAT || |- id="2001 QZ328" bgcolor=#E9E9E9 | – || || MBA-M || 18.4 || data-sort-value="0.62" | 620 m || single || 34 days || 19 Sep 2001 || 22 || align=left | Disc.: LINEAR || |- id="2001 QN329" bgcolor=#E9E9E9 | 1 || || MBA-M || 17.7 || 1.2 km || multiple || 2001–2020 || 23 Jan 2020 || 99 || align=left | Disc.: LONEOSAlt.: 2014 QX18 || |- id="2001 QW329" bgcolor=#fefefe | 1 || || MBA-I || 17.9 || data-sort-value="0.78" | 780 m || multiple || 2001–2019 || 31 Oct 2019 || 113 || align=left | Disc.: LONEOSAlt.: 2012 TM256 || |- id="2001 QP331" bgcolor=#C2FFFF | D || || JT || 14.8 || 6.1 km || single || 6 days || 25 Aug 2001 || 21 || align=left | Disc.: La Palma Obs.Trojan camp (L5)Alt.: 2001 QQ331 || |- id="2001 QR331" bgcolor=#E9E9E9 | – || || MBA-M || 18.7 || data-sort-value="0.76" | 760 m || single || 7 days || 26 Aug 2001 || 21 || align=left | Disc.: La Palma Obs. 
|| |- id="2001 QS331" bgcolor=#C2FFFF | – || || JT || 15.2 || 5.1 km || single || 6 days || 25 Aug 2001 || 17 || align=left | Disc.: La Palma Obs.Trojan camp (L5) || |- id="2001 QT331" bgcolor=#fefefe | 1 || || HUN || 19.1 || data-sort-value="0.45" | 450 m || multiple || 2001–2020 || 13 Nov 2020 || 39 || align=left | Disc.: La Palma Obs.Added on 17 January 2021 || |- id="2001 QU331" bgcolor=#C2FFFF | – || || JT || 15.4 || 4.6 km || single || 3 days || 26 Aug 2001 || 9 || align=left | Disc.: La Palma Obs.Trojan camp (L5) || |- id="2001 QV331" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.52 || data-sort-value="0.93" | 930 m || multiple || 2001–2021 || 30 Jun 2021 || 63 || align=left | Disc.: La Palma Obs. || |- id="2001 QX331" bgcolor=#d6d6d6 | – || || MBA-O || 18.7 || 1.0 km || single || 2 days || 26 Aug 2001 || 12 || align=left | Disc.: La Palma Obs. || |- id="2001 QZ331" bgcolor=#d6d6d6 | 3 || || MBA-O || 17.6 || 1.7 km || multiple || 2001–2017 || 12 Nov 2017 || 30 || align=left | Disc.: La Palma Obs. || |- id="2001 QL332" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.3 || 1.9 km || multiple || 2001–2020 || 23 Jan 2020 || 75 || align=left | Disc.: NEATAlt.: 2015 XS126 || |- id="2001 QP332" bgcolor=#fefefe | 0 || || MBA-I || 18.8 || data-sort-value="0.52" | 520 m || multiple || 2001–2019 || 25 Sep 2019 || 74 || align=left | Disc.: SpacewatchAlt.: 2012 UT160 || |- id="2001 QB334" bgcolor=#d6d6d6 | 0 || || MBA-O || 17.03 || 2.2 km || multiple || 2001–2021 || 07 Apr 2021 || 66 || align=left | Disc.: SpacewatchAlt.: 2008 YC148 || |- id="2001 QY334" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.4 || 1.4 km || multiple || 2001–2020 || 27 Feb 2020 || 124 || align=left | Disc.: NEATAlt.: 2010 YJ5 || |- id="2001 QL335" bgcolor=#fefefe | 0 || || MBA-I || 18.7 || data-sort-value="0.54" | 540 m || multiple || 2001–2020 || 10 Nov 2020 || 25 || align=left | Disc.: NEATAdded on 17 January 2021 || |- id="2001 QN335" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.3 || 1.0 km || multiple || 2001–2020 || 22 Apr 2020 || 123 || align=left | Disc.: NEATAlt.: 2015 BZ307 || |- id="2001 QQ335" bgcolor=#fefefe | 1 || || HUN || 18.1 || data-sort-value="0.71" | 710 m || multiple || 2001–2021 || 06 Jan 2021 || 113 || align=left | Disc.: Spacewatch || |- id="2001 QR335" bgcolor=#fefefe | 0 || || MBA-I || 17.7 || data-sort-value="0.86" | 860 m || multiple || 2001–2020 || 23 Jan 2020 || 95 || align=left | Disc.: Cerro Tololo || |- id="2001 QT335" bgcolor=#d6d6d6 | 0 || || MBA-O || 17.00 || 2.2 km || multiple || 2001–2021 || 08 Aug 2021 || 90 || align=left | Disc.: Spacewatch || |- id="2001 QU335" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.25 || 1.5 km || multiple || 2001–2021 || 16 Apr 2021 || 116 || align=left | Disc.: LONEOS || |- id="2001 QX335" bgcolor=#d6d6d6 | 0 || || MBA-O || 15.3 || 4.8 km || multiple || 2001–2021 || 22 Jan 2021 || 185 || align=left | Disc.: LONEOS || |- id="2001 QA336" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.77 || data-sort-value="0.83" | 830 m || multiple || 2001–2021 || 10 Aug 2021 || 87 || align=left | Disc.: Cerro Tololo || |- id="2001 QB336" bgcolor=#fefefe | 0 || || MBA-I || 18.1 || data-sort-value="0.71" | 710 m || multiple || 2001–2020 || 19 Nov 2020 || 83 || align=left | Disc.: Spacewatch || |- id="2001 QC336" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.8 || 1.8 km || multiple || 2001–2021 || 18 Jan 2021 || 141 || align=left | Disc.: LONEOSAlt.: 2010 NM82 || |- id="2001 QD336" bgcolor=#fefefe | 0 || || MBA-I || 18.1 || data-sort-value="0.71" | 710 m || multiple || 2001–2017 || 08 Dec 2017 || 59 || align=left | Disc.: Spacewatch || 
|- id="2001 QE336" bgcolor=#fefefe | 0 || || MBA-I || 17.63 || data-sort-value="0.89" | 890 m || multiple || 2001–2021 || 10 Jun 2021 || 88 || align=left | Disc.: NEAT || |- id="2001 QF336" bgcolor=#fefefe | 0 || || MBA-I || 18.2 || data-sort-value="0.68" | 680 m || multiple || 2001–2017 || 24 Dec 2017 || 57 || align=left | Disc.: Spacewatch || |- id="2001 QG336" bgcolor=#fefefe | 0 || || MBA-I || 17.8 || data-sort-value="0.82" | 820 m || multiple || 2001–2020 || 08 Dec 2020 || 98 || align=left | Disc.: Spacewatch || |- id="2001 QJ336" bgcolor=#fefefe | 0 || || MBA-I || 18.38 || data-sort-value="0.63" | 630 m || multiple || 2001–2019 || 06 Sep 2019 || 88 || align=left | Disc.: Cerro Tololo || |- id="2001 QK336" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.5 || 2.8 km || multiple || 2001–2021 || 05 Jan 2021 || 84 || align=left | Disc.: Cerro Tololo || |- id="2001 QM336" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.3 || 3.1 km || multiple || 2001–2021 || 17 Jan 2021 || 90 || align=left | Disc.: Siding Spring Obs. || |- id="2001 QN336" bgcolor=#fefefe | 0 || || MBA-I || 18.4 || data-sort-value="0.62" | 620 m || multiple || 2001–2019 || 02 Jul 2019 || 60 || align=left | Disc.: Spacewatch || |- id="2001 QO336" bgcolor=#fefefe | 0 || || MBA-I || 18.22 || data-sort-value="0.67" | 670 m || multiple || 2001–2021 || 13 May 2021 || 127 || align=left | Disc.: Cerro Tololo || |- id="2001 QP336" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.09 || 1.6 km || multiple || 2001–2021 || 16 Apr 2021 || 55 || align=left | Disc.: NEAT || |- id="2001 QR336" bgcolor=#E9E9E9 | 0 || || MBA-M || 18.0 || 1.1 km || multiple || 1997–2020 || 05 Jan 2020 || 55 || align=left | Disc.: Cerro Tololo || |- id="2001 QS336" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.69 || 1.2 km || multiple || 2001–2021 || 14 Apr 2021 || 71 || align=left | Disc.: Cerro Tololo || |- id="2001 QT336" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.6 || 1.3 km || multiple || 2001–2020 || 23 Dec 2020 || 47 || align=left | Disc.: Spacewatch || |- id="2001 QU336" bgcolor=#fefefe | 1 || || MBA-I || 19.2 || data-sort-value="0.43" | 430 m || multiple || 2001–2017 || 07 Nov 2017 || 36 || align=left | Disc.: Spacewatch || |- id="2001 QV336" bgcolor=#fefefe | 0 || || MBA-I || 18.06 || data-sort-value="0.73" | 730 m || multiple || 2001–2021 || 12 May 2021 || 104 || align=left | Disc.: Cerro Tololo || |- id="2001 QW336" bgcolor=#d6d6d6 | 0 || || MBA-O || 17.0 || 2.2 km || multiple || 2001–2021 || 05 Jun 2021 || 68 || align=left | Disc.: Cerro Tololo || |- id="2001 QX336" bgcolor=#fefefe | 0 || || MBA-I || 18.73 || data-sort-value="0.53" | 530 m || multiple || 2001–2021 || 14 May 2021 || 48 || align=left | Disc.: Cerro Tololo || |- id="2001 QY336" bgcolor=#fefefe | 0 || || MBA-I || 18.4 || data-sort-value="0.62" | 620 m || multiple || 1994–2021 || 10 Apr 2021 || 72 || align=left | Disc.: Spacewatch || |- id="2001 QZ336" bgcolor=#fefefe | 0 || || MBA-I || 18.85 || data-sort-value="0.50" | 500 m || multiple || 2001–2021 || 25 Nov 2021 || 64 || align=left | Disc.: NEAT || |- id="2001 QA337" bgcolor=#E9E9E9 | 1 || || MBA-M || 18.0 || 1.1 km || multiple || 2001–2016 || 15 Mar 2016 || 36 || align=left | Disc.: Cerro Tololo || |- id="2001 QB337" bgcolor=#d6d6d6 | 0 || || MBA-O || 17.2 || 2.0 km || multiple || 2001–2019 || 04 Feb 2019 || 32 || align=left | Disc.: Spacewatch || |- id="2001 QC337" bgcolor=#fefefe | 0 || || MBA-I || 17.96 || data-sort-value="0.76" | 760 m || multiple || 2001–2022 || 25 Jan 2022 || 43 || align=left | Disc.: NEAT || |- id="2001 QD337" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.7 || 
2.5 km || multiple || 2001–2019 || 06 Oct 2019 || 50 || align=left | Disc.: Spacewatch || |- id="2001 QE337" bgcolor=#d6d6d6 | 0 || || MBA-O || 17.1 || 2.1 km || multiple || 2001–2020 || 15 May 2020 || 33 || align=left | Disc.: Spacewatch || |- id="2001 QF337" bgcolor=#fefefe | 0 || || MBA-I || 18.1 || data-sort-value="0.71" | 710 m || multiple || 2001–2019 || 28 Oct 2019 || 72 || align=left | Disc.: Spacewatch || |- id="2001 QG337" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.70 || data-sort-value="0.86" | 860 m || multiple || 2001–2021 || 13 May 2021 || 87 || align=left | Disc.: Spacewatch || |- id="2001 QH337" bgcolor=#fefefe | 0 || || MBA-I || 18.5 || data-sort-value="0.59" | 590 m || multiple || 2001–2018 || 08 Nov 2018 || 65 || align=left | Disc.: Spacewatch || |- id="2001 QK337" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.63 || 2.6 km || multiple || 2001–2021 || 27 Nov 2021 || 112 || align=left | Disc.: Spacewatch || |- id="2001 QL337" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.5 || 2.8 km || multiple || 2001–2018 || 15 Oct 2018 || 60 || align=left | Disc.: Spacewatch || |- id="2001 QM337" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.98 || 2.2 km || multiple || 2001–2021 || 15 Apr 2021 || 66 || align=left | Disc.: Cerro Tololo || |- id="2001 QN337" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.7 || 2.5 km || multiple || 2001–2019 || 02 Nov 2019 || 52 || align=left | Disc.: Spacewatch || |- id="2001 QO337" bgcolor=#fefefe | 0 || || MBA-I || 18.7 || data-sort-value="0.54" | 540 m || multiple || 2001–2019 || 23 Sep 2019 || 58 || align=left | Disc.: NEAT || |- id="2001 QP337" bgcolor=#E9E9E9 | 1 || || MBA-M || 17.0 || 1.2 km || multiple || 1995–2020 || 19 Apr 2020 || 90 || align=left | Disc.: Spacewatch || |- id="2001 QQ337" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.79 || data-sort-value="0.82" | 820 m || multiple || 2001–2021 || 30 Jun 2021 || 55 || align=left | Disc.: NEAT || |- id="2001 QR337" bgcolor=#E9E9E9 | 0 || || MBA-M || 18.5 || 1.1 km || multiple || 2001–2019 || 01 Nov 2019 || 48 || align=left | Disc.: Cerro Tololo || |- id="2001 QS337" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.6 || 1.7 km || multiple || 2001–2020 || 16 Dec 2020 || 54 || align=left | Disc.: Spacewatch || |- id="2001 QT337" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.0 || 1.7 km || multiple || 2001–2020 || 15 Feb 2020 || 74 || align=left | Disc.: NEAT || |- id="2001 QV337" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.8 || 2.4 km || multiple || 1998–2020 || 21 Apr 2020 || 80 || align=left | Disc.: Spacewatch || |- id="2001 QW337" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.8 || 1.2 km || multiple || 2001–2018 || 09 Jul 2018 || 46 || align=left | Disc.: LONEOS || |- id="2001 QY337" bgcolor=#fefefe | 1 || || MBA-I || 18.7 || data-sort-value="0.54" | 540 m || multiple || 2001–2019 || 06 Sep 2019 || 45 || align=left | Disc.: Spacewatch || |- id="2001 QZ337" bgcolor=#fefefe | 0 || || MBA-I || 18.5 || data-sort-value="0.59" | 590 m || multiple || 2001–2019 || 24 Oct 2019 || 42 || align=left | Disc.: Spacewatch || |- id="2001 QA338" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.9 || data-sort-value="0.78" | 780 m || multiple || 2001–2021 || 14 May 2021 || 104 || align=left | Disc.: Spacewatch || |- id="2001 QB338" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.6 || 2.7 km || multiple || 2001–2021 || 15 Jan 2021 || 43 || align=left | Disc.: Spacewatch || |- id="2001 QC338" bgcolor=#d6d6d6 | 0 || || MBA-O || 17.3 || 1.9 km || multiple || 2001–2019 || 09 Feb 2019 || 45 || align=left | Disc.: Cerro Tololo || |- id="2001 QD338" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.5 || 1.3 km || multiple || 
2001–2021 || 12 Jan 2021 || 40 || align=left | Disc.: Spacewatch || |- id="2001 QF338" bgcolor=#fefefe | 0 || || MBA-I || 18.9 || data-sort-value="0.49" | 490 m || multiple || 2001–2019 || 26 Sep 2019 || 31 || align=left | Disc.: Spacewatch || |- id="2001 QG338" bgcolor=#fefefe | 1 || || MBA-I || 19.3 || data-sort-value="0.41" | 410 m || multiple || 2001–2019 || 19 Sep 2019 || 30 || align=left | Disc.: Spacewatch || |- id="2001 QH338" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.6 || 2.7 km || multiple || 2001–2020 || 01 Jan 2020 || 38 || align=left | Disc.: Spacewatch || |- id="2001 QJ338" bgcolor=#fefefe | 0 || || MBA-I || 19.0 || data-sort-value="0.47" | 470 m || multiple || 2001–2019 || 28 Aug 2019 || 30 || align=left | Disc.: Spacewatch || |- id="2001 QK338" bgcolor=#E9E9E9 | 0 || || MBA-M || 18.0 || 1.4 km || multiple || 2001–2019 || 23 Sep 2019 || 70 || align=left | Disc.: Spacewatch || |- id="2001 QL338" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.4 || 2.9 km || multiple || 2001–2019 || 28 Nov 2019 || 57 || align=left | Disc.: Spacewatch || |- id="2001 QM338" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.5 || 1.8 km || multiple || 2001–2019 || 28 Aug 2019 || 54 || align=left | Disc.: Spacewatch || |- id="2001 QN338" bgcolor=#fefefe | 0 || || MBA-I || 18.42 || data-sort-value="0.62" | 620 m || multiple || 2001–2019 || 07 Jun 2019 || 54 || align=left | Disc.: LONEOS || |- id="2001 QO338" bgcolor=#fefefe | 0 || || MBA-I || 18.4 || data-sort-value="0.62" | 620 m || multiple || 2001–2019 || 28 Aug 2019 || 49 || align=left | Disc.: Spacewatch || |- id="2001 QP338" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.4 || 1.8 km || multiple || 2001–2021 || 05 Jan 2021 || 49 || align=left | Disc.: Spacewatch || |- id="2001 QQ338" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.9 || 1.5 km || multiple || 2001–2020 || 14 Nov 2020 || 39 || align=left | Disc.: Spacewatch || |- id="2001 QR338" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.2 || 2.0 km || multiple || 2001–2020 || 12 Nov 2020 || 61 || align=left | Disc.: Calar Alto Obs. 
|| |- id="2001 QS338" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.7 || 1.6 km || multiple || 2001–2020 || 11 Dec 2020 || 37 || align=left | Disc.: Spacewatch || |- id="2001 QT338" bgcolor=#d6d6d6 | 0 || || MBA-O || 17.1 || 2.1 km || multiple || 2001–2019 || 27 Nov 2019 || 61 || align=left | Disc.: Spacewatch || |- id="2001 QU338" bgcolor=#fefefe | 0 || || MBA-I || 18.27 || data-sort-value="0.66" | 660 m || multiple || 2001–2021 || 11 Apr 2021 || 48 || align=left | Disc.: NEAT || |- id="2001 QX338" bgcolor=#d6d6d6 | 1 || || MBA-O || 18.29 || 1.2 km || multiple || 2001–2021 || 08 Sep 2021 || 37 || align=left | Disc.: Spacewatch || |- id="2001 QY338" bgcolor=#E9E9E9 | 1 || || MBA-M || 16.7 || 1.9 km || multiple || 2001–2020 || 14 Feb 2020 || 87 || align=left | Disc.: NEAT || |- id="2001 QZ338" bgcolor=#fefefe | 0 || || MBA-I || 18.78 || data-sort-value="0.52" | 520 m || multiple || 2001–2021 || 08 Jun 2021 || 45 || align=left | Disc.: Spacewatch || |- id="2001 QB339" bgcolor=#E9E9E9 | 1 || || MBA-M || 17.3 || 1.0 km || multiple || 1997–2020 || 23 Mar 2020 || 45 || align=left | Disc.: LONEOS || |- id="2001 QC339" bgcolor=#E9E9E9 | 2 || || MBA-M || 18.2 || data-sort-value="0.68" | 680 m || multiple || 2001–2021 || 07 Jun 2021 || 55 || align=left | Disc.: Spacewatch || |- id="2001 QE339" bgcolor=#d6d6d6 | 2 || || HIL || 16.0 || 3.5 km || multiple || 1993–2017 || 21 Sep 2017 || 27 || align=left | Disc.: SpacewatchAdded on 22 July 2020 || |- id="2001 QF339" bgcolor=#fefefe | 0 || || MBA-I || 18.14 || data-sort-value="0.70" | 700 m || multiple || 2001–2021 || 16 May 2021 || 66 || align=left | Disc.: SpacewatchAdded on 19 October 2020 || |- id="2001 QG339" bgcolor=#fefefe | 4 || || MBA-I || 19.7 || data-sort-value="0.34" | 340 m || multiple || 2001–2015 || 12 Sep 2015 || 27 || align=left | Disc.: SpacewatchAdded on 19 October 2020 || |- id="2001 QH339" bgcolor=#E9E9E9 | 2 || || MBA-M || 18.7 || data-sort-value="0.54" | 540 m || multiple || 2001–2018 || 13 Dec 2018 || 26 || align=left | Disc.: SpacewatchAdded on 19 October 2020 || |- id="2001 QK339" bgcolor=#fefefe | 1 || || MBA-I || 18.7 || data-sort-value="0.54" | 540 m || multiple || 2001–2020 || 26 Sep 2020 || 39 || align=left | Disc.: Spacewatch Added on 17 January 2021 || |- id="2001 QL339" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.8 || 2.4 km || multiple || 2001–2021 || 07 Feb 2021 || 55 || align=left | Disc.: SpacewatchAdded on 11 May 2021 || |- id="2001 QM339" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.7 || data-sort-value="0.86" | 860 m || multiple || 2001–2018 || 12 Dec 2018 || 36 || align=left | Disc.: SpacewatchAdded on 17 June 2021 || |- id="2001 QN339" bgcolor=#d6d6d6 | 1 || || MBA-O || 18.0 || 1.4 km || multiple || 2001–2021 || 08 May 2021 || 20 || align=left | Disc.: Cerro TololoAdded on 17 June 2021 || |- id="2001 QO339" bgcolor=#d6d6d6 | 0 || || MBA-O || 17.64 || 1.7 km || multiple || 2001–2021 || 31 Aug 2021 || 42 || align=left | Disc.: SpacewatchAdded on 21 August 2021 || |} back to top R |- id="2001 RD2" bgcolor=#FA8072 | 0 || || MCA || 17.76 || data-sort-value="0.83" | 830 m || multiple || 2001–2019 || 08 Feb 2019 || 71 || align=left | Disc.: LINEAR || |- id="2001 RG2" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.6 || 1.3 km || multiple || 2001–2020 || 19 Jan 2020 || 58 || align=left | Disc.: LINEAR || |- id="2001 RK2" bgcolor=#d6d6d6 | 0 || || MBA-O || 15.5 || 4.4 km || multiple || 2001–2021 || 03 Jun 2021 || 210 || align=left | Disc.: LINEAR || |- id="2001 RX2" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.41 || 1.4 km || multiple || 2001–2021 || 09 Apr 
2021 || 117 || align=left | Disc.: Desert Eagle Obs.Alt.: 2014 SG301 || |- id="2001 RO3" bgcolor=#FFC2E0 | 2 || || APO || 23.5 || data-sort-value="0.071" | 71 m || single || 34 days || 11 Oct 2001 || 68 || align=left | Disc.: LINEARAMO at MPC || |- id="2001 RP3" bgcolor=#FFC2E0 | 4 || || AMO || 23.4 || data-sort-value="0.074" | 74 m || single || 33 days || 11 Oct 2001 || 61 || align=left | Disc.: LONEOS || |- id="2001 RR3" bgcolor=#FA8072 | 1 || || MCA || 17.9 || data-sort-value="0.78" | 780 m || multiple || 2001–2020 || 04 Dec 2020 || 102 || align=left | Disc.: LINEAR || |- id="2001 RK4" bgcolor=#E9E9E9 | 1 || || MBA-M || 17.80 || data-sort-value="0.82" | 820 m || multiple || 2001–2021 || 28 Jul 2021 || 52 || align=left | Disc.: LINEARAlt.: 2005 NW37 || |- id="2001 RM8" bgcolor=#E9E9E9 | 2 || || MBA-M || 17.7 || 1.6 km || multiple || 2001–2019 || 31 Oct 2019 || 101 || align=left | Disc.: LINEARAlt.: 2010 KM51 || |- id="2001 RM9" bgcolor=#fefefe | 1 || || MBA-I || 17.2 || 1.1 km || multiple || 2001–2019 || 20 Nov 2019 || 109 || align=left | Disc.: LINEAR || |- id="2001 RJ10" bgcolor=#FA8072 | – || || MCA || 19.2 || data-sort-value="0.43" | 430 m || single || 41 days || 21 Oct 2001 || 21 || align=left | Disc.: LINEAR || |- id="2001 RF13" bgcolor=#FA8072 | 0 || || MCA || 19.38 || data-sort-value="0.40" | 400 m || multiple || 2001–2021 || 30 Nov 2021 || 68 || align=left | Disc.: LINEAR || |- id="2001 RB14" bgcolor=#fefefe | 1 || || MBA-I || 18.6 || data-sort-value="0.57" | 570 m || multiple || 2001–2019 || 29 Oct 2019 || 90 || align=left | Disc.: LINEAR || |- id="2001 RS15" bgcolor=#FA8072 | 1 || || MCA || 18.2 || data-sort-value="0.68" | 680 m || multiple || 2001–2019 || 10 Jul 2019 || 71 || align=left | Disc.: LINEAR || |- id="2001 RE16" bgcolor=#FA8072 | – || || MCA || 19.5 || data-sort-value="0.53" | 530 m || single || 38 days || 19 Oct 2001 || 22 || align=left | Disc.: LINEARMBA at MPC || |- id="2001 RQ17" bgcolor=#FFC2E0 | 0 || || APO || 22.6 || data-sort-value="0.11" | 110 m || multiple || 2001–2018 || 18 Oct 2018 || 182 || align=left | Disc.: LINEARAMO at MPC || |- id="2001 RW17" bgcolor=#FFC2E0 | 0 || || APO || 20.3 || data-sort-value="0.31" | 310 m || multiple || 2001–2019 || 25 Sep 2019 || 257 || align=left | Disc.: LONEOS || |- id="2001 RX17" bgcolor=#FFC2E0 | 4 || || AMO || 20.0 || data-sort-value="0.36" | 360 m || single || 119 days || 08 Jan 2002 || 62 || align=left | Disc.: LONEOS || |- id="2001 RA18" bgcolor=#FFC2E0 | 0 || || AMO || 19.54 || data-sort-value="0.44" | 440 m || multiple || 2001–2019 || 13 Jan 2019 || 64 || align=left | Disc.: LINEAR || |- id="2001 RY19" bgcolor=#d6d6d6 | 2 || || MBA-O || 17.0 || 2.2 km || multiple || 2001–2016 || 02 Apr 2016 || 48 || align=left | Disc.: LINEARAlt.: 2012 SF23 || |- id="2001 RX20" bgcolor=#FA8072 | 0 || || MCA || 18.68 || data-sort-value="0.55" | 550 m || multiple || 2001–2021 || 30 May 2021 || 56 || align=left | Disc.: LINEAR || |- id="2001 RK21" bgcolor=#E9E9E9 | 0 || || MBA-M || 16.66 || 2.0 km || multiple || 2001–2021 || 12 May 2021 || 228 || align=left | Disc.: LINEAR || |- id="2001 RT21" bgcolor=#d6d6d6 | 4 || || MBA-O || 17.8 || 1.5 km || single || 72 days || 18 Nov 2001 || 24 || align=left | Disc.: LINEAR || |- id="2001 RV22" bgcolor=#fefefe | 0 || || MBA-I || 18.25 || data-sort-value="0.67" | 670 m || multiple || 1994–2021 || 15 Apr 2021 || 117 || align=left | Disc.: LINEAR || |- id="2001 RK23" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.3 || 1.9 km || multiple || 2001–2020 || 20 Dec 2020 || 105 || align=left | Disc.: 
LINEARAlt.: 2015 UB79 || |- id="2001 RF33" bgcolor=#E9E9E9 | 3 || || MBA-M || 18.7 || data-sort-value="0.54" | 540 m || multiple || 2001–2005 || 16 Jun 2005 || 15 || align=left | Disc.: LINEAR || |- id="2001 RD35" bgcolor=#E9E9E9 | 1 || || MBA-M || 17.8 || data-sort-value="0.82" | 820 m || multiple || 2001–2015 || 16 Jan 2015 || 34 || align=left | Disc.: LINEAR || |- id="2001 RJ36" bgcolor=#fefefe | 1 || || MBA-I || 18.1 || data-sort-value="0.71" | 710 m || multiple || 2001–2018 || 19 Apr 2018 || 125 || align=left | Disc.: LINEARAlt.: 2015 OF78 || |- id="2001 RA37" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.87 || 2.4 km || multiple || 2001–2021 || 26 Aug 2021 || 95 || align=left | Disc.: LINEAR || |- id="2001 RW39" bgcolor=#d6d6d6 | 2 || || MBA-O || 16.6 || 2.7 km || multiple || 2001–2018 || 06 Nov 2018 || 57 || align=left | Disc.: LINEAR || |- id="2001 RY39" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.05 || 1.6 km || multiple || 2001–2021 || 09 Apr 2021 || 122 || align=left | Disc.: LINEAR || |- id="2001 RA40" bgcolor=#fefefe | 0 || || MBA-I || 18.8 || data-sort-value="0.52" | 520 m || multiple || 2001–2019 || 28 Oct 2019 || 60 || align=left | Disc.: LINEAR || |- id="2001 RR40" bgcolor=#FA8072 | 1 || || MCA || 17.5 || data-sort-value="0.94" | 940 m || multiple || 2001–2015 || 12 Feb 2015 || 65 || align=left | Disc.: LINEAR || |- id="2001 RR41" bgcolor=#d6d6d6 | 0 || || MBA-O || 16.30 || 3.1 km || multiple || 2001–2021 || 15 Apr 2021 || 104 || align=left | Disc.: LINEARAlt.: 2015 EH68 || |- id="2001 RG42" bgcolor=#E9E9E9 | 0 || || MBA-M || 18.04 || 1.0 km || multiple || 2001–2019 || 30 Dec 2019 || 37 || align=left | Disc.: LINEARAlt.: 2014 SP289 || |- id="2001 RN43" bgcolor=#FA8072 | 5 || || MCA || 18.4 || 1.2 km || single || 66 days || 25 Oct 2001 || 41 || align=left | Disc.: LINEAR || |- id="2001 RW43" bgcolor=#fefefe | 0 || || MBA-I || 18.1 || data-sort-value="0.71" | 710 m || multiple || 2001–2019 || 17 Dec 2019 || 89 || align=left | Disc.: Farpoint Obs.Added on 30 September 2021Alt.: 2005 XA129 || |- id="2001 RO46" bgcolor=#FA8072 | 1 || || MCA || 17.8 || data-sort-value="0.82" | 820 m || multiple || 2001–2021 || 04 Jan 2021 || 156 || align=left | Disc.: LINEAR || |- id="2001 RP46" bgcolor=#fefefe | 1 || || HUN || 18.7 || data-sort-value="0.54" | 540 m || multiple || 2001–2019 || 07 Apr 2019 || 51 || align=left | Disc.: LINEARAlt.: 2016 DH1 || |- id="2001 RQ46" bgcolor=#FA8072 | 0 || || MCA || 18.12 || data-sort-value="0.71" | 710 m || multiple || 2001–2021 || 09 Nov 2021 || 70 || align=left | Disc.: LINEAR || |- id="2001 RF47" bgcolor=#FA8072 | 0 || || HUN || 18.1 || data-sort-value="0.71" | 710 m || multiple || 2001–2019 || 25 Nov 2019 || 192 || align=left | Disc.: LINEAR || |- id="2001 RH47" bgcolor=#fefefe | 2 || || MBA-I || 18.0 || data-sort-value="0.75" | 750 m || multiple || 2001–2015 || 15 Oct 2015 || 47 || align=left | Disc.: LINEAR || |- id="2001 RL47" bgcolor=#E9E9E9 | 0 || || MBA-M || 17.39 || 1.4 km || multiple || 2001–2021 || 14 Apr 2021 || 70 || align=left | Disc.: LINEAR || |- id="2001 RM47" bgcolor=#fefefe | 0 || || MBA-I || 18.18 || data-sort-value="0.69" | 690 m || multiple || 2001–2021 || 15 Apr 2021 || 78 || align=left | Disc.: LINEARAlt.: 2012 TM319 || |- id="2001 RN47" bgcolor=#FA8072 | 1 || || HUN || 18.0 || data-sort-value="0.75" | 750 m || multiple || 1995–2019 || 06 Dec 2019 || 241 || align=left | Disc.: LINEAR || |- id="2001 RX47" bgcolor=#FFC2E0 | 1 || || AMO || 19.89 || data-sort-value="0.37" | 370 m || multiple || 2001–2021 || 18 Jun 2021 || 53 || align=left | 
Disc.: LINEAR; Alt.: 2021 LO1

| Designation | U | Group | H (mag) | Diameter | Oppositions | Obs. span | Last obs. | Obs. | Discovery and notes |
|---|---|---|---|---|---|---|---|---|---|
| 2001 RK50 | 0 | MBA-I | 18.29 | 650 m | multiple | 2000–2021 | 08 May 2021 | 87 | Disc.: LINEAR |
| 2001 RJ51 | 0 | MBA-M | 17.06 | 2.2 km | multiple | 2001–2021 | 06 Apr 2021 | 176 | Disc.: LINEAR |
| 2001 RW51 | 0 | MBA-O | 16.4 | 2.9 km | multiple | 2001–2020 | 16 Mar 2020 | 62 | Disc.: LINEAR; Alt.: 2013 YX62 |
| 2001 RW52 | 0 | MBA-M | 17.3 | 1.9 km | multiple | 2001–2021 | 07 Jan 2021 | 158 | Disc.: LINEAR; Alt.: 2015 RY118 |
| 2001 RR54 | 0 | MBA-M | 17.4 | 1.4 km | multiple | 2001–2018 | 12 Oct 2018 | 95 | Disc.: LINEAR; Alt.: 2013 LV14 |
| 2001 RX54 | 0 | MBA-I | 18.3 | 650 m | multiple | 2001–2019 | 28 Dec 2019 | 90 | Disc.: LINEAR |
| 2001 RB55 | 1 | MBA-I | 17.6 | 900 m | multiple | 2001–2020 | 15 Aug 2020 | 84 | Disc.: LINEAR; Alt.: 2009 VA12 |
| 2001 RU55 | 3 | MBA-I | 18.1 | 710 m | multiple | 2001–2021 | 18 Jan 2021 | 33 | Disc.: LINEAR; Alt.: 2012 TD60 |
| 2001 RA56 | 0 | MBA-I | 18.2 | 680 m | multiple | 2001–2020 | 07 Dec 2020 | 53 | Disc.: LINEAR |
| 2001 RV58 | 2 | MBA-I | 18.1 | 710 m | multiple | 2001–2020 | 20 Jul 2020 | 37 | Disc.: LINEAR; added on 24 August 2020 |
| 2001 RE59 | 0 | MBA-M | 17.15 | 1.6 km | multiple | 2001–2021 | 13 May 2021 | 256 | Disc.: LINEAR |
| 2001 RY59 | 2 | MBA-M | 18.6 | 800 m | multiple | 2001–2018 | 13 Dec 2018 | 65 | Disc.: LINEAR |
| 2001 RZ59 | 0 | MBA-I | 17.4 | 980 m | multiple | 2001–2021 | 09 Jan 2021 | 143 | Disc.: LINEAR; Alt.: 2012 UK50, 2015 QW |
| 2001 RN60 | 0 | MBA-I | 17.6 | 900 m | multiple | 2001–2021 | 15 Jan 2021 | 131 | Disc.: LINEAR; Alt.: 2012 TC217 |
| 2001 RX61 | 0 | MBA-M | 17.45 | 1.4 km | multiple | 2001–2021 | 18 Apr 2021 | 104 | Disc.: LINEAR |
| 2001 RL95 | 4 | MCA | 17.4 | 980 m | single | 96 days | 24 Oct 2001 | 31 | Disc.: LINEAR |
| 2001 RQ95 | 0 | MBA-I | 18.2 | 680 m | multiple | 2001–2020 | 17 Nov 2020 | 92 | Disc.: LINEAR |
| 2001 RR95 | 1 | MBA-I | 17.6 | 900 m | multiple | 2001–2020 | 24 Jan 2020 | 64 | Disc.: LINEAR |
| 2001 RF96 | 0 | MBA-M | 17.36 | 1.4 km | multiple | 2001–2021 | 18 May 2021 | 105 | Disc.: Spacewatch |
| 2001 RU96 | 0 | MBA-O | 15.9 | 3.7 km | multiple | 2001–2021 | 23 Jan 2021 | 123 | Disc.: Spacewatch; Alt.: 2010 KF149 |
| 2001 RX96 | 0 | MBA-I | 19.0 | 470 m | multiple | 2001–2020 | 23 Sep 2020 | 62 | Disc.: Spacewatch; added on 17 January 2021 |
| 2001 RE97 | 0 | MBA-I | 18.8 | 520 m | multiple | 2001–2019 | 03 Oct 2019 | 145 | Disc.: Spacewatch |
| 2001 RO97 | 3 | MBA-O | 17.91 | 1.5 km | multiple | 2001–2018 | 06 Oct 2018 | 21 | Disc.: Spacewatch; added on 24 December 2021 |
| 2001 RB98 | – | MBA-M | 19.5 | 530 m | single | 13 days | 25 Sep 2001 | 9 | Disc.: Spacewatch |
| 2001 RN98 | 4 | MBA-M | 18.8 | 730 m | multiple | 2001–2018 | 10 Nov 2018 | 21 | Disc.: Spacewatch; added on 21 August 2021 |
| 2001 RZ98 | 0 | MBA-I | 17.9 | 780 m | multiple | 2001–2021 | 17 Jan 2021 | 140 | Disc.: LINEAR; Alt.: 2005 UU321 |
| 2001 RA102 | 0 | MBA-M | 17.31 | 1.0 km | multiple | 2001–2021 | 08 Aug 2021 | 54 | Disc.: LINEAR |
| 2001 RH103 | 0 | MBA-M | 17.3 | 1.5 km | multiple | 2001–2019 | 04 Dec 2019 | 121 | Disc.: LINEAR; Alt.: 2010 UW31 |
| 2001 RD104 | 0 | MBA-I | 17.92 | 770 m | multiple | 2001–2021 | 08 May 2021 | 120 | Disc.: LINEAR; Alt.: 2008 RL48 |
| 2001 RB105 | 0 | MBA-O | 16.8 | 2.4 km | multiple | 2001–2019 | 08 Dec 2019 | 97 | Disc.: LINEAR |
| 2001 RF106 | 0 | MBA-I | 17.9 | 780 m | multiple | 2001–2020 | 20 Dec 2020 | 124 | Disc.: LINEAR; Alt.: 2016 SU26 |
| 2001 RX106 | 0 | MBA-I | 17.7 | 860 m | multiple | 1994–2019 | 24 Dec 2019 | 90 | Disc.: LINEAR |
| 2001 RQ107 | 2 | MBA-I | 18.9 | 490 m | multiple | 2001–2019 | 03 Dec 2019 | 36 | Disc.: LINEAR |
| 2001 RT107 | – | MBA-O | 15.7 | 4.0 km | single | 7 days | 19 Sep 2001 | 9 | Disc.: LINEAR |
| 2001 RE109 | 0 | MBA-I | 18.8 | 520 m | multiple | 2001–2019 | 29 Nov 2019 | 82 | Disc.: LINEAR |
| 2001 RF109 | 9 | MBA-M | 17.80 | 1.5 km | single | 7 days | 19 Sep 2001 | 11 | Disc.: LINEAR; added on 21 August 2021 |
| 2001 RZ109 | 3 | MBA-M | 17.8 | 1.2 km | multiple | 2001–2014 | 23 Nov 2014 | 52 | Disc.: LINEAR; Alt.: 2014 QY305 |
| 2001 RS112 | – | MBA-M | 17.8 | 820 m | single | 54 days | 11 Oct 2001 | 20 | Disc.: LINEAR |
| 2001 RT112 | 1 | MBA-I | 19.0 | 470 m | multiple | 2001–2020 | 02 Feb 2020 | 95 | Disc.: LINEAR |
| 2001 RZ112 | 0 | MBA-M | 18.0 | 1.4 km | multiple | 2001–2019 | 02 Jul 2019 | 53 | Disc.: LINEAR |
| 2001 RM114 | 1 | MCA | 18.7 | 540 m | multiple | 2001–2015 | 02 Dec 2015 | 57 | Disc.: LINEAR; Alt.: 2008 UK235 |
| 2001 RZ114 | 3 | MBA-I | 18.2 | 680 m | multiple | 2001–2019 | 24 Aug 2019 | 41 | Disc.: LINEAR |
| 2001 RB117 | 0 | MBA-I | 18.2 | 680 m | multiple | 2001–2020 | 20 Dec 2020 | 110 | Disc.: LINEAR; Alt.: 2005 VW63 |
| 2001 RJ117 | 0 | MBA-I | 18.6 | 570 m | multiple | 2001–2020 | 14 Nov 2020 | 74 | Disc.: LINEAR |
| 2001 RF118 | 0 | MBA-I | 18.2 | 680 m | multiple | 2001–2019 | 28 Nov 2019 | 82 | Disc.: LINEAR |
| 2001 RW119 | 0 | MBA-O | 16.1 | 3.4 km | multiple | 2001–2021 | 10 Jun 2021 | 238 | Disc.: LINEAR |
| 2001 RT121 | 0 | MBA-I | 18.1 | 710 m | multiple | 2001–2020 | 17 Nov 2020 | 110 | Disc.: LINEAR; Alt.: 2005 UE83 |
| 2001 RS122 | 2 | MBA-M | 17.8 | 820 m | multiple | 2001–2018 | 10 Dec 2018 | 60 | Disc.: LINEAR |
| 2001 RB127 | 0 | MBA-M | 17.13 | 1.6 km | multiple | 1997–2021 | 15 Apr 2021 | 175 | Disc.: LINEAR |
| 2001 RM128 | 0 | MBA-M | 17.25 | 1.1 km | multiple | 2001–2021 | 03 May 2021 | 99 | Disc.: LINEAR |
| 2001 RJ130 | 0 | MBA-I | 17.8 | 820 m | multiple | 2001–2020 | 11 Oct 2020 | 124 | Disc.: LINEAR; Alt.: 2009 WR194 |
| 2001 RS131 | 0 | MBA-M | 16.99 | 2.2 km | multiple | 2001–2022 | 27 Jan 2022 | 197 | Disc.: LINEAR |
| 2001 RN133 | 0 | MBA-M | 17.52 | 930 m | multiple | 2001–2021 | 01 May 2021 | 125 | Disc.: LINEAR |
| 2001 RL134 | 0 | MBA-M | 16.95 | 2.3 km | multiple | 2001–2021 | 01 Apr 2021 | 252 | Disc.: LINEAR |
| 2001 RN134 | 1 | MBA-M | 17.2 | 1.5 km | multiple | 2001–2014 | 12 Dec 2014 | 71 | Disc.: LINEAR |
| 2001 RJ137 | – | MBA-M | 18.7 | 540 m | single | 34 days | 16 Oct 2001 | 19 | Disc.: LINEAR |
| 2001 RO137 | 1 | MBA-O | 16.1 | 3.4 km | multiple | 2001–2021 | 07 Mar 2021 | 47 | Disc.: LINEAR; Alt.: 2018 PZ55 |
| 2001 RM138 | 0 | MCA | 19.39 | 390 m | multiple | 2001–2018 | 05 Nov 2018 | 39 | Disc.: LINEAR |
| 2001 RD140 | 0 | MBA-M | 18.16 | 690 m | multiple | 2001–2021 | 07 Jun 2021 | 72 | Disc.: LINEAR; Alt.: 2017 KX5 |
| 2001 RO140 | 2 | MBA-I | 18.6 | 570 m | multiple | 2001–2020 | 10 Dec 2020 | 56 | Disc.: LINEAR |
| 2001 RV143 | E | TNO | 6.8 | 150 km | single | 38 days | 20 Oct 2001 | 5 | Disc.: Kitt Peak Obs.; LoUTNOs, cubewano? |
| 2001 RW143 | 3 | TNO | 7.0 | 132 km | multiple | 2001–2013 | 06 Oct 2013 | 20 | Disc.: Kitt Peak Obs.; LoUTNOs, cubewano (cold) |
| 2001 RY143 | 2 | TNO | 7.0 | 137 km | multiple | 2001–2017 | 30 Aug 2017 | 50 | Disc.: Kitt Peak Obs.; LoUTNOs, cubewano? |
| 2001 RN150 | 0 | MBA-M | 17.2 | 2.0 km | multiple | 2001–2019 | 03 Sep 2019 | 118 | Disc.: LONEOS; Alt.: 2010 JQ124 |
| 2001 RT150 | 0 | MBA-I | 18.38 | 630 m | multiple | 2001–2022 | 25 Jan 2022 | 108 | Disc.: LONEOS |
| 2001 RY153 | 0 | MBA-I | 18.4 | 620 m | multiple | 2001–2020 | 16 Nov 2020 | 135 | Disc.: NEAT |
| 2001 RE155 | 0 | MBA-M | 17.10 | 1.1 km | multiple | 1997–2021 | 03 May 2021 | 131 | Disc.: LINEAR; Alt.: 2005 QL149 |
| 2001 RL155 | E | TNO | 7.8 | 95 km | single | 37 days | 19 Oct 2001 | 4 | Disc.: Kitt Peak Obs.; LoUTNOs, cubewano? |
| 2001 RF156 | 6 | MBA-M | 18.8 | 730 m | single | 28 days | 12 Oct 2001 | 16 | Disc.: NEAT |
| 2001 RG156 | 0 | MBA-I | 17.5 | 940 m | multiple | 2001–2021 | 17 Jan 2021 | 78 | Disc.: NEAT; Alt.: 2014 FS40 |
| 2001 RJ156 | 0 | MBA-O | 16.6 | 2.7 km | multiple | 2001–2020 | 13 May 2020 | 115 | Disc.: Kitt Peak Obs. |
| 2001 RL156 | 0 | MBA-M | 17.44 | 1.4 km | multiple | 2001–2021 | 14 Apr 2021 | 123 | Disc.: Spacewatch |
| 2001 RM156 | 0 | MBA-M | 17.6 | 1.7 km | multiple | 2001–2019 | 06 Sep 2019 | 53 | Disc.: Kitt Peak Obs. |
| 2001 RN156 | 0 | MBA-M | 17.7 | 860 m | multiple | 2001–2020 | 23 Jan 2020 | 54 | Disc.: Spacewatch |
| 2001 RO156 | 0 | HIL | 15.9 | 3.7 km | multiple | 2001–2017 | 16 Oct 2017 | 49 | Disc.: Kitt Peak Obs. |
| 2001 RQ156 | 0 | MBA-M | 17.3 | 1.9 km | multiple | 2001–2021 | 18 Jan 2021 | 80 | Disc.: Kitt Peak Obs. |
| 2001 RR156 | 1 | MBA-I | 18.3 | 650 m | multiple | 2001–2019 | 09 May 2019 | 46 | Disc.: Kitt Peak Obs. |
| 2001 RS156 | 0 | MBA-M | 17.99 | 1.1 km | multiple | 2001–2021 | 13 Apr 2021 | 90 | Disc.: Kitt Peak Obs. |
| 2001 RT156 | 0 | MBA-I | 17.7 | 860 m | multiple | 2001–2021 | 09 Jun 2021 | 81 | Disc.: Kitt Peak Obs. |
| 2001 RU156 | 0 | MBA-O | 16.93 | 2.3 km | multiple | 2001–2021 | 30 Jun 2021 | 98 | Disc.: Kitt Peak Obs.; Alt.: 2010 LE131 |
| 2001 RV156 | 0 | MBA-O | 16.42 | 2.9 km | multiple | 2001–2021 | 07 Jul 2021 | 162 | Disc.: Kitt Peak Obs. |
| 2001 RW156 | 0 | MBA-O | 17.01 | 2.2 km | multiple | 2001–2021 | 15 Apr 2021 | 66 | Disc.: Spacewatch |
| 2001 RX156 | 0 | MBA-O | 17.59 | 1.7 km | multiple | 2001–2021 | 30 Nov 2021 | 56 | Disc.: Kitt Peak Obs. |
| 2001 RY156 | 1 | MBA-M | 17.65 | 1.6 km | multiple | 2001–2021 | 06 Nov 2021 | 39 | Disc.: Kitt Peak Obs. |
| 2001 RZ156 | 0 | MBA-I | 19.04 | 460 m | multiple | 2001–2021 | 28 Sep 2021 | 60 | Disc.: Kitt Peak Obs. |
| 2001 RA157 | 0 | MBA-I | 18.1 | 710 m | multiple | 2001–2020 | 07 Dec 2020 | 45 | Disc.: Spacewatch |
| 2001 RB157 | 0 | MBA-M | 18.04 | 1.0 km | multiple | 2001–2021 | 03 May 2021 | 61 | Disc.: Kitt Peak Obs. |
| 2001 RC157 | 0 | MBA-O | 17.0 | 2.2 km | multiple | 2001–2018 | 13 Aug 2018 | 29 | Disc.: Kitt Peak Obs. |
| 2001 RD157 | 1 | MBA-I | 18.7 | 540 m | multiple | 2001–2021 | 16 Jan 2021 | 36 | Disc.: Kitt Peak Obs. |
| 2001 RF157 | 0 | MBA-I | 17.8 | 820 m | multiple | 2001–2019 | 24 Dec 2019 | 92 | Disc.: Spacewatch |
| 2001 RG157 | 0 | MBA-M | 17.5 | 1.8 km | multiple | 2001–2020 | 15 Dec 2020 | 61 | Disc.: Kitt Peak Obs. |
| 2001 RJ157 | 1 | MBA-M | 17.8 | 1.5 km | multiple | 2001–2019 | 24 Dec 2019 | 53 | Disc.: Kitt Peak Obs. |
| 2001 RK157 | 0 | MBA-I | 18.73 | 530 m | multiple | 2001–2020 | 28 Jan 2020 | 83 | Disc.: Spacewatch |
| 2001 RL157 | 1 | MBA-M | 17.9 | 780 m | multiple | 2001–2020 | 22 Mar 2020 | 65 | Disc.: Spacewatch |
| 2001 RN157 | 0 | MBA-M | 17.50 | 1.8 km | multiple | 2001–2022 | 26 Jan 2022 | 45 | Disc.: Kitt Peak Obs. |
| 2001 RO157 | 1 | MBA-I | 18.30 | 650 m | multiple | 2001–2022 | 07 Jan 2022 | 45 | Disc.: Spacewatch |
| 2001 RQ157 | 0 | MBA-O | 17.20 | 2.0 km | multiple | 2001–2021 | 10 Aug 2021 | 54 | Disc.: Kitt Peak Obs. |
| 2001 RU157 | 0 | MBA-O | 16.7 | 2.5 km | multiple | 1993–2020 | 15 May 2020 | 68 | Disc.: Spacewatch |
| 2001 RV157 | 1 | MBA-I | 19.0 | 470 m | multiple | 2001–2016 | 03 Nov 2016 | 30 | Disc.: Spacewatch |
| 2001 RW157 | 0 | MBA-I | 18.6 | 570 m | multiple | 2001–2020 | 23 Oct 2020 | 41 | Disc.: Kitt Peak Obs. |
| 2001 RX157 | 0 | MBA-M | 17.7 | 1.6 km | multiple | 2001–2020 | 23 Oct 2020 | 28 | Disc.: Spacewatch |
| 2001 RY157 | 0 | MBA-O | 16.4 | 2.9 km | multiple | 1995–2021 | 09 Jan 2021 | 67 | Disc.: Spacewatch |
| 2001 RZ157 | 0 | MBA-I | 18.7 | 540 m | multiple | 2001–2016 | 26 Nov 2016 | 22 | Disc.: Kitt Peak Obs. |
| 2001 RA158 | 0 | MBA-M | 17.44 | 1.4 km | multiple | 2001–2021 | 12 May 2021 | 101 | Disc.: Kitt Peak Obs. |
| 2001 RB158 | 0 | MBA-O | 16.8 | 2.4 km | multiple | 2001–2020 | 16 Mar 2020 | 62 | Disc.: Kitt Peak Obs. |
| 2001 RC158 | 0 | MBA-M | 17.7 | 860 m | multiple | 2001–2020 | 13 May 2020 | 43 | Disc.: Kitt Peak Obs. |
| 2001 RD158 | 0 | MBA-I | 18.89 | 500 m | multiple | 2001–2021 | 30 Nov 2021 | 89 | Disc.: Kitt Peak Obs.; Alt.: 2013 HL100 |
| 2001 RE158 | 2 | MBA-I | 18.7 | 540 m | multiple | 2001–2018 | 03 Jun 2018 | 22 | Disc.: Kitt Peak Obs. |
| 2001 RF158 | 0 | MBA-I | 18.5 | 590 m | multiple | 2001–2020 | 14 Sep 2020 | 46 | Disc.: Kitt Peak Obs.; added on 19 October 2020 |
| 2001 RG158 | 0 | MBA-M | 18.6 | 570 m | multiple | 2001–2018 | 17 Nov 2018 | 33 | Disc.: Spacewatch; added on 19 October 2020 |
| 2001 RH158 | 2 | MBA-M | 18.5 | 1.1 km | multiple | 2001–2019 | 30 Jun 2019 | 23 | Disc.: Spacewatch; added on 19 October 2020 |
| 2001 RJ158 | 1 | MBA-I | 18.5 | 590 m | multiple | 2001–2020 | 14 Sep 2020 | 32 | Disc.: Kitt Peak Obs.; added on 19 October 2020 |
| 2001 RK158 | 0 | MBA-O | 16.89 | 2.3 km | multiple | 2001–2021 | 15 Apr 2021 | 65 | Disc.: Spacewatch; added on 17 January 2021 |
| 2001 RL158 | 0 | MBA-O | 16.9 | 2.3 km | multiple | 1999–2021 | 15 Mar 2021 | 43 | Disc.: Kitt Peak Obs.; added on 11 May 2021 |
| 2001 RN158 | 0 | MBA-O | 16.95 | 2.3 km | multiple | 2001–2021 | 21 Apr 2021 | 28 | Disc.: Spacewatch; added on 17 June 2021 |
| 2001 RQ158 | 4 | MBA-I | 18.8 | 520 m | multiple | 2001–2012 | 20 Oct 2012 | 16 | Disc.: Kitt Peak Obs.; added on 5 November 2021 |

References

Lists of unnumbered minor planets
6063609
https://en.wikipedia.org/wiki/Fachhochschule%20Wedel%20University%20of%20Applied%20Sciences
Fachhochschule Wedel University of Applied Sciences
The Fachhochschule Wedel University of Applied Sciences gGmbH is one of the few private but non-profit universities of applied sciences in Germany. It is state-recognized and offers twelve bachelor's and eight master's degree programs in the fields of computer science, technology and economics. It is located on the western outskirts of Hamburg in Schleswig-Holstein and is financed through tuition fees, a state grant and third-party funding. The FH Wedel University of Applied Sciences is a family business: it has been run by three generations of the same family since its founding in 1969 and is now headed by the founder's grandson, Eike Harms (as of November 2018). There are currently around 1,350 students enrolled at the university.

History
The FH Wedel University of Applied Sciences has been a family business for three generations: it was founded in 1969 by Helmut Harms, whose son Dirk Harms took over the management of the university in 1977. In 2010, Eike Harms, the founder's grandson, became university president and is still in charge of the university today (as of November 2020). The history of the university goes back to 1969. At that time, the Physics and Technical School (PTL) Wedel became the FH Wedel University of Applied Sciences and laid the foundation for the education of students at university level. The educational establishment was retained and still functions as a partner institution today. The first course at the FH Wedel University of Applied Sciences was Computer Engineering, followed in 1979 by the course in Business Informatics. Over the years, not only have new buildings been built, a non-profit company founded as the private sponsor, and a development association established, but study-abroad and dual-study programs have also been introduced and a number of new courses launched:

1991: Degree in Industrial Engineering
1997: Diploma course in Media Informatics
2000: Master's degree in Computer Science
2003: Bachelor's and master's degree in Business Administration
2011: Bachelor's and master's degree in E-Commerce
2014: Bachelor's degree in Computer Games Technology and master's degree in IT Security
2015: Bachelor's and master's degree in IT Engineering
2016: Bachelor courses in IT Management, Consulting & Auditing and Smart Technology

Courses
Degree programs leading to a Bachelor of Science (B.Sc.):
Business Administration
Computer Games Technology
Data Science & Artificial Intelligence
E-Commerce
Computer Science
IT Engineering
IT Management, Consulting & Auditing
Media Computer Science
Smart Technology
Computer Engineering
Business Informatics
Industrial Engineering

Degree programs leading to a Master of Science (M.Sc.):
Business Administration
Business Informatics / IT Management
Data Science & Artificial Intelligence
E-Commerce
Computer Science
IT Engineering
IT Security
Industrial Engineering

All courses can also be studied in the dual study model. As a university of applied sciences, teaching and research at the FH Wedel University of Applied Sciences are more application-oriented than at a traditional university. Students put their knowledge into practice through exercises, projects or activities as working students. Contacts with industry are close, and there are numerous partnerships with companies where students can complete internships, write theses or find employment. In addition, company representatives are involved in teaching through lectures and projects.
Together with several companies, the FH Wedel University of Applied Sciences offers scholarships to particularly good and committed students. It cooperates here with the Grohe-Treuhandstiftung, OTTO, NovaTec, Hapag-Lloyd, BIT-SERV and msg nexinsure ag.

Partner universities
The university has offered a semester abroad since 1996. Initially it relied on cooperation with universities overseas; later, with the Erasmus program, other European countries were included. The university now cooperates with around 30 partner universities worldwide.

External links
Homepage

Wedel Universities and colleges in Schleswig-Holstein Educational institutions established in 1969 1969 establishments in West Germany
7852781
https://en.wikipedia.org/wiki/Anglican%20Diocese%20of%20Ballarat
Anglican Diocese of Ballarat
The Diocese of Ballarat is a diocese of the Anglican Church of Australia, which was created out of the Diocese of Melbourne in 1875. It is situated in the Ballarat region of the state of Victoria, Australia and covers the south-west region of the state. The diocesan cathedral is the Cathedral of Christ the King in Ballarat. Garry Weatherill, formerly the Bishop of Willochra between 2000 and 2011, was installed as the 10th Bishop of Ballarat on 5 November 2011.

History
The diocese was created in 1875, out of the Diocese of Melbourne. The inaugural bishop was Samuel Thornton. Ballarat is one of five dioceses of the Anglican Church of Australia in the ecclesiastical Province of Victoria.

Cathedral
The Cathedral of Christ the King in Ballarat is the cathedral church of the diocese (website: https://www.ballaratcathedral.org.au/).

The date of the first Anglican service in Ballarat is problematic. Undoubtedly it occurred soon after the discovery of gold late in August 1851. A history of St Paul's Bakery Hill contends that it was on 12 October 1851, when an open-air service conducted by the Revd J. Cheyne from Burnbank was held in a tent. It was a musical service, with hymns accompanied by violin and flute, and was so well received that an evening service was held by lamplight. Ballarat historian W. B. Withers is vague on early details, saying the Anglicans followed on the heels of the Wesleyans. Spielvogel records that it was the Revd William Sim and the Revd A. Morrison who held a service in a tent in November 1851 not far from the present site of St Paul's Bakery Hill. Yet another Anglican source favours the Revd Charles Perks of St Peter's, Eastern Hill, who visited and officiated at an outdoor service under the escarpment later overlooked by Christ Church.

Following the survey of the township of Ballarat, a large block of land in Lydiard Street South was set aside for church purposes. There was little building activity until 1854, when the Revd James R. H. Thackeray was appointed in July. In September 1854 collections were taken up for a fund to build a "church and parsonage", with services in August being held in the court house. Thackeray's first church was in a tent, and in October 1854 a school began in the tent, with William Barton as schoolmaster. There were as many as 80 children crammed into the tent and, in March 1855, Thackeray allowed the school to be transferred to Bakery Hill. Early in October 1854, Henry Bowyer Lane, government architect at the Ballarat Camp, called tenders for the "Erection of a Church in the Township of Ballarat", according to plans to be seen at his office. Building began in 1854 after £250 had been subscribed, and a ceremony was held to lay the foundation stone. But building proceeded slowly, and Thackeray was dismissed in August 1855 when he could not account for the money subscribed for the building fund and even tried to sell off the Creswick church. (Spooner, p. 18; Moore, p. 127; Star, 22 September 1855)

The Revd John Potter came to Ballarat from Ballan and, with the support of James Stewart and Adam Augustus Lynn, was able to open a temporary chapel building in Armstrong Street on 16 September 1855. A report in the Ballarat Times on 15 September 1856 noted that "a school house and place of worship will be provided as soon as possible". According to Spielvogel, this was ready by October 1855. An article in The Ballarat Star on 16 July 1856 describes a meeting to form a Church of England Association in Ballarat.
John Potter was in charge and noted that he had been in Ballarat "for twelve months". He also noted that "a small attempt only has been made to procure a building suitable for Divine Worship". The basalt church, designed by Lane in 1854, was finally completed in 1857 by Backhouse and Reynolds of Geelong as contractors, for a price of £2,000. It was dedicated on 13 September 1857. The same contractors and architects were responsible for the nearby Lydiard Street Wesleyan Church, built of stone in 1858 for £5,000. Christ Church was built of basalt quarried at Bond Street, Ballarat, and measured 76 feet by 36 feet, with cedar furnishings supplied by a Mr Helpin of Geelong. From the 1850s the choir was important, with a paid choir at least from 1859. The women of the parish took on the task of sewing the table linens and soft furnishings for the church, led by Mrs Loftus Lynn, wife of Ballarat's first solicitor. Adam Loftus Lynn was a leading member of the early church, and he imported a house which was constructed on the site of the Ballarat Club. They had a total of 11 children, who also contributed much to the social activity of the church. Lynn's Chambers in Lydiard Street, opposite the church, commemorates the Lynn family.

An article on architecture in the Ballarat Star in 1862 mentions that "there has been some talk of building a tower and otherwise enlarging Christ Church, erected some years ago in the early English style of architecture, but the efforts of the congregation seem to have ceased for the present". The congregation asked the Melbourne architect Leonard Terry, who had designed the banks in Lydiard Street, to undertake additions to the existing building in 1867. In 1868 the sanctuary and transepts were built by Edward James at a cost of £1,792, and in September that year the western half of the church (now cathedral) hall was opened as a school hall. A great celebration for the church came on 11 August 1875, when Samuel Thornton was installed as the first bishop of the new Diocese of Ballarat.

Henry Caselli, who arrived in Ballarat in 1855 and became a member of the Christ Church community, was at some point appointed diocesan architect. He designed many churches around Ballarat and always gave half his services gratuitously, as well as being a generous donor to building funds. His 20-year-old daughter Georgiana was buried from Christ Church in 1866. Early in 1882, Caselli and Figgis called tenders for the bishop's registry, council chambers and other facilities for Christ Church. At the time of Caselli's death in 1885, he held a prominent position as a leader of Christ Church, and was accorded a farewell by the bishop on 5 March 1885.

From the arrival of Bishop Thornton, there were dreams of building a fine cathedral which would have a frontage onto Dana Street. With this in mind the old vicarage, which dated from 1854 to 1855, was demolished and the present deanery built. A public meeting was held at the Alfred Hall on 10 September 1886 to launch the idea of the cathedral. The diocese conducted a design competition and, of the 24 entries received from leading Victorian architects, the design of Tappin, Gilbert and Dennehy was accepted in early 1887, and a fund was opened for the building, which soon amounted to £1,710. The original plan had been to build the cathedral over the existing church, but there were so many difficulties with the plan that it was decided to begin on a new site on the corner of Dana and Lydiard Streets. The new deanery was completed on 26 April 1888.
There was a ceremonial laying of the foundation stone of the cathedral on 30 November 1888, when the Governor, Sir Henry Loch, came to do the honours. John Manifold laid £500 on the stone as a donation to the building fund. But the Depression of the 1890s made money scarce, and the failure of a number of banks saw significant losses suffered by members of the diocese. Many parishioners felt that Christ Church was perfectly adequate as a cathedral, especially when the architect William Tappin estimated that the cathedral would cost £50,000 to build, exclusive of the tower and spire. Work finally began in 1903, when a tender of £3,624 was accepted for completion of the diocesan offices. New plans were submitted by the architectural firm of Smart, Tappin and Peebles, with work resuming between 1903 and 1908 before it finally stalled. Bluestone for the chapter house was quarried at Redan, and Waurn Ponds stone was used for the doors, arches and traceries. When funds ran out in 1904, and no support was forthcoming from England, the grand cathedral plan was abandoned. The portion of the plan that was erected was named the Manifold Chapter House in 1908, and the first Synod was held in the building in November 1908. It was dedicated on 10 November 1908, having cost £12,000.

William Thomson Manifold (1861–1922), a pastoralist from Purrumbete near Camperdown, was a generous benefactor to the Anglican diocese. He was educated at Geelong Church of England Grammar School and Jesus College, Cambridge. He supported Anglican and local institutions, making considerable donations, often with his brothers. The Church of England cathedral and chapter house in Ballarat, Queen's College and Ballarat Grammar School benefited from their generosity. Manifold was vicar's warden of St Paul's, Camperdown, a member of the synod and of the bishop's council.

As time passed, improvements were made to the pro-cathedral, each improvement making the dream of the great cathedral recede even further. In 1923 Bishop Maxwell Maxwell-Gumbleton decided that Christ Church should be remodelled to conform, as far as its design would allow, to a cathedral church. The chancel was extended into the nave, and its floor raised. Screens were placed across the transept arches and a throne of blackwood was placed on the south side of the chancel. In 1929 a new organ was installed. At the end of the 1930s the church was further improved by the erection of a reredos and panelling to the east. A gift from the Friends of Canterbury Cathedral came in 1935, when a stone replica of an 8th-century cross from the cathedral was presented to Ballarat and set into the arch (left-hand side) near the altar. The Friends of Canterbury Cathedral made similar gifts to all the other cathedrals in Australia. In 1972 the front porch was added, and the baptistery moved to the west end of the nave. A major development came in 1989 when the diocesan centre was added, the architect being John Vernon. In 1993 the choir and organ gallery was added. In 2004 a new pipe organ was installed.

In the 1980s, the diocese sold the old cathedral site (the chapter house) and it was used as a disco, called Hot Gossip, for about a decade from June 1988. It was later called The Chapel Nightclub. In 2007, the old chapter house became a residential apartment.

The history of the Ballarat diocese was written by John Spooner. Entitled The Golden See, it was published in 1989 and covers the period from the beginning of the colony of Victoria in 1834 until 1975.
Deans of Ballarat

Bishops of Ballarat

Assistant bishops
Graham Walden was an assistant bishop of the diocese in 1988.

Archdeaconries
Archdeaconries and archdeacons of the diocese have included:

Archdeacons of the Loddon
3 March 1885 – 1894: John Allnutt, incumbent of St Stephen's, Portland

References

External links
Diocese of Ballarat website
Anglican Cathedral

Anglican bishops of Ballarat Ballarat Ballarat 1875 establishments in Australia Gothic Revival church buildings in Australia
2819454
https://en.wikipedia.org/wiki/Dreambox
Dreambox
Dreambox is a series of Linux-powered DVB satellite, terrestrial and cable digital television receivers (set-top boxes), produced by German multimedia vendor Dream Multimedia.

History and description
The Linux-based production software used by Dreambox was originally developed for the DBox2 by the Tuxbox project. The DBox2 was a proprietary design distributed by KirchMedia for their pay TV services. The bankruptcy of KirchMedia flooded the market with unsold boxes available for Linux enthusiasts. The Dreambox shares the basic design of the DBox2, including the Ethernet port and the PowerPC processor. Its firmware is officially user-upgradable, since it is a Linux-based computer, as opposed to the third-party "patching" of other receivers. All units support Dream's own DreamCrypt conditional access (CA) system, with software-emulated CA Modules (CAMs) available for many alternate CA systems.

The built-in Ethernet interface allows networked computers to access the recordings on the internal hard disks of some Dreambox models. It also enables the receiver to store digital copies of DVB MPEG transport streams on distributed file systems, or to broadcast the streams as IPTV to VideoLAN and XBMC Media Center clients (see the streaming sketch below). Unlike many PC-based PVR systems that use free-to-air DVB receiver cards, the built-in conditional access allows receiving and storing encrypted content.
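Pulling such a stream from a client is simple in practice. A minimal sketch, assuming a box running one of the usual Enigma web-interface plugins, which conventionally expose live streams over HTTP on port 8001; the address and service reference below are invented placeholders, not values documented in this article:

```python
# Hand a Dreambox's live MPEG transport stream to a VLC client on the
# LAN, as described above. Host, port and service reference are
# illustrative assumptions, not specifics from this article.
import subprocess

BOX = "192.168.1.50"                            # hypothetical box address
SERVICE_REF = "1:0:1:445D:453:1:C00000:0:0:0:"  # example DVB service reference

# Build the stream URL and hand it to VLC.
stream_url = f"http://{BOX}:8001/{SERVICE_REF}"
subprocess.run(["vlc", stream_url])             # any player that accepts a URL works
```

Nothing Dreambox-specific runs on the client side; the same URL can be opened directly in VideoLAN's graphical interface.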
In 2007, Dream Multimedia also introduced a non-Linux-based Dreambox receiver, the DM100, its sole non-Linux model to date, still featuring an Ethernet port. It has a USB-B port for service instead of the RS232 or mini-USB connectors found on other models. Unlike all other Dreamboxes, it features an STMicroelectronics CPU instead of a PowerPC or MIPS one.

Dreambox models
There are a number of different models of Dreambox available. The model numbers are suffixed with -S for satellite, -T for terrestrial and -C for cable.

Table footnote: **HDMI via a DVI-to-HDMI adapter. Remark: the new 7020hd v2 has flash memory with a different structure, which is why it requires a different Linux image. All new v2 models have a new TPM module inside; the rest of the hardware is identical to the older version.

DM 7000
The DM 7000 is based around the IBM STB04500 controller, featuring a PowerPC processor subsystem and hardware MPEG decoding. It has 64 MiB of RAM, 8 MiB of NOR flash memory (directly executable), a Common Interface slot, a dual smart card reader, a CompactFlash card reader, a USB 1.1 port, and an IDE (also known as PATA) interface for attaching an internal 3.5 in hard disk drive to convert the unit into a digital video recorder. It accepts only 230 V AC power. Because the boot loader resides in flash memory, this model may require the use of a JTAG adapter if a bad flash destroys the boot loader; however, bad flashes occur only under rare scenarios, so a JTAG is almost never needed.

DM 5600, DM 5620
There were a DM 5600 and a DM 5620 model, the only difference being that the DM 5620 included an Ethernet port. Otherwise, the DM 56X0 models were a cut-down version of the DM 7000 without an IDE interface. They did, however, include an RF modulator, allowing them to be used with older TVs that lack a SCART connector.

DM 500, DM 500+, DM500HD
The DM500 is the successor to the DM5620 and is the smallest and cheapest Dreambox. It is based around an IBM STBx25xx Digital Set-Top Box Integrated Controller, featuring notably a 252 MHz PowerPC processor subsystem, hardware MPEG-2 video and audio decoding, and smart card interfaces. The DM500 features 32 MB of RAM and 8 MB of NOR flash memory, of which 5 MB are used for read-only firmware (cramfs and squashfs filesystems), 256 kB by the boot loader and the rest by a writable jffs2 filesystem. It has the standard features of a free-to-air (FTA) satellite receiver, plus extensive Fast Ethernet networking connectivity and a single smart card reader. It does not feature a 7-segment LED display, normally found in other FTA decoders. It can also be used for digital satellite, cable and terrestrial broadcasts (DVB-S, DVB-C and DVB-T). The DM500+ model has 96 MB of RAM instead of 32, and 32 MB of NAND flash instead of 8 MB of NOR flash; this makes it similar to the DM600 PVR model. It is only available in DVB-S versions. The new DM500HD was announced in Cologne on May 26, 2009.

DM 7020
The DM 7020 is essentially an updated DM 7000 with 96 MiB of RAM, 32 MiB of NAND flash (disk-like) and an RF modulator. Changes were also made on the software side, utilizing OpenEmbedded for the base Linux operating system. Because the flash memory of this model is not directly executable, the primary boot loader resides in ROM and can recover a corrupted secondary boot loader in flash by loading from the serial port. There are some Enigma2 (beta) images already available for this model.

DM 7025, DM 7025+
The DM 7025 is similar to the DM7020 but with the ability to add a second "snap-in" tuner that makes it possible to watch one program while recording another. It is possible to change the tuner modules, selecting between any two of the satellite, terrestrial or cable versions. Internally, it features a Xilleon 226 system-on-a-chip from ATI, integrating a 300 MHz MIPS CPU core instead of the traditional PowerPC found in other models, and has 128 MiB of RAM. It uses Enigma2, a complete rewrite of the original Enigma GUI that is still going through growing pains as features that were present in Enigma are added; Enigma2 is Python-based instead of C code. The DM 7025 can also decode MPEG-2 HD; unfortunately, it must downconvert this to 480i or 576i for display. The DM 7025+ model features an organic light-emitting diode (OLED) display instead of an LCD one, an eject button on the Common Interface slot and an improved power supply.

DM 600 PVR
The DM 600 PVR is the same small size as the DM 500 but includes an IDE interface allowing an internal 2.5 in laptop-type hard disk drive to be added; the box will only recognise 5600 rpm drives. On the outside it adds an S-Video output connector and an analog modem port. It is built around the same IBM STBx25xx integrated controller, but features 32 MiB of flash and 96 MiB of RAM, of which 64 MiB are user-accessible. It is possible to change the tuner module, selecting between satellite, terrestrial and cable versions. There is still just one SCART connector and no 7-segment LED display, just 2 status LEDs. The provided remote control unit is the same one supplied with the 7000, 7020 and 7025 and allows one to control the TV set as well.

DM 800HD PVR / DM 800 HD se
This is essentially a high-definition version of the DM 600 PVR, featuring a single pluggable DVB tuner (S/S2, C or T), a 300 MHz MIPS processor, 64 MiB of flash memory, 256 MiB of RAM and room for an internal SATA 2.5 in disk. It also features a DVI-to-HDMI cable, two USB 2.0 ports, one eSATA port and one 10/100 Mbit/s Ethernet interface. It has an OLED display. The DM 800HD se was introduced in late 2010.
The main differences of the DM 800HD se compared to the DM 800HD are a 400 MHz MIPS processor, an HDMI connector and a color OLED display. Another difference is the improved system chip in the DM800se, providing native DivX support among other improvements.

DM 8000 HD PVR
This is the high-definition PVR. Like the DM 7025, it supports pluggable tuner modules. In addition to high definition, it can be upgraded with a slot-in DVD drive, and it has USB 2.0. Physically the box has one DVI port, but the supplied DVI-to-HDMI cable provides HDMI video. Originally announced to become available at the beginning of 2007, its release date was pushed back; the product then began shipping on 12 December 2008. The planned features were revised as well: originally, this model was supposed to have 128 MiB of RAM (now 256), 32 MiB of flash (now 256 MiB) and a 300 MHz processor (now a 400 MHz Broadcom 7400). Other Linux-based HD receivers became available in the meantime. In June 2012, Dream Multimedia announced the discontinuation of the DM 8000 HD PVR because several electronic components are no longer available. It was also announced that no direct successor will be developed, since Dream Multimedia is already working on "Project Goliath".

"Project Goliath"
"Project Goliath", announced in June 2012, is supposed to be a possible successor of several Dreambox models. According to Dream Multimedia, it is a "totally new hardware and software product, combining all the features of the successful Dreambox series, and indeed will go beyond that".

Alternative firmware and plug-ins
The factory-installed distribution on the Dreambox is mostly available under the GNU General Public License (GPL) and uses standard Linux APIs, including the Linux DVB API and Linux Infrared Remote Control (LIRC). Several models (7025, 800 and 8000) use GStreamer as a multimedia framework. This configuration encourages enthusiasts to modify its functions, particularly in the form of so-called images.

Plugins
There are also many third-party add-ons and plugins available that extend the functionality of the Dreambox. Some plugins are model specific, while others run on all boxes. Available plugins include jukebox and SHOUTcast playback, external XMLTV guides, a web browser, and a VLC media player interface for on-demand streaming media. Games such as Pac-Man and Tetris are also abundant. In addition, unofficial third-party conditional access software modules (CAMs or emulators) that emulate the CA systems developed by NDS (VideoGuard), Irdeto, Conax, Nagravision, Viaccess and other proprietary vendors are widely circulated on the Internet. Some Dreambox owners use these softcams in conjunction with card-sharing software to access pay TV services without a subscription card inserted in every connected box. This practice may be illegal in some jurisdictions; third-party software for this purpose is neither officially endorsed nor supported by Dream Multimedia and voids the official warranty.

Clones
Clones of the DM500-S are widespread. As a result, Dream Multimedia introduced the DM500+, with changes to try to prevent further counterfeiting. Clones also exist of the DM500, DM800, DM800se and DM800se V2. Built around the same commodity IBM SoC chip, and hence having identical or slightly superior features, they are also sold without the Dreambox brand name (e.g., the Envision 500S, with 48 megabytes of RAM instead of 32, also available in a 500C cable version, the Eagle box, the Linbox 5558, or the Sunray DM800se).
They have a retail price approaching that of non-Linux receivers, generally a fraction of the Dreambox 500 price. Since they contain a copy of the copyrighted original DM500 bootloader program, the legality of these devices is questionable. In April 2008, Dream Multimedia allegedly introduced a time bomb into their latest firmware to disable the boot loader on counterfeit models. An unofficial firmware group called Gemini, which used the latest flash drivers in its firmware, found that they caused flash corruption on clone DM500-S receivers. Other unofficial firmware groups would find boxes affected in the same way if they used the latest drivers, should another time bomb be introduced.

See also
DBox2
Slingbox
Vu+
Hauppauge MediaMVP – another connected device based on the IBM STB02500 chip
Unibox

References

External links
Dream Multimedia GmbH
Codeproject Open Source FTP Downloader for Dreambox
Linux DVB API
Open Vision image

Satellite television Set-top box Television technology Digital video recorders Linux-based devices
667167
https://en.wikipedia.org/wiki/Ent%20Air%20Force%20Base
Ent Air Force Base
Ent Air Force Base was a United States Air Force base located in the Knob Hill neighborhood of Colorado Springs, Colorado. A tent city, established in 1943 during construction of the base, was initially commanded by Major General Uzal Girard Ent (1900–1948), for whom the base is named. The base was opened in 1951. From 1957 to 1963, the base was the site of the North American Aerospace Defense Command (NORAD), which subsequently moved to the Cheyenne Mountain Air Force Station. The base became the Ent Annex to the Cheyenne Mountain facility in 1975 and was closed in 1976. The site later became the location of the United States Olympic Training Center, which was completed in July 1978.

Background
The first Air Defense Command was established on 26 February 1940 by the War Department. On 2 March 1940, it was put under the First Army Commander. It managed air defense within four geographic air districts. It was inactivated in mid-1944, when the threat of air attack seemed minimal. With the beginning of the Cold War, American defense experts and political leaders began planning and implementing a defensive air shield, which they believed was necessary to defend against a possible attack by long-range, manned Soviet bombers. The Air Defense Command was established on 21 March 1946, and the major command was set up at Mitchel Field (later Mitchel Air Force Base) in New York on 27 March 1946, under Lieutenant General George E. Stratemeyer. By the time the United States Air Force was created as a separate service in 1947, it was widely acknowledged that the Air Force would be the centerpiece of this defensive effort. The Air Force established the Continental Air Command over both the Air Defense Command and Tactical Air Command on 1 December 1948, at which time Gordon P. Saville (later Major General) took command. The Air Defense Command was inactivated as a major command on 1 July 1950.

The Air Defense Command was reconstituted by the United States Air Force on 1 January 1951 to protect United States airspace, with two geographically based organizations. The portion of the country east of the 103rd meridian was managed by the Eastern Air Defense Force (also First Air Force territory). The command for the Western Air Defense Force (also Second Air Force territory) was at Ent Air Force Base. The functions included an early warning system to identify and respond to impending air attacks, including fighter interception. Subordinate Air Force commands were given responsibility to protect the various regions of the United States.

Colorado Springs Tent Camp
The Colorado Springs Tent Camp was the headquarters for the Second Air Force beginning in early June 1943. It was moved from Fort George Wright in the state of Washington to a more central location within the western half of the United States, the Second Air Force territory. The tent city was used for soldiers who worked on the conversion of the National Methodist Sanatorium for military use and on the construction of additional buildings for the base. Beginning in 1943, the Second Air Force was commanded by Major General Uzal Girard Ent, who became the Commanding General after having been the Chief of Staff. Ent retired with a disability incurred in the line of duty, the result of injuries he sustained when a B-25 crashed on takeoff in October 1944. He died on 5 March 1948. Major General Robert B. Williams became the commanding officer of the Second Air Force in October 1944; he retired on 1 July 1946.
The facility became inactive when the 15th Air Force headquarters was assigned to March Air Force Base in November 1949. There were discussions about the city taking over the now unused property, but in November 1950 it was announced that the base was to become the headquarters for the Air Defense Command.

Air Force Base

Air Defense Command
On 1 January 1951, the Air Defense Command was reestablished at Mitchel Air Force Base, under Commanding General Ennis Whitehead (later lieutenant general). One week later the command was moved to Colorado Springs. Ent Air Force Base, named for Major General Uzal Girard Ent, opened on 8 January 1951. The Air Defense Command (ADC) inherited 21 fighter squadrons from Continental Air Command (ConAC) and 37 Air National Guard (ANG) fighter squadrons assigned an M-Day air defense mission. It was also assigned four Air Divisions (Defense). General Benjamin W. Chidlaw was the base commander beginning 29 July 1951 and commander of the Air Defense Command from 25 August 1951 until 31 May 1955. The Senate appropriated an additional $3 million for expansion of the base in September 1951. Peterson Air Force Base, which had become inactive in 1949 when the 15th Air Force moved to March Air Force Base, was reactivated when Ent Air Force Base opened. At the same time, the 4600th Air Base Group was activated to provide support for Ent. The funding was part of a military expansion initiative for Ent Air Force Base, Fort Carson, and Peterson Air Force Base, all in Colorado Springs, Colorado. Much of the construction at Ent was for additional residential facilities. The Air Defense Command began 24-hour Ground Observer Corps operations on 14 July 1952. Starting in September 1953, the base was the headquarters for the Army Anti-Aircraft Command.

Information about potentially hostile aircraft from radar sites around the country was forwarded to a regional clearinghouse, like Otis Air National Guard Base, and then to ADC headquarters at Ent Air Force Base. It was then plotted on the world's largest Plexiglas board. Enemy bombers' progress was tracked on the board using grease pencils. If there was a potential threat, interceptor aircraft were scrambled to the target. Because this process was cumbersome, it made a rapid response unattainable. In the 1950s, an automated command and control system, the Semi-Automatic Ground Environment (SAGE), based upon the Whirlwind II (AN/FSQ-7) computer, was implemented to process ground radar and other sources for an immediate view of potential threats. There was an operational plan for a SAGE implementation for Ent by 7 March 1955.

A modern concrete-block Combat Operations Center (COC) became operational at the base on 15 May 1954. On 1 September of that year, the Continental Air Defense Command (CONAD) was activated as a joint command at Ent AFB: Air Defense Command was the United States Air Force component; Army Antiaircraft Command was the Army component; and Naval Forces CONAD (NAVFORCONAD), established at Ent, was the Navy component. CONAD forces were committed to the contiguous radar coverage system, with augmentation forces from all services made available during emergency periods. The Colorado Springs Chamber of Commerce purchased 8.1 acres of land and donated it to Ent Air Force Base, making it a permanent installation on 31 July 1954. In September of that year, the base became the headquarters of Continental Air Defense Command.
More than $19 million was targeted in 1955 for further military expansion in the area, including Fort Carson, Ent Air Force Base, and the development of the Air Force Academy. On 15 January 1956, General Earle E. Partridge, CINCONAD, directed his staff to begin preliminary planning for a Combat Operations Center to be located underground. Partridge believed his present above-ground center, located on Ent Air Force Base, was too small to manage the growing air defense system and was highly vulnerable to sabotage or attack. Partridge, who had been made commander in 1955, was the driving force behind the creation of the Cheyenne Mountain Air Force Station; he requested an underground facility in December 1956. Continental Air Defense Command (CONAD) and the Air Defense Command (ADC) formally separated in 1956. Partridge was relieved of his command of CONAD, and Lt. General Joseph H. Atkinson assumed control of ADC. The Interceptor magazine was produced at Ent Air Force Base by the Air Defense Command by 1959, and then by the Aerospace Defense Command into the mid-1970s.

NORAD
The North American Air Defense Command (NORAD) was established and activated at the base on 12 September 1957. This command is an international organization, taking operational control of Canadian Air Defence Command air defense units and United States Air Defense Command air defense units. The first NORAD Agreement was drafted. Partridge, as Commander-in-Chief, CONAD, also became commander of NORAD. Royal Canadian Air Force Air Marshal Roy Slemon became deputy commander, NORAD. The official agreement between the two countries was signed on 12 May 1958. In 1958, the base put $36,904,558 into the Colorado Springs economy in the form of pay to 3,639 military and 1,222 civilian personnel and dependents' allowances, more than $7 million above the previous year. These figures exclude individuals working at Ent for 15 U.S. firms, such as Boeing and Lockheed Aircraft. Due to improvements in radar technology, the Ground Observer Corps was inactivated on 31 July 1959.

The NORAD commander issued instructions on 21 April 1961 concerning the operational philosophy of the 425L command and control computer system, including its use by NORAD and component personnel, the entry of enough data to enable the commander to evaluate the indications presented, the requirement for human judgment in determining the validity of individual system indications, and the identification of data by source system.

Cheyenne Mountain transition
Excavation began for the NORAD Combat Operations Center (COC) in Cheyenne Mountain on 18 May 1961. The official ground-breaking ceremony was held on 16 June 1961 at the construction site of the new NORAD Combat Operations Center. Generals Lee (ADC) and Laurence S. Kuter (NORAD) simultaneously set off symbolic dynamite charges. The estimated cost of the combat operations center construction and equipment was $66 million.

Ent Annex
The 9th Aerospace Defense Division was activated at Ent Air Force Base on 15 July 1961. It was the first large military space organization in the western world. The first Aerospace Surveillance and Control Squadron was assigned to the 9th ADD. The Air Defense Command's SPACETRACK Center and NORAD's Space Detection and Tracking System (SPADATS) Center merged to form the Space Defense Center. It was moved from Ent AFB to the newly completed Cheyenne Mountain Combat Operations Center and was activated on 3 September 1965.
A Major General was assigned as the first Director of the Combat Operations Center, as recommended by the Cheyenne Mountain Complex Task Force Study Report, on 1 October 1965. This established a separate Battle Staff organization. The Director was responsible directly to CINCNORAD for tactical matters and to the Joint Chiefs of Staff for all others. CINCNORAD transferred Combat Operations Center operations from Ent Air Force Base to Cheyenne Mountain and declared the 425L command and control system fully operational on 20 April 1966. On 20 May 1966, the NORAD Attack Warning System became operational. The Space Defense Center and the Combat Operations Center achieved Full Operational Capability on 6 February 1967. The total cost was $142.4 million.

The Fourteenth Aerospace Force was activated on 1 July 1968 at Ent AFB, Colorado. It inherited the staff and mission of the 9th Aerospace Defense Division, which was discontinued. The First Aerospace Control Squadron was then reassigned to the 14th Aerospace Force. The Air Defense Command was re-designated as the Aerospace Defense Command on 15 January 1968. The Continental Air Defense Command and Aerospace Defense Command headquarters began consolidation and streamlining on 1 July 1973. On 4 February 1974, the Department of Defense announced plans for cutbacks in air defense forces, reflecting increasing emphasis on ballistic missile attack warning and decreasing emphasis on bomber defense. The Continental Air Defense Command was disestablished on 30 June 1974.

Inactivation
The U.S. Army Air Defense Command, a component command of the North American Air Defense Command and Continental Air Defense Command, was inactivated at Ent AFB, Colorado on 4 January 1975. The 14th Aerospace Force, Ent AFB, Colorado was inactivated on 1 October 1976, and its personnel and units (missile and space surveillance) were reassigned to HQ ADCOM, ADCOM divisions and the Alaskan ADCOM Region. Ent Air Force Base was declared excess. In December 1976, personnel were moved to Peterson Air Force Base and the Chidlaw Building, near downtown Colorado Springs. The Aerospace Defense Command was inactivated on 31 March 1980.

Units

See also
Ent Federal Credit Union, which opened on the base in 1957.

Former buildings
Federal Building (Colorado Springs)
Chidlaw Building

Notes

References

External links
Histories for HQ Aerospace Defense Command, Ent AFB, Colorado

Installations of the United States Air Force in Colorado 1943 establishments in Colorado 1976 disestablishments in Colorado Cheyenne Mountain Complex Aerospace Defense Command military installations Military installations in Colorado Military installations closed in 1966 North American Aerospace Defense Command History of Colorado Springs, Colorado Military airbases established in 1951
318439
https://en.wikipedia.org/wiki/Text%20mining
Text mining
Text mining, also referred to as text data mining and roughly equivalent to text analytics, is the process of deriving high-quality information from text. It involves "the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources." Written resources may include websites, books, emails, reviews, and articles. High-quality information is typically obtained by devising patterns and trends by means such as statistical pattern learning. According to Hotho et al. (2005), three different perspectives on text mining can be distinguished: information extraction, data mining, and the KDD (Knowledge Discovery in Databases) process. Text mining usually involves the process of structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluation and interpretation of the output. 'High quality' in text mining usually refers to some combination of relevance, novelty, and interest. Typical text mining tasks include text categorization, text clustering, concept/entity extraction, production of granular taxonomies, sentiment analysis, document summarization, and entity relation modeling (i.e., learning relations between named entities).

Text analysis involves information retrieval, lexical analysis to study word frequency distributions, pattern recognition, tagging/annotation, information extraction, data mining techniques including link and association analysis, visualization, and predictive analytics. The overarching goal is, essentially, to turn text into data for analysis, via application of natural language processing (NLP), different types of algorithms and analytical methods. An important phase of this process is the interpretation of the gathered information. A typical application is to scan a set of documents written in a natural language and either model the document set for predictive classification purposes or populate a database or search index with the information extracted. The document is the basic element when starting with text mining; here, a document is defined as a unit of textual data, which normally exists in many types of collections.

Text analytics
The term text analytics describes a set of linguistic, statistical, and machine learning techniques that model and structure the information content of textual sources for business intelligence, exploratory data analysis, research, or investigation. The term is roughly synonymous with text mining; indeed, Ronen Feldman modified a 2000 description of "text mining" in 2004 to describe "text analytics". The latter term is now used more frequently in business settings, while "text mining" is used in some of the earliest application areas, dating to the 1980s, notably life-sciences research and government intelligence.

The term text analytics also describes the application of text analytics to respond to business problems, whether independently or in conjunction with query and analysis of fielded, numerical data. It is a truism that 80 percent of business-relevant information originates in unstructured form, primarily text. These techniques and processes discover and present knowledge – facts, business rules, and relationships – that is otherwise locked in textual form, impenetrable to automated processing.
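The "turn text into data" goal above can be made concrete in a few lines. A minimal sketch in Python, using the open-source scikit-learn library (a choice of convenience; the article names other toolkits later); the four-document corpus is invented:

```python
# Minimal text-mining pipeline: structure raw text as TF-IDF vectors,
# then derive a pattern (clusters of similar documents) from them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "The teapot model was rendered with smooth shading.",   # invented examples
    "Graphics researchers render models with new lighting.",
    "The central bank raised interest rates again.",
    "Markets fell after the interest rate decision.",
]

# Structuring step: each document becomes a weighted term vector.
vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)

# Pattern-deriving step: group similar documents (document clustering).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
for doc, label in zip(docs, labels):
    print(label, doc)
```

TF-IDF weighting is the structuring step; the clustering that follows is one instance of the pattern-deriving tasks (document clustering) described in the next section.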
Text analysis processes
Subtasks—components of a larger text-analytics effort—typically include:

Dimensionality reduction is an important technique for pre-processing data; it is used to identify the root forms of words and to reduce the size of the text data.
Information retrieval or identification of a corpus is a preparatory step: collecting or identifying a set of textual materials, on the Web or held in a file system, database, or content corpus manager, for analysis.
Although some text analytics systems apply exclusively advanced statistical methods, many others apply more extensive natural language processing, such as part-of-speech tagging, syntactic parsing, and other types of linguistic analysis.
Named entity recognition is the use of gazetteers or statistical techniques to identify named text features: people, organizations, place names, stock ticker symbols, certain abbreviations, and so on. Disambiguation—the use of contextual clues—may be required to decide where, for instance, "Ford" can refer to a former U.S. president, a vehicle manufacturer, a movie star, a river crossing, or some other entity.
Recognition of pattern-identified entities: features such as telephone numbers, e-mail addresses, and quantities (with units) can be discerned via regular expressions or other pattern matches (see the short sketch below).
Document clustering: identification of sets of similar text documents.
Coreference: identification of noun phrases and other terms that refer to the same object.
Relationship, fact, and event extraction: identification of associations among entities and other information in text.
Sentiment analysis involves discerning subjective (as opposed to factual) material and extracting various forms of attitudinal information: sentiment, opinion, mood, and emotion. Text analytics techniques are helpful in analyzing sentiment at the entity, concept, or topic level and in distinguishing opinion holder and opinion object.
Quantitative text analysis is a set of techniques stemming from the social sciences where either a human judge or a computer extracts semantic or grammatical relationships between words in order to find out the meaning or stylistic patterns of, usually, a casual personal text for the purpose of psychological profiling etc.
Pre-processing usually involves tasks such as tokenization, filtering and stemming.

Applications
Text mining technology is now broadly applied to a wide variety of government, research, and business needs. All these groups may use text mining for records management and searching documents relevant to their daily activities. Legal professionals may use text mining for e-discovery, for example. Governments and military groups use text mining for national security and intelligence purposes. Scientific researchers incorporate text mining approaches into efforts to organize large sets of text data (i.e., addressing the problem of unstructured data), to determine ideas communicated through text (e.g., sentiment analysis in social media) and to support scientific discovery in fields such as the life sciences and bioinformatics. In business, applications are used to support competitive intelligence and automated ad placement, among numerous other activities.

Security applications
Many text mining software packages are marketed for security applications, especially monitoring and analysis of online plain text sources such as Internet news, blogs, etc. for national security purposes. It is also involved in the study of text encryption/decryption.
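The sketch referenced in the subtask list above: a minimal illustration of pattern-identified entity recognition and of pre-processing (tokenization, filtering, stemming), using Python's NLTK toolkit, which this article mentions later. The sample sentence and regular expressions are invented for the example:

```python
# Two subtasks in miniature: pattern-identified entities via regular
# expressions, and classic pre-processing with NLTK.
import re
import nltk
from nltk.stem import PorterStemmer

# nltk.download("punkt")  # tokenizer model, needed once on first run

text = "Email ford@example.com or call +1 555 0100 about the 75 kg shipment."

# Recognition of pattern-identified entities.
emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", text)
phones = re.findall(r"\+?\d[\d ().-]{7,}\d", text)
quantities = re.findall(r"\d+\s?(?:kg|km|m)\b", text)

# Pre-processing: tokenization, filtering, stemming.
tokens = nltk.word_tokenize(text.lower())          # tokenization
words = [t for t in tokens if t.isalpha()]         # filtering non-words
stems = [PorterStemmer().stem(w) for w in words]   # stemming to root forms

print(emails, phones, quantities, stems)
```

Note that regex-based recognition is the simplest form of this subtask; the statistical named entity recognition described above requires trained models rather than hand-written patterns.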
Biomedical applications A range of text mining applications in the biomedical literature has been described, including computational approaches to assist with studies in protein docking, protein interactions, and protein-disease associations. In addition, with large patient textual datasets in the clinical field, datasets of demographic information in population studies, and adverse event reports, text mining can facilitate clinical studies and precision medicine. Text mining algorithms can facilitate the stratification and indexing of specific clinical events in large patient textual datasets of symptoms, side effects, and comorbidities from electronic health records, event reports, and reports from specific diagnostic tests. One online text mining application in the biomedical literature is PubGene, a publicly accessible search engine that combines biomedical text mining with network visualization. GoPubMed is a knowledge-based search engine for biomedical texts. Text mining techniques also enable the extraction of unknown knowledge from unstructured documents in the clinical domain. Software applications Text mining methods and software are also being researched and developed by major firms, including IBM and Microsoft, to further automate the mining and analysis processes, and by different firms working in the area of search and indexing in general as a way to improve their results. Within the public sector, much effort has been concentrated on creating software for tracking and monitoring terrorist activities. For study purposes, Weka is one of the most popular options in the scientific world, acting as an excellent entry point for beginners. For Python programmers, there is an excellent toolkit called NLTK for more general purposes. For more advanced programmers, there is also the Gensim library, which focuses on word embedding-based text representations. Online media applications Text mining is being used by large media companies, such as the Tribune Company, to clarify information and to provide readers with better search experiences, which in turn increases site "stickiness" and revenue. Additionally, on the back end, editors benefit by being able to share, associate, and package news across properties, significantly increasing opportunities to monetize content. Business and marketing applications Text analytics is being used in business, particularly in marketing, such as in customer relationship management. Coussement and Van den Poel (2008) apply it to improve predictive analytics models for customer churn (customer attrition). Text mining is also being applied in stock return prediction. Sentiment analysis Sentiment analysis may involve the analysis of movie reviews to estimate how favorable a review is for a movie. Such an analysis may need a labeled data set or labeling of the affectivity of words. Resources for the affectivity of words and concepts have been made for WordNet and ConceptNet, respectively. Text has been used to detect emotions in the related area of affective computing. Text-based approaches to affective computing have been used on multiple corpora, such as student evaluations, children's stories, and news stories. Scientific literature mining and academic applications The issue of text mining is of importance to publishers who hold large databases of information needing indexing for retrieval. This is especially true in scientific disciplines, in which highly specific information is often contained within the written text.
Therefore, initiatives have been taken, such as Nature's proposal for an Open Text Mining Interface (OTMI) and the National Institutes of Health's common Journal Publishing Document Type Definition (DTD), that would provide semantic cues to machines to answer specific queries contained within the text without removing publisher barriers to public access. Academic institutions have also become involved in the text mining initiative: The National Centre for Text Mining (NaCTeM) is the first publicly funded text mining centre in the world. NaCTeM is operated by the University of Manchester in close collaboration with the Tsujii Lab, University of Tokyo. NaCTeM provides customised tools and research facilities and offers advice to the academic community. It is funded by the Joint Information Systems Committee (JISC) and two of the UK research councils (EPSRC and BBSRC). With an initial focus on text mining in the biological and biomedical sciences, research has since expanded into the social sciences. In the United States, the School of Information at the University of California, Berkeley is developing a program called BioText to assist biology researchers in text mining and analysis. The Text Analysis Portal for Research (TAPoR), currently housed at the University of Alberta, is a scholarly project to catalogue text analysis applications and create a gateway for researchers new to the practice. Methods for scientific literature mining Computational methods have been developed to assist with information retrieval from scientific literature. Published approaches include methods for searching, determining novelty, and clarifying homonyms among technical reports. Digital humanities and computational sociology The automatic analysis of vast textual corpora has created the possibility for scholars to analyze millions of documents in multiple languages with very limited manual intervention. Key enabling technologies have been parsing, machine translation, topic categorization, and machine learning. The automatic parsing of textual corpora has enabled the extraction of actors and their relational networks on a vast scale, turning textual data into network data. The resulting networks, which can contain thousands of nodes, are then analyzed using tools from network theory to identify the key actors, the key communities or parties, and general properties such as robustness or structural stability of the overall network, or the centrality of certain nodes. This automates the approach introduced by quantitative narrative analysis, whereby subject-verb-object triplets are identified with pairs of actors linked by an action, or pairs formed by actor-object (a brief code sketch of this approach appears below). Content analysis has been a traditional part of social sciences and media studies for a long time. The automation of content analysis has allowed a "big data" revolution to take place in that field, with studies in social media and newspaper content that include millions of news items. Gender bias, readability, content similarity, reader preferences, and even mood have been analyzed based on text mining methods over millions of documents. The analysis of readability, gender bias, and topic bias was demonstrated in Flaounas et al., showing how different topics have different gender biases and levels of readability; the possibility of detecting mood patterns in a vast population by analyzing Twitter content was demonstrated as well.
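The following minimal Python sketch illustrates the narrative-network approach just described, using the open-source networkx library; the subject-verb-object triplets are invented for illustration, whereas a real pipeline would extract them automatically from a parsed corpus:

import networkx as nx  # third-party library: pip install networkx

# Invented subject-verb-object triplets standing in for parser output.
triplets = [
    ("parliament", "passed", "reform bill"),
    ("opposition", "criticized", "reform bill"),
    ("opposition", "challenged", "parliament"),
    ("press", "covered", "parliament"),
]

# Turn textual data into network data: actors and objects become nodes,
# actions become labelled directed edges.
G = nx.DiGraph()
for subject, verb, obj in triplets:
    G.add_edge(subject, obj, action=verb)

# Apply a standard network-theory tool: degree centrality highlights
# the key actors in the narrative.
for node, score in sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")

On realistic corpora the same graph construction scales to thousands of nodes, at which point community detection and robustness measures become informative as well.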
Software Text mining computer programs are available from many commercial and open-source companies and sources. See List of text mining software. Intellectual property law Situation in Europe Under European copyright and database laws, the mining of in-copyright works (such as by web mining) without the permission of the copyright owner is illegal. In the UK in 2014, on the recommendation of the Hargreaves review, the government amended copyright law to allow text mining as a limitation and exception. It was the second country in the world to do so, following Japan, which introduced a mining-specific exception in 2009. However, owing to the restriction of the Information Society Directive (2001), the UK exception only allows content mining for non-commercial purposes. UK copyright law does not allow this provision to be overridden by contractual terms and conditions. The European Commission facilitated stakeholder discussion on text and data mining in 2013, under the title Licenses for Europe. The fact that the proposed solution to this legal issue was licensing, rather than limitations and exceptions to copyright law, led representatives of universities, researchers, libraries, civil society groups, and open access publishers to leave the stakeholder dialogue in May 2013. Situation in the United States US copyright law, and in particular its fair use provisions, means that text mining in America, as well as in other fair use countries such as Israel, Taiwan, and South Korea, is viewed as being legal. As text mining is transformative, meaning that it does not supplant the original work, it is viewed as being lawful under fair use. For example, as part of the Google Book settlement the presiding judge on the case ruled that Google's digitization project of in-copyright books was lawful, in part because of the transformative uses that the digitization project displayed—one such use being text and data mining. Implications Until recently, websites most often used text-based searches, which only found documents containing specific user-defined words or phrases. Now, through use of a semantic web, text mining can find content based on meaning and context (rather than just by a specific word). Additionally, text mining software can be used to build large dossiers of information about specific people and events. For example, large datasets based on data extracted from news reports can be built to facilitate social network analysis or counter-intelligence. In effect, the text mining software may act in a capacity similar to an intelligence analyst or research librarian, albeit with a more limited scope of analysis. Text mining is also used in some email spam filters as a way of determining the characteristics of messages that are likely to be advertisements or other unwanted material. Text mining plays an important role in determining financial market sentiment. Future Increasing interest is being paid to multilingual data mining: the ability to gain information across languages and cluster similar items from different linguistic sources according to their meaning. The challenge of exploiting the large proportion of enterprise information that originates in "unstructured" form has been recognized for decades. It is recognized in the earliest definition of business intelligence (BI), in an October 1958 IBM Journal article by H. P. Luhn, "A Business Intelligence System", which describes a system that will: "...utilize data-processing machines for auto-abstracting and auto-encoding of documents and for creating interest profiles for each of the 'action points' in an organization.
Both incoming and internally generated documents are automatically abstracted, characterized by a word pattern, and sent automatically to appropriate action points." Yet as management information systems developed starting in the 1960s, and as BI emerged in the '80s and '90s as a software category and field of practice, the emphasis was on numerical data stored in relational databases. This is not surprising: text in "unstructured" documents is hard to process. The emergence of text analytics in its current form stems from a refocusing of research in the late 1990s from algorithm development to application, as described by Prof. Marti A. Hearst in the paper Untangling Text Data Mining: For almost a decade the computational linguistics community has viewed large text collections as a resource to be tapped in order to produce better text analysis algorithms. In this paper, I have attempted to suggest a new emphasis: the use of large online text collections to discover new facts and trends about the world itself. I suggest that to make progress we do not need fully artificial intelligent text analysis; rather, a mixture of computationally-driven and user-guided analysis may open the door to exciting new results. Hearst's 1999 statement of need fairly well describes the state of text analytics technology and practice a decade later. See also Concept mining Document processing Full text search List of text mining software Market sentiment Name resolution (semantics and text extraction) Named entity recognition News analytics Ontology learning Record linkage Sequential pattern mining (string and sequence mining) w-shingling Web mining, a task that may involve text mining (e.g. first find appropriate web pages by classifying crawled web pages, then extract the desired information from the text content of these pages considered relevant) References Citations Sources Ananiadou, S., and McNaught, J. (eds.) (2006). Text Mining for Biology and Biomedicine. Artech House Books. Bilisoly, R. (2008). Practical Text Mining with Perl. New York: John Wiley & Sons. Feldman, R., and Sanger, J. (2006). The Text Mining Handbook. New York: Cambridge University Press. Hotho, A., Nürnberger, A., and Paaß, G. (2005). "A brief survey of text mining". LDV Forum, Vol. 20(1), pp. 19–62. Indurkhya, N., and Damerau, F. (2010). Handbook of Natural Language Processing, 2nd Edition. Boca Raton, FL: CRC Press. Kao, A., and Poteet, S. (eds.). Natural Language Processing and Text Mining. Springer. Konchady, M. Text Mining Application Programming (Programming Series). Charles River Media. Manning, C., and Schütze, H. (1999). Foundations of Statistical Natural Language Processing. Cambridge, MA: MIT Press. Miner, G., Elder, J., Hill, T., Nisbet, R., Delen, D., and Fast, A. (2012). Practical Text Mining and Statistical Analysis for Non-structured Text Data Applications. Elsevier Academic Press. McKnight, W. (2005). "Building business intelligence: Text data mining in business intelligence". DM Review, pp. 21–22. Srivastava, A., and Sahami, M. (2009). Text Mining: Classification, Clustering, and Applications. Boca Raton, FL: CRC Press. Zanasi, A. (ed.) (2007). Text Mining and its Applications to Intelligence, CRM and Knowledge Management. WIT Press. External links Marti Hearst: What Is Text Mining?
(October 2003) Automatic Content Extraction, Linguistic Data Consortium Automatic Content Extraction, NIST Artificial intelligence applications Applied data mining Computational linguistics Natural language processing Statistical natural language processing Text
5978732
https://en.wikipedia.org/wiki/Corvus%20Systems
Corvus Systems
Corvus Systems was a computer technology company that offered, at various points in its history, computer hardware, software, and complete PC systems. History Corvus was founded by Michael D'Addio and Mark Hahn in 1979. This San Jose, Silicon Valley company pioneered in the early days of personal computers, producing some of the first hard disk drives, data backup systems, and networking devices for them, most commonly for the Apple II series. The combination of disk storage, backup, and networking was very popular in primary and secondary education: a classroom would have a single drive and backup serving a full classroom of Apple II computers networked together. Students would log in each time they used a computer and access their work via the Corvus Omninet network, which also supported e-mail. The company went public in 1981 and was traded on the NASDAQ exchange. In 1985 Corvus acquired a company named Onyx & IMI. IMI (International Memories Incorporated) manufactured the hard disks used by Corvus. The New York Times followed their financial fortunes. They were a modest success in the stock market during their first few years as a public company. The company's founders left Corvus in 1985 as the remaining board of directors made the decision to enter the PC clone market. D'Addio and Hahn went on to found Videonics in 1986, the same year Corvus discontinued hardware manufacturing. In 1987, Corvus filed for Chapter 11. That same year two top executives left. Its demise was partially caused by Ethernet establishing itself over Omninet as the local area network standard for PCs, and partially by the decision to become a PC clone company in a crowded and unprofitable market space. Disk drives and backup The company modified the Apple II's DOS operating system to enable the use of Corvus's 10 MB Winchester-technology hard disk drives. Apple DOS was normally limited to 140 KB floppy disks. The Corvus disks not only increased the size of available storage but were also considerably faster than floppy disks. These disk drives were initially sold to software engineers inside Apple Computer. The disk drives were manufactured by IMI (International Memories Incorporated) in Cupertino, California. Corvus provided the hardware and software to interface them to the Apple II, Tandy TRS-80, Atari 800, and S-100 bus systems. Later, the DEC Rainbow, Corvus Concept, IBM PCs, and Macs were added to the list. These 5 MB and 10 MB drives were twice the size of a shoebox and initially retailed for US$5000. Corvus sold many stand-alone drives, whose numbers increased as they became shared over Omninet. This allowed sharing a then-very-costly hard drive among multiple inexpensive Apple II computers; an entire office or classroom could thus share a single Omninet-connected Corvus drive. Certain models of the drives offered a tape backup option called "Mirror" that made hard disk backups using a VCR, which was itself a relatively new technology. A standalone version of "Mirror" was also made available. Data was backed up at roughly one megabyte per minute, which resulted in five- to ten-minute backup times. Tapes could hold up to 73 MB. Even though Corvus had a patent on this technology, several other computer companies later used this technique. A later version of tape backup for the Corvus Omninet, called The Bank, was a standalone Omninet-connected device that used custom backup tape media very similar in shape and size to today's DLT tapes.
Both the Corvus File Server and The Bank tape backup units were in white plastic housings roughly the size of two stacked reams of paper. Networking In 1980 Corvus came out with the first commercially successful local area network (LAN), called Omninet. Most Ethernet deployments of the time ran at 3 Mbit/s and cost one or two thousand dollars per computer. Ethernet also used a thick and heavy cable that felt like a lead pipe when bent, which was run in proximity to each computer, often in the ceiling plenum. A transceiver unit was spliced or tapped into the cable for each computer, with an additional AUI cable running from the transceiver to the computer itself. The weight of the cable was such that injury to workers from ceiling failure and falling cables was a real danger. Corvus's Omninet ran at one megabit per second, used twisted-pair cable, and had a simple add-in card for each computer. The card cost $400 and could be installed by the end user. Cards and operating software were produced for both the Apple II and the IBM PC and XT. At the time, many networking experts said that twisted pair could never work because "the bits would leak off", but it eventually became the de facto standard for wired LANs. Other Omninet devices included the "Utility Server", an Omninet-connected device that allowed one parallel printer and two serial devices (usually printers) connected to it to be shared on an Omninet network. Internally the Utility Server was a single-board Z80 computer with 64 kB of RAM; on startup the internal boot ROM retrieved its operating program from the File Server. The literature/documentation and software that shipped with the Utility Server included a memory map and a write-up of the I/O ports. It was possible to replace the Utility Server's operating code file with a stand-alone copy of WordStar configured for the serial port, and to fetch and save its files on the file server. A dumb terminal connected to the first serial port then became an inexpensive diskless word-processing station. A single Omninet was limited to 64 devices, and the device address was set with a DIP switch. Device zero was the first file server, device one was the Mirror or The Bank tape backup, and the rest were user computers or Utility Servers. Systems with more than one file server had them at address zero and up, then the tape backup, then the user computers. No matter the configuration, only 64 devices were possible. Corvus Concept In April 1982, Corvus launched a computer called the Corvus Concept. This was a Motorola 68000-based computer in a pizza-box case with a 15" full-page display mounted on its top, the first that could be rotated between landscape and portrait modes. Changing display orientation did not require rebooting the computer; the change was automatic and seamless, selected by a mercury switch inside the monitor shell. The screen resolution was 720×560 pixels. Positioned vertically, the monitor displayed 72 rows by 91 columns of text; positioned horizontally, it displayed 56 rows by 120 columns. The first version of the Concept came with 256 kB of RAM standard, and expanding the RAM to its maximum supported capacity of 1 MB cost $995 at the time. The Concept was capable of using more RAM, and a simple hack provided up to 4 MB. The failure of the Concept was mostly related to its lack of compatibility with the IBM PC, introduced the previous August.
The Concept interface, though not a GUI, was a standardized text user interface that made heavy use of function keys. Application programs could contextually redefine these keys, and the current command performed by each key was displayed on a persistent status line at the bottom of the screen. The function keys were placed on the top row of the keyboard, close to their onscreen representation. A crude "Paint" program, available for $395, permitted a user to create simple bitmap graphics. These could be pasted into Corvus's word-processing program, "Edword", which was quite powerful by the standards of the day; it was judged to be worth the cost of the system by itself. The operating system, called CCOS, was prompt-driven, communicating with the user in full sentences such as Copy which file(s)? when the "Copy file" function key was pressed. The user would respond by typing the path of the file to be copied; the OS would then prompt for a destination path. Wildcard pattern matching was supported using the * and ? characters. The OS supported pipes and "Exec files", which were similar to shell scripts. Versions of the Concept running Unix were available; these configurations could not run standard Concept software. The UCSD p-System was available, and a Pascal compiler supporting most UCSD extensions was offered; FORTRAN was also available. Built-in BASIC was also an option, enabling the computer to boot without a disk attached. A software CP/M emulator was available from Corvus, but it was of limited usefulness since it only emulated 8080 instructions and not the more common Z80-specific instructions. Wesleyan University ported the KERMIT file transfer protocol. The entire motherboard could slide out of the back of the cabinet for easy access for upgrades and repairs. The system was equipped with four 50-pin Apple II bus-compatible slots for expansion cards. External 5.25" and 8" floppy disk drive peripherals (made by Fujitsu) were available for the Concept. The 8" drive had a formatted capacity of 250 kB; the 5.25" drive was read-only, and its disks held 140 kB. The video card was integrated into the monitor's update circuitry. The system had a battery-backed hardware clock that stored the date and month, but not the year; there was a leap-year switch that set February to have 29 days. The system had a built-in Omninet port. It could boot from a locally connected floppy disk or Corvus hard drive, or it could be booted over the Omninet network. In 1984, the base 256 kB system cost $3995 with monitor, keyboard, and the bundled Edword word processor. The floppy drive cost an additional $750. Hard drives from 6 MB ($2195) to 20 MB ($3995) were also available (SCSI on some). A software bundle containing the ISYS integrated spreadsheet, graphing, word processing, and communication software cost $495. The hardware necessary for networking cost $495 per workstation. The Concept Unix workstation came with 512 kB of RAM and cost $4295 for the Concept Uniplex, which could be expanded to two users, and $5995 for the Concept Plus, which could serve eight users. The Concept was available as part of turnkey systems from OEMs, such as the Oklahoma Seismic Corporation Mira for oil well exploration and the KeyText Systems BookWare for publishing.
References External links Collection of Corvus documentation The Corvus Museum Website 1979 establishments in California 1987 disestablishments in California American companies established in 1979 American companies disestablished in 1987 Computer companies established in 1979 Computer companies disestablished in 1987 Defunct computer companies of the United States Defunct computer hardware companies Manufacturing companies based in San Jose, California
2570332
https://en.wikipedia.org/wiki/Dines%20Bj%C3%B8rner
Dines Bjørner
Professor Dines Bjørner (born 4 October 1937, in Odense) is a Danish computer scientist. He specializes in research into domain engineering, requirements engineering, and formal methods. He worked with Cliff Jones and others on the Vienna Development Method (VDM) at IBM Laboratory Vienna and elsewhere. Later he was involved with producing the RAISE (Rigorous Approach to Industrial Software Engineering) formal method with tool support. Bjørner was a professor at the Technical University of Denmark (DTU) from 1965 to 1969 and from 1976 to 2007, before he retired in March 2007. He was responsible for establishing the United Nations University International Institute for Software Technology (UNU-IIST), Macau, in 1992 and was its first director. His magnum opus on software engineering (three volumes) appeared in 2005 and 2006. To support VDM, Bjørner co-founded VDM-Europe, which subsequently became Formal Methods Europe, an organization that supports conferences and related activities. In 2003, he instigated the associated ForTIA Formal Techniques Industry Association. Bjørner became a knight of the Order of the Dannebrog in 1985. He received a Dr.h.c. from Masaryk University, Brno, Czech Republic, in 2004. In 2021, he obtained a Dr. techn. from the Technical University of Denmark, Kongens Lyngby, Denmark. He is a Fellow of the IEEE (2004) and the ACM (2005). He has also been a member of the Academia Europaea since 1989. In 2007, a symposium was held in Macau in honour of Dines Bjørner and Zhou Chaochen. In 2021, Bjørner was elected to a Formal Methods Europe (FME) Fellowship. Bjørner is married to Kari Bjørner, with two children and five grandchildren. Selected books Domain Science and Engineering: A Foundation for Software Development, Bjørner, D. Monographs in Theoretical Computer Science, An EATCS Series, Springer Nature (2021). Software Engineering 1: Abstraction and Modelling, Bjørner, D. Texts in Theoretical Computer Science, An EATCS Series, Springer-Verlag (2005). Software Engineering 2: Specification of Systems and Languages, Bjørner, D. Texts in Theoretical Computer Science, An EATCS Series, Springer-Verlag (2006). Software Engineering 3: Domains, Requirements, and Software Design, Bjørner, D. Texts in Theoretical Computer Science, An EATCS Series, Springer-Verlag (2006). Formal Specification and Software Development, Bjørner, D. and Jones, C.B. Prentice Hall International Series in Computer Science, Prentice Hall (1982). The Vienna Development Method: The Meta-Language, Bjørner, D. and Jones, C.B. (editors). Lecture Notes in Computer Science, Volume 61, Springer-Verlag (1978). See also International Journal of Software and Informatics References External links Home page Biographical information RAISE information 1937 births Living people People from Odense Technical University of Denmark alumni Danish computer scientists IBM employees Technical University of Denmark faculty United Nations University faculty Formal methods people Computer science writers Knights of the Order of the Dannebrog Fellow Members of the IEEE Fellows of the Association for Computing Machinery Members of Academia Europaea
25752317
https://en.wikipedia.org/wiki/Jeremy%20Castro%20Baguyos
Jeremy Castro Baguyos
Jeremy Castro Baguyos (born 1968 in Quezon City, Philippines) is a musician-researcher specializing in the realization of live interactive computer music. Based at the University of Nebraska at Omaha (USA), he is a Professor of Music. His most notable contributions to the field are in the area of live performance combined with interactive computer technology. For the state of Nebraska, Baguyos established the state's first interactive computer music ensemble, Ensemble A.M.I. (Artificial Music Initiative), in conjunction with its first and only electronic music festival featuring interactive computer music, Virtual Music Week. For his own instrument, the double bass, he was one of the early practitioners of interactive computer music performance. Inspired by early electronic pioneers such as Robert Black and Bertram Turetzky, and building on foundational studies at the Indiana University Jacobs School of Music, Baguyos studied computer music at the Peabody Conservatory of Johns Hopkins University. It was at Peabody that he performed with the Peabody Computer Music Consort and collaborated with other students of computer music and established composers of computer music who shared his enthusiasm for the emerging art form. The result was the creation and performance, between 2002 and 2005, of some of the first significant repertoire for double bass and interactive electronics, and probably the very first double bass repertoire to utilize the MSP extensions to the Max digital audio programming language. For this reason, his work differed from the few earlier experiments in interactive computer music for double bass: his realizations in public presentation were implemented in software, as opposed to relying on the much more limited hardware-based synthesis. He performed repertoire that utilized real-time audio capture and DSP, the use of automation in live performance, and simulations of musical machine intelligence. His experimental work in this area has been recorded on the "Music From SEAMUS" annual CD series of the Society for Electroacoustic Music in the United States, as well as on his own solo CD released in 2005, "Uncoiled Oscillations" (OCD). He appears frequently at notable academic conferences such as the International Computer Music Conference, the conference of the Society for Electroacoustic Music in the United States, and the Seoul International Computer Music Festival. He is also the Principal Double Bassist of the Des Moines Metro Opera Summer Festival Orchestra, and has performed with the National Symphony (Washington, DC), the Kennedy Center Opera House Orchestra (Washington, DC), and the DC-based early music group the Washington Bach Consort. References University of Nebraska at Omaha Faculty Bio Music Technology at the University of Nebraska at Omaha Electronic Music Midwest Peabody Computer Music Alumni page myauditions.com- string committee member Jeremy Baguyos myspace page (double bass + electronics) Jeremy Baguyos myspace page (computer music composition) ICMC 2006 Reviews, ARRAY. (San Francisco: International Computer Music Association, 2008) pp. 36–37. O'Reilly, Kyle. "Traditional Music, Technology Meld at Concert." Omaha World-Herald (March 9, 2009), p. 2B. Baguyos, Jeremy. "Interactive Computer Music For Double Bass." Bass World, Vol. 28: No. 1 (Spring 2004), pp. 13–17. External links Jeremy Baguyos review in the Computer Music Journal (MIT Press), Vol. 33, No. 2 (Summer 2009), pp.
101–103 Proceedings of the International Computer Music Conference 2005 Living people Filipino academics 1968 births University of Nebraska Omaha faculty Peabody Institute alumni People from Quezon City Classical double-bassists 21st-century double-bassists
1147374
https://en.wikipedia.org/wiki/Nicholas%20G.%20Carr
Nicholas G. Carr
Nicholas G. Carr (born 1959) is an American writer who has published books and articles on technology, business, and culture. His book The Shallows: What the Internet Is Doing to Our Brains was a finalist for the 2011 Pulitzer Prize in General Nonfiction. Career Nicholas Carr originally came to prominence with the 2003 Harvard Business Review article "IT Doesn't Matter" and the 2004 book Does IT Matter? Information Technology and the Corrosion of Competitive Advantage (Harvard Business School Press). In these widely discussed works, he argued that the strategic importance of information technology in business has diminished as IT has become more commonplace, standardized, and cheaper. His ideas roiled the information technology industry, spurring heated outcries from executives of Microsoft, Intel, Hewlett-Packard, and other leading technology companies, although they received mixed responses from other commentators. In 2005, Carr published the controversial article "The End of Corporate Computing" in the MIT Sloan Management Review, in which he argued that in the future companies will purchase information technology as a utility service from outside suppliers. Carr's second book, The Big Switch: Rewiring the World, From Edison to Google, was published in January 2008 by W. W. Norton. It examines the economic and social consequences of the rise of Internet-based "cloud computing", comparing them to those that accompanied the rise of electric utilities in the early 20th century. In the summer of 2008, The Atlantic published Carr's article "Is Google Making Us Stupid?" as the cover story of its annual Ideas issue. Highly critical of the Internet's effect on cognition, the article has been read and debated widely in both the media and the blogosphere. Carr's main argument is that the Internet may have detrimental effects on cognition that diminish the capacity for concentration and contemplation. Carr's 2010 book, The Shallows, develops this argument further. Discussing examples ranging from Nietzsche's typewriter to London cab drivers' GPS navigators, Carr shows how newly introduced technologies change the way people think, act, and live. The book focuses on the detrimental influence of the Internet—although it does recognize its beneficial aspects—by investigating how hypertext has contributed to the fragmentation of knowledge. When users search the Web, for instance, the context of information can be easily ignored. "We don't see the trees," Carr writes. "We see twigs and leaves." One of Carr's major points is that the change caused by the Internet involves the physical restructuring of the human brain, which he explains using the neuroscientific notion of "neuroplasticity." In addition to being a Pulitzer Prize finalist, the book appeared on the New York Times nonfiction bestseller list and has been translated into 17 languages. In January 2008 Carr became a member of the Editorial Board of Advisors of Encyclopædia Britannica. Earlier in his career, Carr served as executive editor of the Harvard Business Review. He was educated at Dartmouth College and Harvard University. In 2014, Carr published his fourth book, The Glass Cage: Automation and Us, which presents a critical examination of the role of computer automation in contemporary life. Spanning historical, technical, economic, and philosophical viewpoints, the book has been widely acclaimed by reviewers, with the New York Times Sunday Book Review terming it "essential."
In 2016, Carr published Utopia Is Creepy: and Other Provocations, a collection of blog posts, essays, and reviews from 2005 to 2016. The book provides a critique of modern American techno-utopianism, which TIME magazine said "punches a hole in Silicon Valley cultural hubris." Blog Through his blog "Rough Type," Carr has been a critic of technological utopianism and in particular the populist claims made for online social production. In his 2005 blog essay "The Amorality of Web 2.0," he criticized the quality of volunteer Web 2.0 information projects such as Wikipedia and the blogosphere and argued that they may have a net negative effect on society by displacing more expensive professional alternatives. In a response to Carr's criticism, Wikipedia co-founder Jimmy Wales admitted that the Wikipedia articles quoted by Carr "are, quite frankly, a horrific embarrassment" and solicited recommendations for improving Wikipedia's quality. In May 2007, Carr argued that the dominance of Wikipedia pages in many search results represents a dangerous consolidation of Internet traffic and authority, which may be leading to the creation of what he called "information plantations". Carr coined the term "wikicrats" (a pejorative description of Wikipedia administrators) in August 2007, as part of a more general critique of what he sees as Wikipedia's tendency to develop ever more elaborate and complex systems of rules and bureaucratic rank or caste over time. He holds a B.A. from Dartmouth College and an M.A. in English and American literature and language from Harvard University. Books Digital Enterprise: How to Reshape Your Business for a Connected World (2001) Does IT Matter? (2004) The Big Switch: Rewiring the World, from Edison to Google (2008, W. W. Norton) The Shallows: What the Internet Is Doing to Our Brains (2010, W. W. Norton) The Glass Cage: Automation and Us (2014, W. W. Norton) Utopia Is Creepy: and Other Provocations (2016, W. W. Norton) See also The Shallows Is Google Making Us Stupid? Carr–Benkler wager Notes External links Nicholas Carr's homepage Nicholas Carr's weblog The Web Shatters Focus, Rewires Brains by Nicholas Carr IT Doesn't Matter, originally published in Harvard Business Review The Argument Over IT May 1, 2004 Does Nick Carr matter? August 21, 2004 Nicholas Carr Strikes Again January 23, 2008 ITworld 1959 births Living people American business writers American technology writers Dartmouth College alumni Critics of Wikipedia Internet theorists American male non-fiction writers Harvard Graduate School of Arts and Sciences alumni
64306783
https://en.wikipedia.org/wiki/Perry%20O.%20Crawford%20Jr.
Perry O. Crawford Jr.
Perry Orson Crawford, Jr. (August 9, 1917 – December 13, 2006) was an American computer pioneer credited as being the first to fully realize and promote the value of digital, as opposed to analog, computers for real-time applications. This was in 1945, while advising Jay Forrester in developing flight simulators and anti-aircraft fire control devices during World War II, before practical digital computers had been produced. Twelve years later, similar foresight on related issues led to his heading the design team for IBM's SABRE project, the reservation and ticketing system for American Airlines, the first large-scale commercial application of real-time computer systems, which became the model for on-line transaction processing. Early life and education Crawford was born in Medford, Oregon, where his father, Perry Crawford Sr., an engineering graduate of Stanford University, oversaw construction on the Klamath River Hydroelectric Project. His mother, Irma Zschokke Crawford, also a Stanford graduate, was an artist and a descendant of the Swiss writer and revolutionary figure Heinrich Zschokke. When his father became president of American Utilities Service Corporation in Chicago, Crawford attended New Trier Township High School in Winnetka, Illinois. He entered the Massachusetts Institute of Technology in 1936 to study electrical engineering and came to work under Vannevar Bush with fellow student Claude Shannon on the differential analyzer. The theses for his two degrees are considered to be among the earliest modern computer design documents. His B.Sc. thesis, "Instrumental Analysis in Matrix Algebra", was completed in 1939. In summary: Sketches the design of "an automatically controlled calculating machine" capable of performing a variety of matrix calculations, and incorporating means for scanning digital data represented on punched tape, for adding, subtracting, multiplying, and dividing two numbers, and for storing and printing or punching the data. A punched tape was to be used for sequence control, which would specify the selection of the numbers to be operated on, the operation to be performed, and the disposal of the result. When Shannon completed his doctorate, Crawford succeeded him in the Center for Analysis as a postgraduate student. His M.Sc. thesis, "Automatic Control by Arithmetic Operations" (1942), continued the theme: It is the purpose of this thesis to describe the elements and operation of a calculating system for performing one of the operations in the control of anti-aircraft gunfire, which is, namely, the prediction of the future position of the target. It is to be emphasized at the outset that little progress has been made toward the construction of automatic electronic calculating systems for any purpose. ... It can be proposed only that this thesis shows a possible approach to the design of a number of calculating system elements and to the structure of an arithmetical predictor. ... In this introduction, equipment for performing the operations occurring in automatic calculating is described. This equipment includes electronic switching elements, devices for multiplying two numbers, finding a function of a variable, recording numbers, translating mechanical displacements into numerical data, and for translating numerical data into mechanical displacements. The thesis included a description of a matrix "selector switch" for implementing an arbitrary function or control sequence. Independently invented by Jan A.
Rajchman, this device was incorporated, with acknowledgments, in the ENIAC. In a 1975 interview, J. Presper Eckert, a designer of the ENIAC, described another contribution: I had gotten the idea of using disks for memory, digital memory, from a master's thesis written by Perry Crawford at MIT. He had not built any such disks; it was just speculation. Career Office of Naval Research From 1942 to 1945 Crawford served as a civilian attached to the Navy's Special Devices Section (a predecessor of the Naval Air Warfare Center Training Systems Division) at Sands Point, Long Island. In 1946 this became the Special Devices Center under the newly created Office of Naval Research (ONR). Crawford supervised the Navy Ballistics Computation Program until September 1948, when he accepted a temporary position with the Research and Development Board of the Department of Defense. As head of the computer section in ONR he came into contact with Jay Forrester at MIT who, with his collaborator Robert Everett, headed a project that had roots in developing flight simulators for pilot training and evolved into the Whirlwind Project, which in turn prepared the way for the air-defense application SAGE (Semi-Automatic Ground Environment). From Forrester's point of view, Crawford was a significant contributor and supporter whom he described as "uninhibited, not restrained by protocol or chain of command, and a freewheeling intervener in many circles of activity": Perry Crawford was an electrical engineering graduate of MIT and a person with continually unfolding visions of futures that others had not yet glimpsed. He was the first person in about 1946 to call my attention to the possibility of digital rather than analog computers. He was always looking, listening, and projecting new ideas into the future. ... In the fall of 1947, following conversations with Perry Crawford, we wrote two documents, numbered L-1 and L-2, that showed how digital computers could manage a Naval task force and interpret radar data. In July 1948, at a conference at the University of California in Los Angeles, Crawford proposed using computers for the control of aircraft. On March 18, 1949, at a panel meeting of the Research and Development Board, he pushed the idea of digital computers in an air defense system. Crawford also contributed to the Moore School Lectures with a talk entitled "Applications of Digital Computation Involving Continuous Input and Output Variables" (August 5, 1946). It discussed such topics as missile and combat simulations and was originally classified as confidential, remaining unpublished until decades later. He stressed his conviction that these applications could best be performed with the aid of digital computers, a thesis many did not agree with at the time. He gave a talk at a session on electronic computers at the 1947 conference of the Institute of Radio Engineers on "Applications of Electronic Digital Computers", which was summarized in the program: A discussion of computer applications, including scientific calculations, wave propagation, and aerodynamics. Comments will be made on the future relation of analogue and digital computers, and also on the possible engineering application of electronic digital computers to automatic process and factory control, air traffic control, and business calculations. International Business Machines Crawford left his civilian service in the Navy in 1952 to join IBM.
The company had been working with the military on SAGE and anticipated further developments in real-time applications. In 1954 Thomas J. Watson, Jr., son of IBM's founder, oversaw Crawford's placement, along with Hans Peter Luhn, at the head of the design team for creating a digital computer system to manage American Airlines' reservations and ticketing. Named SABRE (Semi-Automated Business Research Environment), it soon grew to manage the total operation: flight planning, crew schedules, special meals, etc. The project was at the time easily the largest civilian computerization task ever undertaken, involving some 200 technical personnel producing a million lines of program code. ... By the early 1970s all the major carriers possessed reliable real-time systems and communications networks that had become an essential component of their operations, second in importance only to the airplanes themselves. Crawford continued at IBM until his retirement in 1988, working towards what he saw as a necessary "computer transition" as outlined in his 1979 publication below, but otherwise rarely publicized outside of IBM. In a 1980 interview R. Blair Smith, the IBM marketing manager whose contact with American Airlines initiated SABRE, described Crawford's Imaging project: It's a shame we didn't bring it out, but there was a great need for Perry Crawford's concept of imaging. ... Perry's idea was to eliminate typical application programming altogether by having a master program; then have all of the data concerned with, say, running a given business available and identified in the computer. Then if somebody wanted a report of any kind, all he had to do was to tell the computer what was wanted, identify the data from which it would be drawn, and out would come the result. Now, that's an over-simplification. Obviously it would be terribly complex to do. It was a most difficult job, and it never got off the ground. ... I told him he'd have a tough time, because with the status of programming the way it is today, after all of these years of programming, and the programming languages we have developed, introducing the Imaging concept would be about as difficult as converting the typical American to the metric system today. Personal life Crawford married Marguerite (Peggy) Murtagh (1924–1979). They had five children. Published works 1968. "Why CAI [computer aided instruction] Is Really a Late, Late Show: The Coming of Age of the Computer." pp. 20–25 in Goodman, Walter, and Gould, Thomas F. (eds.), New York State Conference on Instructional Uses of the Computer. Final Report. Proceedings of Conference, Tuxedo Park, N.Y., October 3–5, 1968. "The computer and the unified media made possible by the computer become a new foundation of intellectual functioning of all forms including the forms that we call education and including the education of the young." 1969. "The New Views." Systematics: The Journal of The Institute for the Comparative Study of History, Philosophy and the Sciences 6.2, 114–16. 1973. "Design Guide for Redesign." Impact on Instructional Improvement (sponsored by the New York State Association for Supervision and Curriculum Development) 8.3, 19–28. The article presents an approach to the design of educational systems; the author is described as the president of his local Croton-Harmon Schools Board of Education. 1974. "On the Connections Between Data and Things in the Real World." pp.
51–57 in Management of Data Elements in Information processing: Proceedings of a Symposium Sponsored by the American Standards Institute and by the National Bureau of Standards. First National Symposium, National Bureau of Standards, Gaithersburg, Maryland January 24–25, 1974. Washington DC: U.S. Department of Commerce. 1979. "Alfred Korzybski and the Computer Transition." General Semantics Bulletin 47, 120–125. Bibliography Green, Tom. Bright Boys: The Making of Information Technology. CRC Press, 2010. Redmond, Kent C., and Thomas M. Smith. Project Whirlwind : The History of a Pioneer Computer. Bedford, MA: Digital Press, 1980. Redmond, Kent C., and Thomas M. Smith. From Whirlwind to MITRE: The R&D Story of The SAGE Air Defense Computer. Cambridge, MA: The MIT Press, 2000. Notes References Eckert, J. Presper (1975). Interview by Christopher Evans, Oral History of Computing, Science Museum, London, and National Physical Laboratory, Teddington, No. 3, Audiotape, 1975. Forrester, Jay W. (2001). "Lincoln Laboratory, MIT: Historical Comments." Heritage Lecture Series lecture given on the 50th Anniversary of Lincoln Laboratory, Lexington, MA, 26 November 2001. Oral history interview with R. Blair Smith. Charles Babbage Institute, University of Minnesota, Minneapolis. External links Computer Oral History Collection, Archives Center, National Museum of American History, Smithsonian Institution Houses tapes and transcript of 1970 interview with Perry O. Crawford, Jr. Perry O. Crawford papers, MC-0509. Massachusetts Institute of Technology, Department of Distinctive Collections, Cambridge, Massachusetts. Accessed July 2, 2020. Archives:The Computer Pioneers: The Whirlwind Computer. The Engineering and Technology History Wiki. Accessed July 5, 2020. Video of a discussion in 1983, moderated by Perry Crawford, among the members and supporters of the Whirlwind team: Jay Forrester, James Killian, Norman H. Taylor, Charles Adams, Dean Arden, J.T. Gilmore, Hal Laning, Robert Everett, and Robert Taylor. Archives:The Computer Pioneers: Electronic Developments During World War II. The Engineering and Technology History Wiki. Accessed July 5, 2020. Video of a discussion in 1983, moderated by Perry Crawford, among other pioneers: Kenneth Bowles, Julius Stratton, Albert Hill, and Gordon Brown. Perry O. Crawford, Computer Pioneers by J. A. N. Lee. IEEE Computer Society. Accessed July 30, 2020. MIT School of Engineering alumni 2006 deaths American computer scientists People from Medford, Oregon 1917 births
20915105
https://en.wikipedia.org/wiki/Accel%20Transmatic
Accel Transmatic
Accel Transmatic Limited is a company headquartered in Chennai, India. The company's major focus areas are product R&D and development, software R&D and development, and animation. Accel Transmatic is a publicly traded company listed on the Bombay Stock Exchange (BSE). The company has its main operations in Chennai and Trivandrum in India, and subsidiaries in California, United States, and Tokyo, Japan. History Accel Transmatic Limited was originally established as Transmatic Systems Ltd (TSL) in 1986 by two entrepreneurs from Kerala, M R Narayanan and T Ravindran, with equity participation from the Kerala State Industrial Development Corporation and IDBI. The company was promoted to develop and manufacture professional electronic products and communication systems in the state of Kerala, India. In 1991 TDICI, the venture capital arm of ICICI, provided funding for expansion and diversification to manufacture high-speed dot matrix printers in collaboration with Output Technology Corporation (OTC) of the USA. In 1994 the company had its IPO; the issue was oversubscribed, and the company was listed on the Mumbai, Chennai, and Cochin stock exchanges. In 2002, the company diversified into the software and technologies space by merging with other technology companies. It approached Accel Limited, an established IT business group based in Chennai, for a possible acquisition of the company. Accel Limited acquired the shares held by one of the promoters of the company, and the acquisition was completed by August 2003; thus the company became an Accel group company. Accel's management decided to create a new diversified portfolio for Transmatic Systems by merging two of the group entities, namely Accel Software and Technologies and Accel IT Academy. It was also decided to acquire an embedded software development company based in Technopark, Thiruvananthapuram, namely Ushus Technologies Pvt Ltd (UTPL), into TSL through the merger process, since it was complementary to the business of TSL. Accordingly, a merged entity was formed, effective 1 January 2004, under the name Accel Transmatic Ltd (ATL). The legal completion of the merger took place on 9 February 2005. In 2007, Accel Transmatic expanded its animation production by opening a motion-capture facility in the Kinfra Film & Video Park in Thiruvananthapuram. In 2011, after completing the necessary approvals from shareholders and the board, Accel Transmatic decided to shut its software operations (the Technologies Division, including the overseas subsidiary Accel North America, Inc.) to focus on animation and media. Subsidiaries Accel North America (ANA) Accel Solutions Japan References Software companies of India Companies based in Chennai Software companies established in 1986 Indian companies established in 1986 1986 establishments in Tamil Nadu Companies listed on the Bombay Stock Exchange
24040671
https://en.wikipedia.org/wiki/Napoleon%3A%20Total%20War
Napoleon: Total War
Napoleon: Total War is a turn-based strategy and real-time tactics video game developed by Creative Assembly and published by Sega for Microsoft Windows and macOS. Napoleon was released in North America on 23 February 2010, and in Europe on 26 February. The game is the sixth stand-alone installment in the Total War series. The game is set in Europe, North Africa, and the Middle East during the French Revolutionary Wars and Napoleonic Wars. Players assume the role of Napoleon Bonaparte, or one of his major rivals, on a turn-based campaign map and engage in the subsequent battles in real time. As with its predecessor, Empire: Total War, which included a special United States storyline, Napoleon features three special campaigns that follow the general's career. Napoleon received generally favourable reviews from video game critics. Reviews praised the game's visuals, story-driven campaigns, and new gameplay features. Some reviewers were critical of the game's weak AI, high system requirements, and limited scope, while others considered Napoleon overly similar to Empire, its immediate predecessor in the series. An entirely new campaign, the Peninsular Campaign, was released on 25 June 2010 as downloadable content. It was later released at retail as part of the Empire and Napoleon Total War – Game of the Year Edition compilation pack on 2 October 2010. The macOS version of the game, containing the Peninsular Campaign and additional unit packs, was announced by Feral Interactive on 28 January 2013 and released for the Mac on 3 July 2013. French actor Stéphane Cornicard provided voice-acting for Napoleon Bonaparte in the English, German, French, and Spanish editions. Gameplay As with all other games in the Total War series, Napoleon consists of two gameplay types: a turn-based geopolitical campaign – which requires players to build structures in a faction's territories to produce units and create a source of income, research new technologies, deal with other in-game factions through diplomacy, trade, and war, send agents on missions, create and command armies, and eventually become the world's dominant faction – and real-time tactical battles in which players command huge armies to direct the course of any battles that take place. Napoleon contains four campaigns, two of which follow Napoleon's early military career. The first career event is the Italian campaign of 1796, while the second is the French invasion of Egypt in 1798. Both feature smaller, optional missions that help drive the story forward. The major French campaign, however, is the so-called "Mastery of Europe," which resembles the holistic modes of previous Total War games. Conversely, the "Campaigns of the Coalition" allows players to govern Great Britain, Russia, Prussia, or Austria and attempt to defeat Napoleonic France in Europe. Each major campaign requires players to obtain a certain number of territories, although unlike in Empire: Total War, one does not need to wait until the end of the campaign to be declared the winner. As in Empire, revolutions and revolts can affect the course of a player's campaign; France, however, is all but immune to revolution in the Mastery of Europe campaign. For the first time in the Total War franchise, attrition now plays a part on the campaign map: depending on the location, armies will lose men to heat or snow. Unlike in Empire, the losses an army suffers on campaign are automatically replenished when it is in friendly territory.
Some of Napoleon's most famous battles, such as the Battle of the Pyramids, Austerlitz, Borodino, and Waterloo, are available as historical scenarios, separate from the campaign. As with previous Total War games, battles can be fought manually or auto-resolved when two hostile armies or navies meet on the campaign map. Armies and navies consist of Napoleonic-era land units and ships respectively. On the battle map, the attacker wins by routing the entire enemy army, while the defender wins by routing the attacker or by having at least one unit remaining when the time limit runs out. Much as in Empire, land units are armed with gunpowder weapons such as muskets and cannons, and melee weapons such as swords, sabres and bayonets. Units have morale, which falls when they take heavy casualties, are flanked, or lose their general, among other factors. Once a unit's morale is broken, it will rout and attempt to escape the battlefield. Broken units may regain morale if the balance of power changes, so to ensure these units do not remain a threat, players can chase them down with light cavalry. Infantry units may engage in both firefights and melees; cavalry can generally only fight in melee, with the exception of mounted infantry and missile cavalry; artillery units are best used to hit targets from afar. Creative Assembly also implemented a campaign feature whereby several notable commanders, including Napoleon himself, are wounded and sent back to the faction's main capital instead of being killed on the battlefield. A new physics system was implemented for the real-time battles, so that when cannonballs hit the ground, for instance, they leave impact craters. Gunpowder smoke lingers and reduces visibility in protracted engagements. Mike Simpson, Creative Assembly's studio director, reported that a number of environmental factors affect battlefield tactics: gunpowder backfires when it rains, and the elevation of the landscape affects the range of munitions. Individuals within a unit now vary to a greater degree, and are no longer as generic as in previous titles in the series. The campaign map is narrower in focus, but more detailed than Empire's campaign map. Turns in Napoleon: Total War represent two weeks, while previous titles sported turns that were the equivalent of at least six months. Additionally, the game's artificial intelligence system was modified. There is also a new uniform system including approximately 355 non-editable uniforms, which has so far never been released, casting doubt on whether it was ever created. In addition, Napoleon: Total War contains several new multiplayer features and a voice command utility to speak to other players via Steam. Unlike previous Total War titles, there is now the option of a "drop-in" multiplayer campaign mode: when playing a campaign against the computer, it is possible to allow another user to join via a lobby and take control. Multiplayer The multiplayer mode includes a campaign mode. Multiplayer drop-in battles allow players to fight human opponents in single-player campaign battles. Steam achievements, gameplay bonuses and voice communications are also available. Marketing and release Napoleon: Total War was first revealed on 19 August 2009. The game was meant to be the first in an all-new story-driven branch of the Total War series. On 10 March 2010, a demo was released via Steam featuring a playable version of the Battle of Ligny.
Retail versions Napoleon was initially released in four different retail versions: the Standard edition, Limited edition, Imperial edition, and the Emperor's edition. All boxed versions include the "Elite Regiment" pack, a collection of five extra units; editions bought on Steam do not include this unit pack. Standard – comes with only the game disc and manual in a standard plastic case, like most other retail game editions. Limited – offers the full game and manual, as well as ten exclusive units in the "Heroes of the Napoleonic Wars" pack. Imperial – includes all the contents of the Limited edition, but has special premium packaging and an illustrated wallchart timeline of the important events in Napoleon's life. Emperor's – includes all contents of the Imperial edition, and is the only edition to include a 200mm statuette of Napoleon and a field journal. This edition was released only in Australia and New Zealand. Empire and Napoleon Total War – Game of the Year Edition – this version contains the full versions of both Empire and Napoleon, including most of their available downloadable content (excluding "Heroes of the Napoleonic Wars" and the "Imperial Eagle Pack" for the latter). Pre-orders made via the Steam content delivery system included another special unit: the Royal Scots Greys. Orders made via certain retailers likewise included various special units, such as the Towarczys and the Grand Battery of the Convention. Downloadable content The first downloadable content for Napoleon, the Imperial Guard Pack, was released for free on 26 March 2010. It added several new units to the game, such as Napoleon's Polish Guard Lancers, and an alternate version of the Battle of Waterloo scenario, with the British as the playable faction. Creative Assembly released the Coalition Battle Pack on 6 May 2010. It contains six new units: Lifeguard Hussars, Coldstream Guards, Archduke Charles' Legion, Luetzow's Freikorps, Life Hussars, and the Semenovski Lifeguard. It also includes a scenario featuring the Battle of Friedland. A downloadable campaign, The Peninsular Campaign, was announced on 25 May 2010 and eventually released on 25 June 2010 via Steam. Featuring an enlarged map of the Iberian Peninsula, new units (such as guerrilla units that can be placed outside a player's deployment zone before a battle), agents, technologies, and gameplay mechanics, this new campaign, as its name implies, focuses on the Peninsular War. One of the features advertised for Napoleon was a uniform editor. Upon release, Creative Assembly announced that the uniform editor would be delayed; while it was not advertised "on the box", it was advertised as a feature by all online retailers (including Steam) and the official game website. Five months after Napoleon: Total War's release, mention of the uniform editor was removed from the game's list of features on its official website; it was, however, still advertised by most online retailers selling the game. Almost eight months after the game's release, Mike Simpson stated that the original uniform editor was never meant for public use, and that Creative Assembly was making a unit editor capable of both editing and creating new units. This new unit editor was scheduled to be released in the first quarter of 2011. To date, no further reference to the uniform editor has been made, and fans now consider it unlikely ever to be released. Reception Napoleon: Total War received "generally favorable" reviews, according to review aggregator Metacritic.
The game and its developers alike were praised for a number of graphical and AI improvements, along with the new campaign features and multiplayer modes. IGN remarked that the "tactical battles are still some of the most amazing we've ever seen in any game." Gameplanet came to the same conclusion, stating that "graphically, the battles leave Empire in the dust, featuring five times more particles per effect." GameSpot praised the interface, saying that "[it] never feels cluttered, and the bulk of the screen is always devoted to the action." Other aspects of the game received a mixed reaction. According to Eurogamer, despite occasionally poor decision-making "the AI will still hold its own," and provides players "with a challenge that suits the difficulty." Other criticisms focused upon the somewhat linear story-mode campaigns, the duration of naval engagements and the stability of the game's netcode. Actiontrip commented that "while still a good strategy game, Napoleon: Total War seems to offer less freedom to players in terms of how they can resolve various battle situations." Tom Chick, in his GameSpy review, gave the game a 2.0 out of 5, citing "Bad AI" and the game "feel[ing] like a re-skinned Empire" for the score. Game Revolution felt the same, noting that "the problem Napoleon has is that it's not just like Empire, it is and only is Empire...It feels like an expansion at best, yet it's being sold like it's a brand new game." The game was awarded Best PC Game at the Milthon European Game Awards in Paris on 22 September 2010. Score composers Richard Beddow, Richard Birdsall and Ian Livingstone won the British Ivor Novello Award for Best Original Video Game Score on 19 May 2011. References 2010 video games Games for Windows certified games Lua (programming language)-scripted video games MacOS games Napoleonic Wars video games Real-time tactics video games Sega video games Creative Assembly games Total War (video game series) Turn-based strategy video games Video games developed in the United Kingdom Video games set in Egypt Video games set in Europe Video games set in the Middle East Windows games Feral Interactive games Video games with downloadable content Works about the Battle of Waterloo Historical simulation games Grand strategy video games World conquest video games
5392177
https://en.wikipedia.org/wiki/Carlo%20H.%20S%C3%A9quin
Carlo H. Séquin
Carlo Heinrich Séquin (born October 30, 1941) is a professor of Computer Science at the University of California, Berkeley in the United States. Séquin is recognized as one of the pioneers in processor design. Séquin has worked on computer graphics, geometric modelling, and the development of computer-aided design (CAD) tools for circuit designers. He was born in Zurich, Switzerland. Séquin is a Fellow of the Association for Computing Machinery. Academic history Séquin holds the Baccalaureate type C (in Math and Science), Basel, Switzerland (1960), the Diploma in Experimental Physics, University of Basel, Switzerland (1965), and a Ph.D. in Experimental Physics from the Institute of Applied Physics, Basel (1969). Career Having received his doctorate, Séquin went on to work at the Institute of Applied Physics in Basel on the interface physics of MOS transistors and problems of applied electronics in the field of cybernetic models. From 1970 to 1976 Séquin worked at Bell Telephone Laboratories in New Jersey on the design and investigation of charge-coupled devices for imaging and signal-processing applications. While at Bell Telephone Laboratories he was introduced to computer graphics in lectures given by Ken Knowlton. In 1977 Séquin joined the faculty of the Electrical Engineering and Computer Science Department (EECS) at Berkeley, where he introduced the concept of RISC processors with David A. Patterson in the early 1980s. He was head of the Computer Science Division from 1980 to 1983. Since then he has worked extensively on computer graphics, geometric modelling, and the development of computer-aided design (CAD) tools for circuit designers, architects, and mechanical engineers. Séquin's expertise in computer graphics and geometric design has led to his involvement with sculptors of abstract geometric art. Dr. Séquin is a Fellow of the Association for Computing Machinery (ACM), a Fellow of the IEEE, and has been elected to the Swiss Academy of Engineering Sciences. Since 2001 he has been Associate Dean, Capital Projects, at Berkeley's College of Engineering. References External links Biographical information on Séquin Carlo H. Séquin's homepage at U.C. Berkeley Sculpture designs and maths models by Séquin List of publications Interview with Séquin Séquin on perfect shapes in higher dimensions, regular polytopes in n dimensions Carlo H. Séquin, an oral history (interview, 5 July 2002) 1941 births Living people Swiss computer scientists American computer scientists Computer graphics researchers Computer systems researchers University of California, Berkeley faculty
32224263
https://en.wikipedia.org/wiki/Dell%20M1000e
Dell M1000e
The Dell blade server products are built around the M1000e enclosure, which can hold server blades, an embedded EqualLogic iSCSI storage area network and I/O modules including Ethernet, Fibre Channel and InfiniBand switches. Enclosure The M1000e fits in a 19-inch rack and is 10 rack units high (44 cm), 17.6" (44.7 cm) wide and 29.7" (75.4 cm) deep. The empty blade enclosure weighs 44.5 kg, while a fully loaded system can weigh up to 178.8 kg. The servers are inserted at the front, while at the back the power supplies, fans and I/O modules are inserted together with the management module(s) (CMC, or chassis management controller) and the KVM switch. A blade enclosure offers centralized management for the servers and I/O systems of the blade system. Most servers used in the blade system offer an iDRAC card, and one can connect to each server's iDRAC via the M1000e management system. It is also possible to connect a virtual KVM switch to have access to the main console of each installed server. In June 2013 Dell introduced the PowerEdge VRTX, a smaller blade system that shares modules with the M1000e. The blade servers, although following the traditional naming strategy (e.g. M520, M620; only certain blades are supported), are not interchangeable between the VRTX and the M1000e: the blades differ in firmware and mezzanine connectors. In 2018 Dell introduced the PE MX7000, a new MX enclosure model and the next generation of Dell enclosures. The M1000e enclosure has a front side and a back side, and all communication between the inserted blades and modules goes via the midplane, which has the same function as a backplane but has connectors at both sides: the front side is dedicated to server blades, the back to I/O modules. Midplane The midplane is completely passive. The server blades are inserted in the front side of the enclosure, while all other components are reached via the back. The original midplane 1.0 capabilities are: Fabric A - Ethernet 1Gb; Fabrics B&C - Ethernet 1Gb, 10Gb, 40Gb - Fibre Channel 4Gb, 8Gb - InfiniBand DDR, QDR, FDR10. The enhanced midplane 1.1 capabilities are: Fabric A - Ethernet 1Gb, 10Gb; Fabrics B&C - Ethernet 1Gb, 10Gb, 40Gb - Fibre Channel 4Gb, 8Gb, 16Gb - InfiniBand DDR, QDR, FDR10, FDR. The original M1000e enclosures came with midplane version 1.0, but that midplane did not support the 10GBASE-KR standard on fabric A (10GBASE-KR is supported on fabrics B&C). To have 10Gb Ethernet on fabric A, or 16Gb Fibre Channel or InfiniBand FDR (and faster) on fabrics B&C, midplane 1.1 is required. Current versions of the enclosure come with midplane 1.1, and it is possible to upgrade the midplane. The factory-installed version can be identified via the markings on the back side of the enclosure, just above the I/O modules: if an "arrow down" is visible above the 6 I/O slots, the 1.0 midplane was installed in the factory; if there are 3 or 4 horizontal bars, midplane 1.1 was installed. As it is possible to upgrade the midplane, the outside markings are not decisive: the actually installed midplane version is visible via the CMC management interface. Front: blade servers Each M1000e enclosure can hold up to 32 quarter-height blades, 16 half-height blades, 8 full-height blades, or combinations (e.g. 1 full-height + 14 half-height). The slots are numbered 1-16, where 1-8 are the upper blades and 9-16 are directly beneath 1-8.
When using full-height blades one uses slot n (where n = 1 to 8) and slot n+8. Integrated at the bottom of the front side is a connection option for 2 x USB, meant for a mouse and keyboard, as well as a standard VGA monitor connection (15-pin). Next to this is a power button with power indication, and next to that a small LCD screen with navigation buttons which allows one to get system information without the need to access the CMC/management system of the enclosure. Basic status and configuration information is available via this display. To operate the display one can pull it forwards and tilt it for optimal viewing and access to the navigation buttons. For quick status checks, an indicator light sits alongside the LCD display and is always visible, with a blue LED indicating normal operation and an orange LED indicating a problem of some kind. This LCD display can also be used for the initial configuration wizard in a newly delivered (unconfigured) system, allowing the operator to configure the CMC IP address. Back: power, management and I/O All other parts and modules are placed at the rear of the M1000e. The rear side is divided into 3 sections: at the top one inserts the 3 management modules: one or two CMC modules and an optional iKVM module. At the bottom of the enclosure there are 6 bays for power supply units; a standard M1000e operates with three PSUs. The area in between offers 3 x 3 bays for cooling fans (left - middle - right) and up to 6 I/O modules: three modules to the left of the middle fans and three to the right. The I/O modules on the left are numbered A1, B1 and C1, while the right-hand side has places for A2, B2 and C2. The A-fabric I/O modules connect to the on-board I/O controllers, which in most cases will be a dual 1Gb or 10Gb Ethernet NIC. When the blade has a dual-port on-board 1Gb NIC, the first NIC connects to the I/O module in fabric A1 and the second NIC connects to fabric A2 (and the blade slot corresponds with the internal Ethernet interface: e.g. the on-board NIC in slot 5 will connect to interface 5 of fabric A1, and the second on-board NIC goes to interface 5 of fabric A2). I/O modules in fabric B1/B2 connect to the (optional) mezzanine card B (or 2) in the server, and fabric C to mezzanine C (or 3). All modules can be inserted or removed on a running enclosure (hot swapping). Available server blades An M1000e holds up to 32 quarter-height blades, 16 half-height blades or 8 full-height blades, or a mix of them (e.g. 2 full-height + 12 half-height). Quarter-height blades require a full-size sleeve to install. The list below covers the available 11th-generation (11G) blades and the latest 12th-generation models; there are also older blades such as the M605, M805 and M905 series. PowerEdge M420 Released in 2012, the PE M420 is a "quarter-size" blade: where most servers are "half-size", allowing 16 blades per M1000e enclosure, up to 32 of the new M420 blade servers can be installed in a single chassis. Implementing the M420 has some consequences for the system: many people have reserved 16 IP addresses per chassis to support the "automatic IP address assignment" for the iDRAC management card in a blade, but as it is now possible to run 32 blades per chassis, they might need to change their management IP assignment for the iDRAC. To support the M420 server one needs to run CMC firmware 4.1 or later, and one needs a full-size "sleeve" that holds up to four M420 blades.
It also has consequences for the "normal" I/O NIC assignment: most (half-size) blades have two LOMs (LAN On Motherboard), one connecting to the switch in the A1 fabric, the other to the A2 fabric, and the same applies to the mezzanine cards B and C. All available I/O modules (except for the PCM6348, the MXL and the MIOA) have 16 internal ports: one for each half-size blade. As an M420 has two 10 Gb LOM NICs, a fully loaded chassis would require 2 × 32 internal switch ports for LOM and the same for mezzanine. An M420 server only supports a single mezzanine card (mezzanine B or mezzanine C depending on its location), whereas all half-height and full-height systems support two mezzanine cards. To support all on-board NICs one would need to deploy a 32-slot Ethernet switch such as the MXL or the Force10 I/O Aggregator. But for the mezzanine card it is different: the connections from mezzanine B on the PE M420 are "load-balanced" between the B and C fabrics of the M1000e: the mezzanine card in "slot A" (the top slot in the sleeve) connects to fabric C, while "slot B" (the second slot from the top) connects to fabric B, and that is then repeated for the C and D slots in the sleeve. PowerEdge M520 A half-height server with up to two 8-core Intel Xeon E5-2400 CPUs, running the Intel C600 chipset and offering up to 384 GB of RAM via 12 DIMM slots. Two on-blade disks (2.5-inch PCIe SSD, SATA HDD or SAS HDD) are installable for local storage, and there is a choice of Intel or Broadcom LOM plus 2 mezzanine slots for I/O. The M520 can also be used in the PowerEdge VRTX system. PowerEdge M600 A half-height server with a quad-core Intel Xeon and 8 DIMM slots for up to 64 GB of RAM. PowerEdge M610 A half-height server with a quad-core or six-core Intel 5500- or 5600-series Xeon CPU and the Intel 5520 chipset. RAM options via 12 DIMM slots for up to 192 GB of DDR3 RAM, a maximum of two on-blade hot-pluggable 2.5-inch hard disks or SSDs, and a choice of built-in NICs for Ethernet or converged network adapter (CNA), Fibre Channel or InfiniBand. The video card is a Matrox G200. PowerEdge M610x A full-height blade server that has the same capabilities as the half-height M610, but offers an expansion module containing x16 PCI Express (PCIe) 2.0 expansion slots that can support up to two standard full-length/full-height PCIe cards. PowerEdge M620 A half-height server with up to two 12-core Intel Xeon E5-2600 or Xeon E5-2600 v2 CPUs, running the Intel C600 chipset and offering up to 768 GB of RAM via 24 DIMM slots. Two on-blade disks (2.5-inch PCIe SSD, SATA HDD or SAS HDD) are installable for local storage, with a range of RAID controller options. There are two external and one internal USB ports and two SD card slots. The blades can come pre-installed with Windows 2008 R2 SP1, Windows 2012 R2, SuSE Linux Enterprise or RHEL; they can also be ordered with Citrix XenServer or VMware vSphere ESXi, or use the Hyper-V role that comes with Windows 2008 R2. According to the vendor, all Generation 12 servers are optimized to run as virtualisation platforms. Out-of-band management is done via iDRAC 7 through the CMC. PowerEdge M630 A half-height server with up to two 22-core Intel Xeon E5-2600 v3/v4 CPUs, running the Intel C610 chipset and offering up to 768 GB of RAM via 24 DIMM slots, or 640 GB of RAM via 20 DIMM slots when using 145 W CPUs. Two on-blade disks (2.5-inch PCIe SSD, SATA HDD or SAS HDD) are installable for local storage, and there is a choice of Intel or Broadcom LOM plus 2 mezzanine slots for I/O. The M630 can also be used in the PowerEdge VRTX system.
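The slot-to-port mapping described above is deterministic, so it can be modelled in a few lines of code. The following Python sketch is purely illustrative (it is not Dell software, and the function names are hypothetical); it only encodes the mapping rules stated above: the on-board NICs of the half-height blade in slot n reach internal port n of the A1 and A2 modules, and mezzanine cards B and C reach the matching ports of the B and C fabric pairs.

```python
# Illustrative model of the M1000e slot-to-fabric mapping described above.
# Hypothetical sketch, not Dell software; only the mapping rules come from
# the text: the blade in slot n reaches internal port n of the fabric modules.

def onboard_nic_ports(slot: int) -> dict:
    """Map a half-height blade's two on-board NICs (LOMs) to fabric A."""
    if not 1 <= slot <= 16:
        raise ValueError("half-height slots are numbered 1-16")
    # First LOM goes to the module in bay A1, second LOM to bay A2,
    # in both cases on the internal port matching the slot number.
    return {"A1": slot, "A2": slot}

def mezzanine_ports(slot: int, mezzanine: str) -> dict:
    """Map mezzanine card B or C to the corresponding fabric pair."""
    if mezzanine not in ("B", "C"):
        raise ValueError("half-height blades offer mezzanine slots B and C")
    return {mezzanine + "1": slot, mezzanine + "2": slot}

# Example: the blade in slot 5 reaches interface 5 on both A-fabric modules,
# and its mezzanine B card reaches interface 5 on the B1 and B2 modules.
print(onboard_nic_ports(5))     # {'A1': 5, 'A2': 5}
print(mezzanine_ports(5, "B"))  # {'B1': 5, 'B2': 5}
```

Note that quarter-height M420 blades break this simple model: as described above, a fully populated sleeve needs 32 internal A-fabric ports, and the single mezzanine card alternates between the B and C fabrics depending on its position in the sleeve.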
Amulet HotKey offers a modified M630 server that can be fitted with a GPU or Teradici PCoIP mezzanine module. PowerEdge M640 A half-height server with up to two 28-core Xeon Scalable CPUs, supported in both the M1000e and the PowerEdge VRTX chassis. The server can support up to 16 DDR4 RDIMM memory slots for up to 1024 GB of RAM and 2 drive bays supporting SAS/SATA or NVMe drives (with an adapter). The server uses iDRAC 9. PowerEdge M710 A full-height server with a quad-core or six-core Intel 5500- or 5600-series Xeon CPU and up to 192 GB of RAM. A maximum of four on-blade hot-pluggable 2.5-inch hard disks or SSDs and a choice of built-in NICs for Ethernet or converged network adapter, Fibre Channel or InfiniBand. The video card is a Matrox G200, and the server has the Intel 5520 chipset. PowerEdge M710HD A two-socket version of the M710, but in a half-height blade. The CPUs can be two quad-core or six-core Xeon 5500s or 5600s with the Intel 5520 chipset. Up to 288 GB of DDR3 RAM can be put on this blade via 18 DIMM slots, with the standard choice of on-board Ethernet NICs based on Broadcom or Intel and one or two mezzanine cards for Ethernet, Fibre Channel or InfiniBand. PowerEdge M820 A full-height server with four 8-core Intel Xeon E5-4600 CPUs, running the Intel C600 chipset and offering up to 1.5 TB of RAM via 48 DIMM slots. Up to four on-blade 2.5-inch SAS HDDs/SSDs or two PCIe flash SSDs are installable for local storage. The M820 offers a choice of 3 different on-board converged Ethernet adaptors for 10 Gbit/s Fibre Channel over Ethernet (FCoE) from Broadcom, Brocade or QLogic, and up to two additional mezzanine cards for Ethernet, Fibre Channel or InfiniBand I/O. PowerEdge M910 A full-height server of the 11th generation with up to four 10-core Intel Xeon E7 CPUs, four 8-core Xeon 7500-series or two 8-core Xeon 6500-series CPUs, 512 GB or 1 TB of DDR3 RAM and two hot-swappable 2.5-inch hard drives (spinning or SSD). It uses the Intel E7510 chipset, with a choice of built-in NICs for Ethernet, Fibre Channel or InfiniBand. PowerEdge M915 Also a full-height 11G server, using the AMD Opteron 6100- or 6200-series CPU with the AMD SR5670 and SP5100 chipsets. Memory is provided via 32 DDR3 DIMM slots offering up to 512 GB of RAM. On board are up to two 2.5-inch HDDs or SSDs. The blade comes with a choice of on-board NICs and up to two mezzanine cards for dual-port 10Gb Ethernet, dual-port FCoE, dual-port 8Gb Fibre Channel or dual-port Mellanox InfiniBand. Video is via the on-board Matrox G200eW with 8 MB of memory. Mezzanine cards Each server comes with Ethernet NICs on the motherboard. These "on-board" NICs connect to a switch or pass-through module inserted in the A1 or A2 bay at the back of the enclosure. To allow more NICs or non-Ethernet I/O, each blade has two so-called mezzanine slots: slot B connecting to the switches/modules in bays B1 and B2, and slot C connecting to C1/C2. An M1000e chassis holds up to 6 switches or pass-through modules. For redundancy one would normally install switches in pairs: the switch in bay A2 is normally the same as the A1 switch and connects the blades' on-motherboard NICs to the data or storage network. (Converged) Ethernet mezzanine cards Standard blade servers have one or more built-in NICs that connect to the "default" switch slot (the A fabric) in the enclosure (often blade servers also offer one or more external NIC interfaces at the front of the blade), but if one wants the server to have more physical (internal) interfaces, or to connect to different switch blades in the enclosure, one can place extra mezzanine cards on the blade.
The same applies to adding a Fibre Channel host bus adapter or a Fibre Channel over Ethernet (FCoE) converged network adapter interface. Dell offers the following (converged) Ethernet mezzanine cards for its PowerEdge blades: Broadcom 57712 dual-port CNA; Brocade BR1741M-k CNA; Mellanox ConnectX-2 dual 10Gb card; Intel dual-port 10Gb Ethernet; Intel quad-port Gigabit Ethernet; Intel quad-port Gigabit Ethernet with virtualisation technology and iSCSI acceleration features; Broadcom NetXtreme II 5709 dual- and quad-port Gigabit Ethernet (dual port with iSCSI offloading features); Broadcom NetXtreme II 5711 dual-port 10Gb Ethernet with iSCSI offloading features. Non-Ethernet cards Apart from the above, the following mezzanine cards are available: Emulex LightPulse LPe1105-M4 host adapter; Mellanox ConnectX IB MDI dual-port InfiniBand mezzanine card; QLogic SANblade HBA; SANsurfer Pro. Blade storage In most setups the server blades will use external storage (a SAN using iSCSI, FCoE or Fibre Channel) in combination with local server storage on each blade via hard disk drives or SSDs on the blades (or even only an SD card with a boot OS such as VMware ESX). It is also possible to use completely diskless blades that boot via PXE or external storage. But regardless of the local and boot storage, the majority of the data used by blades is stored on a SAN or NAS external to the blade enclosure. EqualLogic blade SAN Dell offers the EqualLogic PS M4110 models of iSCSI storage arrays, which are physically installed in the M1000e chassis: this SAN takes the same space in the enclosure as two half-height blades next to each other. Apart from the form factor (the physical size, getting power from the enclosure system, etc.) it is a "normal" iSCSI SAN: the blades in the (same) chassis communicate via Ethernet, and the system does require an accepted Ethernet blade switch in the back (or a pass-through module plus rack switch). There is no option for direct communication between the server blades in the chassis and the M4110: the arrangement only allows a user to pack a complete mini-datacentre in a single enclosure (19" rack, 10 RU). Depending on the model and the disk drives used, the PS M4110 offers a (raw) system storage capacity between 4.5 TB (M4110XV with 14 × 146 GB, 15K SAS HDD) and 14 TB (M4110E with 14 × 1 TB, 7.2K SAS HDD). The M4110XS offers 7.4 TB using 9 HDDs and 5 SSDs. Each M4110 comes with one or two controllers and two 10-gigabit Ethernet interfaces for iSCSI. Management of the SAN goes via the chassis management interface (CMC). Because the iSCSI arrays use 10Gb interfaces, the SAN should be used in combination with one of the 10G blade switches: the PCM8024-k or the Force10 MXL switch. The enclosure's midplane hardware version should be at least version 1.1 to support 10Gb KR connectivity. PowerConnect switches At the rear side of the enclosure one will find the power supplies, fan trays, one or two chassis management modules (the CMCs) and a virtual KVM switch. The rear offers 6 bays for I/O modules, numbered in 3 pairs: A1/A2, B1/B2 and C1/C2. The A bays connect the on-motherboard NICs to external systems (and/or allow communication between the different blades within one enclosure). The Dell PowerConnect switches are modular switches for use in the Dell M1000e blade server enclosure. The M6220, M6348, M8024 and M8024-k are all switches in the same family, based on the same fabrics (Broadcom) and running the same firmware version.
All the M-series switches are OSI layer-3 capable: one can also say that these devices are layer-2 Ethernet switches with built-in router (layer-3) functionality. The most important difference between the M-series switches and the Dell PowerConnect classic switches (e.g. the 8024 model) is the fact that most interfaces are internal interfaces that connect to the blade servers via the midplane of the enclosure. Also, the M-series switches cannot run outside the enclosure: they only work when inserted in the enclosure. PowerConnect M6220 This is a 20-port switch: 16 internal and 4 external Gigabit Ethernet interfaces, and the option to extend it with up to four external 10Gb interfaces for uplinks, or two 10Gb uplinks and two stacking ports to stack several PCM6220s into one large logical switch. PowerConnect M6348 This is a 48-port switch: 32 internal 1Gb interfaces (two per server blade) and 16 external copper (RJ45) gigabit interfaces. There are also two SFP+ slots for 10Gb uplinks and two CX4 slots that can either be used for two extra 10Gb uplinks or to stack several M6348 blades into one logical switch. The M6348 offers four 1Gb interfaces to each blade, which means that one can only utilize the switch to full capacity when using blades that offer 4 internal NICs on the A fabric (i.e. the internal/on-motherboard NICs). The M6348 can be stacked with other M6348s but also with the PCT7000-series rack switches. PowerConnect M8024 and M8024-k The M8024 and M8024-k offer 16 internal auto-sensing 1 or 10Gb interfaces and up to 8 external ports via one or two I/O modules, each of which can offer: 4 × 10Gb SFP+ slots, 3 × CX4 10Gb (only) copper, or 2 × 10GBASE-T 1/10Gb RJ-45 interfaces. The PCM8024 has been 'end of sale' since November 2011, replaced by the PCM8024-k. Since firmware update 4.2 the PCM8024-k partially supports FCoE via FIP (FCoE Initialization Protocol) and thus converged network adapters, but unlike the PCM8428-k it has no native Fibre Channel interfaces. Also since firmware 4.2, the PCM8024-k can be stacked using external 10Gb Ethernet interfaces by assigning them as stacking ports. Although this new stacking option was also introduced in the same firmware release for the PCT8024 and PCT8024-f, one cannot stack blade (PCM) and rack (PCT) versions in a single stack. The new features are not available on the 'original' PCM8024: firmware 4.2.x for the PCM8024 only corrected bugs; no new features or functionality were added to 'end of sale' models. To use the PCM8024-k switches one needs a midplane that supports the KR (IEEE 802.3ap) standard. PowerConnect capabilities All PowerConnect M-series ("PCM") switches are multi-layer switches, offering both layer-2 (Ethernet) options and layer-3 (IP routing) options. Depending on the model, the switches offer internal 1 Gbit/s or 10 Gbit/s interfaces towards the blades in the chassis. The PowerConnect M-series switches with "-k" in the model name offer 10Gb internal connections using the 10GBASE-KR standard. The external interfaces are mainly meant to be used as uplinks or stacking interfaces, but can also be used to connect non-blade servers to the network. On the link level, PCM switches support link aggregation: both static LAGs and LACP. Like all PowerConnect switches, they run RSTP as the Spanning Tree Protocol, but it is also possible to run MSTP or Multiple Spanning Tree. The internal ports towards the blades are by default set as edge or "portfast" ports. Another feature is link dependency.
One can, for example, configure the switch so that all internal ports to the blades are shut down when the switch becomes isolated because it loses its uplink to the rest of the network. All PCM switches can be configured as pure layer-2 switches, or they can be configured to do all routing: both routing between the configured VLANs and external routing. Besides static routes, the switches also support OSPF and RIP routing. When using the switch as a routing switch, one needs to configure VLAN interfaces and assign an IP address to each VLAN interface: it is not possible to assign an IP address directly to a physical interface. Stacking All PowerConnect blade switches, except for the original PC-M8024, can be stacked. To stack the new PC-M8024-k switch, the switches need to run firmware version 4.2 or higher. In principle one can only stack switches of the same family, thus stacking multiple PCM6220s together or several PCM8024-ks. The only exception is the capability to stack the blade PCM6348 together with the rack switches PCT7024 or PCT7048. Stacks can contain multiple switches within one M1000e chassis, but one can also stack switches from different chassis to form one logical switch. Force10 switches MXL 10/40Gb switch At Dell Interop 2012 in Las Vegas, Dell announced the first FTOS-based blade switch: the Force10 MXL 10/40 Gbit/s blade switch, and later a 10/40 Gbit/s concentrator. The FTOS MXL 40Gb was introduced on 19 July 2012. The MXL provides 32 internal 10Gbit/s links (2 ports per blade in the chassis), two QSFP+ 40Gbit/s ports and two empty expansion slots allowing a maximum of 4 additional QSFP+ 40Gbit/s ports or 8 10Gbit/s ports. Each QSFP+ port can be used for a 40Gbit/s switch-to-switch (stack) uplink or, with a break-out cable, 4 × 10Gbit/s links. Dell offers direct-attach cables with a QSFP+ interface on one side and 4 × SFP+ on the other end, or a QSFP+ transceiver on one end and 4 fibre-optic pairs to be connected to SFP+ transceivers on the other side. Up to six MXL blade switches can be stacked into one logical switch. Besides the above 2 × 40Gb QSFP module, the MXL also supports a 4 × 10Gb SFP+ and a 4 × 10GBASE-T module. All Ethernet extension modules for the MXL can also be used for the rack-based N4000 series (formerly PowerConnect 8100). The MXL switches also support Fibre Channel over Ethernet, so that server blades with a converged network adapter mezzanine card can be used for both data and storage with a Fibre Channel storage system. The MXL 10/40 Gbit/s blade switch runs FTOS and is because of this the first M1000e I/O product without a web graphical user interface. The MXL can either forward the FCoE traffic to an upstream switch or, using a 4-port 8Gb FC module, perform the FCF function, connecting the MXL to a full FC switch or directly to a FC SAN. I/O Aggregator In October 2012 Dell also launched the I/O Aggregator for the M1000e chassis, running FTOS. The I/O Aggregator offers 32 internal 10Gb ports towards the blades and, as standard, two 40 Gbit/s QSFP+ uplinks, plus two extension slots. Depending on one's requirements, one can get extension modules for 40Gb QSFP+ ports, 10Gb SFP+ or 1-10GBASE-T copper interfaces. One can assign up to 16 × 10Gb uplinks to one's distribution or core layer. The I/O Aggregator supports FCoE and DCB (Data Center Bridging) features. Cisco switches Dell also offered some Cisco Catalyst switches for this blade enclosure. Cisco offers a range of switches for the blade systems of the main vendors.
Besides those for the Dell M1000e enclosure, Cisco offers similar switches for HP, FSC and IBM blade enclosures. For the Dell M1000e there are two model ranges for Ethernet switching (note: Cisco also offers the Catalyst 3030, but this switch is for the old Generation 8 or Gen 9 blade systems, not for the current M1000e enclosure). As of 2017, the only available Cisco I/O device for the M1000e chassis is the Nexus FEX. Catalyst 3032 The Catalyst 3032 is a layer-2 switch with 16 internal and 4 external 1Gb Ethernet interfaces, with an option to extend to 8 external 1Gb interfaces. The built-in external ports are 10/100/1000BASE-T copper interfaces with an RJ45 connector, and up to 4 additional 1Gb ports can be added using the extension-module slots, each of which offers 2 SFP slots for fibre-optic or Twinax 1Gb links. The Catalyst 3032 does not offer stacking (virtual blade switching). Catalyst 3130 The 3130-series switches offer 16 internal 1Gb interfaces towards the blade servers. For the uplink or external connections there are two options: the 3130G offers 4 built-in 10/100/1000BASE-T RJ-45 slots and two module bays allowing for up to 4 SFP 1Gb slots using SFP transceivers or SFP Twinax cables, while the 3130X also offers the 4 external 10/100/1000BASE-T connections and two modules for X2 10Gb uplinks. Both 3130 switches offer "stacking" or "virtual blade switching": one can stack up to 8 Catalyst 3130 switches to behave like one single switch. This can simplify the management of the switches and the (spanning-tree) topology, as the combined switches are just one switch for spanning-tree considerations. It also allows the network manager to aggregate uplinks from physically different switch units into one logical link. The 3130 switches come standard with IP Base IOS, offering all layer-2 and the basic layer-3 (routing) capabilities. Users can upgrade this basic license to IP Services or IP Advanced Services, adding additional routing capabilities such as the EIGRP, OSPF or BGP4 routing protocols, IPv6 routing and hardware-based unicast and multicast routing. These advanced features are built into the IOS on the switch, but a user has to upgrade to the IP (Advanced) Services license to unlock them. Nexus Fabric Extender Since January 2013, Cisco and Dell offer a Nexus fabric extender for the M1000e chassis: the Nexus B22Dell. Such FEXs were already available for HP and Fujitsu blade systems, and now there is also a FEX for the M1000e blade system. The release of the B22Dell came approximately 2.5 years after the initially planned and announced date: a disagreement between Dell and Cisco resulted in Cisco stopping the development of the FEX for the M1000e in 2010. Customers manage a FEX from a core Nexus 5500-series switch. Other I/O cards An M1000e enclosure can hold up to 6 switches or other I/O cards. Besides the Ethernet switches mentioned above (the PowerConnect M-series, Force10 MXL and Cisco Catalyst switches), the following I/O modules are available or usable in a Dell M1000e enclosure: Ethernet pass-through modules, which bring internal server interfaces to an external interface at the back of the enclosure (there are pass-through modules for 1G, 10G-XAUI and 10GBASE-XR, and all pass-through modules offer 16 internal interfaces linked to 16 external ports on the module); the Emulex 4 or 8Gb Fibre Channel pass-through module; the Brocade 5424 8Gb FC switch for Fibre Channel-based storage area networks; the Brocade M6505,
a 16Gb FC switch; the Dell 4 or 8Gb Fibre Channel NPIV port aggregator; the Mellanox 2401G and 4001F/Q InfiniBand dual data rate or quad data rate modules for high-performance computing; the Infiniscale 4, a 16-port 40Gb InfiniBand switch; the Cisco M7000e InfiniBand switch with 8 external DDR ports; and the PowerConnect 8428-k switch described below, with 4 "native" 8Gb Fibre Channel interfaces. PCM 8428-k Brocade FCoE Although the PCM8024-k and MXL switches support Fibre Channel over Ethernet, they are not "native" FCoE switches: they have no Fibre Channel interfaces. These switches would need to be connected to a "native" FCoE switch such as the PowerConnect B-series 8000e (the same as a Brocade 8000 switch) or a Cisco Nexus 5000-series switch with Fibre Channel interfaces (and licenses). The PCM8428 is the only fully Fibre Channel over Ethernet-capable switch for the M1000e enclosure, offering 16 enhanced Ethernet 10Gb internal interfaces, 8 (enhanced) Ethernet 10Gb external ports, and also up to four 8Gb Fibre Channel interfaces to connect directly to an FC SAN controller or a central Fibre Channel switch. The switch runs Brocade FC firmware for the fabric and Fibre Channel switching, and Foundry OS for the Ethernet switch configuration. In capabilities it is very comparable to the PowerConnect B8000; only the form factor and the number of Ethernet and FC interfaces are different. PowerConnect M5424 / Brocade 5424 This is a Brocade full Fibre Channel switch. It uses either the B or C fabric to connect the Fibre Channel mezzanine cards in the blades to the FC-based storage infrastructure. The M5424 offers 16 internal ports connecting to the FC mezzanine cards in the blade servers and 8 external ports. From the factory only the first two external ports (17 and 18) are licensed; additional connections require extra Dynamic Ports On Demand (DPOD) licenses. The switch runs on a PowerPC 440EPX processor at 667 MHz with 512 MB of DDR2 RAM system memory. Furthermore, it has 4 MB of boot flash and 512 MB of compact flash memory on board. Brocade M6505 Similar capabilities to the above, but offering 16 × 16Gb FC ports towards the server mezzanines and 8 external ports. The standard license offers 12 connections, which can be increased by 12 to support all 24 ports. Speeds of 2, 4, 8 and 16Gb are auto-sensed; the total aggregate bandwidth is 384 Gbit/s. Brocade 4424 Like the 5424, the 4424 is a Brocade SAN I/O module offering 16 internal and 8 external ports. The switch supports speeds up to 4 Gbit/s. When delivered, 12 of the ports are licensed for operation, and with additional licenses one can enable all 24 ports. The 4424 runs on a PowerPC 440GP processor at 333 MHz with 256 MB of SDRAM system memory, 4 MB of boot flash and 256 MB of compact flash memory. InfiniBand There are several modules available offering InfiniBand connectivity on the M1000e chassis. InfiniBand offers high-bandwidth, low-latency intra-computer connectivity such as is required in academic HPC clusters, large enterprise datacenters and cloud applications. There is the SFS M7000e InfiniBand switch from Cisco, which offers 16 internal "autosensing" interfaces for single (10 Gbit/s, SDR) or double (20 Gbit/s, DDR) data rate and 8 DDR external/uplink ports; the total switching capacity is 960 Gbit/s. Other options are the Mellanox SwitchX M4001F and M4001Q, and the Mellanox M2401G 20Gb InfiniBand switch for the M1000e enclosure. The M4001 switches offer either 40 Gbit/s (M4001Q) or 56 Gbit/s (M4001F) connectivity and have 16 external interfaces using QSFP ports and 16 internal connections to the InfiniBand mezzanine cards on the blades.
As with all other non-Ethernet-based switches, they can only be installed in the B or C fabric of the M1000e enclosure, as the A fabric connects to the "on-motherboard" NICs of the blades, which only come as Ethernet or converged Ethernet NICs. The 2401G offers 24 ports: 16 internal and 8 external. Unlike the M4001 switches, where the external ports use QSFP ports for fibre transceivers, the 2401 has CX4 copper cable interfaces. The switching capacity of the M2401 is 960 Gbit/s. The 4001, with 16 internal and 16 external ports at either 40 or 56 Gbit/s, offers a switching capacity of 2.56 Tbit/s. Pass-through modules In some setups one doesn't want or need switching capabilities in the enclosure. For example, if only a few of the blade servers use Fibre Channel storage, one doesn't need a fully manageable FC switch: one just wants to connect the "internal" FC interface of the blade directly to one's (existing) FC infrastructure. A pass-through module has only very limited management capabilities. Another reason to choose pass-through instead of "enclosure switches" could be the wish to have all switching done on a "one vendor" infrastructure; if that vendor's switching isn't available as an M1000e module (thus not one of the switches from Dell PowerConnect, Dell Force10 or Cisco), one could go for pass-through modules: a 32-port 10/100/1000 Mbit/s gigabit Ethernet pass-through card connects 16 internal Ethernet interfaces (1 per blade) to external RJ45 10/100/1000 Mbit/s copper ports; a 32-port 10Gb NIC version supports 16 internal 10Gb ports with 16 external SFP+ slots; a 32-port 10Gb CNA version supports 16 internal 10Gb CNA ports with 16 external CNA ports; and there is the Dell 4 or 8Gb Fibre Channel NPIV port aggregator. Intel/QLogic offer a QDR InfiniBand pass-through module for the Dell M1000e chassis, and a mezzanine version of the QLE7340 QDR IB HCA. Managing the enclosure An M1000e enclosure offers several ways of management. The M1000e offers "out-of-band" management: a dedicated VLAN (or even physical LAN) for management. The CMC modules in the enclosure offer management Ethernet interfaces and do not rely on network connections made via the I/O switches in the enclosure. One would normally connect the Ethernet links on the CMC, avoiding a switch in the enclosure. Often a physically isolated LAN is created for management, allowing management access to all enclosures even when the entire infrastructure is down. Each M1000e chassis can hold one or two CMC modules, and by default one can access the CMC web GUI via HTTPS, with SSH for command-line access. It is also possible to access the enclosure management via a serial port for CLI access, or using a local keyboard, mouse and monitor via the iKVM switch. It is possible to daisy-chain several M1000e enclosures. Management interface The information below assumes the use of the web GUI of the M1000e CMC, although all functions are also available via the text-based CLI. To access the management system one opens the CMC web GUI via HTTPS, using the out-of-band management IP address of the CMC. When the enclosure is in "stand-alone" mode one gets a general overview of the entire system: the web GUI shows how the system actually looks, including the status LEDs etc. By default the Ethernet interface of a CMC card gets an address from a DHCP server, but it is also possible to configure an IPv4 or IPv6 address via the LCD display at the front of the chassis. Routine CLI operations can also be scripted over SSH, as the sketch below illustrates.
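The following Python sketch shows what such scripted access could look like. It is a minimal, hypothetical example, not an official Dell tool: it assumes the third-party paramiko SSH library and the CMC's RACADM command set, and the hostname and credentials are placeholders (Dell's factory default account is root/calvin, which should of course be changed).

```python
# Minimal sketch of scripted CMC access over SSH. Hypothetical example, not
# a Dell tool: it assumes the paramiko library and the CMC's RACADM CLI.
import paramiko

CMC_HOST = "cmc.example.net"  # out-of-band management address of the CMC
USERNAME = "root"             # factory default account; change in production
PASSWORD = "calvin"           # factory default password; change in production

def run_racadm(command: str) -> str:
    """Open an SSH session to the CMC and run a single RACADM command."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(CMC_HOST, username=USERNAME, password=PASSWORD)
    try:
        _stdin, stdout, _stderr = client.exec_command(command)
        return stdout.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    # 'getmodinfo' lists the installed modules (blades, I/O modules, PSUs,
    # fans) with their power state and health; 'getsysinfo' summarises the
    # chassis and the CMC itself. Output formats vary by firmware version.
    print(run_racadm("racadm getmodinfo"))
    print(run_racadm("racadm getsysinfo"))
```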
Once the IP address is set or known, the operator can access the web GUI using the default root account that is built in from the factory. Via the CMC management one can configure chassis-related features: management IP addresses, authentication features (local user list, or use of a RADIUS or TACACS server), access options (web GUI, CLI, serial link, KVM, etc.), error logging (syslog server), and so on. Via the CMC interface one can configure blades in the system and configure iDRAC access to those servers. Once enabled, one can access the iDRAC (and with it the console of the server) via this web GUI, or by directly opening the web GUI of the iDRAC. The same applies to the I/O modules in the rear of the system: via the CMC one can assign an IP address to the I/O module in one of the 6 slots and then browse to the web GUI of that module (if there is a web-based GUI: unmanaged pass-through modules won't offer one, as there is nothing to configure). LCD screen On the front side of the chassis there is a small, hidden LCD screen with 3 buttons: one 4-way directional button allowing one to navigate through the menus on the screen, and two "on/off" push buttons which work as "OK" or "Escape" buttons. The screen can be used to check the status of the enclosure and the modules in it: one can, for example, check active alarms on the system, get the IP address of the CMC or KVM, check the system names, etc. Especially in an environment with multiple enclosures in one datacenter, it can be useful to check whether one is working on the correct enclosure. Unlike rack or tower servers, there is only a very limited set of indicators on individual servers: a blade server has a power LED and (local) disk-activity LEDs, but no LCD display offering alarms, hostnames, etc. Nor are there LEDs for I/O activity: this is all combined in this little screen, giving information on both the enclosure and the inserted servers, switches, fans, power supplies, etc. The LCD screen can also be used for the initial configuration of an unconfigured chassis: one can use it to set the interface language and to set the IP address of the CMC for further CLI or web-based configuration. During normal operation the display can be "pushed" into the chassis and is mostly hidden; to use it one needs to pull it out and tilt it to read the screen and access the buttons. Blade 17: local management I/O A blade system is not really designed for local (on-site) management, and nearly all communication with the modules in the enclosure and the enclosure itself is done via the CMC card(s) at the back of the enclosure. At the front side of the chassis, directly adjacent to the power button, one can connect a local terminal: a standard VGA monitor connector and two USB connectors. This connection is referred to inside the system as "blade 17" and provides a local interface to the CMC management cards. iDRAC remote access Apart from normal operational access to one's blade servers (e.g. SSH sessions to a Linux-based OS, RDP to a Windows-based OS, etc.) there are roughly two ways to manage the server blades: via the iDRAC function or via the iKVM switch. Each blade in the enclosure comes with a built-in iDRAC that allows one to access the console over an IP connection. The iDRAC on a blade server works in the same way as an iDRAC card on a rack or tower server: there is a special iDRAC network to get access to the iDRAC function.
In rack or tower servers a dedicated iDRAC Ethernet interface connects to a management LAN. On blade servers it works the same: via the CMC one configures the iDRAC setup, and access to the iDRAC of a blade is not linked to any of the on-board NICs: even if all the server's NICs were down (thus all the on-motherboard NICs and also mezzanine B and C), one can still access the iDRAC. iKVM: remote console access Apart from that, one can also connect a keyboard, mouse and monitor directly to the server: on a rack or tower server one would either connect the I/O devices when needed or have all the servers connected to a KVM switch. The same is possible with servers in a blade enclosure: via the optional iKVM module in an enclosure one can access each of the 16 blades directly. It is possible to include the iKVM switch in an existing network of digital or analog KVM switches. The iKVM switch in the Dell enclosure is an Avocent switch, and one can connect (tier) the iKVM module to other digital KVM switches such as the Dell 2161 and 4161 or Avocent DSR digital switches. Tiering the iKVM to analog KVM switches such as the Dell 2160AS or 180AS or other Avocent (compatible) KVM switches is also possible. Unlike the CMC, the iKVM switch is not redundant, but as one can always access a server (also) via its iDRAC, an outage of the KVM switch does not stop one from accessing the server console. Flex addresses The M1000e enclosure offers the option of flex addresses. This feature allows system administrators to use dedicated or fixed MAC addresses and World Wide Names (WWN) that are linked to the chassis, the position of the blade and the location of the I/O interface. It allows administrators to physically replace a server blade and/or a mezzanine card while the system continues to use the same MAC addresses and/or WWNs for that blade, without the need to manually change any MAC or WWN addresses, avoiding the risk of introducing duplicate addresses: with flex addresses the system assigns a globally unique MAC/WWN based on the location of that interface in the chassis. The flex addresses are stored on an SD card that is inserted in the CMC module of a chassis; when used, they override the addresses burned into the interfaces of the blades in the system. Power and cooling The M1000e enclosure is, like most blade systems, intended for IT infrastructures demanding high availability. (Nearly) everything in the enclosure supports redundant operation: each of the 3 I/O fabrics (A, B and C) supports two switches or pass-through cards, and the chassis supports two CMC controllers, even though one can run the chassis with only one CMC. Power and cooling are also redundant: the chassis supports up to six power supplies and nine fan units. All power supplies and fan units are inserted from the back and are all hot-swappable. The power supplies are located at the bottom of the enclosure, while the fan units are located next to and in between the switch or I/O modules. Each power supply is a 2700-watt unit and uses 208–240 V AC as input voltage. A chassis can run with at least two power supplies (2+0 non-redundant configuration). Depending on the required redundancy one can use a 2+2 or 3+3 setup (input redundancy, where one would connect each group of supplies to two different power sources) or a 3+1, 4+2 or 5+1 setup, which gives protection if one power supply unit fails - but not against losing an entire AC power group. References M1000e Server hardware Computer networking Blade servers Dell Cisco products
48304470
https://en.wikipedia.org/wiki/HTC%20One%20A9
HTC One A9
The HTC One A9 is an Android smartphone manufactured and marketed by HTC. It was officially announced on October 20, 2015. It is the successor to the HTC One Mini 2; in global markets it was sold alongside the One M9 as a mid-range offering. It was launched in an effort to improve the revenue of HTC's smartphone business after the failure of the One M9. It features a unibody aluminum frame with a Super AMOLED HD display and Dolby Surround sound for headphones. It also features a fingerprint sensor which can be used to unlock the phone. It is the first non-Nexus device to be pre-installed with Android Marshmallow, and the first non-CDMA phone compatible with the Verizon network in the United States. It received mixed reviews following its release. While many critics lent specific praise to its construction and fingerprint scanner, other aspects generally received an indifferent or mixed reception. Some thought that its price point was too high, while others thought it was a clone of the iPhone 6. In November 2015, HTC reported a 15 percent increase in overall revenue. Development Following the launch of the One M9, the manufacturer saw a decline of nearly 40 percent in revenue due to the poor sales of the M9, caused by the overheating issue of the Snapdragon 810 chipset (which forced the manufacturer to throttle the processor) and the poor performance of the camera; this also led the manufacturer to reduce its component orders by 30 percent. HTC reported further losses of revenue in the first and second quarters, and also mentioned that it had closed some of its manufacturing facilities due to the poor sales and outsourced some of its manufacturing. In June, the CEO of HTC, Cher Wang, confirmed that the company was developing a "hero product", planned to launch in October, intended to improve its smartphone business. Rumors surrounding the development of the phone began to surface in July 2015 after the failure of the One M9. It was reported by evleaks that the device would feature a metal unibody, a five-inch screen, and a fingerprint sensor. The internal specifications of the phone were speculated about through an unofficial AnTuTu benchmark test report. Several leaked images of the device began to surface, showing its similarities with the iPhone 6. In October 2015, HTC released a teaser video on its official Twitter account to promote the launch event of the device. On 20 October 2015, the phone was unveiled online in a virtual event held by HTC. Specifications Hardware Similar to the One M8, the phone is constructed of a unibody aluminum frame with brushed metal backing. The device weighs . It is tall, wide, and thick. The display of the device is Super AMOLED with a resolution of 1920 x 1080 pixels and a pixel density of 440 ppi. The device features an octa-core Qualcomm Snapdragon 617 system-on-chip. Two configurations are offered: 16 GB of storage with 2 GB of LPDDR4 RAM, and 32 GB of storage with 3 GB of RAM. Both configurations support storage expansion by microSD card up to 2 TB. HTC emphasized the device's camera due to criticism of the cameras of its older phones. The HTC One A9 is equipped with a 13.0-megapixel BSI rear-facing camera along with optical image stabilization, an ƒ/2.0 aperture and a dual-LED tone flash. Similar to the One M9, the front-facing camera has an UltraPixel image sensor, designed to work well in low-light environments.
The camera offers a pro mode where the user can adjust the ISO, shutter speed and white balance; the camera is also capable of capturing images in RAW format. The rear and front cameras can record videos at 1080p. The device also features a fingerprint sensor integrated with the home button which can be used to unlock the phone. It also adds support for NFC, though this is restricted to Android Pay, a digital wallet platform developed by Google to power in-app and tap-to-pay purchases on mobile devices. The phone is available in opal silver, deep garnet, topaz gold and carbon grey color finishes. In January 2016, HTC launched a pink color variant of the device for sale in Taiwan. Software The device is pre-installed with a customized version of Android 6.0 Marshmallow along with a lighter version of Sense 7 as the user interface, known as Sense 7G, which retains much of the stock Android experience. Unlike the Sense 6 and 7 used on other devices, Sense 7G adopts Material Design as the default color scheme, and the stock notification shade and recent-apps menu are used instead of HTC's own designs. The color schemes, icons, sounds, and fonts throughout the operating system can be customized using HTC Themes, where users can create their own themes or download additional ones. It is also the first non-Nexus device to come pre-installed with Android Marshmallow. Pre-loaded applications on the A9 provide access to Google's various services, including Google Play, which can be used to download and purchase apps, music, movies, and e-books. The phone also features HTC's software suite, such as BlinkFeed, Gallery (which can display and edit images in RAW format) and Zoe (which allows users to collaborate on highlight reels), but it no longer includes HTC's Music app and instead comes pre-installed with Google Play Music. The phone utilizes Marshmallow features such as Google Now on Tap, which allows users to perform searches within the context of information currently being displayed in an app; a new power management system known as "Doze" that reduces background activity when a device is not being physically handled; native support for fingerprint recognition; and the ability to migrate data to a microSD card and use it as primary storage, as well as other internal changes. HTC has committed to providing software updates for the unlocked variant of the phone within 15 days of Google releasing the corresponding update for its Nexus devices. It has also stated that users of the unlocked variant can unlock the bootloader without voiding the warranty of the phone. In December 2015, HTC released a maintenance update for the unlocked variant which updates the phone to Android Marshmallow 6.0.1. Android 7.0 Nougat began rolling out to the unlocked HTC One A9 on January 16, 2017. Sound Unlike the One M9, the phone does not feature the "Boomsound" stereo front-facing speakers; instead it uses a mono speaker located on the bottom of the device. The phone features Dolby Surround sound for headphones and can also play high-resolution audio. It includes a digital-to-analog converter (DAC) which upscales audio from 16 bits to 24 bits. Network The unlocked variant of the A9 is the first non-CDMA phone compatible with the Verizon network, a capability enabled by a software update. 
HTC has explained that the phone, when connected to Verizon's network, relies on the phone's LTE radio for making phone calls and sending SMS and MMS, which became possible through the advancement of VoLTE. However, these communication capabilities cannot work if there is no LTE coverage. Variants Reception The phone received mixed reviews from critics, although more favorable than the One M9. Its construction, fingerprint scanner, and software received particular praise; some critics noted the camera as an improvement over other HTC phones. Chris Velazco of Engadget said that it was "not the winner this company [HTC] needs", but praised it for coming with Android 6.0. Andrew Hoyle of CNET said that it "is just fine for a midrange device". Ajay Kumar of PC Magazine praised its construction, describing it as "impeccable". Some critics thought that the phone was overpriced for its feature set. It was also criticized for looking similar to the iPhone 6. Vlad Savov of The Verge was mixed on the phone overall, praising its emphasis on audio and its display, but described it as a "blasphemous concoction of Apple design and Google software." Sales The phone was launched in the United States in November on all major carriers and was also sold as an unlocked device. At launch, the Verizon version of the device was delayed due to compatibility issues and was released in December 2015. In India, the phone was announced in November and launched in December. Following the launch of the device, HTC reported a rise in revenue for November, a six-month high for the company, and a 15% increase in revenue for October. References Notes Nexus devices such as the 6P, 5X, 6, 5 and 9 (tablet). Major carriers such as AT&T, Verizon, T-Mobile and Sprint. External links Android (operating system) devices One (2015) Mobile phones introduced in 2015 Discontinued smartphones
42881069
https://en.wikipedia.org/wiki/Novabench
Novabench
Novabench is a computer benchmarking utility for Microsoft Windows, macOS, and Linux. The program tests the performance of computer components and assigns proprietary scores, with higher scores indicating better performance. An online repository is available where submitted scores can be compared. A user can create an account to keep all of their submitted scores in one place. The tool has been noted for its speed and simplicity. History Microsoft Windows Version 1.0 of Novabench for Windows was released in February 2007. Version 2.0 of Novabench for Windows was released in February 2008. Version 3.0 of Novabench for Windows was released in May 2010. Version 4.0 of Novabench for Windows was released in August 2017. MacOS Version 1.0 of Novabench for macOS was released in January 2011. Version 4.0 of Novabench for macOS was released in August 2017. Linux Version 4.0 of Novabench for Linux was released in October 2018. Limitations Novabench does not take advantage of AMD CrossFireX or SLI during graphics testing on Windows, although dual-GPU testing does work on Mac. It also does not test secondary or additional hard drives; it only tests the system (primary) drive. Windows XP and earlier are no longer supported as of version 3.0.4, and Novabench 4.0 supports only Windows 7 and later. See also Benchmark (computing) References Benchmarks (computing)
12284523
https://en.wikipedia.org/wiki/Free%20Software%20Foundation%20anti-Windows%20campaigns
Free Software Foundation anti-Windows campaigns
Free Software Foundation anti-Windows campaigns are campaigns directed against Microsoft Windows operating systems. They parallel the Defective by Design campaign against digital rights management technologies, but target Microsoft's operating systems rather than DRM itself. BadVista BadVista was a campaign by the Free Software Foundation to oppose adoption of Microsoft Windows Vista and promote free software alternatives. It aimed to encourage the media to make free software part of their agenda. The campaign was initiated on December 15, 2006 with the aims of exposing what the FSF viewed as the harms inflicted on computer users by Microsoft Windows Vista and its embedded digital rights management, and of providing a user-friendly gateway to free software alternatives. BadVista activists teamed up with Defective by Design members at a Vista launch party on January 30, 2007 at Times Square. Protesters in hazmat suits held signs explaining the restrictions Vista may impose on computer users. The campaign ended on January 8, 2009, when "victory" was declared after Microsoft released its Windows 7 Beta. This victory claim was based on the tepid adoption of Vista, with users sticking with the less DRM-infused Windows XP or moving to Mac OS X, which the FSF regarded as less restrictive, or to the largely free Linux or FreeBSD. Only a minority of Linux distributions are recognized as completely free; like kFreeBSD, the vanilla Linux kernel contains binary-blob device drivers, an issue addressed by Linux-libre. Windows 7 Sins In 2009, a campaign targeted at Windows 7 was launched by the Free Software Foundation under the name "Windows 7 Sins". The campaign's site uses graphics from the free software video game XBill. Upgrade from Windows 8 In October 2012, the Free Software Foundation began another campaign called "Upgrade from Windows 8", this time targeted at Windows 8. Windows 10 During the Windows 10 release, the FSF issued a statement urging users to reject it due to its proprietary nature. The Foundation also cited other sources of concern, such as forcing lower-paying customers to test less-secure updates before higher-paying users, Microsoft's implication in the 2013 global surveillance scandal, and the new privacy policy enacted by Windows. See also Defective by Design – an associated anti-digital rights management campaign that also targets Windows XP and higher Hardware restrictions References External links badvista.fsf.org/badvista-declares-victory - official "BadVista" website en.windows7sins.org - official "Windows 7 sins" website fsf.org/windows8 - official "Upgrade from Windows 8" website fsf.org/news/the-fsfs-statement-on-windows-10 - official "The FSFs statement on Windows 10" website Free Software Foundation Intellectual property activism Microsoft criticisms and controversies
5690219
https://en.wikipedia.org/wiki/New%20API
New API
New API (also referred to as NAPI) is an interface to use interrupt mitigation techniques for networking devices in the Linux kernel. Such an approach is intended to reduce the overhead of packet reception. The idea is to defer incoming message handling until there is a sufficient number of messages, so that it is worth handling them all at once. Motivation A straightforward method of implementing a network driver is to interrupt the kernel by issuing an interrupt request (IRQ) for each and every incoming packet. However, servicing IRQs is costly in terms of processor resources and time. Therefore, the straightforward implementation can be very inefficient in high-speed networks, constantly interrupting the kernel with the thousands of packets arriving per second. Overall performance of the system as well as network throughput can suffer as a result. Polling is an alternative to interrupt-based processing. The kernel can periodically check for the arrival of incoming network packets without being interrupted, which eliminates the overhead of interrupt processing. Establishing an optimal polling frequency is important, however. Too-frequent polling wastes CPU resources by repeatedly checking for incoming packets that have not yet arrived. On the other hand, polling too infrequently introduces latency by reducing system reactivity to incoming packets, and it may result in the loss of packets if the incoming packet buffer fills up before being processed. As a compromise, the Linux kernel uses the interrupt-driven mode by default and only switches to polling mode when the flow of incoming packets exceeds a certain threshold, known as the "weight" of the network interface. Compliant drivers A driver using the NAPI interface works as follows: Packet receive interrupts are disabled. The driver provides a poll method to the kernel. That method fetches all the available incoming packets, from the network card or a DMA ring, so that they can then be handled by the kernel. When allowed to, the kernel calls the device's poll method, possibly handling many packets at once. Advantages The load induced by interrupts is reduced even though the kernel has to poll. Packets are less likely to be re-ordered, while out-of-order packet handling might otherwise be a bottleneck. In case the kernel is unable to handle all incoming packets, the kernel does not have to do any work in order to drop them: they are simply overwritten in the network card's incoming ring buffer. Without NAPI, the kernel has to handle every incoming packet regardless of whether there is time to service it, which leads to thrashing. History NAPI was an over-three-year effort by Alexey Kuznetsov, Jamal Hadi Salim and Robert Olsson. The initial effort to include NAPI was met with resistance by some members of the community, but David Miller worked hard to ensure NAPI's inclusion. A lot of real-world testing was done in the Uppsala university network before inclusion. In fact, www.slu.se was the first production NAPI-based deployment and is still powered to this day by NAPI-based Bifrost/Linux routers. The pktgen traffic generator was also born around this time. Pktgen was extensively used to test NAPI scenarios not induced by real-world traffic. References Further reading The classical NAPI paper. External links Early NAPI work NAPI description on Linux Foundation Network overview, November 19, 2009, The Linux Foundation, by Rami Rosen (archived from the original on October 30, 2011) Interfaces of the Linux kernel Linux kernel features Ethernet
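The batching idea described under "Compliant drivers" can be illustrated with a small user-space sketch (conceptual Python, not the kernel's C API; the cost figures and the BUDGET constant are made-up stand-ins for IRQ overhead and the NAPI "weight"):

# Conceptual model of interrupt mitigation: one IRQ per packet versus a
# NAPI-style scheme that takes one IRQ, then polls in batches.
IRQ_COST = 10      # arbitrary units per interrupt serviced
POLL_COST = 1      # arbitrary units per poll invocation
PACKET_COST = 1    # arbitrary units per packet processed
BUDGET = 64        # packets handled per poll, akin to the NAPI weight

def irq_per_packet(n_packets: int) -> int:
    return n_packets * (IRQ_COST + PACKET_COST)

def napi_style(n_packets: int) -> int:
    # One interrupt disables further receive IRQs; the kernel then calls
    # the driver's poll method repeatedly until the queue is drained.
    polls = -(-n_packets // BUDGET)  # ceiling division
    return IRQ_COST + polls * POLL_COST + n_packets * PACKET_COST

for n in (1, 100, 10_000):
    print(n, irq_per_packet(n), napi_style(n))

At one packet the two schemes cost about the same; at thousands of packets per second the per-interrupt overhead dominates the first scheme, which is exactly the regime where NAPI switches to polling.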
29048
https://en.wikipedia.org/wiki/Single-sideband%20modulation
Single-sideband modulation
In radio communications, single-sideband modulation (SSB) or single-sideband suppressed-carrier modulation (SSB-SC) is a type of modulation used to transmit information, such as an audio signal, by radio waves. A refinement of amplitude modulation, it uses transmitter power and bandwidth more efficiently. Amplitude modulation produces an output signal the bandwidth of which is twice the maximum frequency of the original baseband signal. Single-sideband modulation avoids this bandwidth increase, and the power wasted on a carrier, at the cost of increased device complexity and more difficult tuning at the receiver. Basic concept Radio transmitters work by mixing a radio frequency (RF) signal of a specific frequency, the carrier wave, with the audio signal to be broadcast. In AM transmitters this mixing usually takes place in the final RF amplifier (high-level modulation). It is less common and much less efficient to do the mixing at low power and then amplify it in a linear amplifier. Either method produces a set of frequencies with a strong signal at the carrier frequency and with weaker signals at frequencies extending above and below the carrier frequency by the maximum frequency of the input signal. Thus the resulting signal has a spectrum whose bandwidth is twice the maximum frequency of the original input audio signal. SSB takes advantage of the fact that the entire original signal is encoded in each of these "sidebands". It is not necessary to transmit both sidebands plus the carrier, as a suitable receiver can extract the entire original signal from either the upper or lower sideband. There are several methods for eliminating the carrier and one sideband from the transmitted signal. Producing this single sideband signal is too complicated to be done in the final amplifier stage as with AM. SSB modulation must be done at a low level and amplified in a linear amplifier, where lower efficiency partially offsets the power advantage gained by eliminating the carrier and one sideband. Nevertheless, SSB transmissions use the available amplifier energy considerably more efficiently, providing longer-range transmission for the same power output. In addition, the occupied spectrum is less than half that of a full carrier AM signal. SSB reception requires frequency stability and selectivity well beyond that of inexpensive AM receivers, which is why broadcasters have seldom used it. In point-to-point communications, where expensive receivers are already in common use, they can successfully be adjusted to receive whichever sideband is being transmitted. History The first U.S. patent application for SSB modulation was filed on December 1, 1915 by John Renshaw Carson. The U.S. Navy experimented with SSB over its radio circuits before World War I. SSB first entered commercial service on January 7, 1927, on the longwave transatlantic public radiotelephone circuit between New York and London. The high-power SSB transmitters were located at Rocky Point, New York, and Rugby, England. The receivers were in very quiet locations in Houlton, Maine, and Cupar, Scotland. SSB was also used over long distance telephone lines, as part of a technique known as frequency-division multiplexing (FDM). FDM was pioneered by telephone companies in the 1930s. With this technology, many simultaneous voice channels could be transmitted on a single physical circuit, for example in L-carrier. With SSB, channels could be spaced (usually) only 4,000 Hz apart, while offering a speech bandwidth of nominally 300 Hz to 3,400 Hz. 
Amateur radio operators began serious experimentation with SSB after World War II. The Strategic Air Command established SSB as the radio standard for its aircraft in 1957. It has become a de facto standard for long-distance voice radio transmissions since then. Mathematical formulation Single-sideband has the mathematical form of quadrature amplitude modulation (QAM) in the special case where one of the baseband waveforms is derived from the other, instead of being independent messages: s_{\mathrm{ssb}}(t) = m(t)\cos(2\pi f_0 t) - \hat{m}(t)\sin(2\pi f_0 t), where m(t) is the message (real-valued), \hat{m}(t) is its Hilbert transform, and f_0 is the radio carrier frequency. To understand this formula, we may express m(t) as the real part of a complex-valued function, with no loss of information: m(t) = \mathrm{Re}\{m_a(t)\}, where m_a(t) = m(t) + j\,\hat{m}(t) and j represents the imaginary unit. m_a(t) is the analytic representation of m(t), which means that it comprises only the positive-frequency components of m(t): M_a(f) = M(f) + j\,\hat{M}(f) = 2\,u(f)\,M(f), where M(f) and \hat{M}(f) are the respective Fourier transforms of m(t) and \hat{m}(t), and u(f) is the unit step function. Therefore, the frequency-translated function M_a(f - f_0) contains only one side of M(f). Since it also has only positive-frequency components, its inverse Fourier transform, m_a(t)\,e^{j 2\pi f_0 t}, is the analytic representation of s_{\mathrm{ssb}}(t), and again the real part of this expression causes no loss of information: s_{\mathrm{ssb}}(t) = \mathrm{Re}\{m_a(t)\,e^{j 2\pi f_0 t}\}. With Euler's formula to expand e^{j 2\pi f_0 t}, we obtain: s_{\mathrm{ssb}}(t) = m(t)\cos(2\pi f_0 t) - \hat{m}(t)\sin(2\pi f_0 t). Coherent demodulation of s_{\mathrm{ssb}}(t) to recover m(t) is the same as for AM: multiply by \cos(2\pi f_0 t) and lowpass filter to remove the "double-frequency" components around frequency 2 f_0. If the demodulating carrier is not in the correct phase (cosine phase here), then the demodulated signal will be some linear combination of m(t) and \hat{m}(t), which is usually acceptable in voice communications (if the demodulation carrier frequency is not quite right, the phase will be drifting cyclically, which again is usually acceptable in voice communications if the frequency error is small enough, and amateur radio operators are sometimes tolerant of even larger frequency errors that cause unnatural-sounding pitch shifting effects). The lower sideband can also be obtained: m(t) is likewise the real part of the complex conjugate, m_a^*(t) = m(t) - j\,\hat{m}(t), which represents the negative-frequency portion of M(f). When f_0 is large enough that the translated spectrum has no negative frequencies, the product m_a^*(t)\,e^{j 2\pi f_0 t} is another analytic signal, whose real part is the actual lower-sideband transmission: s_{\mathrm{lsb}}(t) = \mathrm{Re}\{m_a^*(t)\,e^{j 2\pi f_0 t}\} = m(t)\cos(2\pi f_0 t) + \hat{m}(t)\sin(2\pi f_0 t). The sum of the two sideband signals is: s_{\mathrm{ssb}}(t) + s_{\mathrm{lsb}}(t) = 2\,m(t)\cos(2\pi f_0 t), which is the classic model of suppressed-carrier double-sideband AM. Practical implementations Bandpass filtering One method of producing an SSB signal is to remove one of the sidebands via filtering, leaving only either the upper sideband (USB), the sideband with the higher frequency, or less commonly the lower sideband (LSB), the sideband with the lower frequency. Most often, the carrier is reduced or removed entirely (suppressed), being referred to in full as single sideband suppressed carrier (SSBSC). Assuming both sidebands are symmetric, which is the case for a normal AM signal, no information is lost in the process. Since the final RF amplification is now concentrated in a single sideband, the effective power output is greater than in normal AM (the carrier and redundant sideband account for well over half of the power output of an AM transmitter). Though SSB uses substantially less bandwidth and power, it cannot be demodulated by a simple envelope detector like standard AM. Hartley modulator An alternate method of generation known as a Hartley modulator, named after R. V. L. Hartley, uses phasing to suppress the unwanted sideband. 
To generate an SSB signal with this method, two versions of the original signal are generated, mutually 90° out of phase for any single frequency within the operating bandwidth. Each one of these signals then modulates carrier waves (of one frequency) that are also 90° out of phase with each other. By either adding or subtracting the resulting signals, a lower or upper sideband signal results. A benefit of this approach is to allow an analytical expression for SSB signals, which can be used to understand effects such as synchronous detection of SSB. Shifting the baseband signal 90° out of phase cannot be done simply by delaying it, as it contains a large range of frequencies. In analog circuits, a wideband 90-degree phase-difference network is used. The method was popular in the days of vacuum tube radios, but later gained a bad reputation due to poorly adjusted commercial implementations. Modulation using this method is again gaining popularity in the homebrew and DSP fields. This method, utilizing the Hilbert transform to phase shift the baseband audio, can be done at low cost with digital circuitry. Weaver modulator Another variation, the Weaver modulator, uses only lowpass filters and quadrature mixers, and is a favored method in digital implementations. In Weaver's method, the band of interest is first translated to be centered at zero, conceptually by modulating a complex exponential with frequency in the middle of the voiceband, but implemented by a quadrature pair of sine and cosine modulators at that frequency (e.g. 2 kHz). This complex signal or pair of real signals is then lowpass filtered to remove the undesired sideband that is not centered at zero. Then, the single-sideband complex signal centered at zero is upconverted to a real signal, by another pair of quadrature mixers, to the desired center frequency. Full, reduced, and suppressed-carrier SSB Conventional amplitude-modulated signals can be considered wasteful of power and bandwidth because they contain a carrier signal and two identical sidebands. Therefore, SSB transmitters are generally designed to minimize the amplitude of the carrier signal. When the carrier is removed from the transmitted signal, it is called suppressed-carrier SSB. However, in order for a receiver to reproduce the transmitted audio without distortion, it must be tuned to exactly the same frequency as the transmitter. Since this is difficult to achieve in practice, SSB transmissions can sound unnatural, and if the error in frequency is great enough, it can cause poor intelligibility. In order to correct this, a small amount of the original carrier signal can be transmitted so that receivers with the necessary circuitry to synchronize with the transmitted carrier can correctly demodulate the audio. This mode of transmission is called reduced-carrier single-sideband. In other cases, it may be desirable to maintain some degree of compatibility with simple AM receivers, while still reducing the signal's bandwidth. This can be accomplished by transmitting single-sideband with a normal or slightly reduced carrier. This mode is called compatible (or full-carrier) SSB or amplitude modulation equivalent (AME). In typical AME systems, harmonic distortion can reach 25%, and intermodulation distortion can be much higher than normal, but minimizing distortion in receivers with envelope detectors is generally considered less important than allowing them to produce intelligible audio. 
A second, and perhaps more correct, definition of "compatible single sideband" (CSSB) refers to a form of amplitude and phase modulation in which the carrier is transmitted along with a series of sidebands that are predominantly above or below the carrier term. Since phase modulation is present in the generation of the signal, energy is removed from the carrier term and redistributed into the sideband structure, similar to what occurs in analog frequency modulation. The signals feeding the phase modulator and the envelope modulator are further phase-shifted by 90° with respect to each other. This places the information terms in quadrature with each other; the Hilbert transform of the information to be transmitted is utilized to cause constructive addition of one sideband and cancellation of the opposite primary sideband. Since phase modulation is employed, higher-order terms are also generated. Several methods have been employed to reduce the impact (amplitude) of most of these higher-order terms. In one system, the phase-modulated term is actually the log of the value of the carrier level plus the phase-shifted audio/information term. This produces an ideal CSSB signal, where at low modulation levels only a first-order term on one side of the carrier is predominant. As the modulation level is increased, the carrier level is reduced while a second-order term increases substantially in amplitude. At the point of 100% envelope modulation, 6 dB of power is removed from the carrier term, and the second-order term is identical in amplitude to the carrier term. The first-order sideband has increased in level until it is now at the same level as the formerly unmodulated carrier. At the point of 100% modulation, the spectrum appears identical to a normal double-sideband AM transmission, with the center term (now the primary audio term) at a 0 dB reference level, and both terms on either side of the primary sideband at −6 dB. The difference is that what appears to be the carrier has shifted by the audio-frequency term towards the "sideband in use". At levels below 100% modulation, the sideband structure appears quite asymmetric. When voice is conveyed by a CSSB source of this type, low-frequency components are dominant, while higher-frequency terms are lower by as much as 20 dB at 3 kHz. The result is that the signal occupies approximately half the normal bandwidth of a full-carrier DSB signal. There is one catch: the audio term utilized to phase-modulate the carrier is generated based on a log function that is biased by the carrier level. At negative 100% modulation, the term is driven to zero (0), and the modulator becomes undefined. Strict modulation control must be employed to maintain stability of the system and avoid splatter. This system is of Russian origin and was described in the late 1950s. It is uncertain whether it was ever deployed. A second series of approaches was designed and patented by Leonard R. Kahn. The various Kahn systems removed the hard limit imposed by the use of the strict log function in the generation of the signal. Earlier Kahn systems utilized various methods to reduce the second-order term through the insertion of a predistortion component. One example of this method was also used to generate one of the Kahn independent-sideband (ISB) AM stereo signals. It was known as the STR-77 exciter method, having been introduced in 1977. 
Later, the system was further improved by use of an arcsine-based modulator that included a 1-0.52E term in the denominator of the arcsin generator equation. E represents the envelope term; roughly half the modulation term applied to the envelope modulator is utilized to reduce the second-order term of the arcsin "phase"-modulated path, thus reducing the second-order term in the undesired sideband. A multi-loop modulator/demodulator feedback approach was used to generate an accurate arcsin signal. This approach was introduced in 1984 and became known as the STR-84 method. It was sold by Kahn Research Laboratories; later, Kahn Communications, Inc. of NY. An additional audio processing device further improved the sideband structure by selectively applying pre-emphasis to the modulating signals. Since the envelope of all the signals described remains an exact copy of the information applied to the modulator, it can be demodulated without distortion by an envelope detector such as a simple diode. In a practical receiver, some distortion may be present, usually at a low level (in AM broadcast, always below 5%), due to sharp filtering and nonlinear group delay in the IF filters of the receiver, which act to truncate the compatibility sideband – those terms that are not the result of a linear process of simply envelope modulating the signal, as would be the case in full-carrier DSB-AM – and to rotate the phase of these compatibility terms such that they no longer cancel the quadrature distortion term caused by a first-order SSB term along with the carrier. The small amount of distortion caused by this effect is generally quite low and acceptable. The Kahn CSSB method was also briefly used by Airphone as the modulation method employed for early consumer telephone calls that could be placed from an aircraft to ground. This was quickly supplanted by digital modulation methods to achieve even greater spectral efficiency. While CSSB is seldom used today in the AM/MW broadcast bands worldwide, some amateur radio operators still experiment with it. Demodulation The front end of an SSB receiver is similar to that of an AM or FM receiver, consisting of a superheterodyne RF front end that produces a frequency-shifted version of the radio frequency (RF) signal within a standard intermediate frequency (IF) band. To recover the original signal from the IF SSB signal, the single sideband must be frequency-shifted down to its original range of baseband frequencies, by using a product detector which mixes it with the output of a beat frequency oscillator (BFO). In other words, it is just another stage of heterodyning. For this to work, the BFO frequency must be exactly adjusted. If the BFO frequency is off, the output signal will be frequency-shifted (up or down), making speech sound strange and "Donald Duck"-like, or unintelligible. For audio communications, there is a common agreement about a BFO oscillator shift of 1.7 kHz. A voice signal is sensitive to a shift of about 50 Hz, with up to 100 Hz still bearable. Some receivers use a carrier recovery system, which attempts to automatically lock on to the exact IF frequency. Carrier recovery does not eliminate the frequency shift, but it gives a better signal-to-noise ratio at the detector output. As an example, consider an IF SSB signal centered at frequency f_{\mathrm{if}} = 45000 Hz. The baseband frequency it needs to be shifted to is f_b = 2000 Hz. The BFO output waveform is \cos(2\pi f_{\mathrm{bfo}} t). 
When the signal is multiplied by (i.e., heterodyned with) the BFO waveform, it shifts the signal to |f_{\mathrm{if}} - f_{\mathrm{bfo}}| and to f_{\mathrm{if}} + f_{\mathrm{bfo}}, which is known as the beat frequency or image frequency. The objective is to choose an f_{\mathrm{bfo}} that results in |f_{\mathrm{if}} - f_{\mathrm{bfo}}| = 2000 Hz. (The unwanted components at f_{\mathrm{if}} + f_{\mathrm{bfo}} can be removed by a lowpass filter, for which an output transducer or the human ear may serve.) There are two choices for f_{\mathrm{bfo}}: 43000 Hz and 47000 Hz, called low-side and high-side injection. With high-side injection, the spectral components that were distributed around 45000 Hz will be distributed around 2000 Hz in the reverse order, also known as an inverted spectrum. That is in fact desirable when the IF spectrum is also inverted, because the BFO inversion restores the proper relationships. One reason for that is when the IF spectrum is the output of an inverting stage in the receiver. Another reason is when the SSB signal is actually a lower sideband, instead of an upper sideband. But if both reasons are true, then the IF spectrum is not inverted, and the non-inverting BFO (43000 Hz) should be used. If f_{\mathrm{bfo}} is off by a small amount, then the beat frequency is not exactly f_b, which can lead to the speech distortion mentioned earlier. SSB as a speech-scrambling technique SSB techniques can also be adapted to frequency-shift and frequency-invert baseband waveforms (voice inversion). This voice-scrambling method worked by passing audio modulated on one sideband through a receiver using the opposite sideband (e.g. feeding an LSB-modulated audio sample through a radio set to USB demodulation). These effects were used, in conjunction with other filtering techniques, during World War II as a simple method for speech encryption. Radiotelephone conversations between the US and Britain were intercepted and "decrypted" by the Germans; they included some early conversations between Franklin D. Roosevelt and Churchill. In fact, the signals could be understood directly by trained operators. Largely to allow secure communications between Roosevelt and Churchill, the SIGSALY system of digital encryption was devised. Today, such simple inversion-based speech encryption techniques are easily broken and are no longer regarded as secure. Vestigial sideband (VSB) Because single-sideband modulation suits voice signals but not video/TV signals, vestigial sideband is used instead for television. A vestigial sideband (in radio communication) is a sideband that has been only partly cut off or suppressed. Television broadcasts (in analog video formats) use this method if the video is transmitted in AM, due to the large bandwidth used. It may also be used in digital transmission, such as the ATSC standardized 8VSB. The broadcast or transport channel for TV in countries that use NTSC or ATSC has a bandwidth of 6 MHz. To conserve bandwidth, SSB would be desirable, but the video signal has significant low-frequency content (average brightness) and has rectangular synchronising pulses. The engineering compromise is vestigial-sideband transmission. In vestigial sideband, the full upper sideband of bandwidth W_2 = 4.0 MHz is transmitted, but only W_1 = 0.75 MHz of the lower sideband is transmitted, along with a carrier. The carrier frequency is 1.25 MHz above the lower edge of the 6 MHz-wide channel. This effectively makes the system AM at low modulation frequencies and SSB at high modulation frequencies. The absence of the lower-sideband components at high frequencies must be compensated for, and this is done in the IF amplifier. 
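The quadrature formulation and the 45000 Hz / 2000 Hz demodulation example above can be checked numerically. The following sketch (illustrative Python/NumPy; the 43 kHz carrier position is an assumption chosen so that a 2 kHz test tone lands at the 45 kHz IF of the example) generates a USB signal by the phasing (Hilbert-transform) method and recovers it with a product detector at both injection choices:

# Numerical check of SSB generation (phasing method) and product detection.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 400_000                        # sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)
f_c, f_tone = 43_000, 2_000         # suppressed carrier; 2 kHz test tone

m = np.cos(2 * np.pi * f_tone * t)  # message
m_a = hilbert(m)                    # analytic signal m + j*m_hat
# USB: Re{m_a e^{j 2 pi f_c t}} = m*cos(2 pi f_c t) - m_hat*sin(2 pi f_c t);
# the tone lands at f_c + f_tone = 45 kHz, the IF of the example above.
ssb = np.real(m_a * np.exp(2j * np.pi * f_c * t))

b, a = butter(4, 5_000 / (fs / 2))  # lowpass keeps only the ~2 kHz beat

def product_detect(f_bfo):
    # Heterodyne with the BFO, then lowpass: a product detector.
    return filtfilt(b, a, ssb * np.cos(2 * np.pi * f_bfo * t))

def peak_hz(x):
    return np.fft.rfftfreq(len(x), 1 / fs)[np.argmax(np.abs(np.fft.rfft(x)))]

print(peak_hz(product_detect(43_000)))  # low-side injection:  ~2000 Hz
print(peak_hz(product_detect(47_000)))  # high-side injection: ~2000 Hz

With a single tone both injections recover 2000 Hz, since |45000 − 43000| = |45000 − 47000| = 2000; with a band of tones, high-side injection would reproduce them in reverse (inverted) order, as described above.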
Frequencies for LSB and USB in amateur radio voice communication When single-sideband is used in amateur radio voice communications, it is common practice that for frequencies below 10 MHz, lower sideband (LSB) is used and for frequencies of 10 MHz and above, upper sideband (USB) is used. For example, on the 40 m band, voice communications often take place around 7.100 MHz using LSB mode. On the 20 m band at 14.200 MHz, USB mode would be used. An exception to this rule applies to the five discrete amateur channels on the 60-meter band (near 5.3 MHz) where FCC rules specifically require USB. Extended single sideband (eSSB) Extended single sideband is any J3E (SSB-SC) mode that exceeds the audio bandwidth of standard or traditional 2.9 kHz SSB J3E modes (ITU 2K90J3E) to support higher-quality sound. Amplitude-companded single-sideband modulation (ACSSB) Amplitude-companded single sideband (ACSSB) is a narrowband modulation method using a single sideband with a pilot tone, allowing an expander in the receiver to restore the amplitude that was severely compressed by the transmitter. It offers improved effective range over standard SSB modulation while simultaneously retaining backwards compatibility with standard SSB radios. ACSSB also offers reduced bandwidth and improved range for a given power level compared with narrow band FM modulation. Controlled-envelope single-sideband modulation (CESSB) The generation of standard SSB modulation results in large envelope overshoots well above the average envelope level for a sinusoidal tone (even when the audio signal is peak-limited). The standard SSB envelope peaks are due to truncation of the spectrum and nonlinear phase distortion from the approximation errors of the practical implementation of the required Hilbert transform. It was recently shown that suitable overshoot compensation (so-called controlled-envelope single-sideband modulation or CESSB) achieves about 3.8 dB of peak reduction for speech transmission. This results in an effective average power increase of about 140%. Although the generation of the CESSB signal can be integrated into the SSB modulator, it is feasible to separate the generation of the CESSB signal (e.g. in form of an external speech preprocessor) from a standard SSB radio. This requires that the standard SSB radio's modulator be linear-phase and have a sufficient bandwidth to pass the CESSB signal. If a standard SSB modulator meets these requirements, then the envelope control by the CESSB process is preserved. ITU designations In 1982, the International Telecommunication Union (ITU) designated the types of amplitude modulation: See also ACSSB, amplitude-companded single sideband Independent sideband Modulation for other examples of modulation techniques Sideband for more general information about a sideband References Sources Partly from Federal Standard 1037C in support of MIL-STD-188 Further reading Sgrignoli, G., W. Bretl, and R. Citta. (1995). "VSB modulation used for terrestrial and cable broadcasts." IEEE Transactions on Consumer Electronics. v. 41, issue 3, p. 367 - 382. J. Brittain (1992). "Scanning the past: Ralph V.L. Hartley", Proc. IEEE, vol. 80, p. 463. eSSB - Extended Single Sideband Radio modulation modes Electronic design
487103
https://en.wikipedia.org/wiki/Up2date
Up2date
up2date, also known as the Red Hat Update Agent, is a tool used by older versions of Red Hat Enterprise Linux, CentOS and Fedora Core that downloads and installs new software and upgrades the operating system. It functions as a front-end to the RPM Package Manager and adds advanced features such as automatic dependency resolution. The file specifies where up2date will search for packages. Tool By default, Red Hat Enterprise Linux's up2date retrieves packages from a Red Hat Network (RHN) server, though users can add directories full of packages or even Debian and yum repositories if they wish. up2date on Fedora Core defaults to retrieving packages from yum repositories. Again, other sources can be added (apart from RHN, which is Red Hat Enterprise Linux-specific). As of Fedora Core 5 and Red Hat Enterprise Linux 5, up2date is no longer shipped with the distribution; yum is used instead. CentOS's up2date downloads packages from yum repositories on the CentOS Mirror Network. See also Package management system References External links up2date at redhat.com yum/up2date transitioning from up2date to yum Free package management systems Linux package management-related software Red Hat software
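As an illustration of the source-list mechanism described above, a source file might look like the sketch below (the path /etc/sysconfig/rhn/sources, the entry formats, and the repository labels and URLs here are assumptions for illustration based on old Fedora Core conventions, not taken from this article):

# Hypothetical up2date source list, e.g. /etc/sysconfig/rhn/sources.
# Keep the default RHN/up2date source:
up2date default
# A yum repository (source type, label, base URL):
yum fedora-core-example http://mirror.example.org/fedora/core/3/i386/os/
# A local directory full of RPM packages:
dir local-extras /var/spool/local-rpms

Each non-comment line names a source type, a label, and a location; up2date considers all listed sources when resolving packages and their dependencies.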
22624363
https://en.wikipedia.org/wiki/Lucidworks
Lucidworks
Lucidworks is a San Francisco, California-based company that specializes in commerce, customer service, and workplace applications. Lucidworks was founded in 2007 under the name Lucid Imagination and launched in 2009. The company was renamed Lucidworks in 2012. The Lucidworks founding technical team consisted of Marc Krellenstein, Grant Ingersoll, Erik Hatcher, and Yonik Seeley, in addition to advisor Doug Cutting. Will Hayes is Lucidworks' CEO. Business model Lucidworks operates primarily on a subscription-based business model built around its Lucidworks Fusion platform for designing, building, and deploying big data applications. Lucidworks also offers subscriptions for the support, training, and integration services that help customers in using open-source search software. On September 18, 2014, Lucidworks released Lucidworks Fusion, a platform for building search and discovery applications that includes the popular search technology Apache Solr and the computation framework Apache Spark in its core. On May 10, 2017, Lucidworks announced its acquisition of Twigkit, a software company specializing in user experiences for search and big data applications, which was later integrated into the Fusion platform as Fusion App Studio. In September 2017, Reddit teamed with Lucidworks to build its new search application. On March 6, 2018, Lucidworks released Lucidworks Site Search, an embeddable, easy-to-configure, out-of-the-box site search solution that runs in the cloud or on premises. Funding The company received Series A funding from Basis Technology, Granite Ventures, and Walden International in September 2008; In-Q-Tel is a strategic investor. In August 2014, Lucidworks closed an $8 million Series C round with Shasta Ventures, Basis Technology, Granite Ventures and Walden International participating. In November 2015, Lucidworks closed a $21 million Series D round with Allegis Capital, and existing investors Shasta Ventures and Granite Ventures participating. In May 2018, the firm announced a $50 million Series E led by Top Tier Capital Partners, with participation from Silver Lake's growth capital fund, Silver Lake Waterman. In August 2019, Lucidworks announced $100M in Series F funding led by Francisco Partners and TPG Sixth Street Partners, with existing investors Top Tier Capital Partners, Shasta Ventures, Granite Ventures and Allegis participating. Awards Finalist for the 2010 Red Herring 100 North America Award Lucid Imagination Congratulates Apache Solr on InfoWorld Bossie Award Win Winner, InfoWorld 2017 Technology of the Year References External links Official website American companies established in 2007 Search engine software Software companies based in the San Francisco Bay Area Information retrieval organizations Software companies of the United States 2007 establishments in California Software companies established in 2007
46299
https://en.wikipedia.org/wiki/Menelaus
Menelaus
In Greek mythology, Menelaus ('wrath of the people') was a king of Mycenaean (pre-Dorian) Sparta. According to the Iliad, Menelaus was a central figure in the Trojan War, leading the Spartan contingent of the Greek army, under his elder brother Agamemnon, king of Mycenae. Prominent in both the Iliad and Odyssey, Menelaus was also popular in Greek vase painting and Greek tragedy, the latter more as a hero of the Trojan War than as a member of the doomed House of Atreus. Family Menelaus was a descendant of Pelops, son of Tantalus. He was the younger brother of Agamemnon, and the husband of Helen of Troy. According to the usual version of the story, followed by the Iliad and Odyssey of Homer, Agamemnon and Menelaus were the sons of Atreus, king of Mycenae, and Aerope, daughter of the Cretan king Catreus. However, according to another tradition, Agamemnon and Menelaus were the sons of Atreus' son Pleisthenes, with their mother being Aerope, Cleolla, or Eriphyle. According to this tradition Pleisthenes died young, with Agamemnon and Menelaus being raised by Atreus. Agamemnon and Menelaus had a sister Anaxibia (or Astyoche) who married Strophius, the son of Crisus. According to the Odyssey, Menelaus had only one child by Helen, a daughter Hermione, and an illegitimate son Megapenthes by a slave. Other sources mention other sons of Menelaus by either Helen, or slaves. A scholiast on Sophocles' Electra quotes Hesiod as saying that after Hermione, Helen also bore Menelaus a son Nicostratus, while according to a Cypria fragment, Menelaus and Helen had a son Pleisthenes. The mythographer Apollodorus tells us that Megapenthes' mother was a slave "Pieris, an Aetolian, or, according to Acusilaus, ... Tereis", and that Menelaus had another illegitimate son Xenodamas by another slave girl Cnossia, while according to the geographer Pausanias, Megapenthes and Nicostratus were sons of Menelaus by a slave. The scholiast on Iliad 3.175 mentions Nicostratus and Aethiolas as two sons of Helen (by Menelaus?) worshipped by the Lacedaemonians, and another son of Helen by Menelaus, Maraphius, from whom descended the Persian Maraphions. Mythology Ascension and reign Although early authors, such as Aeschylus, refer in passing to Menelaus' early life, detailed sources are quite late, post-dating 5th-century BC Greek tragedy. According to these sources, Menelaus' father, Atreus, had been feuding with his brother Thyestes over the throne of Mycenae. After a back-and-forth struggle that featured adultery, incest, and cannibalism, Thyestes gained the throne after his son Aegisthus murdered Atreus. As a result, Atreus' sons, Menelaus and Agamemnon, went into exile. They first stayed with King Polypheides of Sicyon, and later with King Oeneus of Calydon. But when they thought the time was ripe to dethrone Mycenae's hostile ruler, they returned. Assisted by King Tyndareus of Sparta, they drove Thyestes away, and Agamemnon took the throne for himself. When it was time for Tyndareus' stepdaughter Helen to marry, many kings and princes came to seek her hand. Among the contenders were Odysseus, Menestheus, Ajax the Great, Patroclus, and Idomeneus. Most offered opulent gifts. Tyndareus would accept none of the gifts, nor would he send any of the suitors away for fear of offending them and giving grounds for a quarrel. Odysseus promised to solve the problem in a satisfactory manner if Tyndareus would support him in his courting of Tyndareus's niece Penelope, the daughter of Icarius. 
Tyndareus readily agreed, and Odysseus proposed that, before the decision was made, all the suitors should swear a most solemn oath to defend the chosen husband in any quarrel. Then it was decreed that straws were to be drawn for Helen's hand. The suitor who won was Menelaus (Tyndareus, not to displease the mighty Agamemnon, offered him another of his daughters, Clytaemnestra). The rest of the suitors swore their oaths, and Helen and Menelaus were married, Menelaus becoming a ruler of Sparta with Helen after Tyndareus and Leda abdicated the thrones. Their supposed palace (ἀνάκτορον) has been discovered (the excavations started in 1926 and continued until 1995) in Pellana, Laconia, to the north-west of modern (and classical) Sparta. Other archaeologists consider that Pellana is too far away from other Mycenaean centres to have been the "capital of Menelaus". Trojan War According to legend, in return for awarding her a golden apple inscribed "to the fairest," Aphrodite promised Paris the most beautiful woman in all the world. Toward the end of a diplomatic mission to Sparta, while Menelaus was absent attending the funeral of his maternal grandfather Catreus in Crete, Paris ran off to Troy with Helen despite his brother Hector's prohibition. Invoking the oath of Tyndareus, Menelaus and Agamemnon raised a fleet of a thousand ships and went to Troy to secure Helen's return; the Trojans refused, providing a casus belli for the Trojan War. Homer's Iliad is the most comprehensive source for Menelaus's exploits during the Trojan War. In Book 3, Menelaus challenges Paris to a duel for Helen's return. Menelaus soundly beats Paris, but before he can kill him and claim victory, Aphrodite spirits Paris away inside the walls of Troy. In Book 4, while the Greeks and Trojans squabble over the duel's winner, Athena inspires the Trojan Pandarus to shoot Menelaus with his bow and arrow. However, Athena never intended for Menelaus to die and she protects him from the arrow of Pandarus. Menelaus is wounded in the abdomen, and the fighting resumes. Later, in Book 17, Homer gives Menelaus an extended aristeia as the hero retrieves the corpse of Patroclus from the battlefield. According to Hyginus, Menelaus killed eight men in the war, and was one of the Greeks hidden inside the Trojan Horse. During the sack of Troy, Menelaus killed Deiphobus, who had married Helen after the death of Paris. There are four versions of Menelaus' and Helen's reunion on the night of the sack of Troy: Menelaus sought out Helen in the conquered city. Raging at her infidelity, he raised his sword to kill her, but as he saw her weeping at his feet, begging for her life, Menelaus' wrath instantly left him. He took pity on her and decided to take her back as his wife. Menelaus resolved to kill Helen, but her irresistible beauty prompted him to drop his sword and take her back to his ship "to punish her at Sparta", as he claimed. According to the Bibliotheca, Menelaus raised his sword in front of the temple in the central square of Troy to kill her, but his wrath went away when he saw her rending her clothes in anguish, revealing her naked breasts. A similar version by Stesichorus in "Ilion's Conquest" narrated that Menelaus surrendered her to his soldiers to stone her to death, but when she ripped the front of her robes, the Achaean warriors were stunned by her beauty and the stones fell harmlessly from their hands as they stared at her. 
After the war Book 4 of the Odyssey provides an account of Menelaus' return from Troy and his homelife in Sparta. When visited by Odysseus' son Telemachus, Menelaus recounts his voyage home. As happened to many Greeks, Menelaus' homebound fleet was blown by storms to Crete and Egypt where they were becalmed, unable to sail away. They trapped Proteus and forced him to reveal how to make the voyage home. After their homecoming, Menelaus and Helen's marriage is strained; Menelaus continually revisits the losses of the Trojan War, particularly as he and Helen have no male heir. Menelaus is fond of Megapenthes and Nicostratus, his sons by slave women. According to Euripides' Helen, Menelaus is reunited with Helen after death, on the Isle of the Blessed. In vase painting Menelaus appears in Greek vase painting in the 6th to 4th centuries BC, such as: Menelaus's reception of Paris at Sparta; his retrieval of Patroclus's corpse; and his reunion with Helen. In Greek tragedy Menelaus appears as a character in a number of 5th-century Greek tragedies: Sophocles' Ajax, and Euripides' Andromache, Helen, Orestes, Iphigenia at Aulis, and The Trojan Women. See also 1647 Menelaus, Jovian asteroid USS Menelaus (ARL-13) Menelaus (lunar crater) Notes References Apollodorus, Apollodorus, The Library, with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1921. Online version at the Perseus Digital Library. Collard, Christopher and Martin Cropp (2008a), Euripides Fragments: Aegeus–Meleanger, Loeb Classical Library No. 504, Cambridge, Massachusetts, Harvard University Press, 2008. . Online version at Harvard University Press. Collard, Christopher and Martin Cropp (2008b), Euripides Fragments: Oedipus-Chrysippus: Other Fragments, Loeb Classical Library No. 506, Cambridge, Massachusetts, Harvard University Press, 2008. . Online version at Harvard University Press. Dictys Cretensis, The Trojan War. The Chronicles of Dictys of Crete and Dares the Phrygian, translated by R. M. Frazer (Jr.). Indiana University Press. 1966. Euripides, Andromache in Euripides: Children of Heracles. Hippolytus. Andromache. Hecuba, edited and translated by David Kovacs, Loeb Classical Library No. 484. Cambridge, Massachusetts, Harvard University Press, 1995. . Online version at Harvard University Press. Euripides, Helen, translated by E. P. Coleridge in The Complete Greek Drama, edited by Whitney J. Oates and Eugene O'Neill, Jr. Volume 2. New York. Random House. 1938. Online version at the Perseus Digital Library. Euripides, Iphigenia in Tauris, translated by Robert Potter in The Complete Greek Drama, edited by Whitney J. Oates and Eugene O'Neill, Jr. Volume 2. New York. Random House. 1938. Online version at the Perseus Digital Library. Euripides, Orestes, translated by E. P. Coleridge in The Complete Greek Drama, edited by Whitney J. Oates and Eugene O'Neill, Jr. Volume 1. New York. Random House. 1938. Online version at the Perseus Digital Library. Fowler, R. L., Early Greek Mythography: Volume 2: Commentary, Oxford University Press, 2013. . Gantz, Timothy, Early Greek Myth: A Guide to Literary and Artistic Sources, Johns Hopkins University Press, 1996, Two volumes: (Vol. 1), (Vol. 2). Grimal, Pierre, The Dictionary of Classical Mythology, Wiley-Blackwell, 1996. . Hard, Robin, The Routledge Handbook of Greek Mythology: Based on H.J. Rose's "Handbook of Greek Mythology", Psychology Press, 2004, . Google Books. 
Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library. Homer, The Odyssey with an English Translation by A.T. Murray, PH.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1919. Online version at the Perseus Digital Library. Hyginus, Gaius Julius, Fabulae, in The Myths of Hyginus, edited and translated by Mary A. Grant, Lawrence: University of Kansas Press, 1960. Online version at ToposText. Most, G.W., Hesiod: The Shield, Catalogue of Women, Other Fragments, Loeb Classical Library, No. 503, Cambridge, Massachusetts, Harvard University Press, 2007, 2018. . Online version at Harvard University Press. Parada, Carlos, Genealogical Guide to Greek Mythology, Jonsered, Paul Åströms Förlag, 1993. . Pausanias, Pausanias Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1918. Online version at the Perseus Digital Library. Tzetzes, John, Allegories of the Iliad translated by Goldwyn, Adam J. and Kokkini, Dimitra. Dumbarton Oaks Medieval Library, Harvard University Press, 2015. Sophocles, The Ajax of Sophocles. Edited with introduction and notes by Sir Richard Jebb. Cambridge. Cambridge University Press. 1893 Online version at the Perseus Digital Library. Sophocles, Electra in Sophocles. Ajax. Electra. Oedipus Tyrannus, Edited and translated by Hugh Lloyd-Jones, Loeb Classical Library No. 20, Cambridge, Massachusetts, Harvard University Press, 1994. . Online version at Harvard University Press. West, M. L., Greek Epic Fragments: From the Seventh to the Fifth Centuries BC, edited and translated by Martin L. West, Loeb Classical Library No. 497, Cambridge, Massachusetts, Harvard University Press, 2003. . Online version at Harvard University Press. External links Achaean Leaders Greek mythological heroes Mythological kings of Sparta Kings in Greek mythology Characters in the Odyssey Characters in Greek mythology Laconian mythology
36627659
https://en.wikipedia.org/wiki/MuleSoft
MuleSoft
MuleSoft, LLC. is a software company headquartered in San Francisco, California, that provides integration software for connecting applications, data and devices. Founded in 2006, the company provides the Anypoint Platform of integration products, designed to integrate software as a service (SaaS), on-premises software, legacy systems and other platforms. On May 2, 2018, Salesforce acquired MuleSoft for $6.5 billion in a cash and stock deal. History Ross Mason and Dave Rosenberg founded MuleSource in 2006. The "mule" in the name comes from the drudgery, or "donkey work," of data integration that the platform was created to escape. The company changed the name to MuleSoft in 2009. The company originally provided middleware and messaging, and later expanded to provide an integration platform as a service (iPaaS) approach for companies through its main product, Anypoint Platform. In April 2013, the startup announced $37 million in Series E financing in a round led by New Enterprise Associates, with participation from new strategic investor Salesforce.com, and existing investors Hummer Winblad Venture Partners, Morgenthaler Ventures, Lightspeed Venture Partners, Meritech Capital Partners, Sapphire Ventures (formerly SAP Ventures) and Bay Partners. The round brought MuleSoft's total financing, over the course of seven funding rounds, to $259 million. In April 2013, MuleSoft acquired ProgrammableWeb, a website used by developers to help build web, mobile and other connected applications through APIs. In 2016, MuleSoft was ranked #20 on the Forbes Cloud 100 list. In February 2017, the company filed for an IPO and began trading on the New York Stock Exchange on March 17, 2017. In March 2018, Salesforce.com announced it was buying MuleSoft in a deal reported to be worth US$6.5B. In May 2018, Salesforce completed its acquisition of MuleSoft. Products MuleSoft's Anypoint Platform includes various components such as Anypoint Design Center, which allows API developers to design and build APIs; Anypoint Exchange, a library for API providers to share APIs, templates, and assets; and Anypoint Management Center, a centralized web interface to analyze, manage, and monitor APIs and integrations. MuleSoft also offers the Mule runtime engine, a runtime solution for connecting enterprise applications on-premises and to the cloud, designed to eliminate the need for custom point-to-point integration code. Operations As of August 2019, MuleSoft has over 1,400 employees and 1,600 customers. References External links MuleSoft Corporate Site Salesforce Development software companies Extract, transform, load tools 2006 establishments in California Software companies established in 2006 American companies established in 2006 Enterprise application integration Software companies based in the San Francisco Bay Area Cloud applications Cloud computing providers Companies based in San Francisco 2017 initial public offerings Companies formerly listed on the New York Stock Exchange 2018 mergers and acquisitions Software companies of the United States
1965793
https://en.wikipedia.org/wiki/Statistical%20time-division%20multiplexing
Statistical time-division multiplexing
Statistical multiplexing is a type of communication link sharing, very similar to dynamic bandwidth allocation (DBA). In statistical multiplexing, a communication channel is divided into an arbitrary number of variable bitrate digital channels or data streams. The link sharing is adapted to the instantaneous traffic demands of the data streams that are transferred over each channel. This is an alternative to creating a fixed sharing of a link, such as in general time division multiplexing (TDM) and frequency division multiplexing (FDM). When performed correctly, statistical multiplexing can provide a link utilization improvement, called the statistical multiplexing gain. Statistical multiplexing is facilitated through packet mode or packet-oriented communication, which among other applications is utilized in packet-switched computer networks. Each stream is divided into packets that normally are delivered asynchronously in a first-come first-served fashion. Alternatively, the packets may be delivered according to some scheduling discipline for fair queuing or differentiated and/or guaranteed quality of service. Statistical multiplexing of an analog channel, for example a wireless channel, is also facilitated through the following schemes: Random frequency-hopping orthogonal frequency division multiple access (RFH-OFDMA) Code-division multiple access (CDMA), where different numbers of spreading codes or spreading factors can be assigned to different users. Statistical multiplexing normally implies "on-demand" service rather than one that preallocates resources for each data stream. Statistical multiplexing schemes do not control user data transmissions. Comparison with static TDM Time domain statistical multiplexing (packet mode communication) is similar to time-division multiplexing (TDM), except that, rather than assigning a data stream to the same recurrent time slot in every TDM frame, each data stream is assigned time slots (of fixed length) or data frames (of variable lengths) that often appear to be scheduled in a randomized order, and experience varying delay (while the delay is fixed in TDM). Statistical multiplexing allows the bandwidth to be divided arbitrarily among a variable number of channels (while the number of channels and the channel data rate are fixed in TDM). Statistical multiplexing ensures that slots will not be wasted (whereas TDM can waste slots). The transmission capacity of the link will be shared by only those users who have packets. Static TDM and other circuit switching are carried out at the physical layer in the OSI model and TCP/IP model, while statistical multiplexing is carried out at the data link layer and above. Channel identification In statistical multiplexing, each packet or frame contains a channel/data stream identification number, or (in the case of datagram communication) complete destination address information. Usage Examples of statistical multiplexing are: The MPEG transport stream for digital TV transmission. Statistical multiplexing is used to allow several video, audio and data streams of different data rates to be transmitted over a bandwidth-limited channel (see Statistical multiplexer). The packets have constant lengths. The channel number is denoted the packet identifier (PID). The UDP and TCP protocols, where data streams from several application processes are multiplexed together. The packets may have varying lengths. The port numbers constitute channel identification numbers (and also address information). 
The X.25 and Frame Relay packet-switching protocols, where the packets have varying lengths, and the channel number is denoted the virtual connection identifier (VCI). The international collection of X.25 providers, using the X.25 protocol suite, was colloquially known as "the Packet switched network" in the 1980s and into the beginning of the 1990s. The Asynchronous Transfer Mode packet-switched protocol, where the packets have fixed length. The channel identification number consists of a virtual channel identifier (VCI) and a virtual path identifier (VPI). Statistical multiplexer In digital audio and video broadcasting, for example, a statistical multiplexer is a content-aggregating device that allows broadcasters to provide the greatest number of audio or video services for a given bandwidth by sharing a pool of fixed bandwidth among multiple services or streams of varying bitrates. The multiplexer allocates to each service the bandwidth required for its real-time needs, so that services with complex scenes receive more bandwidth than services with less complex ones. This bandwidth-sharing technique produces the best video quality at the lowest possible aggregate bandwidth. Examples of statistical multiplexers include the Imagine Communications (RGB Networks) BNPXr product line; the Harmonic Inc. ProStream, Electra and VOS product families; and the Motorola (Terayon) DM6400 and TMIR. See also Data fragmentation Dynamic bandwidth allocation Dynamic TDMA Packet Packet switching External links Example of Statistical Multiplexing (Chart from a real DVB-T multiplex) Multiplexing Packets (information technology) Network scheduling algorithms
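The statistical multiplexing gain described in this entry can be made concrete with a small simulation: static TDM must reserve one slot per source per frame, while a statistical multiplexer only needs enough capacity for the aggregate demand. A minimal sketch under simplifying assumptions (independent bursty sources, fixed-size packets, capacity sized to the observed peak rather than a proper queueing analysis):

import random

random.seed(1)

N_SOURCES = 32          # number of bursty data streams
T_SLOTS = 10_000        # observation period in slots
BURST_PROB = 0.1        # probability a source offers a packet in a slot

# traffic[t][s] == 1 if source s offers a packet in slot t
traffic = [[1 if random.random() < BURST_PROB else 0 for _ in range(N_SOURCES)]
           for _ in range(T_SLOTS)]

# Static TDM must reserve one slot per source per frame.
tdm_capacity = N_SOURCES

# A statistical multiplexer only needs to cover the aggregate demand,
# here approximated by its observed peak.
aggregate = [sum(slot) for slot in traffic]
statmux_capacity = max(aggregate)

print(f"TDM capacity reserved:         {tdm_capacity} packet slots/frame")
print(f"Stat-mux peak demand:          {statmux_capacity} packet slots/frame")
print(f"Statistical multiplexing gain: {tdm_capacity / statmux_capacity:.2f}x")

With these illustrative numbers the shared link needs roughly a third of the TDM reservation, which is the gain; a real dimensioning exercise would trade a small queueing delay or loss probability against even less capacity.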
32706443
https://en.wikipedia.org/wiki/OS/VS
OS/VS
OS/VS may refer to one of a number of IBM operating systems for System/370 and successors characterized by the use of virtual storage (VS): IBM OS/VS1, successor to MFT II, 1972-1984 IBM OS/VS2, successor to MVT, including: OS/VS2 version 1, also known as SVS (Single Virtual Storage), 1972-1974 OS/VS2 version 2, also known as MVS (Multiple Virtual Storage), 1974-1981 OS/VS1 had no follow-on systems. Successor operating systems of OS/VS2 dropped the "OS/VS" tag and became simply "MVS/SE", "MVS/SP", "MVS/XA", etc. See also MVS OS/360 and successors IBM mainframe operating systems
51529239
https://en.wikipedia.org/wiki/Cohesity
Cohesity
Cohesity is an American privately held information technology company headquartered in San Jose, California. The company develops software that allows IT professionals to back up, manage and gain insights from their data across multiple systems or cloud providers. History Cohesity was founded in June 2013 by Mohit Aron, who previously co-founded storage company Nutanix. While still in stealth mode, it closed a Series A funding round of $15M. The company launched publicly in June 2015, introducing a platform designed to consolidate and manage secondary data. In October, Cohesity announced the public launch of its data management products, DataPlatform and DataProtect. As part of coming out of stealth mode, the company announced a Series B funding round of $55M, bringing its total at that point to $70M. In February 2016, the company announced the second generation of DataPlatform and DataProtect. In June, the company launched its 3.0 products, expanding data protection to physical servers. Also by June, the company had raised $70 million in venture funding in two rounds with Google Ventures, Qualcomm Ventures, and Sequoia Capital. On April 4, 2017, Cohesity announced a $90 million Series C funding round, led by GV, the venture capital arm of Google parent Alphabet Inc., and Sequoia Capital. On June 11, 2018, the company announced a Series D funding round of $250 million led by SoftBank Vision Fund. In August, the company introduced a SaaS-based management console called Helios. In February 2019, Cohesity launched its online MarketPlace to sell applications that run on its DataPlatform. In May, the company made its first acquisition by buying Imanis Data, a provider of NoSQL data protection software. In July, the company announced it would be recognizing revenue predominantly from software, transitioning away from recognizing hardware revenue from the sale of backup appliances. Products Cohesity develops software used to consolidate and simplify data management, and includes analytics capabilities. The company's software also addresses the problem of mass data fragmentation that arises as data proliferates across multiple systems or cloud providers. The company's main product, DataPlatform, is hyperconverged software that allows businesses to consolidate a variety of workloads, including backups, archives, test and development, along with analytics data, onto a single cloud-native platform. It works with physical servers as well as virtual machines. The company also develops data management and backup software called DataProtect, which runs on DataPlatform. As of 2019, the most current version was code-named Pegasus. Pegasus v6.3 provides anti-ransomware features including machine learning-based anomaly detection. Through the acquisition of Imanis Data, Cohesity extends backup capabilities to NoSQL workloads, including distributed databases such as MongoDB, Cassandra, Couchbase, and HBase, as well as Hadoop data on Hadoop distributed file system (HDFS) datastores. The company's Helios SaaS management tool provides a dashboard view of a customer's DataPlatform sites, including local and remote; on-premises; and in the public cloud. The company's MarketPlace allows customers to purchase the company's other applications, as well as third-party apps that run on the company's DataPlatform. The company also makes a software development kit (SDK) available for programmers who want to develop their own apps for DataPlatform. 
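Cohesity does not publish the details of the machine learning-based anomaly detection mentioned above, but the general idea behind change-rate anomaly detection in backup systems can be sketched generically. The following toy example (entirely illustrative, not Cohesity's algorithm) flags a backup whose daily change rate is a statistical outlier against recent history:

from statistics import mean, stdev

def is_anomalous(change_rates, latest, z_threshold=3.0):
    """Flag a backup whose data-change rate is far outside recent history.

    change_rates: recent daily change rates (e.g. fraction of blocks modified)
    latest: today's observed change rate
    """
    mu, sigma = mean(change_rates), stdev(change_rates)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

history = [0.02, 0.03, 0.025, 0.02, 0.03, 0.028, 0.022]
print(is_anomalous(history, 0.024))  # False: normal daily churn
print(is_anomalous(history, 0.65))   # True: mass rewrite, possible ransomware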
References External links Official website American companies established in 2013 Companies based in Santa Clara, California Computer storage companies
50353320
https://en.wikipedia.org/wiki/John%20Edmund%20Martineau
John Edmund Martineau
John Edmund Martineau (1904 – 3 June 1982) was an English brewer and brewing executive, who served as President of the Institute of Brewing. Life John Edmund Martineau was born in 1904, the eldest son of Maurice Martineau, of Walsham-le-Willows in Suffolk. In 1936, he married Catherine Makepeace Thackeray (1911–1995), second daughter of William Thackeray Dennis Ritchie (1880–1964), of Woodend House in Marlow, Durham, a descendant of William Makepeace Thackeray. Martineau was the great-grandson of an earlier John Martineau, who became an early part-owner of Whitbread's in 1812, when Whitbread merged with the Martineau Brewery. However, his great-grandfather died in an industrial accident in a yeast vat at the brewery in 1834, and his shares in Whitbread passed to his son, who also took a role in its future management. Martineau was educated at Eton and New College, Oxford, where he completed a classics degree. He worked at Mure's Brewery in Hampstead before joining Whitbread & Co. in 1925; promotion to managing director followed in 1931, making him the fifth member of his family to sit on Whitbread's Board since it took over the family business, Martineau and Bland, in 1812. During the Second World War, he served in the Royal Air Force, rising to the rank of Wing Commander and ending with a posting at the Directorate of War Organisation in the Air Ministry. After the war, he returned to Whitbread's and was responsible for overseeing research and technical affairs, including the re-opening of its laboratory in 1946. Martineau worked closely with the Head Brewer, Bill Lasman, and the pair tried to apply scientific advances to brewing. According to his obituary, Whitbread's Luton brewery "would never have been built in that manner if not for the training and encouragement they gave to the technical staff". In 1950, Martineau joined the Council of the Brewers' Society, an appointment which would last for sixteen years. At the same time, he was appointed Chairman of the Publications Committee at the Institute of Brewing, in which post he remained until 1952. Between 1954 and 1956, he served as President of the Institute of Brewing, and in 1955 he was appointed Master of the Brewers' Company for a year. He had overseen the reconstruction of the latter company's bomb-damaged hall after the war as Chairman of the Hall Committee. Away from his profession, Martineau was also chairman of the governors at Dame Alice Owen's School and Aldenham School. His obituary in the Journal of the Institute of Brewing records that "his heart was especially close to research and to education" in the brewing industry; he was described as don-like and an intellectual. He died on 3 June 1982. References 1904 births 1982 deaths People educated at Eton College Alumni of New College, Oxford English brewers Masters of the Worshipful Company of Brewers People from Walsham-le-Willows 20th-century English businesspeople
569005
https://en.wikipedia.org/wiki/Gmail
Gmail
Gmail is a free email service provided by Google. As of 2019, it had 1.5 billion active users worldwide. A user typically accesses Gmail in a web browser or the official mobile app. Google also supports the use of email clients via the POP and IMAP protocols. At its launch in 2004, Gmail provided a storage capacity of one gigabyte per user, which was significantly higher than its competitors offered at the time. Today, the service comes with 15 gigabytes of storage. Users can receive emails up to 50 megabytes in size, including attachments, while they can send emails up to 25 megabytes. In order to send larger files, users can insert files from Google Drive into the message. Gmail has a search-oriented interface and a "conversation view" similar to an Internet forum. The service is notable among website developers for its early adoption of Ajax. Google's mail servers automatically scan emails for multiple purposes, including filtering spam and malware and adding context-sensitive advertisements next to emails. This advertising practice has been widely criticized by privacy advocates due to concerns over unlimited data retention, ease of monitoring by third parties, users of other email providers not having agreed to the policy upon sending emails to Gmail addresses, and the potential for Google to change its policies to further decrease privacy by combining information with other Google data usage. The company has been the subject of lawsuits concerning the issues. Google has stated that email users must "necessarily expect" their emails to be subject to automated processing and claims that the service refrains from displaying ads next to potentially sensitive messages, such as those mentioning race, religion, sexual orientation, health, or financial statements. In June 2017, Google announced the end to the use of contextual Gmail content for advertising purposes, relying instead on data gathered from the use of its other services. Features Storage On April 1, 2004, Gmail was launched with one gigabyte (GB) of storage space, a significantly higher amount than competitors offered at the time. On April 1, 2005, the first anniversary of Gmail, the limit was doubled to two gigabytes of storage. Georges Harik, the product management director for Gmail, stated that Google would "keep giving people more space forever." On April 24, 2012, Google announced the increase of storage included in Gmail from 7.5 to 10 gigabytes ("and counting") as part of the launch of Google Drive. On May 13, 2013, Google announced the overall merge of storage across Gmail, Google Drive, and Google+ Photos, allowing users 15 gigabytes of included storage among three services. On August 15, 2018, Google launched Google One, a service where users can pay for additional storage, shared among Gmail, Google Drive and Google Photos, through a monthly subscription plan. Storage of up to 15 gigabytes is included, and paid plans are available for up to 2 terabytes for personal use. There are also storage limits to individual Gmail messages. Initially, one message, including all attachments, could not be larger than 25 megabytes. This was changed in March 2017 to allow receiving an email of up to 50 megabytes, while the limit for sending an email stayed at 25 megabytes. In order to send larger files, users can insert files from Google Drive into the message. 
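Because Gmail exposes standard POP and IMAP access, the storage figures above can be checked programmatically through the IMAP QUOTA extension, which Gmail supports. A minimal sketch using Python's standard library; the address and app password are placeholders, and the account must have IMAP access enabled:

import imaplib

# Placeholders: substitute a real address and an app password.
USER = "user@gmail.com"
APP_PASSWORD = "xxxx xxxx xxxx xxxx"

with imaplib.IMAP4_SSL("imap.gmail.com", 993) as imap:
    imap.login(USER, APP_PASSWORD)
    # QUOTA extension: reports usage and limit in kilobytes.
    status, quota = imap.getquotaroot("INBOX")
    print(status, quota)

Note that the reported limit reflects the pooled account storage described above rather than a Gmail-only figure.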
Interface The Gmail user interface initially differed from other web-mail systems with its focus on search and conversation threading of emails, grouping several messages between two or more people onto a single page, an approach that was later copied by its competitors. Gmail's user interface designer, Kevin Fox, intended users to feel as if they were always on one page and just changing things on that page, rather than having to navigate to other places. Gmail's interface also makes use of 'labels' (tags) that replace the conventional folders and provide a more flexible method of organizing emails; filters for automatically organizing, deleting or forwarding incoming emails to other addresses; and importance markers for automatically marking messages as 'important'. In November 2011, Google began rolling out a redesign of its interface that "simplified" the look of Gmail into a more minimalist design to provide a more consistent look throughout its products and services as part of an overall Google design change. Significantly redesigned elements included a streamlined conversation view, configurable density of information, new higher-quality themes, a resizable navigation bar with always-visible labels and contacts, and better search. Users were able to preview the new interface design for months prior to the official release, as well as revert to the old interface, until March 2012, when Google discontinued the ability to revert and completed the transition to the new design for all users. In May 2013, Google updated the Gmail inbox with tabs which allow the application to categorize the user's emails. The five tabs are: Primary, Social, Promotions, Updates, and Forums. In addition to customization options, the entire update can be disabled, allowing users to return to the traditional inbox structure. In April 2018, Google introduced a new web UI for Gmail. The new redesign follows Google's Material Design, and changes in the user interface include the use of Google's Product Sans font. Other updates include a Confidential mode, which allows the sender to set an expiration date for a sensitive message or to revoke it entirely, integrated rights management and two-factor authentication. On 16 November 2020, Google announced new settings for smart features and personalization in Gmail. Under the new settings, users were given control over how their data in Gmail, Chat, and Meet is used to offer smart features like Smart Compose and Smart Reply. On 6 April 2021, Google rolled out the Google Chat and Rooms (early access) features to all Gmail users. Spam filter Gmail's spam filtering features a community-driven system: when any user marks an email as spam, this provides information to help the system identify similar future messages for all Gmail users. In the April 2018 update, the spam filtering banners were redesigned with bigger and bolder lettering. Gmail Labs The Gmail Labs feature, introduced on June 5, 2008, allows users to test new or experimental features of Gmail. Users can enable or disable Labs features selectively and provide feedback about each of them. This allows Gmail engineers to obtain user input about new features to improve them and also to assess their popularity. Popular features, like the "Undo Send" option, often "graduate" from Gmail Labs to become a formal setting in Gmail. All Labs features are experimental and are subject to termination at any time. Search Gmail incorporates a search bar for searching emails. 
The search bar can also search contacts, files stored in Google Drive, events from Google Calendar, and Google Sites. In May 2012, Gmail improved the search functionality to include auto-complete predictions from the user's emails. Gmail's search functionality does not support searching for word fragments (also known as 'substring search' or partial word search), although workarounds exist. Language support The Gmail interface supports 72 languages, including: Arabic, Basque, Bulgarian, Catalan, Chinese (simplified), Chinese (traditional), Croatian, Czech, Danish, Dutch, English (UK), English (US), Estonian, Finnish, French, German, Greek, Gujarati, Hebrew, Hindi, Hungarian, Icelandic, Indonesian, Italian, Japanese, Kannada, Korean, Latvian, Lithuanian, Malay, Malayalam, Marathi, Norwegian (Bokmål), Odia, Polish, Punjabi, Portuguese (Brazil), Portuguese (Portugal), Romanian, Russian, Serbian, Sinhala, Slovak, Slovenian, Spanish, Swedish, Tagalog (Filipino), Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Vietnamese, Welsh and Zulu. Language input styles In October 2012, Google added over 100 virtual keyboards, transliterations, and input method editors to Gmail, offering users different input styles for different languages in an effort to help users write in languages that aren't "limited by the language of your keyboard". In October 2013, Google added handwriting input support to Gmail. In August 2014, Gmail became the first major email provider to let users send and receive emails from addresses with accent marks and letters from outside the Latin alphabet. Platforms Web browsers Gmail's "basic HTML" version works on almost all browsers. The modern AJAX version is officially supported in the current and previous major releases of Google Chrome, Firefox, Internet Explorer, Microsoft Edge and Safari web browsers on a rolling basis. In August 2011, Google introduced Gmail Offline, an HTML5-powered app for providing access to the service while offline. Gmail Offline runs on the Google Chrome browser and can be downloaded from the Chrome Web Store. In addition to the native apps on iOS and Android, users can access Gmail through the web browser on a mobile device. Mobile Gmail has native applications for iOS devices (including iPhone, iPad, and iPod Touch) and for Android devices. In November 2014, Google introduced functionality in the Gmail Android app that enabled sending and receiving emails from non-Gmail addresses (such as Yahoo! Mail and Outlook.com) through POP or IMAP. In November 2016, Google redesigned the Gmail app for the iOS platform, bringing the first complete visual overhaul in "nearly four years". The update added much more use of colors, sleeker transitions, and the addition of several "highly-requested" features, including Undo Send, faster search with instant results and spelling suggestions, and Swipe to Archive/Delete. In May 2017, Google updated Gmail on Android to feature protection from phishing attacks. Media outlets noticed that the new protection was announced amid a widespread phishing attack on a combination of Gmail and Google's Docs document service that occurred on the same day. Later in May, Google announced the addition of "Smart Reply" to Gmail on Android and iOS. "Smart Reply", a feature originally launched for Google's Inbox by Gmail service, scans a message for information and uses machine intelligence to offer three responses the user can optionally edit and send. 
The feature was limited to English at launch, with additional support for Spanish and other languages arriving later. Inbox by Gmail, another app from the Gmail team, was also available for iOS and Android devices. It was discontinued in April 2019. Third-party programs can be used to access Gmail, using the POP or IMAP protocols. In 2019, Google rolled out dark mode for its mobile apps on Android and iOS. Inbox by Gmail In October 2014, Google introduced Inbox by Gmail on an invitation-only basis. Developed by the Gmail team, but serving as a "completely different type of inbox", the service was made to help users deal with the challenges of an active inbox. Citing issues such as distractions, difficulty in finding important information buried in messages, and receiving more emails than ever, Inbox by Gmail had several important differences from Gmail, including bundles that automatically sort emails of the same topic together, highlights that surface key information from messages, and reminders, assists, and snooze, that help the user in handling incoming emails at appropriate times. Inbox by Gmail became publicly available in May 2015. In September 2018, Google announced it would end the service at the end of March 2019, most of its key features having been incorporated into the standard Gmail service. The service was discontinued on April 2, 2019. Integration with Google products In August 2010, Google released a plugin that provides integrated telephone service within Gmail's Google Chat interface. The feature initially lacked an official name, with Google referring to it as both "Google Voice in Gmail chat" and "Call Phones in Gmail". The service logged over one million calls in 24 hours. In March 2014, the service was discontinued and replaced with functionality from Google Hangouts, another communication platform from Google. On February 9, 2010, Google launched its new social networking tool, Google Buzz, which integrated with Gmail, allowing users to share links and media, as well as status updates. Google Buzz was discontinued in October 2011, replaced with new functionality in Google+, Google's then-new social networking platform. Gmail was integrated with Google+ in December 2011, as part of an effort to have all Google information across one Google account, with a centralized Google+ user profile. Backlash from the move caused Google to step back and remove the requirement of a Google+ user account, keeping only a private Google account without a public-facing profile, starting in July 2015. In May 2013, Google announced the integration between Google Wallet and Gmail, which would allow Gmail users to send money as email attachments. Although the sender must use a Gmail account, the recipient does not need to be using a Gmail address. The feature has no transaction fees, but there are limits to the amount of money that can be sent. Initially only available on the web, the feature was expanded to the Android app in March 2017, for people living in the United States. In September 2016, Google released Google Trips, an app that, based on information from a user's Gmail messages, automatically generates travel cards. A travel card contains itinerary details, such as plane tickets and car rentals, and recommends activities, food and drinks, and attractions based on location, time, and interests. The app also has offline functionality. In April 2017, Google Trips received an update adding several significant features. 
The app now also scans Gmail for bus and train tickets, and allows users to manually input trip reservations. Users can send trip details to other users' email, and if the recipient also has Google Trips, the information will be automatically available in their apps as well. Security History Gmail has supported secure HTTPS since the day it launched. Initially, HTTPS was the default only on the login page; Google engineer Ariel Rideout stated that this was because HTTPS made "your mail slower". However, users could manually switch to secure HTTPS mode inside the inbox after logging in. In July 2008, Google simplified the ability to manually enable secure mode, with a toggle in the settings menu. In 2007, Google fixed a cross-site scripting security issue that could let attackers collect information from Gmail contact lists. In January 2010, Google began rolling out HTTPS as the default for all users. In June 2012, a new security feature was introduced to protect users from state-sponsored attacks: a banner appears at the top of the page to warn users of a suspected account compromise. In March 2014, Google announced that an encrypted HTTPS connection would be used for the sending and receiving of all Gmail emails, and that "every single email message you send or receive —100% of them—is encrypted while moving internally" through the company's systems. Whenever possible, Gmail uses transport layer security (TLS) to automatically encrypt emails sent and received. On the web and on Android devices, users can check if a message is encrypted by checking if the message has a closed or open red padlock. Gmail automatically scans all incoming and outgoing emails for viruses in email attachments. For security reasons, some file types, including executables, are not allowed to be sent in emails. At the end of May 2017, Google announced that it had applied machine learning technology to identify emails with phishing and spam, with 99.9% detection accuracy. The company also announced that Gmail would selectively delay some messages, approximately 0.05% of all, to perform more detailed analysis and aggregate details to improve its algorithms. Third-party encryption in transit In Google's Transparency Report under the Safer email section, it provides information on the percentage of emails encrypted in transit between Gmail and third-party email providers. Two-step verification Gmail supports two-step verification, an optional additional measure for users to protect their accounts when logging in. Once enabled, users are required to verify their identity using a second method after entering their username and password when logging in on a new device. Common methods include entering a code sent to a user's mobile phone through a text message, entering a code using the Google Authenticator smartphone app, responding to a prompt on an Android/iOS device or inserting a physical security key into the computer's USB port. Using a security key for two-step verification was made available as an option in October 2014. 24-hour lockdowns If an algorithm detects what Google calls "abnormal usage that may indicate that your account has been compromised", the account can be automatically locked down for between one minute and 24 hours, depending on the type of activity detected. Listed reasons for a lock-down include: "Receiving, deleting, or downloading large amounts of mail via POP or IMAP in a short period of time. 
If you're getting the error message, 'Lockdown in Sector 4,' you should be able to access Gmail again after waiting 24 hours." "Sending a large number of undeliverable messages (messages that bounce back)." "Using file-sharing or file-storage software, browser extensions, or third-party software that automatically logs into your account." "Leaving multiple instances of Gmail open." "Browser-related issues. Please note that if you find your browser continually reloading while attempting to access your Inbox, it's probably a browser issue, and it may be necessary to clear your browser's cache and cookies." Anti-child pornography policy Google combats child pornography through Gmail's servers, in conjunction with the National Center for Missing & Exploited Children (NCMEC), to find children suffering abuse around the world. In collaboration with the NCMEC, Google creates a database of child pornography pictures. Each image is given a unique numerical value known as a hash. Google then scans Gmail looking for these unique hashes. When suspicious images are located, Google reports the incident to the appropriate national authorities. History The idea for Gmail was developed by Paul Buchheit several years before it was announced to the public. The project was known by the code name Caribou. During early development, the project was kept secret from most of Google's own engineers. This changed once the project improved, and by early 2004, most employees were using it to access the company's internal email system. Gmail was announced to the public by Google on April 1, 2004 as a limited beta release. In November 2006, Google began offering a Java-based application of Gmail for mobile phones. In October 2007, Google began a process of rewriting parts of the code that Gmail used, which would make the service faster and add new features, such as custom keyboard shortcuts and the ability to bookmark specific messages and email searches. Gmail also added IMAP support in October 2007. An update around January 2008 changed elements of Gmail's use of JavaScript, and resulted in the failure of a third-party script some users had been using. Google acknowledged the issue and helped users with workarounds. Gmail exited beta status on July 7, 2009. Prior to December 2013, users had to explicitly approve the display of images in emails, which acted as a security measure. This changed in December 2013, when Google, citing improved image handling, enabled images to be visible without user approval. Images are now routed through Google's secure proxy servers rather than the original external host servers. MarketingLand noted that the change to image handling means email marketers will no longer be able to track the recipient's IP address or information about what kind of device the recipient is using. However, Wired stated that the new change means senders can track the time when an email is first opened, as the initial loading of the images requires the system to make a "callback" to the original server. Growth In June 2012, Google announced that Gmail had 425 million active users globally. In May 2015, Google announced that Gmail had 900 million active users, 75% of whom were using the service on mobile devices. In February 2016, Google announced that Gmail had passed 1 billion active users. In July 2017, Google announced that Gmail had passed 1.2 billion active users. 
In the business sector, Quartz reported in August 2014 that, among 150 companies checked in three major categories in the United States (Fortune 50 largest companies, mid-size tech and media companies, and startup companies from the last Y Combinator incubator class), only one Fortune 50 company used Gmail – Google itself – while 60% of mid-sized companies and 92% of startup companies were using Gmail. In May 2014, Gmail became the first app on the Google Play Store to hit one billion installations on Android devices. Gamil Design company and misspellings Before the introduction of Gmail, the website of product and graphic design firm Gamil Design in Raleigh, North Carolina received 3,000 hits per month. A Google engineer who had accidentally gone to the Gamil site a number of times contacted the company and asked if the site had experienced an increase in traffic. In fact, the site's activity had doubled. Two years later, with 600,000 hits per month, Gamil's Internet service provider wanted to charge more, and Gamil posted the message on its site: "You may have arrived here by misspelling Gmail. We understand. Typing fast is not our strongest skill. But since you've typed your way here, let's share." Google Workspace As part of Google Workspace (formerly G Suite), Google's business-focused offering, Gmail comes with additional features, including: Email addresses with the customer's domain name (@yourcompany.com) 99.9% guaranteed uptime with zero scheduled downtime for maintenance Either 30 GB or unlimited storage shared with Google Drive, depending on the plan 24/7 phone and email support Synchronization compatibility with Microsoft Outlook and other email providers Support for add-ons that integrate third-party apps purchased from the Google Workspace Marketplace with Gmail Reception Gmail is noted by web developers for its early adoption of Ajax. Awards Gmail was ranked second in PC World's "100 Best Products of 2005", behind Firefox. Gmail also won 'Honorable Mention' in the Bottom Line Design Awards 2005. In September 2006, Forbes declared Gmail to be the best webmail application for small businesses. In November 2006, Gmail received PC World's 4-star rating. Criticism Privacy Google has one privacy policy that covers all of its services. Google claims that it "will not target ads based on sensitive information, such as race, religion, sexual orientation, health, or sensitive financial categories." Automated scanning of email content Google's mail servers automatically scan emails for multiple purposes, including filtering spam and malware, and (until 2017) adding context-sensitive advertisements next to emails. 
Privacy advocates raised concerns about this practice; concerns included that allowing email content to be read by a machine (as opposed to a person) can allow Google to keep unlimited amounts of information forever; the automated background scanning of data raises the risk that the expectation of privacy in email usage will be reduced or eroded; information collected from emails could be retained by Google for years after its current relevancy to build complete profiles on users; emails sent by users from other email providers get scanned despite never having agreed to Google's privacy policy or terms of service; Google can change its privacy policy unilaterally, and for minor changes to the policy it can do so without informing users; in court cases, governments and organizations can potentially find it easier to legally monitor email communications; at any time, Google can change its current company policies to allow combining information from emails with data gathered from use of its other services; and any internal security problem on Google's systems can potentially expose many – or all – of its users. In 2004, thirty-one privacy and civil liberties organizations wrote a letter calling upon Google to suspend its Gmail service until the privacy issues were adequately addressed. The letter also called upon Google to clarify its written information policies regarding data retention and data sharing among its business units. The organizations also voiced their concerns about Google's plan to scan the text of all incoming messages for the purposes of ad placement, noting that the scanning of confidential email for inserting third-party ad content violates the implicit trust of an email service provider. On June 23, 2017, Google announced that, later in 2017, it would phase out the scanning of email content to generate contextual advertising, relying on personal data collected through other Google services instead. The company stated that this change was meant to clarify its practices and quell concerns among enterprise G Suite (now Google Workspace) customers who perceived an ambiguous distinction between the free consumer and paid professional variants, the latter being advertising-free. Lawsuits In March 2011, a former Gmail user in Texas sued Google, claiming that its Gmail service violates users' privacy by scanning e-mail messages to serve relevant ads. In July 2012, some California residents filed two class action lawsuits against Google and Yahoo!, claiming that they illegally intercept emails sent by individual non-Gmail or non-Yahoo! email users to Gmail and Yahoo! recipients without the senders' knowledge, consent or permission. A motion filed by Google's attorneys in the case concedes that Gmail users have "no expectation of privacy". A court filing uncovered by advocacy group Consumer Watchdog in August 2013 revealed that Google had stated that no "reasonable expectation" exists among Gmail users in regard to the assured confidentiality of their emails. In response to a lawsuit filed in May 2013, Google explained: "... all users of email must necessarily expect that their emails will be subject to automated processing ...  
Just as a sender of a letter to a business colleague cannot be surprised that the recipient's assistant opens the letter, people who use web-based email today cannot be surprised if their communications are processed by the recipient's ECS [electronic communications service] provider in the course of delivery." A Google spokesperson stated to the media on August 15, 2013 that the corporation takes the privacy and security concerns of Gmail users "very seriously." April 2014 Terms of service update Google updated its terms of service for Gmail in April 2014 to create full transparency for its users in regard to the scanning of email content. The relevant revision states: "Our automated systems analyse your content (including emails) to provide you personally relevant product features, such as customised search results, tailored advertising, and spam and malware detection. This analysis occurs as the content is sent, received, and when it is stored." A Google spokesperson explained that the corporation wishes for its policies "to be simple and easy for users to understand." In response to the update, Jim Killock, executive director of the Open Rights Group, stated: "The really dangerous things that Google is doing are things like the information held in Analytics, cookies in advertising and the profiling that it is able to do on individual accounts". Microsoft ad campaign against Google In 2013, Microsoft launched an advertising campaign to attack Google for scanning email messages, arguing that most consumers are not aware that Google monitors their personal messages to deliver targeted ads. Microsoft claims that its email service Outlook does not scan the contents of messages, and a Microsoft spokesperson called the issue of privacy "Google's kryptonite." In response, Google stated: "We work hard to make sure that ads are safe, unobtrusive and relevant ... No humans read your e-mail or Google Account information in order to show you advertisements or related information. An automated algorithm — similar to that used for features like Priority Inbox or spam filtering — determines which ads are shown." The New York Times cites "Google supporters", who say that "Microsoft's ads are distasteful, the last resort of a company that has been unsuccessful at competing against Google on the more noble battleground of products". Other privacy issues 2010 attack from China In January 2010, Google detected a "highly sophisticated" cyberattack on its infrastructure that originated from China. The targets of the attack were Chinese human rights activists, but Google discovered that accounts belonging to European, American and Chinese activists for human rights in China had been "routinely accessed by third parties". Additionally, Google stated that its investigation revealed that "at least" 20 other large companies from a "wide range of businesses" – including the Internet, finance, technology, media and chemical sectors – had been similarly targeted. Google was in the process of notifying those companies, and it had also worked with relevant US authorities. In light of the attacks, Google enhanced the security and architecture of its infrastructure, and advised individual users to install anti-virus and anti-spyware on their computers, update their operating systems and web browsers, and be cautious when clicking on Internet links or when sharing personal information in instant messages and emails. 
Social network integration The February 2010 launch of Google Buzz, a former social network that was linked to Gmail, immediately drew criticism for publicly sharing details of users' contacts unless the default settings were changed. A new Gmail feature was launched in January 2014, whereby users can email people with Google+ accounts even if they do not know the recipient's email address. Marc Rotenberg, President of the Electronic Privacy Information Center, called the feature "troubling", and compared it to the initial privacy flaw of Google Buzz's launch. Update to DoubleClick privacy policy In June 2016, Julia Angwin of ProPublica wrote about Google's updated privacy policy, which deleted a clause that had stated Google would not combine DoubleClick web browsing cookie information with personally identifiable information from its other services. This change has allowed Google to merge users' personally identifiable information from different Google services to create one unified ad profile for each user. After publication of the article, Google reached out to ProPublica to say that the merge would not include Gmail keywords in ad targeting. Outages Gmail suffered at least seven outages in 2009 alone, causing doubts about the reliability of its service. It suffered a new outage on February 28, 2011, in which a bug caused Gmail accounts to appear empty. Google stated in a blog post that "email was never lost" and restoration was in progress. Further outages occurred on April 17, 2012; September 24, 2013; January 24, 2014; January 29, 2019; and August 20, 2020. Google has stated that "Gmail remains more than 99.9% available to all users, and we're committed to keeping events like today's notable for their rarity." "On behalf of" tag In May 2009, Farhad Manjoo wrote on The New York Times blog about Gmail's "on behalf of" tag. Manjoo explained: "The problem is, when you try to send outbound mail from your Gmail universal inbox, Gmail adds a tag telling your recipients that you're actually using Gmail and not your office e-mail. If your recipient is using Microsoft Outlook, he'll see a message like, 'From [email protected] on behalf of [email protected].'" Manjoo further wrote that "Google explains that it adds the tag in order to prevent your e-mail from being considered spam by your recipient; the theory is that if the e-mail is honest about its origins, it shouldn't arouse suspicion by spam checking software". The following July, Google announced a new option that would remove the "On behalf of" tag by sending the email from the server of the other email address instead of using Gmail's servers. See also Comparison of mail servers Comparison of webmail providers List of Google products References External links Official Website for Gmail for Work Gmail official mobile site (multi-language) 2004 software Computer-related introductions in 2004 Computer-related introductions in 2007 Cross-platform software Google services Internet properties established in 2004 Webmail
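The "on behalf of" behavior above arose from relaying mail for another address through Gmail's own SMTP servers; the later fix routed mail through the other provider's server instead. For illustration, a minimal sketch of the first approach using Python's standard library (the addresses and app password are placeholders, and the custom From address must be configured under Gmail's "Send mail as" settings):

import smtplib
from email.message import EmailMessage

# Placeholders: a Gmail login and a non-Gmail From address.
GMAIL_USER = "me@gmail.com"
APP_PASSWORD = "xxxx xxxx xxxx xxxx"

msg = EmailMessage()
msg["From"] = "me@example.com"       # the non-Gmail address
msg["To"] = "colleague@example.org"
msg["Subject"] = "Test"
msg.set_content("Sent through smtp.gmail.com.")

with smtplib.SMTP("smtp.gmail.com", 587) as smtp:
    smtp.starttls()                   # upgrade to TLS before authenticating
    smtp.login(GMAIL_USER, APP_PASSWORD)
    smtp.send_message(msg)

Because the message is authenticated as GMAIL_USER but claims a different From address, a receiving client could historically display the "on behalf of" annotation described above.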
53674861
https://en.wikipedia.org/wiki/Quirkos
Quirkos
Quirkos is a CAQDAS software package for the qualitative analysis of text data, commonly used in social science. It provides a graphical interface in which the nodes or themes of analysis are represented by bubbles. It is designed primarily for new and non-academic users of qualitative data, to allow them to quickly learn the basics of qualitative data analysis. Although simpler to use, it lacks some of the features present in other commercial CAQDAS packages, such as multimedia support. However, it has been proposed as a useful tool for lay and participant-led analysis and is comparatively affordable. It is developed by Edinburgh, UK-based Quirkos Software, and was first released in October 2014. The interface is unique in that it simultaneously displays visualisations and text data, and it has identical capabilities on Windows, macOS and Linux. The thematic framework is represented with a series of circles, the size of each indicating the amount of data coded to them. Colors are used extensively to indicate the thematic bubble within the coding stripes on the text sources. There are few features for quantitative or statistical analysis of text data; however, project files can be exported for analysis in statistical software such as SPSS or R. Quirkos is extensively used in many different fields which utilise qualitative research, including sociology, health, media studies, education and human geography. The developers claim use in more than 100 universities across the world. It has also been used in research for non-governmental organisations such as the Infection Control Society and UNICEF. The text management capabilities can also be used to assist in systematic literature reviews. Features Basic features and simple operation Import of Microsoft Word, PDF, Text and RTF source files CSV import for tabulated data (such as online surveys) Integrated synonym database for keyword search Cluster analysis and visualisation of concurrent coding Export coded data to annotated Microsoft Word files Subset analysis by discrete and quantitative variables Cloud or local based data storage Live collaboration and team work on projects See also Computer-assisted qualitative data analysis software References External links Review by the University of Surrey CAQDAS network Software overview (presentation) Software overview (video) QDA software Science software for MacOS Science software for Linux
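Quirkos is described above as exporting projects for analysis in statistical software such as SPSS or R. As an illustrative sketch only (the file name and column names here are hypothetical, not Quirkos's actual export schema), a downstream script might count how often each theme was applied across the coded text segments of a CSV export:

import csv
from collections import Counter

# Hypothetical export: one row per coded text segment, with a "theme" column.
counts = Counter()
with open("project_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        counts[row["theme"]] += 1

# Largest themes first, mirroring the bubble sizes on the Quirkos canvas.
for theme, n in counts.most_common():
    print(f"{theme}: {n} coded segments")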
46313839
https://en.wikipedia.org/wiki/Windows%20Server%202016
Windows Server 2016
Windows Server 2016 is the seventh release of the Windows Server server operating system developed by Microsoft as part of the Windows NT family of operating systems. It was developed concurrently with Windows 10 and is the successor to Windows Server 2012 R2. The first early preview version (Technical Preview) became available on October 1, 2014 together with the first technical preview of System Center. Windows Server 2016 was released on September 26, 2016 at Microsoft's Ignite conference and broadly released for retail sale on October 12, 2016. It has three successors: Windows Server 2019, Windows Server 2022, and the Windows Server Semi-Annual Channel, which excludes the graphical user interface and many older components. Features Windows Server 2016 has a variety of new features, including Active Directory Federation Services: It is possible to configure AD FS to authenticate users stored in non-AD directories, such as X.500 compliant Lightweight Directory Access Protocol (LDAP) directories and SQL databases. Windows Defender: Windows Server Antimalware is installed and enabled by default without the GUI, which is an installable Windows feature. Remote Desktop Services: Support for OpenGL 4.4 and OpenCL 1.1, performance and stability improvements; MultiPoint Services role (see Windows MultiPoint Server) Storage Services: Central Storage QoS Policies; Storage Replicas (storage-agnostic, block-level, volume-based, synchronous and asynchronous replication using SMB3 between servers for disaster recovery). Storage Replica replicates blocks instead of files; files can be in use. It is not multi-master, not one-to-many and not transitive. It periodically replicates snapshots, and the replication direction can be changed. Failover Clustering: Cluster operating system rolling upgrade, Storage Replicas Web Application Proxy: Preauthentication for HTTP Basic application publishing, wildcard domain publishing of applications, HTTP to HTTPS redirection, Propagation of client IP address to backend applications IIS 10: Support for HTTP/2 Windows PowerShell 5.1 Windows Server Containers Networking features DHCP: As Network Access Protection was deprecated in Windows Server 2012 R2, in Windows Server 2016 the DHCP role no longer supports NAP DNS: DNS client: Service binding – enhanced support for computers with more than one network interface DNS Server: DNS policies, new DNS record types (TLSA, SPF, and unknown records), new PowerShell cmdlets and parameters Windows Server Gateway now supports Generic Routing Encapsulation (GRE) tunnels IP address management (IPAM): Support for /31, /32, and /128 subnets; discovery of file-based, domain-joined DNS servers; new DNS functions; better integration of DNS, DHCP, and IP Address (DDI) Management Network Controller: A new server role to configure, manage, monitor, and troubleshoot virtual and physical network devices and services in the datacentre Hyper-V Network virtualization: Programmable Hyper-V switch (a new building block of Microsoft's software-defined networking solution); VXLAN encapsulation support; Microsoft Software Load Balancer interoperability; better IEEE Ethernet standard compliance. Hyper-V Rolling Hyper-V cluster update: Unlike upgrading clusters from Windows 2008 R2 to 2012 level, Windows Server 2016 cluster nodes can be added to a Hyper-V Cluster with nodes running Windows Server 2012 R2. 
The cluster continues to function at a Windows Server 2012 R2 feature level until all of the nodes in the cluster have been upgraded and the cluster functional level has been upgraded. Storage quality of service (QoS) to centrally monitor end-to-end storage performance and create policies using Hyper-V and Scale-Out File Servers New, more efficient binary virtual machine configuration format (.VMCX extension for virtual machine configuration data and the .VMRS extension for runtime state data) Production checkpoints Hyper-V Manager: Alternate credentials support, down-level management, WS-Management protocol Integration services for Windows guests distributed through Windows Update Hot add and remove for network adapters (for generation 2 virtual machines) and memory (for generation 1 and generation 2 virtual machines) Linux secure boot Connected Standby compatibility The Storage Resiliency feature of Hyper-V detects transitory loss of connectivity to VM storage; VMs are paused until connectivity is re-established. RDMA-compatible virtual switch Nano Server Microsoft announced a new installation option, Nano Server, which offers a minimal-footprint headless version of Windows Server. It excludes the graphical user interface, WoW64 (support for 32-bit software) and Windows Installer. It does not support console login, either locally or via Remote Desktop Connection. All management is performed remotely via Windows Management Instrumentation (WMI), Windows PowerShell and Remote Server Management Tools (a collection of web-based GUI and command line tools). However, in Technical Preview 5, Microsoft re-added the ability to administer Nano Server locally through PowerShell. According to Microsoft engineer Jeffrey Snover, Nano Server has 93% lower VHD size, 92% fewer critical security advisories, and 80% fewer reboots than Windows Server. Nano Server is only available to Microsoft Software Assurance customers and on cloud computing platforms such as Microsoft Azure and Amazon Web Services. Starting with the new feature release of Windows Server version 1709, Nano Server can only be installed inside a container host. Development Microsoft has been reorganized by Satya Nadella, putting the Server and System Center teams together. Previously, the Server team was more closely aligned with the Windows client team. The Azure team is also working closely with the Server team. In March 2017, Microsoft demonstrated an internal version of Server 2016 running on the ARMv8-A architecture. It was reported that Microsoft was working with Qualcomm Centriq and Cavium ThunderX2 chips. According to James Vincent of The Verge, this decision endangers Intel's dominance of the server CPU market. However, later clarification from Microsoft revealed that this version of Windows Server is only for internal use and only impacts subscribers of the Microsoft Azure service. Preview releases A public beta version of Windows Server 2016 (then still called vNext) branded as "Windows Server Technical Preview" was released on October 1, 2014; the technical preview builds are aimed toward enterprise users. The first Technical Preview was initially set to expire on April 15, 2015, but Microsoft later released a tool to extend the expiry date until the second technical preview of the OS arrived in May 2015. The second beta version, "Technical Preview 2", was released on May 4, 2015. The third preview version, "Technical Preview 3", was released on August 19, 2015. "Technical Preview 4" was released on November 19, 2015. 
"Technical Preview 5" was released on April 27, 2016. Windows Server 2016 Insider Preview Build 16237 was released to Windows Insiders on July 13, 2017. Public release Windows Server 2016 was officially released at Microsoft's Ignite Conference on September 26, 2016. Unlike its predecessor, Windows Server 2016 is licensed by the number of CPU cores rather than number of CPU sockets—a change that has similarly been adopted by BizTalk Server 2013 and SQL Server 2014. The new licensing structure that has been adopted by Windows Server 2016 has also moved away from the Windows Server 2012/2012R2 CPU socket licensing model in that now the amount of cores covered under one license is limited. Windows Server 2016 Standard and Datacenter core licensing now covers a minimum of 8 core licenses for each physical processor and a minimum of 16 core licenses for each server. Core licenses are sold in packs of two with Standard Edition providing the familiar rights to run 2 virtualized OS environments. If the server goes over 16 core licenses for a 2 processor server additional licenses will now be required with Windows Server 2016. Version history Technical Preview Windows Server 2016 Technical Preview, released on October 1, 2014, was the first beta version of the operating system made publicly available. Its version number was 6.4.9841. Technical Preview 2 Windows Server 2016 Technical Preview 2 was made available on May 4, 2015. Its version number was 10.0.10074. (A similar jump in the most significant part of the version number from 6 to 10 is seen in Windows 10.) Highlights of this version include: Nano Server installation option Hyper-V: hot add and remove memory and NIC; resilient virtual machines to keep running even when their cluster fabric fails Rolling upgrades for Hyper-V and Storage clusters Networking: Converged NIC across tenant and RDMA traffic; PacketDirect on 40G Storage: Virtual Machine Storage Path resiliency; Storage Spaces Direct to aggregate Storage Spaces across multiple servers; Storage Replica Security: Host Guardian Service, helping to keep trust and isolation boundary between the cloud infrastructure and guest OS layers; Just Enough Administration, restricting users to perform only specific tasks Management: PowerShell Desired State Configuration; PowerShell Package Manager; Windows Management Framework 5.0 April Preview and DSC Resource Kit Other: Conditional access control in AD FS; application authentication support for OpenID Connect and OAuth; full OpenGL support with RDS for VDI; Server-side support for HTTP/2, including header compression, connection multiplexing and server push Installation options: Minimal Server Interface was made default and renamed the Server installation option to “Server with local admin tools”. Technical Preview 3 The third technical preview of Windows Server 2016 was made available on August 19, 2015. Its version number was 10.0.10514. Highlights of this version include: Windows Server Containers Active Directory Federation Services (AD FS): authentication of users stored in Lightweight Directory Access Protocol (LDAP) directories Installation options: The Server installation option had been renamed to “Server with Desktop Experience” having the shell and Desktop Experience installed by default. Due to the structural changes required to deliver the Desktop Experience on Server, it is no longer possible to convert from Server with Desktop Experience to Server Core or to convert Server Core up to Server with Desktop Experience. 
Technical Preview 4 The fourth technical preview of the operating system was made available on November 19, 2015, one year and one month after the initial technical preview. Its version number was 10.0.10586. Its highlights include: Nano Server supports the DNS Server and IIS server roles, as well as MPIO, VMM, SCOM, DSC push mode, DCB, Windows Server Installer, and the WMI provider for Windows Update. Its Recovery Console supports editing and repairing the network configuration. A Windows PowerShell module is now available to simplify building Nano Server images. Hyper-V Containers encapsulate each container in a lightweight virtual machine. Technical Preview 5 The last technical preview of Windows Server 2016 was made available on April 27, 2016. Its version number was 10.0.14300. Its highlights include: Mostly general refinements Greater time accuracy in both physical and virtual machines Container support adds performance improvements, simplified network management, and support for Windows containers on Windows 10 Nano Server: an updated module for building Nano Server images, including more separation of physical host and guest virtual machine functionality as well as support for different Windows Server editions. Improvements to the Recovery Console, including separation of inbound and outbound firewall rules as well as the ability to repair the configuration of WinRM Networking: traffic to new or existing virtual appliances can now be both mirrored and routed. With a distributed firewall and network security groups, this enables dynamically segmented and secure workloads in a manner similar to Azure. The entire software-defined networking (SDN) stack can be deployed and managed using System Center Virtual Machine Manager. Docker can be used to manage Windows Server container networking and to associate SDN policies not only with virtual machines but with containers as well Remote Desktop Services: a highly available RDS deployment can leverage Azure SQL Database for the RD Connection Brokers in high availability mode Management: ability to run PowerShell.exe locally on Nano Server (no longer remote only), new Local Users & Groups cmdlets to replace the GUI, added PowerShell debugging support, and added support in Nano Server for security logging & transcription and JEA (Just Enough Administration) Shielded Virtual Machines: new "Encryption Supported" mode that offers more protection than an ordinary virtual machine, but less than "Shielded" mode, while still supporting vTPM, disk encryption, Live Migration traffic encryption, and other features, including direct fabric administration conveniences such as virtual machine console connections and PowerShell Direct Full support for converting existing non-shielded Generation 2 virtual machines to shielded virtual machines, including automated disk encryption Shielded virtual machines are compatible with Hyper-V Replica Release to manufacturing Windows Server 2016 was released to manufacturing on September 26, 2016, bearing the version number 10.0.14393 (the same as Windows 10 Anniversary Update). Microsoft added the following final touches: Available for a 180-day evaluation Fixed Start menu corruptions Improved user experience and performance Windows Store apps have been removed Login screen now has a background The Windows Hello feature has been added Dark theme has been added Semi-Annual Channel releases Version 1709 Windows Server, version 1709 (version number shared with Windows 10 Fall Creators Update) was released on October 17, 2017. 
This release dropped the Windows Server 2016 name and is called simply Windows Server by Microsoft. It is offered to Microsoft Software Assurance customers who have an active Windows Server 2016 license, and it has the same system requirements. This is the first Windows Server product to fall under the "Semi-Annual Channel" (SAC) release cadence. This product only features the Server Core and the Nano Server modes. Of the two, only the Server Core mode of the OS can be installed on a bare system. The Nano Server mode is only available as an operating system container. Version 1803 Windows Server, version 1803 (version number shared with Windows 10 April 2018 Update) is the second Semi-Annual Channel release of Windows Server. It is also the final version to be branched off the Server 2016 codebase, as the next release shares the version number 1809 with Windows Server 2019. See also Microsoft Servers Comparison of Microsoft Windows versions History of Microsoft Windows Comparison of operating systems List of operating systems References External links PluralSight: Windows Server vNext First Look – An introduction to the new features of the Windows Server vNext operating system Our Server Journey – video session describing the path that Windows Server has taken from its creation to the current day and where it is going from here Michael Pietroforte: Nano Server – Goodbye Windows Server? Microsoft Windows Nano Server, the future of Windows Server? 2016 software Windows Server X86-64 operating systems
36157051
https://en.wikipedia.org/wiki/Windows%20Phone%208
Windows Phone 8
Windows Phone 8 is the second generation of the Windows Phone mobile operating system from Microsoft. It was released on October 29, 2012, and, like its predecessor, it features a flat user interface based on the Metro design language. It was succeeded by Windows Phone 8.1, which was unveiled on April 2, 2014. Windows Phone 8 replaces the Windows CE-based architecture used in Windows Phone 7 with the Windows NT kernel found in Windows 8. Windows Phone 7 devices cannot run or update to Windows Phone 8, and new applications compiled specifically for Windows Phone 8 are not made available for Windows Phone 7 devices. Developers can make their apps available on both Windows Phone 7 and Windows Phone 8 devices by targeting both platforms via the proper SDKs in Visual Studio. Windows Phone 8 devices are manufactured by Microsoft Mobile (formerly Nokia), HTC, Samsung and Huawei. History On June 20, 2012, Microsoft unveiled Windows Phone 8 (codenamed Apollo), the next generation of the Windows Phone operating system, for release later in 2012. Windows Phone 8 replaces the previously Windows CE-based architecture with one based on the Windows NT kernel, and shares many components with Windows 8, allowing developers to easily port applications between the two platforms. Windows Phone 8 also allows devices with larger screens (the four confirmed sizes are WVGA 800×480 15:9, WXGA 1280×768 15:9, 720p 1280×720 16:9, and 1080p 1920×1080 16:9) and multi-core processors, NFC (primarily used to share content and perform payments), backwards compatibility with Windows Phone 7 apps, improved support for removable storage (which now functions more similarly to how such storage is handled on Windows and Android), a redesigned home screen incorporating resizable tiles across the entire screen, a new Wallet hub (to integrate NFC payments, coupon websites such as Groupon, and loyalty cards), and "first-class" integration of VoIP applications into the core functions of the OS. Additionally, Windows Phone 8 includes more features aimed at the enterprise market, such as device management, BitLocker encryption, and the ability to create a private Marketplace to distribute apps to employees; these features were expected to meet or exceed the enterprise capabilities of the previous Windows Mobile platform. Windows Phone 8 also supports over-the-air updates, and all Windows Phone 8 devices receive software support for at least 36 months after their release. In the interest of ensuring it was released with devices designed to take advantage of its new features, Windows Phone 8 was not made available as an update for existing Windows Phone 7 devices. Instead, Microsoft released Windows Phone 7.8 as an update for Windows Phone 7 devices, which backported several features such as the redesigned home screen. Addressing some software bugs with Windows Phone 8 forced Microsoft to delay some enterprise improvements, like VPN support, until the 2014 release of Windows Phone 8.1. Support In March 2013, Microsoft announced that updates for the Windows Phone 8 operating system would be made available through July 8, 2014. Microsoft later extended support to 36 months, announcing that updates for the Windows Phone 8 operating system would be made available through January 12, 2016. Windows Phone 8 devices are upgradeable to the succeeding edition of the operating system, Windows Phone 8.1. 
Features The following features were confirmed at Microsoft's 'sneak peek' at Windows Phone on June 20, 2012 and the unveiling of Windows Phone 8 on October 29, 2012: Core Windows Phone 8 is the first mobile OS from Microsoft to use the Windows NT kernel, which is the same kernel that runs Windows 8. The operating system adds an improved file system, drivers, network stack, security components, and media and graphics support. Using the NT kernel, Windows Phone can now support multi-core CPUs of up to 64 cores, as well as 1280×720 and 1280×768 resolutions, in addition to the base 800×480 resolution already available on Windows Phone 7. Furthermore, Windows Phone 8 also adds support for MicroSD cards, which are commonly used to add extra storage to phones. Support for 1080p screens was added in October 2013 with the GDR3 update. Due to the switch to the NT kernel, Windows Phone 8 also supports native 128-bit BitLocker encryption and Secure Boot, as well as the NTFS file system. Web Internet Explorer 10 is the default browser in Windows Phone 8 and carries over key improvements also found in the desktop version. The navigation interface has been simplified down to a single customizable button (defaulting to stop/refresh) and the address bar. While users can change the button to a 'Back' button, there is no way to add a 'Forward' button. However, as the browser supports swipe navigation both forwards and back, this is a minor issue. Multitasking Unlike its predecessor, Windows Phone 8 uses true multitasking, allowing developers to create apps that can run in the background and resume instantly. A user can switch between "active" tasks by pressing and holding the Back button, but any application listed may be suspended or terminated under certain conditions, such as a network connection being established or battery power running low. An app running in the background may also be suspended automatically if the user has not opened it for a long time. The user can close applications by opening the multitasking view and pressing the "X" button in the right-hand corner of each application window, a feature that was added in Update 3. Kids Corner Windows Phone 8 adds Kids Corner, which operates as a kind of "guest mode". The user chooses which applications and games appear on the Kids Corner. When Kids Corner is activated, apps and games installed on the device can be played or accessed without touching the data of the main user signed into the Windows Phone. Rooms Rooms is a feature added specifically for group messaging and communication. Using Rooms, users can contact and see Facebook and Twitter updates only from members of the group created. Members of the group can also share instant messages and photos from within the room. These messages are shared only with the other room members. Microsoft removed this feature in March 2015. Driving Mode With the release of Update 3 in late 2013, pairing a Windows Phone 8 device with a car via Bluetooth automatically activates "Driving Mode", a specialized UI designed for using a mobile device while driving. Data Sense Data Sense allows users to set data usage limits based on their individual plan. Data Sense can restrict background data when the user is near their set limit (a heart icon is used to notify the user when background tasks are being automatically stopped). 
Although this feature was originally exclusive to Verizon phones in the United States, the GDR2 update released in July 2013 made Data Sense available to all Windows Phone 8 handsets. NFC and Wallet Select Windows Phones running Windows Phone 8 add NFC capability, which allows for data transfer between two Windows Phone devices, or between a Windows Phone device and a Windows 8 computer or tablet, using a feature called "Tap and Send". In certain markets, NFC support on Windows Phone 8 can also be used to conduct in-person transactions through credit and debit cards stored on the phone through the Wallet application. Carriers may activate the NFC feature through a SIM or through integrated phone hardware. Orange was the first carrier to support NFC on Windows Phone 8. Besides NFC support for transactions, Wallet can also be used to store credit cards in order to make Windows Phone Store and other in-app purchases (also a new feature), and to store coupons and loyalty cards. Syncing The Windows Phone app succeeds the Zune Software as the sync application for transferring music, videos, other multimedia files and office documents between Windows Phone 8 and a Windows 8/Windows RT computer or tablet. Versions for OS X and Windows desktop are also available. Windows Phone 7 devices are not compatible with the PC version of the app, but will work with the Mac version. (Zune is still used for syncing Windows Phone 7 devices with PCs, and thus remains downloadable from the Windows Phone website.) Because Windows Phone 8 identifies itself as an MTP device, Windows Media Player and Windows Explorer may be used to transfer music, videos and other multimedia files, unlike in Windows Phone 7. Videos transferred to a computer are limited to a maximum size of 4 GB. Other features Xbox SmartGlass allows control of an Xbox 360 and Xbox One with a phone (available for Windows Phone, iOS and Android). Xbox Music+Video services support playback of audio and video files in Windows Phone, as well as music purchases. Video purchases were made available with the release of a standalone version of Xbox Video in late 2013 that can be downloaded from the Windows Phone Store. Native code support (C++). Toast notifications sent by apps and app developers using the Microsoft Push Notification Service. Simplified porting of Windows 8 apps to Windows Phone 8 (compatibility with Windows 8 "Modern UI" apps). Remote device management of Windows Phone similar to management of Windows PCs. VoIP and video chat integration for any VoIP or video chat app (integrates into the phone dialer and People hub). Firmware over the air for Windows Phone updates. A minimum of 36 months of Windows Phone updates for Windows Phone 8 devices. The Camera app now supports "lenses", which allow third parties to skin and add features to the camera interface. Native screen capture is added by pressing the home and power buttons simultaneously. Hebrew language support was added, enabling Microsoft to introduce Windows Phone to the Israeli market. Hardware specifications Version history Reception Reviewers generally praised the increased capabilities of Windows Phone 8, but noted the smaller app selection when compared to other phones. Brad Molen of Engadget wrote that "Windows Phone 8 is precisely what we wanted to see come out of Redmond in the first place," and praised the more customizable Start Screen, compatibility with Windows 8, and improved NFC support. However, Molen also noted the drawback of a lack of apps in the Windows Phone Store. 
The Verge gave the OS a 7.9/10 rating, stating that "Redmond is presenting one of the most compelling ecosystem stories in the business right now," but criticized the lack of a unified notifications center. Alexandra Chang of Wired gave Windows Phone 8 an 8/10, noting improvement in features previously lacking in Windows Phone 7, such as multi-core processor support, faster Internet browsing, and the switch from Bing Maps to Nokia Maps, but also criticized the smaller selection of apps. Usage IDC reported that in Q1 2013, the first full quarter in which WP8 was available in most countries, Windows Phone market share jumped to 3.2% of the worldwide smartphone market, allowing the OS to overtake BlackBerry OS as the third-largest mobile operating system by usage. Roughly a year after the release of WP8, Kantar reported in October 2013 that Windows Phone had grown its market share substantially, to 4.8% in the United States and 10.2% in Europe. Similar statistics from Gartner for Q3 2013 indicated that Windows Phone's global market share increased 123% from the same period in 2012, to 3.6%. In Q1 2014, IDC reported that the global market share of Windows Phone had dropped to 2.7%. See also List of Windows Phone 8 devices References External links Official website (Archive) Windows Phone Phone 8 Smartphones
27689271
https://en.wikipedia.org/wiki/Maker%20culture
Maker culture
The maker culture is a contemporary subculture representing a technology-based extension of DIY culture that intersects with hardware-oriented parts of hacker culture and revels in the creation of new devices as well as tinkering with existing ones. The maker culture in general supports open-source hardware. Typical interests enjoyed by the maker culture include engineering-oriented pursuits such as electronics, robotics, 3-D printing, and the use of computer numeric control tools, as well as more traditional activities such as metalworking, woodworking, and, mainly, its predecessor, traditional arts and crafts. The subculture stresses a cut-and-paste approach to standardized hobbyist technologies, and encourages cookbook re-use of designs published on websites and in maker-oriented publications. There is a strong focus on using and learning practical skills and applying them to reference designs. There is also growing work on equity in the maker culture. Philosophical emphasis The maker movement is a social movement with an artisan spirit. Promoting equity in the maker movement is fundamental to its success in democratizing access to STEAM and other tech-rich and art domains. Maker culture emphasizes learning-through-doing (active learning) in a social environment. Maker culture emphasizes informal, networked, peer-led, and shared learning motivated by fun and self-fulfillment. Maker culture encourages novel applications of technologies and the exploration of intersections between traditionally separate domains and ways of working, including metal-working, calligraphy, film making, and computer programming. Community interaction and knowledge sharing are often mediated through networked technologies, with websites and social media tools forming the basis of knowledge repositories and a central channel for information sharing and exchange of ideas, and focused through social meetings in shared spaces such as hackerspaces. Maker culture has attracted the interest of educators concerned about students’ disengagement from STEM subjects (science, technology, engineering and mathematics) in formal educational settings. Maker culture is seen as having the potential to contribute to a more participatory approach and create new pathways into topics that will make them more alive and relevant to learners. Some say that the maker movement is a reaction to the devaluing of physical exploration and the growing sense of disconnection with the physical world in modern cities. Many products produced by the maker communities have a focus on health (food), sustainable development, environmentalism and local culture, and can, from that point of view, also be seen as a negative response to disposables, globalised mass production, the power of chain stores, multinationals and consumerism. In reaction to the rise of maker culture, Barack Obama pledged to open several national research and development facilities to the public. In addition, the U.S. federal government renamed one of its national centers "America Makes". The methods of digital fabrication, previously the exclusive domain of institutions, have made making on a personal scale accessible, following a logical and economic progression similar to the transition from minicomputers to personal computers in the microcomputer revolution of the 1970s. In 2005, Dale Dougherty launched Make magazine to serve the growing community, followed by the launch of Maker Faire in 2006. 
The term "maker movement", coined by Dougherty, grew into a full-fledged industry based on the growing number of DIYers who want to build something rather than buy it. Spurred primarily by the advent of RepRap 3D printing for the fabrication of prototypes, declining cost and broad adoption have opened up new realms of innovation. As it has become cost-effective to make just one item for prototyping (or a small number of household items), this approach can be described as personal fabrication for "a market of one person". Makerspaces The rise of the maker culture is closely associated with the rise of hackerspaces, fab labs and other "makerspaces", of which there are now many around the world, including over 100 each in Germany and the United States. Hackerspaces allow like-minded individuals to share ideas, tools, and skillsets. Some notable hackerspaces which have been linked with the maker culture include Artisan's Asylum, Dallas Makerspace, Noisebridge, NYC Resistor, Pumping Station: One, and TechShop. In addition, those who identify with the subculture can be found at more traditional universities with a technical orientation, such as MIT and Carnegie Mellon (specifically around "shop" areas like the MIT Hobby Shop and CMU Robotics Club). As maker culture becomes more popular, hackerspaces and fab labs are becoming more common in universities and public libraries. The federal government has started adopting the concept of fully open makerspaces within its agencies, the first of which (SpaceShop Rapid Prototyping Lab) resides at NASA Ames Research Center. The labs are more popular in Europe than in the US: about three times as many exist there. Outside Europe and the US, the maker culture is also on the rise, with several hacker- or makerspaces being landmarks in their respective cities' entrepreneurial and educational landscape. More precisely: HackerspaceSG in Singapore was set up by the team now leading the city-state's (and, arguably, South-East Asia's) most prominent accelerator, JFDI.Asia. Lamba Labs in Beirut is recognized as a hackerspace where people can collaborate freely, in a city often divided by its different ethnic and religious groups. Xinchejian in Shanghai is China's first hackerspace, which allows for innovation and collaboration in a country known for its strong internet censorship. With the rise of cities, which will host 60% of the human population by 2030, hackerspaces, fab labs and makerspaces will likely gain traction, as they are places for local entrepreneurs to gather and collaborate, providing local solutions to environmental, social or economic issues. In this regard, the Institute for the Future has launched Maker Cities as "an open and collaborative online game, to generate ideas about how citizens are changing work, production, governance, learning, well-being, and their neighborhoods, and what this means for the future". Tools and hardware Cloud Cloud computing describes a family of tools in service of the maker movement, enabling increased collaboration, digital workflow, distributed manufacturing (i.e. the download of files that translate directly into objects via a digitized manufacturing process) and the sharing economy. The open-source movement, initially focused on software, has combined with these tools and been expanding into open-source hardware, assisted by easy access to online plans (in the cloud) and licensing agreements. 
Some examples of cloud-based tools include online project repositories like Appropedia and Thingiverse, version-controlled collaborative platforms like GitHub and Wevolver, knowledge-sharing platforms like Instructables, Wikipedia and other wikis, including wikiHow and Wikifab, and platforms for distributed manufacturing like Shapeways and 100k Garages. Computers Programmable microcontrollers and single-board computers like the Arduino, Raspberry Pi, BeagleBone Black, and Intel's Galileo and Edison, many of which are open source, are easy to program and connect to devices such as sensors, displays, and actuators. This lowers the barrier to entry for hardware development. Combined with the cloud, this technology enables the Internet of Things. Digital fabrication Desktop 3D printing is now possible in various plastics and metals. In combination with DIY open-source microelectronics, 3D printers can even replicate themselves, as in the RepRap project. Digital fabrication also includes various subtractive fabrication technologies, e.g. laser cutting, CNC milling, and knitting machines. Creating one's own designs for digital fabrication requires digital design tools, like SolidWorks, Autodesk products, and Rhinoceros 3D. More recently, less expensive or easier-to-use software has emerged. Free, open-source software such as FreeCAD can be extremely useful in the design process. Autodesk's Fusion 360 is free for start-ups and individuals, and Onshape and Tinkercad are browser-based digital design software. Online project repositories make many parts available for digital fabrication, even for people who are unable to do their own design work. Opendesk is one example of a company which has made a business by designing and hosting projects for distributed digital fabrication. Funding platforms Patreon and Kickstarter are two examples of distributed funding platforms key to the maker movement. Hand tools Maker culture is not all about new, digital technologies. Traditional and analog tools remain crucial to the movement. Traditional tools are often more familiar and accessible, which is key to maker culture. In many places and projects where digital fabrication tools are simply not suitable, hand tools are. Other types of making Maker culture involves many types of making – this section reviews some of the major types. Amateur scientific equipment This involves making scientific instruments for citizen science or open-source labs. With the advent of low-cost digital manufacturing, it is becoming increasingly common for scientists as well as amateurs to fabricate their own scientific apparatus from open-source hardware designs. Docubricks is a repository of open-source science hardware. Biology, food and composting Examples of maker culture in food production include baking, homebrewing, winemaking, home coffee roasting, vegoil, pickling, sausage-making, cheesemaking, and yogurt and pastry production. This can also extend into urban agriculture, composting and synthetic biology. Clothes Clothes can include sew and no-sew DIY hacks. Clothing can also include knitted or crocheted clothing and accessories. Some knitters may use knitting machines with varying degrees of automatic patterning. Fully electronic knitting machines can be interfaced to computers running computer-aided design software. Arduino boards have been interfaced to electronic knitting machines to further automate the process. Free People, a popular clothing retailer for young women, often hosts craft nights inside the doors of its Anthropologie locations. 
Cosmetics Maker cosmetics include perfumes, creams, lotions, shampoos, and eye shadow. Tool kits for maker cosmetics can include beakers, digital scales, laboratory thermometers (if possible, from −20 to 110 °C), pH paper, glass rods, plastic spatulas, and alcohol spray for disinfecting. Perfumes can be created at home using ethanol (96%, or even vodka or Everclear), essential oils or fragrance oils, infused oils, even flavour extracts (such as pure vanilla extract), distilled or spring water and glycerine. Tools include glass bottles, a glass jar, measuring cups/measuring spoons, a dropper, a funnel, and aluminum foil or wrapping paper. Musical instruments The concept of homemade and experimental instruments in music has roots predating the maker movement, from complicated experiments, with figures such as Reed Ghazala and Michel Waisvisz pioneering early circuit bending techniques, to simple projects such as the cigar box guitar. Bart Hopkin published the magazine Experimental Musical Instruments for 15 years, followed by a series of books about instrument building. Organizations such as Zvex, WORM, STEIM, Death by Audio, and Casper Electronics cater to the do-it-yourself audience, while musicians like Nicolas Collins and Yuri Landman create and perform with custom-made and experimental instruments. Synth DIY While still living at home, Hugh Le Caine began a lifelong interest in electronic music and sound generation. In 1937, he designed an electronic free reed organ, and in the mid-1940s, he built the Electronic Sackbut, now recognised as one of the first synthesizers. In 1953, Robert Moog produced his own theremin design, and the following year he published an article on the theremin in Radio and Television News. In the same year, he founded R.A. Moog, selling theremins and theremin kits by mail order from his home. One of his customers, Raymond Scott, rewired Moog's theremin for control by keyboard, creating the Clavivox. John Simonton founded PAiA Electronics in Oklahoma City in 1967 and began offering various small electronics kits through mail order. Starting in 1972, PAiA began producing analog synthesizer kits, in both modular and all-in-one form. See also Eurorack, DIY and Open Source Tool making Makers can also make or fabricate their own tools. This includes knives, hand tools, lathes, 3-D printers, woodworking tools, etc. Vehicles A kit car, also known as a "component car", is an automobile that is available as a set of parts that a manufacturer sells and the buyer then assembles into a functioning car. Car tuning can include electric vehicle conversion. Motorcycle making and conversions are also represented. As examples: Tinker Bike is an open-source motorcycle kit adaptable to recycled components; NightShift Bikes is a small, makerist project in custom, DIY electric motorcycle conversions. Bicycles, too, have a DIY, maker-style community. Zenga Bros' Tall Bikes are one example. Community bike workshops are a specific type of makerspace. Media MAKE (a magazine published since 2004 by O'Reilly Media) is considered a "central organ of the Maker Movement," and its founder, Dale Dougherty, is widely considered the founder of the Movement. Other media outlets associated with the movement include Wamungo, Hackaday, Makery, and the popular weblog Boing Boing. 
Boing Boing editor Cory Doctorow has written a novel, Makers, which he describes as being "a book about people who hack hardware, business-models, and living arrangements to discover ways of staying alive and happy even when the economy is falling down the toilet". In 2016, Intel sponsored a reality TV show, America's Greatest Makers, in which 24 teams of makers competed for $1 million. Maker Faires Since 2006 the subculture has held regular events around the world, Maker Faires, which in 2012 drew a crowd of 120,000 attendees. Smaller, community-driven Maker Faires, referred to as Mini Maker Faires, are also held in various places where an O'Reilly-organised Maker Faire has not yet been held. Maker Faire provides a Mini Maker Faire starter kit to encourage the spread of local Maker Faire events. Following the Maker Faire model, similar events which don't use the Maker Faire brand have emerged around the world. Maker Film Fest A Maker Film Festival was announced for August 2014 at the Powerhouse Science Center in Durango, Colorado, featuring "Films About Makers, and Makers Making Movies." PPE Production in Response to COVID-19 The Maker movement galvanized in response to the outbreak of the COVID-19 pandemic, with participants initially directing their skills toward designing open-source ventilators. They subsequently targeted production of personal protective equipment (PPE). Disruption of supply chains was a mounting problem, particularly in the early days of the pandemic, and compounded the shortages related to the COVID-19 pandemic in the medical sector. The response was largely regional, spread across 86 countries on six continents; participants coordinated their responses and designs and shared insights with each other through intermediary organizations such as Tikkun Olam Makers, the Fab Foundation and Open Source Medical Supplies, which together included more than 70,000 people. National movements emerged in Germany, Brazil, Romania, France, Spain, India, and the United Kingdom. These movements used distributed manufacturing methods; some cooperated with local government entities, local police and the national military to help locate supply shortages and manage distribution. Total production from the maker community exceeded 48.3 million units, totaling a market value of about $271 million. The most-produced items included face shields (25 million), medical gowns (8 million) and face masks (6 million). The primary modes of production utilized were familiar tools like 3D printing, laser cutting or sewing machines, but multiple maker organizations scaled their production output by pooling funds to afford high-output methods like die cutting or injection molding. Criticisms The maker movement has at times been criticized for not fulfilling its goals of inclusivity and democratization. The most famous of these critiques come from Deb Chachra's piece Why I Am Not a Maker in The Atlantic, criticizing the movement's gendered history and present; Evgeny Morozov's Making It in The New Yorker, challenging the movement's potential to actually disrupt or democratize innovation; and Will Holman's The Toaster Paradox, about the challenges Thomas Thwaites' Toaster Project poses to the DIY and "maker impulse." Others criticize the maker movement as not even being a movement, and posit that a fundamental hypocrisy extends to limit the scope and impact of every aspect of the "movement." Gender Over the past years, various incidents have heightened awareness of gender inclusiveness issues within and outside the movement. 
A 2013 discussion on the public discussion list at lists.hackerspaces.org highlighted the problems women experience within maker culture. A frequently cited message from this discussion is the contribution by David Powell, who wrote: “If a hackerspace has one female and she wants more females in the hackerspace then she should start a campaign to find more females. It could be that she host a class about e-textiles or whatever it is females like to talk about.” This post outraged many. Another example is that of Dale Dougherty, considered by many the father of the movement, who publicly discredited Naomi Wu, a Chinese female maker. On his Twitter account, he wrote: “I am questioning who she really is. Naomi is a persona, not a real person. She is several or many people.” After widespread criticism, Dougherty apologised to her two days later. Wu addresses the gender inclusiveness issues within making in her Twitter profile description: “It's all about merit until merit has tits”. As a reaction to the widespread male bias within making communities, women and people of different genders have started setting up feminist makerspaces. Liz Henry, a maker in San Francisco, has set up Double Union, a “supportive community for feminist activism.” Other feminist makerspaces are Mz* Baltazar’s Laboratory in Vienna and Heart of Code in Berlin. See also Autonomous building Bricolage Craft production Do-it-yourself biology Modular design Open-design movement Open-source car SparkFun Electronics STEAM fields STEM education References External links Informal crowd-sourced research by the Ananse Group The Maker Manifesto. Maker Movement, P2P Foundation Do it yourself Subcultures
33215485
https://en.wikipedia.org/wiki/Global%20University
Global University
Global University (GU) is an educational institution in Beirut, Lebanon, established in 1992. Global University currently comprises three faculties: Faculty of Administrative Sciences Faculty of Health Sciences Faculty of Literature and Humanities Academics Faculty of Health Sciences The faculty offers a bachelor's degree and comprises the following academic departments: Department of Nursing Department of Nutrition and Dietetics Department of Physical Therapy Department of Medical Lab Department of Prosthetics and Orthotics (to be announced) Department of Biomedical Science Faculty of Administrative Sciences The faculty offers a bachelor's degree in the following specializations 1- Department of Business Administration a. Management b. Accounting c. Marketing d. Human Resources Management e. Executive Management 2- Department of Information Technology and Computer Science a. Computer Science b. Information Technology and Telecommunications 3- Department of Management Information Systems a. Management Information Systems b. Health Management Information Systems 4- Master of Business Administration (MBA) 5- Master of Information Technology and Communications Faculty of Literature and Humanities The faculty offers a bachelor's degree in the following specializations 1- Department of Education a. English and Social Studies Education b. Math Education c. Science Education d. Math and Science Education 2- Department of Arabic Language 3- Department of Foreign Languages and Translation 4- Teaching Diploma 5- Master of Education Cooperation Agreements Global University Joins International Association of Universities (IAU) Global Joins SFIPA ASONAM 2010 OSINT_WM 2010 Global University signs a scientific cooperation agreement with the American University in Greece Lebanese University Agreement Damascus University Agreement Al Zaytouna University Agreement Arab Center for Nutrition Agreement Ain Shams University Agreement INPT Morocco Agreement Rafic Hariri University Hospital Agreement AL SAHEL Hospital Agreement ETAG/EOQ Initiative Oracle Academy References External links http://www.gu.edu.lb/ https://web.archive.org/web/20110813194907/http://www.gu.edu.lb/GU_Ar_News/ Educational institutions established in 1992 Universities in Lebanon 1992 establishments in Lebanon
2386169
https://en.wikipedia.org/wiki/8th%20Weapons%20Squadron
8th Weapons Squadron
The 8th Weapons Squadron is a non-flying United States Air Force unit, assigned to the USAF Weapons School at Nellis Air Force Base, Nevada. The squadron inherited the lineage of the 8th Airborne Command and Control Squadron. The 8th's history includes flying cargo aircraft to carry personnel and munitions around the South Pacific during World War II. Known then as the 8th Combat Cargo Squadron, the unit flew Curtiss C-46 Commandos and Douglas C-47 Skytrains that likely shared ramp space with the Lockheed P-38 Lightnings of the 433d Fighter Squadron (now the F-15C Weapons Squadron) in New Guinea and the Philippines in 1944 and 1945. The 8th Airborne Command and Control Squadron flew the EC-135 to provide airborne command and control for fighter squadrons deploying over the Atlantic Ocean, and supported the movement of key Air Combat Command leadership. Overview The squadron provides advanced training for Airborne Warning and Control System and Ground Theater Air Control System officers. It also trains weapons officers for the E-3 Airborne Warning and Control System (AWACS), Control and Reporting Center (CRC), RC-135 Rivet Joint, EC-130H Compass Call and E-8 Joint Surveillance Target Attack Radar System (JSTARS) communities. History World War II The first predecessor of the squadron was the 8th Ferrying Squadron, which ferried aircraft to combat theaters and to Brazil from the Southeast United States under the lend-lease program using the Air Transport Command South Atlantic air ferry route, March 1942 – March 1944. The second predecessor of the squadron provided air transportation in the Southwestern and Western Pacific, November 1944 – September 1945, as the 8th Combat Cargo Squadron, operating under Fifth Air Force. It operated from Biak to fly passengers and cargo to bases in Australia, New Guinea, the Admiralties, and the Philippines. It also dropped supplies to US and guerrilla forces in the Philippines. The squadron moved to Leyte in May 1945, maintained flights to bases in Australia, New Guinea, and the Philippines, transported personnel and supplies to the Ryukyus, and evacuated casualties on return flights. It transported personnel and equipment of the occupation forces to Japan and ferried liberated prisoners of war to the Philippines. The squadron moved to Japan in September 1945, where it operated until being inactivated in January 1946. Helicopter operations The third predecessor of the squadron was activated as the 8th Helicopter Flight under Caribbean Air Command in 1949. It operated cargo flights from Albrook Air Force Base providing logistical and supply support to installations in Panama and Latin America, October 1949 – February 1952. Airborne command and control Reactivated in 1972, the squadron served as an EC-135 airborne command post for tactical deployments worldwide, February 1972 – May 1996, and was involved in every United States combat operation since the Vietnam War. It deployed personnel and equipment to Spain and airfield personnel and equipment into Saudi Arabia, August 1990 – c. March 1991, as part of Operation Desert Shield/Desert Storm. From 1978 The current squadron mission was formed in 1978, when the concept of Air Weapons Controller was added to the established concept of Fighter Weapons. The first Air Weapons Controllers graduated in December 1984 to become Fighter Weapons School instructors. Instruction at the 8th Weapons Squadron continues today in the fields of the United States Air Force tactical air control system (TACS), Air Battle Management (ABM), Electronic Warfare Support (ES), Electronic Attack (EA) and their integration in operations. 
The course has graduated over 350 instructors who have been key to every conflict and contingency since 1985. Lineage 8th Ferrying Squadron Constituted as the 8th Air Corps Ferrying Squadron on 18 February 1942 Activated on 24 March 1942 Redesignated 8th Ferrying Squadron on 12 May 1943 Disbanded on 31 March 1944 Reconstituted and consolidated with the 8th Tactical Deployment Control Squadron, the 8th Combat Cargo Squadron and the 8th Helicopter Flight as the 8th Tactical Deployment Control Squadron on 19 September 1985 8th Combat Cargo Squadron Constituted as the 8th Combat Cargo Squadron on 25 April 1944 Activated on 1 May 1944 Inactivated on 15 January 1946 Disbanded on 8 October 1948 Reconstituted and consolidated with the 8th Tactical Deployment Control Squadron, the 8th Ferrying Squadron and the 8th Helicopter Flight as the 8th Tactical Deployment Control Squadron on 19 September 1985 8th Helicopter Flight Constituted as the 8th Helicopter Flight on 7 October 1949 Activated on 27 October 1949 Inactivated on 19 February 1952 Activated on 14 March 1952 Inactivated on 16 December 1952 Consolidated with the 8th Tactical Deployment Control Squadron, the 8th Ferrying Squadron and the 8th Combat Cargo Squadron as the 8th Tactical Deployment Control Squadron on 19 September 1985 8th Weapons Squadron Constituted as the 8th Airborne Command and Control Squadron on 14 August 1969 Activated on 15 October 1969 Inactivated on 8 March 1971 Activated on 1 February 1972 Redesignated 8th Tactical Deployment Control Squadron on 30 April 1974 Consolidated with the 8th Ferrying Squadron, the 8th Combat Cargo Squadron and the 8th Helicopter Flight on 19 September 1985 Redesignated 8th Air Deployment Control Squadron on 1 November 1990 Redesignated 8th Airborne Command and Control Squadron on 1 July 1994 Inactivated on 15 May 1996 Redesignated 8th Weapons Squadron on 24 January 2003 Activated on 3 February 2003 Assignments Nashville Sector, Ferrying Command (later Nashville Sector, Domestic Wing, Ferrying Command; 4th Ferrying Group), 24 March 1942 – 31 March 1944 2d Combat Cargo Group, 1 May 1944 – 15 January 1946 (attached to 5298th Troop Carrier Wing (Provisional), November–December 1944) 5700th Air Base Group, 27 October 1949 – 19 February 1952 Eighteenth Air Force (attached to 16th Troop Carrier Squadron), 14 March–16 December 1952 4500th Air Base Wing, 15 October 1969 – 8 March 1971 Tactical Air Command, 1 February 1972 552d Airborne Warning and Control Wing (later 552d Airborne Warning and Control Division; 552d Airborne Warning and Control Wing), 1 January 1978 28th Air Division, 1 March 1986 552d Operations Group, 29 May 1992 – 15 May 1996 USAF Weapons School, 3 February 2003 – present Stations Berry Field, Tennessee, 24 March 1942 Memphis Municipal Airport, Tennessee, 9 December 1942 – 31 March 1944 Syracuse Army Air Base, New York, 1 May 1944 Baer Field, Indiana, 6–27 October 1944 Finschhafen Airfield, New Guinea, November 1944 Mokmer Airfield, Biak, New Guinea, January 1945 Dulag Airfield, Leyte, 19 March 1945 Okinawa, 25 August 1945 Yokota Air Base, Japan, September 1945 – 15 January 1946 Albrook Air Force Base, Panama Canal Zone, 27 October 1949 – 19 February 1952 Sewart Air Force Base, Tennessee, 14 March–16 December 1952 Langley Air Force Base, Virginia, 15 October 1969 – 8 March 1971 Seymour Johnson Air Force Base, North Carolina, 1 February 1972 Tinker Air Force Base, Oklahoma, 15 June 1978 – 15 May 1996 Nellis Air Force Base, Nevada, 3 February 2003 – present Aircraft None (ferried aircraft), 
1942–1944 Curtiss C-46 Commando, 1944–1945 Douglas C-47 Skytrain, 1944, 1945 Sikorsky H-5 Dragonfly, 1949–1952 Sikorsky H-19 Chickasaw (Helicopter), 1952 Lockheed EC-121 Warning Star, 1969–1970 Boeing C-135 Stratolifter, 1972–1996 Boeing EC-135, 1972–1996 References Notes Bibliography Weapons 0008 Military units and formations established in 2003
2091393
https://en.wikipedia.org/wiki/Fault%20tolerance
Fault tolerance
Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of, or one or more faults within, some of its components. If its operating quality decreases at all, the decrease is proportional to the severity of the failure, as compared to a naively designed system, in which even a small failure can cause total breakdown. Fault tolerance is particularly sought after in high-availability, mission-critical, or even life-critical systems. The ability to maintain functionality when portions of a system break down is referred to as graceful degradation. A fault-tolerant design enables a system to continue its intended operation, possibly at a reduced level, rather than failing completely, when some part of the system fails. The term is most commonly used to describe computer systems designed to continue more or less fully operational with, perhaps, a reduction in throughput or an increase in response time in the event of some partial failure. That is, the system as a whole is not stopped due to problems either in the hardware or the software. An example in another field is a motor vehicle designed so it will continue to be drivable if one of the tires is punctured, or a structure that is able to retain its integrity in the presence of damage due to causes such as fatigue, corrosion, manufacturing flaws, or impact. Within the scope of an individual system, fault tolerance can be achieved by anticipating exceptional conditions and building the system to cope with them, and, in general, aiming for self-stabilization so that the system converges towards an error-free state. However, if the consequences of a system failure are catastrophic, or the cost of making it sufficiently reliable is very high, a better solution may be to use some form of duplication. In any case, if the consequence of a system failure is catastrophic, the system must be able to use reversion to fall back to a safe mode. This is similar to roll-back recovery but can be a human action if humans are present in the loop. History The first known fault-tolerant computer was SAPO, built in 1951 in Czechoslovakia by Antonín Svoboda. Its basic design was magnetic drums connected via relays, with a voting method of memory error detection (triple modular redundancy). Several other machines were developed along this line, mostly for military use. Eventually, they separated into three distinct categories: machines that would last a long time without any maintenance, such as the ones used on NASA space probes and satellites; computers that were very dependable but required constant monitoring, such as those used to monitor and control nuclear power plants or supercollider experiments; and finally, computers with a high amount of runtime which would be under heavy use, such as many of the supercomputers used by insurance companies for their probability monitoring. Most of the development in the so-called LLNM (Long Life, No Maintenance) computing was done by NASA during the 1960s, in preparation for Project Apollo and other research aspects. NASA's first machine went into a space observatory, and their second attempt, the JSTAR computer, was used in Voyager. This computer had a backup of memory arrays to use memory recovery methods and thus it was called the JPL Self-Testing-And-Repairing computer. It could detect its own errors and fix them or bring up redundant modules as needed. The computer is still working today. 
Hyper-dependable computers were pioneered mostly by aircraft manufacturers, nuclear power companies, and the railroad industry in the USA. These needed computers with massive amounts of uptime that would fail gracefully enough during a fault to allow continued operation, while relying on the fact that the computer output would be constantly monitored by humans to detect faults. Again, IBM developed the first computer of this kind for NASA for guidance of Saturn V rockets, but later on BNSF, Unisys, and General Electric built their own. In the 1970s, much work happened in the field. For instance, the F-14 CADC had built-in self-test and redundancy. In general, the early efforts at fault-tolerant designs were focused mainly on internal diagnosis, where a fault would indicate something was failing and a worker could replace it. SAPO, for instance, had a method by which faulty memory drums would emit a noise before failure. Later efforts showed that to be fully effective, the system had to be self-repairing and diagnosing – isolating a fault and then implementing a redundant backup while alerting a need for repair. This is known as N-model redundancy, where faults cause automatic fail-safes and a warning to the operator, and it is still the most common form of level one fault-tolerant design in use today. Voting was another initial method, as discussed above, with multiple redundant backups operating constantly and checking each other's results, with the outcome that if, for example, four components reported an answer of 5 and one component reported an answer of 6, the other four would "vote" that the fifth component was faulty and have it taken out of service. This is called M out of N majority voting. Historically, the trend has been to move away from N-model redundancy and toward M out of N, because the growing complexity of systems made it difficult to ensure that the transition from fault-negative to fault-positive would not disrupt operations. Tandem and Stratus were among the first companies specializing in the design of fault-tolerant computer systems for online transaction processing. Examples Hardware fault tolerance sometimes requires that broken parts be taken out and replaced with new parts while the system is still operational (in computing known as hot swapping). Such a system implemented with a single backup is known as single point tolerant and represents the vast majority of fault-tolerant systems. In such systems the mean time between failures should be long enough for the operators to have sufficient time to fix the broken devices (mean time to repair) before the backup also fails. It is helpful if the time between failures is as long as possible, but this is not specifically required in a fault-tolerant system. Fault tolerance is notably successful in computer applications. Tandem Computers built their entire business on such machines, which used single-point tolerance to create their NonStop systems with uptimes measured in years. Fail-safe architectures may also encompass the computer software, for example by process replication. Data formats may also be designed to degrade gracefully. HTML, for example, is designed to be forward compatible, allowing Web browsers to ignore new and unsupported HTML entities without causing the document to be unusable. 
Additionally, some sites, including popular platforms such as Twitter (until December 2020), provide an optional lightweight front end that does not rely on JavaScript and has a minimal layout, to ensure wide accessibility and outreach, such as on game consoles with limited web browsing capabilities. Terminology A highly fault-tolerant system might continue at the same level of performance even though one or more components have failed. For example, a building with a backup electrical generator will provide the same voltage to wall outlets even if the grid power fails. A system that is designed to fail safe, or fail-secure, or fail gracefully, whether it functions at a reduced level or fails completely, does so in a way that protects people, property, or data from injury, damage, intrusion, or disclosure. In computers, a program might fail-safe by executing a graceful exit (as opposed to an uncontrolled crash) in order to prevent data corruption after experiencing an error. A similar distinction is made between "failing well" and "failing badly". Fail-deadly is the opposite strategy, which can be used in weapon systems that are designed to kill or injure targets even if part of the system is damaged or destroyed. A system that is designed to experience graceful degradation, or to fail soft (used in computing, similar to "fail safe"; see Stallings, W. (2009): Operating Systems: Internals and Design Principles, sixth edition) operates at a reduced level of performance after some component failures. For example, a building may operate lighting at reduced levels and elevators at reduced speeds if grid power fails, rather than either trapping people in the dark completely or continuing to operate at full power. In computing, an example of graceful degradation is that if insufficient network bandwidth is available to stream an online video, a lower-resolution version might be streamed in place of the high-resolution version (a minimal code sketch of this idea follows the single fault condition definition below). Progressive enhancement is an example in computing, where web pages are available in a basic functional format for older, small-screen, or limited-capability web browsers, but in an enhanced version for browsers capable of handling additional technologies or that have a larger display available. In fault-tolerant computer systems, programs that are considered robust are designed to continue operation despite an error, exception, or invalid input, instead of crashing completely. Software brittleness is the opposite of robustness. Resilient networks continue to transmit data despite the failure of some links or nodes; resilient buildings and infrastructure are likewise expected to prevent complete failure in situations like earthquakes, floods, or collisions. A system with high failure transparency will alert users that a component failure has occurred, even if it continues to operate with full performance, so that failure can be repaired or imminent complete failure anticipated. Likewise, a fail-fast component is designed to report at the first point of failure, rather than allow downstream components to fail and generate reports then. This allows easier diagnosis of the underlying problem, and may prevent improper operation in a broken state. Single fault condition A single fault condition is a situation where one means for protection against a hazard is defective. If a single fault condition results unavoidably in another single fault condition, the two failures are considered as one single fault condition. 
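As a concrete illustration of the graceful degradation example referenced above, here is a minimal Python sketch; the resolution tiers and bandwidth thresholds are hypothetical values chosen for illustration, not figures from any real streaming service.

RESOLUTION_TIERS = [          # (minimum Mbit/s required, label) - hypothetical
    (25.0, "2160p"),
    (8.0, "1080p"),
    (3.0, "720p"),
    (1.0, "480p"),
]

def pick_resolution(available_mbps: float) -> str:
    """Return the highest resolution the measured bandwidth can sustain,
    degrading step by step instead of failing outright."""
    for required, label in RESOLUTION_TIERS:
        if available_mbps >= required:
            return label
    return "audio only"  # last resort: degraded but still functioning

# As bandwidth falls, service quality degrades gracefully rather than stopping:
assert pick_resolution(30.0) == "2160p"
assert pick_resolution(5.0) == "720p"
assert pick_resolution(0.5) == "audio only"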
Criteria Providing fault-tolerant design for every component is normally not an option. The associated redundancy brings a number of penalties: increase in weight, size, power consumption, and cost, as well as time to design, verify, and test. Therefore, a number of choices have to be examined to determine which components should be fault tolerant: How critical is the component? In a car, the radio is not critical, so this component has less need for fault tolerance. How likely is the component to fail? Some components, like the drive shaft in a car, are not likely to fail, so no fault tolerance is needed. How expensive is it to make the component fault tolerant? Requiring a redundant car engine, for example, would likely be too expensive, both economically and in terms of weight and space, to be considered. An example of a component that passes all the tests is a car's occupant restraint system. While we do not normally think about it, the primary occupant restraint system is gravity. If the vehicle rolls over or undergoes severe g-forces, then this primary method of occupant restraint may fail. Restraining the occupants during such an accident is absolutely critical to safety, so we pass the first test. Accidents causing occupant ejection were quite common before seat belts, so we pass the second test. The cost of a redundant restraint method like seat belts is quite low, both economically and in terms of weight and space, so we pass the third test. Therefore, adding seat belts to all vehicles is an excellent idea. Other "supplemental restraint systems", such as airbags, are more expensive and so pass that test by a smaller margin. Another excellent and long-term example of this principle being put into practice is the braking system: whilst the actual brake mechanisms are critical, they are not particularly prone to sudden (rather than progressive) failure, and are in any case necessarily duplicated to allow even and balanced application of brake force to all wheels. It would also be prohibitively costly to further double-up the main components, and they would add considerable weight. However, the similarly critical systems for actuating the brakes under driver control are inherently less robust, generally using a cable (which can rust, stretch, jam, or snap) or hydraulic fluid (which can leak, boil and develop bubbles, or absorb water and thus lose effectiveness). Thus in most modern cars the footbrake hydraulic brake circuit is diagonally divided to give two smaller points of failure, the loss of either only reducing brake power by 50% and not causing as much dangerous brakeforce imbalance as a straight front-back or left-right split, and should the hydraulic circuit fail completely (a very rare occurrence), there is a failsafe in the form of the cable-actuated parking brake that operates the otherwise relatively weak rear brakes, but can still bring the vehicle to a safe halt in conjunction with transmission/engine braking, so long as the demands on it are in line with normal traffic flow. The cumulatively unlikely combination of total foot brake failure with the need for harsh braking in an emergency will likely result in a collision, but still one at lower speed than would otherwise have been the case. 
In comparison with the foot-pedal-activated service brake, the parking brake itself is a less critical item, and unless it is being used as a one-time backup for the footbrake, will not cause immediate danger if it is found to be nonfunctional at the moment of application. Therefore, no redundancy is built into it per se (and it typically uses a cheaper, lighter, but less hardwearing cable actuation system), and it can suffice, if this happens on a hill, to use the footbrake to momentarily hold the vehicle still before driving off to find a flat piece of road on which to stop. Alternatively, on shallow gradients, the transmission can be shifted into Park, Reverse or First gear, and the transmission lock / engine compression used to hold it stationary, as these mechanisms need only hold the vehicle stationary and do not need the sophistication required to first bring it to a halt.

On motorcycles, a similar level of fail-safety is provided by simpler methods: firstly, the front and rear brake systems are entirely separate, regardless of their method of activation (which can be cable, rod or hydraulic), allowing one to fail entirely whilst leaving the other unaffected. Secondly, the rear brake is relatively strong compared to its automotive cousin, even being a powerful disc on sports models, even though the usual intent is for the front system to provide the vast majority of braking force; as the overall vehicle weight is more central, the rear tyre is generally larger and grippier, and the rider can lean back to put more weight on it, therefore allowing more brake force to be applied before the wheel locks up. On cheaper, slower utility-class machines, even if the front wheel uses a hydraulic disc for extra brake force and easier packaging, the rear will usually be a primitive, somewhat inefficient, but exceptionally robust rod-actuated drum, thanks to the ease of connecting the footpedal to the wheel in this way and, more importantly, the near impossibility of catastrophic failure even if the rest of the machine, like a lot of low-priced bikes after their first few years of use, is on the point of collapse from neglected maintenance.

Requirements

The basic characteristics of fault tolerance require:

No single point of failure – If a system experiences a failure, it must continue to operate without interruption during the repair process.

Fault isolation to the failing component – When a failure occurs, the system must be able to isolate the failure to the offending component. This requires the addition of dedicated failure detection mechanisms that exist only for the purpose of fault isolation. Recovery from a fault condition requires classifying the fault or failing component. The National Institute of Standards and Technology (NIST) categorizes faults based on locality, cause, duration, and effect.

Fault containment to prevent propagation of the failure – Some failure mechanisms can cause a system to fail by propagating the failure to the rest of the system. An example of this kind of failure is the "rogue transmitter" that can swamp legitimate communication in a system and cause overall system failure. Firewalls or other mechanisms that isolate a rogue transmitter or failing component to protect the system are required.

Availability of reversion modes

In addition, fault-tolerant systems are characterized in terms of both planned service outages and unplanned service outages. These are usually measured at the application level and not just at a hardware level.
The figure of merit is called availability and is expressed as a percentage. For example, a five nines system would statistically provide 99.999% availability. Fault-tolerant systems are typically based on the concept of redundancy.

Fault tolerance techniques

Research into the kinds of tolerances needed for critical systems involves a large amount of interdisciplinary work. The more complex the system, the more carefully all possible interactions have to be considered and prepared for. Considering the importance of high-value systems in transport, public utilities and the military, the field of topics that touch on research is very wide: it ranges from obvious subjects such as software modeling, reliability and hardware design to arcane elements such as stochastic models, graph theory, formal or exclusionary logic, parallel processing, remote data transmission, and more.

Replication

Spare components address the first fundamental characteristic of fault tolerance in three ways:

Replication: Providing multiple identical instances of the same system or subsystem, directing tasks or requests to all of them in parallel, and choosing the correct result on the basis of a quorum;

Redundancy: Providing multiple identical instances of the same system and switching to one of the remaining instances in case of a failure (failover);

Diversity: Providing multiple different implementations of the same specification, and using them like replicated systems to cope with errors in a specific implementation.

All implementations of RAID, redundant array of independent disks, except RAID 0, are examples of a fault-tolerant storage device that uses data redundancy.

A lockstep fault-tolerant machine uses replicated elements operating in parallel. At any time, all the replications of each element should be in the same state. The same inputs are provided to each replication, and the same outputs are expected. The outputs of the replications are compared using a voting circuit. A machine with two replications of each element is termed dual modular redundant (DMR). The voting circuit can then only detect a mismatch, and recovery relies on other methods. A machine with three replications of each element is termed triple modular redundant (TMR). The voting circuit can determine which replication is in error when a two-to-one vote is observed. In this case, the voting circuit can output the correct result, and discard the erroneous version. After this, the internal state of the erroneous replication is assumed to be different from that of the other two, and the voting circuit can switch to a DMR mode. This model can be applied to any larger number of replications.

Lockstep fault-tolerant machines are most easily made fully synchronous, with each gate of each replication making the same state transition on the same edge of the clock, and the clocks to the replications being exactly in phase. However, it is possible to build lockstep systems without this requirement. Bringing the replications into synchrony requires making their internal stored states the same. They can be started from a fixed initial state, such as the reset state. Alternatively, the internal state of one replica can be copied to another replica.

One variant of DMR is pair-and-spare. Two replicated elements operate in lockstep as a pair, with a voting circuit that detects any mismatch between their operations and outputs a signal indicating that there is an error. Another pair operates exactly the same way.
A final circuit selects the output of the pair that does not proclaim that it is in error. Pair-and-spare requires four replicas rather than the three of TMR, but has been used commercially.

Failure-oblivious computing

Failure-oblivious computing is a technique that enables computer programs to continue executing despite errors. The technique can be applied in different contexts. First, it can handle invalid memory reads by returning a manufactured value to the program, which in turn makes use of the manufactured value and ignores the former memory value it tried to access. This is in great contrast to typical memory checkers, which inform the program of the error or abort the program. Second, it can be applied to exceptions, where some catch blocks are written or synthesized to catch unexpected exceptions. Furthermore, the execution may be modified several times in a row in order to prevent cascading failures. The approach has performance costs: because the technique rewrites code to insert dynamic checks for address validity, execution time will increase by 80% to 500%.

Recovery shepherding

Recovery shepherding is a lightweight technique to enable software programs to recover from otherwise fatal errors such as null pointer dereference and divide by zero. Compared to the failure-oblivious computing technique, recovery shepherding works on the compiled program binary directly and does not need to recompile the program. It uses the just-in-time binary instrumentation framework Pin. It attaches to the application process when an error occurs, repairs the execution, tracks the repair effects as the execution continues, contains the repair effects within the application process, and detaches from the process after all repair effects are flushed from the process state. It does not interfere with the normal execution of the program and therefore incurs negligible overhead. For 17 of 18 systematically collected real-world null-dereference and divide-by-zero errors, a prototype implementation enables the application to continue to execute to provide acceptable output and service to its users on the error-triggering inputs.

Circuit breaker

The circuit breaker design pattern is a technique to avoid catastrophic failures in distributed systems.

Redundancy

Redundancy is the provision of functional capabilities that would be unnecessary in a fault-free environment. This can consist of backup components that automatically "kick in" if one component fails. For example, large cargo trucks can lose a tire without any major consequences. They have many tires, and no one tire is critical (with the exception of the front tires, which are used to steer, but generally carry less load, each and in total, than the other four to 16, so are less likely to fail). The idea of incorporating redundancy in order to improve the reliability of a system was pioneered by John von Neumann in the 1950s.

Two kinds of redundancy are possible: space redundancy and time redundancy. Space redundancy provides additional components, functions, or data items that are unnecessary for fault-free operation. Space redundancy is further classified into hardware, software and information redundancy, depending on the type of redundant resources added to the system. In time redundancy the computation or data transmission is repeated and the result is compared to a stored copy of the previous result. The current terminology for this kind of testing is "In Service Fault Tolerance Testing", or ISFTT for short.
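The majority voting used in the TMR machines described under Replication above can be sketched in a few lines of code. The following Python sketch is illustrative only: a real voting circuit is hardware, and the replica outputs here are just stand-in values.

```python
# Minimal sketch of triple modular redundancy (TMR) voting: three
# replicas compute the same function and a voter picks the majority
# result, identifying the presumed-faulty replica on a two-to-one vote.
from collections import Counter

def tmr_vote(replica_outputs):
    """Return (majority_value, indices_of_disagreeing_replicas)."""
    counts = Counter(replica_outputs)
    value, votes = counts.most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: all replicas disagree")
    faulty = [i for i, out in enumerate(replica_outputs) if out != value]
    return value, faulty

if __name__ == "__main__":
    # Two healthy replicas and one that has corrupted its output.
    outputs = [42, 42, 46]
    result, faulty = tmr_vote(outputs)
    print(result)  # 42 -- the erroneous version is discarded
    print(faulty)  # [2] -- that replica is masked; the system drops to DMR
```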
Disadvantages

Fault-tolerant design's advantages are obvious, while many of its disadvantages are not:

Interference with fault detection in the same component. To continue the above passenger vehicle example, with either of the fault-tolerant systems it may not be obvious to the driver when a tire has been punctured. This is usually handled with a separate "automated fault-detection system". In the case of the tire, an air pressure monitor detects the loss of pressure and notifies the driver. The alternative is a "manual fault-detection system", such as manually inspecting all tires at each stop.

Interference with fault detection in another component. Another variation of this problem is when fault tolerance in one component prevents fault detection in a different component. For example, if component B performs some operation based on the output from component A, then fault tolerance in B can hide a problem with A. If component B is later changed (to a less fault-tolerant design) the system may fail suddenly, making it appear that the new component B is the problem. Only after the system has been carefully scrutinized will it become clear that the root problem is actually with component A.

Reduction of priority of fault correction. Even if the operator is aware of the fault, having a fault-tolerant system is likely to reduce the importance of repairing the fault. If the faults are not corrected, this will eventually lead to system failure, when the fault-tolerant component fails completely or when all redundant components have also failed.

Test difficulty. For certain critical fault-tolerant systems, such as a nuclear reactor, there is no easy way to verify that the backup components are functional. The most infamous example of this is Chernobyl, where operators tested the emergency backup cooling by disabling primary and secondary cooling. The backup failed, resulting in a core meltdown and massive release of radiation.

Cost. Both fault-tolerant components and redundant components tend to increase cost. This can be a purely economic cost or can include other measures, such as weight. Manned spaceships, for example, have so many redundant and fault-tolerant components that their weight is increased dramatically over unmanned systems, which don't require the same level of safety.

Inferior components. A fault-tolerant design may allow for the use of inferior components, which would have otherwise made the system inoperable. While this practice has the potential to mitigate the cost increase, use of multiple inferior components may lower the reliability of the system to a level equal to, or even worse than, a comparable non-fault-tolerant system.

Related terms

There is a difference between fault tolerance and systems that rarely have problems. For instance, the Western Electric crossbar systems had failure rates of two hours per forty years, and therefore were highly fault resistant. But when a fault did occur they still stopped operating completely, and therefore were not fault tolerant.
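To make the availability figure of merit mentioned earlier concrete, the following short Python sketch converts an availability percentage into allowed downtime per year; the figures are simple arithmetic, not drawn from any particular system.

```python
# Quick check of what "five nines" (99.999%) availability means in
# allowed downtime per year, as discussed under the availability
# figure of merit above.
def downtime_per_year(availability_percent: float) -> float:
    """Return allowed downtime in minutes per year."""
    minutes_per_year = 365.25 * 24 * 60
    return minutes_per_year * (1 - availability_percent / 100)

for nines in (99.9, 99.99, 99.999):
    print(f"{nines}%: {downtime_per_year(nines):.2f} min/year")
# 99.999% works out to roughly 5.26 minutes of downtime per year.
```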
See also

Byzantine fault tolerance
Control reconfiguration
Damage tolerance
Data redundancy
Defence in depth
Elegant degradation
Error detection and correction
Error-tolerant design (human error-tolerant design)
Failure semantics
Fall back and forward
Graceful exit
Intrusion tolerance
List of system quality attributes
Resilience (ecology)
Progressive enhancement
Resilience (network)
Robustness (computer science)
Rollback (data management)
Safe-life design
Self-management (computer science)
Software diversity

References

Reliability engineering
Computer systems
Control engineering
Systems engineering
Software quality
RAID
26149056
https://en.wikipedia.org/wiki/IEC%2062351
IEC 62351
IEC 62351 is a standard developed by WG15 of IEC TC57. It was developed for handling the security of the TC 57 series of protocols, including the IEC 60870-5 series, IEC 60870-6 series, IEC 61850 series, IEC 61970 series and IEC 61968 series. The different security objectives include authentication of data transfer through digital signatures, ensuring only authenticated access, prevention of eavesdropping, prevention of playback and spoofing, and intrusion detection.

Standard details

IEC 62351-1 — Introduction to the standard

IEC 62351-2 — Glossary of terms

IEC 62351-3 — Security for any profiles including TCP/IP.
TLS encryption
Node authentication by means of X.509 certificates
Message authentication

IEC 62351-4 — Security for any profiles including MMS (e.g., ICCP-based IEC 60870-6, IEC 61850, etc.).
Authentication for MMS
TLS (RFC 2246) is inserted between RFC 1006 and RFC 793 to provide transport layer security

IEC 62351-5 — Security for any profiles including IEC 60870-5 (e.g., DNP3 derivative).
TLS for TCP/IP profiles and encryption for serial profiles.

IEC 62351-6 — Security for IEC 61850 profiles.
VLAN use is made mandatory for GOOSE
RFC 2030 to be used for SNTP

IEC 62351-7 — Security through network and system management.
Defines Management Information Bases (MIBs) that are specific to the power industry, to handle network and system management through SNMP-based methods.

IEC 62351-8 — Role-based access control.
Covers the access control of users and automated agents to data objects in power systems by means of role-based access control (RBAC).

IEC 62351-9 — Key Management
Describes the correct and safe usage of safety-critical parameters, e.g. passwords, encryption keys.
Covers the whole life cycle of cryptographic information (enrollment, creation, distribution, installation, usage, storage and removal).
Methods for algorithms using asymmetric cryptography
Handling of digital certificates (public / private key)
Setup of the PKI environment with X.509 certificates
Certificate enrollment by means of SCEP / CMP / EST
Certificate revocation by means of CRL / OCSP
A secure distribution mechanism based on GDOI and the IKEv2 protocol is presented for the usage of symmetric keys, e.g. session keys.

IEC 62351-10 — Security Architecture
Explanation of security architectures for the entire IT infrastructure
Identifying critical points of the communication architecture, e.g. substation control center, substation automation
Appropriate security mechanisms for these requirements, e.g. data encryption, user authentication
Applicability of well-proven standards from the IT domain, e.g. VPN tunnel, secure FTP, HTTPS

IEC 62351-11 — Security for XML Files
Embedding of the original XML content into an XML container
Date of issue and access control for XML data
X.509 signature for authenticity of XML data
Optional data encryption

See also

IEC TC 57
List of IEC Technical Committees

External links

Application of the IEC 62351 at IPCOMM GmbH
Report about the implementation of IEC 62351-7

62351
Electric power
Computer network security
28869382
https://en.wikipedia.org/wiki/Flipboard
Flipboard
Flipboard is a news aggregator and social network aggregation company based in Palo Alto, California, with offices in New York, Vancouver and Beijing. Its software, also known as Flipboard, was first released in July 2010. It aggregates content from social media, news feeds, photo sharing sites and other websites, presents it in magazine format, and allows users to "flip" through the articles, images and videos being shared. Readers can also save stories into Flipboard magazines. As of March 2016, the company claimed there had been 28 million magazines created by users on Flipboard. The service can be accessed via web browser, or by a Flipboard application for Microsoft Windows and macOS, and via mobile apps for iOS and Android. The client software is available at no charge and is localized in 21 languages.

History

Flipboard was originally launched exclusively for iPad in 2010. It launched the iPhone and iPod Touch versions seventeen months later, in December 2011. The company raised more than $200 million in funding from investors, and an additional $50 million from JPMorgan Chase in July 2015.

On May 5, 2012, Flipboard was released for Android phones, beginning with the Samsung Galaxy S3. On May 30, 2012, a beta version of Flipboard for Android was released through its website. A final stable release of Flipboard for Android was released on June 22, 2012, in Google Play. The Windows 8 version of the Flipboard app was also demonstrated during the Microsoft 2013 Build Conference and on the Flipboard blog with a video, although no release date was given. On October 22, 2014, Flipboard for Windows 8 was rolled out to Windows devices, starting with the Nokia Lumia 2520.

In March 2014, Flipboard bought Zite, a magazine-style reading app, from the CNN television network. Flipboard's content filtering, topic engine and recommendations system were integrated from this acquisition. Zite was shut down on December 7, 2015.

In February 2015, Flipboard became available on the web. Up until then, Flipboard had been a mobile app, only available on tablets and mobile phones. The web client provides webpage links on desktop browsers, and lacks some features of the client software.

In February 2017, Flipboard updated its mobile apps for iOS and Android to 4.0, which brought a full redesign to the application and implemented new features such as smart magazines, which allow users to bundle different things together, such as various news sources, people, and hashtags.

On May 29, 2019, Flipboard disclosed a security breach that affected an unspecified number of users between June 2, 2018, and March 23, 2019, and April 21 and 22, 2019, in which customer databases including information such as encrypted passwords and access tokens for third-party services were accessible to an unauthorized party. All passwords and authentication tokens for third-party services were reset, although Flipboard noted that almost all passwords were hashed using the strong bcrypt algorithm (except for some using the insecure and obsolete SHA-1 algorithm, replaced by the service in 2012), and there was no evidence that access to the tokens was abused.

Reception

The reaction to the application was mainly positive, with Techpad calling it a "killer" iPad application. Time magazine named it one of the 50 best inventions of 2010. Apple reviewed Flipboard positively, and named the application Apple's "iPad App of the Year" in 2010.
When a new update of the software added more features, such as support for Google Reader, a web-based aggregator, and content from more publishers, the app received a favorable review from the Houston Chronicle.

Censorship

On May 15, 2011, Flipboard was blocked by the Great Firewall of China. McCue said on his Twitter feed: "China has now officially blocked Flipboard." The company then released its first edition localized for China. Beginning in February 2015, the company started self-censoring users using the application from China. The content guide for China no longer includes Twitter and Facebook. Existing subscriptions for Twitter or Facebook are also automatically removed.

User interface

The application's user interface is designed for intuitive flipping through content. Once the feeds have been set up, the first page seen when the application is opened is a list of the subscribed content. The iPhone and Android versions have a "Cover Stories" section on the first page collating only the most recent, important items from all of the subscriptions. This is meant to be read when the user only has a short period of time for reading.

See also

Comparison of feed aggregators
List of most downloaded Android applications

References

Further reading

Richmond, Shane (August 4, 2010). "Flipboard: The Closest Thing I've Seen to the Future of Magazines". The Daily Telegraph (London). Retrieved March 4, 2012.
Westaway, Luke (July 22, 2010). "Flipboard for iPad Review". CNET. Retrieved March 4, 2012.

External links

Android (operating system) software
IOS software
News aggregator software
Universal Windows Platform apps
2010 software
20555724
https://en.wikipedia.org/wiki/OpenXPKI
OpenXPKI
The OpenXPKI project maintains open-source public key infrastructure (PKI) software.

History

The OpenXPKI project commenced in 2005 and began to produce usable software from 2010, but chose to take a precautionary approach, with the first production-level release arriving in 2015. The approach taken was to create a modular, workflow-engine-centered system, with most modules capable of being reused in other systems. The software is mostly written in Perl and designed to run on Unix-like operating systems such as FreeBSD and Linux. Database backends have been created for MySQL, PostgreSQL, the Oracle Database and IBM DB2.

Technical

After installation, the software on a node is configured to act as a certificate authority (CA), registration authority (RA) or end-entity enrollment (EE) node. One client implementation is a web frontend that allows end-users to access the OpenXPKI system using a web browser; a command-line interface is also available for system administrators. OpenXPKI also provides an SCEP interface.

Reception

OpenXPKI has been used successfully in scenarios ranging from performance testing up to enterprise-level environments. A shortcoming is that it requires additional components to complete certificate-based authentication, including software for efficient certificate distribution.

References

Footnotes

Sources

External links

The OpenXPKI Project

Cryptographic software
20288
https://en.wikipedia.org/wiki/Microsoft%20Office
Microsoft Office
Microsoft Office, or simply Office, is a family of client software, server software, and services developed by Microsoft. It was first announced by Bill Gates on August 1, 1988, at COMDEX in Las Vegas. Initially a marketing term for an office suite (bundled set of productivity applications), the first version of Office contained Microsoft Word, Microsoft Excel, and Microsoft PowerPoint. Over the years, Office applications have grown substantially closer with shared features such as a common spell checker, OLE data integration and the Visual Basic for Applications scripting language. Microsoft also positions Office as a development platform for line-of-business software under the Office Business Applications brand. On July 10, 2012, Softpedia reported that Office was being used by over a billion people worldwide.

Office is produced in several versions targeted towards different end-users and computing environments. The original, and most widely used, version is the desktop version, available for PCs running the Windows and macOS operating systems. Microsoft also maintains mobile apps for Android and iOS. Office on the web is a version of the software that runs within a web browser.

Since Office 2013, Microsoft has promoted Office 365 as the primary means of obtaining Microsoft Office: it allows the use of the software and other services on a subscription business model, and users receive feature updates to the software for the lifetime of the subscription, including new features and cloud computing integration that are not necessarily included in the "on-premises" releases of Office sold under conventional license terms. In 2017, revenue from Office 365 overtook conventional license sales. Microsoft also rebranded most of its standard Office 365 editions into Microsoft 365 to emphasize their current inclusion of products and services. The current on-premises, desktop version of Office is Office 2021, released on October 5, 2021.

Components

Core apps and services

Microsoft Word is a word processor included in Microsoft Office and some editions of the now-discontinued Microsoft Works. The first version of Word, released in the autumn of 1983, was for the MS-DOS operating system and introduced the computer mouse to more users. Word 1.0 could be purchased with a bundled mouse, though none was required. Following the precedents of LisaWrite and MacWrite, Word for Macintosh attempted to add closer WYSIWYG features into its package. Word for Mac, released in 1985, was the first graphical version of Microsoft Word. Initially, it implemented the proprietary .doc format as its primary format. Word 2007, however, deprecated this format in favor of Office Open XML, which was later standardized by Ecma International as an open format. Support for Portable Document Format (PDF) and OpenDocument (ODF) was first introduced in Word for Windows with Service Pack 2 for Word 2007.

Microsoft Excel is a spreadsheet editor that originally competed with the dominant Lotus 1-2-3 and eventually outsold it. Microsoft released the first version of Excel for the Mac OS in 1985 and the first Windows version (numbered 2.05 to line up with the Mac) in November 1987.

Microsoft PowerPoint is a presentation program used to create slideshows composed of text, graphics, and other objects, which can be displayed on-screen and shown by the presenter or printed out on transparencies or slides.
Microsoft OneNote is a notetaking program that gathers handwritten or typed notes, drawings, screen clippings and audio commentaries. Notes can be shared with other OneNote users over the Internet or a network. OneNote was initially introduced as a standalone app that was not included in any Microsoft Office 2003 edition. However, OneNote eventually became a core component of Microsoft Office; with the release of Microsoft Office 2013, OneNote was included in all Microsoft Office offerings. OneNote is also available as a web app on Office on the web, a freemium (and later freeware) Windows desktop app, a mobile app for Windows Phone, iOS, Android, and Symbian, and a Metro-style app for Windows 8 or later.

Microsoft Outlook (not to be confused with Outlook Express, Outlook.com or Outlook on the web) is a personal information manager that replaced Windows Messaging, Microsoft Mail, and Schedule+ starting in Office 97; it includes an e-mail client, calendar, task manager and address book. On the Mac OS, Microsoft offered several versions of Outlook in the late 1990s, but only for use with Microsoft Exchange Server. In Office 2001, it introduced an alternative application with a slightly different feature set called Microsoft Entourage. It reintroduced Outlook in Office 2011, replacing Entourage.

Microsoft OneDrive is a file hosting service that allows users to sync files and later access them from a web browser or mobile device.

Microsoft Teams is a platform that combines workplace chat, meetings, notes, and attachments.

Windows-only apps

Microsoft Publisher is a desktop publishing app for Windows mostly used for designing brochures, labels, calendars, greeting cards, business cards, newsletters, web sites, and postcards.

Microsoft Access is a database management system for Windows that combines the relational Access Database Engine (formerly Jet Database Engine) with a graphical user interface and software development tools. Microsoft Access stores data in its own format based on the Access Database Engine. It can also import or link directly to data stored in other applications and databases.

Microsoft Project is a project management app for Windows used to keep track of events and to create network charts and Gantt charts; it is not bundled in any Office suite.

Microsoft Visio is a diagram and flowcharting app for Windows not bundled in any Office suite.

Mobile-only apps

Office Lens is an image scanner optimized for mobile devices. It captures the document (e.g. business card, paper, whiteboard) via the camera and then straightens the document portion of the image. The result can be exported to Word, OneNote, PowerPoint or Outlook, or saved in OneDrive, sent via Mail or placed in Photo Library.

Office Mobile is a unified Office mobile app for Android and iOS, which combines Word, Excel, and PowerPoint into a single app and introduces new capabilities such as making quick notes, signing PDFs, scanning QR codes, and transferring files.

Office Remote is an application that turns the mobile device into a remote control for desktop versions of Word, Excel and PowerPoint.

Server applications

Microsoft SharePoint is a web-based collaborative platform that integrates with Microsoft Office. Launched in 2001, SharePoint is primarily sold as a document management and storage system, but the product is highly configurable and usage varies substantially among organizations. SharePoint services include:

Excel Services is a spreadsheet editing server similar to Microsoft Excel.
InfoPath Forms Services is a form distribution server similar to Microsoft InfoPath.

Microsoft Project Server is a project management server similar to Microsoft Project.

Microsoft Search Server

Skype for Business Server is a real-time communications server for instant messaging and video-conferencing.

Microsoft Exchange Server is a mail server and calendaring server.

Web services

Microsoft Sway is a presentation web app released in October 2014. It also has a native app for iOS and Windows 10.

Delve is a service that allows Office 365 users to search and manage their emails, meetings, contacts, social networks and documents stored on OneDrive or Sites in Office 365.

Microsoft Forms is an online survey creator, available for Office 365 Education subscribers.

Microsoft To Do is a task management service.

Outlook.com is a free webmail service with a user interface similar to Microsoft Outlook.

Outlook on the web is a webmail client similar to Outlook.com, but more comprehensive and available only through Office 365 and Microsoft Exchange Server offerings.

Microsoft Planner is a planning application available on the Microsoft Office 365 platform.

Microsoft Stream is a corporate video sharing service for enterprise users with an Office 365 Academic or Enterprise license.

Microsoft Bookings is an appointment booking application on the Microsoft Office 365 platform.

Office on the web

Office on the web is a free lightweight web version of Microsoft Office and primarily includes three web applications: Word, Excel and PowerPoint. The offering also includes Outlook.com, OneNote and OneDrive, which are accessible through a unified app switcher. Users can install the on-premises version of this service, called Office Online Server, in private clouds in conjunction with SharePoint, Microsoft Exchange Server and Microsoft Lync Server.

Word, Excel, and PowerPoint on the web can all natively open, edit, and save Office Open XML files (docx, xlsx, pptx) as well as OpenDocument files (odt, ods, odp). They can also open the older Office file formats (doc, xls, ppt), but these will be converted to the newer Open XML formats if the user wishes to edit them online. Other formats, such as CSV in Excel or HTML in Word, cannot be opened in the browser apps, nor can Office files that are encrypted with a password. Files with macros can be opened in the browser apps, but the macros cannot be accessed or executed. Starting in July 2013, Word can render PDF documents or convert them to Microsoft Word documents, although the formatting of the document may deviate from the original. Since November 2013, the apps have supported real-time co-authoring and autosaving files.

Office on the web lacks a number of the advanced features present in the full desktop versions of Office, including lacking the programs Access and Publisher entirely. However, users are able to select the command "Open in Desktop App", which brings up the document in the desktop version of Office on their computer or device so they can utilize the advanced features there.

Supported web browsers include Microsoft Edge, Internet Explorer 11, the latest versions of Firefox or Google Chrome, as well as Safari for OS X 10.8 or later.

The Personal edition of Office on the web is available to the general public free of charge with a Microsoft account through the Office.com website, which superseded SkyDrive (now OneDrive) and Office Live Workspace. Enterprise-managed versions are available through Office 365.
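The Office Open XML files mentioned above (docx, xlsx, pptx) are ordinary ZIP packages of XML parts, which is part of what makes them readable by web and third-party implementations alike. The following is a minimal Python sketch; the filename example.docx is a placeholder for any Word document on disk.

```python
# Minimal sketch showing that an Office Open XML document is a ZIP
# package of XML parts. "example.docx" is a hypothetical filename.
import zipfile
import xml.etree.ElementTree as ET

W = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"

with zipfile.ZipFile("example.docx") as pkg:
    # Every OOXML package contains [Content_Types].xml plus parts such
    # as word/document.xml (the main document body).
    for name in pkg.namelist():
        print(name)

    # Pull the visible text out of the main document part by walking
    # the w:t (text run) elements.
    root = ET.fromstring(pkg.read("word/document.xml"))
    text = "".join(node.text or "" for node in root.iter(f"{{{W}}}t"))
    print(text)
```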
In February 2013, the ability to view and edit files on SkyDrive without signing in was added. The service can also be installed privately in enterprise environments as a SharePoint app, or through Office Web Apps Server. Microsoft also offers other web apps in the Office suite, such as the Outlook Web App (formerly Outlook Web Access), Lync Web App (formerly Office Communicator Web Access), and Project Web App (formerly Project Web Access). Additionally, Microsoft offers a service under the name of Online Doc Viewer to view Office documents on a website via Office on the web. There are free extensions available to use Office on the web directly in Google Chrome and Microsoft Edge.

Common features

Most versions of Microsoft Office (including Office 97 and later) use their own widget set and do not exactly match the native operating system. This is most apparent in Microsoft Office XP and 2003, where the standard menus were replaced with a colored, flat-looking, shadowed menu style. The user interface of a particular version of Microsoft Office often heavily influences a subsequent version of Microsoft Windows. For example, the toolbar, colored buttons and the gray-colored 3D look of Office 4.3 were added to Windows 95, and the ribbon, introduced in Office 2007, has been incorporated into several programs bundled with Windows 7 and later. In 2012, Office 2013 replicated the flat, box-like design of Windows 8.

Users of Microsoft Office may access external data via connection specifications saved in Office Data Connection (.odc) files.

Both Windows and Office use service packs to update software. Office had non-cumulative service releases, which were discontinued after Office 2000 Service Release 1.

Past versions of Office often contained Easter eggs. For example, Excel 97 contained a reasonably functional flight simulator.

File formats and metadata

Microsoft Office prior to Office 2007 used proprietary file formats based on the OLE Compound File Binary Format. This forced users who share data to adopt the same software platform. In 2008, Microsoft made the entire documentation for the binary Office formats freely available for download and granted any possible patent rights for use or implementations of those binary formats for free under the Open Specification Promise. Previously, Microsoft had supplied such documentation freely, but only on request.

Starting with Office 2007, the default file format has been a version of Office Open XML, though different from the one standardized and published by Ecma International and by ISO/IEC. Microsoft has granted patent rights to the formats technology under the Open Specification Promise and has made available free downloadable converters for previous versions of Microsoft Office, including Office 2003, Office XP, Office 2000 and Office 2004 for Mac OS X. Third-party implementations of Office Open XML exist on the Windows platform (LibreOffice, all platforms), the macOS platform (iWork '08, NeoOffice, LibreOffice) and Linux (LibreOffice and OpenOffice.org 3.0). In addition, Office 2010, Service Pack 2 for Office 2007, and Office 2016 for Mac support the OpenDocument Format (ODF) for opening and saving documents – only the old ODF 1.0 (2006 ISO/IEC standard) is supported, not the 1.2 version (2015 ISO/IEC standard).

Microsoft provides the ability to remove metadata from Office documents. This was in response to highly publicized incidents where sensitive data about a document was leaked via its metadata.
Metadata removal was first available in 2004, when Microsoft released a tool called Remove Hidden Data Add-in for Office 2003/XP for this purpose. It was directly integrated into Office 2007 in a feature called the Document Inspector.

Extensibility

A major feature of the Office suite is the ability for users and third-party companies to write add-ins (plug-ins) that extend the capabilities of an application by adding custom commands and specialized features. One of the newer features is the Office Store, from which plugins and other tools can be downloaded by users. Developers can make money by selling their applications in the Office Store. The revenue is divided between the developer and Microsoft, with the developer receiving 80% of the money. Developers are able to share applications with all Office users.

The app travels with the document, and it is for the developer to decide what the recipient will see when they open it. The recipient will either have the option to download the app from the Office Store for free, start a free trial or be directed to payment. With Office's cloud abilities, an IT department can create a set of apps for their business employees in order to increase their productivity. When employees go to the Office Store, they'll see their company's apps under My Organization. The apps that employees have personally downloaded will appear under My Apps. Developers can use web technologies like HTML5, XML, CSS3, JavaScript, and APIs for building the apps.

An application for Office is a webpage that is hosted inside an Office client application. Users can use apps to amplify the functionality of a document, email message, meeting request, or appointment. Apps can run in multiple environments and by multiple clients, including rich Office desktop clients, Office Web Apps, mobile browsers, and also on-premises and in the cloud.

The types of add-ins supported differ by Office version:

Office 97 onwards (standard Windows DLLs, i.e. Word WLLs and Excel XLLs)
Office 2000 onwards (COM add-ins)
Office XP onwards (COM/OLE Automation add-ins)
Office 2003 onwards (managed code add-ins – VSTO solutions)

Password protection

Microsoft Office has a security feature that allows users to encrypt Office (Word, Excel, PowerPoint, Access, Skype for Business) documents with a user-provided password. The password can contain up to 255 characters, and AES 128-bit encryption is used by default. Passwords can also be used to restrict modification of the entire document, worksheet or presentation. Because such modification-restriction passwords do not encrypt the document, though, they can be removed using third-party cracking software.

Support policies

Approach

All versions of Microsoft Office products from Office 2000 to Office 2016 are eligible for ten years of support following their release, during which Microsoft releases security updates for the product version and provides paid technical support. The ten-year period is divided into two five-year phases: the mainstream phase and the extended phase. During the mainstream phase, Microsoft may provide limited complimentary technical support and release non-security updates or change the design of the product. During the extended phase, said services stop. Office 2019 only receives five years of mainstream and two years of extended support, and Office 2021 only gets five years of mainstream support.

Timelines of support

Platforms

Microsoft supports Office for the Windows and macOS platforms, as well as mobile versions for the Windows Phone, Android and iOS platforms.
Beginning with Mac Office 4.2, the macOS and Windows versions of Office share the same file format and are interoperable. Visual Basic for Applications support was dropped in Microsoft Office 2008 for Mac, then reintroduced in Office for Mac 2011.

Microsoft tried in the mid-1990s to port Office to RISC processors such as NEC/MIPS and IBM/PowerPC, but met problems such as memory access being hampered by data structure alignment requirements. Microsoft Word 97 and Excel 97, however, did ship for the DEC Alpha platform. Difficulties in porting Office may have been a factor in discontinuing Windows NT on non-Intel platforms.

Pricing model and editions

The Microsoft Office applications and suites are sold via retail channels, and via volume licensing for larger organizations (also including the "Home Use Program", allowing users at participating organizations to buy low-cost licenses for use on their personal devices as part of their employer's volume license agreement).

In 2010, Microsoft introduced a software as a service platform known as Office 365, to provide cloud-hosted versions of Office's server software, including Exchange e-mail and SharePoint, on a subscription basis (competing in particular with Google Apps). Following the release of Office 2013, Microsoft began to offer Office 365 plans for the consumer market, with access to Microsoft Office software on multiple devices with free feature updates over the life of the subscription, as well as other services such as OneDrive storage.

Microsoft has since promoted Office 365 as the primary means of purchasing Microsoft Office. Although there are still "on-premises" releases roughly every three years, Microsoft marketing emphasizes that these do not receive new features or access to new cloud-based services as they are released, unlike Office 365, nor other benefits for consumer and business markets. Office 365 revenue overtook traditional license sales for Office in 2017.

Editions

Microsoft Office is available in several editions, which regroup a given number of applications for a specific price. Primarily, Microsoft sells Office as Microsoft 365. The editions are as follows:

Microsoft 365 Personal
Microsoft 365 Family
Microsoft 365 Business Basic
Microsoft 365 Business Standard
Microsoft 365 Business Premium
Microsoft 365 apps for business
Microsoft 365 apps for enterprise
Office 365 E1, E3, E5
Office 365 A1, A3, A5 (for education)
Office 365 G1, G3, G5 (for government)
Microsoft 365 F1, F3, Office 365 F3 (for frontline)

Microsoft sells Office for a one-time purchase as Home & Student and Home & Business; however, these editions do not receive major updates.

Education pricing

Post-secondary students may obtain the University edition of the Microsoft Office 365 subscription. It is limited to one user and two devices, plus the subscription price is valid for four years instead of just one. Apart from this, the University edition is identical in features to the Home Premium version. This marks the first time Microsoft has not offered physical or permanent software at academic pricing, in contrast to the University versions of Office 2010 and Office 2011. In addition, students eligible for the DreamSpark program may receive select standalone Microsoft Office apps free of charge.

Discontinued applications and features

Binder was an application that could incorporate several documents into one file and was originally designed as a container system for storing related documents in a single file.
The complexity of use and learning curve led to little usage, and it was discontinued after Office XP.

Bookshelf was a reference collection introduced in 1987 as part of Microsoft's extensive work in promoting CD-ROM technology as a distribution medium for electronic publishing.

Data Analyzer was a business intelligence program for graphical visualization of data and its analysis.

Docs.com was a public document sharing service where Office users could upload and share Word, Excel, PowerPoint, Sway and PDF files for the whole world to discover and use.

Entourage was an Outlook counterpart on macOS; Microsoft discontinued it in favor of extending the Outlook brand name.

FrontPage was a WYSIWYG HTML editor and website administration tool for Windows. It was branded as part of the Microsoft Office suite from 1997 to 2003. FrontPage was discontinued in December 2006 and replaced by Microsoft SharePoint Designer and Microsoft Expression Web.

InfoPath was a Windows application for designing and distributing rich XML-based forms. The last version was included in Office 2013.

InterConnect was a business-relationship database available only in Japan.

Internet Explorer was a graphical web browser and one of the main participants of the first browser war. It was included in Office until Office XP, when it was removed.

Mail was a mail client (in old versions of Office, later replaced by Microsoft Schedule Plus and subsequently Microsoft Outlook).

Office Accounting (formerly Small Business Accounting) was an accounting software application from Microsoft targeted towards small businesses that had between 1 and 25 employees.

Office Assistant (included since Office 97 on Windows and Office 98 on Mac as a part of Microsoft Agent technology) was a system that used animated characters to offer context-sensitive suggestions to users and access to the help system. The Assistant is often dubbed "Clippy" or "Clippit", due to its default paper clip character, coded as CLIPPIT.ACS. The latest versions to include the Office Assistant were Office 2003 (Windows) and Office 2004 (Mac).

Office Document Image Writer was a virtual printer that took documents from Microsoft Office or any other application and printed them, or stored them in an image file in TIFF or Microsoft Document Imaging Format. It was discontinued with Office 2010.

Office Document Imaging was an application that supported editing scanned documents. Discontinued with Office 2010.

Office Document Scanning was a scanning and OCR application. Discontinued with Office 2010.

Office Picture Manager was basic photo management software (similar to Google's Picasa or Adobe's Photoshop Elements) that replaced Microsoft Photo Editor.

PhotoDraw was a graphics program that was first released as part of the Office 2000 Premium Edition. A later version for Windows XP compatibility was released, known as PhotoDraw 2000 Version 2. Microsoft discontinued the program in 2001.

Photo Editor was photo-editing or raster-graphics software in older Office versions up to Office XP. It was supplemented by Microsoft PhotoDraw in the Office 2000 Premium edition.

Schedule Plus (also shown as Schedule+) was released with Office 95. It featured a planner, to-do list, and contact information. Its functions were incorporated into Microsoft Outlook.

SharePoint Designer was a WYSIWYG HTML editor and website administration tool. Microsoft attempted to turn it into a specialized HTML editor for SharePoint sites, but failed on this project and wanted to discontinue it.
SharePoint Workspace (formerly Groove) was proprietary peer-to-peer document collaboration software designed for teams with members who are regularly offline or who do not share the same network security clearance.

Skype for Business was an integrated communications client for conferences and meetings in real time; it was the only Microsoft Office desktop app that was neither useful without a proper network infrastructure nor carried the "Microsoft" prefix in its name.

Streets & Trips (known in other countries as Microsoft AutoRoute) was a mapping program developed and distributed by Microsoft.

Unbind was a program that could extract the contents of a Binder file. Unbind could be installed from the Office XP CD-ROM.

Virtual PC was included with Microsoft Office Professional Edition 2004 for Mac. Microsoft discontinued support for Virtual PC on the Mac in 2006 owing to new Macs possessing the same Intel architecture as Windows PCs. It emulated a standard PC and its hardware.

Vizact was a program that "activated" documents using HTML, adding effects such as animation. It allowed users to create dynamic documents for the Web. Development ended due to unpopularity.

Discontinued server applications

Microsoft Office Forms Server let users use any browser to access and fill InfoPath forms. Office Forms Server was a standalone server installation of InfoPath Forms Services.

Microsoft Office Groove Server centrally managed all deployments of Microsoft Office Groove in the enterprise.

Microsoft Office Project Portfolio Server allowed creation of a project portfolio, including workflows, hosted centrally.

Microsoft Office PerformancePoint Server allowed customers to monitor, analyze, and plan their business.

Discontinued web services

Office Live
Office Live Small Business had web hosting services and online collaboration tools for small businesses.
Office Live Workspace had an online storage and collaboration service for documents, which was superseded by Office on the web.
Office Live Meeting was a web conferencing service.

Criticism

Editor

In January 2022, entrepreneur Vivek Ramaswamy appeared on Fox News and criticized changes to Microsoft Editor that substituted gender-neutral forms of some words for equivalent gendered terms: "postal carrier" or "mail carrier" in place of "mailman," for example.

Data formats

Microsoft Office has been criticized in the past for using proprietary file formats rather than open standards, which forces users who share data into adopting the same software platform. However, on February 15, 2008, Microsoft made the entire documentation for the binary Office formats freely available under the Open Specification Promise. Also, Office Open XML, the document format for the latest versions of Office for Windows and Mac, has been standardized under both Ecma International and ISO. Ecma International has published the Office Open XML specification free of copyrights, and Microsoft has granted patent rights to the formats technology under the Open Specification Promise and has made available free downloadable converters for previous versions of Microsoft Office, including Office 2003, Office XP, Office 2000 and Office 2004 for the Mac. Third-party implementations of Office Open XML exist on the Mac platform (iWork 08) and Linux (OpenOffice.org 2.3 – Novell Edition only).
Unicode and bi-directional texts

Another point of criticism Microsoft Office has faced was the lack of support in its Mac versions for Unicode and bi-directional text languages, notably Arabic and Hebrew. This issue, which had existed since the first release in 1989, was addressed in the 2016 version.

Privacy

On November 13, 2018, a report initiated by the Government of the Netherlands concluded that Microsoft Office 2016 and Office 365 do not comply with GDPR, the European law which regulates data protection and privacy for all citizens in and outside the EU and EFTA region. The investigation was initiated by the observation that Microsoft does not reveal or share publicly any data collected about users of its software. In addition, the company does not provide users of its (Office) software an option to turn off diagnostic and telemetry data sent back to the company. Researchers found that most of the data that the Microsoft software collects and "sends home" is diagnostics. Researchers also observed that Microsoft "seemingly tried to make the system GDPR compliant by storing Office documents on servers based in the EU". However, they discovered the software packages collected additional data that contained private user information, some of which was stored on servers located in the US.

The Netherlands Ministry of Justice hired Privacy Company to probe and evaluate the use of Microsoft Office products in the public sector. "Microsoft systematically collects data on a large scale about the individual use of Word, Excel, PowerPoint, and Outlook. Covertly, without informing people", researchers of the Privacy Company stated in their blog post. "Microsoft does not offer any choice with regard to the amount of data, or possibility to switch off the collection, or ability to see what data are collected, because the data stream is encoded."

The researchers commented that there is no need for Microsoft to store information such as IPs and email addresses, which are collected automatically by the software. "Microsoft should not store these transient, functional data, unless the retention is strictly necessary, for example, for security purposes", the researchers concluded in the final report by the Netherlands Ministry of Justice.

As a result of this in-depth study and its conclusions, the Netherlands regulatory body concluded that Microsoft has violated GDPR "on many counts", including "lack of transparency and purpose limitation, and the lack of a legal ground for the processing." Microsoft has provided the Dutch authorities with an "improvement plan" that should satisfy Dutch regulators that it "would end all violations". The Dutch regulatory body is monitoring the situation and states that "If progress is deemed insufficient or if the improvements offered are unsatisfactory, SLM Microsoft Rijk will reconsider its position and may ask the Data Protection Authority to carry out a prior consultation and to impose enforcement measures."

When asked for a response by an IT professional publication, a Microsoft spokesperson stated: "We are committed to our customers' privacy, putting them in control of their data and ensuring that Office ProPlus and other Microsoft products and services comply with GDPR and other applicable laws. We appreciate the opportunity to discuss our diagnostic data handling practices in Office ProPlus with the Dutch Ministry of Justice and look forward to a successful resolution of any concerns."
The user privacy data issue affects ProPlus subscriptions of Microsoft Office 2016 and Microsoft Office 365, including the online version of Microsoft Office 365.

History of releases

Version history

Windows versions

Microsoft Office for Windows

Microsoft Office for Windows started in October 1990 as a bundle of three applications designed for Microsoft Windows 3.0: Microsoft Word for Windows 1.1, Microsoft Excel for Windows 2.0, and Microsoft PowerPoint for Windows 2.0. Microsoft Office for Windows 1.5 updated the suite with Microsoft Excel 3.0. Version 1.6 added Microsoft Mail for PC Networks 2.1 to the bundle.

Microsoft Office 3.0

Microsoft Office 3.0, also called Microsoft Office 92, was released on August 30, 1992, and contained Word 2.0, Excel 4.0, PowerPoint 3.0 and Mail 3.0. It was the first version of Office also released on CD-ROM. In 1993, Microsoft Office Professional was released, which added Microsoft Access 1.1.

Microsoft Office 4.x

Microsoft Office 4.0 was released in 1993, containing Word 6.0, Excel 4.0a, PowerPoint 3.0 and Mail. Word's version number jumped from 2.0 to 6.0 so that it would have the same version number as the MS-DOS and Macintosh versions (Excel and PowerPoint were already numbered the same as the Macintosh versions).

Microsoft Office 4.2 for Windows NT was released in 1994 for i386, Alpha, MIPS and PowerPC architectures, containing Word 6.0 and Excel 5.0 (both 32-bit), PowerPoint 4.0 (16-bit), and Microsoft Office Manager 4.2 (the precursor to the Office Shortcut Bar).

Microsoft Office 95

Microsoft Office 95 was released on August 24, 1995. Software version numbers were altered again to create parity across the suite: every program was called version 7.0, meaning all but Word skipped version numbers. Office 95 included new components in the suite such as Schedule+ and Binder. Office for Windows 95 was designed as a fully 32-bit version to match Windows 95, although some apps not bundled as part of the suite at that time (Publisher for Windows 95 and Project 95) had some 16-bit components even though their main program executable was 32-bit.

Office 95 was available in two versions, Office 95 Standard and Office 95 Professional. The standard version consisted of Word 7.0, Excel 7.0, PowerPoint 7.0, and Schedule+ 7.0. The professional edition contained all of the items in the standard version plus Access 7.0. If the professional version was purchased in CD-ROM form, it also included Bookshelf.

The logo used in Office 95 returns in Office 97, 2000 and XP. Microsoft Office 98 Macintosh Edition also uses a similar logo.

Microsoft Office 97

Microsoft Office 97 (Office 8.0) included hundreds of new features and improvements, such as introducing command bars, a paradigm in which menus and toolbars were made more similar in capability and visual design. Office 97 also featured Natural Language Systems and grammar checking. Office 97 featured new components to the suite, including FrontPage 97, Expedia Streets 98 (in Small Business Edition), and Internet Explorer 3.0 and 4.0. Office 97 was the first version of Office to include the Office Assistant. In Brazil, it was also the first version to introduce the Registration Wizard, a precursor to Microsoft Product Activation. With this release, the accompanying apps, Project 98 and Publisher 98, also transitioned to fully 32-bit versions. Exchange Server, a mail server and calendaring server developed by Microsoft, became the server for Outlook after Microsoft discontinued Exchange Client.
Microsoft Office 2000
Microsoft Office 2000 (Office 9.0) introduced adaptive menus, where little-used options were hidden from the user. It also introduced a new security feature, built around digital signatures, to diminish the threat of macro viruses. The Microsoft Script Editor, an optional tool that can edit script code, was also introduced in Office 2000. Office 2000 automatically trusts macros (written in VBA 6) that were digitally signed by authors who have previously been designated as trusted. Office 2000 also introduced PhotoDraw, a raster and vector imaging program, as well as Web Components, Visio, and Vizact. The Registration Wizard, a precursor to Microsoft Product Activation, remained in Brazil and was also extended to Australia and New Zealand, though not for volume-licensed editions. Academic software in the United States and Canada also featured the Registration Wizard.
Microsoft Office XP
Microsoft Office XP (Office 10.0 or Office 2002) was released in conjunction with Windows XP and was a major upgrade with numerous enhancements and changes over Office 2000. Office XP introduced the Safe Mode feature, which allows applications such as Outlook to boot when they might otherwise fail, by bypassing a corrupted registry or a faulty add-in. Smart tags, a technology introduced with Office XP in Word and Excel, were discontinued in Office 2010. Office XP also introduced new components including Document Imaging, Document Scanning, Clip Organizer, MapPoint, and Data Analyzer. Binder was replaced by Unbind, a program that can extract the contents of a Binder file; Unbind can be installed from the Office XP CD-ROM. Office XP includes integrated voice command and text dictation capabilities, as well as handwriting recognition. It was the first version to require Microsoft Product Activation worldwide and in all editions as an anti-piracy measure, which attracted widespread controversy. Product Activation remained absent from Office for Mac releases until it was introduced in Office 2011 for Mac.
Microsoft Office 2003
Microsoft Office 2003 (Office 11.0) was released in 2003. It featured a new logo and was the first version to use new, more colorful icons. Outlook 2003 provided improved functionality in many areas, including Kerberos authentication, RPC over HTTP, Cached Exchange Mode, and an improved junk mail filter. Office 2003 introduced three new programs to the Office product lineup: InfoPath, a program for designing, filling, and submitting electronic structured data forms; OneNote, a note-taking program for creating and organizing diagrams, graphics, handwritten notes, recorded audio, and text; and Picture Manager, graphics software that can open, manage, and share digital images. SharePoint, a web collaboration platform codenamed Office Server, gained integration and compatibility with Office 2003 and later versions.
Microsoft Office 2007
Microsoft Office 2007 (Office 12.0) was released in 2007. Its new features include a new graphical user interface called the Fluent User Interface, replacing the menus and toolbars that had been the cornerstone of Office since its inception with a tabbed toolbar known as the Ribbon; new XML-based file formats called Office Open XML; and the inclusion of Groove, a collaborative software application.
While Microsoft removed Data Analyzer, FrontPage, Vizact, and Schedule+ from Office 2007, it also added Communicator, Groove, SharePoint Designer, and the Office Customization Tool (OCT) to the suite.
Microsoft Office 2010
Microsoft Office 2010 (Office 14.0; Microsoft skipped 13.0 owing to superstition about the number 13) was finalized on April 15, 2010, and made available to consumers on June 15, 2010. The main features of Office 2010 include the Backstage file menu, new collaboration tools, a customizable Ribbon, Protected View and a navigation pane. Office Communicator, an instant messaging and videotelephony application, was renamed Lync 2010. This was the first version to ship in 32-bit and 64-bit variants. Microsoft Office 2010 featured a new logo, which resembled the 2007 logo except in gold and with a modified shape. Microsoft released Service Pack 1 for Office 2010 on June 28, 2011, and Service Pack 2 on July 16, 2013. Office Online was first released alongside SkyDrive, an online storage service.
Microsoft Office 2013
A technical preview of Microsoft Office 2013 (Build 15.0.3612.1010) was released on January 30, 2012, and a Customer Preview version was made available to consumers on July 16, 2012. It sports a revamped application interface based on Metro, the interface of Windows Phone and Windows 8. Microsoft Outlook received the most pronounced changes; for example, the Metro interface provides a new visualization for scheduled tasks. PowerPoint includes more templates and transition effects, and OneNote includes a new splash screen. On May 16, 2011, new images of Office 15 were revealed, showing Excel with a tool for filtering data in a timeline, the ability to convert Roman numerals to Arabic numerals, and the integration of advanced trigonometric functions. In Word, the capability of inserting video and audio online as well as the broadcasting of documents on the Web were implemented. Microsoft promised support for Office Open XML Strict starting with version 15, a format Microsoft had submitted to the ISO for interoperability with other office suites and to aid adoption in the public sector. This version can read and write ODF 1.2 (Windows only). On October 24, 2012, Office 2013 Professional Plus was released to manufacturing and was made available to TechNet and MSDN subscribers for download. On November 15, 2012, the 60-day trial version was released for public download. Office 2013 was released to general availability on January 29, 2013. Service Pack 1 for Office 2013 was released on February 25, 2014. Some applications were completely removed from the suite, including SharePoint Workspace, Clip Organizer, and Office Picture Manager.
Microsoft Office 2016
On January 22, 2015, the Microsoft Office blog announced that the next version of the suite for Windows desktop, Office 2016, was in development. On May 4, 2015, a public preview of Microsoft Office 2016 was released. Office 2016 was released for Mac OS X on July 9, 2015, and for Windows on September 22, 2015. Users with a Professional Plus 2016 subscription received the new Skype for Business app. Microsoft Teams, a team collaboration program meant to rival Slack, was released as a separate product for business and enterprise users.
Microsoft Office 2019
On September 26, 2017, Microsoft announced that the next version of the suite for Windows desktop, Office 2019, was in development. On April 27, 2018, Microsoft released Office 2019 Commercial Preview for Windows 10.
It was released to general availability for Windows 10 and for macOS on September 24, 2018.
Microsoft Office 2021
On February 18, 2021, Microsoft announced that the next version of the suite for Windows desktop, Office 2021, was in development. The new version is supported for five years and was released on October 5, 2021.
Mac versions
Prior to packaging its various office-type Mac OS software applications into Office, Microsoft released Mac versions of Word 1.0 in 1984, the first year of the Macintosh computer; Excel 1.0 in 1985; and PowerPoint 1.0 in 1987. Microsoft does not include its Access database application in Office for Mac. Microsoft has noted that some features are added to Office for Mac before they appear in Windows versions, such as Office for Mac 2001's Office Project Gallery and PowerPoint Movie feature, which allows users to save presentations as QuickTime movies. However, Microsoft Office for Mac was long criticized for its lack of support for Unicode and for right-to-left languages, notably Arabic, Hebrew and Persian.
Early Office for Mac releases (1989–1994)
Microsoft Office for Mac was introduced for Mac OS in 1989, before Office was released for Windows. It included Word 4.0, Excel 2.2, PowerPoint 2.01, and Mail 1.37. It was originally a limited-time promotion but later became a regular product. With the release of Office on CD-ROM later that year, Microsoft became the first major Mac publisher to put its applications on CD-ROM. Microsoft Office 1.5 for Mac was released in 1991 and included the updated Excel 3.0, the first application to support Apple's System 7 operating system. Microsoft Office 3.0 for Mac was released in 1992 and included Word 5.0, Excel 4.0, PowerPoint 3.0 and Mail Client. Excel 4.0 was the first application to support Apple's new AppleScript. Microsoft Office 4.2 for Mac was released in 1994 (version 4.0 was skipped to synchronize version numbers with Office for Windows). Version 4.2 included Word 6.0, Excel 5.0, PowerPoint 4.0 and Mail 3.2. It was the first Office suite for the Power Macintosh. Its user interface was identical to Office 4.2 for Windows, leading many customers to comment that it wasn't Mac-like enough. The final release for the Mac 68K was Office 4.2.1, which updated Word to version 6.0.1, somewhat improving performance.
Microsoft Office 98 Macintosh Edition
Microsoft Office 98 Macintosh Edition was unveiled at MacWorld Expo/San Francisco in 1998. It introduced the Internet Explorer 4.0 web browser and Outlook Express, an Internet e-mail client and Usenet newsgroup reader. Office 98 was re-engineered by Microsoft's Macintosh Business Unit to satisfy customers' desire for software they felt was more Mac-like. It included drag-and-drop installation, self-repairing applications and Quick Thesaurus, before such features were available in Office for Windows. It also was the first version to support QuickTime movies.
Microsoft Office 2001 and v. X
Microsoft Office 2001 was launched in 2000 as the last Office suite for the classic Mac OS. It required a PowerPC processor. This version introduced Entourage, an e-mail client that included information management tools such as a calendar, an address book, task lists and notes. Microsoft Office v. X was released in 2001 and was the first version of Microsoft Office for Mac OS X.
Support for Office v. X ended on January 9, 2007, after the release of the final update, 10.1.9. Office v. X included Word X, Excel X, PowerPoint X, Entourage X, MSN Messenger for Mac and Windows Media Player 9 for Mac; it was the last version of Office for Mac to include Internet Explorer for Mac.
Office 2004
Microsoft Office 2004 for Mac was released on May 11, 2004. It includes Microsoft Word, Excel, PowerPoint, Entourage and Virtual PC. It is the final version of Office to be built exclusively for PowerPC and to officially support G3 processors, as its sequel lists a G4, G5, or Intel processor as a requirement. It was notable for supporting Visual Basic for Applications (VBA), which is unavailable in Office 2008. This led Microsoft to extend support for Office 2004 from October 13, 2009, to January 10, 2012. VBA functionality was reintroduced in Office 2011, which is only compatible with Intel processors.
Office 2008
Microsoft Office 2008 for Mac was released on January 15, 2008. It was the only Office for Mac suite to be compiled as a universal binary, being the first to feature native Intel support and the last to feature PowerPC support for G4 and G5 processors, although the suite is unofficially compatible with G3 processors. New features include native Office Open XML file format support, which debuted in Office 2007 for Windows, and stronger Microsoft Office password protection employing AES-128 and SHA-1. Benchmarks suggested that, compared to its predecessor, Office 2008 ran at similar speeds on Intel machines and slower speeds on PowerPC machines. Office 2008 also lacked Visual Basic for Applications (VBA) support, leaving it with only 15 months of additional mainstream support compared to its predecessor. Nevertheless, five months after it was released, Microsoft said that Office 2008 was "selling faster than any previous version of Office for Mac in the past 19 years" and affirmed "its commitment to future products for the Mac."
Office 2011
Microsoft Office for Mac 2011 was released on October 26, 2010. It is the first version of Office for Mac to be compiled exclusively for Intel processors, dropping support for the PowerPC architecture. It features an OS X version of Outlook to replace the Entourage email client. This version of Outlook is intended to make the OS X version of Office work better with Microsoft's Exchange server and with those using Office for Windows. Office 2011 includes a Mac-based Ribbon similar to Office for Windows.
OneNote and Outlook release (2014)
Microsoft OneNote for Mac was released on March 17, 2014. It marks the company's first release of the note-taking software on the Mac. It is available as a free download to all users via the Mac App Store on OS X Mavericks. Microsoft Outlook 2016 for Mac debuted on October 31, 2014. It requires a paid Office 365 subscription, meaning that traditional Office 2011 retail or volume licenses cannot activate this version of Outlook. On that day, Microsoft confirmed that it would release the next version of Office for Mac in late 2015. Despite requiring 64-bit versions of OS X and dropping support for older releases, these versions of OneNote and Outlook are 32-bit applications like their predecessors.
Office 2016
The first preview version of Microsoft Office 2016 for Mac was released on March 5, 2015. On July 9, 2015, Microsoft released the final version of Microsoft Office 2016 for Mac, which includes Word, Excel, PowerPoint, Outlook and OneNote.
It was immediately made available for Office 365 subscribers with a Home, Personal, Business, Business Premium, E3 or ProPlus subscription. A non–Office 365 edition of Office 2016 was made available as a one-time purchase option on September 22, 2015.
Office 2019
Mobile versions
Office Mobile for iPhone was released on June 14, 2013, in the United States. Support for 135 markets and 27 languages was rolled out over a few days. It requires iOS 8 or later. Although the app also works on iPad devices (excluding the first generation), it is designed for a small screen. Office Mobile was released for Android phones on July 31, 2013, in the United States. Support for 117 markets and 33 languages was added gradually over several weeks. It is supported on Android 4.0 and later. Office Mobile is or was also available, though no longer supported, on Windows Mobile, Windows Phone and Symbian. There was also Office RT, a touch-optimized version of the standard desktop Office suite, pre-installed on Windows RT.
Early Office Mobile releases
Office Mobile, initially shipped as "Pocket Office", was released by Microsoft with the Windows CE 1.0 operating system in 1996. This release was specifically for the Handheld PC hardware platform, as the Windows Mobile Smartphone and Pocket PC hardware specifications had not yet been released. It consisted of Pocket Word and Pocket Excel; PowerPoint, Access, and Outlook were added later. With steady updates throughout subsequent releases of Windows Mobile, Office Mobile received its current name with the release of the Windows Mobile 5.0 operating system. This release of Office Mobile also included PowerPoint Mobile for the first time. Accompanying the release of Microsoft OneNote 2007, a new optional addition to the Office Mobile line of programs was released as OneNote Mobile. With the release of Windows Mobile 6 Standard, Office Mobile became available for the Smartphone hardware platform, but unlike Office Mobile for the Professional and Classic versions of Windows Mobile, it could not create new documents. A popular workaround is to create a new blank document in a desktop version of Office, synchronize it to the device, and then edit and save it on the Windows Mobile device. In June 2007, Microsoft announced a new version of the office suite, Office Mobile 2007. It became available as "Office Mobile 6.1" on September 26, 2007, as a free upgrade download for existing Windows Mobile 5.0 and 6 users. However, the "Office Mobile 6.1 Upgrade" is not compatible with Windows Mobile 5.0 powered devices running builds earlier than 14847. It is a pre-installed feature in subsequent releases of Windows Mobile 6 devices. Office Mobile 6.1 is compatible with the Office Open XML specification, like its desktop counterpart. On August 12, 2009, it was announced that Office Mobile would also be released for the Symbian platform as a joint agreement between Microsoft and Nokia. It was the first time Microsoft would develop Office mobile applications for another smartphone platform. The first application to appear on Nokia Eseries smartphones was Microsoft Office Communicator. In February 2012, Microsoft released OneNote, Lync 2010, Document Connection and PowerPoint Broadcast for Symbian. In April, Word Mobile, PowerPoint Mobile and Excel Mobile joined the Office suite. On October 21, 2010, Microsoft debuted Office Mobile 2010 with the release of Windows Phone 7.
In Windows Phone, users can access and edit documents directly from their SkyDrive or Office 365 accounts in a dedicated Office hub. The Office Hub, which is preinstalled in the operating system, contains Word, PowerPoint and Excel. The operating system also includes OneNote, although not as a part of the Office Hub. Lync is not included, but can be downloaded as a standalone app from the Windows Phone Store free of charge. In October 2012, Microsoft released a new version of Microsoft Office Mobile for Windows Phone 8 and Windows Phone 7.8.
Office for Android, iOS and Windows 10 Mobile
Office Mobile was released for iPhone on June 14, 2013, and for Android phones on July 31, 2013. In March 2014, Microsoft released Office Lens, a scanner app that enhances photos, which are then attached to an Office document. Office Lens is an app in the Windows Phone store, and is also built into the camera functionality in the OneNote apps for iOS and Windows 8. On March 27, 2014, Microsoft launched Office for iPad, the first dedicated version of Office for tablet computers. In addition, Microsoft made the Android and iOS versions of Office Mobile free for "home use" on phones, although the company still requires an Office 365 subscription for business use of Office Mobile. On November 6, 2014, Office was subsequently made free for personal use on the iPad in addition to phones. As part of this announcement, Microsoft also split up its single "Office suite" app on iPhones into separate, standalone apps for Word, Excel and PowerPoint, released a revamped version of Office Mobile for iPhone, added direct integration with Dropbox, and previewed future versions of Office for other platforms. Office for Android tablets was released on January 29, 2015, following a successful two-month preview period. These apps allow users to edit and create documents for free on devices with screen sizes of 10.1 inches or less, though as with the iPad versions, an Office 365 subscription is required to unlock premium features and for commercial use of the apps. Tablets with screen sizes larger than 10.1 inches are also supported but, as was originally the case with the iPad version, are restricted to viewing documents only unless a valid Office 365 subscription is used to enable editing and document creation. On January 21, 2015, during the "Windows 10: The Next Chapter" press event, Microsoft unveiled Office for Windows 10, Windows Runtime ports of the Android and iOS versions of the Office Mobile suite. Optimized for smartphones and tablets, they are universal apps that can run on both Windows 10 and Windows 10 Mobile and share similar underlying code. A simplified version of Outlook was also added to the suite. They are bundled with Windows 10 mobile devices and available from the Windows Store for the PC version of Windows 10. Although the preview versions were free for most editing, the release versions require an Office 365 subscription for editing on larger tablets (screen size larger than 10.1 inches) and desktops, as with large Android tablets. Smaller tablets and phones have most editing features for free. On June 24, 2015, Microsoft released Word, Excel and PowerPoint as standalone apps on Google Play for Android phones, following a one-month preview. These apps have also been bundled with Android devices from major OEMs, as a result of Microsoft tying their distribution, along with Skype, to patent-licensing agreements related to the Android platform.
The Android version is also supported on certain Chrome OS machines. On February 19, 2020, Microsoft announced a new unified Office mobile app for Android and iOS. This app combines Word, Excel, and PowerPoint into a single app and introduces new capabilities such as making quick notes, signing PDFs, scanning QR codes, and transferring files.
Online versions
Office Web Apps was first revealed in October 2008 at PDC 2008 in Los Angeles. Chris Capossela, senior vice president of the Microsoft Business Division, introduced Office Web Apps as lightweight versions of Word, Excel, PowerPoint and OneNote that allow people to create, edit and collaborate on Office documents through a web browser. According to Capossela, Office Web Apps was to become available as a part of Office Live Workspace. Office Web Apps was announced to be powered by AJAX as well as Silverlight; however, the latter is optional and its availability would only "enhance the user experience, resulting in sharper images and improved rendering." Microsoft's Business Division president Stephen Elop stated during PDC 2008 that "a technology preview of Office Web Apps would become available later in 2008". However, the Technical Preview of Office Web Apps was not released until 2009. On July 13, 2009, Microsoft announced at its Worldwide Partners Conference 2009 in New Orleans that Microsoft Office 2010 had reached its "Technical Preview" development milestone, and features of Office Web Apps were demonstrated to the public for the first time. Additionally, Microsoft announced that Office Web Apps would be made available to consumers online and free of charge, while Microsoft Software Assurance customers would have the option of running them on premises. Office 2010 beta testers were not given access to Office Web Apps at this date, and it was announced that it would be available for testers during August 2009. However, in August 2009, a Microsoft spokesperson stated that there had been a delay in the release of the Office Web Apps Technical Preview and that it would not be available by the end of August. Microsoft officially released the Technical Preview of Office Web Apps on September 17, 2009. Office Web Apps was made available to selected testers via its OneDrive (at the time SkyDrive) service. The final version of Office Web Apps was made available to the public via Windows Live Office on June 7, 2010. On October 22, 2012, Microsoft announced the release of new features including co-authoring, performance improvements and touch support. On November 6, 2013, Microsoft announced further new features including real-time co-authoring and an auto-save feature in Word (replacing the save button). In February 2014, Office Web Apps was re-branded Office Online and incorporated into other Microsoft web services, including Calendar, OneDrive, Outlook.com, and People. Microsoft had previously attempted to unify its online services suite (including Microsoft Passport, Hotmail, MSN Messenger, and later SkyDrive) under a brand known as Windows Live, first launched in 2005. However, with the impending launch of Windows 8 and its increased use of cloud services, Microsoft dropped the Windows Live brand to emphasize that these services would now be built directly into Windows and not merely be a "bolted on" add-on. The Windows Live brand had been criticized for having no clear vision, as it was being applied to an increasingly broad array of unrelated services.
At the same time, Windows Live Hotmail was re-launched as Outlook.com (sharing its name with the Microsoft Outlook personal information manager). In July 2019, Microsoft announced that it was retiring the "Online" branding for Office Online. The product is now Office, and may be referred to as "Office for the web" or "Office in a browser".
See also
Microsoft Azure
Microsoft Dynamics
Microsoft Power Platform
List of Microsoft software
References
External links
1989 software Bundled products or services Classic Mac OS software Office suites for macOS Office suites for Windows Office suites Pocket PC software Windows Mobile Standard software Windows Phone software
List of FoxTrot characters
This article contains information on the central characters in FoxTrot, a comic strip created by Bill Amend. The strip centers on a nuclear family composed of mother Andy, father Roger, and their three children Peter, Paige and Jason, along with several auxiliary characters.
Main characters
Jason Fox
Jason Fox is the youngest child of the family and is considered the nerdiest person in the family. A 10-year-old boy who wears glasses (though his pupils are unseen), he is shown to be very intelligent, and is often relied on to help Roger with taxes, or Peter and Paige with homework. Unlike his siblings, who sometimes make him pay them to do their homework, Jason actually wants to do his homework, and often receives incredibly high marks as a result (to the point that 72 correct answers out of 20 questions is disappointing to him). He is sometimes disappointed when he has no homework because he did all the homework for the year in the first week of school. He tends to aggravate his teachers with his overly complicated answers, and is frequently in trouble for disrupting class. Despite his intellect, he is shown to take most things too literally on occasion. (Once, when Roger asked him for "java", meaning a cup of coffee, Jason gave him a mug with a printout from the Java programming language.) He also once placed an order for a pizza with "17/51 cheese, 109/327 sausage, and 86,499,328/259,497,984 mushroom" (each fraction reduces to exactly one third; see the short check below), which resulted in Roger receiving all his change in pennies and telling Jason that his ever being asked to order their pizza again was an "unlikely event". He also unsuccessfully attempts to get Roger and Andy to raise his allowance, which almost always results in sudden, sharp decreases in his pay. Like most stereotypical boys, Jason is a constant source of mischief. He is always coming up with jokes, pranks and tricks, which include water bombs, snowballs, dart guns, squirt guns and other contraptions, Paige being his favorite target. She is also the center of his insults, as when he came up with a Slug-Man superhero comic that included "Paige-o-Tron" as the villain, or uploaded games to his website that included Pimple Command, Paige Invaders, Ms. Yap Man and Paige Don't Know Jack, in which she is respectively portrayed as a pimple, a space alien, a constant talker on the phone, and unable to answer the easiest of questions. Although Paige is his regular target, Peter is sometimes the target of his tricks (such as reprogramming the auto-dial buttons on Peter's cell phone, resulting in Peter accidentally confessing to Andy about sneaking out when the Denise button dialed Andy's number instead). He also enjoys making comic strips, whether starring Slug-Man (see below), substituting for other cartoonists' work (e.g. Family Circus), or featuring his siblings as characters, as when he drew a week of strips in which Paige was portrayed as a deadly monster (which resulted in Paige doing the same to him the following week). Portrayed as a stereotypical nerd, he has an interest in science fiction, particularly Star Wars, Star Trek and role-playing games (primarily the fictional fantasy-themed MMORPG "World of Warquest", a portmanteau of World of Warcraft and EverQuest), as well as a high level of knowledge in mathematics and science. He also seems to have a high interest in comic books and dinosaurs. In a few strips, it is revealed that he likes Pokémon and Yu-Gi-Oh! cards, as well as the fictional "Linuxmon", blending his interest in programming with card games, and brings them to school.
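The arithmetic behind the pizza gag is easy to verify; the following is a small illustrative Python check (not from the strip itself) that uses the standard library's fractions module to reduce each topping's share to lowest terms.

# Illustrative check of Jason's pizza order; Fraction reduces each
# ratio to lowest terms automatically.
from fractions import Fraction

order = {
    "cheese": Fraction(17, 51),
    "sausage": Fraction(109, 327),
    "mushroom": Fraction(86499328, 259497984),
}

for topping, share in order.items():
    print(topping, share)              # each share prints as 1/3
print("total:", sum(order.values()))   # total: 1

Running it prints 1/3 for every topping and a total of exactly 1.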
Jason is also a frequent user of the family computer, and has been shown to be an amazing programmer, repeatedly constructing his own computer programs (including a search engine, which he built before breakfast) and viruses, which he often sends to the other computers in the house and to Eileen Jacobson. He has also written at least two viruses that caused worldwide havoc, called "Darth Jason" and "I-Don't-Love-You-Eileen-Jacobson". He included a copyright line in the latter, and with the former he completely eliminated the Internet (and, as a result, his own web company, "Jasonzonbayhoo"). It is also shown that Jason is a terrific snow sculptor. When it snows, he and his best friend Marcus build monsters out of snow to scare Paige. In another example, a storyline has Andy tell him to go outside instead of playing video games, so he and Marcus build the environments of their favorite video games to play in. In addition, he plays video games regularly, either by himself, with Peter, with Roger, or with Marcus. In one series of strips, Paige plays one of his games and, with common sense assisting her, proves more adept at it than Jason, much to his frustration. He frequently attempts to recreate the work of cartoonists while they are on hiatus, usually as an excuse to make fun of Paige. He also makes his own comic called Slug-Man, a parody of Superman and Batman. Occasionally, Jason will make exaggerated plans of his own, such as a large-scale animatronic Christmas display (which has everything but a sound system playing "Jingle Bell Rock" all day) or a skyscraper comic book shop in his backyard (which is squashed by the zoning commission). Like Calvin from Calvin and Hobbes, Jason is shown to have a fear and hatred of girls (see below), but admits to slightly liking one of his only female friends, Eileen Jacobson. He sometimes falls prey to advertising ploys, as illustrated in a 1991 arc where he becomes obsessed for a short while with The Simpsons products.
Paige Fox
Paige Fox is the middle child of the Fox family. A 14-year-old high school freshman, she is always portrayed with her hair in a ponytail. She enjoys shopping and will often demand that Peter drive her to the mall, sometimes using blackmail to force him into it. Like the rest of the family, she has interests expected of her age group. Her obsessions include fashion, pop music (particularly boy bands and, in earlier strips, Madonna), modern fads and trends, and attractive teenage boys. A running gag through the series is her huge craving for candy and other sweets. For example, the ice cream man comes to her when she rings a bell and takes out his whole stock, asking Paige "the usual?". She has been shown to make a rainbow ice cream cone not by ordering rainbow sorbet, but by ordering seven different flavors in the colors of the rainbow, stacked in a rainbow shape in the right order on top of two cones. In an August 2013 strip, she and Nicole suggest to a cupcake shop ideas for a pintcake, quartcake, 2litercake, and a galloncake, in case it ever wanted to expand beyond cupcakes. She shares her obsessions with her best friend, Nicole (see below). Paige often has sleepovers at Nicole's house to avoid Jason's pet iguana, Quincy (see below). She also appears to have an iPhone 4. Although persistent in her pursuit of a boyfriend, she has almost never dated in the strip. A running gag is nerd Morton Goldthwait's interest in her, though she openly despises him.
Paige has tried learning to cook to attract boys, but the food she makes is often inedible or burnt. One time, Paige burned all of her cookies so badly that Roger used them as a substitute for charcoal briquettes after Andy "directed him towards a stash." On another occasion, she used Diet Pepsi in baking Christmas cookies, thinking it was the same thing as baking soda. On yet another occasion, she burned the Thanksgiving turkey into charcoal with the oven's clean setting, as the cookbook instructed her to "clean" the turkey. Similar to Peppermint Patty from Peanuts, Paige often falls asleep in class, partly because she stays up late. She is slightly more diligent than Peter when it comes to schoolwork, but has had her own procrastination-related nightmares (such as school starting two weeks early, or getting a schedule with only math classes and quizzes every day). Paige often asks Jason for help with homework, usually in geometry. However, he often gives her intentionally incorrect answers or charges her money in exchange for the correct ones. Paige also has difficulties with English class, particularly when reading Shakespeare. Although she is not flabby, Paige is shown to be in poor physical shape, not participating in any outdoor activities or organized sports, and the smallest amount of physical activity wears her out. She once auditioned for the school cheerleading team in an effort to attract boys, which ended in a debacle similar to Peter's attempts to join the baseball team.
Peter Fox
Peter Fox is the eldest child of the Fox family. A 16-year-old high school junior, he is regularly shown wearing a blue/purple and white baseball cap with the letter A on it, as well as a grey hooded sweatshirt and blue jeans. Occasionally, his cap has an "H" instead of an "A", but only when he is benchwarming for the high school baseball team. He is often seen with his hat on, even at odd times such as while swimming, though he has been seen hatless in places where it would be blatantly rude to wear a ballcap indoors, such as church. He is also depicted as having an exaggerated appetite, but is frustrated that no matter how much he eats, he cannot seem to gain any weight (except in one series of strips, where he gained 50 pounds at a pizza joint's all-you-can-eat pizza night, then lost it overnight by the Saturday strip). He is also shown as a reckless driver, once claiming to have "flirted" with four-digit speeds. Peter has pulled many stunts with the family car, such as speeding, back-wheelies, deliberately spinning and fishtailing, driving so fast that zero gravity was achieved, attempting to clear a yellow light from six blocks away, almost sideswiping Andy's car, and going over the speed limit while parallel parking. Peter is a procrastinator, and one of the running gags of the strip is the many ways he dreams up to avoid doing his homework or household chores. He once bragged that he was "sick of homework from day one" in response to Paige claiming that she was starting to get sick of homework barely two months after the start of school, and he once fell two weeks behind on schoolwork only one week into the year, a fact he seemed proud of. As a result, he often crams homework and study into an all-night session (and once blasted the Hallelujah Chorus on his stereo to celebrate success). In desperation, Peter invented a high-caffeine concoction: coffee-tea (black coffee with a tea bag).
He also shows interest in sports, but is constantly shown to be inept at both football and baseball. This has been demonstrated by his name being preprinted on the list of people cut from the football team (and every other sports team, including girls' gymnastics). He once dreamed of being a contender on the TV show American Gladiators, but before the dream could be realized, he threw a muscle while flexing in front of a mirror. He is also occasionally power hungry, especially when his parents are away, as shown in an early storyline in which he forced Paige and Jason to be his servants, baking him cookies, cleaning his room, and the like, before locking them in the basement for "mutiny". Peter also holds other stereotypical interests for an adolescent male, including swimsuit models, video games, and guitar playing. He is known to be an avid Bruce Springsteen fan, once dating a girl named Susie Johnson only because she had Springsteen tickets; in the Thursday strip of that arc, Peter touched Bruce's sweat. Since some of the earlier strips, Peter has been dating a blind girl named Denise Russo.
Andy Fox
Andrea "Andy" Fox is the mother of Peter, Paige, and Jason and the wife of Roger. She is portrayed in the strip as a 42-year-old mother. According to the strip, she graduated from college prior to marrying, majoring in English; it is never revealed where she got her degree, so she may have attended a different university than her husband. While earlier strips portrayed her as a freelance writer and columnist for a local newspaper, references to her job were gradually dropped and she has mostly been portrayed as a stay-at-home mother. Andy has often clashed with her children about school. She fails to relate to Jason due to his interest in science, whereas she was more artistic, and is entirely unsympathetic to Paige and Peter's hatred of school, being under the impression that, given her own love of education, all her kids should love it too. Her locking horns with the kids over school is not without justification, as Peter and Paige are often shown as "go-nowheres", being lazy in school and rarely involved in extracurricular activities. Even Jason has flubbed his schoolwork on rare occasions, causing Andy to come down on him. Andy often prepares exaggerated vegetarian or vegan meals for her family (most often tofu, eggplant, spinach, beets, etc.), much to their chagrin, but she continues to try to get them to eat what she believes to be healthier meals. She also imposes other household economies on her family, such as lowering the thermostat to below-average temperatures in an attempt to lower the heating bill, which leads to complaints from the kids (primarily Paige), who wish for it to be normal. She often criticizes her husband for his love of beer and meat, and her children for what she sees as their bad habits, such as procrastination and use of improper grammar. She has been known to impose her will and beliefs rather harshly on her family, including Roger, such as occasions when she refused to buy coffee because she believed him to be "addicted" to it, and instructed Jason to play video games to prevent Roger from watching Super Bowl pre-game programming. In most cases, she has an intense hatred of video games, often trying to get her kids disinterested in gaming. Her methods have ranged from throwing a console across the room during a particularly violent rant to confiscating all games save for those approved by mother activist groups.
This has also shown her to be something of a hypocrite, as her attempt to use a Momvo to restrict her children's shows ended when it wouldn't record her soap operas. Andy has been shown to be the most anger-prone member of the Fox family, although her anger never escalates to hurting others as Paige and Peter have been shown to do. She is easily the most high-strung member of the family and can start screeching about anything in an instant. The source of Andy's anger is never specified, but it is hinted to stem from a combination of factors: her strained relationship with her mother, her children's refusal to do things her way, and the fact that her adult life has been poorer in comparison to the prosperous times she enjoyed as a child. Her father had a job similar to Roger's but a far more stable and rewarding career, and she shows annoyance that Roger's career is not progressing as well as her father's had when he was Roger's age. Andy has also been shown to be obsessed with bills, as she always leaves the thermostat too low and subscribes to only the most basic necessities for a household. A running joke has been that during the winter the house is cold enough to freeze things like hot drinks, the steam coming from Roger's coffee, and (literally) the family computer; in one strip, she had the thermostat set below -297 degrees Fahrenheit. This is contrasted with the spendthrift attitude of the kids and their locking horns with Andy over it. Arguably because his pay covers the bills, Roger is also careful with expenses, but shows nowhere near the parsimony of his wife. Andy has also been shown to obsess over certain fads, such as "Bitty Babies" (a parody of Beanie Babies), the movie Titanic, and the Nintendo DS game Nintendogs. Although Andy has apparently been disappointed in Roger's lot in life, she is supportive of his working to put a roof over their heads. A more serious story arc in FoxTrot came when Roger was tasked by J.P. Pembrook with preparing a presentation to the board of directors to increase Pembrook's pay package by $300,000, at a time when many of Roger's coworkers had been laid off due to low company profits. Andy is sympathetic to Roger's dilemma, and even suggests he should denounce Pembrook to his face.
Roger Fox
Roger Fox is the father of Peter, Paige, and Jason and the husband of Andy. According to the strip, he is 45 years old and was born in Chicago, Illinois. Roger has also stated that he majored in English studies at the fictional Willot College. He works at Pembrook and Associates. Roger's occupation is an unspecified white-collar office job, although his coworkers and his boss, Pembrook, have appeared in the strip. His hobbies include golf, camping, and chess, though he has almost no talent at any of them (or virtually anything else he attempts). When he is golfing, he often hits the ball wildly and completely off target; when camping, he always messes something up; and in chess, he is defeated in mere seconds. In one strip, he flooded the house trying to use the dishwasher. He often tries to involve his family in his interests, usually by taking them on vacations. He is also portrayed as being highly out of step with modern technology, especially computers. Many strips also show that he is overweight, balding and in poor physical condition, to the point that his car's tires sink just from him sitting inside it. Despite his wife's attempts to get him to eat healthy foods and exercise, he rarely does so.
A running gag in the strip depicts the family grill shooting a giant pillar of fire into the sky whenever Roger tries to light it, typically burning him in the process. In one strip, the pillar reached Mars and destroyed a rover; in response, NASA called demanding money for the damages (Roger implied this had happened before). He also finds it extremely difficult to fully wake up, having been depicted as requiring a large amount of coffee to start each day. Despite being rather clueless at times, he can be clever in some matters. He fares well enough as a family man and clearly cares for his wife and children.
Other recurring characters
Roger's coworkers
J.P. Pembrook is Roger's boss. In the strip, he always appears seated at his desk, with only his hands showing (and almost always folded). Pembrook is a stereotypical boss, showing little care for his company or his workers, and is extremely greedy. In one strip, he has Roger act as a clown for his son's 5th birthday party, resulting in Andy's anger at her husband being used in this manner. In another, slightly more serious storyline, Pembrook admits to Roger that the company has been in debt and laying off multiple workers; despite all this, he has Roger convince the board to increase his CEO pay by $300,000. Roger, conflicted over the stress at home, is pressured by an enraged Andy to tell Pembrook off.
Fred is Roger's best friend and coworker, and they are often shown playing golf together. Fred seems to be the better golfer, as seen in one strip where Fred had a score of 81 while Roger had a score of 180. Fred and the other workers like having Roger around since he often causes the office's computer network to crash, thereby preventing them from having to work. However, Fred was dropped from the strip after a storyline in which Pembrook announced mass layoffs while privately telling Roger he was being kept on, which suggests Fred was among the workers who lost their jobs.
Andy's contacts
Katie O'Dell is the baby daughter of one of Andy's friends, Margaret O'Dell. Paige sometimes babysits for her, usually with disastrous results. One time, Paige fell asleep on the job, allowing Katie to find scissors and chop up her brand-new dress while wearing it. In another case, Paige watched an adult show, Jerzy Spaniel (a parody of Jerry Springer), with Katie, who picked up a swear word and repeated it over and over. Another time, Paige gave her cake, making her exceedingly hyper. On yet another occasion, Katie had been punished and could not watch TV or videos for misbehaving more than usual that morning, leading her to repeatedly yell "Blue's Clues!" for hours on end; Paige described her as "a broken nonstop tape recorder" in the Saturday strip.
Margaret O'Dell is one of Andy's friends in a book club and is the mother of Katie O'Dell. She is known to be very specific about details (such as phone numbers or Katie's name). She usually pays Paige enough to make Paige happy to babysit again.
Peter's classmates and teachers
Denise Russo is Peter's blind girlfriend, whom he met at school in the strip's first year. They are very close and kiss constantly, although they do have their share of fights. Peter once attempted to break up with her so he could date other girls and "develop socially," but had a change of heart when he realized dating is not a rite of passage and there is no "appropriate" romance; he later admitted he had been subject to peer pressure and media stereotypes.
Denise has a manipulative streak and knows exactly how to get what she wants from Peter. In several strips, Paige and Jason blackmail him with photographs of the two together. Since the strip's move to Sundays-only publishing, Denise has hardly been mentioned or shown.
Steve Riley is Peter's best friend. Peter and Steve are often seen watching sports, or playing video games or guitar together. Steve works at Luigi's Pizza. He often lets Peter borrow his guitars and amps, which usually ends with Peter destroying most of the objects in the house. Paige once noted that Steve was "cute", much to Peter's chagrin. Despite his generally straight-laced nature, Steve once tried to sponge answers off a test Peter had already taken, and Peter called him out for his lousy ways.
Coach is Peter's unnamed baseball and football coach. For reasons unknown (other than Peter's implied lack of athletic skill), he always makes Peter sit on the bench during games, only using Peter in an emergency or when he needs a new bag of sunflower seeds. He is the opposite of a good coach, showing little interest in improving skills and only thinking of the "easy way out" to win games. He is also portrayed with a fat belly that hangs over his pants, hardly the image of an athlete. Later coaches have shown more competence, but share the same low opinion of Peter's athletic skill.
The Physics Teacher is an unnamed man who teaches physics at Peter's high school. He wears glasses and a full beard. He seems to have little patience for the laziness of Peter and Steve, but will sometimes be helpful if he realizes students are making an honest effort. Paige has sometimes been shown as a student in his classes.
The Theater Manager is Peter's boss at the movie theater where he has a summer job.
Paige's classmates and teachers
Nicole is Paige's best friend. She is also a high school freshman with similar interests. Paige and Nicole sometimes go shopping together. The two once broke off their friendship after Nicole was able to get a date for the prom and Paige was not, but they soon reconciled. Nicole once had a crush on Peter, but that disappeared after she learned of his slovenly persona.
Morton Goldthwait is a stereotypical nerd who attends school with Paige, though he is half her size. Although Paige mostly ignores him, he has a crush on her. In a 1997 storyline, he was also Jason's summer camp instructor, running his part of the camp with an iron fist and insisting that he be addressed as "Your glorious and all-powerful high eminence, sir." Jason had planned to set him up on a date with Paige as revenge, until he discovered that Morton actually liked Paige. Jason has also taken advantage of Morton's crush on Paige by arranging for him to become a part of her friends page on Facebook and sending him a "gushy love note" on her behalf. Despite Paige's distaste for Morton, she seems at least somewhat flattered by his crush: on one occasion, when Morton asked another girl to a school dance because he figured Paige would turn him down, Paige reacted jealously. It is mentioned that he took the SATs and was angry about getting "only" a 1590 out of a possible 1600. Jason considers him a stud, which of course infuriates Paige. Morton has been a part of many antics involving school activities; in one series of strips, he formed the varsity e-football team, which plays video game football.
Dr. Ting is Paige's biology teacher at school.
While generally kind and patient, he has been shown to have a sadistic streak, as on an occasion when he assigned his class a test that covered forty-six chapters.
Miss Rockbottom is Paige's physical education teacher. Despite appearing to be somewhat overweight herself, she is depicted as very strict, once assigning more than 500 laps around the school track.
Jason's classmates, friends and teachers
Marcus Jones is Jason's best friend, and the only recurring African-American character in the strip. While similar to Jason in character, Marcus tends to be a little more sensible. Jason and Marcus usually play together, launching model rockets, flying kites, playing video games, or engaging in other activities, such as harassing Paige. Marcus has four sisters, named Doreen, Lisa, Cybil, and Lana. He is always happy to take care of Quincy when Jason and the family go on vacation, so that he can use the iguana to mess with his sisters; he also has a hamster.
Eileen Jacobson is one of Jason's classmates. She attempts to converse with Jason regularly, only for him to show discomfort around her. Their relationship appears to be a mixture of hatred and grudging respect (in addition to Jason's fear of girls in general), though there may be romantic subtexts to it. In one extended 1998 storyline, Eileen managed to trick Jason into admitting that he did sort of "like" her, leading Jason to spend weeks trying to figure out how to undo his "mistake" through time travel. At this point they even begin dating, though this is seemingly short-lived when Jason tries to keep the relationship confidential so that his friends won't shun him. In another story arc, from 1996, Jason accidentally sends Eileen a love note that was meant for Andy for Mother's Day, and it takes several strips to correct the mistake. Even Marcus thought Jason had caught Eileen's "cooties," which Jason confesses to at the end of the arc. It is noted by Andy that Eileen has a "loud laugh". Eileen seems to have a knack for getting Jason to do whatever she wants him to do, as evidenced by a series of strips in which Eileen makes Jason be her partner on a school field trip in exchange for a Charizard Pokémon card. She is also a fan of the Harry Potter franchise, and in spite of Jason attempting to compete, she manages to get him interested as well.
Miss O'Malley is Jason's teacher. At first, Jason disliked her because, unlike his former teacher, Miss O'Malley encouraged Jason's creativity (such as marking the sites of dinosaur bone discoveries on a map when he was only required to name the continents), but she eventually came to realize how big a problem Jason was. She sometimes forces Jason to stay after school and write sentences on the blackboard after he misbehaves. A common gag is that Jason tries to find excuses to stay at school when summer begins, despite Miss O'Malley's reminders for him to go home.
Phoebe and Eugene Wu
Phoebe is a friend of Eileen's, whom Jason and Marcus meet while attending science camp one summer. She and her younger brother, Eugene, are Chinese-American. While Phoebe appears to be a relatively normal girl, Eugene is an obnoxious braggart who does not get along well with Jason and Marcus and is regarded even by his big sister (who, it is mentioned, has a higher IQ than he does) as an object of derision.
Phoebe and Eileen are rivals of Jason and Marcus at first, but the rivalry ends when Jason and Marcus cast the votes for Phoebe and Eileen's science project to win over Eugene's in the camp's science fair, and the girls initiate Marcus and a reluctant Jason into their "super-secret friendship club." The Wu siblings return in a second extended storyline a few years later involving the theft of Phoebe's camp journal. The culprit turned out to be Eugene, who had hoped to foster discord among the "friendship club" and thus bring about its disbandment; however, Phoebe punishes him by incorporating a new friendship club with a non-disbandment clause. They have since made occasional guest appearances.
Other
Quincy is Jason's pet iguana. Jason regularly uses him to tease Paige, either by waving the iguana in front of her, putting him in the bathtub while she is taking a bath, or throwing him on her. Other times he will let Quincy into her room so he can chew up her belongings and vomit them back up. At other times, Quincy just sits on Jason's head, and Jason sometimes dresses Quincy and himself up as various fictional characters, such as The Lone Iguana, Quincy John Adams, Quincynook, and QUINCE-E. Quincy's species has never been revealed, but he is most likely a juvenile green iguana (Iguana iguana). He has talked only twice. The first time was in a strip where Jason built a Spider-Man web launcher and had trouble with it; when his mother said "Comic books aren't real" and Jason denied it, Quincy replied, "Listen to your mother, Jason" as a play on what Andy had said. The second time was at the end of a dream in which Peter is in the Odyssey and discovers, to his disgust, that Quincy is Penelope; in that strip, Quincy said "Hold me. Kiss me. Love me." To Jason's surprise, his teacher Miss O'Malley actually thinks Quincy is adorable.
iFruit is a parody of the iMac computer. The iFruit is the Foxes' talking family computer, which, upon being turned on, opens with the phrase, "Welcome to iFruit. Hug me". It first appears in a 1999 storyline in which Andy buys it on a computer shopping spree (after Roger, in an earlier storyline, sold the old computer in a yard sale upon losing a great deal of money day trading stocks), but only because it was "darling," much to the dismay of Jason, who comments, "Geekdom is dead." Later in the series, however, the iFruit starts growing on Jason as Andy keeps buying its peripherals because she finds them "adorable," unlike in the case of their previous computer. Later strips show that it has been upgraded to resemble recent flat-screen iMac computers. The old model was shown in a recent strip in the Foxes' basement, where it is used in memory of Steve Jobs (following his death) via daylight saving time, with Andy stating that "you can't turn the clock back", and Jason replying that he just wants to pretend.
Pierre is an imaginary character from the strip's early years. Paige had recurring dreams of herself in a relationship with the French prince; these dreams are typically influenced by things happening in the strip's real world while Paige is asleep (Jason messing with her, Andy attempting to wake her up, etc.). In dreams where Paige and Pierre kiss, one can typically expect to find, in reality, Jason having Quincy kiss her. While most Pierre dreams take place while Paige is at home, school has influenced the dreams on more than one occasion.
Fauntleroy is the tiny pet dog of one of the Foxes' neighbors.
He is the focus of some arcs involving Peter, who is occasionally hired to take care of the dog despite its tendency to bite.
Other family
Andy's mother (the maternal grandmother of Paige, Peter and Jason) has occasionally appeared in the strip as well. The grandmother is often referred to as "perfect"; as a result, Andy often feels inferior around her (according to the strip, this resentment started around seventh grade, as Andy's friends liked her mom more than her), and tries to prove herself by competing against her mother (usually by trying to cook a meal as well as her mother can). It has been mentioned that her cooking has won awards and that Martha Stewart has tried to get one of her recipes. In addition, her mother supplied Peter with a music CD called Lance and the Boils, out of what was strongly implied to be revenge on Andy for her choice of music as a teenager. She has also given the grandchildren gifts more akin to their interests, such as a Nintendo Wii, which Andy greatly disdains, wanting her children to be more like she is; this implies the grandmother is more accepting of the kids for who they are.
Uncle Ralph is Roger's brother, an unseen character only mentioned during the early years of the strip, when the Fox family would go to "Uncle Ralph's Cabin" for, at one point, the ninth year in a row.
References
Fictional families Fictional lizards
10089291
https://en.wikipedia.org/wiki/Comparison%20of%20DVR%20software%20packages
Comparison of DVR software packages
This is a comparison of digital video recorder (DVR), also known as personal video recorder (PVR), software packages. Note: this may be considered a comparison of DVB software; not all listed packages have recording capabilities. General information Basic general information for popular DVR software packages - not all actually record. Features Information about which common and prominent DVR features are implemented natively (without third-party add-ons unless stated otherwise). Video format support Information about which video codecs are implemented natively (without third-party add-ons) in the PVRs. Network support Each feature is described in the context of computer-to-computer interaction. All features must be available after the default install; otherwise the feature needs a footnote. 1 Yes, with a registry change 2 Yes, with a retail third-party plugin 3 Yes, with a free supported third-party plugin 4 Yes, with a free unsupported third-party plugin 5 Yes, with the free third-party software Web Guide 4 6 Yes, with add-on software called DVBLink Server 7 Yes, using symlinks or by adding folders in settings TV tuner hardware (TV gateway, network tuner, TV servers) DVRs require TV tuner cards to receive signals. Many DVRs, as seen above, can use multiple tuners. HDHomeRun has CableCARD models (HDHomeRun Prime) and over-the-air models (HDHomeRun Connect) that are networked TV tuners. See also List of free television software Comparison of video player software Home cinema Home theater PC (HTPC) Digital video recorder Hard disk recorder DVD recorder Quiet PC Media server Notes External links FLOSS Media Centers Comparison Chart PVR software packages Television technology Television time shifting technology
69228240
https://en.wikipedia.org/wiki/List%20of%20Jupiter%20trojans%20%28Greek%20camp%29%20%28600001%E2%80%93700000%29
List of Jupiter trojans (Greek camp) (600001–700000)
This is a partial list of Jupiter's trojans (60° ahead of Jupiter, i.e. in the Greek camp) with numbers 600001–700000. 600001–700000 This list contains 183 objects sorted in numerical order. References Jupiter Trojans (Greek Camp)
2179138
https://en.wikipedia.org/wiki/Daphne%20Koller
Daphne Koller
Daphne Koller (born August 27, 1968) is an Israeli-American computer scientist. She was a professor in the department of computer science at Stanford University and a MacArthur Foundation fellowship recipient. She is one of the founders of Coursera, an online education platform. Her general research area is artificial intelligence and its applications in the biomedical sciences. Koller was featured in a 2004 article by MIT Technology Review titled "10 Emerging Technologies That Will Change Your World" concerning the topic of Bayesian machine learning. Education Koller received a bachelor's degree from the Hebrew University of Jerusalem in 1985, at the age of 17, and a master's degree from the same institution in 1986, at the age of 18. She completed her PhD at Stanford in 1993 under the supervision of Joseph Halpern. Career and research After her PhD, Koller did postdoctoral research at the University of California, Berkeley from 1993 to 1995 under Stuart J. Russell, and joined the faculty of the Stanford University computer science department in 1995. She was named a MacArthur Fellow in 2004. She was elected a member of the National Academy of Engineering in 2011 for contributions to representation, inference, and learning in probabilistic models with applications to robotics, vision, and biology. She was also elected a fellow of the American Academy of Arts and Sciences in 2014. In April 2008, Koller was awarded the first-ever $150,000 ACM-Infosys Foundation Award in the Computing Sciences. She and Andrew Ng, a fellow Stanford computer science professor in the AI lab, founded Coursera in 2012. She served as co-CEO with Ng, and then as president of Coursera. She was recognized for her contributions to online education by being named one of Newsweek's 10 Most Important People in 2010, Time magazine's 100 Most Influential People in 2012, and Fast Company's Most Creative People in 2014. She left Coursera in 2016 to become chief computing officer at Calico. In 2018, she left Calico to start and lead Insitro, a drug discovery startup. Koller is primarily interested in representation, inference, learning, and decision making, with a focus on applications to computer vision and computational biology. Along with Suchi Saria and Anna Penn of Stanford University, Koller developed PhysiScore, which uses various data elements to predict whether premature babies are likely to have health issues. In 2009, she published a textbook on probabilistic graphical models together with Nir Friedman. She offered a free online course on the subject starting in February 2012. Her former doctoral students include Lise Getoor, Mehran Sahami, Suchi Saria, Eran Segal, and Ben Taskar. Honors and awards Her honors and awards include: 1994: Arthur Samuel Thesis Award 1996: Sloan Foundation Faculty Fellowship 1998: Office of Naval Research Young Investigator Award 1999: Presidential Early Career Award for Scientists and Engineers (PECASE) 2001: IJCAI Computers and Thought Award 2003: Cox Medal at Stanford 2004: MacArthur Fellow 2004: Oswald G.
Villard Fellow for Undergraduate Teaching at Stanford University 2007: ACM Prize in Computing 2008: ACM/Infosys Award 2010: Newsweek's 10 Most Important People 2010: Huffington Post 100 Game Changers 2011: Elected to the National Academy of Engineering 2013: Time magazine's 100 Most Influential People 2014: Elected fellow of the American Academy of Arts and Sciences 2014: Fast Company's Most Creative People in Business 2017: Elected ISCB Fellow by the International Society for Computational Biology (ISCB) 2019: ACM-AAAI Allen Newell Award for contributions with significant breadth across computing, or that bridge computer science and other disciplines Books Koller's book authorships include Probabilistic Graphical Models: Principles and Techniques by Daphne Koller and Nir Friedman. Koller also contributed one chapter to the 2018 book Architects of Intelligence: The Truth About AI from the People Building It by the American futurist Martin Ford. Personal life Koller is married to Dan Avida, a venture capitalist at Opus Capital. References 1968 births Living people MacArthur Fellows Artificial intelligence researchers Stanford University alumni American roboticists Women roboticists Fellows of the Association for the Advancement of Artificial Intelligence Stanford University School of Engineering faculty American women computer scientists Women statisticians 20th-century American Jews Members of the United States National Academy of Engineering American bioinformaticians Fellows of the International Society for Computational Biology Fellows of the American Academy of Arts and Sciences Recipients of the ACM Prize in Computing Israeli Jews 21st-century American Jews 20th-century American women 21st-century American women
16558949
https://en.wikipedia.org/wiki/Sinking%20of%20Rochdale%20and%20Prince%20of%20Wales
Sinking of Rochdale and Prince of Wales
Rochdale and Prince of Wales were two troop ships that sank in Dublin Bay in 1807. Dublin Port had long been dangerous because it was accessible only at high tide and was subject to sudden storms. Many ships were lost while waiting for the tide, but little was done until this disaster. The impact of 400 bodies being washed up on an urban shore had an effect on public and official opinion. This event was the impetus for the building of Dún Laoghaire Harbour. On 19 November 1807 several ships left Dublin carrying troops bound for the Napoleonic wars. The next day, two ships, the brig Rochdale and H.M. Packet ship Prince of Wales, having been caught in gale-force winds and heavy snow, were lost. Troops on Prince of Wales may have been deliberately locked below deck while the ship's captain and crew escaped. No lifeboat was launched. There was looting. Maritime background This tragedy was the impetus for the building of Dún Laoghaire Harbour, which was initially called "Dunleary", then "Kingstown", and is now "Dún Laoghaire". Dublin Port was hampered by a sandbar, which meant that ships could enter or leave only at high tide. A solution, the building of the North Bull Wall, had been identified by Vice-Admiral William Bligh in 1800. If there was a storm, a ship would have to ride it out in the open sea, waiting for the tide. "The bay of Dublin has perhaps been more fatal to seamen and ships than any in the world, for a ship once caught in it in a gale of wind from ENE to SSE must ride it out at anchors or go on shore, and from the nature of that shore the whole of the crews almost invariably have perished." – Captain Charles Malcolm of George IV's royal yacht. A pier had been built at Dún Laoghaire, now known as the "coal harbour", in 1767, but it had rapidly silted up. The early nineteenth century was unusually stormy, and Dublin Bay was notoriously treacherous for boats; the remains of at least 600 vessels rest at the bottom of the bay. On 19 November 1807, the sea began to swell. Wind speed increased to hurricane force. Sleet and snow fell with such intensity that visibility was reduced to zero; the crews may not have realised how close they were to shore. The east wind blew the ships back towards the shore. While Rochdale and Prince of Wales were lost, another troop transport, Lark, which left earlier, safely reached Holyhead. Other ships were lost at that time: a collier was lost at the South Bull (outside Dublin Port), and the inbound Liverpool packet was lost off Bray. Military background In July 1807, following military successes, Napoleon signed the Treaties of Tilsit with Russia and Prussia, leaving him master of central and eastern Europe. He then turned his attention westward to Spain and Portugal. The British government was alarmed. Soldiers were recruited to defend England's coast and to intervene in Spain (see Peninsular War) under Wellington. Fear of an invasion of Ireland was further addressed by the building of Martello towers on the southern and eastern coasts and watchtowers on the other coastlines. French troops had invaded Ireland on 22 August 1798, under General Humbert, establishing the short-lived Republic of Connacht. On that occasion the Mayo Militia was ingloriously defeated in what became known as the Races of Castlebar. In 1807 many members of the North Mayo and South Mayo Militias volunteered and were among those lost on the Prince of Wales.
They joined the 97th Regiment of Foot, the Minorca Regiment, which was known as the "Queen's Own Germans" as it was initially formed from Swiss and German mercenaries. (In 1816, the 97th was renumbered as the 96th.) The North Cork Militia was active in suppressing the Irish Rebellion of 1798. They suffered a defeat at the Battle of Oulart Hill. In 1807 so many members of the North Cork Militia volunteered that, while most joined the 18th Regiment of Foot, they had to be dispersed over 25 different regiments. They joined the British Army for a shilling a week and three meals a day – an alternative to terrible poverty. HM Packet ship Prince of Wales HM Packet ship Prince of Wales was a sloop of 103 tons with a draught of 11 feet. She was built in Parkgate, Cheshire in 1787. She sailed under Captain Robert Jones of Liverpool, carrying the 97th Regiment, on 19 November. The next day she had progressed only to a point opposite Bray Head, a matter of a few miles. She cast anchor, but the sea was so violent that the anchor failed to hold; she was blown back past Dún Laoghaire. Her sails were completely torn. She was driven onto rocks at Blackrock. There was just one longboat aboard. Captain Jones, nine seamen, two women with children (family members), and two soldiers escaped on this lifeboat. They did not know where they were, or how close they were to the shore. They rowed parallel to the shore until one of the sailors fell overboard and found that he was standing in shallow water. It was alleged that the troops were locked below deck, the ladder withdrawn, and the hatch battened down. All 120 soldiers drowned in the storm and are interred in Merrion Cemetery, not far from where the incident occurred. Rochdale Rochdale was larger than Prince of Wales. She was built in 1797; she was a brig of 135 tons and a ten-foot draught. She sailed under Captain Hodgson. She was driven along a similar path to the Prince of Wales. She cast anchors, but the cables snapped. On shore the cries of the terrified passengers could be heard. As she swept past Dún Laoghaire, soldiers on board fired their muskets to attract attention. At Salthill, would-be rescuers had to shelter from the gunfire. Off Blackrock, blue lights were seen and gunfire heard. She struck the rocks at the Seapoint Martello tower. A twelve-foot plank would have rescued them, but all 265 on board, including 42 women and 29 children, were lost. Their bodies were unrecognisable, having been mutilated by the sea and the rocks. Most of those who perished are interred in Carrickbrennan Churchyard in Monkstown, with a memorial. Lifeboats Although there were lifeboats stationed at Clontarf, Bullock, Howth, Dún Laoghaire and Islandbridge, none were launched. Looting There was looting of the ships and the items washed ashore. An immense amount of baggage was washed ashore and troops were put on guard. Looters gathered, as was usual at the time, and one from Dún Laoghaire drowned. The whole weekend was spent collecting the bodies for burial. The regimental silver plate of the Queen's Own Germans was lost. Rewards were offered. Six persons were convicted and sent to Kilmainham Gaol for plundering bodies or articles. Murder charge Captain Robert Jones and his crew survived in the only lifeboat. Two soldiers also survived. The captain was accused of murder. The captain said that the lifeboat was not launched; rather, it was cast into the sea by the storm, so he ordered those on deck to get into it.
Anthony McIntyre of the 18th Royal Irish said that the captain launched the lifeboat and that the ladder from the hold to the deck was withdrawn. Andrew Boyle, also of the 18th Royal Irish, spoke through an Irish interpreter, saying that the ladder was not removed because "persons below held on to it very tightly". The verdict was "Casual death by shipwreck". The case was dismissed. Dún Laoghaire Harbour The Irish Parliament having been abolished, from 1 January 1801 Irish members of parliament had to travel to the House of Commons of the United Kingdom. That meant frequent travel across the Irish Sea. A campaign to build a harbour at Dún Laoghaire was already under way. The person chiefly responsible was a resident Norwegian master mariner and shipbroker named Richard Toucher, who worked tirelessly campaigning to bring about the construction of a safe port. His Asylum Harbour was conceived as a refuge for sailing ships in trouble in Dublin Bay; the term "asylum" in this context means a harbour where ships can seek refuge from a storm. After this tragedy, the campaign received the support it required. Construction commenced on a packet harbour at Howth, which was completed in 1809. Travelling from Dublin to Howth meant travelling through the "badlands of Sutton", where coaches were liable to be raided. Howth was a shallow harbour, and as larger ships were built, in particular with the introduction of steam packets from 1819, it became unsuitable; its rocky bottom precluded any dredging. In 1815, eight Harbour Commissioners were appointed to supervise the building of a new harbour at Dún Laoghaire. George IV visited in 1821, arriving at Howth and departing from Dún Laoghaire. He renamed the town "Kingstown". The name reverted to Dún Laoghaire in 1921. Reading Bourke, Edward J. The Sinking of the Rochdale and the Prince of Wales. Bourke, Edward J. Shipwrecks of the Irish Coast. Blacker, Rev. Beaver. Brief Sketches of the Parishes of Booterstown and Donnybrook (Dublin, 1860). de Courcy Ireland, John. History of Dún Laoghaire Harbour (De Burca Books, 2001). Scott Roberts, Peter. The Ancestry, Life and Times of Commander John Macgregor Skinner R.N. (Holyhead Maritime Museum, 2007). References Maritime incidents in Ireland Maritime incidents in 1807 1807 in the United Kingdom 1807 in Ireland Age of Sail ships of England Maritime history of Ireland Shipwrecks in the Irish Sea Looting Shipwrecks of Ireland November 1807 events
2242975
https://en.wikipedia.org/wiki/Wavelet%20packet%20decomposition
Wavelet packet decomposition
Wavelet packet decomposition (WPD), originally known as optimal subband tree structuring (SB-TS) and sometimes called simply wavelet packets or the subband tree, is a wavelet transform in which the discrete-time (sampled) signal is passed through more filters than in the discrete wavelet transform (DWT). Introduction In the DWT, each level is calculated by passing only the previous level's approximation coefficients (cA_j) through discrete-time low-pass and high-pass quadrature mirror filters. In the WPD, however, both the detail coefficients (cD_j in the 1-D case; cH_j, cV_j, cD_j in the 2-D case) and the approximation coefficients are decomposed, creating a full binary tree. For n levels of decomposition the WPD produces 2^n different sets of coefficients (or nodes), as opposed to (n + 1) sets for the DWT. However, due to the downsampling process the overall number of coefficients is still the same and there is no redundancy. (A short code sketch of this full-tree decomposition, using a common open-source implementation, follows this entry.) From the point of view of compression, the standard wavelet transform may not produce the best result, since it is limited to wavelet bases that increase by a power of two towards the low frequencies. It could be that another combination of bases produces a more desirable representation for a particular signal. The best basis algorithm by Coifman and Wickerhauser finds a set of bases that provides the most desirable representation of the data relative to a particular cost function (e.g. entropy). There have been relevant studies in the signal processing and communications fields addressing the selection of subband trees (orthogonal bases) of various kinds, e.g. regular, dyadic and irregular, with respect to performance metrics of interest, including energy compaction (entropy), subband correlations and others. Discrete wavelet transform theory (continuous in the variable(s)) offers an approximation to the transform of discrete (sampled) signals. In contrast, discrete subband transform theory provides a perfect representation of discrete signals. Gallery Applications Wavelet packets have been successfully applied in preclinical diagnosis. References External links An implementation of wavelet packet decomposition can be found in the MATLAB wavelet toolbox. An implementation for R can be found in the wavethresh package. An illustration and implementation of wavelet packets, along with its code in C++, can be found online. JWave: an implementation in Java for 1-D and 2-D wavelet packets using Haar, Daubechies, Coiflet, and Legendre wavelets. Wavelets Signal processing
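A minimal sketch of the full-tree decomposition described above, using the open-source PyWavelets library for Python; the toy signal, the wavelet choice ('db2'), and the depth are illustrative assumptions, not details from the article:

```python
import numpy as np
import pywt  # PyWavelets

# Toy signal: 1024 samples mixing a low and a high frequency.
t = np.linspace(0, 1, 1024, endpoint=False)
x = np.sin(2 * np.pi * 16 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Full wavelet packet tree of depth 3: both approximation ('a') and
# detail ('d') branches are split at every level, giving 2**3 = 8 leaf
# nodes, versus 3 + 1 = 4 coefficient sets for a DWT of the same depth.
wp = pywt.WaveletPacket(data=x, wavelet="db2", mode="periodization", maxlevel=3)
leaves = wp.get_level(3, order="freq")  # leaf nodes ordered by frequency band
print([node.path for node in leaves])   # eight 3-letter paths such as 'aaa', 'aad', ...

# With periodized boundary handling the decomposition is non-redundant:
# the leaves together hold exactly as many coefficients as the input.
print(sum(node.data.size for node in leaves))  # 1024
```

PyWavelets also exposes the inverse transform (wp.reconstruct()), so a subtree pruned by a best-basis criterion such as minimum entropy can be inverted the same way.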
347832
https://en.wikipedia.org/wiki/Fifth%20Generation%20Computer%20Systems
Fifth Generation Computer Systems
The Fifth Generation Computer Systems (FGCS) was an initiative by Japan's Ministry of International Trade and Industry (MITI), begun in 1982, to create computers using massively parallel computing and logic programming. It was to be the result of a government/industry research project in Japan during the 1980s. It aimed to create an "epoch-making computer" with supercomputer-like performance and to provide a platform for future developments in artificial intelligence. There was also an unrelated Soviet project likewise named a fifth-generation computer (see Kronos (computer)). Ehud Shapiro, in his "Trip Report" paper (which focused the FGCS project on concurrent logic programming as the software foundation for the project), captured the rationale and motivations driving this project: "As part of Japan's effort to become a leader in the computer industry, the Institute for New Generation Computer Technology has launched a revolutionary ten-year plan for the development of large computer systems which will be applicable to knowledge information processing systems. These Fifth Generation computers will be built around the concepts of logic programming. In order to refute the accusation that Japan exploits knowledge from abroad without contributing any of its own, this project will stimulate original research and will make its results available to the international research community." The term "fifth generation" was intended to convey the system as being advanced. In the history of computing hardware, computers using vacuum tubes were called the first generation; transistors and diodes, the second; integrated circuits, the third; and those using microprocessors, the fourth. Whereas previous computer generations had focused on increasing the number of logic elements in a single CPU, the fifth generation, it was widely believed at the time, would instead turn to massive numbers of CPUs for added performance. The project was to create the computer over a ten-year period, after which MITI would consider investing in a new "sixth generation" project. Opinions about its outcome are divided: either it was a failure, or it was ahead of its time. Information From the late 1960s until the early 1970s, there was much talk about "generations" of computer hardware, usually "three generations". First generation: Thermionic vacuum tubes. Mid-1940s. IBM pioneered the arrangement of vacuum tubes in pluggable modules. The IBM 650 was a first-generation computer. Second generation: Transistors. 1956. The era of miniaturization begins. Transistors are much smaller than vacuum tubes, draw less power, and generate less heat. Discrete transistors are soldered to circuit boards, with interconnections accomplished by stencil-screened conductive patterns on the reverse side. The IBM 7090 was a second-generation computer. Third generation: Integrated circuits (silicon chips containing multiple transistors). 1964. A pioneering example is the ACPX module used in the IBM 360/91, which, by stacking layers of silicon over a ceramic substrate, accommodated over 20 transistors per chip; the chips could be packed together onto a circuit board to achieve unprecedented logic densities. The IBM 360/91 was a hybrid second- and third-generation computer.
Omitted from this taxonomy is the "zeroth-generation" computer based on metal gears (such as the IBM 407) or mechanical relays (such as the Mark I), and the post-third-generation computers based on Very Large Scale Integration (VLSI) circuits. There was also a parallel set of generations for software: First generation: Machine language. Second generation: Low-level programming languages such as assembly language. Third generation: Structured high-level programming languages such as C, COBOL and FORTRAN. Fourth generation: "Non-procedural" high-level programming languages (such as object-oriented languages). Throughout these multiple generations up to the 1970s, Japan built computers following U.S. and British leads. In the mid-1970s, the Ministry of International Trade and Industry stopped following western leads and started looking into the future of computing on a small scale. It asked the Japan Information Processing Development Center (JIPDEC) to indicate a number of future directions, and in 1979 offered a three-year contract to carry out more in-depth studies along with industry and academia. It was during this period that the term "fifth-generation computer" started to be used. Prior to the 1970s, MITI guidance had successes such as an improved steel industry, the creation of the oil supertanker, the automotive industry, consumer electronics, and computer memory. MITI decided that the future was going to be information technology. However, the Japanese language, particularly in its written form, presented and still presents obstacles for computers. As a result of these hurdles, MITI held a conference to seek assistance from experts. The primary fields for investigation in this initial project were: Inference computer technologies for knowledge processing Computer technologies to process large-scale databases and knowledge bases High-performance workstations Distributed functional computer technologies Supercomputers for scientific calculation The project imagined an "epoch-making computer" with supercomputer-like performance using massively parallel computing/processing. The aim was to build parallel computers for artificial intelligence applications using concurrent logic programming. The FGCS project and its findings contributed greatly to the development of the concurrent logic programming field. The target defined by the FGCS project was to develop "knowledge information processing systems" (roughly meaning applied artificial intelligence). The chosen tool to implement this goal was logic programming. The logic programming approach was characterized by Maarten Van Emden, one of its founders, as: The use of logic to express information in a computer. The use of logic to present problems to a computer. The use of logical inference to solve these problems. More technically, it can be summed up in two equations: Program = Set of axioms. Computation = Proof of a statement from axioms. The axioms typically used are universal axioms of a restricted form, called Horn clauses or definite clauses. The statement proved in a computation is an existential statement. The proof is constructive and provides values for the existentially quantified variables; these values constitute the output of the computation. Logic programming was seen as something that would unify various strands of computer science (software engineering, databases, computer architecture and artificial intelligence).
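To make the two equations above concrete, here is a standard textbook illustration of a Horn-clause program and query; the ancestor/parent relations are a stock example of the paradigm, not one taken from the FGCS project itself:

```latex
% A definite (Horn) clause has the form
%   A \leftarrow B_1 \land \dots \land B_n  (n \ge 0).
% Program = set of axioms:
\begin{align*}
\mathit{ancestor}(X,Y) &\leftarrow \mathit{parent}(X,Y)\\
\mathit{ancestor}(X,Z) &\leftarrow \mathit{parent}(X,Y) \land \mathit{ancestor}(Y,Z)\\
\mathit{parent}(\mathit{ann},\mathit{bob}) &\leftarrow \\
\mathit{parent}(\mathit{bob},\mathit{carol}) &\leftarrow
\end{align*}
% Computation = proof of an existential statement (the query):
\[ \exists Z .\; \mathit{ancestor}(\mathit{ann},Z) \]
```

The proof is constructive: it instantiates Z (here to bob, or to carol via the recursive clause), and that binding is the output of the computation.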
It seemed that logic programming was a key missing connection between knowledge engineering and parallel computer architectures. The project imagined a parallel processing computer running on top of large databases (as opposed to a traditional filesystem), using a logic programming language to define and access the data. They envisioned building a prototype machine with performance between 100M and 1G LIPS, where a LIPS is a logical inference per second. At the time typical workstation machines were capable of about 100k LIPS, so the goal amounted to a speedup of roughly a thousand to ten thousand times. They proposed to build this machine over a ten-year period: 3 years for initial R&D, 4 years for building various subsystems, and a final 3 years to complete a working prototype system. In 1982 the government decided to go ahead with the project, and established the Institute for New Generation Computer Technology (ICOT) through joint investment with various Japanese computer companies. In the same year, during a visit to ICOT, Ehud Shapiro invented Concurrent Prolog, a novel programming language that integrated logic programming and concurrent programming. Concurrent Prolog is a logic programming language designed for concurrent programming and parallel execution. It is a process-oriented language, which embodies dataflow synchronization and guarded-command indeterminacy as its basic control mechanisms. Shapiro described the language in ICOT Technical Report 003, which presented a Concurrent Prolog interpreter written in Prolog. Shapiro's work on Concurrent Prolog inspired a change in the direction of the FGCS from focusing on parallel implementation of Prolog to focusing on concurrent logic programming as the software foundation for the project. It also inspired the concurrent logic programming language Guarded Horn Clauses (GHC) by Ueda, which was the basis of KL1, the programming language that was finally designed and implemented by the FGCS project as its core programming language. Implementation The belief, generated by the Fifth-Generation project, that parallel computing was the future of all performance gains produced a wave of apprehension in the computer field. After having influenced the consumer electronics field during the 1970s and the automotive world during the 1980s, the Japanese had developed a strong reputation. Soon parallel projects were set up: in the US as the Strategic Computing Initiative and the Microelectronics and Computer Technology Corporation (MCC), in the UK as Alvey, and in Europe as the European Strategic Program on Research in Information Technology (ESPRIT), as well as the European Computer-Industry Research Centre (ECRC) in Munich, a collaboration between ICL in Britain, Bull in France, and Siemens in Germany. Five running Parallel Inference Machines (PIM) were eventually produced: PIM/m, PIM/p, PIM/i, PIM/k, and PIM/c. The project also produced applications to run on these systems, such as the parallel database management system Kappa, the legal reasoning system HELIC-II, and the automated theorem prover MGTP, as well as applications in bioinformatics. Failure The FGCS project did not meet with commercial success, for reasons similar to those of the Lisp machine companies and Thinking Machines. The highly parallel computer architecture was eventually surpassed in speed by less specialized hardware (for example, Sun workstations and Intel x86 machines). The project did produce a new generation of promising Japanese researchers.
But after the FGCS project, MITI stopped funding large-scale computer research projects, and the research momentum the project had developed dissipated. However, MITI/ICOT did embark on a Sixth Generation Project in the 1990s. A primary problem was the choice of concurrent logic programming as the bridge between the parallel computer architecture and the use of logic as a knowledge representation and problem-solving language for AI applications. This never happened cleanly; a number of languages were developed, all with their own limitations. In particular, the committed-choice feature of concurrent constraint logic programming interfered with the logical semantics of the languages. Another problem was that existing CPU performance quickly pushed through the barriers that experts had perceived in the 1980s, and the value of parallel computing dropped to the point where it was for some time used only in niche situations. Although a number of workstations of increasing capacity were designed and built over the project's lifespan, they generally found themselves soon outperformed by "off the shelf" units available commercially. The project also failed to maintain continuous growth. During its lifespan, GUIs became mainstream in computers; the internet enabled locally stored databases to become distributed; and even simple research projects provided better real-world results in data mining. Moreover, the project found that the promises of logic programming were largely negated by the use of committed choice. At the end of the ten-year period, the project had spent over ¥50 billion (about US$400 million at 1992 exchange rates) and was terminated without having met its goals. The workstations had no appeal in a market where general-purpose systems could now replace and outperform them. This parallels the Lisp machine market, where rule-based systems such as CLIPS could run on general-purpose computers, making expensive Lisp machines unnecessary. Ahead of its time Although it did not produce much success of its own, many of the approaches seen in the Fifth-Generation project, such as logic programming over massive knowledge bases, are now being re-interpreted in current technologies. For example, the Web Ontology Language (OWL) employs several layers of logic-based knowledge representation systems. It appears, however, that these new technologies reinvented rather than leveraged approaches investigated under the Fifth-Generation initiative. In the early 21st century, many flavors of parallel computing began to proliferate, including multi-core architectures at the low end and massively parallel processing at the high end. When clock speeds of CPUs began to move into the 3–5 GHz range, CPU power dissipation and other problems became more important. The ability of industry to produce ever-faster single-CPU systems (linked to Moore's law about the periodic doubling of transistor counts) began to be threatened. Ordinary consumer machines and game consoles began to have parallel processors like the Intel Core, AMD K10, and Cell. Graphics card companies like Nvidia and AMD began introducing large parallel systems like CUDA and OpenCL. Again, however, it is not clear that these developments were facilitated in any significant way by the Fifth-Generation project. In summary, it is argued that the Fifth-Generation project was revolutionary but still had its failings.
References Classes of computers History of artificial intelligence MITI projects Parallel computing Research projects Supercomputing in Japan
201171
https://en.wikipedia.org/wiki/Archon%3A%20The%20Light%20and%20the%20Dark
Archon: The Light and the Dark
Archon: The Light and the Dark is a 1983 video game developed by Free Fall Associates and one of the first five games published by Electronic Arts. It is superficially similar to chess, in that it takes place on a board with alternating black and white squares; however, instead of fixed rules for landing on another player's piece, an arcade-style fight takes place to determine the victor, and each piece has different combat abilities. The health of a player's piece is enhanced when it lands on a square of its own color. Archon was originally written for the Atari 8-bit family and then ported to the Apple II, Commodore 64, Amstrad CPC, ZX Spectrum, Amiga, IBM PC, Macintosh, PC-88, and NES. It was designed by Paul Reiche III (who also created the graphics for the game) and Jon Freeman, and programmed by Anne Westfall. A sequel, Archon II: Adept, was released in 1984. Gameplay The goal of the game is to occupy the five power points located on the board, to eliminate all the opposing pieces, or to eliminate all but one remaining imprisoned piece of the opponent's. Accomplishing any one of these goals results in a win. While the board is visually similar to a chessboard, when one piece lands on the same space as an opposing piece, the removal of the targeted piece is not automatic. Instead, the two pieces are placed into a full-screen "combat arena" and must battle (action-style, with the players operating the combatants) to determine who takes the square. A stronger piece will generally defeat a weaker piece, but not always, and a fight can result in both pieces being eliminated. This uncertainty adds a level of complexity to the game. Different pieces have different abilities in the combat phase, including movement, lifespan, and weapon. The weapons vary by range, speed, rate of fire, and power. For example, the pawn (represented by knights on the "light" side and goblins on the "dark" side) attacks quickly but has very little strength; its weapon, a sword or club, has limited reach and power. A dragon is stronger and can attack from a distance, while a golem moves slowly and fires a slow but powerful boulder. A piece's powers are affected by the square on which the battle takes place, with each player having an advantage on squares of their own color. Many squares on the board oscillate between light and dark, making them dangerous to hold over time. The three middle power points are on oscillating squares. Some pieces have special abilities. The phoenix can turn into a ball of fire, both damaging the enemy and shielding itself from enemy attacks. The shapeshifter assumes the shape and abilities of whatever piece it is up against. MikroBitti magazine once wrote that the phoenix and the shapeshifter facing each other usually makes for the most boring battle in the entire game; both combatants' capabilities are simultaneously offensive and defensive, and they tend to use them whenever they meet, so both rarely take damage. Each side also has a spellcaster piece as its leader: the sorceress for the dark side and the wizard for the light side. The sorceress and the wizard can cast seven different spells, and each spell may be used only once per game by each spellcaster. The computer opponent slowly adapts over time to help players defeat it. The game is usually won when one side destroys all the opposing pieces or occupies all five power points.
More rarely, a side may also win by imprisoning its opponent's last remaining piece. If each side has but a single piece, and the two pieces destroy each other in combat, then the game ends in a tie. Reception Archon was very well received. Softline praised the game's originality, stating, "If there is any computer game that even slightly resembles Archon, we haven't seen it". The magazine concluded that "it's an announcement that Free Fall does games. And it does them well". Video magazine reviewed the game in its "Arcade Alley" column, where reviewers described it as "truly a landmark in the development of computerized strategy games" and suggested that "no review could possibly do more than hint at [Archon] manifold excellence". Computer Gaming World called Archon "a very good game, with lots of care put into its development. I recommend it highly." The magazine said of the Amiga versions, "if you are interested in a challenging strategy game, I recommend both Archon and Adept." Orson Scott Card reviewed the game for Compute! in 1983. He gave Archon and two other EA games, M.U.L.E. and Worms?, complimentary reviews, writing that "they are original; they do what they set out to do very, very well; they allow the player to take part in the creativity; they do things that only computers can do". Allen Doum reviewed the game for Computer Gaming World, and stated that "Archon is a good first step towards what will be an exciting new class of game. Its play, despite the lack of depth or variation that will be possible, is fast moving." Leo LaPorte of Hi-Res, a tournament chess player, unfavorably compared the complexity of its rules to that of chess and Go, but concluded that Archon was "a very good game" that "struck a fine balance between a strategy game and an arcade shoot-'em-up". BYTE's reviewer called Archon one of the best computer games he had ever played, stating it was "rewarding and varied enough to be played again and again." The Addison-Wesley Book of Atari Software 1984 gave the game an overall A+ rating, describing it as "one of the most creative and original games that has come along in several years ... It has great graphics, and will give a lifetime of pleasure." In 1984 Softline readers named Archon the most popular Atari program of 1983. It was awarded "1984 Most Innovative Video Game/Computer Game" at the 5th annual Arkie Awards, where judges noted that "few games make better use of a computer's special abilities than Archon". In 1996, Computer Gaming World ranked Archon as the 20th best game of all time. It was also ranked as the 50th top game by IGN in 2003, which called it a "perfect marriage of strategy and action". The reviewer commented, "Whether on the computer or NES, Archon is an intense, engaging match of wits and reflexes, and boasts some of the coolest battles in gaming history." In 2004, Archon was inducted into GameSpot's list of the greatest games of all time. They also highlighted it among their ten games that should be remade. In 2005, IGN ranked it again as their 77th greatest game. Legacy Free Fall developed a sequel for the same platforms, Archon II: Adept, released by Electronic Arts in 1984. Ten years later an enhanced version of the original was published by Strategic Simulations as Archon Ultra. The original game was rewritten for Palm OS in 2000 by Carsten Magerkurth, who contacted members of Free Fall Associates for feedback on creating an improved version released in 2003.
Archon: Evolution used code from the original 8-bit version with the blessing of Jon Freeman. In 2008, React Games acquired the license from Free Fall to develop the Archon title across multiple platforms. It released an iPhone version in June 2009. A follow-up title, Archon: Conquest, was released in October 2009 for the iPhone. Archon: Classic for Windows was released in May 2010 with gameplay elements not in the original game. Archon is a notable influence on Reiche's game Star Control, with a similar combination of turn-based strategy and real-time combat. An updated version of the game has been announced for release exclusively for the Intellivision Amico. See also Mortal Kombat: Deception, which has a Chess Kombat minigame that is very similar, with almost the same rules. The Unholy War, a 1998 PlayStation game with a similar structure. Wrath Unleashed, a 2004 PlayStation 2 and Xbox game with a similar structure. References External links Archon at c64sets.com - images of the package and manual. A reverse engineering of Archon 1983 video games Amiga games Amstrad CPC games Apple II games Ariolasoft games Atari 8-bit family games Commodore 64 games DOS games Electronic Arts franchises Electronic Arts games Fighting games FM-7 games Classic Mac OS games NEC PC-8801 games NEC PC-9801 games Turn-based strategy video games Sharp MZ games Sharp X1 games ZX Spectrum games Video games developed in the United States Board game-style video games
52685386
https://en.wikipedia.org/wiki/HiOS
HiOS
HiOS is an Android-based operating system developed by Hong Kong mobile phone manufacturer Tecno Mobile, a subsidiary of Transsion Holdings, exclusively for its smartphones. HiOS allows for a wide range of user customization without requiring the device to be rooted. The operating system is also bundled with utility applications that allow users to free up memory, freeze applications, and limit data access by applications, among other things. HiOS comes with features such as Launcher, Private Safe, Split Screen and Lockscreen Notification. History In April 2016, Tecno Mobile released HiOS 1.0, based on Android 6.0 "Marshmallow", featuring a launcher and micro-intelligence. HiOS first launched on the Tecno Boom J8. In March 2017, HiOS 2.0 was released based on Android 7.0 "Nougat", launching on the Camon CX and L9 Plus. It also came with a launcher consisting of Hi Search, Hi Theme, Hi Manager and split screen. In October 2017, HiOS 3.0 was released, also based on Android 7.0 but with improved user interfaces, launching on the Phantom 8 and Camon CM. HiOS 3.0 also came with Boomplay Music and Phoenix Browser as preinstalled applications, along with T-Point, Micro Intelligence and Eye Care. In November 2018, HiOS 4.1 was released based on Android 8.1 "Oreo", launching on the Camon 11 and Camon 11 Pro, featuring ZeroScreen, one-hand mode and dual apps. HiOS 4.1 also came with gesture navigation and Face ID. In April 2019, HiOS 5.0 was released based on Android 9.0 "Pie", launching on the Spark 3 and Phantom 9, featuring Smart Panel, AI Read Mode and Intelligent Voice Broadcast. In September 2019, HiOS 5.5 was released, also based on Android 9.0 "Pie", launching on the Camon 12 and Spark 4, featuring AR Virtual Canvas, Closed Eye Detection, Gesture Call Picker and Fingerprint Reset Password. In February 2020, HiOS 6.0 was released based on Android 10, launching on the Camon 15; a beta version had been released for the Tecno Spark 3 Pro on 1 December 2019, featuring a system-wide dark theme, Social Turbo and a game mode. In September 2020, HiOS 7.0 was released based on Android 10, launching on the Camon 16 series. In May 2021, HiOS 7.6 was released based on Android 11, launching on the Camon 17 series. In August 2021, HiOS 7.6 was released as an upgrade based on Android 11 for the Camon 16 series. See also XOS (operating system) References Mobile operating systems
304361
https://en.wikipedia.org/wiki/Boot%20disk
Boot disk
A boot disk is a removable digital data storage medium from which a computer can load and run (boot) an operating system or utility program. The computer must have a built-in program which will load and execute a program from a boot disk meeting certain standards. While almost all modern computers can boot from a hard drive containing the operating system and other software, such drives would not normally be called boot disks (because they are not removable media). CD-ROMs are the most common form of media used, but other media, such as magnetic or paper tape drives, ZIP drives, and more recently USB flash drives, can be used. The computer's BIOS must support booting from the device in question. One can make one's own boot disk (typically done to prepare for when the system won't start properly). Uses Boot disks are used for: Operating system installation Data recovery Data purging Hardware or software troubleshooting BIOS flashing Customizing an operating environment Software demonstration Running a temporary operating environment, such as when using a Live USB drive Administrative access in case of a lost password, which is possible with an appropriate boot disk on some operating systems Games (e.g. for Amiga home computers, or running MS-DOS video games on modern computers by using a bootable MS-DOS or FreeDOS USB flash drive) Process The term boot comes from the idea of lifting oneself by one's own bootstraps: the computer contains a tiny program (bootstrap loader) which will load and run a program found on a boot device. This program may itself be a small program designed to load a larger and more capable program, i.e., the full operating system. To enable booting without the requirement either for a mass storage device or to write to the boot medium, it is usual for the boot program to use some system RAM as a RAM disk for temporary file storage. As an example, any computer compatible with the IBM PC is able with built-in software to load the contents of the first 512 bytes of a floppy and to execute it if it is a viable program; boot floppies have a very simple loader program in these bytes (a minimal sketch of the corresponding signature check appears below). The process is vulnerable to abuse; data floppies could have a virus written to their first sector which silently infects the host computer if it is switched on with the disk in the drive. Media Bootable floppy disks ("boot floppies") for PCs usually contain DOS or miniature versions of Linux. The most commonly available floppy disk can hold only 1.4 MB of data in its standard format, making it impractical for loading large operating systems. The use of boot floppies is in decline, due to the availability of other higher-capacity options, such as CD-ROMs or USB flash drives. Device selection A modern PC is configured to attempt to boot from various devices in a certain order. If a computer is not booting from the device desired, such as the floppy drive, the user may have to enter the BIOS setup function by pressing a special key when the computer is first turned on (commonly Delete, F1, F2, or Esc), and then changing the boot order. More recent BIOSes permit the interruption of the final stage of the boot process and invoke the Boot Menu by pressing a function key (usually F11 or F12). This results in a list of bootable devices being presented, from which a selection may be made. Apple silicon Macs display the Boot Menu when the power button is pressed and held; older Apple Macintosh computers with Intel processors display the Boot Menu if the user holds down the Option (Alt) key while the machine is starting.
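The 512-byte first sector mentioned under "Process" conventionally ends with the signature bytes 0x55 0xAA at offsets 510-511, which PC firmware checks before executing the sector. A minimal sketch of that check in Python; the image file name is a hypothetical example:

```python
import sys

SECTOR_SIZE = 512

def looks_bootable(image_path: str) -> bool:
    """Return True if the image's first sector ends with the 0x55AA boot signature."""
    with open(image_path, "rb") as f:
        sector = f.read(SECTOR_SIZE)
    # Firmware loads these 512 bytes and verifies the final two bytes
    # before jumping to the loader code at the start of the sector.
    return len(sector) == SECTOR_SIZE and sector[510:512] == b"\x55\xaa"

if __name__ == "__main__":
    # e.g.  python check_boot.py floppy.img
    path = sys.argv[1] if len(sys.argv) > 1 else "floppy.img"
    print(path, "has a boot signature" if looks_bootable(path) else "has no boot signature")
```

Note that the signature alone does not guarantee the sector contains a working loader, only that the medium is marked as bootable.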
Requirements Different operating systems use different boot disk contents. All boot disks must be compatible with the computer they are designed for. MS-DOS/PC DOS/DR-DOS A valid boot sector in the form of a volume boot record (VBR) IO.SYS or IBMBIO.COM MSDOS.SYS or IBMDOS.COM COMMAND.COM All files must be for the same version of the operating system. Complete boot disks can be prepared in one operation by an installed operating system; details vary. FreeDOS A valid boot sector on the disk COMMAND.COM KERNEL.SYS Linux A bootloader such as SYSLINUX or GRUB The Linux kernel An initial RAM disk (initrd) Windows Preinstallation Environment Windows Boot Manager BOOT.WIM See also Darik's Boot and Nuke Data recovery El Torito (CD-ROM standard) Live CD Protected Area Run Time Interface Extension Services (PARTIES) Self-loader References External links reboot.pro - Community forum dedicated to boot disks Boot Disk information, sources, and tools Bootable media
18457137
https://en.wikipedia.org/wiki/Personal%20computer
Personal computer
A personal computer (PC) is a multi-purpose microcomputer whose size, capabilities, and price make it feasible for individual use. Personal computers are intended to be operated directly by an end user, rather than by a computer expert or technician. Unlike large, costly minicomputers and mainframes, personal computers are not time-shared by many people at the same time. Primarily in the late 1970s and 1980s, the term home computer was also used. Institutional or corporate computer owners in the 1960s had to write their own programs to do any useful work with the machines. While personal computer users may develop their own applications, usually these systems run commercial software, free-of-charge software ("freeware"), which is most often proprietary, or free and open-source software, which is provided in "ready-to-run", or binary, form. Software for personal computers is typically developed and distributed independently from the hardware or operating system manufacturers. Many personal computer users no longer need to write their own programs to make any use of a personal computer, although end-user programming is still feasible. This contrasts with mobile systems, where software is often available only through a manufacturer-supported channel, and end-user program development may be discouraged by lack of support by the manufacturer. Since the early 1990s, Microsoft operating systems and Intel hardware have dominated much of the personal computer market, first with MS-DOS and then with Microsoft Windows. Alternatives to Microsoft's Windows operating systems occupy a minority share of the industry. These include Apple's macOS and free and open-source Unix-like operating systems, such as Linux. The advent of personal computers and the concurrent Digital Revolution have significantly affected the lives of people in all countries. Terminology The term "PC" is an initialism for "personal computer". While the IBM Personal Computer incorporated the designation in its model name, the term originally described personal computers of any brand. In some contexts, "PC" is used to contrast with "Mac", an Apple Macintosh computer. Since none of these Apple products were mainframes or time-sharing systems, they were all "personal computers" and not "PC" (brand) computers. In 1995, a CBS segment on the growing popularity of the PC reported "For many newcomers PC stands for Pain and Confusion". History In the history of computing, early experimental machines could be operated by a single attendant. For example, ENIAC, which became operational in 1946, could be run by a single, albeit highly trained, person. This mode pre-dated the batch programming and time-sharing modes, in which multiple users connected through terminals to mainframe computers. Computers intended for laboratory, instrumentation, or engineering purposes were built, and could be operated by one person in an interactive fashion. Examples include such systems as the Bendix G15 and LGP-30 of 1956, and the Soviet MIR series of computers developed from 1965 to 1969. By the early 1970s, people in academic or research institutions had the opportunity for single-person use of a computer system in interactive mode for extended durations, although these systems would still have been too expensive to be owned by a single person. The personal computer was made possible by major advances in semiconductor technology.
In 1959, the silicon integrated circuit (IC) chip was developed by Robert Noyce at Fairchild Semiconductor, and the metal-oxide-semiconductor (MOS) transistor was developed by Mohamed Atalla and Dawon Kahng at Bell Labs. The MOS integrated circuit was commercialized by RCA in 1964, and then the silicon-gate MOS integrated circuit was developed by Federico Faggin at Fairchild in 1968. Faggin later used silicon-gate MOS technology to develop the first single-chip microprocessor, the Intel 4004, in 1971. The first microcomputers, based on microprocessors, were developed during the early 1970s. Widespread commercial availability of microprocessors, from the mid-1970s onwards, made computers cheap enough for small businesses and individuals to own. In what was later to be called the Mother of All Demos, SRI researcher Douglas Engelbart in 1968 gave a preview of features that would later become staples of personal computers: e-mail, hypertext, word processing, video conferencing, and the mouse. The demonstration required technical support staff and a mainframe time-sharing computer that were far too costly for individual business use at the time. Early personal computers, generally called microcomputers, were often sold in kit form and in limited volumes, and were of interest mostly to hobbyists and technicians. Minimal programming was done with toggle switches to enter instructions, and output was provided by front panel lamps. Practical use required adding peripherals such as keyboards, computer displays, disk drives, and printers. Micral N was the earliest commercial, non-kit microcomputer based on a microprocessor, the Intel 8008. It was built starting in 1972, and a few hundred units were sold. It had been preceded by the Datapoint 2200 in 1970, for which the Intel 8008 had been commissioned, though not accepted for use. The CPU design implemented in the Datapoint 2200 became the basis for the x86 architecture used in the original IBM PC and its descendants. In 1973, the IBM Los Gatos Scientific Center developed a portable computer prototype called SCAMP (Special Computer APL Machine Portable) based on the IBM PALM processor, with a Philips compact cassette drive, small CRT, and full-function keyboard. SCAMP emulated an IBM 1130 minicomputer in order to run APL/1130. In 1973, APL was generally available only on mainframe computers, and most desktop-sized microcomputers such as the Wang 2200 or HP 9800 offered only BASIC. Because SCAMP was the first to emulate APL/1130 performance on a portable, single-user computer, PC Magazine in 1983 designated SCAMP a "revolutionary concept" and "the world's first personal computer". This seminal, single-user portable computer now resides in the Smithsonian Institution, Washington, D.C. Successful demonstrations of the 1973 SCAMP prototype led to the IBM 5100 portable microcomputer, launched in 1975 with the ability to be programmed in both APL and BASIC for engineers, analysts, statisticians, and other business problem-solvers. In the late 1960s such a machine would have been nearly as large as two desks and would have weighed about half a ton. Another desktop portable APL machine, the MCM/70, was demonstrated in 1973 and shipped in 1974. It used the Intel 8008 processor. A seminal step in personal computing was the 1973 Xerox Alto, developed at Xerox's Palo Alto Research Center (PARC). It had a graphical user interface (GUI) which later served as inspiration for Apple's Macintosh and Microsoft's Windows operating system.
The Alto was a demonstration project, never commercialized, as its parts were too expensive for a mass-market product. Also in 1973, Hewlett-Packard introduced fully BASIC-programmable microcomputers that fit entirely on top of a desk, including a keyboard, a small one-line display, and a printer. The Wang 2200 microcomputer of 1973 had a full-size cathode ray tube (CRT) and cassette tape storage. These were generally expensive specialized computers sold for business or scientific uses. 1974 saw the introduction of what is considered by many to be the first true "personal computer", the Altair 8800 created by Micro Instrumentation and Telemetry Systems (MITS). Based on the 8-bit Intel 8080 microprocessor, the Altair is widely recognized as the spark that ignited the microcomputer revolution, as the first commercially successful personal computer. The computer bus designed for the Altair was to become a de facto standard in the form of the S-100 bus, and the first programming language for the machine was Microsoft's founding product, Altair BASIC. In 1976, Steve Jobs and Steve Wozniak sold the Apple I computer circuit board, which was fully prepared and contained about 30 chips. The Apple I computer differed from the other kit-style hobby computers of the era. At the request of Paul Terrell, owner of the Byte Shop, Jobs and Wozniak were given their first purchase order, for 50 Apple I computers, only if the computers were assembled and tested rather than sold as kits. Terrell wanted to have computers to sell to a wide range of users, not just experienced electronics hobbyists who had the soldering skills to assemble a computer kit. The Apple I as delivered was still technically a kit computer, as it did not have a power supply, case, or keyboard when it was delivered to the Byte Shop. The first successfully mass-marketed personal computer to be announced was the Commodore PET, revealed in January 1977; however, it was back-ordered and not available until later that year. Three months later (April), the Apple II (usually referred to as the "Apple") was announced, with the first units shipped on 10 June 1977, and the TRS-80 from Tandy Corporation / Tandy Radio Shack followed in August 1977; it sold over 100,000 units during its lifetime. Together, these three machines were referred to as the "1977 trinity". Mass-market, ready-assembled computers had arrived, and allowed a wider range of people to use computers, focusing more on software applications and less on development of the processor hardware. In 1977 the Heath company introduced personal computer kits known as Heathkits, starting with the Heathkit H8, followed by the Heathkit H89 in late 1979. With the purchase of the Heathkit H8 one would obtain the chassis and CPU card to assemble oneself; additional hardware such as the H8-1 memory board, which contained 4 KB of RAM, could also be purchased in order to run software. The Heathkit H11 model was released in 1978 and was one of the first 16-bit personal computers; however, due to its high retail cost of US$1,295, it was discontinued in 1982. During the early 1980s, home computers were further developed for household use, with software for personal productivity, programming and games. They typically could be used with a television already in the home as the computer display, with low-detail blocky graphics and a limited color range, and text about 40 characters wide by 25 characters tall.
Sinclair Research, a UK company, produced the ZX Series: the ZX80 (1980), the ZX81 (1981), and the ZX Spectrum; the latter was introduced in 1982 and totaled 8 million units sold. It was followed by the Commodore 64, which totaled 17 million units sold, and the Amstrad CPC series (464–6128). In the same year, the NEC PC-98 was introduced, a very popular personal computer that sold more than 18 million units. Another famous personal computer, the revolutionary Amiga 1000, was unveiled by Commodore on 23 July 1985. The Amiga 1000 featured a multitasking, windowing operating system, color graphics with a 4096-color palette, stereo sound, a Motorola 68000 CPU, 256 KB RAM, and an 880 KB 3.5-inch disk drive, for US$1,295. Somewhat larger and more expensive systems were aimed at office and small business use. These often featured 80-column text displays but might not have had graphics or sound capabilities. These microprocessor-based systems were still less costly than time-shared mainframes or minicomputers. Workstations were characterized by high-performance processors and graphics displays, with large-capacity local disk storage, networking capability, and running under a multitasking operating system. Eventually, due to the influence of the IBM PC on the personal computer market, personal computers and home computers lost any technical distinction. Business computers acquired color graphics capability and sound, and users of home computers and game systems used the same processors and operating systems as office workers. Mass-market computers had graphics capabilities and memory comparable to dedicated workstations of a few years before. Even local area networking, originally a way to allow business computers to share expensive mass storage and peripherals, became a standard feature of personal computers used at home. IBM's first PC was introduced on 12 August 1981. In 1982, "The Computer" was named Machine of the Year by Time magazine. In the 2010s, several companies such as Hewlett-Packard and Sony sold off their PC and laptop divisions. As a result, the personal computer was declared dead several times during this period. An increasingly important set of uses for personal computers relied on the ability of the computer to communicate with other computer systems, allowing interchange of information. Experimental public access to a shared mainframe computer system was demonstrated as early as 1973 in the Community Memory project, but bulletin board systems and online service providers became more commonly available after 1978. Commercial Internet service providers emerged in the late 1980s, giving public access to the rapidly growing network. In 1991, the World Wide Web was made available for public use. The combination of powerful personal computers with high-resolution graphics and sound, the infrastructure provided by the Internet, and the standardization of access methods by Web browsers established the foundation for a significant fraction of modern life, from bus timetables to the unlimited distribution of free videos and online user-edited encyclopedias.
Types
Stationary
Workstation
A workstation is a high-end personal computer designed for technical, mathematical, or scientific applications. Intended primarily to be used by one person at a time, they are commonly connected to a local area network and run multi-user operating systems.
Workstations are used for tasks such as computer-aided design, drafting and modeling, computation-intensive scientific and engineering calculations, image processing, architectural modeling, and computer graphics for animation and motion picture visual effects.
Desktop computer
Before the widespread use of PCs, a computer that could fit on a desk was remarkably small, leading to the "desktop" nomenclature. More recently, the phrase usually indicates a particular style of computer case. Desktop computers come in a variety of styles ranging from large vertical tower cases to small models which can be tucked behind or rest directly beneath (and support) LCD monitors. Although the term "desktop" often refers to a computer with a vertically aligned tower case, which usually rests on the floor or underneath a desk, it also covers the horizontally aligned models designed to rest literally on top of a desk; both types qualify for the "desktop" label in most practical situations. Both styles of computer case hold the system's hardware components, such as the motherboard, processor chip, and other internal operating parts. Desktop computers have an external monitor with a display screen and an external keyboard, which are plugged into ports on the back of the computer case. Desktop computers are popular for home and business computing applications as they leave space on the desk for multiple monitors.
A gaming computer is a desktop computer that generally comprises a high-performance video card, processor and RAM, to improve the speed and responsiveness of demanding video games.
An all-in-one computer (also known as a single-unit PC) is a desktop computer that combines the monitor and processor within a single unit. A separate keyboard and mouse are standard input devices, with some monitors including touchscreen capability. The processor and other working components are typically reduced in size relative to standard desktops, located behind the monitor, and configured similarly to laptops.
The nettop computer was introduced by Intel in February 2008, characterized by low cost and lean functionality. These were intended to be used with an Internet connection to run Web browsers and Internet applications.
A home theater PC (HTPC) combines the functions of a personal computer and a digital video recorder. It is connected to a TV set or an appropriately sized computer display, and is often used as a digital photo viewer, music and video player, TV receiver, and digital video recorder. HTPCs are also referred to as media center systems or media servers. The goal is to combine many or all components of a home theater setup into one box. HTPCs can also connect to services providing on-demand movies and TV shows. HTPCs can be purchased pre-configured with the required hardware and software needed to add television programming to the PC, or can be assembled from components.
Keyboard computers are computers built into keyboards. Examples include the Commodore 64, MSX, Amstrad CPC, Atari ST and the ZX Spectrum.
Portable
The potential utility of portable computers was apparent early on. Alan Kay described the Dynabook in 1972, but no hardware was developed. The Xerox NoteTaker was produced in a very small experimental batch around 1978.
In 1975, the IBM 5100 could fit into a transport case, making it a portable computer, but it weighed about 50 pounds. Before the introduction of the IBM PC, portable computers consisting of a processor, display, disk drives and keyboard, in a suitcase-style portable housing, allowed users to bring a computer home from the office or to take notes in a classroom. Examples include the Osborne 1, the Kaypro, and the Commodore SX-64. These machines were AC-powered and included a small CRT display screen. The form factor was intended to allow these systems to be taken on board an airplane as carry-on baggage, though their high power demand meant that they could not be used in flight. The integrated CRT display made for a relatively heavy package, but these machines were more portable than their contemporary desktop equivalents. Some models had standard or optional connections to drive an external video monitor, allowing a larger screen or use with video projectors. IBM PC-compatible suitcase-format computers became available soon after the introduction of the PC, with the Compaq Portable being a leading example of the type. Later models included a hard drive to give roughly equivalent performance to contemporary desktop computers. The development of thin plasma display and LCD screens permitted a somewhat smaller form factor, called the "lunchbox" computer. The screen formed one side of the enclosure, with a detachable keyboard and one or two half-height floppy disk drives mounted facing the ends of the computer. Some variations included a battery, allowing operation away from AC outlets. Notebook computers such as the TRS-80 Model 100 and Epson HX-20 had roughly the plan dimensions of a sheet of typing paper (ANSI A or ISO A4). These machines had a keyboard with slightly reduced dimensions compared to a desktop system, and a fixed LCD display screen coplanar with the keyboard. These displays were usually small, with 8 to 16 lines of text and sometimes a line length of only 40 columns. However, these machines could operate for extended times on disposable or rechargeable batteries. Although they did not usually include internal disk drives, this form factor often included a modem for telephone communication and often had provisions for external cassette or disk storage. Later, clamshell-format laptop computers with similar small plan dimensions were also called "notebooks".
Laptop
A laptop computer is designed for portability, with a "clamshell" design in which the keyboard and computer components are on one panel and a hinged second panel contains a flat display screen. Closing the laptop protects the screen and keyboard during transportation. Laptops generally have a rechargeable battery, enhancing their portability. To save power, weight and space, laptop graphics chips are in many cases integrated into the CPU or chipset and use system RAM, resulting in reduced graphics performance compared to desktop machines, which more typically have a graphics card installed. For this reason, desktop computers are usually preferred over laptops for gaming purposes. Unlike desktop computers, only minor internal upgrades (such as memory and hard disk drive) are feasible, owing to the limited space and power available. Laptops have the same input and output ports as desktops, for connecting to external displays, mice, cameras, storage devices and keyboards. Laptops are also somewhat more expensive than desktops, as their miniaturized components are themselves expensive.
A desktop replacement computer is a portable computer that provides the full capabilities of a desktop computer. Such computers are currently large laptops. This class of computers usually includes more powerful components and a larger display than generally found in smaller portable computers, and may have limited battery capacity or no battery.
Netbooks, also called mini notebooks or subnotebooks, were a subgroup of laptops suited for general computing tasks and accessing web-based applications. Initially, the primary defining characteristics of netbooks were the lack of an optical disc drive, smaller size, and lower performance than full-size laptops. By mid-2009 netbooks had been offered to users "free of charge", with an extended service contract purchase of a cellular data plan. Ultrabooks and Chromebooks have since filled the gap left by netbooks. Unlike the generic netbook name, Ultrabook and Chromebook are specifications defined by Intel and Google, respectively.
Tablet
A tablet uses a touchscreen display, which can be controlled using either a stylus pen or a finger. Some tablets may use a "hybrid" or "convertible" design, offering a keyboard that can either be removed as an attachment, or a screen that can be rotated and folded directly over the top of the keyboard. Some tablets may use a desktop-PC operating system such as Windows or Linux, or may run an operating system designed primarily for tablets. Many tablet computers have USB ports, to which a keyboard or mouse can be connected.
Smartphone
Smartphones are often similar to tablet computers, the difference being that smartphones always have cellular integration. They are generally smaller than tablets, and may not have a slate form factor.
Ultra-mobile PC
The ultra-mobile PC (UMPC) is a small tablet computer. It was developed by Microsoft, Intel and Samsung, among others. Current UMPCs typically feature the Windows XP, Windows Vista, Windows 7, or Linux operating system, and low-voltage Intel Atom or VIA C7-M processors.
Pocket PC
A Pocket PC is a hardware specification for a handheld-sized computer (personal digital assistant, PDA) that runs the Microsoft Windows Mobile operating system. It may have the capability to run an alternative operating system like NetBSD or Linux. Pocket PCs have many of the capabilities of desktop PCs. Numerous applications are available for handhelds adhering to the Microsoft Pocket PC specification, many of which are freeware. Microsoft-compliant Pocket PCs can also be used with many other add-ons like GPS receivers, barcode readers, RFID readers and cameras. In 2007, with the release of Windows Mobile 6, Microsoft dropped the name Pocket PC in favor of a new naming scheme: devices without an integrated phone are called Windows Mobile Classic instead of Pocket PC, while devices with an integrated phone and a touch screen are called Windows Mobile Professional.
Palmtop and handheld computers
Palmtop PCs were miniature pocket-sized computers running DOS that first appeared in the late 1980s, typically in a clamshell form factor with a keyboard. Non-x86 devices were often called palmtop computers, an example being the Psion Series 3. Microsoft later released a hardware specification called Handheld PC for devices running the Windows CE operating system.
Hardware
Computer hardware is a comprehensive term for all physical and tangible parts of a computer, as distinguished from the data it contains or operates on, and the software that provides instructions for the hardware to accomplish tasks. Some sub-systems of a personal computer may contain processors that run a fixed program, or firmware, such as a keyboard controller. Firmware usually is not changed by the end user of the personal computer. Most 2010s-era computers require users only to plug in the power supply, monitor, and other cables. A typical desktop computer consists of a computer case (or "tower"), a metal chassis that holds the power supply, motherboard, hard disk drive, and often an optical disc drive. Most towers have empty space where users can add additional components. External devices such as a computer monitor or visual display unit, keyboard, and a pointing device (mouse) are usually found in a personal computer.
The motherboard connects all processor, memory and peripheral devices together. The RAM, graphics card and processor are in most cases mounted directly onto the motherboard. The central processing unit (microprocessor chip) plugs into a CPU socket, while the RAM modules plug into corresponding RAM sockets. Some motherboards have the video display adapter, sound and other peripherals integrated onto the motherboard, while others use expansion slots for graphics cards, network cards, or other I/O devices. The graphics card or sound card may employ a breakout box to keep the analog parts away from the electromagnetic radiation inside the computer case. Disk drives, which provide mass storage, are connected to the motherboard with one cable and to the power supply through another cable. Usually, disk drives are mounted in the same case as the motherboard; expansion chassis are also made for additional disk storage. For large amounts of data, a tape drive can be used, or extra hard disks can be put together in an external case. The keyboard and the mouse are external devices plugged into the computer through connectors on an I/O panel on the back of the computer case. The monitor is also connected to the input/output (I/O) panel, either through an onboard port on the motherboard or a port on the graphics card. Capabilities of the personal computer's hardware can sometimes be extended by the addition of expansion cards connected via an expansion bus. Standard peripheral buses often used for adding expansion cards in personal computers include PCI, PCI Express (PCIe), and AGP (a high-speed PCI bus dedicated to graphics adapters, found in older computers). Most modern personal computers have multiple physical PCI Express expansion slots, with some having PCI slots as well.
A peripheral is "a device connected to a computer to provide communication (such as input and output) or auxiliary functions (such as additional storage)". Peripherals generally connect to the computer through the use of USB ports or inputs located on the I/O panel. USB flash drives provide portable storage using flash memory, which allows users to access the files stored on the drive on any computer. Memory cards also provide portable storage and are commonly used in other electronics such as mobile phones and digital cameras; the information stored on these cards can be accessed using a memory card reader to transfer data between devices.
Webcams, which are either built into computer hardware or connected via USB, are video cameras that record video in real time to be either saved to the computer or streamed elsewhere over the Internet. Game controllers can be plugged in via USB and can be used as an input device for video games, as an alternative to using a keyboard and mouse. Headphones and speakers can be connected via USB or through an auxiliary port (found on the I/O panel) and allow users to listen to audio accessed on their computer; speakers may also require an additional power source to operate. Microphones can be connected through an audio input port on the I/O panel and allow the computer to convert sound into an electrical signal to be used or transmitted by the computer.
Software
Computer software is any kind of computer program, procedure, or documentation that performs some task on a computer system. The term includes application software such as word processors that perform productive tasks for users, system software such as operating systems that interface with computer hardware to provide the necessary services for application software, and middleware that controls and co-ordinates distributed systems. Software applications are common for word processing, Internet browsing, Internet faxing, e-mail and other digital messaging, multimedia playback, playing of computer games, and computer programming. The user may have significant knowledge of the operating environment and application programs, but is not necessarily interested in programming, nor even able to write programs for the computer. Therefore, most software written primarily for personal computers tends to be designed with simplicity of use, or "user-friendliness", in mind. However, the software industry continuously provides a wide range of new products for use in personal computers, targeted at both the expert and the non-expert user.
Operating system
An operating system (OS) manages computer resources and provides programmers with an interface used to access those resources. An operating system processes system data and user input, and responds by allocating and managing tasks and internal system resources as a service to users and programs of the system. An operating system performs basic tasks such as controlling and allocating memory, prioritizing system requests, controlling input and output devices, facilitating computer networking, and managing files. Common contemporary desktop operating systems are Microsoft Windows, macOS, Linux, Solaris and FreeBSD. Windows, macOS, and Linux all have server and personal variants. With the exception of Microsoft Windows, the designs of each of them were inspired by or directly inherited from the Unix operating system. Early personal computers used operating systems that supported command-line interaction, using an alphanumeric display and keyboard. The user had to remember a large range of commands to, for example, open a file for editing or to move text from one place to another. Starting in the early 1960s, the advantages of a graphical user interface began to be explored, but widespread adoption required lower-cost graphical display equipment. By 1984, mass-market computer systems using graphical user interfaces were available; by the turn of the 21st century, text-mode operating systems were no longer a significant fraction of the personal computer market.
Applications
Generally, a computer user uses application software to carry out a specific task.
System software supports applications and provides common services such as memory management, network connectivity and device drivers, all of which may be used by applications but are not directly of interest to the end user. A simplified analogy in the world of hardware would be the relationship of an electric light bulb (an application) to an electric power generation plant (a system): the power plant merely generates electricity, not itself of any real use until harnessed to an application like the electric light that performs a service that benefits the user. Typical examples of software applications are word processors, spreadsheets, and media players. Multiple applications bundled together as a package are sometimes referred to as an application suite. Microsoft Office and LibreOffice, which bundle together a word processor, a spreadsheet, and several other discrete applications, are typical examples. The separate applications in a suite usually have a user interface that has some commonality, making it easier for the user to learn and use each application. Often, they may have some capability to interact with each other in ways beneficial to the user; for example, a spreadsheet might be able to be embedded in a word processor document even though it had been created in the separate spreadsheet application. End-user development tailors systems to meet the user's specific needs. User-written software includes spreadsheet templates, word processor macros, scientific simulations, graphics and animation scripts; even email filters are a kind of user software. Users create this software themselves and often overlook how important it is.
Gaming
PC gaming is popular in the high-end PC market. According to an April 2018 market analysis by Newzoo, PC gaming has fallen behind both console and mobile gaming in terms of market share, sitting at 24% of the entire market. The market for PC gaming still continues to grow and was expected to generate $32.3 billion in revenue in 2021. PC gaming is at the forefront of competitive gaming, known as esports, with games such as Overwatch and Counter-Strike: Global Offensive leading an industry that was expected to surpass a billion dollars in revenue in 2019.
Sales
Market share
In 2001, 125 million personal computers were shipped, in comparison to 48,000 in 1977. More than 500 million personal computers were in use in 2002, and one billion personal computers had been sold worldwide from the mid-1970s up to this time. Of the latter figure, 75% were professional or work related, while the rest were sold for personal or home use. About 81.5% of personal computers shipped had been desktop computers, 16.4% laptops and 2.1% servers. The United States had received 38.8% (394 million) of the computers shipped, Europe 25%, and 11.7% had gone to the Asia-Pacific region, the fastest-growing market as of 2002. The second billion was expected to be sold by 2008. Almost half of all households in Western Europe had a personal computer, and a computer could be found in 40% of homes in the United Kingdom, compared with only 13% in 1985. The global personal computer shipments were 350.9 million units in 2010, 308.3 million units in 2009 and 302.2 million units in 2008. The shipments were 264 million units in the year 2007, according to iSuppli, up 11.2% from 239 million in 2006. In 2004, the global shipments were 183 million units, an 11.6% increase over 2003. In 2003, 152.6 million computers were shipped, at an estimated value of $175 billion.
In 2002, 136.7 million PCs were shipped, at an estimated value of $175 billion. In 2000, 140.2 million personal computers were shipped, at an estimated value of $226 billion. Worldwide shipments of personal computers surpassed the 100-million mark in 1999, growing to 113.5 million units from 93.3 million units in 1998. In 1999, Asia had 14.1 million units shipped. As of June 2008, the number of personal computers in use worldwide hit one billion, while another billion was expected to be reached by 2014. Mature markets like the United States, Western Europe and Japan accounted for 58% of the worldwide installed PCs. The emerging markets were expected to double their installed PCs by 2012 and to take 70% of the second billion PCs. About 180 million computers (16% of the existing installed base) were expected to be replaced and 35 million to be dumped into landfill in 2008. The whole installed base grew 12% annually. Based on International Data Corporation (IDC) data for Q2 2011, for the first time China surpassed the US in PC shipments, with 18.5 million and 17.7 million units respectively. This trend reflects the rise of emerging markets as well as the relative stagnation of mature regions. In the developed world, there had been a vendor tradition of continually adding functions to maintain the high prices of personal computers. However, since the introduction of the One Laptop per Child foundation and its low-cost XO-1 laptop, the computing industry started to pursue low prices as well. Although introduced only one year earlier, 14 million netbooks were sold in 2008. Besides the regular computer manufacturers, companies making especially rugged versions of computers have sprung up, offering alternatives for people operating their machines in extreme weather or environments. In 2011, the consulting firm Deloitte predicted that smartphones and tablet computers would surpass PCs in sales (as has happened since 2012). As of 2013, worldwide sales of PCs had begun to fall as many consumers moved to tablets and smartphones. Sales of 90.3 million units in the 4th quarter of 2012 represented a 4.9% decline from sales in the 4th quarter of 2011. Global PC sales fell sharply in the first quarter of 2013, according to IDC data. The 14% year-over-year decline was the largest on record since the firm began tracking in 1994, and double what analysts had been expecting. The decline of Q2 2013 PC shipments marked the fifth straight quarter of falling sales. "This is horrific news for PCs," remarked an analyst. "It's all about mobile computing now. We have definitely reached the tipping point." Data from Gartner showed a similar decline for the same time period. China's Lenovo Group bucked the general trend, as strong sales to first-time buyers in the developing world allowed the company's sales to stay flat overall. Windows 8, which was designed to look similar to tablet/smartphone software, was cited as a contributing factor in the decline of new PC sales. "Unfortunately, it seems clear that the Windows 8 launch not only didn't provide a positive boost to the PC market, but appears to have slowed the market," said IDC Vice President Bob O'Donnell. In August 2013, Credit Suisse published research findings that attributed around 75% of the operating profit share of the PC industry to Microsoft (operating system) and Intel (semiconductors). According to IDC, in 2013 PC shipments dropped by 9.8%, the greatest drop ever, in line with consumers' trend toward mobile devices.
In the second quarter of 2018, PC sales grew for the first time since the first quarter of 2012. According to research firm Gartner, the growth came mainly from the business market, while the consumer market experienced decline.
Average selling price
Selling prices of personal computers steadily declined due to lower costs of production and manufacture, while the capabilities of computers increased. In 1975, an Altair kit sold for only around US$400, but required customers to solder components into circuit boards; peripherals required to interact with the system in alphanumeric form, instead of blinking lights, would add another $2,000, and the resultant system was of use only to hobbyists. At their introduction in 1981, the US$1,795 price of the Osborne 1 and its competitor Kaypro was considered an attractive price point; these systems had text-only displays and only floppy disks for storage. By 1982, Michael Dell observed that a personal computer system selling at retail for about US$3,000 was made of components that cost the dealer about $600; the typical gross margin on a computer unit was around $1,000. The total value of personal computer purchases in the US in 1983 was about $4 billion, comparable to total sales of pet food. By late 1998, the average selling price of personal computer systems in the United States had dropped below $1,000. For Microsoft Windows systems, the average selling price (ASP) showed a decline in 2008/2009, possibly due to low-cost netbooks, drawing $569 for desktop computers and $689 for laptops at U.S. retail in August 2008. In 2009, the ASP had fallen further, to $533 for desktops and $602 for notebooks by January, and to $540 and $560 in February. According to research firm NPD, the average selling price of all Windows portable PCs fell from $659 in October 2008 to $519 in October 2009.
Environmental impact
External costs of environmental impact are not fully included in the selling price of personal computers. Personal computers have become a large contributor to the 50 million tons of discarded electronic waste generated annually, according to the United Nations Environment Programme. To address the electronic waste issue affecting developing countries and the environment, extended producer responsibility (EPR) acts have been implemented in various countries and states. In the absence of comprehensive national legislation or regulation on the export and import of electronic waste, the Silicon Valley Toxics Coalition and BAN (Basel Action Network) teamed up with electronic recyclers in the US and Canada to create an e-steward program for the orderly disposal of electronic waste. Some organizations oppose EPR regulation and claim that manufacturers naturally move toward reduced material and energy use.
See also
List of home computers
Public computer
Portable computer
Desktop replacement computer
Quiet PC
Pocket PC
Market share of personal computer vendors
Personal Computer Museum
Enthusiast computer
References
Further reading
Accidental Empires: How the boys of Silicon Valley make their millions, battle foreign competition, and still can't get a date, Robert X. Cringely, Addison-Wesley Publishing (1992)
PC Magazine, Vol. 2, No. 6, November 1983, "SCAMP: The Missing Link in the PC's Past?"
External links
How Stuff Works pages: Dissecting a PC; How PCs Work; How to Upgrade Your Computer; How to Build a Computer
Global archive with product data-sheets of PCs and Workstations
American inventions
Classes of computers
Home appliances
Office equipment
Kasparov's Gambit
Kasparov's Gambit, or simply Gambit, is a chess-playing computer program created by Heuristic Software and published by Electronic Arts in 1993. It was based on Socrates II, the only winner of the North American Computer Chess Championship running on a common microcomputer. It was designed for MS-DOS and released while Garry Kasparov reigned as world champion; his involvement and support were its key allure.
History
Julio Kaplan, chess player, computer programmer, and owner of the company Heuristic Software, first developed Heuristic Alpha in 1990–91. The original version evolved into Socrates with the help of other chess players and programmers, including Larry Kaufman and Don Dailey, who were later also developers of Kasparov's Gambit. Improvements to Socrates were reflected in a version called Titan, renamed for competition as Socrates II, the most successful of the series, winning the 1993 ACM International Chess Championship. During the course of the championship, Socrates II, which was running on a stock 486 PC, defeated opponents with purpose-built hardware and software for playing chess, including HiTech and Cray Blitz. Electronic Arts purchased Socrates II and hired its creators to build a new product, Kasparov's Gambit, with Kasparov as consultant and brand. It was the company's effort to enter the chess programs market, dominated at the time by Chessmaster 3000 and Blitz. It went on sale in 1993 but contained a number of bugs, so it was patched at the end of that year. The patched version ran at about 75% of the speed of Socrates II, which was quite an achievement considering that the program's full set of features shared the same computer resources. In 1993, it competed in the Harvard Cup (six humans versus six programs), facing grandmasters with ratings ranging from 2515 to 2625 Elo. It finished the competition in 12th and last place. Grandmasters took the first five places, and another Socrates derivative, Socrates Exp, was the best program, finishing in 6th place. According to team developer Eric Schiller, a Windows version was planned by Electronic Arts but was never finished. Electronic Arts had earlier produced the chess variant Archon: The Light and the Dark (1983), and later followed up with Battle Chess II: Chinese Chess (2002) and Jamdat Mobile's Kasparov Chessmate (2003).
Reception
Computer Gaming World in 1993 approved of Kasparov's Gambit's "stunning" SVGA graphics, Socrates II engine, and coaching features, concluding that it was "above any PC game on the market". It was a runner-up for the magazine's Strategy Game of the Year award in June 1994, losing to Master of Orion. The editors called Kasparov's Gambit "beautifully crafted", a "great teacher" and "a chess game for the 'rest of us.'" It holds 145th place in Computer Gaming World's 1996 list of the 150 Best Games of All Time.
Features
Gambit was intended to combine the capabilities of champion-level software with a teaching tool for a wide range of player levels. It was Electronic Arts' first use of windowed video, showing digitized images, video and voice of champion Garry Kasparov giving advice and commenting on player moves.
Primary features include:
An interactive tutorial with video help by Garry Kasparov
An inline glossary of chess terms
A library of 500 famous games played by past world champions
An auxiliary graphical chessboard showing the computer's analysis while playing or reviewing moves
An interactive move list
An analysis text box, showing the move's elapsed time, search depth, score of the best evaluated line, and number of positions searched
Multiple computer playing styles, allowing creation and customization of computer opponents
A coach window including the moves played and comments about openings and advice, sometimes showing videos of Kasparov
Rating
The human strength rating is calculated using the Elo formula with the included personalities and the player's own performance, going from 800 to 2800 points. New players get a customizable 800 Elo, which changes according to the total number of games played, the opponents' relative strength, and game results. Creating a personality involves setting five adjustable characteristics as percentages (0-100%): strength, orthodoxy, creativity, focus and aggressiveness. These define, besides its style, its Elo rating. The user's Elo is calculated against Gambit's universe of electronic players, and so does not match rankings in the real world. Instead, this feature was designed to provide a useful way to measure the player's strength and progress against Gambit.
Teaching tools
The teaching tools comprise 125 tutorials, written by renowned chess author and developer Eric Schiller and classified into openings, middle game, endgames (checkmates), tactics and strategy, as well as a Famous Games database: a list of games by all-time world champions, commented by Kasparov, with a quiz option where the user must choose the next move.
Technical information
It was designed for 386SX IBM AT-compatible systems. Even though commands could be entered from either a keyboard or a mouse, the use of a mouse was recommended. When it was released, Kasparov's Gambit offered a polished look and feel, using SVGA mode with 640x480 resolution, 256 colors, and voice/video recordings of world champion Garry Kasparov. A lack of sound card support was reported by users. It is playable on DOSBox since version 0.61 on Linux and other Unix-like operating systems, Windows XP and subsequent versions, and Mac OS X.
Development
The first intention was to use Heuristic Alpha as Gambit's base, but the unexpectedly good performance of Socrates II in tournaments made it the final choice. According to developer and tester Larry Kaufman, the first release included important bugs: "Knowledge of bishop mobility appears to be missing, as does some other chess knowledge, and Gambit appears to run only about 50-60% of the speed of the ACM program in positions (without bishops) where the two do play and evaluate identically. There are also bugs in the features and the time controls, and the program is rather difficult to use (perhaps because it has so many features). One good thing I can say is that the 3d graphics are superb... I have tested the patched version, and have confirmed that most or all of the bugs have been corrected. The new version does play identically to the ACM program and runs at 70-75% of the speed, so it should rate just 30 points below the ACM program."
The Socrates II engine was fully programmed in assembly language but was rewritten entirely in C for the Kasparov's Gambit engine. Assembly language was instead used for sound and video capabilities, as well as for other functionality.
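The rating scheme described above follows the standard Elo formula. A minimal sketch of such an update in Python, assuming a conventional K-factor of 32 and clamping to Gambit's 800-2800 range (both constants are illustrative, not documented values from the program):

    # Sketch of a standard Elo update; K=32 and the 800-2800 clamp are
    # assumptions for illustration, not constants taken from Gambit.
    def expected_score(player, opponent):
        # Probability that `player` scores against `opponent`.
        return 1.0 / (1.0 + 10 ** ((opponent - player) / 400.0))

    def update_rating(player, opponent, score, k=32.0):
        # `score` is 1.0 for a win, 0.5 for a draw, 0.0 for a loss.
        new_rating = player + k * (score - expected_score(player, opponent))
        return max(800.0, min(2800.0, new_rating))

    # A new 800-rated player defeating a 1200-rated personality:
    print(round(update_rating(800, 1200, 1.0)))  # 829

Under this formula, beating a much stronger personality moves the rating far more than beating an equal one, which matches the manual's note that progress depends on the opponents' relative strength.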
See also
Computer chess
Vintage software
List of Electronic Arts games
Notes
References
External links
Games played at the 1993 Harvard Cup by Kasparov's Gambit at 365Chess.com
Artificial intelligence applications
Chess software
DOS games
DOS-only games
1993 video games
Video games developed in the United States
Garry Kasparov
Hector Martin (hacker)
Hector Martin Cantero (born September 9, 1990), also known as marcan, is a security hacker known for hacking multiple PlayStation generations, the Wii, and other devices.
Biography
Education
Martin went to the American School of Bilbao, where he received his primary and secondary education.
Career
He has been part of Team Twiizers, where he was responsible for reverse engineering and hacking the Wii. He was the first to create an open-source driver for the Microsoft Kinect, a reverse-engineering effort for which he was widely credited. Sony sued him and others for hacking the PlayStation 3; the case was eventually settled out of court. In 2016, he ported Linux to the PlayStation 4 and demonstrated this at the 33rd Chaos Communication Congress by running Steam inside Linux. He created the usbmuxd tool for synchronizing data from iPhones to Linux computers. In 2021, he founded the Asahi Linux project, which he has led since. Martin discovered the "M1racles" security vulnerability in the Apple M1 platform.
References
External links
Personal website
1990 births
Living people
Hackers
Computer security specialists
Camel case
Camel case (sometimes stylized as camelCase or CamelCase, also known as camel caps or more formally as medial capitals) is the practice of writing phrases without spaces or punctuation, indicating the separation of words with a single capitalized letter, with the first word starting with either case. Common examples include "iPhone" and "eBay". It is also sometimes used in online usernames such as "johnSmith", and to make multi-word domain names more legible, for example in promoting "EasyWidgetCompany.com".
Camel case is often used as a naming convention in computer programming, but the term is ambiguous because the capitalization of the first letter is optional: some programming styles prefer camel case with the first letter capitalised, others not. For clarity, this article calls the two alternatives upper camel case (initial uppercase letter, also known as Initial Capitals, Initial Caps, InitCaps or Pascal case) and lower camel case (initial lowercase letter, also known as dromedary case). Some people and organizations, notably Microsoft, use the term camel case only for lower camel case, using the term Pascal case for upper camel case.
Camel case is distinct from title case, which capitalises all words but retains the spaces between them, and from Tall Man lettering, which uses capitals to emphasize the differences between similar-looking product names such as "predniSONE" and "predniSOLONE". Camel case is also distinct from snake case, which uses underscores interspersed with lowercase letters (sometimes with the first letter capitalized). A combination of snake and camel case (identifiers Written_Like_This) is recommended in the Ada 95 style guide.
Variations and synonyms
The original name of the practice, used in media studies, grammars and the Oxford English Dictionary, was "medial capitals". Other synonyms include:
camelBack (or camel-back) notation or CamelCaps
CapitalizedWords or CapWords for upper camel case in Python
compoundNames
Embedded caps (or embedded capitals)
HumpBack (or hump-back) notation
InterCaps or intercapping (abbreviation of Internal Capitalization)
mixedCase for lower camel case in Python
PascalCase for upper camel case (after the Pascal programming language)
Smalltalk case
WikiWord or WikiCase (especially in older wikis)
The earliest known occurrence of the term "InterCaps" on Usenet is in an April 1990 post to the group alt.folklore.computers by Avi Rappoport. The earliest use of the name "Camel Case" occurs in 1995, in a post by Newton Love. Love has since said, "With the advent of programming languages having these sorts of constructs, the humpiness of the style made me call it HumpyCase at first, before I settled on CamelCase. I had been calling it CamelCase for years. ... The citation above was just the first time I had used the name on USENET."
Traditional use in natural language
In word combinations
The use of medial capitals as a convention in the regular spelling of everyday texts is rare, but is used in some languages as a solution to particular problems which arise when two words or segments are combined. In Italian, pronouns can be suffixed to verbs, and because the honorific form of second-person pronouns is capitalized, this can produce a sentence like non ho trovato il tempo di risponderLe ("I have not found time to answer you", where Le means "to you").
In German, the medial capital letter I, called Binnen-I, is sometimes used in a word like StudentInnen ("students") to indicate that both Studenten ("male students") and Studentinnen ("female students") are intended simultaneously. However, mid-word capitalisation does not conform to German orthography apart from proper names like McDonald; the previous example could be correctly written using parentheses as Student(inn)en, analogous to "congress(wo)men" in English.
In Irish, camel case is used when an inflectional prefix is attached to a proper noun, for example i nGaillimh ("in Galway"), from Gaillimh ("Galway"); an tAlbanach ("the Scottish person"), from Albanach ("Scottish person"); and go hÉirinn ("to Ireland"), from Éire ("Ireland"). In recent Scottish Gaelic orthography, a hyphen has been inserted: an t-Albannach.
This convention is also used by several written Bantu languages (e.g. isiZulu, "Zulu language") and several indigenous languages of Mexico (e.g. Nahuatl, Totonacan, Mixe–Zoque, and some Oto-Manguean languages).
In Dutch, when capitalizing the digraph ij, both the letter I and the letter J are capitalized, for example in the country name IJsland ("Iceland").
In Chinese pinyin, camel case is sometimes used for place names so that readers can more easily pick out the different parts of the name. For example, places like Beijing (北京), Qinhuangdao (秦皇岛), and Daxing'anling (大兴安岭) can be written as BeiJing, QinHuangDao, and DaXingAnLing respectively, with the number of capital letters equaling the number of Chinese characters. Writing word compounds only by the initial letter of each character is also acceptable in some cases, so Beijing can be written as BJ, Qinhuangdao as QHD, and Daxing'anling as DXAL.
In English, medial capitals are usually only found in Scottish or Irish "Mac-" or "Mc-" names, where for example MacDonald, McDonald, and Macdonald are common spelling variants of the same name, and in Anglo-Norman "Fitz-" names, where for example both FitzGerald and Fitzgerald are found. In their English style guide The King's English, first published in 1906, H. W. and F. G. Fowler suggested that medial capitals could be used in triple compound words where hyphens would cause ambiguity; the examples they give are KingMark-like (as against King Mark-like) and Anglo-SouthAmerican (as against Anglo-South American). However, they described the system as "too hopelessly contrary to use at present."
In transliterations
In the scholarly transliteration of languages written in other scripts, medial capitals are used in similar situations. For example, in transliterated Hebrew, ha'Ivri means "the Hebrew person" or "the Jew" and b'Yerushalayim means "in Jerusalem". In Tibetan proper names like rLobsang, the "r" stands for a prefix glyph in the original script that functions as a tone marker rather than a normal letter. Another example is tsIurku, a Latin transcription of the Chechen term for the capping stone of the characteristic medieval defensive towers of Chechnya and Ingushetia; the capital letter "I" here denotes a phoneme distinct from the one transcribed as "i".
In abbreviations
Medial capitals are traditionally used in abbreviations to reflect the capitalization that the words would have when written out in full, for example in the academic titles PhD or BSc. A more recent example is NaNoWriMo, a contraction of National Novel Writing Month and the designation for both the annual event and the nonprofit organization that runs it. In German, the names of statutes are abbreviated using embedded capitals, e.g.
StGB for Strafgesetzbuch (Criminal Code), PatG for Patentgesetz (Patent Act), BVerfG for Bundesverfassungsgericht (Federal Constitutional Court), or the very common GmbH, for Gesellschaft mit beschränkter Haftung (private limited company). In this context, there can even be three or more camel case capitals, e.g. in TzBfG for Teilzeit- und Befristungsgesetz (Act on Part-Time and Limited Term Occupations). In French, camel case acronyms such as OuLiPo (1960) were favored for a time as alternatives to initialisms. Camel case is often used to transliterate initialisms into alphabets where two letters may be required to represent a single character of the original alphabet, e.g., DShK from Cyrillic ДШК.
History of modern technical use
Chemical formulae
The first systematic and widespread use of medial capitals for technical purposes was the notation for chemical formulae invented by the Swedish chemist Jacob Berzelius in 1813. To replace the multitude of naming and symbol conventions used by chemists until that time, he proposed to indicate each chemical element by a symbol of one or two letters, the first one being capitalized. The capitalization allowed formulae like "NaCl" to be written without spaces and still be parsed without ambiguity. Berzelius' system continues to be used, augmented with three-letter symbols such as "Uue" for unconfirmed or unknown elements and abbreviations for some common substituents (especially in the field of organic chemistry, for instance "Et" for "ethyl-"). This has been further extended to describe the amino acid sequences of proteins and other similar domains.
Early use in trademarks
Since the early 20th century, medial capitals have occasionally been used for corporate names and product trademarks, such as:
DryIce Corporation (1925), which marketed the solid form of carbon dioxide (CO2) as "Dry Ice", thus leading to its common name
CinemaScope and VistaVision, rival widescreen movie formats (1953)
ShopKo (1962), retail stores, later renamed Shopko
MisterRogers Neighborhood, the TV series also called Mister Rogers' Neighborhood (1968)
ChemGrass (1965), later renamed AstroTurf (1967)
ConAgra (1971), formerly Consolidated Mills
MasterCraft (1968), a sports boat manufacturer
AeroVironment (1971)
PolyGram (1972), formerly Grammophon-Philips Group
United HealthCare (1977)
MasterCard (1979), formerly Master Charge
SportsCenter (1979)
Computer programming
In the 1970s and 1980s, medial capitals were adopted as a standard or alternative naming convention for multi-word identifiers in several programming languages. The precise origin of the convention in computer programming has not yet been settled. The proceedings of a 1954 conference occasionally and informally referred to IBM's Speedcoding system as "SpeedCo". Christopher Strachey's paper on GPM (1965) shows a program that includes some medial capital identifiers, including "NextCh" and "WriteSymbol".
Multiple-word descriptive identifiers with embedded spaces such as end of file or char table cannot be used in most programming languages because the spaces between the words would be parsed as delimiters between tokens. The alternative of running the words together, as in endoffile or chartable, is difficult to understand and possibly misleading; for example, chartable is an English word (able to be charted), whereas charTable means a table of chars.
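The tokenization problem, and the alternatives to it, are easy to demonstrate. A minimal Python sketch (the identifier names are purely illustrative):

    # "end of file" cannot be one identifier: a parser sees three tokens,
    # so the assignment below would be a SyntaxError in Python:
    #     end of file = -1
    # A hyphenated name fares no better in expression-oriented languages,
    # since end-of-file parses as the subtraction (end - of) - file.
    endoffile = -1    # words run together: legal but hard to read
    end_of_file = -1  # underscores as word separators (snake case)
    endOfFile = -1    # medial capitals (lower camel case)
    print(end_of_file == endOfFile)  # True: both are ordinary names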
Some early programming languages, notably Lisp (1958) and COBOL (1959), addressed this problem by allowing a hyphen ("-") to be used between words of compound identifiers, as in "END-OF-FILE": Lisp because it worked well with prefix notation (a Lisp parser would not treat a hyphen in the middle of a symbol as a subtraction operator) and COBOL because its operators were individual English words. This convention remains in use in these languages, and is also common in program names entered on a command line, as in Unix. However, this solution was not adequate for mathematically oriented languages such as FORTRAN (1955) and ALGOL (1958), which used the hyphen as an infix subtraction operator. FORTRAN ignored blanks altogether, so programmers could use embedded spaces in variable names. However, this feature was not very useful since the early versions of the language restricted identifiers to no more than six characters. Exacerbating the problem, common punched card character sets of the time were uppercase only and lacked other special characters.
It was only in the late 1960s that the widespread adoption of the ASCII character set made both lowercase and the underscore character _ universally available. Some languages, notably C, promptly adopted underscores as word separators, and identifiers such as end_of_file are still prevalent in C programs and libraries (as well as in later languages influenced by C, such as Perl and Python). However, some languages and programmers chose to avoid underscores, among other reasons to prevent confusing them with whitespace, and adopted camel case instead.
Charles Simonyi, who worked at Xerox PARC in the 1970s and later oversaw the creation of Microsoft's Office suite of applications, invented and taught the use of Hungarian notation, one version of which uses the lowercase letter(s) at the start of a (capitalized) variable name to denote its type. One account claims that the camel case style first became popular at Xerox PARC around 1978, with the Mesa programming language developed for the Xerox Alto computer. This machine lacked an underscore key (whose place was taken by a left arrow "←"), and the hyphen and space characters were not permitted in identifiers, leaving camel case as the only viable scheme for readable multiword names. The PARC Mesa Language Manual (1979) included a coding standard with specific rules for upper and lower camel case that was strictly followed by the Mesa libraries and the Alto operating system. Niklaus Wirth, the inventor of Pascal, came to appreciate camel case during a sabbatical at PARC and used it in Modula, his next programming language. The Smalltalk language, which was developed originally on the Alto, also uses camel case instead of underscores. This language became quite popular in the early 1980s, and thus may also have been instrumental in spreading the style outside PARC.
Upper camel case (or "Pascal case") is used in the Wolfram Language of the computer algebra system Mathematica for predefined identifiers. User-defined identifiers should start with a lowercase letter. This avoids the conflict between predefined and user-defined identifiers both today and in all future versions.
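A sketch of how these conventions sit side by side in practice; the names and the Hungarian usage-type prefixes below are invented for illustration, not taken from any particular codebase:

    # Hungarian notation: a lowercase "usage type" prefix, then the rest
    # of the name in upper camel case ("str" and "n" are illustrative
    # prefixes for a string and a count).
    strUserName = "avi"
    nRetries = 3

    # Upper camel case (Pascal case) for a type, lower camel case for
    # variables and attributes, as many later style guides recommend:
    class CustomerAccount:                    # upper camel case
        def __init__(self, accountHolder):    # lower camel case
            self.accountHolder = accountHolder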
Computer companies and products
Whatever its origins in the computing field, the convention was used in the names of computer companies and their commercial brands from the late 1970s onward, a trend that continues to this day:
(1977) CompuServe
(1978) WordStar
(1979) VisiCalc
(1982) MicroProse, WordPerfect
(1983) NetWare
(1984) LaserJet, MacWorks, PostScript
(1985) PageMaker
(1987) ClarisWorks, HyperCard, PowerPoint
(1990) WorldWideWeb (the first web browser), later renamed Nexus
Spread to mainstream usage
In the 1980s and 1990s, after the advent of the personal computer exposed hacker culture to the world, camel case became fashionable for corporate trade names in non-computer fields as well. Mainstream usage was well established by 1990:
(1980) EchoStar
(1984) BellSouth
(1985) EastEnders
(1986) SpaceCamp
(1990) HarperCollins, SeaTac
(1998) PricewaterhouseCoopers, merger of Price Waterhouse and Coopers
During the dot-com bubble of the late 1990s, the lowercase prefixes "e" (for "electronic") and "i" (for "Internet", "information", "intelligent", etc.) became quite common, giving rise to names like Apple's iMac and the eBox software platform.
In 1998, Dave Yost suggested that chemists use medial capitals to aid readability of long chemical names, e.g. write AmidoPhosphoRibosylTransferase instead of amidophosphoribosyltransferase. This usage was not widely adopted.
Camel case is sometimes used for abbreviated names of certain neighborhoods, e.g. the New York City neighborhoods SoHo (South of Houston Street) and TriBeCa (Triangle Below Canal Street) and San Francisco's SoMa (South of Market). Such usages erode quickly, so the neighborhoods are now typically rendered as Soho, Tribeca, and Soma. Internal capitalization has also been used for other technical codes like HeLa (1983).
Current usage in computing
Programming and coding
The use of medial caps for compound identifiers is recommended by the coding style guidelines of many organizations or software projects. For some languages (such as Mesa, Pascal, Modula, Java and Microsoft's .NET) this practice is recommended by the language developers or by authoritative manuals and has therefore become part of the language's "culture". Style guidelines often distinguish between upper and lower camel case, typically specifying which variety should be used for specific kinds of entities: variables, record fields, methods, procedures, functions, subroutines, types, etc. These rules are sometimes supported by static analysis tools that check source code for adherence.
The original Hungarian notation for programming, for example, specifies that a lowercase abbreviation for the "usage type" (not data type) should prefix all variable names, with the remainder of the name in upper camel case; as such it is a form of lower camel case.
Programming identifiers often need to contain acronyms and initialisms that are already in uppercase, such as "old HTML file". By analogy with the title case rules, the natural camel case rendering would have the abbreviation all in uppercase, namely "oldHTMLFile". However, this approach is problematic when two acronyms occur together (e.g., "parse DBM XML" would become "parseDBMXML") or when the standard mandates lower camel case but the name begins with an abbreviation (e.g. "SQL server" would become "sQLServer"). For this reason, some programmers prefer to treat abbreviations as if they were lowercase words and write "oldHtmlFile", "parseDbmXml" or "sqlServer".
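A minimal sketch of that lowercase-abbreviation rule in Python (the function name and the whitespace-splitting rule are illustrative, not taken from any particular standard):

    import re

    def to_lower_camel(phrase):
        # Render a spaced phrase in lower camel case, treating acronyms
        # such as "HTML" as ordinary words ("Html").
        words = re.split(r"\s+", phrase.strip())
        head = words[0].lower()
        tail = (w.lower().capitalize() for w in words[1:])
        return head + "".join(tail)

    print(to_lower_camel("old HTML file"))  # oldHtmlFile
    print(to_lower_camel("parse DBM XML"))  # parseDbmXml
    print(to_lower_camel("SQL server"))     # sqlServer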
Treating abbreviations as lowercase words, however, can make it harder to recognise that a given word is intended as an acronym.
Wiki link markup
Camel case is used in some wiki markup languages for terms that should be automatically linked to other wiki pages. This convention was used in Ward Cunningham's original wiki software, WikiWikiWeb, and can be activated in most other wikis. Some wiki engines such as TiddlyWiki, Trac and PmWiki make use of it in the default settings, but usually also provide a configuration mechanism or plugin to disable it. Wikipedia formerly used camel case linking as well, but switched to explicit link markup using square brackets, and many other wiki sites have done the same. MediaWiki, for example, does not support camel case for linking. Some wikis that do not use camel case linking may still use camel case as a naming convention, such as AboutUs.
Other uses
The NIEM registry requires that XML data elements use upper camel case and XML attributes use lower camel case.
Most popular command-line interfaces and scripting languages cannot easily handle file names that contain embedded spaces (usually requiring the name to be put in quotes). Therefore, users of those systems often resort to camel case (or underscores, hyphens and other "safe" characters) for compound file names like MyJobResume.pdf.
Microblogging and social networking services that limit the number of characters in a message are potential outlets for medial capitals. Using camel case between words reduces the number of spaces, and thus the number of characters, in a given message, allowing more content to fit into the limited space. Hashtags, especially long ones, often use camel case to maintain readability (e.g. #CollegeStudentProblems is easier to read than #collegestudentproblems).
In website URLs, spaces are percent-encoded as "%20", making the address longer and less human-readable. By omitting spaces, camel case avoids this problem.
Readability studies
Camel case has been criticised as negatively impacting readability due to the removal of spaces and the uppercasing of every word. A 2009 study comparing snake case to camel case found that camel case identifiers could be recognised with higher accuracy among both programmers and non-programmers, and that programmers already trained in camel case were able to recognise those identifiers faster than underscored snake-case identifiers. A 2010 follow-up study, whose subjects were mainly programmers already trained in camel case, used an improved measurement method based on eye-tracking equipment; it indicates: "While results indicate no difference in accuracy between the two styles, subjects recognize identifiers in the underscore style more quickly."
See also
References
External links
Examples and history of CamelCase, also WordsSmashedTogetherLikeSo
.NET Framework General Reference Capitalization Styles
What's in a nAME(cq)?, by Bill Walsh, at The Slot
The Science of Word Recognition, by Kevin Larson, Advanced Reading Technology, Microsoft Corporation
Convert text to CamelCase
OASIS Cover Pages: CamelCase for Naming XML-Related Components
Convert text to CamelCase, Title Case, Uppercase and lowercase
Capitalization
Naming conventions
Typography
Source code
25835
https://en.wikipedia.org/wiki/Rn%20%28newsreader%29
Rn (newsreader)
rn (short for Read News) is a news client (or "newsreader") written by Larry Wall and originally released in 1984. It was one of the first newsreaders to take full advantage of character-addressable CRT terminals (vnews, by Kenneth Almquist, was the first). Previous newsreaders, such as readnews, were mostly line-oriented and designed for use on the printing terminals which were common on the early Unix minicomputers where the Usenet software and network originated. Later variants of the original rn program included rrn, trn, and strn.
Features
rn was also notable for three other features it introduced: KILL files, "do the right thing", and automatic configuration. The KILL file was a file (called, obviously enough, KILL) containing regular expressions matched against the subjects of news articles in each group; if an article matched, it would be marked as having already been read. This feature proved essential as the growth of the Usenet made it impossible to read every article in even a limited selection of newsgroups.
"Do the right thing" was a fundamental change from the user-interface model of previous news software; rather than requiring users to navigate menus or learn a distinct command vocabulary for every operating mode of the program, certain single-keystroke commands were repeated throughout the user interface, performing the most obviously appropriate function for the task at hand. The most important of these commands was the space character, which means "go on to the next thing", where the next thing could be the next page, the next article, or the next newsgroup, depending on where the user was in the process of reading news.
Finally, automatic configuration was a feature for system administrators, not visible to users. Most Unix programs, and in particular all of the Usenet software, were distributed in source code form. Because different vendors of Unix systems (and in many cases, different versions of the Unix software) implemented slightly different behavior and names for important functions, a system administrator was required to have sufficient programming expertise to edit the source code before building the program executables to account for these differences. A particularly considerate programmer might have centralized these differences in a single source code file, but that still required manual editing. rn changed that by including a script called Configure, which had enough intelligence on its own to examine the computer system it was running on and determine, of those functions and interfaces known to behave differently, which behavior the system implemented. Today, most open source software is distributed with a similar script, typically generated by autoconf.
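The core of the KILL-file mechanism can be modeled in a few lines. A minimal Python sketch (the patterns and subjects are invented for illustration; real rn KILL files also supported per-group commands and modifiers):

    import re

    # Hypothetical KILL-file contents: one regular expression per entry.
    kill_patterns = [re.compile(p, re.IGNORECASE)
                     for p in [r"make money fast", r"^test\b"]]

    def surviving_subjects(subjects):
        # Keep only the article subjects that no KILL pattern matches;
        # matching articles would be marked as already read.
        return [s for s in subjects
                if not any(p.search(s) for p in kill_patterns)]

    print(surviving_subjects(["Test posting", "trn 3.6 released",
                              "MAKE MONEY FAST!!!"]))
    # ['trn 3.6 released']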
History
Like all of the original newsreaders and the Usenet software itself, rn was designed for the environment of a large time-shared minicomputer, which users connected to using terminals wired directly to the machine, and where the only networks available were accessed by slow and expensive dial-up modem connections. All of the articles in all of the newsgroups were stored in files on the local disk (known as the "news spool"), and rn could simply read those files directly when presenting them to the user. When local area networks became widespread, it was natural that administrators and users would desire remote access to the news spool, and NNTP, the Network News Transfer Protocol, was developed to serve that need. While working at Baylor College of Medicine, Stan O. Barber developed remote rn (rrn), a set of patches to rn which allowed it to communicate with an NNTP server over a local-area (or even wide-area) network. Barber later took over maintenance responsibility for rn itself from Larry Wall.
As news volumes continued to increase, it became apparent that even KILL files could not possibly keep up with the sheer number of users and articles. A new concept, the threaded newsreader, was needed as users gradually switched from a "read most, kill few" model to "ignore most, read few". By organizing the articles in a newsgroup according to threads of discussion, using headers that had long been present in Usenet articles but practically unused, a threaded newsreader would allow users to keep up with the topics and discussions they were interested in without having to explicitly deselect uninteresting threads. Kim F. Storm's nn newsreader was the first to implement this new model, and it looked for a while as if nn would do to rn what rn had done to readnews. This fate was averted when Wayne Davison developed trn, a set of patches to rn which gave it both threading at the article level and a new user interface that allowed users to select only the threads they desired, while remaining true to the original rn interface philosophy of "do the right thing". A more recent addition to the rn family is scoring, which allows a more complex method of evaluating articles to determine whether the user wishes to read them; originally this was implemented in a code fork of trn called strn, but it was later integrated into the official trn distribution.
See also
List of Usenet newsreaders
Comparison of Usenet newsreaders
References
External links
news.software.readers newsgroup
Free Usenet clients
4458640
https://en.wikipedia.org/wiki/Eberhard%20Zangger
Eberhard Zangger
Eberhard Zangger (born 1958 in Kamen, West Germany) is a Swiss geoarchaeologist, corporate communications consultant and publicist. Since 1994 he has advocated the view that a Luwian civilization existed in Western Asia Minor during the 2nd millennium BC. In 2014 he established the international non-profit foundation Luwian Studies, of which he is president.
Life and work
Eberhard Zangger studied geology and paleontology at the University of Kiel and obtained a PhD from Stanford University in 1988. After this he was a senior research associate in the Department of Earth Sciences at the University of Cambridge (1988–91). In June 1991 he founded the consultancy office Geoarcheology International in Zurich, Switzerland, from where he participated in archaeological projects in the eastern Mediterranean each year until 1999.
Zangger began concentrating on geoarchaeology in 1982. His early research work and discoveries included the coastal situation of Dimini in Neolithic Central Greece, the extent of Lake Lerna in the Argive Plain, the age and function of the Mycenaean river diversion and the extent of the lower town of Tiryns, the insular character of Asine, the artificial harbor of Nestor at Pylos, including its clean-water flushing mechanism, and a human-made dam at Minoan Monastiraki in central Crete.
In 1992, Zangger suggested that Plato used an Egyptian version of a story about Troy for his legendary account of Atlantis. Zangger based his argument on comparisons between Mycenaean culture and Plato's account of the Greek civilization facing Atlantis, as well as parallels between the recollections of the Trojan War and the war between Greece and Atlantis. He recognized similarities between the Sea People invasions and the aggressors described by Plato, and he also saw parallels between the Sea People invasions and the Trojan War. In 1992 Zangger arrived at the conclusion that Troy must have been much bigger than archaeological scholarship had presumed, and that the city must have had artificial harbors inside the modern floodplain. In a 1993 article, Zangger listed many commonalities between Plato's description of Atlantis and different accounts of Troy as it looked in the late Bronze Age.
In 1994, Zangger presented a chronology of political and economic developments in the eastern Mediterranean during the 13th century BC. This time, Zangger interpreted the legend of the Trojan War as the memory of a momentous war which led to the collapse of many countries around the eastern Mediterranean around 1200 BC. Zangger's overall research goal was to find an explanation for the end of the Bronze Age in the eastern Mediterranean around 1200 BC. In contrast to the archaeological scholarship of the time, Zangger attributed greater importance to the states in Western Anatolia that are known from Hittite documents, including the Luwian kingdoms Arzawa, Mira, Wilusa, Lukka and Seha River Land. In Zangger's view, if these petty kingdoms had stood united, they would have matched the economic and military importance of Mycenaean Greece or Minoan Crete. In a review of the books The Flood from Heaven and Ein neuer Kampf um Troia in the Journal of Field Archaeology, the US prehistorian Daniel Pullen of Florida State University emphasized Zangger's approach: Zangger, Pullen says, “applies the rigors of scientific methodology to explaining the end of the Bronze Age in the eastern Mediterranean.”
In his third book, Zangger turned to developments in the 12th century BC, after the Trojan War.
According to Zangger, scattered groups of survivors of the Sea People invasions and the Trojan War founded new settlements in Italy and Syria/Palestine, from which the Etruscan and Phoenician cultures emerged. Zangger also argued against the overrating of natural disasters as a trigger for cultural change. In his opinion, natural scientists and specialists in urban development and hydraulic engineering should become involved in archaeology more often. In collaboration with the Federal Institute for Geosciences and Natural Resources in Hannover, Zangger proposed a geophysical exploration of the plain of Troy to locate settlement layers and artificial port basins. The Turkish Ministry of Culture did not grant permission to conduct this project. In 2001 Zangger said that, because of a vigorous scholarly dispute with the Troy excavator Manfred Korfmann, he was ceasing his research.
In the fall of 1999, Zangger became a business consultant specializing in corporate communications and public relations. In 2002 he founded science communication GmbH, a consultancy firm for corporate communications.
Luwian Studies Foundation
Since April 2014, Zangger has been president of the board of trustees of the international non-profit foundation Luwian Studies. The commercial register of the Canton of Zurich (Switzerland) states the foundation's purpose as “the exploration of the second millennium BC in western Asia Minor and the dissemination of knowledge about it”. The Board of Trustees includes Ivo Hajnal, Jorrit Kelder, Matthias Oertle and Jeffrey Spier. In May 2016, Luwian Studies went public with a website in German, English and Turkish. At the same time Zangger's book appeared: The Luwian Civilization – The missing link in the Aegean Bronze Age. As part of its research, the foundation has systematically catalogued over 340 extensive settlement sites of the Middle and Late Bronze Age in Western Asia Minor. These sites are presented in a public database on the website.
James Mellaart's Estate
In June 2017, Zangger received unpublished documents from the estate of the British prehistorian James Mellaart, which the latter had marked as being of particular importance. The material in Mellaart's estate referred to two groups of documents, both of which were allegedly found in 1878 in a village called Beyköy, 34 kilometers north of Afyonkarahisar in western Turkey. On the one hand, there was a Luwian hieroglyphic inscription (“HL Beyköy 2”) on limestone, which must have been composed around 1180 BC. Mellaart, however, possessed only a drawing of this inscription. According to Mellaart's notes, bronze tablets bearing Hittite texts in Akkadian cuneiform were also found at Beyköy, in addition to the inscription (the “Beyköy text”). These described the political events of almost the entire Bronze Age from the perspective of rulers in western Asia Minor. Mellaart possessed only English translations of these documents.
In December 2017, Zangger and the Dutch linguist Fred Woudhuizen published, in the Dutch archaeology journal Talanta, the Luwian hieroglyphic drawings (including texts from Edremit, Yazılıtaş, Dağardı and Şahankaya) that were retrieved from Mellaart's estate. However, early in 2018 Zangger distanced himself from Mellaart and accused him of having falsified documents. Further research in Mellaart's former study in London in February 2018 had revealed that Mellaart had completely invented the (allegedly cuneiform) “Beyköy text”.
Woudhuizen, however, who published the material together with Zangger, continued to believe that the Luwian hieroglyphic inscription HL Beyköy 2 was not forged by Mellaart and is probably genuine.
Yazılıkaya
In June 2019 Zangger, together with the archaeologist and astronomer Rita Gautschy of the University of Basel, published a new interpretation of the Hittite rock sanctuary Yazılıkaya at Ḫattuša, according to which the sequence of rock reliefs in chamber A could have been used as a lunisolar calendar.
Selected publications
The Landscape Evolution of the Argive Plain (Greece). Paleo-Ecology, Holocene Depositional History and Coastline Changes. PhD dissertation, Stanford University. University Microfilm International, Ann Arbor, Michigan, 1988
Prehistoric Coastal Environments in Greece: The Vanished Landscapes of Dimini Bay and Lake Lerna. Journal of Field Archaeology 18 (1): 1-15. 1991
Geoarchaeology of the Argolid. Argolid, volume 2. Edited by the German Archaeological Institute. Gebrüder Mann Verlag, 149 pages, 1993
The Island of Asine: A paleogeographic reconstruction. Opuscula Atheniensia XX.15: 221-239. 1994
Zangger, Eberhard; Michael Timpson, Sergei Yazvenko, Falko Kuhnke & Jost Knauss: The Pylos Regional Archaeological Project: Landscape Evolution and Site Preservation. Hesperia 66 (4): 549-641. 1997
Athanassas, Constantin et al.: Exploring Paleogeographic Conditions at Two Paleolithic Sites in Navarino, Southwest Greece, Dated by Optically Stimulated Luminescence. Geoarchaeology 27: 237-258. 2012
Plato's Atlantis Account: A distorted recollection of the Trojan War. Oxford Journal of Archaeology 18 (1): 77-87. 1993
The Flood from Heaven – Deciphering the Atlantis Legend. Sidgwick & Jackson, London, 256 pages. 1992
Ein neuer Kampf um Troia – Archäologie in der Krise. Droemer Verlag, Munich, 352 pages. 1994
The Future of the Past: Archaeology in the 21st Century. Weidenfeld & Nicolson, London, 2001
Zangger, Eberhard, Michael Timpson, Sergei Yazvenko and Horst Leiermann: Searching for the Ports of Troy. In: Environmental Reconstruction in Mediterranean Landscape.
Some Open Questions About the Plain of Troia. In: Troia and the Troad – Scientific Approaches. Springer, Berlin, 317-324. 2003
Notes
References
Edge: Eberhard Zangger
Archaeologists from North Rhine-Westphalia Living people 1958 births People from Kamen
51517281
https://en.wikipedia.org/wiki/Meizu%20M3E
Meizu M3E
The Meizu M3E is a smartphone designed and produced by the Chinese manufacturer Meizu, which runs on Flyme OS, Meizu's modified Android operating system. It is a model of the company's M series. It was unveiled on August 10, 2016 in Beijing.
History
In July 2016, rumors about a new mid-range Meizu device appeared after several pictures and specifications were leaked on social media. According to these rumors, the new device was supposed to be called “M1E” and to feature a MediaTek Helio P10 system-on-a-chip with a Mali-T860 GPU. On August 2, 2016, a launch event for the new device, to be held on August 10, 2016, was officially announced.
Release
As announced, the M3E was released in Beijing on August 10, 2016. Pre-orders for the M3E began after the launch event on August 10, 2016.
Features
Flyme
The Meizu M3E was released with an updated version of Flyme OS, a modified operating system based on Android Marshmallow. It features an alternative, flat design and improved one-handed usability.
Hardware and design
The Meizu M3E features a MediaTek Helio P10 system-on-a-chip with an array of eight ARM Cortex-A53 CPU cores and an ARM Mali-T860 MP2 GPU. The M3E comes with 3 GB of RAM and 32 GB of internal storage. It reaches a score of 47397 points on the AnTuTu benchmark.
The Meizu M3E is available in five different colors (grey, silver, champagne gold, rose gold and blue). It has a full-metal body with a slate form factor, being rectangular with rounded corners, and has only one central physical button at the front. Unlike most other Android smartphones, the M3E has neither capacitive buttons nor on-screen buttons; the functionality of these keys is implemented using a technology called mBack, which makes use of gestures with the physical button. The M3E complements this button with a fingerprint sensor called mTouch.
The Meizu M3E features a fully laminated 5.5-inch IPS multi-touch capacitive touchscreen display with 1080x1920 pixels (Full HD) resolution and 403 ppi pixel density.
In addition to the touchscreen input and the front key, the device has volume/zoom control buttons and the power/lock button on the right side, a 3.5mm TRS audio jack on the top and a microUSB (Micro-B type) port on the bottom for charging and connectivity.
The Meizu M3E has two cameras. The rear camera has a resolution of 13 MP, a ƒ/2.2 aperture, a 5-element lens, phase-detection autofocus and an LED flash. The front camera has a resolution of 5 MP, a ƒ/2.0 aperture and a 4-element lens.
See also
Meizu
Comparison of smartphones
References
External links
Official product page
Meizu Android (operating system) devices Mobile phones introduced in 2016 Meizu smartphones Discontinued smartphones
2240615
https://en.wikipedia.org/wiki/Comparison%20of%20SSH%20clients
Comparison of SSH clients
An SSH client is a software program which uses the secure shell protocol to connect to a remote computer. This article compares a selection of notable clients.
General
Platform
The operating systems or virtual machines the SSH clients are designed to run on without emulation include several possibilities:
Partial indicates that while it works, the client lacks important functionality compared to versions for other OSs but may still be under development.
The list is not exhaustive, but rather reflects the most common platforms today.
Technical
Features
Authentication key algorithms
This table lists standard authentication key algorithms implemented by SSH clients. Some SSH products include both server and client implementations and support custom non-standard authentication algorithms not listed in this table.
See also
Comparison of SSH servers
Comparison of FTP client software
Comparison of remote desktop software
References
Cryptographic software Internet Protocol based network software SSH clients Secure Shell
18577363
https://en.wikipedia.org/wiki/Caligari%20Corporation
Caligari Corporation
Caligari Corporation was founded in 1985 by Roman Ormandy. In 1986 a prototype 3D video animation package for the Amiga computer was released, which led to the incorporation of Octree Software. From 1988 to 1992, Octree released several software packages including Caligari1, Caligari2, Caligari Broadcast, and Caligari 24. Caligari aimed to provide inexpensive yet professional industrial video and corporate presentation software. In 1993, Octree Software moved from New York to California and became known as Caligari Corporation. In 1994 trueSpace 1.0 was introduced on the Windows platform.
In early 2008, the company was acquired by Microsoft. On May 21, 2009, Caligari announced that Microsoft would cease to provide support for Caligari trueSpace. The website and forums were affected, and some services ceased to operate from May 22, 2009.
External links
Caligari Corporation's official website (last Wayback archive)
Fan site developing trueSpace software since 2009
American companies established in 1985 American companies disestablished in 2009 Software companies established in 1985 Software companies disestablished in 2009 Software companies based in Washington (state) Companies based in New York (state) Companies based in Mountain View, California Amiga raytracers Defunct software companies of the United States Microsoft acquisitions
47049462
https://en.wikipedia.org/wiki/Chemical%20cartridge
Chemical cartridge
A respirator cartridge or canister is a type of filter that removes gases, volatile organic compounds (VOCs), and other vapours from breathing air through adsorption, absorption, or chemisorption. It is one of two basic types of filters used by air-purifying respirators; the other is a mechanical filter, which removes only particulates. Hybrid filters combine the two. Workplace air that is polluted with fine particulate matter or noxious gases, but that contains enough oxygen (>19.5% in the US; >18% in the RF), can be rendered safe via air-purifying respirators. Cartridges are of different types, and must be chosen correctly and replaced on an appropriate schedule.
Purification methods
Absorption
Capturing noxious gases may be accomplished by sorbents. These materials (activated carbon, aluminum oxide, zeolite, etc.) have a large specific surface area and can absorb many gases. Typically, such sorbents are in the form of granules that fill the cartridge. Contaminated air travels through the cartridge's bed of sorbent granules; mobile harmful gas molecules collide with the surface of the sorbent and remain there. The sorbent gradually saturates and loses its ability to capture pollutants. The bond strength between captured molecules and the sorbent is small, and molecules can separate from the sorbent and return to the air. The sorbent's ability to capture gases depends on the properties of the gases and their concentrations, as well as on air temperature and relative humidity.
Chemisorption
Chemisorption utilizes a chemical reaction between the gas and the absorber. The ability of some harmful gases to react chemically with other substances can be used to capture them, and creating strong links between gas molecules and a sorbent may allow repeated use of a canister if it has enough unsaturated sorbent. Copper salts, for example, can form complex compounds with ammonia, and a mixture of copper(II) ions, zinc carbonate, and TEDA can detoxify hydrogen cyanide. Impregnating activated carbon with chemicals helps the material form stronger ties with the molecules of trapped gases and improves the capture of harmful gases: impregnation with iodine improves mercury capture, with metal salts ammonia capture, and with metal oxides acid-gas capture.
Catalytic decomposition
Some harmful gases can be neutralized through catalytic oxidation. Hopcalite can oxidize toxic carbon monoxide (CO) into harmless carbon dioxide (CO2). The effectiveness of this catalyst strongly decreases as relative humidity increases, so desiccants are often added. Air always contains water vapor, and after saturation of the desiccant the catalyst ceases to function.
Combined cartridges
Combined, or multi-gas, cartridges protect from several harmful gases by using multiple sorbents or catalysts. An example is the ASZM-TEDA carbon used in CBRN masks by the US Army: a form of activated carbon saturated with copper, zinc, silver, and molybdenum compounds, as well as with triethylenediamine (TEDA).
Classification and marking
Cartridge selection comes after assessing the atmosphere. In the US, NIOSH guides cartridge choice, along with manufacturer recommendations.
United States
In the US, the classification and certification of respirator cartridges, including their particulate filtration efficiency, is administered by the National Institute for Occupational Safety and Health (NIOSH) as part of Part 84 of Title 42 of the Code of Federal Regulations (42 CFR 84).
Manufacturers can certify cartridges intended for purifying workplace air of various gaseous contaminants. Orange can be used for painting the entire cartridge housing, or as a strip; but this color is not part of the color-coding table, and determining the cartridge's protection requires reading the inscription. Legislation requires the employer to select cartridges using only the labels (not the color markings).
European Union and the Russian Federation
In the EU and in the Russian Federation, manufacturers can certify cartridges intended for cleaning the air of various gaseous contaminants. The codes are covered by EN 14387; additionally the particulate codes P1, P2 and P3 are used. For example, A1P2 is the code for filters commonly used in industry and agriculture that provide protection against A-type gases and commonly occurring particulates. Cartridges AX, SX, and NO are not distinguished by sorption capacity (as they are in the US) when they are classified and certified. If the cartridge is designed to protect from several different types of harmful gases, the label will list all designations in order, for example A2B1, colored brown and grey. Other jurisdictions that use this style of classification include Australia/New Zealand (AS/NZS 1716:2012) and China (GB 2890:2009).
Detection of end of service life
Service lives of all types of cartridges are limited; therefore, the employer is obliged to replace them in a timely manner.
Old methods
Subjective reactions of users' sensory systems
The use of cartridges in a contaminated atmosphere leads to saturation of the sorbent (or of the desiccant, when catalysts are used), so the concentration of harmful gases in the purified air gradually increases. The ingress of harmful gases into the inhaled air can provoke a reaction in the user's sensory system: odor, taste, irritation of the respiratory system, dizziness, headaches, and other health impairments up to loss of consciousness. These signs (known in the US as "warning properties") indicate that one must leave the polluted workplace area and replace the cartridge with a new one. They can also be a symptom of a loose fit of the mask to the face and of leakage of unfiltered air through the gaps between the mask and the face.
Historically, this method is the oldest. Its advantages: if harmful gases have warning properties at concentrations less than 1 PEL, replacement will be carried out on time (in most cases, at least); the method does not require special (more expensive) cartridges or accessories; replacement happens exactly when needed, after sorbent saturation, without any calculations; and the sorption capacity of the cartridges is fully used, which reduces the cost of respiratory protection.
The disadvantage of this method is that some harmful gases have no warning properties. For example, of the more than 500 harmful gases listed in the Respirator Selection Guide, over 60 have no warning properties, and for over 100 of them no such information exists. So, relying on warning properties to replace cartridges may in some cases lead to breathing air with an excessive concentration of harmful gas. The odor threshold of pentaborane, for instance, is 194 PEL; if its concentration is only 10 PEL, one cannot change cartridges in time by relying on smell: they could be "used" forever, but they cannot protect forever.
The practice has shown that the presence of warning properties does not always lead to timely cartridge replacement. A study showed that about 95% of a group of people have an individual threshold of olfactory sensitivity within a range of 1/16 to 16 times the mean. This means that 2.5% of people will not be able to smell harmful gases at a concentration 16 times greater than the average threshold of perception of a smell; the threshold of sensitivity of different people can thus vary by two orders of magnitude. Likewise, about 15% of people cannot smell concentrations four times higher than the average sensitivity threshold. The odor threshold also depends greatly on how much attention people pay to it, and on their health status: sensitivity may be reduced, for example, by colds and other ailments. A worker's ability to detect smell also depends on the nature of the work to be performed: if it requires concentration, a user may not react to the smell. Prolonged exposure to harmful gases (for example, hydrogen sulfide) at low concentrations can create olfactory fatigue, which reduces sensitivity. In all of these cases, users can be exposed to harmful substances at concentrations above 1 PEL, and this may lead to the development of occupational diseases. This is why the use of this method of cartridge replacement has been banned in the US since 1996 (by the Occupational Safety and Health Administration (OSHA) standard).
Mass increase
To protect workers from carbon monoxide, cartridges often use the catalyst hopcalite. This catalyst does not change its properties over the time of use, but when it becomes moist, the degree of protection may be significantly reduced. Because water vapor is always present in the air, the polluted air is dehumidified in the cartridge so that the catalyst can work. Since the mass of water vapor in the polluted air is greater than the mass of harmful gases, trapping moisture from the air increases the mass of the cartridge significantly more than trapping the gases does. This substantial difference can be used to decide whether a gas cartridge can continue to be used without replacement: the cartridge is weighed, and a decision is made based on the magnitude of the increase in its mass. For example, the book describes gas cartridges (model "СО") which were replaced after a weight gain (relative to the initial weight) of 50 grams.
Other methods
The documents described Soviet cartridges (model "Г"), designed to protect from mercury. Their service life was limited to 100 hours of use (cartridges without a particle filter) or 60 hours of use (cartridges with a particle filter), after which it was necessary to replace the cartridge with a new one.
The documents also describe a non-destructive way to determine the remaining service life of new and used gas cartridges. Polluted air was pumped through the cartridge; since the degree of purification of the air depends on how much unsaturated sorbent is in the cartridge, an accurate measurement of the gas concentration in the cleaned air allows one to estimate the amount of unsaturated sorbent. Polluted air (1-bromobutane) was pumped through for a very short time, and therefore such tests do not reduce the service life considerably: the sorption capacity decreased, due to absorption of this gas, by about 0.5% of the sorption capacity of a new cartridge. The method was also used for 100% quality control of the cartridges manufactured by the English firm Martindale Protection Co.
(10 microliters of 1-bromobutane injected into the air stream), and to check the cartridges issued to workers at the firms Waring, Ltd. and Rentokil, Ltd. This method was used in the Chemical Defence Establishment in the early 1970s. The experts who developed this method received a patent.
The document briefly describes two further methods, spectral and microchemical, to objectively evaluate the degree of saturation of the sorbent in cartridges. The spectral method is based on determining the presence of harmful substances in the cartridge by sampling, with subsequent analysis on a special device (стилоскоп in Russian). The microchemical method is based on a layer-by-layer determination of the presence of harmful substances in the sorbent by sampling, with subsequent analysis by chemical methods. If the air is contaminated with the most toxic substances, the book recommends limiting the further duration of cartridge use; it recommended applying the spectral method (arsine and phosphine, phosgene, fluorine, organochlorine and organometallic compounds) and the microchemical method (hydrogen cyanide, cyanogen). Unfortunately, in both cases there is no description of how to extract a sample of the sorbent from the cartridge housing (the housing is usually not detachable), nor of how to use the cartridge after this test if the test shows that little of the sorbent is saturated.
Modern methods
Cartridge certification guarantees a minimum value of sorption capacity. The US OSHA standard for 1,3-butadiene indicates a specific service life for the cartridges.
Laboratory testing
If the company has a laboratory with the right equipment, specialists can pass contaminated air through a cartridge and measure the degree of purification achieved. This method makes it possible to determine the service life in an environment where the air is contaminated with a mixture of different substances whose capture by the sorbent is mutually affected (one substance influencing the capture of another); service-life calculation methods for such conditions have been developed only relatively recently. However, this requires accurate information on the concentrations of noxious substances, and these are often not constant. Tests in laboratories can also identify the remaining service life of cartridges after their use: if the remainder is large, similar cartridges can be used under such circumstances over a longer period of time, and in some cases a large remainder allows cartridges to be used repeatedly. This approach does not require accurate information on the concentrations of harmful substances; the cartridge replacement schedule is drawn up on the basis of the results of laboratory testing. The method has a serious drawback, however: the company must have complex and expensive equipment and trained professionals to use it, which is not always possible. According to a poll, cartridge replacement in the US was carried out on the basis of laboratory tests in approximately 5% of all organizations.
Research into whether it is possible to calculate the service life of respirator cartridges (if one knows the conditions of their use) has been conducted in developed countries since the 1970s. This allows one to replace cartridges in a timely fashion without the use of sophisticated and expensive equipment.
Computer programs
The world's leading respirator manufacturers were already offering customers computer programs for calculating service life by the year 2000.
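At their core, such programs estimate how long the sorbent bed can keep capturing a contaminant before breakthrough. A deliberately simplified Python sketch of the idea (real models, such as Wood's, also account for humidity, temperature and bed geometry; all numbers here are illustrative, not vendor data):

    def service_life_minutes(capacity_g, flow_l_min, conc_ppm, mol_weight):
        # Convert ppm to mg/L at roughly 25 degrees C and 1 atm
        # (molar volume of an ideal gas is about 24.45 L/mol).
        conc_mg_per_l = conc_ppm * mol_weight / 24.45 / 1000.0
        uptake_mg_per_min = flow_l_min * conc_mg_per_l   # mass captured per minute
        return capacity_g * 1000.0 / uptake_mg_per_min   # minutes until saturation

    # Illustrative only: 10 g of usable sorbent capacity, 30 L/min breathing
    # rate, 50 ppm of toluene (molecular weight about 92.14 g/mol).
    print(service_life_minutes(10, 30, 50, 92.14))   # roughly 1770 minutes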
By 2013, the 3M program could calculate the service life of cartridges exposed to more than 900 harmful gases and their combinations. The MSA program likewise takes hundreds of gases and their combinations into account, and similar programs were developed by Scott and Drägerwerk. J. Wood developed a mathematical model and software which now allow one to calculate the service life of any cartridge with known properties; OSHA uses it in its Advisor Genius program. The merit of this way of replacing cartridges is that it allows an employer to use normal, "common" cartridges and, given exact data, to replace them in time. The downside is that, because air contamination is often not constant and the nature of the work to be performed is not always stable (that is, the flow of air through the cartridges is not constant), it is recommended, for reliable protection, to base the calculations on worst-case working conditions. In all other cases, then, cartridges will be replaced with a partially used sorbent, which increases the cost of respiratory protection due to more frequent cartridge replacement. In addition, calculation accuracy is reduced at very high relative humidity, because the mathematical model does not take into account some of the physical effects that arise in such cases.
End-of-service-life indicators
If a cartridge has a device that warns the user of the approaching expiration of the service life (an end-of-service-life indicator, ESLI), the indication can be used for the timely replacement of cartridges. An ESLI can be active or passive.
Passive end-of-service-life indicators
A passive indicator often uses a sensor that changes color. This element is installed in the cartridge at some distance from the filtered-air outlet, so that the color change occurs before harmful gases begin to pass through the cartridge.
Active end-of-service-life indicators
An active indicator uses a light or an audible alarm, triggered by a sensor usually installed in the cartridge, to signal that a cartridge needs to be replaced. Such indicators allow one to replace cartridges on time in any lighting and do not require the worker to pay attention to the color of the indicator; they can also be used by workers who distinguish colors poorly. Despite the existence of solutions to the technical problems, and the availability of established certification requirements for ESLIs, from 1984 (the first certification standard with requirements for active ESLIs) until 2013 not one cartridge with an active ESLI was approved in the US. It turned out that the requirements for the cartridges were not quite exact, and employers are under no obligation to use these indicators specifically; respirator manufacturers therefore fear commercial failure when selling new, unusual products, although they continue to carry out research and development work in this area. An examination of respirator use in the US showed that over 200,000 workers may be exposed to excessive harmful gases due to late replacement of cartridges. The National Personal Protective Technology Laboratory (NPPTL) at NIOSH therefore began to develop an active ESLI. After the completion of this work, its results will help establish clear legal requirements for employers to follow, and the resulting technology will be transferred to industry for use in new, improved respiratory protective devices (RPD).
Legal requirements
Since it is not always possible to replace cartridges in a timely manner by relying on odor and other warning properties, OSHA has banned the use of this method. The employer is obliged to use only two ways of replacing cartridges: on a schedule, or by using an ESLI (because only these methods reliably preserve workers' health). OSHA instructions to inspectors provide specific guidance on inspecting the implementation of these requirements. On the other hand, the state requires manufacturers to provide the consumer with all the information about cartridges necessary to draw up a schedule for their timely replacement. Similar requirements exist in the occupational safety standard governing the selection and application of RPD in the EU. In England, a tutorial on the selection and use of respirators recommends obtaining information from the manufacturer and replacing the cartridges on a schedule or using an ESLI, and prohibits reusing cartridges after exposure to volatile substances that can migrate.
US law required the employer to use exclusively supplied-air respirators (SAR) for protection against harmful gases that have no warning properties. The use of supplied-air respirators may be the only way to reliably protect workers when there is no ESLI and it is impossible to calculate the service life. Legislation in the EU allows an employer to use only supplied-air respirators when employees work in conditions where air pollution is IDLH, because of the risk of untimely cartridge replacement.
Reuse
If a cartridge contains a large amount of sorbent and the concentration of contaminants is low, or if the cartridge was used for only a short time, it still has a lot of unsaturated sorbent (which can capture gases) after use. This may allow such cartridges to be used again. However, molecules of a trapped gas may desorb during storage of the cartridge. Due to the difference in concentrations inside the body of the cartridge (greater at the inlet, lower at the purified-air outlet), these desorbed molecules migrate inside the cartridge toward the outlet. A study of cartridges exposed to methyl bromide showed that this migration can make reuse after storage unsafe: the concentration of harmful substances in the purified air may exceed the PEL (even if clean air is pumped through the cartridge). To protect worker health, US law prohibits cartridge reuse after exposure to harmful substances that can migrate, even if the cartridge has much unsaturated sorbent after the first use. According to the standards, "volatile" substances (those able to migrate) are considered to be substances with a boiling point below 65 °C; but studies have shown that reuse of a cartridge may be unsafe even for substances with boiling points above 65 °C. Therefore, the manufacturer must provide the buyer with all the information required for safe cartridge use. Moreover, if the period of continuous service life of a cartridge (calculated by a program, as above) exceeds eight hours, legislation may limit its use to one shift.
The paper provides a procedure for calculating the concentration of harmful substances in the purified air at the start of cartridge reuse, which allows one to determine exactly when cartridges may be safely reused. These scientific results are, however, not yet reflected in any standards or guidelines on respirator use. The author of the article, working in the US, did not even consider using gas cartridges more than twice.
On the author's website one can download a free computer program that calculates the concentration of harmful substances immediately after the start of reuse of a cartridge (which allows one to determine whether reuse is safe).
Regenerating gas cartridges
Activated carbon does not bond strongly with harmful gases, so they can be released later; other sorbents undergo chemical reactions with the hazard and form strong bonds. Special technologies have been developed for the recovery of used cartridges. These create conditions that stimulate desorption of the harmful substances caught earlier. In the 1930s this was done with steam, heated air, or other methods; processing of the sorbent was carried out either after its removal from the body of the cartridge or without removing it. Specialists tried to use ion-exchange resin as the absorber in 1967, and the authors proposed regenerating the sorbent by washing it in a solution of alkali or soda. A study also showed that cartridges can be effectively regenerated after exposure to methyl bromide (by blowing them through with hot air at 100 to 110 °C, at a flow rate of 20 L/min, for about 60 minutes).
Regeneration of sorbents is used consistently and systematically in the chemical industry, as it saves the cost of replacing the sorbent, and the regeneration of industrial gas-cleaning devices can be carried out thoroughly and in an organized manner. However, with the mass use of gas masks under varied conditions, it is impossible to control the accuracy and correctness of such regeneration of respirator cartridges. Therefore, despite the technical feasibility and commercial benefits, regeneration of respirator cartridges is not carried out in such cases.
References
Safety equipment Respirators
1958462
https://en.wikipedia.org/wiki/Subject-matter%20expert
Subject-matter expert
A subject-matter expert (SME) is a person who is an authority in a particular area or topic. The term is used when developing materials about a topic (a book, an examination, a manual, etc.), and expertise on the topic is needed by the personnel developing the material. For example, tests are often created by a team of psychometricians and a team of SMEs. The psychometricians understand how to engineer a test while the SMEs understand the actual content of the exam. Books, manuals, and technical documentation are developed by technical writers and instructional designers in conjunction with SMEs. Technical communicators interview SMEs to extract information and convert it into a form suitable for the audience. SMEs are often required to sign off on the documents or training developed, checking them for technical accuracy. SMEs are also necessary for the development of training materials.
By field
In the pharmaceutical and biotechnology areas, ASTM standard E2500 specifies SMEs for various functions in project and process management. In one project, there will be many SMEs who are experts on air, water, utilities, process machines, process, packaging, storage, distribution and supply chain management. "Subject Matter Experts are defined as those individuals with specific expertise and responsibility in a particular area or field (for example, quality unit, engineering, automation, development, operations). Subject Matter Experts should take the lead role in the verification of manufacturing systems as appropriate within their area of expertise and responsibility." —ASTM E2500 §6.7.1 and §6.7.2.
In engineering and technical fields, an SME is one who is an authority on the design concept, calculations and performance of a system or process. In the scientific and academic fields, subject-matter experts are recruited to perform peer reviews, and they are used as oversight personnel to review reports in the accounting and financial fields.
A lawyer in an administrative agency may be designated an SME if they specialize in a particular field of law, such as tort, intellectual property rights, etc. A law firm may seek out and use an SME as an expert witness.
In electronic discovery environments, the term "SME" labels professionals with expertise in using computer-assisted reviewing technology (CAR) and technology-assisted review (TAR) to perform searches designed to produce precisely refined results that identify groups of data as potentially responsive or non-responsive to relevant issues. E-discovery SMEs also typically have experience in constructing the search strings used in the search. The term also refers to experts used to "train" TAR systems.
Domain expert (software)
A domain expert is frequently used in expert systems software development, and there the term always refers to the domain other than the software domain. A domain expert is a person with special knowledge or skills in a particular area of endeavour (e.g. an accountant is an expert in the domain of accountancy). The development of accounting software requires knowledge in two different domains: accounting and software. Some of the development workers may be experts in one domain and not the other. In software engineering environments, the term is used to describe professionals with expertise in the field of application. The term "SME" also has a broader definition in engineering and high tech as one who has the greatest expertise in a technical topic.
SMEs are often asked to review, improve, and approve technical work; to guide others; and to teach. According to Six Sigma, an SME "exhibits the highest level of expertise in performing a specialized job, task, or skill of broad definition."
In software development, as in the development of "complex computer systems" (e.g., artificial intelligence, expert systems, control, simulation, or business software), an SME is a person who is knowledgeable about the domain being represented (but often not knowledgeable about the programming technology used to represent it in the system). The SME tells the software developers what needs to be done by the computer system, and how the SME intends to use it. The SME may interact directly with the system, possibly through a simplified interface, or may codify domain knowledge for use by knowledge engineers or ontologists. An SME is also involved in validating the resulting system. SME has formal meaning in certain contexts such as Capability Maturity Models.
See also
Consultant
Knowledge engineering
Professional
Subject-matter expert
Turing test
References
Further reading
Maintenance of KBS's by Domain Experts, Bultman, Kuipers, Harmelen (2005)
Knowledge engineering Skills Knowledge
44382964
https://en.wikipedia.org/wiki/Holly%20Rushmeier
Holly Rushmeier
Holly Rushmeier is an American computer scientist and the John C. Malone Professor of Computer Science at Yale University. She is known for her contributions to the field of computer graphics.
Biography
Rushmeier has received three degrees in mechanical engineering from Cornell University: the B.S. in 1977, the M.S. in 1986, and the Ph.D. in 1988. Before returning to graduate school in 1983, she worked in Seattle as an engineer at Boeing Commercial Airplanes and Washington Natural Gas. While at Cornell, Rushmeier collaborated with Kenneth Torrance and Donald P. Greenberg.
After obtaining her Ph.D., Rushmeier joined the mechanical engineering faculty as an assistant professor at Georgia Tech, where she taught courses on heat transfer and numerical methods and conducted research on computer graphics image synthesis. She left in 1991 to join the National Institute of Standards and Technology, where she focused on scientific data visualization. She continued to investigate problems in data visualization as a staff member at the IBM Thomas J. Watson Research Center from 1996 to 2004. She then assumed her current position as professor of computer science at Yale University, where she served as chair of the department from 2011 to 2014. With Julie Dorsey, she leads the computer graphics laboratory at Yale.
Work
Rushmeier is particularly interested in scanning and modeling shape and appearance, as well as the applications of computer graphics in cultural heritage. At IBM, she worked on the project to create a 3D model of Michelangelo's Florence Pietà, as well as the Eternal Egypt collaboration between IBM and the government of Egypt to build a digital showcase of the country's cultural artifacts. Rushmeier is also noted for her work on global illumination, material capture, and the display of high-dynamic-range images. Her contributions to the field of computer graphics include the development of methods for solving for illumination in the presence of participating media (i.e. environments such as fog and murky water that affect the light passing through them) and the extension of the radiosity method to handle specular BRDFs.
She has served in numerous editorial and technical capacities, including editor-in-chief of ACM Transactions on Graphics from 1996 to 1999, editor of IEEE Transactions on Visualization and Computer Graphics from 1996 to 1998, and co-editor-in-chief of Computer Graphics Forum from 2010 to 2014. She was chair of the papers committee for ACM SIGGRAPH in 1996 and co-chair of the IEEE Visualization papers committee in 1998, 2004, and 2005. She is an ACM Distinguished Engineer, a 2016 Fellow of the ACM, a 2011 Fellow of the Eurographics Association, and the recipient of the 2013 ACM SIGGRAPH Computer Graphics Achievement Award.
Selected publications
References
External links
Rushmeier's home page at the Yale Computer Graphics Laboratory
Rushmeier's biography on the Eurographics website
Rushmeier's profile in the Connecticut Academy of Science and Engineering
Rushmeier's profile at ACM
Living people American computer scientists American women computer scientists Computer graphics researchers Cornell University alumni Yale University faculty Year of birth missing (living people) Fellows of the Association for Computing Machinery American women academics 21st-century American women
18553798
https://en.wikipedia.org/wiki/CODESYS
CODESYS
Codesys (usually stylized as CODESYS, a portmanteau for controller development system, previously stylized CoDeSys) is a development environment for programming controller applications according to the international industrial standard IEC 61131-3. The main product of the software suite is the CODESYS Development System, an IEC 61131-3 tool.
Introduction
CODESYS is developed and marketed by the German software company CODESYS GmbH, located in the Bavarian town of Kempten. The company was founded in 1994 under the name 3S-Smart Software Solutions and was renamed in two steps, in 2018 and 2020. Version 1.0 of CODESYS was released in 1994. Licenses of the CODESYS Development System are free of charge and can legally be installed without copy protection on further workstations.
The software suite covers different aspects of industrial automation technology within a single user interface. The tool is independent of device manufacturers and is thus used for hundreds of different controllers: PLCs (programmable logic controllers), PACs (programmable automation controllers), ECUs (electronic control units), controllers for building automation, and other programmable controllers, mostly for industrial purposes.
Integrated use cases
The tool covers different aspects of industrial automation:
Engineering
The five programming languages for application programming defined in IEC 61131-3 are available in the CODESYS development environment:
IL (instruction list), an assembler-like programming language (now deprecated, but available for backward compatibility)
ST (structured text), similar to programming in Pascal or C
LD (ladder diagram), which enables the programmer to virtually combine relay contacts and coils
FBD (function block diagram), which enables the user to rapidly program both Boolean and analogue expressions
SFC (sequential function chart), convenient for programming sequential processes and flows
An additional graphical editor is available in CODESYS: CFC (continuous function chart), a sort of freehand FBD editor. Unlike in the network-oriented FBD editor, where the connections between inputs, operators and outputs are set automatically, they have to be drawn by the programmer. All boxes can be placed freely, which makes it possible to program feedback loops without interim variables.
Integrated compilers transform the application code created by CODESYS into native machine code (binary code), which is then downloaded onto the controller. The most important 16-, 32- and 64-bit CPU families are supported, such as TriCore, 80x86/iX, ARM/Cortex, PowerPC, SH, MIPS, Blackfin and more. Once CODESYS is connected with the controller, it offers extensive debugging functionality, such as monitoring, writing and forcing of variables, setting breakpoints, performing single steps, and recording variable values online on the controller in a ring buffer (sampling trace), as well as core dumps during exceptions.
CODESYS V3.x is based on the so-called CODESYS Automation Platform, an automation framework that device manufacturers can extend with their own plug-in modules.
The CODESYS Professional Developer Edition offers the option to add components to the tool which are subject to licensing, e.g. integrated UML support, a connection to the Apache Subversion version control system, online runtime performance analysis ("Profiler"), static code analysis of the application code, or script-based automated test execution. A git plugin is planned for summer 2021.
The CODESYS Application Composer serves to create applications by using existing modules.
The user composes, parameterizes, and connects the required modules to form a complete application. This configuration does not require knowledge of PLC programming and can therefore be done by technicians without programming experience. Internal generators create complete, well-structured IEC 61131-3 applications, including the I/O mapping and visualizations. The Application Composer requires a license to develop and to compose modules. Furthermore, freely usable modules are available within the system (e.g. Persistence Manager, Device Diagnosis) which can be used without a license.
Runtime
Once the CODESYS Control runtime system has been implemented on them, intelligent devices can be programmed with CODESYS. A paid toolkit provides this runtime system as source and object code; it can be ported to different platforms. Since the beginning of 2014 there has also been a runtime version for the Raspberry Pi, although it does not guarantee hard real-time characteristics. The Raspberry Pi interfaces, such as I²C, SPI and 1-Wire, are supported in addition to the Ethernet-based fieldbuses. Furthermore, SoftPLC systems under Windows and Linux are available, which turn industrial PCs and other standard device platforms from different manufacturers, such as Janztec, WAGO, Siemens or Phoenix Contact, into CODESYS-compatible controllers.
Fieldbus technology
Different fieldbuses can be used directly in the CODESYS programming system. For this purpose, the tool integrates configurators for the most common systems, such as PROFIBUS, CANopen, EtherCAT, PROFINET and EtherNet/IP. For most of the mentioned systems, protocol stacks are available in the form of CODESYS libraries which can subsequently be loaded onto the supported devices. In addition, the platform optionally supports application-specific communication protocols, such as BACnet or KNX for building automation.
Communication
For the exchange of data with other devices in control networks, CODESYS can seamlessly integrate and use communication protocols. These include proprietary protocols; protocols standardized in automation technology, such as OPC and OPC UA; standard protocols for serial and Ethernet interfaces; and standard web-technology protocols, such as MQTT or HTTPS. The latter are also offered in the form of encapsulated libraries for simplified access to the public clouds of AWS and Microsoft (Azure).
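As an illustration of this connectivity, variables published by a controller's OPC UA server can be read with any generic client library. A minimal sketch using the third-party Python package opcua (python-opcua); the address and node ID below are hypothetical, and the exact node path depends on the application's symbol configuration:

    from opcua import Client  # third-party package "opcua" (python-opcua)

    client = Client("opc.tcp://192.168.0.10:4840")  # hypothetical controller address
    client.connect()
    try:
        # Hypothetical node ID of a variable exported by the PLC application.
        node = client.get_node("ns=4;s=|var|PLC.Application.GVL.Counter")
        print(node.get_value())
    finally:
        client.disconnect()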
Safety
To reach the safety integrity level (SIL) required after a risk analysis, all system components have to comply with this level. Pre-certified software components within CODESYS make it much easier for device manufacturers to have their controllers certified for SIL2 or SIL3 according to IEC 61508. CODESYS Safety therefore consists of components in both the programming system and the runtime system, while the project planning is completely integrated in the IEC 61131-3 programming environment. Users of control technology use the safety functions with devices that have already implemented CODESYS Safety. In addition, an add-on product is available with which the certified EtherCAT safety terminals from Beckhoff can be configured within the CODESYS Development System.

Automation Server
For the administration of compatible devices, an Industry 4.0 platform is available which allows, for example, the storage of projects in source and binary form via web browser and their download to connected devices. The platform is hosted in a public cloud. The communication between the cloud and the controllers is handled by a dedicated piece of software, the Edge Gateway, whose security features have been rated A+ by SSL Labs. This connection can thus be used to communicate securely with devices integrated in the Automation Server without the need for additional VPN tunnels or firewalls, e.g. for displaying web visualizations or for debugging and updating the application software on the device.

Additional sources of information and assistance
Since 2012, the manufacturer has been operating an online forum in which users can communicate with each other. In 2020 it was transferred to the Q&A platform "CODESYS Talk", which is also used as an open-source platform for the development of projects ("CODESYS Forge"). An Android app is available to simplify the use of the platform. With the CODESYS Store, the manufacturer operates an online shop in which additional options and products are offered. The majority of the product offerings are free sample projects that make it easier to try out features and supported technologies. Similar to an app store, users can search for and install the offered products and projects directly from the CODESYS Development System without leaving the platform.

Industrial usage
At least 400 device manufacturers from different industrial sectors offer intelligent automation devices with a CODESYS programming interface. These include devices from global players such as Schneider Electric, Beckhoff, WAGO or Festo, but also niche suppliers of industrial controllers. Consequently, more than 100,000 end users around the world, such as machine and plant builders, employ CODESYS for different automation tasks and applications. In the CODESYS Store alone, far more than 200,000 verified users are registered (as of 12/2021). Due to its wide distribution, CODESYS can be considered the market standard among device-independent programming tools according to IEC 61131-3. Numerous educational institutions (vocational schools, colleges, universities) around the world also use CODESYS in the training of control and automation technology.
Membership in organisations
PLCopen
OSADL
CAN in Automation
OPC Foundation
Profibus
SERCOS interface
EtherCAT
IO-Link
ODVA
The Open Group

See also
Integrated development environment
Process control
Programmable logic controller (PLC)
Software engineering

References

Bibliography
Gary L. Pratt (2021): The Book of CODESYS. Self-published.
Herbert Bernstein (2007): SPS-Workshop mit Programmierung nach IEC 61131 mit vielen praktischen Beispielen, mit 2 CD-ROM. VDE Verlag.
Birgit Vogel-Heuser (2008): Automation & Embedded Systems. Oldenbourg Industrieverlag.
Heinrich Lepers (2005): SPS-Programmierung nach IEC 61131-3 mit Beispielen für CoDeSys und STEP 7. Franzis Verlag.
Günter Wellenreuther, Dieter Zastrow (2007): Automatisieren mit SPS – Übersichten und Übungsaufgaben. Vieweg Verlag.
Norbert Becker (2006): Automatisierungstechnik. Vogel Buchverlag.
Igor Petrov (2007): Controller Programming: The Standard Languages and Most Important Development Tools. Solon Press. (Russian)
Marcos de Oliveira Fonseca et al. (2008): Aplicando a norma IEC 61131 na automação de processos. ISA América do Sul. (Portuguese)
Dag Håkon Hanssen (2008): Programmerbare Logiske Styringer – basert på IEC 61131-3. Tapir Akademisk Forlag. (Norwegian)
Jürgen Kaftan: Practical Examples with AC500 from ABB: 45 Exercises and Solutions Programmed with CoDeSys Software. IKH Didactic Systems.
Tom Mejer Antonsen: PLC Controls with Structured Text (ST): IEC 61131-3 and Best Practice ST Programming. (Also available in other languages.)

External links
CODESYS Talk (former CODESYS user forum)
CODESYS Forge (open-source projects)
http://www.oscat.de/ – open-source library for versions 2 and 3 of CODESYS
"OPC UA and IEC 61131-3" – ISA InTech article on the power of CODESYS, IEC 61131-3 and OPC UA

Industrial automation
Programmable logic controllers
2466815
https://en.wikipedia.org/wiki/Plug%20compatible
Plug compatible
Plug compatible refers to "hardware that is designed to perform exactly like another vendor's product." The term PCM was originally applied to manufacturers who made replacements for IBM peripherals; later it was also used to refer to IBM-compatible computers.

PCM and peripherals
Before the rise of the PCM peripheral industry, computing systems were configured either with peripherals designed and built by the CPU vendor, or with vendor-selected rebadged devices. The first examples of plug-compatible IBM subsystems were tape drives and controls offered by Telex beginning in 1965. Memorex in 1968 was the first to enter the IBM plug-compatible disk market, followed shortly thereafter by a number of suppliers such as CDC, Itel, and Storage Technology Corporation. The market was boosted in both directions by the world's largest user of computing equipment. Ultimately, plug-compatible products were offered for most peripherals and for system main memory.

PCM and computer systems
A plug-compatible machine is one that has been designed to be backward compatible with a prior machine. In particular, a new computer system that is plug-compatible has not only the same connectors and protocol interfaces to peripherals, but also binary-code compatibility: it runs the same software as the old system. A plug-compatible manufacturer, or PCM, is a company that makes such products.

One recurring theme in plug-compatible systems is the need to be bug compatible as well. That is, if the forerunner system had software or interface problems, then the successor must have (or simulate) the same problems; otherwise, the new system may generate unpredictable results, defeating the full-compatibility objective. It is therefore important for customers to understand the difference between a "bug" and a "feature", where the latter is defined as an intentional modification to the previous system (e.g. higher speed, lighter weight, smaller package, better operator controls).

PCM and IBM mainframes
The original example of a PCM mainframe was the Amdahl 470, which was plug-compatible with the IBM System/360 and System/370 and cost millions of dollars to develop. Similar systems were available from Comparex, Fujitsu, and Hitachi; not all were large systems. Most of these system vendors eventually left the PCM market. In late 1981 there were eight PCM companies, which collectively offered 36 IBM-compatible models.

Non-computer usage of the term
The term may also be used to define replacement criteria for other components available from multiple sources. For example, a plug-compatible cooling fan may need to have not only the same physical size and shape, but also similar capability, run from the same voltage, use similar power, attach with a standard electrical connector, and have similar mounting arrangements. Some non-conforming units may be re-packaged or modified to meet plug-compatible requirements, as where an adapter plate is provided for mounting, or a different tool and instructions are supplied for installation; such modifications would be reflected in the bill of materials for those components. Similar issues arise for computer system interfaces when competitors wish to offer an easy upgrade path. In general, plug-compatible systems are designed where industry or de facto standards have rigorously defined the environment, and where there is a large installed population of machines that can benefit from third-party enhancements. Plug compatible does not mean identical replacement.
However, nothing prevents a company from developing follow-on products that are backward compatible with its own earlier products.

See also
Bug compatibility
Clone (computing)
Computer compatibility
Drop-in replacement
Hercules (emulator)
Pin compatibility
Proprietary hardware
Vendor lock-in
Honeywell 200, chasing the IBM 1401 market
Xerox 530, chasing the IBM 1130 market

References

Classes of computers
Computer hardware
Interoperability
33646344
https://en.wikipedia.org/wiki/DynamicOps
DynamicOps
DynamicOps was an American private software company headquartered in Burlington, Massachusetts. Backed by Credit Suisse, Intel Capital, Sierra Ventures, and Next World Capital, DynamicOps developed cloud automation and management solutions designed to help enterprise IT organizations create scalable private, public, and desktop cloud services from their existing technology systems and processes. It was acquired by VMware in 2012.

History
DynamicOps' establishment is connected to Credit Suisse: its original software was developed inside Credit Suisse's Global Research and Development Group in 2005 to help the company address the operational and governance challenges of rolling out virtualization technology. In 2007, after having deployed and used the software to manage its virtual machines, Credit Suisse Ventures decided to fund a company based on the technology. They recruited Rich Krueger, who formed and led the new company. DynamicOps was incorporated on January 31, 2008, and publicly launched later that spring to further develop and market the product. Leslie Muller, who had led the development effort at Credit Suisse, became the co-founder and CTO of DynamicOps. The company raised a total of $27 million in venture funding from Credit Suisse, Intel Capital, Sierra Ventures, and Next World Capital. Additionally, DynamicOps had a multi-year licensing and distribution agreement with Dell, under which DynamicOps software was a component of Dell's Virtual Integrated System solution. In July 2012, DynamicOps was acquired by VMware for a price reported to be between $100 million and $150 million.

Technology and products
DynamicOps based its cloud offerings on Operations Virtualization, an abstraction layer between the multiple management systems that make up a cloud infrastructure and its consumers. It allows IT staff to apply management to the layers below without the layers above needing to know how or why. DynamicOps offered two products, the DynamicOps Cloud Suite and the DynamicOps Cloud Development Kit. The DynamicOps Cloud Suite is a cloud-enablement suite that includes the DynamicOps Cloud Automation Center, the DynamicOps Platform, and the DynamicOps Design Center. It combines automated IT service delivery with unified governance and control across servers and desktops, virtual and physical systems, and private and public cloud deployments, and includes a graphical editor for visually modifying activities and workflow logic. It also uses the Operations Virtualization technology to enable IT staff to centrally define cloud services and make them available across a global infrastructure. The DynamicOps Cloud Development Kit is a set of tools and documentation for developers who have deployed DynamicOps-generated cloud environments to define new cloud services.

See also
Amazon EC2
Cloud computing

References

External links
Official website

Financial services companies established in 2008
Cloud computing providers
Credit Suisse
Software companies based in Massachusetts
Virtualization software
Venture capital firms of the United States
Defunct financial services companies of the United States
Defunct companies based in Massachusetts
2008 establishments in Massachusetts
Software companies established in 2008
Financial services companies disestablished in 2012
Software companies disestablished in 2012
2012 mergers and acquisitions
9317619
https://en.wikipedia.org/wiki/Newgate%20novel
Newgate novel
The Newgate novels (or Old Bailey novels) were novels published in England from the late 1820s until the 1840s that glamorised the lives of the criminals they portrayed. Most drew their inspiration from the Newgate Calendar, a compilation of biographies of famous criminals published during the late 18th and early 19th centuries, and usually rearranged or embellished the original tale for melodramatic effect. The novels caused great controversy, and drew criticism in particular from the novelist William Makepeace Thackeray, who satirised them in several of his novels and attacked their authors openly.

Works
Among the earliest Newgate novels were Thomas Gaspey's Richmond (1827) and History of George Godfrey (1828), Edward Bulwer-Lytton's Paul Clifford (1830) and Eugene Aram (1832), and William Harrison Ainsworth's Rookwood (1834), which featured Dick Turpin. Charles Dickens' Oliver Twist (1837) is often also considered a Newgate novel. The genre reached its peak with Ainsworth's Jack Sheppard, published in 1839, a novel based on the life and exploits of Jack Sheppard, a thief and renowned escape artist who was hanged in 1724. Thackeray, a great opponent of the Newgate novel, reported that vendors sold "Jack Sheppard bags", filled with burglary tools, in the lobbies of the theatres where dramatisations of Ainsworth's story were playing, and that "one or two young gentlemen have already confessed how much they were indebted to Jack Sheppard who gave them ideas of pocket-picking and thieving [which] they never would have had but for the play". Thackeray's Catherine (1839) was intended as a satire of the Newgate novel, based on the life and execution of Catherine Hayes, one of the more gruesome cases in the Newgate Calendar: she conspired to murder her husband, who was dismembered, and she was burnt at the stake in 1726. The satirical nature of Thackeray's story was lost on many, and it is often characterised as a Newgate novel itself.

Decline
The 1840 murder of Lord William Russell by his valet, François Benjamin Courvoisier, proved so controversial that the Newgate novel came under severe criticism. Courvoisier was reported to have been inspired to the act by a dramatisation of Ainsworth's story. Although Courvoisier later denied that the play had influenced him, the furore surrounding his case led the Lord Chamberlain to ban the performance of plays based on Jack Sheppard's life, and sparked off a press campaign that attacked the writers of Newgate novels for irresponsible behaviour. Courvoisier's execution led to further controversy. It was one of the best-attended hangings of the era, and Thackeray and Dickens both witnessed it, Thackeray using it as the basis of his attack on capital punishment, "On Going to See a Man Hanged". His most vigorous attack in the piece was reserved for Dickens, specifically for Oliver Twist, which Thackeray regarded as glorifying the criminal characters it depicted. It was believed that the character of Fagin was based on the real pickpocket Ikey Solomon, but while Dickens did nothing to discourage this perceived connection, he was at pains not to glorify the criminals he created: Bill Sikes is without redeeming features, and Fagin seems pleasant only in comparison to the other grotesques Oliver meets as his story unfolds.
The Newgate novel was also attacked in the literary press, with Jack Sheppard described as "one of a class of bad books, got up for a bad public" in The Athenaeum, while Punch published a satirical "Literary Recipe" for a startling romance, which began "Take a small boy, charity, factory, carpenter's apprentice, or otherwise, as occasion may serve – stew him down in vice – garnish largely with oaths and flash songs – Boil him in a cauldron of crime and improbabilities. Season equally with good and bad qualities ...". The attacks were enough to make Ainsworth and Lytton turn to other subjects; Dickens continued to use criminals as central characters in many of his stories. Among the last of the pure Newgate novels was T. P. Prest's 1847 story of love among criminals, Newgate: A Romance. The form melded into the sensation novels and early detective fiction of the 1850s and 1860s. The former included transgressions outside the purely criminal, such as Wilkie Collins's The Woman in White (1859); an early example of the latter is The Moonstone (1868), again by Collins. These were often serialised in a form that gave rise to the penny dreadful magazines.

Notes

References

Further reading

Literary genres
19th-century British literature
44624322
https://en.wikipedia.org/wiki/List%20of%20engineering%20colleges%20in%20Jammu%20and%20Kashmir
List of engineering colleges in Jammu and Kashmir
Engineering colleges in Jammu and Kashmir number more than ten. They are affiliated to state universities such as the University of Kashmir and the University of Jammu, along with other universities such as Baba Ghulam Shah Badshah University and the Islamic University of Science and Technology. The engineering colleges listed below are accredited by the All India Council for Technical Education (AICTE).

Srinagar district
National Institute of Technology, Srinagar
The National Institute of Technology, Srinagar is a national engineering institute located in Hazratbal, Srinagar. It is one of 30 NITs in India and operates under the control of the Ministry of Human Resource Development (MHRD). NITSRI was established in 1960 as a Regional Engineering College and moved to its present campus in 1965. As a Regional Engineering College, it was affiliated with the University of Kashmir; it was upgraded to become the National Institute of Technology, Srinagar in July 2003.

University of Kashmir (North Campus, South Campus and Zakura Campus, a.k.a. Institute of Technology)
North Campus is situated at Delina, Baramulla, and Zakura Campus in Hazratbal. Both are affiliated to the University of Kashmir for academic purposes, and both campuses are approved by the AICTE for their respective technical courses in engineering. North Campus offers a Bachelor of Technology (B.Tech) degree in Computer Science & Engineering, while Zakura Campus offers B.Tech degrees in Electronics and Communication Engineering, Civil Engineering, Electrical and Electronics Engineering, and Mechanical Engineering. The main campus offers courses leading to a Master of Technology (M.Tech) in Computer Science and Embedded Systems. Admission to B.Tech courses is through JEE Main, and to M.Tech programmes through GATE.

SSM College of Engineering
SSM College of Engineering is situated in the Parihaspora Pattan area of Baramulla district.

Ganderbal district
GCET Ganderbal Kashmir
GCET Ganderbal Kashmir is an engineering college in Safapora, Ganderbal.

Baramulla district
SSM College of Engineering and Technology
SSM College of Engineering and Technology is in Pattan, Baramulla, and is affiliated to the University of Kashmir for academic purposes. Its engineering disciplines are Civil Engineering, Electrical Engineering, Electronics & Communication Engineering, Mechanical Engineering, and Computer Science and Engineering.

Samba district
Bhargava College of Engineering and Technology
Bhargava College of Engineering and Technology is an engineering college located in Samba. The college is approved by the AICTE and affiliated to the University of Jammu. It offers a Bachelor of Engineering degree in Civil, Mechanical, Electrical, Electronics and Communication, and Computer Science. It was established in 2015.

Jammu district
Indian Institute of Technology, Jammu
The Indian Institute of Technology Jammu (abbreviated IIT Jammu) is a public research university located in Jammu. The institute opened in 2016, when a Memorandum of Understanding between the Department of Higher Education, J&K and the Department of Higher Education, MHRD was signed. It offers five B.Tech programs: Computer Science, Electrical, Mechanical, Civil, and Chemical, enrolling 30 students in each. Admission to these programs is through JEE Advanced.

Model Institute of Engineering and Technology, Jammu
Model Institute of Engineering and Technology is a technical institute located in Bantalab, Jammu.
It offers undergraduate degrees in engineering trades and a master's degree in computer engineering.

MBS College of Engineering & Technology, Digaina
The college is affiliated to the University of Jammu for academic purposes. Engineering degrees include Applied Electronics and Instrumentation, Computer, Electrical, Electronics and Communication, Information Technology, and Mechanical.

Government College of Engineering and Technology, Jammu
The Government College of Engineering and Technology, Jammu (GCET, Jammu) is an engineering institute located in Jammu. It offers Bachelor of Engineering (BE) degrees in Computer Science, Electronics and Communication, Mechanical, Civil, and Electrical. Established in 1994, it is the first engineering college in the Jammu region.

Yogananda College of Engineering and Technology
Yogananda College of Engineering and Technology is located in Muthi. The college offers engineering courses in Electrical, Civil, Computer Science, Mechanical, and Information Technology.

Reasi district
College of Engineering, Shri Mata Vaishno Devi University, Katra
Shri Mata Vaishno Devi University is located in Katra. It provides engineering degrees in Computer Science, Electronics and Communication, Mechanical, Biotechnology, Architecture & Landscape Design, and Energy Management.

Rajouri district
College of Engineering and Technology, Baba Ghulam Shah Badshah University, Rajouri
Baba Ghulam Shah Badshah University is located in Rajouri. It came into existence through an Act of the Jammu and Kashmir Legislative Assembly in 2002. The engineering college offers B.Tech degrees in Civil, Computer Science, Electrical and Renewable Energy, Electronics and Communication, and Information Technology.

See also
SSM College of Engineering
List of colleges affiliated to Kashmir University, Kashmir
List of colleges affiliated to Jammu University, Jammu

References

Engineering colleges
Jammu and Kashmir