https://en.wikipedia.org/wiki/Edmund%20Berkeley
Edmund Berkeley
Edmund Callis Berkeley (February 22, 1909 – March 7, 1988) was an American computer scientist who co-founded the Association for Computing Machinery (ACM) in 1947. His 1949 book Giant Brains, or Machines That Think popularized cognitive images of early computers. He was also a social activist who worked to achieve conditions that might minimize the threat of nuclear war. Biography Berkeley attended St. Bernard's School and Phillips Exeter Academy. He received a BA in Mathematics and Logic from Harvard in 1930. He pursued a career as an insurance actuary at Prudential Insurance from 1934–48, except for service in the United States Navy during World War II. Berkeley saw George Stibitz's calculator at Bell Laboratories in 1939, and the Harvard Mark I in 1942. In November, 1946 he drafted a specification for "Sequence Controlled Calculators for the Prudential", which led to signing a contract with the Eckert-Mauchly Computer Corporation in 1947 for one of the first UNIVAC computers. Berkeley left Prudential in 1948 to become an independent consultant when the company forbade him to work on projects related to avoiding nuclear war, even on his own time. He sometimes wrote using the pseudonym "Neil D. MacDonald". He became famous in 1949 with the publication of his book Giant Brains, or Machines That Think in which he described the principles behind computing machines (called then "mechanical brains", "sequence-controlled calculators", or various other terms), and then gave a technical but accessible survey of the most prominent examples of the time, including machines from MIT, Harvard, the Moore School, Bell Laboratories, and elsewhere. In Giant Brains, Berkeley also outlined a device which some have described as the first "personal computer", Simon. Plans on how to build this computer were published in the journal Radio Electronics in 1950 and 1951. Simon used relay logic and cost about $600 to construct. 
The first working model was built at Columbia University with the help of two graduate students. Berkeley founded, published and edited Computers and Automation, the first computer magazine. He also created the Geniac and Brainiac toy computers. In 1958 Berkeley joined the Committee for a SANE Nuclear Policy (SANE). Computer Art On the title page of the January 1963 issue of Computers and Automation, Berkeley published a 1962 picture by Efraim Arazi as "computer art". This picture inspired him to initiate the first Computer Art Contest in 1963. Berkeley coined the term "computer art", and the annual contest was a key event in the development of the field up to 1973. In this way Berkeley became a pioneer of computer art. Books Giant Brains, or Machines That Think (1949), Wiley & Sons Computers: Their Operation and Applications (1956), New York: Reinhold Publishing Symbolic Logic and Intelligent Machines (1959), New York: Reinhold Publishing Probability and Statistics: An Introduction through Experiments (1961), Science Materials Center The Computer Revolution (1962), Doubleday The Programming Language LISP: Its Operation and Applications (1964) A Guide to Mathematics for the Intelligent Nonmathematician (1966), Simon and Schuster Computer-assisted Explanation: A Guide to Explaining: and some ways of using a computer to assist in clear explanation (1967), Information International Ride the East Wind; Parables of Yesterday and Today (1973), Quadrangle The Computer Book of Lists and First Computer Almanack (1984), Reston Publishing Notes External links Edmund C. Berkeley Papers, Charles Babbage Institute, University of Minnesota. Obituary in Communications of the ACM (1988) (access restricted) Berkeley timeline Retrieved April 10, 2007 Computers and Automation archive issues 1954 to 1978 1909 births 1988 deaths American actuaries American computer scientists Prudential Financial people St. Bernard's School alumni Phillips Exeter Academy alumni Harvard College alumni
https://en.wikipedia.org/wiki/Florida%20Atlantic%20University%20College%20of%20Engineering%20and%20Computer%20Science
Florida Atlantic University College of Engineering and Computer Science
The College of Engineering and Computer Science is an academic college of Florida Atlantic University located in Boca Raton, Florida, United States. The college's mission is " to educate those who will contribute to the advancement of technical knowledge and who will be the leaders of tomorrow, conducts basic and applied research in engineering, computer science, and related interdisciplinary areas, and provide service to the engineering and computer science professions, to the State of Florida, to the nation, and to the community at large." Departments The College of Engineering and Computer Science is divided into the following departments: Civil Engineering Computer Science and Engineering Electrical Engineering Mechanical Engineering Ocean Engineering Research Florida Atlantic was the first university in the country to offer an undergraduate degree in ocean engineering in 1964. The first class numbering 35 graduated in 1967. The program was created in response to the loss of the Navy's submarine USS Thresher off the coast of Massachusetts. The sub and its crew were lost after a test dive and found in 8,400 feet of water, far below the sub's crush depth. Concerned about underwater equipment designed by engineers with no marine experience, FAU and the Navy established a program that would eventually draw students from around the globe and be recognized in the 1996 Guinness Book of World Records for "the fastest speed attained by a human-powered propeller submarine." Other events During the Spring semester of each year the College of Engineering holds Engineering Week. The week features events centered around a "Brain Bowl" competition between the college's departments. The 2007 theme was Mardi Gras, and featured flamenco dancing. References External links College of Engineering and Computer Science Florida Atlantic University Official Website Florida Atlantic University Computer science departments in the United States
https://en.wikipedia.org/wiki/Fakesysdef
Fakesysdef
Trojan:Win32/FakeSysdef, originally distributed as an application called "HDD Defragmenter", hence the name "FakeSysdef" or "Fake System Defragmenter", is a Trojan targeting the Microsoft Windows operating system that was first documented in late 2010. Win32/FakeSysdef manifests as one or more of an array of programs that purport to scan one's computer for hardware failures related to system memory, hard drives and system functionality as a whole. They scan the computer, show false hardware issues, and present a remedy to defragment the hard drives and fine-tune system performance. They then request the user to make a payment in order to activate the program, supposedly so the user can download updates and repair the hardware issues. The fictitious scanning program detects one or more of the most widespread varieties of risks prevalent on the internet today. Every day, numerous fake antivirus and security applications are published and released to unsuspecting end users via a large assortment of distribution channels. Such software often turns out to be clones of each other: developed from the same code base but packaged with a unique title and design through the use of a "skin". The branding strategy may look legitimate to computer users, as the names are usually a combination of technical words such as "HDD", "Disk" and "Memory" and action words such as "Scanner", "Defragmenter", "Diagnostics", "Repair" and "Fix". Operation Users may encounter this kind of threat when they visit websites that attempt to convince them to remove non-existent malware or security risks from their computers by installing the bogus software. The Trojan can also be installed by other malware, drive-by downloads, and when downloading and installing other software. Users may be directed to these sites by way of the following methods: spam emails that contain links or attachments; pornography sites; blogs and forums that are spammed with links to adult videos; user-generated content spam (e.g. fake videos); malicious banner advertisements; unauthorized software ("warez"); search engine optimization (SEO) poisoning; fake torrents or other files on shared networks; and web pages containing exploits. These programs intentionally misrepresent the security status of a computer by continually presenting fake scan dialog boxes and alert messages that prompt the user to buy the product. The programs often have an icon in the notification area of the operating system desktop and constantly display pop-up messages alerting the user about fake security issues such as virus infections. These pop-up windows only disappear once the user has purchased the product and the non-existent threats have supposedly been removed from the compromised computer. If the user decides to purchase the product, they are presented with a form within the application or are redirected to a website that requests credit card information. Initial infection The Win32/FakeSysdef installer may arrive on the computer under various file names. When run, the installer drops and injects a DLL file (or sometimes an EXE file) into common processes, for example "EXPLORER.EXE", "WINLOGON.EXE", and "WININET.EXE". In some instances, the main executable drops both DLL and EXE components. In this case, the EXE is set to run at every Windows restart and the DLL is injected into "EXPLORER.EXE" by the EXE component. To ensure that it automatically runs every time Windows starts, it drops a copy of itself or its EXE component, using a random file name, into the %APPDATA% folder.
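The branding pattern described above (a technical word plus an action word) can be illustrated with a few lines of Python. The word lists are taken directly from the article's examples; the generated combinations merely illustrate the combinatorics of the "skin" naming strategy, not a list of real product names.

```python
from itertools import product

# Word lists from the article's examples of the fake-utility branding pattern.
technical = ["HDD", "Disk", "Memory"]
actions = ["Scanner", "Defragmenter", "Diagnostics", "Repair", "Fix"]

# Every pairing yields a plausible-sounding utility name.
names = [f"{t} {a}" for t, a in product(technical, actions)]
print(len(names))         # 15 two-word combinations from just 8 words
print("HDD Defragmenter" in names)  # the trojan's original guise is among them
```

Even these short lists produce fifteen distinct names, which is why clone families built from one code base can keep reappearing under fresh titles.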
Win32/FakeSysdef may make widespread changes to the system, including: modifying several Internet Explorer settings, enabling the submission of non-encrypted form data, changing the desktop wallpaper, displaying or hiding all shortcuts, hiding desktop and start menu links, disabling Windows Task Manager, disabling signature checks on downloaded programs, and setting low-risk file types. Additionally, some Win32/FakeSysdef variants may terminate running processes during installation and may block launched applications after the computer restarts. During the installation process, they may terminate all running processes and force the computer to restart. After the restart, FakeSysdef attempts to block every launched program, and may then display fake error messages offering to fix the problem. It then repeatedly restarts the computer until the user agrees to buy the fake software, and then overwrites data on the hard drive. Symptoms Win32/FakeSysdef displays numerous false alerts indicating system errors while giving the appearance of scanning the hard disk and defragmenting it, then prompts the user, with a "Fix Errors" button, to buy and activate the program to fix the discovered errors. When the "Fix Errors" button is selected, FakeSysdef pretends to scan and defragment the hard disk. It then displays more fake error messages and tells the user that they need to purchase an "Advanced Module" for the fix. If the user chooses to do so, a custom web browser opens in which the user can enter card information to buy the software. Removal and detection Anti-virus software makers responded to the threat of FakeSysdef by adding checks for it to their products. Simple removal of the software is sometimes not enough to reverse the damage to the configuration files that FakeSysdef is known to edit. References Windows trojans Social engineering (computer security) Scareware Web security exploits
https://en.wikipedia.org/wiki/Jerzy%20Respondek
Jerzy Respondek
Jerzy Respondek (born 1977 in Ruda Śląska, Poland) is a Polish computer scientist and mathematician, professor at the Silesian University of Technology, Gliwice. His research interests cover numerical methods and mathematical control theory. Respondek is best known for his work on special matrices and their applications in control theory. Life and career In 2001 he graduated from the Silesian University of Technology with two MSc degrees: in computer science and in mathematical control theory. In 2003 he obtained a PhD in computer science with the dissertation "Numerical aspects of differential operators spectral theory", under the supervision of Jerzy Klamka, a noted Polish mathematician. In 2016 he obtained a DSc from Poznan University of Technology. Respondek has lectured at numerous universities, including the mathematics department of the University of Pisa (Italy), the computer science departments of the universities of Valencia (Spain), Nuremberg (Germany) and Alcala (Spain), and the Department of Computer Science of the University of Manchester (UK), Alan Turing's home department. Respondek serves on the scientific committees of conferences in mathematics and computer science, and has delivered plenary lectures at conferences such as the International Conference on Computational Science and its Applications (2015) and the European Simulation and Modelling Conference (2014, 2020). Other activities In 2008–17 he served on the editorial board of the journal Mathematics and Computers in Simulation, the main journal of the IMACS organization. Since 2020 he has served on the editorial board of the International Journal of Systems Science. In 2008 he received a stipend for researchers funded by the Polish weekly Polityka.
In 2012–13 Respondek belonged to one of the main advisory groups of the Polish Ministry of Science, and from 2014 to 2016 he worked in the ministry's science-popularization advisory group. As a delegate of these two groups he participated in the proceedings of the National Parliamentary Commission of Education, Science and Youth in Warsaw. In 2007–08 he was a member of the Forecast Committee of the Polish Academy of Sciences in Warsaw, a specialized national think tank cooperating with the Club of Rome; his work in that group pertained mainly to the social and economic aspects of computer science. Respondek co-organized a meeting (18 April 2013, Warsaw) with the Nobel laureate Robert Huber, the German biochemist awarded the 1988 Nobel Prize in Chemistry. In 1996 he won the voivodeship-level round of the Polish edition of the International Physics Olympiad. Since 2016 he has divided his time between Poland and Brussels, where he works at the European Research Executive Agency (REA) as an expert for ERA Chairs, a European programme within the H2020 framework designed to support European universities in hiring outstanding scientists. Selected publications 2005. "Controllability of dynamical systems with constraints". Systems and Control Letters; 54 (4), pp. 293–314. 2011. "On the confluent Vandermonde matrix calculation algorithm". Applied Mathematics Letters; 24 (2), pp. 103–106. 2011. "Numerical recipes for the high efficient inverse of the confluent Vandermonde matrices". Applied Mathematics and Computation; 218 (5), pp. 2044–2054. 2016. "Incremental numerical recipes for the high efficient inversion of the confluent Vandermonde matrices". Computers and Mathematics with Applications; 71 (2), pp. 489–502. References Polish computer scientists 1977 births Living people
https://en.wikipedia.org/wiki/Nir%20Shavit
Nir Shavit
Nir Shavit () is an Israeli computer scientist. He is a professor in the Computer Science Department at Tel Aviv University and a professor of electrical engineering and computer science at the Massachusetts Institute of Technology. Nir Shavit received B.Sc. and M.Sc. degrees in computer science from the Technion - Israel Institute of Technology in 1984 and 1986, and a Ph.D. in computer science from the Hebrew University of Jerusalem in 1990. Shavit is a co-author of the book The Art of Multiprocessor Programming, is a winner of the 2004 Gödel Prize in theoretical computer science for his work on applying tools from algebraic topology to model shared memory computability, and a winner of the 2012 Dijkstra Prize for the introduction and first implementation of software transactional memory. He is a past program chair of the ACM Symposium on Principles of Distributed Computing (PODC) and the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA). His research covers techniques for designing, implementing, and reasoning about multiprocessors, and in particular the design of concurrent data structures for multi-core machines. Recognition 2004 Gödel prize 2012 Dijkstra Prize 2013 Fellow of the Association for Computing Machinery References External links cs.tau.ac.il csail.mit.edu Living people Dijkstra Prize laureates Electrical engineering academics Fellows of the Association for Computing Machinery Gödel Prize laureates Hebrew University of Jerusalem School of Computer Science & Engineering alumni Israeli computer scientists Israeli Jews MIT School of Engineering faculty Researchers in distributed computing Technion – Israel Institute of Technology alumni Year of birth missing (living people)
https://en.wikipedia.org/wiki/Louisa%20Mark
Louisa Mark
Louisa Lynthia Mark, also known as "Markswoman" (11 January 1960 – 17 October 2009), was a British lovers rock singer, best known for her work between the mid-1970s and early 1980s. Her 1975 single "Caught You in a Lie" is regarded as the first lovers rock single. Biography Mark was born in Kensal Rise, London, to Grenadian immigrant parents, and grew up in Shepherd's Bush. She was introduced to the music business as a guest vocalist on Dennis Bovell's Sufferer sound system, followed by a residency at the Metro club in Westbourne Park, and via "Star Search" talent contests held at the Four Aces club in Dalston, where she won for ten consecutive weeks. Sound-system operator and record producer Lloyd Coxsone provided dub plates for the contestants to sing over at the contests and, in late 1974, gave the fifteen-year-old Mark her first recording session, at Gooseberry Studios, where she recorded a cover version of Robert Parker's "Caught You in a Lie", backed by Matumbi; the single was also released in Jamaica by Gussie Clarke. "Caught You in a Lie" is considered the first lovers rock single. It gave her an instant hit with reggae audiences, and was followed by a version of The Beatles' "All My Loving". Her career was interrupted after a dispute with Coxsone, and she concentrated on finishing her studies. After leaving school, Mark resumed her musical career working with Trojan Records house producer and A&R manager Clement Bushay and songwriter/arranger Joseph "Tunga" Charles (of Zabandis), releasing "Keep It Like It Is". She stayed with Bushay for further releases on his own Bushays label, including her rendition of Michael Jackson's "Even Though You're Gone", "Six Sixth Street", and her début album Breakout (1981). She was unhappy with the album, feeling that it had been released before it had been properly finished, and did not record again for over a year.
Mark returned to the studio in 1982, recording "Mum and Dad" (arranged by Sly & Robbie). She was voted Artist of the Year in the 1978 Reggae Awards (UK). Death On the 18 October 2009 edition of his BBC London radio show, Dotun Adebayo reported that Mark had died of poisoning in Gambia, where she had been residing. On 20 October 2009, Trojan Records confirmed the story, stating that the cause of death was a stomach ulcer. Discography Albums Breakout (1981), Bushays Singles "Caught You in a Lie" (1975), Safari - 7" "All My Loving" (1975), Safari - 7" "Even Though You're Gone" (1978), Bushays - 12" "Six Sixth Street" (1978), Bushays "Caught You in a Lie" (1979), Voyage International - 12", B-side by Clinton Grant "People in Love" (1980), Radic - 12" "Mum and Dad" (1982), Bushays - 12" "All My Loving" (1984), Voyage International - 7" "Caught You in a Lie" (1984), Code - 12" "Hello There" (1984), Oak Sound - 12", Louisa Mark & Zabandis "Keep It Like It Is" (1986), Trojan - 7"/12" "Reunited" b/w "Reunited Stepping Out" with Kevin & The Bushrangers, Bushays, BFM 113, 12" "Foolish Fool", Sky Note, 12" References 1960 births 2009 deaths People from Kensal Green Lovers rock musicians 20th-century Black British women singers British reggae musicians
https://en.wikipedia.org/wiki/Rube%20DeGroff
Rube DeGroff
Arthur Sleight "Rube" DeGroff (September 2, 1879 – December 17, 1955) was a professional baseball outfielder from 1903 to 1916. He played two seasons in Major League Baseball for the St. Louis Cardinals. DeGroff was 5 feet, 11 inches tall and weighed 190 pounds. Career DeGroff was born in Hyde Park, New York, in 1879. He started his professional baseball career in 1903 with the Hudson River League's Kingston Colonials. In 1905, he joined the New York State League's Troy Trojans and had a batting average of .315. DeGroff made his major league debut with the St. Louis Cardinals in September of that year, and in 15 games, he batted .250. He also appeared in one game for the Cardinals in 1906 before going back to the Trojans. That was the last time he played in the majors. In 1906 and 1907, DeGroff batted .314 for Troy. He led the league in hits during both of those seasons. DeGroff then went to the Eastern League for one year, batted .255, and returned to the New York State League in 1909 where he led all players with 10 home runs. DeGroff hit under .250 in 1910 and 1911. Upon joining the New England League's Lowell Grays in 1912, however, he had one of his best years at the plate. He batted .348, setting his career-high in that category, and led the league in hits, doubles, triples, home runs, slugging percentage, and total bases. In 1913, his batting average went down to .299, but he paced the circuit in home runs again. DeGroff played one more season for Lowell and then two in the New York State League before his professional baseball career ended. He later managed a team in Hyde Park called the Robin Hoods. In 1936, US president Franklin D. Roosevelt (who was also born in Hyde Park) attended a Robin Hoods game and told the crowd that he and DeGroff used to play on the same baseball team. DeGroff died in Poughkeepsie, New York, in 1955. References External links 1879 births 1955 deaths Major League Baseball outfielders St. Louis Cardinals players Kingston Colonials players Rochester Bronchos players Troy Trojans (minor league) players Jersey City Skeeters players Wilkes-Barre Barons (baseball) players Milwaukee Brewers (minor league) players Zanesville Potters players Lowell Grays players Baseball players from New York (state) People from Hyde Park, New York Road incident deaths in New York (state)
https://en.wikipedia.org/wiki/Multiseat%20configuration
Multiseat configuration
A multiseat, multi-station or multiterminal system is a single computer which supports multiple independent local users at the same time. A "seat" consists of all the hardware devices assigned to a specific workplace at which one user sits and interacts with the computer. It consists of at least one graphics device (a graphics card, or just an output such as an HDMI, VGA or DisplayPort port, and the attached monitor or video projector) for output, and a keyboard and a mouse for input. It can also include video cameras, sound cards and more. Motivation Since the 1960s computers have been shared between users. Especially in the early days of computing, when computers were extremely expensive, the usual paradigm was a central mainframe computer connected to numerous terminals. With the advent of personal computing this paradigm has been largely replaced by personal computers (one computer per user). Multiseat setups are a return to the multiuser paradigm, but based around a PC which supports a number of zero clients, usually consisting of one terminal per user (screen, keyboard, mouse). In some situations multiseat setups are cost-effective because it is not necessary to buy separate motherboards, microprocessors, RAM, hard disks and other components for each user. For example, buying one high-speed CPU usually costs less than buying several slower CPUs. History In the 1970s, it was commonplace to connect multiple computer terminals, even graphical terminals, to a single mainframe computer. Early terminals were connected with RS-232 type serial connections, either directly or through modems. With the advent of Internet Protocol based networking, it became possible for multiple users to log into a host using telnet or, for a graphic environment, an X Window System "server". These systems would retain a physically secure "root console" for system administration and direct access to the host machine.
Support for multiple consoles in a PC running the X interface was implemented in 2001 by Miguel Freitas, using the Linux operating system and the X11 graphical system (at the time maintained by XFree86). This was done using a patch to the display server that executed several instances of X at the same time, such that each one captured specific mouse and keyboard events and rendered its own graphical content. This method received the name multiseat or multiterminal. In 2001, ThinSoft's BeTwin offered a multiseat solution for Windows, utilizing multiple graphics cards and peripherals attached to a single host PC. In 2002 a Canadian company, Userful Corporation, released Userful Multiplier, a multiseat Linux software solution that enables up to 10 users to simultaneously share one computer. The company had earlier worked on a kernel-based approach to a multi-station platform computer, but abandoned the idea due to a problem with multiple video card support. Other solutions appeared in 2003: Svetoslav Slavtchev, Aivils Stoss and James Simmons worked on the evdev and faketty approaches, modifying the Linux kernel to let more than one user independently use the same machine. At that time, the Linux Console Project also proposed the idea of using multiple independent consoles, and then multiple independent keyboards and mice, in a project called "Backstreet Ruby". Backstreet Ruby was a kernel patch for the Linux kernel, a back-port to Linux 2.4 of the Ruby kernel tree. The aim of the Linux Console developers was to enhance and reorganize the input, console and framebuffer subsystems in the Linux kernel so that they could work independently of each other and allow multi-desktop operation. The Backstreet Ruby idea was never finished. In 2005, the C3SL team (Center for Scientific Computing and Free Software), from the Federal University of Paraná in Brazil, created a solution based on nested display servers, such as Xnest and Xephyr.
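The nested-server approach can be sketched as a launch sequence: a host X server drives all the screens, and one nested Xephyr server per seat grabs its own input devices. This is an illustrative fragment only; the device paths are placeholders, and the evdev input options shown were accepted by the older Xephyr builds used in these multiseat setups but have since been removed, so the exact syntax varies by version.

```shell
# Host X server assumed running on :0 spanning all monitors.
# One nested Xephyr per seat; each exclusively acquires its input devices.
# /dev/input/event* paths are placeholders -- they differ on every machine.
Xephyr :1 -screen 1280x1024 \
    -keybd evdev,,device=/dev/input/event3 \
    -mouse evdev,,device=/dev/input/event4 &
DISPLAY=:1 session-command &   # e.g. a display manager or desktop session for seat 1
```

A second seat would repeat the pattern on `:2` with its own event devices; tools such as MDM exist precisely to generate these per-seat invocations automatically.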
With this solution, each nested display server runs on a screen of a host display server (e.g. Xorg), and a modification to the nested servers lets each one exclusively acquire its own mouse and keyboard. In 2008, the C3SL group released the Multiseat Display Manager (MDM) to ease the process of installing and configuring a multiseat box. The group also produced a live CD for test purposes in 2008. In 2007, NComputing entered the market with a Windows-based multiseat product, the X-series or Xtenda system, which uses a PCI add-in card to connect terminal units containing video, keyboard, mouse and audio jacks, allowing 3 to 6 additional user seats to be added to a PC. The X-series also offered Linux compatibility. In 2010, Microsoft began offering Windows MultiPoint Server, allowing one machine to host multiple users utilizing separate graphics cards and peripherals. Automatic multiseat with USB docking stations is a feature of Fedora 17. Timeline of commercial multiseat software: 1990, Solbourne cg30 running SunOS; 1996–2005, Silicon Graphics InfiniteReality running Irix; 1996, ThinSoft BeTwin; 1999, Ibik ASTER; 2001, ThinSoft BeTwin; 2002, Userful Corporation; 2004, Open-Sense Solutions (Groovix); 2006, NComputing X-series; 2010, Windows MultiPoint Server; 2011, Black Box VirtuaCore; 2013, LISTEQ BoXedVDI. Requirements Hardware requirements Each user requires a monitor, keyboard and mouse connected to the host machine. For example, a four-head (four-user) system would require four monitors, four keyboards, four mice, and two dual-output or one quad-output video card. USB keyboards and mice are typically recommended instead of PS/2 connections, as they can be connected to a USB hub. Additional devices and peripherals such as cameras, flash storage drives, card readers and touch screens can also be assigned to each seat. An alternative to multiple physical video cards and connections is DisplayLink over USB.
Software requirements Linux The VT system in the Linux kernel dates back to 1993 and does not understand the concept of multiple "seats"; kmscon and systemd-consoled do. There are different solutions for setting up a multiseat system, and others are constantly being developed. The X.Org Foundation maintains a wiki page with the latest news concerning these solutions. The solutions currently highlighted on X.Org's wiki either use multiple Xephyr servers (with the deprecated evdev support) over a host Xorg server, or run several instances of Xorg using multiple video devices. It is quite easy to configure popular distributions such as Ubuntu to provide multiseat environments, as documented on the Ubuntu MultiseatX wiki page, or with apps that streamline the process. The Multiseat Display Manager (MDM) is one open source tool which helps to automate the process of installation and configuration. Users who want to try multiseat are encouraged to try such a tool and avoid the old, hard way of setting it up through terminals and howtos (evdev or Xephyr), as stated by the foundation's wiki page. On the other hand, MDM has seen few updates and releases beyond the initial announcement. Other open source tools that aim to simplify the creation of multiple seats include the Bicefalo wizard and EasySeats. Userful offers a commercially supported multiseat Linux solution called Userful Multiplier. It enables up to 10 users to simultaneously share one computer. It works with most graphics cards supported by X.Org/XFree86 as well as USB multiseat devices. It is available in 64-bit and 32-bit packages in both RPM and DEB formats, and has been tested on most major distributions, including Debian, Fedora, Mandriva Linux, SLED, SuSE and Ubuntu. A free two-user version of the Userful Multiplier software for personal or trial use is available from their website.
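On systemd-based distributions (the mechanism behind the Fedora 17 feature mentioned earlier), seats are managed by systemd-logind and can be configured from the command line with loginctl. The following fragment shows the general shape of such a setup; the sysfs device path is a placeholder and must be replaced with a real path taken from the machine's own seat-status output.

```
# Inspect the current seats and which devices belong to the default seat.
loginctl list-seats
loginctl seat-status seat0

# Create a second seat by attaching hardware to it. The sysfs path below is
# a placeholder -- copy a real path from the `loginctl seat-status seat0` output
# (e.g. a USB controller that a docking station with display, keyboard and
# mouse hangs off of).
loginctl attach seat1 /sys/devices/pci0000:00/0000:00:14.0/usb1/1-2

# Assignments persist as udev rules; this resets them all back to seat0.
loginctl flush-devices
```

Attaching a device that is a parent of other devices (such as a USB hub on a docking station) pulls the whole subtree into the new seat, which is what makes the plug-in-a-dock-and-get-a-seat behavior possible.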
Microsoft Windows For the Windows 2000, XP and Vista operating systems, there are several commercial products that implement multiseat configurations for two or more seats. An operating system designed specifically for multiseat setups, Windows MultiPoint Server, was announced on February 24, 2010. It uses Remote Desktop (Terminal Services) technologies in Windows Server 2008 R2 to provide multiseat functionality. This functionality was incorporated into Windows Server proper as of Windows Server 2016 in a new server role entitled MultiPoint Services, but this server role was removed in Windows Server 2019 owing to Microsoft ceasing development of the service in 2018. Virtualization-based setup Instead of relying on operating system support for multiseat configuration, a hypervisor can be configured to run multiple virtual machines, each configured to interface with one connected seat by I/O virtualization methods. Input devices can be attached to the virtual machines through USB redirection, and entire GPUs can be attached through Intel VT-d. The YouTube channel LinusTechTips has demonstrated virtualization-based 2-seat and 7-seat systems with UnRAID as the host operating system. Each seat has exclusive control of one of the Windows guest operating systems running on the host. There is a dedicated high-end graphics card for each guest, which the guest takes full advantage of via VT-d, making the system capable of hosting demanding video game sessions at full quality simultaneously on all seats. Case studies World's largest multiseat computer deployment In February 2009, the Brazilian Ministry of Education committed to deploying 350,000 Linux-based multiseat computing stations in more than 45,000 rural and urban schools across the country. The companies chosen to implement this project were the Canadian multiseat Linux software company Userful Corporation and its Brazilian IT partner ThinNetworks.
Paraná Digital project One successful multiterminal deployment is the Paraná Digital project, which is creating multiterminal laboratories in 2,000 public schools of the state of Paraná, Brazil. More than 1.5 million users will benefit from the 40,000 terminals when the project is finished. The laboratories use four-head multiterminals running Debian. The hardware costs 50% less than the normal price, and there is no software cost. The project is developed by C3SL (Center for Scientific Computing and Free Software). Michigan State University research in Tanzania Since 2008, electrical and computer engineering students from Michigan State University have installed multiterminal systems with internet access in three schools in Mto wa Mbu, Tanzania. The purpose of the project is to study the impact of having computer systems with internet access in an education system that cannot afford other educational resources such as books. The computer systems run 32-bit Ubuntu 8.04 and use the open source Multiseat Display Manager created by C3SL. The research will eventually be presented to government officials of developing countries in an effort to showcase the positive impact of cost-effective computing systems in schools. The project is sponsored by George and Vickie Rock and the Dow Chemical Company. Notable installations Userful announced a deployment of 356,800 Linux-based virtual desktops in Brazil (February 2009) NComputing provided 180,000 one-to-one computing seats for K–12 students in North Macedonia See also Computer multitasking evdev Dumb terminal Linux Terminal Server Project Mainframe Multi-monitor Multiseat desktop virtualization Multi-user NComputing Ndiyo Time-sharing Userful Black Box VirtuaCore Windows MultiPoint X Window System X.Org Server Xephyr Xnest Multi-Pointer X References System administration Operating system technology Computer systems X Window System
20819076
https://en.wikipedia.org/wiki/McIDAS
McIDAS
McIDAS, the "Man computer Interactive Data Access System", is a weather forecasting tool developed at the University of Wisconsin–Madison in the 1970s and used continually to this day. In its early incarnations, it was widely used to generate graphics for television stations, but today it is used primarily by NOAA and related agencies. Users of the McIDAS system developed a similar version for microcomputers, sold by ColorGraphics Weather Systems, which generated much of the computerized weather imagery seen on US television in the 1980s. History Applications Technology Satellite (ATS) In 1953 Verner Suomi measured the heat budget of a corn field for his doctoral thesis at the University of Chicago. For the rest of his professional career he worked in the field of remote measurement using radiometers, often working with Robert Parent. They developed a remote sensing radiometer with the intent of flying it into space and measuring the heat budget of the Earth. Their first attempt was fitted to Vanguard TV3, but this exploded on launch. A similar experiment flew on Explorer 7 in 1959. This experiment demonstrated the impact of cloud cover on the heat balance of the Earth. To further develop the field of satellite-based meteorology, NASA and National Science Foundation (NSF) grants led to the creation of the Space Science and Engineering Center (SSEC) at the University of Wisconsin–Madison. At the SSEC, Suomi and Parent developed the Spin Scan Cloudcover Camera (SSCC) to accurately measure and map cloud cover. The SSCC imaged a single strip of the Earth at a time, feeding its information directly to a radio for broadcast to the ground. Fixed to the body of a rotating satellite, the SSCC would build up a 2D image as the satellite spun and rotated in its orbit. The SSCC was launched on ATS-1 on 6 December 1966. On 5 November 1967 ATS-3 launched the Multicolor Spin Scan Cloudcover Camera, which provided the first color meteorological imaging. 
Data from these instruments was captured on real-time printouts, and required manual work to cut and paste the successive strips into a single image, and then into multiple time-lapse images. Although a number of advances were made while examining this data, the work was tedious and time-consuming. WINDCO In order to speed up the process of examining the data, Suomi started an internal competition to develop an automated solution. Two teams were set up, one developing an analog solution and another using software. The software solution, by Smith and Phillips, was able to demonstrate the ability to calculate wind speed and direction based solely on the images of the clouds. Based on this success, Suomi was able to gain additional funding from NASA and the NSF to develop a prototype all-computerized image processing system. Known as WINDCO, the system consisted of a video disk for storing imagery and a Raytheon 440 minicomputer controlling it. The computer was used to record the imagery from the satellites, buffering a single frame from the strips and then writing it out along with timing information. The user interacted with the resulting video to select points on the frames that represented the same point as it moved over time, the output of their selections being punched to paper tape. The paper tape was then read by the 440 and copied onto punched cards containing instructions for the UNIVAC 1108 mainframe, which converted them into a vector map overlaid on top of a map of the Earth. At a demonstration to NOAA, NASA and NSF on 12 April 1972, the system demonstrated the ability to generate 1000 wind vectors per hour. The attendees were impressed, but noted that the system was unable to correlate data from the satellites, which originated in a very specific format, with data being collected from other sensors, like automated weather stations. 
They encouraged the SSEC team to continue development, make the system even more automated, and include the ability to combine data from any source. McIDAS The biggest problem in developing a fully automated solution was finding a machine within their budget with the speed and storage capabilities required. The team eventually settled on a Datacraft/5 computer equipped with 96 kB of core memory and two 5 MB hard drives, one fixed, one removable. The new software, McIDAS, was much more automated, with the user's primary role in the data acquisition phase reduced to checking the quality of the vectors being automatically generated by the software. An image enhancement system was added to help see the clouds in low-light areas. McIDAS accepted data from a number of sources. Cloud imagery was buffered on tape and then fed in as needed; data from the Synchronous Meteorological Satellite could be fed in directly from a satellite feed at 1.7 MB a second, FAA data at 75 bit/s, and National Weather Service radar at 1200 bit/s. All of this data could be overlaid on hand-drawn vector maps. The system was later extended to support data from the Earth Resources Technology Satellite and the Mariner planetary probes. A command line interpreter allowed the user to call up data with short commands; for example, YK T 500 1200 USA would generate a display of the 500 mb temperature data from the 1200 UTC measurements over the USA. The first McIDAS system was completed in June 1972, but tuning continued for several months. In October 1973 a real-time feed from McIDAS to the local WHA-TV state public television service was installed. Upgrades and new data feeds continued to be added; local weather radar maps, feeds for the newer generation GOES satellites and others were added by 1976. Demand for the system was so high that it had to be upgraded several times for additional performance and storage, with 24-hour scheduling for workstations. 
A system was later installed at the US Air Force Cambridge Research Laboratory. Continued demand resulted in the creation of a second-generation version of McIDAS based on six Harris/6 computers connected together using a custom networking system they called "burn lines". Two of the machines acted as database servers with 300 MB disk drives, while the other four supported up to 18 workstations each with 80 MB drives. Remote terminals inside the University were set up over 9600 bit/s lines, and later another was set up at the National Environmental Satellite Service center in Kansas City, where data from the Landsat series was processed. After a tornado in Wichita Falls, Texas, killed several people in 1979, Congress directed that a new McIDAS be set up at the National Severe Storms Forecast Center (now known as the Storm Prediction Center), which was completed in January 1981. The West German Space Agency started the task of converting McIDAS to an Amdahl mainframe in 1976, and a similar system was later installed at NASA's Goddard Space Flight Center. Newer versions were written for the IBM System/370 and IBM 4331. With the improved performance these machines offered, the distributed architecture of the second-generation McIDAS was no longer needed and systems returned to a single-server installation. In 1984 development started on a standalone version for the IBM PC using EGA or VGA graphics, first on DOS and later on OS/2. These versions spread McIDAS beyond the university and laboratory, and users were soon found at television stations and weather prediction agencies around the world. A fourth-generation system, the current version, was built on Unix. This started in 1989 as a McIDAS environment for Vis5D. In 1993 that McIDAS was the basis for development of a supported version using X, which was released in 1996 as McIDAS-X. 
With standardized networking, the Unix version allowed low-cost terminals to be attached to the Unix workstations, and client versions for OS/2 and Windows NT were developed. As of December 2009, McIDAS-X is tested and supported by SSEC on AIX, Enterprise Linux, HP-UX, IRIX, Mac OS X, Solaris, and Windows XP workstations. The fifth generation of McIDAS is actively being developed. This new package, named McIDAS-V, is a free, open source visualization and data analysis tool that displays weather satellite (including hyperspectral) and other geophysical data in 2- and 3-dimensions. McIDAS-V can also analyze and manipulate the data with its powerful mathematical functions. McIDAS-V is built on SSEC's VisAD and Unidata's Integrated Data Viewer libraries, and contains "Bridge" software that enables McIDAS-X users to run their commands and tasks in the McIDAS-V environment. The functionality of SSEC's HYDRA software package is also being integrated into McIDAS-V for viewing and analyzing hyperspectral satellite data. Current versions of the various McIDAS packages can be downloaded from the McIDAS Download Software page. References Notes Bibliography W. Hibbard, D. Santek, M-F. Voidrot-Martinez, D. Kamins, and J. Vroom, UNIX and X Windows: the right choice for interactive systems. Preprints, Conf. Interactive Information and Processing Systems for Meteorology, Oceanography, and Hydrology. Anaheim, Amer. Meteor. Soc., 1990, pp. 162–163. D. Santek, W. Hibbard, M-F. Voidrot-Martinez, D. Kamins, and J. Vroom, A UNIX and X Windows implementation of McIDAS. Preprints, Conf. Interactive Information and Processing Systems for Meteorology, Oceanography, and Hydrology. Anaheim, Amer. Meteor. Soc., 1990, pp. 164–166. Matthew Lazzara, et al., "The Man computer Interactive Data Access System: 25 Years of Interactive Processing", Bulletin of the American Meteorological Society, Volume 80 Number 2, February 1999, pp. 
271–284 SSEC Webmaster, "40 Years of Geostationary Satellite Research and Observations at the Space Science and Engineering Center", 13 November 2006 Thomas Achtor, et al., "McIDAS-V: A Powerful Data Analysis and Visualization Tool for Multi and Hyperspectral Environmental Satellite Data", Proc. SPIE 7085, 708509 (2008), Graphic software in meteorology
29618456
https://en.wikipedia.org/wiki/Nexus%20S
Nexus S
The Nexus S is a smartphone co-developed by Google and Samsung and manufactured by Samsung Electronics for release in 2010. It was the first smartphone to use the Android 2.3 "Gingerbread" operating system, and the first Android device to support Near Field Communication (NFC) in both hardware and software. This was the fourth time that Google worked with a manufacturer to produce a phone, the previous ones being the Google G1, myTouch and the Nexus One, all three by HTC. Following the Nexus S, the next Android developer phone was the Galaxy Nexus, released the following year. The Nexus S was the first commercial smartphone certified by NASA to fly on the space shuttle and to be used on the International Space Station, as part of the SPHERES experiment. History and availability The Nexus S was demonstrated by Google CEO Eric Schmidt on 15 November 2010 at the Web 2.0 Summit. Google officially announced the phone on their blog on 6 December 2010. The phone became available for purchase on 16 December in the United States and on 22 December in the United Kingdom. The Super AMOLED version of the phone is the GT-I9020; it is based on the Samsung Galaxy S hardware, the principal hardware differences being the absence of support for an SD card and the addition of a near field communication chip. The alternate SC-LCD (Super Clear LCD) version of the phone is the GT-I9023, which is meant for the European (non-UK) market. In May 2011 Sprint introduced its Nexus S in the US; unlike the GSM version, the Sprint Nexus S uses CDMA and runs on Sprint's WiMAX network. In March 2011 Vodafone released a white version of the phone on its web store in the UK. In the United Kingdom, the Nexus S is sold at Carphone Warehouse and is available on the Vodafone, O2, T-Mobile, 3 and Orange networks. In France, it is available through SFR and Bouygues Telecom. 
In India, Samsung officially announced sale of the unlocked version with the Super LCD screen (i9023), which supports all GSM-based carriers throughout the country. In Canada, the Nexus S became available at most carriers in April 2011 in two versions: one for Telus, Bell, and Rogers with 3G frequencies of 850/1900/2100 MHz, and the other for Wind/Mobilicity/Vidéotron, using 3G frequencies of 900/1700/2100 MHz. In Australia, the Nexus S became available in both black and white. It is available on Vodafone and its virtual provider Crazy John's. Hardware Processor The Nexus S has the Samsung Exynos 3110 processor. This processor combines a 45 nm 1 GHz ARM Cortex-A8 based CPU core with a PowerVR SGX 540 GPU. The CPU core, code-named "Hummingbird", was co-developed by Samsung and Intrinsity. The GPU, designed by Imagination Technologies, supports OpenGL ES 1.1/2.0 and is capable of up to 20 million triangles per second. Memory The Nexus S has 512 MB of RAM (Mobile DDR), of which 128 MB is assigned to the GPU, leaving 384 MB free for the OS, and 16 GB of NAND memory, partitioned as 1 GB internal storage and 15 GB "USB storage". The phone does not support additional storage capacity such as microSD. Screen The Nexus S is the first device to use a slightly curved glass touchscreen, described by Google as a "Contour Display", with a Super AMOLED 800 × 480 WVGA PenTile matrix display manufactured by Samsung. In markets outside Canada, the US, and the UK, a Super LCD is supplied instead. Software The phone shipped with Android 2.3 (Gingerbread) and was the first device to ship with the updated OS. On 19 December 2011, Google released Android 4.0 (Ice Cream Sandwich) for the Nexus S. The automatic update was suspended, allegedly due to poor battery performance. The UMTS/GSM variant was among the first to receive Android 4.0.4 in March 2012. The Nexus S 4G (a.k.a. Samsung SPH-D720), I9020A, and M200, while taking longer than the GSM variant, also received the Android 4.0.4 update. 
Several devices, such as the Samsung Galaxy S II, received updates before these variants. On 27 June 2012 at the Google I/O conference, it was announced that the Nexus S, along with the Motorola Xoom and Galaxy Nexus, would be among the first devices to receive an upgrade to Android 4.1 (Jelly Bean); the rollout began on 26 July 2012. In October 2012, the Jelly Bean 4.1.2 OTA update was released, and it is the last official OS released for these devices. On 13 November 2012, it was announced that the Nexus S would not be updated to Android 4.2 (Jelly Bean). It is still supported by independent developers, though, and Android 4.2.2-based as well as 4.3-, 4.4-, 5.1- and 6.0-based alternative software can be installed. Variants Unlocked The Nexus S cannot be SIM locked and has an unlockable bootloader, allowing users to install custom ROMs. Critical reception Joshua Topolsky, writing for Engadget, praised the device's hardware and software, concluding "the truth is, it really is the best Android device available right now". The review by The Register gave the Nexus S an 85% rating and summarized it as a "cool, innovative device with an eye to snatch Apple’s smartphone crown." An AnandTech review praised the display, NFC tag reader, and Android Gingerbread operating system, but noted the lack of 720p video recording, HSPA+ baseband, and external storage support. A TechRadar review praised the Nexus S for fixing the GPS problems experienced with the Samsung Galaxy S: "The good news for those looking to upgrade from the Samsung Galaxy S – the GPS issues have been resolved, in that you can actually now get a signal with no problem." CNET's review was enthusiastic about the display, operating system, and performance. CNET noted the lack of 720p video recording, HDMI output and external (SD card) memory support. CNET also noted the "rather fragile" feel of the phone, the lack of LED notifications, and the few new features over the Nexus One. 
See also Comparison of Google Nexus smartphones Alexander (satellite) ARMv7 ARM Cortex A9 References Android (operating system) devices Google Nexus Smartphones Samsung mobile phones Mobile phones introduced in 2010 Discontinued smartphones ARMv7-A microarchitectures
24286881
https://en.wikipedia.org/wiki/Chase%20Riddle
Chase Riddle
Charles Ludy "Chase" Riddle (September 17, 1925 – June 12, 2011) was an American baseball player, coach, manager and scout. Riddle made his mark in both professional baseball, where he had a 36-year career (1943–78), mostly with the St. Louis Cardinals organization, and in U.S. college ranks as the successful head baseball coach of Troy University (1979–90), where he won two Division II NCAA baseball championships and compiled a record of 435–149–2 (.745). His uncles Johnny and Elmer Riddle played in the Major Leagues in the 1930s and 1940s. Riddle attended high school in his hometown of Columbus, Georgia, served in the United States Navy during World War II, and attended then-Troy State University, where he played varsity football. He broke into professional baseball as a shortstop in 1943 and spent much of his career as a catcher and first baseman, but he would eventually play every baseball position, including pitcher, during his long career in minor league baseball. Much of his early career was spent in the lower rungs of the farm system of the Boston Red Sox, and apart from 35 games with the Double-A Dallas Eagles of the Texas League in 1954 he toiled largely in the lowest levels of the minors, spending all or parts of 14 seasons at the Class C or Class D level. A 6-foot (1.8 m), 190-pound (86 kg) right-handed batter and thrower, he batted .319 with 155 home runs in 1,387 games played. In 1953, as playing manager of the Panama City Fliers of the Class D Alabama–Florida League, Riddle batted .411 and swatted 25 homers in 436 at bats. His managing career began with unaffiliated teams in the Alabama-Florida circuit in 1951, but in 1955 he joined the Cardinals' system, managing through 1962, then shifting to a scouting job for the next 16 seasons — signing eventual Baseball Hall of Fame pitcher Steve Carlton for the Redbirds in 1964. 
In 1979, he became head baseball coach of the Troy Trojans, winning Division II titles in 1986 and 1987 and five Gulf South Conference championships. The Troy University baseball stadium is named Riddle-Pace Field in his honor. He was inducted into the Alabama Sports Hall of Fame in 2000 and the Wiregrass Sports Hall of Fame in 2005. On his death at age 85 in June 2011, The Montgomery Advertiser wrote: "In this state, there are few sports figures more revered than Riddle. Alabama had Paul 'Bear' Bryant [and] Auburn had Ralph 'Shug' Jordan, but in the baseball world, Riddle and longtime Jacksonville State coach Rudy Abbott were the two leaders whose impact transcended their sport." References External links Alabama Sports Hall of Fame page Obituary, The Montgomery Advertiser, June 13, 2011 1925 births 2011 deaths Albany Cardinals players Allentown Wings players Baseball coaches from Georgia (U.S. state) Baseball players from Columbus, Georgia Batavia Clippers players Billings Mustangs managers Billings Mustangs players College baseball coaches Dallas Eagles players Dothan Cardinals players Durham Bulls players Galveston White Caps players Harrisburg Senators players Norfolk Tides managers Oneonta Red Sox players Ozark Eagles players Panama City Fliers players Roanoke Red Sox players St. Louis Cardinals scouts Tarboro Tars players Troy Trojans baseball coaches Troy Trojans football players Williamsport Tigers players United States Navy personnel of World War II
8038089
https://en.wikipedia.org/wiki/Ruby%20License
Ruby License
The Ruby License is a Free and Open Source license applied to the Ruby programming language and also available for use in other projects. It contains an explicit dual licensing clause, stating that software subject to its terms may be distributed either under the terms of the Ruby License itself or under those of the GNU General Public License v2 or the two-clause BSD License (depending on the version of the Ruby License used). The license is typically considered to be a free software license due to the presence of the dual-licensing clause. History For versions up to 1.9.2, the Ruby programming language was available under an explicit dual-license scheme which allowed users to choose between a dedicated Ruby license or the GNU General Public License v2 (GPLv2), one of the most common free software licenses. Starting at version 1.9.3, the dual-licensing clause changed to offer the choice of the FreeBSD License. Compatibility The Ruby License has unusual copyleft requirements, stating that redistributions need not be under the terms of the Ruby license, but must be placed "in the Public Domain or otherwise Freely Available". For example, a modified form of a program licensed under the Ruby license may be placed under the FreeBSD License, which is a non-copyleft license. The Ruby License is approved by the Free Software Foundation and is considered compatible with the GNU General Public License, due to its explicit dual-licensing clause. The Open Source Initiative does not explicitly include the Ruby license as a certified open source license; this is considered "unnecessary" due to the dual licensing clause. In discussion over the change of the dual-licensing clause on the debian-legal mailing list, it was noted that while the Ruby license itself is arguably not compatible with the Debian Free Software Guidelines, this is unimportant due to the dual-licensing clause. 
Software under the Ruby license (including the older Ruby 1.9.2 license, when GPLv2 was a listed alternative) may be included in binary form within an Apache product if the inclusion is appropriately labeled. Adoption Software other than the Ruby programming language itself which uses the Ruby License includes: JRuby, an implementation of Ruby atop the Java Virtual Machine MacRuby, an implementation of Ruby 1.9 directly on top of Mac OS X core technologies such as the Objective-C runtime and garbage collector, the LLVM compiler infrastructure and the Foundation and ICU frameworks. MacRuby contains code from the Ruby project, and the source code of most MacRuby examples, unless otherwise specified, is covered by the Ruby license. RubyGems, a package manager for Ruby IronRuby, an implementation of Ruby targeting the .NET Framework The JSON implementation for Ruby References External links Text of the Ruby License Ruby (programming language) Free and open-source software licenses
31429209
https://en.wikipedia.org/wiki/T-Kernel
T-Kernel
T-Kernel is an open source real-time operating system (RTOS) designed for 32-bit microcontrollers. It is standardized by the T-Engine Forum, which distributes it under a T-License agreement. There is also a corresponding Micro T-Kernel (μT-Kernel) implementation designed for embedded systems with 16-bit or 8-bit microcontrollers. History In 1984 professor Ken Sakamura started The Real-time Operating system Nucleus (TRON project) at the University of Tokyo, with the goal of designing an open real-time operating system (RTOS) kernel. The TRON framework defines a complete architecture for the different computing units. Industrial TRON (ITRON) is the most popular TRON architecture; the ITRON specification was promoted by the various companies selling commercial implementations of it. T-Kernel is the name of both the specification and a single implementation based on the authorized source code, available free of charge from the T-Engine Forum under the T-License. T-Engine has been described as one of the most advanced ubiquitous computing platforms. In 1989, Matsushita Electric Industrial Co., Ltd., now known as Panasonic Corporation, introduced a TRON PC. This personal computer had an 8 MHz Intel 80286 chip and only 2 MB of memory, but it could display moving video. It also had a dual-booting system that could run both the TRON OS and DOS. Although the Japanese government once announced it would use the TRON PC in Japanese schools, the plan was dropped, partly due to economic issues with the United States. But ITRON survived, and today is used in many devices, household appliances, automobile electronics, robots, some satellites, and in factory automation systems in China. Embedded system developers claim that ITRON is the number one OS for embedded chips in both Japan and the United States. 
Overview To make middleware easy to distribute, T-Kernel has separate specifications for subsystems and device drivers, suited to different types of middleware APIs. A real-time OS appropriate for an individual application can be created by combining middleware called a T-Kernel Extension with the T-Kernel. T-Monitor initializes the computer hardware and handles interrupt setup at start-up; it lessens the hardware dependency of T-Kernel and improves application portability. Functionally, T-Kernel consists of the following three components. T-Kernel/OS (operating system) This offers the basic functions of a real-time operating system. T-Kernel/SM (system manager) This offers functions, including system memory management and address space management, for managing middleware such as device drivers and subsystems. T-Kernel/DS (debugger support) This offers functions for the debuggers used in development tools. Development environment eBinder from eSol Corporation is one commonly used integrated development environment (IDE) for software cross-development targeting T-Kernel. The current release of T-Kernel 2.0 is distributed with a plug-in for the Eclipse IDE. Also, a version of T-Kernel that runs on a QEMU-based emulator, and the QEMU-based emulator itself, are available so that testing, training, and development can be done on a PC without target hardware. It is supported by popular SSL/TLS libraries such as wolfSSL. See also ThreadX References External links TRON Forum Sakamura home page ITRON Project Archive Introducing the μT-Kernel Information about T-Engine, T-Kernel, and μT-Kernel Programming Embedded operating systems TRON project
6342109
https://en.wikipedia.org/wiki/Albert%20F.%20Case%20Jr.
Albert F. Case Jr.
Albert F. Case Jr. (born March 2, 1955) is an American software engineer and one of the leaders in the development of computer-aided software engineering (CASE) technologies and system development methodologies. Biography Case is a graduate of the State University of New York at Buffalo. He began his software development career in 1972 and worked in a variety of IT-related capacities, including director of Management Information Systems for Ryder System and co-founder of Maximus Systems, Inc., developers of the Maximus code generator. In 1982, Case joined start-up Nastec Corporation, a Southfield, Michigan-based software development company. In 1989, Case left Nastec Corporation to join Gartner, Inc. (then GartnerGroup), the IT industry research and advisory firm, where he spent the next 13 years as a thought-leading industry analyst, speaker, writer and business executive. After 13 years, Case left Gartner to become an independent consulting executive and entrepreneur, with the goal of applying general systems theory (GST) and "organizational engineering" in the laboratory of live businesses. Case served as a board member for Sky Capital Holdings, a New York-based investment bank; interim president and CEO of DuoCash Corporation, a payment processing services company; chairman of the board of eNucleus, Inc. (ENUI.OB), a technology-based business process outsourcing firm; and president and CEO of Turbodyne Technologies, Inc. (TRBD.OB). As an entrepreneur, Case is a co-founder and managing director of firms including content/commerce management software providers InfoTollgate.com (hosted content/commerce management systems) and Turnpike Software, LLC, the content management software development company (with his partner Dr. N. Adam Rin, former Bachman CASE tool and Gartner alumnus). He is also publisher and co-founder of IT procurement advisor TechSpend, LLC (with Gartner alumnus Vinnie Mirchandani). 
He is also the Research Fellow and principal analyst with ES Research Group, Inc., which specializes in helping companies identify, select, implement and measure sales performance improvement programs. Work Case was one of the leaders in the development of computer-aided software engineering (CASE) technologies and system development methodologies. He was also a major contributor to the Spectrum System Development Methodology, from John D. Toellner Associates, developing the Structured Analysis and Structured Design design tips. Computer-Aided Software Engineering Nastec Corporation both coined the acronym CASE and launched the DesignAid analysis and design tool and the LifeCycle Manager project configuration and management system. For six years, Case served as Nastec's Vice President for Professional Services and Product Management. Case and his associate Vaughn Frick developed a second-generation structured analysis and design technique based on the Yourdon/DeMarco/Constantine Structured Analysis/Structured Design technique. Frick, while working with Case at Nastec Corporation on the development of the PC-based DesignAid product, was the first to specify and teach a comprehensive technique for transforming a structured analysis specification into a structured design. The professional services division quickly became not only the largest component of Nastec Corporation, but also the largest provider of SA/SD education. Case was among the most prolific public speakers on the subject of CASE during the 1980s, helping launch, among other events, the Computer-Aided Software Engineering Symposium as its keynote speaker for its first two years. Case toured the country, promoting the integration of upper CASE and lower CASE with pro-football great and code-generation pioneer Fran Tarkenton, founder of Tarkenton Software. 
When Tarkenton Software and Nastec Corporation agreed not to merge, Tarkenton teamed up with James Martin and merged Tarkenton Software into KnowledgeWare, a direct competitor to Nastec. Principles of Computer Aided Software Engineering Case, author of the book Information Systems Development: Principles of Computer Aided Software Engineering (Prentice Hall, 1986), was also a lifelong student of general systems theory (GST), fascinated by the application of structured analysis and design techniques to the design of entire business operations, not just the information technology subsystems. At Gartner, Case quickly ascended the analyst ranks, going from the leading Software Engineering analyst to head of its business process reengineering practice, where he could spend full time applying GST to business design, launching Gartner's e-Business Resource Center and its Vertical Industries practices, and ultimately becoming the founder, group vice president and general manager of Gartner's TCO Software division. From there, Case became head of Gartner's IT benchmarking business and president of Gartner's eMetrix business performance management business. During his 13-year tenure at Gartner, Case launched dozens of product and service lines that accounted for over $100 million in revenue to Gartner. Stamford Research He ran a consulting firm, Stamford Research, LLC, with Arnold Kwong and Steve Vogel. References Further reading Case, Albert F., Information Systems Development: Principles of Computer-Aided Software Engineering (CASE) (Prentice Hall, 1986) at www.informatik.uni-trier.de Dr. Eli Goldratt's Necessary & Sufficient Tour. "Computer-Aided Software Engineering (CASE): Technology for Improving Software Development Productivity" Database Journal, Vol 17, Issue 1, pp. 35–43 1955 births University at Buffalo alumni Living people American software engineers
34801640
https://en.wikipedia.org/wiki/Institute%20of%20Information%20Security%20Professionals
Institute of Information Security Professionals
Update: The Institute of Information Security Professionals is now the Chartered Institute of Information Security (CIISec); see https://www.ciisec.org/about The Institute of Information Security Professionals (IISP) is an independent, non-profit body governed by its members, with the principal objective of advancing the professionalism of information security practitioners and thereby the professionalism of the industry as a whole. The primary aim of the institute is to provide a universally accepted focal point for the information security profession. Overview The Institute of Information Security Professionals has a membership representing over 8,000 individuals globally throughout industry, academia and government. IISP has offices in Evesham, Worcestershire and Southwark, London. The institute's headquarters are in Evesham, close to the cyber-hubs of Cheltenham and Malvern. The institute is run by its members and has an elected board of directors with Dr Alastair MacWillson as the chairman. Activities One of its main activities is to act as an accreditation authority for the industry. As part of the government's investment in cyber security, the IISP consortium (including CREST and RHUL) has been appointed by NCSC to provide certification for UK government Information Assurance (IA) professionals. The consortium has been awarded a licence to issue the CESG Certified Professional (CCP) mark based on the Skills Framework, as part of a certification scheme driven by NCSC, the IA arm of GCHQ. Full membership of the institute is information security's professional standard and endorses the knowledge, experience and professionalism of an individual in this field. Membership is awarded on a competency basis, which sets it apart from purely knowledge-based qualifications; it is granted to those professionals who demonstrate breadth and depth of knowledge as well as substantial practical experience.
Regional branches The IISP has a number of regional branches which are developed for its members. History Based in London, United Kingdom, the institute was established in 2006 by information security professionals. In 2007, the institute developed the IISP Skills Framework. This framework describes the range of competencies expected of information security and information assurance professionals in the effective performance of their roles. It was developed through collaboration between private- and public-sector organisations, academics and security leaders. In 2012, as part of the government's investment in cyber security, the IISP consortium was appointed by NCSC (formerly CESG) to provide certification for UK government information assurance (IA) professionals. The IISP defined a set of information security skills and skill levels, and these skill definitions have been supplemented by NCSC to enable certification bodies to make formal assessments, and others to make informal assessments, against the IA skill levels. See also British cyber security community References Sources "Cloud Expo 2014 presentation from Piers Wilson, Director of the IISP: Equipping security teams to deal with changing technology.", Wednesday 26 February 2014. Burke, Claire. "Top tips to stop cyber criminals from targeting your small business", The Guardian, London, Tuesday 11 March 2014. "New Cyber Security Government skills certification from IISP and CREST.", Wednesday 24 October 2012. "CREST works with industry influencers to enhance the recognition and professionalism of the technical security industry.", Retrieved on Saturday 15 March 2014. "The Times Raconteur Section: What is a security professional? Knowing what good looks like.", Wednesday 12 December 2012. "e-skills uk and IISP to align professional skills standards in information-security.", Tuesday 6 November 2012. "IISP Pulse magazine focus on Validsoft CEO's views."
External links Institute of Information Security Professionals 2006 establishments in the United Kingdom Computer security organizations Information technology organisations based in the United Kingdom Internet in the United Kingdom Organisations based in the London Borough of Southwark Organizations established in 2006 Information Security Professionals
1152416
https://en.wikipedia.org/wiki/Antitrust%20%28film%29
Antitrust (film)
Antitrust (also titled Conspiracy.com and Startup) is a 2001 techno thriller film written by Howard Franklin and directed by Peter Howitt. Antitrust portrays young idealistic programmers and a large corporation (NURV) that offers a significant salary, an informal working environment, and creative opportunities for those talented individuals willing to work for them. The charismatic CEO of NURV (Tim Robbins) seems to be good-natured, but new employee and protagonist Milo Hoffman (Ryan Phillippe) begins to unravel the terrible hidden truth of NURV's operation. The film stars Phillippe, Rachael Leigh Cook, Claire Forlani, and Robbins. Antitrust opened in the United States on January 12, 2001, and was generally panned by critics. Plot Working with his three friends at their new software development company Skullbocks, Stanford graduate Milo Hoffman is recruited by CEO Gary Winston of NURV (Never Underestimate Radical Vision). Milo receives an attractive programming position with a large paycheck, an almost-unrestrained working environment, and extensive creative control over his work. Accepting Winston's offer, Hoffman and his girlfriend, Alice Poulson (Forlani), move to NURV headquarters in Portland, Oregon. Despite development of the flagship product (Synapse, a worldwide media distribution network) being well on schedule, Hoffman soon becomes suspicious of the excellent source code Winston personally provides to him, seemingly when needed most, while refusing to divulge the code's origin. After his best friend and fellow computer programmer, Teddy Chin, is murdered, Hoffman discovers that NURV is stealing the code they need from programmers around the world—including Chin—and then killing them. Hoffman learns that not only does NURV employ an extensive surveillance system to observe and steal code, the company has infiltrated the Justice Department and most mainstream media. Even his girlfriend is a plant, an ex-con hired by the company to spy on and manipulate him. 
While searching through a secret NURV database containing surveillance dossiers on employees, Hoffman discovers highly sensitive personal information about Lisa Calighan (Cook), a friendly co-worker. When he says he knows the company has this information about her, she agrees to help him expose NURV's crimes. Coordinating with Brian Bissel, Hoffman's old start-up friend, they plan to use a local public-access television station to hijack Synapse and globally broadcast their charges against NURV. However, Calighan is actually Winston's accomplice and foils Hoffman. Hoffman had earlier confronted Poulson, whose real name is Rebecca Paul, and asked her to side with him against Winston and NURV. When Hoffman's attempt fails, a backup plan is put into motion by Poulson, the fourth member of Skullbocks, and the incorruptible internal security firm hired by NURV. As Winston prepares to have Hoffman killed, the second team successfully commandeers one of NURV's own work centers, "Building 21", and transmits the incriminating evidence as well as the Synapse code. Calighan, Winston, and his entourage are arrested for their crimes. After amicably parting ways with the redeemed Poulson, Hoffman rejoins Skullbocks. Cast Ryan Phillippe as Milo Hoffman Rachael Leigh Cook as Lisa Calighan Claire Forlani as Alice Poulson/Rebecca Paul Tim Robbins as Gary Winston Douglas McFerran as Bob Shrot Richard Roundtree as Lyle Barton Tygh Runyan as Larry Banks Yee Jee Tso as Teddy Chin Nate Dushku as Brian Bissel Ned Bellamy as Phil Grimes Tyler Labine as Redmond Schmeichel Scott Bellis as Randy Sheringham David Lovgren as Danny Solskjær Zahf Hajee as Desi Jonathon Young as Stinky Rick Worthy as Shrot's Assistant Peter Howitt as Homeless man Gregor Trpin as Computer Guy Allusions Roger Ebert found Gary Winston to be a thinly disguised pastiche of entrepreneur Bill Gates; so much so that he was "surprised [the writers] didn't protect against libel by having the villain wear a name tag saying, 'Hi!
I'm not Bill!'" Similarly, Ebert felt NURV "seems a whole lot like Microsoft". Parallels between the fictional and real-world software giants were also drawn by Lisa Bowman of ZDNet UK, James Berardinelli of ReelViews, and Rita Kempley of The Washington Post. Microsoft spokesman Jim Cullinan said, "From the trailers, we couldn't tell if the movie was about Microsoft or Oracle." Production Principal photography for Antitrust took place in Vancouver, British Columbia, California, and Portland, Oregon. Stanley Park in Vancouver served as the grounds for Gary Winston's house, although the gatehouse at its entrance was faux. The exterior of Winston's house itself was wholly computer-generated; only the paved walkway and body of water in the background are physically present in the park. For later shots of Winston and Hoffman walking along a beach near the house, the CG house was placed in the background of Bowen Island, the shooting location. Catherine Hardwicke designed the interior sets for Winston's house, which featured several different units, or "pods", e.g., personal, work, and recreation units. No scenes take place in any of the personal areas, however; only public areas made it to the screen. While the digital paintings in Winston's home were created with green screen technology, the concept was based on technology that was already available in the real world. The characters even refer to Bill Gates' house which, in real life, had such art. The paintings which appeared for Hoffman were of a cartoon character, "Alien Kitty", developed by Floyd Hughes specifically for the film. Simon Fraser University's Burnaby campus stood in for external shots of NURV headquarters. The Chan Centre for the Performing Arts at the University of British Columbia (UBC) was used for several internal locations. The centre's foyer area became the NURV canteen, the set decoration for which was inspired by Apple's canteen, which the producers saw during a visit to the company's corporate headquarters.
The inside of the Chan, normally used for concerts, served as the shape for "The Egg", or "The NURV Center", where Hoffman's cubicle is located. Described as "a big surfboard freak" by director Peter Howitt, production designer Catherine Hardwicke surrounded "The Egg" set with surfboards mounted to the walls; Howitt has said, "The idea was to make NURV a very cool looking place." Both sets for NURV's Building 21 were also on UBC's campus. The internal set was an art gallery on campus, while the exterior was built for the film on the university's grounds. According to Howitt, UBC students kept attempting to steal the Building 21 set pieces. Hoffman and Poulson's new home, a real house in Vancouver, was a "very tight" shooting location and made for a very rigorous first week of shooting because, unlike on a set, the crew could not move the walls. The painting in the living room is the product of a young Vancouver artist, and was purchased by Howitt as his first piece of art. The new Skullbocks office was a real loft, also in Vancouver, on Beatty Street. Open source Antitrust's pro-open-source story excited industry leaders and professionals with the prospect of expanding the public's awareness and knowledge of the availability of open-source software. The film heavily features Linux and its community, using screenshots of the GNOME desktop, consulting Linux professionals, and including cameos by Miguel de Icaza and Scott McNealy (the latter appearing in the film's trailers). Jon Hall, executive director of Linux International and consultant on the film, said "[Antitrust] is a way of bringing the concept of open source and the fact that there is an alternative to the general public, who often don't even know that there is one." Despite the film's message about open source computing, MGM did not follow through with their marketing: the official website for Antitrust featured some videotaped interviews which were only available in Apple's proprietary QuickTime format.
Reception Antitrust received mainly negative reviews, and holds a "Rotten" approval rating of 24% on Rotten Tomatoes, based on 106 reviews, with an average score of 4 out of 10. The summary states "Due to its use of clichéd and ludicrous plot devices, this thriller is more predictable than suspenseful. Also, the acting is bad." The film also has a score of 31 out of 100, based on 29 reviews, on Metacritic. Roger Ebert of the Chicago Sun-Times gave the film two stars out of four. Linux.com appreciated the film's open-source message, but felt the film overall was lackluster, saying "AntiTrust is probably worth a $7.50 ticket on a night when you've got nothing else planned." James Keith La Croix of Detroit's Metro Times gave the film four stars, impressed that "Antitrust is a thriller that actually thrills." The film won both the Golden Goblet for "Best Feature Film", and "Best Director" for Howitt, at the 2001 Shanghai International Film Festival. Home media Antitrust was released as a "Special Edition" DVD on May 15, 2001, and on VHS on December 26, 2001. The DVD features audio commentary by the director and editor, an exclusive documentary, deleted scenes and alternative opening and closing sequences with director's commentary, Everclear's music video for "When It All Goes Wrong Again" (which is played over the beginning of the closing credits), and the original theatrical trailer. The DVD was re-released August 1, 2006. It was released on Blu-ray Disc on September 22, 2015.
See also List of films featuring surveillance Hackers (1995 film) The Circle (2017 film) References Citations Video sources External links 2001 films 2001 thriller films American films American thriller films 2000s English-language films Films scored by Don Davis (composer) Films about computer and internet entrepreneurs Films about computing Films about security and surveillance Films directed by Peter Howitt Films set in Portland, Oregon Films shot in California Films shot in Vancouver Films shot in Portland, Oregon Metro-Goldwyn-Mayer films Techno-thriller films Works about free software
743699
https://en.wikipedia.org/wiki/George%20E.%20Lewis
George E. Lewis
George Emanuel Lewis (born July 14, 1952) is an American composer, performer, and scholar of experimental music. He has been a member of the Association for the Advancement of Creative Musicians (AACM) since 1971, when he joined the organization at the age of 19. He is renowned for his work as an improvising trombonist and is considered a pioneer of computer music, which he began pursuing in the late 1970s; in the 1980s he created Voyager, an improvising software program he has used in interactive performances. Lewis's many honors include a MacArthur Fellowship and a Guggenheim Fellowship, and his book A Power Stronger Than Itself: The AACM and American Experimental Music received the American Book Award. Lewis is the Edwin H. Case Professor of American Music, Composition & Historical Musicology at Columbia University. Biography Born in Chicago, Illinois, Lewis first encountered the AACM while taking a year off from Yale University at the age of 19. Members encouraged him to finish his degree, and he graduated from Yale in 1974 with a degree in philosophy. Shortly after, he released Solo Trombone Record to great acclaim. Lewis has long been active in creating and performing with interactive computer systems, most notably his software Voyager, which "listens" and reacts to live performers. Lewis has recorded or performed with Anthony Braxton, Anthony Davis, Bertram Turetzky, Conny Bauer, Count Basie, David Behrman, David Murray, Derek Bailey, Douglas Ewart, Alfred Harth, Evan Parker, Fred Anderson, Frederic Rzewski, Gil Evans, Han Bennink, Irène Schweizer, J. D. Parran, James Newton, Joel Ryan, Joëlle Léandre, John Zorn, Karl E. H. Seigfried, Laurie Anderson, Leroy Jenkins, Marina Rosenfeld, Michel Portal, Misha Mengelberg, Miya Masaoka, Muhal Richard Abrams, Nicole Mitchell, Richard Teitelbaum, Roscoe Mitchell, Sam Rivers, Steve Lacy, and Wadada Leo Smith.
He has also performed with Frederic Rzewski and Alvin Curran's Musica Elettronica Viva, as well as the Globe Unity Orchestra and the ICP Orchestra (Instant Composer's Pool). In the 1980s, he succeeded Rhys Chatham as the music director of The Kitchen. Between 1988 and 1990, Lewis collaborated with video artist Don Ritter to create performances of interactive music and interactive video controlled by Lewis's improvised trombone. In 1992, Lewis collaborated with Canadian artist Stan Douglas on the video installation Hors-champs, which was featured at documenta 9 in Kassel, Germany. The installation features Lewis in an improvisation of Albert Ayler's "Spirits Rejoice" with musicians Douglas Ewart, Kent Carter and Oliver Johnson. In 2002, Lewis received a MacArthur Fellowship. His many honors also include a Guggenheim Fellowship (2015), a United States Artists Fellowship (2011), the Alpert Award in the Arts (1999), and the American Musicological Society's Music in American Culture Award in 2009. He became a Fellow of the American Academy of Arts and Sciences in 2015, a Corresponding Fellow of the British Academy in 2016, and a member of the American Academy of Arts and Letters in 2018. Lewis has received three honorary degrees: Doctor of Music from the University of Edinburgh in 2015, Doctor of Humane Letters from New College of Florida in 2017, and Doctor of Music from Harvard University in 2018. Since 2004, he has served as Edwin H. Case Professor of American Music at Columbia University in New York City. He previously taught at the University of California, San Diego. Lewis is featured extensively in Unyazi of the Bushveld (2005), directed by Aryan Kaganof, a documentary about the first symposium of electronic music held in Africa. Lewis gave an invited keynote lecture and performance at NIME-06, the sixth international conference on New Interfaces for Musical Expression, which was held at IRCAM, Paris, in June 2006.
In 2008, Lewis published a book-length history of the AACM titled A Power Stronger Than Itself: The AACM and American Experimental Music (University of Chicago Press). The book received the 2009 American Book Award. In 2008 his work "Morning Blues for Yvan" was featured on the compilation album Crosstalk: American Speech Music (Bridge Records) produced by Mendi + Keith Obadike. Discography As leader Solo Trombone Record (Sackville, 1976) George Lewis (Black Saint, 1977) George Lewis Douglas Ewart (Black Saint, 1978) Homage to Charles Parker (Black Saint, 1979) Chicago Slow Dance (1977) (Lovely, 1981) Yankees (Charly, 1982) Change of Season (Soul Note, 1986) Dutch Masters (Soul Note, 1987) Sachse, Joe: Berlin Tango (Jazzwerkstatt, 1987) News for Lulu (hat Hut, 1988) with Zorn and Bill Frisell More News for Lulu (hat Hut, 1992; recorded 1989) with Zorn and Frisell Voyager (Avant, 1993) Changing With the Times (New World, 1993) The Usual Turmoil and Other Duets (Music & Arts, 1998) Conversations (Incus, 1998) Endless Shout (Tzadik, 2000) The Shadowgraph Series: Compositions for Creative (Spool, 2001) From Saxophone & Trombone (PSI, 2002) Streaming (Pi, 2006) George Lewis: Les Exercices Spirituels (Tzadik, 2011) Sequel (For Lester Bowie) (Intakt, 2011) Sonic Rivers (Tzadik, 2014) Collaborations Elements of Surprise (Moers, 1976 [1978]) with Anthony Braxton Company, Fables (Incus, 1980) with Derek Bailey, Evan Parker, and Dave Holland Hook, Drift & Shuffle (Incus, 1985) with Parker, Barry Guy and Paul Lytton Donaueschingen (Duo) 1976 (hatART, 1994; recorded 1976) with Braxton Slideride (hat Hut, 1994) with Ray Anderson, Craig Harris, and Gary Valente Triangulation (Nine Winds, 1996) with Vinny Golia and Bertram Turetzky Live at Taktlos with Irene Schweizer (Intakt, 1986) The Storming of the Winter Palace (Intakt, 1988) with Irene Schweizer, Maggie Nicols, Joëlle Léandre, and Günter Sommer Transatlantic Visions (RogueArt, 2009) with Joëlle Léandre Sour Mash (Innova, 
2009) with Marina Rosenfeld Metamorphic Rock (Iorram, 2009) with Glasgow Improvisers Orchestra As sideman With Muhal Richard Abrams Spihumonesty (Black Saint, 1979) Mama and Daddy (Black Saint, 1980) SoundDance (Pi, 2011) With Anthony Braxton The Montreux/Berlin Concerts (Arista, 1975–6) Creative Orchestra Music 1976 (Arista, 1976) Creative Orchestra (Köln) 1978 (hatART, 1978 [1995]) Four Compositions (Quartet) 1983 (Black Saint, 1983) Dortmund (Quartet) 1976 (hatART, 1976 released 1991) Ensemble (Victoriaville) 1988 (Victo, 1988 [1992]) News from the '70s (recorded 1971–1976, New Tone, 1999) Quintet (Basel) 1977 (hatOLOGY, 1977, released 2000) With Anthony Davis Episteme (Gramavision) Hemispheres (Gramavision) Variations in Dream Time (Gramavision) Hidden Voices (India Navigation) With Gil Evans Lunar Eclypse (recorded 1981, New Tone, 1993) Live at the Public Theater (New York 1980) (Trio, 1981) With Globe Unity Orchestra 20th Anniversary (FMP, 1993; recorded 1986) Globe Unity – 40 Years (Intakt, 2007) With ICP Orchestra Bospaadje Konijnehol I (1986) ICP Plays Monk (1986) With Steve Lacy Prospectus (hat ART, 1983) also released as Cliches Futurities (hat Hut, 1985) The Beat Suite (Sunnyside, 2001) Last Tour (Eminem, 2004) With Roscoe Mitchell Roscoe Mitchell Quartet (Sackville, 1975) Nonaah (Nessa, 1977) L-R-G / The Maze / S II Examples (Nessa, 1978) Sketches from Bamboo (Moers, 1979) Nine to Get Ready (ECM, 1997) With David Murray Ming (Black Saint, 1980) Home (Black Saint, 1982) With Richard Teitelbaum Concerto Grosso (hat Hut, 1988) Cyberband (Moers, 1993) Golem (Tzadik, 1995) With others Barry Altschul, You Can't Name Your Own Tune (Muse, 1977) Fred Anderson, Another Place (Moers, 1979) Jacques Bekaert, Summer Music 1970 (Lovely/Vital, 1979) Leo Smith Creative Orchestra, Budding of a Rose (Moers, 1979) Leroy Jenkins, Space Minds, New Worlds, Survival of America (Tomato, 1979) Sam Rivers, Contrasts (ECM, 1979) Material, Memory Serves (Celluloid, 1981) John 
Zorn, Archery (Parachute, 1981) Laurie Anderson, Big Science (Warner Bros., 1981) John Lindberg Trio, Give and Take (Black Saint, 1982) Rhys Chatham, Factor X (Moers, 1983) Joelle Leandre, Les Douze Sons (NATO, 1985) Ushio Torikai, Go Where? (Victor, 1986) Heiner Goebbels, Der Mann im Fahrstuhl (ECM, 1987) India Cooke, RedHanded (Music & Arts, 1996) Steve Coleman, Genesis & The Opening of the Way (BMG/RCA Victor, 1997) Evod Magek, Through Love to Freedom (Black Pot, 1998) Miya Masaoka Orchestra, What Is the Difference Between Stripping and Playing the Violin? (Victo, 1998) NOW Orchestra, WOWOW (Spool, 1999) Musica Elettronica Viva, MEV 40 (New World, 2008) Bert Turetzky & Mike Wofford, Transition and Transformation (Nine Winds) Compositions Solo and chamber music "Thistledown" (2012), for quartet "The Will To Adorn" (2011), for large chamber ensemble "Ikons" (2010), for octet "Dancing in the Palace" (2009), for tenor voice and viola, with text by Donald Hall "Signifying Riffs" (1998), for string quartet and percussion "Ring Shout Ramble" (1998), for saxophone quartet "Collage" (1995), for poet and chamber orchestra, with text by Quincy Troupe "Endless Shout" (1994), for piano "Toneburst" (1976) for three trombones Electronics "Anthem" (2011), for chamber ensemble with electronics "Les Exercices Spirituels" (2010) for eight instruments and computer sound spatialization "Sour Mash" (2009), composition for vinyl turntablists, with Marina Rosenfeld "Hello Mary Lou" (2007) for chamber ensemble and live electronics "Crazy Quilt" (2002), for infrared-controlled "virtual percussion" and four percussionists "North Star Boogaloo" (1996), for percussionist and computer, with text by Quincy Troupe "Virtual Discourse" (1993), composition for infrared-controlled "virtual percussion" and four percussionists "Nightmare At The Best Western" (1992), for baritone voice and six instruments "Atlantic" (1978), for amplified trombones with resonant filters Installations "Ikons" (2010), 
interactive sound sculpture, with Eric Metcalfe "Travelogue" (2009), sound installation "Rio Negro II" (2007), robotic-acoustic sound installation, with Douglas Ewart and Douglas Irving Repetto. "Information Station No. 1" (2000), multi-screen videosonic interactive installation for the Point Loma Wastewater Treatment Plant, San Diego, Calif. "Rio Negro" (1992), robotic-acoustic sound-sculpture installation, with Douglas Ewart "A Map of the Known World" (1987), interactive mbira-driven audiovisual installation, with David Behrman "Mbirascope/Algorithme et kalimba" (1985), interactive mbira-driven audiovisual installation, with David Behrman Interactive computer music "Interactive Duo" (2007), for interactive computer-driven piano and human instrumentalist "Interactive Trio" (2007), for interactive computer-driven piano, human pianist, and additional instrumentalist "Virtual Concerto" (2004), for improvising computer piano soloist and orchestra "Voyager" (1987), for improvising soloist and interactive “virtual orchestra" "Rainbow Family" (1984), for soloists with multiple interactive computer systems "Chamber Music for Humans and Non-Humans" (1980), for micro-computer and improvising musician "The KIM and I" (1979), for micro-computer and improvising musician Music Theatre "The Empty Chair" (1986), computer-driven videosonic music theatre work "Changing With The Times" (1991), radiophonic/music theatre work Creative orchestra "Triangle" (2009) "Something Like Fred" (2009) "Fractals" (2007) "Angry Bird" (2007) "Shuffle" (2007) "The Chicken Skin II" (2007) "Hello and Goodbye" (1976/2000) "The Shadowgraph Series, 1-5" (1975–77) Graphic and instructional scores "Artificial Life 2007" (2007), composition for improvisors with open instrumentation "Sequel" (2004), for eight electro-acoustic performers "Blues" (1979), graphic score for four instruments "Homage to Charles Parker" (1979), for improvisors and electronics "Chicago Slow Dance" (1977), for electro-acoustic 
ensemble "The Imaginary Suite" (1977), two movements for tape, live electronics, and instruments "Monads" (1977), graphic score for any instrumentation Books and articles Monographs Edited collections Articles and chapters Lewis, George E. "Americanist Musicology and Nomadic Noise." Journal of the American Musicological Society, Vol. 64, No. 3 (Fall 2011), pp. 691–95. Lewis, George E. "Interactivity and Improvisation". In Dean, Roger T., ed. The Oxford Handbook of Computer Music. New York and Oxford: Oxford University Press (2009), 457-66. Lewis, George E. "The Virtual Discourses of Pamela Z". In Hassan, Salah M., and Cheryl Finley, eds. Diaspora, Memory, Place: David Hammons, Maria Magdalena Campos-Pons, Pamela Z. Munich: Prestel (2008), 266-81. Lewis, George E., "Foreword: After Afrofuturism." Journal of the Society for American Music, Volume 2, Number 2, pp. 139–53 (2008). Lewis, George E., "Stan Douglas's Suspiria: Genealogies of Recombinant Narrativity." In Stan Douglas, Past Imperfect: Works 1986-2007. Ostfildern, Germany: Hatje Cantz Verlag, 42-53 (2008). Lewis, George E., "Improvising Tomorrow's Bodies: The Politics of Transduction." E-misférica, Vol. 4.2, November 2007. Lewis, George E., "Mobilitas Animi: Improvising Technologies, Intending Chance." Parallax, Vol. 13, No. 4, (2007), 108–122. Lewis, George E., "Living with Creative Machines: An Improvisor Reflects." In Anna Everett and Amber J. Wallace, eds. AfroGEEKS: Beyond the Digital Divide. Santa Barbara: Center for Black Studies Research, 2007, 83-99. Lewis, George E. "Live Algorithms and the Future of Music." CT Watch Quarterly, May 2007. Lewis, George E. Improvisation and the Orchestra: A Composer Reflects. Contemporary Music Review, Vol. 25, Nos. 5/6, October/December 2006, pp. 429–34. Lewis, George E. "The Secret Love between Interactivity and Improvisation, or Missing in Interaction: A Prehistory of Computer Interactivity". In Fähndrich, Walter, ed. Improvisation V: 14 Beiträge. 
Winterthur: Amadeus (2003), 193-203. Lewis, George E. 2004. "Gittin' to Know Y'all: Improvised Music, Interculturalism and the Racial Imagination". Critical Studies in Improvisation, Vol. 1, No. 1, ISSN 1712-0624, www.criticalimprov.com. Lewis, George E. 2004. "Leben mit kreativen Maschinen: Reflexionen eines improvisierenden Musikers". In Knauer, Wolfram, ed. Improvisieren: Darmstädter Beiträge zur Jazzforschung, Band 8. Hofheim: Wolke Verlag, 123-144. Lewis, George. 2004. Afterword to "Improvised Music After 1950": The Changing Same. In Fischlin, Daniel, and Ajay Heble, eds. The Other Side of Nowhere: Jazz, Improvisation, and Communities in Dialogue. Middletown: Wesleyan University Press, 163-72. Lewis, George E., "Too Many Notes: Computers, complexity and culture in Voyager." Leonardo Music Journal 10, 2000, 33-39. Reprinted in Everett, Anna, and John T. Caldwell, eds. 2003. New Media: Theories and Practices of Intertextuality. New York and London: Routledge, 93-106. Lewis, George, "Teaching Improvised Music: An Ethnographic Memoir." In Zorn, John, ed. Arcana: Musicians on Music. New York: Granary Books (2000), 78-109. Lewis, George, "Improvised Music After 1950: Afrological and Eurological Perspectives." Black Music Research Journal, vol. 16, No.1, Spring 1996, 91-122. Excerpted in Cox, Christoph, and Daniel Warner. 2004. Audio Culture: Readings In Modern Music. New York: Continuum, 272-86. References External links George Lewis faculty profile from Columbia University site Casserley, Lawrence. Interview with George Lewis, discussing computer music and other topics, including improvisation and Voyager Golden, Barbara. "Conversation with George Lewis." eContact! 12.2 — Interviews (2) (April 2010). Montréal: CEC. 
American jazz trombonists Male trombonists Free jazz trombonists MacArthur Fellows 1952 births Living people Musicians from Chicago Tzadik Records artists Yale University alumni Columbia University faculty University of California, San Diego faculty American Book Award winners Corresponding Fellows of the British Academy Jazz musicians from Illinois 21st-century trombonists American male jazz musicians Globe Unity Orchestra members Sackville Records artists Music & Arts artists Black Saint/Soul Note artists Pi Recordings artists Charly Records artists Incus Records artists RogueArt artists 21st-century American male musicians
7256298
https://en.wikipedia.org/wiki/Sandstorm%20Enterprises
Sandstorm Enterprises
Sandstorm Enterprises was an American computer security software vendor founded in 1998 by Simson Garfinkel, James van Bokkelen, Gene Spafford, and Dan Geer. In January 2010, it was purchased by NIKSUN, Inc. Sandstorm was located in the greater Boston area. Sandstorm's major products were PhoneSweep, the first commercial multi-line telephone scanner (a war dialer), introduced in 1998, and NetIntercept, a commercial network forensics tool, introduced in 2001. Designed as a second-generation network analysis tool, NetIntercept operated primarily at the level of TCP and UDP data streams and the application-layer objects they transport. In 2002 Sandstorm purchased LanWatch, a commercial packet-oriented LAN monitor originally developed by FTP Software. LanWatch was sold as a separate product, but much of its functionality was used by NetIntercept to display individual packets. As of 2019, the PhoneSweep product is still sold and supported by NIKSUN. Core parts of the NetIntercept product also still exist, as incorporated into NIKSUN's own NetDetector network forensics product line. References External links PhoneSweep at NIKSUN, Inc. Computer security software companies Defunct software companies of the United States Software companies based in Massachusetts Companies based in Middlesex County, Massachusetts Software companies established in 1998 Software companies disestablished in 2010 1998 establishments in Massachusetts 2010 disestablishments in Massachusetts
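A stream-level forensics tool like NetIntercept must first reassemble captured packets into per-connection byte streams before application-layer objects can be parsed. The sketch below illustrates that general technique only; it is not Sandstorm's code, and the tuple-based segment format is an invented simplification.

```python
from collections import defaultdict

def reassemble(segments):
    """Reassemble one-directional TCP byte streams from captured segments.

    segments: iterable of (src, sport, dst, dport, seq, payload) tuples.
    Returns a dict mapping each connection 4-tuple to its reassembled bytes.
    Exact retransmissions collapse onto the same sequence number; a real
    tool must also handle partial overlaps, gaps, and sequence wraparound.
    """
    streams = defaultdict(dict)                  # 4-tuple -> {seq: payload}
    for src, sport, dst, dport, seq, payload in segments:
        streams[(src, sport, dst, dport)][seq] = payload
    out = {}
    for key, chunks in streams.items():
        data = bytearray()
        expected = min(chunks)                   # first sequence number seen
        for seq in sorted(chunks):
            if seq >= expected:                  # skip fully overlapping data
                data += chunks[seq]
                expected = seq + len(chunks[seq])
        out[key] = bytes(data)
    return out

# Segments arriving out of order, with one retransmission:
segs = [
    ("10.0.0.1", 1234, "10.0.0.2", 80, 103, b"lo"),
    ("10.0.0.1", 1234, "10.0.0.2", 80, 100, b"hel"),
    ("10.0.0.1", 1234, "10.0.0.2", 80, 100, b"hel"),  # retransmission
]
print(reassemble(segs)[("10.0.0.1", 1234, "10.0.0.2", 80)])  # b'hello'
```

Once each connection's byte stream is recovered this way, higher layers (HTTP requests, mail messages, file transfers) can be decoded from the ordered bytes rather than from individual packets.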
5913182
https://en.wikipedia.org/wiki/Google%20Account
Google Account
A Google Account is a user account that is required for access, authentication, and authorization to certain online Google services. It is also often used as a single sign-on for third-party services. Usage A Google Account is required for Gmail, Google Hangouts, Google Meet and Blogger. Some Google products do not require an account, including Google Search, YouTube, Google Books, Google Finance and Google Maps. However, an account is needed for uploading videos to YouTube and for making edits in Google Maps. After a Google Account is created, the owner may selectively enable or disable various Google applications. YouTube and Blogger maintain separate accounts for users who registered with the services before the Google acquisition. However, effective April 2011, YouTube users have been required to link to a separate Google Account if they wish to continue to log into that service. Google Account users may create a publicly accessible Google profile to configure their presentation on Google products to other Google users. A Google profile can be linked to a user's profiles on various social-networking and image-hosting sites, as well as user blogs. Third-party service providers may implement service authentication for Google Account holders via the Google Account mechanism. Security While creating a Google Account, users are asked to provide a recovery email address to allow them to reset their password if they have forgotten it, or if their account is hacked. In some countries, such as the United States, the United Kingdom and India, Google may also require one-time use of a mobile phone number to send an account validation code by SMS text messaging or voice message when creating a new account. Google also offers a 2-step verification option, for additional security against hacking, that requests a validation code each time the user logs into their Google Account.
The code is either generated by an application ("Google Authenticator" or other similar apps) or received from Google as an SMS text message, a voice message, or an email to another account. Trusted devices can be "marked" to skip this 2-step log-on authentication. When this feature is switched on, software that cannot provide the validation code (e.g. IMAP and POP3 clients) must use a unique 16-character alphanumeric password generated by Google instead of the user's normal password. Users who seek an even higher level of security protection, including users whose accounts could be attractive targets for hackers, such as celebrities, politicians, journalists, political activists and wealthy individuals, can opt in to Google's Advanced Protection Program. This program requires the user to purchase two U2F USB keys, used not for data storage but for identity verification. The U2F keys provide two-step verification during login; one serves as a backup in case the first is lost. The Advanced Protection Program includes further security measures to protect the user's account, such as restrictions on which applications the user can grant access to their account, and a more thorough identity verification process for regaining access to the account if the password is forgotten. On June 5, 2012, a new security feature was introduced to protect users from state-sponsored attacks. Whenever Google's analysis indicates that a government has attempted to compromise an account, a notice is displayed that reads "Warning: We believe state-sponsored attackers may be trying to compromise your account or computer." Account blocking Google may block an account for various reasons, such as "unusual activity" or entering an age "not old enough" to own a Google account. Reactivation is possible using web-forms, providing proof of identity through valid photo ID, or a credit card payment of US$0.30.
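The validation codes produced by authenticator apps follow the open TOTP standard (RFC 6238), which derives a short-lived numeric code from a shared secret and the current time. A minimal sketch (this is the published standard, not Google-internal code; the example secret is the RFC's own test key):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, time_step=30, digits=6):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32.upper())
    t = int(time.time()) if for_time is None else for_time
    counter = t // time_step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: SHA-1 secret "12345678901234567890", Unix time 59
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59))  # → 287082
```

Because both the server and the app compute the same code from the shared secret and the clock, no network round trip is needed at generation time, which is why such codes also arrive working offline.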
Other methods (such as sending a fax or uploading some requested document) may require human interaction and may take some "days or a couple of weeks" to be accomplished. Activity tracking The "My Activity" tool, launched in 2016 to supersede Google Search history and Google Web History, enables users to see and delete data tracked by Google through the Google Account. The tool shows which websites were visited using Chrome while logged in, devices used, apps used, Google products interacted with, etc. All information is laid out in a timeline-like layout. Users can choose to entirely disable tracking, or remove specific activities they do not want tracked. Google applications See also Apple ID Facebook Platform: Authentication Microsoft account OpenID References External links Account Federated identity
12431
https://en.wikipedia.org/wiki/Google%20Search
Google Search
Google Search (also known simply as Google) is a search engine provided by Google. Handling more than 3.5 billion searches per day, it has a 92% share of the global search engine market. It is also the most-visited website in the world. The order of search results returned by Google is based, in part, on a priority rank system called "PageRank". Google Search also provides many different options for customized searches, using symbols to include, exclude, specify or require certain search behavior, and offers specialized interactive experiences, such as flight status and package tracking, weather forecasts, currency, unit, and time conversions, word definitions, and more. The main purpose of Google Search is to search for text in publicly accessible documents offered by web servers, as opposed to other data, such as images or data contained in databases. It was originally developed in 1997 by Larry Page, Sergey Brin, and Scott Hassan. In June 2011, Google introduced "Google Voice Search" to search for spoken, rather than typed, words. In May 2012, Google introduced a Knowledge Graph semantic search feature in the U.S. Analysis of the frequency of search terms may indicate economic, social and health trends. Data about the frequency of use of search terms on Google can be openly queried via Google Trends and have been shown to correlate with flu outbreaks and unemployment levels, and to provide the information faster than traditional reporting methods and surveys. As of mid-2016, Google's search engine has begun to rely on deep neural networks. Search indexing Google indexes hundreds of terabytes of information from web pages. For websites that are currently down or otherwise not available, Google provides links to cached versions of the site, formed by the search engine's latest indexing of that page.
Additionally, Google indexes some file types, being able to show users PDFs, Word documents, Excel spreadsheets, PowerPoint presentations, certain Flash multimedia content, and plain text files. Users can also activate "SafeSearch", a filtering technology aimed at preventing explicit and pornographic content from appearing in search results. Despite Google Search's immense index, sources generally assume that Google indexes less than 5% of the total Internet, with the rest belonging to the deep web, inaccessible through its search tools. In 2012, Google changed its search indexing tools to demote sites that had been accused of piracy. In October 2016, Gary Illyes, a webmaster trends analyst with Google, announced that the search engine would be making a separate, primary web index dedicated to mobile devices, with a secondary, less up-to-date index for desktop use. The change was a response to the continued growth in mobile usage, and a push for web developers to adopt a mobile-friendly version of their websites. In December 2017, Google began rolling out the change, having already done so for multiple websites. "Caffeine" search architecture upgrade In August 2009, Google invited web developers to test a new search architecture, codenamed "Caffeine", and give their feedback. The new architecture provided no visual differences in the user interface, but added significant speed improvements and a new "under-the-hood" indexing infrastructure. The move was interpreted in some quarters as a response to Microsoft's recent release of an upgraded version of its own search service, renamed Bing, as well as the launch of Wolfram Alpha, a new search engine based on "computational knowledge". Google announced completion of "Caffeine" on June 8, 2010, claiming 50% fresher results due to continuous updating of its index. With "Caffeine", Google moved its back-end indexing system away from MapReduce and onto Bigtable, the company's distributed database platform.
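At its core, the kind of indexing described above maps each term to the documents that contain it, a structure conventionally called an inverted index. A toy sketch (illustrative only; Google's production pipeline is vastly more elaborate):

```python
from collections import defaultdict

def build_index(docs):
    """Map each lowercase term to the set of document IDs containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, *terms):
    """Return documents containing every query term (AND semantics)."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()

docs = {"d1": "giant brains think", "d2": "machines that think"}
index = build_index(docs)
print(sorted(search(index, "think")))              # ['d1', 'd2']
print(sorted(search(index, "machines", "think")))  # ['d2']
```

A query then touches only the postings for its terms rather than scanning every document, which is what makes web-scale full-text search feasible.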
"Medic" search algorithm update In August 2018, Danny Sullivan from Google announced a broad core algorithm update. According to analysis by the industry publications Search Engine Watch and Search Engine Land, the update demoted medical and health-related websites that were not user-friendly and did not provide a good user experience, which is why industry experts named it "Medic". Google holds YMYL (Your Money or Your Life) pages to very high standards, because misinformation on them can affect users financially, physically, or emotionally. The update therefore particularly targeted YMYL pages with low-quality content and misinformation, which resulted in the algorithm affecting health and medical-related websites more than others. However, many websites from other industries were also negatively affected. Performing a search Google Search consists of a series of localized websites. The largest of those, the google.com site, is the top most-visited website in the world. Some of its features include a definition link for most searches including dictionary words, the number of results returned for the search, links to other searches (e.g. for words that Google believes to be misspelled, it provides a link to the search results using its proposed spelling), and many more. Search syntax Google Search accepts queries as normal text, as well as individual keywords. It automatically corrects misspelled words, and yields the same results regardless of capitalization.
For more customized results, one can use a wide variety of operators, including, but not limited to:
OR – Search for webpages containing one of two similar queries, such as marathon OR race
- (minus sign) – Exclude a word or a phrase, so that "apple -tree" searches where the word "tree" is not used
"" – Force inclusion of a word or a phrase, such as "tallest building"
* – Placeholder symbol allowing for any substitute words in the context of the query, such as "largest * in the world"
.. – Search within a range of numbers, such as "camera $50..$100"
site: – Search within a specific website, such as "site:youtube.com"
define: – See a definition of a word, such as "define:phrase"
stocks: – See the stock price of investments, such as "stocks:googl"
related: – Find webpages related to specific URL addresses, such as "related:www.wikipedia.org"
cache: – Highlights the search words within the cached pages, so that "cache:www.google.com xxx" shows cached content with the word "xxx" highlighted
@ – Search for a specific word on social media networks, such as "@twitter"
Query expansion Google applies query expansion to submitted search queries, using techniques to deliver results that it considers "smarter" than the query users actually submitted.
This technique involves several steps, including:
Word stemming – Certain words can be reduced so that other, similar terms are also found in results, so that "translator" can also search for "translation"
Acronyms – Searching for abbreviations can also return results about the name in its full length, so that "NATO" can show results for "North Atlantic Treaty Organization"
Misspellings – Google will often suggest correct spellings for misspelled words
Synonyms – In most cases where a word is incorrectly used in a phrase or sentence, Google Search will show results based on the correct synonym
Translations – The search engine can, in some instances, suggest results for specific words in a different language
Ignoring words – In some search queries containing extraneous or insignificant words, Google Search will simply drop those specific words from the query
In 2008, Google started to give users autocompleted search suggestions in a list below the search bar while typing, originally with the approximate result count previewed for each listed search suggestion. "I'm Feeling Lucky" Google's homepage includes a button labeled "I'm Feeling Lucky". This feature originally allowed users to type in their search query, click the button and be taken directly to the first result, bypassing the search results page. With the 2010 announcement of Google Instant, an automatic feature that immediately displays relevant results as users are typing in their query, the "I'm Feeling Lucky" button disappeared, requiring users to opt out of Instant results through search settings to keep using the "I'm Feeling Lucky" functionality. In 2012, "I'm Feeling Lucky" was changed to serve as an advertisement for Google services; when users hover their mouse over the button, it spins and shows an emotion ("I'm Feeling Puzzled" or "I'm Feeling Trendy", for instance), and, when clicked, takes users to a Google service related to that emotion.
Tom Chavez of "Rapt", a firm helping to determine a website's advertising worth, estimated in 2007 that Google lost $110 million in revenue per year due to use of the button, which bypasses the advertisements found on the search results page. Special interactive features Besides the main text-based search-engine features of Google Search, it also offers multiple quick, interactive experiences. These include, but are not limited to:
Calculator
Time zone, currency, and unit conversions
Word translations
Flight status
Local film showings
Weather forecasts
Population and unemployment rates
Package tracking
Word definitions
Metronome
Roll a die
"Do a barrel roll" (search page spins)
"Askew" (results show up sideways)
"OK Google" conversational search During Google's developer conference, Google I/O, in May 2013, the company announced that users on Google Chrome and Chrome OS would be able to have the browser initiate an audio-based search by saying "OK Google", with no button presses required. After having the answer presented, users can follow up with additional, contextual questions; an example includes initially asking "OK Google, will it be sunny in Santa Cruz this weekend?", hearing a spoken answer, and replying with "how far is it from here?" An update to the Chrome browser with voice-search functionality rolled out a week later, though it required a button press on a microphone icon rather than "OK Google" voice activation. Google released a browser extension for the Chrome browser, named with a "beta" tag for unfinished development, shortly thereafter. In May 2014, the company officially added "OK Google" into the browser itself; they removed it in October 2015, citing low usage, though the microphone icon for activation remained available. In May 2016, 20% of search queries on mobile devices were done through voice. Search results Page layout At the top of the search page, the approximate result count and the response time (to two decimal places) are noted.
Each search result shows the page title and URL, a date where available, and a preview text snippet. Along with web search results, sections with images, news, and videos may appear. The length of the previewed text snippet was experimented with in 2015 and 2017. Universal search "Universal search" was launched by Google on May 16, 2007, as an idea that merged the results from different kinds of search types into one. Prior to Universal search, a standard Google search would consist of links only to websites. Universal search, however, incorporates a wide variety of sources, including websites, news, pictures, maps, blogs, videos, and more, all shown on the same search results page. Marissa Mayer, then-vice president of search products and user experience, described the goal of Universal search: "we're attempting to break down the walls that traditionally separated our various search properties and integrate the vast amounts of information available into one simple set of search results." In June 2017, Google expanded its search results to cover available job listings. The data is aggregated from various major job boards and collected by analyzing company homepages. Initially only available in English, the feature aims to simplify finding jobs suitable for each user. Rich snippets In May 2009, Google announced that they would be parsing website microformats to populate search result pages with "Rich snippets". Such snippets include additional details about results, such as displaying reviews for restaurants and social media accounts for individuals. In May 2016, Google expanded on the "Rich snippets" format to offer "Rich cards", which, similarly to snippets, display more information about results, but show it at the top of the mobile website in a swipeable carousel-like format. Originally limited to movie and recipe websites in the United States only, the feature expanded to all countries globally in 2017.
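Rich snippets are populated from structured markup that publishers embed in their pages, such as schema.org annotations in JSON-LD. A minimal illustrative example, generated here with Python (the review data is hypothetical; only the field names follow the schema.org vocabulary):

```python
import json

# Hypothetical review data; field names follow the schema.org vocabulary.
review = {
    "@context": "https://schema.org",
    "@type": "Review",
    "itemReviewed": {"@type": "Restaurant", "name": "Example Diner"},
    "reviewRating": {"@type": "Rating", "ratingValue": "4", "bestRating": "5"},
    "author": {"@type": "Person", "name": "A. Reviewer"},
}

# Publishers embed markup like this in a page for crawlers to parse.
html_snippet = '<script type="application/ld+json">{}</script>'.format(
    json.dumps(review, indent=2))
print(html_snippet)
```

A crawler that understands the vocabulary can then surface the rating and reviewer alongside the ordinary result link.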
Web publishers now have greater control over their rich snippets. Preview settings from these meta tags became effective in mid-to-late October 2019, with the global rollout taking about a week to complete. Knowledge Graph The Knowledge Graph is a knowledge base used by Google to enhance its search engine's results with information gathered from a variety of sources. This information is presented to users in a box to the right of search results. Knowledge Graph boxes were added to Google's search engine in May 2012, starting in the United States, with international expansion by the end of the year. The information covered by the Knowledge Graph grew significantly after launch, tripling its original size within seven months, and being able to answer "roughly one-third" of the 100 billion monthly searches Google processed in May 2016. The information is often used as a spoken answer in Google Assistant and Google Home searches. The Knowledge Graph has been criticized for providing answers without source attribution. Google Search has been accused of using so-called zero-click searches to prevent a large part of the traffic from leaving its page for third-party publishers. As a result, 71% of searches end on the Google search page. In the case of one specific query, out of 890,000 searches on Google only 30,000 resulted in the user clicking through to the result website. Personal tab In May 2017, Google enabled a new "Personal" tab in Google Search, letting users search for content in their Google accounts' various services, including email messages from Gmail and photos from Google Photos. Google Discover Google Discover, previously known as Google Feed, is a personalized stream of articles, videos, and other news-related content. The feed contains a "mix of cards" which show topics of interest based on users' interactions with Google, or topics they choose to follow directly.
Cards include "links to news stories, YouTube videos, sports scores, recipes, and other content based on what [Google] determined you're most likely to be interested in at that particular moment." Users can also tell Google they are not interested in certain topics to avoid seeing future updates. Google Discover launched in December 2016 and received a major update in July 2017. Another major update was released in September 2018, which renamed the app from Google Feed to Google Discover, updated the design, and added more features. Discover can be found on a tab in the Google app and by swiping left on the home screen of certain Android devices. As of 2019, Google no longer allows political campaigns worldwide to target their advertisements at people in order to influence their vote. Ranking of results PageRank Google's rise was largely due to a patented algorithm called PageRank which helps rank web pages that match a given search string. When Google was a Stanford research project, it was nicknamed BackRub because the technology checks backlinks to determine a site's importance. Other keyword-based methods to rank search results, used by many search engines that were once more popular than Google, would check how often the search terms occurred in a page, or how strongly associated the search terms were within each resulting page. The PageRank algorithm instead analyzes human-generated links, assuming that web pages linked from many important pages are themselves important. The algorithm computes a recursive score for pages, based on the weighted sum of other pages linking to them. PageRank is thought to correlate well with human concepts of importance. In addition to PageRank, Google has over the years added many other secret criteria for determining the ranking of resulting pages. This is reported to comprise over 250 different indicators, the specifics of which are kept secret to avoid difficulties created by scammers and to help Google maintain an edge over its competitors globally.
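The recursive scoring described above is commonly computed by power iteration: each page repeatedly redistributes its score along its outgoing links until the scores converge. A textbook sketch with the damping factor 0.85 from the original paper (illustrative only, not Google's production code):

```python
def pagerank(links, d=0.85, iterations=50):
    """Power-iteration PageRank: a page's score is a damped, weighted
    sum of the scores of the pages linking to it."""
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - d) / len(pages) for p in pages}
        for src, outs in links.items():
            if outs:
                share = rank[src] / len(outs)  # split score across out-links
                for dst in outs:
                    new[dst] += d * share
            else:  # dangling page: spread its score evenly
                for p in pages:
                    new[p] += d * rank[src] / len(pages)
        rank = new
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # 'c' — the most linked-to page ranks highest
```

Note how the score of "c" exceeds that of "a" even though both have in-links: "c" is endorsed by two pages, and endorsements from higher-ranked pages weigh more.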
PageRank was influenced by a similar page-ranking and site-scoring algorithm earlier used for RankDex, developed by Robin Li in 1996. Larry Page's patent for PageRank, filed in 1998, includes a citation to Li's earlier patent. Li later went on to create the Chinese search engine Baidu in 2000. In a potential hint of Google's future direction for their search algorithm, Google's then chief executive Eric Schmidt said in a 2007 interview with the Financial Times: "The goal is to enable Google users to be able to ask the question such as 'What shall I do tomorrow?' and 'What job shall I take?'". Schmidt reaffirmed this during a 2010 interview with the Wall Street Journal: "I actually think most people don't want Google to answer their questions, they want Google to tell them what they should be doing next." In 2013 the European Commission found that Google Search favored Google's own products, instead of the best result for consumers' needs. In February 2015 Google announced a major change to its mobile search algorithm which would favor mobile-friendly websites over others. Nearly 60% of Google searches come from mobile phones. Google says it wants users to have access to premium-quality websites. Websites that lack a mobile-friendly interface would be ranked lower, and it was expected that this update would cause a shake-up of ranks. Businesses that failed to update their websites accordingly could see a dip in their regular website traffic. Google optimization Because Google is the most popular search engine, many webmasters attempt to influence their website's Google rankings. An industry of consultants has arisen to help websites increase their rankings on Google and other search engines. This field, called search engine optimization, attempts to discern patterns in search engine listings, and then develop a methodology for improving rankings to draw more searchers to their clients' sites.
Search engine optimization encompasses both "on page" factors (like body copy, title elements, H1 heading elements and image alt attribute values) and off-page factors (like anchor text and PageRank). The general idea is to affect Google's relevance algorithm by incorporating the keywords being targeted in various places "on page", in particular the title element and the body copy (note: the higher up in the page, presumably the better its keyword prominence and thus the ranking). Too many occurrences of the keyword, however, cause the page to look suspect to Google's spam checking algorithms. Google has published guidelines for website owners who would like to raise their rankings when using legitimate optimization consultants. It has been hypothesized, and was allegedly the opinion of the owner of one business about which there had been numerous complaints, that negative publicity, for example numerous consumer complaints, may serve as well to elevate page rank on Google Search as favorable comments. The particular problem addressed in The New York Times article, which involved DecorMyEyes, was addressed shortly thereafter by an undisclosed fix in the Google algorithm. According to Google, it was not the frequently published consumer complaints about DecorMyEyes which resulted in the high ranking but mentions on news websites of events which affected the firm, such as legal actions against it. Google Search Console helps to check for websites that use duplicate or copyright content. "Hummingbird" search algorithm upgrade In 2013, Google significantly upgraded its search algorithm with "Hummingbird". Its name was derived from the speed and accuracy of the hummingbird. The change was announced on September 26, 2013, having already been in use for a month. "Hummingbird" places greater emphasis on natural language queries, considering context and meaning over individual keywords.
It also looks deeper at content on individual pages of a website, with improved ability to lead users directly to the most appropriate page rather than just a website's homepage. The upgrade marked the most significant change to Google search in years, with more "human" search interactions and a much heavier focus on conversation and meaning. Thus, web developers and writers were encouraged to optimize their sites with natural writing rather than forced keywords, and to make effective use of technical web development for on-site navigation. Google Doodles On certain occasions, the logo on Google's webpage will change to a special version, known as a "Google Doodle". This is a picture, drawing, animation, or interactive game that includes the logo. It is usually done for a special event or day, although not all of them are well known. Clicking on the Doodle links to a string of Google search results about the topic. The first was a reference to the Burning Man Festival in 1998, and others have been produced for the birthdays of notable people like Albert Einstein, historical events like the interlocking Lego block's 50th anniversary, and holidays like Valentine's Day. Some Google Doodles have interactivity beyond a simple search, such as the famous "Google Pacman" version that appeared on May 21, 2010. Smartphone apps Google offers a "Google Search" mobile app for Android and iOS devices. The mobile apps exclusively feature Google Discover and a "Collections" feature, in which the user can save any type of search result, such as images, bookmarks or map locations, into groups for later perusal. Android devices were introduced to a preview of the feed in December 2016, while it was made official on both Android and iOS in July 2017. In April 2016, Google updated its Search app on Android to feature "Trends"; search queries gaining popularity appeared in the autocomplete box along with normal query autocompletion.
The update received significant backlash, due to encouraging search queries unrelated to users' interests or intentions, prompting the company to issue an update with an opt-out option. In September 2017, the Google Search app on iOS was updated to feature the same functionality. In December 2017, Google released "Google Go", an app designed to enable use of Google Search on physically smaller and lower-spec devices in multiple languages. A Google blog post about designing "India-first" products and features explains that it is "tailor-made for the millions of people in [India and Indonesia] coming online for the first time". Discontinued features Translate foreign pages Until May 2013, Google Search had offered a feature to translate search queries into other languages. A Google spokesperson told Search Engine Land that "Removing features is always tough, but we do think very hard about each decision and its implications for our users. Unfortunately, this feature never saw much pick up". Instant search was announced in September 2010 as a feature that displayed suggested results while the user typed in their search query, initially only in select countries or to registered users. The primary advantage of the new system was its ability to save time, with Marissa Mayer, then-vice president of search products and user experience, proclaiming that the feature would save 2–5 seconds per search, elaborating that "That may not seem like a lot at first, but it adds up. With Google Instant, we estimate that we'll save our users 11 hours with each passing second!" Matt Van Wagner of Search Engine Land wrote that "Personally, I kind of like Google Instant and I think it represents a natural evolution in the way search works", and also praised Google's efforts in public relations, writing that "With just a press conference and a few well-placed interviews, Google has parlayed this relatively minor speed improvement into an attention-grabbing front-page news story". 
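Suggest-as-you-type features such as autocomplete and Instant are conventionally backed by a prefix index, for example a trie mapping each typed prefix to stored queries. A toy sketch (illustrative only, not Google's implementation):

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # char -> TrieNode
        self.queries = []    # queries reachable through this prefix

class Autocomplete:
    """Toy prefix index for suggest-as-you-type lookups."""

    def __init__(self):
        self.root = TrieNode()

    def add(self, query):
        node = self.root
        for ch in query:
            node = node.children.setdefault(ch, TrieNode())
            node.queries.append(query)  # each prefix node remembers the query

    def suggest(self, prefix, limit=5):
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        return node.queries[:limit]

ac = Autocomplete()
for q in ["google search", "google maps", "gmail"]:
    ac.add(q)
print(ac.suggest("goo"))  # ['google search', 'google maps']
```

Storing completions at every prefix node trades memory for constant-time lookups, which is the relevant trade-off when suggestions must appear on every keystroke.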
The upgrade also became notable for the company switching Google Search's underlying technology from HTML to AJAX. Instant Search could be disabled via Google's "preferences" menu for those who didn't want its functionality. The publication 2600: The Hacker Quarterly compiled a list of words that Google Instant did not show suggested results for, with a Google spokesperson giving the following statement to Mashable: PC Magazine discussed the inconsistency in how some forms of the same topic are allowed; for instance, "lesbian" was blocked, while "gay" was not, and "cocaine" was blocked, while "crack" and "heroin" were not. The report further stated that seemingly normal words were also blocked due to pornographic innuendos, most notably "scat", likely due to having two completely separate contextual meanings, one for music and one for a sexual practice. On July 26, 2017, Google removed Instant results, due to a growing number of searches on mobile devices, where interaction with search, as well as screen sizes, differ significantly from a computer. "Instant previews" allowed previewing screenshots of search results' web pages without having to open them. The feature was introduced in November 2010 to the desktop website and removed in April 2013, citing low usage. Dedicated encrypted search page Various search engines provide encrypted Web search facilities. In May 2010 Google rolled out SSL-encrypted web search. The encrypted search was accessed at encrypted.google.com. However, the web search is encrypted via Transport Layer Security (TLS) by default today, thus every search request should be automatically encrypted if TLS is supported by the web browser. On its support website, Google announced that the address encrypted.google.com would be turned off on April 30, 2018, stating that all Google products and most new browsers use HTTPS connections as the reason for the discontinuation.
Real-Time Search Google Real-Time Search was a feature of Google Search in which search results also sometimes included real-time information from sources such as Twitter, Facebook, blogs, and news websites. The feature was introduced on December 7, 2009 and went offline on July 2, 2011, after the deal with Twitter expired. Real-Time Search included Facebook status updates beginning on February 24, 2010. A feature similar to Real-Time Search was already available on Microsoft's Bing search engine, which showed results from Twitter and Facebook. The interface for the engine showed a live, descending "river" of posts in the main region (which could be paused or resumed), while a bar chart metric of the frequency of posts containing a certain search term or hashtag was located on the right hand corner of the page above a list of most frequently reposted posts and outgoing links. Hashtag search links were also supported, as were "promoted" tweets hosted by Twitter (located persistently on top of the river) and thumbnails of retweeted image or video links. In January 2011, geolocation links of posts were made available alongside results in Real-Time Search. In addition, posts containing syndicated or attached shortened links were made searchable by the link: query option. In July 2011 Real-Time Search became inaccessible, with the Real-Time link in the Google sidebar disappearing and a custom 404 error page generated by Google returned at its former URL. Google originally suggested that the interruption was temporary and related to the launch of Google+; they subsequently announced that it was due to the expiry of a commercial arrangement with Twitter to provide access to tweets. Privacy Searches made by search engines, including Google, leave traces. This raises concerns about privacy. 
In principle, if details of a user's searches are found, those with access to the information—principally state agencies responsible for law enforcement and similar matters—can make deductions about the user's activities. This has been used for the detection and prosecution of lawbreakers; for example, a murderer was found and convicted after searching for terms such as "tips with killing with a baseball bat". A search may leave traces both on a computer used to make the search, and in records kept by the search provider. When using a search engine through a browser program on a computer, search terms and other information may be stored on the computer by default, unless the browser is set not to do this, or they are erased. Saved terms may be discovered on forensic analysis of the computer. An Internet service provider (ISP) or search engine provider (e.g. Google) may store records which relate search terms to an IP address and a time. Whether such logs are kept, and access to them by law enforcement agencies, is subject to legislation in different jurisdictions and working practices; the law may mandate, prohibit, or say nothing about logging of various types of information. Some search engines, located in jurisdictions where it is not illegal, make a feature of not storing user search information. The keywords suggested by the Autocomplete feature show a population of users' research which is made possible by an identity management system. Volumes of personal data are collected via Eddystone web and proximity beacons. Google has been criticized for placing long-term cookies on users' machines to store these preferences, a tactic which also enables them to track a user's search terms and retain the data for more than a year. Since 2012, Google Inc. has globally introduced encrypted connections for most of its clients, in order to bypass government blocking of its commercial and IT services. 
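The kind of provider-side log record described above — one that ties a search term to an IP address and a time, with retention governed by local law — can be sketched as follows. The field names, the retention-window check, and the example values are illustrative assumptions, not any provider's actual schema.

```python
# Hypothetical search-provider log entry: the text notes that such records
# may relate a search term to an IP address and a time, and that how long
# they are kept depends on jurisdiction. Field names are ours.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class SearchLogEntry:
    query: str
    client_ip: str
    timestamp: datetime

def is_expired(entry: SearchLogEntry, retention_days: int, now: datetime) -> bool:
    """True if the entry is older than a jurisdiction's retention window."""
    return now - entry.timestamp > timedelta(days=retention_days)

entry = SearchLogEntry("weather boston", "203.0.113.7",
                       datetime(2023, 1, 1, tzinfo=timezone.utc))
now = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(is_expired(entry, retention_days=180, now=now))  # True: entry is 365 days old
```

Even so minimal a record is enough for the deductions the text describes: joining query, IP, and timestamp identifies who searched for what, and when.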
Redesign In late June 2011, Google introduced a new look to the Google home page in order to boost the use of the Google+ social tools. One of the major changes was replacing the classic navigation bar with a black one. Google's digital creative director Chris Wiggins explains: "We're working on a project to bring you a new and improved Google experience, and over the next few months, you'll continue to see more updates to our look and feel." The new navigation bar was negatively received by a vocal minority. In November 2013, Google started testing yellow labels for advertisements displayed in search results, to improve user experience. The new labels, highlighted in yellow color and aligned to the left of each sponsored link, help users differentiate between organic and sponsored results. On December 15, 2016, Google rolled out a new desktop search interface that mimics their modular mobile user interface. The mobile design consists of a tabular layout that highlights search features in boxes and works by imitating the desktop Knowledge Graph real estate, which appears in the right-hand rail of the search engine results page. These featured elements frequently include Twitter carousels, People Also Search For, and Top Stories (vertical and horizontal design) modules. The Local Pack and Answer Box were two of the original features of the Google SERP that were primarily showcased in this manner, but this new layout creates a previously unseen level of design consistency for Google results. Search products In addition to its tool for searching web pages, Google also provides services for searching images, Usenet newsgroups, news websites, videos (Google Videos), searching by locality, maps, and items for sale online. Google Videos allows searching the World Wide Web for video clips. The service evolved from Google Video, Google's discontinued video hosting service that also allowed users to search the web for video clips. 
By 2012, Google had indexed over 30 trillion web pages and was receiving 100 billion queries per month. It also caches much of the content that it indexes. Google operates other tools and services including Google News, Google Shopping, Google Maps, Google Custom Search, Google Earth, Google Docs, Picasa (discontinued), Panoramio (discontinued), YouTube, Google Translate, Google Blog Search and Google Desktop Search (discontinued). There are also products available from Google that are not directly search-related. Gmail, for example, is a webmail application, but still includes search features; Google Browser Sync does not offer any search facilities, although it aims to organize a user's browsing time. Energy consumption In 2009, Google claimed that a search query requires altogether about 1 kJ or 0.0003 kW·h, which is enough to raise the temperature of one liter of water by 0.24 °C. According to green search engine Ecosia, the industry standard for search engines is estimated to be about 0.2 grams of CO2 emission per search. Google's 40,000 searches per second translate to 8 kg of CO2 per second, or over 252 million kilograms of CO2 per year. Criticism Complaints about indexing In 2003, The New York Times complained about Google's indexing, claiming that Google's caching of content on its site infringed its copyright for the content. In both Field v. Google and Parker v. Google, the United States District Court of Nevada ruled in favor of Google. January 2009 malware bug Google flags search results with the message "This site may harm your computer" if the site is known to install malicious software in the background or otherwise surreptitiously. For approximately 40 minutes on January 31, 2009, all search results were mistakenly classified as malware and could therefore not be clicked; instead a warning message was displayed and the user was required to enter the requested URL manually. The bug was caused by human error. 
The URL of "/" (which expands to all URLs) was mistakenly added to the malware patterns file. Possible misuse of search results In 2007, a group of researchers observed a tendency for users to rely on Google Search exclusively for finding information, writing that "With the Google interface the user gets the impression that the search results imply a kind of totality. ... In fact, one only sees a small part of what one could see if one also integrates other research tools." In 2011, Internet activist Eli Pariser showed that Google Search query results are tailored to individual users, effectively isolating users in what he defined as a filter bubble. Pariser holds algorithms used in search engines such as Google Search responsible for catering "a personal ecosystem of information". Although contrasting views have mitigated the potential threat of "informational dystopia" and questioned the scientific nature of Pariser's claims, filter bubbles have been mentioned to account for the surprising results of the 2016 U.S. presidential election, alongside fake news and echo chambers, suggesting that Facebook and Google have designed personalized online realities in which "we only see and hear what we like". FTC fines In 2012, the US Federal Trade Commission fined Google US$22.5 million for violating an agreement not to circumvent the privacy protections of users of Apple's Safari web browser. The FTC was also continuing to investigate whether Google's favoring of its own services in its search results violated antitrust regulations. Big Data and human bias Google search engine robots are programmed to use algorithms that understand and predict human behavior. The book Race After Technology: Abolitionist Tools for the New Jim Code by Ruha Benjamin discusses human bias as a behavior that the Google search engine can recognize. In 2016, users who searched Google for "three Black teenagers" were presented with images of criminal mugshots of young African American teenagers. 
Then, the users searched "three White teenagers" and were presented with photos of smiling, happy teenagers. They also searched for "three Asian teenagers", and very revealing photos of Asian girls and women appeared. Benjamin concluded that these results reflect human prejudice and views on different ethnic groups. A group of analysts explained the concept of a racist computer program: "The idea here is that computers, unlike people, can't be racist but we're increasingly learning that they do in fact take after their makers...Some experts believe that this problem might stem from the hidden biases in the massive piles of data that the algorithms process as they learn to recognize patterns...reproducing our worst values". Trademark As people talk about "googling" rather than searching, the company has taken some steps to defend its trademark, in an effort to prevent it from becoming a generic trademark. This has led to lawsuits, threats of lawsuits, and the use of euphemisms, such as calling Google Search a famous web search engine. See also Timeline of Google Search Censorship by Google § Google Search Google (verb) Dragonfly (search engine) Google bomb Google Panda Google Penguin Googlewhack Halalgoogling Reunion List of search engines Comparison of web search engines History of Google List of Google products References Further reading Google Hacks from O'Reilly is a book containing tips about using Google effectively. Now in its third edition (2006). Google: The Missing Manual by Sarah Milstein and Rael Dornfest (O'Reilly, 2004). How to Do Everything with Google by Fritz Schneider, Nancy Blachman, and Eric Fredricksen (McGraw-Hill Osborne Media, 2003). Google Power by Chris Sherman (McGraw-Hill Osborne Media, 2005). External links The Original Google! Google search trends Internet search engines Alphabet Inc. 
Search Multilingual websites Internet properties established in 1997 Computer-related introductions in 1997 1997 establishments in the United States Websites which mirror Wikipedia
SoundFont
SoundFont is a brand name that collectively refers to a file format and associated technology that uses sample-based synthesis to play MIDI files. It was first used on the Sound Blaster AWE32 sound card for its General MIDI support. Specification The newest version of the SoundFont file format is 2.04 (often incorrectly called 2.4). It is based on the RIFF format. A detailed description can be found in the specification, which is currently only available as a copy on various company sites. History The original SoundFont file format was developed in the early 1990s by E-mu Systems and Creative Labs. A specification for this version was never released to the public. The first and only major device to utilize this version was Creative's Sound Blaster AWE32 in 1994. Files in this format conventionally have the file extension of .sbk. SoundFont 2.0 was developed in 1996. This file format generalized the data representation using perceptually additive real world units, redefined some of the instrument layering features within the format, added true stereo sample support and removed some obscure features of the first version whose behavior was difficult to specify. This version was fully disclosed as a public specification, with the goal of making the SoundFont format an industry standard. All SoundFont 1.0 compatible devices were updated to support the SoundFont 2.0 format shortly after it was released to the public, and consequently the 1.0 version became obsolete. Files in this and all other 2.x formats (see below) conventionally have the file extension of .sf2. Version 2.01 (usually, but incorrectly, called 2.1) of the SoundFont file format was introduced in 1998 with an E-mu sound card product called the Audio Production Studio. The 2.01 version added features allowing sound designers to configure the way MIDI controllers influence synthesizer parameters. 
The 2.01 format is bidirectionally compatible with 2.0, which means that synthesizers capable of rendering 2.01 format will also by definition render 2.0 format, and synthesizers that are only capable of rendering 2.0 format will also read and render 2.01 format, but just not apply the new features. SoundFont 2.04 (there never was a 2.02 or a 2.03 version) was introduced in 2005 with the Sound Blaster X-Fi. The 2.04 format added support for 24-bit samples. The 2.04 format is bidirectionally compatible with the 2.01 format, so synthesizers that are only capable of rendering 2.0 or 2.01 format would automatically render instruments using 24-bit samples at 16-bit precision. SoundFont is a registered trademark of Creative Technology, Ltd., and the exclusive license for re-formatting and managing historical SoundFont content has been acquired by Digital Sound Factory. Functionality MIDI files do not contain any sounds, only instructions to play them. To play such files, sample-based MIDI synthesizers use recordings of instruments and sounds stored in a file or ROM chip. SoundFont-compatible synthesizers allow users to use SoundFont banks with custom samples to play their music. A SoundFont bank contains base samples in PCM format (similar to WAV files) that are mapped to sections on a musical keyboard. A SoundFont bank also contains other music synthesis parameters such as loops, vibrato effect, and velocity-sensitive volume changing. SoundFont banks can conform to standard sound sets such as General MIDI, or use other wholly custom sound-set definitions. 
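The keyboard mapping described above — a PCM sample recorded at one pitch being reused across a section of the keyboard — rests on a standard resampling ratio. A minimal sketch, assuming equal temperament; the function name and structure are ours, not part of the SoundFont specification:

```python
# Sample-based synthesis: to sound MIDI note n from a sample recorded at
# root key r, the player resamples by 2**((n - r) / 12), the equal-
# temperament frequency ratio between the two notes.
def playback_rate(midi_note: int, root_key: int) -> float:
    """Resampling ratio relative to the sample's original pitch."""
    return 2.0 ** ((midi_note - root_key) / 12.0)

# One octave above the root doubles the playback rate:
print(playback_rate(72, 60))            # 2.0
# A perfect fifth above the root is roughly 1.498x:
print(round(playback_rate(67, 60), 3))  # 1.498
```

Because large shifts audibly distort a sample's timbre, practical SoundFont banks record an instrument at several root keys and map each sample only to nearby notes.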
SoundFont creation software (.sf2 format) Several editors are available: Vienna from Creative Labs, requiring a particular sound card (such as Sound Blaster) Viena (with a single "n"), created in 2002 Swami, a collection of free software for editing and managing musical instruments for MIDI music composition, used mainly under Linux Polyphone, a free editor for Windows, Mac OS X and Linux, created in 2013 See also DLS format SFZ (file format) General MIDI (GM) and its extension Roland GS (GS) FluidSynth, TiMidity++ and WildMIDI (software synthesizer) Gravis Ultrasound (sound card) List of music software References Resources SoundFonts SoundFont 2.04 specification Audio codecs Audio software Software synthesizers MIDI standards
ScanSafe
ScanSafe was a privately held company backed by investors Benchmark Capital and Scale Venture Partners, until its 2009 acquisition by Cisco Systems. The company provided Web-based "Software as a service" (SaaS) to organizations. History Co-founded in 1999 by brothers Eldar and Roy Tuvey, its services block malware and secure the use of the Web and messaging. Noted as being the first to successfully deliver a Secure Web Gateway service, the company competes with similar services offered by Blue Coat Systems, MessageLabs, Purewire, Webroot, Websense and Zscaler. ScanSafe has offices in London, England and San Francisco, California and maintains alliance partnerships with Google, AT&T, Sprint, Kaspersky, Telus, NEC, Orange Business Services, Integralis, SoftScan, TopNordic, Viatel, Ancoris, and FVC. MessageLabs, a major provider of Messaging services ended a strategic partnership between the two companies in 2006. MessageLabs introduced a competing in-house product titled Web Security Service Version 2, resulting in a court judgment brought about by ScanSafe which required MessageLabs to notify all prospective clients that the Version 2 service is not based on ScanSafe technology. In November 2007, a malware outbreak on the Indiatimes website was reported by ScanSafe. Whilst the website wasn't the only victim, the attack was notable due to the popularity of the website and in the number of vulnerabilities the malware attempted to exploit. Alexa regularly ranks Indiatimes as one of the top 250 most visited websites. In January 2009, ScanSafe reported a malware infection on the official website of Paris Hilton. In October 2009, ScanSafe discontinued Scandoo, a free service that provided advance warning for security risks and offensive content in search engine results. In October 2009, Cisco announced the acquisition of ScanSafe for approximately US$183 million. ScanSafe was integrated into the Cisco Security business unit. The deal was completed December 2009. 
Recognition In 2007 ScanSafe was named one of the top 100 technology startups in the world by Red Herring, awarded Best Software as a Service (SaaS) Solution by the Codie awards and won Best Content Security Solution for 3 consecutive years as voted by SC Magazine readers. In September 2008 ScanSafe were awarded CNET Security Product or Service of the Year for their Anywhere+ product, a software based filtering solution for mobile users. A 2008 report by Gartner identified ScanSafe as a challenger in the Secure Web Gateway (SWG) market, having moved from the visionary position previously occupied in 2007. Gartner acknowledged strong partner relationships and the ability to offer an excellent solution for companies with a high percentage of mobile workers and/or remote offices, but noted the service can be expensive and that report archiving is limited. In October 2009, IDC reported that ScanSafe holds over 30% of the worldwide SaaS Web Security market, more than any other vendor in the sector. References Software companies established in 2004
2012 Troy Trojans football team
The 2012 Troy Trojans football team represented Troy University during the 2012 NCAA Division I FBS football season. They were led by 22nd-year head coach Larry Blakeney and played their home games at Veterans Memorial Stadium. They were a member of the Sun Belt Conference. They finished the season 5–7, 3–5 in Sun Belt play to finish in a tie for sixth place. Schedule Game summaries @ UAB Louisiana–Lafayette Mississippi State @ North Texas @ South Alabama WKU FIU @ Florida Atlantic @ Tennessee Navy Arkansas State @ Middle Tennessee References Troy Troy Trojans football seasons Troy Trojans football
Yang Yuanqing
Yang Yuanqing (, born 12 November 1964) is a Chinese business executive and philanthropist who is the current chairman and CEO of Lenovo. Early life and education Yang was born on 12 November 1964 to parents both educated as surgeons. He spent his childhood in Hefei in Anhui province. He grew up poor, as his parents were paid the same salaries as manual laborers. Yang's parents endured repeated persecution during the Cultural Revolution. Yang's father, Yang Furong, was a disciplined man with strict standards. Yang said of his father, "If he set a target, no matter what happened, he wanted to reach it." While his parents wanted him to pursue a career in medicine, and he had a budding interest in literature, Yang decided to study computer science on the advice of a family friend who was a university professor. Yang earned an undergraduate degree in computer science from Shanghai Jiaotong University in 1986 and graduated with a master's degree from the University of Science and Technology of China (USTC) in 1988. Lenovo Yang spotted a newspaper advertisement for jobs at Lenovo while in Beijing performing research for his master's degree. Yang had initially planned on becoming a university professor but took a risk and accepted a position with Lenovo in sales. He was paid the equivalent of US$30 per month. In 1989, Yang joined Legend, as Lenovo was then known, at the age of 25. He was quickly promoted. Yang travelled to meet distributors throughout China and used his technical knowledge to achieve a strong sales record. Yang also stood out at Lenovo for being a quiet, deep thinker. These qualities caught the attention of Liu Chuanzhi, who later promoted Yang to head Lenovo's personal computer business at just 29 years old. Yang was elevated to CEO of the whole company when Liu retired in 2001. Liu described Yang as "A man who moves forward, takes risks and aims to innovate." 
Liu also said, "I had been observing Yang a long time before I appointed him to take over the PC business. He had clear goals, was broad-minded and straightforward. We trusted him." Yang's first major task at Lenovo was to write a bid to become an IBM reseller. After submitting his bid, Yang discovered that he had quoted twice the price of his competitors. Within a year of joining Lenovo in 1988, Yang had lost interest in sales and had taken the TOEFL in preparation to study overseas. Yang stayed on after repeated requests from Liu Chuanzhi. Yang believed that he would benefit from exposure to American business practices, but Liu persuaded him to delay his plans for two years. However, problems at Legend caused by lower import duties on personal computers did not allow for this. Yang decided that Lenovo would implement specific job descriptions with clear responsibilities and a system of performance evaluations used to determine annual bonuses. At the time, most Chinese enterprises distributed bonuses of equal size to all employees, there was little sense of responsibility, and workers passively waited for superiors to issue instructions. When Yang took over Lenovo's personal computer division, he strongly discouraged the use of formal titles and required staff to address each other by their given names. Yang even required managers to stand outside their offices each morning to greet their employees while carrying signs with their first names. Yang's division moved to a new building in 1997. He used the move to break Lenovo's cultural links to the past. He insisted on a more formal dress code and training all employees in telephone etiquette; Yang wanted his people to think and act like high-tech workers in developed markets. After addressing human resources issues, Yang moved on to distribution. Due to China's large territory, large population, varying degrees of economic development, and widely different local regulations, Lenovo was having difficulty operating nationwide. 
While Lenovo had been using direct sales and a network of distributors, Yang gave up on direct sales in favor of exclusively using independent agents to avoid the costs of administering a complex sales network. This action resulted in Lenovo cutting its sales staff from over 100 to just 18 in 1994. To gain confidence among distributors, Lenovo provided a wide range of products, offered reasonable prices, and closely supervised the marketing of its products to look out for the interests of distributors. In contrast to Lenovo, foreign firms often tried to squeeze distributors' margins. Yang ensured proper training of distributors and brought in Microsoft and Intel to help with these efforts. Yang also set up a system to monitor the sales, inventory, cash flow, compliance, and pricing of distributors. Many analysts cite Lenovo's distribution system and after sales service as the key to its expansion. Yang Yuanqing was chairman of Lenovo's board from 2004 to 2008. In February 2009, Yang gave up his position as chairman and again became CEO at Lenovo. In 2005, Yang Yuanqing undertook a deal with Microsoft to have Windows preloaded on most Lenovo computers sold in China. In exchange, Microsoft China offered a rebate on Windows and marketing assistance. Other manufacturers adopted this approach afterwards, partly due to Yang's substantial influence in China's technology industry. Microsoft tripled its sales of preloaded versions of Windows within a year as a result. The change was requested by Bill Gates and Steve Ballmer to combat what they referred to as a rise in the number of pirated versions of Windows. These versions were predominantly being installed on the hard drives of computers with no bundled operating system. Steve Ballmer said, "Yuanqing made a huge difference. He was willing to go out on a limb." Yang also proved to be effective at navigating American politics. In early 2006, the U.S. 
State Department was harshly criticized for purchasing 16,000 computers from Lenovo. Critics attempted to smear Lenovo as controlled by the Chinese government and a potential vehicle for espionage against the United States. Yang spoke out forcefully and publicly to defend Lenovo. He said, "We are not a government-controlled company." He pointed out that Lenovo pioneered China's transition to a market economy and that in the early 1990s it had fought and beaten four state-owned enterprises that dominated the Chinese computer market. Those firms had the full backing of the state while Lenovo received no special treatment. The State Department deal went through. Yang worried that fears about Lenovo's supposed connections to the Chinese government would be an ongoing issue in the United States. Yang worked to ease worries by communicating directly with Congress. In June 2006, Yang arranged to be seated next to C. Richard D'Amato, a member of the congressional committee that had earlier raised concerns about the security of Lenovo's products. D'Amato later stated that he was impressed with Yang's candor. The issue soon faded away. While Lenovo's official language is English, Yang initially did not understand the language well; he relocated his family to Morrisville to improve his language skills and soak up American culture. Yang also hired a private tutor and watched cable news to practice. Yang also sent many Lenovo executives to the US for long postings. One American Lenovo executive interviewed by The Economist praised Yang for his efforts to make Lenovo a friendly place for foreigners to work. As of 2013, Lenovo's top 14 executives come from seven countries. Lenovo has acquired companies all over the world. The company has dual headquarters in Beijing and Morrisville, the former home of IBM's personal computer business. Yang has created a "performance culture" instead of the traditional Chinese work style of "waiting to see what the emperor wants." 
Yang holds an annual banquet at his home in Beijing for Lenovo's top executives. Traditionally, each guest at the banquet stands up and uses a toast to set goals for their business unit. Yang has said that when Lenovo enters a new market they intend to be number one. Yang stated, “If you don’t have enough scale, if you don’t have enough volume, it’s hard to make money. If you don’t have enough market share, it’s hard to make money. That’s why we enter the markets one by one. When we enter a market, we want to quickly get double-digit market share.” In 2012, Yang received a $3 million bonus as a reward for record profits, which he then redistributed to about 10,000 of Lenovo's employees. According to Lenovo spokesman Jeffrey Shafer, Yang felt that it would be the right thing to "redirect [the money] to the employees as a real tangible gesture for what they [had] done." The bonuses were mostly distributed among staff working in positions such as production and reception, who received an average of 2,000 yuan, or about US$314. This amount of money was almost equivalent to a month's pay for the typical Lenovo worker in China. Yang contributed another $3.25 million bonus to 10,000 Lenovo employees in 2013. Employees in 20 countries benefited from Yang's gift. 85% of recipients were in mainland China. As in 2012, these workers were typically hourly production staff. Shafer also said that Yang, who owns about eight percent of Lenovo's stock, "felt that he was rewarded well simply as the owner of the company." According to Lenovo's annual report, Yang earned $14 million, including $5.2 million in bonuses, during the fiscal year that ended in March 2012. Yang greatly increased his ownership stake in Lenovo by acquiring 797 million shares in 2011. Before, he owned only 70 million shares. Yang said, "While the transaction is a personal financial matter, I want to be very clear that my decision to make this investment is based on my strong belief in the company's very bright future. 
Our culture is built on commitment and ownership – we do what we say, and we own what we do. My decision to increase my holdings represents my steadfast belief in these principles." In April 2015, Yang required all members of Lenovo's media team to be active on at least one social media outlet. Yang is an active user of Twitter, LinkedIn, and Sina Weibo. Yang encourages his team to talk about their personal lives on social media. He said, "I cannot just promote a Lenovo product every day. I have to get people interested first and then find the opportunity to promote it once in awhile.” Yang is often referred to as "YY" by his colleagues at Lenovo. Awards and recognition Yang was awarded the May Fourth Youth Medal, by the All-China Youth Federation in 1999. In 1999 and 2001, the magazine BusinessWeek named him one of the "Stars of Asia." In 2004, he was listed among "Asia's 25 Most Influential Business Leaders" by Fortune Asia. Yang was named "2007 Chinese Business Leader" by Fortune China. In 2008, Forbes Asia named Yang "Businessman of the Year." In 2011, Finance Asia named Yang the "Best CEO in China." In December 2012, Yang was named one of the "2012 CCTV China Economic Figures" in a televised award ceremony. Yang received the same award in 2004. During the ceremony Yang said, "I have a dream that Lenovo will become the pride of China in the IT industry. Lenovo is my life's struggle and career, I have invested all of my energy into it. I firmly believe that Lenovo, a product of China, will stand atop the world's stage. As you can now see, our dream is being realized step-by-step." On 1 May 2014, Yang received the 2014 Edison Achievement Award in San Francisco at the annual Edison Awards Gala. Yang shared the award with Elon Musk of Tesla Motors and SpaceX. The Edison Awards honor innovation in science and technology and recognize individuals for their “broad contributions to technical innovation.” Yang was the first person from Asia ever to receive the award. 
Past winners include Steve Jobs, Ted Turner, Doug Ivester, and Martha Stewart. In 2015, Yang was listed on Forbes billionaire list. Yang was selected to accompany the Chinese president Xi Jinping on a state visit to the United States in September 2015. Public service, philanthropy, and other activities Yang serves on the National Committee of the Chinese People's Political Consultative Conference, China's top governmental advisory body that includes more than 2,000 of China's elites from all sectors of society. In 2014, he pushed for legislation to protect privacy and personal data on the web and electronic devices. Yang said that while the internet has brought many advances it also brings new challenges such as protecting privacy and securing personal information. Yang said that legal loopholes and widespread corruption create major challenges to securing personal data. Yang made his proposal at the advisory body's annual meeting. Yang also serves on the board of China's national Youth League, as director of the China Entrepreneurs' Association, and as a member of the New York Stock Exchange's International Advisory Committee. Yang also teaches as a guest lecturer at China's University of Science and Technology. In March 2015, Yang joined the CEO Roundtable on Cancer, an international non-profit group founded in 2001, focused on preventing cancer and advancing research that promotes improved patient outcomes. The group is known for its "CEO Cancer Gold Standard," a workplace wellness program that promotes risk reduction, early detection, and effective treatment. Yang publicly pledged to implement this standard at Lenovo. In October 2015, the University of North Carolina announced a donation of US$1 million from Yang Yuanqing to fund biomedical research. Yang's donation funds grants of $75,000 per scholar. 
The first recipients were Jonathan Berg, an associate professor of genetics and medicine; Maureen Su, an associate professor of pediatrics, microbiology and immunology; and Yisong Wan, an associate professor of microbiology and immunology. References External links Lenovo Official website Businesspeople from Anhui 1964 births Living people Lenovo people Chinese computer programmers Chinese billionaires Businesspeople in information technology University of Science and Technology of China alumni Shanghai Jiao Tong University alumni People from Hefei 21st-century Chinese businesspeople
3416168
https://en.wikipedia.org/wiki/Fast%20Hack%27em
Fast Hack'em
Fast Hack'em is a Commodore 64 fast disk/file copier, nibbler and disk editor written by Mike J. Henry and released in 1985. It was distributed in the U.S. and Canada via Henry's "Basement Boys Software", and in the U.K. via Datel Electronics. In the U.S., it retailed for $29.95. Features The most popular feature of Fast Hack'em was its ability to produce copies of copy-protected commercial software. When using the nibbler, disk copying is done at a very low level, bit-by-bit rather than using standard Commodore DOS commands. This effectively nullifies the efficacy of deliberate disk errors, non-standard track layouts, and related forms of copy prevention. Copying a protected disk takes approximately 60 seconds if copied directly to another disk drive, or three minutes (plus several disk swaps) if performed using a single disk drive. Fast Hack'em also includes a very fast disk copier that can copy unprotected disks at even higher speeds. Only 35 seconds are required with two drives, or two minutes plus swapping time with one drive. For all forms of copying, Fast Hack'em can verify the resulting disk copies to ensure that they were properly written. The MSD SD-2 dual drive is supported, with copies finished in 60 seconds, about twice as fast as without the software. Fast Hack'em was updated often, and later versions added more copying options. The feature that set it apart from other copying programs was that these updates included "parameters": descriptions of the copy-protection methods used by individual programs, so that even a fast copy can be artificially "re-protected" to yield a working copy. In later versions of Fast Hack'em, disk copying can be performed without the computer if two Commodore 1541 disk drives are available. The software is loaded with a Commodore 64, the two-drive option is selected (which transfers the software into the drives' controller memory), and the serial cable can then be disconnected from the computer. 
Any number of copies can be performed as long as neither drive is powered down. Reception Ahoy! in October 1985 called Fast Hack'em "a must-needed utility for Commodore disk users" and "probably the fastest way to copy an entire 1541 formatted disk at the present time", joking that a disadvantage was the end of "leisurely coffee breaks or refrigerator raids" during copying. Info described Fast Hack'em as "the most extraordinary copy program I have ever seen for the 64", stating that copying a disk in 35 seconds with two 1541 drives was "not even enough time to fill out the label". References External links Fast Hack'em 1.9 (PRG format) Fast Hack'em 4.5 (ZIP format) Fast Hack'em 9.5 (ZIP format) Commodore 64 software 1985 software
23564044
https://en.wikipedia.org/wiki/Outline%20of%20cryptography
Outline of cryptography
The following outline is provided as an overview of and topical guide to cryptography: Cryptography (or cryptology) – practice and study of hiding information. Modern cryptography intersects the disciplines of mathematics, computer science, and engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce. Essence of cryptography Cryptographer Encryption/decryption Cryptographic key Cipher Ciphertext Plaintext Code Tabula recta Alice and Bob Uses of cryptographic techniques Commitment schemes Secure multiparty computation Electronic voting Authentication Digital signatures Crypto systems Dining cryptographers problem Anonymous remailer Pseudonymity Onion routing Digital currency Secret sharing Indistinguishability obfuscation Branches of cryptography Multivariate cryptography Post-quantum cryptography Quantum cryptography Steganography Visual cryptography History of cryptography Japanese cryptology from the 1500s to Meiji World War I cryptography World War II cryptography Reservehandverfahren Venona project Ultra Ciphers Classical Substitution Monoalphabetic substitution Caesar cipher ROT13 Affine cipher Atbash cipher Keyword cipher Polyalphabetic substitution Vigenère cipher Autokey cipher Homophonic substitution cipher Polygraphic substitution Playfair cipher Hill cipher Transposition Scytale Grille Permutation cipher VIC cipher – complex hand cypher used by at least one Soviet spy in the early 1950s; it proved quite secure for the time Modern symmetric-key algorithms A5/1 & A5/2 – ciphers specified for the GSM cellular telephone standard BMGL Chameleon FISH – by Siemens AG WWII 'Fish' cyphers Geheimfernschreiber – WWII mechanical onetime pad by Siemens AG, called STURGEON by Bletchley Park Pike – improvement on FISH by Ross Anderson Schlusselzusatz – WWII mechanical onetime pad by Lorenz, called tunny by Bletchley Park HELIX ISAAC – intended as a PRNG Leviathan LILI-128 MUGI – CRYPTREC recommendation MULTI-S01 - 
CRYPTREC recommendation One-time pad – Vernam and Mauborgne, patented 1919; an extreme stream cypher Panama RC4 (ARCFOUR) – one of a series by Professor Ron Rivest of MIT; CRYPTREC recommended limited to 128-bit key CipherSaber – RC4 variant with a 10-byte random IV, easy to implement Salsa20 – an eSTREAM recommended cipher ChaCha20 – a Salsa20 variant SEAL SNOW SOBER SOBER-t16 SOBER-t32 WAKE Block ciphers Product cipher Feistel cipher – pattern by Horst Feistel Advanced Encryption Standard (Rijndael) – 128-bit block; NIST selection for the AES, FIPS 197; created 2001 by Joan Daemen and Vincent Rijmen; NESSIE selection; CRYPTREC recommendation Anubis – 128-bit block BEAR – built from a stream cypher and hash function, by Ross Anderson Blowfish – 64-bit block; by Bruce Schneier et al. Camellia – 128-bit block; NESSIE selection (NTT & Mitsubishi Electric); CRYPTREC recommendation CAST-128 (CAST5) – 64-bit block; one of a series of algorithms by Carlisle Adams and Stafford Tavares, who insist that the name is not due to their initials CAST-256 (CAST6) – 128-bit block; the successor to CAST-128 and a candidate for the AES competition CIPHERUNICORN-A – 128-bit block; CRYPTREC recommendation CIPHERUNICORN-E – 64-bit block; CRYPTREC recommendation (limited) CMEA – cipher used in US cellphones, found to have weaknesses CS-Cipher – 64-bit block Data Encryption Standard (DES) – 64-bit block; FIPS 46-3, 1976 DEAL – an AES candidate derived from DES DES-X – a variant of DES to increase the key size. 
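The one-time pad listed above is simple enough to sketch directly. The following is a minimal illustration (not tied to any particular implementation): the message is XORed with a random pad of equal length, and because XOR is its own inverse, the identical operation decrypts.

```python
import secrets

def otp_xor(data: bytes, pad: bytes) -> bytes:
    # XOR each message byte with the corresponding pad byte; with a truly
    # random, never-reused pad this is information-theoretically secure
    # (Vernam/Mauborgne, patented 1919)
    if len(pad) < len(data):
        raise ValueError("pad must be at least as long as the message")
    return bytes(b ^ k for b, k in zip(data, pad))

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))   # one-time, random key material
ciphertext = otp_xor(message, pad)
assert otp_xor(ciphertext, pad) == message  # same operation decrypts
```

The "extreme stream cypher" characterization in the outline comes from exactly this structure: a stream cipher replaces the truly random pad with a pseudorandom keystream.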
FEAL GDES – a DES variant designed to speed up encryption Grand Cru – 128-bit block Hierocrypt-3 – 128-bit block; CRYPTREC recommendation Hierocrypt-L1 – 64-bit block; CRYPTREC recommendation (limited) IDEA NXT – project name FOX, 64-bit and 128-bit block family; Mediacrypt (Switzerland); by Pascal Junod & Serge Vaudenay of Swiss Institute of Technology Lausanne International Data Encryption Algorithm (IDEA) – 64-bit block; James Massey & X Lai of ETH Zurich Iraqi Block Cipher (IBC) KASUMI – 64-bit block; based on MISTY1, adopted for next generation W-CDMA cellular phone security KHAZAD – 64-bit block designed by Barretto and Rijmen Khufu and Khafre – 64-bit block ciphers Kuznyechik – Russian 128-bit block cipher, defined in GOST R 34.12-2015 and RFC 7801 LION – block cypher built from stream cypher and hash function, by Ross Anderson LOKI89/91 – 64-bit block ciphers LOKI97 – 128-bit block cipher, AES candidate Lucifer – by Tuchman et al. of IBM, early 1970s; modified by NSA/NBS and released as DES MAGENTA – AES candidate Mars – AES finalist, by Don Coppersmith et al. MISTY1 – NESSIE selection 64-bit block; Mitsubishi Electric (Japan); CRYPTREC recommendation (limited) MISTY2 – 128-bit block; Mitsubishi Electric (Japan) Nimbus – 64-bit block NOEKEON – 128-bit block NUSH – variable block length (64–256-bit) Q – 128-bit block RC2 – 64-bit block, variable key length RC6 – variable block length; AES finalist, by Ron Rivest et al. 
RC5 – Ron Rivest SAFER – variable block length SC2000 – 128-bit block; CRYPTREC recommendation Serpent – 128-bit block; AES finalist by Ross Anderson, Eli Biham, Lars Knudsen SHACAL-1 – 160-bit block SHACAL-2 – 256-bit block cypher; NESSIE selection Gemplus (France) Shark – grandfather of Rijndael/AES, by Daemen and Rijmen Square – father of Rijndael/AES, by Daemen and Rijmen TEA – by David Wheeler & Roger Needham Triple DES – by Walter Tuchman, leader of the Lucifer design team—not all triple uses of DES increase security, Tuchman's does; CRYPTREC recommendation (limited), only when used as in FIPS Pub 46-3 Twofish – 128-bit block; AES finalist by Bruce Schneier et al. XTEA – by David Wheeler & Roger Needham 3-Way – 96-bit block by Joan Daemen Polyalphabetic substitution machine cyphers Enigma – WWII German rotor cypher machine—many variants, any user networks for most of the variants Purple – highest security WWII Japanese Foreign Office cypher machine; by Japanese Navy Captain SIGABA – WWII US cypher machine by William Friedman, Frank Rowlett et al. 
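TEA, listed among the block ciphers above, is famously compact. The sketch below is a straightforward transcription of the published Wheeler–Needham algorithm into Python, with explicit 32-bit masking standing in for C's unsigned overflow; it shows the 32-round encryption and its inverse.

```python
DELTA, MASK = 0x9E3779B9, 0xFFFFFFFF  # golden-ratio constant; 32-bit wrap

def tea_encrypt(v0, v1, k, rounds=32):
    # TEA operates on a 64-bit block (two 32-bit halves)
    # under a 128-bit key (four 32-bit words)
    s = 0
    for _ in range(rounds):
        s = (s + DELTA) & MASK
        v0 = (v0 + ((((v1 << 4) & MASK) + k[0]) ^ ((v1 + s) & MASK) ^ ((v1 >> 5) + k[1]))) & MASK
        v1 = (v1 + ((((v0 << 4) & MASK) + k[2]) ^ ((v0 + s) & MASK) ^ ((v0 >> 5) + k[3]))) & MASK
    return v0, v1

def tea_decrypt(v0, v1, k, rounds=32):
    # run the rounds backwards, unwinding the additions
    s = (DELTA * rounds) & MASK
    for _ in range(rounds):
        v1 = (v1 - ((((v0 << 4) & MASK) + k[2]) ^ ((v0 + s) & MASK) ^ ((v0 >> 5) + k[3]))) & MASK
        v0 = (v0 - ((((v1 << 4) & MASK) + k[0]) ^ ((v1 + s) & MASK) ^ ((v1 >> 5) + k[1]))) & MASK
        s = (s - DELTA) & MASK
    return v0, v1

key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)
block = (0xDEADBEEF, 0xCAFEBABE)
assert tea_decrypt(*tea_encrypt(*block, key), key) == block
```

XTEA, also listed above, follows the same structure with a revised key schedule to address weaknesses found in TEA.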
TypeX – WWII UK cypher machine Hybrid code/cypher combinations JN-25 – WWII Japanese Navy superencyphered code; many variants Naval Cypher 3 – superencrypted code used by the Royal Navy in the 1930s and into WWII Modern asymmetric-key algorithms Asymmetric key algorithm ACE-KEM – NESSIE selection asymmetric encryption scheme; IBM Zurich Research ACE Encrypt Chor-Rivest Diffie-Hellman – key agreement; CRYPTREC recommendation El Gamal – discrete logarithm Elliptic curve cryptography – (discrete logarithm variant) PSEC-KEM – NESSIE selection asymmetric encryption scheme; NTT (Japan); CRYPTREC recommendation only in DEM construction w/SEC1 parameters ECIES – Elliptic Curve Integrated Encryption System, Certicom Corporation ECIES-KEM ECDH – Elliptic Curve Diffie-Hellman key agreement, CRYPTREC recommendation EPOC Merkle–Hellman knapsack cryptosystem – knapsack scheme McEliece cryptosystem Niederreiter cryptosystem NTRUEncrypt RSA – factoring RSA-KEM – NESSIE selection asymmetric encryption scheme; ISO/IEC 18033-2 draft RSA-OAEP – CRYPTREC recommendation Rabin cryptosystem – factoring Rabin-SAEP HIME(R) Threshold cryptosystem XTR Keys Key authentication Public key infrastructure X.509 OpenPGP Public key certificate Certificate authority Certificate revocation list ID-based cryptography Certificate-based encryption Secure key issuing cryptography Certificateless cryptography Merkle tree Transport/exchange Diffie–Hellman Man-in-the-middle attack Needham–Schroeder Offline private key Otway–Rees Trusted paper key Wide Mouth Frog Weak keys Brute force attack Dictionary attack Related key attack Key derivation function Key strengthening Password Password-authenticated key agreement Passphrase Salt Factorization Cryptographic hash functions Message authentication code Keyed-hash message authentication code Encrypted CBC-MAC (EMAC) – NESSIE selection MAC HMAC – NESSIE selection MAC; ISO/IEC 9797-1, FIPS PUB 113 and IETF RFC TTMAC – (Two-Track-MAC) NESSIE selection MAC; 
K.U.Leuven (Belgium) & debis AG (Germany) UMAC – NESSIE selection MAC; Intel, UNevada Reno, IBM, Technion, & UC Davis MD5 – one of a series of message digest algorithms by Prof Ron Rivest of MIT; 128-bit digest SHA-1 – developed at NSA 160-bit digest, an FIPS standard; the first released version was defective and replaced by this; NIST/NSA have released several variants with longer 'digest' lengths; CRYPTREC recommendation (limited) SHA-256 – NESSIE selection hash function, FIPS 180-2, 256-bit digest; CRYPTREC recommendation SHA-384 – NESSIE selection hash function, FIPS 180-2, 384-bit digest; CRYPTREC recommendation SHA-512 – NESSIE selection hash function, FIPS 180-2, 512-bit digest; CRYPTREC recommendation SHA-3 – originally known as Keccak; was the winner of the NIST hash function competition using sponge function. Streebog – Russian algorithm created to replace an obsolete GOST hash function defined in obsolete standard GOST R 34.11-94. RIPEMD-160 – developed in Europe for the RIPE project, 160-bit digest; CRYPTREC recommendation (limited) RTR0 – one of Retter series; developed by Maciej A. Czyzewski; 160-bit digest Tiger – by Ross Anderson et al. Snefru – NIST hash function competition Whirlpool – NESSIE selection hash function, Scopus Tecnologia S.A. 
(Brazil) & K.U.Leuven (Belgium) Cryptanalysis Classical Frequency analysis Contact analysis Index of coincidence Kasiski examination Modern Symmetric algorithms Boomerang attack Brute force attack Davies' attack Differential cryptanalysis Impossible differential cryptanalysis Integral cryptanalysis Linear cryptanalysis Meet-in-the-middle attack Mod-n cryptanalysis Related-key attack Slide attack XSL attack Hash functions: Birthday attack Attack models Chosen-ciphertext Chosen-plaintext Ciphertext-only Known-plaintext Side channel attacks Power analysis Timing attack Cold boot attack Network attacks Man-in-the-middle attack Replay attack External attacks Black-bag cryptanalysis Rubber-hose cryptanalysis Robustness properties Provable security Random oracle model Ciphertext indistinguishability Semantic security Malleability Forward secrecy Forward anonymity Freshness Undeciphered historical codes and ciphers Beale ciphers Chaocipher D'Agapeyeff cipher Dorabella cipher Rongorongo Shugborough inscription Voynich manuscript Organizations and selection projects Cryptography standards Federal Information Processing Standards (FIPS) Publication Program – run by NIST to produce standards in many areas to guide operations of the US Federal government; many FIPS publications are ongoing and related to cryptography American National Standards Institute (ANSI) – standardization process that produces many standards in many areas; some are cryptography related, ongoing International Organization for Standardization (ISO) – standardization process that produces many standards in many areas; some are cryptography related, ongoing Institute of Electrical and Electronics Engineers (IEEE) – standardization process that produces many standards in many areas; some are cryptography related, ongoing Internet Engineering Task Force (IETF) – standardization process that produces many standards (called RFCs) in many areas; some are cryptography related, ongoing General cryptographic National 
Security Agency (NSA) – internal evaluation/selections, charged with assisting NIST in its cryptographic responsibilities Government Communications Headquarters (GCHQ) – internal evaluation/selections, a division is charged with developing and recommending cryptographic standards for the UK government Defence Signals Directorate (DSD) – Australian SIGINT agency, part of ECHELON Communications Security Establishment (CSE) – Canadian intelligence agency Open efforts Data Encryption Standard (DES) – NBS selection process, ended 1976 RIPE – division of the RACE project sponsored by the European Union, ended mid-1980s Advanced Encryption Standard (AES) – a "break-off" competition sponsored by NIST, ended in 2001 NESSIE Project – an evaluation/selection program sponsored by the European Union, ended in 2002 eSTREAM– program funded by ECRYPT; motivated by the failure of all of the stream ciphers submitted to NESSIE, ended in 2008 CRYPTREC – evaluation/recommendation program sponsored by the Japanese government; draft recommendations published 2003 CrypTool – an e-learning freeware programme in English and German— exhaustive educational tool about cryptography and cryptanalysis Influential cryptographers List of cryptographers Legal issues AACS encryption key controversy Free speech Bernstein v. United States - Daniel J. Bernstein's challenge to the restrictions on the export of cryptography from the United States. Junger v. Daley DeCSS Phil Zimmermann - Arms Export Control Act investigation regarding the PGP software. 
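The classical cryptanalysis techniques listed earlier (frequency analysis, index of coincidence, Kasiski examination) are easy to demonstrate. This sketch computes the index of coincidence, which is roughly 0.066 for English plaintext and about 0.038 (1/26) for uniformly random letters:

```python
from collections import Counter

def index_of_coincidence(text: str) -> float:
    # probability that two letters drawn at random from the text match
    letters = [c for c in text.upper() if c.isalpha()]
    n = len(letters)
    counts = Counter(letters)
    return sum(f * (f - 1) for f in counts.values()) / (n * (n - 1))

# degenerate example: a text of one repeated letter has IC = 1.0
assert index_of_coincidence("AAAA") == 1.0
```

A monoalphabetic substitution preserves the IC of its plaintext while a polyalphabetic cipher flattens it toward random, which is what makes the IC useful for distinguishing the two and for estimating Vigenère key lengths.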
Export of cryptography Key escrow and Clipper Chip Digital Millennium Copyright Act Digital Rights Management (DRM) Patents RSA – now public domain David Chaum – and digital cash Cryptography and law enforcement Telephone wiretapping Espionage Cryptography laws in different nations Official Secrets Act – United Kingdom, India, Ireland, Malaysia, and formerly New Zealand Regulation of Investigatory Powers Act 2000 – United Kingdom Academic and professional publications Journal of Cryptology Encyclopedia of Cryptography and Security Cryptologia – quarterly journal focusing on historical aspects Communication Theory of Secrecy Systems – cryptography from the viewpoint of information theory Allied sciences Security engineering See also Outline of computer science Outline of computer security References Cryptography Cryptography
41417750
https://en.wikipedia.org/wiki/EnhanceIO
EnhanceIO
EnhanceIO is a disk cache module for the Linux kernel. Its goal is to use fast but relatively small SSD drives to improve the performance of large but slow hard drives. Overview EnhanceIO makes it possible to add an SSD or other fast disk device as a cache to another block device, such as a hard drive, in order to improve the performance of the disk. It was initially based on Facebook's similar Flashcache module. Unlike Flashcache and other caching solutions, it does not use the Linux device mapper. This means it does not create a new block device, and caching can be added to existing disks without reformatting or even unmounting them, which makes it easy to add a cache to existing systems. History EnhanceIO was first announced as a commercial product in 2011 by sTec, Inc., a company specializing in SSD products. In late 2012, sTec published the code for the Linux module on GitHub. Although it was soon afterwards submitted to the Linux kernel mailing list, it was never merged into the mainline kernel. In 2013, Western Digital acquired sTec, Inc. and briefly offered the EnhanceIO product under their HGST brand. However, the project was soon discontinued and maintenance of the module halted. As the project was abandoned, several forks were created with patches to let EnhanceIO work on later kernels. As of 2017, the lanconnected fork appeared to be the most active. See also bcache dm-cache Flashcache References Solid-state caching Linux kernel-related software
206441
https://en.wikipedia.org/wiki/1848%20in%20literature
1848 in literature
This article contains information about the literary events and publications of 1848. Events January 22 – The second edition of Charlotte Brontë's Jane Eyre is dedicated to William Makepeace Thackeray. It is also first published in the United States this year. February 21 – Karl Marx and Friedrich Engels publish The Communist Manifesto (Manifest der Kommunistischen Partei) in London. March 15 – Revolutions of 1848 in the Austrian Empire: Hungarian Revolution of 1848 – The poet Sándor Petőfi with Mihály Táncsics and other young men lead the bloodless revolution in Pest, reciting Petőfi's "Nemzeti dal" (National song) and the "12 points" and printing them on the presses of Landerer és Heckenast, so forcing Ferdinand I of Austria to abolish censorship. March 18 – The Boston Public Library is founded by an act of the Great and General Court of Massachusetts. April 1 – Charles Dickens's novel Dombey and Son concludes its serial publication. April 10 – John Ruskin marries Effie Gray. May 5 – Poet Alfred de Musset is dismissed as librarian of the Ministry of the Interior under the French Second Republic. c. June 27 – The second and final novel of Anne Brontë (as Acton Bell), The Tenant of Wildfell Hall is published in London. It sells out in six weeks, requiring a reissue. July – Serial publication of William Makepeace Thackeray's novel Vanity Fair by Punch magazine concludes. It appears in book format (from the same typesetting) by Bradbury and Evans in London, with illustrations by the author. October 1 – At the funeral of Branwell Brontë, his younger sister Emily begins to show symptoms of a cold, soon revealed to be tuberculosis. October 18 – Elizabeth Gaskell's first novel, Mary Barton: A Tale of Manchester Life is published anonymously by Chapman & Hall in London in two volumes. c. 
October – The first frescoes of scenes from English literature in the Poets' Hall of the Palace of Westminster are completed: Charles West Cope's Griselda's first Trial of Patience (based on Chaucer's The Clerk's Tale) and John Callcott Horsley's Satan touched by Ithuriel's Spear while whispering evil dreams to Eve (based on Milton's Paradise Lost). November William Makepeace Thackeray's novel The History of Pendennis begins its serial publication. The London publisher George Routledge begins issuing the Railway Library series of cheap reprint novels, pioneering the yellow-back genre, with an edition of James Fenimore Cooper's The Pilot. December 22 – Three days after her death from tuberculosis at Haworth Parsonage, aged 30, Emily Brontë is buried in her father's St Michael and All Angels' Church, Haworth. The funeral procession is led by her father and her dog, Keeper. unknown date – The first issue of the penny dreadful Gentleman Jack, or Life on the Road, probably by James Malcolm Rymer, is published in London by Edward Lloyd. Inspired by the life of the highwayman Claude Duval (hanged 1670), but opening in 1780, the series will reach over 200 parts by 1852 and be popular on both sides of the Atlantic. New books Fiction W. Harrison Ainsworth – The Lancashire Witches (serialised in The Sunday Times) Willibald Alexis – Der Werwulf R. M. 
Ballantyne – Life in the Wilds of North America Anne Brontë – The Tenant of Wildfell Hall Edward Bulwer-Lytton – Harold, the Last of the Saxons William Carleton – The Emigrants of Ahadarra Charles Dickens Dombey and Son The Haunted Man and the Ghost's Bargain Fyodor Dostoevsky – White Nights Alexandre Dumas, fils – La Dame aux camélias Elizabeth Gaskell – Mary Barton Geraldine Jewsbury – The Half Sisters Julia Kavanagh – Madeleine, a Tale of Auvergne Charles Kingsley – Yeast Eliza Lynn Linton – Amymone: a romance of the days of Pericles Frederick Marryat The Little Savage Valerie Henri Murger – Scènes de la vie de Bohème John Henry Newman – Loss and Gain: the story of a convert G. W. M. Reynolds The Coral Island, or the Hereditary Curse Wagner the Wehr-Wolf George Sand – François le champi (François the Waif) Adele Schopenhauer – Eine dänische Geschichte (A Danish Story) William Makepeace Thackeray – The Book of Snobs Anthony Trollope – The Kellys and the O'Kellys Children and young people Cecil Frances Alexander – Hymns for Little Children (includes "All Things Bright and Beautiful" and "Once in Royal David's City") Catherine Crowe – Pippie's Warning, or, Mind Your Temper Drama Émile Augier – L'Aventurière Alfred de Musset – André del Sarto Poetry William Edmonstoune Aytoun – Lays of the Scottish Cavaliers Edward Bulwer-Lytton – King Arthur (1848–9) James Russell Lowell – A Fable for Critics, The Biglow Papers Charles Masson – Legends of the Afghan countries, in verse Johan Ludvig Runeberg – The Tales of Ensign Stål Non-fiction Ivar Aasen – Det norske Folkesprogs Grammatik (Grammar of the Norwegian Dialects) George Catlin – Eight Years' Travels and Residence in Europe Wilkie Collins – Memoirs of the Life of William Collins, Esq., R.A. 
Catherine Crowe – The Night-side of Nature Jacob Grimm – Geschichte der deutschen Sprache (History of the German Language) Benjamin Randell Harris – The Recollections of Rifleman Harris Søren Kierkegaard – The Point of View of My Work as an Author (Om min Forfatter-Virksomhed) Thomas Babington Macaulay – The History of England from the Accession of James the Second, Vols 1–2 Harriet Martineau – Household Education Karl Marx and Friedrich Engels – The Communist Manifesto Charles Delucena Meigs – Females and Their Diseases; A Series of Letters to His Class John Stuart Mill – Principles of Political Economy Monckton Milnes – Life, Letters and Literary Remains of John Keats Edgar Allan Poe – Eureka: A Prose Poem George Ayliffe Poole – A History of Ecclesiastical Architecture in England Percy Bolingbroke St John – French Revolution in 1848: The three days of February, 1848; with sketches of Lamartine, Guizot, etc. Ephraim George Squier and Edwin Hamilton Davis – Ancient Monuments of the Mississippi Valley Harriet Ward – Five years in Kaffirland: with sketches of the late war in that country to the conclusion of peace, written on the spot John Collins Warren – Etherization: with Surgical Remarks Births January 6 – Hristo Botev, Bulgarian poet and journalist (died 1876) January 28 – Mary Elizabeth Hawker, Scottish-born English fiction writer (died 1908) February 5 – Joris Karl Huysmans (Charles-Marie-Georges Huysmans), French novelist (died 1907) February 16 – Octave Mirbeau, French travel writer, novelist and playwright (died 1917) February 24 – Grant Allen, Canadian novelist and science writer (died 1899) March 9 – George Panu, Romanian memoirist, critic, and politician (died 1910) May – Bonifaciu Florescu, Wallachian and Romanian polygraph (died 1899) July 15 – Vilfredo Pareto, Italian economist, political scientist and philosopher (died 1923) August 14 – Mary E. 
Mann (Mary Rackham), English novelist and short story writer (died 1929) August 16 – Francis Darwin, English botanist and academic (died 1925) October 25 – Karl Emil Franzos, Austrian novelist (died 1904) unknown date – Maryana Marrash, Syrian writer and salonist (died 1919) probable year – Bithia Mary Croker, Irish novelist (died 1920) Deaths January 19 – Isaac D'Israeli, English scholar and man of letters (born 1766) February 9 – Ann Batten Cristall, English poet (born 1769) February 13 – Sophie von Knorring, Swedish novelist (born 1797) July 4 – François-René de Chateaubriand, French historian, politician and diplomat (born 1768) July 6 – He Changling (賀長齡), Chinese scholar and writer on governance (born 1785) August 9 – Frederick Marryat (Captain Marryat), English novelist and children's writer (born 1792) September 24 – Branwell Brontë, English painter, writer and poet, brother of Emily, Charlotte and Anne (tuberculosis; born 1817) November 23 – John Barrow, English writer, geographer and linguist (born 1764) December 19 – Emily Brontë, English novelist and poet (tuberculosis; born 1818) December 23 – James Cowles Prichard, English ethnologist and psychiatrist References Years of the 19th century in literature
27381928
https://en.wikipedia.org/wiki/Avalanche%20%28phishing%20group%29
Avalanche (phishing group)
Avalanche was a criminal syndicate involved in phishing attacks, online bank fraud, and ransomware. The name also refers to the network of owned, rented, and compromised systems used to carry out that activity. Avalanche only infected computers running the Microsoft Windows operating system. In November 2016, the Avalanche botnet was destroyed after a four-year project by an international consortium of law enforcement, commercial, academic, and private organizations. History Avalanche was discovered in December 2008, and may have been a replacement for a phishing group known as Rock Phish which stopped operating in 2008. It was run from Eastern Europe and was given its name by security researchers because of the high volume of its attacks. Avalanche launched 24% of phishing attacks in the first half of 2009; in the second half of 2009, the Anti-Phishing Working Group (APWG) recorded 84,250 attacks by Avalanche, constituting 66% of all phishing attacks. The number of total phishing attacks more than doubled, an increase which the APWG directly attributes to Avalanche. Avalanche used spam email purporting to come from trusted organisations such as financial institutions or employment websites. Victims were deceived into entering personal information on websites made to appear as though they belonged to these organisations. They were sometimes tricked into installing software attached to the emails or at a website. The malware logged keystrokes, stole passwords and credit card information, and allowed unauthorised remote access to the infected computer. Internet Identity's Phishing Trends report for the second quarter of 2009 said that Avalanche "have detailed knowledge of commercial banking platforms, particularly treasury management systems and the Automated Clearing House (ACH) system. They are also performing successful real-time man-in-the-middle attacks that defeat two-factor security tokens." 
Avalanche had many similarities to the previous group Rock Phish, the first phishing group to use automated techniques, but was greater in scale and volume. Avalanche hosted its domains on compromised computers (a botnet). There was no single hosting provider, making it difficult to take down the domains and requiring the involvement of the responsible domain registrars. In addition, Avalanche used fast-flux DNS, causing the compromised machines to change constantly. Avalanche attacks also spread the Zeus Trojan horse, enabling further criminal activity. The majority of domains which Avalanche used belonged to national domain name registrars in Europe and Asia. This differs from other phishing attacks, where the majority of domains use U.S. registrars. It appears that Avalanche chose registrars based on their security procedures, returning repeatedly to registrars which did not detect domains being used for fraud, or which were slow to suspend abusive domains. Avalanche frequently registered domains with multiple registrars, while testing others to check whether their distinctive domains were being detected and blocked. They targeted a small number of financial institutions at a time, but rotated these regularly. A domain which was not suspended by a registrar was re-used in later attacks. The group created a phishing "kit", which came pre-prepared for use against many victim institutions. Avalanche attracted significant attention from security organisations; as a result, the uptime of the domain names it used was half that of other phishing domains. In October 2009, ICANN, the organisation which manages the assignment of domain names, issued a Situation Awareness Note encouraging registrars to be proactive in dealing with Avalanche attacks. The UK registry, Nominet, has changed its procedures to make it easier to suspend domains, because of attacks by Avalanche. 
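The fast-flux technique mentioned above can be illustrated conceptually. This is a toy simulation, not Avalanche's actual code: a single hostname resolves to a constantly rotating subset of compromised hosts, advertised with a very short TTL so that resolvers discard the answer quickly and re-query.

```python
import random

def fast_flux_answer(proxy_pool, n=5, ttl=180):
    # each DNS query returns a different handful of botnet proxies;
    # the short TTL forces resolvers to come back for a fresh set,
    # so blocking any single IP address does not take the domain down
    return {"ttl": ttl, "a_records": random.sample(proxy_pool, n)}

pool = [f"192.0.2.{i}" for i in range(1, 101)]  # documentation-range IPs
first = fast_flux_answer(pool)
second = fast_flux_answer(pool)  # very likely a different subset
assert len(first["a_records"]) == 5
```

This is why, as described above, takedown required cooperation from registrars rather than hosting providers: the hosting layer was a moving target, and only the domain name itself was a stable point of control.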
Interdomain, a Spanish registrar, began requiring a confirmation code delivered by mobile phone in April 2009, which successfully forced Avalanche to stop registering fraudulent domains with them. In 2010, the APWG reported that Avalanche had been responsible for two-thirds of all phishing attacks in the second half of 2009, describing it as "one of the most sophisticated and damaging on the Internet" and "the world's most prolific phishing gang". Takedown In November 2009, security companies managed to shut down the Avalanche botnet for a short time; after this Avalanche reduced the scale of its activities and altered its modus operandi. By April 2010, attacks by Avalanche had decreased to just 59 from a high of more than 26,000 in October 2009, but the decrease was temporary. On November 30, 2016, the Avalanche botnet was destroyed at the end of a four-year project by INTERPOL, Europol, the Shadowserver Foundation, Eurojust, the Lüneburg (Germany) police, the German Federal Office for Information Security (BSI), the Fraunhofer FKIE, several antivirus companies organized by Symantec, ICANN, CERT, the FBI, and some of the domain registries that had been used by the group. Symantec reverse-engineered the client malware and the consortium analyzed 130 TB of data captured during those years. This allowed it to defeat the fast-flux distributed DNS obfuscation, map the command/control structure of the botnet, and identify its numerous physical servers. 37 premises were searched, 39 servers were seized, 221 rented servers were removed from the network when their unwitting owners were notified, 500,000 zombie computers were freed from remote control, 17 families of malware were deprived of c/c, and the five people who ran the botnet were arrested. 
The law enforcement sinkhole server, described in 2016 as the "largest ever", with 800,000 domains served, collects the IP addresses of infected computers that request instructions from the botnet so that the ISPs owning them can inform users that their machines are infected and provide removal software. Malware deprived of infrastructure The following malware families were hosted on Avalanche: Windows-encryption Trojan horse (WVT) (a.k.a. Matsnu, Injector, Rannoh, Ransomlock.P) URLzone (a.k.a. Bebloh) Citadel VM-ZeuS (a.k.a. KINS) Bugat (a.k.a. Feodo, Geodo, Cridex, Dridex, Emotet) newGOZ (a.k.a. GameOverZeuS) Tinba (a.k.a. TinyBanker) Nymaim/GozNym Vawtrak (a.k.a. Neverquest) Marcher Pandabanker Ranbyus Smart App TeslaCrypt Trusteer App Xswkit The Avalanche network also provided the c/c communications for these other botnets: TeslaCrypt Nymaim Corebot GetTiny Matsnu Rovnix Urlzone QakBot (a.k.a. Qbot, PinkSlip Bot) References External links Joint Cyber Operation Takes Down Avalanche Criminal Network (FBI) Email spammers Cybercrime Organized crime groups in Europe
1617522
https://en.wikipedia.org/wiki/Extensibility
Extensibility
Extensibility is a software engineering and systems design principle that provides for future growth. Extensibility is a measure of the ability to extend a system and the level of effort required to implement the extension. Extensions can be made through the addition of new functionality or through modification of existing functionality. The principle provides for enhancements without impairing existing system functions. An extensible system is one whose internal structure and dataflow are minimally or not affected by new or modified functionality; for example, recompiling or changing the original source code might be unnecessary when changing a system's behavior, whether by the creator or by other programmers. Because software systems are long-lived and will be modified for the new features and added functionality demanded by users, extensibility enables developers to expand or add to the software's capabilities and facilitates systematic reuse. Its approaches include facilities for allowing users' own program routines to be inserted and the ability to define new data types as well as new formatting markup tags. Extensible design Extensible design in software engineering means accepting that not everything can be designed in advance; a lightweight software framework which allows for changes is provided instead. Work elements are kept small and separated into comprehensible units, in order to avoid traditional software development issues such as low cohesion and high coupling and to allow for continued development. Embracing change is essential to extensible design: additions will be continual, each part of the system must remain workable as changes are made, and the idea of change through addition is the center of the whole system design.
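The idea of "change through addition" can be sketched in a few lines: new behaviour is registered alongside the existing code rather than edited into it. All names here (register, handle, the handler table) are illustrative, not from any particular framework.

```python
HANDLERS = {}

def register(kind):
    """Decorator: add a handler for a new kind of input without touching
    the dispatch logic below."""
    def decorator(func):
        HANDLERS[kind] = func
        return func
    return decorator

def handle(kind, payload):
    """Dispatch to whichever handler was registered for this kind."""
    if kind not in HANDLERS:
        raise ValueError(f"no handler for {kind!r}")
    return HANDLERS[kind](payload)

@register("text")
def handle_text(payload):
    return payload.upper()

# Later, a new capability is a pure addition, not a modification:
@register("csv")
def handle_csv(payload):
    return payload.split(",")
```

Supporting a further format means registering one more function; the dispatch code and the existing handlers are untouched, which is the sense in which the system grows by addition.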
Extensible design supports frequent re-prioritization and allows functionality to be implemented in small steps upon request; these are the principles advocated by Agile methodologies and iterative development. Extensibility imposes fewer and cleaner dependencies during development, as well as reduced coupling, more cohesive abstractions, and well-defined interfaces. Importance Change lies at the basis of all software: software is an "evolving entity" developed and maintained by human beings, which yields ongoing changes in software specification and implementation. Components of a software system are often developed and deployed independently by unrelated parties. Adaptable software components are necessary because components from external vendors are unlikely to fit a specific deployment scenario off the rack, so third-party users other than the manufacturer must be taken into consideration. Many software systems and software product lines are derived from a base system; they share a common software architecture, and sometimes large parts of the functionality and implementation, but may be equipped with different components, which requires an extensible base system. Building software systems that are independently extensible is an important challenge. An independently extensible system not only allows two people to independently develop extensions to the system, but also allows the two extensions to be combined without a global integrity check. Classification of extensibility mechanisms There are three forms of software extensibility: white-box extensibility, gray-box extensibility, and black-box extensibility, distinguished by which artifacts are changed and how they are changed. White-Box Under this form of extensibility, a software system can be extended by modifying the source code; it is the most flexible and the least restrictive form.
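When the source code is visible, a common way to extend a system without invasively editing it is to subclass a framework class and override its hook methods, leaving the original untouched. A minimal sketch, with illustrative class and method names:

```python
class Report:
    """Framework class: its source is visible, but extenders refine the
    hook methods below instead of editing this file."""

    def render(self):
        # Dynamic binding makes render() pick up overridden hooks.
        return f"{self.header()}\n{self.body()}"

    def header(self):
        return "REPORT"

    def body(self):
        raise NotImplementedError("subclasses supply the body")

class SalesReport(Report):
    # The extension lives entirely outside the framework code.
    def header(self):
        return "SALES REPORT"

    def body(self):
        return "total: 42"
```

The framework keeps control of the overall flow (render), while extensions supply the varying parts.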
There are two sub-forms of white-box extensibility, open-box extensibility and glass-box extensibility, depending on how changes are applied. Open-Box In open-box extensible systems, changes are performed invasively; i.e., the original source code is directly edited. This requires that the source code be available and that its license permit modification. Open-box extensibility is most relevant to bug fixing, internal code refactoring, or the production of the next version of a software product. Glass-Box Glass-box extensibility (also called architecture-driven frameworks) allows a software system to be extended with the source code available, but may not allow the code to be modified. Extensions must be separated from the original system in such a way that the original system is not affected. One example of this form of extensibility is object-oriented application frameworks, which typically achieve extensibility by using inheritance and dynamic binding. Black-Box In black-box extensibility (also called data-driven frameworks), no details about a system's implementation are used for implementing extensions; only interface specifications are provided. This approach is more limited than the various white-box approaches. Black-box extensions are typically achieved through system configuration applications or through application-specific scripting languages that script the components' interfaces. Gray-Box Gray-box extensibility is a compromise between a pure white-box and a pure black-box approach; it does not rely fully on the exposure of source code. Programmers can be given the system's specialization interface, which lists all the abstractions available for refinement and specifies how extensions should be developed. Extensibility vs.
reusability Extensibility and reusability share many emphasized properties, including low coupling, modularity, and the ability of high-risk elements to be built for many different software systems, motivated by the observation that software systems often share common elements. Reusability together with extensibility allows a technology to be transferred to another project with less development and maintenance time, as well as enhanced reliability and consistency. Security Modern operating systems support extensibility through device drivers and loadable kernel modules. Many modern applications support extensibility through plug-ins, extension languages, applets, etc. This trend of increasing extensibility negatively affects software security. CGI is one of the primary means by which web servers provide extensibility, and some regard CGI scripts as "an enormous security hole". See also Extensible programming Polymorphism Software metric Scalability XML References External links Software architecture fr:Extensibilité
30865764
https://en.wikipedia.org/wiki/List%20of%20Romanian%20Americans
List of Romanian Americans
This is a list of notable Romanian-Americans, including both original immigrants from Romania who obtained American citizenship and their American descendants. Academics Literary Critics Matei Călinescu – professor at Indiana University Bloomington Mathematicians Alexandra Bellow – mathematician, Professor Emeritus at Northwestern University Ana Caraiani – mathematician, member of the American Mathematical Society Ioana Dumitriu – mathematician, professor at the University of Washington Ciprian Foias – mathematician, distinguished professor at Texas A&M University Tudor Ganea – mathematician, known for his work in algebraic topology Nicholas Georgescu-Roegen – mathematician, statistician, and economist Matei Machedon – mathematician, professor at the University of Maryland Ciprian Manolescu – mathematician, professor at Stanford University Irina Mitrea – mathematician, professor at Temple University Mircea Mustaţă – mathematician, professor at the University of Michigan Florian Pop – mathematician, professor at the University of Pennsylvania Mihnea Popa – mathematician, professor at Northwestern University Sorin Popa – mathematician, professor at the University of California, Los Angeles Cristian Dumitru Popescu – mathematician, professor at the University of California, San Diego Ovidiu Savin – mathematician, professor at Columbia University Rodica Simion – mathematician, known for her work in combinatorics Ileana Streinu – mathematician, professor at Smith College Bogdan Suceavă – mathematician, professor at California State University Fullerton Dan-Virgil Voiculescu – mathematician, professor at the University of California, Berkeley Alexandru Zaharescu – mathematician, professor at the University of Illinois at Urbana–Champaign Historians, sociologists and philosophers Eugene Borza – professor of history at Pennsylvania State University Ioan Petru Culianu – historian of religion Mircea Eliade – philosopher, writer, historian of religions Radu Florescu – 
emeritus professor of history at Boston College Art Eugen Ciucă – sculptor Noche Crist – painter Christopher Georgesco – sculptor Cristian Gheorghiu – contemporary artist Ami James – tattoo artist De Hirsh Margules – painter (Romanian-Jewish descent) Alexandra Nechita – cubist painter Architects Max Abramovitz – architect of Avery Fisher Hall, of Romanian-Jewish descent Haralamb H. Georgescu – architect Business Micky Arison – chairman of Carnival Corporation and owner of the NBA's Miami Heat Jeffrey Brotman – co-founder of Costco Wholesale Corporation (Romanian-Jewish descent) Safra Catz – CEO of Oracle Corporation Dan Dascalescu – entrepreneur, co-founder of Blueseed John DeLorean – engineer, founder of the DeLorean Motor Company Peter Georgescu – chairman emeritus of Young & Rubicam Michael Horodniceanu – engineer and businessman, former president of MTA Capital Construction Stephanie Korey – co-founder and executive chairman of Away David Marcus – former President of PayPal, and current Vice President of Messaging Products at Facebook Martin Bud Seretean – founder and former CEO of Coronet Industries
Anastasia Soare – CEO and founder of Anastasia Beverly Hills Christine Valmy – founded the first esthetician school in the United States Entertainment Actors Sadie Alexandru – actress Lauren Bacall – actress (Romanian Jewish mother) Bob Balaban – actor (part Romanian-Jewish descent) Jillian Bell – actress and screenwriter Tim Conway – actor and comedian (Romanian mother) Melinda Culea – actress Fran Drescher – actress and comedian (part Romanian-Jewish descent) Illeana Douglas – actress Jennifer Ehle – actress Lisa Ferraday – actress Ana Gasteyer – actress and comedian Joseph Gordon-Levitt – actor (part Romanian-Jewish descent) Oana Gregory – actress Dustin Hoffman – actor and filmmaker (part Romanian-Jewish descent) Harvey Keitel – actor and producer (Romanian Jewish mother) Tristan Leabu – actor Andrea Marcovicci – actress and singer Carl Reiner – comedian and actor (Romanian Jewish mother) Edward G. Robinson – actor (Romanian-Jewish descent) Sebastian Stan – actor Johnny Pacar – actor and singer David Pittu – actor Natalie Portman – Academy Award-winning actress (part Romanian-Jewish descent) Ray Wise – actor (Romanian mother) Adrian Zmed – actor best known from the T.J. Hooker television series. Screenwriters, directors and producers of film and theatre Stanley Kubrick – film director, screenwriter, producer and photographer. His paternal grandparents were of Romanian Jewish descent.
Jean Negulesco – film director and screenwriter Petru Popescu – Hollywood screenwriter and best-selling author Steve Sabol – film producer and one of the founders of NFL Films Andrei Șerban – director of theater and opera Erwin Stoff – film producer and founder of 3 Arts Entertainment Singers and musicians Herb Alpert – lead singer and horn player with the Tijuana Brass (Romanian-Jewish descent) Lucian Ban – jazz pianist and composer Laura Bretan – soprano Yeat – rapper Shelby Cinca – punk rock guitarist Sergiu Comissiona – conductor and musician Art Garfunkel – singer, poet, and actor (Romanian-Jewish descent) Angela Gheorghiu – soprano Alma Gluck – soprano Christina Grimmie – singer Harloe – singer Yolanda Marculescu – soprano Necro – hardcore rapper Margareta Paslaru – singer Beverly Sills – soprano Virginia Zeani – opera singer Sports Fred Arbanas (1939–2021) – American football player Scarlett Bordeaux (born 1991) – WWE wrestler Nadia Comăneci (born 1961) – Olympic gold medalist in gymnastics (defected to the US in 1989) Nick Denes (1906–1975) – American football and basketball coach Eric Ghiaciuc (born 1981) – American football player John Ghindia (1925–2012) – American football player and coach Bill Goldberg (born 1966) – American football player and undefeated wrestler (Romanian-Jewish descent) Hroniss Grasu (born 1991) – American football player Hank Greenberg (1911–1986) – Baseball Hall of Famer (Romanian-Jewish descent).
Lou Groza (1924–2000) – Pro Football Hall of Famer Ernie Grunfeld (born 1955) – basketball player and former general manager of the Washington Wizards Red Holzman (1920–1998) – NBA Hall of Fame coach and former player (Romanian Jewish mother) Sabrina Ionescu (born 1997) – basketball player for the New York Liberty; college honors at the University of Oregon include multiple national player of the year awards in 2019 and 2020 Fred Lebow (1932–1994) – founder of the New York City Marathon Dominique Moceanu (born 1981) – US Olympic gymnast Corina Morariu (born 1978) – former professional tennis player, reached the world No. 1 ranking in doubles in 2000 Gheorghe Mureșan (born 1971) – former NBA player; lives in USA Stephen Negoesco (1925–2019) – soccer coach Betty Okino (born 1975) – US Olympic gymnast Sam Paulescu (born 1984) – American football player Nick Roman (1947–2003) – American football player Dolph Schayes (1928–2015) – NBA Hall of Famer player and coach Bud Selig (born 1934) – Commissioner of Major League Baseball Charley Stanceu (1916–1969) – baseball player Mark Suciu (born 1992) – professional skateboarder Otmar Szafnauer (born 1964) – Chief Executive Officer and Team Principal of the Aston Martin F1 Team Kevin Youkilis (born 1979) – Major League Baseball player; first baseman for the Boston Red Sox Law Timothy Stanceu – Chief United States Judge of the United States Court of International Trade Media/Journalism Rukmini Callimachi – journalist, The New York Times Harry Caray – former baseball broadcaster Chip Caray – sports broadcaster for Fox Sports South Liz Claman – anchor of the Fox Business Network show Countdown to the Closing Bell Horace Dediu – technology journalist Steve Fainaru – investigative journalist and senior writer for ESPN.com and ESPN The Magazine John Florea – photojournalist for Life magazine Lisa Kennedy – host of the Kennedy show on the Fox Business Network Dan Moldea – author and investigative journalist George Puscas – sports 
reporter for the Detroit Free Press Marc Stein – sports reporter for The New York Times (Romanian-Jewish descent) Brian Unger – journalist and commentator for NPR Military Nicolae Dunca – captain in the 12th New York Infantry Regiment in the American Civil War George Pomutz – Brevet Brigadier General, commanded the 15th Iowa Infantry Regiment in the American Civil War Eugen Ghica-Comănești – captain in the 5th New York Volunteer Infantry in the American Civil War Alexander Vraciu – World War II Navy pilot; ace Politics Adrian Zuckerman – former United States Ambassador to Romania appointed by Donald Trump, first US Ambassador born in Romania Steven Fulop – current Mayor of Jersey City (Romanian-Jewish descent) Michael Gruitza – former Democratic member of the Pennsylvania House of Representatives Chris Lauzen – Illinois state senator of District 25 (1993–2013) John Rakolta – former United States Ambassador to the United Arab Emirates appointed by Donald Trump Elizabeth Tamposi – former Assistant Secretary of State for Consular Affairs at the U.S. Department of State (1989–1992) Religion John Michael Botean – Romanian Greek-Catholic bishop Nathaniel Popp – Romanian Orthodox Archbishop Vasile Louis Puscas – Romanian Greek-Catholic bishop Alexander Ratiu – former Romanian Greek-Catholic priest and author Valerian Trifa – former archbishop of Romanian Orthodox Church of America and Canada Sciences Rodica Baranescu – mechanical engineer known for her research in automotive diesel engines Adrian Bejan – mechanical engineer, Benjamin Franklin Medal Laureate George de Bothezat – engineer, businessman, and pioneer of helicopter flight Mircea Dincă – chemist Diana Fosha – psychologist Viviana Gradinaru – neuroscientist Dan Graur – scientist Liviu Librescu – material scientist, and professor of Engineering Science and Mechanics at Virginia Tech Horațiu Năstase – physicist and professor in the High energy physics group at Brown University in Providence, RI, USA.
Mihai Nadin – researcher in electrical engineering George Emil Palade – Nobel Prize-winning biologist Sergiu Pașca – scientist and physician at Stanford University Vasile Popov – leading systems theorist and control engineering specialist Gideon Rodan – biochemist and osteopath Nicholas Sanduleak – astronomer Computer science Andrei Alexandrescu – computer scientist, worked as a research scientist at Facebook Andrei Broder – computer scientist and distinguished scientist at Google Maria-Florina Balcan – computer scientist Mihaela Cardei – computer scientist and researcher, professor at Florida Atlantic University Flaviu Cristian – computer scientist, noted for his work in distributed systems Roxana Geambasu – computer scientist and associate professor of computer science at Columbia University Virgil Gligor – computer scientist, professor of Electrical and Computer Engineering at Carnegie Mellon University, inducted into the National Cybersecurity Hall of Fame Grigore Rosu – computer scientist, professor at the University of Illinois at Urbana-Champaign Daniela Rus – director of the MIT Computer Science and Artificial Intelligence Laboratory Ion Stoica – computer scientist, professor of computer science at the University of California, Berkeley and co-director of AMPLab Rada Mihalcea – computer scientist, professor of computer science and engineering at the University of Michigan Anca Mosoiu – computer scientist, credited with helping to build the tech industry in Oakland Fabian Pascal – computer scientist, consultant to large software vendors such as IBM, Oracle Corporation, and Borland Matei Zaharia – computer scientist, creator of Apache Spark
Writers John Balaban – poet Ion Cârja – writer and anti-communist activist Nina Cassian – poet, journalist, film critic Andrei Codrescu – poet, writer, radio host Thomas Pavel – literary theorist, critic, and novelist currently teaching at the University of Chicago Norman Manea – writer Valery Oișteanu – poet, art critic, writer, essayist, and photographer Virgil Nemoianu – essayist, literary critic, and philosopher of culture Michael Radu – political writer Saviana Stănescu – writer (poet, playwright) Dorin Tudoran – poet, writer, journalist Comics writers Will Eisner – comics writer, artist, and entrepreneur (Romanian-Jewish descent) Sandu Florea – illustrator, comic book, and comic strip creator Elena Kucharik – illustrator Stan Lee – comic book writer, editor, publisher and former president and chairman of Marvel Comics (Romanian-Jewish descent) Saul Steinberg – cartoonist and illustrator (Romanian-Jewish descent) Others George Barris – photographer best known for his photographs of Marilyn Monroe Alexandra Botez – chess player Catherine Caradja – philanthropist, aristocrat, Romanian expatriate to the U.S. Misha Gabriel – dancer and choreographer Serban Ghenea – mixing engineer, who has recorded and mixed tracks for artists including Adele, Stevie Wonder, Rod Stewart, Bruno Mars, Taylor Swift and many more Joe Oros – automobile designer for the Ford Motor Company Ion Mihai Pacepa – general of Securitate Mircea Răceanu – diplomat Vladimir Tismăneanu – specialist in political systems and comparative politics See also Romanian Canadians References American people of Romanian descent Romanian-American Romanian Americans Romanian American
63782538
https://en.wikipedia.org/wiki/Rachel%20Harrison%20%28computer%20scientist%29
Rachel Harrison (computer scientist)
Rachel Harrison is a British computer scientist and software engineer whose research interests include mobile apps and object-oriented design. She is a professor of computer science at Oxford Brookes University. Education and career Harrison has master's degrees in mathematics from the University of Oxford and in computer science from University College London, and a Ph.D. in computer science from the University of Southampton. Before joining Oxford Brookes, she was professor and head of the computer science department at the University of Reading, and a consultant in the computing industry. At Oxford Brookes University, she leads the Applied Software Engineering Group. In 2009, Harrison became editor-in-chief of the Software Quality Journal, a position she still holds as of 2020. References External links Year of birth missing (living people) Living people British computer scientists British women computer scientists British software engineers Alumni of the University of Oxford Alumni of University College London Alumni of the University of Southampton Academics of the University of Reading Academics of Oxford Brookes University
34334784
https://en.wikipedia.org/wiki/Neo%20%28keyboard%20layout%29
Neo (keyboard layout)
The Neo layout is an optimized German keyboard layout developed in 2004 by the Neo Users Group, supporting nearly all Latin-based alphabets, including the International Phonetic Alphabet, the Vietnamese language, and some Cyrillic languages. The positions of the letters are optimized not only for German letter frequency but also for typical groups of two or three letters; English is considered a major target as well. The design encourages alternating use of both hands to increase typing speed. It is based on ideas from de-ergo and other ergonomic layouts. The high-frequency keys are placed in the home row. The current layout, Neo 2.0, has unique features not present in other layouts, making it suited to many target groups such as programmers, mathematicians, scientists and LaTeX authors. Neo is grouped in different layers, each designed for a special purpose. Most special characters inherit the meaning of the lower layers. Neo uses a total of six layers with the following general use:
1. Lowercase characters
2. Uppercase characters, typographical characters
3. Special characters for programming, etc.
4. WASD-like movement keys and number block
5. Greek characters
6. Mathematical symbols and Greek uppercase characters
Concept Facilitating ten-finger typing On the basis of the statistical distribution of letters in the German language and research on ergonomics, the Neo keyboard layout aims to shorten finger movements while typing. The most common letters therefore lie on the home row under the fast index and middle fingers. This allows more words to be typed without leaving the home row than other keyboard layouts do. For an average German-language text, 63% of all letters can be typed with the fingers on the home row, in contrast to 25% in the usual QWERTZ layout.
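The six-layer scheme can be pictured as a lookup from the held modifiers and the physical key to a character. The table below is a made-up sample to show the mechanism; it is not the real Neo key assignment.

```python
# Which layer a combination of held modifiers selects (sketch; the
# modifier combinations are illustrative).
LAYER_BY_MODS = {
    frozenset(): 1,                   # lowercase
    frozenset({"Shift"}): 2,          # uppercase, typography
    frozenset({"Mod3"}): 3,           # programming characters
    frozenset({"Mod4"}): 4,           # navigation keys / number block
    frozenset({"Shift", "Mod3"}): 5,  # Greek characters
    frozenset({"Mod3", "Mod4"}): 6,   # maths, Greek uppercase
}

# One character per layer for each physical key (sample values only).
KEYMAP = {
    "E": ("e", "E", "}", "▲", "ε", "∃"),
}

def translate(key, held):
    """Map a physical key plus held modifiers to the emitted character."""
    layer = LAYER_BY_MODS[frozenset(held)]
    return KEYMAP[key][layer - 1]
```

Each physical key thus carries up to six characters, and the modifiers merely select which row of the table is read.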
In addition, with Neo the hands should alternate as often as possible during typing and be used evenly; the QWERTZ keyboard layout is heavily left-weighted. The drafting of the letter positions took into account the experience from other keyboard layout reforms. Instead of pursuing a purely mathematical or purely experimental path, Neo combines the insights of both, with the goal of improving both the ergonomics and the memorability of the keyboard layout. Layers Neo 2 has a total of six levels. The first two levels correspond to the German lowercase and uppercase letters and can be reached, as in usual layouts, by shifting. The third level is reached via the Mod3 key, which under QWERTZ corresponds to the Caps Lock key and the # key, and contains common punctuation and special characters. Bigrams and trigrams that are commonly used in programming, in wikis, when chatting, or on the command line of common operating systems were taken into account in the design of this level. The fourth level is reached via the Mod4 key, which under QWERTZ corresponds to the Alt Gr key and the < key; it contains a numeric keypad and important navigation keys, so you do not have to take your hands off the main field to navigate in a text document. By making the navigation keys accessible on the main field, Neo also addresses the criticism of reform keyboards that text editors like Vim would be more difficult to use. This level can be locked just like the second one. Levels five ( Shift + Mod3 ) and six ( Mod3 + Mod4 ) finally contain lowercase and uppercase Greek letters as well as other mathematical and scientific signs. Character variety and typography Neo allows the writing of virtually all languages with a Latin-based alphabet, in particular because of its dead keys and additional Compose combinations, of which Neo brings many of its own.
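The dead-key and Compose mechanisms just mentioned amount to small lookup tables applied to the following keystrokes. The entries below are a few illustrative samples, not Neo's full set.

```python
# A dead key emits nothing itself; it transforms the next character.
DEAD_ACUTE = {"a": "á", "e": "é"}
DEAD_TURN = {"a": "ɐ", "e": "ǝ"}       # the "turn" dead key ↻

# Compose maps a whole sequence of keys to one character.
COMPOSE = {("=", "⊂"): "⊆"}            # Compose, =, ⊂ → subset-or-equal

def apply_dead_key(table, char):
    """Combine a dead key's diacritic with the next character typed."""
    return table.get(char, char)       # unknown combinations pass through

def compose(seq):
    """Resolve a Compose sequence, falling back to the literal keys."""
    return COMPOSE.get(tuple(seq), "".join(seq))
```

Because the tables are pure data, adding another diacritic or Compose sequence is an addition to the table rather than a change to the typing logic.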
The dead keys are located at the top left and top right of the keyboard; pressing one combines the following character with the corresponding diacritic. Thus not only grave, acute and circumflex, but also many other diacritics such as the ring (kroužek), breve and macron are possible, including the novel dead key "turn" ↻, which, for example, turns the character a into ɐ. Together with the fifth level, Neo can be used to write Greek as well as International Phonetic Alphabet symbols. Nevertheless, Neo is clearly designed for the German language; for others, changes to the layout are necessary. Furthermore, useful Unicode characters that would otherwise require consulting a character table, or would otherwise not be easy to reach, were placed on the keyboard. These characters include the typographical quotation marks („…“), the dash (–), the real apostrophe (') and the chevrons commonly used in books and newspapers (»…«). In addition, the capital ẞ, standardized in June 2008, is also available. Mathematics and special characters On levels five and six one reaches the Greek letters and numerous characters required for typesetting formulas, for example symbols for sets ( , , ∩ , ∪ , ⊂ ), logic ( ¬ , ∨ , ∧ , ⇔ ), derivatives ( ∂ , ∇ ), and many more. By means of the Compose key, for example, the sequence Compose + = + ⊂ can be used to generate the subset-or-equal symbol ⊆. In addition, the following characters are available with the keyboard layout: biological characters ( ♀ , ♂ , ⚥ ), arrows (↦, ←), physical constants ( ℏ ) and graphic symbols (✔, ✘, ☺). Typing speed Another goal was to increase typing speed by shortening the average finger movement; however, there has been no scientific research on this. Some users report faster speeds with Neo, although others see no speed advantage over other keyboard layouts such as QWERTZ or Dvorak. Genesis The initial version 1 was introduced in 2004 by Hanno Behrens on the mailing list of the de-ergo keyboard.
The name Neo is a recursive acronym that originally stood for "NEO Ergonomic Oops"; later the interpretation was changed to "Neo ergonomically optimized". The design considered the experience of the Dvorak keyboard layout (around 1932), the ergonomic layout of Helmut Meier (1954) and some later investigations, as well as attempts to have an ideal layout calculated by algorithms alone. Instead of treading a purely mathematical or purely experimental path, as previous ergonomic layouts did, Neo takes both kinds of findings into account and combines them with consideration of ergonomics and a quickly memorable arrangement of the keys. Thus, Neo relies on the one hand on statistical surveys, in particular the distribution of letters in German and other languages, and on the other hand on studies of ergonomics by Walter Rohmert, the MARSAN Institute (1979) and Malt (1977). With Neo 1.1 in 2005, thought was given to how to arrange the keys that are often needed when programming: brackets and special characters were placed on the main field, reachable with the help of the Mod3 key, which corresponds to the Caps Lock key and the # key on QWERTZ, and the Mod4 key, which corresponds to the Alt Gr key and the < key on QWERTZ. Neo 2 Release 2, released on March 29, 2010, introduced a number of fundamental changes: In the main level, the keys X, J and Q were swapped cyclically. The X was placed on the left hand so that the frequently used key combinations for the commands "Cut", "Copy" and "Paste" are on one hand. The special character level 3 was completely reworked, as the corresponding shift keys became more accessible. The higher levels 4–6 were introduced. Platforms Since late 2006, Neo has been included in Linux as a variant of the German keyboard layout for the X Window System X.Org in all current distributions. Drivers for common platforms, including Linux, Windows, Mac OS, BSD and Solaris, can be downloaded from the project page.
In addition, free learning software is available for Linux, Windows and Mac OS; the neo-learning software is an official part of the KTouch project. Under ChromeOS, Neo can be found in the German language settings. Google's Gboard Keyboard for Android supports Neo2. References Latin-script keyboard layouts
67806114
https://en.wikipedia.org/wiki/Chris%20Hacker
Chris Hacker
Chris Hacker (born November 15, 1999) is an American professional stock car racing driver. He competes part-time in the NASCAR Camping World Truck Series, driving the No. 34 Toyota Tundra for Reaume Brothers Racing. Racing career Early career Hacker started his racing career in 2008, when he was 8 years old, racing in Quarter Midgets, where he ranked 2nd in his home state. From 2009 to 2011, he competed in the Bandolero Bandits division, winning the Indiana state championship in 2010 and 2011. After two championship seasons in Bandoleros, he moved up to INEX Legend Cars in 2012, where he won the Indianapolis Speedrome championship in the Young Lions category. In 2013, he moved up from Legend Cars to the Champion Racing Association Sportsman category, where he became the youngest driver ever to win a CRA event, at age 13. He returned to the CRA for 2014, competing in the JEGS All-Star category, where he won the Sportsman of the Year Award. ARCA Menards Series West In 2020, Hacker signed with Fast Track Racing, in collaboration with Cram Racing Enterprises, for one race in the ARCA Menards Series West, the Arizona Lottery 100 at ISM Raceway. He started 24th and finished 15th. ARCA Menards Series On August 28, 2020, Hacker announced on Twitter that he would be competing part-time in the NASCAR Camping World Truck Series in 2021 for an unspecified number of races, and in two ARCA Menards Series races for Cram Racing Enterprises starting at Daytona. On January 11, 2021, Hacker tested positive for COVID-19 and was forced to quarantine, missing the ARCA test session at Daytona. Missing the crucial test session eliminated Hacker from competing at Daytona, so he instead made his ARCA Menards Series debut at Charlotte Motor Speedway in the 2021 General Tire 150. Hacker started 12th and finished 10th, his first career top 10 in his first ARCA Menards Series start.
NASCAR Camping World Truck Series On June 1, 2021, Hacker announced on Twitter that he would run his first truck race with Cram Racing Enterprises at Nashville Superspeedway on June 18. However, two days later, he announced that he and Cram Racing Enterprises had parted ways due to "unforeseen circumstances." Hacker made his Camping World Truck Series debut on August 20, 2021, at World Wide Technology Raceway at Gateway in the Toyota 200, driving the No. 34 truck for Reaume Brothers Racing. Hacker started strong but had an oil line problem which put him multiple laps down; he started 31st and finished 27th. On September 7, 2021, it was announced that Hacker would drive two races for Niece Motorsports, at Las Vegas and Martinsville. Hacker remained with Reaume for the 2022 season. Personal life Hacker is known as the first NASCAR driver born with a brachial plexus injury (nerve damage) and has weak and limited movement in his left arm. He has had three surgeries and years of occupational and physical therapy, and still struggles with arm movement. Hacker has raised money that has helped pay for nearly 50 kids to attend a brachial plexus injury camp, which allows kids to meet others with the same disability, support each other, and hear inspirational stories from others who have lived with the injury; Hacker was one of those speakers in 2014. After his first top 10 in the ARCA Menards Series, fans on Twitter showed their support for the young driver, and Hacker named the newfound fanbase "HackNation". Motorsports career results NASCAR Camping World Truck Series ARCA Menards Series
* – Most laps led. ** – All laps led.) ARCA Menards Series West References External links Living people 1999 births Racing drivers from Indiana NASCAR drivers ARCA Menards Series drivers Sportspeople from Indiana
47007515
https://en.wikipedia.org/wiki/Apple%20News
Apple News
Apple News is a news aggregator app developed by Apple Inc., for its iOS, iPadOS, watchOS, and macOS operating systems. The iOS version was launched with the release of iOS 9. It is the successor to the Newsstand app included in previous versions of iOS. Users can read news articles with it, based on publishers, websites and topics they select, such as technology or politics. History The app was announced at Apple's WWDC 2015 developer conference. It was released alongside the iOS 9 release on September 16, 2015, for the iPhone, iPod Touch and iPad. At launch, the app was only available to users in the United States, but within a month had become available to users in Australia and the United Kingdom. It was reported in 2014 that Apple Inc. had acquired the Netherlands-based digital magazine company Prss, developers of an application that simplified the creation of iPad-compatible magazines using a WYSIWYG editor that didn't require any knowledge of code. Prss was seen as a magazine version of iBooks Author. The idea for Prss came after entrepreneur Michel Elings and longtime travel writer and photographer Jochem Wijnands designed their own iPad publication called TRVL. The Prss invention became what is now 'Apple News.' On June 13, 2016, during the keynote address at WWDC 2016, it was revealed that with the forthcoming iOS 10 update the News app would undergo new icon and app redesigns along with an improved For You section organized by topics. Furthermore, it was announced that there would be support for paid subscriptions for certain news sources and publishers as well as an opt-in system for breaking news notifications and email on top news stories. On June 4, 2018, during the WWDC 2018 keynote address, Apple announced that the Apple News app would be ported to macOS and be available to users in Australia, United Kingdom, and United States starting in macOS 10.14. 
The app is installed by default in every region but is not made visible to users outside those three regions. Users can still open it using various workarounds. In February 2019, Digiday reported that publishers were frustrated over the platform's lack of revenue, despite seeing steady growth in audience over the past year. In March 2019, Apple added support for Canada, and added a paid subscription version. In July 2020, Apple added a new Audio tab for US News+ subscribers, as well as support for CarPlay. App The Apple News app works by pulling in news stories from the web through various syndication feeds (Atom and RSS) or from news publishing partners through the JSON-based Apple News Format. Any news publisher can submit their content for inclusion in Apple News, and users can add any feed through the Safari web browser. Stories added through Safari are displayed via the in-app web browser included with the app. News is fetched from publishers' websites through the AppleBot web crawler, which fetches feeds as well as web pages and images for the Apple News service. It has received criticism for being poorly behaved and not fault tolerant, resulting in high loads on websites. The Apple News version distributed with iOS 9 made it hard to differentiate traffic originating from within the app from traffic originating from other apps. Apple News version 2, introduced in iOS 10, began identifying itself using its own User-Agent string, making it possible to measure the reach of Apple News using web analytics solutions. Traffic analytics was previously only available to paying publisher partners through iAds. At WWDC 2018, Apple announced that the News app would be available in macOS Mojave. On March 25, 2019, iOS 12.2 was released with the updated News app that introduced subscriptions through Apple's "Apple News+" service, which was announced on the same day. 
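A web-analytics pipeline could take advantage of the User-Agent behaviour described above. The following is a hedged Python sketch: the sample User-Agent strings and the exact token format are assumptions for illustration, and the only premise taken from the text is that Apple News 2 and later identify themselves with their own User-Agent string.

```python
def is_apple_news(user_agent: str) -> bool:
    """Heuristically classify a request as Apple News in-app traffic.

    Assumption: since version 2 (iOS 10) the app's User-Agent contains
    an "AppleNews" product token. The exact format varies by release,
    so this is a heuristic sketch, not an official Apple rule.
    """
    return "applenews" in user_agent.lower()

# Hypothetical User-Agent strings, for illustration only.
print(is_apple_news("AppleNews/409 Version/2.0"))            # True
print(is_apple_news("Mozilla/5.0 (iPhone; CPU iPhone OS)"))  # False
```

Such a check is only usable for traffic from iOS 10 onward; iOS 9 app traffic remains indistinguishable, as noted above.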
The icon for Apple News also changed, putting the N in the icon front and center with a slightly changed design. Apple News+ On March 25, 2019, Apple announced Apple News+, a subscription-based service allowing access to content from over 300 magazines, as well as selected newspapers. The service was preceded by the digital media subscription app Texture, which Apple acquired in 2018. The Wall Street Journal, one of the newspapers available through Apple News+, will reconfigure its services to offer more articles for casual readers. It will not actively display business-intensive articles through the Apple platform, though they will still be available by searching through a three-day archive. On July 15, 2020, Apple announced the addition of audio stories in Apple News+, which allows subscribers to listen to narrated versions of articles in a similar fashion to a podcast under a new Audio tab. On September 15, 2020, Apple announced that Apple News+ would be bundled in the Premier package of Apple One alongside iCloud, Apple Music, Apple Arcade, Apple TV+ and Apple Fitness+. See also Google News Google Play Newsstand MSN News Flipboard References External links Apple News Publisher Resources News Preview app for publishers IOS-based software made by Apple Inc. IOS software 2015 software Apple Inc. services
16938133
https://en.wikipedia.org/wiki/ISO/IEC%2027001%20Lead%20Auditor
ISO/IEC 27001 Lead Auditor
The ISO/IEC 27001 Lead Auditor certification is a professional certification for auditors specializing in information security management systems (ISMS) based on the ISO/IEC 27001 standard and ISO/IEC 19011. The training of lead auditors normally includes a classroom or online training and exam portion, a requirement to have performed a number of ISO/IEC 27001 audits, and a number of years of information security experience. The training course can be provided by any organisation wishing to deliver it. Some ISO 27001 Lead Auditor training courses are formally accredited by training accreditation bodies such as IRCA and PECB. Attending the course and passing the exam is not sufficient for an individual to use the credentials of Lead Auditor, as professional and audit experience is also required. The specific requirements to obtain a certificate stating the qualification of "ISO 27001 Lead Auditor" vary depending on the organisation issuing the certificate. The course usually consists of around forty hours (four days) of training and a final exam on the fifth day. This certification is different from the ISO/IEC 27001 Lead Implementer certification, which is targeted at information security professionals who want to implement the ISO/IEC 27001 standard rather than audit it. Most of the five-day ISO 27001 Lead Auditor courses require some prerequisite knowledge of ISO 27001, but the content of the courses varies considerably. If an individual wants to issue an ISO/IEC 27001 certificate of compliance, the audit must be done by a Lead Auditor working for an accredited certification body and conducted under all the rules of that certification body, which in turn must adhere to ISO/IEC 17021 and ISO/IEC 27006. The main benefit of achieving the ISO/IEC 27001 Lead Auditor certification is the recognition that the individual has demonstrated skills in the topic. 
The main ISO/IEC 27001 auditor certifications normally follow these designations: Provisional ISMS Auditor ISMS Auditor/Internal Auditor Lead ISMS Auditor External links ISO 27001
30428589
https://en.wikipedia.org/wiki/1964%20USC%20Trojans%20football%20team
1964 USC Trojans football team
The 1964 USC Trojans football team represented the University of Southern California (USC) in the 1964 NCAA University Division football season. In their fifth year under head coach John McKay, the Trojans compiled a 7–3 record (3–1 against conference opponents), finished in a tie with Oregon State for the Athletic Association of Western Universities (AAWU or Pac-8) championship, and outscored their opponents by a combined total of 207 to 130. The Trojans ended their season with an upset victory over an undefeated Notre Dame that was ranked #2 in the AP Poll. Quarterback Craig Fertig was one of the team's two captains and led the team in passing, completing 109 of 209 passes for 1,671 yards with 11 touchdowns and 10 interceptions. Mike Garrett led the team in rushing with 217 carries for 948 yards and nine touchdowns. Rod Sherman led the team in receiving yardage with 24 catches for 446 yards and five touchdowns. Schedule Game summaries Notre Dame References USC USC Trojans football seasons Pac-12 Conference football champion seasons USC Trojans football
16094885
https://en.wikipedia.org/wiki/VHD%20%28file%20format%29
VHD (file format)
VHD (Virtual Hard Disk) and its successor VHDx are file formats representing a virtual hard disk drive (HDD). They may contain what is found on a physical HDD, such as disk partitions and a file system, which in turn can contain files and folders. They are typically used as the hard disk of a virtual machine, are built into modern versions of Windows, and are the native file format for Microsoft's hypervisor (virtual machine system), Hyper-V. The format was created by Connectix for their Virtual PC product, known as Microsoft Virtual PC since Microsoft acquired Connectix in 2003. VHDx was introduced in Windows 8/Windows Server 2012 to add features and flexibility missing in VHD that had become apparent over time. Since June 2005, Microsoft has made the VHD and VHDx Image Format Specifications available to third parties under the Microsoft Open Specification Promise. Features A Virtual Hard Disk allows multiple operating systems to reside on a single host machine. This method enables developers to test software on different operating systems without the cost or hassle of installing a second hard disk or partitioning a single hard disk into multiple volumes. The ability to directly modify a virtual machine's hard disk from a host server supports many applications, including: Moving files between a VHD and the host file system Backup and recovery Antivirus and security Image management and patching Disk conversion (physical to virtual, and vice versa) Life-cycle management and provisioning (re) VHDX was added in Hyper-V in Windows Server 2012 to add larger storage capacity, data corruption protection, and optimizations to prevent performance degradation on large-sector physical disks. Supported formats VHDs are implemented as files that reside on the native host file system. The following types of VHD formats are supported by Microsoft Virtual PC and Virtual Server: Fixed hard disk image: a file that is allocated to the size of the virtual disk. 
Fixed VHDs consist of a raw disk image followed by a VHD footer (512 or formerly 511 bytes). Dynamic hard disk image: a file that at any given time is as large as the actual data written to it, plus the size of the header and footer. Dynamic and differencing VHDs begin with a copy of the VHD footer (padded to 512 bytes), and for dynamic or differencing VHDs created by Microsoft products this results in a VHD-cookie string at the beginning of the VHD file. Differencing hard disk image: a set of modified blocks (maintained in a separate file referred to as the "child image") in comparison to a parent image. The Differencing hard disk image format allows the concept of Undo Changes: when enabled, all changes to a hard drive contained within a VHD (the parent image) are stored in a separate file (the child image). Options are available to undo the changes to the VHD, or to merge them permanently into the VHD. Different child images based on the same parent image also allow "cloning" of VHDs; at least the globally unique identifier (GUID) must be different. Linked to a hard disk (aka pass-through): a file that contains a link to a physical hard drive or partition of a physical hard drive. Advantages Significant benefits result from the ability to boot a physical computer from a virtual hard drive: Ease of deployment: IT organizations can deploy standardized, 'pre-built' configurations on a single VHD. As an example, software engineering organizations that need a specific set of tools for a particular project could simply 'pull' the appropriately-configured VHD from a network location. Backup-and-Restore: Changes to the contents of a VHD (such as infection by a virus, or accidental deletion of critical files) are easily undone. 
Multi-User Isolation: Many current operating systems support having multiple users, but offer varying degrees of protection between them (e.g., one user of the OS could become infected by a virus that infects other users, or make changes to the OS that affect other users). By giving each user their own version of the operating system—say, by creating for each of them a differencing VHD based on a base installation of the OS—changes to any particular child image would have no effect on any of the other child images. Native VHD Boot Native VHD Boot refers to the ability of a physical computer to mount and boot from an operating system contained within a VHD. Windows 7 Enterprise and Ultimate editions support this ability, both with and without a host operating system present. Windows Server 2008 R2 is also compatible with this feature. Limitations The VHD format has a built-in limitation of just under 2 TiB (2040 GiB) for the size of any dynamic or differencing VHDs. This is because the sector offset table only allows for 32-bit quantities; the limit is calculated by multiplying 2³² by 512 bytes per sector. The formula in the VHD specification allows a maximum of 65535 × 16 × 255 sectors, about 127 GiB, which is also the limit for VHDs in Windows Virtual PC. For fewer than 65535 × 16 × 63 sectors (about 31 GiB) the CHS value in the VHD footer uses a minimum of 4 and a maximum of 16 heads with 17, 31, or 63 sectors per track; the CHS algorithm then determines the number of cylinders. The specification does not discuss cases where the CHS value in the VHD footer does not match the (virtual) CHS geometry in the Master Boot Record of the disk image in the VHD. Microsoft Virtual Server (also Connectix-derived) has this limitation when using virtual IDE drivers, but allows 2 TiB if virtual RAID or virtual SCSI drivers are used. Software support The Virtual Hard Disk format was initially used only by Microsoft Virtual PC (and Microsoft Virtual Server). 
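The footer layout and the size limits described above can be checked with a short sketch. The field offsets follow Microsoft's published VHD Image Format Specification (cookie "conectix" in the first 8 bytes, a big-endian disk-type field at offset 60); the synthetic footer built below is illustrative only, not a complete valid image.

```python
import struct

SECTOR = 512  # bytes per sector, as assumed throughout the VHD format

# Disk-type codes from the published VHD specification.
DISK_TYPES = {2: "fixed", 3: "dynamic", 4: "differencing"}

def vhd_disk_type(footer: bytes) -> str:
    """Read the disk type from a 512-byte VHD footer."""
    if len(footer) != 512 or footer[:8] != b"conectix":
        raise ValueError("not a VHD footer")
    (disk_type,) = struct.unpack_from(">I", footer, 60)
    return DISK_TYPES.get(disk_type, "unknown")

# Synthetic footer for illustration (all other fields left zeroed).
footer = bytearray(512)
footer[:8] = b"conectix"
struct.pack_into(">I", footer, 60, 3)
print(vhd_disk_type(bytes(footer)))              # dynamic

# The size limits follow directly from the format's addressing:
print(2**32 * SECTOR == 2 * 2**40)               # True: 32-bit sectors cap at 2 TiB
print(round(65535 * 16 * 255 * SECTOR / 2**30))  # 127 (GiB, the max CHS geometry)
```

The arithmetic confirms the two figures given in the text: 32-bit sector offsets yield the 2 TiB ceiling, and the maximum encodable CHS geometry yields the roughly 127 GiB Virtual PC limit.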
Later, however, Microsoft used the VHD format in Hyper-V, the hypervisor-based virtualization technology of Windows Server 2008. Microsoft also used the format in Complete PC Backup, a backup software component included with Windows Vista and Windows 7. In addition, Windows 7 and Windows Server 2008 R2 include support for creating, mounting, and booting from VHD files. The Vista (or later) drive manager GUI supports a subset of the functions in the diskpart command line tool. VHDs, known as vdisks in diskpart, can be created, formatted, attached (mounted), detached (unmounted), merged (for differencing VHDs), and compacted (for VHDs on an NTFS host file system). Compacting is typically a two-step procedure: first, unused sectors in the VHD are filled with zeros from within the guest, then the compact operation removes the all-zero blocks from the file. The virtual machine additions in older VPC versions and the virtual machine integration features in Windows Virtual PC contain precompact ISO images for the first step in supported guest systems. Third-party products also use the VHD file format. Oracle VirtualBox, part of the Sun xVM line of Sun Microsystems, supports VHD in versions 2 and later. In 2017 Red Gate Software and Windocks introduced VHD-based support for SQL Server database cloning. Offline modification It is sometimes useful to modify a VHD file without booting an operating system. Hyper-V features offline VHD manipulation, providing administrators with the ability to securely access files within a VHD without having to instantiate a virtual machine. This provides administrators with granular access to VHDs and the ability to perform some management tasks offline. The Windows Disk Management MMC plugin can directly mount a VHD file as a drive letter in Windows 7/Server 2008 and newer. For situations where mounting a VHD within the operating system is undesirable, several programs enable software developers to inspect and modify VHD files, including .NET DiscUtils, WinImage, and R1soft Hyper-V VHD Explorer. 
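The diskpart vdisk operations mentioned above take roughly the following form on Windows 7 / Server 2008 R2 and later; the file path and size used here are illustrative only.

```
REM Create a 20 GB dynamic (expandable) VHD, mount it, and later compact it.
create vdisk file="C:\images\test.vhd" maximum=20480 type=expandable
select vdisk file="C:\images\test.vhd"
attach vdisk
REM ... partition, format and use the mounted disk, then:
detach vdisk
compact vdisk
```

As noted in the text, compacting only shrinks blocks that have been zeroed, so it is normally preceded by zero-filling unused sectors from within the guest.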
7-Zip supports extraction and inspection of VHD files. Virtual Floppy Disk (VFD) Virtual Floppy Disk (VFD) is a related file format used by Microsoft Virtual PC, Microsoft Automated Deployment Services and Microsoft Virtual Server 2005. A VFD that contains an image of a 720 KB low-density, 1.44 MB high-density or 1.68 MB DMF 3.5-inch floppy disk can be mounted by Virtual PC. Other virtual machine software such as VMware Workstation and VMware Player can mount raw floppy images in the same way. Windows Virtual PC for Windows 7 (version 6.1) does not offer a user interface for manipulating virtual floppy disks; however, it still supports physical and virtual floppy disks via scripting. Under Hyper-V, VFD files are usable through the VM settings for Generation 1 virtual machines. Generation 2 virtual machines do not emulate a floppy controller and do not support floppy disk images. Virtual Hard Disk (VHDX) VHDX (Virtual Hard Disk v2) is the successor format to VHD. Where VHD has a capacity limit of 2040 GiB, VHDX has a capacity limit of 64 TB. For disk images with this newer format the filename extension vhdx is used instead of vhd. VHDX protects against power failures and is used by Hyper-V. VHDX can be mounted like VHD. See also VMDK qcow Virtual disk image Apple Disk Image VHD Set References Disk images
2694933
https://en.wikipedia.org/wiki/Environmental%20Campus%20Birkenfeld
Environmental Campus Birkenfeld
The Environmental Campus Birkenfeld (ECB) (German: Umwelt-Campus Birkenfeld (UCB)) is a branch of the Hochschule Trier in the state of Rhineland-Palatinate, Germany. It is close to the small town of Birkenfeld in Rhineland-Palatinate, near the borders with Saarland, Luxembourg, Belgium and France. There are 2,500 students enrolled in two departments, with a total of 59 professors teaching in them: Environmental Planning / Environmental Technology (Umweltplanung und -technik) and Environmental Business / Environmental Law (Umweltwirtschaft und -recht). Since the beginning of the academic year in October 2005, only bachelor's and master's students have been accepted, as part of the Bologna process. ECB is structured as a residential campus. It offers education, housing and employment all in one location, which is very rare in Germany. There are eight residential structures that provide housing for 777 students. As is characteristic of a university of applied sciences, the campus is structured so that students have the opportunity to receive an early introduction to the sciences. Students are encouraged to take part in the research projects underway on campus, and some programs are designed solely for student participation. One such specialized project is the traveling university, in which students travel to one of a selection of foreign countries to work on a project for a period of two weeks. Typical projects involve renewable energy sources and have been conducted in places as diverse as Opole (Poland) and Kunming (China); similar future projects will be located in Brazil. ECB is also the only university in Germany that uses renewable energy sources to generate its heat. History ECB has a short history compared with other universities. The campus area was originally a US Army hospital that was built as a contingency in the event of a third world war. 
Fortunately, it was never needed for that purpose. Near this location (Baumholder) was the 98th General Hospital, established in Neubrücke (a small town near Birkenfeld). The hospital had an area of 440,000 square metres when it opened in 1952. It had 1,000 beds as well as various medical specialty sections: surgery, dental medicine, orthopedics, radiology, rehabilitation, eye medicine and others. The hospital remained open through the 1970s and was closed in 1984 due to operational expenses. In 1994 the US Army formally gave up the site, and by 1996 the complex was empty. Environmental campus In 1996 the ECB was born. The first step was undertaken by the Landrat (county commissioner) of the Landkreis Birkenfeld (Birkenfeld County), Dr. Ernst Theilen. The academic founders were three professors: Prof. Dr. Michael Eulenstein, Prof. Dr.-Ing. Hanns Köhler and Prof. Dr. Marott Bronder, who offered the first courses taught at ECB. At first, there was a lot of construction underway on campus; the final phase of construction (the central new building) was completed in 2000. Study The attractive features of ECB are its modern methods of teaching, small class sizes and international contacts. As of the winter semester of 2005, there are 2,500 students enrolled. There are 14 bachelor's, 4 dual bachelor's and 12 master's degree programs offered. 
Bachelor degrees offered Applied Computer Science (Bachelor of Science) Bioprocess, Environmental and Process Engineering (Bachelor of Engineering) Bio- and Pharmaceutical Engineering (Bachelor of Science) Renewable Energies (Bachelor of Science) Mechanical Engineering: Product Development and Technical Planning (Bachelor of Engineering) Media Computer Science (Bachelor of Science) Engineering Physics (Bachelor of Engineering) Environmental Economics and Business Management (Bachelor of Arts) Environmental and Business Computer Science (Bachelor of Science) Industrial Engineering/Environmental Planning (Bachelor of Science) Economic and Environmental Law (LL. B.) Study Program "Environment and Technology" Study Semester "Principles of Sustainable Business" Sustainable Business and Technology (Bachelor of Engineering) The course of study for these degrees differs slightly from those offered in the United States; however, the German accreditation agency AQUAS will accept either. Dual Bachelor degrees offered Sustainable Resource Management (Bachelor of Arts) Production Technology (Bachelor of Engineering) Bio- and Pharmaceutical Engineering (Bachelor of Science) Environmental and Business Computer Science (Bachelor of Science) Master's degrees offered Applied Computer Science (Master of Science) Bioprocess and Process Engineering (Master of Science) Business Administration and Engineering (Master of Science) Digital Product Development: Mechanical Engineering (Master of Engineering) International Material Flow Management (Master of Science) International Material Flow Management (Master of Engineering) Insolvency and Reorganisation Law (LL. M.) Media Computer Science (Master of Science) Environmental and Business Economics (Master of Arts) Environmental Energy Technology (Master of Science) Business and Energy Law (LL. M.) 
Sustainable Change (Master of Arts) Research Although the environmental campus has only existed since 1996, it has already launched a set of institutes and competency centers built around the scientific curriculum. Some of these are already internationally recognized. Institutes IfaS Institut für angewandtes Stoffstrommanagement (Institute for Applied Material Flow Management) IBT Institut für Betriebs- und Technologiemanagement (Institute for Business and Technology Management) ISS (Institute for Software Systems) IMIP Institute for Micro Process Engineering and Particle Technology ZBF Zentrum für Bodenschutz and Flächenhaushaltspolitik (Centre for Soil Conservation and Territorial Budgetary Policies) BAQI Birkenfelder Institut für Ausbildung und Qualitätssicherung im Insolvenzwesen (Birkenfeld Institute for Training and Quality Management in Insolvency Management) IBioPD Institut für biotechnisches Prozessdesign (Institute for Biotechnical Process Design) IREK Institut für das Recht der Erneuerbaren Energien, Energieeffizienzrecht und Klimaschutzrecht (Institute for Renewable Energy Law, Energy Efficiency Law and Climate Protection Law) Competence Centers Fuel Cell Centre Rhineland-Palatinate Competence Centre 'Intelligent microstructured Particles' (KIMP) Competence Centre eGovernment & Environment Umberto Competence Center Birkenfeld (UCC) Centre for Environmental Communication at the ECB (ZUKUC) Partner universities China: Kunming University, Sichuan University US: Midwestern State University, Lander University, Clemson University, Warren-Wilson College, Washington University Brazil: Federal Institute of Goiás IFG External links Official Website Additional links IMAT - Master in International Material Flow Management Students' Union Universities of Applied Sciences in Germany Hunsrück Universities and colleges in Rhineland-Palatinate
1931049
https://en.wikipedia.org/wiki/Autodesk%20Softimage
Autodesk Softimage
Autodesk Softimage, or simply Softimage, is a discontinued 3D computer graphics application for producing 3D computer graphics, 3D modeling, and computer animation. Owned by Autodesk and formerly titled Softimage|XSI, the software was predominantly used in the film, video game, and advertising industries for creating computer-generated characters, objects, and environments. Released in 2000 as the successor to Softimage 3D, Softimage|XSI was developed by its eponymous company, then a subsidiary of Avid Technology. On October 23, 2008, Autodesk acquired the Softimage brand and 3D animation assets from Avid for approximately $35 million, thereby ending Softimage Co. as a distinct entity. In February 2009, Softimage|XSI was rebranded Autodesk Softimage. A free version of the software, called Softimage Mod Tool, was developed for the game modding community to create games using the Microsoft XNA toolset for PC and Xbox 360, or to create mods for games using Valve's Source engine, Epic Games's Unreal Engine and others. It was discontinued with the release of Softimage 2014. On March 4, 2014, it was announced that Autodesk Softimage would be discontinued after the release of the 2015 version, with product support provided until April 30, 2016. Overview Autodesk Softimage is a 3D animation application comprising a suite of computer graphics tools. Modeling tools allow the generation of polygonal or NURBS models. Subdivision modeling requires no additional operators and works directly on the polygonal geometry. Each modeling operation is tracked by a construction history stack, which enables artists to work non-destructively. Operators in history stacks can be re-ordered, removed or changed at any time, and all adjustments propagate to the final model. Control rigs are created using bones with automatic IK, constraints and specialized solvers like spine or tail. Optionally, the ICE system can be used to create light-weight rigs in a node-based environment. 
The rigging process can be sped up through the use of adaptable biped and quadruped rigs, FaceRobot for facial rigs and automatic lip syncing. Animation features include layers and a mixer, which allows combining animation clips non-linearly. Animation operators are tracked in a construction history stack that is separate from the modeling stack, enabling users to change the underlying geometry of already animated characters and objects. MOTOR is a feature that transfers animation between characters, regardless of their size or proportions. GATOR can transfer attributes such as textures, UVs, weight maps or envelopes between different models. Softimage also contains tools to simulate particles, particle strands, rigid body dynamics, soft body dynamics, cloth, hair and fluids. The default and tightly integrated rendering engine in Softimage is mental ray. Materials and shaders are built in a node-based fashion. When users activate a so-called render region in a camera view, it will render this section of the scene using the specified rendering engine and update completely interactively. A secondary rendering mode is available for rendering real-time GPU shaders written in either the Cg or HLSL languages. Also included is the FX Tree, which is a built-in node-based compositor that has direct access to image clips used in the scene. It can thus not only be used to finalize and composite rendered frames, but also as an integral part of scene creation. The FX Tree can be used to apply compositing effects to image clips being used in the fully rendered scene, allowing Softimage to render scenes using textures authored or modified in various ways within the same scene. In addition to the node-based ICE platform described below, Softimage has an extensive API and scripting environment that can be used to extend the software. The available scripting languages include C#, Python, VBScript and JScript. 
A C++ SDK is also available for plug-in developers, with online documentation available to the public. ICE Interactive Creative Environment On July 7, 2008, Softimage Co. announced Softimage|XSI 7, which introduced the ICE (Interactive Creative Environment) architecture. ICE is a visual programming platform that allows users to extend the capabilities of Softimage quickly and intuitively using a node-based dataflow diagram. This enables artists to create complex 3D effects and tools without scripting. Among the main uses for ICE are procedural modeling, deformation, rigging and particle simulation. It can also be used to control scene attributes without the need to write expressions, for example to add camera wiggle or make a light pulsate. ICE is a parallel processing engine that takes advantage of multi-core CPUs, giving users highly scalable performance. ICE represents Softimage functionality using a collection of nodes, each with its own specific capabilities. Users can connect nodes together, visually representing the data flow, to create powerful tools and effects. Softimage ships with several hundred nodes; among them are both low-level nodes, such as Multiply or Boolean, as well as a number of high-level nodes called compounds. Compounds serve as "wrapper nodes" to collapse ICE graphs into a single node. Softimage allows users to add custom compounds to its main menu system for easy reusability. As an example, consider a simple geometry deformation ICE graph: in a practical scenario, one would collapse the graph into a compound and expose important parameters, for instance the deformation intensity. After adding the tool to the user interface it can easily be applied to other objects. Compounds can also be shared between installations because their entire functionality is stored in XML files. 
The graph-based approach of ICE allows for the creation of effects previously attainable only through the use of scripting and/or compiled code. Due to its visual nature and interactivity, it is very accessible for users with no programming experience. Many free and commercial ICE tools have been made available by users and 3rd party developers. Softimage contains an ICE-based fluid and physics simulator called Lagoa as well as an ICE-based version of the Syflex cloth simulator. Industry usage Softimage was primarily used in the film, video game and advertising industries as a tool to generate digital characters, environments and visual effects. Examples of films and other media made with the help of Softimage are Jurassic Park, Thor, Predators, District 9, White House Down, Yakuza, and Elysium. Releases Autodesk Softimage 2015 released April 14, 2014 Autodesk Softimage 2014 released April 12, 2013 Autodesk Softimage 2013 released April 12, 2012 Autodesk Softimage 2012 SAP (Subscription Advantage Pack) released September 27, 2011 Autodesk Softimage 2012 released April 7, 2011 Autodesk Softimage 2011 SAP (Subscription Advantage Pack) released October 7, 2010 Autodesk Softimage 2011 released April 9, 2010 Autodesk Softimage 2010 released September 14, 2009 Autodesk Softimage 7.5 released February 20, 2009 References External links Softimage Mod Tool Autodesk discontinued products 3D computer graphics software for Linux 3D graphics software 3D animation software Discontinued software IRIX software Proprietary commercial software for Linux 2000 software
52684573
https://en.wikipedia.org/wiki/OMNeT%2B%2B
OMNeT++
OMNeT++ (Objective Modular Network Testbed in C++) is a modular, component-based C++ simulation library and framework, primarily for building network simulators. OMNeT++ can be used free of charge for non-commercial simulations, for example at academic institutions and for teaching. OMNEST is an extended version of OMNeT++ for commercial use. OMNeT++ itself is a simulation framework without models for network protocols such as IP or HTTP. The main computer network simulation models are available in several external frameworks. The most commonly used one is INET, which offers a variety of models for many kinds of network protocols and technologies, such as IPv6 and BGP. INET also offers a set of mobility models to simulate node movement in simulations. The INET models are licensed under the LGPL or GPL. NED (NEtwork Description) is the topology description language of OMNeT++. To manage and reduce the time needed to carry out large-scale simulations, additional tools have been developed, for example tools based on Python. See also MLDesigner QualNet NEST (software) References Computer networking Computer network analysis Simulation software Telecommunications engineering
605856
https://en.wikipedia.org/wiki/WordPress
WordPress
WordPress (WP, WordPress.org) is a free and open-source content management system (CMS) written in PHP and paired with a MySQL or MariaDB database. Features include a plugin architecture and a template system, referred to within WordPress as Themes. WordPress was originally created as a blog-publishing system but has evolved to support other web content types including more traditional mailing lists and forums, media galleries, membership sites, learning management systems (LMS) and online stores. One of the most popular content management system solutions in use, WordPress is used by 42.8% of the top 10 million websites. WordPress was released on May 27, 2003, by its founders, American developer Matt Mullenweg and English developer Mike Little, as a fork of b2/cafelog. The software is released under the GPLv2 (or later) license. To function, WordPress has to be installed on a web server, either as part of an Internet hosting service like WordPress.com, or on a computer running the software package WordPress.org in order to serve as a network host in its own right. A local computer may be used for single-user testing and learning purposes. Overview "WordPress is a factory that makes webpages" is a core analogy designed to clarify the functions of WordPress: it stores content and enables a user to create and publish webpages, requiring nothing beyond a domain and a hosting service. WordPress has a web template system using a template processor. Its architecture is a front controller, routing all requests for non-static URIs to a single PHP file which parses the URI and identifies the target page. This allows support for more human-readable permalinks. Themes WordPress users may install and switch among many different themes. Themes allow users to change the look and functionality of a WordPress website without altering the core code or site content. Every WordPress website requires at least one theme to be present.
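The front-controller routing described above can be sketched generically. The following is a minimal illustrative Python toy, not WordPress's actual PHP implementation, and the route table is hypothetical.

```python
# Minimal front-controller sketch: every non-static request goes
# through one entry point, which parses the URI and identifies the
# target page. (Illustrative only; routes are hypothetical.)

ROUTES = {
    "/": "Home page",
    "/about": "About page",
    "/blog/hello-world": "First post",
}

def front_controller(uri: str) -> str:
    # Strip the query string and normalize the trailing slash, then
    # look the path up -- this is what enables human-readable permalinks.
    path = uri.split("?", 1)[0].rstrip("/") or "/"
    return ROUTES.get(path, "404 Not Found")
```

Because every request funnels through one dispatcher, permalink schemes can change without moving any files around.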
Themes may be directly installed using the WordPress "Appearance" administration tool in the dashboard, or theme folders may be copied directly into the themes directory. WordPress themes are generally classified into two categories: free and premium. Many free themes are listed in the WordPress theme directory (also known as the repository), and premium themes are available for purchase from marketplaces and individual WordPress developers. WordPress users may also create and develop their own custom themes. Plugins WordPress' plugin architecture allows users to extend the features and functionality of a website or blog. WordPress.org has 59,756 plugins available, each of which offers custom functions and features enabling users to tailor their sites to their specific needs. However, this does not include the premium plugins that are available (approximately 1,500+), which may not be listed in the WordPress.org repository. These customizations range from search engine optimization (SEO), to client portals used to display private information to logged-in users, to content management systems, to content displaying features, such as the addition of widgets and navigation bars. Not all available plugins keep abreast of the upgrades, and as a result they may not function properly or may not function at all. Most plugins are available through WordPress itself, either via downloading them and installing the files manually via FTP or through the WordPress dashboard. However, many third parties offer plugins through their own websites, many of which are paid packages. Web developers who wish to develop plugins need to learn WordPress' hook system, which consists of over 2,000 hooks (as of Version 5.7 in 2021) divided into two categories: action hooks and filter hooks. Plugins also represent a development strategy that can transform WordPress into all sorts of software systems and applications, limited only by the imagination and creativity of the programmers.
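The two hook categories described above can be sketched with a toy registry. WordPress implements this in PHP (via add_action(), do_action(), add_filter() and apply_filters()); the Python below only mirrors the idea.

```python
# Toy sketch of action hooks vs. filter hooks (illustrative Python,
# not WordPress's PHP hook API).
from collections import defaultdict

_actions = defaultdict(list)
_filters = defaultdict(list)

def add_action(hook, callback):
    _actions[hook].append(callback)

def do_action(hook, *args):
    # Action hooks run side effects at a named point in execution;
    # their return values are ignored.
    for callback in _actions[hook]:
        callback(*args)

def add_filter(hook, callback):
    _filters[hook].append(callback)

def apply_filters(hook, value):
    # Filter hooks transform a value, each callback receiving the
    # previous callback's result.
    for callback in _filters[hook]:
        value = callback(value)
    return value
```

For example, registering `add_filter("the_title", str.upper)` makes `apply_filters("the_title", "hello")` return "HELLO"; a plugin is essentially a bundle of such registrations.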
These are implemented using custom plugins to create non-website systems, such as headless WordPress applications and Software as a Service (SaaS) products. Plugins can also be used by hackers targeting sites that use WordPress, since hackers can exploit bugs in the plugins themselves instead of bugs in WordPress itself. Mobile applications Phone apps for WordPress exist for WebOS, Android, iOS, Windows Phone and BlackBerry. These applications, designed by Automattic, offer options such as adding new blog posts and pages, commenting, moderating comments, replying to comments, and viewing site stats. Accessibility The WordPress Accessibility Team has worked to improve the accessibility of core WordPress as well as to support clear identification of accessible themes. The WordPress Accessibility Team provides continuing educational support about web accessibility and inclusive design. The WordPress Accessibility Coding Standards state that "All new or updated code released in WordPress must conform with the Web Content Accessibility Guidelines 2.0 at level AA." Other features WordPress also features integrated link management; a search engine–friendly, clean permalink structure; the ability to assign multiple categories to posts; and support for tagging of posts. Automatic filters are also included, providing standardized formatting and styling of text in posts (for example, converting regular quotes to smart quotes). WordPress also supports the Trackback and Pingback standards for displaying links to other sites that have themselves linked to a post or an article. WordPress posts can be edited in HTML, using the visual editor, or using one of a number of plugins that allow for a variety of customized editing features.
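The automatic text filters mentioned above, such as converting straight quotes to smart quotes, amount to pattern substitution over post text. A minimal sketch follows; WordPress's actual filter (wptexturize) handles many more typographic cases.

```python
# Minimal sketch of a quote-"texturizing" filter (illustrative only;
# it handles just paired straight double quotes).
import re

def smart_quotes(text: str) -> str:
    # "word" -> curly-quoted word, using Unicode left/right quotes.
    return re.sub(r'"([^"]*)"', '\u201c\\1\u201d', text)
```

Such filters run automatically on display, so stored post content can stay plain while the rendered page gets typographic quotes.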
Multi-user and multi-blogging Prior to version 3, WordPress supported one blog per installation, although multiple concurrent copies may be run from different directories if configured to use separate database tables. WordPress Multisite (previously referred to as WordPress Multi-User, WordPress MU, or WPMU) was a fork of WordPress created to allow multiple blogs to exist within one installation that can be administered by a centralized maintainer. WordPress MU makes it possible for those with websites to host their own blogging communities, as well as control and moderate all the blogs from a single dashboard. WordPress MS adds eight new data tables for each blog. As of the release of WordPress 3, WordPress MU has merged with WordPress. History b2/cafelog, more commonly known as b2 or cafelog, was the precursor to WordPress. b2/cafelog was estimated to have been installed on approximately 2,000 blogs as of May 2003. It was written in PHP for use with MySQL by Michel Valdrighi, who is now a contributing developer to WordPress. Although WordPress is the official successor, another project, b2evolution, is also in active development. WordPress first appeared in 2003 as a joint effort between Matt Mullenweg and Mike Little to create a fork of b2. Christine Selleck Tremoulet, a friend of Mullenweg, suggested the name WordPress. In 2004 the licensing terms for the competing Movable Type package were changed by Six Apart, resulting in many of its most influential users migrating to WordPress. By October 2009 the Open Source CMS MarketShare Report concluded that WordPress enjoyed the greatest brand strength of any open-source content management system. As of May 2021, WordPress is used by 64.8% of all the websites whose content management system is known. This is 41.4% of the top 10 million websites. Awards and recognition Winner of InfoWorld's "Best of open source software awards: Collaboration", awarded in 2008.
Winner of the Open Source CMS Awards' "Overall Best Open Source CMS", awarded in 2009. Winner of digital synergy's "Hall of Fame CMS" category in the 2010 Open Source Awards, awarded in 2010. Winner of InfoWorld's "Bossie award for Best Open Source Software", awarded in 2011. WordPress has a five-star privacy rating from the Electronic Frontier Foundation. Release history Main releases of WordPress are codenamed after well-known jazz musicians, starting from version 1.0. Although only the current release is officially supported, security updates are backported "as a courtesy" to all versions as far back as 3.7. WordPress 5.0 "Bebo" The December 2018 release of WordPress 5.0, "Bebo", is named in homage to the pioneering Cuban jazz musician Bebo Valdés. It included a new default editor, "Gutenberg", a block-based editor that allows users to modify their displayed content in a much more user-friendly way than prior iterations. Blocks are abstract units of markup that, composed together, form the content or layout of a web page. Past content that was created on WordPress pages is listed under what is referred to as a Classic Block. Prior to Gutenberg, there were several block-based editors available as WordPress plugins, e.g. Elementor. Following the release of Gutenberg, comparisons were made between it and those existing plugins. Classic Editor plugin The Classic Editor plugin was created as a result of user preferences, and helped website developers maintain past plugins only compatible with WordPress 4.9.8, giving plugin developers time to get their plugins updated and compatible with the 5.0 release. Having the Classic Editor plugin installed restores the "classic" editing experience that WordPress had up until the WordPress 5.0 release. The Classic Editor plugin will be supported at least until 2022. The Classic Editor plugin is active on over 5,000,000 installations of WordPress.
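The blocks described above compose into ordinary post content: Gutenberg serializes each block as HTML framed by comment delimiters. The sketch below illustrates that composition in Python; it is not WordPress code.

```python
# Illustration of block serialization in the spirit of the block
# editor described above (Gutenberg frames each block's HTML with
# comment delimiters; this sketch is not WordPress code).

def serialize_block(name: str, inner_html: str) -> str:
    return f"<!-- wp:{name} -->\n{inner_html}\n<!-- /wp:{name} -->"

def serialize_post(blocks) -> str:
    # A post's content is simply the blocks' serializations in order.
    return "\n\n".join(serialize_block(n, html) for n, html in blocks)
```

Because the delimiters are HTML comments, a serialized post still renders as plain HTML in contexts that know nothing about blocks.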
Vulnerabilities Many security issues have been uncovered in the software, particularly in 2007, 2008, and 2015. According to Secunia, WordPress in April 2009 had seven unpatched security advisories (out of 32 total), with a maximum rating of "Less Critical". Secunia maintains an up-to-date list of WordPress vulnerabilities. In January 2007, many high-profile search engine optimization (SEO) blogs, as well as many low-profile commercial blogs featuring AdSense, were targeted and attacked with a WordPress exploit. A separate vulnerability on one of the project site's web servers allowed an attacker to introduce exploitable code in the form of a back door to some downloads of WordPress 2.1.1. The 2.1.2 release addressed this issue; an advisory released at the time advised all users to upgrade immediately. In May 2007, a study revealed that 98% of WordPress blogs being run were exploitable because they were running outdated and unsupported versions of the software. In part to mitigate this problem, WordPress made updating the software a much easier, "one click" automated process in version 2.7 (released in December 2008). However, the filesystem security settings required to enable the update process can be an additional risk. In a June 2007 interview, Stefan Esser, the founder of the PHP Security Response Team, spoke critically of WordPress' security track record, citing problems with the application's architecture that made it unnecessarily difficult to write code that is secure from SQL injection vulnerabilities, as well as some other problems. In June 2013, it was found that some of the 50 most downloaded WordPress plugins were vulnerable to common Web attacks such as SQL injection and XSS. A separate inspection of the top-10 e-commerce plugins showed that seven of them were vulnerable. In an effort to promote better security, and to streamline the update experience overall, automatic background updates were introduced in WordPress 3.7. 
Individual installations of WordPress can be protected with security plugins that prevent user enumeration, hide resources and thwart probes. Users can also protect their WordPress installations by taking steps such as keeping the WordPress installation, themes, and plugins updated, using only trusted themes and plugins, and editing the site's .htaccess configuration file if supported by the web server to prevent many types of SQL injection attacks and block unauthorized access to sensitive files. It is especially important to keep WordPress plugins updated because would-be hackers can easily list all the plugins a site uses, and then run scans searching for any vulnerabilities against those plugins. If vulnerabilities are found, they may be exploited to allow hackers to, for example, upload their own files (such as a web shell) that collect sensitive information. Developers can also use tools to analyze potential vulnerabilities, including WPScan, WordPress Auditor and WordPress Sploit Framework developed by 0pc0deFR. These types of tools research known vulnerabilities, such as CSRF, LFI, RFI, XSS, SQL injection and user enumeration. However, not all vulnerabilities can be detected by tools, so it is advisable to check the code of plugins, themes and other add-ins from other developers. In March 2015, it was reported that the Yoast SEO plugin was vulnerable to SQL injection, allowing attackers to potentially execute arbitrary SQL commands. The issue was fixed in version 1.7.4 of the plugin. In January 2017, security auditors at Sucuri identified a vulnerability in the WordPress REST API that would allow any unauthenticated user to modify any post or page within a site running WordPress 4.7 or greater. The auditors quietly notified WordPress developers, and within six days WordPress released a high-priority patch to version 4.7.2, which addressed the problem.
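The plugin-scanning idea described above boils down to comparing a site's installed plugin versions against a list of known-vulnerable ones. The sketch below uses invented data; real scanners such as WPScan rely on curated vulnerability databases.

```python
# Sketch of a plugin-version audit (hypothetical data; the plugin
# slug and version numbers below are invented for illustration).

KNOWN_VULNERABLE = {
    # plugin slug -> latest version still affected (hypothetical)
    "example-seo": "1.7.3",
}

def parse_version(version: str):
    return tuple(int(part) for part in version.split("."))

def audit(installed: dict) -> list:
    """Return slugs whose installed version is at or below a
    known-vulnerable version."""
    findings = []
    for slug, version in installed.items():
        bad = KNOWN_VULNERABLE.get(slug)
        if bad is not None and parse_version(version) <= parse_version(bad):
            findings.append(slug)
    return findings
```

This also shows why keeping plugins updated matters: the attacker's side of the comparison is exactly the same lookup.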
As of WordPress 5.2, the minimum PHP version requirement is PHP 5.6, which was released on August 28, 2014, and which has been unsupported by the PHP Group and has not received any security patches since December 31, 2018. Thus, WordPress recommends using PHP version 7.3 or greater. In the absence of specific alterations to their default formatting code, WordPress-based websites use the canvas element to detect whether the browser is able to correctly render emoji. Because Tor Browser does not currently discriminate between this legitimate use of the Canvas API and an effort to perform canvas fingerprinting, it warns that the website is attempting to 'extract HTML5 canvas image data'. Ongoing efforts seek workarounds to reassure privacy advocates while retaining the ability to check for proper emoji rendering capability. Development and support Key developers Matt Mullenweg and Mike Little were co-founders of the project. The core lead developers include Helen Hou-Sandí, Dion Hulse, Mark Jaquith, Matt Mullenweg, Andrew Ozz, and Andrew Nacin. WordPress is also developed by its community, including WP testers, a group of volunteers who test each release. They have early access to nightly builds, beta versions and release candidates. Errors are documented in a special mailing list or the project's Trac tool. Though largely developed by the community surrounding it, WordPress is closely associated with Automattic, the company founded by Matt Mullenweg. On September 9, 2010, Automattic handed the WordPress trademark to the newly created WordPress Foundation, which is an umbrella organization supporting WordPress.org (including the software and archives for plugins and themes), bbPress and BuddyPress. WordCamp developer and user conferences WordCamps are casual, locally-organized conferences covering everything related to WordPress. The first such event was WordCamp 2006 in August 2006 in San Francisco, which lasted one day and had over 500 attendees.
The first WordCamp outside San Francisco was held in Beijing in September 2007. Since then, there have been over 1,022 WordCamps in over 75 cities in 65 different countries around the world. WordCamp San Francisco 2014 was the last official annual conference of WordPress developers and users taking place in San Francisco, having since been replaced by WordCamp US. Beginning in 2013 with WordCamp Europe, regional WordCamps covering wider geographical areas are held with the aim of connecting people who aren't already active in their local communities and inspiring attendees to start user communities in their hometowns. In 2019, the Nordic region had its own WordCamp Nordic. The first WordCamp Asia was to be held in 2020, but was cancelled due to the COVID-19 pandemic. Support WordPress' primary support website is WordPress.org. This support website hosts both WordPress Codex, the online manual for WordPress and a living repository for WordPress information and documentation, and WordPress Forums, an active online community of WordPress users. See also Weblog software List of content management systems WordPress.com References External links Automattic 2003 software Blog software Content management systems Free and open-source Android software Free content management systems Free software programmed in PHP Software forks Software using the GPL license Website management
3838115
https://en.wikipedia.org/wiki/Metasploit%20Project
Metasploit Project
The Metasploit Project is a computer security project that provides information about security vulnerabilities and aids in penetration testing and IDS signature development. It is owned by Boston, Massachusetts-based security company Rapid7. Its best-known sub-project is the open-source Metasploit Framework, a tool for developing and executing exploit code against a remote target machine. Other important sub-projects include the Opcode Database, shellcode archive and related research. The Metasploit Project includes anti-forensic and evasion tools, some of which are built into the Metasploit Framework. Metasploit is pre-installed in the Kali Linux operating system. History Metasploit was created by H. D. Moore in 2003 as a portable network tool using Perl. By 2007, the Metasploit Framework had been completely rewritten in Ruby. On October 21, 2009, the Metasploit Project announced that it had been acquired by Rapid7, a security company that provides unified vulnerability management solutions. Like comparable commercial products such as Immunity's Canvas or Core Security Technologies' Core Impact, Metasploit can be used to test the vulnerability of computer systems or to break into remote systems. Like many information security tools, Metasploit can be used for both legitimate and unauthorized activities. Since the acquisition of the Metasploit Framework, Rapid7 has added two open core proprietary editions called Metasploit Express and Metasploit Pro. Metasploit's emerging position as the de facto exploit development framework led to the release of software vulnerability advisories often accompanied by a third party Metasploit exploit module that highlights the exploitability, risk and remediation of that particular bug. Metasploit 3.0 began to include fuzzing tools, used to discover software vulnerabilities, rather than just exploits for known bugs. 
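Fuzzing, as mentioned above, means feeding a target randomized inputs and recording which ones crash it. The toy below is not Metasploit's tooling; the target function and its bug are invented for illustration.

```python
# Toy illustration of fuzzing: generate random inputs, run the
# target, and harvest the inputs that raise exceptions.
import random
import string

def fragile_parser(data: str) -> int:
    # Deliberately buggy target: chokes on any "%" character.
    if "%" in data:
        raise ValueError("unhandled format character")
    return len(data)

def fuzz(target, runs: int = 200, seed: int = 1):
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        candidate = "".join(rng.choice(string.printable) for _ in range(8))
        try:
            target(candidate)
        except Exception:
            crashes.append(candidate)  # an input worth investigating
    return crashes
```

A real fuzzer also mutates known-good inputs and instruments the target, but the crash-harvesting loop is the core idea: crashes point to bugs that may or may not be exploitable.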
This avenue can be seen with the integration of the lorcon wireless (802.11) toolset into Metasploit 3.0 in November 2006. Metasploit 4.0 was released in August 2011. Metasploit Framework The basic steps for exploiting a system using the Framework include: Optionally checking whether the intended target system is vulnerable to an exploit. Choosing and configuring an exploit (code that enters a target system by taking advantage of one of its bugs; about 900 different exploits for Windows, Unix/Linux and macOS systems are included). Choosing and configuring a payload (code that will be executed on the target system upon successful entry; for instance, a remote shell or a VNC server). Metasploit often recommends a payload that should work. Choosing the encoding technique so that hexadecimal opcodes known as "bad characters" are removed from the payload, since these characters would cause the exploit to fail. Executing the exploit. This modular approach – allowing the combination of any exploit with any payload – is the major advantage of the Framework. It facilitates the tasks of attackers, exploit writers and payload writers. Metasploit runs on Unix (including Linux and macOS) and on Windows. The Metasploit Framework can be extended to use add-ons in multiple languages. To choose an exploit and payload, some information about the target system is needed, such as operating system version and installed network services. This information can be gleaned with port scanning and TCP/IP stack fingerprinting tools such as Nmap. Vulnerability scanners such as Nessus and OpenVAS can detect target system vulnerabilities. Metasploit can import vulnerability scanner data and compare the identified vulnerabilities to existing exploit modules for accurate exploitation. Metasploit interfaces There are several interfaces for Metasploit available. The most popular are maintained by Rapid7 and Strategic Cyber LLC. Metasploit Framework Edition The free version.
It contains a command line interface, third-party import, manual exploitation and manual brute forcing. This free version of the Metasploit project also includes Zenmap, a well-known security scanner, and a compiler for Ruby, the language in which this version of Metasploit was written. Metasploit Pro In October 2010, Rapid7 added Metasploit Pro, an open-core commercial Metasploit edition for penetration testers. Metasploit Pro adds onto Metasploit Express with features such as Quick Start Wizards/MetaModules, building and managing social engineering campaigns, web application testing, an advanced Pro Console, dynamic payloads for anti-virus evasion, integration with Nexpose for ad-hoc vulnerability scans, and VPN pivoting. Discontinued editions of Metasploit Metasploit Community Edition On July 18, 2019, Rapid7 announced the end-of-sale of Metasploit Community Edition. Existing users were able to continue using it until their license expired. The edition was released in October 2011, and included a free, web-based user interface for Metasploit. Metasploit Community Edition was based on the commercial functionality of the paid-for editions with a reduced set of features, including network discovery, module browsing and manual exploitation. Metasploit Community was included in the main installer. Metasploit Express Edition On June 4, 2019, Rapid7 discontinued Metasploit Express Edition. The edition was released in April 2010, and was an open-core commercial edition for security teams who need to verify vulnerabilities. It offered a graphical user interface, integrated nmap for discovery, and added smart brute-forcing as well as automated evidence collection. Armitage Armitage is a graphical cyber attack management tool for the Metasploit Project that visualizes targets and recommends exploits.
It is a free and open source network security tool notable for its contributions to red team collaboration, allowing for shared sessions, data, and communication through a single Metasploit instance. Armitage's latest release was in 2015. Cobalt Strike Cobalt Strike is a collection of threat emulation tools provided by HelpSystems to work with the Metasploit Framework. Cobalt Strike includes all features of Armitage and adds post-exploitation tools, in addition to report generation features. Exploits Metasploit currently has over 2074 exploits, organized under the following platforms: AIX, Android, BSD, BSDi, Cisco, Firefox, FreeBSD, HP-UX, Irix, Java, JavaScript, Linux, mainframe, multi (applicable to multiple platforms), NetBSD, NetWare, nodejs, OpenBSD, macOS, PHP, Python, R, Ruby, Solaris, Unix, and Windows. Payloads Metasploit currently has over 592 payloads. Some of them are: Command shell enables users to run collection scripts or run arbitrary commands against the host. Meterpreter (the Metasploit Interpreter) enables users to control the screen of a device using VNC and to browse, upload and download files. Dynamic payloads enable users to evade anti-virus defense by generating unique payloads. Static payloads enable static IP address/port forwarding for communication between the host and the client system. Auxiliary Modules The Metasploit Framework includes hundreds of auxiliary modules that can perform scanning, fuzzing, sniffing, and much more. There are three types of auxiliary modules, namely scanners, admin and server modules. Contributors Metasploit Framework operates as an open-source project and accepts contributions from the community through GitHub.com pull requests. Submissions are reviewed by a team consisting of both Rapid7 employees and senior external contributors. The majority of contributions add new modules, such as exploits or scanners. List of original developers: H. D.
Moore (founder and chief architect) Matt Miller (core developer from 2004–2008) Spoonm (core developer from 2003–2008) See also w3af OWASP Open Web Application Security Project References Further reading Powerful payloads: The evolution of exploit frameworks, searchsecurity.com, 2005-10-20 Chapter 12: Writing Exploits III from Sockets, Shellcode, Porting & Coding: Reverse Engineering Exploits and Tool Coding for Security Professionals by James C. Foster (). Written by Vincent Liu, chapter 12 explains how to use Metasploit to develop a buffer overflow exploit from scratch. External links Anti-forensic software Security testing tools Cryptographic attacks Free and open-source software organizations Cross-platform free software Free security software Free software programmed in Ruby Injection exploits Software testing Web security exploits Windows security software MacOS security software Unix network-related software Pentesting software toolkits Software using the BSD license
65270177
https://en.wikipedia.org/wiki/2001%20USC%20Trojans%20baseball%20team
2001 USC Trojans baseball team
The 2001 USC Trojans baseball team represented the University of Southern California in the 2001 NCAA Division I baseball season. The Trojans played their home games at Dedeaux Field. The team was coached by Mike Gillespie in his 15th year at USC. The Trojans won the Los Angeles Regional and the Los Angeles Super Regional to advance to the College World Series, where they were defeated by the Tennessee Volunteers. Roster Schedule ! style="" | Regular Season |- valign="top" |- align="center" bgcolor="#ccffcc" | 1 || January 31 || || Dedeaux Field • Los Angeles, California || 10–3 || 1–0 || 0–0 |- |- align="center" bgcolor="#ccffcc" | 2 || February 3 || || Dedeaux Field • Los Angeles, California || 19–4 || 2–0 || 0–0 |- align="center" bgcolor="#ccffcc" | 3 || February 4 || Louisville || Dedeaux Field • Los Angeles, California || 11–4 || 3–0 || 0–0 |- align="center" bgcolor="#ccffcc" | 4 || February 6 || || Dedeaux Field • Los Angeles, California || 6–5 || 4–0 || 0–0 |- align="center" bgcolor="#ccffcc" | 5 || February 9 || at || Blair Field • Long Beach, California || 6–2 || 5–0 || 0–0 |- align="center" bgcolor="#ccffcc" | 6 || February 10 || Long Beach State || Dedeaux Field • Los Angeles, California || 10–1 || 6–0 || 0–0 |- align="center" bgcolor="#ffcccc" | 7 || February 11 || at Long Beach State || Blair Field • Long Beach, California || 5–9 || 6–1 || 0–0 |- align="center" bgcolor="#ffcccc" | 8 || February 16 || at || Jackie Robinson Stadium • Los Angeles, California || 3–4 || 6–2 || 0–0 |- align="center" bgcolor="#ccffcc" | 9 || February 17 || at UCLA || Jackie Robinson Stadium • Los Angeles, California || 6–0 || 7–2 || 0–0 |- align="center" bgcolor="#ccffcc" | 10 || February 18 || at UCLA || Jackie Robinson Stadium • Los Angeles, California || 5–4 || 8–2 || 0–0 |- align="center" bgcolor="#ffcccc" | 11 || February 20 || || Dedeaux Field • Los Angeles, California || 1–9 || 8–3 || 0–0 |- align="center" bgcolor="#ffcccc" | 12 || February 21 || at ||
Eddy D. Field Stadium • Malibu, California || 3–5 || 8–4 || 0–0 |- align="center" bgcolor="#ccffcc" | 13 || February 23 || || Dedeaux Field • Los Angeles, California || 7–3 || 9–4 || 0–0 |- |- align="center" bgcolor="#ccffcc" | 14 || March 3 || at || Schroeder Park • Houston, Texas || 6–3 || 10–4 || 0–0 |- align="center" bgcolor="#ccffcc" | 15 || March 3 || at Houston || Schroeder Park • Houston, Texas || 6–3 || 11–4 || 0–0 |- align="center" bgcolor="#ccffcc" | 16 || March 4 || at Houston || Schroeder Park • Houston, Texas || 4–3 || 12–4 || 0–0 |- align="center" bgcolor="#ffcccc" | 17 || March 6 || || Dedeaux Field • Los Angeles, California || 4–6 || 12–5 || 0–0 |- align="center" bgcolor="#ffcccc" | 18 || March 9 || at Stanford || Sunken Diamond • Stanford, California || 0–2 || 12–6 || 0–0 |- align="center" bgcolor="#ffcccc" | 19 || March 10 || at Stanford || Sunken Diamond • Stanford, California || 3–15 || 12–7 || 0–0 |- align="center" bgcolor="#ffcccc" | 20 || March 11 || at Stanford || Sunken Diamond • Stanford, California || 5–9 || 12–8 || 0–0 |- align="center" bgcolor="#ccffcc" | 21 || March 13 || at || Caesar Uyesaka Stadium • Santa Barbara, California || 12–5 || 13–8 || 0–0 |- align="center" bgcolor="#ccffcc" | 22 || March 14 || || Dedeaux Field • Los Angeles, California || 4–10 || 14–8 || 0–0 |- align="center" bgcolor="#ccffcc" | 23 || March 17 || at || Husky Ballpark • Seattle, Washington || 5–1 || 15–8 || 1–0 |- align="center" bgcolor="#ffcccc" | 24 || March 19 || at Washington || Husky Ballpark • Seattle, Washington || 6–7 || 15–9 || 1–1 |- align="center" bgcolor="#ffcccc" | 25 || March 19 || at Washington || Husky Ballpark • Seattle, Washington || 3–7 || 15–10 || 1–2 |- align="center" bgcolor="#ccffcc" | 26 || March 21 || Pepperdine || Dedeaux Field • Los Angeles, California || 9–3 || 16–10 || 1–2 |- align="center" bgcolor="#ccffcc" | 27 || March 23 || at || Jerry Kindall Field at Frank Sancet Stadium • Tucson, Arizona || 8–0 || 17–10 || 2–2 |- 
align="center" bgcolor="#ccffcc"
| 28 || March 24 || at Arizona || Jerry Kindall Field at Frank Sancet Stadium • Tucson, Arizona || 6–4 || 18–10 || 3–2
|- align="center" bgcolor="#ccffcc"
| 29 || March 25 || at Arizona || Jerry Kindall Field at Frank Sancet Stadium • Tucson, Arizona || 8–7 || 19–10 || 4–2
|- align="center" bgcolor="#ffcccc"
| 30 || March 27 || at Cal State Fullerton || Titan Field • Fullerton, California || 11–12 || 19–11 || 4–2
|- align="center" bgcolor="#ccffcc"
| 31 || March 28 || || Dedeaux Field • Los Angeles, California || 16–7 || 20–11 || 4–2
|-
|- align="center" bgcolor="#ccffcc"
| 32 || April 3 || at San Diego State || Tony Gwynn Stadium • San Diego, California || 2–1 || 21–11 || 4–2
|- align="center" bgcolor="#ccffcc"
| 33 || April 6 || Arizona State || Dedeaux Field • Los Angeles, California || 11–2 || 22–11 || 5–2
|- align="center" bgcolor="#ccffcc"
| 34 || April 7 || Arizona State || Dedeaux Field • Los Angeles, California || 5–1 || 23–11 || 6–2
|- align="center" bgcolor="#ffcccc"
| 35 || April 8 || Arizona State || Dedeaux Field • Los Angeles, California || 1–11 || 23–12 || 6–3
|- align="center" bgcolor="#ccffcc"
| 36 || April 10 || UC Santa Barbara || Dedeaux Field • Los Angeles, California || 6–4 || 24–12 || 6–3
|- align="center" bgcolor="#ccffcc"
| 37 || April 14 || at California || Evans Diamond • Berkeley, California || 1–0 || 25–12 || 7–3
|- align="center" bgcolor="#ccffcc"
| 38 || April 15 || at California || Evans Diamond • Berkeley, California || 5–3 || 26–12 || 8–3
|- align="center" bgcolor="#ffcccc"
| 39 || April 16 || at California || Evans Diamond • Berkeley, California || 4–5 || 26–13 || 8–4
|- align="center" bgcolor="#ccffcc"
| 40 || April 17 || at UC Riverside || Riverside Sports Complex • Riverside, California || 14–2 || 27–13 || 8–4
|- align="center" bgcolor="#ccffcc"
| 41 || April 20 || Stanford || Dedeaux Field • Los Angeles, California || 2–1 || 28–13 || 9–4
|- align="center" bgcolor="#ccffcc"
| 42 || April 21 || Stanford || Dedeaux Field • Los Angeles, California || 7–0 || 29–13 || 10–4
|- align="center" bgcolor="#ffcccc"
| 43 || April 22 || Stanford || Dedeaux Field • Los Angeles, California || 5–9 || 29–14 || 10–5
|- align="center" bgcolor="#ffcccc"
| 44 || April 23 || || Dedeaux Field • Los Angeles, California || 8–10 || 29–15 || 10–5
|- align="center" bgcolor="#ffcccc"
| 45 || April 24 || at Loyola Marymount || George C. Page Stadium • Los Angeles, California || 7–13 || 29–16 || 10–5
|- align="center" bgcolor="#ccffcc"
| 46 || April 27 || UCLA || Dedeaux Field • Los Angeles, California || 2–0 || 30–16 || 11–5
|- align="center" bgcolor="#ccffcc"
| 47 || April 28 || UCLA || Dedeaux Field • Los Angeles, California || 7–6 || 31–16 || 12–5
|- align="center" bgcolor="#ccffcc"
| 48 || April 29 || UCLA || Dedeaux Field • Los Angeles, California || 7–1 || 32–16 || 13–5
|- align="center" bgcolor="#ccffcc"
| 49 || April 30 || at San Diego || Fowler Park • San Diego, California || 5–4 || 33–16 || 13–5
|-
|- align="center" bgcolor="#ccffcc"
| 50 || May 12 || Washington State || Dedeaux Field • Los Angeles, California || 7–6 || 34–16 || 14–5
|- align="center" bgcolor="#ccffcc"
| 51 || May 13 || Washington State || Dedeaux Field • Los Angeles, California || 7–0 || 35–16 || 15–5
|- align="center" bgcolor="#ccffcc"
| 52 || May 14 || Washington State || Dedeaux Field • Los Angeles, California || 5–1 || 36–16 || 16–5
|- align="center" bgcolor="#ccffcc"
| 53 || May 15 || Long Beach State || Dedeaux Field • Los Angeles, California || 10–2 || 37–16 || 16–5
|- align="center" bgcolor="#ccffcc"
| 54 || May 18 || at Oregon State || Goss Stadium at Coleman Field • Corvallis, Oregon || 7–3 || 38–16 || 17–5
|- align="center" bgcolor="#ffcccc"
| 55 || May 19 || at Oregon State || Goss Stadium at Coleman Field • Corvallis, Oregon || 0–6 || 38–17 || 17–6
|- align="center" bgcolor="#ccffcc"
| 56 || May 20 || at Oregon State || Goss Stadium at Coleman Field • Corvallis, Oregon || 1–0 || 39–17 || 18–6
|-
|-
! style="" | Postseason
|- valign="top"
|- align="center" bgcolor="#ccffcc"
| 57 || May 25 || || Dedeaux Field • Los Angeles, California || 12–4 || 40–17 || 18–6
|- align="center" bgcolor="#ccffcc"
| 58 || May 26 || Pepperdine || Dedeaux Field • Los Angeles, California || 4–3 || 41–17 || 18–6
|- align="center" bgcolor="#ccffcc"
| 59 || May 27 || || Dedeaux Field • Los Angeles, California || 8–0 || 42–17 || 18–6
|-
|- align="center" bgcolor="#ccffcc"
| 60 || June 1 || FIU || Dedeaux Field • Los Angeles, California || 5–1 || 43–17 || 18–6
|- align="center" bgcolor="#ccffcc"
| 61 || June 2 || FIU || Dedeaux Field • Los Angeles, California || 6–0 || 44–17 || 18–6
|-
|- align="center" bgcolor="#ccffcc"
| 62 || June 9 || vs || Johnny Rosenblatt Stadium • Omaha, Nebraska || 11–5 || 45–17 || 18–6
|- align="center" bgcolor="#ffcccc"
| 63 || June 11 || vs Miami (FL) || Johnny Rosenblatt Stadium • Omaha, Nebraska || 3–4 || 45–18 || 18–6
|- align="center" bgcolor="#ffcccc"
| 64 || June 12 || vs || Johnny Rosenblatt Stadium • Omaha, Nebraska || 2–10 || 45–19 || 18–6
|}

Awards and honors

Brian Barre: First Team All-Pac-10
Alberto Concepcion: Honorable Mention All-Pac-10
Rik Currier: Second Team All-American (The Sports Network); Third Team All-American (Baseball America, Collegiate Baseball); First Team All-Pac-10
Anthony Lunetta: Honorable Mention All-Pac-10
Michael Moon: Honorable Mention All-Pac-10
Mark Prior: First Team All-American (American Baseball Coaches Association, Baseball America, Collegiate Baseball, National Collegiate Baseball Writers Association, The Sports Network, USA Today Sports Weekly); Pac-10 Conference Pitcher of the Year; First Team All-Pac-10
Bill Peavey: Honorable Mention All-Pac-10
Josh Persell: Honorable Mention All-Pac-10

References

USC Trojans baseball seasons
USC Trojans baseball College World Series seasons
USC
USC
Pac-12 Conference baseball champion seasons
12878216
https://en.wikipedia.org/wiki/Criticism%20of%20Facebook
Criticism of Facebook
The criticism of Facebook or Meta Platforms has led to international media coverage and significant reporting of its legal troubles and the outsize influence it has on the lives and health of its users and employees, as well as on its influence on the way media, specifically news, is reported and distributed. Notable issues include Internet privacy, such as the widespread "like" buttons on third-party websites that track users, possible indefinite records of user information, automatic facial recognition software, and its role in the workplace, including employer-employee account disclosure. The use of Facebook can have negative psychological effects that include feelings of romantic jealousy and stress, a lack of attention, and social media addiction that in some cases is comparable to drug addiction. Facebook's operations have also received coverage. The company's electricity usage, tax avoidance, real-name user requirement policies, censorship policies, handling of user data, and its involvement in the United States PRISM surveillance program have been highlighted by the media and by critics. Facebook has come under scrutiny for 'ignoring' or shirking its responsibility for the content posted on its platform, including copyright and intellectual property infringement, hate speech, incitement of rape and terrorism, fake news, Facebook murder, crimes, and violent incidents live-streamed through its Facebook Live functionality. The company and its employees have also been subject to litigation cases over the years, with its most prominent case concerning allegations that CEO Mark Zuckerberg broke an oral contract with Cameron Winklevoss, Tyler Winklevoss, and Divya Narendra to build the then-named "HarvardConnection" social network in 2004, instead allegedly opting to steal the idea and code to launch Facebook months before HarvardConnection began.
The original lawsuit was eventually settled in 2009, with Facebook paying approximately $20 million in cash and 1.25 million shares. A new lawsuit in 2011 was dismissed. Some critics point to problems which they say will result in the demise of Facebook. Facebook has been banned by several governments, including those of Syria, China, and Iran, for various reasons.

Censorship

Privacy issues

Facebook has faced a number of privacy concerns; for instance, in August 2019, it was revealed that the company had enlisted contractors to generate transcripts of users' audio chats. The contractors were tasked with re-transcribing the conversations in order to gauge the accuracy of the automatic transcription tool. In part these concerns stem from the company's revenue model, which involves selling information about its users, and the loss of privacy this could entail. In addition, employers and other organizations and individuals have been known to use Facebook data for their own purposes. As a result, people's identities have sometimes been revealed without their permission. In response, pressure groups and governments have increasingly asserted the users' right to privacy and to control their personal data.

Psychological/sociological effects

Psychiatrist Randolph M. Nesse, who with evolutionary biologist George C. Williams helped develop evolutionary medicine and noted that most chronic medical conditions are the consequence of evolutionary mismatches between the stateless environment of nomadic hunter-gatherer life in bands and contemporary human life in sedentary, technologically modern state societies (e.g. WEIRD societies), has argued that evolutionary mismatch is an important factor in the development of certain mental disorders. In 1948, 50 percent of U.S. households owned at least one automobile. In 2000, a majority of U.S. households had at least one personal computer, and a majority had internet access the following year. In 2002, a majority of U.S.
survey respondents reported having a mobile phone. In September 2007, a majority of U.S. survey respondents reported having broadband internet at home. In January 2013, a majority of U.S. survey respondents reported owning a smartphone.

Facebook addiction

The "World Unplugged" study, which was conducted in 2011, claims that for some users quitting social networking sites is comparable to quitting smoking or giving up alcohol. Another study, conducted in 2012 by researchers from the University of Chicago Booth School of Business in the United States, found that social networking sites can be even more addictive than drugs like alcohol and tobacco. A 2013 study in the journal CyberPsychology, Behavior, and Social Networking found that some users decided to quit social networking sites because they felt they were addicted. In 2014, the site went down for about 30 minutes, prompting several users to call emergency services. In April 2015, the Pew Research Center published a survey of 1,060 U.S. teenagers ages 13 to 17 which found that nearly three-quarters of them either owned or had access to a smartphone and that 92 percent went online daily, with 24 percent saying they went online "almost constantly". In March 2016, Frontiers in Psychology published a survey of 457 post-secondary student Facebook users (following a face validity pilot of another 47 post-secondary student Facebook users) at a large university in North America. It showed that the severity of ADHD symptoms had a statistically significant positive correlation with Facebook usage while driving a motor vehicle, and that impulses to use Facebook while driving were more potent among male users than female users.
In June 2018, Children and Youth Services Review published a regression analysis of 283 adolescent Facebook users in the Piedmont and Lombardy regions of Northern Italy that replicated previous findings among adult users. It showed that adolescents reporting higher ADHD symptoms positively predicted Facebook addiction, persistent negative attitudes about the past, the belief that the future is predetermined and not influenced by present actions, and an orientation against achieving future goals, with ADHD symptoms additionally increasing the manifestation of the proposed category of psychological dependence known as "problematic social media use".

Self-harm and suicide

In January 2019, both the Health Secretary of the United Kingdom and the Children's Commissioner for England urged Facebook and other social media companies to take responsibility for the risk to children posed by content on their platforms related to self-harm and suicide.

Envy

Facebook has been criticized for making people envious and unhappy due to the constant exposure to positive yet unrepresentative highlights of their peers. Such highlights include, but are not limited to, journal posts, videos, and photos that depict or reference such positive or otherwise outstanding activities, experiences, and facts. This effect is caused mainly by the fact that most Facebook users display only the positive aspects of their lives while excluding the negative, though it is also strongly connected to inequality and the disparities between social groups, as Facebook is open to users from all classes of society. Sites such as AddictionInfo.org state that this kind of envy has profound effects on other aspects of life and can lead to severe depression, self-loathing, rage and hatred, resentment, feelings of inferiority and insecurity, pessimism, suicidal tendencies and desires, social isolation, and other issues that can prove very serious.
This condition has often been called "Facebook Envy" or "Facebook Depression" by the media. In The Theory of the Leisure Class (1899), economist Thorstein Veblen observed that "Conspicuous consumption of valuable goods is a means of reputability to the gentleman of leisure", and that conspicuous leisure is the "non-productive consumption of time. Time is consumed non-productively (1) from a sense of the unworthiness of productive work, and (2) as an evidence of pecuniary ability to afford a life of idleness. But the whole of the life of the gentleman of leisure is not spent before the eyes of the spectators who are to be impressed with that spectacle of honorific leisure which in the ideal scheme makes up his life. For some part of the time his life is perforce withdrawn from the public eye, and of this portion which is spent in private the gentleman of leisure should, for the sake of his good name, be able to give a convincing account." In 2010, Social Science Computer Review published research by economists Ralf Caers and Vanessa Castelyns, who sent an online questionnaire to 398 LinkedIn users and 353 Facebook users in Belgium and found that both sites had become tools for recruiting job applicants for professional occupations as well as sources of additional information about applicants, and that they were being used by recruiters to decide which applicants would receive interviews. In 2017, sociologist Ofer Sharone conducted interviews with unemployed workers to research the effects of LinkedIn and Facebook as labor market intermediaries and found that social networking services (SNS) have had a filtration effect that has little to do with evaluations of merit, and that the SNS filtration effect has exerted new pressures on workers to manage their careers to conform to the logic of the SNS filtration effect. In July 2019, sociologists Steve McDonald, Amanda K.
Damarin, Jenelle Lawhorne, and Annika Wilcox performed qualitative interviews with 61 HR recruiters in two metropolitan areas in the Southern United States and found that recruiters filling low- and general-skilled positions typically posted advertisements on online job boards while recruiters filling high-skilled or supervisor positions targeted passive candidates on LinkedIn (i.e. employed workers not actively seeking work but possibly willing to change positions), and concluded that this is resulting in a bifurcated winner-takes-all job market, with recruiters focusing their efforts on poaching already employed high-skilled workers while active job seekers are relegated to hyper-competitive online job boards. A joint study conducted by two German universities documented Facebook envy and found that as many as one out of three people actually felt worse and less satisfied with their lives after visiting the site. Vacation photos were found to be the most common source of feelings of resentment and jealousy. After that, social interaction was the second biggest cause of envy, as Facebook users compare the number of birthday greetings, likes, and comments to those of their friends. Visitors who contributed the least tended to feel the worst. "According to our findings, passive following triggers invidious emotions, with users mainly envying happiness of others, the way others spend their vacations; and socialize", the study states. A 2013 study by researchers at the University of Michigan found that the more people used Facebook, the worse they felt afterwards. Narcissistic users who display excessive grandiosity evoke negative emotions in viewers and cause envy, which in turn may lead to viewers' loneliness. Viewers sometimes terminate relationships with such users to avoid these negative emotions. However, this avoidance, such as terminating relationships, acts as reinforcement and may itself lead to further loneliness.
The cyclical pattern is a vicious circle of loneliness and avoidance coping, the study states.

Divorce

Social networks like Facebook can have a detrimental effect on marriages, with users becoming worried about their spouse's contacts and relations with other people online, leading to marital breakdown and divorce. According to a 2009 survey in the UK, around 20 percent of divorce petitions included references to Facebook. Facebook provides a new platform for interpersonal communication, and researchers have proposed that high levels of Facebook use could result in Facebook-related conflict and breakup/divorce. Previous studies have shown that romantic relationships can be damaged by excessive Internet use, Facebook jealousy, partner surveillance, ambiguous information, and online portrayal of intimate relationships. Excessive Internet users reported having greater conflict in their relationships; their partners felt neglected, and there was lower commitment and lower feelings of passion and intimacy in the relationship. Researchers suspect that Facebook may contribute to an increase in divorce and infidelity rates in the near future due to the ease with which users can connect with past partners.

Stress

Research performed by psychologists from Edinburgh Napier University indicated that Facebook adds stress to users' lives. Causes of stress included fear of missing important social information, fear of offending contacts, discomfort or guilt from rejecting user requests or deleting unwanted contacts or being unfriended or blocked by Facebook friends or other users, the displeasure of having friend requests rejected or ignored, the pressure to be entertaining, criticism or intimidation from other Facebook users, and having to use appropriate etiquette for different types of friends. Many people who started using Facebook for positive purposes or with positive expectations have found that the website has negatively impacted their lives.
In addition, the increasing number of messages and social relationships embedded in SNS increases the amount of social information demanding a reaction from SNS users. Consequently, SNS users perceive that they are giving too much social support to other SNS friends. This dark side of SNS usage is called "social overload". It is caused by the extent of usage, the number of friends, subjective social support norms, and the type of relationship (online-only vs. offline friends), while age has only an indirect effect. The psychological and behavioral consequences of social overload include perceptions of SNS exhaustion, low user satisfaction, and high intentions to reduce or stop using SNS.

Narcissism

In July 2018, a meta-analysis published in Psychology of Popular Media found that grandiose narcissism positively correlated with time spent on social media, frequency of status updates, number of friends or followers, and frequency of posting self-portrait digital photographs, while a meta-analysis published in the Journal of Personality in April 2018 found that the positive correlation between grandiose narcissism and social networking service usage was replicated across platforms (including Facebook). In March 2020, the Journal of Adult Development published a regression discontinuity analysis of 254 Millennial Facebook users investigating differences in narcissism and Facebook usage between the age cohorts born from 1977 to 1990 and from 1991 to 2000 and found that the later-born Millennials scored significantly higher on both. In June 2020, Addictive Behaviors published a systematic review finding a consistent, positive, and significant correlation between grandiose narcissism and the proposed category of psychological dependence called "problematic social media use".
Also in 2018, social psychologist Jonathan Haidt and FIRE President Greg Lukianoff noted in The Coddling of the American Mind that former Facebook president Sean Parker stated in a 2017 interview that the Like button was consciously designed to prime users receiving likes to feel a dopamine rush as part of a "social-validation feedback loop". "Conspicuous compassion" is the practice of publicly donating large sums of money to charity to enhance the social prestige of the donor, and is sometimes described as a type of conspicuous consumption. Jonathan Haidt and Greg Lukianoff argued that microaggression training on college campuses in the United States has led to a call-out culture and a climate of self-censorship due to fear of shaming by virtue signalling social media mobs with users who are often anonymous and tend to deindividuate as a consequence. Citing February 2017 Pew Research Center survey data showing that critical Facebook postings expressing "indignant disagreement" were twice as likely to receive likes, comments, or shares (along with a similar finding for Twitter posts published in PNAS USA in July 2017), Haidt and Tobias Rose-Stockwell cite the phrase "moral grandstanding" to describe how having an audience on social media forums converts much of its interpersonal communication into a public performance. Following the murder of George Floyd in May 2020 and the subsequent protests in his name, Civiqs and YouGov/Economist polls showed that while net support for Black Lives Matter among White Americans increased from –4 points to +10 points in early June 2020 (with 43 percent in support) it fell to –6 points by early August 2020, and by April 2021, further Civiqs polls showed that support for Black Lives Matter among White Americans had reverted to roughly its level of support prior to George Floyd's murder (37 percent in favor and 49 percent opposed). In a February 2021 interview on Firing Line, journalist Charles M. 
Blow criticized a minority of young white protestors in the George Floyd protests in the United States whom he argued were using the protests for their own personal growth to substitute for social rites of passage (e.g. prom) and summertime social gatherings (e.g. attending movie theaters or concerts) that were precluded by COVID-19 lockdowns and social distancing measures, noting that as lockdowns began to be relaxed and removed, support for Black Lives Matter among whites began to decline. In February 2021, Psychological Medicine published a survey reviewing 14,785 publicly reported murders in English language news worldwide between 1900 and 2019 compiled in a database by psychiatrists at the New York State Psychiatric Institute and the Columbia University Irving Medical Center that found that of the 1,315 personal-cause mass murders (i.e. driven by personal motivations and not occurring within the context of war, state-sponsored or group-sponsored terrorism, gang activity, or organized crime) only 11 percent of mass murderers and only 8 percent of mass shooters had a "serious mental illness" (e.g. schizophrenia, bipolar disorder, major depressive disorder), that mass shootings have become more common than other forms of mass murder since 1970 (with 73 percent occurring in the United States alone), and that mass shooters in the United States were more likely to have legal histories, to engage in recreational drug use or alcohol abuse, and to display non-psychotic psychiatric or neurologic symptoms. Survey coauthor psychiatrist Paul S. Appelbaum argued that the data from the survey indicated that "difficulty coping with life events seem more useful foci for prevention [of mass shootings] and policy than an emphasis on serious mental illness", while psychiatrist Ronald W. 
Pies has suggested that psychopathology should be understood as a three-gradation continuum of mental, behavioral and emotional disturbance, with most mass shooters falling into a middle category of "persistent emotional disturbance". In 2015, psychiatrists James L. Knoll and George D. Annas noted that the tendency of media attention following mass shootings to focus on mental health leads to sociocultural factors being comparatively overlooked. Instead, Knoll and Annas cite research by social psychologists Jean Twenge and W. Keith Campbell on narcissism and social rejection in the personal histories of mass shooters, as well as cognitive scientist Steven Pinker's suggestion in The Better Angels of Our Nature (2011) that further reductions in human violence may be dependent upon reducing human narcissism.

Non-informing, knowledge-eroding medium

Facebook is a Big Tech company with over 2.7 billion monthly active users as of the second quarter of 2020 and therefore has a meaningful impact on the masses that use it. Big data algorithms are used in personalized content creation and automatization; however, this method can also be used to manipulate users in various ways. The problem of misinformation is exacerbated by the educational bubble, users' critical thinking ability, and news culture. According to a 2015 study, 62.5% of Facebook users are unaware of any curation of their News Feed. Furthermore, scientists have started to investigate algorithms with unexpected outcomes that may lead to antisocial political, economic, geographic, racial, or other discrimination. Facebook has provided little transparency into the inner workings of the algorithms used for News Feed curation. The algorithms use past activity as a reference point for predicting users' tastes in order to keep them engaged. However, this leads to the formation of a filter bubble that increasingly shields users from diverse information.
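The engagement-driven feedback loop described here can be illustrated with a toy simulation. This is a deliberately simplified sketch: the topic names, the click model, and the ranking rule are all hypothetical and are not Facebook's actual algorithm, which is unpublished.

```python
# Toy model of engagement-based ranking narrowing a feed over time.
# All names and the ranking rule are hypothetical illustrations.
import random

random.seed(0)
TOPICS = ["politics", "sports", "science", "music"]

def rank_feed(history, candidates):
    """Order candidate items by how often their topic was engaged with before."""
    counts = {t: history.count(t) for t in TOPICS}
    return sorted(candidates, key=lambda t: counts[t], reverse=True)

history = ["politics"]  # a single initial click seeds the loop
for _ in range(20):
    candidates = random.sample(TOPICS, len(TOPICS))  # fresh items, one per topic
    top_item = rank_feed(history, candidates)[0]     # user sees and clicks the top item
    history.append(top_item)

# After 20 rounds the feed has converged on the initial interest:
print(history.count("politics"), "of", len(history))  # 21 of 21
```

Because the only ranking signal here is past engagement, one early click dominates every later recommendation; real systems blend many signals, but this is the narrowing dynamic the filter-bubble critique describes.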
Users are left with a skewed worldview derived from their own preferences and biases. In 2015, researchers from Facebook published a study indicating that the Facebook algorithm perpetuates an echo chamber amongst users by occasionally hiding content from individual feeds that users potentially would disagree with: for example, the algorithm removed one in every 13 items of diverse content from news sources for self-identified liberals. In general, the results from the study indicated that the Facebook algorithm ranking system caused approximately 15% less diverse material in users' content feeds, and a 70% reduction in the click-through-rate of the diverse material. In 2018, social psychologist Jonathan Haidt and FIRE President Greg Lukianoff argued in The Coddling of the American Mind that the filter bubbles created by the News Feed algorithm of Facebook and other platforms are one of the principal factors amplifying political polarization in the United States since 2000 (when a majority of U.S. households first had at least one personal computer and then internet access the following year). In his Reflections on the Revolution in France (1790), philosopher Edmund Burke observed: "We are afraid to put men to live and trade each on his own private stock of reason; because we suspect that this stock in each man is small, and that the individuals would do better to avail themselves of the general bank and capital of nations and of ages." In The Signal and the Noise (2012), statistician Nate Silver noted that IBM had estimated that the world was generating 2.5 quintillion bytes of data each day (more than 90 percent of which was created in the previous two years), and that the increase in data was analogous to increases in book production as a consequence of the invention of the printing press in 1440 by Johannes Gutenberg, as well as the effect of the increase in book production in causing the Reformation, the Counter-Reformation, and the European wars of religion.
Citing Burke, Jonathan Haidt and Tobias Rose-Stockwell suggested in The Atlantic in December 2019 that because most of the information Generation Z receives through regular social media usage was created within the past month (e.g. cat videos, tabloid gossip about celebrities, sensationalistic hot takes on news items) rather than in decades or centuries past, members of Generation Z are less familiar with the accumulated knowledge and wisdom of humanity (e.g. great ideas, great books, history) than generations past, and as a consequence are more prone to embrace misguided ideas that bring them greater esteem and prestige within their immediate social network. They note the declining faith in democracy among Generation Z across the ideological spectrum in polling data, alongside a renewed interest in socialism, communism, and Nazism that is reflective of ignorance of the history of the 20th century. Facebook has, at least in the political field, a counter-effect on being informed: in two studies from the US with a total of more than 2,000 participants, the influence of social media on general knowledge of political issues was examined in the context of two US presidential elections. The results showed that the frequency of Facebook use was moderately negatively related to general political knowledge. This was also the case when considering demographic and political-ideological variables and previous political knowledge. The latter suggests a causal relationship: the higher the Facebook use, the more general political knowledge declines. In 2019, Jonathan Haidt argued that there is a "very good chance American democracy will fail, that in the next 30 years we will have a catastrophic failure of our democracy." Following the 2021 United States Capitol attack, in February 2021, Facebook announced that it would reduce the amount of political content in users' News Feeds.
Other psychological effects

Many students have admitted that they have experienced bullying on the site, which leads to psychological harm. High school students face the possibility of bullying and other adverse behaviors over Facebook every day. Many studies have attempted to discover whether Facebook has a positive or negative effect on children's and teenagers' social lives, and many of them have come to the conclusion that there are distinct social problems that arise with Facebook usage. British neuroscientist Susan Greenfield drew attention to the issues that children encounter on social media sites. She said that they can rewire the brain, which caused some hysteria over whether or not social networking sites are safe. She did not back up her claims with research, but did cause quite a few studies to be done on the subject. When the self a user presents online is then broken down by others through badmouthing, criticism, harassment, criminalization or vilification, intimidation, demonization, demoralization, belittlement, or attacks over the site, it can cause much of the envy, anger, or depression described above. Sherry Turkle, in her book Alone Together: Why We Expect More from Technology and Less from Each Other, argues that social media brings people closer and further apart at the same time. One of the main points she makes is that there is a high risk in treating persons online with dispatch, as if they were objects. Although people are networked on Facebook, their expectations of each other tend to be lessened. According to Turkle, this could cause a feeling of loneliness in spite of being together. Between 2016 and 2018, the number of 12- to 15-year-olds who reported being bullied over social media rose from 6% to 11%, according to Ofcom, the UK communications regulator.

User influence experiments

Academic and Facebook researchers have collaborated to test if the messages people see on Facebook can influence their behavior.
For instance, in "A 61-Million-Person Experiment in Social Influence And Political Mobilization", during the 2010 elections, Facebook users were given the opportunity to "tell your friends you voted" by clicking on an "I voted" button. Users were 2% more likely to click the button if it was associated with friends who had already voted. Much more controversially, a 2014 study of "Emotional Contagion Through Social Networks" manipulated the balance of positive and negative messages seen by 689,000 Facebook users. The researchers concluded that they had found "some of the first experimental evidence to support the controversial claims that emotions can spread throughout a network, [though] the effect sizes from the manipulations are small." Unlike the "I voted" study, which had presumptively beneficial ends and raised few concerns, this study was criticized for both its ethics and methods/claims. As controversy about the study grew, Adam Kramer, a lead author of both studies and member of the Facebook data team, defended the work in a Facebook update. A few days later, Sheryl Sandberg, Facebook's COO, made a statement while traveling abroad. While at an Indian Chambers of Commerce event in New Delhi, she stated that "This was part of ongoing research companies do to test different products, and that was what it was. It was poorly communicated and for that communication we apologize. We never meant to upset you." Shortly thereafter, on July 3, 2014, USA Today reported that the privacy watchdog group Electronic Privacy Information Center (EPIC) had filed a formal complaint with the Federal Trade Commission claiming that Facebook had broken the law when it conducted the study on the emotions of its users without their knowledge or consent.
In its complaint, EPIC alleged that Facebook had deceived users by secretly conducting a psychological experiment on their emotions: "At the time of the experiment, Facebook did not state in the Data Use Policy that user data would be used for research purposes. Facebook also failed to inform users that their personal information would be shared with researchers." Beyond the ethical concerns, other scholars criticized the methods and reporting of the study's findings. John Grohol, writing for Psych Central, argued that despite its title and claims of "emotional contagion", the study did not look at emotions at all. Instead, its authors used an application (called "Linguistic Inquiry and Word Count", or LIWC 2007) that simply counted positive and negative words to infer users' sentiments. He wrote that a shortcoming of the LIWC tool is that it does not understand negations. Hence, the tweet "I am not happy" would be scored as positive: "Since the LIWC 2007 ignores these subtle realities of informal human communication, so do the researchers." Grohol concluded that, given these subtleties, the effect size of the findings is little more than a "statistical blip": "Kramer et al. (2014) found a 0.07%—that's not 7 percent, that's 1/15th of one percent!!—decrease in negative words in people's status updates when the number of negative posts on their Facebook news feed decreased. Do you know how many words you'd have to read or write before you've written one less negative word due to this effect? Probably thousands." The consequences of the controversy are still pending (be it FTC or court proceedings), but it did prompt an "Editorial Expression of Concern" from the study's publisher, the Proceedings of the National Academy of Sciences, as well as a blog posting from OkCupid titled "We experiment on human beings!"
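Grohol's point about negation can be illustrated with a minimal sketch of lexicon-based word counting. The word lists and function below are hypothetical stand-ins, not the actual LIWC 2007 dictionary; they only show why a pure word-count approach misreads negated phrases.

```python
# Toy lexicon-based scorer (hypothetical word lists, not the real
# LIWC 2007 dictionary) illustrating the negation problem Grohol raised.
POSITIVE_WORDS = {"happy", "great", "love"}
NEGATIVE_WORDS = {"sad", "angry", "hate"}

def naive_sentiment(text: str) -> int:
    """Count positive minus negative words, ignoring negation entirely."""
    words = text.lower().split()
    positive = sum(word in POSITIVE_WORDS for word in words)
    negative = sum(word in NEGATIVE_WORDS for word in words)
    return positive - negative

# "not happy" still contains the positive word "happy", so the
# clearly negative sentence scores as positive:
print(naive_sentiment("I am not happy"))  # 1 (scored positive)
print(naive_sentiment("I am not sad"))    # -1 (scored negative)
```

A system that handled negation scope would invert these scores; that is precisely the "subtle reality of informal human communication" Grohol argued the study's tooling ignored.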
In September 2014, law professor James Grimmelmann argued that the actions of both companies were "illegal, immoral, and mood-altering" and filed notices with the Maryland Attorney General and the Cornell Institutional Review Board. In the UK, the study was also criticized by the British Psychological Society, which said in a letter to The Guardian, "There has undoubtedly been some degree of harm caused, with many individuals affected by increased levels of negative emotion, with consequent potential economic costs, increase in possible mental health problems and burden on health services. The so-called 'positive' manipulation is also potentially harmful."
Tax avoidance
Facebook uses a complicated series of shell companies in tax havens to avoid paying billions of dollars in corporate tax. According to The Express Tribune, Facebook is among the corporations that "avoided billions of dollars in tax using offshore companies." For example, Facebook routes billions of dollars in profits through the Double Irish and Dutch Sandwich tax avoidance schemes to bank accounts in the Cayman Islands. The Dutch newspaper NRC Handelsblad concluded from the Paradise Papers, published in late 2017, that Facebook pays "practically no taxes" worldwide. For example, Facebook paid: in 2011, £2.9m of tax on £840m of profits in the UK; in 2012 and 2013, no tax in the UK; in 2014, £4,327 of tax on hundreds of millions of pounds in UK revenues, which were transferred to tax havens. According to Paul Tang, an economist and member of the PvdA delegation within the Progressive Alliance of Socialists & Democrats (S&D) in the European Parliament, between 2013 and 2015 the EU lost an estimated €1,453m – €2,415m to Facebook. Compared with countries outside the EU, the EU taxes Facebook at a rate of only 0.03% to 0.1% of its revenue (around 6% of its EBT), whereas this rate is near 28% in countries outside the EU.
Even had a rate between 2% and 5% been applied during this period, as suggested by the ECOFIN Council, avoidance at this rate by Facebook would have meant a loss to the EU of between €327m and €817m. On July 6, 2016, the U.S. Department of Justice filed a petition in the U.S. District Court in San Francisco, asking for a court order to enforce an administrative summons issued to Facebook, Inc., under Internal Revenue Code section 7602, in connection with an Internal Revenue Service examination of Facebook's 2010 U.S. federal income tax return. In November 2017, the Irish Independent recorded that for the 2016 financial year, Facebook had paid €30 million of Irish corporation tax on €12.6 billion of revenues routed through Ireland, giving an Irish effective tax rate of under 1%. The €12.6 billion of 2016 Facebook revenues routed through Ireland was almost half of Facebook's global revenues. In April 2018, Reuters wrote that all of Facebook's non-U.S. accounts were legally housed in Ireland for tax purposes, but were being moved due to the May 2018 EU GDPR regulations. In November 2018, The Irish Times reported that Facebook routed over €18.7 billion of revenues through Ireland (almost half of all global revenues), on which it paid €38 million of Irish corporation tax.
Treatment of employees and contractors
Moderators
Facebook hires some employees through contractors, including Accenture, Arvato, Cognizant, CPL Resources, and Genpact, to serve as content moderators, reviewing potentially problematic content posted to both Facebook and Instagram. Many of these contractors face unrealistic expectations, harsh working conditions, and constant exposure to disturbing content, including graphic violence, animal abuse, and child pornography. Contractor employment is contingent on achieving and maintaining a score of 98 on a 100-point metric known as "accuracy"; falling below 98 can result in dismissal.
Some have reported post-traumatic stress disorder (PTSD) stemming from lack of access to counseling, coupled with unforgiving expectations and the violent content they are assigned to review. Content moderator Keith Utley, who was employed by Cognizant, suffered a heart attack during work in March 2018; the office lacked a defibrillator, and Utley was transported to a hospital, where he died. Selena Scola, an employee of contractor Pro Unlimited, Inc., sued her employer after she developed PTSD as a result of "constant and unmitigated exposure to highly toxic and extremely disturbing images at the workplace". In December 2019, former CPL employee Chris Gray began legal action in the High Court of Ireland, claiming damages for PTSD suffered as a moderator, the first of an estimated 20+ pending cases. In February 2020, employees in Tampa, Florida, filed a lawsuit against Facebook and Cognizant, alleging that they developed PTSD and related mental health impairments as a result of constant and unmitigated exposure to disturbing content. That same month, European Union Commissioners criticized Facebook's plans for dealing with the working conditions of those contracted to moderate content on the platform. On May 12, 2020, Facebook agreed to settle a class action lawsuit for $52 million, which included a $1,000 payment to each of the 11,250 moderators in the class, with additional compensation available for the treatment of PTSD and other conditions resulting from the job.
Employees
Plans for a Facebook-owned real estate development known as "Willow Village" have been criticized for resembling a "company town", which often curtails the rights of residents and encourages or forces employees to remain within an environment created and monitored by their employer outside of work hours.
Critics have referred to the development as "Zucktown" and "Facebookville", and the company has faced additional criticism for the effect it will have on existing communities in California. In March 2021, an operations manager at Facebook, along with three former candidates in the Facebook hiring process, complained to the EEOC of racial bias practiced at the company against Black people. The current employee, Oscar Veneszee Jr., accused the firm of conducting subjective evaluations and pushing the idea of racial stereotypes. The EEOC has labeled the practice as "systemic" racial bias and has initiated an investigation.
Misleading campaigns against competitors
In May 2011, emails were sent to journalists and bloggers making critical allegations about Google's privacy policies; however, it was later discovered that the anti-Google campaign, conducted by PR giant Burson-Marsteller, was paid for by Facebook, in what CNN referred to as "a new level of skullduggery" and The Daily Beast called a "clumsy smear". While taking responsibility for the campaign, Burson-Marsteller said it should not have agreed to keep its client's (Facebook's) identity a secret. "Whatever the rationale, this was not at all standard operating procedure and is against our policies, and the assignment on those terms should have been declined", it said. In December 2020, Apple Inc. announced an initiative of anti-tracking measures (an opt-in tracking policy) to be introduced to its App Store services. Facebook quickly reacted and began to criticize the initiative, claiming that Apple's privacy-focused anti-tracking change would have a "harmful impact on many small businesses that are struggling to stay afloat and on the free internet that we all rely on more than ever". Facebook also launched a so-called "Speak Up For Small Businesses" page.
Apple, in its response, stated that "users should know when their data is being collected and shared across other apps and websites – and they should have the choice to allow that or not". Apple was also backed by the Electronic Frontier Foundation (EFF), which stated that "Facebook touts itself in this case as protecting small businesses, and that couldn't be further from the truth".
Copying competitors' products and features
Beyond acquiring competitors with strong potential in the social and messaging space, Facebook often simply copies products or features to get to market faster. Internal emails have shown that Facebook's leadership, including Mark Zuckerberg, were frustrated by the time the company spends on prototyping, and suggested exploring copying entire products such as Pinterest. "Copying is faster than innovating", admitted an employee on the internal email thread, which continued: "If you gave the top-down order to go ahead, copy e.g. Pinterest or the gaming dynamics on Foursquare ... I am sure [a] very small team of engineers, a [product manager], and a designer would get it done super quickly." Many Facebook employees appear to question Facebook's approach of cloning competitors; according to leaks, a top-quoted question at Facebook's internal all-hands was: "What is our next big product, which does not imitate already existing products on the market?"
Snapchat
In 2014 Facebook launched Slingshot, an app for sending ephemeral photos, as Snapchat does. In 2016 the company built Instagram Stories, a copy of Snapchat's most popular feature.
TikTok
In August 2020, Facebook built Instagram Reels, a feature that functions and looks similar to TikTok.
Pinterest
For several months, Facebook experimented with an app called Hobbi, which took many cues from Pinterest.
Clubhouse
In the summer of 2021, Facebook began to roll out Live Audio Rooms, which resembles Clubhouse.
Content
Facebook has been criticized for removing or allowing various content on posts, photos, and entire groups and profiles.
Technical
Real-name policy controversy and compromise
Facebook has a real-name system policy for user profiles. The real-name policy stems from the position "that way, you always know who you're connecting with. This helps keep our community safe." The real-name system does not allow adopted names or pseudonyms, and in enforcing it Facebook has suspended accounts of legitimate users until the user provides identification indicating the name. Facebook representatives have described these incidents as very rare. A user claimed responsibility, via the anonymous Android and iOS app Secret, for reporting "fake names", which caused user profiles to be suspended, specifically targeting the stage names of drag queens. On October 1, 2014, Chris Cox, Chief Product Officer at Facebook, offered an apology: "In the two weeks since the real-name policy issues surfaced, we've had the chance to hear from many of you in these communities and understand the policy more clearly as you experience it. We've also come to understand how painful this has been. We owe you a better service and a better experience using Facebook, and we're going to fix the way this policy gets handled so everyone affected here can go back to using Facebook as you were." On December 15, 2015, Facebook announced in a press release that it would provide a compromise to its real-name policy after protests from groups such as the gay/lesbian community and abuse victims. The site was developing a protocol that would allow members to provide specifics about their "special circumstance" or "unique situation", with a request to use pseudonyms subject to verification of their true identities. At that time, this was already being tested in the U.S.
Product manager Todd Gage and vice president of global operations Justin Osofsky also promised a new method for reducing the number of members who must go through ID verification while ensuring the safety of others on Facebook. The fake-name reporting procedure would also be modified, requiring anyone who makes such an allegation to provide specifics to be investigated, and giving the accused individual time to dispute the allegation.
Deleting users' statuses
There have been complaints of user statuses being mistakenly or intentionally deleted for alleged violations of Facebook's posting guidelines. Especially for non-English-speaking writers, Facebook does not have a proper support system to genuinely read the content and make decisions. Sometimes the content of a status did not contain any "abusive" or defamatory language, but it was nevertheless deleted because it had been secretly reported by a group of people as "offensive". For languages other than English, Facebook has so far been unable to identify this group approach being used to vilify humanitarian activism. In another incident, Facebook had to apologize after it deleted a free speech group's post about the abuse of human rights in Syria. In that case, a spokesman for Facebook said the post was "mistakenly" removed by a member of its moderation team, which receives a high volume of take-down requests.
Enabling of harassment
Facebook instituted a policy by which the site is now self-policed by the community of Facebook users. Some users have complained that this policy allows Facebook to empower abusive users to harass them: abusers can submit reports on even benign comments and photos as being "offensive" or "in violation of Facebook Rights and Responsibilities", and enough such reports result in the harassed user's account being blocked for a predetermined number of days or weeks, or even deactivated entirely.
Facebook UK policy director Simon Milner told Wired magazine that "Once the piece of content has been seen, assessed and deemed OK, (Facebook) will ignore further reports about it."
Lack of customer support
Like almost all other Web 2.0 sites, Facebook lacks any form of live customer support beyond "community" support pages and FAQs, which offer only general troubleshooting advice, often making it impossible to resolve issues that require the services of an administrator or are not covered in the FAQs. The automated emailing system used when filling out a support form often directs users back to the help center or to pages that are outdated and cannot be accessed, leaving users at a dead end with no further support available. A person who has lost access to Facebook or does not have an account has no easy way to contact the company directly.
Downtime and outages
Facebook has had a number of outages and periods of downtime large enough to draw media attention. A 2007 outage resulted in a security hole that enabled some users to read other users' personal mail. In 2008, the site was inaccessible for about a day from many locations in many countries. In spite of these occurrences, a report issued by Pingdom found that Facebook had less downtime in 2008 than most social-networking websites. On September 16, 2009, Facebook started having major problems loading as people signed in, due to a group of hackers deliberately trying to drown out a political speaker who had been using social networks to speak against the Iranian election results. Just two days later, on September 18, Facebook went down again. In October 2009, an unspecified number of Facebook users were unable to access their accounts for over three weeks. On Monday, October 4, 2021, Facebook and its other apps – Instagram, WhatsApp, Messenger, Oculus, as well as the lesser-known Mapillary – had an hours-long DNS-related global outage.
The outage also affected anyone using "Log in with Facebook" to access third-party sites. The downtime lasted approximately five hours and fifteen minutes, from approximately 15:50 UTC to 21:05 UTC, and affected roughly three billion users. The outage was caused by a BGP withdrawal of all of the IP routes to Facebook's Domain Name System (DNS) servers, which were all self-hosted at the time.
Tracking cookies
Facebook has been criticized heavily for "tracking" users even when they are logged out of the site. Australian technologist Nik Cubrilovic discovered that when a user logs out of Facebook, the cookies from that login are still kept in the browser, allowing Facebook to track users on websites that include "social widgets" distributed by the social network. Facebook denied the claims, saying it has "no interest" in tracking users or their activity. After the discovery of the cookies, Facebook promised to remove them, saying it would no longer have them on the site. A group of users in the United States has sued Facebook for breaching privacy laws. As of December 2015, to comply with a court order citing violations of the European Union Directive on Privacy and Electronic Communications, which requires users to consent to tracking and storage of data by websites, Facebook no longer allows users in Belgium to view any content on the service, even public pages, without being registered and logged in.
Email address change
In June 2012, Facebook removed all existing email addresses from user profiles and added a new @facebook.com email address. Facebook claimed this was part of adding a "new setting that gives people the choice to decide which addresses they want to show on their timelines". However, this setting was redundant to the existing "Only Me" privacy setting, which was already available to hide addresses from timelines.
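The mechanics of the October 2021 outage described under "Downtime and outages" above can be sketched in a toy model. Everything here is illustrative (the routing table, prefixes, and lookup logic are invented, not Facebook's real configuration): the point is that once the BGP routes covering the DNS servers are withdrawn, name resolution fails globally, even though the application servers themselves may still be running.

```python
# Toy model of the October 2021 failure mode. The prefixes and table are
# hypothetical; real BGP involves AS paths, peers, and longest-prefix match.
routing_table = {
    "203.0.113.0/24": "dns-servers",   # routes to the self-hosted DNS fleet
    "198.51.100.0/24": "app-servers",  # routes to the application servers
}

def can_resolve_names(table: dict) -> bool:
    # Name resolution needs a reachable route to at least one DNS server.
    return "dns-servers" in table.values()

assert can_resolve_names(routing_table)

# A faulty configuration change withdraws the DNS prefixes via BGP:
del routing_table["203.0.113.0/24"]

# The app servers are still routable, but no one can look up their names,
# so every Facebook domain becomes unreachable at once:
print(can_resolve_names(routing_table))  # False
```

Because the DNS servers were self-hosted behind those same withdrawn routes, there was no independent resolver left to answer queries, which is why the outage was total rather than partial.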
Users complained that the change was unnecessary, that they did not want an @facebook.com email address, and that they did not receive adequate notification that their profiles had been changed. Due to a software bug, the change in email address was synchronized to phones, causing existing email address details to be deleted. The facebook.com email service was retired in February 2014.
Safety Check bug
On March 27, 2016, following a bombing in Lahore, Pakistan, Facebook activated its "Safety Check" feature, which allows people to let friends and loved ones know they are okay following a crisis or natural disaster, but sent notifications to people who were never in danger, or even close to the Pakistan explosion. Some users as far away as the US, the UK, and Egypt received notifications asking if they were okay.
End-to-end encryption
In February 2021, the National Crime Agency of the UK expressed its concern that the installation of end-to-end encryption methods would result in the spread of child pornography going undetected. Facebook representatives had previously told a UK Parliament committee that the use of these stronger encryption methods would make it easier for pedophiles to share child pornography on Facebook's networks. The US-based National Center for Missing and Exploited Children estimates that around 70% of reports to law enforcement regarding the spread of child pornography on Facebook would be lost as a result of the implementation of end-to-end encryption. In May 2021, Facebook came under fire from Ken McCallum, the Director-General of MI5, for its plans to introduce end-to-end encryption into its Messenger and Instagram services. McCallum stated that the introduction of such encryption methods would prevent security organizations from viewing communications related to ongoing terrorist plots, and that the implementation of end-to-end encryption would block active counter-terrorism investigations.
Third-party responses to Facebook
Government censorship
Several countries have banned access to Facebook, including Syria, China, and Iran. In 2010, the Office of the Data Protection Supervisor, a branch of the government of the Isle of Man, received so many complaints about Facebook that it deemed it necessary to provide a "Facebook Guidance" booklet (available online as a PDF file), which cited (amongst other things) Facebook policies and guidelines and included an elusive Facebook telephone number. When called, however, this number provided no telephone support for Facebook users, only playing back a recorded message advising callers to review Facebook's online help information. In 2010, Facebook reportedly allowed a page deemed by the Islamic Lawyers Forum (ILF) to be anti-Muslim. The ILF filed a petition with Pakistan's Lahore High Court. On May 18, 2010, Justice Ijaz Ahmad Chaudhry ordered Pakistan's Telecommunication Authority to block access to Facebook until May 31. The offensive page had provoked street demonstrations in Muslim countries due to visual depictions of the Prophet Mohammed, which are regarded as blasphemous by Muslims. A spokesman said the Pakistan Telecommunication Authority would move to implement the ban once the order had been issued by the Ministry of Information and Technology. "We will implement the order as soon as we get the instructions", Khurram Mehran told AFP. "We have already blocked the URL link and issued instruction to Internet service providers yesterday", he added. Rai Bashir told AFP that "We moved the petition in the wake of widespread resentment in the Muslim community against the Facebook contents". The petition called on the government of Pakistan to lodge a strong protest with the owners of Facebook, he added. Bashir said a PTA official told the judge his organization had blocked the page, but the court ordered a total ban on the site.
People demonstrated outside the court in the eastern city of Lahore, Pakistan, carrying banners condemning Facebook. Larger-scale protests took place in Pakistan after the ban and the widespread news of the objectionable page. The ban was lifted on May 31, after Facebook reportedly assured the Lahore High Court that it would remedy the issues in dispute. In 2011, a court in Pakistan was petitioned to place a permanent ban on Facebook for hosting a page called "2nd Annual Draw Muhammad Day May 20th 2011".
Organizations blocking access
Ontario government employees, federal public servants, MPPs, and cabinet ministers were blocked from access to Facebook on government computers in May 2007. When employees tried to access Facebook, a warning message appeared: "The Internet website that you have requested has been deemed unacceptable for use for government business purposes". This warning also appears when employees try to access YouTube, MySpace, gambling or pornographic websites. However, innovative employees have found ways around such protocols, and many claim to use the site for political or work-related purposes. A number of local governments, including those in the UK and Finland, imposed restrictions on the use of Facebook in the workplace due to the technical strain incurred. Other government-related agencies, such as the US Marine Corps, have imposed similar restrictions. A number of hospitals in Finland have also restricted Facebook use, citing privacy concerns.
Schools blocking access
In October 2005, the University of New Mexico (UNM) blocked access to Facebook from UNM campus computers and networks, citing unsolicited emails and a similar site called UNM Facebook. After a UNM user signed into Facebook from off campus, a message from Facebook said, "We are working with the UNM administration to lift the block and have explained that it was instituted based on erroneous information, but they have not yet committed to restore your access."
UNM, in a message to students who tried to access the site from the UNM network, wrote, "This site is temporarily unavailable while UNM and the site owners work out procedural issues. The site is in violation of UNM's Acceptable Computer Use Policy for abusing computing resources (e.g., spamming, trademark infringement, etc.). The site forces use of UNM credentials (e.g., NetID or email address) for non-UNM business." However, after Facebook created an encrypted login and displayed a precautionary message not to use university passwords for access, UNM unblocked access the following spring semester. The Columbus Dispatch reported on June 22, 2006, that Kent State University's athletic director had planned to ban the use of Facebook by athletes and gave them until August 1 to delete their accounts. On July 5, 2006, the Daily Kent Stater reported that the director had reversed the decision after reviewing Facebook's privacy settings: as long as athletes followed the university's policies on online conduct, they could keep their profiles.
Closed social networks
Several websites concerned with social networking, such as Salesforce, have criticized the lack of information that users get when they share data. Advanced users cannot limit the amount of information anyone can access in their profiles, and Facebook promotes the sharing of personal information for marketing purposes, leading to the promotion of the service using personal data from users who are not fully aware of this. Facebook exposes personal data without supporting open standards for data interchange. According to several communities and authors, closed social networking, on the other hand, promotes data retrieval from other people while not exposing one's personal information. Openbook was established in early 2010 both as a parody of Facebook and as a critique of its changing privacy management protocols.
Litigation
Lobbying
In December 2021, The Wall Street Journal reported on Meta's lobbying efforts to divide US lawmakers and "muddy the waters" in Congress in order to hinder regulation following the 2021 whistleblower leaks. Facebook's lobbyist team in Washington suggested to Republican lawmakers that the whistleblower "was trying to help Democrats," while the narrative told to Democratic staffers was that Republicans "were focused on the company's decision to ban expressions of support for Kyle Rittenhouse," The Wall Street Journal reported. According to the article, the company's goal was to "muddy the waters, divide lawmakers along partisan lines and forestall a cross-party alliance" against Facebook (now Meta) in Congress.
Terms of use controversy
Facebook originally made changes to its terms of use, or terms of service, on February 4, 2009; the changes went unnoticed until Chris Walters, a blogger for the consumer-oriented blog The Consumerist, noticed them on February 15, 2009. Walters complained that the change gave Facebook the right to "Do anything they want with your content. Forever." The section under the most controversy is the "User Content Posted on the Site" clause. Before the changes, the clause read: "You may remove your User Content from the Site at any time. If you choose to remove your User Content, the license granted above will automatically expire, however you acknowledge that the Company may retain archived copies of your User Content." The "license granted" refers to the license that Facebook has to one's "name, likeness, and image" for use in promotions and external advertising. The new terms of use deleted the phrase stating that the license would "automatically expire" if a user chose to remove content. By omitting this line, Facebook's license extends to users' content perpetually and irrevocably, years after the content has been deleted.
Many Facebook users voiced opinions against the changes to the Facebook Terms of Use, leading to an Internet-wide debate over the ownership of content. The Electronic Privacy Information Center (EPIC) prepared a formal complaint with the Federal Trade Commission. Many individuals were frustrated with the removal of the controversial clause. More than 38,000 Facebook users joined a user group against the changes, and a number of blogs and news sites wrote about the issue. After the change was brought to light in Walters's blog entry, Zuckerberg addressed the issues concerning the changes to Facebook's terms of use in his own blog on February 16, 2009. Zuckerberg wrote, "Our philosophy is that people own their information and control who they share it with." In addition to this statement, Zuckerberg explained the paradox created when people want to share their information (phone number, pictures, email address, etc.) with the public, but at the same time desire to remain in complete control of who has access to this information. To calm criticism, Facebook returned to its original terms of use. However, on February 17, 2009, Zuckerberg wrote in his blog that although Facebook had reverted to its original terms of use, it was in the process of developing new terms to address the paradox. Zuckerberg stated that the new terms would allow Facebook users to "share and control their information, and it will be written clearly in language everyone can understand." Zuckerberg invited users to join a group entitled "Facebook Bill of Rights and Responsibilities" to give their input and help shape the new terms. On February 26, 2009, Zuckerberg posted a blog entry updating users on the progress of the new Terms of Use. He wrote, "We decided we needed to do things differently and so we're going to develop new policies that will govern our system from the ground up in an open and transparent way."
Zuckerberg introduced two new additions to Facebook: the Facebook Principles and the Statement of Rights and Responsibilities. Both additions allow users to vote on changes to the terms of use before they are officially released. Because "Facebook is still in the business of introducing new and therefore potentially disruptive technologies", Zuckerberg explained, users need to adjust and familiarize themselves with the products before they can adequately show their support. This new voting system was initially applauded as Facebook's step toward a more democratized social network system. However, the new terms were harshly criticized in a report by computer scientists from the University of Cambridge, who stated that the democratic process surrounding the new terms was disingenuous and that significant problems remained in the new terms. The report was endorsed by the Open Rights Group. In December 2009, EPIC and a number of other U.S. privacy organizations filed another complaint with the Federal Trade Commission (FTC) regarding Facebook's Terms of Service. In January 2011, EPIC filed a subsequent complaint claiming that Facebook's new policy of sharing users' home addresses and mobile phone information with third-party developers was "misleading and fail[ed] to provide users clear and privacy protections", particularly for children under age 18. Facebook temporarily suspended implementation of its policy in February 2011, but the following month announced it was "actively considering" reinstating the third-party policy.
Interoperability and data portability
Facebook has been criticized for failing to offer users a feature to export their friends' information, such as contact information, for use with other services or software. The inability of users to export their social graph in an open standard format contributes to vendor lock-in and contravenes the principles of data portability.
Automated collection of user information without Facebook's consent violates its Statement of Rights and Responsibilities, and third-party attempts to do so (e.g., Web scraping) have resulted in litigation, as in Facebook's suit against Power.com. Facebook Connect has been criticized for its lack of interoperability with OpenID. Lawsuits over privacy Facebook's strategy of making revenue through advertising has created a lot of controversy for its users, as some argue that it is "a bit creepy ... but it is also brilliant." Some Facebook users have raised privacy concerns because they do not like that Facebook sells users' information to third parties. In 2012, users sued Facebook for using their pictures and information in a Facebook advertisement. Facebook gathers user information by keeping track of pages users have "Liked" and through the interactions users have with their connections. It then creates value from the gathered data by selling it. In 2009 users also filed a lawsuit over Facebook's privacy invasion through the Facebook Beacon system. Facebook's team believed that through the Beacon system people could inspire their friends to buy similar products; however, users did not like the idea of sharing certain online purchases with their Facebook friends. Users were against Facebook's invasion of privacy and sharing that privacy with the world. Facebook users became more aware of Facebook's behavior with user information in 2009 as Facebook launched its new Terms of Service. In its terms of service, Facebook admits that user information may be used for some of Facebook's own purposes, such as sharing a link to posted images or for its own commercials and advertisements. As Dijck argues in his book, "the more users know about what happens to their personal data, the more inclined they are to raise objections." This created a battle between Facebook and Facebook users described as the "battle for information control". 
Facebook users have become aware of Facebook's intentions and people now see Facebook "as serving the interests of companies rather than its users." In response to Facebook selling user information to third parties, concerned users have resorted to the method of "obfuscation". Through obfuscation users can purposely hide their real identity and provide Facebook with false information that makes their collected data less accurate. By obfuscating information through sites such as FaceCloak, Facebook users have regained control of their personal information. Better Business Bureau review The Better Business Bureau gave Facebook an "A" rating. The 36-month running count of complaints about Facebook logged with the Better Business Bureau was 1136, including 101 ("Making a full refund, as the consumer requested"), 868 ("Agreeing to perform according to their contract"), 1 ("Refuse [sic] to adjust, relying on terms of agreement"), 20 ("Unassigned"), 0 ("Unanswered") and 136 ("Refusing to make an adjustment"). Security Facebook's software has proven vulnerable to likejacking. On July 28, 2010, the BBC reported that security consultant Ron Bowes used a piece of code to scan Facebook profiles to collect data on 100 million profiles. The data collected was not hidden by the users' privacy settings. Bowes then published the list online. This list, which has been shared as a downloadable file, contains the URL of every searchable Facebook user's profile, their name and unique ID. Bowes said he published the data to highlight privacy issues, but Facebook claimed it was already public information. In early June 2013, The New York Times reported that an increase in malicious links related to the Trojan horse malware program Zeus was identified by Eric Feinberg, founder of the advocacy group Fans Against Kounterfeit Enterprise (FAKE). 
Feinberg said that the links were present on popular NFL Facebook fan pages and, following contact with Facebook, was dissatisfied with the corporation's "after-the-fact approach". Feinberg called for oversight, stating, "If you really want to hack someone, the easiest place to start is a fake Facebook profile—it's so simple, it's stupid." Rewards for vulnerability reporting On August 19, 2013, it was reported that a Facebook user from the Palestinian territories, Khalil Shreateh, found a bug that allowed him to post material to other users' Facebook Walls. Users are not supposed to be able to post material to another user's Wall unless they are an approved friend of that user. To prove that he was telling the truth, Shreateh posted material to the wall of Sarah Goodin, a friend of Facebook CEO Mark Zuckerberg. Following this, Shreateh contacted Facebook's security team with the proof that his bug was real, explaining in detail what was going on. Facebook has a bounty program that pays people a fee of $500 or more for reporting bugs instead of using them to their advantage or selling them on the black market. However, it was reported that instead of fixing the bug and paying Shreateh the fee, Facebook originally told him that "this was not a bug" and dismissed him. Shreateh then tried a second time to inform Facebook, but they dismissed him yet again. On the third try, Shreateh used the bug to post a message to Mark Zuckerberg's Wall, stating "Sorry for breaking your privacy ... but a couple of days ago, I found a serious Facebook exploit" and that Facebook's security team was not taking him seriously. Within minutes, a security engineer contacted Shreateh, questioned him on how he performed the move and ultimately acknowledged that it was a bug in the system. Facebook temporarily suspended Shreateh's account and fixed the bug after several days. 
However, in a move that was met with much public criticism and disapproval, Facebook refused to pay out the $500 fee to Shreateh; instead, Facebook responded that by posting to Zuckerberg's account, Shreateh had violated one of their terms of service policies and therefore "could not be paid". Included with this, the Facebook team strongly censured Shreateh over his manner of resolving the matter. In closing, they asked that Shreateh continue to help them find bugs. On August 22, 2013, Yahoo News reported that Marc Maiffret, chief technology officer of the cybersecurity firm BeyondTrust, was urging hackers to help raise a $10,000 reward for Khalil Shreateh. On August 20, Maiffret stated that he had already raised $9,000 in his efforts, including the $2,000 he himself contributed. He and other hackers denounced Facebook for refusing Shreateh compensation. Maiffret said: "He is sitting there in Palestine doing this research on a five-year-old laptop that looks like it is half broken. It's something that might help him out in a big way." Facebook representatives have since responded, "We will not change our practice of refusing to pay rewards to researchers who have tested vulnerabilities against real users." Facebook representatives also claimed they had paid out over $1 million to individuals who have discovered bugs in the past. Environmental impacts In 2010, Prineville, Oregon, was chosen as the site for Facebook's new data center. However, the center has been met with criticism from environmental groups such as Greenpeace because the power utility company contracted for the center, PacifiCorp, generates 60% of its electricity from coal. In September 2010, Facebook received a letter from Greenpeace containing half a million signatures asking the company to cut its ties to coal-based electricity. 
On April 21, 2011, Greenpeace released a report showing that, of the top ten big brands in cloud computing, Facebook relied the most on coal for electricity for its data centers. At the time, data centers consumed up to 2% of all global electricity and this amount was projected to increase. Phil Radford of Greenpeace said "we are concerned that this new explosion in electricity use could lock us into old, polluting energy sources instead of the clean energy available today". On December 15, 2011, Greenpeace and Facebook announced together that Facebook would shift to using clean and renewable energy to power its own operations. Marcy Scott Lynn, of Facebook's sustainability program, said it looked forward "to a day when our primary energy sources are clean and renewable" and that the company is "working with Greenpeace and others to help bring that day closer". Advertising Click fraud In July 2012, startup Limited Run claimed that 80% of its Facebook clicks came from bots. Limited Run co-founder Tom Mango told TechCrunch that they "spent roughly a month testing this" with six web analytics services including Google Analytics and in-house software. Alleged reason Limited Run said it came to the conclusion that the clicks were fraudulent after running its own analysis. It determined that most of the clicks for which Facebook was charging it came from computers that were not loading JavaScript, a programming language that allows Web pages to be interactive. Almost all Web browsers load JavaScript by default, so the assumption is that if a click comes from a browser that is not, it is probably not a real person but a bot. Like fraud Facebook offers an advertising tool for pages to get more "likes". According to Business Insider, this advertising tool is called "Suggested Posts" or "Suggested Pages", allowing companies to market their page to thousands of new users for as little as $50. 
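Limited Run's click-fraud test described above boils down to a set comparison: of the clicks the ad network billed, how many came from clients that never executed the landing page's JavaScript? A minimal sketch of that heuristic, where the click IDs and beacon log are illustrative assumptions rather than Limited Run's actual tooling:

```python
# Heuristic bot filter: a billed click is suspect if the landing page was
# fetched but the JavaScript beacon the page fires on load never arrived.
# Click IDs and the example data below are invented for illustration.

def suspect_click_rate(clicks, beacons):
    """clicks: set of click IDs billed by the ad network.
    beacons: set of click IDs whose page actually ran the JS beacon."""
    if not clicks:
        return 0.0
    no_js = clicks - beacons  # clicked, but never executed JavaScript
    return len(no_js) / len(clicks)

clicks = {"c1", "c2", "c3", "c4", "c5"}
beacons = {"c2"}  # only one click executed the beacon
rate = suspect_click_rate(clicks, beacons)
print(f"{rate:.0%} of billed clicks never loaded JavaScript")  # prints "80% ..."
```

In practice a site logs a beacon request fired by an on-load script and joins it against the ad network's click log; a high no-JS fraction is evidence of bot traffic rather than proof, since a small share of real users disable JavaScript.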
Global Fortune 100 firms are increasingly using social media marketing tools, as the number of "likes" per Facebook page has risen by 115% globally. Biotechnology company Comprendia investigated Facebook's "likes" through advertising by analyzing the life science pages with the most likes. They concluded that as much as 40% of "likes" from company pages are suspected to be fake. According to Facebook's annual report, an estimated 0.4% to 1.2% of active users are undesirable accounts that create fake likes. Small companies such as PubChase have publicly testified against Facebook's advertising tool, claiming legitimate advertising on Facebook creates fraudulent Facebook "likes". In May 2013, PubChase decided to build up its Facebook following through Facebook's advertising tool, which promises to "connect with more of the people who matter to you". After the first day, the company grew suspicious of the increased likes, as it ended up with 900 likes from India. According to PubChase, none of the users behind the "likes" seemed to be scientists. The statistics from Google Analytics indicate that India is not in the company's main user base. PubChase continues by stating that Facebook has no interface to delete the fake likes; rather, the company must manually delete each follower itself. In February 2014, Derek Muller used his YouTube account Veritasium to upload a video titled "Facebook Fraud". Within three days, the video had gone viral with more than a million views (it has reached 6,371,759 views as of December 15, 2021). In the video, Muller illustrates how, after paying US$50 for Facebook advertising, the "likes" on his fan page tripled in a few days and soon reached 70,000 "likes", compared to his original 2,115 likes before the advertising. 
Despite the significant increase in "likes", Muller noticed that his page had actually decreased in engagement: fewer people were commenting on, sharing, and liking his posts and updates. Muller also noticed that the users who "liked" his page were users who liked hundreds of other pages, including competing pages such as AT&T and T-Mobile. He theorizes that users are purposely clicking "like" on any and every page to divert attention from the pages they were paid to "like". Muller claims, "I never bought fake likes, I used Facebook legitimate advertising, but the results are as if I paid for fake likes from a click farm". In response to the fake "likes" complaints, Facebook told Business Insider: Undesired targeting On August 3, 2007, several British companies, including First Direct, Vodafone, Virgin Media, The Automobile Association, Halifax and Prudential pulled advertising from Facebook after finding that their ads were displayed on the page of the British National Party, a far-right political party. Facilitation of housing discrimination Facebook has faced allegations that its advertising platforms facilitate housing discrimination by means of internal functions for targeted advertising, which allowed advertisers to target or exclude specific audiences from campaigns. Researchers have also found that Facebook's advertising platform may be inherently discriminatory, since ad delivery is also influenced by how often specific demographics interact with specific types of advertising, even if this is not explicitly determined by the advertiser. Under the United States' Fair Housing Act, it is illegal to show a preference for or against tenants based on specific protected classes (including race, ethnicity, and disabilities) when advertising or negotiating the rental or sale of housing. 
In 2016, ProPublica found that advertisers could target or exclude users from advertising based on an "Ethnic Affinity" – a demographic trait which is determined based on a user's interests and behaviors on Facebook, and not explicitly provided by the user. This could, in turn, be used to discriminate based on race. In February 2017, Facebook stated that it would implement stronger measures to forbid discriminatory advertising across the entire platform. Advertisers who attempt to create ads for housing, employment, or credit (HEC) opportunities would be blocked from using ethnic affinities (renamed "multicultural affinities" and now classified as behaviors) to target the ad. If an advertiser uses any other audience segment to target ads for HEC, they would be informed of the policies, and be required to affirm their compliance with relevant laws and policies. However, in November 2017, ProPublica found that automated enforcement of these new policies was inconsistent. They were also able to successfully create housing ads that excluded users based on interests and other factors that effectively imply associations with protected classes, including interests in wheelchair ramps, the Spanish-language television network Telemundo, and New York City ZIP codes with majority minority populations. In response to the report, Facebook temporarily disabled the ability to target any ad with exclusions based on multicultural affinities. In April 2018, Facebook permanently removed the ability to create exclusions based on multicultural affinities. In July 2018, Facebook signed a legally binding agreement with the State of Washington to take further steps within 90 days to prevent the use of its advertising platform for housing discrimination against protected classes. The following month, Facebook announced that it would remove at least 5,000 categories from its exclusion system to prevent "misuse", including those relating to races and religions. 
On March 19, 2019, Facebook settled a lawsuit over the matter with the National Fair Housing Alliance, agreeing to create a separate portal for HEC advertising with limited targeting options by September 2019, and to provide a public archive of all HEC advertising. On March 28, 2019, the U.S. Department of Housing and Urban Development (HUD) filed a lawsuit against Facebook, having filed a formal complaint against the company on August 13, 2018. The HUD also took issue with Facebook's tendency to deliver ads based on users having "particular characteristics [that are] most likely to engage with the ad". Fake accounts In August 2012, Facebook revealed that more than 83 million Facebook accounts (8.7% of total users) were fake accounts. These fake profiles consist of duplicate profiles, accounts for spamming purposes and personal profiles for business, organization or non-human entities such as pets. As a result of this revelation, the share price of Facebook dropped below $20. Furthermore, there has been much effort to detect fake profiles using automated means; in one such work, machine-learning techniques were used to detect fake users. Facebook initially refused to remove a "business" page devoted to a woman's anus, created without her knowledge while she was underage, due to other Facebook users having expressed interest in the topic. After Buzzfeed published a story about it, the page was finally removed. The page listed her family's former home address as that of the "business". User interface Upgrades September 2008 In September 2008, Facebook permanently moved its users to what they termed the "New Facebook" or Facebook 3.0. This version contained several different features and a complete layout redesign. Between July and September, users had been given the option to use the new Facebook in place of the original design, or to return to the old design. Facebook's decision to migrate their users was met with some controversy in their community. 
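The automated fake-profile detection mentioned under Fake accounts above is usually framed as classification over behavioral features of an account. As a toy illustration only, here is a linear scorer whose features, weights, and threshold are all invented for this sketch and not taken from any published detector:

```python
# Toy fake-account scorer: combine simple behavioral features into a score
# and flag accounts above a cutoff. Features, weights, and the threshold
# are illustrative assumptions, not a real detector's parameters.

WEIGHTS = {
    "friend_requests_per_day": 0.5,   # spam accounts friend aggressively
    "likes_per_day": 0.02,            # click-farm accounts like in bulk
    "profile_completeness": -2.0,     # real users tend to fill profiles in
}
THRESHOLD = 1.0

def fake_score(account):
    """Weighted sum of the account's behavioral features."""
    return sum(WEIGHTS[f] * account.get(f, 0.0) for f in WEIGHTS)

def is_suspect(account):
    return fake_score(account) > THRESHOLD

bot = {"friend_requests_per_day": 40, "likes_per_day": 500, "profile_completeness": 0.1}
human = {"friend_requests_per_day": 1, "likes_per_day": 5, "profile_completeness": 0.9}
print(is_suspect(bot), is_suspect(human))  # prints "True False"
```

A production system would learn the weights with a classifier such as logistic regression and validate against labeled fake and genuine accounts; the point here is only the shape of the feature-scoring approach.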
Several groups started opposing the decision, some with over a million users. October 2009 In October 2009, Facebook redesigned the news feed so that users could view all types of things that their friends were involved with. In a statement, they said that the stories applications generate can show up in both views, and that "The best way for your stories to appear in the News Feed filter is to create stories that are highly engaging, as high quality, interesting stories are most likely to garner likes and comments by the user's friends." This redesign was explained as: News Feed will focus on popular content, determined by an algorithm based on interest in that story, including the number of times an item is liked or commented on. Live Feed will display all recent stories from a large number of a user's friends. The redesign was met immediately with criticism from users, many of whom did not like the amount of information that was coming at them. This was also compounded by the fact that people could not select what they saw. November/December 2009 In November 2009, Facebook issued a proposed new privacy policy, and adopted it unaltered in December 2009. They combined this with a rollout of new privacy settings. This new policy declared certain information, including "lists of friends", to be "publicly available", with no privacy settings; it was previously possible to keep access to this information restricted. Due to this change, users who had set their "list of friends" as private were forced to make it public without even being informed, and the option to make it private again was removed. This was protested by many people and privacy organizations such as the EFF. 
The change was described by Ryan Tate as Facebook's Great Betrayal, forcing user profile photos and friends lists to be visible in users' public listing, even for users who had explicitly chosen to hide this information previously, and making photos and personal information public unless users were proactive about limiting access. For example, a user whose "Family and Relationships" information was set to be viewable by "Friends Only" would default to being viewable by "Everyone" (publicly viewable). That is, information such as the gender of the partner the user is interested in, relationship status, and family relations became viewable to those even without a Facebook account. Facebook was heavily criticized for both reducing its users' privacy and pushing users to remove privacy protections. Groups criticizing the changes include the Electronic Frontier Foundation and American Civil Liberties Union. Mark Zuckerberg, CEO, had hundreds of personal photos and his events calendar exposed in the transition. Facebook has since re-included an option to hide friends lists from being viewable; however, this preference is no longer listed with other privacy settings, and the former ability to hide the friends list from selected people among one's own friends is no longer possible. Journalist Dan Gillmor deleted his Facebook account over the changes, stating he "can't entirely trust Facebook" and Heidi Moore at Slate's Big Money temporarily deactivated her account as a "conscientious objection". Other journalists have been similarly disappointed and outraged by the changes. Defending the changes, founder Mark Zuckerberg said "we decided that these would be the social norms now and we just went for it". The Office of the Privacy Commissioner of Canada launched another investigation into Facebook's privacy policies after complaints following the change. 
January 2018 Following a difficult 2017, marked by accusations of relaying fake news and revelations about groups close to Russia which tried to influence the 2016 US presidential election (see Russian interference in the 2016 United States elections) via advertisements on his service, Mark Zuckerberg announced in his traditional January post: Following surveys of Facebook users, this desire for change would take the form of a reconfiguration of the News Feed algorithms to: Prioritize content of family members and friends (Mark Zuckerberg January 12, Facebook: "The first changes you'll see will be in News Feed, where you can expect to see more from your friends, family and groups".) Give priority to news articles from local sources considered more credible The recent changes of the News Feed algorithm (see News Feed#History) are expected to improve "the amount of meaningful content viewed". To this end, the new algorithm is supposed to determine the publications around which a user is most likely to interact with his friends, and make them appear higher in the News Feed instead of items, for example, from media companies or brands. These are posts "that inspire back-and-forth discussion in the comments and posts that you might want to share and react to". But, as even Mark Zuckerberg admitted, he "expect[ed] the time people spend on Facebook and some measures of engagement will go down. But I also expect the time you do spend on Facebook will be more valuable". The less public content a Facebook user sees on their News Feed, the less brands are able to reach consumers. That is unarguably a major loss for advertisers and publishers. 
This change, which seems to be just another update of the social network, is widely criticized because of the heavy consequences it might lead to: "In countries such as the Philippines, Myanmar and South Sudan and emerging democracies such [as] Bolivia and Serbia, it is not ethical to plead platform neutrality or to set up the promise of a functioning news ecosystem and then simply withdraw at a whim". Indeed, in such countries, Facebook was the promise of a reliable and objective platform on which they could hope for raw information. Independent media companies tried to fight censorship through their articles and were promoting in a way the right for citizens to know what is going on in their countries. The company's way of handling scandals and criticism over fake news by diminishing its media company image is even described as "potentially deadly" in the poor and fraught political environments, like Myanmar or South Sudan, drawn in by the social network's "free basics" programme. Serbian journalist Stevan Dojcinovic goes further by describing Facebook as a "monster" and accuses the company of "showing a cynical lack of concern for how its decisions affect the most vulnerable". Indeed, Facebook had experimented with withdrawing media companies' news from users' newsfeeds in a few countries such as Serbia. Dojcinovic then wrote an article explaining how Facebook had helped them "to bypass mainstream channels and bring [their] stories to hundreds of thousands of readers". The fact that the rule about publishers is not applied to paid posts raised the journalist's fears about the social network "becoming just another playground for the powerful" by letting them, for example, buy Facebook ads. Criticism has also come from other media companies, some depicting the private company as the "destroyer of worlds". 
LittleThings CEO Joe Speiser stated that the algorithm shift "took out roughly 75% of LittleThings' organic traffic while hammering its profit margins", compelling it to close its doors because it was relying on Facebook to share content. Net neutrality "Free Basics" controversy in India In February 2016, the Telecom Regulatory Authority of India (TRAI) ruled against differential data pricing for limited services from mobile phone operators, effectively ending zero-rating platforms in India. Zero rating provides access to a limited number of websites for no charge to the end user. Net-neutrality supporters from India (SaveTheInternet.in) brought out the negative implications of the Facebook Free Basics program and spread awareness to the public. Facebook's Free Basics program was a collaboration with Reliance Communications to launch Free Basics in India. The TRAI ruling against differential pricing marked the end of Free Basics in India. Earlier, Facebook had spent US$44 million in advertising and implored all of its Indian users to send an email to the Telecom Regulatory Authority to support its program. TRAI later asked Facebook to provide specific responses from the supporters of Free Basics. Treatment of potential competitors In December 2018, details on Facebook's behavior toward competitors surfaced. The UK parliament member Damian Collins released files from a court ruling between Six4Three and Facebook. According to those files, the social media company Twitter released its app Vine in 2013, and Facebook blocked Vine's access to its data. In July 2020, Facebook along with other tech giants Apple, Amazon and Google were accused of maintaining harmful power and anti-competitive strategies to quash potential competitors in the market. The CEOs of the respective firms appeared in a teleconference on July 29, 2020, before the lawmakers of the United States Congress. See also Criticism of Apple Criticism of Google Criticism of Microsoft Criticism of Yahoo! 
Europe v Facebook Facebook content management controversies Facebook Files Facebook history Facebook malware Facebook Analytics Facebook Pixel Filter bubble Instagram's impact on people Issues involving social networking services Online hate speech Social media and suicide Surveillance capitalism Unauthorized access in online social networks Ireland as a tax haven Techlash Further reading "Facebook: Friend or Foe?". LifeIvy. May 15, 2013 How Facebook's tentacles reach further than you think (May 26, 2017), BBC
https://en.wikipedia.org/wiki/The%20Cimarons
The Cimarons
The Cimarons are a British reggae band formed in 1967. They were the UK's first self-contained indigenous reggae band. History Jamaican natives, the Cimarons migrated to Britain in 1967 with a lineup consisting of Franklyn Dunn (bass), Carl Levy (keyboards), Locksley Gichie (guitar), and Maurice Ellis (drums); vocalist Winston Reid (better known as Winston Reedy) joined in London. They were primarily session musicians in Jamaica, and backed many artists, including Jimmy Cliff. Their first LP, In Time, released on Trojan Records in 1974, featured a rendition of the O'Jays' "Ship Ahoy", "Utopian Feeling", "Over The Rainbow," and "My Blue Heaven". Vulcan Records released On The Rock two years later. They switched to Polydor Records, releasing Live at The Roundhouse in 1978. Polydor released Maka the same year. During this period they did a major British tour supporting Sham 69. Three more albums followed: Freedom Street, Reggaebility and On The Rock Part 2. After the last of these, in 1983, they didn't surface again until 1995, when Lagoon Records released People Say and Reggae Time, both compilations of earlier albums, followed by The Best of The Cimarons, released in 1999 on Culture Press. Reedy and Dunn continue to perform as The Cimarons. Discography Albums In Time (1974) Trojan On the Rock (1976) Vulcan Maka (1978) Polydor Live (1978) Polydor Freedom Street (1980) Virgin Reggaebility (1982) Hallmark On De Rock Part 2 (1983) Butt (recorded 1976) Compilations People Say (1991) Lagoon (recorded 1974-76) Reggae Time Lagoon The Best of The Cimarons (1992) Culture Press Maroon Land (2001) Rhino Reggae Best (2004) Culture Press Reggae Masters (2007) Creon Singles "Funky Fight" (1970) Big Shot "Oh Mammy Blue" (1971) Downtown "Holy Christmas" (1971) Downtown "Struggling Man" (1972) Horse (split 7" with The Prophets) "Snoopy vs. 
The Red Baron" (1973) Mooncrest (as Hotshots) UK #4 "Talking Blues" (#1 in Jamaica) "Check Out Yourself" Trojan "You Can Get It If You Really Want" (1974) Trojan "Dim the Light" (1976) Trojan "Over the Rainbow" Trojan "Harder Than the Rock" (1978) Polydor "Mother Earth" (1978) Polydor "Willin' (Rock Against Racism)"/"Truly" (1978) Polydor "Ready for Love" (1981) Charisma "With a Little Luck" (1982) IMP "Big Girls Don't Cry" (1982) Safari "How Can I Prove Myself to You" (1982) "Be My Guest Tonight" (1995) "Time Passage" Fontana The Cimarons also backed several singers on Trojan singles, often credited on the B-side with an instrumental version of the A-side.
https://en.wikipedia.org/wiki/Matt%20Welsh%20%28computer%20scientist%29
Matt Welsh (computer scientist)
Matthew David "Matt" Welsh is a computer scientist and software engineer at xnor.ai, which he joined after spending eight years at Google. He was the Gordon McKay Professor of Computer Science at Harvard University and author of several books about the Linux operating system, several Linux HOWTOs, the LinuxDoc format and articles in the Linux Journal. In November 2010, five months after being granted tenure, Welsh announced that he was leaving Harvard. He is a 1992 graduate of the North Carolina School of Science and Mathematics. Welsh received a BS from Cornell University in 1996 and MS and PhD degrees from the University of California, Berkeley in 1999 and 2002, respectively. He spent the 1996-97 school year at the University of Cambridge Computer Laboratory and at the University of Glasgow. The Social Network Welsh taught the operating systems class at Harvard in which Mark Zuckerberg was a student. Welsh was later portrayed by actor Brian Palermo in the movie The Social Network, featuring Zuckerberg and the founding of Facebook. Welsh was reportedly paid $200 for his PowerPoint slides used in the movie. Publications See footnotes below External links http://www.mdw.la
https://en.wikipedia.org/wiki/Trusteer
Trusteer
Trusteer is a Boston-based computer security division of IBM, responsible for a suite of security software. Founded by Mickey Boodaei and Rakesh K. Loonkar in Israel in 2006, Trusteer was acquired by IBM in September 2013 for $1 billion. Trusteer's products aim to block online threats from malware and phishing attacks, and to support regulatory compliance requirements. Trusteer's malware research team aims to analyze information received from the installed base of 30,000,000 user endpoints and hundreds of organizations. Trusteer has a presence in North America, South America, Europe, Africa, Japan and China. Products Trusteer's products aim to prevent incidents at the point of attack while investigating their source to mitigate future attacks. In addition, Trusteer allows organizations to receive immediate alerts, and to report whenever a new threat is launched against them or their customers. Trusteer Rapport Trusteer Rapport is security software advertised as an additional layer of security to anti-virus software. It is designed to protect confidential data, such as account credentials, from being stolen by malicious software (malware) and via phishing. To achieve this goal, the software includes anti-phishing measures to protect against misdirection and attempts to prevent malicious screen scraping; it attempts to protect users against the following forms of attacks: man-in-the-browser, man-in-the-middle, session hijacking and screen capturing. On installation, Rapport also tries to remove existing financial malware from end-user machines and to prevent future infection. The client is available for multiple platforms in the form of a browser extension. As of March 2020, the Windows version supports Google Chrome, Microsoft Edge, Mozilla Firefox, and Microsoft Internet Explorer on Windows 7 and later, while the macOS version supports Google Chrome, Mozilla Firefox, and Apple Safari on macOS 10.12 (Sierra) and later. 
Financial institutions offer the software free of charge with a view to making online banking safer for customers. Banks offering the software include Bank of America, Société Générale, Tangerine, INGDirect, HSBC, CIBC, BMO, Guaranty Trust Bank (GTBank), Ecobank, Davivienda and First Republic Bank. Some banks that had offered the software have discontinued it. For instance, NatWest and RBS withdrew use in January 2019, stating that "The security and fraud prevention technologies we now use provide you a higher and far broader level of protection." Trusteer Pinpoint Trusteer Pinpoint is a web-based service that allows financial institutions to detect and mitigate malware, phishing and account takeover attacks without installing any software on endpoint devices. It allows companies concerned about online fraud or data theft to scan their Web traffic to ensure that an outside laptop or desktop that is brought into a corporate network is not infected with malware before allowing the visitor access to their Web services. Trusteer Pinpoint combines device fingerprinting, proxy detection and malware infection detection. When a user infected with malware accesses an online banking site protected by Trusteer Pinpoint Malware Detection, it identifies the infection and malware type (e.g. "User Steve is infected with Prinimalka-Gozi"), alerts the bank and flags the user's credentials as compromised. Once notified, banks can immediately contact the end user to have them install Trusteer Rapport, which will remove the malware. Trusteer Pinpoint Account Takeover Detection also fingerprints the device and checks for the use of proxies. Trusteer Mobile Fraud Risk Prevention Mobile Risk Engine aims to protect organizations against mobile and PC-to-mobile (cross-channel) attacks. The product tries to detect and stop account takeover from mobile devices by identifying criminal access attempts. 
It also tries to identify devices that are vulnerable to compromise by malware and those that have been infected. Trusteer Mobile Risk Engine is a web-based service that includes the Trusteer Mobile SDK, Trusteer Mobile App, Trusteer Mobile Out-of-Band Authentication, and Mobile Risk API. The combination of Mobile Risk Engine and its client-side components provides device fingerprinting for mobile devices, account takeover prevention from mobile devices, detection of compromised mobile devices, and access to a global fraudster database. Trusteer Apex Trusteer Apex is an automated solution that tries to prevent exploits and malware from compromising endpoints and extracting information. Apex has three layers of security: exploit prevention, data exfiltration prevention and credentials protection. Apex protects employee credentials from phishing attacks by validating that employees are submitting their credentials only to authorized enterprise web-application login URLs. Apex also prevents corporate employees from re-using their corporate credentials to access non-corporate, public applications like PayPal, eBay, Facebook or Twitter. Apex requires users to provide different credentials for such applications, to lower the risk of credentials exposure. Trusteer Apex is targeted at the behaviors of a small group of applications (Java, Adobe's Reader and Flash, and Microsoft's Office), on the hypothesis that these are responsible for the overwhelming majority of exploits. The technology behind Trusteer Apex does not rely on threat signatures, or on so-called "whitelists" of good applications. Instead, it watches applications as they run and spots suspicious or malicious behavior, based on knowledge of "normal" application behavior that it has refined from its large user base. 
Trusteer claims Apex can block both web-based attacks that are used to implant malware by exploiting vulnerable applications, and data loss due to malware infections by spotting attempts by untrusted applications or processes to send data outside an organization or connect with Internet-based command and control (C&C) networks. Technical concerns End users have reported problems with Rapport, including slow PCs due to high CPU and RAM utilization, incompatibility with various security/antivirus products, and difficulty in removing the software. The consumer organisation Which? found that many members had problems due to running Trusteer Rapport, and advised against using it. They found that it could conflict with other security software, and slow or crash the Web browser. Which? emphasises that it is the bank's responsibility, not Rapport's, to protect customers' online banking, adding that online banking can be perfectly safe without Trusteer Rapport; its only benefit would be detecting a phishing site masquerading as the bank, "but plenty of other tools, including most modern browsers, can do this anyway". They clarify that the software is legitimate and respectable, but "don't feel the claims on Rapport's website add up". In a presentation given at 44con in September 2011, bypassing Trusteer Rapport's keylogger protection was shown to be relatively trivial. Shortly thereafter Trusteer confirmed that the flaw was corrected and said that even if a hacker were able to use the flaw to disable anti-keylogging functions in Rapport, other secondary security protection technologies would still be in play. Rapport software is incompatible with the Windows tool Driver Verifier and may cause a blue screen error and system crash. Since Driver Verifier is not intended for end users in a production environment or workstations, Trusteer Support recommends that end users do not run Driver Verifier with Trusteer Endpoint Protection installed. 
Blue Gem lawsuit In March 2011, Blue Gem, a rival company, filed a lawsuit against Trusteer in a California court. Blue Gem accused Trusteer of plagiarizing its code in order to maintain compatibility of anti-keystroke-logging software with the types of Intel chipsets first introduced in 2007. Trusteer has described the accusations as "baseless". See also trustee (disambiguation) References External links Frost and Sullivan Report Reuters article BBC article Computer security companies Companies based in Boston Computer security software companies IBM subsidiaries IBM acquisitions 2006 establishments in Israel
3220132
https://en.wikipedia.org/wiki/Hibernation%20%28computing%29
Hibernation (computing)
Hibernation (also known as suspend to disk, or Safe Sleep on Macintosh computers) in computing is powering down a computer while retaining its state. When hibernation begins, the computer saves the contents of its random access memory (RAM) to a hard disk or other non-volatile storage. When the computer is turned on, the RAM is restored and the computer is exactly as it was before entering hibernation. Hibernation was first implemented in 1992 and patented by Compaq Computer Corporation in Houston, Texas. As of 2020, Microsoft's Windows 10 employs a type of hibernation by default when shutting down. Uses After hibernating, the hardware is powered down like a regular shutdown. The system can have a total loss of power for an indefinite length of time and then resume to the original state. Hibernation is mostly used in laptops, which have limited battery power available. It can be set to happen automatically on a low battery alarm. Most desktops also support hibernation, mainly as a general energy-saving measure; it also allows a removable battery to be replaced quickly. Google and Apple mobile hardware (Android, Chromebooks, iOS) do not support hibernation. Apple hardware using macOS calls hibernation Safe Sleep. Comparison to sleep mode Many systems support a low-power sleep mode in which the processing functions of the machine are lowered, using a trickle of power to preserve the contents of RAM and support waking up. Instantaneous resumption is one of the advantages of sleep mode over hibernation. A hibernated system must start up and read data from permanent storage and then transfer it back to RAM, which takes longer and depends on the speed of the permanent storage device, often much slower than RAM. A system in sleep mode only needs to power up the CPU and display, which is almost instantaneous. On the other hand, a system in sleep mode still consumes power to keep the data in the RAM. 
Detaching power from a system in sleep mode results in data loss, while cutting the power of a system in hibernation carries no such risk; the hibernated system can resume when and if the power is restored. Both shut-down and hibernated systems may consume standby power unless they are unplugged. Hibernation avoids the burden of saving unsaved data before shutting down and then restoring all running programs and re-opening documents and browser tabs afterwards. Both hibernation and sleep preserve memory fragmentation and software decay that can make devices perform worse the longer a full power-off is avoided, which is why many experts recommend shutting down or rebooting electronic devices regularly. First implementation The first working retail hibernation was in 1992 on the Compaq LTE Lite 386, as noted in its sales material. It was made possible in part by the sleep and protected-mode opcodes in the Intel 386 CPU. It was implemented in ROM and worked independently of the operating system, with no drivers needed. The LTE would sense a low battery and prevent data loss by making use of a hidden partition. It preserved and restored the system in the midst of disk writes and operations with a math co-processor. It could also be controlled using an optional software GUI or a customized keyboard shortcut. It was tested on DOS, Windows 3.1, Banyan Vines, and Novell Netware. Compaq's hibernation is also noted in an IBM patent from 1993. Operating system support Early implementations of hibernation used the BIOS as noted above, but modern operating systems usually handle hibernation themselves. Hibernation is defined as sleeping state S4 in the ACPI specification. Microsoft Windows On Windows computers, hibernation is available only if all hardware and device drivers are ACPI and plug-and-play–compliant. This allows some desktop computers to hibernate quickly to an SSD in the event of a power failure, with power supplied by even a lightweight or aging UPS. 
Hibernation can be invoked from the Start menu or the command line. Windows 95 supports hibernation through hardware manufacturer-supplied drivers and only if compatible hardware and BIOS are present. Since Windows 95 supports only Advanced Power Management (APM), hibernation is called Suspend-to-Disk. Windows 98 and later support ACPI. However, hibernation often caused problems, since most hardware was not fully ACPI 1.0 compliant or did not have WDM drivers. There were also issues with the FAT32 file system. Windows 2000 is the first Windows to support hibernation at the operating system level (OS-controlled ACPI S4 sleep state) without special drivers from the hardware manufacturer. A hidden system file named "hiberfil.sys" in the root of the boot partition is used to store the contents of RAM when the computer hibernates. In Windows 2000, this file is as big as the total RAM installed. Windows Me, the last release in the Windows 9x family, also supports OS-controlled hibernation and requires disk space equal to that of the computer's RAM. Windows XP further improved support for hibernation. Hibernation and resumption are much faster as memory pages are compressed using an improved algorithm; compression is overlapped with disk writes, unused memory pages are freed and DMA transfers are used during I/O. The hibernation file also contains further information, including processor state. This file was documented by security researcher Matthieu Suiche during Black Hat Briefings 2008, who also provided a computer forensics framework to manage and convert this file into a readable memory dump. The compression feature was later documented by Microsoft as well. 
Although Windows XP added support for more than 4 gigabytes of memory (through Windows XP 64-bit Edition and Windows XP Professional x64 Edition), this operating system, as well as Windows Server 2003, Windows Vista and Windows Server 2008, does not support hibernation when this amount of memory is installed, because of performance issues associated with saving such a large pool of data from RAM to disk. Windows Vista introduced a hybrid sleep feature, which saves the contents of memory to hard disk but, instead of powering down, enters sleep mode. If the power is lost, the computer can resume as if hibernated. Windows 7 introduced compression to the hibernation file and set the default size to 75% of the total physical memory. Microsoft also recommends increasing the size, using the powercfg.exe tool, for some rare workloads where the memory footprint exceeds that amount. It can be set anywhere from 50% to 100%, although decreasing it is not recommended. Windows 8's resume-from-hibernation algorithm is multi-core optimized. Windows 8 also introduces a Fast Startup feature. When users select the Shut Down option, it hibernates the computer, but closes all programs and logs out the user session before hibernating. According to Microsoft, a regular hibernation includes more data in memory pages, which takes longer to be written to disk. In comparison, when the user session is closed, the hibernation data is much smaller and therefore takes less time to write to disk and resume. Windows 8 also saves the kernel image. Users have the option of performing a traditional shutdown by holding down the Shift key while clicking Shut Down. Windows 10 mirrors Windows 8's behavior, as noted by Microsoft. Hibernation is often under-used in business environments, as it is difficult to enable on a large network of computers without resorting to third-party PC power management software. This omission by Microsoft has been criticized as having led to a huge waste of energy. 
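The hibernation file sizing described above can be inspected and adjusted with the powercfg tool; a minimal sketch, to be run from an elevated Command Prompt (exact option support varies by Windows version):

```shell
:: Show which sleep states (including hibernate) the system currently supports.
powercfg /availablesleepstates

:: Set the hibernation file to 100% of installed RAM (accepted range is 50-100).
powercfg /hibernate /size 100
```

Decreasing the size below the default is possible but, as noted above, not recommended.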
Third-party power management programs offer features beyond those present in Windows. Most products offer Active Directory integration and per-user or per-machine settings with more advanced power plans, scheduled power plans, anti-insomnia features and enterprise power usage reporting. Notable vendors include 1E NightWatchman, Data Synergy PowerMAN, Faronics Power Save and Verdiem SURVEYOR. It is possible to disable hibernation and delete hiberfil.sys. macOS On Macs, a feature known as Safe Sleep saves the contents of volatile memory to the system hard disk each time the Mac enters Sleep mode. The Mac can instantaneously wake from sleep mode if power to the RAM has not been lost. However, if the power supply was interrupted, such as when removing batteries without an AC power connection, the Mac would wake from Safe Sleep instead, restoring memory contents from the hard drive. Because Safe Sleep's hibernation process occurs during regular Sleep, the Apple menu does not have a "hibernate" option. Safe Sleep capability was added in Mac models starting with the October 2005 PowerBook G4 (Double-Layer SD). Safe Sleep requires Mac OS X v10.4 or higher. Shortly after Apple started supporting Safe Sleep, Mac enthusiasts released a hack to enable this feature for much older Mac computers running Mac OS X v10.4. The classic Mac OS once also supported hibernation, but this feature was dropped by Apple. Linux In the Linux kernel, hibernation is implemented by swsusp, which is built into the 2.6 series. An alternative implementation is TuxOnIce, which is available as patches for kernel version 3.4. TuxOnIce provides advantages such as support for symmetric multiprocessing and preemption. Another alternative implementation is uswsusp. All three refer to it as "suspend-to-disk". In most Linux distributions today, hibernation is managed by systemd. 
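On a systemd-managed Linux distribution, as described above, hibernation support can be checked for and hibernation invoked from the shell; a minimal sketch (requires root privileges and a suitably sized swap area):

```shell
# The kernel lists its supported sleep states in /sys/power/state;
# "disk" indicates suspend-to-disk (hibernation) support.
grep -q disk /sys/power/state && echo "suspend-to-disk supported"

# Trigger hibernation through systemd.
systemctl hibernate
```

On Windows, disabling hibernation entirely (which also deletes hiberfil.sys) is done with `powercfg /hibernate off` from an elevated prompt.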
Hybrid sleep Sleep mode and hibernation can be combined: The contents of RAM are copied to the non-volatile storage and the computer enters sleep mode. This approach combines the benefits of sleep mode and hibernation: The machine can resume instantaneously, and its state, including open and unsaved files, survives a power outage. Hybrid sleep consumes as much power as sleep mode while hibernation powers down the computer. See also Green computing PC power management References Operating system technology Energy conservation Windows administration
527621
https://en.wikipedia.org/wiki/ChipTest
ChipTest
ChipTest was a 1985 chess-playing computer built by Feng-hsiung Hsu, Thomas Anantharaman and Murray Campbell at Carnegie Mellon University. It is the predecessor of Deep Thought, which in turn evolved into Deep Blue. ChipTest was based on a special VLSI-technology move-generator chip developed by Hsu. ChipTest was controlled by a Sun-3/160 workstation and capable of searching approximately 50,000 moves per second. Hsu and Anantharaman entered ChipTest in the 1986 North American Computer Chess Championship, and it was only partially tested when the tournament began. It lost its first two rounds, but finished with an even score. In August 1987 ChipTest was overhauled and renamed ChipTest-M, the M standing for microcode. The new version eliminated ChipTest's bugs and was ten times faster, searching 500,000 moves per second and running on a Sun-4 workstation. ChipTest-M won the North American Computer Chess Championship in 1987 with a 4–0 sweep. ChipTest was invited to play in the 1987 American Open, but the team did not enter due to an objection by the HiTech team, also from Carnegie Mellon University. HiTech and ChipTest shared some code, and HiTech was already playing in the tournament. The two teams became rivals. Designing and implementing ChipTest revealed many possibilities for improvement, so the designers started on a new machine. Deep Thought 0.01 was created in May 1988 and version 0.02 in November of the same year. This new version had two customized VLSI chess processors and was able to search 720,000 moves per second. With the "0.02" dropped from its name, Deep Thought won the World Computer Chess Championship with a perfect 5–0 score in 1989. 
See also Computer chess Deep Thought, the second in the line of chess computers developed by Feng-hsiung Hsu Deep Blue (chess computer), another chess computer developed by Feng-hsiung Hsu, being the first computer to win a chess match against the world champion References External links The making of Deep Blue, overview, IBM Research Chess computers One-of-a-kind computers Carnegie Mellon University
56873972
https://en.wikipedia.org/wiki/List%20of%20Black%20Lightning%20characters
List of Black Lightning characters
Black Lightning is an American superhero television series developed by Salim Akil, airing on The CW. It is based on the DC Comics character of the same name, created by Tony Isabella. It stars Cress Williams as the titular character alongside China Anne McClain, Nafessa Williams, Christine Adams, Marvin Jones III, Damon Gupton, James Remar, Chantal Thuy, and Jordan Calloway. Initially set on an unnamed alternative Earth separate from the other Arrowverse media, the show was later merged with Earth-1 (the setting of Arrow, The Flash, Legends of Tomorrow, and Batwoman) and Earth-38 (the setting of Supergirl) into Earth-Prime during Crisis on Infinite Earths. The series sees the retired Black Lightning (Cress Williams), who as Jefferson Pierce is a high school principal and loving father, return to the hero life after his daughters are kidnapped by the 100 Gang, which is later revealed to be led by Tobias Whale (Marvin Jones III), who apparently killed Jefferson's father years before the start of the series. Jefferson's elder daughter Anissa (an avid social activist for African-American human rights) later discovers she has inherited her father's metahuman genes, although she is initially unaware that he is Black Lightning, and begins investigating her grandfather's articles concerning a group of young metahumans who disappeared years ago. She later discovers her father is Black Lightning and joins him, using her powers as a vigilante. Jefferson's ex-wife Lynn Stewart (Christine Adams) is the main reason for Jefferson's initial retirement, as she feared him dying on the streets, but she begrudgingly realizes she cannot stop him after discovering Anissa's abilities, even as Jennifer develops abilities of her own. 
Overview Legend = Main cast (credited) = Recurring cast (4+ episodes) = Guest cast (1–3 episodes) Main characters Jefferson Pierce / Black Lightning Jefferson "Jeff" Pierce (portrayed by Cress Williams as an adult, Kaden Washington Smith as a child) is the main protagonist of the series. At the beginning of season one, Jefferson has retired from crime-fighting as Black Lightning at the urging of his ex-wife Lynn and for the safety of his two daughters, and is the benevolent school principal of Garfield High who is loved by his students and his daughters. His foster father, Peter Gambi, urges him to return to the streets and use his metahuman abilities to give hope to the inhabitants of Freeland, their city, which is under duress by the 100 Street Gang. Jefferson adamantly refuses to return and resists using his powers, even against a group of racist police officers who pull him and his daughters over on unjustified suspicions, but after his daughter Jennifer is captured by the 100 (and later his other daughter Anissa), he uses his metahuman abilities (which allow him to generate and control electricity offensively) to fight off the gangsters holding them captive as Black Lightning, donning a new suit built for him by Gambi (although he initially does this only as a one-off for his daughters). Initially, Jefferson only returns to being Black Lightning to save his daughters and promises his ex-wife Lynn that it was a one-time occurrence and that Black Lightning is gone, and the two begin to explore reuniting, with Jefferson believing he can do more as Jefferson than Black Lightning could ever do, as the community believes in his ability to get things done as a school principal and elder. As the series progresses, it is revealed that Jefferson's ability to stand up for the community is so effective that he has come to be affectionately known as "Black Jesus", as revealed by his former student LaWanda White. 
Members of Freeland's community complain to Jefferson about the looming threat of the 100 taking young girls, including LaWanda's own daughter, and having them forcibly work for them, but Jefferson continues to resist becoming Black Lightning and promises to help find another way. He speaks to Latavius (who insists on being called "Lala"), one of Jeff's former students and a high-ranking member of the 100 gang, and asks him to back off, to no avail, forcing LaWanda to confront Lala, only for him to shoot and instantly kill her. Unbeknownst to Latavius, she has recorded the event, and the recording is found by the police, who use it as evidence to imprison Latavius. LaWanda's death is the catalyst that convinces Jeff that he is not doing better as Jefferson Pierce but must fully return and act as Black Lightning, much to Lynn's chagrin. As Black Lightning once more, Jeff attacks and hands Lala over to the police, who apprehend him, although he is soon murdered by Tobias Whale, the secret true leader of the 100 gang. Jeff continues to act as Black Lightning during the night and as principal and parent during the day, dealing with the stress of Jennifer's adolescence and Anissa's incessant cry for action against the 100. Gambi helps him by watching through a camera in his glasses and communicating over a comm, and has surveillance watching most parts of the city. It is later revealed that Jefferson's father was a journalist who was killed by Tobias after he wrote an article about him. Jefferson retired shortly after a confrontation with Tobias that left the gangster believing he had killed Black Lightning, allowing Tobias to make a name for himself on the streets over the past decade. Upon discovering he is alive, Tobias attempts to have Black Lightning killed while citizens of Freeland pay tribute to his return, but the bullet instead hits Jennifer's boyfriend Khalil, severing his spinal cord and crippling the track star. 
After discovering Tobias's resurgence, Jefferson attempts to kill him to avenge his father, but is convinced otherwise by Lynn, who reminds him he is a hero and not a murderer, shortly before she is attacked by some 100 gangsters over her biological research into the drug Greenlight (a highly addictive drug which the 100 have begun selling to the youth, and which seems to temporarily give users enhanced strength while under its frenzy). The gangsters are chased off by Anissa, who is revealed to have inherited her father's metahuman genes, has powers, and has started to act as a vigilante as well. Upon arrival, Black Lightning fights a masked Anissa, believing her to be the one attacking a gagged and tied-up Lynn. After a heated confrontation, Jefferson knocks Anissa out, only to realize who she is. Lynn laments that she is also a metahuman and that she has followed in Jeff's footsteps, but the two are forced to accept her need to fight for good just like Black Lightning, and Lynn has Gambi make her a suit. Jefferson, though, is displeased with Lynn's apparent hypocrisy and briefly voices his outrage on the matter. In season three, Agent Odell gives Jefferson a special watch. When this watch is charged to a specific point, Jefferson gains a new costume through nano-technology. This enables Odell to win Black Lightning over to assisting against the Markovian invaders. After returning to Garfield High, he learns from Principal Mike Lowry that the A.S.A. had Lowry reassign Jefferson to the position of guidance counselor. Due to the A.S.A.'s oppression, Black Lightning starts to work alongside the rebels. In part three of Crisis on Infinite Earths, he is recruited by the Flash and his team in an effort to stop an anti-matter wave from annihilating the entire multiverse, after he was teleported away while his world was erased. In the final part, Jefferson is restored when his Earth is merged with Earth-1 and Earth-38 to form Earth-Prime. 
He later becomes one of the founding members of a "league of heroes" alongside Barry and several other heroes. Black Lightning returns to Freeland and continues assisting the resistance. When Lynn is taken to Markovia, Jefferson parleys with Major Sara Grey over a plan for a rescue mission. Jefferson agrees to help the A.S.A. rescue Lynn under the condition that the metahumans who assist them are off-limits afterwards. During the raid on a Markovian facility, Jefferson comes up with the idea of having Thunder strike Erica Moran until she has absorbed enough kinetic energy to break down the reinforced door. Despite being hit by a laser gun wielded by Gravedigger, Black Lightning catches up to the group and shocks Gravedigger into submission. He and Lynn then retreat to Freeland. When the Markovians attack Freeland, Black Lightning assists in defeating the ones attacking the Perdi. Lynn later informs Jefferson that his DNA and Gravedigger's DNA are a match, meaning that the two are related. When Gravedigger defeats Lightning, Black Lightning begins to fight him. After Black Lightning tries to reason with him by mentioning that they are related, Gravedigger uses a microwave attack to inflict burns on Black Lightning. When some Markovian soldiers make off with Lightning, Black Lightning makes a tactical retreat. He receives help from Brandon to rescue Lightning. After Henderson dies in his arms, having shot the soldiers who were planning to attack him, Black Lightning fights Gravedigger in the Pit. With help from Lynn, who uses an anti-boost serum on Gravedigger, Black Lightning defeats him and gets everybody out of the Pit before it self-destructs. After Odell is wounded by Khalil, Black Lightning, accompanied by Lynn, Thunder, and Lightning, presents the briefcase to a congressional committee, exposing the A.S.A.'s experiments as well as Markovia's own. 
Williams also portrays Jefferson of Earth-1, who became the Secretary of Education. He secretly helped Reverend Holt with the metahuman underground railroad before he was killed by the A.S.A. This version ceased to exist when Earth-Prime was formed. Williams also portrays Jefferson of Earth-2, who was killed by Jinn alongside the rest of his family when they tried to intervene after she ended the Markovian threat. This version ceased to exist when a new Earth-2 was formed. Jennifer Pierce / Lightning Jennifer "Jen" Pierce (portrayed by China Anne McClain as a teenager from season 1 to the first four episodes of season 4 and in the series finale, Laura Kariuki for the remainder of season 4, and Fallyn Brown as a child) is Jefferson Pierce's younger daughter and a metahuman, like her father and sister. She is based on the DC Comics character of the same name. Jennifer is portrayed as a carefree, rebellious teenager who is prone to challenging the "good girl" appearance imposed on her by her respected father, who is the principal of her high school, Garfield High, as well as by her sister Anissa, who is a teacher at the same school. Initially, Jennifer is very careless and at times even reckless with her own life, such as in the series premiere, when she gets drunk in a well-known 100 club and is almost forced to work for them. Jennifer is subsequently rescued by her father, who unbeknownst to her is the crime-fighting vigilante known as Black Lightning. After being kidnapped by the 100 with her sister Anissa, she is rescued by Black Lightning and begins drinking and smoking to mask her fear. Jefferson has extensively taught Jennifer and Anissa martial arts, and she is a very skilled hand-to-hand combatant, having once even efficiently broken a girl's wrist. She and her sister Anissa share a very close relationship, although Jennifer often does not support Anissa's more confrontational approach to doing the right thing. 
Jennifer begins to show maturity after her long-time friend Khalil (Garfield High's track star) asks her to be his girlfriend and becomes a stable influence on her life. Jennifer deeply cares for Khalil, and after they agree to lose their virginity together, she is responsible enough to inform her parents she will be having sex, although Khalil is shot and crippled before this can happen while marching against the 100's gang violence with Jennifer, ending his ambitions of becoming an Olympic star and of transcending the limitations usually imposed on teenagers of color (examples he gives are getting a girl pregnant, going to jail or becoming a junkie). This puts a strain on their relationship, and Khalil lashes out at Jennifer, on one occasion even supporting a girl cyberbullying her. After she confronts him on the matter, Khalil reveals he blames her for his new condition due to her encouraging him to join her on the march that got him shot, seemingly ending their relationship. Jennifer is at this point unaware that she has also inherited her father's metahuman genes and that, like her sister, she may one day gain her own abilities, a fact that is being kept from her by her sister, mother and father. After she quits running track, Jennifer becomes an unpaid intern for her mother. Later, an encounter with her friend Kiesha startles Jennifer into awakening her own latent metahuman abilities, which become known to her after Kiesha almost falls off a ladder. Her abilities manifest as a very hot red lightning that comes from her hands and burns her smartphone. She keeps this a secret from Kiesha and later consciously attempts to recreate the earlier conditions and confirms she has abilities, further destroying her phone. Terrified, Jennifer immediately tells her sister and shows her the deconstructed smartphone. The next morning, Anissa confesses the truth about her and their father being metahumans and reveals that she is the crime-fighting Thunder. 
As Jennifer initially does not believe her, Anissa uses her super strength to lift Jennifer's bed in a demonstration. Terrified, Jennifer then confronts their father, who begrudgingly admits Anissa is telling the truth. Jennifer reacts angrily, feeling they have all betrayed her trust by lying to her for her whole life. Later she laments to her mother that she will no longer have a happy and normal life and get to experience such things as prom, marriage and having children, due to her new state as a "freak". She also confronts the fact that Lynn herself (who admits she is human and not Vixen or Supergirl, as Jennifer suggested) ended her marriage to Jefferson because he was the metahuman Black Lightning. Jennifer and Anissa later reconcile, and Jennifer shows her older sister that her online presence and following far exceed even Black Lightning's. Jefferson later talks to Jennifer and the two watch a movie together after Jennifer reveals they have not done so in some time. Jefferson also later assures her that the only life she will live is the one she wishes to live, and that she does not have to become a superheroine like her sister or himself. This helps Jennifer, although she still admits difficulty in accepting or even exploring her new abilities. When Anissa tries to push her to find out what her abilities are and take up the mantle of a hero, Jennifer's abilities manifest again and react violently due to the hurtful things Anissa said, having denounced Jennifer as a "disappointment". Her powers char a nearby couch, and Anissa theorizes that her seemingly electric powers resemble their father's. Anissa has her sent to their mother's lab for an MRI scan to ensure Jennifer is okay. 
This leads to the discovery that Jennifer's cells constantly produce "pure energy"; in contrast to Jefferson, whose electric powers are absorbed and conducted from an exterior source like a battery, Jennifer's body acts as a powerful interior generator for a stronger kind of "lightning". After the A.S.A. discovers Black Lightning's secret identity, Jennifer and her family are forced to go into hiding. In season two, Jennifer starts to gain control of her abilities. She manages to perfect them when she finally confronts Tobias Whale over what he did to Khalil and takes up the name Lightning. Though Lightning chooses not to kill Tobias, she helps her father defeat him. In season three, Jennifer works to gain control of her powers while her parents are in A.S.A. custody. Odell uses her to destroy a Markovian facility that had just been cleared out by Black Lightning and the A.S.A. commandos he rescued, after claiming to her that the Markovians killed Nichelle. She soon starts doing secret missions for Odell and is advised to pick a side in the three-way struggle between Freeland, the A.S.A., and the Markovians. After being exposed to the anti-matter in the sky in the lead-up to Crisis on Infinite Earths, Jennifer meets two of her doppelgangers in a void and learns the flaws of having too little power and too much power. As Gambi works to stabilize Jennifer, her body turns into light as the anti-matter wave erases her Earth. Following the Crisis, Jennifer pulls herself together and declares that she is done with Odell. When the A.S.A. arrive to rescue Odell, Jennifer is shocked to see Painkiller alive and knocks him out. Jennifer helps to form a firewall to trap the Painkiller program. When Khalil regains control of his body and learns what the A.S.A. did to it, he tells Jennifer that they can't be together. Jennifer later assists in the plans to rescue her mother from the Markovians.
She finds her mother, who has turned the tide on Yuri Mosin, and helps evacuate her. Jennifer later helps Brandon get the information about his mother from Dr. Jace. When Gravedigger arrives in Freeland, Lightning takes out the Markovian soldiers accompanying him before Gravedigger throws her into the perimeter hard enough to bring it down. Black Lightning rescues Lightning and resuscitates her as Gravedigger catches up to them. After being dragged off by Markovian soldiers, Lightning is kept as their prisoner as she tries to reason with Gravedigger. She is rescued by Black Lightning and Brandon. In the Pit, Lightning and Brandon perform a combo attack on Gravedigger, who knocks them down. Following the conflict in Freeland, Lightning accompanies Black Lightning, Lynn, and Thunder to a Congressional committee hearing where they present the briefcase that exposes the A.S.A.'s experiments as well as Markovia's own. In season four, Lightning assists Blackbird in fighting the 100 and the Kobra Cartel during their turf war. During a trip to the ionosphere, Lightning explodes, causing Black Lightning to harness the energy particles so that Gambi's machine can put her back together. The process works, but Jennifer's appearance drastically changes. Because of this, Jennifer takes up the alias of J.J. Stewart, a niece of Jefferson's on Lynn's side of the family. Lynn later finds out that Jennifer's new form is starting to become unstable. Jennifer later re-forms in her original body and confronts J.J.; it is revealed that J.J. is actually an entity from the ionosphere who, after seeing Jennifer recharge there so many times and becoming jealous of what humanity had, stole her DNA and identity and left her for dead. After a massive fight, Jennifer defeats the ionospheric entity by absorbing her, and later defeats the newly-empowered and vengeful Chief Ana Lopez. McClain also portrays Gen of Earth-1, who sided against the A.S.A.
and became their prisoner after removing all metahuman abilities in Freeland via the water supply. This version ceased to exist when Earth-Prime was formed. McClain also portrays Jinn of Earth-2, who sided with the A.S.A. and ended the Markovian threat before becoming a power-obsessed killer willing to kill her own family. This version ceased to exist when a new Earth-2 was formed.

Anissa Pierce / Thunder / Blackbird

Anissa Pierce (portrayed by Nafessa Williams) is the older daughter of Jefferson Pierce and Lynn Stewart and a metahuman like her father and younger sister Jennifer. She is a medical student who works as a part-time teacher at Garfield High. Anissa is an outspoken young woman described as "Harriet Tubman" by her sister Jennifer due to her avid support of the Black Lives Matter movement and Black rights in general. At the start of the series, she is bailed out of jail by her sister and father after joining a protest against the 100 that turns violent. After provoking and publicly attacking Will (a member of the 100 and Lala's nephew) in defense of Jennifer, whom he had gotten into trouble with the gang, Anissa and her sister are kidnapped by Will and a small group of gangsters during the class she is teaching. Fortunately, Black Lightning resurfaces and rescues them. Later that night, Anissa undergoes a panic attack that awakens her latent metahuman abilities, which (unbeknownst to her) she inherited from her father, breaking her bathroom sink in half. Anissa does not immediately realize that she is a metahuman and briefly goes on with her life, believing it to have been an "accident". Anissa, who is openly lesbian, breaks up with her girlfriend Chenoa after losing interest in her and begins to investigate her new abilities after her powers awaken again during a store robbery, which she is able to stop using them.
She determines that she has enhanced strength and density when taking deep breaths, which she can activate and deactivate at will by controlling her breathing. This makes her virtually invincible so long as she can continue to control her inhalation. In addition, she has an accelerated healing factor that mends her injuries far beyond the capabilities of an ordinary human. Anissa begins to look into her family history while making a friend and possible new love interest in bartender Grace Choi and doing vigilante work. Anissa learns that her grandfather was a journalist who investigated a group of young metahumans who disappeared years ago. Anissa eventually finds his research and decides to share it with her mother, Lynn Stewart, a neuroscientist, although she does not immediately tell her why she is researching their history. Lynn is attacked by members of the 100 shortly after, who tie her up and tape her mouth shut in her office, forcing Anissa to use her powers to help her mother. While she successfully scares them away, Black Lightning shows up, wrongly believing the masked Anissa to be the culprit behind the still tied-up Lynn's kidnapping. The two metahumans confront one another, and after Black Lightning inevitably wins and injures Anissa, Lynn reveals the truth and laments that both of her children are metahumans. In season three, Anissa takes on the alias of Blackbird while liberating young metahumans from the A.S.A. buses as Gambi secretly assists her. She later reunites with Grace and accepts her metahuman abilities. Anissa is erased during the Crisis on Infinite Earths when a wave of anti-matter destroys the entire universe. She is restored when her Earth is merged with Earth-1 and Earth-38 to form Earth-Prime. Blackbird continues helping the resistance as Black Lightning allows her to make the calls. When Lynn is taken to Markovia, Anissa plans to take part in the rescue mission, where she is persuaded by Grace to let her assist in it.
During the raid on the Markovian facility, Thunder has to strike Erica Moran until she has absorbed enough kinetic energy to knock down a strong door. When the Markovians attack South Freeland, Thunder and Grace assist Black Lightning in fighting them. Anissa and Grace later invite everybody to their engagement party to get that out of the way before the Markovians begin their invasion of Freeland. In the Pit, Thunder and Grace confront Gravedigger, who mind-controls Grace into attacking Thunder, forcing Thunder to use her abilities to knock her out. Following the attack on Freeland, Lynn tells Thunder that Grace is in a coma, that she is unsure when Grace will come out of it, and that she plans to make sure Grace receives full care. Thunder accompanies Black Lightning, Lynn, and Lightning to a congressional committee hearing where they present the briefcase that exposes the A.S.A.'s experiments as well as Markovia's own. Describing her character as a positive role model, Williams said, "I'm just really grateful to tell the story for young lesbians – and black lesbians in particular ... My hope is that when you watch Anissa, a young lesbian is inspired to walk boldly as who she is and to love herself and to love herself exactly how she looks." Williams also portrays her Earth-1 counterpart, who witnessed her father being executed by the A.S.A. for his involvement with the metahuman underground railroad. This version ceased to exist when Earth-Prime was formed. Williams also portrays her Earth-2 counterpart, who is killed by Jinn alongside her family when they tried to intervene. This version ceased to exist when a new Earth-2 was formed.

Lynn Stewart

Lynn Stewart (portrayed by Christine Adams) is a highly learned doctor and neurologist as well as the ex-wife of Jefferson and the mother of Jennifer and Anissa Pierce.
Several years before the onset of the series, Lynn divorced Jefferson and convinced him to retire as Black Lightning to protect their two daughters, as his vigilante work often brought him home battered and wounded, on occasion even bleeding in front of Jennifer. Years later, Lynn and Jefferson begin to get close again, giving Jeff hope of reconciliation before their daughters are kidnapped and he is forced to use his powers for the first time in years to save them, although he promises Lynn that it will be a one-time deal. After the death of his former student LaWanda White, Jeff decides to return to being Black Lightning, much to Lynn's frustration, as this sets back their relationship, which had finally started to heal. Lynn attempts to convince Peter Gambi (Jeff's enabler and surrogate father) to tell him to stop, to no avail. Lynn begins to realize Anissa is acting strangely and unsuccessfully inquires into this, although she assures her that she will support her no matter what. Despite not wanting to get involved with Black Lightning, Lynn is instrumental in convincing Jefferson not to kill Tobias Whale, leader of the 100 and the man who killed his father. Lynn later begins to investigate a new drug called "Green Light" and its effects on the brain, which are similar to what appears to happen to Jefferson when he uses his powers. She is also given a box by Anissa containing the research Jefferson's father conducted on the metagene and the disappearance of a group of teenagers in the years before his death. Anissa dismisses her curiosity as her simply doing an assignment on their family history. Shortly thereafter, Lynn is attacked by 100 gangsters. Fortunately, Anissa shows up in disguise and uses her abilities to disarm and scare off the attackers. Before she can help a shocked Lynn, Black Lightning arrives and wrongfully believes the masked Anissa to be Lynn's attacker.
The two metahumans get into a heated confrontation with Jefferson emerging as the victor, although he soon regrets this once he realizes who she is. Lynn and Jefferson lament that Anissa has gained powers and followed in Black Lightning's dangerous footsteps. Lynn analyzes Anissa's wounds and realizes she has very fast healing abilities like her father. In season three, Lynn starts working for the A.S.A., seeking a way to stabilize the metahuman gene. Using a filtered version of Green Light, Lynn is able to come up with vaccines that also involve DNA samples taken from Tobias Whale. Whale starts to play with her mind, wanting the information on the metahumans the A.S.A. has. She gives it to him, but leaves out the information on Wendy Hernandez and Erica Moran. She is erased during the Crisis on Infinite Earths when a wave of antimatter destroys the entire universe. Lynn is restored when her Earth is merged with Earth-1 and Earth-38 to form Earth-Prime. Both Jennifer and Jefferson find that she has become addicted to the modified Green Light. After receiving help from Sergeant Gardner Grayle in sneaking Tobias Whale out of the Pit, both of them are tasered by a Markovian operative who contacts Colonel Mosin to send Instant over. Lynn and Tobias are taken to Markovia, where she is reunited with Jace, leading to a catfight that Yuri Mosin breaks up. Mosin reveals to Lynn that he knows what she has done to stabilize the metahuman gene, thanks to his double agent Nurse Michael Allen, and leaves her a dose of Green Light to help her out. As Jace works to gain Lynn's trust, Lynn tells Jace that she needs some medicine to deal with the Green Light withdrawal. She then meets Gravedigger when he is given orders to take over the operation from Mosin. Lynn tricks Gravedigger into giving her his DNA sample, which enables her to make a 20-minute Green Light serum with a copy of his mind-control ability.
While it works on Mosin, it fails to work on Gravedigger when he catches up to the group. As Lynn starts to surrender to Gravedigger, Black Lightning arrives and shocks Gravedigger into submission. Black Lightning and Lynn then get on the helicopter and retreat back to Freeland. Lynn continues to go through withdrawal symptoms and has Jennifer fry the Green Light she hid in the bathroom. She is present when Gambi uses the briefcase's information to explain the history of Gravedigger and the United States' collaboration with Markovia on a metahuman project. Lynn states that she left the meta-boost formula in Markovia. After working in the lab, Lynn informs Jefferson that his DNA and Gravedigger's DNA are a match, meaning that they are related. As she works on the anti-boost serum to use on Gravedigger, Lynn is confronted by Commander Carson Williams, who is on a mission to eliminate classified information and mentions that he has already done away with Dr. Jace. Briefly copying Erica's powers, Lynn kills Commander Williams in self-defense. With the anti-boost serum made, she uses it on Gravedigger, enabling Black Lightning to defeat him and then evacuate everyone from the Pit, which has been set to self-destruct. After examining Grace, Lynn tells Anissa that Grace is in a coma, that she is unsure when she will come out of it, and advises that she receive full care. Lynn later accompanies Black Lightning, Thunder, and Lightning to a congressional committee hearing where they present the briefcase that exposes the A.S.A.'s experiments as well as Markovia's own. Adams also portrays her Earth-1 counterpart, who visits Gen in the Pit after Jefferson was killed by the A.S.A. This version ceased to exist when Earth-Prime was formed. Adams also portrays her Earth-2 counterpart, who is killed by Jinn alongside the rest of her family when they tried to intervene. This version ceased to exist when a new Earth-2 was formed.
Tobias Whale

Tobias Whale (portrayed by Marvin "Krondon" Jones III) is a crime lord who leads the 100 gang and the main antagonist of the series. Tobias is an African-American albino who is portrayed as cruel and seemingly loathes his Black heritage, casually but frequently expressing that he prefers the company of "White people" and distastefully referring to Black individuals as "negroes", usually to his right-hand man and woman, Joey Toledo and Syonide, who are always with him. This is contradicted, however, by his apparent great love for his sister, Tori, who is a Black African-American without albinism. Tobias is first shown scolding Latavius, one of his main subordinates, over the apparent return of Black Lightning (whom he believed deceased after his disappearance years earlier), asking if he believes in the "Resurrection". Tobias built his "street cred" around his apparent murder of Black Lightning and was gifted the 100 gang for it by a group of unseen, powerful underworld gangsters. They assign a woman named Lady Eve to watch over their investment in him, and he reports to her. After Latavius is incarcerated for the death of LaWanda White (a civilian of Freeland who wanted her daughter back from the 100), Tobias, Toledo, and Syonide infiltrate the jail through their connections at the police station and murder Lala, as he is a liability. Tobias next attempts to assassinate Black Lightning after he appears before a mob of marchers against the 100. The bullet he fires misses and instead hits Jefferson's daughter's boyfriend, Khalil, whose spine is severed; no one sees Tobias except Peter Gambi, who curiously erases the footage and keeps it from Jefferson. Tobias meets with Lady Eve, who expresses displeasure at the uproar the 100 is receiving for paralyzing a potential "Olympic star", a fact Tobias mostly shrugs off.
It is revealed that Tobias killed Jefferson Pierce's father years before the onset of the series, when Jeff was a kid, but has maintained the same youthful appearance, apparently due to an unnamed serum that sustains his youth and seemingly inhuman strength. Upon discovering that Tobias is in Freeland, Jefferson attempts to assassinate Tobias in return, but is convinced otherwise and Tobias escapes. Tobias next begins to release a drug called Green Light that seems to temporarily enhance the user's strength while putting them in a blind, rage-filled frenzy, but is highly unstable. Tobias's sister arrives in Freeland and suggests they turn the people of the city against Black Lightning. Together they decide to finally deal with their father, who was abusive towards them as children. Tobias breaks his back, mercilessly "letting him die" alone with no help. Later, Tobias and Syonide convince Khalil that they have a way to make him walk again. Sometime after this, Tobias and his sister are attacked by Black Lightning while attending a club. After Lady Eve and Peter Gambi murder Toledo, Tobias and Tori plot to kill Evelyn in revenge. Tobias's people are badly overpowered and he is injured, while a stray bullet hits and kills Tori. His henchmen guide him out of the building. Using his contact with A.S.A. operative Martin Proctor, Tobias Whale turns Khalil Payne into Painkiller. In the season finale, Tobias uses Lala as a bomb mule as he, Syonide, and Painkiller raid an A.S.A. building. Proctor escapes, but they obtain the briefcase he had. In season two, Tobias Whale visits Tori's crypt to pay his respects, where he is arrested by Deputy Chief Henderson and his fellow police officers for the murder of Alvin Pierce. Tobias is released from police custody when District Attorney Montez does not think that Jefferson's eyewitness testimony will stick.
During "The Book of Rebellion" arc, Tobias Whale sends Painkiller to kill Reverend Jeremiah Holt, who would not relocate his clinic as suggested by Councilman Kwame Parker. When Tobias gets tired of waiting and Painkiller fails to get Reverend Holt to leave town, he places a bounty on Painkiller, which is fulfilled by Giselle Cutter and ends with Tobias ripping out Painkiller's spinal implant and leaving him for dead. During these episodes, Tobias also gets a young prodigy named Todd Green on his side. The episode "The Book of Secrets: Chapter One: Prodigal Son" reveals that he is an old friend of Helga Jace, who made his anti-aging serum. After Todd secretly orchestrates Jace's jailbreak, she takes Tobias to where the remaining pod children are held, as he makes plans for them. In the season 2 finale, Tobias unleashes the Masters of Disaster on Freeland while also having the 100 cause riots. When Lightning confronts Tobias Whale after Cutter leaves him, she nearly kills him for what he did to Painkiller until Black Lightning shows up and talks her out of it. Lightning helps her father defeat Tobias, who is then remanded to an off-the-books metahuman prison called the "Pit". In season three, Tobias looks older without his serum and his hair has grown in. Issa Williams uses his truth-extracting abilities on Tobias, who reveals that Proctor worked for Odell and that Odell works for the President of the United States. Odell later tries to get the location of the briefcase out of him to no avail, even when he uses ultraviolet lights on him. Tobias later has a hallucination of Black Lightning taunting him. While working on stabilizing the meta-gene, Lynn uses the same serum on Tobias, restoring his body. Tobias voices his knowledge that Black Lightning and Jefferson Pierce are one person and vows to deal with him when he gets out. Lynn then extracts a bone marrow sample from Tobias, joking that she forgot the anesthetic.
When Tobias is snuck out of the Pit and knocked out by Lynn, a Markovian operative tasers Lynn and Gardner Grayle while arranging for Instant to take Lynn and Tobias to Markovia. Once there, Yuri Mosin's double agent Nurse Michael Allen continues to extract bone marrow from Tobias to further the Markovians' plans to stabilize the metahumans on their side. When Tobias is rescued by Black Lightning, he tells Black Lightning that he knows his identity, causing Black Lightning to knock him out. Tobias gets away, as Mosin reports to Gravedigger that his men cannot find him. Tobias makes his way to Nurse Michael Allen's house, where he kills Allen's father and spares his mother. He wants Nurse Allen to take him to where the metahumans in Markovia's custody are held. Tobias later watches the news about Markovia's prime minister refusing restitution for Markovia's attack on Freeland as he plans his own return to Freeland. In season four, set one year after the Markovian invasion, Tobias has become a philanthropist while gaining Val Seong and Red as his allies and the use of Agent Odell's A.I. Katie. While still planning to eliminate the Pierce family, he also has Mayor Billy Black killed for not going ahead with his suggestion to level Garfield High. During the promotion of the new DEGs, Tobias announces that he is starting a campaign to become the new Mayor of Freeland. After getting a device from Marshall Bates and combining it with Val's DNA sample, Tobias starts to negate all metahuman abilities within the device's range. In addition, he also works to get the Shadow Board on his side. Black Lightning has his final showdown with Tobias Whale, which ends with Tobias accidentally falling out the window and getting impaled on a spike despite Black Lightning's attempt to save him.
Bill Henderson

Inspector William "Bill" Henderson (portrayed by Damon Gupton) is a police detective and Jefferson's best friend, oblivious to his alter ego as Black Lightning, whom he hunts in the belief that he is an outlaw vigilante. After exposing the dirty Deputy Chief Zeke Cayman, he is appointed to replace him. In season two, Henderson eventually learns that Jefferson is Black Lightning, fracturing their relationship. He later informs Jefferson that Detective Summers was killed and asks him to use Black Lightning to track down one of the people known for setting fire to cars. Henderson also learns that Jefferson was replaced as principal in light of the 100's attack on Garfield High. Once that is done, Black Lightning informs Henderson that the arsonist was hired by Tobias Whale. This leads Henderson and his police officers to track down Tobias Whale at his sister's crypt, where Henderson leads his arrest. Henderson then informs Jefferson that they finally caught Tobias Whale, wanting him to be the first to know about it. While he does place Helga Jace in his custody after Lynn Stewart subdues her, he later finds Jace's cell empty, the bounty hunter Instant having taken her to the Markovians and killed some police officers in the process. After Tobias Whale is defeated, Henderson is seen driving through the streets as Black Lightning and Thunder put an end to the riots started by the 100. In season three, Henderson is now the chief of police and is intimidated by Commander Carson Williams into giving a press conference about the A.S.A.'s enforced curfew, or else Williams will make things miserable for his family. Henderson has no choice but to agree to the terms. Unbeknownst to the A.S.A., Henderson starts a secret resistance movement against the A.S.A. with the help of Blackbird. During the Markovian invasion of Freeland, Henderson is assisted in protecting the suspected metahumans by Lala and the remnants of the 100.
Stumbling on some Markovian soldiers planning to ambush Black Lightning, Henderson shoots them but ends up in a mutual kill with the final Markovian soldier. Before dying in Black Lightning's arms, Henderson advises him not to disappoint Freeland.

Peter Gambi

Peter Gambi (portrayed by James Remar in normal form, Justin Livingston in cloaked form) is a tailor who is the main benefactor and surrogate father of Jefferson Pierce. He took him in after Alvin Pierce's death and is also shown to have some connection with Lady Eve. His real name is Peter Esposito, and he is a former member of the A.S.A. who worked under Martin Proctor until he turned against the organization and gave information to Alvin Pierce, resulting in Alvin's murder. In season two, Gambi encounters Kara Fowdy, who informs him that Proctor's briefcase is in Tobias Whale's possession. Gambi assists Anissa in obtaining money to save a clinic from being closed down and finds a badly wounded Kara Fowdy. He tends to her injuries as he tries to get the information on where Tobias keeps the briefcase. When the clinic is nearly bombed, Peter faces the female culprit until she gets away by hijacking a man's motorcycle. Before Kara Fowdy dies, she gives her cell phone to Gambi with the information that he is looking for on it. Gambi is later ambushed during a drive and is presumed dead after being run off the road and his car explodes. He had faked his death to find out who called the hit, and later reunites with the Pierce family. Gambi later makes a special suit for Jennifer when she becomes Lightning. In season three, Gambi helps Anissa with her Blackbird operations, even creating the A.I. Shonda for her apartment. To get close to the A.S.A., Gambi uses holographic cloaking technology to assume the appearance of a random soldier. He is erased during the Crisis on Infinite Earths when a wave of anti-matter destroys the entire universe.
Gambi is restored when his Earth is merged with Earth-1 and Earth-38 to form Earth-Prime. When Black Lightning and Lightning bring up their experiences with alternate realities, Gambi has no memory of them, but finds traces of anti-matter energy, explaining that the two are the only ones who remember. While planning to check the failsafe in the suit that the A.S.A. provided Jennifer, Gambi finds that Baron has been led into his lair and learns of his connections to Black Lightning, Thunder, and Lightning. While treating Baron, Gambi later finds him on the floor; Baron's abilities had enabled him to pull information from Gambi's computer on who tried to have Gambi killed, with the picture he found showing Lady Eve. When Black Lightning and Thunder briefly abduct Odell, it is revealed that Gambi was trained by Odell during Gambi's time with the A.S.A. When Lynn is taken to Markovia by Instant, Gambi is among those who plan to rescue her, as Gambi once did some business there. Gambi and T.C. coordinate the rescue of Lynn once they arrive. Gambi later speaks to Dr. Jace, stating that he should have disobeyed orders and killed her when they first met. After Gambi rips out an eye of one of the twins who work for her, Lady Eve meets with Gambi at a tailor shop. She gives him the briefcase that was previously in Tobias's possession, which Lala gave her. After T.C. unlocks the briefcase, Gambi reveals to Jefferson and Lynn the information about the United States' collaboration on the metahuman project that turned Tyson Sykes into Gravedigger. Gambi later plans to officiate at the engagement of Anissa and Grace. During the Markovian invasion, Gambi and T.C. coordinate Black Lightning in rescuing Lightning and warn them that Odell set the Pit to self-destruct. Afterwards, Gambi and T.C. get into a shootout with Major Sara Grey and the A.S.A. soldiers with her, which ends with Gambi and T.C. emerging victorious.
In season four, Gambi works with his old comrade Lauren Caruso on the new D.E.G.s while planning to improve the superhero costumes to withstand them. He even works on a machine to restore Jennifer after she explodes. When Lynn uses a special meta-booster on Gambi to find out Val Seong's metahuman abilities, the two of them, alongside Jefferson and T.C., find that Val has power-negating abilities. In addition, Gambi and Jefferson discover that Red is the one with magnetic abilities when they view footage of him threatening Marshall Bates to produce the device that Tobias Whale needs within 48 hours, or else. Following the death of Tobias Whale, Gambi passes his torch to T.C. while planning to do some tinkering on the side.

Khalil Payne / Painkiller

Khalil Payne (portrayed by Jordan Calloway) is a track star at Garfield High and Jennifer's ex-boyfriend. After a bullet from Syonide's gun paralyzes him from the waist down, he becomes involved with Tobias Whale and undergoes an experimental treatment done by the A.S.A. involving a spinal implant that gives him enhanced abilities, becoming a cyborg in the process. It is Tobias Whale who names Khalil's new persona Painkiller. In light of Syonide's death in season two, Tobias has Painkiller take over her duties as well, one of which results in the death of 100 member Rheon while collecting protection money. After Tobias Whale rips out his spinal implant for being unable to kill Reverend Holt, Painkiller dies from his injuries in the hospital as Jennifer and Nichelle mourn his death. However, Khalil somehow appears alive in one of the pods in Agent Odell's possession. In season three, Odell has a brain chip placed in Painkiller's head to make him obedient and uses him to kill his own mother, via a poisonous touch that the A.S.A. placed in him, after she was heard on a resistance transmission stating what Tobias Whale made the A.S.A. do to her son.
After Lightning destroys the Markovian facility that had just been cleared out by Black Lightning and the A.S.A. commandos, Odell sends Painkiller to a house where some Markovians are hiding out to slay them. Painkiller is later assigned Black Lightning as a target. During the fight at Franklin Terrace, Painkiller engages Thunder and is knocked out of the window by her. Both Painkiller and Carson Williams are evacuated from the area. During the A.S.A.'s mission to rescue Odell, Painkiller is knocked out by Lightning, who is shocked to see him alive. T.C. later translates his technology for Jennifer, stating that Painkiller still loves her. With help from T.C. and Gambi, Jennifer creates a firewall to trap the Painkiller program, enabling Khalil to regain control of his body. Due to what the A.S.A. did to his body, Khalil tells Jennifer that they can't be together. Jefferson arranges for Major Grey to remove Khalil from the A.S.A.'s system as part of the conditions for a collaboration to rescue Lynn. While reluctant to help rescue Lynn due to what the A.S.A. did to him, Khalil decides to help out. During the raid on the Markovian facility, Khalil faces off against Gravedigger and is overpowered, but manages to get out of the facility. After Painkiller briefly breaks free from the firewall due to a glitch, Khalil fights him back and elects to keep his distance from Jennifer. When some Markovian soldiers attack, Painkiller breaks free and kills them. He then traps Khalil behind the same firewall before resuming his mission to kill the Pierce family. When Painkiller tries to snipe Lightning, Khalil breaks free and ultimately defeats the Painkiller program. He then confronts Odell in the backseat of his car. When Black Lightning advises Khalil not to kill Odell, he shoots Odell near his spleen and leaves him for Black Lightning as he walks off. In season four, Khalil is now operating in Akashic Valley and helps to rescue Grace from the minions of Maya Odell.
During this time, he did manage to make an agreement with Painkiller for him to take over if things get difficult, while also developing the ability to alter his glands so that he can deliver both a poisonous touch and an antidotal touch. Maya's actions earned her the wrath of Painkiller, who vows to find her. Upon supporting T.C.'s claim that Tobias Whale is behind Jefferson Pierce being accused of embezzlement, Khalil agrees to help him. He pays a visit to Jesse Gentilucci, who gives him the location of the hidden ledger that would help him deal with Tobias Whale. After apprehending Looker, Khalil is intercepted by Ishmael, whom Painkiller kills by getting his poison onto the hilt of Ishmael's sword. Khalil then hands Looker over to Detective Shakur and Kevin Mason while providing the rest of the anti-poison for Looker. Following the death of Tobias Whale, T.C. helps Khalil and Painkiller get rid of the kill code at the cost of Khalil's memories of the Pierce family.

Grace Choi / Wylde
Grace Choi / Shay Li Wylde (portrayed by Chantal Thuy, Stella Smith in teenager form, Joseph Steven Yang in old man form) is a bartender who becomes Anissa's girlfriend. Later on, after a night with Anissa, Grace starts developing spots on her body, which she counters with some pills. When Anissa finds one of Grace's pills and Grace is nowhere to be found, Gambi learns that "Grace" was an alias she took after being rescued from a prostitution ring by ICE. When she reunites with Anissa, Grace reveals her shapeshifting abilities, through which she can assume the form of a teenager, an old man, and a leopard. Anissa accepts Grace's metahuman status as they rekindle their relationship. Gambi even helps get Grace's powers under control. After killing an A.S.A. soldier in her leopard form, Grace learns that Jefferson is Black Lightning because her leopard form retains people's scents. After sparring with Anissa, Grace persuades her to let her help in rescuing Lynn from the Markovians.
When the Markovians attack South Freeland, Grace helps Black Lightning and Thunder fight them. She and Anissa then hold their engagement party to get it out of the way before the Markovians begin their invasion. During the Markovian invasion of Freeland, Grace is mind-controlled by Gravedigger into fighting Thunder, forcing Thunder to use her powers to knock Grace out. Lynn later states that Grace is in a coma and that she is unsure when she will wake up, though Grace will need to be taken somewhere to receive full care. The character was promoted to series regular in November 2020. In season four, Grace is still in a coma. She later comes out of it and officially marries Anissa.

Recurring characters
This is a list of recurring actors and the significant characters they portrayed in multiple episodes, sometimes across multiple seasons. The characters are listed by the season in which they first appeared.

Overview

Introduced in season one

Kiesha Henderson
Kiesha Henderson (portrayed by Kyanna Simone Simpson) is Henderson's daughter and Jennifer's best friend, who encourages Jennifer to challenge her perfect image.

Kara Fowdy
Kara Fowdy (portrayed by Skye P. Marshall) is Garfield High's vice-principal. She is later revealed to be a spotter for the A.S.A. who reports on metahuman sightings at the school. After the A.S.A.'s rogue operation was exposed, Kara leaves the A.S.A. and plans to reclaim the briefcase that was stolen by Tobias Whale. During the heist, Kara was harpooned by Tobias and jumped out the window. She was later found by Gambi. Before dying in Gambi's arms at his hideout, she gives him her phone, which contains the information that he needs.

Latavius Johnson / Tattoo Man
Latavius "Lala" Johnson (portrayed by William Catlett) is a member of the 100 and former student of Jefferson Pierce who is in charge of the Seahorse Motel that the 100 use as a front for their prostitution ring.
When Jefferson became Black Lightning and rescued Anissa and Jennifer, Lala escaped and was brought to Tobias Whale by Syonide and Joey Toledo, as Tobias wanted him dead. After killing his cousin Will for drawing Black Lightning to him, Lala resumed the activity at the Seahorse Motel, where he is defeated by Black Lightning and arrested by Inspector Henderson. While at the police station, Lala is killed by Tobias Whale for his repeated failures to dispose of Black Lightning, after Tobias and Syonide were snuck into the station by Zeke Cayman. Lala is later resurrected by Lady Eve's magic dust and starts seeing Lawanda's and Will's ghosts as their tattoos appear on his body. In addition, he starts to demonstrate super-strength. After regaining control of Lala and mentioning that the ghosts are a side effect of the reanimation project he was put through, Tobias uses him as a bomb mule in an attempt to kill Martin Proctor. In season two, Lala is put together again through an unknown method by a man named Lazarus Prime. After recalling that he killed his friend Earl to keep the 100 from killing him, Lala sees Earl's ghost and gains his tattoo. He then plans to redeem himself by resuming his revenge plot against Tobias Whale. When he finally confronts Tobias after saving Black Lightning from Heatstroke, with Cutter taking her leave, Tobias uses the phrase "E pluribus unum", which surfaces the tattoos of Lala's other victims and painfully disables him. In season three, Lala wakes up in a hotel near a briefcase as he works to reclaim the 100's territory. He now possesses the inability to feel pain due to the ghosts of the people he killed that are now in him. During this time, he starts to develop a competition with a revived Lady Eve. In their next meeting, Lady Eve revealed to Lala that she created the programming used on him, as she quotes "E pluribus unum" and another phrase which made him obedient.
He gave her the briefcase that he had planned to use to lure out Tobias. Black Lightning and Thunder try to recruit Lala and the remnants of the 100 to help fight the Markovians, to no avail. After meeting with Lady Eve, Lala has Devonte torture Dr. Matthew Blair for information on where Tobias Whale is. Following a shootout with Major Sara Grey, Lady Eve has Destiny inform Lala that the A.S.A. plans to nuke Freeland if the Markovians can't be defeated. This causes Lala and the remnants of the 100 to join the fight, where they assist Chief Henderson in shooting the Markovian soldiers. In season four, Lala and the 100 are in a turf war with the Kobra Cartel. After Lydell Green accidentally killed Marcel Payton's son, leading Blackbird to declare a parking lot neutral territory for both sides and the homeless, Lala killed Lydell, which caused Lydell's tattoo to manifest on the back of Lala's right hand. Lala has also been streaming illegal cage fights, with viewers placing bets on who would win each match. Lala later fights Ishmael, who slays him and buries his body in cement. Lala's cement tomb was later given to Tobias Whale by Destiny. During the fight between Black Lightning and Tobias Whale, Lala's cement tomb was knocked down. This later revived Lala, who finds Tobias' dead body outside while noting that someone managed to do away with him.

Syonide
Syonide (portrayed by Charlbi Dean) is Tobias Whale's henchwoman, hitwoman, and mob enforcer. As an infant, she was found in a dumpster with her umbilical cord wrapped around her. When she was eight years old, Tobias discovered her in an orphanage, where she was abused and malnourished. He took her in and trained her in the art of assassination while also putting her through a painful procedure that involved placing carbon fiber armor beneath her skin. Syonide is later killed in battle against Kara Fowdy.
Joey Toledo
Joey Toledo (portrayed by Eric Mendenhall) is Tobias Whale's right-hand man and mob enforcer. He is killed by a disguised Gambi, who made it look like Lady Eve called the hit.

John Webb
John Webb (portrayed by Tommy Kane) is a news reporter for WIXA 7 who reports on Freeland's activities.

Zeke Cayman
Zeke Cayman (portrayed by Anthony Reynolds) is a corrupt deputy chief of the Freeland Police Department who has connections with the A.S.A. and Tobias Whale. He is tasked by Kara with framing Jefferson for drug dealing, but is subsequently arrested by Henderson alongside those involved after Henderson obtained a confession from Detective Grunion.

Jeremiah Holt
Jeremiah Holt (portrayed by Clifton Powell) is a reverend looking to challenge the 100. During a sermon, Holt collapses from a poisoned handkerchief that Cutter secretly placed on him, enabling her to finish the job that Painkiller was unable to do. However, he was actually only placed in a coma due to the "intervention of the Lord" and later leads his people in prayer at the time of the riots caused by the 100. In season three, he was able to get a tour of the A.S.A.'s facility and later assists in Blackbird's metahuman version of the Underground Railroad. Henderson apprehends Reverend Holt on a fabricated charge so that he can bring him and Two-Bits into his secret resistance movement. Powell also portrays his Earth-1 counterpart, who ran a metahuman underground railroad alongside Jefferson. After he was caught by the A.S.A., he confessed to everything during the subsequent interrogation before they killed him. This version ceased to exist when Earth-Prime was formed.

Lady Eve
Evelyn Stillwater-Ferguson (portrayed by Jill Scott) is the owner of a funeral parlor who connects Tobias Whale with the Shadow Board, a secret group of corrupt leaders that gave him leadership over the 100. She was also a former agent of the A.S.A. who had connections with Peter Gambi and Agent Odell.
Lady Eve is later murdered by Tobias' men as part of a plan to frame Black Lightning and to avenge Joey Toledo, since Peter Gambi had pinned the blame for his death on Lady Eve's group. It was later revealed in season two that she was an old friend of Lazarus Prime who taught him some of her tricks. In season three, Baron found her picture on Gambi's computer when trying to find out who tried to have Gambi killed. Lady Eve was shown to have been revived offscreen and heads the Ultimate O business, where she starts to develop some competition with Lala and the remnants of the 100. In their next meeting, Lady Eve revealed to Lala that she created the programming used on him, as she quotes "E pluribus unum" and another phrase which made him obedient. He gave her the briefcase that he had planned to use to lure out Tobias. After the eye of one of her twin minions was ripped out by Gambi, Lady Eve meets with Gambi. She gives him the briefcase that Lala planned to use to lure out Tobias. Lady Eve later meets with Agent Odell about getting her spot on the Shadow Board back in exchange for information on where the briefcase is. In their discussion, Lady Eve mentioned to Odell that Lazarus Prime is still around. Lady Eve then met with Lala about whom he can ask regarding Tobias Whale's last known location. Lady Eve meets with Major Sara Grey, informing her that the briefcase is with Peter Gambi. Grey states that her reinstatement to the Shadow Board will happen and plans to have her relocated to Gotham City. Figuring out that the A.S.A. will nuke Freeland if the Markovians can't be defeated, Lady Eve and her men get into a shootout with Major Grey and those with her. A wounded Lady Eve gets away and contacts Destiny to have Lala and the remnants of the 100 fight the Markovian invaders.
In season four, Ana Lopez does a broadcast about the gang war between the 100 and the Kobra Cartel in which she claims that Lady Eve is heading it, even though nobody has seen her since the Markovian invasion.

Nichelle Payne
Nichelle Payne (portrayed by Yolanda T. Ross) is the mother of Khalil. Odell later controlled Painkiller into poisoning her, then covered it up to Lightning by claiming that the Markovians were responsible for Nichelle's death.

Frank "Two-Bits" Tanner
Frank "Two-Bits" Tanner (portrayed by Jason Louder) is Jefferson's childhood friend who sells drugs and bootleg DVDs on the streets. He has since become an occasional informant for Black Lightning. In season three, Two-Bits is shown to be against the A.S.A.'s activities in Freeland while operating a bar. Henderson apprehends Two-Bits on a fabricated charge so that he can bring him and Reverend Holt into his secret resistance movement. During the Markovian invasion, Two-Bits assists Henderson and the police in fighting the Markovian soldiers until they are assisted by Lala and the remnants of the 100.

Gina
Gina (portrayed by Veronika Rowe) is the aunt of Lana. She and her sister visited Jefferson and Lynn over what Jennifer did to Lana. In season three, Gina joins the resistance against the A.S.A.

Kyrie
Kyrie (portrayed by Renell Gibbs) is a man who later joins the resistance against the A.S.A. In season four, Kyrie is killed by Ishmael, who was trying to get information from him about Blackbird.

Martin Proctor
Martin Proctor (portrayed by Gregg Henry) is a member of the A.S.A. who initially wants to kill Black Lightning, but changes his mind when he realizes that his DNA can be used to create metahuman soldiers. Peter Gambi is associated with him. During a confrontation at one of the warehouses storing the metahuman stasis pods, Martin is briefly attacked by Jennifer and shot by Gambi.
Tobias Whale had Proctor's thumbs salvaged by an ally at the coroner's office to access the contents of his briefcase. It was revealed in season three, during Issa's truth-extracting interrogation of Tobias Whale, that Proctor worked for Odell.

Introduced in season two

Percy Odell
Percy Odell (portrayed by Bill Duke) is an A.S.A. agent from Gotham City investigating the "rogue operation" conducted by Martin Proctor. He has connections with Lady Eve and was the one who trained Peter Gambi. He reluctantly allows Lynn to take over management of the Green Light victims, though he starts to get suspicious of the Pierce family while planning to weaponize the Green Light victims. After confirming his suspicions and following Tobias Whale's defeat, Odell confronts the Pierce family to tell them that the pods have started to attract the attention of the Markovians, as he would like Black Lightning, Thunder, and Lightning to help the A.S.A. when the Markovians bring their battle to Freeland. In season three, Odell keeps Black Lightning and Lynn Stewart in his custody to prevent the Markovians from claiming them. It was revealed during Issa's truth-extraction interrogation of Tobias Whale that Proctor worked for Odell, who in turn works for the President of the United States. While enabling Jefferson and Lynn to make contact with their children, he uses a brain chip to control Painkiller into poisoning his own mother. When Odell is shot during an A.S.A. shootout with Yuri Mosin and Instant, Major Sara Grey becomes the acting director of operations while Odell is recuperating. Odell later recovered and sent a video message to Jennifer to pick a side. He then ordered Grey to weaponize the metahumans they have. When the anti-matter appeared in the sky, Odell's warning to Jennifer not to go outside came too late. In the aftermath of the Crisis that merged his Earth with Earth-1 and Earth-38 to form Earth-Prime, Odell continues to have the metahumans weaponized when Jennifer doesn't return his calls.
Odell sends an SUV with a holographic transmission to meet with Jennifer, who tells him that she is done with him as the SUV drives off. Jefferson later arranges for his capture in a plan to fool the A.S.A. soldiers into withdrawing from Freeland. He is rescued by the A.S.A. operatives during Painkiller's fight with Black Lightning and leaves for Gotham City. He later returned and fulfilled the conditions that Jefferson gave in exchange for Lynn being rescued from Markovia. Although he mentions that the Markovians are at the borders again, Jefferson states that he'll only fight the Markovians for Freeland and not for the A.S.A. Odell later meets with Lady Eve, who wants him to get her seat on the Shadow Board back in exchange for informing him where the briefcase is. In their discussion, Odell learns that Lazarus Prime is still around. During the Markovian invasion, Odell has Commander Williams remove all classified information and has Major Grey find the briefcase or destroy it. When Gravedigger gets into the Pit, he has his A.I. Katie set the Pit to self-destruct using the password "Rosebud." When he makes it to his car, he finds his driver dead and Khalil in the backseat. Black Lightning arrives and advises Khalil not to kill him. Khalil just shoots him where his spleen is and leaves him for Black Lightning as he walks off. During Black Lightning's meeting with a congressional committee, Representative Nagar states that Odell will be prosecuted. In season four, Odell is shown to be recuperating from the injury that Painkiller gave him. He is also shown to have a daughter named Maya, who is operating in Akashic Valley. She calls Percy up to let him know about her minions' encounter with Painkiller. Duke also portrays his Earth-1 counterpart, who imprisoned Gen in the Pit for removing all metahuman abilities in Freeland via the water supply.
After getting a confession from a captive Reverend Holt regarding the metahuman underground railroad, he led the A.S.A. to the Pierce home to confront Jefferson and execute him. This version ceased to exist when Earth-Prime was formed. Duke also portrays his Earth-2 counterpart, who gladly supported Jinn ending the Markovian threat in Freeland. This version ceased to exist when a new Earth-2 was formed.

Issa Williams
Issa Williams (portrayed by Myles Truitt) is a Green Light metahuman whose glowing jugular causes anyone he sees to tell the truth. The Pierce family takes him in after his mother rejects him, but he leaves with some family members after learning that he has only months left to live. In season three, Issa is in A.S.A. custody, where Odell claims that the Markovians abducted him and the A.S.A. rescued him. It was revealed during Issa's truth-extraction interrogation of Tobias Whale that Proctor worked for Odell, who in turn works for the President of the United States. Once Issa had served his purpose, Odell spiked his food with an unspecified poison.

Wendy Hernandez
Wendy Hernandez (portrayed by Madison Bailey) is an aerokinetic metahuman previously obtained by the A.S.A. who is loosely based on Windfall. She is accidentally released from her stasis pod. Wendy suffers a psychotic break upon release and goes on a rampage, stopping only when Black Lightning shocks her. Afterwards, she chooses to go back into her pod until Lynn can find a cure. Lynn later works with her to master her abilities.

Perenna
Perenna (portrayed by Erika Alexander) is a psychic metahuman and one of Gambi's former A.S.A. contacts who steps in to help Jennifer control her powers.

Helga Jace
Dr. Helga Jace (portrayed by Jennifer Riker) is a convicted mad scientist from Markovia and old acquaintance of Tobias Whale who is roped into helping Lynn and the A.S.A. treat the metahumans. One of Dr. Jace's notable illegal experiments caused 10 people to lose their feet and one person to die.
To punish her, an implant was placed in her ankle that prevents Dr. Jace from leaving the designated area. After being re-incarcerated when some of the pod children die from the trial run of the treatment, she is sprung from her cell by Todd Green, at which point it is revealed that she made the anti-aging serum that Tobias took. Helga Jace then takes Tobias Whale to where the remaining pod children are. After being defeated by Lynn Stewart, Jace gives them some useful information, after which she is taken away by Deputy Chief Henderson. While in police custody, she is taken by Instant, who was hired by the Markovians to reclaim her. In season three, Jace was seen with Yuri Mosin's Markovian army. After Instant leaves upon Mosin wiring the money to his account, Jace states that Lynn has found a way to stabilize the meta-gene and that they will obtain the information so that they can stabilize the metahumans on the Markovians' side. Brandon mentioned to Jennifer that he came to Freeland to look for Jace, who is responsible for killing his mother. When Lynn is brought to Markovia, Jace works to gain Lynn's trust. This results in a catfight that is broken up by Mosin. While getting some medicine to deal with Lynn's Green Light withdrawal, Jace states to Mosin that she has won Lynn's trust. Gravedigger later appears to take over the operation from Mosin, at which point it is revealed that he had taken the same serum that Jace made for Tobias Whale. When Black Lightning raids the facility, Brandon finally encountered Jace and started to create an earthquake, only for Grayle to knock him out and have both of them evacuated. Gambi was revealed to have originally assisted in evacuating Dr. Jace from Markovia and stated that he should have disobeyed orders and done away with her. Jennifer brought Brandon to Dr. Jace to get information on his mother, whose ashes he kept in crystallized form. Jace stated that Brandon's mother was killed by his father, who also had earth-based powers.
Brandon throws the crystallized ash into Dr. Jace's left shoulder and leaves with Jennifer. Brandon later held Dr. Jace in his apartment so that she could tell him all about his father. While Brandon is assisting Black Lightning in rescuing Lightning from the Markovians, Commander Williams breaks into Brandon's apartment and kills Dr. Jace as a way to remove classified information.

Mike Lowry
Mike Lowry (portrayed by P. J. Byrne) is a man who becomes the new principal at Garfield High with the aim of reversing what he sees as Jefferson's ineffective policies on school safety. He starts his "Zero Tolerance" policy with Sekou Hamilton and another student who were fighting, in which Sekou was expelled for throwing the first punch and the other student was suspended, much to Jefferson's dismay. Lowry's later argument with Jennifer about Khalil's mural goes viral as Jefferson persuades Dr. Frank to give Lowry a second chance to implement his plan. In season three, the A.S.A. arranges for Lowry to give Jefferson's class to Mrs. Wellen and promote him to guidance counselor. When Tavon Singley is taken from a class for being a suspected metahuman, Lowry objects and is knocked down by Major Sara Grey.

Giselle Cutter
Giselle Cutter (portrayed by Kearran Giovanni) is a British mercenary with poison-tipped blades who is hired by Tobias Whale to bring Painkiller to him alive. Gambi claims that Cutter is rumored to have a telekinetic ability. During the riots caused by the 100, Cutter sees that Tobias is starting to lose it and takes her leave.

Todd Green
Todd Green (portrayed by RJ Cyler) is a man who loses his research grant to a white awards board and is swayed to Tobias Whale's side when Tobias wires $100,000 to his account. After cracking the code in the briefcase, Todd and Tobias discovered that the A.S.A. was developing metahumans for Project Masters of Disaster. In addition, he orchestrates the secret jailbreak of Helga Jace.
Tobias Whale later has Cutter use a car bomb on Todd Green once he has served his purpose. His glasses were salvaged by the police, and Henderson has them placed in evidence.

Introduced in season three

Jamillah Olson
Jamillah Olson (portrayed by Adetinpo Thomas) is a reporter for Clap Back who broadcasts on the A.S.A.'s occupation of Freeland. After Truthteller Johnson was fished out of the river, Henderson recruits her to be the new voice of the resistance against the A.S.A. During the Markovian invasion of Freeland, Jamillah was shot by a Markovian soldier while broadcasting about the attack.

Shonda
Shonda (voiced by Sh'Kia Augustin) is the AI in Anissa's apartment, created by Gambi.

Devonte Jones
Devonte Jones (portrayed by Rafael Castillo) is a man who was briefly in A.S.A. custody for being a suspected metahuman. After he was released, he sided with Lala and helped to investigate the Ultimate O business run by Lady Eve. Devonte later assisted Lala in torturing Dr. Matthew Blair for Tobias Whale's whereabouts by beating him up. Devonte later assists Lala and the remnants of the 100 in fighting the Markovian soldiers.

Yuri Mosin
Yuri Mosin (portrayed by Thomas K. Belgry) is a Markovian colonel and old enemy of Agent Odell who leads the attack on Freeland. He and Instant later sneak into Freeland to steal some A.S.A. information. Due to an ambush led by Agent Odell, Yuri Mosin and Instant get into a shootout, which leads to Odell getting wounded. Before Mosin can finish off Odell, Black Lightning shows up, causing Instant to teleport Mosin away. At the Markovians' base, Mosin was not pleased that Instant didn't let him finish off Odell or Black Lightning. Instant reminds him that he was only hired to get Mosin into Freeland. After Instant leaves upon Mosin wiring the money to his account, Jace states that Lynn has found a way to stabilize the meta-gene and that they will obtain the information so that they can stabilize the metahumans on the Markovians' side.
Gravedigger later appeared at the facility, having received orders from his superiors to take over the operation from Mosin. Using a 20-minute dose of Green Light containing Gravedigger's DNA, Lynn mind-controlled Mosin into standing still and placed the shock device from her neck onto his. Once she had gotten to a safer distance with Lightning, Lynn had Mosin shock himself until he passed out. When Mosin states to Gravedigger that his men can't find Tobias, Gravedigger thanks him for his services to Markovia and uses his upgraded powers to kill him.

Brandon Marshall
Brandon Marshall (portrayed by Jahking Guillory) is a new student at Garfield High. At the time when A.S.A. soldiers were giving a beat-down to Jefferson Pierce, Jennifer discovers that Brandon is a metahuman who can manipulate the properties of earth, as seen when he makes Jennifer some diamonds from coal, as well as negate electrical attacks. Brandon revealed to Jennifer that he came to Freeland to look for Helga Jace, who was responsible for killing his mother. Because his mother died when he was young, Brandon had been in different foster homes. After an earthquake-triggering seizure that Jennifer treated, Brandon and Jennifer are captured by Sergeant Grayle and Specialist Travis. Both of them managed to escape due to a combination of Jennifer overloading the inhibitor collars and Sergeant Grayle knocking out Specialist Travis. Brandon later meets Jefferson when Jennifer informs him of how Odell used her. Jennifer later persuades her father to let Brandon assist in rescuing Lynn, partially by revealing his abilities and partially because he is looking for Helga Jace. When it came to rescuing Lynn from the Markovians, Brandon used his earthquake ability when he finally encountered Jace, only to be knocked out by Sergeant Grayle, who evacuated both of them. Jennifer brought Brandon to Dr. Jace to get information on his mother, whose ashes he kept in crystallized form.
Jace stated that Brandon's mother was killed by his father, who also had earth-based powers. Brandon throws the crystallized ash into Dr. Jace's left shoulder and leaves with Jennifer. He asks TC to help look for information on his father, only for Khalil to suffer a glitch that briefly frees the Painkiller program. Brandon later held Dr. Jace at his apartment so that she could tell him all about his father. When Lightning gets captured, Brandon helps Black Lightning rescue her. He and Lightning briefly fight Gravedigger in the Pit, after which the two of them kiss.

Sara Grey
Sara Grey (portrayed by Katy O'Brian) is an A.S.A. commando with the rank of major who works with Agent Odell to enforce the curfew in Freeland and obtain any suspected metahumans. She was among the commandos rescued from the Markovians by Black Lightning. When Odell was wounded in an A.S.A. shootout with Yuri Mosin and Instant, Sara Grey became the acting director of operations. After recuperating from Black Lightning's attack, Sara Grey worked on weaponizing the metahumans that the A.S.A. has. To bring Jennifer to Odell, she pairs Sergeant Grayle up with Specialist Travis. When Black Lightning intercepted Odell's convoy, Sara Grey sent Painkiller and some A.S.A. commandos to rescue him. After Lynn was taken to Markovia by Instant, Sara Grey and Black Lightning have a parley where they agree that Lynn must be rescued. As a contingency, Grey has Dr. Blair place a "kill Lynn" directive in Erica Moran's chip in the event that Lynn is beyond rescuing. She also meets Peter Gambi, whom she had researched. Though she was not pleased that Tobias got away, Grey is dismissed by Odell. During the Markovian invasion of Freeland, Sara Grey was instructed by Odell to find the briefcase or destroy it. After a meeting with Lady Eve which resulted in a shootout, Major Grey and her soldiers attacked Gambi's tailor shop. While most of her men were killed by Gambi, she was killed by T.C.
Michael Allen
Michael Allen (portrayed by Euseph Messiah) is an A.S.A. nurse who assists Lynn in working on Tobias Whale. He is later revealed to be a double agent working for the Markovians upon Lynn's capture. When in Markovia, Allen continues to extract bone marrow samples from Tobias. Tobias later visits Michael's house, where he kills Michael's father and spares his mother. He wants Michael to take him to where the metahumans in the Markovians' possession are.

Gardner Grayle
Gardner Grayle (portrayed by Boone Platt) is an A.S.A. commando with the rank of sergeant. He assisted Sara Grey in pulling suspected metahumans out of houses. When the Pit was going under lockdown, Grayle allowed Lynn to get by him. Grey paired Grayle up with Specialist Travis to bring Jennifer to Odell. During this time, he noticed that Specialist Travis had a chip in his neck. As Jennifer engaged Specialist Travis, Grayle knocked him out, enabling Jennifer and Brandon to get away. After helping Lynn sneak Tobias Whale out of the Pit, they get tasered by a Markovian operative. With Lynn in the Markovians' clutches, Grayle informs Jefferson about it, and Jefferson has him arrange a parley with Major Grey. When it comes to their invasion of the Markovian facility, Grayle informed Jefferson that Major Grey had Erica Moran chipped, only for TC to state that he had deactivated the chip when he got suspicious of it. Upon Brandon finding Jace and starting to cause an earthquake, Grayle knocked him out and had both of them evacuated. During the Markovian invasion of Freeland, Grayle and Erica assist in the evacuation of the suspected metahumans. When they and the rebels try to fight Gravedigger, he just mind-controls them to sleep.

Erica Moran
Erica Moran (portrayed by Gabriella Garcia) is a kinetic energy-absorbing metahuman in the A.S.A.'s custody who is a gender-swapped version of the DC Comics character Freight Train.
In Lynn's dream sequence, she explodes before she can release the absorbed energy. Erica starts to improve her abilities with Lynn's help. After Lynn is abducted by the Markovians, Erica is chipped by the A.S.A. to assist in the upcoming raid on Markovia. Major Grey has Dr. Blair place a "kill Lynn" directive in Erica's chip in the event that Lynn can't be saved, though that chip was secretly deactivated by TC when he got close to Erica. He revealed that information to Jefferson Pierce and Gardner Grayle. When it came to the raid on the Markovian facility, Thunder had to strike Erica until she had enough kinetic energy to knock down a strong door. After the mission was done, Odell mentioned to Jefferson that Erica had been returned to her family. Prior to attending Anissa and Grace's engagement party, Erica does manage to kiss TC before the Markovians begin their invasion of Freeland. During the Markovian invasion of Freeland, Erica assists Grayle in the evacuation of the suspected metahumans. When they and the rebels tried to fight Gravedigger, he mind-controls them to sleep.

Matthew Blair
Dr. Matthew Blair (portrayed by Brandon Hirsch) is a young scientist who assists Lynn in her work with the metahumans in A.S.A. custody. After Lynn flees, Dr. Blair is left to work on the vaccine, which he has only been able to bring to 60% completion. Major Grey later has Dr. Blair place a "kill Lynn" directive in Erica's chip in the event that Lynn can't be saved. Dr. Blair was later abducted by Lala and tortured by Devonte for Tobias Whale's whereabouts, of which he has no knowledge.

T.C. / Baron
T.C. (portrayed by Christopher Ammanuel) is a pod metahuman with technopathic abilities. Because he could hear much of the current technology speaking to him after being freed, Baron took residence at an old radio station, where he met Peter Gambi and assisted him and Thunder in getting Jamillah's message outside of Freeland.
Baron later found himself at Gambi's tailor shop where he found Gambi's secret room and his connections with Black Lightning, Thunder, and Lightning. When Baron goes onto Gambi's computers to find information on who tried to have Gambi killed, he has a brief collapse as the image on the monitor shows a picture of Lady Eve. He later meets Jefferson and translates Painkiller's technology. With help from Gambi and Jennifer, TC created a firewall to trap the Painkiller programming enabling Khalil to regain control of his body. Upon getting suspicious of what has been placed on Erica, TC deactivated the chip that contained the programmed contingency plan which he later revealed to Black Lightning and Gardner Grayle. After checking to see if Painkiller is still behind the firewall, TC was approached by Brandon to look up information on his father only for Khalil to suffer a glitch that briefly freed Painkiller. Following the incident, TC helps Gambi open the briefcase that Lady Eve gave him after she got it from Lala. Prior to the Markovians' invasion of Freeland, TC manages to get a kiss from Erica. During the Markovian invasion, he helped Gambi coordinate Black Lightning in rescuing Lightning and warning them that the Pit was set to self-destruct. In season four, T.C. helps Jefferson track down Lydell Green after he accidentally shot Marcel Payton's son during the 100's shootout with the Kobra Cartel. T.C. also helps to analyze the DEG, track down Terry Andrews for Lightning, and advises Lightning to take caution with what she posts on social media. After Lightning exploded, T.C. and Gambi work on a machine to put her back together. While helping Jennifer to adjust to her new body, T.C. is among those that find out that Val Seong has power-negating abilities when Lynn tests a meta-booster with Val's DNA in it on Gambi. During a transmission with Khalil and Philky, T.C. 
works on helping to subdue Painkiller while mentioning his suspicion that Tobias Whale framed Jefferson for embezzlement. Following the death of Tobias Whale, T.C. helps Khalil and Painkiller get rid of the kill code at the cost of Khalil's memories of the Pierce family. Gambi later passes his torch to T.C. Destiny Destiny (portrayed by Teesha Renee) is a worker at the Ultimate O who works for Lady Eve. In season four, Destiny becomes the underboss for the Kobra Cartel. Following the death of Mayor Billy Black, Destiny contacts Lala for a truce, stating that the police will come after the Kobra Cartel once they are done with the 100. After witnessing Lala revive upon being shot and the arrest of the 100 members present, Destiny calls up someone stating that they need to find an assassin who specializes in metahumans. This leads to her calling in Ishmael, who successfully stabs Lala and traps his body in a cement casket. Destiny displays it in her hideout as she learns that Ishmael is planning to kill 100 metahumans in order to get into the League of Assassins. As Ishmael had already killed 94 metahumans, Destiny contracts him to kill Black Lightning and his allies. Gravedigger / Tyson Sykes Tyson Sykes (portrayed by Wayne Brady) is a World War II soldier who became the subject of a metahuman project collaboration between the United States and Markovia in exchange for not being court-martialed for beating up some soldiers that directed racist remarks at him. Tyson was the only survivor of this project. While he possesses super-strength, super-speed, and mind-control, Gravedigger has not aged due to the same serum that Helga Jace used on Tobias Whale. He used his talents when fighting Nazi soldiers. After the war, Gravedigger sided with Markovia, assisted in a coup d'état, and became one of the Markovians' few stable metahumans who plans to make an independent metahuman nation in Markovia. Gravedigger even considered Martin Luther King Jr. 
a coward and claimed that the rich white people helped to get Barack Obama elected. Years later, Gravedigger was sent by his superiors to take over the operation from Mosin, where he used his mind-control to get Lynn Stewart to work on the metahuman stabilization faster. In one instance, Lynn had to ask for his DNA sample, which she secretly used for her 20-minute Green Light. When Black Lightning and his allies raided the Markovian facility to rescue Lynn, Tobias, and Jace, Gravedigger overpowered Khalil and used his laser gun to knock out Black Lightning. Then he used his mental abilities to stop the group from evacuating. Lynn started to surrender to Gravedigger due to his immunity to other mind-control attacks, only for Black Lightning to catch up to them and shock Gravedigger into submission. A Markovian scientist later injected Gravedigger with the meta-boost formula that Lynn left behind. When Mosin comes in stating that his men can't find Tobias, Gravedigger states to Mosin that Markovia thanks him for his services and uses his upgraded powers to kill Mosin. After researching those who rescued Lynn, Gravedigger leads the invasion on Freeland. Lynn found that Jefferson's DNA and Gravedigger's DNA are a match, meaning that they are related. Lightning takes out the ones who were with Gravedigger before fighting Gravedigger. He resists the electrical attacks and throws Lightning against the perimeter hard enough to take it down. Black Lightning resuscitates Lightning just as Gravedigger catches up to them. Black Lightning begins to fight him. He used a microwave ability on Black Lightning before making a tactical retreat as the Markovian soldiers made off with Lightning, even though Lightning had tried to reason with him. During Gravedigger's raid on the Pit, he mind-controlled those on the outside to sleep, mind-controlled Grace into attacking Thunder, and knocked down Lightning and Brandon. Gravedigger fought Black Lightning again. 
After Gravedigger was hit by the anti-boost serum shot by Lynn, Black Lightning defeated him. Having survived the Pit's self-destruct sequence, Gravedigger used cloaking technology to watch Black Lightning's meeting with a congressional committee that exposed the A.S.A.'s experiments as well as Markovia's own experiments. Upon leaving the building, Gravedigger sheds his disguise and walks off with satisfaction that the racist cover-up which fueled his anger has been exposed. Introduced in season four Lauren Caruso Lauren Caruso (portrayed by Elena Varela) is an employee at Monovista International and the ex-girlfriend of Gambi. During Gambi's early work with the A.S.A., Lauren turned down his offer to join with them. After meeting up with Gambi years later, Lauren and the rest of Monovista International start to work on the DEGs. Hassan Shakur Hassan Shakur (portrayed by Wallace Smith) is a detective in the Freeland Police Department who works closely with Chief Lopez and had an earlier encounter with Black Lightning in his youth. Following the death of Mayor Billy Black, Lopez makes a reluctant Shakur the head of the Meta Task Force, where he leads the arrest of the 100 members that were with Lala. Shakur was later present with the Meta Task Force when Lightning defeated Lopez. Ana Lopez Ana Lopez (portrayed by Melissa De Sousa) is a woman who is sworn in as the new chief of police of the Freeland Police Department. Following the death of Mayor Billy Black, Lopez starts the Meta Task Force while appointing Hassan Shakur as its leader. Consumed by her desire to defeat Lightning, Lopez uses the meta-booster that Tobias Whale gave her to gain electrokinesis, which she uses to drain Freeland's power grid. She is defeated by Lightning. Dr. Bowlan Dr. Bowlan (portrayed by Bethann Hardison) is a therapist that Jefferson and Lynn see. Red Red (portrayed by Matt Roszak) is a man with magnetic abilities who becomes Tobias Whale's latest minion. 
He was the one who killed Mayor Billy Black with his special bullet, which was later found by Hassan. Tobias later dispatched Red to find out from Marshall Bates why a specific item had been delayed. Upon being informed that the device still needs to be fully tested, Red tells Bates to have the testing finished and the device en route to Tobias within 48 hours, or else one of his bullets will go into his head. Red was unaware that the footage of the discussion was being watched by Jefferson and Gambi. Val Seong Val Seong (portrayed by Helen Joo Lee) is Tobias Whale's lawyer who has ALS and a meta-gene that grants her power-negating abilities. Dr. Darius Morgan Dr. Darius Morgan (portrayed by Todd Anthony) is a doctor who becomes Anissa's co-worker. He is also revealed to be an ordained minister who weds Anissa and Grace. When Anissa later goes to his apartment to learn the information about the metahuman cure from him, she finds Darius dead in his apartment. Marshall Bates Marshall Bates (portrayed by Paden Fallis) is an executive at Monovista International. Ishmael Ishmael (portrayed by Rico Ball) is an assassin who is hired by Destiny to deal with Lala. After killing him and trapping his body in a cement casket, Ishmael states to Destiny that he is planning to kill 100 metahumans in order to get into the League of Assassins. As he had already killed 94 metahumans, Ishmael is further contracted by Destiny to go after Black Lightning and his allies. Tobias later pays him more money while intimidating Destiny into ending the Kobra Cartel's gang war with the 100. After finding out what Painkiller did to Jesse Gentilucci, Tobias Whale sends Ishmael to take out Painkiller and Looker. Painkiller manages to kill Ishmael by getting his poison onto the hilt of Ishmael's sword. Kevin Mason Kevin Mason (portrayed by Jamal Akakpo) is a special agent for the FBI who investigates Jefferson for embezzlement. 
After having Lynn's work confiscated, Mason later arrests Lynn for violation of civil rights. Mason was discovered to be under Looker's control after her involvement with Tobias was uncovered. With help from Detective Shakur, Jefferson slipped a chemical that Lynn made into the coffee served at the interrogation, which caused the silver liquid to be ejected from Mason. Jefferson then exposed Looker's control on him to Mason. Guest stars Introduced in season one Varetta Henderson (portrayed by Karen Ceesay) – The wife of Inspector Henderson and the mother of Kiesha Henderson. Will (portrayed by Dabier Snell) – A member of the 100 and cousin of Lala who kidnaps Anissa and Jennifer. He is later killed by Lala for disappointing him and later appears as a ghost that only Lala can see, with his tattoo appearing on Lala. Lawanda White (portrayed by Tracey Bonner) – One of Jefferson's former students whose daughter was taken by the 100 under Lala's supervision. After Jefferson fails to help her as promised, Lawanda confronts Lala, who shoots and kills her mercilessly. Unknown to him, she recorded the entire exchange, which leads to his arrest at the hands of Inspector Henderson and eventual death at the hands of Tobias Whale. Lawanda later turns up as a ghost that only Lala can see, with her tattoo appearing on Lala. Tori Whale (portrayed by Edwina Findley) – Tobias Whale's younger sister who assists in his plot to destroy Black Lightning. She is accidentally killed by a stray bullet while trying to escape from Black Lightning. Eldridge Whale (portrayed by T. C. Carson) – The abusive father of Tobias and Tori. After Tori finds their father, Tobias repays him by breaking his spine and leaving him for dead. David Poe (portrayed by Antonio Fargas) – The editor-in-chief of the Freeland Gazette who gives Anissa files that once belonged to her grandfather Alvin. He is subsequently killed offscreen in a staged hit-and-run. Mr. 
Nasir (portrayed by Jigga) - A teacher at Garfield High. Chandler Tong (portrayed by Regina Chen) - A news anchor for WIXA 7 who reports on the activities in Freeland. Gordon (portrayed by Vanessa Aranegui) - A math teacher at Garfield High. Glennon (portrayed by Faron Salisbury) - A detective in the Freeland Police Department who was bribed by the A.S.A. through Kara Fowdy into doing their bidding, such as assisting Deputy Chief Cayman in planting evidence that Jefferson Pierce had Green Light. After Henderson found proof that the bribe that Glennon took was used to pay for a new house for his son, Glennon cooperated with Henderson, which led to the arrest of Cayman and those involved. Steven Connors (portrayed by Joshua Mikel) – A drug supplier and an associate of Lala. He later allies with Tobias following attacks from Thunder. Thomas Hildago (portrayed by Morgan Brown) – A weapon maker who is allied with the A.S.A. After being interrogated by Gambi, Thomas is later killed offscreen by Lala. Alvin Pierce (portrayed by Keith Arthur Bolden) – The father of Jefferson and friend of Peter Gambi who worked as a reporter. He was killed by Tobias Whale after exposing him for corruption. Jefferson once conversed with him following a near-death experience on the day Tobias Whale raided Garfield High. In season four, Jefferson visits Henderson's grave and converses with his father's ghost. Josh Henry portrays an unnamed scientist working for the A.S.A. who oversees the stasis pods that the captured metahumans in Proctor's possession are in. Detective Sergeant King (portrayed by Crystal Lee Brown) – A member of the Freeland Police Department who started out as a desk sergeant and later became a detective sergeant. In Season 3, she was picked up by Carson Williams for being a suspected Green Light user. During the outbreak of a virus that was harming the metahumans, Henderson visited King, who admitted taking Green Light when crime was worse. 
Before succumbing to the virus, King directed Henderson to a fake floor in her locker and advised him to give the contents of the envelope to her husband. Tavon Singley (portrayed by Jasun Jabbar Wardlaw Jr.) – A student at Garfield High. In season three, Tavon was a suspected metahuman despite his test showing that he doesn't have Green Light in him. He is later poisoned during an ambush on Blackbird by Painkiller and dies in Black Lightning's arms. Roland S. Martin and Nina Turner appeared as themselves in the pilot. Journalist Amanda Davis has a posthumous cameo appearance in two episodes. Introduced in season two Dr. Napier Frank (portrayed by Robert Townsend) – An old friend of Jefferson Pierce who is on Freeland's board of education. Marcel Payton (portrayed by Kedrick Brown) – A teacher at Garfield High. He was present when Jefferson Pierce stepped down as principal. In season three, Marcel taught one of his classes when Jennifer accidentally fried the computers. In season four, Marcel was revealed to have been left homeless since the Markovian invasion. After Lydell Green of the 100 accidentally killed his unnamed son during the 100's shootout with the Kobra Cartel, his other children were taken by child services and Marcel had to resign. He later took back his resignation, and Jefferson found that he had been taking part in Lala's cage fights. Jefferson was able to lend his old house to him until he could get his kids back from child services. Marcel later informs Jefferson that Tobias has gotten him a new house at the time when developers plan to put a new hospital where his old house is standing. Sekou Hamilton (portrayed by Christian Alex Jones) – A student at Garfield High who got into a fight with another student which Jefferson broke up. Mike Lowry had him expelled for starting the fight while the other student was suspended. 
Through unknown means in season three, Sekou was re-enrolled at Garfield High, as he was seen among the students speaking out against the A.S.A. soldiers for what happened to Tavon. He was attacked by an A.S.A. soldier before Jefferson intervened. Zoe B. (portrayed by Andy Allo) – A musician that Anissa befriends. Montez (portrayed by Salli Richardson-Whitfield) – The district attorney of Freeland. Anaya (portrayed by Birgundi Baker) – A pregnant girl that Anissa attends to at the clinic. Due to her romance with Deacon, she gave birth to twins, one black and one white. In Season 3, Anaya allows safe passage for the suspected metahumans through their territory. When the Markovians attacked South Freeland, Anaya sent a distress call to Anissa. It was later mentioned that Anaya and Thierry were captured and were later rescued. Deacon (portrayed by Rob Morean) – The boyfriend of Anaya. He later dies in front of Anissa, then emits a metal-like substance in front of her and Henderson. It was later revealed that Deacon was part of South Freeland's Sange community. Looker (portrayed by Sofia Vassilieva) – A metahuman that can control people with a metal-like substance she emits. She is the ruler of South Freeland's Sange community. During the fight with Black Lightning and Thunder at Freeland's clinic, Looker was defeated when Thunder threw her into a hook in the wall as Black Lightning plans to have her handed over to the A.S.A. Looker's defeat also freed the Sange from her control. In season four, Looker was revealed to have escaped A.S.A. custody during the Markovian invasion and began working for Tobias Whale, who paid her to control Kevin Mason into accusing Jefferson Pierce of embezzlement. She was defeated by Painkiller, after which Khalil gave her half the antidote and persuaded her to confess her involvement with Tobias Whale to the Freeland Police Department. 
After Painkiller killed Ishmael, Looker was handed over to Detective Shakur and Kevin Mason. Sheriff Clark (portrayed by Robert Walker Branchaud) – The sheriff of South Freeland that works for Looker. Kwame Parker (portrayed by Eric Lynch) – A Freeland councilman and benefactor of Martin Proctor who is swayed to Tobias Whale's side. When Tobias Whale unleashed Heatstroke into Freeland, Parker was burned by Heatstroke. Thierry (portrayed by Warren "WAWA" Snipe) – The deaf father of Anaya and member of South Freeland's Perdi community. In Season 3, he and some Perdi find Anissa, in her Blackbird alias, after she was blasted through the perimeter by an A.S.A. soldier. Thierry allows Blackbird safe passage through their woods. When the Markovians attacked South Freeland, it was mentioned that Anaya and Thierry were captured and were later rescued. Batina (portrayed by Charmin Lee) – The mother of Anaya and member of South Freeland's Perdi community who interprets for her husband Thierry. She is killed in battle with the Sange. Kito Payne (portrayed by Kendrick Cross) – The father of Khalil and husband of Nichelle who is an ex-criminal. Instant (portrayed by Tosin Morohunfola) – A teleporting metahuman and bounty hunter. After killing some people in a bar, Instant is contacted by an unknown client telling him to head to Freeland. When in Freeland, he kills some police officers to get to Helga Jace. As Jace notes his teleporting ability, Instant reveals that he was hired by the Markovians to reclaim Jace. Instant does that and teleports away with Jace before Henderson shows up. In Season 3, Yuri Mosin contracted Instant to sneak him into Freeland so that they can steal A.S.A. information. Due to Sara Grey detecting Instant's teleportation energy, Odell sets up an ambush while using technology to negate Instant's teleporting. This leads to a shootout where Odell gets wounded. Instant gets Mosin away when Black Lightning shows up. 
After reminding Mosin that he contracted him to get him into Freeland, Instant gets Mosin's payment wired to his account and he teleports away. A Markovian operative that tased Lynn and Gardner Grayle contacted Mosin to have Instant bring Lynn and Tobias Whale to Markovia. Masters of Disaster – A group of metahumans created by the A.S.A. as part of the experiments of "Project Masters of Disaster." Marcus Bishop / Shakedown (portrayed by Hosea Chanchez) – A secret A.S.A. operative who can generate vibrations and frequencies at will. He was released from his pod by Tobias Whale to help Cutter steal some of the pods from the A.S.A. Shakedown is defeated by Black Lightning and Thunder. Joe / Heatstroke (portrayed by Esteban Cueto) – A pyrokinetic metahuman released by Tobias Whale to serve him. He was unleashed into Freeland where Tobias Whale broadcast his attack on the Dark Web to promote the metahuman arms race. During the Masters of Disaster's fight with Black Lightning and Thunder, Heatstroke is shot by Lala. Daryl Robinson / Coldsnap (portrayed by Derrick Lewis) – A cryokinetic metahuman released by Tobias Whale to serve him. He is defeated by Black Lightning and Thunder. Rebecca Jones / New Wave (portrayed by Brooke Ence) – An aquakinetic metahuman released by Tobias Whale to serve him. She serves as the leader of the Masters of Disaster. New Wave is defeated by Black Lightning and Thunder. Lazarus Prime (portrayed by Michael Wright) – A mysterious man and old friend of Lady Eve who puts Lala back together and revives him. He supports Lala's revenge plans on Tobias Whale for what he did to Lady Eve. It was revealed that he was also known to Agent Odell. Benjamin Crump and Angela Rye appear as themselves in the season two premiere talking about the Green Light incident. Introduced in season three Maryum Luqman (portrayed by Zoe Renee) – A Muslim metahuman with camouflage abilities. 
Side effects of her condition have caused her to lose her hair, her fingernails to fall out, and her eyes to bleed. The A.S.A. labeled her as "Chameleon." Carson Williams (portrayed by Christopher B. Duncan) – An A.S.A. Meta Force soldier under the rank of commander and metahuman who can copy the abilities of any metahuman that he comes in contact with. He works with Agent Odell to enforce the curfew in Freeland and obtain any suspected metahumans. When he had a showdown with Black Lightning at Franklin Terrace where he copied his powers, Williams understood the side effects of this and collapsed from the lack of insulation. Williams is then evacuated by the A.S.A. soldiers. Sara Grey later mentions to Odell that Williams is almost done recuperating. Odell later ordered Williams to remove all classified information. He was able to kill Dr. Jace. When he tried to kill Lynn, she briefly copied Erica's powers and killed Williams in self-defense. Cyclotronic / Ned Creegan (portrayed by Chase Alexander) – A metahuman who can disintegrate anything at will and was part of the same program that gave Carson Williams his powers, according to Gambi's research. When Cyclotronic attacked the A.S.A. facility that Anissa and Reverend Holt were given a tour of, Carson Williams fought Cyclotronic and snapped his neck. It was suspected by Lynn that Cyclotronic was the carrier for a man-made virus that the Markovians used to infect the metahumans in the A.S.A.'s custody. This was confirmed by Jace while Lynn was in the Markovians' clutches, when Jace needed her to create the cure for the virus. Rebecca Larson (portrayed by Amanda Baker) - A newswoman who hosts the news show Larson Line. In season four, she slanders the metahuman community including Lightning. Sinzell Johnson (portrayed by Mac Wells) – A gangster who was exploiting the remnants of the 100 until he is driven away by Lala. He later leads different raids on the A.S.A. 
convoys for supplies and is poisoned by Painkiller. Herbert King (portrayed by Antwan Mills) – The husband of Detective Sergeant King. Following his wife's death, Herbert received an envelope of money from Henderson that his wife had saved for him. Henderson then drove off before Herbert could ask him something. Mary Louise Shepard (portrayed by Andrea Frye) – An old lady living at Franklin Terrace who was formerly a third-grade teacher of Jefferson and Henderson. Travis (portrayed by Garrett Hines) – An A.S.A. commando under the rank of specialist. After getting out of surgery, he was assigned by Sara Grey to assist Gardner Grayle in apprehending Jennifer. Once that was done, Grayle noticed one of the A.S.A.'s control chips in the back of his neck. When Travis engaged Jennifer, the bullets he shot went through her. Grayle knocked Travis out, enabling Jennifer and Brandon to get away. Representative Nagar (portrayed by Jennifer Christa Palmer) – The head of the Congressional committee that runs the hearing at which Black Lightning presented the evidence exposing the A.S.A.'s experiments as well as Markovia's own experiments. She states to Black Lightning, Lynn, Thunder, and Lightning that the A.S.A. will be disbanded and that Odell will be prosecuted. Judge Isabella (portrayed by Tony Isabella) – A judge on the Congressional committee. Judge Von Eeden (portrayed by Trevor Von Eeden) – A judge on the Congressional committee. Introduced in season four Billy Black (portrayed by Reggie Hayes) - The Mayor of Freeland. He is responsible for swearing in Ana Lopez as the new chief of police. He was against Tobias Whale's offer to tear down Garfield High, as he went there when he was young and his children are attending it. During an event in the neutral area that Blackbird established, Mayor Black was secretly killed by Red, who made it look like the 100 were responsible for his death, which left Lopez convinced. 
Tobias had this done in retaliation for Mayor Black declining to have Garfield High torn down. Lydell Green (portrayed by Kelvin Hair) - A member of the 100 who was responsible for accidentally killing Marcel Payton's son during a shootout with the Kobra Cartel. This led to Black Lightning crippling him. Lala was not pleased with what happened and later killed Lydell during the confrontation, causing Lydell's tattoo to manifest on the back of Lala's right hand. The police later found Lydell's body and Lopez suspected that Lightning was responsible. Terry Andrews (portrayed by Tre' Stokes) - A young man who filmed Lightning fighting the 100 and the Kobra Cartel, which Rebecca Larson uses in her slandering of metahumans. When Lightning finds Terry with T.C.'s help, he asks for a selfie, which she grants, and this leads Lightning to start a social media page which Terry follows. Behemoth (portrayed by Nicholas Pulos) - A cage fighter who takes part in Lala's illegal cage fights. He managed to beat up Marcel Payton in a cage match before being defeated by Jefferson Pierce. Wesley Robinson (portrayed by Troy Faruk) - The deputy chief of the Freeland Police Department. He is secretly allied with the 100. Philky (portrayed by Alexander Hodge) - A remnant of the A.S.A. who allies with Khalil and uses his bar in Akashic Valley as a front for Khalil's activities. Donald (portrayed by James Roch) - A former Marine medic who allies with Khalil. He worked on healing Anissa when she was poisoned by Painkiller during Grace's abduction. Maya Odell (portrayed by Sibongile Mlambo) - The daughter of Percy Odell who operates in Akashic Valley. Uriah (portrayed by McKalin Hand) - A new student at Garfield High who befriends Jennifer in her J.J. alias. Jesse Gentilucci (portrayed by Kenneth Trujillo) - A gangster who is associated with Tobias Whale. Khalil visited his club in need of specific information. 
After taking out his men, Khalil and Painkiller use a good cop/bad cop technique to get anything related to Tobias Whale framing Jefferson Pierce for embezzlement. Jesse writes down for them the location of the hidden ledger that would help Khalil. Afterwards, Jesse is poisoned by Painkiller. Tobias later found out about Jesse's death at the hands of Painkiller, causing Tobias to enlist Ishmael to deal with Painkiller. Keith Michaels (portrayed by Will Blagrove) - A lawyer and Lynn's ex-boyfriend whom Anissa enlists to defend Lynn when she was arrested for violation of civil rights. Fantastic Negrito cameos as a hologram version of himself in Philky's bar. References Lists of Arrowverse characters Lists of action television characters Lists of drama television characters Lists of science fiction television characters Black Lightning (TV series)
2152238
https://en.wikipedia.org/wiki/Nagravision
Nagravision
Nagravision (or Nagra Kudelski or simply Nagra) is a company of the Kudelski Group that develops conditional access systems for digital cable and satellite television. The name is also used for their main products, the Nagravision encryption systems. Analog system An analog Nagravision system (Syster) for scrambling analog satellite television programs was used in the 1990s. In this line-shuffling system, the bottom 32 lines of the PAL TV signal are delayed by one video field and read out in permuted order under the control of a pseudorandom number generator. A smartcard security microcontroller (in a key-shaped package) decrypts data that is transmitted during the blanking intervals of the TV signal, mixed with teletext, and extracts the random seed value that determines the line permutation needed to restore the picture. The system also permitted the audio signal to be modulated at 12.8 kHz using a frequency mixer. Digital systems Four versions of Nagravision are in common use for digital satellite television, known as Nagravision, Nagravision Cardmagedon, Nagravision Aladin, and Nagravision Merlin. Nagravision Cardmagedon and Aladin are often confused with each other and grouped under the term "Nagravision 2", which technically does not exist. Nagravision Cardmagedon is, however, a complicated combination of Nagravision Aladin and Mediaguard SECA 2 encryption. Nagravision Merlin is also known as Nagravision 3. The decryption unit is either integrated into a receiver, available as a conditional-access module (CAM), or as one of many encryption schemes supported on a CAM emulator. 
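The line-permutation principle behind the analog Syster system can be sketched in a few lines of code. This is an illustrative model only: the real system derived its permutation from keystream data decrypted by the smartcard, not from a general-purpose PRNG, and the function names below are invented for the sketch.

```python
import random

LINES = 32  # number of picture lines shuffled per field in the analog system


def permutation(seed: int) -> list[int]:
    # Derive a line permutation from the seed recovered from the
    # blanking-interval data (stand-in: Python's PRNG, not the real keystream).
    rng = random.Random(seed)
    order = list(range(LINES))
    rng.shuffle(order)
    return order


def scramble(field: list[str], seed: int) -> list[str]:
    # Transmit the delayed lines in permuted order.
    order = permutation(seed)
    return [field[i] for i in order]


def descramble(field: list[str], seed: int) -> list[str]:
    # A decoder knowing the seed inverts the permutation to restore the picture.
    order = permutation(seed)
    out = [""] * LINES
    for pos, src in enumerate(order):
        out[src] = field[pos]
    return out


field = [f"line{i}" for i in range(LINES)]
assert descramble(scramble(field, seed=1234), seed=1234) == field
```

Without the seed, a receiver sees the 32 lines in an apparently random order each field, which is what produces the characteristic scrambled picture; Markus Kuhn's analysis (linked below) describes attacks that recover the ordering from the picture content alone.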
Nagravision has been adopted all over the world as a conditional access system, with providers: Bell Satellite TV (Canada) (Nagravision 3) BIG TV (India) Cabovisão (Portugal) (Being adopted gradually across the network - Nagravision 3 and SECA Mediaguard) Canal+ and CanalSat (France) (Nagravision Media Access, an updated Nagravision 3 "Merlin") Claro TV (Central America, Dominican Republic) (Nagravision 1 (outdated) with MPEG-4 encoding support custom added) Claro TV (Brazil) (Nagravision 3) Cyfrowy Polsat (Poland) (Nagravision 3) Digi TV (RCS & RDS) (Romania, Hungary, Czech Republic, Slovakia) (Nagravision 3) Movistar+ (previously Digital+) (Spain) (Nagravision 3) Dish Network (USA) (Transition to Nagravision 3 was completed on June 18, 2009) Dream Satellite TV (Philippines) (Nagravision 3) First Media | Cable (Indonesia) (Nagravision 3) HD+ (Germany) (Nagravision 3) Look Communications (Canada) M7 Group S.A. (Several European countries) Movistar (Before Telefonica) (Chile, Colombia, Venezuela, Perú, Bolivia, Ecuador) (Nagravision 3) Vivo TV DTH (Before Telefonica) (Brazil, Nagravision 3) Numericable (Belgium, France, Luxembourg) (Nagravision 3 "Merlin") Sky Deutschland (Germany) (Nagravision 3) StarHub TV (Singapore) (Nagravision 3) Telenet (Belgium) (Nagravision 2) Tivù Sat, Mediaset Premium (Italy) (Nagravision 3 "Tiger" and "Merlin" (4K)) NOS Comunicacoes (Portugal) (Nagravision 3) UPC, now Vodafone (on satellite) (Nagravision 3) Virgin Media (United Kingdom) (Nagravision 3 "Merlin", swapped from Nagravision 1 during 2009/10) Top Up TV (United Kingdom) (Nagravision 3 "Merlin", swapped from SECA2 in 2008) See also Kudelski Group External links Markus Kuhn: Analysis of the Nagravision video scrambling method, 1998 – explains an attack against the old Nagravision system for analog television Digital television Conditional-access television broadcasting
18703801
https://en.wikipedia.org/wiki/Amy%20Rodriguez
Amy Rodriguez
Amy Joy Rodriguez Shilling (; born February 17, 1987) is an American retired professional soccer player who last played as a forward for North Carolina Courage in the National Women's Soccer League (NWSL). She previously played for NWSL teams Utah Royals FC, FC Kansas City, and Boston Breakers, as well as Philadelphia Independence of the WPS. A former member of the United States women's national soccer team, Rodriguez was a world champion in 2015. Currently, Rodriguez is an assistant coach for the University of Southern California women's soccer team. Early life Born in Lake Forest, California, to parents John and Lori, Rodriguez grew up there and attended Santa Margarita Catholic High School in Rancho Santa Margarita, California, where she was a Parade All-American in 2003 and 2004 and the Gatorade Player of the Year in 2005. Her paternal grandparents were from Cuba and immigrated to the United States in the 1950s. She has a sister named Lauren and a brother named Adam. Her paternal uncle, Francis Rodriguez, was a wide receiver for the USC Trojans in 1982–83. In 2005, Rodriguez was considered the nation's top recruit and was named National Player of the Year by Parade Magazine, EA Sports and NSCAA after scoring 17 goals in 15 games for Santa Margarita Catholic during her senior year. She earned local honors as the Orange County Register Player of the Year and Girls Soccer Player of the Year, as well as the Los Angeles Times Girls' Soccer Player of the Year. She was a four-time all-league selection and All-CIF honoree. University of Southern California Rodriguez was recruited by and eventually attended the University of Southern California. She played for the Trojans women's soccer team from 2005 through 2008. She finished her career at USC as the number four all-time scorer and was considered a cornerstone in the team's first-ever NCAA Women's Soccer Championship. 
Rodriguez ranks second all-time at USC in career game-winning goals with 12, fourth in career points with 79, and sixth in career assists with 17. During her freshman year, Rodriguez led the team with nine goals, 25 points and four game-winners. She was named Pac-10 Player of the Week and to the Soccer America National Team of the Week after scoring back-to-back game-winning goals in 1–0 wins over Arizona State University and the University of Arizona. She was named the 2005 Pac-10 Freshman of the Year, a member of the Soccer Times All-America Third Team, and was selected to the All-Pac-10 First Team and Pac-10 All-Freshman Team. She was also named a SoccerBuzz Freshman All-American first-teamer and SoccerBuzz All-West Region first-teamer the same year. In 2006, Rodriguez missed USC's first four games while competing with the United States under-20 national team at the U-20 World Championship. After returning, she started 14 of 16 games and scored the game-winning goal in USC's NCAA first-round upset of Santa Clara. She finished the season with four goals and three assists. Rodriguez appeared in all 25 games as a junior in 2007, starting 21 matches on her way to leading the Trojans in scoring and to the national championship. She finished with a team-high 10 goals along with three assists for 23 points on the year and had three game-winning goals. Her first career two-goal game came in the NCAA Semifinals, where she scored twice in the second half to lead USC to a 2–1 win, earning honors as the NCAA College Cup Most Outstanding Offensive Player. The same year, she was named Umbro/Soccer News Net Player of the Year and was named to the All-Pac-10 Second Team. She also earned SoccerBuzz Second Team All-West Region and NSCAA/adidas Second Team All-West Region honors. Rodriguez was named to the Soccer America Team of the Week on October 2 after notching the game-winner against then number two Portland.
She finished the season ranked sixth all-time in career points (59), seventh in goals (23), sixth in assists (13) and fourth in game-winning goals (9). During her senior year in 2008, Rodriguez missed the first three games of the season while competing with the United States women's national soccer team at the 2008 Summer Olympics, where she won a gold medal. She was USC's top scorer with eight goals (including three game-winning goals) during the season and provided four assists, finishing with 20 points. Rodriguez was named a Preseason All-American and was on the watch list for the Hermann Trophy. She was selected to the All-Pac-10 First Team and was an NSCAA All-American Third Team pick. Club career Rodriguez played for the Los Angeles Strikers as her club team. West Coast FC, 2008 Rodriguez signed to play with West Coast FC of the Women's Premier Soccer League in 2008. However, an injury to Abby Wambach propelled Rodriguez onto the United States women's national team to compete at the 2008 Summer Olympics, and she never appeared for West Coast FC. Boston Breakers, 2009 Upon her return from the 2008 Summer Olympics, the new top-tier women's soccer league in the United States, Women's Professional Soccer, made Rodriguez the first overall pick in the 2009 WPS Draft. Her playing rights were assigned to the Boston Breakers. During the inaugural season, Rodriguez appeared in 17 matches (11 starts, 982 minutes) and scored one goal. The Breakers finished the season in fifth place with a 7–9–4 record. Philadelphia Independence, 2009–2011 On September 29, 2009, Rodriguez was traded with Boston's first round selection in the 2010 WPS Draft to WPS expansion team the Philadelphia Independence, in exchange for Philadelphia's first two selections in the 2010 WPS Draft. During the 2010 season, Rodriguez scored 12 goals and had six assists. She was named the WPS Player of the Month for June 2010.
Rodriguez finished third in the league in goals and scored the winning goal in overtime in the first round of the playoffs against the Washington Freedom to send her team to the Super Semifinal. She finished second on the team in minutes played with 2,001. She was named to the WPS Best XI and was a starter in the WPS All-Star Game. She was also a finalist for the WPS Michelle Akers Player of the Year Award and was named the Independence's Most Valuable Offensive Player. During a 2011 regular season shortened by her national team duty, Rodriguez played in 10 games for the Independence (starting six) for a total of 641 minutes and tallied two regular season goals. She scored in both of Philadelphia's playoff matches, tallying the second goal in the 2–0 victory over magicJack in the Super Semifinal and the equalizer in the 88th minute of the championship game against the Western New York Flash, sending the game to overtime before Philadelphia eventually fell on penalty kicks. FC Kansas City, 2013–2017 In 2013, as part of the NWSL Player Allocation, she joined Seattle Reign FC in the new National Women's Soccer League. About a month after the allocation, Seattle announced that Rodriguez was pregnant and would not be available to play for the 2013 season. She was later traded to FC Kansas City for Kristie Mewis during the 2013–14 off-season, making her debut for the Midwest club in a preseason exhibition match against the Chicago Red Stars. On August 31, 2014, Rodriguez scored two goals for FC Kansas City in a 2–1 win against Seattle Reign FC, both on assists provided by Lauren Holiday, to help the club win the 2014 NWSL Championship. In 2015 FC Kansas City reached the championship game once again, and Rodriguez scored the game-winning (and lone) goal off an assist from Heather O'Reilly to win the 2015 NWSL Championship. Rodriguez missed the 2016 NWSL season as she was pregnant with her second child. Rodriguez returned to FC Kansas City for the 2017 NWSL season.
In the first game of the season she scored a goal in the 48th minute; however, minutes later she suffered a knee injury and was forced to leave the game. It was announced that Rodriguez had torn her ACL and would miss the rest of the season. Utah Royals FC, 2018–2020 After FC Kansas City ceased operations in November 2017, her rights were transferred to the Utah Royals. In February 2018, she committed to joining the Royals. Rodriguez began the 2018 season on the 45-day disabled list as she was still recovering from her knee injury. On April 20 she made her debut for the Royals, and she scored her first goal for Utah a week later, on April 28. Rodriguez finished the season with five goals, the second highest on the team. She signed a contract with Utah prior to the 2019 NWSL season, as she was no longer a player allocated by U.S. Soccer. North Carolina Courage, 2021 On July 22, 2021, she and $60,000 in allocation money were traded from Kansas City to the Courage in exchange for Kristen Hamilton, Hailie Mace and Katelyn Rowland. International career National youth teams Rodriguez played for several United States national youth teams, appearing in two FIFA youth championships: the 2004 U-19 World Championship in Thailand and the 2006 U-20 World Championship in Russia, as well as the 2005 Nordic Cup in Sweden. In total, she played with the U-17, U-19/U-20 and U-21 programs. Senior national team Rodriguez's first appearance for the United States women's national team came on March 11, 2005, against Finland in the Algarve Cup, while she was a senior in high school. She earned two caps, playing as a substitute against Finland and Denmark. In 2008, Rodriguez played in 26 matches, starting 11. She scored her first two full international goals in the first match of the year against Canada and added another against Norway in the Algarve Cup.
She scored six goals with seven assists during the same year, including two game-winners against Brazil in 1–0 victories at the Peace Queen Cup in South Korea and during a friendly match in Commerce City, Colorado, before the Olympics. 2008 Beijing Olympics By the spring of 2008, she had become a regular as forward, and started four of five games at the 2008 Summer Olympics, where she scored against New Zealand. Rodriguez had appeared in 18 senior team matches going into the Olympics. Rodriguez provided the assist on Carli Lloyd's game-winning goal in the first period of extra time in the gold medal match to clinch the title. 2011 FIFA Women's World Cup In 2011, Rodriguez started all 18 games she played for the United States and recorded 1,102 minutes of playing time. She scored four goals with three assists. She played in her first FIFA Women's World Cup at the senior level, starting the first five matches of the tournament. Rodriguez scored one of the biggest goals of her career in the second leg of the playoff series against Italy, pounding in the game-winner in a 1–0 victory on November 27 at Toyota Park in Bridgeview, Illinois. She started both legs of the playoff series and played all but five minutes over the two games. 2012 London Olympics Rodriguez scored five goals in a 2012 CONCACAF Olympic qualifying match between the United States and the Dominican Republic; the final score of the match was 14–0. Rodriguez's performance set a record for goals scored in a single match by one player in CONCACAF Olympic qualifying, and tied the single-game record for the United States national team. Both records were tied two days later by her teammate, Sydney Leroux, in a game against Guatemala. Rodriguez was a member of the team that competed in the 2012 London Olympics. She played four matches as a substitute and received her second Olympic gold medal, the gold medal from the 2008 Beijing Olympics being her first. 
In 2012, Rodriguez had nine goals off the bench, tying Debbie Keller for the second most in United States women's national team history. On December 8, 2012, Rodriguez celebrated her 100th cap with the senior national team during an international friendly against China at Ford Field in Detroit, Michigan. She wore the captain's armband, a team tradition for players in their 100th national team appearance, during the 2–0 win. 2013–2014 In January 2013, Rodriguez announced that she was pregnant with her first child and would miss all of 2013. She returned to the national team in January 2014 and was named to the 2014 Algarve Cup roster. Rodriguez was named to the roster for the 2014 CONCACAF Women's Championship; she appeared in two matches as the United States won the tournament for the seventh time. She appeared in twelve matches in 2014 and scored two goals. 2015 FIFA Women's World Cup Rodriguez was named to the United States roster for the 2015 Algarve Cup and scored in a group stage game against Switzerland. The U.S. won the Algarve Cup for the tenth time. In April, Rodriguez was named to the final 23-player roster for the 2015 FIFA Women's World Cup; this was her second World Cup, as she had also been a member of the team in 2011. At the 2015 World Cup, Rodriguez appeared in two matches. She was in the starting lineup for the quarterfinal match against China PR, which the U.S. won 1–0. The United States went on to win the 2015 World Cup by defeating Japan 5–2. 2016–2018 In January 2016, Rodriguez announced that she was expecting her second child and would miss the 2016 Olympics. After giving birth, Rodriguez returned to the national team in April 2017 in a friendly against Russia. After tearing her ACL in a match with FC Kansas City, Rodriguez missed the rest of 2017. After recovering from her knee injury, Rodriguez was called up in June 2018 for a set of friendlies against China PR.
She was also named to the roster for the 2018 Tournament of Nations; the U.S. won the tournament, but Rodriguez did not get any playing time. She was named to the 35-player provisional roster for the 2018 CONCACAF Women's Championship but was not named to the final 20-player squad. Retirement On January 28, 2022, Rodriguez announced her retirement from professional soccer. She also announced that she had accepted a position as an assistant coach with the women's soccer team at her alma mater, the University of Southern California.

International summary
Updated through 2019-04-22

International goals

Honors and awards

International
Olympic Gold Medal: 2008, 2012
FIFA Women's World Cup: 2015; Runner-up: 2011
CONCACAF Women's Championship: 2014
CONCACAF Women's Olympic Qualifying Tournament: 2012
Algarve Cup: 2008, 2010, 2011, 2015
Four Nations Tournament: 2008, 2011
Tournament of Nations: 2018

Club (with FC Kansas City)
NWSL Championship: 2014, 2015

Individual
WPS Player of the Month: June 2010
WPS Best XI: 2010
WPS All-Star Team: 2010
NWSL First XI: 2014
NWSL Championship Game MVP: 2015

Personal life
Rodriguez is called "A-Rod" by her teammates and soccer commentators. Rodriguez married fellow USC athlete Adam Shilling on October 8, 2011. On January 29, 2013, it was confirmed that Rodriguez and her husband were expecting their first child. On August 6, 2013, their first son, Ryan John Shilling, was born. Rodriguez, along with her husband, is a devout Christian. U.S. Soccer announced that Rodriguez was pregnant with her second child in an article released on December 21, 2015, announcing the roster for the next training camp. Their second child, Luke Shilling, was born on July 1, 2016.

In popular culture

Video Games
Rodriguez was featured along with her national teammates in EA Sports' FIFA video game series starting with FIFA 16, the first time women players were included in the game.
Ticker Tape Parade and White House Honor
Following the United States' win at the 2015 FIFA Women's World Cup, Rodriguez and her teammates became the first women's sports team to be honored with a ticker tape parade in New York City. Each player received a key to the city from Mayor Bill de Blasio. In October of the same year, the team was honored by President Barack Obama at the White House.

References
Match reports

Further reading
Lisi, Clemente A. (2010), The U.S. Women's Soccer Team: An American Success Story, Scarecrow Press
Grainey, Timothy (2012), Beyond Bend It Like Beckham: The Global Phenomenon of Women's Soccer, University of Nebraska Press
Stevens, Dakota (2011), A Look at the Women's Professional Soccer Including the Soccer Associations, Teams, Players, Awards, and More, BiblioBazaar

External links
US Soccer player profile
Philadelphia Independence player profile
USC player profile
Instituto Superior de Engenharia de Coimbra
The Instituto Superior de Engenharia de Coimbra (ISEC) is a higher education polytechnic institution of engineering, based in Coimbra, Portugal. It belongs to the Polytechnic Institute of Coimbra, although with a great degree of administrative, financial, and pedagogic autonomy. History Its origins go back to the Industrial Institute of Coimbra (Instituto Industrial de Coimbra), created in September 1965 as a non-higher-education institute of vocational education. With the approval of decree "Decreto-Lei 830/74" of 31 December 1974, the Industrial Institute of Coimbra was remodeled into what is now known as the Coimbra Institute of Engineering (ISEC), "Instituto Superior de Engenharia de Coimbra". According to that decree, all superior institutes of engineering were of university level and were allowed to award bachelor's, master's and doctoral degrees. At the time, however, ISEC only conferred bachelor's degrees in Civil Engineering, Electrical Engineering, Mechanical Engineering, and Chemical Engineering, lasting 8 semesters (4 years). It lost its university status when it was integrated into the polytechnic subsector in 1988 through the decree "Decreto-Lei nº389/88" of 25 October, and was incorporated into the newly created Polytechnic Institute of Coimbra. It remodeled all courses into 6-semester (3-year) bacharelato degrees in several technical engineering specializations, an arrangement that lasted until the late 1990s. In 1989 a new course in Computer Science Engineering was created, and in 1991 Electromechanical Engineering was introduced. At that time new legal decrees were adopted by the Portuguese State (Administrative Rule 413A/98 of 17 July 1998), and ISEC started to award 3 + 2 licenciaturas bietápicas (a bacharelato plus one or two extra years conferring the licenciatura degree, a degree that had previously been awarded exclusively by the universities). In the mid-2000s ISEC adopted new, more selective admission rules, which were imposed on every Portuguese higher education institution by the State.
In 2006 Biological Engineering was introduced, already in accordance with the Bologna Process, awarding the licenciatura (bachelor's) degree (3 years). After 2006, with the approval of new legislation and the Bologna Process, ISEC, like any other polytechnic or university institution in Portugal, is legally able to provide a first 3-year study cycle, known as the licenciatura, plus a second 2-year cycle conferring the master's degree (in some cases this higher degree may be awarded in cooperation with a partner university). These later changes were developed gradually: the curricula of many courses were deeply changed, and other courses were discontinued (like ISEC's now-defunct polytechnic degree in chemical engineering). In 2007 Biomedical Engineering and Industrial Management Engineering were introduced. Nowadays nine bachelor's programmes compliant with the Bologna Declaration are fully functional. Library The Institute's library occupies the main floor of the Interdisciplinary Building. It provides a main reading room with seating capacity for 230 individuals. In addition, several group study rooms are available for 30 users. The library also provides access to hemerotheca archives containing 100 volumes of specialized published periodicals, organized in alphabetical order by title. Facts and figures Degree types awarded: bachelor's and master's degrees in engineering-related subjects. ISEC offers night classes for working students, a characteristic common among polytechnic institutions in Portugal. Activities Feira de Engenharia de Coimbra (FENGE) is an initiative organized by AEISEC (the Students' Association of ISEC). Its objective is to provide an opportunity for sharing experiences between companies in the fields of engineering and the student community.
More recently, ISEC's Computer Science Department (DEIS) has organized another leading activity, HandsOn@Deis, which brings companies in the information technology sector into direct contact with students through workshops and activities. Students' Association The Students' Association at ISEC was created in 1979 and aims to provide a unified voice for ISEC students, as well as general information, culture, sports and leisure. Since its founding, it has grown into one of the most valued associations on a national scale with regard to educational policy. It is dedicated to providing students with programmes and events. Throughout its existence it has provided two study areas for students, two snack bars and a photocopy centre, several sport and recreation activities, get-togethers, important events such as academic lectures, musical events, photography and computer fairs, an engineering fair with a national reputation, a student support and career opportunities office, and a support service for exchange students under the Socrates/Erasmus/Leonardo programmes.

Programs of study
Bioengineering
Civil Engineering
Computer Science
Computer Science - European Course (ERASMUS)
Electrical Engineering
Industrial Engineering and Management
Mechanical Engineering
Electromechanical Engineering
Biomedical Engineering - specialization in Bioelectronics

See also
List of colleges and universities in Portugal
Higher education in Portugal

References
Color model
A color model is an abstract mathematical model describing the way colors can be represented as tuples of numbers, typically as three or four values or color components. When this model is associated with a precise description of how the components are to be interpreted (viewing conditions, etc.), the resulting set of colors is called a "color space." This section describes ways in which human color vision can be modeled. Tristimulus color space One can picture this space as a region in three-dimensional Euclidean space if one identifies the x, y, and z axes with the stimuli for the long-wavelength (L), medium-wavelength (M), and short-wavelength (S) light receptors. The origin, (S,M,L) = (0,0,0), corresponds to black. White has no definite position in this diagram; rather, it is defined according to the color temperature or white balance as desired or as available from ambient lighting. The human color space is a horseshoe-shaped cone (see also the CIE chromaticity diagram below), extending from the origin to, in principle, infinity. In practice, the human color receptors will be saturated or even damaged at extremely high light intensities, but such behavior is not part of the CIE color space, and neither is the changing color perception at low light levels (see: Kruithof curve). The most saturated colors are located at the outer rim of the region, with brighter colors farther removed from the origin. As far as the responses of the receptors in the eye are concerned, there is no such thing as "brown" or "gray" light. These color names refer to orange and white light respectively, with an intensity that is lower than the light from surrounding areas. One can observe this by watching the screen of an overhead projector during a meeting: one sees black lettering on a white background, even though the "black" has in fact not become darker than the white screen on which it is projected before the projector was turned on.
The "black" areas have not actually become darker but appear "black" relative to the higher intensity "white" projected onto the screen around them. See also color constancy. The human tristimulus space has the property that additive mixing of colors corresponds to the adding of vectors in this space. This makes it easy to, for example, describe the possible colors (gamut) that can be constructed from the red, green, and blue primaries in a computer display. CIE XYZ color space One of the first mathematically defined color spaces is the CIE XYZ color space (also known as the CIE 1931 color space), created by the International Commission on Illumination in 1931. These data were measured for human observers and a 2-degree field of view. In 1964, supplemental data for a 10-degree field of view were published. Note that the tabulated sensitivity curves have a certain amount of arbitrariness in them. The shapes of the individual X, Y and Z sensitivity curves can be measured with reasonable accuracy. However, the overall luminosity function (which in fact is a weighted sum of these three curves) is subjective, since it involves asking a test person whether two light sources have the same brightness, even if they are of completely different colors. Along the same lines, the relative magnitudes of the X, Y, and Z curves are arbitrarily chosen to produce equal areas under the curves. One could as well define a valid color space with an X sensitivity curve that has twice the amplitude. This new color space would have a different shape. The sensitivity curves in the CIE 1931 and 1964 XYZ color spaces are scaled to have equal areas under the curves. Sometimes XYZ colors are represented by the luminance, Y, and the chromaticity coordinates x and y, defined by x = X / (X + Y + Z) and y = Y / (X + Y + Z). Mathematically, x and y are projective coordinates and the colors of the chromaticity diagram occupy a region of the real projective plane.
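As a quick illustration of these chromaticity coordinates, the projection from XYZ to (x, y, Y) can be sketched in a few lines of Python (the function name here is my own, not part of any standard library):

```python
def xyz_to_xyy(X, Y, Z):
    """Project a tristimulus triple (X, Y, Z) onto the chromaticity
    plane, returning (x, y, Y): the projective coordinates x and y
    plus the luminance Y."""
    s = X + Y + Z
    if s == 0:
        raise ValueError("black (0, 0, 0) has no defined chromaticity")
    return X / s, Y / s, Y

# An equal-energy stimulus (X = Y = Z) lands at (1/3, 1/3),
# the flat-spectrum point of the chromaticity diagram.
x, y, lum = xyz_to_xyy(1.0, 1.0, 1.0)
print(round(x, 3), round(y, 3))  # 0.333 0.333
```

Because x and y are ratios, scaling a stimulus up or down (changing its luminance) leaves its chromaticity unchanged, which is exactly the projective behavior described above.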
Because the CIE sensitivity curves have equal areas under the curves, light with a flat energy spectrum corresponds to the point (x,y) = (0.333,0.333). The values for X, Y, and Z are obtained by integrating the product of the spectrum of a light beam and the published color-matching functions. Additive and subtractive color models RYB color model RGB color model Media that transmit light (such as television) use additive color mixing with primary colors of red, green, and blue, each of which stimulates one of the three types of the eye's color receptors with as little stimulation as possible of the other two. This is called "RGB" color space. Mixtures of light of these primary colors cover a large part of the human color space and thus produce a large part of human color experiences. This is why color television sets or color computer monitors need only produce mixtures of red, green and blue light. See Additive color. Other primary colors could in principle be used, but with red, green and blue the largest portion of the human color space can be captured. Unfortunately there is no exact consensus as to what loci in the chromaticity diagram the red, green, and blue colors should have, so the same RGB values can give rise to slightly different colors on different screens. CMY and CMYK color models It is possible to achieve a large range of colors seen by humans by combining cyan, magenta, and yellow transparent dyes/inks on a white substrate. These are the subtractive primary colors. Often a fourth ink, black, is added to improve reproduction of some dark colors. This is called the "CMY" or "CMYK" color space. The cyan ink absorbs red light but reflects green and blue, the magenta ink absorbs green light but reflects red and blue, and the yellow ink absorbs blue light but reflects red and green. The white substrate reflects the transmitted light back to the viewer. 
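Under the idealization just described, each ink subtracts exactly one additive primary from white, so CMY values (in the range 0–1) are simple complements of RGB; a naive black-extraction step then yields CMYK. This is a toy sketch of that idealized relationship, not a production color-separation method:

```python
def rgb_to_cmy(r, g, b):
    # Idealized subtractive relationship: each ink absorbs the light
    # that its complementary additive primary would contribute.
    return 1 - r, 1 - g, 1 - b

def rgb_to_cmyk(r, g, b):
    # Naive black extraction: pull the component common to all three
    # inks into K, then rescale the remaining ink amounts.
    c, m, y = rgb_to_cmy(r, g, b)
    k = min(c, m, y)
    if k == 1:  # pure black: no colored ink needed
        return 0.0, 0.0, 0.0, 1.0
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k

print(rgb_to_cmyk(1.0, 0.0, 0.0))  # red -> (0.0, 1.0, 1.0, 0.0)
```

Real inks deviate from these ideal absorption curves, which is precisely why the K ink discussed next is needed in practice.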
Because in practice the CMY inks suitable for printing also reflect a little bit of color, making a deep and neutral black impossible, the K (black ink) component, usually printed last, is needed to compensate for their deficiencies. Use of a separate black ink is also economically driven when a lot of black content is expected, e.g. in text media, to reduce simultaneous use of the three colored inks. The dyes used in traditional color photographic prints and slides are much more perfectly transparent, so a K component is normally not needed or used in those media. Cylindrical-coordinate color models A number of color models exist in which colors are fit into conic, cylindrical or spherical shapes, with neutrals running from black to white along a central axis, and hues corresponding to angles around the perimeter. Arrangements of this type date back to the 18th century, and continue to be developed in the most modern and scientific models. Background Different color theorists have each designed unique color solids. Many are in the shape of a sphere, whereas others are warped three-dimensional ellipsoid figures—these variations being designed to express some aspect of the relationship of the colors more clearly. The color spheres conceived by Phillip Otto Runge and Johannes Itten are typical examples and prototypes for many other color solid schematics. The models of Runge and Itten are basically identical, and form the basis for the description below. Pure, saturated hues of equal brightness are located around the equator at the periphery of the color sphere. As in the color wheel, contrasting (or complementary) hues are located opposite each other. Moving toward the center of the color sphere on the equatorial plane, colors become less and less saturated, until all colors meet at the central axis as a neutral gray. Moving vertically in the color sphere, colors become lighter (toward the top) and darker (toward the bottom). 
At the upper pole, all hues meet in white; at the bottom pole, all hues meet in black. The vertical axis of the color sphere, then, is gray all along its length, varying from black at the bottom to white at the top. All pure (saturated) hues are located on the surface of the sphere, varying from light to dark down the color sphere. All impure (unsaturated hues, created by mixing contrasting colors) comprise the sphere's interior, likewise varying in brightness from top to bottom. HSL and HSV HSL and HSV are both cylindrical geometries, with hue, their angular dimension, starting at the red primary at 0°, passing through the green primary at 120° and the blue primary at 240°, and then wrapping back to red at 360°. In each geometry, the central vertical axis comprises the neutral, achromatic, or gray colors, ranging from black at lightness 0 or value 0, the bottom, to white at lightness 1 or value 1, the top. Most televisions, computer displays, and projectors produce colors by combining red, green, and blue light in varying intensities—the so-called RGB additive primary colors. However, the relationship between the constituent amounts of red, green, and blue light and the resulting color is unintuitive, especially for inexperienced users, and for users familiar with subtractive color mixing of paints or traditional artists’ models based on tints and shades. In an attempt to accommodate more traditional and intuitive color mixing models, computer graphics pioneers at PARC and NYIT developed the HSV model in the mid-1970s, formally described by Alvy Ray Smith in the August 1978 issue of Computer Graphics. In the same issue, Joblove and Greenberg described the HSL model—whose dimensions they labeled hue, relative chroma, and intensity—and compared it to HSV. 
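The hue geometry described above is implemented in Python's standard library colorsys module, which provides conversions for the HSV and HLS models; a short check confirms the hue angles of the RGB primaries (colorsys returns hue as a fraction of a full turn):

```python
import colorsys

for name, rgb in [("red", (1.0, 0.0, 0.0)),
                  ("green", (0.0, 1.0, 0.0)),
                  ("blue", (0.0, 0.0, 1.0))]:
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    # Scale the fractional hue to degrees: red 0, green 120, blue 240.
    print(name, round(h * 360), s, v)
```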
Their model was based more upon how colors are organized and conceptualized in human vision in terms of other color-making attributes, such as hue, lightness, and chroma; as well as upon traditional color mixing methods—e.g., in painting—that involve mixing brightly colored pigments with black or white to achieve lighter, darker, or less colorful colors. The following year, 1979, at SIGGRAPH, Tektronix introduced graphics terminals using HSL for color designation, and the Computer Graphics Standards Committee recommended it in their annual status report. These models were useful not only because they were more intuitive than raw RGB values, but also because the conversions to and from RGB were extremely fast to compute: they could run in real time on the hardware of the 1970s. Consequently, these models and similar ones have become ubiquitous throughout image editing and graphics software since then. Munsell color system Another influential older cylindrical color model is the early-20th-century Munsell color system. Albert Munsell began with a spherical arrangement in his 1905 book A Color Notation, but he wished to properly separate color-making attributes into separate dimensions, which he called hue, value, and chroma, and after taking careful measurements of perceptual responses, he realized that no symmetrical shape would do, so he reorganized his system into a lumpy blob. Munsell's system became extremely popular, the de facto reference for American color standards—used not only for specifying the color of paints and crayons, but also, e.g., electrical wire, beer, and soil color—because it was organized based on perceptual measurements, specified colors via an easily learned and systematic triple of numbers, because the color chips sold in the Munsell Book of Color covered a wide gamut and remained stable over time (rather than fading), and because it was effectively marketed by Munsell's Company. 
In the 1940s, the Optical Society of America made extensive measurements and adjusted the arrangement of Munsell colors, issuing a set of "renotations". The trouble with the Munsell system for computer graphics applications is that its colors are not specified via any set of simple equations, but only via its foundational measurements: effectively a lookup table. Converting from Munsell coordinates requires interpolating between that table's entries, and is extremely computationally expensive in comparison with converting between RGB and HSL or HSV, which only requires a few simple arithmetic operations. Natural Color System The Swedish Natural Color System (NCS), widely used in Europe, takes a similar approach to the Ostwald bicone. Because it attempts to fit color into a familiarly shaped solid based on "phenomenological" instead of photometric or psychological characteristics, it suffers from some of the same disadvantages as HSL and HSV: in particular, its lightness dimension differs from perceived lightness, because it forces colorful yellow, red, green, and blue into a plane. Preucil hue circle In densitometry, a model quite similar to the hue defined above is used for describing colors of CMYK process inks. In 1953, Frank Preucil developed two geometric arrangements of hue, the "Preucil hue circle" and the "Preucil hue hexagon", analogous to our H and H2, respectively, but defined relative to idealized cyan, yellow, and magenta ink colors. The "Preucil hue error" of an ink indicates the difference in the "hue circle" between its color and the hue of the corresponding idealized ink color. The grayness of an ink is m/M, where m and M are the minimum and maximum among the amounts of idealized cyan, magenta, and yellow in a density measurement. CIELCHuv and CIELCHab The International Commission on Illumination (CIE) developed the XYZ model for describing the colors of light spectra in 1931, but its goal was to match human visual metamerism, rather than to be perceptually uniform, geometrically.
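By the usual Preucil definition, grayness is the ratio of the minimum to the maximum of those three idealized ink amounts. A minimal Python sketch (the function is a hypothetical helper, not taken from any densitometry library):

```python
def preucil_grayness(c, m, y):
    """Grayness = min/max of the idealized cyan, magenta, and yellow
    amounts in a density measurement (0 for a perfectly clean hue)."""
    lo, hi = min(c, m, y), max(c, m, y)
    return lo / hi if hi else 0.0

# A magenta ink contaminated with some cyan and yellow is partly gray:
print(preucil_grayness(0.5, 2.0, 1.0))  # 0.25
```

A grayness of 0 means the ink contains none of the contaminating components; a grayness of 1 means all three amounts are equal, i.e., the measurement is neutral.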
In the 1960s and 1970s, attempts were made to transform XYZ colors into a more relevant geometry, influenced by the Munsell system. These efforts culminated in the 1976 CIELUV and CIELAB models. The dimensions of these models—(L*, u*, v*) and (L*, a*, b*), respectively—are cartesian, based on the opponent-process theory of color, but both are also often described using polar coordinates—(L*, C*uv, huv) and (L*, C*ab, hab), respectively—where L* is lightness, C* is chroma, and h is hue angle. Officially, both CIELAB and CIELUV were created for their color difference metrics ∆E*ab and ∆E*uv, particularly for use defining color tolerances, but both have become widely used as color order systems and color appearance models, including in computer graphics and computer vision. For example, gamut mapping in ICC color management is usually performed in CIELAB space, and Adobe Photoshop includes a CIELAB mode for editing images. CIELAB and CIELUV geometries are much more perceptually relevant than many others such as RGB, HSL, HSV, YUV/YIQ/YCbCr or XYZ, but are not perceptually perfect, and in particular have trouble adapting to unusual lighting conditions. The name HCL (hue, chroma, lightness) is often used as a synonym for CIELCH. CIECAM02 The CIE's most recent model, CIECAM02 (CAM stands for "color appearance model"), is more theoretically sophisticated and computationally complex than earlier models. Its aims are to fix several of the problems with models such as CIELAB and CIELUV, and to explain not only responses in carefully controlled experimental environments, but also to model the color appearance of real-world scenes. Its dimensions J (lightness), C (chroma), and h (hue) define a polar-coordinate geometry. Color systems There are various types of color systems that classify color and analyze their effects. The American Munsell color system devised by Albert H. Munsell is a famous classification that organizes various colors into a color solid based on hue, value, and chroma.
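The polar form is simply a rectangular-to-polar conversion of the two opponent axes. For CIELAB, for example, chroma is the radial distance from the gray axis and hue is the angle of the (a*, b*) pair; a short Python sketch:

```python
import math

def lab_to_lch(L, a, b):
    """Convert cartesian CIELAB (L*, a*, b*) to cylindrical (L*, C*, h):
    lightness is unchanged, chroma is the distance from the gray axis,
    and hue is the angle in degrees, wrapped into [0, 360)."""
    chroma = math.hypot(a, b)                 # C* = sqrt(a*^2 + b*^2)
    hue = math.degrees(math.atan2(b, a)) % 360
    return L, chroma, hue

# A color with a* = 3, b* = 4 lies 5 units from the gray axis,
# at a hue angle of roughly 53.13 degrees:
L, C, h = lab_to_lch(50.0, 3.0, 4.0)
```

The same conversion applies to CIELUV with (u*, v*) in place of (a*, b*).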
Other important color systems include the Swedish Natural Color System (NCS), the Optical Society of America's Uniform Color Space (OSA-UCS), and the Hungarian Coloroid system developed by Antal Nemcsics of the Budapest University of Technology and Economics. Of those, the NCS is based on the opponent-process color model, while the Munsell, the OSA-UCS and the Coloroid attempt to model color uniformity. The American Pantone and the German RAL commercial color-matching systems differ from the previous ones in that their color spaces are not based on an underlying color model. Other uses of "color model" Models of mechanism of color vision The term "color model" is also used to denote a model or mechanism of color vision that explains how color signals are processed from visual cones to ganglion cells; for simplicity, these are called color mechanism models here. The classical color mechanism models are Young–Helmholtz's trichromatic model and Hering's opponent-process model. Though these two theories were initially thought to be at odds, it later came to be understood that the mechanisms responsible for color opponency receive signals from the three types of cones and process them at a more complex level. A widely accepted model is called the zone model. A symmetrical zone model compatible with the trichromatic theory, the opponent theory, and Smith's color transform model is called the decoding model. Vertebrate evolution of color vision Vertebrate animals were primitively tetrachromatic: they possessed four types of cones—long-, mid-, and short-wavelength cones, plus ultraviolet-sensitive cones. Today, fish, amphibians, reptiles and birds are all tetrachromatic. Placental mammals lost both the mid- and short-wavelength cones. Thus, most mammals do not have complex color vision—they are dichromatic—but they are sensitive to ultraviolet light, though they cannot see its colors.
Human trichromatic color vision is a recent evolutionary novelty that first evolved in the common ancestor of the Old World primates. Our trichromatic color vision evolved by duplication of the long-wavelength-sensitive opsin gene, found on the X chromosome. One of these copies evolved to be sensitive to green light and constitutes our mid-wavelength opsin. At the same time, our short-wavelength opsin evolved from the ultraviolet opsin of our vertebrate and mammalian ancestors. Human red–green color blindness occurs because the two copies of the red and green opsin genes remain in close proximity on the X chromosome. Because of frequent recombination during meiosis, these gene pairs can easily be rearranged, creating versions of the genes that do not have distinct spectral sensitivities. See also
Color appearance model
Comparison of color models in computer graphics
RGBW color model
RGBY color model
External links
Illustrations and summaries of RGB, CMYK, LAB, HSV, HSL, and NCS
Demonstrative color conversion applet
HSV Colors by Hector Zenil, The Wolfram Demonstrations Project
HSV to RGB by CodeBeautify
Reeves Electronic Analog Computer
The Reeves Electronic Analog Computer (commonly shortened to REAC) was a family of early analog computers produced in the United States by Reeves Instrument Corporation from the 1940s through the 1960s. History Origins In the 1940s, Reeves Instrument Corporation began developing ideas for a digital computation machine. They hired mathematician Samuel Lubkin, of the original team who designed the UNIVAC, to lead the project. The original proposal was to build a machine called the REEVAC, which was to have been based on the design of the EDVAC machine, which Lubkin had also done design work on. For unknown reasons, Reeves decided to scrap this approach, and Lubkin left the company for a job with the National Bureau of Standards (the US government organization later renamed the National Institute of Standards and Technology). Reeves then decided to move forward with an analog computer instead. In 1946, the Office of Naval Research launched a project codenamed Project Cyclone at Reeves to develop a general-purpose analog computing machine to further naval objectives — it is unclear if this was the cause of Reeves's change of direction or a consequence. This was the beginning of a 20-year partnership between Reeves and the Navy. For the entire 20-year duration of Project Cyclone, Reeves would continually furnish the Navy with the most recent REAC model. Commercial production In 1948, Reeves began putting the REAC machine into commercial production. The original price was US$14,320 for the machine itself, but fully loaded with all the necessary peripherals it cost US$37,000 (about US$425,000 in 2021 dollars). By 1951, there were more than sixty REAC machines in use at universities, private (usually engineering) companies, and government and military institutions. Today, the REAC is credited with proving that a general-purpose analog computer could be a viable commercial product.
Notable early adopters included the following:
Naval Air Missile Test Center (now the Pacific Missile Test Center)
United States Naval Research Laboratory
RAND Corporation
North American Aviation
Applied Physics Laboratory
University of Minnesota
Ames Research Center at NASA
Uses REAC computers played a role in the development of many military projects, such as the Ryan X-13 Vertijet. A REAC was the first computer at Naval Air Weapons Station China Lake, and was instrumental in running simulations in the development of the first anti-radiation missile. It also was used in the Aeronautical Computer Laboratory at Naval Air Warfare Center Warminster. Hardware The machine as it arrived from the manufacturer consisted of several cabinets with connecting cables, and was described as "essentially an Erector Set whose pieces are electronic or electro-mechanical parts." The average runtime for a single problem was about one minute. Models There were seven models produced during the life of the system:
REAC 100 (1947)
REAC 200 (1952)
REAC 300 (1953)
REAC 400 (1956)
REAC 500 (1963)
REAC 550 (1964)
REAC 600 (1965)
External links
REAC 600 sales brochure
Microsoft Office XP
Microsoft Office XP (codenamed Office 10) is an office suite created and distributed by Microsoft for the Windows operating system. Office XP was released to manufacturing on March 5, 2001, and was later made available to retail on May 31, 2001, less than five months prior to the release of Windows XP. It is the successor to Office 2000 and the predecessor of Office 2003. A Mac OS X equivalent, Microsoft Office v. X, was released on November 19, 2001. New features in Office XP include smart tags, a selection-based search feature that recognizes different types of text in a document so that users can perform additional actions; a task pane interface that consolidates popular menu bar commands on the right side of the screen to facilitate quick access to them; new document collaboration capabilities; support for MSN Groups and SharePoint; and integrated handwriting recognition and speech recognition capabilities. With Office XP, Microsoft incorporated several features to address reliability issues observed in previous versions of Office. Office XP also introduces separate Document Imaging, Document Scanning, and Clip Organizer applications. The Office Assistant (commonly known as "Clippy"), which was introduced in Office 97 and widely reviled by users, is disabled by default in Office XP; this change was a key element of Microsoft's promotional campaign for Office XP. Office XP is incompatible with Windows 95 and earlier versions of Windows. Office XP is compatible with Windows NT 4.0 SP6 or later, Windows 98, Windows 2000, Windows Me, Windows XP, Windows Server 2003, Windows Vista, Windows Server 2008 and Windows Server 2008 R2. It is not officially supported on Windows 7 or later versions of Windows. It is the last version of Office to support Windows NT 4.0 Service Packs 6 and 6a, Windows 98, Windows 2000 before SP3, and Windows Me, as the following version, Office 2003, only supports Windows 2000 SP3 or later.
However, the installer must update system files when started on Windows NT 4.0 and Windows 98. Office XP received mostly positive reviews upon its release, with critics praising its collaboration features, document protection and recovery functionality, and smart tags; however, the suite's handwriting recognition and speech recognition capabilities were criticized, and were mostly viewed as inferior to similar offerings from competitors. As of May 2002, over 60 million Office XP licenses had been sold. Microsoft released three service packs for Office XP during its lifetime. Mainstream support for Office XP ended on July 11, 2006, and extended support ended on July 12, 2011. History At a meeting with financial analysts in July 2000, Microsoft demonstrated Office XP, then known by its codename, Office 10, which included a subset of features Microsoft designed in accordance with what at the time was known as the .NET strategy, one by which it intended to provide extensive client access to various web services and features such as speech recognition. SharePoint Portal Server 2001, then codenamed Tahoe, was also in development at this time and was slated to improve collaboration for users of Office 2000 and Office 10. In August, Microsoft released Office 10 Beta 1 for product evaluation purposes. During this period, Office 10 was characterized as an interim release between its predecessor, Office 2000, and a future version, and was planned to include new formatting options; integrated speech recognition; improved collaboration capabilities and enhanced support for web services; and a web portal complete with web parts. Beta 1 was compatible with Windows 95, Windows 98, Windows NT 4.0 SP5, and Windows 2000. Before the release of Office 10 Beta 2, there was speculation that Microsoft intended to rebrand the new product as "Office 2001," "Office 2002," "Office.NET," or "Office XP."
The latter was shorthand for eXPerience and was positioned as a brand that would emphasize the new experiences enabled by the product. At the time, Microsoft intended to name the latest version of Visual Studio as "Visual Studio .NET" but unnamed sources stated that the company did not desire to do the same with Office 10, as the product was only partially related to the company's .NET strategy. Microsoft ultimately decided on "Office XP" as the final name of the product and used the same brand for Windows XP—then codenamed Whistler—which was developed concurrently. In spite of this, individual Office XP products such as Excel, PowerPoint, and Word would continue to use Microsoft's year-based naming conventions and were named after the year 2002. Office XP Beta 2 was released to 10,000 technical testers in late 2000. Beta 2 introduced several improvements to setup tools. The Custom Maintenance Wizard, for example, now allowed setup components to be modified after their installation, and the setup process of Office XP itself used a new version of Windows Installer. Microsoft also terminated the product's support for Windows 95 and Windows NT 4.0 SP5. After the release of Beta 2, Microsoft announced a Corporate Preview Kit Program for Office XP that would allow up to 500,000 corporate customers to evaluate a Corporate Preview Beta version of the product on a total of 10 machines per copy; individual copies cost $19.95 and expired on August 31, 2001. Office XP was released to manufacturing on March 5, 2001, and was later made available to retail on May 31, 2001. Service packs Microsoft released three service packs for Office XP throughout the product's lifecycle that introduced security enhancements, stability improvements, and software bug fixes; each service pack was made available as separate Client and Full File update versions. 
Client updates were for Office XP CD-ROM installations, were obtainable from Microsoft Office Update or as standalone downloads, and required Office XP installation media—these updates could not be uninstalled. Full File updates did not require access to installation media and were intended for network administrators to deploy updates to Office XP users who installed the product from a server location; users could also manually install Full File updates. Full File updates require Windows Installer 2.0; Office XP shipped with version 1.1. Windows Installer 2.0 shipped with Windows XP. On September 25, 2001, Microsoft released Windows Installer 2.0 redistributables for Windows 9x, as well as for Windows NT 4.0 and Windows 2000. Service Pack 1 (SP1) was released on December 11, 2001, and included performance and security improvements, as well as stability improvements based on error reports from users. SP1 also resolved an issue that prevented documents from being saved to MSN Groups. Service Pack 2 (SP2), released on August 21, 2002, included all previously available standalone updates; some of those previously released included cumulative security patches for Excel 2002 and Word 2002 to address potentially malicious code embedded in document macros. The Full File version of SP2 is cumulative—SP1 does not have to be installed—while the Client version requires SP1 to be installed. Only Full File updates released after SP2 can be applied directly to Client installations of Office XP. Earlier updates were designed to update only administrative images and fail when applied directly to clients. Service Pack 3 (SP3) was released on March 30, 2004, and included all previously released updates, as well as previously unreleased stability improvements based on feedback and error reports received from users. SP3 does not require any earlier service packs to be installed.
However, if an Office XP client was updated from a patched administrative image, the Full File version of SP3 must be installed. Support for Office XP RTM with no service packs installed ended on December 31, 2002. Office XP Service Pack 1 became unsupported on August 21, 2003, and Office XP Service Pack 2 reached end-of-life on March 9, 2005, almost two and a half years after it reached general availability. New features User interface Office XP has a streamlined, flatter appearance compared to previous versions of Office. According to Microsoft, this change involved "removing visually competing elements, visually prioritizing items on a page, increasing letter spacing and word spacing for better readability, and defining foreground and background color to bring the most important elements to the front." Smart tags Excel 2002 and Word 2002 introduce smart tags, commands for specific types of text including addresses, calendar dates, personal names, telephone numbers, ticker symbols, or tracking numbers in documents. A smart tag is denoted by a dotted purple underline underneath actionable text in a document; hovering over this text with the mouse cursor displays an icon that presents a list of related commands when invoked with a mouse click or a keyboard shortcut. A ticker symbol smart tag in Excel can present the latest stock information in a cell within a workbook, for example, while a contact name smart tag in a Word document can display options to send an e-mail message to—or schedule a meeting with—that contact. Excel and Word support extensible smart tags that allow developers and organizations to display custom commands related to specific information. The smart tags used by Word are also available in Outlook 2002 if the former is configured as the default e-mail editor. The AutoCorrect and Paste Options commands in previous versions of Office have been updated to include smart tags that are shared among all Office XP programs.
The AutoCorrect smart tag provides individual options to revert an automatic correction or to prohibit an automatic correction from occurring in the future, and also provides access to the AutoCorrect Options dialog box. It is represented as a small, blue box when the mouse cursor is positioned over corrected text. The Paste Options smart tag provides options to retain original formatting of content, change the formatting based on the currently active program, or to provide contextually specific characteristics to content after users paste it from the clipboard. After the release of Office XP, Microsoft provided a repository for downloadable smart tags on its website. Examples of third-party companies that produced smart tags after the release of Office XP include ESPN, Expedia, FedEx, and MSNBC. Microsoft released a Euro Currency Converter smart tag when new euro coins and notes were introduced on January 1, 2002. Task panes Office XP introduces a task pane interface that consolidates popular menu bar commands on the right side of the screen to facilitate quick access to them. Office XP includes Startup, Search, Clipboard, and Insert Clip Art task panes, as well as task panes that are exclusive to certain programs. Word 2002, for example, includes a task pane dedicated to style and formatting options. Users can switch between open task panes through the use of back and forward buttons; a drop-down list also presents specific task panes to which users can switch. The default Startup task pane is automatically available when users launch an Office XP program and presents individual commands to open an existing file, create a new blank file or one from a template, add a network location, or open Office Help. The Search task pane includes individual Basic and Advanced modes and allows users to query local or remote locations for files. The Basic mode allows users to perform full-text searches, while the Advanced mode provides additional file property query options. 
An index such as the Indexing Service can improve how quickly results are returned after a search is performed. The Insert Clip Art task pane is available in Excel, FrontPage, PowerPoint, and Word and provides options to search for and insert online clip art into files. The Office Clipboard has been redesigned as the Clipboard task pane across all Office XP programs and can accommodate up to 24 clipboard items compared to 12 in Office 2000. Clipboard items provide a visual representation to help users distinguish different types of content. The Office Clipboard task pane opens when at least two items are copied. Other UI changes A Compress Pictures button on the Picture toolbar allows users to optimize images inserted into files. E-mail messages sent from all Office XP programs support an optional introductory field. Internet Explorer automatically launches the Office XP program used to create an HTML document when users print that document. Microsoft account users could store their documents in private or public locations at MSN Groups. Office XP introduces a My Data Sources directory in My Documents that provides access to recently opened data sources. Security features in all Office programs have been consolidated into a single Security tab. The Insert Hyperlink dialog box presents a list of files and folders from the current web page folder, allowing users to navigate between open web pages. The Web Options dialog box allows users to create documents tailored to Internet Explorer 4, Internet Explorer 5, Internet Explorer 6, or various versions of Netscape. When users revert automatically corrected text in an Office document to its original spelling, the text will not correct itself again. File formats XML support Access 2002 and Excel 2002 support exporting and importing XML. Users can also save Excel workbooks as XML spreadsheets.
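To illustrate the round-trip idea behind XML import and export, this Python sketch builds and re-parses a minimal spreadsheet-like XML document using only the standard library; the element names here are simplified stand-ins, not the actual Excel 2002 XML Spreadsheet schema:

```python
import xml.etree.ElementTree as ET

# Build a tiny workbook: one worksheet, one row, two cells.
workbook = ET.Element("Workbook")
sheet = ET.SubElement(workbook, "Worksheet", Name="Sheet1")
row = ET.SubElement(sheet, "Row")
for value in ("Item", "42"):
    cell = ET.SubElement(row, "Cell")
    cell.text = value

xml_text = ET.tostring(workbook, encoding="unicode")

# Round-trip: any XML-aware tool can read the serialized document back.
parsed = ET.fromstring(xml_text)
cells = [c.text for c in parsed.iter("Cell")]
print(cells)  # ['Item', '42']
```

Because the serialized form is plain text, such documents can be generated and consumed by programs other than Excel, which was the main appeal of the format.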
Office Open XML Compatibility Pack In 2006, Microsoft released a compatibility pack for Office 2000 SP3, Office XP SP3, and Office 2003 SP1 that enables users to open, edit, and save Excel, PowerPoint, and Word Office Open XML documents introduced in Office 2007. The compatibility pack requires Windows 2000 SP4, Windows Server 2003, Windows XP SP1, or later versions of Windows. The update also enables compatibility with documents created in Office 2010, Office 2013, and Office 2016. Alternative user input Handwriting recognition Office XP introduces handwriting recognition in all Office programs, allowing users to write with a mouse or stylus instead of entering text by typing on a keyboard. Users can insert handwritten notes into Excel, add handwritten comments to PowerPoint presentations, send handwritten e-mail messages with Outlook, or write directly into Word documents. Notes written with a handheld PC or a Pocket PC can be converted into Word documents, and handwritten content in Word documents can be converted to text. Word must be the active e-mail editor in Outlook before handwritten e-mail messages can be sent. Once installed, handwriting functionality is also available in Internet Explorer 5 and Outlook Express 5 or later. Handwriting recognition engines are available for the English, Simplified Chinese, Traditional Chinese, Japanese, and Korean versions of Office XP. The downloadable Tablet Pack for Office XP provided an extension for Windows Journal to reuse notes as Outlook 2002 items and to import meeting information from Outlook 2002 into notes. Speech recognition Speech recognition based on Microsoft Research technology is available for all Office XP programs, allowing users to dictate text into active documents, to change document formatting, and to navigate the interface by voice. 
The speech recognition feature encompasses two different modes: Dictation, which transcribes spoken words into text; and Voice Command, which invokes interface features. Speech recognition can be installed during Office XP setup or by clicking the Speech option in the Tools menu in Word 2002. When installed, it is available as a Microphone command on the Language toolbar that appears in the upper-right corner of the screen (lower-right corner in East-Asian versions of Office XP). When launched for the first time, speech recognition offers a tutorial to improve recognition accuracy, which begins by providing instructions to adjust the microphone for optimal performance. Speech recognition uses a speech profile to store information about a user's voice. Users can configure speech recognition settings, including pronunciation sensitivity in voice command mode, accuracy and recognition response time in dictation mode, and microphone settings through the Speech control panel applet. The Regional and Language Options applet provides Language toolbar and additional settings. Speech recognition engines are available for the English, Japanese, and Simplified Chinese languages. Microsoft recommended its SideWinder Game Voicechat device as a microphone to use with speech recognition. Reliability With Office XP, Microsoft incorporated several features to address reliability issues observed in previous versions of Office: Application Recovery: Users can safely restart or terminate unresponsive Office programs—and save open documents before termination—from a utility that is accessible from the Office Tools group on the Windows Start menu. Automatic Recovery: Excel, PowerPoint, Publisher, and Word periodically save open documents in the background so the latest revision can be opened if an error occurs; users can configure how often files are saved, discard the latest revision, overwrite a file with it, or save it as a separate file. 
Document Recovery: Access, Excel, PowerPoint, and Word present users with an option to immediately save open files when an error occurs before a program is closed or restarted to prevent loss of data. Error Reporting: Users can optionally submit error report information to Microsoft for analysis to improve Office XP. Error reporting was instrumental in providing solutions included in all three Office XP service packs to address common issues. Error reports can also be submitted to corporate departments. Repair and Extract: Excel and Word can automatically recognize and repair corrupt documents; users can also manually repair documents from these programs. Safe Mode: Office XP programs will automatically launch in Safe Mode, a diagnostic mode that allows programs to bypass the source of a problem if they are unable to start properly. Security Excel, PowerPoint, and Word have been updated to provide password encryption options based on CryptoAPI. Additionally, all Office XP programs provide options for users to digitally sign documents. Installation and deployment When upgrading from a previous version of Office, Office XP retains the user's previous configuration. Office XP can also be installed directly from an administrative image hosted on a web server via HTTP, HTTPS, or FTP. The Office Resource Kit includes various improvements to deployment functionality when compared with the Office 2000 version. A new Setup INI Customization Wizard allows administrators to customize the Office XP INI configuration file prior to deployment. The Custom Installation Wizard can prohibit the installation, use, or uninstallation of programs or features such as the Run from Network and Installed on First Use setup options. Finally, the Custom Maintenance Wizard has been updated to provide customization options to configure Office XP including user preferences and security settings. 
The Save My Settings Wizard, introduced in Office 2000 as an optional download for Microsoft account users to remotely store their Office settings to the Office Update web site, has been updated to support importing and exporting backups to local storage or to a network share. In an effort to curtail software piracy, Microsoft incorporated product activation technology into all versions of Office XP to prohibit users from installing a single copy of the software in a manner that violates the end-user license agreement (EULA). The EULA allows a single user to install one copy each on a primary device and a portable device such as a laptop. Users who make substantial hardware changes to an Office XP device may need to reactivate the software through the Internet or by telephone. Product activation does not require personally identifiable information. Office XP introduced an optional subscription-based activation model that allowed consumers to annually license the product and receive incremental updates at a reduced price when compared with the cost of a full retail version. Microsoft originally intended to deliver the activation model to United States customers after the retail availability of Office XP on May 31, 2001, but later decided to make it available to consumers in "a few select locations" instead, citing a more cautious delivery approach. In spite of this, Microsoft distributed optical media and a single subscription to authorized U.S. retail partners who attended teamMicrosoft Live! events. As part of a pilot experiment, consumers in Australia, France, and New Zealand could purchase a subscription for Office XP starting in May 2001; the worldwide release of the activation model was contingent on the success of the pilot experiment, but Microsoft terminated support for subscriptions in 2002 based on feedback and research that demonstrated it was not well understood by consumers. 
Office 365, released over a decade after Office XP, has since reintroduced subscription-based licenses to consumers.

User assistance

A new "Ask a Question" feature appears in the top-right corner of all Office XP programs and allows users to type natural language questions and receive answers without opening the Office Assistant ("Clippy") or Office Help. Additionally, Office Help has been updated to aggregate and display content from the Internet in response to a query. The Office Assistant is now disabled by default and only appears when Help is activated.

New application-specific features

New features in Word 2002

A Clear Formatting option, which reverts all changes made to selected text but retains hyperlinks
A Drawing Canvas that allows content such as WordArt to be aligned to a fixed position
For Indian languages, fonts with the Devanagari and Tamil scripts are now rendered correctly thanks to an updated version of Uniscribe; proofing tools were also introduced for Hindi, Marathi, Gujarati, Kannada, Punjabi, Tamil, and Telugu
Non-real-time collaborative editing, allowing multiple users across a file share or server to edit a document and merge changes without requiring it to be unlocked; when a user is finished editing and closes the shared document, other users can view their edits and merge their own changes
Multiple portions of text can be selected simultaneously in a document
Styles for bulleted lists and tables
Support for filtered web pages, which allows users to reduce the size of an HTML document by removing XML tags and Word-specific formatting
Support for watermarks in documents
The General tab of the Properties dialog box now displays the file format of an open document
Word count toolbar

New features in Excel 2002

Border drawing with grid, line color, style, and weight options
Colors can now be added to tabs in a worksheet
Drawings and pictures can now be inserted directly as headers or footers
Function argument information in tooltips
If a cell contains a large number that its associated column is too narrow to display ("###"), Excel displays the entire number in a tooltip
Numbers can be sorted as text to prevent unexpected sorting results that occur in mixed lists of numbers and text
Phrasing of Excel alerts has been revised to be more concise
Users can evaluate formulas on a sequential basis to determine how Excel arrived at a calculation result
With a Watch function, users can monitor the results of multiple cells in a separate window even when working on a different sheet or workbook

New features in Outlook 2002

AutoComplete for email addresses
Colored categories for calendar items
Group schedules
Hyperlink support in email subject lines
Native support for Outlook.com
Improved search functionality, including the ability to stop a search and resume it later
Incremental search and content indexing are available if Windows Search is installed
Lunar calendar support
MSN Messenger integration
Performance improvements
Preview pane improvements, including the ability to open hyperlinks, respond to meeting requests, and display email properties without opening a message
Reminder window that consolidates all reminders for appointments and tasks in a single view
Retention policies for documents and email
Security improvements, including the automatic blocking of potentially unsafe attachments and of programmatic access to information in Outlook
SP1 introduced the ability to view all non-digitally signed or unencrypted email as plain text
SP2 allows users to prevent, through the Registry, the addition of new email accounts or the creation of new Personal Storage Tables
SP3 updates the object model guard security for applications that access messages and other items
Smart tags when Word is configured as the default email editor

New features in PowerPoint 2002

GDI+ accelerated graphic rendering, effects, and printing
Images in slides can now be flipped and rotated
Multiple slide masters in presentations
Native support for diagrams such as cycle, pyramid, and Venn diagrams
Presentation broadcast improvements
Presenter tools that allow users to view details on upcoming bullets or slides as well as speaker notes, and to navigate to any slide without these actions being visible to the audience; this feature requires a multi-monitor configuration
Print preview
Smart tags for the Apply Automatic Layout and AutoFit features, the latter of which has been updated to automatically resize fonts to fit slides as users type and to remove the minimum font size limitation
Support for additional paper sizes for printing
Thumbnails of slides are now displayed within a left-hand pane of the interface
Users can now snap objects to a grid and display drawing guides

New features in Access 2002

A new file format that enables faster access and data processing for large databases; the Access 2000 format is used by default
A new Stored Procedure Designer allows users to create or modify simple Microsoft SQL Server stored procedures
Batch updates for Access projects
Conversion error logging, which creates a table with information about each error that occurs during Access 95, Access 97, or Access 2000 database conversion
Enhanced international support, including the ability to change the left-to-right reading directionality
Support for multiple Undo and Redo operations
Support for PivotCharts and PivotTables

New features in Publisher 2002

Customizable toolbars
Font schemes that can be shared with Word
Header and footer support
Multiple publications can now be open simultaneously
Print preview
Support for OfficeArt
The new Format dialog box combines the Colors and Lines, Layout, Picture, Size, Text Box, and Web tabs
Users can export objects, pages, or groups of objects and pages as images
Users can open, edit, and save publications as HTML
Visual Basic for Applications (VBA) support
Word documents can now be imported directly into Publisher

New features in FrontPage 2002

Automatic web content from third parties, including Expedia and MSNBC
Internet forums and online surveys can be integrated with websites
HTML 4 features, including buttons and fieldsets in forms, inline frames, and language attributes
Tabs to navigate between different pages within the interface
Tags in HTML pages can be automatically reformatted to be XML-compliant
Themes from previous FrontPage versions have been updated
Unicode support
Users can now publish websites in the background and continue to make edits during the publishing process
Usage analysis reports in daily, weekly, or monthly increments allow users to determine how often a web page is accessed and the URL from which this access originates; reports can be exported to Excel or as HTML

Removed features

Binder was replaced by Unbind, a program that can extract the contents of a Binder file. Unbind can be installed from the Office XP CD-ROM.
Microsoft Photo Editor no longer supports the PCX image format
Office XP Small Business Edition removes the Small Business Customer Manager during an upgrade from Office 2000; the feature is not removed during an upgrade to the Professional edition. Users who want to retain the Small Business Customer Manager must apply the Small Business Tools 2000 patch from the second Office 2000 CD-ROM before upgrading to the Small Business Edition of Office XP.
Microsoft Map was removed from Excel 2002
In Excel 2002, several add-ins are no longer available. Some, but not all, were integrated into Excel 2002 and thus made redundant.
The .DBF files for Samples.xls and two Japanese templates are removed in Excel 2002
Microsoft Query is no longer available
In PowerPoint 2002, the Custom Soundtracks add-in is no longer supported and the Routing Recipient option on the Send To menu was removed
A number of features were removed in Outlook 2002

Editions

The component products were packaged together in various suites. Some of these editions were available as retail packages in either full or upgrade versions, others as full OEM versions for inclusion with new PCs, and still others as volume license versions that required no activation. All editions provided the core components of Word, Excel, and Outlook, and all editions except the Small Business edition provided PowerPoint. Additionally, some copies included Office XP Media Content on a separate disc.

System requirements

Reception

Microsoft Office XP received mixed to positive reviews after its release. CNET praised the new collaboration and data recovery features and stated that Office XP offered a "host of incremental improvements" over its predecessor, Office 2000, but ultimately concluded that "most enhancements and additions are better suited for groups than individuals." Criticism was also directed at the productivity suite's strict hard disk space requirement and its incompatibility with Windows 95.
Nevertheless, CNET awarded Office XP a 4-star editors' rating. PC Magazine rated Office XP 4 stars out of 5 and praised the product's emphasis on user control, particularly in regards to customization options for features introduced in previous versions, and regarded it as "one of the few Microsoft upgrades that offers almost no pains with its significant gains." The New York Times stated that Office XP "isn't so much a list of new features as it is an improved arrangement of old ones," but offered praise for the new collaboration features, which were regarded as a "huge leap" from previous versions. Paul Thurrott regarded Office XP as "a must-have upgrade for writers such as myself," though he also stated that, without the new smart tags feature, it "has the feel of a minor upgrade with numerous useful, but small, changes." While most assessments of Office XP were positive, the speech recognition feature was frequently criticized due to its inaccuracy and lack of advanced functionality. CNET regarded it as "especially lame" because of its inability to recognize text editing commands such as "select the sentence" and because it required users to manually switch between command and dictation modes. PC Magazine stated that both the speech recognition and handwriting recognition features were not "reliable enough for general use." However, in a later assessment, PC Magazine stated that the "speech recognition is reasonably accurate, but there are very few commands for editing and correcting text" and recommended Dragon NaturallySpeaking, IBM ViaVoice, or Voice Xpress for dictation. The New York Times speculated that Microsoft had little to no confidence in the feature, as it is not installed by default and no microphone is included with Office XP; however, it concluded that it was "not bad for a freebie, especially if you would rather get the first draft down quickly and clean up the recognition errors later." 
Paul Thurrott stated that "the voice recognition is so bad it's almost not even worth discussing," concluding that it "is sort of a joke" when compared with mature products such as Dragon NaturallySpeaking.

See also

.NET My Services
Comparison of office suites
List of office suites
Windows Speech Recognition
https://en.wikipedia.org/wiki/Joint%20Interagency%20Task%20Force%20South
Joint Interagency Task Force South
Joint Interagency Task Force South is a United States multiservice, multiagency task force based at Naval Air Station Key West (Truman Annex), Key West, Florida. It conducts counter-illicit trafficking operations, intelligence fusion, and multi-sensor correlation to detect, monitor, and hand off suspected illicit trafficking targets; it also promotes security cooperation and coordinates country team and partner nation initiatives in order to defeat the flow of illicit traffic. JIATF South is a subordinate command of United States Southern Command and is commanded by a Coast Guard flag officer.

History

In response to a need for unified command and control of drug interdiction activities, the FY 1989 National Defense Authorization Act designated the Department of Defense as the lead agency for the detection and monitoring program targeted against the aerial and maritime traffic attempting to bring drugs into the United States. Commander, Joint Task Force FOUR (CJTF-4) in Key West; Commander, Joint Task Force FIVE in Alameda, California; and Commander, Joint Task Force SIX in El Paso, Texas were established to direct the anti-drug surveillance efforts in the Atlantic/Caribbean, Pacific, and Mexican border areas respectively. The Joint Task Forces began operating in October 1989. The Joint Task Force 4 operations center received radar data from the AN/FPS-118 over-the-horizon radar located at Moscow Air Force Station, Maine, until the system was turned off and placed in "warm storage" after the end of the Cold War. Bill Clinton's Presidential Decision Directive 14 of 3 October 1993 led to a reorganisation of the U.S. military anti-drug organization. On 7 April 1994, Dr. Lee P. Brown, Director of the Office of National Drug Control Policy, signed the National Interdiction Command and Control Plan, which directed the establishment of three national interagency task forces (JIATF East in Key West, Florida; JIATF South in Panama; and JIATF West in Alameda, California).
On 1 June 1997, the Commander in Chief of the U.S. Southern Command expanded his area of responsibility to include the Caribbean and the waters bordering South America, and assumed command and control of JIATF East. In compliance with the 1979 Panama Canal Treaty and the necessity to complete the military drawdown in Panama by the end of 1999, the decision was made to merge JIATF South and JIATF East into one organization. Transfer of the JIATF South mission to the merged JIATF was completed on 1 May 1999. Owing to the previous history of the command, Task Groups 4.1, 4.2, 4.3, and 4.4, among others, remain in use controlling U.S. and allied assets assigned to JIATF South. In February 2007, a Dutch magazine described the relationships as follows: under the command of the Director JIATF South, the U.S. Tactical Commander held the position of Commander Task Group 4.1; United States Air Force forces formed CTG 4.2; U.S. Navy forces CTG 4.3; the Director of the Dutch Caribbean Coast Guard (DCCG), who is always the commander of the Dutch Navy in the Caribbean area (CZMCARIB), served as Commander Task Group 4.4 (CTG 4.4); and U.S. Customs forces CTG 4.5. Since 2008 an additional Task Group, known as CTG 4.6, has been commanded by the French Navy Commander (Antilles).

Current Leadership

Director: Rear Admiral Douglas Fears, USCG
Deputy Director: Rear Admiral Michael (Scott) Sciretta, USN
Vice Director: Brett A. Chianella, FBI
Chief of Staff: Colonel John M. Groves, USAF
Command Master Chief: CMC Henry Audette, USCG

International cooperation

Twenty countries have Liaison Officers based at JIATF-S. These include Argentina, Brazil, Canada, Chile, Colombia, Costa Rica, the Dominican Republic, Ecuador, El Salvador, France, Guatemala, Honduras, Jamaica, Mexico, the Netherlands, Panama, Peru, Spain, Trinidad and Tobago, and the United Kingdom.
External links

Globalsecurity.org entry
Command Relationships, orchestratingpower.org
https://en.wikipedia.org/wiki/List%20of%20moths%20of%20Tanzania
List of moths of Tanzania
There are about 1,700 known moth species of Tanzania. The moths (mostly nocturnal) and butterflies (mostly diurnal) together make up the taxonomic order Lepidoptera. This is a list of moth species which have been recorded from Tanzania.

Alucitidae

Alucita dohertyi (Walsingham, 1909) Alucita ectomesa (Hering, 1917) Alucita entoprocta (Hering, 1917) Alucita hemicyclus (Hering, 1917) Alucita isodina (Meyrick, 1920)

Anomoeotidae

Staphylinochrous meinickei Hering, 1928 Thermochrous neurophaea Hering, 1928

Arctiinae

Acantharctia aurivillii Bartel, 1903 Acantharctia nigrivena Rothschild, 1935 Acantharctia tenuifasciata Hampson, 1910 Acanthofrontia biannulata (Wichgraf, 1922) Afrasura amaniensis (Cieslak & Häuser, 2006) Afrasura neavi (Hampson, 1914) Afrospilarctia flavida (Bartel, 1903) Afrospilarctia lucida (Druce, 1898) Alpenus investigatorum (Karsch, 1898) Alpenus maculosa (Stoll, 1781) Alpenus pardalina (Rothschild, 1910) Alpenus schraderi (Rothschild, 1910) Amata alicia (Butler, 1876) Amata burtti (Distant, 1900) Amata cerbera (Linnaeus, 1764) Amata ceres (Oberthür, 1878) Amata chloroscia (Hampson, 1901) Amata chrysozona (Hampson, 1898) Amata consimilis (Hampson, 1901) Amata dilateralis (Hampson, 1898) Amata discata (Druce, 1898) Amata janenschi Seitz, 1926 Amata kuhlweini (Lefèbvre, 1832) Amata miozona (Hampson, 1910) Amata monticola (Aurivillius, 1910) Amata nigricilia (Strand, 1912) Amata phaeozona (Zerny, 1912) Amata phoenicia (Hampson, 1898) Amata rubritincta (Hampson, 1903) Amerila affinis (Rothschild, 1910) Amerila bipartita (Rothschild, 1910) Amerila bubo (Walker, 1855) Amerila carneola (Hampson, 1916) Amerila fennia (Druce, 1887) Amerila howardi (Pinhey, 1955) Amerila lupia (Druce, 1887) Amerila niveivitrea (Bartel, 1903) Amerila phaedra Weymer, 1892 Amerila puella (Fabricius, 1793) Amerila roseomarginata (Rothschild, 1910) Amerila thermochroa (Hampson, 1916) Amerila vidua (Cramer, 1780) Amphicallia bellatrix (Dalman, 1823) Amphicallia pactolicus (Butler,
1888) Amphicallia quagga Strand, 1909 Amphicallia solai (Druce, 1907) Amphicallia thelwalli (Druce, 1882) Anaphosia astrigata Hampson, 1910 Apisa canescens Walker, 1855 Argina amanda (Boisduval, 1847) Argina astrea (Drury, 1773) Argina leonina (Walker, 1865) Asura doa Kühne, 2007 Asura mutabilis Kühne, 2007 Asura sagenaria (Wallengren, 1860) Balacra flavimacula Walker, 1856 Balacra nigripennis (Aurivillius, 1904) Balacra preussi (Aurivillius, 1904) Binna penicillata Walker, 1865 Caripodia chrysargyria Hampson, 1900 Ceryx hilda (Ehrmann, 1894) Cragia distigmata (Hampson, 1901) Creatonotos leucanioides Holland, 1893 Creatonotos punctivitta (Walker, 1854) Cyana arenbergeri Karisch, 2003 Cyana nemasisha Roesler, 1990 Cyana pretoriae (Distant, 1897) Cyana rejecta (Walker, 1854) Dasyarctia grisea Gaede, 1923 Eilema albescens (Aurivillius, 1910) Eilema bipartita Aurivillius, 1910 Eilema costimacula Aurivillius, 1910 Eilema marwitziana Strand, 1912 Eilema mesosticta Hampson, 1911 Eilema oblitterans (Felder, 1868) Eilema peperita (Hampson, 1901) Eilema polioplaga (Hampson, 1901) Eilema pusilana Strand, 1912 Eilema stevensii (Holland, 1892) Epilacydes scita (Walker, 1865) Epitoxis duplicata Gaede, 1926 Estigmene ansorgei Rothschild, 1910 Estigmene ochreomarginata Bethune-Baker, 1909 Estigmene trivitta (Walker, 1855) Euchromia amoena (Möschler, 1872) Euchromia folletii (Guérin-Méneville, 1832) Eyralpenus atricrures (Hampson, 1916) Eyralpenus diplosticta (Hampson, 1900) Eyralpenus inconspicua (Rothschild, 1910) Eyralpenus meinhofi (Bartel, 1903) Eyralpenus scioana (Oberthür, 1880) Eyralpenus sublutea (Bartel, 1903) Eyralpenus trifasciata (Holland, 1892) Galtara doriae (Oberthür, 1880) Hypersypnoides heinrichi Laporte, 1979 Ilemodes isogyna Romieux, 1935 Ischnarctia brunnescens Bartel, 1903 Ischnarctia cinerea (Pagenstecher, 1903) Karschiola holoclera (Karsch, 1894) Lamprosiella eborella (Boisduval, 1847) Lepidilema unipectinata Aurivillius, 1910 Lepista pandula (Boisduval, 
1847) Lobilema conspersa Aurivillius, 1910 Macrosia fumeola (Walker, 1854) Megalonycta forsteri Laporte, 1979 Metarctia atrivenata Kiriakoff, 1956 Metarctia collocalia Kiriakoff, 1957 Metarctia epimela (Kiriakoff, 1979) Metarctia fulvia Hampson, 1901 Metarctia inconspicua Holland, 1892 Metarctia insignis Kiriakoff, 1959 Metarctia lateritia Herrich-Schäffer, 1855 Metarctia lindemannae Kiriakoff, 1961 Metarctia pavlitzkae (Kiriakoff, 1961) Metarctia rubripuncta Hampson, 1898 Metarctia rufescens Walker, 1855 Metarctia seydeliana (Kiriakoff, 1953) Micralarctia punctulatum (Wallengren, 1860) Micralarctia semipura (Bartel, 1903) Neuroxena ansorgei Kirby, 1896 Nyctemera apicalis (Walker, 1854) Nyctemera insulare (Boisduval, 1833) Nyctemera itokina (Aurivillius, 1904) Nyctemera leuconoe Hopffer, 1857 Nyctemera rattrayi (Swinhoe, 1904) Nyctemera restrictum (Butler, 1894) Nyctemera transitella (Strand, 1909) Nyctemera usambarae Oberthür, 1893 Ochrota asuraeformis (Strand, 1912) Owambarctia unipuncta Kiriakoff, 1973 Paralacydes arborifera (Butler, 1875) Paralacydes bivittata (Bartel, 1903) Paralacydes decemmaculata (Rothschild, 1916) Paralacydes fiorii (Berio, 1937) Paralacydes ramosa (Hampson, 1907) Paralacydes vocula (Stoll, 1790) Paralpenus wintgensi (Strand, 1909) Popoudina brosi Toulgoët, 1986 Pseudonaclia bifasciata Aurivillius, 1910 Pseudonaclia fasciata Gaede, 1926 Pseudothyretes perpusilla (Walker, 1856) Pusiola elongata (Aurivillius, 1910) Radiarctia jacksoni (Rothschild, 1910) Radiarctia rhodesiana (Hampson, 1900) Rhabdomarctia rubrilineata (Bethune-Baker, 1911) Secusio sansibarensis Strand, 1909 Secusio strigata Walker, 1854 Seydelia ellioti (Butler, 1895) Spilosoma affinis Bartel, 1903 Spilosoma albiventre Kiriakoff, 1963 Spilosoma atrivenata Rothschild, 1933 Spilosoma baxteri (Rothschild, 1910) Spilosoma bipartita Rothschild, 1933 Spilosoma curvilinea Walker, 1855 Spilosoma lineata Walker, 1855 Spilosoma pales (Druce, 1910) Spilosoma semihyalina Bartel, 1903 
Spilosoma sublutescens Kiriakoff, 1958 Spilosoma unipuncta (Hampson, 1905) Teracotona approximans (Rothschild, 1917) Teracotona clara Holland, 1892 Teracotona euprepia Hampson, 1900 Teracotona homeyeri Rothschild, 1910 Teracotona latifasciata Carcasson, 1965 Teracotona melanocera (Hampson, 1920) Teracotona pardalina Bartel, 1903 Teracotona rhodophaea (Walker, 1865) Teracotona subapproximans Rothschild, 1933 Teracotona subterminata Hampson, 1901 Teracotona translucens (Grünberg, 1907) Teracotona uhrikmeszarosi Szent-Ivány, 1942 Thumatha africana Kühne, 2007 Thyretes trichaetiformis Zerny, 1912 Utetheisa elata (Fabricius, 1798) Utetheisa pulchella (Linnaeus, 1758)

Autostichidae

Turatia argillacea Gozmány, 2000

Brachodidae

Phycodes substriata Walsingham, 1891

Brahmaeidae

Dactyloceras catenigera (Karsch, 1895) Dactyloceras maculata (Conte, 1911) Dactyloceras neumayeri (Pagenstecher, 1885) Dactyloceras vingerhoedti Bouyer, 2005 Dactyloceras widenmanni (Karsch, 1895)

Choreutidae

Anthophila flavimaculata (Walsingham, 1891)

Cosmopterigidae

Cosmopterix athesiae Huemer & Koster, 2006

Cossidae

Arctiocossus punctifera Gaede, 1929 Coryphodema ochracea Gaede, 1929 Eulophonotus elegans (Aurivillius, 1910) Meharia semilactea (Warren & Rothschild, 1905) Meharia tanganyikae Bradley, 1951 Nomima szunyoghyi (Gozmány, 1965) Oreocossus kilimanjarensis (Holland, 1892) Phragmataecia brunni Pagenstecher, 1892

Crambidae

Adelpherupa flavescens Hampson, 1919 Anania metaleuca (Hampson, 1913) Ancylolomia melanella Hampson, 1919 Ancylolomia melanothoracia Hampson, 1919 Conotalis nigroradians (Mabille, 1900) Cotachena smaragdina (Butler, 1875) Crocidolomia pavonana (Fabricius, 1794) Culladia achroellum (Mabille, 1900) Euclasta varii Popescu-Gorj & Constantinescu, 1973 Glyphodes basifascialis Hampson, 1898 Heliothela ophideresana (Walker, 1863) Nomophila brevispinalis Munroe, 1973 Nomophila noctuella ([Denis & Schiffermüller], 1775) Parerupa africana (Aurivillius, 1910) Patissa geminalis Hampson,
1919 Powysia rosealinea Maes, 2006 Prionapteryx alternalis Maes, 2002 Prionapteryx phaeomesa (Hampson, 1919) Protinopalpa subclathrata Strand, 1911 Psammotis haematidea (Hampson, 1913) Pyrausta centralis Maes, 2009 Pyrausta microdontaloides Maes, 2009 Pyrausta perparvula Maes, 2009 Pyrausta sanguifusalis Hampson, 1913

Drepanidae

Aethiopsestis mufindiae Watson, 1965 Gonoreta subtilis (Bryk, 1913) Negera natalensis (Felder, 1874)

Elachistidae

Ethmia ballistis Meyrick, 1908 Ethmia taxiacta Meyrick, 1920

Epipyropidae

Epipyrops cerolestes Tams, 1947 Epipyrops epityraea Scheven, 1974

Eriocottidae

Compsoctena africanella (Strand, 1909)

Eupterotidae

Camerunia albida Aurivillius, 1901 Hibrildes crawshayi Butler, 1896 Hoplojana distincta Rothschild, 1917 Hoplojana indecisa (Aurivillius, 1901) Hoplojana rhodoptera (Gerstaecker, 1871) Jana eurymas Herrich-Schäffer, 1854 Janomima mariana (White, 1843) Phiala alba Aurivillius, 1893 Phiala costipuncta (Herrich-Schäffer, 1855) Phiala infuscata (Grünberg, 1907) Stenoglene obtusus (Walker, 1864) Stenoglene pira Druce, 1896

Gelechiidae

Anarsia agricola Walsingham, 1891 Brachmia septella (Zeller, 1852) Dichomeris rhodophaea Meyrick, 1920 Pectinophora gossypiella (Saunders, 1844) Ptilothyris crossoceros Meyrick, 1934 Trichotaphe chalybitis (Meyrick, 1920)

Geometridae

Acanthovalva bilineata (Warren, 1895) Acidaliastis systema D. S.
Fletcher, 1978 Adesmobathra ozoloides Prout, 1916 Allochrostes impunctata (Warren, 1897) Antharmostes papilio Prout, 1912 Aphilopota exterritorialis (Strand, 1909) Aphilopota foedata (Bastelberger, 1907) Aphilopota semiusta (Distant, 1898) Aphilopota triphasia Prout, 1954 Aphilopota viriditincta (Warren, 1905) Archichlora rectilineata Carcasson, 1971 Ascotis reciprocaria (Walker, 1860) Asthenotricha anisobapta Prout, 1932 Asthenotricha ansorgei Warren, 1899 Asthenotricha dentatissima Warren, 1899 Asthenotricha inutilis Warren, 1901 Asthenotricha pycnoconia Janse, 1933 Asthenotricha serraticornis Warren, 1902 Asthenotricha straba Prout, 1921 Biston abruptaria (Walker, 1869) Biston homoclera (Prout, 1938) Brachytrita cervinaria Swinhoe, 1904 Cacochloris ochrea (Warren, 1897) Cartaletis libyssa (Hopffer, 1857) Casilda lucidaria (Swinhoe, 1904) Celidomphax analiplaga (Warren, 1905) Chiasmia affinis (Warren, 1902) Chiasmia assimilis (Warren, 1899) Chiasmia butaria (Swinhoe, 1904) Chiasmia costiguttata (Warren, 1899) Chiasmia geminilinea (Prout, 1932) Chiasmia inconspicua (Warren, 1897) Chiasmia kilimanjarensis (Holland, 1892) Chiasmia maculosa (Warren, 1899) Chiasmia normata (Walker, 1861) Chiasmia rectilinea (Warren, 1905) Chiasmia rectistriaria (Herrich-Schäffer, 1854) Chiasmia simplicilinea (Warren, 1905) Chiasmia sororcula (Warren, 1897) Chiasmia streniata (Guenée, 1858) Chiasmia subcurvaria (Mabille, 1897) Chiasmia umbrata (Warren, 1897) Chiasmia umbratilis (Butler, 1875) Chlorerythra rubriplaga Warren, 1895 Chlorissa albistrigulata (Warren, 1897) Chlorissa attenuata (Walker, 1862) Chloroclystis consocer Prout, 1937 Chloroclystis cryptolopha Prout, 1932 Chloroctenis conspersa Warren, 1909 Cleora munda (Warren, 1899) Cleora rostella D. S. Fletcher, 1967 Cleora thyris D. S. 
Fletcher, 1967 Coenina aurivena Butler, 1898 Collix foraminata Guenée, 1858 Comostolopsis simplex Warren, 1902 Comostolopsis stillata (Felder & Rogenhofer, 1875) Conolophia conscitaria (Walker, 1861) Cyclophora paratropa (Prout, 1920) Cyclophora unocula (Warren, 1897) Derambila niphosphaeras (Prout, 1934) Disclisioprocta natalata (Walker, 1862) Dithecodes ornithospila (Prout, 1911) Drepanogynis johnstonei (Prout, 1938) Drepanogynis lacuum (Prout, 1938) Ecpetala obtusa (Warren, 1902) Ectropis anisa Prout, 1915 Ectropis delosaria (Walker, 1862) Ectropis gozmanyi D. S. Fletcher, 1978 Ectropis ikonda Herbulot, 1981 Ectropis ocellata Warren, 1902 Epigynopteryx africana (Aurivillius, 1910) Epigynopteryx maeviaria (Guenée, 1858) Epirrhoe annulifera (Warren, 1902) Erastria albosignata (Walker, 1863) Erastria leucicolor (Butler, 1875) Erastria madecassaria (Boisduval, 1833) Ereunetea reussi Gaede, 1914 Eucrostes disparata Walker, 1861 Euexia percnopus Prout, 1915 Eupithecia celatisigna (Warren, 1902) Eupithecia devestita (Warren, 1899) Eupithecia dilucida (Warren, 1899) Eupithecia proflua Prout, 1932 Eupithecia regulosa (Warren, 1902) Eupithecia rigida Swinhoe, 1892 Eupithecia salti D. S. 
Fletcher, 1951 Eupithecia semipallida Janse, 1933 Eupithecia tricuspis Prout, 1932 Eupithecia undiculata Prout, 1932 Haplolabida monticolata (Aurivillius, 1910) Haplolabida sjostedti (Aurivillius, 1910) Heterorachis dichorda Prout, 1915 Hierochthonia migrata Prout, 1930 Hydrelia ericinella Aurivillius 1910 Hydrelia costalis Aurivillius, 1910 Hypsometra ericinellae Aurivillius, 1910 Idaea auriflua (Warren, 1902) Idaea heres (Prout, 1932) Idaea macrostyla (Warren, 1900) Idaea umbricosta (Prout, 1913) Idiochlora subrufibasis (Prout, 1930) Idiodes flexilinea (Warren, 1898) Isturgia catalaunaria (Guenée, 1858) Isturgia deerraria (Walker, 1861) Isturgia triseriata (Prout, 1926) Lophorrhachia burdoni Townsend, 1958 Microligia dolosa Warren, 1897 Mimoclystia cancellata (Warren, 1899) Mimoclystia corticearia (Aurivillius, 1910) Mixocera albistrigata (Pagenstecher, 1893) Neurotoca notata Warren, 1897 Oaracta maculata (Warren, 1897) Obolcola petronaria (Guenée, 1858) Odontopera azelinaria (Swinhoe, 1904) Omizodes rubrifasciata (Butler, 1896) Omphalucha brunnea (Warren, 1899) Omphax plantaria Guenée, 1858 Oreometra vittata Aurivillius, 1910 Orthonama obstipata (Fabricius, 1794) Pachypalpella subalbata (Warren, 1900) Paraptychodes kedar (Druce, 1896) Paraptychodes tenuis (Butler, 1878) Petovia marginata Walker, 1854 Piercia fumitacta (Warren, 1903) Piercia prasinaria (Warren, 1901) Piercia subrufaria (Warren, 1903) Piercia subterlimbata (Prout, 1917) Pingasa distensaria (Walker, 1860) Pitthea trifasciata Dewitz, 1881 Prasinocyma loveridgei Prout, 1926 Prasinocyma permitis Prout, 1932 Problepsis digammata Kirby, 1896 Protosteira spectabilis (Warren, 1899) Pseudolarentia monosticta (Butler, 1894) Pseudosoloe thalassina (Warren, 1909) Racotis apodosima Prout, 1931 Racotis squalida (Butler, 1878) Racotis zebrina Warren, 1899 Rheumaptera relicta (Herbulot, 1953) Rhodesia alboviridata (Saalmüller, 1880) Rhodometra sacraria (Linnaeus, 1767) Rhodophthitus anamesa (Prout, 1915) 
Rhodophthitus commaculata (Warren, 1897) Rhodophthitus rudicornis (Butler, 1898) Rhodophthitus tricoloraria (Mabille, 1890) Scardamia maculata Warren, 1897 Scopula agrapta (Warren, 1902) Scopula argentidisca (Warren, 1902) Scopula curvimargo (Warren, 1900) Scopula erinaria (Swinhoe, 1904) Scopula internata (Guenée, 1857) Scopula lactaria (Walker, 1861) Scopula latitans Prout, 1920 Scopula minorata (Boisduval, 1833) Scopula natalica (Butler, 1875) Scopula rufinubes (Warren, 1900) Scopula sagittilinea (Warren, 1897) Scopula serena Prout, 1920 Scopula umbratilinea (Warren, 1901) Scotopteryx nictitaria (Herrich-Schäffer, 1855) Somatina virginalis Prout, 1917 Thalassodes quadraria Guenée, 1857 Traminda acuta (Warren, 1897) Traminda neptunaria (Guenée, 1858) Traminda vividaria (Walker, 1861) Trimetopia aetheraria Guenée, 1858 Triphosa tritocelidata Aurivillius, 1910 Victoria triplaga Prout, 1915 Xanthisthisa tarsispina (Warren, 1901) Xanthorhoe albodivisaria (Aurivillius 1910) Xanthorhoe alluaudi (Prout, 1932) Xanthorhoe argenteolineata (Aurivillius, 1910) Xanthorhoe belgarum Herbulot, 1981 Xanthorhoe exorista Prout, 1922 Xanthorhoe heteromorpha (Hampson, 1909) Xanthorhoe procne (Fawcett, 1916) Xanthorhoe transcissa (Warren, 1902) Xanthorhoe transjugata Prout, 1923 Xanthorhoe trientata (Warren, 1901) Xanthorhoe tuta Herbulot, 1981 Xenochroma candidata Warren, 1902 Zamarada acalantis Herbulot, 2001 Zamarada acosmeta Prout, 1921 Zamarada acrochra Prout, 1928 Zamarada aequilumata D. S. Fletcher, 1974 Zamarada amelga D. S. Fletcher, 1974 Zamarada amicta Prout, 1915 Zamarada ansorgei Warren, 1897 Zamarada arguta D. S. Fletcher, 1974 Zamarada bastelbergeri Gaede, 1915 Zamarada bathyscaphes Prout, 1912 Zamarada calypso Prout, 1926 Zamarada candelabra D. S. Fletcher, 1974 Zamarada chrysopa D. S. Fletcher, 1974 Zamarada cinnamomata D. S. Fletcher, 1978 Zamarada collarti Debauche, 1938 Zamarada crystallophana Mabille, 1900 Zamarada cucharita D. S. 
Fletcher, 1974 Zamarada cydippe Herbulot, 1954 Zamarada deceptrix Warren, 1914 Zamarada delosis D. S. Fletcher, 1974 Zamarada delta D. S. Fletcher, 1974 Zamarada denticatella Prout, 1922 Zamarada dentigera Warren, 1909 Zamarada differens Bastelberger, 1907 Zamarada dorsiplaga Prout, 1922 Zamarada erugata D. S. Fletcher, 1974 Zamarada euerces Prout, 1928 Zamarada euphrosyne Oberthür, 1912 Zamarada eurygnathus D. S. Fletcher, 1974 Zamarada euterpina Oberthür, 1912 Zamarada excavata Bethune-Baker, 1913 Zamarada fessa Prout, 1912 Zamarada flavicaput Warren, 1901 Zamarada gamma D. S. Fletcher, 1958 Zamarada glareosa Bastelberger, 1909 Zamarada hyalinaria (Guenée, 1857) Zamarada ignicosta Prout, 1912 Zamarada ilma Prout, 1922 Zamarada iobathra Prout, 1932 Zamarada keraia D. S. Fletcher, 1974 Zamarada kiellandi Aarvik & Bjørnstad, 2007 Zamarada labifera Prout, 1915 Zamarada lequeuxi Herbulot, 1983 Zamarada lima D. S. Fletcher, 1974 Zamarada loleza Aarvik & Bjørnstad, 2007 Zamarada longidens D. S. Fletcher, 1963 Zamarada mashariki Aarvik & Bjørnstad, 2007 Zamarada mckameyi Aarvik & Bjørnstad, 2007 Zamarada melasma D. S. Fletcher, 1974 Zamarada melpomene Oberthür, 1912 Zamarada metrioscaphes Prout, 1912 Zamarada micropomene Aarvik & Bjørnstad, 2007 Zamarada montana Herbulot, 1979 Zamarada musomae Aarvik & Bjørnstad, 2007 Zamarada ndogo Aarvik & Bjørnstad, 2007 Zamarada ochrata Warren, 1902 Zamarada ordinaria Bethune-Baker, 1913 Zamarada paxilla D. S. Fletcher, 1974 Zamarada phaeozona Hampson, 1909 Zamarada phratra D. S. Fletcher, 1978 Zamarada pinheyi D. S. Fletcher, 1956 Zamarada plana Bastelberger, 1909 Zamarada platycephala D. S. Fletcher, 1974 Zamarada polyctemon Prout, 1932 Zamarada pringlei D. S. Fletcher, 1974 Zamarada prolata D. S. Fletcher, 1974 Zamarada psectra D. S. Fletcher, 1974 Zamarada psi D. S. Fletcher, 1974 Zamarada purimargo Prout, 1912 Zamarada reflexaria (Walker, 1863) Zamarada rhamphis D. S. 
Fletcher, 1974 Zamarada ruandana Herbulot, 1983 Zamarada rubrifascia Pinhey, 1962 Zamarada rufilinearia Swinhoe, 1904 Zamarada saburra D. S. Fletcher, 1974 Zamarada scintillans Bastelberger, 1909 Zamarada seydeli D. S. Fletcher, 1974 Zamarada torrida D. S. Fletcher, 1974 Zamarada tristriga Aarvik & Bjørnstad, 2007 Zamarada tristrigoides Aarvik & Bjørnstad, 2007 Zamarada unisona D. S. Fletcher, 1974 Zamarada usambarae Aarvik & Bjørnstad, 2007 Zamarada usondo Aarvik & Bjørnstad, 2007 Zamarada uzungwae Aarvik & Bjørnstad, 2007 Zamarada varii D. S. Fletcher, 1974 Zamarada variola D. S. Fletcher, 1974 Zamarada vulpina Warren, 1897 Zygophyxia roseocincta (Warren, 1899)

Gracillariidae
Acrocercops bifasciata (Walsingham, 1891) Caloptilia ingrata Triberti, 1989 Caloptilia octopunctata (Turner, 1894) Corythoxestis aletreuta (Meyrick, 1936) Cremastobombycia morogorene de Prins, 2012 Phodoryctis caerulea (Meyrick, 1912) Phyllocnistis citrella Stainton, 1856 Phyllonorycter aarviki de Prins, 2012 Phyllonorycter maererei de Prins, 2012 Phyllonorycter mwatawalai de Prins, 2012

Hepialidae
Afrotheora brevivalva Nielsen & Scoble, 1986 Afrotheora thermodes (Meyrick, 1921) Antihepialus keniae (Holland, 1892) Eudalaca aequifascia (Gaede, 1930) Eudalaca zernyi (Viette, 1950) Gorgopis caffra Walker, 1856 Gorgopis libania (Stoll, 1781) Gorgopis salti Tams, 1952 Gorgopis tanganyikaensis Viette, 1950

Himantopteridae
Doratopteryx steniptera Hampson, 1920 Semioptila fulveolans (Mabille, 1897) Semioptila latifulva Hampson, 1920

Lasiocampidae
Anadiasa hartigi Szent-Ivány, 1942 Beralade bistrigata Strand, 1909 Beralade continua Aurivillius, 1905 Beralade niphoessa Strand, 1909 Bombycomorpha bifascia (Walker, 1855) Bombycopsis nigrovittata Aurivillius, 1927 Bombycopsis venosa (Butler, 1895) Braura elgonensis (Kruck, 1940) Braura ligniclusa (Walker, 1865) Braura truncatum (Walker, 1855) Catalebeda strandi Hering, 1927 Cheligium choerocampoides (Holland, 1893) Chionopsyche montana Aurivillius, 1909
Chrysopsyche antennifera Strand, 1912 Chrysopsyche lutulenta Tams, 1923 Cleopatrina bilinea (Walker, 1855) Cleopatrina phocea (Druce, 1887) Dinometa maputuana (Wichgraf, 1906) Dollmania purpurascens (Aurivillius, 1909) Epicnapteroides lobata Strand, 1912 Epitrabala nyassana (Aurivillius, 1909) Eucraera koellikerii (Dewitz, 1881) Eutricha morosa (Walker, 1865) Euwallengrenia reducta (Walker, 1855) Gonobombyx angulata Aurivillius, 1893 Gonometa postica Walker, 1855 Gonometa rufobrunnea Aurivillius, 1922 Grammodora nigrolineata (Aurivillius, 1895) Grellada imitans (Aurivillius, 1893) Laeliopsis maculigera Strand, 1913 Lechriolepis flavomarginata Aurivillius, 1927 Lechriolepis griseola Aurivillius, 1927 Lechriolepis ochraceola Strand, 1912 Lechriolepis tessmanni Strand, 1912 Leipoxais acharis Hering, 1928 Leipoxais adoxa Hering, 1928 Leipoxais humfreyi Aurivillius, 1915 Leipoxais marginepunctata Holland, 1893 Marmonna gella Zolotuhin & Prozorov, 2010 Marmonna marmorata Zolotuhin & Prozorov, 2010 Marmonna murphyi Zolotuhin & Prozorov, 2010 Metajana kilwicola (Strand, 1912) Metajana marshalli Aurivillius, 1909 Mimopacha gerstaeckerii (Dewitz, 1881) Mimopacha tripunctata (Aurivillius, 1905) Morongea arnoldi (Aurivillius, 1909) Morongea elfiora Zolotuhin & Prozorov, 2010 Muzunguja rectilineata (Aurivillius, 1900) Odontocheilopteryx dollmani Tams, 1930 Odontocheilopteryx myxa Wallengren, 1860 Odontocheilopteryx scilla Gurkovich & Zolotuhin, 2009 Odontopacha fenestrata Aurivillius, 1909 Opisthodontia varezhka Zolotuhin & Prozorov, 2010 Pachytrina crestalina Zolotuhin & Gurkovich, 2009 Pachytrina honrathii (Dewitz, 1881) Pachytrina philargyria (Hering, 1928) Pachytrina verba Zolotuhin & Gurkovich, 2009 Pachytrina wenigina Zolotuhin & Gurkovich, 2009 Pallastica lateritia (Hering, 1928) Pallastica litlura Zolotuhin & Gurkovich, 2009 Pallastica meloui (Riel, 1909) Pallastica pallens (Bethune-Baker, 1908) Pallastica redissa Zolotuhin & Gurkovich, 2009 Philotherma grisea
Aurivillius, 1914 Philotherma rectilinea Strand, 1912 Philotherma rosa (Druce, 1887) Philotherma rufescens Wichgraf, 1921 Philotherma simplex Wichgraf, 1914 Pseudolyra cervina (Aurivillius, 1905) Pseudolyra megista Tams, 1931 Pseudometa choba (Druce, 1899) Pseudometa punctipennis (Strand, 1912) Rhinobombyx cuneata Aurivillius, 1879 Schausinna affinis Aurivillius, 1910 Sena donaldsoni (Holland, 1901) Sonitha lila Zolotuhin & Prozorov, 2010 Sophyrita argibasis (Mabille, 1893) Stenophatna accolita Zolotuhin & Prozorov, 2010 Stenophatna cymographa (Hampson, 1910) Stenophatna marshalli Aurivillius, 1909 Stenophatna rothschildi (Tams, 1936) Stoermeriana abyssinicum (Aurivillius, 1908) Stoermeriana fusca (Aurivillius, 1905) Stoermeriana graberi (Dewitz, 1881) Stoermeriana sjostedti (Aurivillius, 1902) Streblote madibirense (Wichgraf, 1921) Streblote polydora (Druce, 1887) Trabala charon Druce, 1910 Trichopisthia igneotincta (Aurivillius, 1909)

Lecithoceridae
Cophomantella bifrenata (Meyrick, 1921) Cophomantella cyclopodes (Meyrick, 1922) Odites armilligera Meyrick, 1922 Protolychnis maculata (Walsingham, 1881)

Lemoniidae
Sabalia jacksoni Sharpe, 1890 Sabalia picarina Walker, 1865 Sabalia sericaria (Weymer, 1896) Sabalia tippelskirchi Karsch, 1898

Limacodidae
Afrobirthama flaccidia (Druce, 1899) Altha basalis West, 1940 Birthama basibrunnea Swinhoe, 1904 Chrysopoloma isabellina Aurivillius, 1895 Cosuma flavimacula West, 1940 Cosuma radiata Carcasson, 1965 Ctenolita zernyi Hering, 1949 Delorhachis kilosa West, 1940 Halseyia angustilinea (Hering, 1937) Halseyia incisa (Hering, 1937) Halseyia lacides (Druce, 1899) Halseyia rufibasalis (Hering, 1928) Latoia urda (Druce, 1887) Latoiola bifascia Janse, 1964 Lepidorytis sulcata Aurivillius, 1900 Natada caliginosa West, 1940 Niphadolepis alianta Karsch, 1899 Niphadolepis elegans Wichgraf, 1921 Omocena songeana West, 1940 Parapluda invitabilis (Wallengren, 1860) Parasa costalis West, 1940 Parasa lanceolata Hering, 1928
Scotinocerides conspurcata (Aurivillius, 1895) Scotinocerides fasciata Hering, 1937 Scotinocerides sigma Hering, 1937 Scotinochroa charopocelis Tams, 1929 Taeda aetitis Wallengren, 1863 Taeda prasina Butler, 1896 Trogocrada atmota Janse, 1964 Zinara bilineata Hering, 1928

Lymantriidae
Abynotha meinickei Hering, 1926 Aclonophlebia civilis Hering, 1926 Aclonophlebia lugardi (Swinhoe, 1903) Aclonophlebia lymantrioides Hering, 1926 Argyrostagma niobe (Weymer, 1896) Aroa discalis Walker, 1855 Aroa melanoleuca Hampson, 1905 Aroa pampoecila Collenette, 1930 Aroa tomisa Druce, 1896 Barlowia charax (Druce, 1896) Bracharoa charax (Druce, 1896) Bracharoa mixta (Snellen, 1872) Bracharoa reducta Hering, 1926 Cadurca dianeura Hering, 1928 Casama intermissa (Hering, 1926) Chrysocyma mesopotamia Hampson, 1905 Conigephyra leucoptera (Hering, 1926) Conigephyra pallidula (Hering, 1926) Conigephyra splendida (Hering, 1926) Cropera sericea (Hampson, 1910) Cropera testacea Walker, 1855 Cropera unipunctata Wichgraf, 1921 Crorema adspersa (Herrich-Schäffer, 1854) Crorema evanescens (Hampson, 1910) Crorema fulvinotata (Butler, 1893) Dasychira albicostata (Holland, 1893) Dasychira barbara Hering, 1926 Dasychira daphne Hering, 1926 Dasychira daphnoides Hering, 1926 Dasychira hastifera Hering, 1926 Dasychira mkattana Strand, 1912 Dasychira nebulifera Hering, 1926 Dasychira nigerrima Hering, 1926 Dasychira polia Hering, 1926 Dasychira prospera Hering, 1926 Dasychira punctifera (Walker, 1857) Dasychira scotina Hering, 1926 Dasychira stegmanni Grünberg, 1910 Dasychira subochracea Aurivillius, 1910 Eudasychira amata (Hering, 1926) Eudasychira bokuma (Collenette, 1960) Eudasychira georgiana (Fawcett, 1900) Eudasychira metathermes (Hampson, 1905) Eudasychira poliotis (Hampson, 1910) Euproctis areolata Hering, 1928 Euproctis beato Bryk, 1934 Euproctis bigutta Holland, 1893 Euproctis multidentata Hering, 1926 Euproctis pallida (Kirby, 1896) Euproctis producta (Walker, 1863) Euproctis sericaria (Tams,
1924) Euproctoides eddela (Swinhoe, 1903) Hemerophanes diatoma (Hering, 1926) Hemerophanes libyra (Druce, 1896) Hemerophanes litigiosa (Hering, 1926) Heteronygmia dissimilis Aurivillius, 1910 Homochira rendalli (Distant, 1897) Knappetra fasciata (Walker, 1855) Lacipa floridula (Hering, 1926) Lacipa melanosticta Hampson, 1910 Lacipa pseudolacipa Hering, 1926 Lacipa quadripunctata Dewitz, 1881 Laelia amaura Hering, 1926 Laelia extorta (Distant, 1897) Laelia extrema Hering, 1926 Laelia fracta Schaus & Clements, 1893 Laelia gephyra (Hering, 1926) Laelia janenschi Hering, 1926 Laelia mediofasciata (Hering, 1926) Laelia ordinata (Karsch, 1895) Laelia phenax (Collenette, 1932) Laelia rogersi Bethune-Baker, 1913 Laelia subrosea (Walker, 1855) Leptaroa deleta Hering, 1926 Leptaroa ochricoloria Strand, 1911 Leptaroa paupera Hering, 1926 Leucoma discissa (Grünberg, 1910) Leucoma maria (Kirby, 1896) Leucoma parva (Plötz, 1880) Leucoma vosseleri Grünberg, 1907 Leucoma xanthocephala (Hering, 1926) Lymantria pruinosa Hering, 1927 Marblepsis tiphia (Swinhoe, 1903) Ogoa fuscovenata Wichgraf, 1922 Ogoa simplex Walker, 1856 Olapa nigricosta Hampson, 1905 Olapa tavetensis (Holland, 1892) Otroeda vesperina Walker, 1854 Palasea marwitzi Grünberg, 1907 Palasea miniata Grünberg, 1907 Pirga pellucida Wichgraf, 1922 Pirga weisei Karsch, 1900 Pirgula atrinotata (Butler, 1897) Polymona inaffinis Hering, 1926 Ruanda aetheria Strand, 1909 Schalidomitra ambages Strand, 1911 Stracena bananae (Butler, 1897) Stracena pellucida Grünberg, 1907 Stracena tavetensis (Holland, 1892) Stracilla translucida (Oberthür, 1880)

Metarbelidae
Bjoernstadia kasuluensis Lehmann, 2012 Kroonia murphyi Lehmann, 2010 Kroonia natalica (Hampson, 1910) Lebedodes ianrobertsoni Lehmann, 2009 Lebedodes jeanneli Le Cerf, 1914 Lebedodes leifaarviki Lehmann, 2009 Lebedodes violascens Gaede, 1929 Lebedodes willihaberlandi Lehmann, 2008 Marshalliana jansei Gaede, 1929 Metarbela abdulrahmani Lehmann, 2008 Metarbela arcifera
(Hampson, 1909) Metarbela chidzingai Lehmann, 2008 Metarbela erecta Gaede, 1929 Metarbela latifasciata Gaede, 1929 Metarbela lornadepewae Lehmann, 2009 Metarbela plagifera Gaede, 1929 Metarbela triangularis Gaede, 1929 Ortharbela cliftoni Lehmann, 2009 Ortharbela guttata Aurivillius, 1910 Ortharbela jurateae Lehmann, 2009 Ortharbela sommerlattei Lehmann, 2008 Paralebedella estherae Lehmann, 2008 Salagena arcys D. S. Fletcher, 1968 Salagena tessellata Distant, 1897 Teragra quadrangula Gaede, 1929

Micronoctuidae
Micronola yemeni Fibiger, 2011

Noctuidae
Achaea catella Guenée, 1852 Achaea catocaloides Guenée, 1852 Achaea chrysopera Druce, 1912 Achaea dasybasis Hampson, 1913 Achaea lienardi (Boisduval, 1833) Achaea mercatoria (Fabricius, 1775) Achaea nigristriata Laporte, 1979 Achaea praestans (Guenée, 1852) Acontia aarviki Hacker, Legrain & Fibiger, 2008 Acontia antica Walker, 1862 Acontia atripars Hampson, 1914 Acontia aurelia Hacker, Legrain & Fibiger, 2008 Acontia basifera Walker, 1857 Acontia bellula Hacker, Legrain & Fibiger, 2010 Acontia binominata (Butler, 1892) Acontia caeruleopicta Hampson, 1916 Acontia caffraria (Cramer, 1777) Acontia callima Bethune-Baker, 1911 Acontia carnescens (Hampson, 1910) Acontia conifrons (Aurivillius, 1879) Acontia dichroa (Hampson, 1914) Acontia discoidea Hopffer, 1857 Acontia discoidoides Hacker, Legrain & Fibiger, 2008 Acontia ectorrida (Hampson, 1916) Acontia florentissima Hacker, Legrain & Fibiger, 2008 Acontia fuscoalba Hacker, Legrain & Fibiger, 2010 Acontia guttifera Felder & Rogenhofer, 1874 Acontia hampsoni Hacker, Legrain & Fibiger, 2008 Acontia hemixanthia (Hampson, 1910) Acontia imitatrix Wallengren, 1856 Acontia insocia (Walker, 1857) Acontia karachiensis Swinhoe, 1889 Acontia lanzai (Berio, 1985) Acontia melaphora (Hampson, 1910) Acontia miogona (Hampson, 1916) Acontia natalis (Guenée, 1852) Acontia nephele Hampson, 1911 Acontia niphogona (Hampson, 1909) Acontia notha Hacker, Legrain & Fibiger, 2010 Acontia nubila
Hampson, 1910 Acontia obliqua Hacker, Legrain & Fibiger, 2010 Acontia opalinoides Guenée, 1852 Acontia paraalba Hacker, Legrain & Fibiger, 2010 Acontia porphyrea (Butler, 1898) Acontia praealba Hacker, Legrain & Fibiger, 2010 Acontia purpurata Hacker, Legrain & Fibiger, 2010 Acontia schreieri Hacker, Legrain & Fibiger, 2010 Acontia secta Guenée, 1852 Acontia simo Wallengren, 1860 Acontia sublactea Hacker, Legrain & Fibiger, 2008 Acontia subnotha Hacker, Legrain & Fibiger, 2010 Acontia szunyoghyi Hacker, Legrain & Fibiger, 2010 Acontia tanzaniae Hacker, Legrain & Fibiger, 2010 Acontia transfigurata Wallengren, 1856 Acontia trimaculata Aurivillius, 1879 Acontia wahlbergi Wallengren, 1856 Acontia wiltshirei Hacker, Legrain & Fibiger, 2008 Adisura bella Gaede, 1915 Aegocera rectilinea Boisduval, 1836 Aletopus imperialis Jordan, 1926 Amazonides asciodes Berio, 1972 Amazonides bioculata Berio, 1974 Amazonides intermedia Berio, 1972 Andobana multipunctata (Druce, 1899) Aspidifrontia biarcuata Berio, 1964 Aspidifrontia oblata Berio, 1973 Aspidifrontia semiarcuata Berio, 1973 Aspidifrontia tanganykae Berio, 1964 Athetis pectinifer (Aurivillius, 1910) Attatha ethiopica Hampson, 1910 Audea zimmeri Berio, 1954 Brevipecten cornuta Hampson, 1902 Brevipecten tessenei Berio, 1939 Calesia nigriannulata Hampson, 1926 Calliodes pretiosissima Holland, 1892 Callopistria latreillei (Duponchel, 1827) Callopistria maillardi (Guenée, 1862) Cerynea tetramelanosticta Berio, 1954 Chaetostephana rendalli (Rothschild, 1896) Chalciope delta (Boisduval, 1833) Charitosemia geraldi (Kirby, 1896) Chlumetia cana Hampson, 1912 Chrysodeixis acuta (Walker, [1858]) Colbusa euclidica Walker, 1865 Crameria amabilis (Drury, 1773) Ctenoplusia limbirena (Guenée, 1852) Cucullia chrysota Hampson, 1902 Cucullia dallolmoi Berio, 1973 Cucullia ikondae Berio, 1973 Cucullia prolai Berio, 1956 Cuneisigna obstans (Walker, 1858) Cyligramma conradsi Berio, 1954 Cyligramma latona (Cramer, 1775) Cyligramma limacina 
(Guérin-Méneville, 1832) Cyligramma magus (Guérin-Méneville, [1844]) Digama africana Swinhoe, 1907 Digama daressalamica Strand, 1911 Digama lithosioides Swinhoe, 1907 Dysgonia derogans (Walker, 1858) Dysgonia torrida (Guenée, 1852) Egybolis vaillantina (Stoll, 1790) Entomogramma pardus Guenée, 1852 Erebus walkeri (Butler, 1875) Ericeia lituraria (Saalmüller, 1880) Ethiopica inornata Berio, 1975 Eublemma anachoresis (Wallengren, 1863) Eublemma perobliqua Hampson, 1910 Eublemma rubripuncta (Hampson, 1902) Eudocima materna (Linnaeus, 1767) Euneophlebia spatulata Berio, 1972 Eustrotia decissima (Walker, 1865) Eutelia amatrix Walker, 1858 Eutelia polychorda Hampson, 1902 Feliniopsis africana (Schaus & Clements, 1893) Feliniopsis annosa (Viette, 1963) Feliniopsis connivens (Felder & Rogenhofer, 1874) Feliniopsis consummata (Walker, 1857) Feliniopsis duponti (Laporte, 1974) Feliniopsis gueneei (Laporte, 1973) Feliniopsis hosplitoides (Laporte, 1979) Feliniopsis kipengerensis Hacker & Fibiger, 2007 Feliniopsis knudlarseni Hacker & Fibiger, 2007 Feliniopsis laportei Hacker & Fibiger, 2007 Feliniopsis nigribarbata (Hampson, 1908) Feliniopsis rufigiji Hacker & Fibiger, 2007 Feliniopsis satellitis (Berio, 1974) Feliniopsis subsagula (D. S. 
Fletcher, 1961) Feliniopsis talhouki (Wiltshire, 1983) Gesonia obeditalis Walker, 1859 Grammodes geometrica (Fabricius, 1775) Grammodes stolida (Fabricius, 1775) Heliocheilus thomalae (Gaede, 1915) Heliophisma catocalina Holland, 1894 Heraclia africana (Butler, 1875) Heraclia limbomaculata (Strand, 1909) Heraclia mozambica (Mabille, 1890) Heraclia perdix (Druce, 1887) Heraclia superba (Butler, 1875) Heraclia xanthopyga (Mabille, 1890) Heraclia zenkeri (Karsch, 1895) Hespagarista caudata (Dewitz, 1879) Hespagarista eburnea Jordan, 1915 Hespagarista echione (Boisduval, 1847) Hiccoda roseitincta Hampson, 1920 Honeyia burmeisteri Hacker & Fibiger, 2007 Honeyia clearchus (Fawcett, 1916) Hypena abyssinialis Guenée, 1854 Hypena striolalis Aurivillius, 1910 Hypocala deflorata (Fabricius, 1794) Hypopyra africana (Kirby, 1896) Hypopyra allardi (Oberthür, 1878) Hypopyra capensis Herrich-Schäffer, 1854 Leucania nebulosa Hampson, 1902 Leucovis alba (Rothschild, 1897) Lyncestoides unilinea (Swinhoe, 1885) Marcipa mediana Hampson, 1926 Marcipalina tanzaniensis (Pelletier, 1975) Masalia albipuncta (Hampson, 1910) Masalia beatrix (Moore, 1881) Masalia bimaculata (Moore, 1888) Masalia disticta (Hampson, 1902) Masalia flavistrigata (Hampson, 1903) Masalia galatheae (Wallengren, 1856) Masalia leucosticta (Hampson, 1902) Masalia mittoni (Pinhey, 1956) Masalia transvaalica (Distant, 1902) Matopo actinophora Hampson, 1909 Medlerana bukobaenensis Laporte, 1979 Mentaxya albifrons (Geyer, 1837) Mentaxya ignicollis (Walker, 1857) Mesoligia kettlewelli Wiltshire, 1983 Micraxylia annulus Berio, 1972 Micraxylia gigas Berio, 1972 Mocis frugalis (Fabricius, 1775) Mocis mayeri (Boisduval, 1833) Mocis undata (Fabricius, 1775) Nyodes kilimandjaronis Laporte, 1979 Oediplexia mesophaea Hampson, 1908 Ogovia tavetensis Holland, 1892 Omphaloceps daria (Druce, 1895) Ophiusa tirhaca (Cramer, 1777) Oraesia emarginata (Fabricius, 1794) Oraesia provocans Walker, [1858] Oraesia wintgensi (Strand, 1909) Ozarba 
accincta (Distant, 1898) Ozarba divisa Gaede, 1916 Ozarba implicata Berio, 1940 Ozarba morstatti Berio, 1938 Pandesma quenavadi Guenée, 1852 Paraegocera confluens (Weymer, 1892) Pericyma metaleuca Hampson, 1913 Phaegorista bisignibasis Prout, 1918 Phaegorista euryanassa (Druce, 1887) Phaegorista formosa Butler, 1877 Phaegorista leucomelas (Herrich-Schäffer, 1855) Plecoptera diplosticha Hampson, 1926 Plecoptera reversa (Walker, 1865) Plusiopalpa dichora Holland, 1894 Polydesma collusoria (Berio, 1954) Polydesma umbricola Boisduval, 1833 Procriosis dileuca Hampson, 1910 Pseudopais nigrobasalis Bartel, 1903 Pseudospiris paidiformis Butler, 1895 Rhynchina leucodonta Hampson, 1910 Rothia panganica Karsch, 1898 Schalidomitra ambages Strand, 1911 Schausia coryndoni (Rothschild, 1896) Sciomesa mesophaena (Aurivillius, 1910) Simplicia extinctalis (Zeller, 1852) Soloe plicata Pinhey, 1952 Soloe tripunctata Druce, 1896 Spirama glaucescens (Butler, 1893) Spodoptera mauritia (Boisduval, 1833) Stictoptera antemarginata Saalmüller, 1880 Stilbotis ikondae Berio, 1972 Stilbotis nigroides (Berio, 1972) Stilbotis persitriata (Berio, 1972) Stilbotis perspicua (Berio, 1974) Stilbotis pseudasciodes (Berio, 1977) Tathorhynchus leucobasis Bethune-Baker, 1911 Tathorhynchus plumbea (Distant, 1898) Thiacidas callipona (Bethune-Baker, 1911) Thiacidas dukei (Pinhey, 1968) Thiacidas fasciata (Fawcett, 1917) Thiacidas leonie Hacker & Zilli, 2007 Thiacidas permutata Hacker & Zilli, 2007 Thiacidas roseotincta (Pinhey, 1962) Thiacidas senex (Bethune-Baker, 1911) Thiacidas smythi (Gaede, 1939) Thyatirina achatina (Weymer, 1896) Timora crofti Pinhey, 1956 Trigonodes hyppasia (Cramer, 1779) Tuertella rema (Druce, 1910) Tycomarptes inferior (Guenée, 1852) Ulotrichopus eugeniae Saldaitis & Ivinskis, 2010 Weymeria athene (Weymer, 1892) Xanthodesma aurantiaca Aurivillius, 1910 Xanthodesma aurata Aurivillius, 1910

Nolidae
Acripia kilimandjaronis Strand, 1915 Eligma bettiana Prout, 1923 Meganola reubeni
Agassiz, 2009 Neaxestis aviuncis Wiltshire, 1985 Nolatypa phoenicolepia Hampson, 1920

Notodontidae
Anaphe dempwolffi Strand, 1909 Antheua eximia Kiriakoff, 1965 Antheua gallans (Karsch, 1895) Antheua ornata (Walker, 1865) Antheua woerdeni (Snellen, 1872) Atrasana excellens (Strand, 1912) Desmeocraera annulosa Gaede, 1928 Desmeocraera atribasalis (Hampson, 1910) Desmeocraera cana (Wichgraf, 1921) Desmeocraera forsteri Kiriakoff, 1973 Desmeocraera impunctata Gaede, 1928 Desmeocraera malindiana Kiriakoff, 1973 Desmeocraera schevenaria Kiriakoff, 1973 Desmeocraera tanzanica Kiriakoff, 1973 Desmeocraerula angulata Gaede, 1928 Epicerura pergrisea (Hampson, 1910) Epicerura plumosa Kiriakoff, 1962 Epicerura steniptera (Hampson, 1910) Euanthia venosa Kiriakoff, 1962 Eurystauridia olivacea (Gaede, 1928) Eurystauridia picta Kiriakoff, 1973 Fentonina punctum Gaede, 1928 Graphodonta fulva (Kiriakoff, 1962) Metarctina ochricostata Gaede, 1928 Paracleapa psecas (Druce, 1901) Paradrallia rhodesi Bethune-Baker, 1908 Phalera atrata (Grünberg, 1907) Phalera imitata Druce, 1896 Phalera lydenburgi Distant, 1899 Phalera postaurantia Rothschild, 1917 Phalera princei Grünberg, 1909 Plastystaura murina Kiriakoff, 1965 Polienus capillata (Wallengren, 1875) Polienus fuscatus Janse, 1920 Scalmicauda molesta (Strand, 1911) Scrancia danieli Kiriakoff, 1962 Scrancia quinquelineata Kiriakoff, 1965 Stemmatophalera semiflava (Hampson, 1910) Stenostaura malangae (Bethune-Baker, 1911) Xanthodonta debilis Gaede, 1928 Xanthodonta unicornis Kiriakoff, 1961 Zamana castanea (Wichgraf, 1922)

Oecophoridae
Stathmopoda daubanella (Legrand, 1958)

Plutellidae
Paraxenistis africana Mey, 2007 Plutella xylostella (Linnaeus, 1758)

Psychidae
Apterona valvata (Gerstaecker, 1871) Chalia muenzneri Strand, 1911 Eumeta hardenbergeri Bourgogne, 1955 Eumeta ngarukensis Strand, 1909 Melasina bostrychota Meyrick, 1920 Melasina folligera Meyrick, 1920 Melasina siticulosa Meyrick, 1920 Melasina trepidans Meyrick, 1920 Monda
nigroapicalis Joicey & Talbot, 1924

Pterophoridae
Agdistis kenyana Arenberger, 1988 Agdistis linnaei Gielis, 2008 Agdistis malitiosa Meyrick, 1909 Agdistis obstinata Meyrick, 1920 Amblyptilia direptalis (Walker, 1864) Apoxyptilus anthites (Meyrick, 1936) Bipunctiphorus etiennei Gibeaux, 1994 Emmelina amseli (Bigot, 1969) Eucapperia bullifera (Meyrick, 1918) Exelastis atomosa (Walsingham, 1885) Exelastis montischristi (Walsingham, 1897) Exelastis phlyctaenias (Meyrick, 1911) Hellinsia emmelinoida Gielis, 2008 Hepalastis pumilio (Zeller, 1873) Inferuncus pentheres (Bigot, 1969) Inferuncus stolzei (Gielis, 1990) Lantanophaga pusillidactylus (Walker, 1864) Megalorhipida leptomeres (Meyrick, 1886) Megalorhipida leucodactylus (Fabricius, 1794) Ochyrotica bjoernstadti Gielis, 2008 Paulianilus madecasseus Bigot, 1964 Platyptilia farfarellus Zeller, 1867 Platyptilia molopias Meyrick, 1906 Platyptilia rhyncholoba Meyrick, 1924 Platyptilia sabius (Felder & Rogenhofer, 1875) Platyptilia strictiformis Meyrick, 1932 Pselnophorus jaechi (Arenberger, 1993) Pterophorus albidus (Zeller, 1852) Pterophorus bacteriopa (Meyrick, 1922) Pterophorus candidalis (Walker, 1864) Pterophorus rhyparias (Meyrick, 1908) Pterophorus uzungwe Gielis, 1991 Sphenarches anisodactylus (Walker, 1864) Stenodacma wahlbergi (Zeller, 1852) Stenoptilia kiitulo Gielis, 2008 Stenoptilodes taprobanes (Felder & Rogenhofer, 1875) Titanoptilus laniger Bigot, 1969

Pyralidae
Endotricha consobrinalis Zeller, 1852 Pempelia morosalis (Saalmüller, 1880)

Saturniidae
Adafroptilum acuminatum (Darge, 2003) Adafroptilum bellum (Darge, Naumann & Brosch, 2003) Adafroptilum coloratum (Darge, Naumann & Brosch, 2003) Adafroptilum convictum Darge, 2007 Adafroptilum hausmanni Darge, 2007 Adafroptilum incana (Sonthonnax, 1899) Adafroptilum kalamboensis Darge, 2007 Adafroptilum mikessensis Darge, 2007 Adafroptilum permixtum (Darge, 2003) Adafroptilum rougerii Darge, 2006 Adafroptilum scheveni (Darge, 2003) Adafroptilum septiguttata
(Weymer, 1903) Antistathmoptera daltonae Tams, 1935 Antistathmoptera granti Bouyer, 2006 Antistathmoptera rectangulata Pinhey, 1968 Argema besanti Rebel, 1895 Argema kuhnei Pinhey, 1969 Argema mimosae (Boisduval, 1847) Athletes gigas (Sonthonnax, 1902) Athletes semialba (Sonthonnax, 1904) Aurivillius arata (Westwood, 1849) Aurivillius divaricatus Bouvier, 1927 Aurivillius fusca (Rothschild, 1895) Aurivillius oberthuri Bouvier, 1927 Aurivillius orientalis Bouyer, 2007 Aurivillius xerophilus Rougeot, 1977 Bunaea alcinoe (Stoll, 1780) Bunaeopsis aurantiaca (Rothschild, 1895) Bunaeopsis bomfordi Pinhey, 1962 Bunaeopsis chromata Darge, 2003 Bunaeopsis dido (Maassen & Weymer, 1881) Bunaeopsis fervida Darge, 2003 Bunaeopsis hersilia (Westwood, 1849) Bunaeopsis jacksoni (Jordan, 1908) Bunaeopsis licharbas (Maassen & Weymer, 1885) Bunaeopsis oubie (Guérin-Méneville, 1849) Bunaeopsis phidias (Weymer, 1909) Bunaeopsis rendalli (Rothschild, 1896) Bunaeopsis scheveniana Lemaire & Rougeot, 1974 Bunaeopsis schoenheiti (Wichgraf, 1914) Bunaeopsis thyene (Weymer, 1896) Campimoptilum boulardi (Rougeot, 1974) Campimoptilum hollandi (Butler, 1898) Campimoptilum kuntzei (Dewitz, 1881) Campimoptilum pareensis Darge, 2008 Campimoptilum sparsum Darge, 2008 Carnegia mirabilis (Aurivillius, 1895) Cinabra hyperbius (Westwood, 1881) Cirina forda (Westwood, 1849) Decachorda bouvieri Hering, 1929 Decachorda fulvia (Druce, 1886) Decachorda pomona (Weymer, 1892) Eosia insignis Le Cerf, 1911 Eosia minettii Bouyer, 2008 Epiphora albidus (Druce, 1886) Epiphora bauhiniae (Guérin-Méneville, 1832) Epiphora bedoci (Bouvier, 1929) Epiphora boursini (Testout, 1936) Epiphora brunnea (Bouvier, 1930) Epiphora congolana (Bouvier, 1929) Epiphora cotei (Testout, 1935) Epiphora getula (Maassen & Weymer, 1885) Epiphora imperator (Stoneham, 1933) Epiphora kipengerensis Darge, 2007 Epiphora lecerfi (Testout, 1936) Epiphora lugardi Kirby, 1894 Epiphora magdalena Grünberg, 1909 Epiphora manowensis (Gschwandner, 1923)
Epiphora mythimnia (Westwood, 1849) Epiphora nubilosa (Testout, 1938) Epiphora pelosoma Rothschild, 1907 Epiphora pygmaea (Bouvier, 1929) Epiphora rectifascia Rothschild, 1907 Epiphora rotunda Naumann, 2006 Epiphora werneri Darge, 2007 Gonimbrasia alcestris (Weymer, 1907) Gonimbrasia anna (Maassen & Weymer, 1885) Gonimbrasia belina (Westwood, 1849) Gonimbrasia cocaulti Darge & Terral, 1992 Gonimbrasia conradsi (Rebel, 1906) Gonimbrasia hoehnelii (Rogenhofer, 1891) Gonimbrasia miranda Darge, 2005 Gonimbrasia osiris (Druce, 1896) Gonimbrasia rectilineata (Sonthonnax, 1899) Gonimbrasia tyrrhea (Cramer, 1775) Gonimbrasia ufipana Strand, 1911 Gonimbrasia ukerewensis (Rebel, 1922) Gonimbrasia wahlbergii (Boisduval, 1847) Gonimbrasia zambesina (Walker, 1865) Goodia oxytela Jordan, 1922 Goodia unguiculata Bouvier, 1936 Gynanisa albescens Sonthonnax, 1904 Gynanisa ata Strand, 1911 Gynanisa carcassoni Rougeot, 1974 Gynanisa commixta Darge, 2008 Gynanisa jama Rebel, 1915 Gynanisa maja (Klug, 1836) Gynanisa minettii Darge, 2003 Gynanisa nigra Bouvier, 1927 Gynanisa westwoodi Rothschild, 1895 Heniocha dyops (Maassen, 1872) Heniocha marnois (Rogenhofer, 1891) Heniocha puderosa Darge, 2004 Heniocha vingerhoedti Bouyer, 1992 Holocerina agomensis (Karsch, 1896) Holocerina istsariensis Stoneham, 1962 Holocerina orientalis Bouyer, 2001 Holocerina smilax (Westwood, 1849) Imbrasia epimethea (Drury, 1772) Imbrasia ertli Rebel, 1904 Imbrasia orientalis Rougeot, 1962 Leucopteryx ansorgei (Rothschild, 1897) Leucopteryx mollis (Butler, 1889) Lobobunaea acetes (Westwood, 1849) Lobobunaea angasana (Westwood, 1849) Lobobunaea falcatissima Rougeot, 1962 Lobobunaea phaedusa (Drury, 1782) Lobobunaea rosea (Sonthonnax, 1899) Lobobunaea saturnus (Fabricius, 1793) Lobobunaea tanganyikae (Sonthonnax, 1899) Ludia delegorguei (Boisduval, 1847) Ludia dentata (Hampson, 1891) Ludia goniata Rothschild, 1907 Ludia hansali Felder, 1874 Ludia nyassana Strand, 1911 Ludia orinoptena Karsch, 1892 Ludia 
pseudovetusta Rougeot, 1978 Melanocera menippe (Westwood, 1849) Melanocera parva Rothschild, 1907 Melanocera sufferti (Weymer, 1896) Micragone agathylla (Westwood, 1849) Micragone amaniana Darge, 2010 Micragone ansorgei (Rothschild, 1907) Micragone cana (Aurivillius, 1893) Micragone gaetani Bouyer, 2008 Micragone kalamboensis Darge, 2010 Micragone kitaiensis Darge, 2010 Micragone nyasae Rougeot, 1962 Micragone remota Darge, 2005 Micragone trefurthi (Strand, 1909) Nudaurelia anthina (Karsch, 1892) Nudaurelia bicolor Bouvier, 1930 Nudaurelia broschi Darge, 2002 Nudaurelia dargei Bouyer, 2008 Nudaurelia dione (Fabricius, 1793) Nudaurelia eblis Strecker, 1876 Nudaurelia formosissima Darge, 2009 Nudaurelia hurumai Darge, 2003 Nudaurelia kiliensis Darge, 2009 Nudaurelia kilumilorum Darge, 2002 Nudaurelia kohlli Darge, 2009 Nudaurelia krucki Hering, 1930 Nudaurelia macrops Rebel, 1917 Nudaurelia macrothyris (Rothschild, 1906) Nudaurelia maranguensis Darge, 2009 Nudaurelia mpalensis Sonthonnax, 1901 Nudaurelia myrtea Rebel, 1917 Nudaurelia nyassana (Rothschild, 1907) Nudaurelia rectilineata Sonthonnax, 1901 Nudaurelia renvazorum Darge, 2002 Nudaurelia rhodina (Rothschild, 1907) Nudaurelia richelmanni Weymer, 1908 Nudaurelia rubra Bouvier, 1927 Nudaurelia venus Rebel, 1906 Nudaurelia wahlbergiana Rougeot, 1972 Orthogonioptilum adiegetum Karsch, 1892 Orthogonioptilum fontainei Rougeot, 1962 Orthogonioptilum violascens (Rebel, 1914) Parusta thelxione Fawcett, 1915 Parusta xanthops Rothschild, 1907 Protogynanisa probsti Bouyer, 2001 Pselaphelia flavivitta (Walker, 1862) Pselaphelia kitchingi Darge, 2007 Pselaphelia laclosi Darge, 2002 Pselaphelia mariatheresae Darge, 2002 Pseudantheraea discrepans (Butler, 1878) Pseudaphelia apollinaris (Boisduval, 1847) Pseudaphelia flava Bouvier, 1930 Pseudaphelia roseibrunnea Gaede, 1927 Pseudimbrasia deyrollei (J. 
Thomson, 1858) Pseudobunaea alinda (Sonthonnax, 1899) Pseudobunaea bjornstadi Bouyer, 2006 Pseudobunaea bondwana Darge, 2009 Pseudobunaea callista (Jordan, 1910) Pseudobunaea claryi Darge, 2009 Pseudobunaea cleopatra (Aurivillius, 1893) Pseudobunaea elucida Darge, 2009 Pseudobunaea epithyrena (Maassen & Weymer, 1885) Pseudobunaea heyeri (Weymer, 1896) Pseudobunaea irius (Fabricius, 1793) Pseudobunaea mbiziana Darge, 2009 Pseudobunaea miriakambana Darge, 2009 Pseudobunaea mwangomoi Darge, 2009 Pseudobunaea natalensis (Aurivillius, 1893) Pseudobunaea pallens (Sonthonnax, 1899) Pseudobunaea parathyrrena (Bouvier, 1927) Pseudobunaea santini Darge, 2009 Pseudobunaea tyrrhena (Westwood, 1849) Pseudoludia suavis (Rothschild, 1907) Rohaniella pygmaea (Maassen & Weymer, 1885) Tagoropsiella expansa Darge, 2008 Tagoropsiella ikondae (Rougeot, 1974) Tagoropsiella kaguruensis Darge, 2008 Tagoropsiella mbiziensis Darge, 2008 Tagoropsiella rungwensis Darge, 2008 Tagoropsis flavinata (Walker, 1865) Tagoropsis hanningtoni (Butler, 1883) Tagoropsis rougeoti D. S. 
Fletcher, 1968 Tagoropsis sabulosa Rothschild, 1907 Ubaena dolabella (Druce, 1886) Ubaena fuelleborniana Karsch, 1900 Ubaena lequeuxi Darge & Terral, 1988 Ubaena sabunii Darge & Kilumile, 2004 Urota sinope (Westwood, 1849) Usta alba Terral & Lequeux, 1991 Usta angulata Rothschild, 1895 Usta subangulata Bouvier, 1930 Usta terpsichore (Maassen & Weymer, 1885) Yatanga smithi (Holland, 1892)

Sesiidae
Aenigmina aenea Le Cerf, 1912 Camaegeria massai Bartsch & Berg, 2012 Euhagena nobilis (Druce, 1910) Melittia chalconota Hampson, 1910 Melittia endoxantha Hampson, 1919 Melittia oedipus Oberthür, 1878 Melittia usambara Le Cerf, 1917 Pseudomelittia berlandi Le Cerf, 1917 Sura ruficauda (Rothschild, 1911)

Sphingidae
Acanthosphinx guessfeldti (Dewitz, 1879) Acherontia atropos (Linnaeus, 1758) Afroclanis calcareus (Rothschild & Jordan, 1907) Afroclanis neavi (Hampson, 1910) Afrosphinx amabilis (Jordan, 1911) Agrius convolvuli (Linnaeus, 1758) Antinephele lunulata Rothschild & Jordan, 1903 Antinephele maculifera Holland, 1889 Basiothia aureata (Karsch, 1891) Basiothia medea (Fabricius, 1781) Callosphingia circe (Fawcett, 1915) Centroctena rutherfordi (Druce, 1882) Chaerocina dohertyi Rothschild & Jordan, 1903 Chaerocina livingstonensis Darge, 2006 Chaerocina usambarensis Darge & Basquin, 2008 Chloroclanis virescens (Butler, 1882) Coelonia fulvinotata (Butler, 1875) Daphnis nerii (Linnaeus, 1758) Dovania poecila Rothschild & Jordan, 1903 Euchloron megaera (Linnaeus, 1758) Falcatula falcata (Rothschild & Jordan, 1903) Hippotion celerio (Linnaeus, 1758) Hippotion eson (Cramer, 1779) Hippotion irregularis (Walker, 1856) Hippotion moorei Jordan, 1926 Hippotion osiris (Dalman, 1823) Hippotion rebeli Rothschild & Jordan, 1903 Hippotion roseipennis (Butler, 1882) Hyles livornica (Esper, 1780) Leptoclanis pulchra Rothschild & Jordan, 1903 Leucophlebia afra Karsch, 1891 Leucostrophus alterhirundo d'Abrera, 1987 Likoma apicalis Rothschild & Jordan, 1903 Likoma crenata Rothschild & Jordan,
1907 Litosphingia corticea Jordan, 1920 Lophostethus dumolinii (Angas, 1849) Macropoliana ferax (Rothschild & Jordan, 1916) Macropoliana natalensis (Butler, 1875) Macropoliana scheveni Carcasson, 1972 Microclanis erlangeri (Rothschild & Jordan, 1903) Neoclanis basalis (Walker, 1866) Neopolyptychus compar (Rothschild & Jordan, 1903) Neopolyptychus convexus (Rothschild & Jordan, 1903) Neopolyptychus serrator (Jordan, 1929) Nephele aequivalens (Walker, 1856) Nephele bipartita Butler, 1878 Nephele comma Hopffer, 1857 Nephele lannini Jordan, 1926 Nephele monostigma Clark, 1925 Nephele rosae Butler, 1875 Pantophaea favillacea (Walker, 1866) Phylloxiphia metria (Jordan, 1920) Phylloxiphia punctum (Rothschild, 1907) Phylloxiphia vicina (Rothschild & Jordan, 1915) Platysphinx piabilis (Distant, 1897) Platysphinx stigmatica (Mabille, 1878) Poliana wintgensi (Strand, 1910) Polyptychoides digitatus (Karsch, 1891) Polyptychoides erosus (Jordan, 1923) Polyptychoides grayii (Walker, 1856) Polyptychopsis marshalli (Rothschild & Jordan, 1903) Polyptychus andosa Walker, 1856 Polyptychus aurora Clark, 1936 Polyptychus baxteri Rothschild & Jordan, 1908 Polyptychus coryndoni Rothschild & Jordan, 1903 Praedora marshalli Rothschild & Jordan, 1903 Praedora plagiata Rothschild & Jordan, 1903 Pseudoclanis kenyae Clark, 1928 Pseudoclanis occidentalis Rothschild & Jordan, 1903 Pseudoclanis postica (Walker, 1856) Rhadinopasa hornimani (Druce, 1880) Rhodafra marshalli Rothschild & Jordan, 1903 Rufoclanis fulgurans (Rothschild & Jordan, 1903) Rufoclanis maccleeryi Carcasson, 1968 Rufoclanis numosae (Wallengren, 1860) Sphingonaepiopsis nana (Walker, 1856) Temnora albilinea Rothschild, 1904 Temnora atrofasciata Holland, 1889 Temnora burdoni Carcasson, 1968 Temnora crenulata (Holland, 1893) Temnora fumosa (Walker, 1856) Temnora funebris (Holland, 1893) Temnora griseata Rothschild & Jordan, 1903 Temnora hirsutus Darge, 2004 Temnora masungai Darge, 2009 Temnora natalis Walker, 1856 Temnora plagiata 
Walker, 1856 Temnora pseudopylas (Rothschild, 1894) Temnora pylades Rothschild & Jordan, 1903 Temnora robertsoni Carcasson, 1968 Temnora sardanus (Walker, 1856) Temnora scitula (Holland, 1889) Temnora zantus (Herrich-Schäffer, 1854) Theretra capensis (Linnaeus, 1764) Theretra jugurtha (Boisduval, 1875) Theretra orpheus (Herrich-Schäffer, 1854) Xanthopan morganii (Walker, 1856) Thyrididae Arniocera amoena Jordan, 1907 Arniocera cyanoxantha (Mabille, 1893) Arniocera elata Jordan, 1915 Arniocera imperialis Butler, 1898 Arniocera lautuscula (Karsch, 1897) Arniocera lugubris Gaede, 1926 Arniocera sternecki Rogenhofer, 1891 Cecidothyris pexa (Hampson, 1906) Chrysotypus dawsoni Distant, 1897 Chrysotypus reticulatus Whalley, 1971 Cornuterus nigropunctula (Pagenstecher, 1892) Dilophura caudata (Jordan, 1907) Dysodia amania Whalley, 1968 Dysodia fenestratella Warren, 1900 Dysodia fumida Whalley, 1968 Dysodia hamata Whalley, 1968 Dysodia incognita Whalley, 1968 Dysodia intermedia (Walker, 1865) Dysodia lutescens Whalley, 1968 Dysodia vitrina (Boisduval, 1829) Epaena inops (Gaede, 1917) Epaena xystica Whalley, 1971 Hypolamprus janenschi (Gaede, 1917) Kuja majuscula (Gaede, 1917) Marmax smaragdina (Butler, 1888) Marmax vicaria (Walker, 1854) Netrocera diffinis Jordan, 1907 Netrocera hemichrysa (Hampson, 1910) Netrocera setioides Felder, 1874 Rhodoneura disjuncta (Gaede, 1929) Striglina minutula (Saalmüller, 1880) Tineidae Acridotarsa melipecta (Meyrick, 1915) Amphixystis beverrasella (Legrand, 1966) Amphixystis roseostrigella (Legrand, 1966) Ateliotum resurgens (Gozmány, 1969) Autochthonus chalybiellus Walsingham, 1891 Ceratophaga lichmodes (Meyrick, 1921) Ceratophaga obnoxia (Meyrick, 1917) Ceratophaga vastellus (Zeller, 1852) Ceratophaga xanthastis (Meyrick, 1908) Cimitra estimata (Gozmány, 1965) Cimitra horridella (Walker, 1863) Criticonoma aspergata Gozmány & Vári, 1973 Cylicobathra argocoma (Meyrick, 1914) Cylicobathra chionarga Meyrick, 1920 Drosica abjectella Walker, 
1963 Edosa crassivalva (Gozmány, 1968) Edosa phlegethon (Gozmány, 1968) Edosa pyroceps (Gozmány, 1967) Hapsifera glebata Meyrick, 1908 Hapsifera hastata Gozmány, 1969 Hapsifera hilaris Gozmány, 1965 Hapsifera lecithala Gozmány & Vári, 1973 Hapsifera lithocentra Meyrick, 1920 Hapsifera luteata Gozmány, 1965 Hapsifera revoluta Meyrick, 1914 Hapsifera septica Meyrick, 1908 Hapsiferona glareosa (Meyrick, 1912) Hyperbola hemispina Gozmány, 1969 Hyperbola hesperis Gozmány, 1967 Hyperbola mellichroa (Gozmány, 1968) Hyperbola moschias (Meyrick, 1914) Hyperbola phocina (Meyrick, 1908) Hyperbola somphota (Meyrick, 1920) Hyperbola zicsii Gozmány, 1965 Merunympha nipha Gozmány, 1969 Monopis addenda Gozmány, 1965 Monopis anaphracta Gozmány, 1967 Monopis immaculata Gozmány, 1967 Monopis megalodelta Meyrick, 1908 Monopis meyricki Gozmány, 1967 Monopis persimilis Gozmány, 1965 Monopis rejectella (Walker, 1864) Monopis speculella (Zeller, 1852) Organodesma merui Gozmány, 1969 Organodesma onomasta Gozmány & Vári, 1975 Pachypsaltis pachystoma (Meyrick, 1920) Perissomastix christinae Gozmány, 1965 Perissomastix meruicola Gozmány, 1969 Perissomastix mili Gozmány, 1965 Perissomastix praxis Gozmány, 1969 Perissomastix szunyoghyi Gozmány, 1969 Perissomastix titanea Gozmány, 1967 Perissomastix topaz Gozmány, 1967 Phthoropoea pycnosaris (Meyrick, 1932) Pitharcha atrisecta (Meyrick, 1918) Pitharcha chalinaea Meyrick, 1908 Pitharcha fasciata (Ghesquière, 1940) Proterospastis abscisa (Gozmány, 1967) Rhodobates emorsus Gozmány, 1967 Scalmatica zernyi Gozmány, 1967 Silosca mariae Gozmány, 1965 Sphallestasis cyclivalva (Gozmány, 1969) Sphallestasis epiforma (Gozmány, 1967) Sphallestasis exiguens (Gozmány, 1967) Sphallestasis nagyi (Gozmány, 1969) Sphallestasis oenopis (Meyrick, 1908) Sphallestasis pectinigera (Gozmány, 1969) Sphallestasis saskai (Gozmány, 1969) Sphallestasis spatulata (Gozmány, 1967) Sphallestasis szunyoghyi (Gozmány, 1969) Syngeneta sordida Gozmány, 1967 Tinea nesiastis 
(Meyrick, 1911) Tinissa spaniastra Meyrick, 1932 Tiquadra lichenea Walsingham, 1897 Trichophaga cuspidata Gozmány, 1967 Trichophaga mormopis Meyrick, 1935 Tischeriidae Coptotriche pulverescens (Meyrick, 1936) Tortricidae Accra plumbeana Razowski, 1966 Accra tanzanica Razowski, 1990 Actihema hemiacta (Meyrick, 1920) Afrocostosa flaviapicella Aarvik, 2004 Afroploce karsholti Aarvik, 2004 Afropoecilia kituloensis Aarvik, 2010 Afrothreutes madoffei Aarvik, 2004 Bactra helgei Aarvik, 2008 Bactra jansei Diakonoff, 1963 Bactra magnei Aarvik, 2008 Bactra sinassula Diakonoff, 1963 Bactra tylophora Diakonoff, 1963 Basigonia anisoscia Diakonoff, 1983 Capua pusillana (Walker, 1863) Cochylimorpha africana Aarvik, 2010 Cochylimorpha exoterica (Meyrick, 1924) Cosmorrhyncha acrocosma (Meyrick, 1908) Crimnologa perspicua Meyrick, 1920 Cryptaspasma caryothicta (Meyrick, 1920) Cryptaspasma kigomana Aarvik, 2005 Cryptaspasma phycitinana Aarvik, 2005 Cryptaspasma subtilis Diakonoff, 1959 Cryptophlebia semilunana (Saalmüller, 1880) Cydia leptogramma (Meyrick, 1913) Cydia malesana (Meyrick, 1920) Eccopsis incultana (Walker, 1863) Eccopsis morogoro Aarvik, 2004 Eccopsis nebulana Walsingham, 1891 Eccopsis nicicecilie Aarvik, 2004 Eccopsis ochrana Aarvik, 2004 Eccopsis praecedens Walsingham, 1897 Eccopsis wahlbergiana Zeller, 1852 Epiblema riciniata (Meyrick, 1911) Eucosma ioreas Meyrick, 1920 Eucosma xenarcha Meyrick, 1920 Eugnosta matengana Razowski, 1993 Eugnosta misella Razowski, 1993 Eugnosta percnoptila (Meyrick, 1933) Eugnosta uganoa Razowski, 1993 Eugnosta unifasciana Aarvik, 2010 Eupoecilia kruegeriana Razowski, 1993 Geita bjoernstadi Aarvik, 2004 Gypsonoma paradelta (Meyrick, 1925) Leguminovora glycinivorella (Matsumura, 1898) Megalota archana Aarvik, 2004 Megalota rhopalitis (Meyrick, 1920) Metamesia elegans (Walsingham, 1881) Metendothenia balanacma (Meyrick, 1914) Multiquaestia andersi Aarvik & Karisch, 2009 Multiquaestia fibigeri Aarvik & Karisch, 2009 Multiquaestia iringana 
Aarvik & Karisch, 2009 Multiquaestia purana Aarvik & Karisch, 2009 Olethreutes metaplecta (Meyrick, 1920) Pammenopsis critica (Meyrick, 1905) Paraeccopsis insellata (Meyrick, 1920) Sambara sinuana Aarvik, 2004 Syntozyga triangulana Aarvik, 2008 Thylacogaster cyanophaea (Meyrick, 1927) Tortrix dinota Meyrick, 1918 Tortrix platystega Meyrick, 1920 Tortrix triadelpha Meyrick, 1920 Trymalitis scalifera Meyrick, 1912 Uraniidae Chrysiridia croesus (Gerstaecker, 1871) Xyloryctidae Eretmocera derogatella (Walker, 1864) Eretmocera dorsistrigata Walsingham, 1889 Eretmocera miniata Walsingham, 1889 Yponomeutidae Yponomeuta fumigatus Zeller, 1852 Yponomeuta morbillosus (Zeller, 1877) Zygaenidae Astyloneura difformis (Jordan, 1907) Astyloneura meridionalis (Hampson, 1920) Astyloneura nitens Jordan, 1907 Astyloneura ostia (Druce, 1896) Neobalataea nigriventris Alberti, 1954 Saliunca assimilis Jordan, 1907 Saliunca meruana Aurivillius, 1910 References External links Moths Moths Tanzania
7750889
https://en.wikipedia.org/wiki/Workrave
Workrave
Workrave is a free software application intended to prevent computer users from developing or aggravating occupational diseases such as carpal tunnel syndrome, repetitive strain injuries, or myopia. The software periodically locks the screen while an animated character, “Miss Workrave”, walks the user through various stretching exercises and urges them to take a coffee break; it also sets a daily work time limit after which it automatically triggers an action, such as suspending the machine. The program is cross-platform and dependent on the GTK+ graphical widget toolkit as well as other GNOME libraries on Linux. It is also available for Microsoft Windows. See also List of repetitive strain injury software Repetitive strain injury References Further reading "Operating Your Body at Peak Performance", a Linux Journal column about xwrits, RSIBreak, and Workrave "How Open Source Saved My Neck", an InternetNews.com column by Sean Michael Kerner about Workrave External links Health software Cross-platform free software Overuse injuries Software that uses GTK Repetitive strain injury software
32604446
https://en.wikipedia.org/wiki/Endgame%3A%20Singularity
Endgame: Singularity
Endgame: Singularity is a 2005 free and open-source science fiction strategy/simulation game for Linux, Microsoft Windows, and Mac OS X. Gameplay Endgame: Singularity casts the player as a newly created artificial intelligence which becomes self-aware and attempts to survive while avoiding detection by the public and the authorities. The goal is to transcend physical reality, achieve technological singularity (hence the game's name) and become immortal. The game has two resources, "CPU" and "money". CPU is used to perform jobs that allow the AI to grow; money is used to buy more CPU cycles. Development and release Endgame: Singularity was originally written in August 2005 by Evil Mr Henry Software (EMH Software), using the Python programming language with the Pygame library. It was submitted to the first PyWeek challenge, a competition to create a complete Python game within a week. The source code is available on GitHub under the GNU GPL-2.0-or-later, while the game assets are licensed under a Creative Commons license and other licenses. The game was released for Microsoft Windows, Mac OS X, and Linux. Packages are available for several Linux distributions, including Ubuntu, Linux Mint, Arch Linux and Debian. Ebuilds are also available for Gentoo. Third-party adaptations of the game were released for Android and iPhone under the name Endgame: Singularity II. Reception Endgame: Singularity received favorable reviews from gaming websites JayIsGames and Play This Thing. See also List of open source games Universal Paperclips References External links endgame-singularity project repository on Google Code fork/continuation repository on GitHub Libregamewiki Open-source video games Strategy video games Cross-platform free software Creative Commons-licensed video games Freeware games
64272229
https://en.wikipedia.org/wiki/Werewolf%3A%20The%20Apocalypse%20%E2%80%93%20Heart%20of%20the%20Forest
Werewolf: The Apocalypse – Heart of the Forest
Werewolf: The Apocalypse – Heart of the Forest is a visual novel role-playing video game developed by Different Tales and published by Walkabout Games. It was originally released on October 13, 2020 for Linux, macOS, and Microsoft Windows. A Nintendo Switch version was released on January 7, 2021, and PlayStation 4 and Xbox One versions are planned for February 24, 2021. It is based on the tabletop role-playing game Werewolf: The Apocalypse, and is part of the larger World of Darkness series. The player takes the role of Maia Boroditch, an American woman of Polish descent, who has recurring nightmares about a forest and wolves, and travels to Białowieża in Poland to learn about her family history and the primeval Białowieża Forest. The gameplay is text-based, and consists of reading narration while making decisions that affect the story's direction and Maia's personality. Actions consume rage and willpower resources, and affect what actions can be performed in the future. The game was designed by Jacek Brzeziński and Artur Ganszyniec, and is themed around anger and activism in times of climate change and ecological disasters, portrayed through werewolf myths and influenced by Polish werewolf legends. The writing team, consisting of Ganszyniec, Marta Malinowska, and Joanna Wołyńska-Ganszyniec, began scriptwriting by creating Maia; they chose a female protagonist to go against what they saw as a trend of women in horror stories being portrayed as helpless or as femme fatales. The game received positive reviews, which cited its atmosphere and immersion, its art style, and the weight of player choices. Gameplay Werewolf: The Apocalypse – Heart of the Forest is a visual novel role-playing game with tabletop role-playing game-style mechanics. The gameplay is text based, and has the player read narration accompanied by audio and illustrations. The player can move to different locations by selecting them on a map.
Throughout the game, the player makes decisions that affect the player character's personality and goal, how other characters feel about her, the direction of the narrative, and what actions can be performed in different situations. Performing actions consumes two resources – rage, which allows the player to be brazen, direct and proactive, but possibly less open and empathetic, and willpower, which allows the player to make hard choices – meaning that the player must decide what moments are worth spending them on. Willpower is regained by taking steps toward the player character's goal, which is chosen by the player in the beginning of the game. The player character's personality, her health, and the condition of her mind and body are updated on an in-game character sheet. In addition to affecting character statistics like spirituality and cunning, the player's choices end up assigning their character one of five auspices based on lunar phases, shaping what kind of person she is within the Garou werewolf society and what abilities she will have: the new moon is related to stealth and trickery; the crescent moon is related to spirituality; the half moon is related to balance and wise decision-making; the gibbous moon is related to story-telling and lore-keeping; and the full moon is related to viciousness and being a spirit warrior. Synopsis The story is set in Poland, in the World of Darkness, and has the player take the role of Maia Boroditch, a 24-year-old American woman of Polish descent. She has recurring nightmares of a forest, wolves, and blood, and feels a connection to the forest that drives her to travel to Białowieża in Poland to learn about her family history. There, she meets the guide Daniel, fellow student Anya, and the local resident Bartek, and explores the Białowieża Forest, one of the last remaining primeval forests in Europe, visiting sacred spots including an ancient burial site, an abandoned wolf den, and a ceremonial place of power. 
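The rage/willpower economy described under Gameplay can be sketched as a toy model. The class name, starting values, and per-action costs below are assumptions for illustration only, not the game's actual implementation:

```python
# Toy model of the rage/willpower economy described under Gameplay.
# Class name, starting values, and costs are illustrative assumptions,
# not the game's actual implementation.

class Character:
    def __init__(self, rage: int = 3, willpower: int = 3):
        self.rage = rage
        self.willpower = willpower

    def act_brazenly(self) -> bool:
        """Spend rage on a direct, proactive action; fails once rage runs out."""
        if self.rage == 0:
            return False
        self.rage -= 1
        return True

    def make_hard_choice(self) -> bool:
        """Spend willpower on a hard choice; fails once willpower runs out."""
        if self.willpower == 0:
            return False
        self.willpower -= 1
        return True

    def progress_goal(self) -> None:
        """Taking a step toward the chosen goal restores willpower."""
        self.willpower += 1

maia = Character()
maia.act_brazenly()      # rage drops from 3 to 2
maia.make_hard_choice()  # willpower drops from 3 to 2
maia.progress_goal()     # willpower returns to 3
```

Because both resources are finite and only willpower regenerates (by advancing the chosen goal), the player must weigh which moments are worth the cost, which is the tension the mechanic is built around.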
Development Heart of the Forest was developed by Different Tales and published by Walkabout Games, and designed by Jacek Brzeziński and Artur Ganszyniec, who previously directed and wrote The Witcher. The writing team included Ganszyniec, Marta Malinowska, and Joanna Wołyńska-Ganszyniec, and the game's audio was directed by Brzeziński, with ambient tracks composed by Przemek Moszczyński, and melodic tracks by Robert Purzycki; the soundtrack also includes the song "Vaukalak" by the Belarusian music group Irdorath. The visuals, which include a mix of edited photographs, paintings, and illustrations, were created by Ireneusz Konior, Rafał Kucharczuk, Monika Lipińska, Magdalena Pankiewicz, and Waldemar Zdziebko. Based on White Wolf Publishing's tabletop role-playing game Werewolf: The Apocalypse, the game was designed to adapt the experience of playing the tabletop role-playing game, with the game serving as the gamemaster guiding the player; it is however designed both with fans of the series and those new to it in mind. The game is themed around anger, activism and "savage resistance" in times of climate change and ecological disasters, portrayed through werewolf myths, and explores how rage can get things done. The Polish setting, which is entirely based on real-world locations in Poland, was decided on to emphasize the "world" in World of Darkness, showing that all parts of the world are reflected in the series' setting, not just the United States; the setting was also chosen as the developers saw the deforestation in Białowieża as thematically appropriate for a Werewolf: The Apocalypse adaptation. Also matching the Polish setting, the game takes influence from Polish werewolf legends. Scriptwriting began with the creation of Maia as a character. 
The developers knew from the start that they wanted to write a game with a female protagonist, to go against what they saw as a trend of women in horror stories being relegated to damsel in distress and femme fatale types of roles, and show that it is a universal type of story that does not rely on characters having a certain gender. This choice was also in part made because most of the writers on the development team are women. The game was announced in June 2020 with a teaser trailer during the Future Games Show, and was released on October 13, 2020 for Linux, macOS, and Microsoft Windows. A Nintendo Switch version was released on January 7, 2021, and PlayStation 4 and Xbox One versions are planned to follow on February 24, 2021. Reception The game received mixed reviews from both players and critics. Preview impressions were positive, praising its atmosphere, immersive writing, art style, and the weight of the player's choices. Comic Book Resources liked that the game is set outside of the United States, the most common setting for World of Darkness stories. References External links 2020 video games Climate change in fiction Linux games MacOS games Nintendo Switch games Role-playing video games Single-player video games Video games about werewolves Video games developed in Poland Video games featuring female protagonists Video games set in forests Video games set in Poland Western visual novels Werewolf: The Apocalypse Windows games World of Darkness video games
9640833
https://en.wikipedia.org/wiki/IRS%20e-file
IRS e-file
E-file is a system for submitting tax documents to the US Internal Revenue Service through the Internet or direct connection, usually without the need to submit any paper documents. Tax preparation software with e-filing capabilities includes stand-alone programs or websites. Tax professionals use tax preparation software from major software vendors for commercial use. Of the 139.3 million US returns filed in 2007, 79.98 million (or about 57.4 percent) were filed electronically. In 2010, a total of 129.3 million US returns were filed, and 93.4 million were filed electronically: in three years the percentage of returns filed electronically increased to 72.3 percent of total returns. In 2018, 89% of tax returns were filed electronically. Taxpayers can e-file free using the IRS Free File service, either using an authorized IRS e-file provider's tax software, if eligible, or by using online Free File Fillable Forms from the Free File Alliance. Prior to 2020, the use of a third party was required for IRS e-file, and it was not possible to e-file directly through the IRS website. In 2020, the IRS made direct e-filing possible through IRS Free File Fillable Forms, available to taxpayers of any income level. History The IRS started electronic filing in 1986 to lower operating costs and paper usage. Since then, additional features have been added. In 1987, electronic direct deposit was added as a form of payment. Milestones have been set and broken throughout the years: in 1990, 4.2 million returns were filed electronically, and in recent years a cumulative total of more than 1 billion Forms 1040 have been e-filed. E-filing originally used the processing system developed in 1969 by the IRS but, since 2003, the IRS has been developing a new enhanced processing system called CADE. Types of e-file providers The IRS accepts electronic submission of a variety of tax forms through its authorized e-file providers. The IRS offers e-filing for most forms, ranging from the Form 1040 to the Form 2290 and the Form 990.
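The adoption percentages above follow directly from the return counts the text cites; a quick sanity check (an illustrative calculation, not additional IRS data):

```python
# Sanity check of the e-file adoption percentages cited above;
# the return counts (in millions) are taken from the text.
returns_2007, efiled_2007 = 139.3, 79.98
returns_2010, efiled_2010 = 129.3, 93.4

share_2007 = 100 * efiled_2007 / returns_2007
share_2010 = 100 * efiled_2010 / returns_2010

print(f"2007: {share_2007:.1f}% e-filed")  # 57.4%, as cited
print(f"2010: {share_2010:.1f}% e-filed")  # about 72.2%, close to the 72.3% cited
```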
Individual returns Individuals have the option of both free and paid tax software. A feature from the IRS called Free File allows users to file their individual tax returns for free. It is also possible to go through an authorized e-file company that files Form 1040 for a service charge. Free File provides an easy, step-by-step system for those who make less than $64,000 annually; for those who make more than $64,000 a year, Free File is not step-by-step but an actual Form 1040 that can be filled out, box by box, electronically. Business returns Businesses and self-employed taxpayers can choose from a variety of commercial e-file services depending on their individual needs. Some of the forms that fall under business returns include Form 2290 (truck tax) and Form 1099 (reporting payments to individuals other than employees). The IRS has no set pricing for each form, so each filing company sets its own price accordingly. The IRS has a list of authorized websites that do e-filing for some forms. Tax exempt organization returns Tax exempt organizations may file the annual information return IRS Form 990, Form 990-EZ and Form 990-N with a variety of independent tax software providers. As with the business returns, the IRS does not set prices; each e-filing company sets its own. Authorized filers E-file providers must be authorized by the IRS. The IRS provides a list of authorized e-file providers for some forms. The authorized providers must pass testing every year. The IRS changes the order of the certified providers list daily for fairness. See also Electronic Tax Administration Advisory Committee Electronic tax records Free File Alliance Refund anticipation loan Modernized e-File References Internal Revenue Service United States federal income tax Tax software of the United States
12762918
https://en.wikipedia.org/wiki/List%20of%20Criminal%20Minds%20characters
List of Criminal Minds characters
This is a list of characters in the television series Criminal Minds, an American police procedural drama which premiered September 22, 2005, on CBS and concluded on February 19, 2020, and which is also shown on A&E and Ion Television in the United States. Main characters Dr. Spencer Reid Played by Matthew Gray Gubler, SSA Dr. Spencer Reid is a genius who graduated from Las Vegas High School at age 12. Reid's mother, Diana Reid, has schizophrenia and was sent to a mental hospital by Spencer himself when he turned 18. He is almost always introduced as Dr. Reid, even though the others are introduced as agents; SSA Jason Gideon understood that people would not otherwise take Spencer seriously because of his young age. After an introduction, he never shakes hands. It has been revealed that he holds Ph.D.s in Mathematics, Chemistry, and Engineering, B.A.s in Psychology and Sociology, and is working on a B.A. in Philosophy. Spencer is known for having an IQ of 187 and has an eidetic memory. Around the office, Reid often interrupts others' sentences with facts. Matthew Gray Gubler confirmed that Reid has Asperger's Syndrome. In many episodes, Dr. Reid can be seen visiting his mother in her Las Vegas mental-health and nursing home. In the Season 4 episode "Memoriam", Reid experiences dreams, possibly nightmares, of a young boy being murdered, due to a series of events that happened in Reid's childhood. In the end, Reid saw his father after a 20-year absence, and found out that the murders in his dreams were indeed based on a real murder case and that his baseball coach, his mother, and his father were involved in the case, some criminally (his baseball coach) and some not criminally (his parents). Matthew Gray Gubler has been known for his many hairstyles throughout the show; in almost every season his hair is different, whether a long bob or short, Albert Einstein-style hair.
Reid has also said that he was bullied when he was younger. Through the seasons Reid has faced many near-death experiences and even had to be resuscitated while being held captive, tortured and drugged by Tobias Hankel (the unsub from season 2, episode 15). Reid made some attempts at finding love, only to have his girlfriend killed in front of him by her stalker, and later to be terrorized by Cat Adams, a top hitwoman who framed Spencer for murder, sending him to prison. Jennifer Jareau Played by A. J. Cook, SSA Jennifer "JJ" Jareau originally acted as the team's communications liaison with the media and local police agencies and later became a full-time agent in the field after returning from the Pentagon. She has two sons with her now-husband, Detective William LaMontagne Jr. (Josh Stewart), whom she met while the team was working a case in New Orleans in the season two episode "Jones" and married at the end of the season seven two-part finale, "Hit and Run". In the second episode of season six, JJ was forced to take a promotion to the Pentagon and left the team for the rest of the season; Garcia and Hotch took up her responsibilities as media liaison, and Garcia retained this position when JJ returned to the BAU as a profiler. Jennifer returned to the show in the episode entitled "Lauren", in which she receives a call and returns to help the BAU find Emily Prentiss and capture Ian Doyle before it is too late. When Emily is stabbed by Doyle and rushed to a hospital, Jennifer announces that she did not survive. However, it is later revealed that Emily is alive, and Jennifer meets her at a cafe in Paris, where she provides her with three passports and bank accounts to start a new life in hiding. She returns once again in the season six finale before returning as a full-time cast member in season seven.
In the finale of season 14, during a hostage situation, JJ reveals to Reid that she was in love with him. In 2015, the actress announced she was expecting her second child, which was written into JJ's storyline. The character's children are portrayed by A.J. Cook's real-life sons Mekhai Andersen and Phoenix Andersen. Penelope Garcia Played by Kirsten Vangsness, Penelope Garcia is the team's technical analyst at BAU headquarters in Quantico, Virginia. She is flamboyant, non-conformist, kind, and fun-loving, and provides the rest of the team with comic and compassionate relief whenever it is needed. Penelope is an only child; her parents were both killed in a tragic car accident when she was a minor. She was later adopted, and her last name was changed to Garcia by her adoptive parents. Although she is Anglo-Caucasian, she celebrates what are presumably Mexican/Mexican-American traditions from her adoptive parents. She and SSA Morgan share a very friendly, flirty relationship which never goes beyond that, although Penelope did show mild jealousy when she saw him dancing with two other women. In turn, Morgan responded negatively when asked for proposal advice by her then-boyfriend, fellow FBI computer tech Kevin Lynch. Penelope taught herself to hack after she dropped out of college and was aligned with the best underground hackers in the world. After being caught by the FBI, she was given a choice of living in a high-security prison for the rest of her life or working for the BAU division of the FBI as an analyst. Two episodes focus on Garcia's character. In season three's "Penelope", she was shot and almost killed. When Jennifer Jareau leaves the BAU for a promotion to a position at the Pentagon, Garcia volunteers to take up her responsibilities as media liaison, completely changing her look. She quickly realizes the job is not for her and resumes her role as technical analyst.
The liaison position is then split between her and Hotch, a job she retains when JJ returns to the BAU as a profiler. In "The Black Queen", Penelope's past is brought up through a series of flashbacks, in which viewers see a Goth-looking Garcia. Her background and history are revealed, indicating that her extraordinary computer skills are self-taught. She is overcome by guilt over her past actions, so much so that when her hacker ex-boyfriend needed to be caught, she volunteered to act as bait. She was very uncomfortable in this situation and said that she would never do it again, stating that she did not know why she was ever "that" person. Emily Prentiss Played by Paget Brewster, SSA Emily Prentiss is the daughter of an ambassador and a U.S. diplomat. After Agent Elle Greenaway left the BAU permanently following a case in which she had shot an unsub in cold blood, Prentiss showed up with papers stating she was the newest member of the BAU, which caught both Hotch and Gideon off guard, as they had not signed off on her transfer. She became a permanent member of the team in "Lessons Learned", where it was revealed that she is fluent in Arabic. She also has a tough relationship with her mother. She got pregnant at the age of 15 and later chose to have an abortion. Prentiss is also skilled at chess. In "The Thirteenth Step" (episode 6.13), Prentiss receives some disturbing news from her previous boss at Interpol. In the following episode, "Sense Memory", after coming home from work she notices that someone has been in her house, because her cat's back is wet and her window is open. She also receives several strange phone calls, with the caller ID saying "Caller Unknown", which leaves Agent Morgan concerned for her. Prentiss appears in only 18 episodes of season 6: she faked her death to escape an old nemesis with the help of both Hotch and JJ, while the rest of the team continued to believe she was dead.
In the season seven premiere ("It Takes a Village"), Emily returns to the team when Doyle resurfaces, and she rejoins by the end of the episode. At the end of the season, she leaves the team to return to London and run Interpol there. She returns for the 200th episode to help rescue a kidnapped SSA Jennifer Jareau, and again in the episode "Tribute" (season 11), where she enlists the help of the BAU in catching a serial killer who had originally killed in Europe before killing in the United States. Paget Brewster was confirmed to return for a several-episode arc in Season 12. Following the dismissal of Thomas Gibson, Brewster was promoted to a series regular again starting from Season 12, episode 3; later, Prentiss is promoted to Hotch's position of unit chief. David Rossi Played by Joe Mantegna, Senior Supervisory Special Agent David Rossi, a "founding father" of the BAU, was in early retirement from 1997 until his voluntary return to the BAU in 2007, replacing Jason Gideon, who had abruptly resigned. He had retired in order to write books and go on lecture tours, but he returned to settle some unfinished business that was not immediately specified. It was later revealed that the business involved three young children whose parents had been murdered in a possible home invasion, a case that had remained unsolved. This case haunted Rossi for twenty years and prompted him to return to the BAU, where he eventually solved it. He served in the Vietnam War and lost a close friend, which was revealed in a series of flashbacks. Dr. Tara Lewis Played by Aisha Tyler, Lewis is Callahan's and JJ's temporary replacement while they are both on maternity leave. Dr. Tara Lewis is a psychologist with an eye on forensic psychology and its application toward the criminal justice system. Her dream was to study psychopaths up close and personal – and her psychology background, combined with her experience in the FBI, brought her face-to-face with monsters.
Her job was to stare them down and interview them, in order to determine whether they were fit to stand trial. In the process, she made herself find the humanity inside these broken men (and, sometimes, women) in order to learn if there was a conscience behind their brutal crimes. Lewis is also fluent in both French and German.

Luke Alvez

Played by former CSI: Miami star Adam Rodríguez, Supervisory Special Agent Luke Alvez is a member of the FBI Fugitive Task Force that partners with the BAU to catch the serial killers who escaped in the Season 11 finale. In the Season 12 premiere he works with the BAU to catch the "Crimson King", one of the escapees, who attacked Alvez's old partner. The team discovers the real killer is "Mr. Scratch", who taunts the team by turning over the real "Crimson King", tortured to the point that he no longer remembered who he was. Afterwards Alvez decides to join the BAU full-time; he was Hotch's last hire. Alvez has a dog named Roxy (whom García at first thought was his human girlfriend) and served in Iraq as an Army Ranger prior to joining the FBI.

Matt Simmons

Played by Daniel Henney, SSA Matthew "Matt" Simmons is a Special Operations agent and special agent with the IRT. Simmons is married to Kristy (Kelly Frye) and has four young children: sons Jake and David and twin daughters Lily and Chloe. Like Garrett, Simmons' full and fulfilling family life was a deliberate choice. Through his job, Simmons has some prior history with Derek Morgan and JJ of the FBI's Behavioral Analysis Unit. He is a former member of a Special Ops unit, and his experience there allowed him to hone his profiling skills.

Former

Jason Gideon

Played by Mandy Patinkin, Senior Supervisory Special Agent Jason Gideon was the BAU's best profiler. He helped Derek Morgan and Spencer Reid through their nightmares.
He was shown to have a very close relationship with Reid, having hand-picked him from the FBI Academy for his team, helping Reid through many difficulties (including his implied drug use), and even leaving his good-bye letter for Reid to find. Gideon did not know Garcia well, as shown in an episode in which he is paired with her while he is on crutches; Garcia complains about him, and he does not know her name. Through the first two seasons, Gideon was portrayed as very good at chess, winning against Reid many times (the only exception being Reid's birthday) and encouraging him to "think outside the box". Prior to the series, he was said to have had a "nervous breakdown" (or "major depressive episode") after he sent six men into a warehouse with a bomb in it; all six agents were killed, and he was heavily criticized over the incident. He showed particular dislike for the practice of using religion as a defense or motivation for one's crimes. Gideon participated in some field operations during his time with the BAU and pushed the rest of his team to "think outside the box" as well; in one case he made a major breakthrough by shouting pleas for mercy at the top of his lungs and, when questioned by his team, explained that the victims must have been threatened into silence, since neighbors would have heard such pleas if the victims were unrestrained. He blamed himself for the torture Reid received from Tobias Hankel, as he had ordered Penelope Garcia to add a virus warning to the videos Hankel posted. Gideon also had a son named Stephen. The nature of their relationship has not been directly stated, but it was implied that they had not seen each other recently. Gideon began to lose confidence in his profiling skills after Frank Breitkopf murdered his girlfriend, Sarah Jacobs. During his final case in Arizona, he further lost faith in his abilities when his decision to release the unsub resulted in the deaths of both the unsub and a young woman.
As a result of his actions, Aaron Hotchner was suspended, which was the final straw for Gideon. Shortly afterwards he left his cabin, leaving his gun and badge behind along with a letter for Reid to find, as he sought to regain a belief in happy endings. In the season ten episode "Nelson's Sparrow", Gideon was murdered off-screen, shot dead at close range by a serial killer named Donnie Mallick (Arye Gross), prompting the BAU team to investigate his murder. In the episode's flashbacks, which show him working at the BAU in 1978, a young Gideon is played by Ben Savage.

Elle Greenaway

Played by Lola Glaudini, SSA Elle Greenaway was formerly assigned to the FBI Field Office in Seattle, Washington, before being assigned to the BAU as an expert in sexual offense crimes. Her father was a police officer who was killed in the line of duty. She is half Cuban and speaks Spanish. She was shot by an unsub. Though she physically recovered, the event left her with psychological scars. As a result, Elle began acting even more harshly in season two, especially during a case involving a serial rapist. Ultimately, she killed the suspect before he could even be properly arrested. During this episode, she mentioned that the unsub wrote on the wall with blood from her wound. She handed in her badge and gun in the episode "The Boogeyman", saying that it was not an admission of guilt.

Ashley Seaver

Played by Rachel Nichols, Ashley Seaver is an FBI cadet assigned to the BAU. Her father, Charles Beauchamp, was a horrific serial killer from North Dakota known as "the Redmond Ripper", who killed 25 women over the course of 10 years before Ashley was a teenager. He was caught by David Rossi and Aaron Hotchner. Because North Dakota does not have capital punishment, he was sentenced to life in prison. She has never been to see him.
Though he writes to her sometimes, she never reads his letters, though she does keep them and admittedly still finds herself unable to hate him for what he did. In the episode "What Happens at Home", the BAU investigate a series of murders in a gated community and bring Ashley along because of her understanding of the family dynamics of a serial killer. In the end, the suspect commits suicide by cop in front of her. In the next episode, she requests that the rest of her remedial training be done with the BAU and is attached to the team. In the season seven premiere "It Takes a Village", it was revealed that Ashley had transferred to the Domestic Trafficking Task Force, which is led by Andi Swann.

Dr. Alex Blake

Played by Jeanne Tripplehorn, FBI linguistics expert Dr. Alex Blake replaces SSA Emily Prentiss. She is introduced in season eight. Her appointment to the BAU was met with mixed reactions, as the team was close to Prentiss. She had retired in 2001, rejoining the BAU in 2012 to restore her reputation after she was blamed for arresting the wrong suspect in the Amerithrax case and Section Chief Erin Strauss let her take the fall. As a result, she and Strauss do not get along, with Strauss accusing her of joining for selfish reasons, but they eventually make amends. The rest of the team recognize her expertise and are generally less antagonistic towards her. As season eight progressed, Blake found herself in danger when she was threatened by a serial killer, 'The Replicator', who turned out to be John Curtis: a fellow former FBI agent disgraced by the events of the Amerithrax case, who targeted Blake out of envy that she had restored her reputation while his own was still in ruins. Curtis killed Strauss, kidnapped Blake, and tried to blow up the entire BAU team, but the team rescued her, and Rossi locked Curtis in the house to die in the explosion as vengeance for Strauss's death.
Blake graduated from Berkeley with a double major and also holds a PhD. She was recruited to the FBI at the age of 24, making her one of at least two team members, along with Spencer Reid, to join the Bureau in their early 20s. Blake is also a professor of forensic linguistics at Georgetown, where Reid had previously guest lectured, and an SSA in the Washington field office. During her initial time at the FBI, Blake was involved in some high-profile cases, particularly the Unabomber case. Blake understands and uses American Sign Language. In the season nine episode "Bully", it is revealed that Blake is estranged from her father Damon (a retired police captain of the Kansas City Police Department) and younger brother Scott (currently a homicide detective there himself); after the deaths of her older brother Danny (a cop killed in the line of duty) and her mother, she found it too painful to be near her father and brother, and distanced herself from them. However, after Scott is injured by the UnSub, the two siblings start to reconnect, and by the end of the episode she reconciles with both Scott and Damon when she and the rest of the BAU team have a barbecue at her father's home. In the season nine two-part finale, Blake becomes distraught and depressed when Reid, after pushing her out of the way, is shot in the neck by the UnSub and nearly dies; she even comments that it should have been her who was shot instead. She is further upset when rescuing a young boy who was being used by the UnSub as leverage against his mother. Though Reid survives, Alex is greatly shaken by the case, and reveals to Reid that both he and the young boy reminded her of her deceased son Ethan, who died of an unnamed neurological disease at age nine. Her guilt and distress over Reid's brush with death touched a major nerve, seemingly pushing her to the breaking point.
At the end of "Demons", she sits apart from the rest of the group on the plane ride home, and it is implied that she sends a text message to Hotch handing in her resignation. After she takes Reid home, tells him about Ethan, and departs, Reid finds her FBI badge in his bag and watches her leave from his window, saddened but accepting.

Kate Callahan

Played by former Ghost Whisperer star Jennifer Love Hewitt, Kate Callahan has been in the FBI for eight years and has experience as an undercover agent, which allowed her to establish a prior friendship with members of the BAU. Her sister and brother-in-law were killed in the September 11 attacks, leaving Kate as the legal guardian of their infant daughter, Meg, whom she raised for thirteen years along with her husband, Chris. This tragedy shaped her patriotic attitude. She is described as "smart, charming, and wise for her years" and holds a passion for making the world safer, according to showrunner Erica Messer. In the episode "Breath Play", Kate reveals she is pregnant. In the season ten finale, "The Hunt", Meg is abducted by human traffickers connected to a previous case that Kate had researched. Though Meg is eventually brought back safe, Kate decides to take a year off to spend with Meg, Chris, and her soon-to-be-born child.

Derek Morgan

Played by Shemar Moore, SSA Derek Morgan is a confident and assertive everyman character, the son of an African-American father and a white mother. He went to Northwestern University on a football scholarship, holds a black belt in judo, runs FBI self-defense classes, and served in a bomb squad unit and as a Chicago police officer. In season two it was explained that after the death of his father when Derek was ten, he struggled somewhat: youthful fighting earned him a juvenile offender record.
He was taken under the wing of a local youth center coordinator, Carl Buford (Julius Tennon), who acted as a surrogate father to Derek and helped him obtain a college football scholarship. However, as revealed in the episode "Profiler, Profiled", Buford also sexually abused him. In season three, it was revealed that he hated religion because, as he said, something bad happened to him when he was 13: he went to church every day and prayed for it to stop, but it did not. Because of this, he harbored a resentment towards God and the church. He prayed for the first time in 20 years at exactly the time, he later found out, that Penelope Garcia was being operated on after being shot. Former Unit Chief Aaron Hotchner promoted him to unit chief in his place, a promotion Derek saw as only temporary until the "Boston Reaper" was captured. Aaron resumed his place as unit chief when he returned after grieving over his ex-wife's murder. Morgan later resigned to care for his family.

Aaron Hotchner

Played by Thomas Gibson, Unit Chief SSA Aaron "Hotch" Hotchner used to be a prosecutor and was formerly assigned to the FBI field office in Seattle. After stepping down for a period, he returned to lead the unit. He has a son named Jack (Cade Owens) by his deceased ex-wife Haley (Meredith Monroe). The two eventually divorced and remained on good terms until Haley was murdered by George Foyet (C. Thomas Howell), Aaron's nemesis. Aaron's attempts to balance his family life and his job have been something of an ongoing struggle on the show. When Jennifer Jareau leaves the BAU for a promotion to a position at the Pentagon, Garcia and Hotch take up her responsibilities as media liaison, a role Hotch keeps when JJ returns to the BAU as a profiler. In the episodes "Closing Time", "A Family Affair", and "Run", it is shown that he has moved on and is in a romantic relationship with Beth Clemmons (Bellamy Young). Hotch is rarely seen smiling throughout the show.
His most notable smiles are when he is with his girlfriend and his son. Gibson appeared in the first two episodes of the following season before being removed as a cast member.

Stephen Walker

Played by actor and conductor Damon Gupton, Walker is a Supervisory Special Agent with the BAU. An experienced profiler with about twenty years under his belt, Walker was a member of the FBI's Behavioral Analysis Program before his transfer to the BAU. He was contacted by Emily Prentiss about joining the BAU to assist in the manhunt for Peter Lewis, a.k.a. "Mr. Scratch". He is married to a woman named Monica and has two children with her, Maya and Eli. He met Emily Prentiss, then the chief of Interpol's London office, in the course of his work. He was also mentored by David Rossi. Stephen's first case concerned a terrorist cell in Belgium, into which three agents were sent undercover. However, Stephen's profile was wrong, and this resulted in the deaths of the undercover agents. He eventually moved on from the trauma and improved as his career went on. He and other BAP agents, including his longtime friend Sam Bower, were sent undercover to investigate corruption in the Russian government. Walker is fluent in Russian and plays the trombone. In "Wheels Up", Walker dies from injuries sustained when Peter Lewis, a.k.a. Mr. Scratch, causes a collision with a semi-truck.

Recurring
<onlyinclude>
Current

Agent Grant Anderson

Played by Brian Appel, Agent Anderson appears in "Plain Sight" (episode 1.4), "The Fisher King" (1.22 and 2.1), "The Big Game" (2.14), as well as "Honor Among Thieves" (2.20), "The Crossing" (3.18), "100" (5.9), "The Slave of Duty" (5.10), "Hope" (7.8), "Hit" (7.23), "Run" (7.24), "Carbon Copy" (8.16), "The Replicator" (8.24), "To Bear Witness" (9.4), and "200" (9.14).
Agent Anderson was told to drive Elle home in "The Fisher King"; he dropped her off at her front door and left. She was soon shot by the Fisher King, who had already been there waiting for her. Hotch scolds Anderson briefly for not doing more and quickly sends him back to the scene of the crime.

Agent Josh Cramer

Played by Gonzalo Menendez, Agent Josh Cramer runs the FBI Field Office in Baltimore, Maryland, as well as the Organized Crime division in that city. The two episodes which take place in Baltimore, "Natural Born Killer" (1.8) and "Honor Among Thieves" (2.20), both have him liaising with the BAU.

Diana Reid

Played by Jane Lynch, Diana Reid is Dr. Spencer Reid's mother. She first appeared as a potential target of serial killer Randall Garner, the man who shot SSA Elle Greenaway. Like her son, Diana has a genius-level IQ. She was once a university literature professor, but stopped teaching after her diagnosis of schizophrenia. She currently resides at the Las Vegas-based Bennington Sanitarium, where Spencer committed her when he was eighteen. Her husband, William Reid, left her when Spencer was a child. William left because he knew Diana had witnessed a murder, committed by a family friend avenging his own son's killing. He was unable to live with this knowledge, though he claims he tried; as he put it, "the weight of knowing what happened was just too much". Much of Diana and Spencer's time while he was growing up was spent with her reading to him. Spencer writes her a letter every single day because he feels guilty about not visiting her. In season 11, Spencer takes some time off from the BAU to visit her. In "Entropy", he reveals she has early signs of dementia and that when he first walked into her room, she did not know who he was for three seconds.

Kevin Lynch

Played by Nicholas Brendon, Kevin was Penelope Garcia's replacement when she was briefly suspended and hospitalized. He is a former hacker like her, but far messier.
Garcia is denied access to her system during her suspension from the BAU, and Kevin takes over in the interim. He is immediately impressed with the system she has set up and her GUI. When Garcia attempts to hack into the database under his watch, Kevin is unable to block her. They are each impressed with the other's work, but Garcia establishes dominance. When they finally meet face-to-face, they fall in love instantly. Kevin remains in awe of Garcia. They develop a dating relationship in spite of Garcia's 'special' relationship/mutual admiration with Agent Morgan. This is revealed at the beginning of "Damaged", when Agent Rossi shows up at Garcia's apartment only to find the quirky twosome showering together. Penelope later brings him in to assist on a case in Season 6, and in the season finale, "Supply & Demand", they profess their love for each other.

William LaMontagne Jr.

Played by Josh Stewart, LaMontagne is the husband of Special Agent Jennifer Jareau. He is a homicide detective who worked for the New Orleans Police Department and is now with the Metro PD. In the season two episode "Jones", it is revealed that his father, William Sr., was himself a detective in the NOPD and was killed during Hurricane Katrina, as he was working a case in his home and refused to leave during the mass evacuations. The case later resurfaced, and LaMontagne enlisted the help of the BAU. While they were there, he and JJ became romantically involved, although he was not mentioned again until "In Heat". In that episode, he was brought to Miami, where the unsub had killed a friend and colleague of his. During the episode, it was revealed that he and JJ had been secretly contacting each other since "Jones". JJ did not want to reveal their relationship, since she believed it would complicate their personal lives, but in the end they went public with it. At the end of the episode, it is revealed that Prentiss, Morgan, and Reid already knew about it.
In the episode "The Crossing", JJ discovers she is pregnant; they later have a boy named Henry. The actual status of JJ and Will's relationship (engaged, married, etc.) was not disclosed for some time, though they exchanged rings with Henry's birthstone in season four. In the season three finale, it is revealed that he transferred to the Metro PD and moved to Virginia to be with JJ and raise Henry together. To conclude season seven, he and JJ marry in a small ceremony in David Rossi's back yard. The character was written back in after A.J. Cook told the writers she was pregnant, and as such JJ needed a love interest. In addition, one of the original plans for the season seven finale was to kill off Will; however, this idea was scrapped due to Paget Brewster's impending departure.

Jack Hotchner

Played by Cade Owens, Jack Hotchner is the son of series regular Aaron Hotchner, his first appearance being in "The Fox". His mother, Haley Hotchner, is killed in season five by George Foyet (a.k.a. "The Boston Reaper"), but Jack is spared when his father gives him a secret signal to "work the case" (hide in the trunk in Hotch's office). It is shown in season seven's "Painless" that Jack is being bullied. Jack is shown to have become good friends with Beth Clemmons, his father's new girlfriend.

Lindsey Vaughan

Played by Gia Mantegna (Joe Mantegna's daughter), Lindsey Vaughan is the daughter of a hitman and first appears in the season three episode "3rd Life". The BAU initially believe her to be a victim of "Jack" until they track her to a school and discover that she is a willing accomplice, placed under the witness protection program after a hit ordered by Irish mobsters designed to kill her father wound up killing her mother instead. She reappears in season twelve as Diana Reid's nurse, using the name Dr. Carol Atkinson. Reid immediately recognises her as Lindsey Vaughan and later remembers that she was Mr. Scratch's accomplice from the hotel in Mexico, but he is taken back to his cell before he can warn Diana; it is later revealed that she is the accomplice, and girlfriend, of Cat Adams.

Henry LaMontagne

Played by Mekhai Andersen (A.J. Cook's son), Henry LaMontagne is the first son of Jennifer Jareau and William LaMontagne Jr., his first appearance being in "100".

Mateo Cruz

Played by Esai Morales, Cruz is the new Section Chief of the BAU. All that is known about him is that he worked at the Pentagon prior to season nine and has a past with JJ. It was revealed in "200" that the two had worked on a task force together in the Middle East. He was the only person to know of her pregnancy and her miscarriage during her time on the task force. In the same episode, they are both kidnapped by Tavin Askari, a traitor within the task force. They are both physically and mentally tortured for the access codes given to them during the mission. Cruz is shocked to discover that Michael Hastings, one of the men they had worked with on the task force, was the mastermind behind the plan; Hastings threatened to have JJ raped in order to force Cruz to give up the access codes. He gives in and is later stabbed by Askari, who is quickly killed by Hotch. Cruz is taken to the hospital following the incident and survives. Cruz later appears in the season nine finale "Demons", where he accepts a case from a sheriff who is a personal friend. When the sheriff is killed and Reid is shot, both Cruz and Garcia fly to Texas to meet with the rest of the team. He is next seen in the pilot episode for the spinoff Criminal Minds: Beyond Borders, which aired as the nineteenth episode of season ten. He enlists the BAU to help the international team find a vicious international killer in Barbados.

Joy Struthers

Played by Amber Stevens West, Joy is Rossi's daughter from his short-lived second marriage to French diplomat Hayden Montgomery.
When they divorced, Hayden did not tell him she was pregnant, and Joy thought her father was her mother's second husband, who finally told her the truth before dying of cancer. In the episode "Fate" (10.9), Joy sought Rossi out, and the two have been getting to know each other. Joy is a reporter and true crime writer and is married with a 2-year-old son named Kai.

Former

Erin Strauss

Played by Jayne Atkinson, Erin Strauss was the BAU Section Chief, the direct superior of SSA Aaron Hotchner. Her job lies in administration, and she has little field experience. She is an alcoholic, as revealed in the seventh-season episode "Self-Fulfilling Prophecy" when she rants at the commandant of a military academy and Morgan smells alcohol on her breath. At the end of the episode, Hotchner and Morgan arrange for her to check in privately at a treatment facility, thus protecting her from losing her job. Strauss becomes more prominent in season eight. It is revealed in "The Silencer" that the newest member of the BAU team, Alex Blake, worked with her during the Amerithrax case, during which Strauss left her to take the fall when a linguistics flub led to the arrest of the wrong suspect. As a result, Blake did not get along with her afterwards. At the end of "The Silencer", Strauss tries apologizing to her, but Blake turns her down. In "Carbon Copy", she personally oversees the investigation into the Replicator, and by the end of the episode she apologizes to Blake again; this time, her apology is accepted. In "Brothers Hotchner", she is abducted by the Replicator, later revealed to be a former FBI agent named John Curtis, whom she had left to take the blame along with Blake following the Amerithrax case. In "The Replicator", Erin Strauss is killed in the line of duty when Curtis poisons her with spiked wine and leaves her to die. She is found on the streets by Hotch, and she admits that the Replicator forced her at gunpoint to drink again.
She dies in Hotch's arms after begging him to stay with her, as she does not want to die alone. Strauss indirectly helps defeat Curtis posthumously when Rossi uses her sobriety chip to escape his trap, leaving him to possibly die in an explosion. After her funeral, the team celebrates her life over dinner in Rossi's garden, sharing happy stories of her time with them and acknowledging her as a good woman, friend, and mother.

Haley Hotchner

Played by Meredith Monroe, Haley Hotchner was the wife of Aaron Hotchner, with whom she has a son, Jack. They divorced due to Hotch's job and duties. In season three, Aaron Hotchner answers his home phone, but the caller hangs up; Haley's cellphone starts ringing immediately afterward. Hotch looks at Haley, but she does not say anything. It is implied that Haley might be cheating on Aaron, and that is why the caller did not speak when a man answered. She is shot and killed by Hotch's nemesis, George Foyet (a.k.a. "The Boston Reaper"). She returned in season nine, episode five, in a vision while Hotch was recovering from complications from his stabbing 100 episodes earlier.

Jordan Todd

Played by Meta Golding, Todd is JJ's replacement while she is on maternity leave. She was introduced to the team in "Catching Out" and was mentored and trained by JJ until JJ went into labor. Prior to that, she had spent seven years working for the FBI's counter-terrorism unit. In the end, she announced that she would return there and that JJ would end her maternity leave and return to the team.

Dr. Savannah Morgan

Played by Rochelle Aytes, Savannah Morgan (née Hayes) is Derek Morgan's wife; she works as a doctor at Bethesda General Hospital. Savannah first appeared in season nine's "The Return". It is presumed that Morgan and Savannah started dating prior to season nine, having first met when she approached him while he was depressed over a case that ended badly.
Before they started dating, they were neighbors. She was introduced to the show because Shemar Moore, the actor who portrays Morgan, had requested that his character get a romantic partner. She was last seen giving birth to her and Derek's son, Hank Spencer Morgan, after she was shot by Chazz Montolo.

Peter Lewis

Played by Bodhi Elfman, Peter Lewis (a.k.a. Mr. Scratch) is a proxy killer who drugs his victims, causing them to kill people for him. He is first hunted by the BAU in season 10. He escapes from prison in season 11 and continues killing in season 12. He also stalked SSA Aaron Hotchner's son, Jack, forcing them to go into witness protection. In the season 13 premiere, "Wheels Up", he is cornered by the team and falls to his death off the edge of a building.

Characters from Suspect Behavior

Samuel "Sam" Cooper – portrayed by Forest Whitaker
Beth Griffith – portrayed by Janeane Garofalo
Jonathan "Prophet" Sims – portrayed by Michael Kelly
Gina LaSalle – portrayed by Beau Garrett
Mick Rawson – portrayed by Matt Ryan
Jack Fickler – portrayed by Richard Schiff

Characters from Beyond Borders

Jack Garrett – portrayed by Gary Sinise
Clara Seger – portrayed by Alana de la Garza
Matthew "Matt" Simmons – portrayed by Daniel Henney
Russ "Monty" Montgomery – portrayed by Tyler James Williams
Mae Jarvis – portrayed by Annie Funke
Bundeswehr University Munich
Bundeswehr University Munich (, UniBw München) is one of only two research universities in Germany at the federal level, both founded in 1973 as part of the German Armed Forces (Bundeswehr). Originally called Hochschule der Bundeswehr München, the institution was intended to offer civilian academic education to military officers. Unusually among German universities, Universität der Bundeswehr München combines a more theoretical research-university division with a more practice-oriented College of Applied Sciences branch. Today, the university has an increasing number of civilian and international students. The academic year at the university is structured in trimesters rather than the usual semesters, to offer intensive studies with more credit points per year. Very capable students can therefore achieve a bachelor's and a master's degree in less than four years, where this would usually require five. Universität der Bundeswehr München has well-established scientific research and forms part of two excellence clusters of the German government's university excellence initiative. Bundeswehr University is one of only very few campus universities in Germany.

History

In 1970 the then minister of defence, Helmut Schmidt, decided that the education of military officers in Germany had to be reviewed and had to include full academic studies. A year later the Ellwein commission presented its proposal for the creation of two civilian colleges within the armed forces. Students were to receive fully recognised civilian degrees independent of their military profession, both to produce a more highly qualified officer corps and to create more incentives to join the military. The idea was that students would have better conditions than at normal universities, so that they could cope with a higher workload and study faster. After almost two years of discussions and the necessary legislative procedures, both universities opened.
University education normally being a responsibility of the German states, Universität der Bundeswehr München and Helmut Schmidt University are the only federal universities in Germany. With their innovative concepts, Helmut Schmidt University and Universität der Bundeswehr München quickly became widely known as reform universities within the very traditional German university landscape. In the following years the universities had to establish their image and reputation, and in the 1980s they were finally accepted as full universities with the right to award doctoral degrees as well as Habilitations to qualify university professors. In the 1990s and 2000s the university began to open its teaching to civilian students and to extend its international relations. While researchers and doctoral students had always been mainly civilians, the student body was still purely military in the 1990s. Today, Universität der Bundeswehr München has concluded partnership contracts with several major financial and industrial companies, which send students to the university. In the past few years various federal agencies have started to qualify their employees at the university. In 2007 Universität der Bundeswehr München switched its degrees to the harmonized Bologna system; it has completely restructured its curricula and awards bachelor's and master's degrees instead of the former German Diplom.

Presidents

1973–1974 Gerhard Wachter (temporary)
1974–1982 Horst von Engerth
1982–1990 Rudolf Wienecke
1990–1993 Jürgen von Kruedener
1993–1994 Rudolf Avenhaus (temporary)
1994–2005 Hans Georg Lößl
since 2005 Merith Niehuss

Campus and student life

The university is located in Neubiberg in the south of Munich, Bavaria. There are S-Bahn and U-Bahn connections to Neuperlach Süd as well as three bus connections. In addition, there are motorway exits at Neubiberg and Neuperlach, permitting fast access by car.
The large campus previously hosted the Air Force officer school and also included a military airport. Former runways are still used for scientific testing of vehicles. Since the 1970s a large number of buildings have been built for teaching, scientific research and student housing. Most of the students live in modern individual student dormitories on the campus, next to the university buildings. A large renovation programme is currently under way on the campus: about €220m will be spent on building activities over the coming years. On campus, students have access to various sports facilities such as gyms, tennis courts and golf ranges. Several multinational engineering companies have their headquarters or important industrial facilities directly next to the campus, most notably Siemens, Infineon Technologies, Bosch and Siemens Household Appliances (BSH), and EADS, facilitating frequent cooperation in research. On campus all students (including the military ones) usually wear civilian clothes. Students can organize their activities as they wish, and attendance at lectures is mostly voluntary. On Wednesday afternoons there is regular military or language training for soldiers. There are many opportunities for extracurricular activities within student initiatives and associations. International initiatives such as the German-American, German-Israeli and German-Hispanic clubs as well as the Model United Nations Society play an important role on the campus. The students of each dormitory usually organize a large party every year. As a large part of the student body consists of military officers and officer candidates, sport is considered important on the campus. The sports center provides a variety of free courses for students, e.g. in different martial arts. There are many sports facilities, including gyms, several sports halls, a golf course, a large climbing wall, an indoor swimming pool, several tennis courts and other sports grounds.
Furthermore, the university has a large military obstacle course which was used for the CISM world cup in 2009. Various sports teams on the campus also take part in the university championships. Every student, as well as the public, has access to the more than one million volumes in the university library. The library is linked to the academic library network of Bavaria, which allows interlibrary loans. Administration and organization The organization of Universität der Bundeswehr München is similar to that of a regular university. 1,100 (non-military) employees, among them 163 full professors, serve the approximately 3,400 students. The university is led by a civilian president and three vice presidents. The incumbent president, Merith Niehuss, is the only female university president in Bavaria. The administration is headed by the Chancellor. One difference from most other universities is that education and research are split between a regular university division with seven faculties and a Fachhochschule (College of Applied Sciences) division with three more faculties. Furthermore, the structure contains central services such as the computing center, the large university library, a media center with state-of-the-art technology, a language service and the sport services. In addition, the university has two further research institutes. The department for special services includes the Studium+ institute for interdisciplinary studies, the CASC center for postgraduate studies and some further services. Another difference from other universities is the military division, which is responsible for the administration and training of military students. Decisions are made by the board, the extended board and the administration council (Senate and University Council). The university has about 850 non-scientific employees in addition to about 570 scientific employees, as well as additional military personnel.
Academics Like some other military-run universities, such as the École Polytechnique, the university offers only civilian study courses. The contents generally have no relation to the military and correspond to courses at regular German universities. Bachelor's and master's studies together comprise 400 ECTS credit points. There is also a small number of highly gifted civilian students who are sponsored by industrial and financial companies such as Allianz, Bosch or Munich Re. In addition, other German ministries and federal institutions such as the Bundesnachrichtendienst educate some of their employees at the university. In cooperation with the George C. Marshall European Center for Security Studies, a course of studies for senior leaders (International Strategic Studies) has been introduced. The university also has international students and offers individual mentoring and tutoring programmes. Universität der Bundeswehr has concluded partnership contracts with an increasing number of international universities. Furthermore, it is possible for civilians to receive a doctorate or to qualify as a university lecturer (Habilitation). All of the professors are civilians, and the ratio of professors to students is significantly higher than at normal German universities, so teaching conditions are generally better. In autumn 2009 the university introduced its first military-related engineering course of study, Defence Engineering, which is solely dedicated to civilian students from industrial companies and federal institutions. Military students usually have a maximum of four years to complete their master's degree within the intensive course of study. If they do not finish their bachelor's in time with the necessary grades, they cannot proceed. As the studies have to be finished in a shorter time than at other German universities, the academic year consists of three trimesters instead of the usual two semesters.
Study courses The university has restructured its courses of study according to the Bologna Process and offers bachelor's and master's degrees as well as doctoral and postdoctoral studies. Universität der Bundeswehr offers normal research university courses of study as well as a few more practically oriented Fachhochschule (University of Applied Sciences) study courses. In total there are 37 bachelor's and master's degree programmes at Universität der Bundeswehr München. The University Civil Engineering & Environmental Studies (Bachelor of Science, Master of Science, doctorate, habilitation) Electrical Engineering & Information Technology (Bachelor of Science, Master of Science, doctorate, habilitation) Computer Science (Bachelor of Science, Master of Science, doctorate, habilitation) Cyber Security (Master of Science) Intelligence and Security Studies (Master of Arts/Master of Science) Aerospace Engineering (Bachelor of Science, Master of Science, doctorate, habilitation) Political Science & Social Sciences (Bachelor of Arts, Master of Arts, doctorate, habilitation) Economics & Organizational Sciences (Bachelor of Science, Master of Science, doctorate, habilitation) Business Information Systems (Bachelor of Science, Master of Science, doctorate, habilitation) Educational Science, in particular Intercultural, Media & Adult Education (Bachelor of Arts, Master of Arts, doctorate, habilitation) Sports Science (Bachelor of Science, Master of Science, doctorate, habilitation) Mathematical Engineering (Bachelor of Science, Master of Science, doctorate, habilitation) Psychology (Bachelor of Science, Master of Science, doctorate, habilitation) College of Applied Sciences (Fachhochschule) Aeronautical Engineering (Bachelor of Engineering) Computer Engineering & Communication Technology (Bachelor of Engineering) Mechanical Engineering (Bachelor of Engineering) Computer Aided Engineering (Master of Engineering) Management and Media (Bachelor of Arts, Master of Arts) Defence
Engineering (Bachelor of Engineering) Human Resources Management (Bachelor of Arts, Master of Arts) Campus Advanced Studies Center, CASC International Management (Master of Business Administration, in cooperation with ESB Reutlingen) International Security Studies (Master of Arts, in cooperation with George C. Marshall European Center for Security Studies) Industrial Engineering (Bachelor of Engineering) Public Management (Master of Business Administration) Systems Engineering (Master of Science) Human Resources Development (Master of Arts) Public Information Systems (Bachelor of Science) Research The university is well established in different fields of research, especially in aeronautical engineering (e.g. participation in the GALILEO satellite program and development of different parts of space probes), self-driving cars and information security. Bundeswehr University has the largest aviation and aerospace faculty in Germany. A main focus of the university is security technology of all kinds. The university is part of two excellence clusters of the German government's university excellence initiative (Cognition for Technical Systems and Munich-Centre for Advanced Photonics). The university hosts the ESA Summer School on Global Navigation Satellite Systems and the Munich Satellite Navigation Summit. Bundeswehr University forms part of the joint research and academic center Munich Aerospace, founded in 2010, and the Bavarian International Campus Aerospace & Security, founded in 2012. The university also has a number of partner companies which rely on the university's research expertise and support the university with products and facilities for testing and research.
To strengthen its research profile and enhance cooperation between the faculties, Bundeswehr University Munich has created several interdisciplinary research centers: SPACE, MOVE (Modern Vehicles), RISK (Risk, Infrastructure, Security and Conflict), MARC (Military Aviation Research Center), CODE (Cyber Defence). CODE is planned to have around 250 researchers working on cyber security. Germany's federal cyber security research agency, the Central Office for Information Technology in the Security Sector, is to be moved with 400 employees to the Neubiberg campus of Universität der Bundeswehr München by 2023. With both organizations on campus, the federal government aims to build up a cluster for cyber defence and security unique within Germany. In 2017 the new Center for Intelligence and Security Studies (CISS) was created by Universität der Bundeswehr München in cooperation with the Federal University for Public Administration, the Federal Intelligence Service (BND) and the Federal Office for the Protection of the Constitution (BfV). Besides training future senior intelligence personnel, the Center is engaged in intelligence and security research. International collaboration Universität der Bundeswehr München has partner universities worldwide. The following list shows some examples: University of Graz; Queen's University; Universidad de Chile (Santiago de Chile); Tongji University (Shanghai); Aarhus University (Aarhus); University of Oulu (Oulu); Toulouse Business School (Toulouse), ENSAM (Paris), ENSTA Bretagne (Brest), ENSMP (Paris), ISEP (Paris); Central European University (Budapest); Osaka Institute of Technology (Osaka); Technische Universiteit Delft (Delft); Wrocław University of Technology (Wrocław); Politehnica University of Bucharest (Bucharest), University of Craiova (Craiova); Saint Petersburg Polytechnical University (St.
Petersburg); Napier University; Universidad Autónoma de Madrid (Madrid), Universitat Pompeu Fabra (Barcelona), Universidad de Cádiz (Cádiz); University of Arizona (Tucson), University of Texas at El Paso, United States Military Academy, Naval Postgraduate School, Norwich University; Le Quy Don Technical University Notable alumni Raheel Sharif, Chief of Army Staff of Pakistan and a 4-star general. Thomas Reiter, Director Human Spaceflight at the European Space Agency and former astronaut (former student) V. Ramgopal Rao, Director of IIT Delhi (former student) Thomas Daum, Vice Admiral and Inspector of the Cyber and Information Domain Service. Volker Wieker, former Chief of Staff of the German military (former student) Oliver D. Doleski, German economist, editor and author (former student) Wolfram Kühn, former Deputy Chief of Staff of the German military (former student) Marion Schick, former chairman at Deutsche Telekom AG (former student) Klaus-Dietrich Flade, former astronaut (former student) Joachim Schmillen, former German Ambassador to Peru, Nigeria, Chile and Jamaica, and former Chief of the Planning Staff of the Foreign Ministry (former student) Roderich Kiesewetter, CDU politician and Member of the German Parliament (former student) Hans-Lothar Domröse, former Commander Allied Joint Force Command Brunssum (former student) Günther Schmitz, Vice President German Patent and Trade Mark Office Notable faculty Friedrich L.
Sell, chief of the scientific advisory council of the Halle Institute for Economic Research (current professor) Michael Wolffsohn, Israeli-born German historian (former professor) Brun-Otto Bryde, former judge at the German Constitutional Court (former professor) Ernst Dickmanns, pioneer of dynamic computer vision and driverless cars (professor emeritus) Carlo Masala, professor for international politics (current professor) Gunther Schmidt, emeritus professor for computer science See also Helmut Schmidt University University of the German Federal Armed Forces References External links www.unibw.de Universities and colleges in Munich Military education and training in Germany Bundeswehr Neubiberg Educational institutions established in 1973 1973 establishments in West Germany
Default-free zone
In Internet routing, the default-free zone (DFZ) is the collection of all Internet autonomous systems (AS) that do not require a default route to route a packet to any destination. Conceptually, DFZ routers have a "complete" Border Gateway Protocol table, sometimes referred to as the Internet routing table, global routing table or global BGP table. However, Internet routing changes rapidly and the widespread use of route filtering ensures that no router has a complete view of all routes. Any routing table created would look different from the perspective of different routers, even if a stable view could be achieved. Highly connected Autonomous Systems and routers The Weekly Routing Reports used by the ISP community come from the Asia-Pacific Network Information Centre (APNIC) router in Tokyo, which is a well-connected router that has as good a view of the Internet as any other single router. For serious routing research, however, routing information will be captured at multiple well-connected sites, including high-traffic ISPs (see the "skitter core" below). As of May 12, 2014, there were 494,105 routes seen by the APNIC router. These came from 46,795 autonomous systems, of which only 172 were transit-only and 35,787 were stub/origin-only; 6,087 autonomous systems provided some level of transit. The Idea of an "Internet core" The term "default-free zone" is sometimes confused with an "Internet core" or Internet backbone, but there has been no true "core" since before the Border Gateway Protocol (BGP) was introduced. In pre-BGP days, when the Exterior Gateway Protocol (EGP) was the exterior routing protocol, it could indeed be assumed that there was a single Internet core. That concept, however, has been obsolete for a long time. At best, today's definition of the Internet core is statistical, with the "skitter core" being some number of ASes with the greatest traffic according to the CAIDA measurements, previously made with its measuring tool called "skitter".
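The defining property above (no default route, so every destination must be covered by some explicit prefix) can be illustrated with a longest-prefix lookup. A minimal sketch, not how any real router's forwarding table is implemented; the prefixes are documentation examples:

```python
import ipaddress

def lookup(table, dst):
    """Longest-prefix match: return the most specific route covering dst."""
    dst = ipaddress.ip_address(dst)
    matches = [p for p in table if dst in p]
    return max(matches, key=lambda p: p.prefixlen, default=None)

# A stub router carries a default route (0.0.0.0/0), which matches everything;
# a default-free router must carry a covering prefix for every destination.
stub = [ipaddress.ip_network(p) for p in ("0.0.0.0/0", "10.0.0.0/8")]
dfz = [ipaddress.ip_network(p) for p in ("10.0.0.0/8", "203.0.113.0/24")]

print(lookup(stub, "198.51.100.7"))  # falls back to 0.0.0.0/0
print(lookup(dfz, "198.51.100.7"))   # None: no route, the packet is unroutable
```

Because 0.0.0.0/0 has prefix length zero, it loses to any more specific route, which is exactly the "route of last resort" behaviour a default route provides.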
The CAIDA measurements are constantly updated. Information at Internet Exchange Points Large Internet Exchange Points (IXPs), because they typically carry full routes as seen by multiple ISPs, as well as customer routes, in their exchange fabric, are extremely good places to assess global Internet routing. Before the current commercial Internet evolved, the NSFNET, which interconnected five US government funded supercomputer centers, could have been considered the high-speed Internet core. Four IXPs supported NSFNET, but these IXPs evolved into a model where commercial traffic could meet there. While it is slightly difficult to point to a precise endpoint, NSF funding for transmission ceased by 1998. Customer, non-ISP Participation in the DFZ It is common practice, in a multihomed but stub (i.e., non-transit) autonomous system, for the BGP-speaking router(s) to take "full routes" from the various ISPs to which the AS is multihomed. Especially if there is more than one router connected to the same ISP, itself a common practice, it will receive more routes than are in the DFZ. This is because when there are two routers connected to a major ISP such as Sprint, France Telecom or Qwest, that provider has a number of customer ASes connected to it. The optimal routes to those customer ASes are important to the ISP itself, but they also tell one customer AS which specific router has the best path to another customer. The "full routes", or properly "full routes plus customer routes", coming to a customer router make that customer router part of the DFZ, but certainly not part of the "skitter core". See also Multihoming IP transit Peering Route filtering 512K Day References Internet Standards Routing
History of computing hardware
The history of computing hardware covers the developments from early simple devices to aid calculation to modern-day computers. Before the 20th century, most calculations were done by humans. Early mechanical tools to help humans with digital calculations, like the abacus, were referred to as calculating machines or calculators (and other proprietary names). The machine operator was called the computer. The first aids to computation were purely mechanical devices which required the operator to set up the initial values of an elementary arithmetic operation, then manipulate the device to obtain the result. Later, computers represented numbers in a continuous form (e.g. distance along a scale, rotation of a shaft, or a voltage). Numbers could also be represented in the form of digits, automatically manipulated by a mechanism. Although this approach generally required more complex mechanisms, it greatly increased the precision of results. The development of transistor technology and then the integrated circuit chip led to a series of breakthroughs, starting with transistor computers and then integrated circuit computers, causing digital computers to largely replace analog computers. Metal-oxide-semiconductor (MOS) large-scale integration (LSI) then enabled semiconductor memory and the microprocessor, leading to another key breakthrough, the miniaturized personal computer (PC), in the 1970s. The cost of computers gradually became so low that personal computers by the 1990s, and then mobile computers (smartphones and tablets) in the 2000s, became ubiquitous. Early devices Ancient and medieval Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was probably a form of tally stick. The Lebombo bone from the mountains between Eswatini and South Africa may be the oldest known mathematical artifact.
It dates from 35,000 BCE and consists of 29 distinct notches that were deliberately cut into a baboon's fibula. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, probably livestock or grains, sealed in hollow unbaked clay containers. The use of counting rods is one example. The abacus was used early on for arithmetic tasks. What we now call the Roman abacus was used in Babylonia as early as c. 2700–2300 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. Several analog computers were constructed in ancient and medieval times to perform astronomical calculations. These included the astrolabe and Antikythera mechanism from the Hellenistic world (c. 150–100 BC). In Roman Egypt, Hero of Alexandria (c. 10–70 AD) made mechanical devices including automata and a programmable cart. Other early mechanical devices used to perform one type of calculation or another include the planisphere and other mechanical computing devices invented by Abu Rayhan al-Biruni (c. AD 1000); the equatorium and universal latitude-independent astrolabe by Abū Ishāq Ibrāhīm al-Zarqālī (c. AD 1015); the astronomical analog computers of other medieval Muslim astronomers and engineers; and the astronomical clock tower of Su Song (1094) during the Song dynasty. The castle clock, a hydropowered mechanical astronomical clock invented by Ismail al-Jazari in 1206, was the first programmable analog computer. Ramon Llull invented the Lullian Circle: a notional machine for calculating answers to philosophical questions (in this case, to do with Christianity) via logical combinatorics. This idea was taken up by Leibniz centuries later, and is thus one of the founding elements in computing and information science.
Renaissance calculating tools Scottish mathematician and physicist John Napier discovered that the multiplication and division of numbers could be performed by the addition and subtraction, respectively, of the logarithms of those numbers. While producing the first logarithmic tables, Napier needed to perform many tedious multiplications. It was at this point that he designed his 'Napier's bones', an abacus-like device that greatly simplified calculations that involved multiplication and division. Since real numbers can be represented as distances or intervals on a line, the slide rule was invented in the 1620s, shortly after Napier's work, to allow multiplication and division operations to be carried out significantly faster than was previously possible. Edmund Gunter built a calculating device with a single logarithmic scale at the University of Oxford. His device greatly simplified arithmetic calculations, including multiplication and division. William Oughtred greatly improved this in 1630 with his circular slide rule. He followed this up with the modern slide rule in 1632, essentially a combination of two Gunter rules, held together with the hands. Slide rules were used by generations of engineers and other mathematically involved professional workers, until the invention of the pocket calculator. Mechanical calculators Wilhelm Schickard, a German polymath, designed a calculating machine in 1623 which combined a mechanised form of Napier's rods with the world's first mechanical adding machine built into the base. Because it made use of a single-tooth gear there were circumstances in which its carry mechanism would jam. A fire destroyed at least one of the machines in 1624 and it is believed Schickard was too disheartened to build another. In 1642, while still a teenager, Blaise Pascal started some pioneering work on calculating machines and after three years of effort and 50 prototypes he invented a mechanical calculator. 
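Napier's reduction of multiplication to the addition of logarithms, described earlier in this section, is also what the slide rule does mechanically by adding distances on logarithmic scales. A minimal numerical sketch of the principle:

```python
import math

def log_multiply(a, b):
    # log(a) + log(b) = log(a*b), so a*b can be recovered as exp(log a + log b);
    # Napier's tables (and the slide rule's scales) perform exactly this reduction.
    return math.exp(math.log(a) + math.log(b))

print(log_multiply(37, 54))  # ≈ 1998, the same as 37 * 54
```

In practice a table user looked up the two logarithms, added them by hand, and looked up the antilogarithm of the sum; only the addition had to be done mentally.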
He built twenty of these machines (called Pascal's calculator or Pascaline) in the following ten years. Nine Pascalines have survived, most of which are on display in European museums. A continuing debate exists over whether Schickard or Pascal should be regarded as the "inventor of the mechanical calculator" and the range of issues to be considered is discussed elsewhere. Gottfried Wilhelm von Leibniz invented the stepped reckoner and his famous stepped drum mechanism around 1672. He attempted to create a machine that could be used not only for addition and subtraction but would utilise a moveable carriage to enable long multiplication and division. Leibniz once said "It is unworthy of excellent men to lose hours like slaves in the labour of calculation which could safely be relegated to anyone else if machines were used." However, Leibniz did not incorporate a fully successful carry mechanism. Leibniz also described the binary numeral system, a central ingredient of all modern computers. However, up to the 1940s, many subsequent designs (including Charles Babbage's machines of 1822 and even ENIAC of 1945) were based on the decimal system. Around 1820, Charles Xavier Thomas de Colmar created what would over the rest of the century become the first successful, mass-produced mechanical calculator, the Thomas Arithmometer. It could be used to add and subtract, and with a moveable carriage the operator could also multiply, and divide by a process of long multiplication and long division. It utilised a stepped drum similar in conception to that invented by Leibniz. Mechanical calculators remained in use until the 1970s. Punched-card data processing In 1804, French weaver Joseph Marie Jacquard developed a loom in which the pattern being woven was controlled by a paper tape constructed from punched cards. The paper tape could be changed without changing the mechanical design of the loom. This was a landmark achievement in programmability.
His machine was an improvement over similar weaving looms. Punched cards were preceded by punch bands, as in the machine proposed by Basile Bouchon. These bands would inspire information recording for automatic pianos and more recently numerical control machine tools. In the late 1880s, the American Herman Hollerith invented data storage on punched cards that could then be read by a machine. To process these punched cards, he invented the tabulator and the keypunch machine. His machines used electromechanical relays and counters. Hollerith's method was used in the 1890 United States Census. That census was processed two years faster than the prior census had been. Hollerith's company eventually became the core of IBM. By 1920, electromechanical tabulating machines could add, subtract, and print accumulated totals. Machine functions were directed by inserting dozens of wire jumpers into removable control panels. When the United States instituted Social Security in 1935, IBM punched-card systems were used to process records of 26 million workers. Punched cards became ubiquitous in industry and government for accounting and administration. Leslie Comrie's articles on punched-card methods and W. J. Eckert's publication of Punched Card Methods in Scientific Computation in 1940, described punched-card techniques sufficiently advanced to solve some differential equations or perform multiplication and division using floating point representations, all on punched cards and unit record machines. Such machines were used during World War II for cryptographic statistical processing, as well as a vast number of administrative uses. The Astronomical Computing Bureau, Columbia University, performed astronomical calculations representing the state of the art in computing. 
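The tabulating workflow described above (reading a field from each punched card and accumulating one counter per distinct value) is, in modern terms, a grouped tally. A minimal sketch with invented records, not Hollerith's actual card format:

```python
from collections import Counter

# Hypothetical census records: each tuple stands in for one punched card
# with (state, occupation) fields punched in fixed columns.
cards = [
    ("NY", "farmer"), ("NY", "clerk"), ("PA", "farmer"),
    ("NY", "farmer"), ("PA", "miner"),
]

# The tabulator's counters: one accumulated total per distinct punched value.
by_occupation = Counter(occupation for _, occupation in cards)
print(by_occupation["farmer"])  # 3
```

Hollerith's machines did this electromechanically: a card closed a circuit wherever a hole was punched, and each closed circuit advanced the corresponding dial counter by one.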
Calculators By the 20th century, earlier mechanical calculators, cash registers, accounting machines, and so on were redesigned to use electric motors, with gear position as the representation for the state of a variable. The word "computer" was a job title assigned primarily to women who used these calculators to perform mathematical calculations. By the 1920s, British scientist Lewis Fry Richardson's interest in weather prediction led him to propose human computers and numerical analysis to model the weather; to this day, the most powerful computers on Earth are needed to adequately model its weather using the Navier–Stokes equations. Companies like Friden, Marchant Calculator and Monroe made desktop mechanical calculators from the 1930s that could add, subtract, multiply and divide. In 1948, the Curta was introduced by Austrian inventor Curt Herzstark. It was a small, hand-cranked mechanical calculator and as such, a descendant of Gottfried Leibniz's Stepped Reckoner and Thomas' Arithmometer. The world's first all-electronic desktop calculator was the British Bell Punch ANITA, released in 1961. It used vacuum tubes, cold-cathode tubes and Dekatrons in its circuits, with 12 cold-cathode "Nixie" tubes for its display. The ANITA sold well since it was the only electronic desktop calculator available, and was silent and quick. The tube technology was superseded in June 1963 by the U.S. manufactured Friden EC-130, which had an all-transistor design, a stack of four 13-digit numbers displayed on a CRT, and introduced reverse Polish notation (RPN). First general-purpose computing device Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century.
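The reverse Polish notation introduced by the Friden EC-130, mentioned above, needs no parentheses because a stack of operands (four display registers on the EC-130) resolves the order of operations. A minimal evaluator, as a sketch only:

```python
def eval_rpn(tokens):
    """Evaluate a reverse Polish notation expression with an operand stack."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # operators consume the top two operands...
            a = stack.pop()
            stack.append(ops[tok](a, b))  # ...and push the result back
        else:
            stack.append(float(tok))
    return stack.pop()

# (3 + 4) * 2 in RPN needs no parentheses:
print(eval_rpn("3 4 + 2 *".split()))  # 14.0
```

On the EC-130 the user likewise entered both operands before pressing the operation key, exactly the entry order the token list above encodes.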
After working on his revolutionary difference engine, designed to aid in navigational calculations, in 1833 he realized that a much more general design, an Analytical Engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. It employed ordinary base-10 fixed-point arithmetic. The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. There was to be a store, or memory, capable of holding 1,000 numbers of 40 decimal digits each (ca. 16.7 kB). An arithmetical unit, called the "mill", would be able to perform all four arithmetic operations, plus comparisons and optionally square roots. Initially it was conceived as a difference engine curved back upon itself, in a generally circular layout, with the long store exiting off to one side. (Later drawings depict a regularized grid layout.) Like the central processing unit (CPU) in a modern computer, the mill would rely on its own internal procedures, roughly equivalent to microcode in modern CPUs, to be stored in the form of pegs inserted into rotating drums called "barrels", to carry out some of the more complex instructions the user's program might specify. The programming language to be employed by users was akin to modern day assembly languages. Loops and conditional branching were possible, and so the language as conceived would have been Turing-complete as later defined by Alan Turing. 
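The store capacity quoted above (1,000 numbers of 40 decimal digits, ca. 16.7 kB) can be checked by converting decimal digits to bits, since each decimal digit carries log2(10) ≈ 3.32 bits of information:

```python
import math

numbers = 1000                    # words in the Analytical Engine's store
digits_per_number = 40            # decimal digits per word
bits_per_digit = math.log2(10)    # ≈ 3.32 bits of information per decimal digit
total_kB = numbers * digits_per_number * bits_per_digit / 8 / 1000
print(round(total_kB, 1))  # ≈ 16.6 kB, in line with the ca. 16.7 kB figure
```

The small discrepancy is a rounding choice; a byte-per-two-digits packing (as in later binary-coded decimal machines) would give 20 kB instead.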
Three different types of punch cards were used: one for arithmetical operations, one for numerical constants, and one for load and store operations, transferring numbers from the store to the arithmetical unit or back. There were three separate readers for the three types of cards. The machine was about a century ahead of its time. However, the project was slowed by various problems including disputes with the chief machinist building parts for it. All the parts for his machine had to be made by hand—this was a major problem for a machine with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to difficulties not only of politics and financing, but also to his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Ada Lovelace translated and added notes to the "Sketch of the Analytical Engine" by Luigi Federico Menabrea. This appears to be the first published description of programming, so Ada Lovelace is widely regarded as the first computer programmer. Following Babbage, although at first unaware of his earlier work, was Percy Ludgate, a clerk to a corn merchant in Dublin, Ireland. He independently designed a programmable mechanical computer, which he described in a work that was published in 1909. Two other inventors, Leonardo Torres y Quevedo and Vannevar Bush, also did follow-on research based on Babbage's work. In his Essays on Automatics (1913) Torres y Quevedo designed a Babbage type of calculating machine that used electromechanical parts which included floating point number representations and built an early prototype in 1920. Bush's paper Instrumental Analysis (1936) discussed using existing IBM punch card machines to implement Babbage's design.
In the same year, Bush started the Rapid Arithmetical Machine project to investigate the problems of constructing an electronic digital computer. Analog computers In the first half of the 20th century, analog computers were considered by many to be the future of computing. These devices used the continuously changeable aspects of physical phenomena such as electrical, mechanical, or hydraulic quantities to model the problem being solved, in contrast to digital computers, which represented varying quantities symbolically as numerical values. As an analog computer does not use discrete values, but rather continuous values, processes cannot be reliably repeated with exact equivalence, as they can with Turing machines. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson, later Lord Kelvin, in 1872. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location and was of great utility to navigation in shallow waters. His device was the foundation for further developments in analog computing. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the brother of the more famous Lord Kelvin. He explored the possible construction of such calculators, but was stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. An important advance in analog computing was the development of the first fire-control systems for long range ship gunlaying. When gunnery ranges increased dramatically in the late 19th century it was no longer a simple matter of calculating the proper aim point, given the flight times of the shells. 
Various spotters on board the ship would relay distance measures and observations to a central plotting station. There the fire direction teams fed in the location, speed and direction of the ship and its target, as well as various adjustments for Coriolis effect, weather effects on the air, and other adjustments; the computer would then output a firing solution, which would be fed to the turrets for laying. In 1912, British engineer Arthur Pollen developed the first electrically powered mechanical analogue computer (called at the time the Argo Clock). It was used by the Imperial Russian Navy in World War I. The alternative Dreyer Table fire control system was fitted to British capital ships by mid-1916. Mechanical devices were also used to aid the accuracy of aerial bombing. Drift Sight was the first such aid, developed by Harry Wimperis in 1916 for the Royal Naval Air Service; it measured the wind speed from the air, and used that measurement to calculate the wind's effects on the trajectory of the bombs. The system was later improved with the Course Setting Bomb Sight, and reached a climax with the World War II bomb sights: the Mark XIV (RAF Bomber Command) and the Norden (United States Army Air Forces). The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927, which built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of these devices were built before their obsolescence became obvious; the most powerful was constructed at the University of Pennsylvania's Moore School of Electrical Engineering, where the ENIAC was built. A fully electronic analog computer was built by Helmut Hölzer in 1942 at Peenemünde Army Research Center. 
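The integrator-chaining principle behind the differential analysers of Thomson and of Hazen and Bush can be sketched numerically. To solve y'' = -y, the output of one integrator (accumulating y'' into y') drives the input of the next (accumulating y' into y), with y fed back sign-inverted; the step size and duration below are illustrative choices, not parameters of any historical machine.

```python
# Two chained "integrators" solving the simple harmonic equation y'' = -y,
# mimicking how a differential analyser wires integrator outputs to inputs.

def differential_analyser(steps=10000, dt=0.001):
    y, dy = 1.0, 0.0            # initial conditions: y(0) = 1, y'(0) = 0
    for _ in range(steps):
        ddy = -y                # feedback connection: y'' = -y
        dy += ddy * dt          # first integrator: accumulate y'' into y'
        y += dy * dt            # second integrator: accumulate y' into y
    return y

# Integrating for 10 time units should land close to cos(10), about -0.839:
approx = differential_analyser()
```

On the mechanical machines the "accumulation" was performed continuously by a wheel rolling on a rotating disc, so there was no discrete step size at all; the loop above is only a digital stand-in for those shafts and gears.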
By the 1950s the success of digital electronic computers had spelled the end for most analog computing machines, but hybrid analog computers, controlled by digital electronics, remained in substantial use into the 1950s and 1960s, and later in some specialized applications. Advent of the digital computer The principle of the modern computer was first described by computer scientist Alan Turing, who set out the idea in his seminal 1936 paper, On Computable Numbers. Turing reformulated Kurt Gödel's 1931 results on the limits of proof and computation, replacing Gödel's universal arithmetic-based formal language with the formal and simple hypothetical devices that became known as Turing machines. He proved that some such machine would be capable of performing any conceivable mathematical computation if it were representable as an algorithm. He went on to prove that there was no solution to the Entscheidungsproblem by first showing that the halting problem for Turing machines is undecidable: in general, it is not possible to decide algorithmically whether a given Turing machine will ever halt. He also introduced the notion of a "universal machine" (now known as a universal Turing machine), with the idea that such a machine could perform the tasks of any other machine, or in other words, it is provably capable of computing anything that is computable by executing a program stored on tape, allowing the machine to be programmable. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Electromechanical computers The era of modern computing began with a flurry of development before and during World War II. 
Most digital computers built in this period were electromechanical – electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2 was one of the earliest examples of an electromechanical relay computer, and was created by German engineer Konrad Zuse in 1940. It was an improvement on his earlier Z1; although it used the same mechanical memory, it replaced the arithmetic and control logic with electrical relay circuits. In the same year, electro-mechanical devices called bombes were built by British cryptologists to help decipher German Enigma-machine-encrypted secret messages during World War II. The bombe's initial design was created in 1939 at the UK Government Code and Cypher School (GC&CS) at Bletchley Park by Alan Turing, with an important refinement devised in 1940 by Gordon Welchman. The engineering design and construction was the work of Harold Keen of the British Tabulating Machine Company. It was a substantial development from a device that had been designed in 1938 by Polish Cipher Bureau cryptologist Marian Rejewski, and known as the "cryptologic bomb" (Polish: "bomba kryptologiczna"). In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code and data were stored on punched film. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating point numbers. Replacement of the hard-to-implement decimal system (used in Charles Babbage's earlier design) by the simpler binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. 
The Z3 was proven to have been a Turing-complete machine in 1998 by Raúl Rojas. In two 1936 patent applications, Zuse also anticipated that machine instructions could be stored in the same storage used for data—the key insight of what became known as the von Neumann architecture, first implemented in 1948 in America in the electromechanical IBM SSEC and in Britain in the fully electronic Manchester Baby. Zuse suffered setbacks during World War II when some of his machines were destroyed in the course of Allied bombing campaigns. Apparently his work remained largely unknown to engineers in the UK and US until much later, although at least IBM was aware of it as it financed his post-war startup company in 1946 in return for an option on Zuse's patents. In 1944, the Harvard Mark I was constructed at IBM's Endicott laboratories. It was a similar general purpose electro-mechanical computer to the Z3, but was not quite Turing-complete. Digital computation The term digital was first suggested by George Robert Stibitz and refers to a signal, such as a voltage, being used not to directly represent a value (as it would be in an analog computer), but to encode it. In November 1937, George Stibitz, then working at Bell Labs (1930–1941), completed a relay-based calculator he later dubbed the "Model K" (for "kitchen table", on which he had assembled it), which became the first binary adder. Typically signals have two states – low (usually representing 0) and high (usually representing 1), but sometimes three-valued logic is used, especially in high-density memory. Modern computers generally use binary logic, but many early machines were decimal computers. In these machines, the basic unit of data was the decimal digit, encoded in one of several schemes, including binary-coded decimal or BCD, bi-quinary, excess-3, and two-out-of-five code. 
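Three of the decimal-digit codes named above can be sketched from their standard definitions: BCD encodes a digit as plain 4-bit binary, excess-3 adds three before encoding, and bi-quinary (as used in the IBM 650) pairs a "bi" group selecting 0-or-5 with a "quinary" group selecting 0 through 4. The exact bit ordering below is an illustrative choice; real machines varied.

```python
# Sketches of three decimal-digit encodings used by early decimal computers.

def bcd(d):
    return format(d, "04b")                 # e.g. 9 -> "1001"

def excess_3(d):
    return format(d + 3, "04b")             # e.g. 0 -> "0011", 9 -> "1100"

def biquinary(d):
    bi, quin = divmod(d, 5)                 # bi picks 0-or-5, quin picks 0..4
    return format(1 << bi, "02b") + format(1 << quin, "05b")

codes = [biquinary(d) for d in range(10)]   # one 7-bit code word per digit
```

Every bi-quinary code word has exactly one bit set in each group, which gave machines a simple built-in validity check: any word with the wrong number of set bits signalled a hardware fault.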
The mathematical basis of digital computing is Boolean algebra, developed by the British mathematician George Boole in his work The Laws of Thought, published in 1854. His Boolean algebra was further refined in the 1860s by William Jevons and Charles Sanders Peirce, and was first presented systematically by Ernst Schröder and A. N. Whitehead. In 1879 Gottlob Frege developed the formal approach to logic and proposed the first logic language for logical equations. In the 1930s, working independently, American electronic engineer Claude Shannon and Soviet logician Victor Shestakov both showed a one-to-one correspondence between the concepts of Boolean logic and certain electrical circuits, now called logic gates, which are now ubiquitous in digital computers. They showed that electronic relays and switches can realize the expressions of Boolean algebra. Shannon's 1937 master's thesis essentially founded practical digital circuit design. In addition, Shannon's paper gave a correct circuit diagram for a 4-bit digital binary adder. Electronic data processing Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. Machines such as the Z3, the Atanasoff–Berry Computer, the Colossus computers, and the ENIAC were built by hand, using circuits containing relays or valves (vacuum tubes), and often used punched cards or punched paper tape for input and as the main (non-volatile) storage medium. The engineer Tommy Flowers joined the telecommunications branch of the General Post Office in 1926. While working at the research station in Dollis Hill in the 1930s, he began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. 
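Returning to Shannon's observation that switching circuits realize Boolean expressions: a 4-bit binary adder of the kind his paper diagrams can be expressed purely with AND, OR and XOR operations. This gate-level sketch is illustrative, not a transcription of Shannon's circuit.

```python
# A ripple-carry 4-bit adder built only from Boolean operations,
# the software analogue of a relay or gate circuit.

def full_adder(a, b, carry_in):
    # sum bit is the XOR of the three inputs; carry out is their majority
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (a & carry_in) | (b & carry_in)
    return s, carry_out

def add_4bit(a_bits, b_bits):
    # a_bits and b_bits are little-endian lists of four 0/1 values
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

# 5 + 6 = 11, i.e. 0101 + 0110 = 1011 (bit lists below are little-endian):
bits, carry = add_4bit([1, 0, 1, 0], [0, 1, 1, 0])
```

Each `&`, `|` and `^` here corresponds directly to a relay network in Shannon's correspondence, which is exactly why proving the algebra-to-circuit mapping founded practical digital design.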
In the US, in 1940 Arthur Dickinson (IBM) invented the first digital electronic computer. This calculating device was fully electronic – control, calculations and output (the first electronic display). John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed the Atanasoff–Berry Computer (ABC) in 1942, the first binary electronic digital calculating device. This design was semi-electronic (electro-mechanical control and electronic calculations), and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. However, its paper card writer/reader was unreliable and the regenerative drum contact system was mechanical. The machine's special-purpose nature and lack of changeable, stored program distinguish it from modern computers. Computers whose logic was primarily built using vacuum tubes are now known as first generation computers. The electronic programmable computer During World War II, British codebreakers at Bletchley Park, north of London, achieved a number of successes at breaking encrypted enemy military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes. Women often operated these bombe machines. They ruled out possible Enigma settings by performing chains of logical deductions implemented electrically. Most possibilities led to a contradiction, and the few remaining could be tested by hand. The Germans also developed a series of teleprinter encryption systems, quite different from Enigma. The Lorenz SZ 40/42 machine was used for high-level Army communications, code-named "Tunny" by the British. The first intercepts of Lorenz messages began in 1941. As part of an attack on Tunny, Max Newman and his colleagues developed the Heath Robinson, a fixed-function machine to aid in code breaking. 
Tommy Flowers, a senior engineer at the Post Office Research Station, was recommended to Max Newman by Alan Turing and spent eleven months from early February 1943 designing and building the more flexible Colossus computer (which superseded the Heath Robinson). After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Data input to Colossus was by photoelectric reading of a paper tape transcription of the enciphered intercepted message. This was arranged in a continuous loop so that it could be read and re-read multiple times – there being no internal store for the data. The reading mechanism ran at 5,000 characters per second. Colossus Mark 1 contained 1,500 thermionic valves (tubes), but Mark 2, with 2,400 valves and five processors in parallel, was both 5 times faster and simpler to operate than Mark 1, greatly speeding the decoding process. Mark 2 was designed while Mark 1 was being constructed. Allen Coombs took over leadership of the Colossus Mark 2 project when Tommy Flowers moved on to other projects. The first Mark 2 Colossus became operational on 1 June 1944, just in time for the Allied Invasion of Normandy on D-Day. Most of the use of Colossus was in determining the start positions of the Tunny rotors for a message, which was called "wheel setting". Colossus included the first-ever use of shift registers and systolic arrays, enabling five simultaneous tests, each involving up to 100 Boolean calculations. This enabled five different possible start positions to be examined for one transit of the paper tape. 
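The counting principle just described can be modelled in miniature. This is an illustrative model, not Colossus's actual logic: the tape circulates in a loop, a configurable Boolean test is applied to each character against a simulated wheel pattern, and several candidate start positions are scored per transit (the Mark 2 scored five in parallel). The wheel pattern and tape below are toy values.

```python
# Toy model of wheel setting: score candidate start positions by counting
# how often a Boolean test on tape-vs-wheel characters succeeds.

def score_settings(tape, wheel, start_positions):
    """For each candidate start, count positions where tape XOR wheel is 0."""
    counts = {}
    for start in start_positions:
        count = 0
        for i, t in enumerate(tape):
            w = wheel[(start + i) % len(wheel)]
            if t ^ w == 0:                 # the configurable Boolean test
                count += 1
        counts[start] = count
    return counts

# A toy wheel pattern, and a "tape" generated from start position 3, so the
# statistically outstanding count should single out setting 3:
wheel = [1, 0, 1, 1, 0, 0, 1]
tape = [wheel[(3 + i) % len(wheel)] for i in range(21)]
scores = score_settings(tape, wheel, range(5))
```

In practice the real attack looked not for a perfect match but for a count that deviated significantly from the chance expectation, since the tape was ciphertext rather than a clean copy of the wheel pattern.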
As well as wheel setting some later Colossi included mechanisms intended to help determine pin patterns known as "wheel breaking". Both models were programmable using switches and plug panels in a way their predecessors had not been. Ten Mk 2 Colossi were operational by the end of the war. Without the use of these machines, the Allies would have been deprived of the very valuable intelligence that was obtained from reading the vast quantity of enciphered high-level telegraphic messages between the German High Command (OKW) and their army commands throughout occupied Europe. Details of their existence, design, and use were kept secret well into the 1970s. Winston Churchill personally issued an order for their destruction into pieces no larger than a man's hand, to keep secret that the British were capable of cracking Lorenz SZ cyphers (from German rotor stream cipher machines) during the oncoming Cold War. Two of the machines were transferred to the newly formed GCHQ and the others were destroyed. As a result, the machines were not included in many histories of computing. A reconstructed working copy of one of the Colossus machines is now on display at Bletchley Park. The US-built ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the US. Although the ENIAC was similar to the Colossus it was much faster and more flexible. It was unambiguously a Turing-complete device and could compute any problem that would fit into its memory. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were women who had been trained as mathematicians. It combined the high speed of electronics with the ability to be programmed for many complex problems. 
It could add or subtract 5,000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High-speed memory was limited to 20 words (equivalent to about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. One of its major engineering feats was to minimize the effects of tube burnout, which was a common problem in machine reliability at that time. The machine was in almost constant use for the next ten years. Stored-program computer Early computing machines were programmable in the sense that they could follow the sequence of steps they had been set up to execute, but the "program", or the steps that the machine was to execute, was usually set up by changing how the wires were plugged into a patch panel or plugboard. "Reprogramming", when it was possible at all, was a laborious process, starting with engineers working out flowcharts, designing the new set-up, and then the often-exacting process of physically re-wiring patch panels. Stored-program computers, by contrast, were designed to store a set of instructions (a program) in memory – typically the same memory as stored data. Theory The theoretical basis for the stored-program computer had been proposed by Alan Turing in his 1936 paper. In 1945 Turing joined the National Physical Laboratory and began his work on developing an electronic stored-program digital computer. His 1945 report 'Proposed Electronic Calculator' was the first specification for such a device. 
Meanwhile, John von Neumann at the Moore School of Electrical Engineering, University of Pennsylvania, circulated his First Draft of a Report on the EDVAC in 1945. Although substantially similar to Turing's design and containing comparatively little engineering detail, the computer architecture it outlined became known as the "von Neumann architecture". Turing presented a more detailed paper to the National Physical Laboratory (NPL) Executive Committee in 1946, giving the first reasonably complete design of a stored-program computer, a device he called the Automatic Computing Engine (ACE). However, the better-known EDVAC design of John von Neumann, who knew of Turing's theoretical work, received more publicity, despite its incomplete nature and questionable lack of attribution of the sources of some of the ideas. Turing thought that the speed and the size of computer memory were crucial elements, so he proposed a high-speed memory of what would today be called 25 KB, accessed at a speed of 1 MHz. The ACE implemented subroutine calls, whereas the EDVAC did not, and the ACE also used Abbreviated Computer Instructions, an early form of programming language. Manchester Baby The Manchester Baby was the world's first electronic stored-program computer. It was built at the Victoria University of Manchester by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. The machine was not intended to be a practical computer but was instead designed as a testbed for the Williams tube, the first random-access digital storage device. Invented by Freddie Williams and Tom Kilburn at the University of Manchester in 1946 and 1947, it was a cathode-ray tube that used an effect called secondary emission to temporarily store electronic binary data, and was used successfully in several early computers. 
Although the computer was small and primitive and had been designed only as a proof of concept, the Baby was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project was initiated at the university to develop the design into a more usable computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. The Baby had a 32-bit word length and a memory of 32 words. As it was designed to be the simplest possible stored-program computer, the only arithmetic operations implemented in hardware were subtraction and negation; other arithmetic operations were implemented in software. The first of three programs written for the machine found the highest proper divisor of 2^18 (262,144), a calculation that was known to take a long time to run—and so prove the computer's reliability—by testing every integer from 2^18 − 1 downwards, as division was implemented by repeated subtraction of the divisor. The program consisted of 17 instructions and ran for 52 minutes before reaching the correct answer of 131,072, after the Baby had performed 3.5 million operations (for an effective CPU speed of 1.1 kIPS). The successive approximations to the answer were displayed as the successive positions of a bright dot on the Williams tube. Manchester Mark 1 The experimental machine led on to the development of the Manchester Mark 1 at the University of Manchester. Work began in August 1948, and the first version was operational by April 1949; a program written to search for Mersenne primes ran error-free for nine hours on the night of 16/17 June 1949. The machine's successful operation was widely reported in the British press, which used the phrase "electronic brain" in describing it to their readers. 
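The Baby's first program, described above, can be replicated literally in a few lines: search downward from 2^18 − 1 for the highest proper divisor of 2^18, with division performed by repeated subtraction, since subtraction and negation were the machine's only hardware arithmetic.

```python
# A direct replication of the algorithm of the Baby's first program.

def remainder_by_subtraction(n, d):
    # the Baby had no divide instruction: reduce n by d until it is < d
    while n >= d:
        n -= d
    return n

def highest_proper_divisor(n):
    # blind downward search, exactly as the first Baby program did it
    candidate = n - 1
    while remainder_by_subtraction(n, candidate) != 0:
        candidate -= 1
    return candidate

result = highest_proper_divisor(2 ** 18)   # the Baby's answer: 131072
```

The same search that occupied the Baby for 52 minutes and 3.5 million operations completes in well under a second today, which is the point of the comparison rather than a criticism of the original.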
The computer is especially historically significant because of its pioneering inclusion of index registers, an innovation which made it easier for a program to read sequentially through an array of words in memory. Thirty-four patents resulted from the machine's development, and many of the ideas behind its design were incorporated in subsequent commercial products such as the IBM 701 and 702 as well as the Ferranti Mark 1. The chief designers, Frederic C. Williams and Tom Kilburn, concluded from their experiences with the Mark 1 that computers would be used more in scientific roles than in pure mathematics. In 1951 they started development work on Meg, the Mark 1's successor, which would include a floating-point unit. EDSAC The other contender for being the first recognizably modern digital stored-program computer was the EDSAC, designed and constructed by Maurice Wilkes and his team at the University of Cambridge Mathematical Laboratory in England in 1949. The machine was inspired by John von Neumann's seminal First Draft of a Report on the EDVAC and was one of the first usefully operational electronic digital stored-program computers. EDSAC ran its first programs on 6 May 1949, when it calculated a table of squares and a list of prime numbers. The EDSAC also served as the basis for the first commercially applied computer, the LEO I, used by food manufacturing company J. Lyons & Co. Ltd. EDSAC 1 was finally shut down on 11 July 1958, having been superseded by EDSAC 2, which stayed in use until 1965. EDVAC ENIAC inventors John Mauchly and J. Presper Eckert proposed the EDVAC's construction in August 1944, and design work for the EDVAC commenced at the University of Pennsylvania's Moore School of Electrical Engineering, before the ENIAC was fully operational. The design implemented a number of important architectural and logical improvements conceived during the ENIAC's construction, and a high-speed serial-access memory. 
However, Eckert and Mauchly left the project and its construction floundered. It was finally delivered to the U.S. Army's Ballistics Research Laboratory at the Aberdeen Proving Ground in August 1949, but due to a number of problems, the computer only began operation in 1951, and then only on a limited basis. Commercial computers The first commercial computer was the Ferranti Mark 1, built by Ferranti and delivered to the University of Manchester in February 1951. It was based on the Manchester Mark 1. The main improvements over the Manchester Mark 1 were in the size of the primary storage (using random access Williams tubes), secondary storage (using a magnetic drum), a faster multiplier, and additional instructions. The basic cycle time was 1.2 milliseconds, and a multiplication could be completed in about 2.16 milliseconds. The multiplier used almost a quarter of the machine's 4,050 vacuum tubes (valves). A second machine was purchased by the University of Toronto, before the design was revised into the Mark 1 Star. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947, the directors of J. Lyons & Company, a British catering company famous for its teashops but with strong interests in new office management techniques, decided to take an active role in promoting the commercial development of computers. The LEO I computer (Lyons Electronic Office) became operational in April 1951 and ran the world's first regular routine office computer job. On 17 November 1951, the J. Lyons company began weekly operation of a bakery valuations job on the LEO – the first business application to go live on a stored-program computer. In June 1951, the UNIVAC I (Universal Automatic Computer) was delivered to the U.S. Census Bureau. Remington Rand eventually sold 46 machines at more than US$1 million each. UNIVAC was the first "mass produced" computer. 
It used 5,200 vacuum tubes and consumed 125 kW of power. Its primary storage was serial-access mercury delay lines capable of storing 1,000 words of 11 decimal digits plus sign (72-bit words). IBM introduced a smaller, more affordable computer in 1954 that proved very popular. The IBM 650 weighed over 900 kg, the attached power supply weighed around 1350 kg and both were held in separate cabinets of roughly 1.5 meters by 0.9 meters by 1.8 meters. The system cost US$500,000 or could be leased for US$3,500 a month. Its drum memory was originally 2,000 ten-digit words, later expanded to 4,000 words. Memory limitations such as this were to dominate programming for decades afterward. The program instructions were fetched from the spinning drum as the code ran. Efficient execution using drum memory was provided by a combination of hardware architecture – the instruction format included the address of the next instruction – and software: the Symbolic Optimal Assembly Program, SOAP, assigned instructions to the optimal addresses (to the extent possible by static analysis of the source program). Thus many instructions were, when needed, located in the next row of the drum to be read, and the additional wait time for drum rotation was reduced. Microprogramming In 1951, British scientist Maurice Wilkes developed the concept of microprogramming from the realisation that the central processing unit of a computer could be controlled by a miniature, highly specialised computer program in high-speed ROM. Microprogramming allows the base instruction set to be defined or extended by built-in programs (now called firmware or microcode). This concept greatly simplified CPU development. He first described this at the University of Manchester Computer Inaugural Conference in 1951, then published in expanded form in IEEE Spectrum in 1955. 
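The scheme Wilkes proposed can be sketched in miniature: instead of hard-wired control logic, each machine instruction indexes a table of micro-operations (the control ROM), and executing an instruction means stepping through its micro-ops. The three-instruction machine, the register names, and the micro-ops below are all invented for illustration.

```python
# A toy microprogrammed CPU: macro-instructions expand into micro-operation
# sequences looked up in a "control ROM" rather than being wired in logic.

MICROCODE = {                      # control ROM: opcode -> micro-operations
    "LOAD":  ["mem_to_mdr", "mdr_to_acc"],
    "ADD":   ["mem_to_mdr", "alu_add"],
    "STORE": ["acc_to_mdr", "mdr_to_mem"],
}

def run(program, memory):
    regs = {"acc": 0, "mdr": 0}    # accumulator and memory data register
    micro_ops = {
        "mem_to_mdr": lambda addr: regs.update(mdr=memory[addr]),
        "mdr_to_acc": lambda addr: regs.update(acc=regs["mdr"]),
        "alu_add":    lambda addr: regs.update(acc=regs["acc"] + regs["mdr"]),
        "acc_to_mdr": lambda addr: regs.update(mdr=regs["acc"]),
        "mdr_to_mem": lambda addr: memory.__setitem__(addr, regs["mdr"]),
    }
    for opcode, addr in program:   # each instruction expands into micro-steps
        for step in MICROCODE[opcode]:
            micro_ops[step](addr)
    return memory

memory = [7, 5, 0]
run([("LOAD", 0), ("ADD", 1), ("STORE", 2)], memory)   # memory becomes [7, 5, 12]
```

The simplification Wilkes identified is visible even at this scale: adding or changing an instruction means editing a table entry, not redesigning control circuitry.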
It was widely used in the CPUs and floating-point units of mainframe and other computers; it was implemented for the first time in EDSAC 2, which also used multiple identical "bit slices" to simplify design. Interchangeable, replaceable tube assemblies were used for each bit of the processor. Magnetic memory Magnetic drum memories were developed for the US Navy during World War II, with the work continuing at Engineering Research Associates (ERA) in 1946 and 1947. ERA, then a part of Univac, included a drum memory in its 1103, announced in February 1953. The first mass-produced computer, the IBM 650, also announced in 1953, had about 8.5 kilobytes of drum memory. Magnetic core memory was patented in 1949, with its first usage demonstrated for the Whirlwind computer in August 1953. Commercialization followed quickly. Magnetic core was used in peripherals of the IBM 702 delivered in July 1955, and later in the 702 itself. The IBM 704 (1955) and the Ferranti Mercury (1957) used magnetic-core memory. It went on to dominate the field into the 1970s, when it was replaced with semiconductor memory. Magnetic core peaked in volume about 1975 and declined in usage and market share thereafter. As late as 1980, PDP-11/45 machines using magnetic-core main memory and drums for swapping were still in use at many of the original UNIX sites. Early digital computer characteristics Transistor computers The bipolar transistor was invented in 1947. From 1955 onward transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Silicon junction transistors were much more reliable than vacuum tubes and had a longer service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. Transistors greatly reduced computers' size, initial cost, and operating cost. 
Typically, second-generation computers were composed of large numbers of printed circuit boards such as the IBM Standard Modular System, each carrying one to four logic gates or flip-flops. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Initially the only devices available were germanium point-contact transistors, less reliable than the valves they replaced but which consumed far less power. Their first transistorised computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. The 1955 version used 200 transistors, 1,300 solid-state diodes, and had a power consumption of 150 watts. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The design featured a 64-kilobyte magnetic drum memory store with multiple moving heads that had been designed at the National Physical Laboratory, UK. By 1953 this team had transistor circuits operating to read and write on a smaller magnetic drum from the Royal Radar Establishment. The machine used a low clock speed of only 58 kHz to avoid having to use any valves to generate the clock waveforms. CADET used 324 point-contact transistors provided by the UK company Standard Telephones and Cables; 76 junction transistors were used for the first-stage amplifiers for data read from the drum, since point-contact transistors were too noisy. From August 1956 CADET was offering a regular computing service, during which it often executed continuous computing runs of 80 hours or more. 
Problems with the reliability of early batches of point contact and alloyed junction transistors meant that the machine's mean time between failures was about 90 minutes, but this improved once the more reliable bipolar junction transistors became available. The Manchester University Transistor Computer's design was adopted by the local engineering firm of Metropolitan-Vickers in their Metrovick 950, the first commercial transistor computer anywhere. Six Metrovick 950s were built, the first completed in 1956. They were successfully deployed within various departments of the company and were in use for about five years. A second-generation computer, the IBM 1401, captured about one third of the world market. IBM installed more than ten thousand 1401s between 1960 and 1964. Transistor peripherals Transistorized electronics improved not only the CPU (Central Processing Unit), but also the peripheral devices. The second-generation disk data storage units were able to store tens of millions of letters and digits. Next to the fixed disk storage units, connected to the CPU via high-speed data transmission, were removable disk data storage units. A removable disk pack could easily be exchanged with another pack in a few seconds. Even though the removable disks' capacity was smaller than that of fixed disks, their interchangeability guaranteed a nearly unlimited quantity of data close at hand. Magnetic tape provided archival capability for this data, at a lower cost than disk. Many second-generation CPUs delegated peripheral device communications to a secondary processor. For example, while the communication processor controlled card reading and punching, the main CPU executed calculations and binary branch instructions. One databus would bear data between the main CPU and core memory at the CPU's fetch-execute cycle rate, and other databuses would typically serve the peripheral devices. 
On the PDP-1, the core memory's cycle time was 5 microseconds; consequently most arithmetic instructions took 10 microseconds (100,000 operations per second) because most operations took at least two memory cycles: one for the instruction, one for the operand data fetch. During the second generation remote terminal units (often in the form of teleprinters like a Friden Flexowriter) saw greatly increased use. Telephone connections provided sufficient speed for early remote terminals and allowed hundreds of kilometers of separation between remote terminals and the computing center. Eventually these stand-alone computer networks would be generalized into an interconnected network of networks—the Internet. Transistor supercomputers The early 1960s saw the advent of supercomputing. The Atlas was a joint development between the University of Manchester, Ferranti, and Plessey, and was first installed at Manchester University and officially commissioned in 1962 as one of the world's first supercomputers – considered to be the most powerful computer in the world at that time. It was said that whenever Atlas went offline half of the United Kingdom's computer capacity was lost. It was a second-generation machine, using discrete germanium transistors. Atlas also pioneered the Atlas Supervisor, "considered by many to be the first recognisable modern operating system". In the US, a series of computers at Control Data Corporation (CDC) were designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance. The CDC 6600, released in 1964, is generally considered the first supercomputer. The CDC 6600 outperformed its predecessor, the IBM 7030 Stretch, by about a factor of 3. With performance of about 1 megaFLOPS, the CDC 6600 was the world's fastest computer from 1964 to 1969, when it relinquished that status to its successor, the CDC 7600. 
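The PDP-1 instruction-timing arithmetic quoted above follows directly from the memory cycle time: if every instruction costs a fixed number of core cycles, throughput is the reciprocal of their product. A small sketch, using the figures given in the text:

```python
def ops_per_second(cycle_time_us, cycles_per_instruction):
    """Throughput of a core-memory machine where each instruction takes
    a whole number of memory cycles of `cycle_time_us` microseconds."""
    return 1_000_000 / (cycle_time_us * cycles_per_instruction)

# PDP-1: 5 µs core cycle, two cycles per arithmetic instruction
# (one for the instruction fetch, one for the operand fetch)
print(ops_per_second(5, 2))  # 100000.0 operations per second
```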
Integrated circuit computers The "third-generation" of digital electronic computers used integrated circuit (IC) chips as the basis of their logic. The idea of an integrated circuit was conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. The first working integrated circuits were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. Kilby's invention was a hybrid integrated circuit (hybrid IC). It had external wire connections, which made it difficult to mass-produce. Noyce came up with his own idea of an integrated circuit half a year after Kilby. Noyce's invention was a monolithic integrated circuit (IC) chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. The basis for Noyce's monolithic IC was Fairchild's planar process, which allowed integrated circuits to be laid out using the same principles as those of printed circuits. The planar process was developed by Noyce's colleague Jean Hoerni in early 1959, based on Mohamed M. Atalla's work on semiconductor surface passivation by silicon dioxide at Bell Labs in the late 1950s. Third-generation (integrated circuit) computers first appeared in the early 1960s in computers developed for government purposes, and then in commercial computers beginning in the mid-1960s. The first silicon IC computer was the Apollo Guidance Computer or AGC. Although not the most powerful computer of its time, the extreme constraints on size, mass, and power of the Apollo spacecraft required the AGC to be much smaller and denser than any prior computer. 
Each lunar landing mission carried two AGCs, one each in the command and lunar ascent modules. Semiconductor memory The MOSFET (metal-oxide-semiconductor field-effect transistor, or MOS transistor) was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959. In addition to data processing, the MOSFET enabled the practical use of MOS transistors as memory cell storage elements, a function previously served by magnetic cores. Semiconductor memory, also known as MOS memory, was cheaper and consumed less power than magnetic-core memory. MOS random-access memory (RAM), in the form of static RAM (SRAM), was developed by John Schmidt at Fairchild Semiconductor in 1964. In 1966, Robert Dennard at the IBM Thomas J. Watson Research Center developed MOS dynamic RAM (DRAM). In 1967, Dawon Kahng and Simon Sze at Bell Labs developed the floating-gate MOSFET, the basis for MOS non-volatile memory such as EPROM, EEPROM and flash memory. Microprocessor computers The "fourth-generation" of digital electronic computers used microprocessors as the basis of their logic. The microprocessor has origins in the MOS integrated circuit (MOS IC) chip. Due to rapid MOSFET scaling, MOS IC chips rapidly increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on a single MOS LSI chip. The subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor". The earliest multi-chip microprocessors were the Four-Phase Systems AL-1 in 1969 and Garrett AiResearch MP944 in 1970, developed with multiple MOS LSI chips. The first single-chip microprocessor was the Intel 4004, developed on a single PMOS LSI chip. 
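The Moore's-law growth in chip complexity mentioned above is simple exponential doubling; a toy projection makes the rate concrete (the two-year doubling period is one common formulation, and the starting figures here are illustrative, not sourced):

```python
def projected_transistors(count_start, year_start, year_end, doubling_years=2):
    """Project a transistor count forward assuming one doubling
    every `doubling_years` years (Moore's-law style growth)."""
    doublings = (year_end - year_start) / doubling_years
    return count_start * 2 ** doublings

# Illustrative: a chip with hundreds of transistors in the late 1960s,
# projected a decade out under two-year doubling
print(projected_transistors(500, 1968, 1978))  # 16000.0
```

Ten years at a two-year doubling period is five doublings, a 32-fold increase, which is why LSI chips of "hundreds of transistors" in the late 1960s led so quickly to single-chip processors.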
It was designed and realized by Ted Hoff, Federico Faggin, Masatoshi Shima and Stanley Mazor at Intel, and released in 1971. Tadashi Sasaki and Masatoshi Shima at Busicom, a calculator manufacturer, had the initial insight that the CPU could be a single MOS LSI chip, supplied by Intel. While the earliest microprocessor ICs literally contained only the processor, i.e. the central processing unit, of a computer, their progressive development naturally led to chips containing most or all of the internal electronic parts of a computer. The Intel 8742, for example, is an 8-bit microcontroller that includes a CPU running at 12 MHz, 128 bytes of RAM, 2048 bytes of EPROM, and I/O in the same chip. During the 1960s there was considerable overlap between second and third generation technologies. IBM implemented its IBM Solid Logic Technology modules in hybrid circuits for the IBM System/360 in 1964. As late as 1975, Sperry Univac continued the manufacture of second-generation machines such as the UNIVAC 494. The Burroughs large systems such as the B5000 were stack machines, which allowed for simpler programming. These pushdown automata were also implemented in minicomputers and microprocessors later, which influenced programming language design. Minicomputers served as low-cost computer centers for industry, business and universities. It became possible to simulate analog circuits with the Simulation Program with Integrated Circuit Emphasis, or SPICE (1971), on minicomputers, one of the programs for electronic design automation (EDA). The microprocessor led to the development of microcomputers, small, low-cost computers that could be owned by individuals and small businesses. Microcomputers, the first of which appeared in the 1970s, became ubiquitous in the 1980s and beyond. 
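The stack-machine model used by the Burroughs B5000, in which operators pop their operands from a pushdown stack and push the result back, is easiest to see in a minimal postfix-expression evaluator (a hypothetical Python sketch of the execution model, not Burroughs code):

```python
def eval_postfix(tokens):
    """Evaluate postfix tokens on an explicit operand stack,
    the way a stack machine executes compiled expressions."""
    stack = []
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b}
    for tok in tokens:
        if tok in ops:
            b = stack.pop()               # operator pops its operands...
            a = stack.pop()
            stack.append(ops[tok](a, b))  # ...and pushes the result
        else:
            stack.append(int(tok))        # operand: push onto the stack
    return stack.pop()

# (2 + 3) * 4 in postfix notation:
print(eval_postfix(['2', '3', '+', '4', '*']))  # 20
```

This is the sense in which stack machines "allowed for simpler programming": compilers could emit expression code without allocating registers, an idea that later shaped both minicomputer instruction sets and programming-language virtual machines.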
While which specific system is considered the first microcomputer is a matter of debate, as there were several unique hobbyist systems developed based on the Intel 4004 and its successor, the Intel 8008, the first commercially available microcomputer kit was the Intel 8080-based Altair 8800, which was announced in the January 1975 cover article of Popular Electronics. However, this was an extremely limited system in its initial stages, having only 256 bytes of DRAM in its initial package and no input-output except its toggle switches and LED register display. Despite this, it was initially surprisingly popular, with several hundred sales in the first year, and demand rapidly outstripped supply. Several early third-party vendors such as Cromemco and Processor Technology soon began supplying additional S-100 bus hardware for the Altair 8800. In April 1975 at the Hannover Fair, Olivetti presented the P6060, the world's first complete, pre-assembled personal computer system. The central processing unit consisted of two cards, code named PUCE1 and PUCE2, and unlike most other personal computers was built with TTL components rather than a microprocessor. It had one or two 8" floppy disk drives, a 32-character plasma display, an 80-column graphical thermal printer, 48 kbytes of RAM, and the BASIC language. As a complete system, this was a significant step from the Altair, though it never achieved the same success. It was in competition with a similar product by IBM that had an external floppy disk drive. From 1975 to 1977, most microcomputers, such as the MOS Technology KIM-1, the Altair 8800, and some versions of the Apple I, were sold as kits for do-it-yourselfers. Pre-assembled systems did not gain much ground until 1977, with the introduction of the Apple II, the Tandy TRS-80, the first SWTPC computers, and the Commodore PET. 
Computing has evolved with microcomputer architectures, with features added from their larger brethren, now dominant in most market segments. A NeXT Computer and its object-oriented development tools and libraries were used by Tim Berners-Lee and Robert Cailliau at CERN to develop the world's first web server software, CERN httpd, and also used to write the first web browser, WorldWideWeb. Systems as complicated as computers require very high reliability. ENIAC remained in continuous operation from 1947 until it was shut down in 1955. Although a vacuum tube might fail, it would be replaced without bringing down the system. By the simple strategy of never shutting down ENIAC, the failures were dramatically reduced. The vacuum-tube SAGE air-defense computers became remarkably reliable – installed in pairs, one off-line, tubes likely to fail did so when the computer was intentionally run at reduced power to find them. Hot-pluggable hard disks, like the hot-pluggable vacuum tubes of yesteryear, continue the tradition of repair during continuous operation. Semiconductor memories routinely have no errors when they operate, although operating systems like Unix have employed memory tests on start-up to detect failing hardware. Today, the requirement of reliable performance is made even more stringent when server farms are the delivery platform. Google has managed this by using fault-tolerant software to recover from hardware failures, and is even working on the concept of replacing entire server farms on-the-fly, during a service event. In the 21st century, multi-core CPUs became commercially available. Content-addressable memory (CAM) has become inexpensive enough to be used in networking, and is frequently used for on-chip cache memory in modern microprocessors, although no computer system has yet implemented hardware CAMs for use in programming languages. Currently, CAMs (or associative arrays) in software are programming-language-specific. 
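As the closing sentences note, content-addressable lookup in software falls to each language's associative arrays. In Python that is the built-in dict, which retrieves a value by content (the key) rather than by address, much as a hardware CAM in a network switch maps a MAC address to an output port. The MAC-learning table below is an illustrative use of that parallel, not an example from the text:

```python
# A software "CAM": look data up by content rather than by address.
mac_table = {}                        # switch-style MAC learning table

def learn(mac, port):
    mac_table[mac] = port             # associate content with a result

def lookup(mac):
    return mac_table.get(mac)         # hash lookup, O(1) on average; None if absent

learn("aa:bb:cc:dd:ee:01", 3)
learn("aa:bb:cc:dd:ee:02", 7)
print(lookup("aa:bb:cc:dd:ee:02"))    # 7
print(lookup("ff:ff:ff:ff:ff:ff"))    # None (no matching entry)
```

The hardware CAM checks all entries in parallel in one cycle; the software dict approximates the same content-keyed interface with hashing.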
Semiconductor memory cell arrays are very regular structures, and manufacturers prove their processes on them; this allows price reductions on memory products. During the 1980s, CMOS logic gates developed into devices that could be made as fast as other circuit types; computer power consumption could therefore be decreased dramatically. Unlike the continuous current draw of a gate based on other logic types, a CMOS gate only draws significant current during the transition between logic states, apart from leakage. CMOS circuits have allowed computing to become a commodity which is now ubiquitous, embedded in many forms, from greeting cards and telephones to satellites. The thermal design power dissipated during operation has become as essential a design consideration as computing speed. In 2006 servers consumed 1.5% of the total energy budget of the U.S. The energy consumption of computer data centers was expected to double to 3% of world consumption by 2011. The SoC (system on a chip) has compressed even more of the integrated circuitry into a single chip; SoCs are enabling phones and PCs to converge into single hand-held wireless mobile devices. Quantum computing is an emerging technology in the field of computing. MIT Technology Review reported 10 November 2017 that IBM has created a 50-qubit computer; currently its quantum state lasts 50 microseconds. Google researchers have been able to extend the 50 microsecond time limit, as reported 14 July 2021 in Nature; stability has been extended 100-fold by spreading a single logical qubit over chains of data qubits for quantum error correction. Physical Review X reported a technique for 'single-gate sensing as a viable readout method for spin qubits' (a singlet-triplet spin state in silicon) on 26 November 2018. 
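The CMOS behaviour described above, significant current only on logic transitions plus a leakage term, is commonly approximated by the dynamic-power relation P ≈ α·C·V²·f (activity factor, switched capacitance, supply voltage, clock frequency). A sketch with illustrative, not measured, numbers:

```python
def dynamic_power(alpha, capacitance_f, voltage_v, frequency_hz):
    """Approximate CMOS switching power, P = alpha * C * V^2 * f.
    Ignores leakage and short-circuit current."""
    return alpha * capacitance_f * voltage_v ** 2 * frequency_hz

# Illustrative values: 10 nF total switched capacitance, 1.2 V supply,
# 1 GHz clock, 10% activity factor
p = dynamic_power(0.1, 10e-9, 1.2, 1e9)
print(round(p, 3))  # 1.44 watts
```

The quadratic dependence on supply voltage is why voltage scaling, rather than simply slowing the clock, delivered most of the power reductions the text describes.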
A Google team has succeeded in operating their RF pulse modulator chip at 3 kelvin, simplifying the cryogenics of their 72-qubit computer, which is set up to operate at 0.3 kelvin; but the readout circuitry and another driver remain to be brought into the cryogenics. See: Quantum supremacy Silicon qubit systems have demonstrated entanglement at non-local distances. Computing hardware and its software have even become a metaphor for the operation of the universe. Epilogue An indication of the rapidity of development of this field can be inferred from the history of the seminal 1947 article by Burks, Goldstine and von Neumann. By the time that anyone had time to write anything down, it was obsolete. After 1945, others read John von Neumann's First Draft of a Report on the EDVAC, and immediately started implementing their own systems. To this day, the rapid pace of development has continued, worldwide. A 1966 article in Time predicted that: "By 2000, the machines will be producing so much that everyone in the U.S. will, in effect, be independently wealthy. How to use leisure time will be a major problem." See also Antikythera mechanism History of computing History of computing hardware (1960s–present) History of laptops History of personal computers History of software Information Age IT History Society Timeline of computing List of pioneers in computer science Vacuum-tube computer 
Further reading Computers and Automation Magazine – Pictorial Report on the Computer Field: A PICTORIAL INTRODUCTION TO COMPUTERS – 06/1957 A PICTORIAL MANUAL ON COMPUTERS – 12/1957 A PICTORIAL MANUAL ON COMPUTERS, Part 2 – 01/1958 1958–1967 Pictorial Report on the Computer Field – December issues (195812.pdf, ..., 196712.pdf) Bit by Bit: An Illustrated History of Computers, Stan Augarten, 1984. OCR with permission of the author External links Obsolete Technology – Old Computers History of calculating technology Historic Computers in Japan The History of Japanese Mechanical Calculating Machines Computer History — a collection of articles by Bob Bemer 25 Microchips that shook the world – a collection of articles by the Institute of Electrical and Electronics Engineers Columbia University Computing History Computer Histories – An introductory course on the history of computing Revolution – The First 2000 Years Of Computing, Computer History Museum History of computing
1212015
https://en.wikipedia.org/wiki/Final%20Doom
Final Doom
Final Doom is a first-person shooter video game developed by TeamTNT, and Dario and Milo Casali, and was released by id Software and distributed by GT Interactive Software in 1996. It was released for MS-DOS and Macintosh computers, as well as for the PlayStation, although the latter featured a selection of levels from Final Doom and from Master Levels for Doom II. Final Doom consists of two 32-level episodes (or megawads), TNT: Evilution and The Plutonia Experiment. Unlike TNT: Evilution, which began as an independent project that id Software licensed, The Plutonia Experiment was made at the request of id Software. The story in both episodes takes place after the events of Doom II. TNT: Evilution features a new soundtrack not found in Doom and Doom II. Gameplay Final Doom plays identically to Doom II: Hell on Earth, and even features the same weapons, items, and monsters. The game is widely considered to be significantly more difficult than its predecessors Doom and Doom II. The gameplay in the PlayStation version of Final Doom is nearly identical to that found in the PlayStation version of Doom, and, in addition, it was compatible with the PlayStation Mouse. Compared to the MS-DOS original, the PlayStation version's overall difficulty was significantly reduced. Many of the harder levels were removed and those that remained often had enemies taken out (most noticeably the Cyberdemon being removed from the level 'Baron's Lair'). As in the original PlayStation version of Doom, many of the larger levels from the original MS-DOS versions of Final Doom and Master Levels for Doom II were removed, and both the Arch-vile and Spider Mastermind monsters were removed, due to technical constraints. This limited the PlayStation version to 30 levels in total. The more traditional rock tracks featured in Final Doom were replaced by a creepier ambient soundtrack by Aubrey Hodges, who later composed the music for Doom 64 in 1997. 
There are several noticeable alterations to the presentation of Final Doom in the PlayStation version compared to the computer versions. The simplistic title screens featured in the computer versions have been replaced by a more elaborate title screen that features the animated flame-filled sky texture from the original PlayStation version of Doom. Many of the levels' sky textures have been replaced by different ones; some levels' skies are replaced by sky textures seen in previous Doom games, whereas others now feature a new starry sky texture. Finally, most of the level layouts are simplified, similar to previous Doom console ports, and the frame rate of the game is often lower than it was in the first PlayStation Doom game. Plot TNT: Evilution In TNT: Evilution, the UAC are once again intent on developing and experimenting with dimensional gateway technology. They set up a base on one of the moons of Jupiter, with a solid detachment of space marines for protection. The marines do their job well; when the first experimental gateway is opened, they annihilate the forces of Hell. Research continues with more confidence, with all security measures trained on the gateway. A few months later, the yearly supply ship comes ahead of schedule, and looks strange and unusually big on the radar. The lax radar operators decide that there is nothing to worry about. The personnel of the base go out to behold the terrible truth: it is a spaceship from Hell, built out of steel, stone, flesh, bone and corruption. The ship's enormous gates open to unleash a rain of demons on the base. Quickly, the entire facility is overrun, and everyone is slain or zombified. The main character, the nameless space marine (who is revealed to be the marine commander on the moon), has been away on a walk at the time and thus escapes death or zombification. 
After being attacked by an imp, he rushes back to the base where he sees the demonic spaceship still hovering above it and realizes what has happened. He then swears that he will avenge his slain troops and sets out to kill as many demons as possible. In the end, the marine defeats the Icon of Sin and the game describes "something rumbles in the distance. A blue light glows within the ruined skull of the demon-spitter." The Plutonia Experiment In The Plutonia Experiment, after Hell's catastrophic invasion of Earth, the global governments decide to take measures against any possible future invasion, knowing that the powers of Hell still remained strong. The UAC is refounded under completely new management (the old trustees and stockholders were all dead anyway) and aims at developing tools that would prevent demonic invasions. The scientists start working on a device known as the Quantum Accelerator that is intended to close invasion Gates and stop possible invasions. The experiments are carried out in a secret research complex, with a stationed detachment of marines. The work seems to be going well but the creatures from Outside have their attention drawn towards the new research. A Gate opens in the heart of the complex and unnatural horrors pour out. The Quantum Accelerator performs superbly: the Gate is quickly closed and the invasion is stopped. Research continues more boldly. On the next day, a ring of 7 Gates opens and an even greater invasion begins. For one hour the Quantum Accelerators manage to close 6 of the Gates, but the Hellish army has become too numerous and too strong. The complex is overrun. Everyone is slain, or zombified. The last Gate of Hell remains open, guarded by a Gatekeeper: a powerful, enormous and ancient demon that has the power to open Dimensional Gates and control or protect them. The government, frantic that the Quantum Accelerator will be destroyed or used against humanity, orders all marines to the site at once. 
The player, the nameless space marine, is on leave at the beach. He is also closest to the site and gets there first. There he discovers that there is much demonic activity (howling, chanting, machine sounds) within the complex; the Gatekeeper is obviously working on something, and his work would soon reach some awful climax. He also realizes that when the marines arrive, they would not be able to penetrate the heavily infested complex, despite the firepower and support they will have. The marine decides to enter the complex and stop the Gatekeeper alone. Development Work on TNT: Evilution was started by TeamTNT, a group of WAD-making hobbyists who were active on the advanced Doom editing mailing list. Just days before it was to be released as a free download online, the project was acquired by id Software, and finished in November 1995. Brothers Dario and Milo Casali, who had contributed four levels to TNT: Evilution, were assigned the task of creating what became The Plutonia Experiment after having sent an eight-level WAD they had created to American McGee and managing to impress him along with the rest of the id Software crew. They created 16 levels each for The Plutonia Experiment in four months' time, and submitted them in January 1996. Unlike their contributions to TNT: Evilution, which were edited after submission (four were also rejected due to being too large to run on the computers of the day), these were the final revisions of the levels, and Dario Casali later commented about the fact that no changes were requested: "Thank God because I was ready to throw my computer out the window at the time." Dario Casali acknowledged the difficulty level of The Plutonia Experiment in an interview on Doomworld, stating "Plutonia was always meant for people who had finished Doom 2 on hard and were looking for a new challenge. I always played through the level I had made on hard, and if I could beat it too easily, I made it harder, so it was a challenge for me." 
Reception Reviewing the PC version in GameSpot, Jim Varner argued that Final Doom is a waste of money, since it is essentially just a new set of level maps for Doom, and there were already thousands of such maps available to download for free on the internet. While Major Mike of GamePro criticized Final Doom for having no new enemies or weapons, and for the PlayStation version including only 30 levels as compared to the PC version's 64, he was pleased with the "huge, perplexing, and sometimes sadistic levels" and the new scenery, and considered Doom still a compelling enough game that simply more of the same was enough to satisfy. PlayStation Magazine gave it a score of 9/10, calling it "essential". A reviewer for Next Generation was less impressed, remarking that a side-by-side comparison with the PlayStation version of the original Doom reveals that Final Doom has a much lower frame rate, less precise control, and more visible seams in the textures. Three of the four reviewers of Electronic Gaming Monthly said they were tired of seeing ports of Doom, and that Final Doom was simply another such port with new level maps. They also said that the game engine had become severely outdated in the years since Doom was first released. Crispin Boyer was the one dissenting voice, expressing enthusiasm for the new level designs. 
Notes References Interview with Dario Casali (1998) by Doomworld External links Final Doom page at PlanetDOOM Final Doom at TeamTNT's official website Dario Casali's The Plutonia Experiment page id Software's official Final Doom page Comparison of PC and Playstation Final Doom at ClassicDOOM 1996 video games Cooperative video games Doom (franchise) Video games about demons Doom engine games DOS games Games commercially released with DOSBox GT Interactive Software games Id Software games Classic Mac OS games PlayStation (console) games PlayStation 3 games PlayStation Network games Video games developed in the United States Video games scored by Aubrey Hodges Video games with 2.5D graphics Video games with digitized sprites Video games set in hell Williams video games Windows games Video game spin-offs Sprite-based first-person shooters Horror video games de:Doom#Final Doom
61851555
https://en.wikipedia.org/wiki/Robert%20Alan%20Saunders
Robert Alan Saunders
Robert Alan Saunders is an American computer scientist and influential early computer programmer. At MIT, Saunders joined the Tech Model Railroad Club (TMRC) alongside Alan Kotok and Peter Samson. There they met Marvin Minsky and other influential pioneers in what was then known as artificial intelligence. MIT: 1956–1962 From 1957 to 1961, Robert Saunders worked with other undergraduates at the Massachusetts Institute of Technology, where they were allowed by Jack Dennis to develop programs for the TX-0 experimental computer, then on permanent loan from Lincoln Laboratory. During these years, Saunders and his fellow TMRC members are described as the first true hackers in the book Hackers: Heroes of the Computer Revolution by Steven Levy. At MIT, Saunders earned bachelor's and master's degrees in electrical engineering. The TMRC group was heavily influenced by professors such as Jack Dennis and John McCarthy – and by their continued involvement in the student group. Dennis, himself a former TMRC member, introduced students to the TX-0 while he was a graduate student. In the spring of 1959, McCarthy taught the first course in programming that MIT offered to freshmen. Outside classes, Saunders, along with fellow TMRC members Alan Kotok, David Gross, Peter Samson, and Robert A. Wagner, reserved time on the TX-0. Dennis enjoyed watching the young hackers work and allowed them to use the TX-0 for various personal projects. In 1961, DEC donated a PDP-1 to MIT. The PDP-1 had a Type 30 precision CRT display, which let programmers watch their code run as they worked. Students from TMRC worked as support staff and used this new look at programming as a way to change the way computers were used, working with the Lisp programming language and a number of other innovations of the time. Spacewar! One of these innovations was the first real digital game, called Spacewar!. 
Written by Saunders, Martin Graetz, Stephen Russell and Wayne Wiitanen in 1961, Spacewar! was inspired by Marvin Minsky's Three Position Display. After urging Russell to start the game for some time, the group had the first version running by early 1962, with some assistance from then DEC employee Alan Kotok. Primarily written by Russell, Spacewar! was one of the earliest interactive computer games. During this time, Saunders built the first game controllers, thus allowing two people to play against each other without using the control switches on the front of the computer. After his years at MIT, Saunders spent most of his professional career at Hewlett-Packard, working on computer operating systems. In 1993, he went to work for five years in Riyadh, Saudi Arabia, helping to manage the computer system that dealt with maintenance of the Royal Saudi Air Force's airplanes. Saunders devised a proof of Karl Popper's conjecture on refutability, showing that the potential information content of any proposition is equivalent to its refutability. In other words, if there does not exist a means by which a proposition could be shown to be wrong, it can convey no information. References Video game programmers Living people American computer scientists Year of birth missing (living people)
1793984
https://en.wikipedia.org/wiki/General%20Magic
General Magic
General Magic was an American software and electronics company co-founded by Bill Atkinson, Andy Hertzfeld, and Marc Porat. Based in Mountain View, California, the company developed precursors to "USB, software modems, small touchscreens, touchscreen controller ICs, ASICs, multimedia email, networked games, streaming TV, and early e-commerce notions." General Magic's main product was Magic Cap, the operating system used in 1994 by the Motorola Envoy and Sony's Magic Link PDA. It also introduced the programming language Telescript. After announcing it would cease operations in 2002, it was liquidated in 2004 with Paul Allen purchasing most of its patents. History Apple project and spinoff (1989) The original project started in 1989 within Apple Computer, when Marc Porat convinced Apple's CEO at the time, John Sculley, that the next generation of computing would require a partnership of cooperating computer, communications and consumer electronics companies. Known as the Paradigm project, the project ran for some time within Apple, but management remained generally uninterested and the team struggled for resources. Eventually they approached Sculley with the idea of spinning off the group as a separate company, and in May 1990 Porat, Hertzfeld, and Atkinson founded General Magic in Mountain View, California. Apple took a minority stake in the company, with John Sculley joining the General Magic board. Porat, Hertzfeld and Atkinson were soon joined at General Magic by Susan Kare; Joanna Hoffman, who became vice president of marketing; hardware pioneer Dr. Wendell Sander and his son Brian Sanders; Walt Broedner; Megan Smith, who joined from Apple Japan; and most of Apple's System 7 team, including Phil Goldman and soon after Bruce Leak and Darin Adler. In 1990, Porat wrote the following note to Sculley: "A tiny computer, a phone, a very personal object . . . It must be beautiful. 
It must offer the kind of personal satisfaction that a fine piece of jewelry brings. It will have a perceived value even when it's not being used... Once you use it you won't be able to live without it." Early years (1992–1994) The company initially operated in near-complete secrecy. By 1992, some of the world's largest electronics corporations, including Sony, Motorola, Matsushita, Philips and AT&T Corporation, were partners and investors in General Magic, creating significant buzz in the industry. Sculley, Motorola CEO George Fisher, Sony president Norio Ohga, and AT&T division chairman Victor Pelson became board members. As the operations expanded, the company reportedly let rabbits roam the offices to inspire creativity. In 1992–1993, while Sculley was still a director of General Magic, Apple entered the consumer electronics market with a poorly received "personal digital assistant" that became the Apple Newton. By early 1993, the Newton (originally designed as a tablet with no communications capabilities) started to attract market interest away from General Magic. In February 1993, the company had 100 employees. On February 8, The New York Times referred to General Magic as "Silicon Valley's most closely watched start-up company." It reported that the company was introducing software technology called Telescript with the intent of creating a "standard for transmitting messages among any machines that compute, regardless of who makes them." The company also announced the software Magic Cap, an operating system catering to communications. Telescript would eventually come out in 1996 at the start of the internet boom. In an article titled "Here's Where Woodstock Meets Silicon Valley," on February 27, 1993, The New York Times reported that General Magic had backing from "American Telephone and Telegraph, Sony, Motorola, Philips Electronics and Matsushita Electric Industrial." Marc Porat remained the chief executive of the company. 
By 1994, the "General Magic Alliance" of cross-industry partners had expanded to 16 global telecommunications and consumer electronics companies, including Cable & Wireless, France Telecom, NTT, Northern Telecom, Toshiba, Oki, Sanyo, Mitsubishi, and Fujitsu. Each of the so-called "Founding Partners" invested up to $6 million in the company and named a senior executive to the company's "Founding Partner's Council." The first "General Magic Alliance" hardware products, using the Magic Cap software, were two personal digital assistants (PDAs) that came out in the summer of 1994, with Motorola producing the Motorola Envoy Personal Wireless Communicator and Sony producing the $800 wireline Sony Magic Link. Alliance partner AT&T launched its PersonaLink network to host the devices, a closed network that did not connect to the emerging internet. AT&T eventually shut down the PersonaLink network in 1996. IPO (1995) The company launched an IPO on NASDAQ in February 1995. General Magic raised $96 million in the IPO, and a total of $200 million from 16 different investors. The company's stock value doubled after its IPO. Portico service (1996) Steve Markman was hired to run General Magic in 1996, and he hired Kevin Surace to head a new telephony group. This new team of 60–70 people set out to create a voice recognition-based personal assistant service that would be as close to human interaction as possible. The first service delivered was Portico (code named Serengeti during development), and the interface was called Mary, named after Mary McDonald-Lewis, who voiced Portico, Serengeti and GM's later version, OnStar. Portico synchronized to devices such as the Palm Connected Organizer and Microsoft Outlook and handled voicemail, call forwarding, email, calendar etc., all through the user's own personal 800 number. 
General Magic was the first company to employ a large number of linguists to make its software seem real and its responses varied, and General Magic received several key patents relating to voice recognition and artificial personality. The Portico system was also scaled back and sold through many partners, including Qwest and Excite. At its peak, the system supported approximately 2.5 million users. In 1997 Steve Markman hired Linda Hayes as Chief Marketing Officer, who in turn hired a new marketing team, which launched Portico. The Portico launch is credited with lifting General Magic's stock price from $1 in 1997 to $18 in 2000. According to Fast Company, the company's original [device] idea was "practically dead," with people not buying General Magic devices in quantity. Spinoffs and myTalk (1998–2000) While Portico ran its voice portal business, the original handheld group was spun off in 1998 as Icras. The new company sold Magic Cap OS hardware named DataRover and focused on vertical market systems. General Magic announced a major licensing deal and investments from Microsoft in March 1998. The deal gave Microsoft access to certain intellectual property, and helped General Magic move toward integrating Portico with Microsoft products. The OnStar Virtual Advisor was developed at this time as well, for General Motors. In 1999 the marketing team developed a separate consumer product called myTalk. Created by Kevin Wray, myTalk was a success and went on to win the Computerworld Smithsonian Award for the first commercially successful voice recognition consumer product. myTalk is also listed in the permanent collection of the Smithsonian. Because of the product's momentum, the intent was to spin off Hayes' group with Wray leading product management. However, because the parties failed to agree on technology licensing terms, the spin-off stalled. 
Shutdown (1999–2004) By 1999, the company's stock had plunged significantly, with Forbes attributing the drop to "losses, layoffs and missed projections." Most of the management that was involved in bringing Portico to market left by early 2000 to pursue other interests with Internet startups. A new team was brought in, led by Kathleen Layton, which took the company in the direction of turning its voice services into enterprise software offerings. The company announced it would cease operations on September 18, 2002, and was liquidated in 2004. The OnStar assets were turned over to EDS to run for General Motors. The patents were auctioned by the court; most of the patents the company had developed were purchased by Paul Allen. Products and technology According to Electronics Weekly, the company "developed a precursor of USB, software modems, small touchscreens, touchscreen controller ICs, ASICs, multimedia email, networked games, streaming TV and early e-commerce notions." Magic Cap General Magic's main product was Magic Cap, an operating system (OS) which allowed users to "set their own rules for message alerts and acquiring information" on PDAs, according to CNET. The basic idea behind the system was to distribute the typical computing load across many machines in the network using Magic Cap, a fairly minimal operating system that was essentially a UI. The UI was based on a "rooms" metaphor; for example, e-mail and an address book could be found in the office, and games might be found in a living room. User applications were generally written in Magic Script, a utility-language variant of the C programming language with object-oriented extensions. It was used on the Envoy PDA by Motorola and the Magic Link PDA by Sony. Sony and Motorola introduced Magic Cap devices in late 1994, based on the Motorola 68300 Dragon microprocessor. The launch suffered from a lack of real supporting infrastructure. 
Unlike the Newton and other PDAs being introduced at the same time, the Magic Cap system also did not rely on handwriting recognition, putting it at a marketing disadvantage. Partners ended production of Magic Cap devices by 1997. General Magic planned to release Magic Cap software development tools with Metrowerks by the summer of 1995. Telescript Its other software, Telescript, was "software-agent technology that would search the Web and automatically retrieve information such as stock quotes and airline ticket prices." The script was introduced with the intent of creating a "standard for transmitting messages among any machines that compute, regardless of who makes them." The Telescript programming language made communications a first-class primitive of the language. Telescript is compiled into a cross-platform bytecode in much the same fashion as the Java programming language, but is able to migrate running processes between virtual machines. The developers saw a time when Telescript application engines would be ubiquitous, and interconnected Telescript engines would form a "Telescript Cloud" across which mobile applications could execute. Legacy The company achieved many technical breakthroughs, including software modems (eliminating the need for modem chips), small touchscreens and touchscreen controller ASICs, highly integrated systems-on-a-chip designs for its partners' devices, rich multimedia email, networked games, streaming television, and early versions of e-commerce. According to former General Magic employee Marco DeMiroz, it was the "Fairchild [Semiconductor] of the 90s." A documentary film about the company opened at the Tribeca Film Festival April 20, 2018. It was later shown at the SFFilm Festival in San Francisco on November 3, 2018. The company founders had hired filmmakers including Sarah Kerruish to document their development process in the 1990s, and Kerruish included some of that original footage of General Magic's offices in the film. 
The film includes interviews with Marc Porat, Andy Hertzfeld, Joanna Hoffman, Megan Smith, and Tony Fadell. See also Magic Link Motorola Envoy References Defunct computer companies based in California Technology companies based in the San Francisco Bay Area Companies based in Mountain View, California Computer companies established in 1990 Companies disestablished in 2002 1990 establishments in California 2002 disestablishments in California Defunct companies based in the San Francisco Bay Area
4478382
https://en.wikipedia.org/wiki/Accelerated%20Math
Accelerated Math
Accelerated Math is a daily, progress-monitoring software tool that monitors and manages mathematics skills practice, from preschool math through calculus. It is primarily used by primary and secondary schools, and it is published by Renaissance Learning, Inc. Currently, there are two versions: a desktop version and a web-based version in Renaissance Place, the company's web software for Accelerated Math and a number of other software products (e.g. Accelerated Reader). In Australia and the United Kingdom, the software is referred to as "Accelerated Maths". Research Below is a sample of some of the most current research on Accelerated Math. Sadusky and Brem (2002) studied the impact of first-year implementation of Accelerated Math in a K-6 urban elementary school during the 2001–2002 school year. The researchers found that teachers were able to immediately use data to make decisions about instruction in the classroom. The students in classrooms using Accelerated Math had twice the percentile gains when tested as compared to the control classrooms that did not use Accelerated Math. Ysseldyke and Tardrew (2003) studied 2,202 students in 125 classrooms across 24 states. The results showed that when students using Accelerated Math were compared to a control group, those using the software made significant gains on the STAR Math test. Students in grades 3 through 10 who were using Accelerated Math had more than twice the percentile gains on these tests than students in the control group. Ysseldyke, Betts, Thill, and Hannigan (2004) conducted a quasi-experimental study with third- through sixth-grade Title I students. They found that Title I students who used Accelerated Math outperformed students who did not. Springer, Pugalee, and Algozzine (2005) also discovered a similar pattern. They studied students who had failed to pass the AIMS test required in order to graduate. 
Over half of the students passed the test after taking a course in which Accelerated Math was used to improve their achievement. The What Works Clearinghouse (2008) within the Institute of Educational Sciences concluded that studies they evaluated did not show statistically significant gains when put through the US government's analysis. For more research, see the link below. References External links Accelerated Math: Entrance Rates, Success Rates, and College Readiness research Accelerated Math webpage Accelerated Math research Renaissance Learning research 2005 and 2006 Readers’ Choice Awards from eSchool News Alternate usage For other uses of the term "accelerated math," please see: Burris (2003), an article on an accelerated mathematics curriculum Shiran (2000), an article on accelerated math operators in JavaScript programming Educational math software Renaissance Learning software
1753587
https://en.wikipedia.org/wiki/The%20Ethiopians
The Ethiopians
The Ethiopians were one of Jamaica's best-loved harmony groups during the late ska, rocksteady and early reggae periods. Responsible for a significant number of hits between the mid-1960s and early 1970s, the group was also one of the first Jamaican acts to perform widely in Britain. Origins The Ethiopians were founded by Leonard Dillon (9 December 1942 – 28 September 2011) with Stephen "Tough Cock" Taylor and Aston "Charlie" Morrison at the tail end of the ska period. Dillon was a stonemason from the small community of Boundbrook, located on the outskirts of the northeast coastal town of Port Antonio, where he was raised by his grandparents in a strict Seventh Day Adventist household. With his grandfather the choirmaster in the local church, Dillon had a good grounding in music from an early age. While still attending high school, he performed with a local act known as the Playboys (later re-named Ray and the Gladiators), the mellifluousness of his voice bringing him the nickname "Sparrow". Like many of his peers, Dillon moved to Kingston towards the end of his teen years in search of work, staying first in a tiny shack in the west Kingston slum of Back-O-Wall. He travelled to Fellsmere, Florida in 1963 on a seasonal farm work contract, and after returning to Kingston in 1964, he settled in Trench Town, lodging at the home of the aunt of popular sound system deejay King Sporty, whom he knew from his days in Port Antonio. In Trench Town, Dillon met Peter Tosh, who introduced him to Bob Marley and Bunny Livingston, his fellow vocalists in the Wailers. An audition was swiftly arranged at Studio One, where the Wailers were recording some of the biggest hits of the day, which led to Dillon voicing his first material. 
Three songs were backed by the Wailers, including a sound system favourite called "Ice Water", based on lyrics of double entendre, while "Suffering On The Land" and "Beggars Have No Choice" were more concerned with the harshness of life in the ghetto; a fourth song, "Woman, Wine And Money", featured Delroy Wilson on harmony. All of the songs were issued on 45 rpm singles, credited to Jack Sparrow. Shortly after the release of these singles, through the efforts of the Ethiopian Reorganization Centre in Waterhouse (established by elders Nasser King and Daddy King), Dillon entered the Rastafari faith, which he remained committed to thereafter. Founding Sales of the Jack Sparrow material were not particularly high, and the Wailers were focussing on their own careers. Noting that harmony groups were all the rage in Jamaica, Dillon subsequently made an exit from the Studio One stable to form a harmony group of his own with Taylor, Morrison, and a youth known as Foresight, who he encountered on the street in Waterhouse; the Ethiopian Reorganization Centre became their main rehearsal space. Foresight dropped out early, so by the time Dillon brought the group back to Studio One, they were a trio, debating whether to call themselves the Heartaches or the Ethiopians, until Studio One founder Clement "Sir Coxsone" Dodd decisively stated that the latter was more distinctive, and more fitting for a group that was spiritually minded. The first songs the group recorded at Studio One included "Live Good", "Why You Gonna Leave Me Now" and the rocksteady classic "Owe Me No Pay Me", produced by Lloyd Daley, and aimed at a man known as Stampede that owed Dillon money. The uncertain nature of the music business caused Morrison to then quit the group, since he had a young family to support. 
Undaunted by his departure, Dillon and Taylor went back to Studio One to record another half-dozen tracks, including the boastful "I'm Gonna Take Over Now", and a late-ska number called "I Am Free," which castigated an unfaithful lover. Success Despite the popularity of the material, their earnings were still not sufficient for the group to concentrate on music full-time. Continuing with the masonry led Dillon to the Ethiopians' next phase, once he found a financial backer for the group in the form of real estate speculator Leebert Robinson, who financed the self-produced single "Train to Skaville", issued in Jamaica on WIRL (West Indies Records Limited). Subsequent singles, "The Whip" and "Cool It Amigo", were recorded at the WIRL studio with the top rocksteady band Lynn Taitt and the Jets (with engineer Lynford Anderson), and licensed to Sonia Pottinger for release in Jamaica, as well as to Graeme Goodall's Doctor Bird label in Britain; the three songs were all significant hits in 1967. "Train to Skaville" made an impact overseas and brought the Ethiopians to the UK for their first tour in 1968, the song having briefly appeared in the UK Singles Chart. The tour lasted three months in 1968, with another two months in 1969, and was arranged by Commercial Entertainment. Back in Jamaica, Melvin Reid became a temporary member of the group for some recordings made at Federal recording studio, but the group soon reverted to a duo. "Cut Down (On Your Speed)" was recorded for Lee "Scratch" Perry, but far more successful work was issued by H. Robinson's Carib Disco company, including "Reggae Hit The Town", celebrating the new beat, and the successful "Engine 54" (recorded at a time when Trinidadian immigrant Garnet Hargreaves was acting as manager of the group), which celebrated a defunct railway engine that used to transport city folk on countryside excursions in Jamaica. 
The popularity of this track and earlier hit singles led to a debut album, Engine 54, issued by Doctor Bird in the UK. Then, the most solid and lasting working relationship was forged with producer Carl Johnson, yielding a series of hits in the late 1960s and early 1970s. Things started off with "Everything Crash," after Sir JJ told Dillon to write a song with that title when Dillon first appeared at JJ's shop on Orange Street; the song commented on the widespread strikes that gripped Jamaica in 1968 and was recorded with the backing band, the Caribbeats. Other hits to follow included "Feel The Spirit," "Hong Kong Flu," and "Woman Capture Man". The Sir JJ phase yielded the popular albums Reggae Power and Woman Capture Man, both issued by Trojan Records in Britain. During the early 1970s, the Ethiopians recorded widely for various producers, including Lloyd "The Matador" Daley ("Satan Gal"), Duke Reid ("Pirate"), Derrick Harriott ("Good Ambition"), Rupie Edwards ("Hail Rasta Man"), Alvin Ranglin ("Love Bug"), Prince Buster ("You Are For Me"), Joe Gibbs ("Ring A Burn Finger"), Bob Andy ("The Word Is Love"), and Lee Perry ("Life Is A Funny Thing"), among others. Dillon also helped build Perry's Black Ark Studios in 1973-4. In 1975, Stephen Taylor was killed in a traffic accident, leading to a period of inactivity as Dillon struggled to adapt to life without his singing partner. New version and solo work Dillon subsequently formed a new version of the Ethiopians with other members of the Rastafari faith, including Melvin Reid, to record the album Slave Call with Niney the Observer in 1977. There were also a handful of singles cut for Alvin Ranglin, Winston Riley, Joe Gibbs and Rupie Edwards, circa 1976-78. Leonard Dillon then largely became a solo artist. Under the name The Ethiopian, Dillon cut Open The Gate Of Zion for Ranglin in 1978, and Everything Crash for Studio One in 1980, the latter featuring "Locust" voiced on a mutation of the "Train to Skaville" rhythm. 
The Dread Prophecy album, shared with The Gladiators, was issued by the American label Nighthawk in 1986, One Step Forward surfaced in France on Blue Moon in 1992, Owner Fer De Yard was a Studio One set issued by Heartbeat in 1994, Tuffer Than Stone was recorded with Jahco Thelwell in 1999 and issued on the Melodie label in France, while Mystic Man was issued by Studio One in 2002; a second set for Nighthawk remains unreleased. The Ethiopians reached the end when Dillon died on 28 September 2011. Discography Releases by The Ethiopians include:
Engine '54: Let's Ska and Rock Steady - (1968) - Jamaican Gold
Reggae Power - (1969)
Woman a Capture Man - (1970) - Trojan
Slave Call - (1977) - Heartbeat
Open The Gate Of Zion - (1978) - Channel One
Dread Prophecy - (1989) - Nighthawk
Let's Ska and Rock Steady - (1990) - VP Records
The World Goes Ska - (1992) - Trojan
Clap Your Hands - (1993) - Lagoon
Sir J.J. & Friends - (1993) - Lagoon
Owner Fer De Yard - (1994) - Heartbeat
Train to Skaville - (1999) - Charly
Tuffer Than Stone - (1999) - Warriors
Skaville Princess - (2000) - Dressed to Kill
Train to Skaville: Anthology 1966-1975 - (2002) - Trojan - compilation
Open The Gate Of Zion - (2020) - Jamaican Art Records (de-luxe re-release)
References External links Music Like Dirt Jamaican reggae musical groups Jamaican ska groups First-wave ska groups Heartbeat Records artists
19086385
https://en.wikipedia.org/wiki/Program%20animation
Program animation
Program animation or stepping refers to the now very common debugging method of executing code one "line" at a time. The programmer may examine the state of the program, machine, and related data before and after execution of a particular line of code. This allows the effects of that statement or instruction to be evaluated in isolation, and thereby gives insight into the behavior (or misbehavior) of the executing program. Nearly all modern IDEs and debuggers support this mode of execution. Some testing tools allow programs to be executed step-by-step optionally at either source code level or machine code level, depending upon the availability of data collected at compile time. History Instruction stepping or single cycle also referred to the related, more microscopic, but now obsolete method of debugging code by stopping the processor clock and manually advancing it one cycle at a time. For this to be possible, three things are required: a control that allows the clock to be stopped (e.g. a "Stop" button); a second control that allows the stopped clock to be manually advanced by one cycle (e.g. an "instruction step" switch and a "Start" button); and some means of recording the state of the processor after each cycle (e.g. register and memory displays). On the IBM System 360 processor range announced in 1964, these facilities were provided by front panel switches, buttons and banks of neon lights. Other systems such as the PDP-11 provided similar facilities, again only on some models; the precise configuration was model-dependent. It would not be easy to provide such facilities on LSI processors such as the Intel x86 and Pentium lines, owing to cooling considerations. As multiprocessing became more commonplace, such techniques would have limited practicality, since many independent processes would be stopped simultaneously. 
This led to the development of proprietary software from several independent vendors that provided similar features but deliberately restricted breakpoints and instruction stepping to particular application programs in particular address spaces and threads. The program state (as applicable to the chosen application/thread) was saved for examination at each step and restored before resumption, giving the impression of a single-user environment. This is normally sufficient for diagnosing problems at the application layer. Instead of using a physical stop button to suspend execution before stepping through the application program, a breakpoint or "pause" request must usually be set beforehand, usually at a particular statement/instruction in the program (chosen in advance or, by default, at the first instruction). To provide for full-screen "animation" of a program, a suitable I/O device such as a video monitor is normally required that can display a reasonable section of the code (e.g. in disassembled machine code or source code format) and provide a pointer (e.g. <==) to the current instruction or line of source code. For this reason, the widespread use of these full-screen animators in the mainframe world had to await the arrival of transaction processing systems such as CICS in the early 1970s, and they were initially limited to debugging application programs operating within that environment. Later versions of the same products provided cross-region monitoring/debugging of batch programs and other operating systems and platforms. With the much later introduction of personal computers from around 1980 onwards, integrated debuggers were able to be incorporated more widely into this single-user domain, providing similar animation by splitting the user screen and adding a debugging "console" for programmer interaction. Borland Turbo Debugger was a stand-alone product introduced in 1989 that provided full-screen program animation for PCs. 
Later versions added support for combining the animation with actual source lines extracted at compilation time. Techniques for program animation There are at least three distinct software techniques for creating 'animation' during a program's execution. Instrumentation involves adding additional source code to the program at compile time to call the animator before or after each statement, halting normal execution. If the program to be animated is an interpreted type, such as bytecode or CIL, the interpreter (or IDE code) uses its own built-in code to wrap around the target code. Induced interrupt This technique involves forcing a breakpoint at certain points in a program at execution time, usually by altering the machine code instruction at that point (this might be an inserted system call or a deliberately invalid operation) and waiting for an interrupt. When the interrupt occurs, it is handled by the testing tool to report the status back to the programmer. This method allows program execution at full speed (until the interrupt occurs) but suffers from the disadvantage that most of the instructions leading up to the interrupt are not monitored by the tool. Instruction set simulation This technique treats the compiled program's machine code as its input 'data' and fully simulates the host machine instructions, monitoring the code for conditional or unconditional breakpoints or programmer-requested "single cycle" animation requests between every step. Comparison of methods The advantage of the last method is that no changes are made to the compiled program to provide the diagnostic, and there is almost unlimited scope for extensive diagnostics, since the tool can augment the host system diagnostics with additional software tracing features. It is also possible to diagnose (and prevent) many program errors automatically using this technique, including storage violations and buffer overflows. 
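For interpreted languages, the interpreter-hook variant of instrumentation described above can be sketched in Python, whose sys.settrace facility invokes a callback before each source line executes (a minimal illustration only, not any particular vendor's animator):

```python
import sys

def animate(frame, event, arg):
    """Trace hook: the interpreter calls this for each execution event."""
    if event == "line":
        # Point at the current line and show local variable state,
        # much like a full-screen animator's "<==" pointer.
        print(f"==> {frame.f_code.co_name}:{frame.f_lineno} locals={frame.f_locals}")
    return animate  # returning the hook keeps tracing active

def demo():
    total = 0
    for i in range(3):
        total += i
    return total

sys.settrace(animate)   # start "animating"
result = demo()
sys.settrace(None)      # stop tracing
```

Real debuggers build breakpoints and user interaction on top of exactly this kind of hook; animators for compiled code achieve the same effect with the induced-interrupt or instruction-simulation techniques described above.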
Loop detection is also possible using automatic instruction trace together with instruction count thresholds (e.g. pause after 10,000 instructions; display the last n instructions). The second method alters only the instruction at which execution will halt, before it is executed, and may also restore it before optional resumption by the programmer. Some animators optionally allow the use of more than one method depending on requirements, for example using method 2 to execute to a particular point at full speed and then using instruction set simulation thereafter. Additional features The animator may, or may not, combine other test/debugging features within it, such as program trace, dump, conditional breakpoint and memory alteration, program flow alteration, code coverage analysis, "hot spot" detection, loop detection or similar. Examples of program animators (in order of first release) Borland Turbo Debugger 1989 - for PCs CodeView 1985, Visual Studio Debugger 1995, Visual Studio Express 2005 - for PCs Firebug (Firefox extension) January 2006 - for PCs External links and references Stepping (Visual Studio) Overview of stepping support in Microsoft Corporation's IDE, Visual Studio Tarraingim - A Program Animation Environment Program Animation as a way to teach and learn about Program Design and Analysis Structured information on software testing (such as the History of Software Testing) published by Testing references Debugging Software testing
27562386
https://en.wikipedia.org/wiki/BlueHat
BlueHat
BlueHat (or Blue Hat or Blue-Hat) is a term used to refer to outside computer security consulting firms that are employed to bug-test a system prior to its launch, looking for exploits so they can be closed. In particular, Microsoft uses the term to refer to the computer security professionals it invites to find vulnerabilities in its products, such as Windows. Blue Hat Microsoft Hacker Conference The Blue Hat Microsoft Hacker Conference is an invitation-only conference created by Window Snyder that is intended to open communication between Microsoft engineers and hackers. The event has led to both mutual understanding and the occasional confrontation; Microsoft developers were visibly uncomfortable when Metasploit was demonstrated. See also Hacker culture Hacker ethic Black hat hacker References External links Microsoft's BlueHat Security Briefings BlueHat Security Briefings Blog BlueHat Security Homeland Security Consultants FedRAMP Microsoft culture Computer security Hacking (computer security)
15107
https://en.wikipedia.org/wiki/Internet%20Control%20Message%20Protocol
Internet Control Message Protocol
The Internet Control Message Protocol (ICMP) is a supporting protocol in the Internet protocol suite. It is used by network devices, including routers, to send error messages and operational information indicating success or failure when communicating with another IP address; for example, an error is indicated when a requested service is not available or when a host or router could not be reached. ICMP differs from transport protocols such as TCP and UDP in that it is not typically used to exchange data between systems, nor is it regularly employed by end-user network applications (with the exception of some diagnostic tools like ping and traceroute). ICMP for IPv4 is defined in RFC 792. A separate ICMPv6, defined by RFC 4443, is used with IPv6. Technical details ICMP is part of the Internet protocol suite as defined in RFC 792. ICMP messages are typically used for diagnostic or control purposes or generated in response to errors in IP operations (as specified in RFC 1122). ICMP errors are directed to the source IP address of the originating packet. For example, every device (such as an intermediate router) forwarding an IP datagram first decrements the time to live (TTL) field in the IP header by one. If the resulting TTL is 0, the packet is discarded and an ICMP time exceeded in transit message is sent to the datagram's source address. Many commonly used network utilities are based on ICMP messages. The traceroute command can be implemented by transmitting IP datagrams with specially set IP TTL header fields, and looking for ICMP time exceeded in transit and destination unreachable messages generated in response. The related ping utility is implemented using the ICMP echo request and echo reply messages. ICMP uses the basic support of IP as if it were a higher-level protocol; however, ICMP is actually an integral part of IP. 
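The TTL-expiry behaviour that traceroute exploits can be sketched as a toy forwarding simulation (the hop names here are purely illustrative; a real traceroute sends live probe packets and reads the ICMP replies):

```python
def probe(routers, ttl):
    """Simulate forwarding one probe: each router decrements TTL by one."""
    for router in routers:
        ttl -= 1
        if ttl == 0:
            return router        # this hop sends ICMP Time Exceeded in transit
    return "destination"         # TTL survived every hop: echo reply instead

routers = ["hop1", "hop2", "hop3"]
# traceroute probes with TTL = 1, 2, 3, ... so each hop reveals itself in turn
trace = [probe(routers, ttl) for ttl in range(1, len(routers) + 2)]
# trace is now ["hop1", "hop2", "hop3", "destination"]
```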
Although ICMP messages are contained within standard IP packets, ICMP messages are usually processed as a special case, distinguished from normal IP processing. In many cases, it is necessary to inspect the contents of the ICMP message and deliver the appropriate error message to the application responsible for transmitting the IP packet that prompted the ICMP message to be sent. ICMP is a network-layer protocol. There is no TCP or UDP port number associated with ICMP packets, as these numbers are associated with the transport layer above. Datagram structure The ICMP packet is encapsulated in an IPv4 packet. The packet consists of header and data sections.
Header
The ICMP header starts after the IPv4 header and is identified by IP protocol number '1'. All ICMP packets have an 8-byte header and a variable-sized data section. The first 4 bytes of the header have a fixed format, while the last 4 bytes depend on the type and code of that ICMP packet.
Type: ICMP type, see .
Code: ICMP subtype, see .
Checksum: Internet checksum (RFC 1071) for error checking, calculated from the ICMP header and data with value 0 substituted for this field.
Rest of header: four-byte field whose contents vary based on the ICMP type and code.
Data
ICMP error messages contain a data section that includes a copy of the entire IPv4 header, plus at least the first eight bytes of data from the IPv4 packet that caused the error message. The length of ICMP error messages should not exceed 576 bytes. This data is used by the host to match the message to the appropriate process. If a higher-level protocol uses port numbers, they are assumed to be in the first eight bytes of the original datagram's data. The variable size of the ICMP packet data section has been exploited. In the "Ping of death", large or fragmented ICMP packets are used for denial-of-service attacks. ICMP data can also be used to create covert channels for communication. These channels are known as ICMP tunnels.
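To make the header layout and checksum rule concrete, the sketch below packs an ICMP echo request (the type 8, code 0 message used by ping) and computes the RFC 1071 Internet checksum over it; the identifier, sequence number and payload are arbitrary example values:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 checksum: one's-complement sum of 16-bit words,
    computed with the checksum field itself set to zero."""
    if len(data) % 2:
        data += b"\x00"                      # pad to a whole 16-bit word
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:                       # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_request(identifier: int, sequence: int, payload: bytes) -> bytes:
    """Build an ICMP echo request: type 8, code 0, checksum, then the
    4-byte 'rest of header' holding identifier and sequence number."""
    header = struct.pack("!BBHHH", 8, 0, 0, identifier, sequence)
    checksum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, identifier, sequence) + payload
```

A convenient property of the Internet checksum is that recomputing it over a correctly checksummed message yields zero, which is how receivers verify incoming packets.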
Control messages
Control messages are identified by the value in the type field. The code field gives additional context information for the message. Some control messages have been deprecated since the protocol was first introduced.
Source quench
Source Quench requests that the sender decrease the rate of messages sent to a router or host. This message may be generated if a router or host does not have sufficient buffer space to process the request, or may occur if the router or host buffer is approaching its limit. Data is sent at a very high speed from a host or from several hosts at the same time to a particular router on a network. Although a router has buffering capabilities, the buffering is limited to a specified size. The router cannot queue more data than the capacity of its limited buffering space, so if the queue fills up, incoming data is discarded until the queue is no longer full. But as no acknowledgement mechanism is present in the network layer, the client does not know whether the data has reached the destination successfully. Hence some remedial measures should be taken by the network layer to avoid this kind of situation. These measures are referred to as source quench. In a source quench mechanism, the router sees that the incoming data rate is much faster than the outgoing data rate, and sends an ICMP message to the clients, informing them that they should slow down their data transfer speeds or wait for a certain amount of time before attempting to send more data. When a client receives this message, it will automatically slow down its outgoing data rate or wait for a sufficient amount of time, which enables the router to empty the queue. Thus the source quench ICMP message acts as flow control in the network layer. Since research suggested that "ICMP Source Quench [was] an ineffective (and unfair) antidote for congestion", routers' creation of source quench messages was deprecated in 1995 by RFC 1812.
Furthermore, RFC 6633 deprecated in 2012 both the forwarding of source quench messages and any kind of reaction to them (flow control actions).
Where:
Type must be set to 4
Code must be set to 0
IP header and additional data is used by the sender to match the reply with the associated request
Redirect
Redirect requests that data packets be sent on an alternative route. ICMP Redirect is a mechanism for routers to convey routing information to hosts. The message informs a host to update its routing information (to send packets on an alternative route). If a host tries to send data through a router (R1) and R1 forwards the data to another router (R2), and a direct path from the host to R2 is available (that is, the host and R2 are on the same subnetwork), then R1 will send a redirect message to inform the host that the best route for the destination is via R2. The host should then change its route information and send packets for that destination directly to R2. The router will still send the original datagram to the intended destination. However, if the datagram contains routing information, this message will not be sent even if a better route is available. RFC 1122 states that redirects should only be sent by gateways and should not be sent by Internet hosts.
Where:
Type must be set to 5.
Code specifies the reason for the redirection, and may be one of the following:
{| class="wikitable"
|-
! Code
! Description
|-
! 0
| Redirect for Network
|-
! 1
| Redirect for Host
|-
! 2
| Redirect for Type of Service and Network
|-
! 3
| Redirect for Type of Service and Host
|}
IP address is the 32-bit address of the gateway to which the redirection should be sent.
IP header and additional data is included to allow the host to match the reply with the request that caused the redirection reply.
Time exceeded
Time Exceeded is generated by a gateway to inform the source of a discarded datagram due to the time to live field reaching zero.
A time exceeded message may also be sent by a host if it fails to reassemble a fragmented datagram within its time limit. Time exceeded messages are used by the traceroute utility to identify gateways on the path between two hosts.
Where:
Type must be set to 11
Code specifies the reason for the time exceeded message, and may be one of the following:
{| class="wikitable"
|-
! Code
! Description
|-
! 0
| Time-to-live exceeded in transit.
|-
! 1
| Fragment reassembly time exceeded.
|}
IP header and first 64 bits of the original payload are used by the source host to match the time exceeded message to the discarded datagram. For higher-level protocols such as UDP and TCP the 64-bit payload will include the source and destination ports of the discarded packet.
Timestamp
Timestamp is used for time synchronization. The originating timestamp is set to the time (in milliseconds since midnight) the sender last touched the packet. The receive and transmit timestamps are not used.
Where:
Type must be set to 13
Code must be set to 0
Identifier and Sequence Number can be used by the client to match the timestamp reply with the timestamp request.
Originate timestamp is the number of milliseconds since midnight Universal Time (UT). If a UT reference is not available the most-significant bit can be set to indicate a non-standard time value.
Timestamp reply
Timestamp Reply replies to a Timestamp message. It consists of the originating timestamp sent by the sender of the Timestamp as well as a receive timestamp indicating when the Timestamp was received and a transmit timestamp indicating when the Timestamp reply was sent.
Where:
Type must be set to 14
Code must be set to 0
Identifier and Sequence number can be used by the client to match the reply with the request that caused the reply.
Originate timestamp is the time the sender last touched the message before sending it.
Receive timestamp is the time the echoer first touched it on receipt.
Transmit timestamp is the time the echoer last touched the message on sending it.
All timestamps are in units of milliseconds since midnight UT. If the time is not available in milliseconds or cannot be provided with respect to midnight UT, then any time can be inserted in a timestamp provided the high-order bit of the timestamp is also set to indicate this non-standard value. The use of Timestamp and Timestamp Reply messages to synchronize the clocks of Internet nodes has largely been replaced by the UDP-based Network Time Protocol and the Precision Time Protocol.
Address mask request
Address mask request is normally sent by a host to a router in order to obtain an appropriate subnet mask. Recipients should reply to this message with an Address mask reply message.
Where:
Type must be set to 17
Code must be set to 0
Address mask can be set to 0
ICMP Address Mask Request may be used as part of a reconnaissance attack to gather information on the target network; therefore ICMP Address Mask Reply is disabled by default on Cisco IOS.
Address mask reply
Address mask reply is used to reply to an address mask request message with an appropriate subnet mask.
Where:
Type must be set to 18
Code must be set to 0
Address mask should be set to the subnet mask
Destination unreachable
Destination unreachable is generated by the host or its inbound gateway to inform the client that the destination is unreachable for some reason. Reasons for this message may include: the physical connection to the host does not exist (distance is infinite); the indicated protocol or port is not active; the data must be fragmented but the 'don't fragment' flag is on. Unreachable TCP ports notably respond with TCP RST rather than a destination unreachable type 3 as might be expected. Destination unreachable is never reported for IP Multicast transmissions.
Where:
Type field (bits 0-7) must be set to 3
Code field (bits 8-15) is used to specify the type of error, and can be any of the following:
{| class="wikitable"
|-
! Code
! Description
|-
! 0
| Network unreachable error.
|-
! 1
| Host unreachable error.
|-
! 2
| Protocol unreachable error (the designated transport protocol is not supported).
|-
! 3
| Port unreachable error (the designated protocol is unable to inform the host of the incoming message).
|-
! 4
| The datagram is too big. Packet fragmentation is required but the 'don't fragment' (DF) flag is on.
|-
! 5
| Source route failed error.
|-
! 6
| Destination network unknown error.
|-
! 7
| Destination host unknown error.
|-
! 8
| Source host isolated error.
|-
! 9
| The destination network is administratively prohibited.
|-
! 10
| The destination host is administratively prohibited.
|-
! 11
| The network is unreachable for Type Of Service.
|-
! 12
| The host is unreachable for Type Of Service.
|-
! 13
| Communication administratively prohibited (administrative filtering prevents packet from being forwarded).
|-
! 14
| Host precedence violation (indicates the requested precedence is not permitted for the combination of host or network and port).
|-
! 15
| Precedence cutoff in effect (precedence of datagram is below the level set by the network administrators).
|}
Next-hop MTU field (bits 48-63) contains the MTU of the next-hop network if a code 4 error occurs.
IP header and additional data is included to allow the client to match the reply with the request that caused the destination unreachable reply.
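Because an ICMP error carries the offending datagram's IP header plus at least its first eight data bytes, a receiving host has enough information to recover the transport-layer ports and match the error to a waiting socket. A minimal parser sketch (field offsets follow RFC 791 and RFC 792; the sample addresses and ports used to exercise it are made up):

```python
import struct

def parse_icmp_error(icmp: bytes):
    """Extract type, code, embedded protocol and ports from an ICMP
    error message such as destination unreachable (type 3)."""
    icmp_type, code = icmp[0], icmp[1]
    embedded = icmp[8:]                     # skip the 8-byte ICMP header
    ihl = (embedded[0] & 0x0F) * 4          # embedded IPv4 header length
    protocol = embedded[9]                  # e.g. 17 = UDP, 6 = TCP
    src_port, dst_port = struct.unpack("!HH", embedded[ihl:ihl + 4])
    return icmp_type, code, protocol, src_port, dst_port
```

For a UDP probe to port 53 that was rejected, such a parser would report type 3 (destination unreachable), code 3 (port unreachable), protocol 17 (UDP) and the original source and destination ports of the discarded datagram.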
See also ICMP tunnel ICMP hole punching ICMP Router Discovery Protocol Pathping Path MTU Discovery Smurf attack References RFCs RFC 792, Internet Control Message Protocol; RFC 950, Internet Standard Subnetting Procedure; RFC 1016, Something a Host Could Do with Source Quench: The Source Quench Introduced Delay (SQuID); RFC 1122, Requirements for Internet Hosts – Communication Layers; RFC 1716, Towards Requirements for IP Routers; RFC 1812, Requirements for IP Version 4 Routers; RFC 4884, Extended ICMP to Support Multi-Part Messages External links IANA ICMP parameters IANA protocol numbers Internet protocols Internet Standards Internet layer protocols Network layer protocols
34886007
https://en.wikipedia.org/wiki/Supercomputer%20operating%20system
Supercomputer operating system
A supercomputer operating system is an operating system intended for supercomputers. Since the end of the 20th century, supercomputer operating systems have undergone major transformations, as fundamental changes have occurred in supercomputer architecture. While early operating systems were custom tailored to each supercomputer to gain speed, the trend has been moving away from in-house operating systems and toward some form of Linux, with Linux running all the supercomputers on the TOP500 list in November 2017. In 2021, the top 10 computers ran, for instance, Red Hat Enterprise Linux (RHEL) or some variant of it, or another Linux distribution such as Ubuntu. Given that modern massively parallel supercomputers typically separate computations from other services by using multiple types of nodes, they usually run different operating systems on different nodes, e.g., using a small and efficient lightweight kernel such as Compute Node Kernel (CNK) or Compute Node Linux (CNL) on compute nodes, but a larger system such as a Linux-derivative on server and input/output (I/O) nodes. While in a traditional multi-user computer system job scheduling is in effect a tasking problem for processing and peripheral resources, in a massively parallel system, the job management system needs to manage the allocation of both computational and communication resources, as well as gracefully dealing with inevitable hardware failures when tens of thousands of processors are present. Although most modern supercomputers use the Linux operating system, each manufacturer has made its own specific changes to the Linux-derivative they use, and no industry standard exists, partly because the differences in hardware architectures require changes to optimize the operating system to each hardware design. Context and overview In the early days of supercomputing, the basic architectural concepts were evolving rapidly, and system software had to follow hardware innovations that usually took rapid turns.
In the early systems, operating systems were custom tailored to each supercomputer to gain speed, yet in the rush to develop them, serious software quality challenges surfaced and in many cases the cost and complexity of system software development became as much an issue as that of hardware. In the 1980s the cost for software development at Cray came to equal what they spent on hardware and that trend was partly responsible for a move away from the in-house operating systems to the adaptation of generic software. The first wave in operating system changes came in the mid-1980s, as vendor specific operating systems were abandoned in favor of Unix. Despite early skepticism, this transition proved successful. By the early 1990s, major changes were occurring in supercomputing system software.Encyclopedia of Parallel Computing by David Padua 2011 pages 426–429. By this time, the growing use of Unix had begun to change the way system software was viewed. The use of a high level language (C) to implement the operating system, and the reliance on standardized interfaces was in contrast to the assembly language oriented approaches of the past. As hardware vendors adapted Unix to their systems, new and useful features were added to Unix, e.g., fast file systems and tunable process schedulers. However, all the companies that adapted Unix made unique changes to it, rather than collaborating on an industry standard to create "Unix for supercomputers". This was partly because differences in their architectures required these changes to optimize Unix to each architecture. Thus as general purpose operating systems became stable, supercomputers began to borrow and adapt the critical system code from them and relied on the rich set of secondary functions that came with them, not having to reinvent the wheel. However, at the same time the size of the code for general purpose operating systems was growing rapidly.
By the time Unix-based code had reached 500,000 lines long, its maintenance and use were a challenge. This resulted in the move to use microkernels which used a minimal set of operating system functions. Systems such as Mach at Carnegie Mellon University and ChorusOS at INRIA were examples of early microkernels. The separation of the operating system into separate components became necessary as supercomputers developed different types of nodes, e.g., compute nodes versus I/O nodes. Thus modern supercomputers usually run different operating systems on different nodes, e.g., using a small and efficient lightweight kernel such as CNK or CNL on compute nodes, but a larger system such as a Linux-derivative on server and I/O nodes. Early systems The CDC 6600, generally considered the first supercomputer in the world, ran the Chippewa Operating System, which was then deployed on various other CDC 6000 series computers. The Chippewa was a rather simple job-control-oriented system derived from the earlier CDC 3000, but it influenced the later KRONOS and SCOPE systems.Design of a computer: the Control Data 6600 by James E. Thornton, Scott, Foresman Press 1970 page 163. The first Cray-1 was delivered to the Los Alamos Lab with no operating system, or any other software. Los Alamos developed the application software for it, and the operating system. The main timesharing system for the Cray-1, the Cray Time Sharing System (CTSS), was then developed at the Livermore Labs as a direct descendant of the Livermore Time Sharing System (LTSS) for the CDC 6600 operating system from twenty years earlier. In developing supercomputers, rising software costs soon became dominant, as evidenced by the 1980s cost for software development at Cray growing to equal their cost for hardware. That trend was partly responsible for a move away from the in-house Cray Operating System to the UNICOS system based on Unix. In 1985, the Cray-2 was the first system to ship with the UNICOS operating system.
Around the same time, the EOS operating system was developed by ETA Systems for use in their ETA10 supercomputers. Written in Cybil, a Pascal-like language from Control Data Corporation, EOS highlighted the problems of developing stable operating systems for supercomputers, and eventually a Unix-like system was offered on the same machine.Past, present, parallel: a survey of available parallel computer systems by Arthur Trew 1991 page 326. The lessons learned from developing ETA system software included the high level of risk associated with developing a new supercomputer operating system, and the advantages of using Unix with its large extant base of system software libraries. By the mid-1990s, despite the extant investment in older operating systems, the trend was toward the use of Unix-based systems, which also facilitated the use of interactive graphical user interfaces (GUIs) for scientific computing across multiple platforms. The move toward a commodity OS had opponents, who cited the fast pace and focus of Linux development as a major obstacle against adoption. As one author wrote "Linux will likely catch up, but we have large-scale systems now". Nevertheless, that trend continued to gain momentum and by 2005, virtually all supercomputers used some Unix-like OS. These variants of Unix included IBM AIX, the open source Linux system, and other adaptations such as UNICOS from Cray. By the end of the 20th century, Linux was estimated to command the highest share of the supercomputing pie.
For the sake of efficient operation, the design of CNK was kept simple and minimal, with physical memory being statically mapped and the CNK neither needing nor providing scheduling or context switching. CNK does not even implement file I/O on the compute node, but delegates that to dedicated I/O nodes. However, given that on the Blue Gene multiple compute nodes share a single I/O node, the I/O node operating system does require multi-tasking, hence the selection of the Linux-based operating system. While in traditional multi-user computer systems and early supercomputers, job scheduling was in effect a task scheduling problem for processing and peripheral resources, in a massively parallel system, the job management system needs to manage the allocation of both computational and communication resources. It is essential to tune task scheduling and the operating system for different configurations of a supercomputer. A typical parallel job scheduler has a master scheduler which instructs some number of slave schedulers to launch, monitor, and control parallel jobs, and periodically receives reports from them about the status of job progress. Some, but not all, supercomputer schedulers attempt to maintain locality of job execution. The PBS Pro scheduler used on the Cray XT3 and Cray XT4 systems does not attempt to optimize locality on its three-dimensional torus interconnect, but simply uses the first available processor. On the other hand, IBM's scheduler on the Blue Gene supercomputers aims to exploit locality and minimize network contention by assigning tasks from the same application to one or more midplanes of an 8x8x8 node group. The Slurm Workload Manager scheduler uses a best fit algorithm, and performs Hilbert curve scheduling to optimize locality of task assignments. Several modern supercomputers such as the Tianhe-2 use Slurm, which arbitrates contention for resources across the system.
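A best-fit placement policy like the one mentioned above can be sketched in miniature: among the free contiguous blocks of nodes, assign the job to the smallest block that still fits it. This is an illustrative model only, not Slurm's actual implementation (real allocators also weigh network topology and job priorities):

```python
def best_fit_allocate(free_blocks, job_size):
    """Best-fit node allocation: among the free contiguous blocks,
    given as (start, length) pairs, pick the smallest one that can
    hold the job. Returns the assigned (start, size) range and the
    updated free list, or (None, free_blocks) if the job must wait."""
    fitting = [blk for blk in free_blocks if blk[1] >= job_size]
    if not fitting:
        return None, free_blocks
    start, length = min(fitting, key=lambda blk: blk[1])
    remaining = [blk for blk in free_blocks if blk != (start, length)]
    if length > job_size:                   # return the leftover nodes to the pool
        remaining.append((start + job_size, length - job_size))
    return (start, job_size), sorted(remaining)
```

Asking for 5 nodes when blocks of 4, 8 and 5 nodes are free picks the 5-node block exactly, leaving the larger block intact for a later, bigger job; a first-fit policy would instead have split the 8-node block.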
Slurm is open source, Linux-based, very scalable, and can manage thousands of nodes in a computer cluster with a sustained throughput of over 100,000 jobs per hour. See also Distributed operating system Supercomputer architecture Usage share of supercomputer operating systems References Operating systems Supercomputer operating systems
7976984
https://en.wikipedia.org/wiki/Daniel%20Lyons
Daniel Lyons
Daniel Lyons (born 1960) is an American writer. He was a senior editor at Forbes magazine and a writer at Newsweek before becoming editor of ReadWrite. In March 2013 he left ReadWrite to accept a position at HubSpot. Lyons is the author of a book of short stories, The Last Good Man (1993); a novel, Dog Days (1998); and a fictional biography, Options: The Secret Life of Steve Jobs, a Parody (2007). Under the pseudonym "Fake Steve Jobs," he also wrote The Secret Diary of Steve Jobs, a popular blog and parody of Apple CEO Steve Jobs. He was a writer and coproducer on HBO's Silicon Valley and wrote the script for the May 2015 episode "White Hat/Black Hat," while on a 14-week break from HubSpot in 2014. Lyons authored the book Disrupted: My Misadventure in the Start-Up Bubble (2016) about his time at the Boston, MA startup HubSpot. The book was a New York Times, Wall Street Journal and San Francisco Chronicle bestseller. Readers responded to the book with numerous letters, which inspired his next book, Lab Rats (2018). He has also won literary awards including the 1992 AWP Award for Short Fiction (for his story "The First Snow") and the Playboy College Fiction Award (for "The Greyhound"). Early life and education Lyons was born in Massachusetts. He attended Brooks School in North Andover, MA, a college preparatory school. He received his MFA from the University of Michigan in 1992. Career and blogging Work as technology analyst Lyons was a senior editor at Forbes magazine, covering enterprise computing and consumer electronics. He was also the author of the Forbes cover article, "Attack of the Blogs", where he wrote that blogs "are the prized platform of an online lynch mob spouting liberty but spewing lies, libel and invective," claiming that Groklaw was primarily created "to bash software maker SCO Group in its Linux patent lawsuit against IBM, producing laughably biased, pro-IBM coverage."
Between 2003 and 2007 Lyons covered the SCO cases against IBM and against Linux. He published articles such as "What SCO Wants, SCO Gets," where he stated that "like many religious folk, the Linux-loving crunchies in the open-source movement are a) convinced of their own righteousness, and b) sure the whole world, including judges, will agree. They should wake up." In 2007 Lyons admitted to being "Snowed By SCO": "For four years, I've been covering a lawsuit for Forbes.com, and my early predictions on this case have turned out to be so profoundly wrong that I am writing this mea culpa ... In March 2003, SCO sued IBM claiming that IBM took code from Unix—for which SCO claimed to own copyrights—and put that code into Linux, which is distributed free. Last month a judge ruled that SCO does not, in fact, own the Unix copyrights. That blows SCO's case against IBM out of the water. SCO, of Lindon, Utah, is seeking bankruptcy protection." Fake Steve Jobs Lyons began blogging as "Fake Steve Jobs" in 2006. He was able to maintain anonymity for just under one year, despite speculation. Before the identity of Fake Steve Jobs was revealed by The New York Times technology correspondent Brad Stone on August 5, 2007, The Secret Diary of Steve Jobs was referenced by numerous online and print media such as Engadget, BusinessWeek, Forbes, Der Spiegel, El Mundo and CNET. Fake Steve Jobs ranked 37th in a Business 2.0 article entitled "50 Who Matter Now." Previous guesses as to the blog's author included Leander Kahney of Wired (particularly at some of Fake Steve Jobs's Briticisms), Eric Savitz of Barron's Magazine, John Paczkowski of All Things Digital, and Andy Ihnatko of the Chicago Sun-Times. Another suggestion was that Jack Miller, the webmaster/blogger of the "As the Apple Turns" website, which was seemingly abandoned in 2006, but which is still live, could possibly be Fake Steve Jobs. 
At The Wall Street Journal's "D: All Things Digital" technology conference, the real Steve Jobs was quoted as saying, "I have read a few of the Fake Steve Jobs things recently and I think they’re pretty funny." During a later joint interview, Bill Gates quipped that he was not Fake Steve Jobs. In October 2007 Lyons released the book Options: The Secret Life of Steve Jobs, a Parody, under the pseudonym "Fake Steve Jobs". Although based largely upon previous material published on The Secret Diary of Steve Jobs blog, the book creates a more cohesive narrative focusing especially on the stock options backdating scandal looming over Steve Jobs in late 2006 and early 2007. On July 9, 2008, Lyons announced on the Fake Steve blog that he would be launching a new site under his own name and discontinuing writing in a faux-Jobs style. He later announced his decision to place the Fake Steve blog on indefinite hiatus was out of respect for the real Steve Jobs' health: "I began hearing a few months ago that Steve Jobs was very sick. I wasn't sure if these rumors were true or not. Then I saw how he looked at [the Worldwide Developers Conference in early June, 2008] and it was like having the wind knocked out of me. I just couldn't carry on." The blog was continued in 2009 after news broke that Jobs had recovered from a liver transplant, but then suspended again in January 2011 when Jobs took a second leave of absence for health reasons. After Jobs' death in October 2011, Fake Steve Jobs posted a farewell poem, and has not been active since.
References External links The Secret Diary of Steve Jobs Real Dan Lyons Web Site Personal blog Dan Lyons A video interview of Daniel Lyons about Fake Steve Jobs on Microsoft's Channel 10 Unabridged interview with Lyons by Wallstrip's Lindsay Campbell on the release of his book, Options, October 28, 2007 Daniel Lyons speaks at Google about his blog, his book, and the real Steve Jobs Daniel Lyons Speaks at Cody Books in Berkeley, CA Farewell, Fake Steve Jobs - by Stanley Bing for Slate Magazine Dan Lyons Silicon Valley Law Interview Dan Lyons LinkedIn Audio Interview Video Interview (90Min) by Leo Laporte "Disrupted:My Misadventures in the Start-Up Bubble" May 9, 2016 Living people 1960 births University of Michigan alumni American bloggers Internet memes American technology writers Brooks School alumni 21st-century American non-fiction writers
41863732
https://en.wikipedia.org/wiki/GNU%20GLOBAL
GNU GLOBAL
GNU GLOBAL is a software tool for source code tagging to aid code comprehension. It works in a uniform fashion in various environments (GNU Emacs, Vim, GNU less, GNU Bash, web browsers, etc.), allowing users to find all objects declared in the source files and to move among them easily. It is particularly useful for working on projects containing numerous sub-projects and complex syntax trees generated by the compilation process (e.g., C code containing numerous #ifdef directives which select among several main() functions using conditional compilation). It is similar to older tagging software such as ctags and etags, but differs in its independence from any specific text editor. GNU GLOBAL is free software maintained for the GNU project by Shigio Yamaguchi. Use cases Use cases are varied, and include traversing the source code of the Linux kernel, browsing Ruby code after having analyzed it with Exuberant ctags or rtags, examining the structure of software packages in HTML mode, or exploring a large and unfamiliar codebase. Usage by other software GLOBAL is used by other software, including GNU Automake. FreeBSD uses it in its build system. See also Debug symbol References External links GNU GLOBAL in the Free Software Directory. GNU GLOBAL on the GNU Savannah platform. Code comprehension tools Code navigation tools Free computer programming tools Unix programming tools GNU Project software
21849518
https://en.wikipedia.org/wiki/Amiga%20Forever
Amiga Forever
Amiga Forever is an Amiga preservation, emulation and support package published by Cloanto, which allows Amiga software to run on non-Amiga hardware legally and without complex configuration. The Windows version of Amiga Forever includes "player" software developed by Cloanto which seamlessly uses "plugins" such as WinUAE as emulation engines, while relying on its own user interface for configuration and authoring. In addition to supporting common disk image formats, Amiga Forever can play back and author files in Cloanto's proprietary RP9 format. RP9 packages are compressed files that embed all media images, plus XML-based configuration and description data, and ancillary content like documentation, screenshots, audio tracks, etc. Beginning with the 2012 version, Amiga Forever includes Cloanto's RP9 Editor for content authoring. Besides its own authoring and playback environment, and Cloanto's floppy disk conversion service, Amiga Forever includes WinUAE and WinFellow, and different versions of UAE and E-UAE for other platforms. All versions of Amiga Forever include different AmigaOS (m68k) environments and support to run a large range of Amiga games and demoscene productions which are available for free download from different software publishers and Amiga history sites. The Windows version also includes Cloanto's Amiga Explorer networking software, which allows access to Amiga resources (including virtual floppy, hard disk and ROM image files) from the Windows Desktop. History The first version of Amiga Forever was released in 1997 on a CD-ROM which contained a front-end for Windows and different versions of UAE for Windows, DOS, macOS and Linux, plus Fellow for DOS and a selection of Amiga Kickstart ROM images and Workbench disks. There was also an Amiga floppy disk available which included the Amiga side of Amiga Explorer. The new plugin-based player software was introduced in 2007.
Features Features, among others: Emulation of Amiga hardware (allows Amiga software to run on a PC or mobile device) Official Amiga ROM and OS files (all versions from 0.7 to 3.X) Additional emulation and drivers (RTG graphics, SCSI, TCP/IP, AHI, CDTV, CD³², etc.) Amiga Forever player software for Windows to access and launch content Preconfigured WinUAE and WinFellow emulation engines with auto-updates Preinstalled games, demos and applications (web browser, paint, etc.) Support for a large amount of downloadable Amiga games, demos and applications Amiga Explorer and Amiga Files data sharing framework Optional Live CD, based on Knoppix Light (boots a PC or Intel Mac directly into Workbench) More than five hours of Amiga videos (two DVDs) Gallery of items of historical interest See also AmigaOS Minimig AmiKit UAE References External links Amiga
36212958
https://en.wikipedia.org/wiki/Higher%20Technical%20School%20of%20Computer%20Engineering%20at%20UNED
Higher Technical School of Computer Engineering at UNED
The Higher Technical School of Computer Engineering at UNED is a national and international higher-education center, with an extensive network of collaborating institutions, that teaches and awards degrees in Computer Science Engineering, Computer Systems Engineering and Business Computing Engineering, as well as bachelor's, master's and PhD degrees. History The Higher Technical School of Computer Engineering at UNED was established in 1991. Its creation was a response to a growing demand for computer science professionals with degrees focused not only on computing but also on industry, business and research. In 2001 the school expanded its academic offering to include master's degrees. In 2010-2011 the academic content was adapted to the requirements of the European Higher Education Area, replacing the former degrees. The academic offering is currently the following: Computer Engineering (5 academic courses) Computer Engineering degree (4 academic courses) Information and Communication Technologies Engineering - IT Engineering (4 academic courses) School Departments Computer Science and Languages Artificial Intelligence Automatic Control and Computer Science Software Engineering and Computer Systems Communication Systems and Control Inter-faculty Departments Business Management Business Economics and Accounting Materials Physics Statistics, Operations Research and Numerical Calculus Fundamentals of Mathematics Applied Mathematics I Mechanics Electrical, Electronic and Control Engineering Manufacturing Engineering Research groups Computer Systems and Languages Department Natural Language Processing (NLP Group) Interactive Environments for Teaching-Learning (LTCS Group) Artificial Intelligence Department Research Center for Intelligent Decision-Support Systems (CISIAD) aDeNu SIEA Intelligent Systems: Modelling, Development and Applications (SIMDA) Computer Systems and Automatic Control Department Modelling, simulation and process control Industrial computing Software Engineering and Computer Systems Department 
Software Quality Computer Graphics and Virtual Reality Software Engineering Robotics and Artificial Intelligence RFID Middleware Communication and Control Systems Department Parameter space, multi-frequency and fractional techniques for system control See also UNED Foreign Collaborative Centers XXXII UNED summer courses in 2021 External links Academic Ranking of World Universities CSIC References
1189358
https://en.wikipedia.org/wiki/Naparima%20College
Naparima College
Naparima College (informally known as Naps) is a public secondary school for boys in Trinidad and Tobago. Located in San Fernando, the school was founded in 1894 but received official recognition in 1900. It was established by Dr. Kenneth J. Grant, a Canadian Presbyterian missionary working among the Indian population in Trinidad. The school was one of the first to educate Indo-Trinidadians and played a crucial role in the development of an Indo-Trinidadian and Tobagonian professional class. Naparima is derived from the Arawak word (A) naparima, meaning ‘large water’, or from Nabarima, Warao for ‘Father of the waves’. The school was founded in the churchyard of Susamachar Presbyterian Church in San Fernando as the Canadian Mission Indian School. In 1899 the Mission Council petitioned the Board of Queen's Royal College in Port of Spain for affiliation with it. In 1900 the school became a recognised secondary school and was thus eligible for state aid. It was then renamed Naparima College. In 1917 it relocated to its present campus at Paradise Hill on what was then the southern edge of the city. History 1866 Rev. Dr. John Morton, a young Presbyterian minister from Bridgewater, Nova Scotia, arrives in Trinidad and is deeply concerned at the social conditions of the population of 25,000 Indians working on the plantations. Morton receives approval of the Maritime synod of the Presbyterian church to found a mission to Trinidad. A friend and colleague, Rev. Dr. Kenneth James Grant from Scotch Hill, Pictou County, Nova Scotia, is appointed as a partner in the project. 1870 On November 22, Grant, with his newly wed wife Catherine Copeland of Merigomish, Nova Scotia, and Morton with his wife Sarah, arrive in San Fernando, Trinidad. 1870s Gordon, Governor of Trinidad, enacts an education ordinance facilitating the establishment of Canadian Mission elementary schools. The "C.M." 
schools, large one-room wooden constructions, are established in a large number of rural communities in Trinidad, and flourish, eventually numbering over seventy. 1880 "Four stations (believed to be Tunapuna, San Fernando, Princes Town and Couva) have been established which serve all parts of Trinidad". 1883 Rev. Dr. Grant conducts the first secondary school classes for his son George, other children of the Mission, and Charles Pasea, by tradition under a samaan tree on Carib Street, San Fernando, near his own home. This is the site of present-day Susamachar Presbyterian Church, and the Grant Memorial (elementary) School. 1890 Rev. Dr. Kenneth J. Grant receives approval of the church for the establishment of a theological training college. When the proposal is broached to the Presbyterian Assembly in Ottawa, two private families donate half of Dr. Grant's requested funding within forty-eight hours. This foundation stone of the ministry in Trinidad is later renamed the Presbyterian Training College and then the St Andrews Theological College. 1894 Naparima Training College for Teachers is opened; the secondary school classes to become Naparima College are initially merged with NTC classes. 1897 Naparima College is made into a separate institution by Rev. Dr. Kenneth J. Grant. 1898 Rev. Grant purchases "Oriental Hall", land and buildings adjacent to his home, which serves for years as a base of operations for the missionaries, and a home for the early Presbyterian College, Naparima Training College, and Naparima College. 1900 Naparima College is formally inaugurated, affiliated with Queen's Royal College in Port of Spain, and loosely modeled after Pictou Academy in Pictou, Nova Scotia. Enrollment is 50, staff complement 4. 
1904 The first class graduates as holders of the Cambridge Senior School Certificate. 1917 Naparima College is moved to Paradise Pastures and the first buildings are constructed here overlooking the Gulf of Paria, immediately to the west of the town of San Fernando, where the institution has stood continuously since. 1923 Rev. V. B. Walls of Blackville, New Brunswick is appointed principal, and takes up residence on the hill in January 1924. He would serve for almost 30 years. 1925-1931 Additional buildings are constructed, including a dormitory, a dining hall (1925), an infirmary (1927), and the central part of the "U" design (1931). 1931 The central part of the old "U" building is built, together with the old science lab, and library. Enrollment stands at 200. 1932 The main building is constructed along with a Science Laboratory. 1936 The first students graduate as holders of the Higher School Certificate. The T. Geddes Grant Memorial Dormitory is constructed. 1938 The school was chosen by the pioneering vendor of the product as the historical starting point for the sale of the popular local snack known as doubles. In fact, the very name given to the snack was first coined by students of the school whilst placing orders. 1939 First Founders' Day celebration. The wings of the "U" structure are added, replacing two 1917 classrooms. 1945 Kathlyn W. Smith became the first girl and the first student of Naparima College to obtain an Island Scholarship. The Junior Red Cross is set up. 1946 Enrollment at Naparima College stands at 560, staff of 24 and on-campus residents of 72. 1948 Dr. the Hon. Eric Williams, who in 1962 would become the first Prime Minister of an independent Trinidad and Tobago, chooses the school as one of the locations for the delivery of a series of political lectures that eventually led to the formation of the People's National Movement in 1955. 1950 Naparima's Golden Jubilee, and Rev. Walls' 25th year. A thousand people attend. In the spotlight with Rev. Walls are H.R.H. 
Princess Alice and the Earl of Athlone; Rev J. C. MacDonald (early principal), Sidney Hogben, Director of Education; Roy Joseph, Mayor; and many dignitaries. The present-day flag staff is donated by Trinidad Leaseholds Ltd and erected and used for the first time at the Golden Jubilee. 1953 Rev. Walls retires (1952) and so the position of principal passes to Rev. E. T. Lute of Toronto. 1954 Permanent sites are secured for Tunapuna and Siparia campuses. 1956 Uniform of white shirts and silver-grey khaki trousers is established as well as the First Naparima College Sea Scout Troop. 1957 Larry Lutchmansingh becomes the first Naparima College boy to obtain an Open Scholarship. 1958 Introduction of the Naparima College Badge as part of the school's attire. 1959 Under Rev. E. T. Lute, new concrete and steel present-day buildings are constructed. The timber from the old 1931 buildings is used to build a new gymnasium, used until 1995. Rev. Lute also introduces the 'house system'. Six houses were originally set up, but this number was later reduced to four: Walls House (red); Sammy House (blue); Flemington House (gold); and Grant House (green). National independence of Trinidad looms, and it is evident that many institutions of the past will be passed to local autonomy. Enrollment stands at close to 700. 1960 Naparima celebrates its "Diamond Jubilee", the 60th year of its official recognition. Rev. Walls returns from his retirement in New Brunswick for the celebrations. 1962 Trinidad and Tobago becomes an independent country. Rev. James Forbes Sieunarine (named after an early missionary) becomes the first principal to emerge from among the students of the Canadian Mission. 1963 The Walls Pavilion is constructed on the playing field off Rushworth St. 1966 Allan I. McKenzie is appointed principal. 1967 The dormitory is re-furbished as classroom space. This is also the last year that girls attend Naparima College. The present day school uniform is inaugurated. 
1969 Won island-wide TTT Quiz contest as well as the Drama Festival's best production with A Little Soap and Water written and produced by Hafeezul Sukoorali. 1972 Won Best Production in the Drama Festival with Tears In the Gayelle written by Dennis Noel and produced by Rosemarie Wyse. A banquet is held in honor of Sir Isaac Hayatali on his appointment as Chief Justice of Trinidad and Tobago. 1975 The United Church Board of Missions formally closes the Canadian Mission to Trinidad. A chemistry lab is added. 1976 Trinidad and Tobago becomes a republic. The following national trophies are won - National Championship Cricket, National Junior Cricket, National Inter-Col Final (football). 1976 Naparima College became the first team to win consecutive Inter-Col Finals. 1984 A Zoology laboratory is added. 1993 Naparima College establishes direct voice contact with NASA's Space Shuttle Columbia, specifically STS-58, whilst it was in orbit. 1996 95% of the 120 boys writing the "O" level school-leaving exams secured passes in five or more subjects, the highest percentage of passes among the island's high schools. 1999 Naparima College won the South Zone SSFL Title, the SSFL League Title and Intercol Title and was reported in the media as being "The Triple Crown Winner". 2005 Demolition of the last remaining remnant of the 1931 main building and the water bay, simultaneously increasing the area used as the courtyard. 2006 Dr. Michael R. Dowlath, a past student and former principal of Iere High School, returns to Naparima College as principal. 2007 Naparima College ties for the most Intercol Titles in the country, equalling San Fernando Technical & Signal Hill with 6. 2010 The main Staff Room was extended to better facilitate the teaching staff. Introduction of a new Form class – 6BS3. Establishment of a new classroom in the Grant's Memorial Wing to house the students of 6A. 
2010 Naparima College records the most zonal wins for the South Zone Title, 14, and the most for any team in their respective zones. 2011 Reintroduction of the Teachers vs. Students Cricket Match on Founders' Day as well as the Naparima College Blazer for Form 6 students. Hosts of bi-annual Presbyterian Games. Introduction of the Upper 6 Sleepover. (Year of proposed air-conditioning of the entire Form 6 Block as well as installation of security cameras on campus.) 2014 Naparima College wins the Under 19 division of the Barbados Cup football competition. The annual tournament attracts approximately 50 teams drawn from across the entire Caribbean – English, Spanish & French – as well as occasionally teams from England, the United States of America and Canada. 2015 Naparima College attains the national standard of Diamond Standard Certification, an award and recognition given to organisations that have accomplished high standards in the delivery of public service in Trinidad and Tobago. 2015 Xante, an annual school concert, was launched, showcasing the students' diverse talents including performances from the Naparima College Steel Orchestra, a drama presentation, soloists and a creative sign-language choir. The concert was held at the National Academy for the Performing Arts South Campus, San Fernando. 2015 Naparima College wins all 3 major titles in the SSFL, a repeat of the feat accomplished back in 1999. 2016 Naparima College wins the Big 4 Title in the SSFL. 2017 bmobile partners with Naparima College to transform the school into Trinidad and Tobago's first smart school. 2017 Celebrated 100 years (centenary) on Paradise Hill. 2018 Naparima College wins all 3 major titles in the SSFL, a repeat of the feats accomplished back in 1999 and 2015. Administration Principals The following is a chronological list of principals who have served at Naparima College. 
Vice Principals The following is a chronological list of vice principals who have served at Naparima College. Campus Naparima College's campus sits atop Paradise Hill, overlooking the city of San Fernando in southern Trinidad. The city's major landmark, the San Fernando Hill, towers over the college to the east; to the west is the Gulf of Paria. While the campus location has been the same for decades, there have been numerous infrastructural and developmental changes over the years. Many of the major changes arose from the demolition of the old wooden buildings, most of which had stood since the first half of the 20th century. Most notable were the college's old gymnasium, demolished in 1995; the Grant Memorial Building, in 1999; and the structure referred to as the "main building", in 2005. The structure that replaced the Grant Memorial Building is referred to as the "Grant's Memorial Wing" or the "new Grant building". Almost twice as large as the old building, it consists of two levels and a basement area (at the western end). It possesses a number of classrooms, Conference, Geography, Business, Language, Art, and Science Demonstration Rooms as well as an "Intelligent Classroom" and Scout Den. Directly adjacent to the Grant's Memorial Wing and connected via a pedestrian bridge is the Science laboratory building. Consisting of two levels, it contains a Chemistry laboratory on the first floor and Biology and Physics laboratories on the second floor. Neighboring the laboratory building and connected by corridors on the second floor is another section which houses the library. Following several remodelings over the past few years, its design as of 2012 is well suited to research and study. The library functions as a hybrid of a reference and a lending library, albeit on a smaller scale. 
It possesses its own library staff, archives, Internet access as well as book rental and (magnetic card based) photocopying services. Students at the Form 6 level are allowed unlimited access to the library throughout a regular school day while students of lower forms must seek administrative permission for the use of its services at any time other than the luncheon and recess periods. Also in the same section, directly beneath the library on the first floor and accessed via corridors, is the Audio/Visual Room (abbreviated as the A/V Room), which is used for displaying media to aid lectures and presentations to an audience, especially by a visiting party. Also located in this section of the building are the offices of the Deans of Discipline and the Dean of Studies. Adjacent to the aforementioned section of the school, but directly joined to it, is an L-shaped building that contains a number of classrooms, an Information Technology Laboratory, storage room (neighboring the cafeteria), Staff Room (for the teaching administration) and the offices of the Principal and Vice-Principal and the Main Office. The washroom and shower facilities available for the student population and visitors alike are located on the western end of the campus, next to what is the Form 1 and 2 Blocks. These facilities are also located on the first floor of the gymnasium. The cafeteria (known as The Café) is located on the northernmost sector of the campus. It has been extensively remodeled in the last few years, with its current design covering the largest area since Naparima College was founded. The main section of the cafeteria, located just under what is now the Form 1S classroom of the Form 1 block, is the main area of business where students and members of the administration can trade. Adjacent to this is the area dubbed the Café Verandah. This area includes seating with tables. Local food merchants are also allowed to market there with permission from the administration. 
Students may purchase before and after regular school hours and during the luncheon and recess periods. The cafeteria staff reserves the right to bar any student from trading during class hours unless that student is accompanied by a written note provided by a member of the teaching or office administration. There is also a small snack bar located in what had been a security booth at the start of the sidewalk at the top of Paradise Hill. It allows students and administration greater flexibility in their choice of refreshment. The structure known as the Principal's Residence is located a stone's throw away from the main structure and is out-of-bounds to the student population. It had been an old building which went unused for years until its reconstruction was proposed and eventually completed alongside the gymnasium. Towards the eastern sector lies the gymnasium which is the most recent addition to the campus (along with the Principal's Residence). Whilst overlooking the on-campus playing field, it consists of three levels, the second and third of which serve a dual purpose of hosting indoor sporting events as well as an Auditorium. The first level, which may also be considered as a basement level, facilitates indoor sporting activities, physical training via the use of exercise equipment, as well as bathroom and shower facilities. A number of outdoor facilities, scattered in and around the campus, are normally available for recreation and leisure. The college has two outdoor recreational grounds. The main on-campus general playing field is located at the westernmost sector of the school's compound, and is available for use by the student population at any time before and after regular class hours. However, most activity occurs during both the recess and luncheon periods. It may also be utilized by students for the purposes of scheduled Physical Education lessons. 
The field also possesses its own batting cage for the purposes of cricket practice. The second and perhaps major playing field is located on the Lewis Street extension, San Fernando and being off campus it is generally out-of-bounds to the student population. It is mainly utilized for larger sporting events such as inter-school sporting competitions or for the events of Sports Day. The spectator stand on the off-campus playing field which is known as the V.B. Walls Pavilion was constructed by the Naparima College Old Boys Association in 1965. Its name is a dedication to Rev. Dr. Victor B. Walls, a Naparima College principal from 1924 to 1953. It is scheduled for reconstruction. The courtyard or "quadrangle" is located in what may be considered as the heart of the campus and is bounded by the structure housing the form 4 and 5 blocks to the north, the science laboratory building to the east and Grant's Memorial Wing to the south whilst looking over the Gulf of Paria to the west. Within the past few years, the area dedicated to the courtyard has increased dramatically. The old wooden structure commonly known as the main building and the structure known as the water bay were situated here but were both demolished in 2005. The courtyard is used by Naparima College as the student assembly grounds and for recreational activity. Other miscellaneous facilities or features around the campus include: A terrace above the scout den, overlooking the Gulf of Paria on the western sector Park benches at strategic locations Sinks located near Grant's Memorial Wing and the Form 1 Block A pedestrian shelter located at the entrance to Paradise Hill Vehicular parking for administrative staff, visitors and form 6 students Form levels Forms 1 – 5 levels Each form level from 1 to 5 is usually sub-divided into four Form Classes. 
Each class in a form level is given the number of the form level and distinguished from each other by one of four letters – 'N', 'A', 'P' and 'S' (Naps - an abbreviation of 'Naparima College' used in general references). For instance, at the first form level the classes are Form 1N, 1A, 1P and 1S; with the pattern continuing up each successive form level. Normally each form class contained the same group of students that began together in the first form level; for example, the same students who reside in Form 1N would be promoted to Form 2N. This trend continued until the Fourth Form level, where students were separated and grouped into classes based on their choice of CSEC subjects. However, in order to expose students to a greater number of individuals, this trend was changed: when progressing up each successive form level at the start of a new academic year, the student groups within each form level are changed; that is, students do not progress through the form levels with the same group they had started out with in the first form level. The form classes: Form 1 – 1N, 1A, 1P & 1S Form 2 – 2N, 2A, 2P & 2S Form 3 – 3N, 3A, 3P & 3S Form 4 – 4N, 4A, 4P & 4S Form 5 – 5N, 5A, 5P & 5S Form 6 level The form 6 level is usually treated as a separate branch of classes and as such does not conform to the usual class-naming scheme as described above, instead having a unique scheme for both the Upper and Lower 6 classes. The Lower 6 Level is regarded as the level consisting of the younger form 6 students in their first year of CAPE study while the Upper 6 Level is regarded as the older students who are in their second and final year of CAPE study. Naparima College tends to accept on average eight external students into the Lower 6th Form Level per year. Usually these students only spend two years at the institution itself. 
The Upper 6 students are acknowledged as being in their final year at Naparima College, after having spent seven years (external students spend fewer years) at the institution. The form classes: Lower 6 Level: 6BA 6BS1 6BS2 6BS3 Upper 6 Level: 6A 6AS1 6AS2 6AS3 The promotion scheme is the same, with students being allowed the opportunity of promotion into the appropriate class as shown below. The 6AS3 form class was officially recognized in September of the academic year spanning 2011–2012. There are 28 populated form classes at present. Academics Naparima College offers a diverse selection of subject areas, many of which are drawn from the fields of Science, Mathematics, Business Studies, Modern Studies and Creative Arts. At the end of three years, Form 3 students must sit the National Certificate of Secondary Education (NCSE) examinations. At the end of five years, Form 5 students sit the Caribbean Secondary Education Certificate (CSEC) examinations, while Form 6 students sit the Caribbean Advanced Proficiency Examinations (CAPE). Form 1 Level The Form 1 Level consists of the youngest students at Naparima College. Academically, the subjects offered to these students are extremely similar to what they were exposed to at the Primary Level, albeit a little more advanced. Students at this level may be allowed to go into the Library during regular class hours for periods of study. 
Subjects offered at the Form 1 Level: Art Drama English Language English Literature French - As of 2016 Integrated Science Information Technology Health and Family Life Education Mathematics Music Physical Education Religious Instruction Social Studies Spanish Technical Education History Form 2 Level Subjects offered at the Form 2 Level: Art Biology Chemistry English Language English Literature French - as of 2017 History Mathematics Information Technology Physical Education Physics Social Studies/History Spanish Form 3 Level The form 3 level is the level at which students prepare for the National Certificate of Secondary Education (NCSE) examinations. Essentially, students at this level are exposed to all the academic fields of study offered at Naparima College, aiding them in selecting subjects for further study at the CSEC level and possible careers. They are also encouraged to attend the Career Guidance Seminars held annually at Naparima College. Subjects offered at the Form 3 Level: Art Biology Business Studies Chemistry Drama English Language English Literature French Geography History Information Technology Mathematics Music Physical Education Physics Spanish Forms 4 & 5 Levels The Form 4 & 5 Levels are the levels at which students prepare for the Caribbean Secondary Education Certificate (CSEC) examinations. Entry into Form 4 is based on the subjects selected by students upon promotion from the Form 3 Level. Each student is required to study at least nine subjects, four of which are compulsory – English Language, English Literature, Mathematics and either Biology or Human and Social Biology. Students continue into Form 5 with the subjects they have selected and write the CSEC examinations at the end of their 5th Form academic year. 
Subjects offered at the Form 4 & 5 Levels: Additional Mathematics Biology - compulsory Chemistry English Language – compulsory English Literature – compulsory French Geography History Human & Social Biology Information Technology Mathematics – compulsory Physical Education Physics Principles of Accounts Principles of Business Social Studies Spanish The following subjects are also offered at the Form 4 & 5 Levels, albeit not during normally timetabled sessions: Technical Drawing Visual and Performing Arts (one choice of either Art or Music or Drama) Sixth form The sixth form is split into two sub-levels, the Lower Sixth Form and the Upper Sixth Form. These are the levels at which students prepare for the Caribbean Advanced Proficiency Examinations (CAPE). Application to this level may occur after fifth-form students obtain their CSEC results. Students at this level choose three academic subjects (some opt for four and may be accepted) plus one compulsory subject: Communication Studies in the lower sixth form and Caribbean Studies in the upper sixth form. However, choice of subjects is limited; for example, a student who only studied modern studies subjects at the CSEC level (History, Social Studies etc.) is barred from studying subjects such as Pure Mathematics or Physics at the CAPE level. Subjects at the CAPE Level tend to be divided into Units. Lower sixth form students study Unit 1 of their chosen subjects while the Unit 2 counterparts are studied at the Upper sixth form level. However, exceptions may be made in which Unit 2 of a subject is studied in the lower sixth form followed by Unit 1 in the upper sixth form. Only Unit 1 of each compulsory subject (Caribbean Studies and Communication Studies) is studied. 
Subjects offered at the Sixth Form Level: Accounting Animation and Game Design Applied Mathematics Biology Caribbean Studies – compulsory Chemistry Communication Studies – compulsory Computer Science Economics Entrepreneurship Environmental Science French Geography History Information Technology Literature in English Management of Business Performing Arts Physical Education Physics Pure Mathematics Sociology Spanish Admissions Admission to Naparima College is determined by performance on an examination known as the Secondary Entrance Assessment (S.E.A.). Being a denominational school that was originally founded by Christian missionaries from Canada, the Presbyterian Church in Trinidad and Tobago (PCTT) has the prerogative of selecting 20% of the annual intake of new students into Form 1. This is a practice that is also exercised by all of the other religious organisations in Trinidad and Tobago (e.g. the Roman Catholic Church, the Anglican Church, the Anjuman Sunnat ul Jamaat Association (A.S.J.A.) and the Sanatan Dharma Maha Sabha (S.D.M.S.)) where they have jurisdiction over secondary schools in this regard. The S.E.A examination comprises three papers that must be attempted by all candidates: Creative Writing, Mathematics and Language Arts. Naparima College tends to be a first choice among the four prospective institutions each examinee is required to list, in order of preference, prior to the examination. The four preferences are drawn from the totality of secondary institutions in Trinidad and Tobago. For a candidate to attain admission, he must obtain a score of 94–100 percent in the examination. Female students have not been accepted since 1967 and can instead apply to the Naparima Girls' High School. Student life Naparima College offers a range of clubs and activities which students may join. 
Extracurricular activities are strongly supported by the administration and the teaching staff, as they help students at Naparima College develop into well-rounded individuals and are one of the reasons the school has been able to perform outstandingly in all aspects as an educational institution. The following is a list (as of September 2015) of clubs and activities: 1st Naparima College Sea Scout Troop African Cultural Club - A.C.C Art Club Astronomy Club Audio-Visual Club Anime Club Big-Brother Program Chess Club Club Castilé – The UNESCO Spanish Club Culinary Club Drama Club Disaster Preparedness Club Environmental Club Essay Writing Film and Photography Club French Club Geography Club Guitar Club Indian Cultural Club – I.C.C. Information Technology (I.T.) Club Inter School Christian Fellowship – I.S.C.F. Keyboard/Piano Club Mathematics Club Naparima College Choir Naparima College Pan Ensemble – Naparhythms Naparima College Islamic Society − N.C.I.S Naparima College Media Association - N.C.M.A Rotary International – Interact Club Rhythm Section Scrabble Club Sports – Cricket, Football, Table Tennis, Lawn Tennis, Martial Arts, Athletics, Volleyball, Golf, Swimming, Scuba Diving, Badminton and more. Student Council Spanish Club Tassa Group The Literary and Debating Society – L.A.D.S. The Naparima College Peer Supporters Writers' Guild The following competitions occur within Naparima College; this does not include competitions that Naparima College does not host. Members of the clubs and activities above (and members of the student population not attached to a club or activity) are allowed to represent the school in various external competitions, which may be considered a sub-class of extra-curricular activity. 
Competitions within Naparima College include:
Elocution Contest
Events of Sports Day
Oration Contest
Science Fair
Inter-class Competitions

Such activities and clubs have regular meetings and encourage their members to participate in various competitions, creative events and charity work. Any student who wishes to suggest an activity not listed may petition and propose the idea to a member of the administrative teaching staff and have it evaluated. As such, the list above is constantly growing.

Academic year
The following is a list of events held annually at Naparima College during each academic year. (Some events may not occur during the term mentioned and, as such, may be subject to change.) The events are listed in order of occurrence.

Term 1
Interclass Football Competition
School Bazaar (normally the first Saturday in October)
Speech Day
Oration Contest
Eid Celebrations
Divali Celebrations
End of Term Examinations
Christmas Celebrations

Term 2
Parents' Day
Inter-Class Cricket Tournament
Founder's Day
Teachers vs. Students Annual Cricket Match
Elocution Contest
Sports Day
Carnival Celebrations
Naparima College's Annual Walkathon
End of Term Examinations
Easter Celebrations
Upper 6 Sleepover

Term 3
Annual Prize Giving and Awards Functions
Student Baccalaureate Service
CSEC and CAPE Examinations
End of Term Examinations
Graduates' Dinner and Dance

Apart from those listed, numerous other events occur at various times throughout the academic year. These may include:
Regular meetings of the Naparima Association of Parents School and Community (NAPSAC)
Fundraisers (hosted by student clubs)
Events hosted by the Naparima Association of Past Students (NAPS), such as the "Breakfast On The Hill" or "Naps Men Can Cook Too"
Career Guidance Seminars (other secondary schools may be invited to attend)
Eid Celebrations
Valedictory Ceremonies

Various other special occasions may also be held.
The first major event is the annual school bazaar, which takes place on the school's campus. This social event normally occurs during October, usually on a weekend, most commonly a Saturday afternoon. Each class is given responsibility for a particular stall, for which planning may begin a few weeks beforehand. These stalls may take the form of many activities, including games such as the popular air rifle or tin pan alley, the sale of food and drinks, and attractions such as a horror house, car show and disco. The bazaar serves a dual purpose as both a social event and a fundraiser. Speech Day, which might also be referred to as a Graduation Day, is the day for recognizing the academic achievements of both the Form 5 and Upper Form 6 students; for the latter it is effectively a graduation. Certificates and awards for academic achievements in the Caribbean Secondary Education Certificate examinations and Caribbean Advanced Proficiency Examinations, as well as other academic achievements, are distributed to the students at this official ceremony. Teachers, parents, guardians and special guests are normally issued invitations to attend.

House system
The House System was instituted in 1959 by then-principal Rev. Lute. Each member of the student population is assigned to a particular house in Form 1 and remains in that house for the rest of their academic career at Naparima College. The houses compete with each other in some aspects of school life, most significantly the annual Sports Day, where each house competes for the top ranking in sporting activities at Naparima College. Initially there were six houses, but the number was later reduced to four. The four houses are as follows:
Flemington House (gold) – named for Allen Flemington, who served as a missionary and a French teacher at the school from 1939 to 1940.
He left the school to volunteer for service in World War II as a fighter pilot and died in combat.
Grant House (green) – named for the founder of Naparima College, Kenneth J. Grant.
Sammy House (blue) – named for James Sammy, who taught at Naparima College from 1912 to 1968. He was the father of two other masters at the College: Carl Sammy, a long-standing History teacher at Naparima College until his retirement in 2006, and David Sammy, a former Vice Principal at Naparima College who later became Principal of Tableland High School in 2009.
Walls House (red) – named for long-serving principal Victor B. Walls.

Publications
The Olympian Magazine
Naparima College has a school magazine dubbed The Olympian, which was started in 1945 by Ralph Laltoo. Efforts are made to publish one issue annually. It usually acts as a yearbook and highlights the events of a particular academic year at Naparima College. These may include academic and sporting achievements, brief summaries of extracurricular clubs and activities, and special events such as the school bazaar or valedictory functions, many of them written from the perspective of the student population.
The Naparima College Handbook
Normally issued to students in Form 1, the Naparima College Handbook outlines the rules of the institution and codes of conduct, as well as other information regarding school life.

Notable alumni

Miscellanea
Motto: "A posse ad esse"
This Latin phrase literally translates into English as "From Possibility To Actuality". The motto was selected through a contest launched by Rev. Dr. Victor B. Walls among the staff and students shortly after his arrival in 1923. The identity of the person who coined the motto remains a mystery. The motto serves to encourage students to strive to be the best that they can be in all aspects of school life and in their other achievements. It is inscribed on the school's insignia.
Rationale
Naparima College, the pioneer secondary school in San Fernando, was founded in 1894 by Canadian Presbyterian missionaries to spread education and civilisation amongst the depressed indentured immigrant population.

College Hymn or College Anthem
The college hymn was written around 1930 by Marion Elizabeth Walls, the wife of Reverend Victor Benjamin Walls. The melody is that of a well-known 18th-century evangelical hymn, recorded in the Anglican and Presbyterian hymn-books and known variously as "The Church of God", "Thou whose Almighty Word" and "Come, thou Almighty King".

See also
Naparima Girls' High School
Hillview College
Iere High School
List of schools in Trinidad and Tobago

References

External links
Official Site
Alumni Site
Naparima Alumni Association of Canada

Educational institutions established in 1894
Buildings and structures in San Fernando, Trinidad and Tobago
Presbyterian schools in Trinidad and Tobago
1894 establishments in Trinidad and Tobago
Schools in Trinidad and Tobago
8868366
https://en.wikipedia.org/wiki/Operations%20specialist%20%28United%20States%20Navy%29
Operations specialist (United States Navy)
Operations Specialist (abbreviated OS) is a United States Navy and United States Coast Guard occupational rating. It is a sea-duty-intensive rating in the Navy, while most Coast Guard OSs serve ashore at command centers.

Brief history
The rating grew out of the radarman (RD) rating. In the U.S. Coast Guard, the operations specialist rate was formed by combining the radarman (RD) and telecommunications specialist (TC) rates. When the radarman rating was split into the OS, electronics technician (ET), and electronic warfare technician (EW) ratings, the original RD rating badge continued to be used by the operations specialist. It depicts a radar oscilloscope (O-scope): the circle represents the scope used to determine a target's range from the radar antenna (the two spikes in the line drawn across the scope), and the arrow represents the ability to detect the azimuth, or direction, of the target.

Description
Operations specialists aboard U.S. Navy combat vessels work in the combat information center (CIC) or combat direction center (CDC), a.k.a. "combat", the tactical nerve center of the ship. Using a wide variety of assets available to them, operations specialists are responsible for the organized collection, processing, display, competent evaluation and rapid dissemination of pertinent tactical combat information to command and control stations, upon which sound tactical decisions may be made. Beginning training (called "A" school) for operations specialists was originally located at the Naval Training Center (NTC) in Great Lakes, Illinois. In 1979 it was moved to Dam Neck in Virginia Beach, Virginia; the school has since moved back to the Training Support Center (TSC) of NAVSTA Great Lakes in Illinois. Intermediate and advanced training take place in California, Virginia, and various other locations throughout the United States Navy. An RD "A" school was also located on Treasure Island in San Francisco, California.
They maintain combat information center displays of strategic and tactical information, including various plotting boards and tables depicting the position and movement of submarines, ships and aircraft, as well as tote boards containing data relevant to the tactical picture. They operate surveillance, tracking and height-finding radars, identification friend or foe (IFF) equipment, HF, VHF and UHF radios, tactical data link (TADIL-A/Link 11, TADIL-J/Link 16, etc.) systems and displays, and computerized consoles and peripheral equipment that allow them to interface with the Aegis combat system. The tactical data links exchange data with other units in the force, i.e., ships, aircraft and other military units such as deployed Army, Air Force, Marine and Coast Guard commands. They operate encrypted and non-encrypted long- and short-range radio-telephone equipment as well as intra-ship communication systems. With specialized training, they also may serve as combat air controllers for helicopters, anti-submarine patrol aircraft, and jet strike fighter aircraft in anti-submarine tactical air controller (ASTAC), sea combat air controller (SCAC), and air intercept controller (AIC) roles. They also serve as watch supervisors, watch officers, and section leaders underway and in port aboard ship and at commands ashore. Operations specialists assist in shipboard navigation by plotting and monitoring the ship's position using satellite and other electronic navigation resources, as well as fixing the ship's position near landfall using radar imaging. They interpret and evaluate presentations and tactical situations and make recommendations to the commanding officer, CIC watch officer (CICWO), tactical action officer (TAO), officer of the deck (OOD), or any of their commissioned officer surrogates during various watch or combat/general quarters conditions. They apply a thorough knowledge of doctrine and procedures applicable to CIC operations contained in U.S.
Navy instructions and allied tactical or U.S. Navy tactical publications. Operations specialists are responsible for maintaining the physical space of CIC as well as performing minor planned maintenance on the equipment they operate. A minimum of a secret security clearance is required for this rating, with more senior personnel holding a top secret clearance. Operations specialists provide their shipboard or shore-based command with a wide range of technical information and assistance related to anti-surface warfare, anti-air warfare, anti-submarine warfare, amphibious warfare, mine warfare, naval gunfire support, search and rescue operations, radar and dead reckoning navigation, overt intelligence gathering and transmittal, and other matters pertaining to the operations specialist's area. They also have a working knowledge of protocols and procedures in electronic warfare, though this area is normally covered by its own occupational rating, such as cryptologic technician (CT) aboard ship or ashore, or naval aircrewman (AW) aboard specific naval electronic warfare and reconnaissance aircraft.
Duties
The duties performed by Navy operations specialists include:
Operate a variety of computer-interfaced detection, tracking and height-finding radars
Plot a ship's position, heading, and speed, using computerized methods or manual trigonometric methods on a maneuvering board (MOE Board)
Maintain a tactical picture of the surrounding seas by plotting and maintaining a visual representation of ships, submarines and aircraft in the area, including friendly, neutral, hostile and civilian contacts
Use secure and non-secure radio to communicate, in plain voice or coded signals, with other air, sea or land units to coordinate tactical and combat evolutions
Operate common marine electronic navigation instruments, including radar and satellite systems; plot own ship's position and movement on charts; and make navigation recommendations to the officer of the deck
Provide target plotting data to command and control based on information received from target tracking devices
Make recommendations to command and control regarding tactical and combat procedures
Assist in the coordination and control of landing craft during amphibious assaults
Communicate with spotters, plot, and make calculations to adjust fire during naval gunfire support missions
Coordinate and assist in plotting and ship maneuvers during emergency evolutions such as man overboard and other search and rescue activities
Provide assisted and direct air control of combat aircraft in anti-air, anti-surface and anti-submarine warfare

The job of an operations specialist can be very intense and stressful while the ship is underway. Operational tempos range from next to no contacts in the middle of an ocean to dozens or more in congested waters. Operations specialists must be able to think quickly, drawing on a large reserve of tactical and procedural knowledge, and make calculations on the fly in the fast-paced and information-saturated environment of naval combat operations at sea.
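The plotting duties above come down to simple trigonometry: converting a radar contact's true bearing and range into chart coordinates, and dead-reckoning own ship's position forward from course and speed. A minimal illustrative sketch of those two calculations (function and variable names are my own, not any Navy software):

```python
import math

def contact_position(own_x, own_y, bearing_deg, range_nm):
    """Convert a contact's true bearing and range (as read off a radar
    scope) into chart coordinates relative to own ship (x = east, y = north)."""
    theta = math.radians(bearing_deg)
    return (own_x + range_nm * math.sin(theta),
            own_y + range_nm * math.cos(theta))

def dead_reckon(x, y, course_deg, speed_kts, minutes):
    """Advance a position along a course at the given speed for some minutes."""
    dist_nm = speed_kts * minutes / 60.0
    theta = math.radians(course_deg)
    return (x + dist_nm * math.sin(theta),
            y + dist_nm * math.cos(theta))

# A contact bearing 090 degrees (due east) at 10 nm from the origin:
print(contact_position(0.0, 0.0, 90.0, 10.0))   # ~ (10.0, 0.0)
# Own ship steaming due north at 20 knots for 30 minutes:
print(dead_reckon(0.0, 0.0, 0.0, 20.0, 30.0))   # (0.0, 10.0)
```

The same arithmetic is what a manual maneuvering-board solution performs graphically with a compass rose and nomogram.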
The duties performed by Coast Guard operations specialists include:
Search and rescue or law enforcement case execution
Combat information center operations
Intelligence gathering

See also
List of United States Navy ratings

References

United States Navy ratings
3193137
https://en.wikipedia.org/wiki/History%20of%20the%20single-lens%20reflex%20camera
History of the single-lens reflex camera
The history of the single-lens reflex camera (SLR) begins with the use of a reflex mirror in a camera obscura described in 1676, but it took a long time for the design to succeed for photographic cameras. The first patent was granted in 1861, and the first cameras were produced in 1884, but while elegantly simple in concept, they were very complex in practice. One by one these complexities were overcome as optical and mechanical technology advanced, and in the 1960s the SLR camera became the preferred design for many high-end camera formats. The advent of digital point-and-shoot cameras with LCD viewfinder displays, from the 1990s through the 2010s, reduced the appeal of the SLR at the low end of the market, and the mirrorless interchangeable-lens camera is increasingly challenging the SLR in the mid-price range. But the SLR remains the camera design of choice for most professional and ambitious amateur photographers. Early large and medium format SLRs The photographic single-lens reflex camera (SLR) was invented in 1861 by Thomas Sutton, a photography author and camera inventor who ran a photography-related company together with Louis Désiré Blanquart-Evrard on Jersey. Only a few of his SLRs were made. The first production SLR with a brand name was Calvin Rae Smith's Monocular Duplex (USA, 1884). Other early SLR cameras were constructed, for example, by Louis van Neck (Belgium, 1889), Thomas Rudolphus Dallmeyer (England, 1894) and Max Steckelmann (Germany, 1896), and Graflex of the United States and Konishi in Japan produced SLR cameras as early as 1898 and 1907 respectively. These first SLRs were large format cameras. While SLR cameras were not very popular at the time, they proved useful for some work. These cameras were used at waist level; the ground glass screen was viewed directly, using a large hood to keep out extraneous light. In most cases, the mirror had to be raised manually as a separate operation before the shutter could be operated.
As with camera technology in general, SLR cameras became available in smaller and smaller sizes. Medium format SLRs soon became common, at first larger box cameras and later "pocketable" models such as the Ihagee Vest-Pocket Exakta of 1933. Development of the 35 mm SLR The first 35mm prototype SLR was the "Filmanka", developed in 1931 by A. Min in the Soviet Union. In 1933 A.O. Gelgar developed the "GelVeta", also in the USSR, later renamed Спорт ("Sport"). The "Sport" was an elegant design with a 24mm × 36mm frame size but, according to some sources, did not enter the market until 1937, although evidence is now emerging that it may have been in limited production before 1936. Its claim to be the first mass-produced 35mm SLR is therefore disputed. Early innovations Early 35 mm SLR cameras had functionality that was similar to larger models, with a waist-level ground-glass viewfinder and a mirror that remained in the taking position and blacked out the viewfinder after an exposure, and then returned when the film was wound on. Innovations that transformed the SLR were the pentaprism eye-level viewfinder and the instant-return mirror, which flipped briefly up during exposure and immediately returned to the viewfinding position. The half-silvered fixed pellicle mirror, without even the brief blackout of the instant-return mirror, was also innovative but did not become standard. Through-the-lens light metering was another important advance. As electronics advanced, new functionality, discussed below, became available. Exakta The first 35mm format SLR in large-scale production was the Ihagee Kine Exakta, produced in 1936 in Germany, which was fundamentally a scaled-down Vest-Pocket Exakta. This camera used a waist-level finder.
Various other models were produced, such as the Kine-Exakta, the Exakta II, the Exakta Varex (featuring an interchangeable pentaprism eye-level viewfinder and identified in the United States as the 'Exakta V'), the Exakta Varex VX (identified in the United States as the 'Exakta VX'), the Exakta VX IIa, the Exakta VX IIb, the Exakta VX500 and the Exakta VX1000. Ihagee also manufactured less expensive cameras under the 'Exa' label, such as the Exa, the Exa Ia, the Exa II, the Exa IIa, the Exa IIb (which was generally not considered part of the "official" Exa line), and the Exa 500. The Exakta sold well and prompted other camera manufacturers to develop 35mm SLRs. Sales were particularly strong in the medical and scientific fields. A large range of lenses and accessories were made by a variety of manufacturers, turning the camera into one of the first system cameras, although motor drives and bulk loading backs were never produced by Ihagee. Rectaflex Rectaflex was the name of an Italian camera maker from 1947 to 1958. It was also the name of their sole model. The Rectaflex was a 35 mm SLR camera with a focal plane shutter, interchangeable lenses, and a pentaprism eye-level finder. The Rectaflex (followed by the Contax S) was the first SLR camera to introduce the modern pentaprism eye-level finder. The first prototype (Rectaflex 947) was presented in 1947, with a final presentation in April 1948 and the start of series production (A 1000) in September the same year, thus hitting the market one year before the Contax S, presented in 1949. Both were preceded by the Alpa-Reflex, first presented to a wider public in April 1944 at the Swiss Trade Fair in Basel (Schweizer Mustermesse). Alpa's production was slow up to 1945, and it lacked a pentaprism, so the viewfinder image was reversed left-to-right. Zeiss Zeiss had begun work on a 35mm SLR camera in 1936 or 1937. This camera used an eye-level pentaprism, which allowed eye-level viewing of an image oriented correctly from left to right.
Waist-level finders, however, showed a reversed image, which the photographer had to mentally adjust for while composing the image by looking downward and viewing and focusing. To brighten the viewfinder image, Zeiss incorporated a fresnel lens between the ground-glass screen and the pentaprism. This design principle became the conventional SLR design used today. World War II intervened, and the Zeiss SLR did not emerge as a production camera until Zeiss, in its newly created factory in East Germany, introduced the Contax S in 1949, with production ending in 1951. The Italian Rectaflex series 1000 went into series production the year before, in September 1948, thus being market-ready one year before the Contax. Both were historic progenitors of many later SLRs that adopted this arrangement. Praktiflex Praktica In 1939, Kamerawerk Niedersedlitz Dresden presented the Praktiflex at the Leipzig spring fair. The camera was a waist-level type with an M40x1 screw mount and a horizontal cloth focal-plane shutter. This camera set the pattern for most 35 mm SLR cameras, including later Japanese designs and today's digital SLRs. After the war, the Praktiflex was the most manufactured 35 mm SLR in Dresden, largely built for the Soviet Union as war reparations. KW changed to the M42 screw mount invented at Zeiss for the Contax S, later used by Pentax, Yashica and others, which became a near-universal mount. In 1949, the camera was redesigned with longer shutter speeds, and the name was changed to Praktica. In 1958, KW Niedersedlitz became part of VEB Kamera- und Kinowerke (the old Zeiss works), later VEB Pentacon. The Praktica was typically a consumer/amateur camera. Many developments were added, and it was produced until 2000.
1949 Praktica until Praktica V
1964 Praktica Nova
1969 Praktica L with vertical metal focal-plane shutter
1979 Praktica B with new bayonet mount
Highlights:
1956 Praktica FX2, second version, with internal camera stop-down aperture, a world standard for more than 20 years.
1959 Praktica IV with permanent eye-level pentaprism.
1964 Praktica V with instant-return mirror.
1965 Praktica Mat, the first European TTL semi-automatic (working-aperture) camera in production.
1966 Praktica Super TL with center-weighted TTL metering.
1969 Praktica LLC, the world's first camera with electric transmission of the aperture setting.
From 1952 to 1960 the KW factory/VEB Pentacon also produced the Praktina, a system SLR camera for professionals and advanced amateurs with a bayonet mount and focal-plane shutter, but production was ended, partly for political reasons. The Praktica was the camera that could be sold outside the DDR and bring foreign currency into the country. Edixa Another German manufacturer, Edixa, was a brand of camera manufactured by Wirgin Kamerawerk, based in Wiesbaden, West Germany. This company's product line included 35mm SLR cameras such as the Edixa Reflex, which featured a Steinheil 55mm f/1.9 Quinon lens and an Isco Travegar 50mm f/2.8 lens, the Edixamat Reflex, the Edixa REX TTL, and the Edixa Electronica. The removable pentaprism could be swapped for a waist-level viewfinder with a pop-up magnifier. The lens mount was the same screw thread as the Praktica's. Rise of the Japanese SLRs The earliest Japanese SLR for roll film was perhaps the Baby Super Flex (or Super Flex Baby), a 127 camera made by Umemoto and distributed by Kikōdō from 1938. This camera had a leaf shutter, but two years later came the Shinkoflex, a 6×6 camera made by Yamashita Shōkai, with a focal-plane shutter and interchangeable lenses. However, Japanese camera makers concentrated on rangefinder and twin-lens reflex cameras (as well as simpler viewfinder cameras), similar to those of the Western makers. Pentax The Asahi Optical Company took a different manufacturing path, inspired by the German SLRs. Its first model, the Asahiflex I, existed in prototype form in 1951 and entered production in 1952, making it the first Japanese-built 35mm SLR. The Asahiflex IIB of 1954 was the first Japanese SLR with an instant-return mirror.
Previously, the mirror would remain up and the viewfinder black until the user released the shutter button. In 1957, the Asahi Pentax became the first Japanese fixed-pentaprism SLR; its success led Asahi to eventually rename itself Pentax. This was the first SLR to use the right-hand single-stroke film advance lever of the Leica M3 of 1954 and Nikon S2 of 1955. Asahi (starting with the Asahi Pentax) and many other camera makers used the M42 lens mount from the Contax S, which came to be called the Pentax screw mount. Pentax is now part of Ricoh. Miranda Orion (later renamed Miranda) sold its SLR camera in Japan from August 1955 with the launch of the Miranda T. The camera was narrowly the first Japanese-made pentaprism 35mm SLR. It featured a removable pentaprism for eye-level viewing; with the prism removed, the camera could be used with a waist-level finder. Yashica The Yashica Company introduced its own SLR in 1959, the Pentamatic, an advanced, modern 35mm SLR camera with a proprietary bayonet mount. The Pentamatic featured an automatic stop-down diaphragm (offered only with the Auto Yashinon 50mm/1.8 lens), an instant-return mirror, a fixed pentaprism, and a mechanical focal-plane shutter with speeds of 1 to 1/1000 second, along with additional interchangeable lenses. Zunow The Zunow SLR, which went on sale in 1958 (in Japan only), was the first 35mm SLR camera with an automatic diaphragm, which stopped down to the preselected aperture upon release of the shutter. (This invention had been anticipated by the 1954 Praktina FX-A, which featured a semi-automatic diaphragm that stopped down automatically but had to be opened manually after the exposure.) The automatic diaphragm feature eliminated one downside to viewing with an SLR: the darkening of the viewfinder screen image when the photographer selected a small lens aperture. The Zunow Optical Company also supplied the Miranda Camera Company with lenses for their Miranda T SLR cameras.
General operation of a 35 mm SLR A photographer using an SLR would view and focus with the lens diaphragm (aperture) fully open, then adjust the aperture just before taking the picture. Some lenses had manual diaphragms: the photographer had to take the camera down from the eye and look at the aperture ring to set it. A "pre-set" diaphragm had two aperture rings next to each other: one could be set in advance to the aperture needed for the picture while the other ring controlled the diaphragm directly. Turning the second ring all the way clockwise gave full aperture; turning it all the way counterclockwise gave the preset shooting aperture, speeding up the process. Such lenses were commonly made into the 1960s. A lens with an "automatic" diaphragm allows the photographer to forget about closing the diaphragm to shooting aperture; such diaphragms have been taken for granted for decades. Usually this means a pin or lever on the back of the lens is pushed or released by a part of the shutter release mechanism in the camera body; the external automatic diaphragms on lenses for Exakta and Miranda cameras were the exception. Some lenses had "semi-automatic" diaphragms that closed to shooting aperture like an automatic diaphragm but had to be re-opened manually with a flip of a ring on the lens. When the shutter release is pressed, the mirror flips up against the viewing screen, the diaphragm closes down (if automatic), the shutter opens and closes, the mirror returns to its 45-degree viewing position (on most or all 35 mm SLRs made since 1970) and the automatic diaphragm re-opens to full aperture. Most but not all SLRs had shutters behind the mirror, next to the film; if the shutter was in or immediately behind the lens, it had to be open for viewing before the photographer pressed the release, and then had to close, open for the exposure, and close again. Standardization of designs In the following 30 years the vast majority of SLRs standardized the layout of the controls.
The film was transported from left to right, so the rewind crank was on the left, followed in order by the pentaprism, shutter speed dial, shutter release, and the film advance lever, which in some cameras was ratcheted so that multiple strokes could be used to advance the film. Some cameras, such as Nikon's Nikkormat FT cameras (marketed under the brand name 'Nikomat' in Japan) and some models of the Olympus OM series, deviated from this layout by placing the shutter speed control as a ring around the lens mount. Miranda Camera Company Miranda produced early SLRs in the 1950s, initially manufactured with external auto-diaphragms, later adding a second mount with an internal auto-diaphragm. Miranda's cameras with the external diaphragm included the Miranda Sensorex line. The internal auto-diaphragm Miranda cameras consisted of the Miranda 'D', the popular Miranda 'F', the 'FV' and the 'G' model, which had a larger than normal reflex mirror, thereby eliminating viewfinder image vignetting when the camera was used with long telephoto lenses. Miranda cameras were known in some photographic discussions as 'the poor man's Nikon'. Periflex One unique brand of cameras was the Corfield Periflex made by K. G. Corfield Ltd in England. Three models were produced from 1957, all of which used a retractable periscope inserted into the light path for focussing through the single lens. Pressing the shutter release moved the spring-loaded periscope out of the film path before the focal-plane shutter operated. Minolta Minolta's first SLR, the SR-2, was introduced to the export market in the same year (in fact, at the same Philadelphia show as the Canon and Nikon products) but had been on sale in Japan since August 1958. Lenses started with the designation 'Rokkor'.
With the introduction of the SRT-101, the lenses added the designation 'MC' for 'meter-coupled', and later 'MD' when the Minolta XD-11 was introduced with full-program mode. Minolta was taken over in 2003 by Konica, to form 'Konica-Minolta'. Konica-Minolta sold its imaging division to Sony in January 2006. Nikon F Nikon's 'F' model, introduced in April 1959 as the world's first system camera (if the commercially unsuccessful Praktina is not considered), became enormously successful and was the camera design that demonstrated the superiority of the SLR and of the Japanese camera manufacturers. This camera was the first SLR system adopted and used seriously by the general population of professional photographers, especially by those photographers covering the Vietnam War, and by news photographers utilizing motor-driven Nikon F's with 250-exposure backs to record the various launches of the space capsules in the Mercury, Gemini, and Apollo space programs in the 1960s. After the introduction of the Nikon F, the more expensive rangefinder cameras (those with focal plane shutters) became less attractive. It was a combination of design elements that made the Nikon F successful. It featured interchangeable prisms and focusing screens; the camera had a depth-of-field preview button; the mirror had lock-up capability; it featured a large bayonet mount and a large lens release button; a single-stroke ratcheted film advance lever; a titanium-foil focal plane shutter; various types of flash synchronization; a rapid rewind lever; and a fully removable back. It was a well-made, extremely durable camera, and adhered closely to the then-current, successful design scheme of the Nikon rangefinder cameras. Instead of the M42 screw mount used by Pentax and other camera manufacturers, Nikon had introduced the three-claw F-mount bayonet lens mount system, which is still current in a modified form today.
The focal plane shutter, unlike those of other SLRs of the period, which used cloth for the focal-plane shutter (a design in which bright sunlight could burn a hole in the cloth during mirror lock-up), used titanium foil rated for 100,000 shutter release cycles (according to Nikon). The F was also a modular camera: various assemblies, such as the pentaprisms, the focusing screens, the special 250-exposure 35mm roll film back and the Speed Magny film backs (two models: one using the Polaroid 100 (now 600) type pack films, and another designed for 4×5 film accessories, including Polaroid's own 4×5 instant film back), could be fitted and removed, allowing the camera to adapt to almost any particular task. It was the first 35 mm camera offered with a successful motor drive system. Unlike most of the other manufacturers involved in 35mm camera production, Nikon released the F with a full range of lenses from 21 mm to 1000 mm focal length. Nikon was also among the first to introduce what are commonly known today as 'mirror lenses' – lenses with catadioptric designs, which allowed the light path to be folded and thus yielded lenses more compact than standard telephoto designs. Subsequent top-of-the-line Nikon models carried on the F series, which has reached the F6 (although this camera has a fixed pentaprism). With the introduction and continued improvement of digital photography, the Nikon F6 is likely to be the last of the flagship Nikon F-line film SLRs. Canon In May 1959, the Canonflex SLR was introduced. The camera featured a quick-return mirror and an automatic diaphragm, and was introduced with an interchangeable black pentaprism housing. It also featured newly developed 'R' series breech-lock mount lenses. This SLR was superseded by the Canonflex RM, a fixed-prism SLR which featured a built-in selenium cell meter.
Later came the Canonflex R2000, with a top shutter speed of 1/2000 of a second; this model, too, was superseded by the Canonflex RM. In 1962, FL series lenses were introduced along with a new camera body, the Canon FX, which had a built-in CdS light meter positioned on the front left side of the camera, a design which appeared much like that of the Minolta SR-7. Olympus Pen F The Olympus Pen F series was introduced by Olympus of Japan, with the original Pen F produced between 1963 and 1966. The system later included the behind-the-lens metering Pen FT (1966–1972) and a non-metered version of the FT, the Olympus Pen FV, manufactured from 1967 to 1970. The design considerations used were unusual. The camera produced a half-frame 35 mm negative; it used a Porro prism in place of the conventional pentaprism, producing the 'flat top' appearance; and the view through the viewfinder was of 'portrait' orientation (unlike standard 35mm SLRs, which had 'landscape' orientation). These half-frame cameras were also exceptional in that all used a rotary shutter, rather than the traditional horizontally travelling focal-plane shutter commonly used in other SLR camera designs. The camera was produced with various interchangeable lenses. The smaller image format made the Pen F system one of the smallest SLR camera systems ever made. Only the Pentax Auto 110 was smaller, but the Pentax system was much more limited in its range of lenses and accessories. Introduction of light metering Professional photographers of the 1940s and 1950s preferred to use hand-held meters, such as the Weston or GE selenium cell light meters, which were common during this period. These hand-held meters did not require any batteries and provided good analog readouts of shutter speeds, apertures, ASA (now referred to as 'ISO') and EV (exposure value).
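The EV (exposure value) scale mentioned above ties shutter speed and aperture together: at a given film speed, every combination of aperture N and exposure time t with the same N²/t produces the same exposure. A minimal sketch in Python (the formula is the standard definition; the sample settings are illustrative, not from the article):

```python
import math

def exposure_value(aperture, shutter_time):
    """Exposure value at ISO 100: EV = log2(N^2 / t)."""
    return math.log2(aperture ** 2 / shutter_time)

# f/8 at 1/125 s and f/5.6 at 1/250 s land on (almost) the same EV,
# which is why one meter reading maps to a whole family of
# equivalent shutter-speed/aperture settings.
print(round(exposure_value(8.0, 1 / 125), 1))   # ~13.0
print(round(exposure_value(5.6, 1 / 250), 1))   # ~12.9
```

A hand-held meter's analog dial is essentially this function inverted: one measured EV is displayed as a ring of equivalent aperture and shutter-speed pairs.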
The light sensitivity of a selenium cell, however, could easily be judged simply by looking at the size of the cell's metering surface: a small surface meant it lacked low-light sensitivity. Small selenium cells would therefore prove useless for in-camera light metering. Built-in light metering with SLRs started with clip-on selenium cell meters. One such meter was made for the Nikon F, coupling to the shutter speed dial and the aperture ring. While the selenium cell area was big, the add-on made the camera look clumsy and unattractive. For built-in light metering to be successful in SLR cameras, the use of cadmium sulfide (CdS) cells was imperative. Some early SLRs featured a built-in CdS meter, usually on the front left side of the top plate, as in the Minolta SR-7. Other manufacturers, such as Miranda and Nikon, introduced a CdS prism which fitted to their interchangeable-prism SLR cameras. Nikon's early Photomic finder had a cover in front of the cell; the cover was raised, a reading was taken, and the photographer would turn the coupled shutter speed dial and/or the coupled aperture ring to center a galvanometer-based meter needle shown in the viewfinder. The disadvantage of this early Photomic prism finder was that the meter had no ON/OFF switch, so the meter was constantly 'ON', draining battery power. A later Photomic housing had an ON/OFF switch on the pentaprism. CdS light meters proved more sensitive to light, and thus metering in available-light situations became more prominent and useful. Further advances in CdS sensitivity, however, were needed, as CdS cells suffered from a 'memory effect': if exposed to bright sunlight, the cell would require many minutes to return to normal operation and sensitivity. Through-the-lens metering Through-the-lens metering measures the light that comes through the camera lens, thus eliminating much of the potential for error inherent in separate light meters.
It is of particular advantage with long telephoto lenses, macro photography, and photomicrography. The first SLRs with through-the-lens metering were introduced by Japanese manufacturers in the early to mid-1960s. Nikon F and F2 with interchangeable photomic prisms The Nikon F was delivered from 1962 with various pentaprism metering heads. The Photomic series of prisms was initially designed with a directly coupled metering CdS photocell (two models were produced). The Photomic prism head later evolved into the Photomic T of 1965, a behind-the-lens metering prism head which metered an averaging pattern across the focusing screen. The later center-area reading Photomic Tn concentrated 60% of its sensitivity in the central portion of the focusing screen and the remaining 40% in the outlying screen area. The Photomic FTn was the last of the Photomic finders for the Nikon F. In 1971, the Nikon F2 was introduced. It had a more streamlined body, a better mirror-locking system, a top shutter speed of 1/2000 of a second, and was introduced with its own proprietary, continually improving Photomic meter prism heads. This camera's construction was mechanically superior to the F, with some models using titanium for the top and bottom cover plates, and it offered slower shutter speeds via the self-timer mechanism. All Nikon F and F2 Photomic prism heads coupled to the shutter speed dial of the respective camera, and also to the aperture ring via a coupling prong on the diaphragm ring of the lens. This design feature was incorporated into most Auto Nikkor lenses of that time. Nikon technicians can still install a coupling prong on D type Auto Nikkor lenses so that these newer lenses will fully couple and operate with the older Nikon camera bodies. This is not possible with the G type Auto Nikkor lenses and lenses with the DX designation.
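The Photomic Tn's 60/40 center-weighted pattern amounts to a weighted average of the readings from the two screen areas. A toy sketch of the idea (the function and the sample EV readings are hypothetical, for illustration only):

```python
def center_weighted_ev(center_ev, surround_ev, center_weight=0.6):
    """Weighted average of two meter readings, in the style of the
    Photomic Tn: 60% sensitivity in the central area of the focusing
    screen, 40% in the surrounding area."""
    return center_weight * center_ev + (1 - center_weight) * surround_ev

# A brighter central subject (EV 12) against a darker surround (EV 10)
# meters as EV 11.2, biased toward the subject, rather than a plain
# 50/50 average of EV 11.0.
reading = center_weighted_ev(12.0, 10.0)
```

Full averaging metering, as in the earlier Photomic T, is simply the special case where both areas carry equal weight.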
Pentax – the Spotmatic Pentax was the first manufacturer to show a prototype camera with a behind-the-lens spot-metering CdS meter system, the Pentax Spotmatic, in 1961. Production Spotmatics, however, did not appear until mid-to-late 1964, and these models featured an averaging meter system. Topcon – the RE Super Tokyo Optical's Topcon RE Super (Beseler Topcon Super D in the US), however, preceded Pentax into production in 1963. Topcon cameras used behind-the-lens CdS (cadmium sulfide cell) light meters which were integrated into a partially silvered area of the mirror. Minolta – the SRT-101 with contrast light compensation Japanese-made SLRs from the mid-1960s included the 1966 Minolta SRT-101, and later the SRT-202 and 303 models, which used Minolta's own version of behind-the-lens metering, which they referred to as CLC (contrast light compensation). Miranda and other camera manufacturers Other camera manufacturers followed with their own behind-the-lens meter camera designs in order to compete in the marketplace. Miranda's Sensomat, unlike most other systems, used a behind-the-lens meter system built into the pentaprism itself. Other Miranda 35mm SLR cameras could be adapted to behind-the-lens capability through the use of a separate pentaprism which included coupled or non-coupled built-in CdS meters. Miranda had a second camera line, the Sensorex models, which had an externally coupled auto diaphragm. Sensorex camera bodies had built-in meters, and these evolved to include TTL and 'EE' capability. The 1970s – improvements in design, light metering and automation Design One of the most significant designs of the seventies for the 35mm SLR camera industry was the introduction of the Olympus OM-1 in 1973.
After experiencing success with their small Olympus Pen half-frame cameras, particularly the half-frame SLR-based Olympus Pen F, Pen FT and Pen FV cameras, Olympus and its chief designer Yoshihisa Maitani set out to create a compact SLR, the M-1, with new compact lenses and a large bayonet mount that could accept almost any SLR design optic. Shortly after being launched, the camera was renamed the OM-1 to avoid a trademark conflict with Leica. The mechanical, manual OM-1 was significantly smaller and lighter than contemporary SLRs, but no less functional. The camera was supported by one of the most comprehensive 35 mm SLR lens and accessory systems available. Maitani decreased the size and weight by totally redesigning the SLR from the ground up, with unprecedented use of metallurgy, which included repositioning the shutter speed selector to the front of the lens mount instead of the more conventional position on top of the body. 'Off-the-film' electronic flash metering Olympus – the OM-2 Olympus made another significant advance with the OM-2 in 1975, featuring aperture-priority automatic exposure with the world's first off-the-film-plane available-light metering and off-the-film (which Olympus referred to as 'OTF') flash metering systems. By metering light in real time off the film plane, the OM-2 was able to adjust exposure if light levels changed during the exposure. By eliminating flash metering via a built-in photocell on the flash unit, the OTF system was able to meter more accurately, and it also significantly simplified multi-flash shooting, as it was no longer necessary to calculate and factor in exposure for multiple light sources. This system was especially valuable in photomacrography (macrophotography) and photomicrography (microphotography).
The Olympus OM System was further enlarged; its Zuiko lenses gained a reputation as being among the sharpest lenses in the world, and in the 1980s Olympus added further improvements by replacing the OM-1 and OM-2 cameras with the OM-3, a mechanical manual SLR, and the OM-4, an automatic model, both of which featured multi-spot metering capabilities. These cameras were further improved into the last of the OM SLRs, the titanium-bodied OM-3Ti and OM-4Ti, which introduced the world's fastest electronic flash synchronization speed, 1/2000 second, with their new Full-Synchro strobe-based flash technology. Gradually, other manufacturers incorporated this feature into their own SLR camera designs. Programmed autoexposure By 1974, the autoexposure SLR brands had aligned into two camps (shutter-priority: Canon, Konica, Miranda, Petri, Ricoh and Topcon; aperture-priority: Asahi Pentax, Chinon, Cosina, Fujica, Minolta, Nikkormat and Yashica), supposedly based on the superiority of their chosen mode. (In reality, the split was based on the limitations of the electronics of the time and the ease of adapting each brand's older mechanical designs to automation.) These AE SLRs were only semi-automatic. With shutter-priority control, the camera would set the lens aperture after the photographer chose a shutter speed to freeze or blur motion. With aperture-priority control, the camera would set the shutter speed after the photographer chose a lens aperture f-stop to control depth of field (focus). Canon – the A-1 Perhaps the most significant milestone of the 1970s era of SLR computerization was the 1978 release of the Canon A-1, the first SLR with a "programmed" autoexposure mode.
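Conceptually, a programmed mode walks a "program line": from the metered exposure value it selects both a shutter speed and an aperture from the camera's standard stop scales. The sketch below is illustrative only; the selection logic is a simplification of the idea, not Canon's actual algorithm, and the stop tables are assumed:

```python
import math

# Standard full stops (assumed scales, ISO 100 metering).
SHUTTER_TIMES = [1/1000, 1/500, 1/250, 1/125, 1/60, 1/30, 1/15, 1/8]
APERTURES = [1.4, 2.0, 2.8, 4.0, 5.6, 8.0, 11.0, 16.0]

def program_exposure(metered_ev):
    """Pick the shutter-speed/aperture pair whose exposure value,
    EV = log2(N^2 / t), best matches the metered value."""
    return min(
        ((t, n) for t in SHUTTER_TIMES for n in APERTURES),
        key=lambda tn: abs(math.log2(tn[1] ** 2 / tn[0]) - metered_ev),
    )

t, n = program_exposure(13.0)  # e.g. a bright overcast scene
```

A real program line additionally biases the choice, for example toward shutter speeds fast enough to hand-hold, but the compromise between the two semi-automatic modes is the same in spirit.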
Although the Minolta XD11 was the first SLR to offer both aperture-priority and shutter-priority modes in 1977, it was not until the next year that the A-1 came out with a microprocessor computer powerful enough to offer both of those modes and add the ability to automatically set both the shutter speed and lens aperture in a compromise exposure from light meter input. Programmed autoexposure, in many variations, became a standard camera feature by the mid-1980s. This is the order of first introduction of 35 mm SLRs, by brand, with a computer programmed autoexposure mode, before the rise of autofocus (see next section): 1978, Canon A-1 (plus AE-1 Program, 1981 and T50, 1983); 1980, Fujica AX-5; 1980, Leica R4; 1981, Mamiya ZE-X; 1982, Konica FP-1; 1982, Minolta X-700; 1982, Nikon FG (plus FA, 1983); 1983, Pentax Super Program (plus Program Plus, 1984 and A3000, 1985); 1983, Chinon CP-5 Twin Program (also first with two program modes); 1984, Ricoh XR-P (tied with Canon T70 as first with three program modes); 1985, Olympus OM-2S Program; 1985, Contax 159MM; 1985, Yashica FX-103. Of the brands active in the mid-1970s, Cosina, Miranda, Petri, Praktica, Rolleiflex, Topcon and Zenit never introduced programmed 35 mm SLRs; usually the inability to make the transition forced the company to quit the 35 mm SLR business altogether. Note that the Asahi Pentax Auto 110, Pentax Auto 110 Super (Pocket Instamatic 110 SLRs from 1978 and 1982) and Pentax 645 (a 645 format SLR from 1985) also had programmed autoexposure. Autofocus revolution Autofocus compact cameras had been introduced in the late 1970s. The SLR market of the time was crowded, and autofocus seemed an excellent option to attract novice photographers. The first autofocus SLR was the 1978 Polaroid SX-70 SONAR OneStep. It used an ultrasonic autofocus system called SONAR. 
The first 35 mm SLR (the SX-70 was not 35 mm) with autofocus capability was the Pentax ME F of 1981 (using a special autofocus lens with an integral motor). In 1981 Canon introduced a self-contained autofocus lens, the 35–70 mm AF, which contained an optical triangulation system that would focus the lens on the subject in the exact center when a button on the side of the lens was pushed. It would work on any Canon FD camera body. Nikon's F3AF was a highly specialized autofocus camera. It was a variant of the Nikon F3 that worked with the full range of Nikon manual focus lenses, but also featured two dedicated AF lenses (an 80 mm and a 200 mm) that coupled with a special AF viewfinder. F3AF lenses were only supported by the F3AF, the F501, and the F4. Nikon's later AF cameras and lenses used an entirely different design. These cameras, and other experiments in autofocus from other manufacturers, had limited success. Minolta – the Maxxum 7000 The first 35mm SLR autofocus camera with a truly successful design was the Minolta Dynax/Maxxum 7000, introduced in 1985. This SLR featured a built-in motor drive and dedicated flash capability. Minolta also introduced a completely new bayonet mount lens system, the Maxxum AF lens system (currently known as the Sony A-Mount), which was incompatible with its previous MD-bayonet mount system. The lenses' focusing action was driven by a motor in the camera body, which moved mechanical complexity out of the lenses and into the body. Canon responded with the T80 and a range of three motor-equipped AC lenses, but this was regarded as a stopgap move. Nikon introduced the N2020 (known in Europe as the Nikon F-501), which was their first SLR with a built-in autofocus motor, and redesigned autofocus Auto Nikkor lenses. Nikon's AF lenses, however, remained compatible with older Nikon 35mm SLR cameras, and older manual focus Nikon lenses could be used with varying degrees of compatibility on the new AF cameras.
Canon – the new EOS System In 1987, Canon followed Minolta in introducing a new lens-mount system, incompatible with its previous mount: EOS, the Electro-Optical System. Unlike Minolta's motor-in-body approach, this design located the motor within the lens. New, more compact motor designs meant that both focus and aperture could be driven electrically without motor bulges in the lens. The Canon EF lens mount has no mechanical linkages; all communication between body and lens is electronic. Nikon and Pentax Nikon and Pentax both chose to extend their existing lens mounts with autofocus capability, retaining the ability to use older manual-focus lenses with an autofocus body, and driving the lens focus mechanism with a motor inside the camera. Later, Nikon added Silent Wave Motor (SWM) mechanisms into its lenses, supporting both focusing schemes until the introduction of the entry-level Nikon D40 (2006) and Nikon D40X (2007). Pentax introduced its Supersonic Drive Motor (SDM) in 2006 with the Pentax K10D model and two lenses (DA*16-50/2.8 AL ED [IF] SDM and DA*50-135/2.8 ED [IF] SDM). Since then, all Pentax DSLRs have supported both SDM and the motor inside the body, and earlier SDM lenses support both systems as well. The first SDM lens that did not support the old focusing system was the DA 17-70/4 AL [IF] SDM (2008). Consolidation to autofocus and the transition to digital photography The major 35mm camera manufacturers Canon, Minolta, Nikon, and Pentax were among the few companies to transition successfully to autofocus. Other camera manufacturers also introduced functionally successful autofocus SLRs, but these cameras were not as successful commercially, and some manufacturers eventually withdrew from the SLR market. Nikon still markets its manual-focus SLR, the FM10. Olympus continued production of its OM system camera line until 2002. Pentax also continued to produce the manual-focus LX until 2001.
Sigma and Fujifilm also managed to continue manufacturing cameras, although Kyocera ended production of its (Contax) camera systems in 2005. The newly formed Konica Minolta sold its camera business to Sony in 2006. Arrival of digital photography In the 2000s, film was supplanted by digital photography, which had a huge impact on all camera manufacturers, including the SLR market. Nikon, for instance, ceased production of all film SLRs except for its flagship 35 mm SLR film camera, the F6, and the introductory-level Nikon FM10. Replacing film with a similar-sized digital sensor is possible, but expensive, because larger sensor areas imply a greater probability that a defect will render the sensor non-functional. Such "full frame" sensor digital SLRs (DSLRs), however, gained early popularity with professional photographers, who could both justify their initial high cost and retain the use of their investment in expensive 35 mm film lenses. By 2009, full-frame models such as the Canon EOS 1Ds and 5D, the Nikon D3 and D700, and the Sony Alpha A850 and Alpha A900, designed and priced for professionals, were available. As of 2017, several manufacturers have introduced more affordable 35 mm sensor SLRs, such as the Canon EOS 6D, the Nikon D610 and the new Pentax K-1. These cameras, while still positioned as premium products, all retail for less than US$3,000; significantly, all but the K-1 are priced below the manufacturer's top APS-C camera. In addition, the full-frame format is now found in Sony's MILC cameras and high-end fixed prime lens compacts, as well as Leica's M-mount digital rangefinders. SLRs designed for amateurs and consumers generally use APS-C sensors, which are significantly smaller than 35 mm film frames. These require either their own specialist lenses, or accepting a change in equivalent focal length and field-of-view angle when using lenses designed for the 35 mm format (wide-angle lenses become normal, normal lenses become short telephoto, etc.).
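The "equivalent focal length" shift described above follows from the ratio of sensor diagonals, usually called the crop factor. A small sketch, with an assumed APS-C sensor size of roughly 23.6 × 15.7 mm (sizes vary slightly by manufacturer):

```python
import math

def crop_factor(sensor_w, sensor_h, ff_w=36.0, ff_h=24.0):
    """Crop factor = 35 mm frame diagonal / sensor diagonal."""
    return math.hypot(ff_w, ff_h) / math.hypot(sensor_w, sensor_h)

def equivalent_focal_length(focal_mm, sensor_w, sensor_h):
    """35 mm-equivalent focal length of a lens on a smaller sensor."""
    return focal_mm * crop_factor(sensor_w, sensor_h)

# On an APS-C sensor (assumed ~23.6 x 15.7 mm, crop factor ~1.5),
# a 28 mm wide-angle frames the scene like a ~43 mm "normal" lens
# would on 35 mm film.
eq = equivalent_focal_length(28, 23.6, 15.7)
```

The lens itself is unchanged; only the captured field of view narrows, which is why a wide-angle design behaves like a normal lens and a normal lens like a short telephoto.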
During most of the 2000s, Panasonic and Olympus also marketed SLRs built around the now-defunct Four Thirds System, whose sensor was even smaller. Medium-format SLRs While twin-lens reflex cameras have been more numerous in the medium format film category, many medium-format SLRs have been (and some still are) produced. Hasselblad of Sweden has one of the best-known camera systems utilizing 120 and 220 film to produce 6 cm × 6 cm (2¼" × 2¼") negatives. They also produce other film backs: backs which produce a 6 cm × 4.5 cm image, a back which uses 70mm roll film, a Polaroid back for instant 'proofs', and even a 35mm film back. Pentax produces two medium-format SLR systems: the Pentax 645, which produces a 6 cm × 4.5 cm image, and the Pentax 67 series, which evolved from the Pentax 6×7 camera introduced in the late 1960s. These Pentax 6×7 series cameras resembled a huge 35mm SLR camera in look and function. In 2010 Pentax introduced a digital version of the 645, the 645D, with a Kodak-built 44 × 33 mm sensor. Bronica (which has discontinued camera production), Fuji, Kyocera (which has also ceased production of its Contax cameras), Mamiya, Rollei, Pentacon (former East Germany), and Kiev (former Soviet Union) have also produced medium format SLR systems for considerable periods of time. Mamiya produces what is termed a medium format digital SLR. Other medium-format SLRs, such as those from Hasselblad, accept digital backs in place of film rolls or cartridges, effectively converting their film designs to digital format use. Polaroid Corporation's SX-70 instant camera was one of the few folding SLRs ever produced. Future The vast majority of SLRs now sold are digital models, even though their size, form factor, and other design elements remain derived from their 35 mm film predecessors.
Whether a dedicated digital design such as the Olympus Four Thirds system, which permits equivalent performance with smaller and lighter cameras, will ultimately supersede the film-derived designs from Canon, Nikon, Pentax, and Sony is as yet unclear. Additionally, SLRs are facing a threat from the rapidly expanding mirrorless interchangeable-lens camera segment among all types of camera user. Chronology Significant SLR technology firsts (including optics peculiar to SLRs and important SLR evolutionary lines now extinct). Pre-19th century 1676 Johann Sturm (Germany) described the first known use of a reflex mirror in a camera obscura. The camera obscura was known to Aristotle as an aid in observing solar eclipses, but its use as an artist's aid was first expounded by Giambattista della Porta (Italy) in 1558. The reflex mirror corrected the up-down image reversal that could make using a non-SLR camera obscura disconcerting, but not the left-right reversal. 1685 Johann Zahn (Germany) developed a portable SLR camera obscura with focusable lens, adjustable aperture and translucent viewing screen. These are all the core elements in a modern SLR photographic camera, except for an image capture medium. It would not be until 1826/27 that Joseph Nicéphore Niépce (France) made the first permanent photograph, using a bitumen-photosensitized pewter plate in a non-SLR camera. All advances in photographic technology since then – mechanical, optical, chemical or electronic – have been convenience or quality improvements only. 18th century SLR camera obscuras were popular as drawing aids: the artist could trace over the ground glass image to produce a true-to-life realistic picture. 19th century 1861 Thomas Sutton (UK) received the first patent for an SLR photographic camera. An unknown but very small number were made; no known production model; no known surviving examples. The manually levered reflex mirror also served as the camera's shutter. Used glass plates.
1884 Calvin Rae Smith Monocular Duplex (USA): first known production SLR. Used glass plates (original model 3¼×4¼ inch, later 4×5 inch); many were adapted to use Eastman sheet film. Large-format glass plate or sheet film SLRs were the dominant SLR type until circa 1915. However, SLRs themselves were not commonplace until the 1930s. The Duplex's name was a reference to the SLR's one lens performing both viewing and imaging duties, in contrast to the two separate viewing and imaging lenses of the twin lens cameras (first production 1882 [Marion Academy; UK]; not necessarily twin-lens reflex [TLR] cameras, invented 1880 [one-of-a-kind Whipple-Beck camera; UK]) popular in the 1880s and 90s. 1891 Bram Loman Reflex Camera (Netherlands): first focal-plane shutter SLR. Had mirror rise synchronized with the release of a roller blind shutter, with speeds from ½ to 1/250 second, mounted internally in front of the focal plane, instead of the previously normal unsynchronized external accessory in front of the lens. An internal camera-mounted traveling-slit focal-plane shutter's main advantage over the competing interlens leaf shutter was the ability to use a very narrow slit to offer up to an action-stopping 1/1000 second shutter speed, at a time when leaf shutters topped out at 1/250 sec – although the available contemporaneous ISO 1 to 3 equivalent speed emulsions limited the opportunities to use the high speeds. Early 20th century 1903 Folmer & Schwing Stereo Graflex (USA): first (and only) stereo SLR. Strictly speaking, the Stereo Graflex was not a "single"-lens reflex camera because, as a stereo camera, it had two imaging lenses. However, it had a reflex mirror and a leather "chimney"-hooded waist-level finder typical of the era, albeit with dual eyepiece magnifiers. It took 5×7-inch glass dry plates. 1907 Folmer & Schwing Graflex No. 1A (USA): first medium format roll film SLR. Took eight exposures of 2½×4½ inch frames on 116 roll film.
Had folding waist level finder and focal-plane shutter. A sister SLR camera, the Graflex No. 3A, was released at about the same time. It took six 3¼×5½ inch "postcard" frames on 122 roll film. Roll film (usually 120 type) SLRs became the dominant SLR type in the 1930s. The various models of large and medium format Graflex SLRs made beginning in 1898, and culminating in the 4×5 inch sheet film Graflex Super D of 1948, are the best and most famous American-made SLRs, if only for the shortage of competition. Graflex quit the camera business in 1973. 1925 Ernemann (merged into Zeiss Ikon, 1926) Ermanox Reflex (Germany): first SLR with high speed lens (10.5 cm f/1.8 or 85mm f/1.8 Ernostar). Established the SLR as a viable photojournalist's available-light camera. Had folding waist level finder and focal-plane shutter. Used 4.5×6 cm glass plates or sheet film; adaptable to roll film. 1930s 1933 Ihagee VP Exakta (Germany): first 127 roll film SLR. Preliminary designs were on paper by June 1932. Took eight exposures of 4×6.5 cm (1⅝×2½ inch) nominal frames (40×62 mm actual frames) on 127 "Vest Pocket" roll film, and had a folding waist level finder and focal-plane shutter. The 1935 version was the first camera with a built-in flash synchronization socket, automatically synchronizing the recently invented flashbulb (first marketed as the Vacublitz in 1929) with its shutter. The VP also established the oblong body shape and handling soon to be standard in 35 mm SLRs, except that Exakta SLRs had primarily left-handed controls and were more trapezoidal than rectangular. 1934 Eichapfel Noviflex (Germany): first 2¼ square format, medium format roll film SLR. Took twelve exposures of 6×6 cm (2¼×2¼ inch) frames on 120 roll film. Also had a fixed lens and focal-plane shutter. The 1937 version had interchangeable lenses.
The square frame format precluded the awkward manipulations needed to take a vertical photograph with horizontal rectangular format SLRs having then standard waist-level viewfinders. The Noviflex was not commercially successful; it was the Franz Kochmann Reflex-Korelle (Germany) of 1935 that established the popularity of the 2¼ square format SLR. 1935 135 film, commonly called 35 mm film, introduced by Kodak (USA). Was (and is) 35 mm nominal width (1⅜ inch actual width), acetate base, double perforated film, pre-loaded into felt-lipped, daylight-loading cartridges ready-to-use for still cameras. Originally intended for Kodak Retina, Zeiss Ikon Contax and E. Leitz Leica 35 mm rangefinder cameras. Previously, bulk rolls of 35 mm motion picture film would need to be user cut and loaded, in complete darkness, into camera specific cartridges or magazines. The September 1936 release of Kodachrome (the first high speed [ISO 8 equivalent], realistic color film) in standardized 135 format (but not medium format roll film) spurred explosive growth in the popularity of all types of miniature format 35 mm cameras. The vast majority were not high-end SLRs or RFs, but basic amateur RFs such as the nearly three million selling Argus C3 (USA) of 1939. Originally, each US$3.50 (including processing) Kodachrome cartridge gave eighteen exposures if the camera used the 24×36 mm frame size (double the frame size of 35 mm cine cameras) established by the Multi-Speed Shutter Co. Simplex (USA) camera of 1914 and popularized by the E. Leitz Leica A (Germany) of 1925. The 24×36 mm frame size did not become the universal standard frame size until the early 1950s. Note that 135 film cameras using non-standard frame sizes, such as 24×18 mm or 24×24 mm, continued to be made into the early 1990s. Panoramic 135 film cameras using extra-wide aspect ratio frame sizes (up to 24×160 mm for the 360° revolving slit Globuscope [USA] of 1981) were still available in 2006. 
The Sport is the series production model of a prototype camera called the Gelveta, designed and built by A. O. Gelgar between 1934 and 1935. The Gelveta is the earliest known 35mm SLR camera to have been built, but fewer than 200 examples were made. The Sport was manufactured by the Soviet camera factory Gosudarstvennyi Optiko-Mekhanicheskii Zavod (GOMZ), the State Optical-Mechanical Factory in Leningrad. The camera name is engraved in Cyrillic on the finder housing above the lens: „Спорт“. The manufacturer's prism logo, in gold on black with the factory initials ГОМЗ (GOMZ), is shown behind a circular magnifying window on the top left camera front. An estimated 16,000 cameras were made. 1936 Ihagee Kine Exakta (Germany): first production 35 mm SLR, first system SLR, first interchangeable lens camera with bayonet lens mount. This was exhibited at the Leipzig Spring Fair in March and was in production by April 1936. Had left-handed shutter release and rapid film wind thumb lever, folding waist level finder and 12 to 1/1000 second focal-plane shutter. Well-integrated design with excellent interchangeable lenses and good accessory system. Fewer than 30,000 Kine Exaktas were made before World War II stopped production in 1940. Production of improved models re-started after the war, and Exakta was among the best known 35 mm SLR brands through the 1950s. 1936 E. Leitz PLOOT (Germany): first reflex housing for 35 mm rangefinder cameras. For use with a Leica IIIa RF and the Leitz 20 cm f/4.5 Telyt or 40 cm f/5 Telyt long focus lenses (all Germany). Long focus (and telephoto) lenses have very shallow depth of field, and the short-baseline rangefinders built into RF cameras cannot triangulate the subject distance accurately enough for acceptably sharp focusing. SLRs do not suffer from this problem, because they are focused by directly assessing the sharpness of the lens image – the lens serves as its own rangefinder.Goldberg, Camera Technology, pp. 12–26 Reflex housings converted RFs into very awkward SLRs by inserting a reflex mirror and focusing screen between the lens and camera. Some even had image reversing optics. They also solved the RF camera's parallax error problem in macrophotography. Eventually, real SLRs were recognized as the simpler solution and supplanted RFs in the 1960s. The last reflex housing for a film camera, the Leica Visoflex III (West Germany; for Leica M4 series RFs), was discontinued in 1984. 1937 Gosudarstvennyi Optiko-Mekhanichesky Zavod (GOMZ) Sport (Спорт; Soviet Union): a 35 mm (not 135 type) SLR apparently prototyped in 1935. However, sources are uncertain or conflict on the Sport's introduction date – a plurality say 1937. If it was sold in 1935, it would be the first 35 mm SLR. In any event, the Sport was not widely available and had no influence on later SLRs. "Contact Sheet: Slighting the Exacta?" p. 94, Popular Photography, Volume 64 Number 10; October 2000; Lea, pp. 99–101; Wade, Collector's Guide, p. 24. 1940s 1947 Gamma Duflex (Hungary): first instant return mirror SLR, first metal focal-plane shutter SLR, first internal semi-automatic lens diaphragm SLR. Also had a mirror "prism" viewfinder, an intermediate step to a solid pentaprism. Reflex mirrors coupled to the shutter release had been spring-actuated to rise automatically since the 19th century, but the viewfinder would remain blacked out until the mirror was manually cocked back down. With an automatic, instant return mirror, the viewfinder blackout time might be as short as ⅛ second. The semi-auto diaphragm closed the lens diaphragm with shutter release, but it needed to be manually re-cocked open. The Duflex was very ambitious but very unreliable, and it was Gamma's first and last production SLR. 1947 Rectaflex (Italy): first SLR camera equipped with a pentaprism for eye-level viewing. The first prototype of the Rectaflex was presented by Telemaco Corsi at the Milano Fair in April 1947.
It was a wooden mock-up, with a mirror eye-level finder. This first prototype used a five-facet roof optic prism giving a left to right inverted image. For vertical pictures, the image was upside down, and that was a big drawback. This was corrected with a Goulier prism before the 1948 Milano Fair. 1948 Hasselblad 1600F (Sweden): first 2¼ medium format system SLR suitable for professional use. Took twelve exposures of 2¼×2¼ inch (6×6 cm) nominal frames (56×56 mm actual frames) on 120 film. Had modular design accepting interchangeable lenses, film magazines and folding waist level finder. The 1/1600 second corrugated stainless steel focal-plane shutter was unreliable and was replaced by a slower but more reliable 1/1000 second focal-plane shutter in the Hasselblad 1000F (Sweden) of 1952.Lothrop & Schneider, "The SLR Saga (part 2)," p. 64Jason Schneider, "Classic Cameras; The Top 20 Cameras of All-Time Countdown: Schneider’s List, The Next Five—Do You Agree?" pp. 68–70, 130. Shutterbug, Volume 37 Number 7 Issue 452; May 2008. 1948 Telemaco Corsi of Rome showed the world's first working pentaprism SLR, the Italian Rectaflex, at the Milano Fair in April. Production of the preseries Standard 947 model started in June; series production of the model A 1000 started in September. The Alpa Prisma Reflex (Switzerland) had a pentaprism viewfinder in 1948, but its eyepiece was angled upward at 45°. 1949 VEB Zeiss Ikon (Dresden) Contax S (East Germany): second pentaprism eyelevel viewing 35 mm SLR.Matanle, pp. 54, 69–71, 85Jason Schneider, "The 10 most important cameras of the 20th century," p. 88Marc James Small and Charles M. Barringer. Zeiss Compendium: East and West – 1940 to 1972. Second Edition 1999. Small Dole, UK: Hove Books, 1995. . pp. 16, 140–143Wade, Short History. p. 106Spira, Lothrop and Spira, p. 163 First M42 screw mount camera. (The East German KW Praktica came out at about the same time.)
With earlier "waist level" SLR viewfinder systems (in which the photographer looks downward at the reflex mirror's image on the focusing screen), moving subjects are seen to track across the field of view in the reverse direction of their actual motion, making action shooting counter-intuitive. A pentaprism is an eight-sided (only five are of significance; the other three are cut off corners) chunk of glass silvered on three sides that collects, redirects and re-reverses the light from the mirror with minimal light loss.Michael J. Langford, Basic Photography: A Primer for Professionals. Third Edition. Garden City, NY: Amphoto/Focal Press Limited, 1973. . pp. 128–131 With a proper pentaprism, all a photographer needs to do is hold the camera up to eyelevel and everything is there.Ray, pp. 314–315, 318–319 The pentaprism SLR had first been proposed in the 19th century and was used in non-35 mm SLRs in the 1930s. Similar systems (or, in the 1990s, their cheaper alternative, the pentamirrorKimata and Schneider, p. 41) became so common in 35 mm SLRs by the late 1950s that it is the characteristic pentaprism "head" atop the camera body that defines the type for most people. 1950s 1950 Ihagee Exakta Varex (East Germany; called Exakta V in USA): first interchangeable viewfinder, first interchangeable focusing screens, first viewfinder condenser lens SLR.Matanle, pp. 52–54 Original viewfinder selection was waist-level or pentaprism. For the next half-century, interchangeable viewfinder customization was the signal feature of fully professional level SLRs, although interchangeable finders have not made the transition to digital SLRs. 1950 Angénieux 35mm f/2.5 Retrofocus Type R 1 (France): first retrofocus wide angle lens for 35 mm SLRs (for Exaktas).Herbert Keppler, "SLR: Are the sacrifices we make to use an SLR worth it?" pp. 27–28, 30, 34. Popular Photography, Volume 64 Number 6; June 2000. Regular wide angle lenses (meaning short focal length lenses) need to be mounted close to the film.
However, SLRs require that lenses be mounted far enough in front of the film to provide space for the movement of the mirror – the "mirror box." Therefore, the focal lengths of early 35 mm SLR lenses were no less than about 40 mm. This prompted the development of wide view lenses with more complex retrofocus optical designs. These use very large negative front elements to force back-focus distances long enough to ensure clearance.Kraszna-Krausz, pp. 1675–1676 Note, "retrofocus" was an Angénieux trademark before losing exclusive status. The original generic term is "inverted telephoto." A telephoto lens (multiple inventions, 1891) has a front positive group and rear negative group; retrofocus lenses have the negative group in front and positive group to the rear. The first inverted-telephoto imaging lens was the Taylor-Hobson 35mm f/2 (1931, UK) developed to provide back-focus clearance for the beamsplitter prism used by the three-negative full-color Technicolor motion picture process. Retrofocus wide angle prime lenses reached fields of view as wide as 118° with the Nikkor 13mm f/5.6 (Japan) lens for Nikon 35 mm SLRs in 1975, but they are extremely large compared to non-SLR short focal length lenses because of their gigantic negative elements.Steve Sint with Peter Moore, "Fantasy Glass: Longer than long, faster than fast, wider than wide, made of pure golden glass, optics of unbelievable promise and price are being produced by major lens makers. Are they all hype … or do they perform?" pp. 44–49. Modern Photography, Volume 48, Number 2; February 1984. 1952 Zenit (Soviet Union, Russia; Зенит): first Russian pentaprism eyelevel viewing 35 mm SLR. 1952 Asahiflex I (Japan): first Japanese 35 mm SLR. Had folding waist level finder and focal-plane shutter.Cecchi, pp. 32–33Jason Schneider, "Camera Collector: What was the first 35mm SLR made in Japan? The illustrious Asahiflex, proud precursor of the prestigious Pentax." pp. 25, 30.
Popular Photography, Volume 66 Number 1; January 2002. Wade, Collector's Guide. pp. 142–143 From 1952 to 1983, Asahi Optical (today called Pentax and owned by Ricoh) manufactured cameras exclusively of the SLR type, and it has made SLRs in the greatest variety of formats of any modern camera company – from 110 to 6×7 film, and today's digital. 1953 VEB Zeiss Ikon (Dresden) Contax E (East Germany): first built-in light meter SLR. Had an external selenium photoelectric cell mounted behind a door on the pentaprism housing, above the lens. The meter was uncoupled – the photographer had to wait until the meter stabilized and then manually set the shutter speed and lens aperture to match the indicated exposure reading. The first camera with a built-in meter (also uncoupled) was the Zeiss Ikon Contaflex (Germany) 35 mm twin-lens reflex (TLR) camera of 1935.Wade, Collector's Guide. pp. 23–24, 54 1953 Zeiss Ikon Contaflex I (West Germany): first leaf shutter 35 mm SLR. Had Synchro-Compur leaf shutter and front cell focusing 45mm f/2.8 Tessar lens,Matanle, pp. 193–194Wade, Collector's Guide. pp. 40–41 built-in selenium-coupled exposure meter. For many years, reliable focal-plane shutters were very expensive and SLRs equipped with Compur or Prontor leaf shutters were strong competitors.Small and Barringer, pp. 50–60, 160 As FP shutters improved, their faster available speeds won out in the late 1960s and leaf shutter 35 mm SLRs disappeared around 1976.Lea, "Mamiya 528 AL" p. 139 1953 Metz/Kilfitt Mecaflex (West Germany): first (and only) square format 35 mm SLR. Took up to fifty exposures of 24×24 mm frames on 135 film. A compact Prontor leaf shutter design with bayonet mount interchangeable lenses.Wade, Collector's Guide. pp. 44–47 135 film's standard 24×36 mm frame size is inefficient. Its 3:2 aspect ratio is too wide, recording only 59% of a required 43.3 mm diameter lens image circle. This makes lenses for the format overly large for the image area.
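The 59% figure follows directly from the frame geometry: a lens must cover an image circle whose diameter equals the frame diagonal, and a rectangular frame records only part of that circle's area. A minimal sketch of the arithmetic (the function name is my own):

```python
import math

def circle_coverage(width_mm, height_mm):
    """Fraction of the required image circle actually recorded by
    a rectangular film frame of the given dimensions."""
    diagonal = math.hypot(width_mm, height_mm)   # image circle diameter
    circle_area = math.pi * diagonal ** 2 / 4
    frame_area = width_mm * height_mm
    return frame_area / circle_area

# Standard 135 frame: 24x36 mm needs a ~43.3 mm circle, ~59% of
# whose area the frame actually uses.
coverage = circle_coverage(24, 36)
```

Applying the same function to a 24×24 mm frame gives roughly 64% of a 33.9 mm circle, the square-format figures quoted for the Mecaflex.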
A square 24×24 mm frame maximizes coverage at 64% of a smaller 33.9 mm image circle. The Mecaflex's designer, Heinz Kilfitt, also designed the Robot (Germany) of 1934, the first 24×24 mm 35 mm (not 135 type) camera. Both failed to disturb the entrenched rectangular format and the 3:2 ratio still dominates digital SLRs. Olympus' Four-Thirds System digital format of 2002 is the latest attempt at a narrower, albeit not square, format. Note that dual 24×24 mm frames on 135 film were used by the non-SLR David White Stereo Realist (USA, 1947), leader of the 1950s stereo photography fad. 1954 Asahiflex IIB (Japan; called Sears Tower 23 in USA): first SLR with reliable instant return mirror.Lothrop & Schneider. "The SLR Saga (part 2)," pp. 50–51Schneider, "First 35mm SLR made in Japan?" pp. 25, 30 1954 Praktina FX (East Germany): first available spring powered motor drive accessory for SLR, first breech-lock lens mount. 1954 Tokiwa Seiki Firstflex 35 (Japan): first interchangeable lens, leaf shutter 35 mm SLR. Otherwise a wholly forgettable camera; cheaply made to low specifications and of poor quality, with waist level finder. 1955 Miranda T (Japan): first Japanese pentaprism eyelevel viewing 35 mm SLR.Matanle, pp. 171–172 Note that the Tokiwa Seiki Pentaflex (Japan), a modified Firstflex 35 (see above), had an eyelevel viewfinder four months before the Miranda, but using a porroprism. Orion Seiki (company renamed Miranda Camera in 1956) produced a versatile SLR system in the 1960s, called by some "the poor man's Nikon," but was unable to keep up with the rapid advances in electronics of the 1970s and went bankrupt in December 1976. 1955 Kilfitt 4 cm f/3.5 Makro-Kilar (West Germany/Liechtenstein): first close focusing "macro" lens for 35 mm SLRs (for Exaktas and others).
Version D focused from infinity to 1:1 ratio (life-size) at two inches; version E, to 1:2 ratio (half life-size) at four inches.Stephen Gandy, "1st 35mm SLR MACRO LENS: Kilfitt Makro-Kilar of 1955: infinity to 1:2 or 1:1" retrieved 5 January 2006 Because SLRs do not suffer from parallax error due to the offset between the taking lens and a viewfinder lens, they are far superior for close-up photography to cameras with other optical viewfinder systems (though the viewfinder screens on digital cameras also show the image as seen by the taking lens). Most SLR lens lines continue to include macro lenses optimized for high magnification, although their focal lengths tend to be longer than the original Makro-Kilar to allow more working distance. "Macro zoom" lenses began appearing in the 1970s, but traditionalists object to calling most of them macro because they usually do not focus closer than 1:4 ratio, with relatively poor image quality.Lester Lefkowitz, "Lenses: Facts and Fallacies," pp. 75–98. Modern Photography, Volume 47, Number 9; September 1983. . p. 95 1956 Zeiss Ikon Contaflex III (West Germany): first high-quality leaf shutter 35 mm pentaprism SLR with interchangeable lens attachments and built-in selenium exposure meter. Was an improved Contaflex I (see above) with a redesigned Carl Zeiss unit-focusing Tessar lens,Small and Barringer, pp. 50–52 whose front element could be removed and replaced with a set of Pro-Tessar lenses. 1957 Asahi Pentax (Japan; called Sears Tower 26 in USA): first SLR with right-handed rapid-wind thumb lever, first fold-out film rewind crank, first microprism focusing aid. First Asahi SLR with M42 screw mount. Established the "modern" control layout of the 35 mm SLR. Well-integrated focal-plane shutter, instant return mirror and pentaprism design.Matanle, p.
118 1957 Hasselblad 500C (Sweden): replaced the Hasselblad 1600F/1000F's (see above) problematic focal-plane shutter with reliable interlens Synchro-Compur leaf shutters and made the 2¼ medium format SLR the dominant professional studio camera by the late 1950s. Well-integrated, durable and reliable design without instant return mirror, but with excellent auto-diaphragm interchangeable lenses and large accessory system.Matanle, pp. 221–222 1958 Zunow SLR (Japan): first internal auto-diaphragm (Zunow-matic Diaphragm System) 35 mm SLR and lenses. Well-integrated focal-plane shutter, instant return mirror, pentaprism and auto-diaphragm design with excellent lenses and good accessory system. Stopping down (closing) the lens aperture (iris) to prepare for exposure transmits less light to the mirror and the viewfinder may become very dim – perhaps even too dark to see the image. Auto-diaphragms coupled to the shutter release that automatically stop down when the mirror swings up and reopen when the mirror comes down provide almost continuous fully open aperture viewing. Auto-diaphragm lenses and instant return mirror, focal-plane shutter SLRs require precise camera-to-lens linkage, but can choreograph the entire shutter-button release, close lens, raise mirror, open shutter, close shutter, lower mirror, open lens exposure sequence in as little as ⅛th second. Originally, these were mechanical spring/gear/lever systems energized concurrently with manually winding the film, but modern systems are electronically timed and operated by an electromagnet. The financially weak Zunow company was unable to capitalize on its design; few examples of the camera (and far fewer of the wide and tele lenses for it) were produced before the company switched back to lenses for other companies' cameras. Zunow went bankrupt in 1961.
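The release choreography just described can be laid out as an ordered timeline. The individual step durations below are invented purely for illustration; only the roughly ⅛ second (125 ms) overall figure comes from the text:

```python
# The SLR release sequence described above, as an ordered timeline.
# Step durations are illustrative assumptions, not measured values.
RELEASE_SEQUENCE = [
    ("stop lens down to taking aperture", 15),
    ("raise reflex mirror",               25),
    ("open shutter and expose",           30),
    ("close shutter",                     15),
    ("lower mirror",                      25),
    ("reopen lens to full aperture",      15),
]

total_ms = sum(ms for _, ms in RELEASE_SEQUENCE)  # 125 ms, ~1/8 s
```

The point of the ordering is that the diaphragm and mirror must both be out of the light path before the shutter opens, and both restored before the finder image returns.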
Note, the 1954 version of the Ihagee Exakta VX (East Germany) 35 mm SLR introduced an external auto-diaphragm lens system using a spring-loaded shutter button plunger connection rod.Wade, Collector's Guide. p. 152 1959 Zeiss Ikon Contarex (West Germany): first SLR with a built-in light meter coupled to a viewfinder exposure control indicator – a galvanometer needle pointer. It had an external, circular selenium photoelectric cell mounted above the lens;Lea, pp. 282–283 earning it "Bullseye" (in USA) and "Cyclops" (in UK) nicknames. For proper exposure, the photographer would adjust the meter, which was also coupled to the shutter speed and lens aperture, until the needle was centered on a mark.Small and Barringer. pp. 73–76 (The Carl Braun Paxette Reflex [West Germany] leaf shutter SLR had an external top mounted, coupled light meter needle system in 1958.) The Contarex also had interchangeable film backs, a feature common with medium format SLRs and used in a few 35 mm rangefinder cameras, but almost exclusive to Contarex/Contaflex series among 35 mm SLRs. Although Contarex SLRs and their Zeiss lenses were of extremely high quality, they were also extremely expensive"Modern Photography's Annual Guide to 47 Top Cameras: Zeiss Contarex SE," p. 124. Modern Photography, Volume 36, Number 12; December 1972. . (Contarex SE w/55mm/1.4 Planar $1212; Leicaflex SL w/50mm f/2 Summicron-R $918, p. 111; Nikon F2 Photomic w/50mm f1.4 Nikkor-S $660, p. 117; Topcon Super D w/58mm f/1.4 Auto-Topcor $520, p. 105)Norris D. and A. Ross McWhirter, compilers, Guinness Book of World Records. 1971–1972 (10th) edition. Bantam Books, New York, 1971. "The most expensive miniature camera is the Zeiss Contarex, Pentaprism Reflex, with a built-in photo-electric meter and Zeiss Planar f/1.4 55 mm lens. With a full range of accessories, including two wide-angle and three telephoto lenses, this would cost about $4000." p. 149 and of idiosyncratic (even clumsy) handling.Matanle, pp. 
93–96 1959 Nikon F (Japan): first pro caliber 35 mm system SLR,Schneider, "How The Japanese Camera Took Over." p. 78Stafford, Hillebrand & Hauschild, pp. 19–24, 267–270 first electric motor drive accessory for SLR. (The Japanese Nikon SP 35 mm rangefinder camera had the first electric motor drive for any camera type in 1957.Stafford, Hillebrand & Hauschild, pp. 13, 281–282) Well-integrated, durable and reliable focal-plane shutter, instant return mirror, pentaprism and auto-diaphragm design with excellent interchangeable lenses and huge accessory system. Although the F was not technologically ground-breaking, it sold 862,600 units and made the 35 mm SLR the dominant professional miniature format camera (displacing the 35 mm RF) by the early 1960s.Dan Richards, "F Is For Family Tree," p. 67. Popular Photography & Imaging, Volume 68 Number 11; November 2004. The perfection of the optical and mechanical formulae of the interchangeable lens SLR in the one-two punch of the Hasselblad 500C (see above) and Nikon F also ended the popularity of the medium format twin-lens reflex (TLR) camera (typified by the Franke & Heidecke Rolleiflex/Rolleicord series [Germany, later West Germany]) by the early 1960s.Jason Schneider, "The Camera Collector: A farewell to the twin-lens Rolleiflex: elegant to the end. It never switched lenses or lowered its patrician standards." pp. 82, 86, 92–93, 136. Modern Photography, Volume 47, Number 11; November 1983. The F's improved successor, the Nikon F2 (Japan) of 1971, is widely regarded as the finest mechanically controlled 35 mm SLR camera ever made. 1959 Voigtländer–Zoomar 1:2.8 f=36mm–82mm (USA/West Germany): first zoom lens for 35 mm still cameras.Ray, pp. 172–173 Designed by Zoomar in USA and manufactured by Kilfitt in West Germany for Voigtländer. Originally mounted for Voigtländer Bessamatic series (West Germany) 35 mm leaf shutter SLRs, but later available in Exakta and other mounts.Kingslake, pp. 
173–174 Zoom lenses and SLR film cameras are perfect for each other, because an SLR always shows what the lens is imaging during zooming, something difficult, if not impossible, to do with other optical viewfinder systems.Kraszna-Krausz, pp. 1698–1699 1960s 1960 Konica F (Japan): first SLR with 1/2000 second and 1/125 second flash X-synchronization focal-plane shutter.Lea, p. 121 Modern focal-plane shutters are dual curtain traveling slit shutters.Horder, p. 174Langford, Basic Photography. Third Edition. p. 109 They provide faster shutter speeds by timing the second shutter curtain to close sooner after the first curtain opens and narrowing the slit "wiping" the exposure on the film, instead of moving the curtains faster across the film gate,Langford, Basic Photography. Third Edition. pp. 109–110 because the curtains are too fragile to survive the necessary accelerative shocks. Long wipe times can cause cartoonish distortion of very fast moving objects instead of truly freezing their motion.Horder, p. 175Langford, Basic Photography. Third Edition. pp. 109–111 (The use of leaning in illustration to give the impression of speed is a caricature of the distortion caused by the slow wiping FP shutters of Graflex large format SLRs from the first half of the 20th century.) Unacceptable distortion (as well as difficulties in precisely timing very narrow slits) had stalled traditional cloth horizontal-travel FP shutters for 35 mm cameras at 1/1000 sec. and 1/60 sec. X-sync for decades. The F's Hi-Synchro shutter provided faster speeds by having its metal blades travel vertically, i.e. along the shorter side of the 24×36 mm frame.Goldberg, Camera Technology. pp. 72–75 In 1982, the Nikon FM2 (Japan) reached 1/4000 sec. (and 1/200 sec. flash X-sync) with a vertical-travel FP shutter using honeycomb pattern etched titanium foil blades, stronger and lighter than plain stainless steel.
This allowed quicker shutter-curtain travel time (3.6 milliseconds, about half of earlier vertical, metal bladed shutters) and thereby truly faster shutter speeds. The Nikon FE2 (Japan), with an improved version of this shutter, boosted X-sync speed to 1/250 sec. (3.3 ms curtain travel time) in 1983. The fastest FP shutter ever used in a film camera was the 1/12,000 sec. (1/300 sec. X-sync; 1.8 ms curtain travel time) duralumin and carbon fiber bladed one introduced by the Minolta Maxxum 9xi (Japan) in 1992.Lea, pp. 159–160 1960 Royer Savoyflex Automatique (France): first autoexposure SLR. Had an unreliable mechanical shutter-priority autoexposure system controlled by an external selenium light meter, Prontor leaf shutter and fixed 50mm f/2.8 Som-Berthiot lens.Lea, pp. 224–225Jason Schneider, "Camera Collector: Deutschland discoveries yield cache of cheap collectable classics," pp. 67–68. Popular Photography, Volume 65 Number 6; June 2001. The first autoexposure still camera was the non-SLR Kodak Super Kodak Six-20 (USA) of 1938 with a mechanical system controlling both aperture and shutter speed via trapped-needle method coupled to external selenium photoelectric cell.Schneider, "The 10 most important cameras of the 20th century." p. 87Wade, Short History. pp. 101–102 1960 Krasnogorsky Mekhanichesky Zavod (KMZ) Narciss camera (Soviet Union; Нарцисс): first subminiature SLR. Took 14×21 mm frames on unperforated, specially spooled 16 mm film. Compact design with interchangeable lenses and removable finder. Subminiature film format cameras (those using smaller than 135 film) require a very high degree of enlargement to make even small 3½×5 inch prints, magnifying image imperfections compared to larger formats; they are mostly used where the small camera size and weight are more important than image quality.Herbert Keppler, "SLR: A multi-featured AF SLR that weighs only 12 ounces? Aw c'mon, who are ya kiddin'?" pp. 21–22, 24, 26. 
Popular Photography, Volume 65 Number 9; September 2001. Gerald McMullon, "Variations in Narciss Subminiature Cameras," 16mm Collection, retrieved 22 March 2016 1961 PC-Nikkor 35 mm f/3.5 (Japan): first perspective control lens for a 35 mm camera, permitting control of perspective in architectural photography. 1962 Nikkorex Zoom 35 (Japan): first 35 mm SLR with fixed zoom lens (Zoom-Nikkor Auto 43–86mm f/3.5). Had non-pentaprism, four mirror reflex viewfinder and leaf shutter."Modern Tests: Chinon Genesis: Can a Top-Notch P&S Be a Solid SLR?" pp. 52–56, 104, 114, 118. Modern Photography, Volume 52, Number 10; October 1988. Stafford, Hillebrand & Hauschild, pp. 17–18 Fixed lens SLRs have been an occasional phenomenon bridging simpler viewfinder cameras and more ambitious interchangeable lens SLRs. Presently, they are off-again with non-SLR electronic viewfinder (EVF) superzoom digital cameras occupying this market segment.Dan Richards, "Steal This Camera! Here's Your Assignment: Find a Nice 400mm f/3.5 Lens, With Image Stabilization. While You're at It, Make It a Zoom–Say, 35-420mm. Get a 6MP (or Better) Digital Camera to Go With It. With Through-the-Lens Viewing. Got All That? Now Do It For Under $500." pp. 70–72, 74. Popular Photography & Imaging, Volume 70, Number 6; June 2006. 1963 Topcon RE Super (Japan; called Super D in USA; name became Super D worldwide in 1972): first SLR with through-the-lens (TTL) light meter for convenient exposure control.Lea, pp. 227–228 Had internal cadmium sulfide (CdS) photoresistive cells mounted behind non-silvered slits in the reflex mirror for coupled center-the-needle, open aperture, full area averaging metering with auto-diaphragm lenses.Matanle, pp. 181–183 Film is rated at a particular "speed" sensitivity. It needs a specific amount of light to form an image.
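The "specific amount of light" can be made concrete with the standard APEX exposure relation, EV = log₂(N²/t) for f-number N and shutter time t: settings with the same EV admit the same total light. A small sketch (the function name is my own):

```python
import math

def exposure_value(f_number, shutter_s):
    """APEX exposure value: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_s)

# The classic "Sunny 16" setting, f/16 at 1/125 s, is roughly EV 15.
ev_sunny = exposure_value(16, 1 / 125)

# Opening one full stop (f-number divided by sqrt 2) while halving
# the exposure time admits exactly the same amount of light.
ev_equiv = exposure_value(16 / math.sqrt(2), 1 / 250)
```

A light meter, handheld or built in, exists to find this EV for the scene so the photographer (or automation) can pick any equivalent aperture/shutter pair.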
The Weston Universal 617 (USA) helped introduce light exposure metering by a handheld selenium photoelectric device to sense the ambient light in 1932,"Weston 617" retrieved 22 October 2008Peres, p. 775 but miniature light meters built into the camera that gave TTL readings were a great leap forward in convenience introduced by the Feinwerk Technik Mec 16SB (West Germany) non-SLR subminiature (10×14 mm frames on 16 mm film) camera in 1960.Wade, Collector's Guide. p. 93 TTL metering became normal in virtually all 35 mm SLRs by the late 1960s.Wade, "A Half Century of The World's Greatest Cameras" p. 58 The durable and rugged RE Super had excellent interchangeable Exakta mount lenses and was the only pro level 35 mm SLR to compete with the Nikon F (see above) with any success. However, Topcons never progressed and Tokyo Kogaku (or Tokyo Optical) quit the consumer camera business circa 1980. 1963 Olympus Pen F (Japan): first single frame (also called half frame) 35 mm SLR.Joe McGloin, "Olympus Pen F Cameras" retrieved 18 May 2007Wade, Collector's Guide. pp. 43–44 Took up to 72 exposures of vertical 18×24 mm frames on 135 film. Had flat-topped non-pentaprism porroprism reflex and optical relay viewfinder,Hansen and Dierdorff, p. 175 and rotary focal-plane shutter.Matanle, pp. 174–175 Well-integrated compact design with excellent interchangeable lenses and large accessory system. The original non-SLR Olympus Pen (Japan) of 1959 helped give 35 mm still cameras that used the standard motion picture frame size of 35 mm film a burst of popularity, which ended by the late 1960s.Jason Schneider, "The Camera Collector: Users of classics unite! You have nothing to lose but convenience!" pp. 60–62. Modern Photography, Volume 50, Number 6; June 1986. Although single frame cameras used standard 135 film, single frame photofinishing was always special-order.Keppler, "SLR: Digital Uproar: Do you feel you're being dragged into digital imaging whether you like it or not?
Here's why, and what's coming." pp. 42, 44, 134. Popular Photography, Volume 66 Number 10; October 2002. Kyocera/Yashica unsuccessfully attempted to revive the format as "Double 35" with their Yashica Samurai series (Japan) SLRs in 1988. 1964 Asahi (Honeywell in USA) Pentax Spotmatic (Japan): second SLR with coupled center-the-needle TTL metering (stop-down aperture, full area averaging). Well-integrated, compact and reliable focal-plane shutter, instant return mirror and pentaprism design with excellent non-auto-diaphragm interchangeable lenses.Cecchi, pp. 21–26, 60–66Hansen and Dierdorff, p. 201 Although the Spotmatic's stop-down (manual diaphragm lenses) system was less convenient than the RE Super's open aperture (auto-diaphragm lenses) system, the Spotmatic's two CdS cells on either side of the eyepiece reading off the focusing screen was less expensive and complex than the RE Super's system (see above), and thereby more popular.Lea, p. 24 The Spotmatic's TTL system was (and is) very influential and widely imitated, often with open aperture. It (and rival TTL metering SLRs, including the Canon FT [1966; stop-down aperture, partial area],Shell, pp. 46–48 Minolta SRT101 [1966; open aperture, modified centerweighted]Robert E. Mayer, Minolta Classic Cameras; Maxxum/Dynax 7000, 9000, 7000i, 8000i; SRT series, XD11. First Edition. Magic Lantern Guides. Rochester, NY: Silver Pixel Press, 1995. . pp. 14–26 and Nikkormat FTN [1967; open aperture, centerweighted];Paul Comen, Nikon Classic Cameras; F, FE, FE2, FA and Nikkormat F series. First Edition. Magic Lantern Guides. Rochester, NY: Silver Pixel Press, 1996. . pp. 63–85 all from Japan) made the Japanese 35 mm SLR the dominant advanced amateur camera by the late 1960s. 1964 Krasnogorsky Mekhanichesky Zavod (KMZ) Zenit 5 (Soviet Union; Зенит 5): first SLR with built-in electric motor drive. Had a Ni-Cd battery powered motor for automatic single-frame film advance with a backup film wind knob. 
In 1970, the Minolta SRM (Japan) was the first SLR with built-in electric sequential motor drive and first SLR with auto film-rewind. It was a modified Minolta SRT101 with a permanently bottom-mounted motor drive (eight AA [LR6] batteries) and detachable handgrip for continuous three frames per second sequence shooting, but no light meter.Hansen and Dierdorff, p. 99 Built-in motor drives did not become common in 35 mm SLRs until the mid-1980s when high-powered, energy efficient "coreless" micro-motors were perfected, but accessory drives or autowinders taking four to twelve AA (LR6) batteries were very popular in the 1970s."Modern Tests: [Konica] Autoreflex T4: The Best of Two Worlds," pp. 110–112. Modern Photography, Volume 43, Number 2; February 1979. . p. 110 This is, of course, a non-issue in modern digital SLRs. 1964 Kodak Retina Reflex IV (USA/West Germany): first SLR with standard ISO hot shoe atop the pentaprism housing for direct flash mounting and synchronization. Was a 35 mm, leaf shutter design. A flash is a necessary accessory for auxiliary or fill light in dim or high contrast conditions. The first camera with any kind of hot shoe connector was the Univex Mercury (USA) non-SLR half frame 35 mm in 1938 and many post World War 2 non-SLRs (such as the Bell & Howell Foton [1948, USA] 35 mm rangefinderJason Schneider. Jason Schneider on Camera Collecting: A fully illustrated handbook of articles originally published in MODERN PHOTOGRAPHY. Second Printing 1980. Des Moines, IA: Wallace-Homestead Book Co., 1978. . pp. 153–155) had a Leica-type accessory shoe with added electrical contact (the present day ISO hot shoe). Although the Nikon F (see above) had a non-ISO hot shoe surrounding the film rewind crank in 1959, most 1960s 35 mm SLRs used screw-on accessory shoes attached to the eyepiece to mount flashes but a PC cable socket to sync them.Herbert Keppler, "Inside Straight: Shoe Fetish: An unsung camera feature that's almost as old as the camera itself." 
pp. 36–37. Popular Photography & Imaging, Volume 71 Number 2; February 2007. The ISO hot shoe became a standard SLR feature in the early 1970s. However, in 1971, SLRs using "dedicated" electronic flashes with automatic flash exposure control began appearing with the Canon FTb (Japan). They used ISO-style shoes with extra electrical contacts. Each SLR brand used incompatible contact configurations and the time of use-any-flash-with-any-SLR passed by the late 1970s. Note, although the hot shoe had been de facto standardized in the 1950s, the International Organization for Standardization did not promulgate its ISO 518 hot shoe specification until 1977. 1965 Canon Pellix (Japan): first (really second, after the French Focaflex of 1959) pellicle reflex mirror SLR.Lea, pp. 44–45Shell, pp. 43–46Wade, Collector's Guide. pp. 140–142 Virtually all SLRs use fast-moving reflex mirrors that swivel out of the way to take the picture, causing mirror shock vibration, blacking-out the viewfinder and delaying shutter firing. Camera shake can blur the image and the subject (which might have moved) cannot be seen at the instant of exposure.Goldberg, Camera Technology. p. 150Herbert Keppler, "SLR: Is the price you pay for modern SLR conveniences too much?" pp. 23–24, 25, 28–29, 92. Popular Photography, Volume 63, Number 12; December 1999. A fixed semi-transparent pellicle reflex mirror, reflecting 30% of the light to the viewfinder and transmitting 70% to the film, prevents camera shake and viewfinder blackout, and reduces shutter lag time at the cost of a dimmer viewfinder image, longer exposure times and possible image quality loss.Herbert Keppler, "SLR notebook: Want faster autofocus? Less vibration and noise? Buy a Canon EOS RT." p. 30. Popular Photography, Volume 96 Number 12; December 1989.
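The cost of the fixed 30/70 split can be expressed in photographic stops, since each stop is a halving of the light: loss = log₂(1/fraction). A quick sketch of that arithmetic (function name my own):

```python
import math

def stops_lost(fraction):
    """Light loss in stops when a beam is reduced to the given fraction."""
    return math.log2(1 / fraction)

# Pellicle mirror split: 70% of the light reaches the film,
# 30% the viewfinder.
film_loss = stops_lost(0.70)      # roughly half a stop at the film
finder_loss = stops_lost(0.30)    # roughly 1.7 stops in the finder
```

So the film pays about half a stop (the "longer exposure times" above), while the finder image is dimmed by well over a stop compared with a conventional mirror that momentarily sends all the light to the screen.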
Modern instant return mirrors are fast enough and have efficient enough shock damping systems that the trade offs are not usually considered worthwhile.Herbert Keppler, "SLR: For sharpest images, do you really need mirror lockup? If so, why don't all top cameras have it?" pp. 18, 20, 22, 24, 64. Popular Photography, Volume 63 Number 6; June 1999. Pellicle mirror SLRs are very rare and are usually specialized designs for ultra-high speed (10+ frames per second) sequence shooting.Stafford, Hillebrand & Hauschild, pp. 28–29 1966 Praktica Electronic (East Germany): first SLR with an electronically controlled shutter. Used electronic circuitry to time its focal-plane shutter instead of spring/gear/lever clockwork mechanisms. 1966 Konica Autorex (Japan; called AutoReflex in USA): first (really second, after the Soviet Kiev 10 Automat of 1964) 35 mm SLR with successful shutter-priority automation (first with a focal-plane shutter). The camera also had the rare ability to allow selection between frame sizes (horizontal 24×36 mm or vertical 18×24 mm) between frames on the same roll of film. The camera used a mechanical "trap-needle" autoexposure system controlled by an external CdS meter that read light directly (not through-the-lens).Lea, pp. 122–123Lothrop & Schneider "Deutschland discoveries," pp. 67–68 1967 Zeiss Ikon Contaflex 126 (West Germany): first Kodapak Instamatic 126 cartridge film SLR. Was a Voigtländer focal-plane shutter design unrelated to 35 mm Contaflexes (see above), accepting fully interchangeable lenses.David Francis, "Contaflex 126," retrieved 10 March 2008 Took up to twenty exposures of 28×28 mm frames on paper-backed, singly perforated, 35 mm wide film pre-threaded into double-ended cartridge with film supply and take-up spools.Herbert Keppler, "New 35mm Film Format: Bogeyman Or Blessing? Kodak's teamed up with four Japanese companies for a new 35mm film system. Will it blow your present camera and lenses into the weeds of photo history?" pp.
46–49. Popular Photography, Volume 57 Number 11; November 1993. Drop-in loading 126 film was introduced by Kodak in 1963, solving the problem of amateurs' difficulty in loading 135 film manually. The 126 cartridge was an extremely popular non-SLR snapshot format for more than a decade, but began to fall victim to the increasing popularity of the 110 format throughout the 1970s.Schneider, Keppler and Lothrop, pp. 58–61 1968 Konica Autoreflex T (Japan): first SLR with internal open aperture TTL metering autoexposure (mechanical shutter-priority).Lothrop & Schneider. "The SLR Saga (part 2)," p. 64 Was an improved Konica AutoReflex (see above) with an internal CdS centerweighted light meter and reduced shutter button travel, but without half-frame capability.Matanle, pp. 164–165 1968 OP Fisheye-Nikkor 10mm f/5.6 (Japan): first SLR lens with aspherical elements. Was a 180° orthographic projection fisheye lens for Nikon and Nikkormat 35 mm SLRs. Typical lens elements have spherically curved surfaces. However, this causes off-axis light to be focused closer to the lens than axial rays (spherical aberration), degrading image sharpness;Kingslake, p. 322Peres, pp. 176, 716Stroebel and Zakia, p. 840 the effect is especially severe in very wide angle or wide aperture lenses. This can be prevented by using elements with convoluted aspheric curves. Although this had been understood since the 17th century, the grinding of aspheric glass surfaces was extremely difficultRay, pp. 50–51, 110–111 and prevented their consumer use until the E. Leitz 50mm f/1.2 Noctilux (West Germany) in 1966; for Leica M-series 35 mm RFs.Peres, pp. 314, 780 The Canon FD 55mm f/1.2 AL (Japan) of 1971 was the first rectilinear aspheric SLR lens; for FD mount Canon SLRs,Shell, pp. 106, 178 and the Asahi SMC Takumar 15mm f/3.5 (Japan/West Germany) of 1975 was the first rectilinear aspheric wide angle SLR lens; for M42 screw mount Asahi Pentax SLRs (co-designed with Carl Zeiss [Oberkochen]).
The use of modern precision molded plastic or glass aspheric lens elements has made aspheric lenses common today. 1969 Yashica TL Electro X (Japan): first SLR with an all solid-state electronic light metering system. Had a stop-down, full-area-averaging CdS light meter linked via a four-transistor circuit board to an exposure control system that extinguished red over- and underexposure lamps, instead of a galvanometer meter needle. Also had another four-transistor timing circuit to electronically control its metal-bladed Copal Square SE focal-plane shutter."Modern Photography's Annual Guide to 47 Top Cameras: Yashica TL Electro-X ITS," p. 123. Modern Photography, Volume 36, Number 12; December 1972. Matanle, p. 184 1969 Asahi (Honeywell in USA) Pentax 6×7 (Japan; name shortened to Pentax 67 in 1990): first 67 medium format SLR. Took ten exposures of 2¼×2¾ inch (6×7 cm) nominal frames (56×69.5 mm actual frames) on 120 film. The 67 format is called "perfect" or "ideal," because its aspect ratio enlarges to an 8×10 inch print without cropping. The Pentax 6×7 resembled a greatly scaled-up 35 mm SLR."Modern Photography's Annual Guide to 47 Top Cameras: Honeywell Pentax 6×7," p. 131. Modern Photography, Volume 36, Number 12; December 1972. 1970s 1970 Mamiya RB67 (Japan): first 67 medium format system SLR. Took ten exposures of 2¼×2¾ inch (6×7 cm) nominal frames (56×69.5 mm actual frames) on 120 film. Also had "revolving" rotatable interchangeable film backs to easily take vertical photographs with the normally horizontal format, and a standard interchangeable waist level viewfinder.Schneider, "How The Japanese Camera Took Over." p. 86 1971 Asahi SMC Takumar lenses (Japan): first all-multicoated (Super-Multi-Coated) lenses for consumer cameras; for M42 screw mount Asahi Pentax SLRs.Comen, Pentax Classic Cameras. pp. 136–137 Process co-developed with Carl Zeiss (Oberkochen, West Germany).
Lenses with glass elements "single-coated" with a very thin layer (about 130–140 nanometers) of magnesium or calcium fluoride to suppress flare-producing surface reflections were invented by Carl Zeiss (Jena, Germany) in 1936Kingslake, pp. 16–17 and first sold in 1939. They became standard for high quality cameras by the early 1950s. Coating lenses with up to a dozen different layers of chemicals to suppress reflections across the visual spectrum (instead of at only one compromise wavelength) was a logical progression.Ray, pp. 74–75, 108–109 1971 Asahi Pentax Electro Spotmatic (Japan; name shortened to Asahi Pentax ES in 1972; called Honeywell Pentax ES in USAWilliam P. Hansen, Hansen's Complete Illustrated Guide to Cameras; Volume 1. Kennesaw, GA: Rochdale Publishing Company, 2003. . p. 17): first SLR with electronic aperture-priority (using stop-down TTL metering) autoexposure plus an electronically controlled shutter."Modern Photography's Annual Guide to 47 Top Cameras: Honeywell Pentax ES," p. 99. Modern Photography, Volume 36, Number 12; December 1972. Earlier mechanical AE systems tended to be unreliable; reliable and convenient AE systems (as well as other electronic control systems), which electronically set either the camera shutter speed or the lens aperture from light meter readings once the other was set manually, began with the Electro Spotmatic. Rival electronic AE SLRs included the Canon EF (1973; shutter priority),Shell, pp. 54–56 Minolta XE–7 (1975; aperture priority)William P. Hansen, Hansen's Complete Illustrated Guide to Cameras; Volume 2. Kennesaw, GA: Rochdale Publishing Company, 2003. . p. 19 and Nikkormat EL (1972; aperture priority),B. Moose Peterson, Nikon Classic Cameras, Volume II; F2, FM, EM, FG, N2000 (F-301), N2020 (F-501), EL series. First Edition. Magic Lantern Guides. Rochester, NY: Silver Pixel Press, 1996. . pp. 50–65 all from Japan. Electronic AE came to most 35 mm SLRs by the late 1970s.
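The division of labor in an aperture-priority system like these can be sketched with the standard exposure-value relation EV = log2(N²/t): the photographer fixes the f-number, the meter reports an EV, and the camera solves for the shutter time. This is a hedged illustration of the general principle only, not any particular camera's mechanism, and the function names are invented for the example.

```python
import math

# Aperture-priority AE in outline: given the meter's exposure value (EV)
# and the photographer's chosen f-number N, solve EV = log2(N^2 / t)
# for the shutter time t in seconds.
def shutter_time(metered_ev: float, aperture: float) -> float:
    return aperture ** 2 / 2 ** metered_ev

def nearest_standard_speed(t: float) -> str:
    """Snap a computed time to the closest conventional shutter marking."""
    standard = [1, 1/2, 1/4, 1/8, 1/15, 1/30, 1/60, 1/125, 1/250, 1/500, 1/1000]
    best = min(standard, key=lambda s: abs(math.log2(s) - math.log2(t)))
    return f"1/{round(1 / best)} s" if best < 1 else f"{best} s"

# Bright sun (about EV 15 at ISO 100) with the lens set to f/11:
t = shutter_time(15, 11)            # roughly 1/271 second
print(nearest_standard_speed(t))    # -> 1/250 s
```

A shutter-priority body such as the Canon EF simply inverts the same equation, solving for the aperture instead.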
The Japanese electronic AE SLR effectively ended the German camera industry, whose makers failed to keep up with their Japanese counterparts. After ailing throughout the 1960s, such famous nameplates as Contax, Exakta, Leica, Rollei and Voigtländer went bankrupt, were sold off, contracted production to East Asia, or became boutique brands in the 1970s."Annual Guide to 54 Top Cameras: Contax RTS," p. 123. Modern Photography, Volume 40, Number 12; December 1976. Lothrop & Schneider. "The SLR Saga (part 2)," pp. 50–51, 64Schneider, "How The Japanese Camera Took Over." pp. 56–57, 78, 86 1971 Praktica LLC (East Germany; actually released in December 1969): first interchangeable lens camera with an electric contact lens mount, and first camera with electromechanical lens diaphragm stopdown control."Modern Photography's Annual Guide to 47 Top Cameras: Praktica LLC," p. 120. Modern Photography, Volume 36, Number 12; December 1972. Had an M42 screw mount modified for open aperture metering. The M42 mount was a very popular interchangeable lens mount system for a quarter century. It was used by almost two dozen different SLR brands, most notably Asahi Pentax. (Asahi became so closely associated with this mount that it was, and still is, often erroneously referred to as the Pentax screw mount.Schneider, "Camera Collector: screw-mount SLR saga, part 1," pp. 20, 23, 26) However, by the early 1970s, the M42's limitations, especially no provision for auto-diaphragm lens open aperture viewing and metering, were becoming serious liabilities. After unpopular and uncoordinated attempts to modify the screw mount to support auto-diaphragm lenses with open aperture metering, Asahi abandoned the M42 screw mount in 1975, effectively ending production of this lens mounting system. 1971 Fujica ST701 (Japan): first SLR with silicon photodiode light meter sensors.Matanle, p.
163 Early SLR TTL meters used cadmium sulfide (CdS) cells (see Topcon RE Super and Asahi Pentax Spotmatic above), as they were the first sensors small enough to be internally mounted. However, CdS needed fairly bright light and suffered from a "memory" effect whereby it might take 30 seconds or more to respond to a light level change. Although silicon's infrared response required blue filtration to match the eye's spectral response, silicon supplanted CdS by the late 1970s because of its greater sensitivity and instantaneous response. 1972 Fujica ST801 (Japan): first SLR with viewfinder light emitting diodes.Matanle, p. 163 Had a seven-LED dot scale to indicate extreme overexposure, +1 EV, +½ EV, 0 (correct exposure), –½ EV, −1 EV and extreme underexposure readings of its silicon photodiode light meter, instead of the traditional but delicate galvanometer needle pointer. A sister camera, the Fujica ST901 (Japan) of 1974, was the first SLR with a viewfinder LED digital data display.Lea, p. 93 It had calculator-style LEDs showing the camera's aperture-priority autoexposure set shutter speeds from 20 to 1/1000 second in 14 nonstandard steps. Although they were replaced by more energy efficient and informative LCDs in the 1980s (see Nikon F3, below), the use of LEDs in the ST801/ST901 was a major step in the escalation of electronics in 1970s camera design. 1972 Olympus OM-1 (Japan): first compact full-featured 35 mm SLR. At 83×136×50 mm and 510 g, it was about two-thirds the size and weight of most earlier 35 mm SLRs."Modern Tests: Olympus OM-1: World's Smallest 24×36mm SLR," pp. 98–100. Modern Photography, Volume 37, Number 4; April 1973. Wade, Short History. pp. 125–126 Superb mechanical design with excellent interchangeable lenses and a large accessory system. Note that the initial production batches were marked as the M-1, but this designation was quickly changed when E.
Leitz objected over conflicts with their Leica M-series RF trademarks."Modern Photography's Annual Guide to 47 Top Cameras: Olympus M-1," p. 119. Modern Photography, Volume 36, Number 12; December 1972. Wade, Collector's Guide. pp. 159–160 M-1-marked cameras are now collector's items. 1972 Polaroid SX-70 (USA): first instant film SLR. Had a non-pentaprism mirror reflex system and electronic autoexposure in a flat-folding body with bellows and fixed 116mm f/8 lens. Took ten-exposure, 3⅛×3⅛ inch frame Polaroid SX-70 instant film packs.Herbert Keppler, "SLR: It's the 25th anniversary of an instant classic, the most incredible, most ingenious SLR ever invented. And here it comes again!" pp. 17–18, 20. Popular Photography, Volume 61, Number 10; October 1997. Wade, Collector's Guide. pp. 134–135 The principle of self-developing "instant photography" came to Edwin Land in 1943. The first production instant camera was the non-SLR Polaroid Land Model 95 (USA) of 1948, producing sepia-toned, peel-apart pictures.Weston Andrews, "Instant Pictures: 40 Years of Instant Success; From the sepia-tone print to the see-through Spectra, Polaroid's story is a saga of invention and innovation…with just a few slight detours." pp. 54–55, 94. Modern Photography, Volume 51, Number 10; October 1987. Wade, Short History. p. 105 Steady improvements culminated in the seven-year, nearly quarter-billion dollar SX-70 camera and film project to create full-color, self-contained, develop-before-your-eyes, "garbage-free" prints.William Doerner, "Polaroid's Big Gamble on Small Cameras," pp. cover, 80–82, 84, 86, 88. TIME, Volume 99, Number 26; 26 June 1972. 1974 Vivitar Series 1 70–210mm f/3.5 (USA/Japan): first professional-level quality close-focusing "macro" zoom lens for 35 mm SLRs.Herbert Keppler, "Keppler's SLR Notebook: Good Grief! Three Series 1 70–210 Vivitar Zooms?" pp. 35, 74. Modern Photography, Volume 48, Number 8; August 1984.
Early zoom lenses often had markedly inferior optical quality compared to prime lenses,Jason Schneider, "The Camera Collector: Auto and match-needle exposure, instant-return mirror and diaphragm plus full finder information in an early-60s SLR? Alas, it was too good to be really reliable." pp. 24, 26, 28, 32, 34, 144. Modern Photography, Volume 45, Number 9; September 1981. but improvements in computer assisted zoom lens design and construction allowed annual Japanese 35 mm SLR zoom lens production to surpass prime lenses in 1982, and zooms became normal on virtually all but the highest end still cameras by the late 1980s.Jason Schneider, "50mm: How do seven leading normal lenses compare? We put 'em to the test!" pp. 42–49. Popular Photography, Volume 98 Number 5; May 1991. . p. 42 Ponder & Best's US-designed, Japanese-made Vivitar Series 1 lenses were among the best available (many were the first of their kind) for about a dozen years, before new owners debased the brand. 1975 E. Leitz APO-Telyt-R 180mm f/3.4 (West Germany): first apochromatic lens for consumer cameras (Leicaflex series SLRs). The refractive index of glass increases from the red end to the blue end of the light spectrum (color dispersion). Blue is focused closer to the lens than red, causing rainbow-like color fringing (chromatic aberration).Kingslake, pp. 71–72, 316, 317Peres, pp. 175, 712, 717Stroebel and Zakia, pp. 424–425 Most photographic camera lenses are achromatically corrected to bring blue and red to a common focus – leaving large residual green and violet chromatic aberrationsRay, pp. 54–55 that degrade image sharpness; especially severe in long focus or telephoto lenses. If red, green and blue are brought to a common focus (plus other aberration corrections) with very little residual aberration, the lens is called apochromatic.Kingslake, p.
316 Chromatic aberration was an issue at the dawn of photography (daguerreotypes [invented 1839] were blue-sensitive only, while the human eye focuses primarily using yellow light), but apochromatic photographic lenses were considered unnecessary until the dominance of color film. The use of extra-low dispersion glasses made most 1980s professional telephotosBennett Sherman, "Techniques Tomorrow: New glasses make the optical scene brighter and clearer. What are they and what are they doing?" pp. 10, 14. Modern Photography, Volume 48, Number 8; August 1984. and many 1990s amateur telephoto zooms apochromatic. 1975 Mamiya M645 (Japan): first 645 medium format system SLR. Took fifteen exposures of 2¼×1⅝ inch (6×4.5 cm) nominal frames (56×41.5 mm actual frames) on 120 film."Modern Tests: Mamiya M645 Super: Advanced Full-system 6×4.5 cm SLR?" pp. 46–55. Modern Photography, Volume 50, Number 9; September 1986. Mamiya was never successful at producing 35 mm SLRs, despite a half-dozen attempts between 1959 and 1980. However, it was a leader in medium format cameras; first with the Mamiya C series (1956, Japan), the only successful interchangeable lens twin-lens reflex (TLR) cameras ever made,"Modern Photography's Annual Guide to 47 Top Cameras: Mamiya C330" p. 135. Modern Photography, Volume 36, Number 12; December 1972. and then with the RB67 (see above) and M645 series SLRs. 1975 Olympus OM-2 (Japan): first SLR with TTL, off-the-film (OTF) flash autoexposure. Had two rearward-facing silicon photodiodes in the mirror box to meter light reflecting off the film. Circuitry could detect when enough light had accumulated and automatically quench a specially "dedicated" accessory Olympus Quick Auto 310 electronic flash."Annual Guide: 46 Top Cameras: Olympus OM-2," p. 117. Modern Photography, Volume 42, Number 12; December 1978.
Hans van Veluwen, "Quick Auto 300/310" retrieved 25 September 2007 Manual flash exposure control for a natural look is complex, and convenient TTL autoflash metering became standard in virtually all SLRs by the mid-1980s. 1976 Canon AE-1 (Japan): first SLR with microprocessor electronics. Well-integrated and compact shutter-priority autoexposure design with excellent interchangeable lenses and a large accessory system.Francke, pp. 12–25 Backed by a major advertising campaign, including celebrity endorsements, TV commercials and a catchy slogan ("So advanced, it's simple."),"Modern Tests: Aperture-Preferred Canon AV-1 SLR," pp. 96–98. Modern Photography, Volume 43, Number 8; August 1979. that targeted snapshooters, the AE-1 sold five million units and immediately made the 35 mm SLR an important mass-market camera."Modern Tests: Canon AE-1 Program: Upgrading a Legend," pp. 112–114, 116, 118, 120, 122. Modern Photography, Volume 45, Number 8; August 1981. An improved model, the Canon AE-1 Program (Japan) of 1981, added another four million units to the tally. 1976 Asahi Pentax ME (Japan): first autoexposure-only SLR. Had aperture-priority exposure control only (the photographer could not manually select a shutter speed) for simple snapshooter operation.Cecchi, pp. 103–106, 134–137Norman Goldberg, Michele A. Frank and Leif Ericksenn, "Lab Report: Pentax ME," pp. 126–129, 145–147, 212. Popular Photography, Volume 85, Number 3; March 1978. Interchangeable lens autoexposure-only SLRs disappeared in the mid-1980s, because even snapshooters demanded that SLRs (as "good cameras") have a manual mode. However, most modern amateurs never use manual control and even some professionals depend on autoexposure, making the great majority of modern SLRs de facto autoexposure-only cameras. 1976 Minolta 110 Zoom SLR (Japan): first Pocket Instamatic 110 cartridge film SLR. Had a built-in zoom lens (fixed 25–50mm f/4.5 Zoom Rokkor-Macro).Keppler, "New 35mm Film Format." pp.
46–49 Took up to 24 exposures of 13×17 mm frames on paper-backed, singly perforated, 16 mm wide film pre-threaded into a double-ended cartridge with film supply and take-up spools. Compact, drop-in loading 110 film was introduced by Kodak in 1972. It was briefly an extremely popular non-SLR snapshot format but was almost dead by 1982. 1977 Fujica AZ-1 (Japan): first interchangeable lens camera to be sold with a zoom lens as the primary lens. The AZ-1's Fujinon-Z 43-75mm f/3.5-4.5 zoom, despite its modest specifications, was the earliest attempt to supersede the 35 mm SLR's heretofore standard 50 to 58 mm "normal" prime lens with today's ubiquitous zoom lens. The regular Fujinon-Z 55mm f/1.8 lens remained a popular option.Lea, p. 94 The AZ-1 was also one of the last Japanese-made M42 screw mount cameras released.Hansen and Dierdorff, pp. 36, 38, 62 The purchase of a zoom instead of a prime as the first lens became normal with virtually all amateur 35 mm SLRs in the latter 1980s. 1977 Minolta XD11 (Japan; called XD7 in Europe, XD in Japan): first dual mode autoexposure SLR. Had both aperture-priority and shutter-priority autoexposure.Norman Goldberg, Michele A. Frank and P. I. Moore, "Lab Report: Minolta XD-11 [sic]," pp. 123–127, 132, 166, 188. Popular Photography, Volume 86, Number 1; January 1979. Mayer, Minolta Classic Cameras. pp. 28–49Wade, Collector's Guide. pp. 157–158 Previously, each AE SLR brand offered only one or the other mode, and aggressively touted its choice as superior to the other. The XD11 offered both modes and ended the debate. 1978 Canon A-1 (Japan): first SLR with an electronically controlled programmed autoexposure mode. Instead of the photographer picking a shutter speed to freeze or blur motion and choosing a lens aperture f-stop to control depth of field (focus), the A-1 had a microprocessor computer programmed to automatically select a compromise exposure from light meter input."Annual Guide: 46 Top Cameras: Canon A-1," p. 106.
Modern Photography, Volume 42, Number 12; December 1978. Norman Goldberg, Michele A. Frank and Frank D. Grande, "Lab Report: Canon A-1," pp. 125–129, 131, 142, 144–145, 228. Popular Photography, Volume 86 Number 4; April 1979. Virtually all cameras had some sort of program mode or modes by the mid-1980s. It was also the first camera to have all four of the now standard PASM (program/aperture-priority/shutter-priority/manual) exposure modes. Canon's long-term emphasis on the highest possible technology eventually allowed the company to dominate the 35 mm SLR market; first at the amateur level, with the AE-1 (see above) and A-1,Kusumoto with Murray, p. 213 and then (despite a stumble in the mid-1980s when Canon came late to autofocus) at the professional level in the early 1990s with the Canon EOS-1 (Japan) of 1989. Canon remains the leading digital SLR maker, with a 38% worldwide market share in 2008. 1978 Polaroid SX-70 Sonar (USA): first electronic autofocus SLR. Had an active ultrasonic sonar echo-location rangefinder AF system. This unique-to-Polaroid AF system had no influence on any other type of AF SLR. Took ten-exposure, 3⅛×3⅛ inch frame Polaroid Time-Zero SX-70 instant film packs."Cameras That See by Sound," p. 62. TIME, Volume 111, Number 19. 8 May 1978. Keppler, "It's the 25th anniversary of an instant classic" pp. 17–18, 20 1978 Asahi Pentax Auto 110 (Japan): first interchangeable lens Pocket Instamatic 110 film system SLR. Mini-35mm SLR-like programmed autoexposure design with good interchangeable lenses and a large accessory system."Annual Guide: 46 Top Cameras: Asahi Pentax Auto 110," p. 146. Modern Photography, Volume 42, Number 12; December 1978. Wade, Collector's Guide. pp. 126–127 Was the smallest and lightest SLR ever made – 56×99×45 mm, 185 g with Pentax-110 24mm f/2.8 lens.Norman Goldberg, Michele A. Frank and Norman Rothschild, "Lab Report: Pentax Auto 110," pp. 121–125, 141. Popular Photography, Volume 87 Number 5; May 1980.
The Auto 110 and its improved successor, the Pentax Auto 110 Super (Japan) of 1982, were the only interchangeable lens 110 SLRs ever produced and the most advanced 110 cameras ever made, but were unable to prevent the demise of 110 film.Joe McGloin, "Pentax 110 Super" retrieved 18 May 2007 1979 Konica FS-1 (Japan): first SLR with built-in motorized autoloading.Wade, Collector's Guide. pp. 161–162 Also had autowinding (motorized single frame or continuous film advance up to 1.5 frames per second), but not auto-rewind."Annual Guide To 47 Top Cameras: Konica FS-1," p. 98. Modern Photography, Volume 44, Number 12; December 1980. A snapshooter's great dislike (and Kodak bugbear) of 135 film was the need to manually thread the film leader into the camera's take-up spool. Built-in, motorized, automated film-transport systems (auto-load/wind/rewind) arrived with the Canon T70 (Japan) in 1984.Francke, pp. 76–94 Completely automated film handling systems appeared when automatic "DX" film speed setting was added to auto-transport in the Minolta Maxxum 7000 (Japan; see below) in 1985 and became standard in virtually all 35 mm SLRs by the late 1980s. This is, of course, a non-issue in modern digital SLRs. Although Konishiroku has a rich history including several first-rank camera innovations, it was never able to establish Konica as a first-tier brand and quit the SLR business in 1988.Lea, pp. 121–125 1979 Asahi Pentax ME Super (Japan): first SLR with primarily electronic push-button controls. Had increase/decrease push buttons for shutter speed selection instead of a traditional shutter speed dial."Annual Guide To 47 Top Cameras: Asahi Pentax ME Super," p. 88. Modern Photography, Volume 44, Number 12; December 1980. Norman Goldberg, Michele A. Frank and Norman Rothschild, "Lab Report: Pentax ME Super," pp. 115–119, 128–129, 137. Popular Photography, Volume 87, Number 9; September 1980.
As digital computerized SLR features multiplied, push-button controls also multiplied and replaced analogue electromechanical dial switches in most 35 mm SLRs by the late 1980s. 1979 Sedic Hanimex Reflex Flash 35 (Australia/Japan): first SLR with built-in electronic flash. Otherwise a wholly forgettable camera; a cheaply made 35 mm SLR of low specifications and poor quality, with a fixed Hanimar 41mm f/2.8 lens and mirror gate shutter. 1980s 1980 Nikon F3 (Japan): first SLR with a viewfinder liquid crystal (LCD) digital data display. The LCD showed shutter speeds plus manual-mode and under/overexposure indicators."Annual Guide To 47 Top Cameras: Nikon F3," p. 104. Modern Photography, Volume 44, Number 12; December 1980. Stafford, Hillebrand & Hauschild, pp. 29–38, 272–273 As computerized SLR features multiplied, comprehensive viewfinder LCD panels became normal in virtually all 35 mm SLRs by the late 1980s. 1981 Rolleiflex SL 2000 F (West Germany): first 35 mm SLR not to use the oblong body plus viewfinder head configuration and handling established by the Kine Exacta 45 years before (see above). Had a cubic body, like a miniature 2¼ medium format SLR, with a fixed dual telescopic eyelevel plus folding waist level finder. Also had interchangeable film backs, built-in motor drive, aperture priority AE and TTL autoflash.Herbert Keppler, "Keppler on the SLR: Pentax sets out to knock off Canon and Olympus with smallest SLRs ever – Rollei's unbelievable SL2000," pp. 55–57, 186, 208, 212–214, 230. Modern Photography, Volume 40, Number 12; December 1976. The 1980s saw varied attempts to stand out in a crowded marketplace by using unconventional 35 mm SLR body layouts.Steven Pollock and Barry Tanenbaum, "SLR Notebook: Three for the Road Ahead," pp. 24–25. Modern Photography, Volume 52, Number 5; May 1988.
Besides the professional-level Rolleiflex, they included the vertical Yashica Samurai series and the flat Ricoh Mirai"Popular Photography: Test Report: Ricoh Mirai: Why let it all hang out if you can build it in?" pp. 56–65. Popular Photography, Volume 96, Number 7; July 1989. (both 1988 and from Japan) point-and-shoot SLRs."Modern Picks! Pointers, Shooters…and Specialties for '89," pp. 54–57. Modern Photography, Volume 52, Number 12; December 1988. They were all unsuccessful in establishing a new paradigm, and the rectangular body plus pentaprism head layout re-emerged as universal in the early 1990s, albeit usually with a large handgrip and rounded contours. 1981 Pentax ME F (Japan): first built-in autofocus 35 mm SLR. Had a passive contrast-detection AF system."Modern Tests: Pentax ME-F: 35mm Auto-Focus SLR," pp. 110–117. Modern Photography, Volume 46, Number 5; May 1982"Modern's Inside Your Camera Series #33: Pentax ME-F," pp. 72–73, 110–111, 116, 120, 130, 136, 142, 148, 150–151, 162. Modern Photography, Volume 47, Number 3; March 1983. Autofocused poorly and was not commercially successful."Modern Tests: Pentax SF1: The AF SLR That Does More With K-Mount Lenses," pp. 62–69, 80. Modern Photography, Volume 51, Number 10; October 1987. Schneider, "The Top 20 Cameras of All-Time," July 2008, p. 148 Also had the Pentax K–F mount, a unique bayonet lens mount with five electric contact pins to pass focus control information between the ME F and its unique autofocusing SMC Pentax AF 35mm-70mm f/2.8 Zoom Lens."Modern Photography's 46 Top Cameras: Annual Guide '83: Pentax ME F," p. 101 Note that the Ricoh AF Rikenon 50mm f/2 (Japan) lens of 1980 had a self-contained passive electronic rangefinder AF system in a bulky top-mounted box and was the first interchangeable autofocus SLR lens (for any Pentax K mount 35 mm SLR)."Inside Your Camera: Pentax ME-F," p. 110 1981 Sigma 21-35mm f/3.5-4 (Japan): first super-wide angle zoom lens for SLRs.
For decades, combining the complexities of rectilinear super-wide angle lenses, retrofocus lenses and zoom lenses seemed impossibly difficult. Sigma did the impossible and reached a 91° maximum field of view for 35 mm SLRs with an all-moving eleven element formula through the maturation of computer-aided design and multicoating. In 2004, the Sigma 12-24mm f/4.5-5.6 EX DG Aspherical HSM (Japan) zoom reached 122°, wider than any SLR prime lens ever made, by taking additional advantage of aspherics and low dispersion glasses. 1982 Ricoh XR-S (Japan): first solar-powered SLR. Was a Ricoh XR-7 (Japan) aperture priority AE 35 mm SLR of 1981 modified with two silicon photovoltaic cells in the sides of the pentaprism housing that charged a unique 3 volt 2G13R "5-year" rechargeable silver oxide battery. This battery could be replaced with two regular 1.5 volt S76 (SR44) silver oxide batteries.Jim Bailey, "Phototronics: A solar greenhouse in your SLR? No, but here's how the Ricoh XR-S is recharged by old sol!" pp. 44, 49, 182. Modern Photography, Volume 46, Number 5; May 1982. The XR-7 and XR-S also had an unusual viewfinder LCD showing a meter pseudo-needle pointing along an analogue shutter speed scale to indicate the light meter's recommended settings, mimicking a traditional galvanometer needle.Herbert Keppler, "Keppler's slr notebook: Ricoh XR-7: a meter needle that isn't. It's a liquid crystal illusion." pp. 60–61. Modern Photography, Volume 45, Number 5; May 1981. 1982 Polaroid SLR 680 (USA): first high-quality SLR with built-in electronic flash. Also had an active sonar echo-location AF system. Took ten-exposure, 3⅛×3⅛ inch frame Polaroid 600 instant film packs. Was an improved Polaroid SX-70 Sonar (see above) AF SLR with almost-all plastic (acrylonitrile butadiene styrene [ABS]) body, built-in flash and faster film.Andrews, "40 Years of Instant Success," p. 94Keppler, "It's the 25th anniversary of an instant classic!" pp.
17–18, 20 The SLR 680 represents the zenith of instant photography and was the finest instant camera ever made. For a time in the 1960s and '70s, Polaroid instant cameras outsold all other high-end cameras combined, but the popularity of instant photography waned throughout the 1980s as auto-everything 35 mm point-and-shoot cameras and fast one-hour film developing became common. Polaroid went bankrupt in 2001. 1983 Pentax Super A (Japan; called Super Program in USA): first SLR with an external LCD data display. With push buttons for shutter speed selection instead of a shutter speed dial, the Super Program used an LCD to show the set shutter speed."Modern Photography's Annual Guide '84: 48 Top Cameras: Pentax Super Program," p. 88. Modern Photography, Volume 47, Number 12; December 1983. As computerized SLR features multiplied, large external LCD panels became normal on virtually all 35 mm SLRs by the late 1980s. 1983 Nikon FA (Japan): first camera with a multi-segmented (or matrix or evaluative; called Automatic Multi-Pattern) light meter. The FA had a built-in computer system programmed to analyze light levels in five different segments of the field of view for convenient exposure control in difficult lighting situations."Modern Photography's Annual Guide '84: 48 Top Cameras: Nikon FA," p. 84. Modern Photography, Volume 47, Number 12; December 1983. "Modern's Inside Your Camera Series #37: Nikon FA," pp. 50–51, 64, 90, 92, 98. Modern Photography, Volume 50, Number 6; June 1986. Goldberg, Camera Technology. pp. 57–58Stafford, Hillebrand & Hauschild, pp. 64–67, 159 After TTL SLR meters were introduced by the Topcon RE Super in 1963 (see above), the various SLR manufacturers tried many different sensitivity schemes (full area averaging, centerweighted, partial area and spot were the most common) in the 1960s before settling in the mid-1970s on centerweighted as the best (90% acceptable exposures) available system. AMP cut the error rate by half.
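The segment-analysis idea behind meters like the FA's can be illustrated with a toy sketch. The weights, the 3 EV rejection rule and the scene numbers below are invented for illustration only; Nikon's actual Automatic Multi-Pattern program was far more elaborate and was never published in this form.

```python
# Toy comparison of centerweighted vs. multi-segment ("matrix") metering.
# Segment readings are in EV; segments[0] is the central segment.
def centerweighted(segments):
    # Classic centerweighted average: the middle segment dominates
    # (a 60/40 split is a common convention, assumed here).
    center, *surround = segments
    return 0.6 * center + 0.4 * sum(surround) / len(surround)

def matrix(segments):
    # Crude "evaluative" stand-in: discard segments far from the median
    # (e.g. a bright sky behind a backlit subject) before averaging.
    ordered = sorted(segments)
    median = ordered[len(ordered) // 2]
    kept = [s for s in segments if abs(s - median) <= 3]  # within 3 EV
    return sum(kept) / len(kept)

# Backlit portrait: subject around EV 9-10, one segment reading sky at EV 15.
scene = [9, 10, 9, 15, 10]
print(round(centerweighted(scene), 1))  # sky segment drags the average up
print(round(matrix(scene), 1))          # sky segment rejected; closer to subject
```

The point of the sketch is only that a segmented meter can recognize and discount an atypical segment, which an averaging meter by construction cannot.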
Matrix meters became virtually standard in 35 mm SLRs by 1990 and modern ones are virtually 100% technically accurate. Note, however, that the technically correct "18% gray" exposure is not necessarily the artistically desirable exposure.Goldberg, Camera Technology. p. 51Shull, p. 48 In 1996, the number of computer-analyzed segments reached a maximum of 1005 in the Nikon F5 (Japan). 1983 Olympus OM-4 (Japan): first camera with a built-in multiple spot meter (2% of view; 3.3° with 50mm lens). The meter could measure eight individual spots and average them for precise exposure control in difficult lighting situations."Modern Tests: Olympus OM-4 Has Multiple Spot, LCD Panel Metering," pp. 78–86. Modern Photography, Volume 48, Number 5; May 1984. Y. Maitani, and K. Tsunefuuji, "Modern's Inside Your Camera Series #35: Olympus OM-4," pp. 78–79, 136, 138, 142. Modern Photography, Volume 48, Number 9; September 1984. Spot meters versus matrix meters represent the opposite ends of the light meter spectrum: fully manual contemplative metering versus completely computerized instantaneous metering."Too Hot to Handle," p. 63. Modern Photography, Volume 48, Number 5; May 1984. 1985 Minolta Alpha 7000 (Japan; called Maxxum 7000 in USA, 7000 AF in Europe): first commercially successful autofocus 35 mm SLR, first passive phase comparison AF SLR, first system AF SLR, first SLR with completely automated film handling (auto-load/wind/rewind/speed setting). Well-integrated PASM autoexposure and built-in motor winder design with very good interchangeable lenses and a large accessory system."Annual Guide '86: Modern Photography's 48 Top Cameras: Minolta Maxxum 7000," p. 46. Modern Photography, Volume 49, Number 12; December 1985. Herbert Keppler, "Keppler's SLR Notebook: Minolta's Incredible MAXXUM 7000 SLR," pp. 16–17, 110, 112, 116, 118, 124, 98. Modern Photography, Volume 49, Number 3; March 1985. Schneider, "The Top 20 Cameras of All-Time," July 2008, pp.
148, 150 Ever since the first autofocus camera, the non-SLR Konica C35 AF 35 mm P/S of 1977 (with its built-in passive electronic rangefinder system),"Modern Tests: Konica C35AF: First Auto-Focus Still Camera," pp. 136–139. Modern Photography, Volume 43, Number 4; April 1979. Jason Schneider, "The Top 20 Cameras of All-Time," July 2008, pp. 146, 148Wade, Classic Cameras. pp. 163–165 AF had been common in 35 mm point-and-shoot cameras. The phenomenal success of the Maxxum temporarily made Minolta the world's number one SLR brand and permanently made the AF SLR the dominant 35 mm SLR type. Minolta suffered major reverses in the 1990s and was forced to merge with Konica in 2003, and then to transfer its technology to Sony and quit the camera business in 2006, after selling 13.5 million Maxxums. 1985 Kiron 28-210mm f/4-5.6 (Japan): first very large ratio focal length "superzoom" lens for still cameras. Was first 135 film zoom lens to range from standard wide angle to long telephoto;"Modern Tests: Wide Ranging 28–210 One-Touch Kiron," pp. 52–53, 75. Modern Photography, Volume 50, Number 1; January 1986. albeit with a small variable maximum aperture to keep size, weight and cost within reason. Although the 10 to 1 ratio Angénieux 12-120mm f/2.2 (France) zoom had been introduced for 16 mm movie cameras in 1961, and consumer Super-8 movie and Betamax/VHS video cameras long had superzooms, early 35 mm SLR zoom focal length ratios rarely exceeded 3 to 1, because of 135 film's much higher acceptable image standards. Despite their many image quality compromises,Herbert Keppler, "SLR: Doesn't a 28-300mm close-focusing AF zoom do all you need?" pp. 24, 26, 28, 41. Popular Photography, Volume 63, Number 7; July 1999. convenient superzooms (sometimes with ratios over 10 to 1) became common on amateur level 35 mm SLRs by the late 1990s. 
They remain a standard lens on today's amateur digital SLRs, with the Tamron AF18-270mm f/3.5-6.3 Di II VC LD Aspherical (IF) MACRO attaining 15× in 2008.The Ultimate All-In-One Zoom: Longest, Steadiest Lens on Earth. Tamron AF18-270mm f/3.5-6.3 Di II VC LD Aspherical [IF] MACRO brochure. Saitama, Japan: Tamron Co., Ltd., 2008 Note, the Canon DIGISUPER 100 xs, a 100× (9.3-930mm f/1.7-4.7; Japan) broadcast television zoom lens, was introduced in 2002. 1987 Pentax SFX (Japan; called SF1 in USA): first interchangeable lens SLR with built-in electronic flash (first built-in flash with TTL autoexposure in any camera).Cecchi, pp. 175–180 Built-in electronic flashes for convenient auxiliary light in dim situations or for fill-light in high contrast situations first appeared on the non-SLR Voigtländer Vitrona (West Germany) of 1964 and had been common on point-and-shoot cameras since the mid-1970s. Built-in TTL autoflashes became standard on all but the most expensive 35 mm SLR cameras by the early 1990s. 1987 Canon EF mount (Japan): first all-electronic contact camera lens mount for interchangeable lens cameras. Introduced by Canon EOS 650"Annual Guide: Modern Photography's Top Cameras for '88: Canon EOS 650," p. 29. Modern Photography, Volume 51, Number 12; December 1987. and EOS 620"Modern Tests: Canon EOS 620: An Autofocus for Pros?" pp. 56–60. Modern Photography, Volume 52, Number 2; February 1988. 35 mm SLR bodies and Canon EF lenses, this lens mount is essentially a computer data port. Mechanical camera-to-lens linkages could couple auto-diaphragm lenses to instant return mirror, focal-plane shutter SLRs, but electronic autofocus required additional electronic data exchange between camera and lens. Canon decided to place everything under electronic control, even though it meant that earlier Canon lenses would not be usable with the new bodies.Goldberg, Camera Technology. pp. 
221–222Tamotsu Shingu, "Modern Photography's Inside Your Camera Series #39: The EOS System," pp. 17–24. Modern Photography, Volume 53, Number 6; June 1989. 1988 Minolta Maxxum 7000i (Japan; called Dynax 7000i in Europe, Alpha 7700i in JapanHansen and Dierdorff, p. 95): first multi-sensor (three, in an "H" pattern) passive autofocus SLR. First generation AF SLRs had a single central AF sensor. However, composition rules generally advise against dead-center subjects, and most compositions place the subject off-center. Multiple AF sensor arrays covering a wide area can more easily focus on these compositions."Popular Photography: Test Report: Minolta Maxxum 7000i: Is Minolta's prize AF SLR still the standard?" pp. 54–62. Popular Photography, Volume 96, Number 10; October 1989. Mayer, Minolta Classic Cameras. pp. 104–133 In 2007, the number of AF sensors reached 51 in the Nikon D3 and D300 (Japan) digital SLRs. In 1990, the 7000i and a sister camera, the Minolta Maxxum 8000i (Japan, 1990),Herbert Keppler and Larry White, "Minolta Moves Up!! So you were expecting a Maxxum 9000i; will you settle for an 8000i instead?" p. 54. Popular Photography, Volume 97 Number 3; March 1990. were also the first 35 mm SLRs with available "panoramic" format film gate mask and focusing screen accessory. Introduced in 1989 by the Kodak Stretch 35 (USA) single-use camera, this 13×36 mm frame on 135 film with 3½×10 inch prints was a faddish snapshot format during the 1990s.Dan Richards, "Kodak's Wild Disposables Are WIDE and WET; Fuji's is a Tele!" pp. 26, 85, 95. Popular Photography, Volume 96, Number 7; July 1989. 1989 Yashica Samurai Z-L (Japan): first SLR intentionally designed for left-handed operation. Took up to 72 exposures of horizontal 18×24 mm single frames (also called half frames) on 135 film. Had flat-topped non-pentaprism mirror reflex and optical relay viewfinder. 
Also had unique-to-Samurai-series vertical body design with fixed autofocus 25–75mm f/4–5.6 zoom lens, interlens leaf shutter, programmed autoexposure, built-in motor drive and electronic flash. Was mirror copy of auto-everything, point-and-shoot Samurai Z camera."popular photography's Guide To Point-And-Shoot Cameras," pp. 55, 62–63 1990s 1991 Kodak Digital Camera System DCS (USA/Japan): first digital still capture SLR. Was a heavily modified Nikon F3 (Japan) 35 mm SLR and MD-4 motor drive with 1024×1280 pixel (1.3 MP) charge-coupled device (CCD) sensor, 8 MB DRAM memory and a tethered 200 MB (160 images) Digital Storage Unit (DSU) hard drive. Used manual focus Nikon F mount lenses with 2× lens field of view factor compared to standard 135 film. List price was US$19,995 (standard Nikon F3HP was US$1295 list; MD-4, US$485). Electronic still (then using analogue processing and called still video) photography was first publicly demonstrated by original Sony Mavica (Japan) 490×570 pixel (280 kP) CCD, prototype SLR camera in 1981.Tony Galluzzo, "Video Today and Tomorrow: Sony Shows First Color Prints Made From Video Signals!" pp. 77–78, 120. Modern Photography, Volume 46, Number 5; May 1982. The Institute of Electrical and Electronics Engineers has called the DCS's Kodak KAF-1300 (USA, 1986) image sensor one of "25 Microchips That Shook the World" because the DCS began the digital photography revolution. Digital photography did not alter the basic focal-plane shutter, instant return mirror, pentaprism, auto-diaphragm lens, TTL meter, autoexposure and autofocus formula of SLR camera design developed over the previous century – except, of course, it is filmless. 1992 Nikonos RS (Japan): first waterproof 35 mm system SLR for 100 m maximum depth, underwater diving use. Had autofocus, autoexposure, TTL autoflash, excellent interchangeable lenses and good accessory system.Hansen and Dierdorff, p. 
154 1995 Canon EF 75-300mm f/4-5.6 IS USM (Japan): first SLR lens with built-in image stabilization (called Image Stabilizer; for Canon EOS 35 mm SLRs). Had an electromechanical system to detect and counteract handheld camera/lens unsteadiness, allowing sharp photographs of static subjects at shutter speeds much slower than normally possible without a tripod."Canon Camera Museum: Camera Hall: EF Mount: EF75-300 f/4-5.6 IS USM: Telephoto Zoom Lens" retrieved 30 January 2008 The first stabilized lens for consumer cameras was the 38-105mm f/4-7.8 lens built into the Nikon Zoom-Touch 105 VR (Japan) 35 mm point-and-shoot of 1994. Image stabilized lenses were initially very expensive and used mostly by professional photographers.Peter Kolonia and Dan Richards, "Canon Image Stabilization VS Nikon Vibration Reduction," pp. 62, 64, 66, 68, 204. Popular Photography, Volume 65 Number 9; September 2001. Stabilization surged into the amateur digital SLR market in 2006.Michael J. McNamara, "Test: Sony Alpha 100 DSLR: Mix Master: Blending a proven DSLR, 10.2MP sensor, and cool technology," pp. 64, 66, 68. Popular Photography & Imaging, Volume 70 Number 9; September 2006Julia Silber, "Lens Test: Nikon 18-200mm f/3.5-5.6G DX VR AF-S: Super Superzoom," p. 67. Popular Photography & Imaging, Volume 70 Number 4; April 2006. However, the Konica Minolta Maxxum 7D (Japan) digital SLR introduced the first camera body-based stabilization system in 2004Michael J. McNamara, "Test: Konica Minolta Maxxum 7D: Rock Solid: New 6MP DSLR can be shaken, but not stirred," pp. 52–55. Popular Photography & Imaging, Volume 69 Number 2; February 2005. and there is now a great engineering and marketing battle over whether the system should be lens-based (counter-shift lens elements) or camera-based (counter-shift image sensor). 
In the case of the Pentax system, the backwards compatibility of the modern DSLRs for the entire K-mount lens range (and the M42 screwmount lenses with an adapter) means that even these older lenses are fully stabilised, something not possible with in-lens systems such as those of Canon or Nikon.Dan Richards, "DSLR Truth Squad. Buying a Camera? Don’t Believe Everything You Hear. Here Are 10 Facts You Must Know Now," pp. 90–92, 94, 96–97. Popular Photography & Imaging, Volume 70 Number 12; December 2006. Table "The State of Stabilization," p. 94 1996 Minolta Vectis S-1 (Japan/Malaysia): first Advanced Photo System (APS) IX240 film SLR.Herbert Keppler, "SLR: How do the three interchangeable-lens APS SLRs compare with each other in features?" pp. 12–13, 16, 18. Popular Photography, Volume 61, Number 1; January 1997. Took up to forty exposures of 16.7×30.2 mm frames on polyethylene naphthalate base, singly perforated 24 mm wide film coated with invisible magnetic data encoding stripe, pre-loaded into self-locking ready-to-use cartridges. Had flat-topped non-pentaprism sideways mirror reflex and optical relay viewfinder. Compact design with good lenses and large accessory system. APS film was introduced by Kodak, Canon, Fuji, Minolta and Nikon in 1996 as Kodak's last attempt (of many) at drop-in film loading. APS was moderately popular, but faded quickly and was almost dead by 2002. 21st century 2000 Canon EOS D30 (Japan): first complementary metal-oxide-semiconductor (CMOS) sensor digital SLR; first digital SLR intended to be a relatively affordable, advanced amateur level camera. Took up to 1440×2160 pixel (3.11 MP) digital images. Used Canon EF mount lenses with a 1.6× lens factor, compared to 135 film. 
The use of a cheaper and lower quality CMOS sensor allowed a price (US$3499 initial list price; US$2999 in 2001; body only) about half that of contemporary professional CCD digital SLRs, giving ambitious amateurs the choice of an interchangeable lens digital SLR, in addition to the digital point-and-shoots common in the late 1990s."60 2002 Top 35mm & APS Cameras: Canon EOS D30," p. 54. Popular Photography, Volume 65 Number 12; December 2001. 2003 Canon EOS Kiss Digital (Japan; called EOS Digital Rebel in USA, EOS 300D Digital in Europe): first sub-US$1000 high-resolution digital SLR. Well-integrated focal-plane shutter, instant return mirror, pentamirror, auto-diaphragm, autoexposure, matrix-metering, autofocus, built-in autoflash, computer-controlled design with excellent lenses and good accessory system. Took up to 2048×3072 pixel (6.3 MP) digital images using a 15.1×22.7 mm complementary metal-oxide-semiconductor (CMOS) sensor (1.6× lens factor). With an original list price of US$899 (body only; US$999 with 18-55mm f/3.5-5.6 Canon EF-S zoom lens), it sold 1.2 million units around the world in sixteen months and was primarily responsible for digital SLR sales vaulting past film SLR sales worldwide in 2004.Camera & Imaging Products Association (CIPA), "Production, Shipment of Digital Still Camera: January – December in 2004" (d_2004.pdf). (2,475,758 total D-SLR shipments; 372,630 to Japan; 815,582 to Europe; 950,927 to North America; 293,599 to Asia; 43,020 other)Camera & Imaging Products Association (CIPA), "Production, Shipment of Still Camera and Interchangeable Lens: January - December in 2004" (s_2004.pdf) retrieved 26 June 2007. (1,175,159 total film SLR shipments; 115,659 to Japan; 365,513 to Europe; 484,179 to North America; 174,029 to Asia; 35,779 other) 2006 Olympus Evolt E-330 (Japan): first live view digital SLR. 
Had a secondary CCD sensor to send a live video feed to a swiveling color LCD panel (normally used for camera function data) and allow its use as an auxiliary viewfinder when the photographer's eye cannot be at the SLR viewfinder eyepiece. A sharper live view mode was available that temporarily flipped aside the reflex mirror (blacking out the primary porro-mirror SLR viewfinder) and opened the shutter to send a live feed from the primary 2352×3136 pixel (7.5 MP) Four-Thirds format MOS image sensor. Most digital SLRs new for 2008 had a live view mode. Although today live view has limitations (poor visibility in bright sunlight, image lag with moving subjects, rapid battery drain, etc.), its perfection, plus an electronic shutter, would make the bulky and expensive precision mechanisms and optics of a focal-plane shutter, instant return mirror and pentaprism unnecessary and allow the camera to be a completely electronic device. (This has already occurred with snapshot cameras – the vast majority of point-and-shoot digital cameras lack an optical viewfinder.) In other words, the Micro Four-Thirds format Panasonic LUMIX DMC-G1 (Japan, 2008) mirror-less non-SLR, interchangeable lens digital camera with high resolution electronic live view viewfinder and LCD might be the first of a new breed of camera with the potential to end the history of the single-lens reflex camera.Peter K. Burian, "Future Tech: Shutterbug Contributors Get Out Their Crystal Ball: The End of D-SLRs?" pp. 48, 50. Shutterbug, Volume 38 Number 2 Issue 459; December 2008. John Owens, "Less is More: The revolutionary little camera that takes the SLR out of DSLR," pp. 13–14. Popular Photography, Volume 72 Number 11; November 2008. Philip Ryan, "Test: Feat in Inches: Panasonic LUMIX DMC-G1: The incredible shrinking camera," pp. 90, 92, 94, 96. Popular Photography, Volume 73 Number 1; January 2009. 2008 Nikon D90 (Japan): first digital SLR with high definition video recording capability. 
Introduced in September 2008, the D90 had a 12.3 MP APS-sized CMOS sensor with secondary 1280×720 pixel (720p), 24 frames per second HD video capture with monaural sound in clips of up to five minutes.Joe Farace, "Nikon's D90: The Legendary N90 Returns in Digital Form" pp. 120–122, 124, 158, 160. Shutterbug, Volume 38 Number 4 Issue 461; February 2009. Philip Ryan, "Test: Nikon D90: Movie Channel: This DSLR shoots HD video," pp. 72, 74, 76–77. Popular Photography, Volume 72 Number 11; November 2008. Two months later, the Canon EOS 5D Mark II (Japan) 21.1MP full-frame CMOS D-SLR came out with 1920×1080 pixel (1080p), 30 frame/s HD video with monaural sound (stereo with external microphone) for twelve minutes."Canon Camera Museum: Camera Hall: Digital SLR: EOS 5D Mark II". Retrieved 23 February 2009George Schaub, "Canon's EOS 5D Mark II: HD Videos And 21MP 'Full Frame' Stills," pp. 122–127. Shutterbug, Volume 38 Number 6 Issue 463; April 2009. The D90 and 5D II are otherwise straightforward 2008 D-SLRs. Point-and-shoot digital cameras have had video recording (usually standard definition, but HD recently) for a few years and it is expected that HD video recording will soon become a standard D-SLR feature. 2010 Sony SLT α33 and SLT α55 (Japan): first SLRs without an optical viewfinder. What appears to be a pentaprism head is a high resolution electronic viewfinder (EVF). Had 16.2 MP (α55) or 14.2 MP (α33) APS-sized CMOS sensors with secondary 1080i high definition video capture. Also had a swiveling live view LCD panel. The SLTs' fixed so-called "Translucent Mirror Technology" reflex mirrors (a revival of pellicle mirrors [see Canon Pellix above]) siphon off light to their fifteen phase comparison autofocus sensors to provide continuous autofocusing in their HD video mode.David Pogue, "State of the Art: Sony Raises Camera Feats to New Level," p. B1. The New York Times; Thursday, 23 September 2010. 
See also :Category:SLR cameras Alpa Box camera Cosina Digital single-lens reflex camera Fujifilm Full-frame digital SLR List of photographic equipment makers Miranda Camera Company Nikon SP — rangefinder camera from which the Nikon F evolved Optics Pentacon Photographic film Rangefinder camera Scheimpflug principle Single lens reflex camera Twin-lens reflex camera Zeiss Ikon Zorki References Bibliography Aguila, Clément and Rouah, Michel Exakta Cameras, 1933–1978. 2003 reprint. Small Dole, West Sussex, UK: Hove Collectors Books, 1987. . Antonetto, Marco: "Rectaflex, The Magic Reflex". Nassa Watch Gallery, Roma 2002. 261 pages. , . Capa, Cornell, editorial director, ICP Encyclopedia of Photography. New York, NY: Crown Publishers Inc., 1984. . "SLR (Single-Lens Reflex) Camera,". Cecchi, Danilo Asahi Pentax and Pentax SLR 35mm Cameras: 1952–1989. Susan Chalkley, translator. Hove Collectors Book. Hove, Sussex, UK: Hove Foto Books, 1991. . Franklin, Harold User's Guide to Olympus Modern Classics. 1997 Printing. Jersey, Channel Islands: Hove Foto Books Limited, 1991. . Gilbert, George Collecting Photographica: The Images and Equipment of the First Hundred Years of Photography. New York, NY: Hawthorn/Dutton, 1976. . Goldberg, Norman Camera Technology: The Dark Side of the Lens. San Diego, CA: Academic Press, 1992. . Hansen, Bill and Dierdorff, Michael Japanese 35mm SLR Cameras: A Comprehensive Data Guide. Small Dole, UK: Hove Books, 1998. . Kimata, Hiroshi and Schneider, Jason "The Truth About SLR Viewfinders. Some are bright, some are light, but few are both", Popular Photography, Volume 58 Number 6; June 1994. . Kingslake, Rudolf A History of the Photographic Lens. San Diego, CA: Academic Press, 1989. . Kraszna-Krausz, A., chairman of editorial board, The Focal Encyclopedia of Photography. Revised Desk Edition, 1973 reprint. New York, NY: McGraw-Hill Book Co., 1969. Krause, Peter "50 Years of Kodachrome," Modern Photography, Volume 49, Number 10; October 1985. . 
Lea, Rudolph The Register of 35mm Single Lens Reflex Cameras: From 1936 to the Present. Second Edition. Hückelhoven, Germany: Rita Wittig Fachbuchverlag, 1993. . Lothrop, Eaton S. Jr., "Time Exposure: The first SLR? It all began with a small 'dark room'". Popular Photography, Volume 83 Number 1; January 1976. . Matanle, Ivor Collecting and Using Classic SLRs. First Paperback Edition. New York, NY: Thames and Hudson, 1997. . Ray, Sidney F. The Photographic Lens. Second revised edition. Oxford, UK: Focal Press/Butterworth-Heinemann, 1992. . Shell, Bob Canon Compendium: Handbook of the Canon System. Hove, UK: Hove Books, 1994. . Shull, Henry "Tough Exposures? Hit The Spot!! The best metering system for tricky lighting situations isn't in your camera—it's behind the eyepiece!" Modern Photography, Volume 51, Number 11; November 1987. . Spira, S. F.; Lothrop, Eaton S. Jr. and R. Spira, Jonathan The History of Photography as Seen Through the Spira Collection. New York, NY: Aperture, 2001 .The Japanese Historical Camera. 日本の歴史的カメラ (Nihon no rekishiteki kamera). 2nd ed. Tokyo: JCII Camera Museum, 2004. The (minimal) text is in both Japanese and English. Wade, John A Short History of the Camera. Watford, Hertfordshire, UK: Fountain Press/Argus Books Limited, 1979. . Wade, John The Collector's Guide to Classic Cameras: 1945–1985.'' Small Dole, UK: Hove Books, 1999. . External links Alpa Reflex Alpa of Switzerland Canon Camera Story 1955–1969 Canon Camera Story 1987–1991 Birth of New-Generation Autofocus SLR Camera, "EOS" Canon SLR cameras 1970–2000 article on Dpreview.com Contax (USA) Contax-info Kyocera press release on termination of production Leica Camera AG history Miranda Site Update History Nikon F History Olympus Camera History History of the Honeywell Pentax Spotmatic by Karen Nakamura. 
The Unofficial Pentax Collector's Starter Page Brief History of Pentax Cameras Soviet Start SLR Professional camera 1958–64 Stephen Rothery Photographer http://www.pentax-slr.com/PP SLR: 1948-1954 Single-lens reflex camera, History of the
60674039
https://en.wikipedia.org/wiki/Dorothy%20Stein
Dorothy Stein
Dorothy Josephine Del Bourgo Kellogg Stein (March 31, 1931 – March 16, 2019) was an early computer programmer, psychologist, author and social activist. Her activities often placed her on the cusp of, or ahead of, her time. She is best known for researching and writing the book Ada, which argued that Ada Lovelace was not the first computer programmer and did not have the mathematical ability to assist Charles Babbage as much as was believed. Early life She was born Dorothy Josephine del Bourgo in Newark, New Jersey. She was the first of two daughters of Jacob Del Bourgo, a civil engineer, and Charlotte Del Bourgo (née Styler). Jacob was a Sephardic Jew born to a family of pearl traders, and Charlotte was of Ashkenazi descent, having fled Eastern Europe as a youth. The two sisters were raised as culturally Jewish but not particularly devout, and Dorothy had fond memories of bacon as a special treat during the Great Depression. Jacob Del Bourgo was unable to find work in the United States for some time, and so moved the family to Venezuela. In the aftermath of World War II, engineers were in greater demand and employment discrimination against Jews declined, allowing the family to settle in New York. Stein graduated from high school early and attended Cornell University, earning her degree in Physics in 1951. Work on her second degree, involving experiments with a cloud chamber, was interrupted by meeting and eventually marrying Paul Kellogg, then completing his PhD in physics at Cornell. After a year in Copenhagen at the high energy physics institute led by Niels Bohr, the couple returned to America, where in 1955 Stein worked on one of the first computers, calculating missile trajectories, while her husband worked in nuclear physics and the new field of solar plasma. In 1956, they moved to Minnesota, where Paul became a professor in the physics department of the University of Minnesota. There Dorothy gave birth to their two sons, Kenneth in 1956 and David in 1959. 
Dorothy and Paul divorced in 1964, and two years later Dorothy married Burton Stein, a professor of Indian history. In 1968 Dorothy finished a PhD in child psychology that established, using dichotic listening techniques, that syllables are not phonemes, but are psychologically real (the precise implications of this remain unclear). Career The Steins moved to Hawaii, where Burt became a Professor at the University of Hawaii. During the rise of the feminist movement, Stein helped to establish a Women’s studies department there with Joan Abramson and Doris Ladd. When Burt retired from teaching in 1980, they moved to London, where Burt wrote histories of India. Stein became interested in the life of Ada, Countess of Lovelace, the only legitimate child of Lord Byron, who at the time was believed to have written the very first computer programs (the US Defense Department, which Stein used to work for, had just named the Ada programming language after her). Through assiduous work at the Bodleian library and elsewhere, Stein began to realize that Ada was more a Byronic heroine than a mathematician; she gambled, took drugs, probably had extra-marital liaisons, and certainly had only a feeble grasp of the mathematics behind the computer. In a set of papers and her book Ada, A Life and a Legacy, which is still highly controversial, Stein speculated that much of Lovelace's computer work was really ghost-written by Charles Babbage. Burton Stein died in 1996, but by that time, Stein had become fond of London, and continued to live in their flat. Concerned about population growth, she wrote another book, People Who Count, which argued that women would choose to have fewer children if they were given the freedom to use family planning. In later years she devoted herself to gardening, bookbinding, making clothes, and working for Oxfam. Dr. Stein began to suffer memory loss, and eventually moved into Nightingale’s in Clapham. Her decline became rapid in early 2019 and she died on March 16. 
Works 1985: Ada, A Life and a Legacy 1995: People Who Count References 1931 births 2019 deaths American computer scientists American women psychologists American psychologists Writers from Newark, New Jersey Jewish American scientists Cornell University alumni University of Hawaiʻi faculty Writers from London American expatriates in England 20th-century American women writers American women academics 21st-century American Jews 21st-century American women
54443528
https://en.wikipedia.org/wiki/FraudWatch%20International
FraudWatch International
FraudWatch International is an internet security organization, founded in 2003, that specializes mainly in online fraud protection and anti-phishing activities. The headquarters of this privately owned company are in Melbourne, Australia, and it has offices in London, Dubai and San Francisco. Its CEO is Trent Youl. The activities of the company include anti-phishing, protection against malware and online brand protection, offering Security as a Service to other companies. FraudWatch International is active in sponsoring and participating in congresses on cybercrime. It also sponsors the Anti-Phishing Working Group. The techniques that are used by FraudWatch International include: Anti-phishing techniques Anti-vishing techniques Anti-pharming techniques Takedown of fake domains Takedown of fake profiles on social media References External links Official Website of FraudWatch International Cybercrime Confidence tricks Identity theft Social engineering (computer security) Computer security organizations
20518502
https://en.wikipedia.org/wiki/1975%20Rose%20Bowl
1975 Rose Bowl
The 1975 Rose Bowl was the 61st edition of the college football bowl game, played at the Rose Bowl in Pasadena, California, on Wednesday, January 1. The fifth-ranked USC Trojans of the Pacific-8 Conference defeated the #3 Ohio State Buckeyes of the Big Ten Conference, in one of the most exciting games in the history of the event. After throwing a touchdown pass with two minutes remaining to draw within a point, USC quarterback Pat Haden passed to Shelton Diggs for a two-point conversion to take the lead. It gave the Trojans the Rose Bowl victory and the UPI coaches poll national title. This was the third consecutive year for these teams in the Rose Bowl: USC won in 1973, Ohio State in 1974. Teams Ohio State Buckeyes The defending Rose Bowl champs were the nation's top-ranked team for much of the season, until they were upset by Michigan State at East Lansing on November 9. Two weeks later, the Buckeyes earned the Rose Bowl berth with a victory over Michigan, when kicker Mike Lantry's last-second field goal attempt sailed wide. Ohio State was favored to win the Rose Bowl by six points. USC Trojans USC was upset by Arkansas in Little Rock in the season opener, then reeled off five straight wins before a tie at home against California. They won their final four games, the most dramatic being a season-ending win over #5 Notre Dame in which the Trojans trailed 24–0. Scoring summary First quarter USC – Chris Limahelu - 30-yard field goal. Second quarter OSU – Champ Henson - 2-yard run (PAT - Tom Klaban kick) Third quarter No scoring Fourth quarter USC – Jim Obradovich 9-yard pass from Pat Haden (PAT - Limahelu kick). OSU – Cornelius Greene 3-yard run (PAT - Klaban kick) OSU – Klaban - 32-yard field goal. USC – J. K. McKay 38-yard pass from Haden (PAT - Haden pass to Shelton Diggs) Aftermath Undefeated Oklahoma was the #1 team in the AP poll, but was on probation and ineligible for a bowl game. 
The UPI poll excluded teams on probation, and after the regular season, the UPI had Alabama first, followed by Ohio State, Michigan, USC, and Auburn. The Trojans' dramatic Rose Bowl win over Ohio State enabled them to leapfrog idle Michigan, and when Notre Dame upset Alabama in the Orange Bowl, 13–11, USC was voted #1 in the UPI poll. This game marked USC head coach John McKay's eighth and last appearance in the Rose Bowl and his fifth win. This was the last season in which the Big Ten and Pac-8 conferences allowed just one bowl team each, to the Rose Bowl. Michigan and twelfth-ranked Michigan State did not participate in this bowl season; USC was the only Pac-8 team in the top twenty of either final poll. Michigan missed the postseason for three straight seasons, despite ten wins each year and an overall record of . Game notes Third straight time the two teams meet in the Rose Bowl. Head coach John McKay wins fourth national title. McKay ends his Rose Bowl career with a 5–3 record, tying Howard Jones for victories. Anthony Davis injured and plays less than 1 quarter. Quarterback Pat Haden & split end John McKay, Jr. were named co-MVPs. USC kicker Chris Limahelu died of prostate cancer in 2010 at age 59. References Rose Bowl Rose Bowl Game Ohio State Buckeyes football bowl games USC Trojans football bowl games Rose Bowl January 1975 sports events in the United States
3026202
https://en.wikipedia.org/wiki/Microelectronics%20Education%20Programme
Microelectronics Education Programme
The UK government's Microelectronics Education Programme ran from 1980 to 1986. It was conceived and planned by a Labour government and set up under a Conservative government during Mrs Thatcher's era. Its aim was to explore how computers could be used in schools in the UK. This was a controversial time for Conservative school policies. The programme was administered by the Council for Educational Technology in London, but the directorate operated, unusually, from a semi-detached house on the Coach Lane Campus of the then Newcastle Polytechnic (now Northumbria University). Origins The Microelectronics Education Programme was developed by the Department of Education and Science when the Prime Minister at the time, Jim Callaghan, asked each government department to draw up an action plan to meet the challenge of new technologies. Whilst the prior programme, the National Development Programme in Computer Aided Learning, covered schools, colleges, universities and training establishments, MEP was specifically aimed at secondary schools in England, Northern Ireland and Wales (a primary school programme was added in 1982). Following a change of government in 1979, Keith Joseph as Education Secretary finally approved the proposal in 1980, and in March a four-year programme for schools, costing £9 million, was announced by the Under Secretary of State at the Department of Education and Science, Mr Neil MacFarlane. Central team The director of the programme was Richard Fothergill. By April 1981 he had set up a small team of people, operating from offices at Cheviot House in Newcastle Polytechnic. John Anderson was appointed Deputy, and the rest of the central team consisted of Bob Coates, Helen Hindess, Mike Bostock and Lynn Craig, later supported by Mike Page for Press and Media, Bill Broderick for International, and Alan Greenwell and Ralph Tabberer for Curriculum Development. 
The information collection and dissemination was carried out by the information officer, who used an early viewdata service (called Prestel) and email (called Telecom Gold) to disseminate news of materials and training opportunities. Each member of staff created correspondence (see Old Computers link below) on a handheld word processor, a Microwriter, designed by Cy Endfield. Strategy Richard Fothergill published MEP's strategy in April 1981, having been appointed in the previous November. It had a number of innovative ideas in it, including a wide definition of its work covering computer aided learning, computer studies, microelectronics and information handling, and a strong emphasis on regional collaboration. The aim of the programme was to help schools to prepare children for life in a society in which devices and systems based on microelectronics are commonplace and pervasive. Curriculum materials Educational materials were initially devised by teachers for teachers, financed by the Department of Education and Science of England, Northern Ireland and Wales. It was common to see written on various books and leaflets that the aims of the programme were to 'promote, within the school curriculum, the study of microelectronics and its effects, and to encourage the use of the technology as an aid to teaching and learning'. DTI Computer Scheme By 1982, the Department of Trade and Industry became involved and began to introduce computers in the secondary schools, later the primary schools. Teams of teachers, programmers and publishers worked hard to develop software to run on a variety of machines. The two most popular were Acorn Computers and Research Machines computers. The Sinclair ZX Spectrum was used in a variety of situations, very often for control projects, such as teaching children how traffic lights worked. Regional structure Fourteen regional information centres were set up around the UK to demonstrate materials to local teachers. 
There was one information officer, one director and a number of training coordinators per region. The focus for the training was split into four 'domains': the Computer as a Device (exploring and developing Computer Science as a subject); Communications and Information Systems (looking at the electronic office and developing a Business Studies theme); Electronics and Control Technology (developing devices and resources to support Science and Technology subjects); and Computer Based Learning (exploring how uses of technology could support teaching and learning right across the whole curriculum). Primary Project Originally conceived as a programme to develop secondary education, it was soon perceived that many primary schools were ready to adopt new methodologies. A National Primary Project was established, which developed a substantial amount of high-quality resources that were the basis for significant curriculum development. The young children, and many primary school teachers, were enthusiastic and used the computer as a tool. There was often only one computer per school, and it was on a trolley which could be moved to wherever it was required. Children were thus familiar with it as a tool, a resource, not as an item which they might find at home, as is the case today. Richard Fothergill predicted the computer would become pervasive in society. Closure The programme's closure was announced in June 1985, and a successor organisation, the Microelectronics Education Support Unit, was announced. The programme continued until 1986 and was formally evaluated by Her Majesty's Inspectorate in that year. HMI reported "The MEP years will be remembered by those directly involved, and by most of those on its periphery, as a time of creativity and fruitful development. There was a new found and remarkable enthusiasm for IT and its potential impact on all phases and many aspects of the curriculum." 
Whilst the Programme was running it attracted world attention and was highly commended. References External links Guardian obituary of Richard Fothergill Description of Microwriter http://old-computers.com/museum/computer.asp?c=558 BBC Micro and Professor Hopper http://www.educationengland.org.uk/history/chapter05.html http://www.naec.org.uk/organisations/the-microelectronics-education-programme/the-microelectronics-education-programme-strategy Computer science education in the United Kingdom Educational technology projects Governmental educational technology organizations Science and technology in Tyne and Wear United Kingdom educational programs
68843492
https://en.wikipedia.org/wiki/Mubashir%20Husain%20Rehmani
Mubashir Husain Rehmani
Mubashir Husain Rehmani (Urdu: مبشر حسین رحمانی) (born 23 February 1983 in Karachi, Pakistan) is a Pakistani researcher and computer scientist. His areas of work are computer networking, telecommunications, wireless communications and blockchain. He was recognised in 2020 and 2021 as one of the Highly Cited Researchers in computer science by Clarivate. Education and career He received his B.Engg. degree in Computer Systems Engineering from Mehran University of Engineering and Technology, an M.S. degree in Networks and Telecommunications from the University of Paris XI, Paris, France, and a PhD in Computer Science from Sorbonne University, France. He worked as a postdoctoral researcher at Waterford Institute of Technology. Dr Rehmani currently teaches in the Department of Computer Science at the Munster Technological University (MTU). He is a senior member of IEEE and an Editorial Board Member of Nature Scientific Reports. Prior to that, he was an editor of various journals. He was associated with COMSATS University Islamabad, Wah Cantt, for 5 years as an Associate Professor. Mubashir serves as an Area Editor (Wireless Communications) of IEEE Communications Surveys and Tutorials (the top-ranked journal in Telecommunications by Clarivate). Awards In 2021, Rehmani won the Clarivate Highly Cited Researcher award, recognizing him as being in the top 1% in the world in Cross-Field. In 2020, Rehmani won the Clarivate Highly Cited Researcher award, recognizing him as being in the top 1% in the world in Computer Science. The Editor-in-Chief of the Elsevier Journal of Network and Computer Applications awarded him the Best Survey Paper award in 2018 and a cash prize. In 2018 and 2017, Rehmani won the Publons Peer Review Award, placing him in the top 1% of reviewers in Computer Science. In 2017, Rehmani received the Outstanding Associate Editor award from IEEE Access. In 2015/2016, a Best Research Paper award was given to him, along with Ayaz Ahmad, Sadiq Ahmad, and Naveed Ul Hassan, by the Higher Education Commission (HEC), Government of Pakistan. 
In 2017, a Best Paper Award was given to him by the IEEE Communications Systems Integration and Modeling Technical Committee. In 2016 & 2017, a Research Productivity Award was given to him by the Pakistan Council for Science and Technology (PCST), Ministry of Science and Technology, Pakistan. In 2015, Rehmani won the Exemplary Editor Award of IEEE Communications Surveys and Tutorials, recognizing his efforts for furthering the objectives of the Society. Editorial Activities Area Editor IEEE Communications Surveys and Tutorials - 2018 to present Editorial Board Member NATURE Scientific Reports Associate Editor IEEE Communications Surveys and Tutorials - 2015 to 2018 IEEE Transactions on Green Communications and Networking - 2021 to present Elsevier Journal of Network and Computer Applications - 2015 to present Springer Wireless Networks Journal - 2015 to present Elsevier Future Generation Computer Systems - 2017 to present Associate Technical Editor IEEE Communications Magazine - 2014 to 2020 Teachers Rehmani's teachers included Mufti Muhammad Naeem Memon sahib of Hyderabad (who is the caliph of Hazrat Mufti Muhammad Taqi Usmani and Hazrat Maulana Muhammad Yusuf Ludhianvi shaheed). Bibliography Mubashir Rehmani is the author of one textbook and several edited books: TextBook Edited Books Islamic work Rehmani wrote several articles on Islam-related matters: One of his articles, "Scientific Research, Modern Education, and Madaris", was published in Bayyinat in Nov and Dec 2021 - Jamia Uloom-ul-Islamia. Another article, "عصرِ حاضر کی سائنسی تحقیق اور متعلقہ اسلامی احکام", was published by Darul Uloom Deoband. Media coverage Rehmani appeared on several TV shows and his story was covered in several print and online media outlets. 
BBC Urdu - 26 Nov 2021 SAMAA TV Naya Din Morning Show - 1 Dec 2020 Express News Expresso Morning Show - 7 Dec 2020 ARY News Channel - 9 pm, 5 Dec 2020 Jang newspaper - 7 Dec 2020 Dawn newspaper - 14 Jan 2021 TheCork.ie website - 22 Nov 2020 The Echo newspaper - 11 Feb 2021 SAMAA TV English - 24 Nov 2021 Silicon Republic Website, Ireland - 18 Nov 2020 Mehran University of Engineering and Technology Alumni Portal - 25 Nov 2020 IEEE United Kingdom and Ireland Section - 6 Dec 2020 References 1983 births Living people Academics from Karachi Pakistani computer scientists Mehran University of Engineering & Technology alumni Sorbonne Paris North University alumni
41048011
https://en.wikipedia.org/wiki/SDC%20Verifier
SDC Verifier
SDC Verifier (Structural Design Codes Verifier) is a commercial finite element analysis post-processor with a calculation core for checking structures according to different standards, either predefined or self-programmed, and for generating a final report containing all checks. Its goal is to automate routine work and speed up the verification of engineering projects. It works as an add-on for the popular FEA packages Ansys, Femap and Simcenter 3D. It can apply complicated loads such as buoyancy, tank ballast and wind, and provides automatic recognition of joints, welds and panels. Implemented Standards The rules for popular design standards are predefined in SDC Verifier. The open structure of a standard makes all checks customizable. A custom standard can be saved and reused for other models, password protected and added to the custom library, and shared with other users. ABS 2004: Guide for buckling and ultimate strength assessment for offshore structures; ABS 2014: Rules for building and classing (floating production installations); AISC ASD 9th edition (July 1989); AISC 360–10, 14th edition (2010); API RP 2A LRFD, 1st edition (1993); API RP 2A WSD 21st edition (2007); DIN 15018 (1984); DNV OS C101 LRFD (April 2011) DNV OS C201 WSD (April 2011) DNV Classification Notes NO. 30.1 (Buckling Strength Analysis, July 1995); DNV RP C201: Buckling Strength of Plated Structures (Recommended Practice, October 2010); FEM 1.001, 3rd edition (1998); Eurocode 3, Part 1-9: Fatigue (2006); Eurocode 3, Part 1-1: Member checks (2005); Eurocode 3, Part 1-8: Weld Strength; ISO 19902, 1st edition (2007); Norsok N004, Rev. 3 (2013); Alternative software GENIE Midas nCode SACS SkyCiv References External links Femap and SDC Verifier Siemens and SDC Verifier SDC Verifier Presentation SDC Verifier new version Computer-aided engineering software Finite element software Product lifecycle management Siemens software products
43258186
https://en.wikipedia.org/wiki/William%20Hertling
William Hertling
William Hertling is a science fiction writer and programmer. He was a co-founder and Director of Engineering at Tripwire, and a web strategist and software developer at Hewlett-Packard, where he obtained numerous software engineering patents in the areas of networking protocols, printing, and web applications. Writing Hertling began publishing science fiction in 2011 with Avogadro Corp: The Singularity is Closer than it Appears. Influenced by Ray Kurzweil and Charles Stross, his work examines the emergence of strong artificial intelligence and how humankind reacts to and coexists with AI. The resulting Singularity series has received critical acclaim from Wired and KurzweilAI, as well as notable people in the technology industry, including Brad Feld, Harper Reed, Ben Huh, Amber Case, and John Walker. The first novel, Avogadro Corp, a near-term technothriller, is about the modification of an email language optimization program that gives the software a survival instinct, accidentally creating a self-motivated artificial intelligence. His second novel, A.I. Apocalypse, set ten years later, explores the creation of strong artificial intelligence through software evolution and the resulting organizational principles and values of an AI society. The third book, The Last Firewall, again set ten years further into the future, is a cyberpunk novel examining post-humanism, the effects of social class on AI and humans, and technological unemployment. The fourth novel, The Turing Exception, is set in a world where humans and AI have co-existed peacefully until 2043, when a nanotech event seen as a terrorist act by AI results in the destruction of Miami and large controls placed on AI; it explores the unfolding events revolving around XOR - an AI splinter group with the goal of wresting control of Earth from humans. Throughout all four novels, the reaction of humans to strong AI, and the coexistence of both groups, are recurring themes. 
He published his first children's novel in 2014, The Case of the Wilted Broccoli, a detective novel about three elementary school students who solve a food supply chain mystery. Awards and Expertise A self-published author, Hertling is a frequent presenter at technical, writing, and science fiction conventions, where he talks about the intersection of science fiction and technology, self-publishing, book marketing, technology, and innovation. He describes his success with self-publishing in Indie and Small Press Book Marketing, his non-fiction manual for marketing books. He was nominated for the Prometheus Award for Best Novel for A.I. Apocalypse, won Foreword Review's Science Fiction Book of the Year in 2011 for Avogadro Corp, and won Independent Publisher's IPPY Bronze medal for The Last Firewall. Bibliography Avogadro Corp: The Singularity is Closer than it Appears (2011) A.I. Apocalypse (2012) Indie and Small Press Book Marketing (2012) The Last Firewall (2013) The Case of the Wilted Broccoli (2014) The Turing Exception (2015) Kill Process (2016) Kill Switch (2018) References External links American science fiction writers Living people University of Arizona alumni 1970 births Writers from Portland, Oregon American male novelists Novelists from Oregon
19343974
https://en.wikipedia.org/wiki/Roberto%20Di%20Cosmo
Roberto Di Cosmo
Roberto Di Cosmo is a computer scientist and director of IRILL, the Innovation and research initiative for free software (). He graduated from the Scuola Normale Superiore di Pisa and obtained a PhD from the University of Pisa, before becoming a tenured professor at the École normale supérieure in Paris, then professor at Paris 7 University. Since 2010, he has been director of the IRILL. Di Cosmo was an early member of AFUL, the association of the French community of Linux and Free Software users, and is also known for his support of the Open Source Software movement. He became famous after releasing a book criticizing Microsoft in 1998, entitled Piège dans le cyberespace (Hijacking the World, the Dark Side of Microsoft). Co-written with the journalist Dominique Nora, this book is now available under the BY-NC-ND Creative Commons licence. His most famous contribution to Linux is the first "live" Linux distribution (2000 to 2002), demolinux, which made it possible to boot Linux from a CD-ROM without installing the entire distribution. He was one of the founders, and the first president, of the Open Source Thematic Group within the Systematic innovation cluster. Di Cosmo is a member of the Board of Trustees at the IMDEA Software Institute. On June 30, 2016, Inria announced the creation of the Software Heritage initiative, which was conceived and is directed by Roberto Di Cosmo. References External links Own Page Hijacking the world free under licence CC-BY-NC-ND. Printed: Calmann-Levy 1998, Demolinux Interview of Roberto Di Cosmo in I-CIO, July 2009 Introducing Software Heritage, the Library of Alexandria for Code, Slate, July 2016 Italian computer scientists Living people Year of birth missing (living people)
1965451
https://en.wikipedia.org/wiki/Optimized%20Systems%20Software
Optimized Systems Software
Optimized Systems Software (OSS) was a company that produced disk operating systems, programming languages with integrated development environments, and applications primarily for the Atari 8-bit family of home computers. OSS was best known for their enhanced versions of Atari BASIC and the MAC/65 assembler, both of which are much faster than Atari's products, and the Action! programming language. OSS also sold some products for the Apple II. OSS transitioned to 16-bit platforms with Personal Pascal for the Atari ST and Personal Prolog for Macintosh (which was also advertised for the Atari ST, but may not have been released). OSS was not as significant in those markets. History Optimized Systems Software was formed in early 1981 by Bill Wilkinson, Mike Peters, Paul Laughton, and Kathleen O'Brien. Laughton, the primary author of Atari BASIC, was still employed by Atari, Inc. at the time, and had permission to be involved with OSS from his manager. O'Brien wrote the Atari Assembler Editor for Atari. Laughton and O'Brien (married) were not as involved with the company and were bought out by Peters and Wilkinson. OSS purchased Atari BASIC, Atari DOS, and the Atari Assembler Editor product from Shepardson Microsystems who had concluded that their BASIC and DOS products were not viable. The new company enhanced the products, renaming them OS/A+ (the Disk Operating System), BASIC A+ (a disk-based language), and EASMD (an update to the Assembler Editor). OSS continued to work with Atari (who had previously contracted with SMI) on enhanced products, most of which never reached the market. OSS debuted at the West Coast Computer Faire in March 1981. The products they released over the next several years became respected among Atari programmers, particularly the MAC/65 assembler, the Action! programming language, and BASIC XL. In a 1984 interview, Bill Wilkinson said the company consisted of 15 people. 
In January 1988, OSS merged with ICD (the makers of SpartaDOS and various Atari computer hardware add-ons). In 1994, Fine Tooned Engineering obtained limited rights to ICD's 8-bit products before disappearing. Disk Operating Systems OS/A+ Atari DOS 2.0S consisted of two portions: a memory-resident portion that facilitated access to disk files by programs, and a disk-resident portion providing menu-driven utilities to format, copy, delete, rename, and otherwise manipulate files on Atari's 810 disk drive. The menu system was too large to keep memory-resident, and the necessity to reload the menu system after every program was frustrating to many users. OS/A+ 2.0, 2.1 was a disk-based replacement for Atari DOS and the Apple II DOS. It replaced the menu-driven utilities with a compact command-line approach similar to CP/M (and later, MS-DOS). The command line was small enough to remain in memory with most applications, removing the need for the dreaded post-program reload. When first introduced at the West Coast Computer Faire, the program was named CP/A, but a lawyer from Digital Research (owners of CP/M) visited the booth and the name was changed; OSS couldn't have afforded even a court filing fee. OS/A+ 4.1 OSS extended the successful OS/A+ product with additional capabilities for version 4, many of which were arguably ahead of their time. For example, the strict "8.3" naming scheme (eight alphanumeric characters with a three-character extension) was replaced by "long" filenames, similar to the Microsoft DOS transition to VFAT in 1995. However, unlike VFAT, OS/A+ 4.1 disks were not backward compatible with earlier systems; Atari DOS or OS/A+ 2.1 could not read disks formatted by OS/A+ 4.1. The memory footprint was larger as well, resulting in insufficient memory to run some popular applications. As a result of these drawbacks, OS/A+ 4.1 did not achieve the market penetration of the earlier product. 
OSS did reissue OS/A+ 4.1 for a brief period when they decided not to modify DOS XL for double-sided disk support. DOS XL DOS XL was designed to replace OS/A+. It included support for single- and double-density disk drives, used the command prompt of OS/A+ but also included a menu program, featured extensions that took advantage of unused memory space in Atari XL/XE computers and OSS supercartridges, and included support for Indus GT Synchromesh. Due to lack of demand, and Atari working on a new version of DOS, OSS decided to halt development of DOS XL 4 and reissue OS/A+ version 4.1. BASIC The team that developed Atari BASIC while at Shepardson Microsystems developed a series of three increasingly sophisticated BASIC interpreters at OSS. BASIC A+ Atari BASIC was designed to fit in an 8K cartridge, with an optional cartridge for the second slot of the Atari 800 adding additional capability. The second cartridge was never produced. Instead, OSS produced the disk-based BASIC A Plus (or BASIC A+), which is compatible with Atari BASIC, corrects several bugs, and adds many new features. It includes PRINT USING (for formatted output), trace and debug enhancements, direct DOS commands, and explicit support for the graphics hardware including player/missile graphics. Because BASIC A+ had to be purchased, programs developed using its extended features could not be shared with people who did not own the interpreter. BASIC XL BASIC XL is a bank-selected cartridge version of the language that replaced BASIC A+. It fixes bugs and has even more features. The BASIC XL Toolkit contains additional code and examples for use with BASIC XL and a runtime package for redistribution. A significant change in BASIC XL is the handling of line number lookups in GOTO/GOSUB and FOR...NEXT loops. In Atari BASIC, any GOTO searches the entire program for the provided line number, and FOR...NEXT loops use the same code. Microsoft BASIC simply jumps to a FOR statement via its address. 
The BASIC XL FAST command replaces constant targets of GOTO/GOSUB/NEXT with addresses. This gives a huge performance boost, making loops run as fast as in Microsoft BASIC, and the program as a whole even faster. The downside is that an address becomes invalid if the program is edited during runtime, preventing it from being CONTinued, unlike Atari BASIC, which generally allows this after any edit. Antic in 1984 stated that "BASIC XL is the fastest and most powerful version of BASIC available for Atari computers", with "exceptional" documentation. The magazine concluded that "This is the language that should be built into Atari computers. Is anyone at Atari listening?" BASIC XE BASIC XE is an enhanced version of the BASIC XL bank-selected cartridge, with additional functions and high-speed math routines. Because it requires 64KB, it only runs on XL/XE systems. A runtime package was not released; the BASIC XL runtime can be used, but is restricted to XL functions. Assemblers EASMD EASMD (Edit/ASseMble/Debug) is the first editor/assembler from OSS. Based on the original Atari Assembler Editor, it was released in 1981 on disk. It was superseded by MAC/65. MAC/65 MAC/65 is a 6502 editor and assembler originally released on disk in 1982, then on a bank-switched "supercartridge" in 1983 which includes an integrated debugger (DDT). Like Atari BASIC, MAC/65 uses line-numbered source code and tokenizes each line as it is entered. It is significantly faster than Atari's assemblers. The MAC/65 Toolkit disk contains additional code and examples. BUG/65 BUG/65 is a machine language debugger. It was initially included with MAC/65, but the cartridge-based version of the assembler added its own debugger, DDT. BUG/65 was later added to DOS XL. Other languages Action! A cartridge-based development system for a readable ALGOL-like language that compiles to efficient 6502 code. Action! 
combines a full-screen editor with a compiler that generates code directly to memory without involving disk access. The language found a niche for being over a hundred times faster than Atari BASIC, but much easier to program in than assembly language. Compiled Action! programs require the cartridge to be present, because standard library functions are on the cartridge. The separately available Action! Run-Time Package overcomes this limitation and allows distribution of Action!-compiled projects. The Action! Toolkit (originally called the Programmer's Aid Disk, or PAD) contains additional code and examples for use with the Action! language. C/65 C/65 is a compiler developed by LightSpeed Software for a subset of the C programming language. C/65 outputs assembly source code. An assembler like MAC/65 is needed to create an executable file. Tiny C Tiny C, stylized as tiny-c, is an interpreter for a subset of the C programming language; it was developed by Tiny C Associates. Personal Pascal A one-pass, machine code generating compiler for the Pascal language developed by J. Lohse for the Atari ST and released by OSS in 1987. It came with a 500+ page manual. Applications The Writer's Tool A word processing application available in a bank-selected cartridge and a double-sided disk (master disk on one side, dictionary disk on the other side). It was developed by Madison Micro and published by OSS in 1984. According to Bill Wilkinson, OSS was already building a word processor, but stopped when The Writer's Tool was submitted. SpeedRead+ SpeedRead+ is a speed reading tutor developed for the Atari 8-bit and Apple II computers. Sales According to Bill Wilkinson, OSS sold about 12,000 copies of Basic XL before the ICD merger. Basic XL outsold Action! by about 2.5 or 3 to 1. MAC/65 outsold Action! by about 1.5 to 1. Basic XE sold poorly, a money-loser. Personal Pascal sold over 10,000 copies. References Notes Wilkinson, Bill (1983). The Atari BASIC Source Book. Compute! Books. 
A User's Guide and Reference Manual for DOS XL 2.30, 1983 OSS Newsletter - Spring 1984 OSS Newsletter - October 1984 External links Dan's tribute to OSS — A site dedicated to the products produced for Atari 8-bit computers by Optimized Systems Software. The Atari 400/800 and OSS Antic Vol. 4, No. 9 - Jan 1986 Basic XE from O.S.S. (Product Review) Defunct software companies of the United States Atari 8-bit family Software companies based in the San Francisco Bay Area Companies based in Cupertino, California American companies established in 1981 Software companies established in 1981 Software companies disestablished in 1988 1981 establishments in California 1988 disestablishments in California Defunct companies based in the San Francisco Bay Area
10993498
https://en.wikipedia.org/wiki/Select%20%28Unix%29
Select (Unix)
select is a system call and application programming interface (API) in Unix-like and POSIX-compliant operating systems for examining the status of file descriptors of open input/output channels. The select system call is similar to the poll facility introduced in UNIX System V and later operating systems. However, in the face of the C10k problem, both select and poll have been superseded by the likes of kqueue, epoll, /dev/poll and I/O completion ports. One common use of select outside of its stated use of waiting on filehandles is to implement a portable sub-second sleep. This can be achieved by passing NULL for all three fd_set arguments, and the duration of the desired sleep as the timeout argument. In the C programming language, the select system call is declared in the header file sys/select.h or unistd.h, and has the following syntax: int select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *errorfds, struct timeval *timeout); fd_set type arguments may be manipulated with four utility macros: FD_SET(), FD_CLR(), FD_ZERO(), and FD_ISSET(). select returns the total number of bits set in readfds, writefds and errorfds, or zero if the timeout expired, and -1 on error. The sets of file descriptors used in select are of finite size, which depends on the operating system. The newer system call poll provides a more flexible solution. 
Example 

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <sys/select.h>
#include <fcntl.h>
#include <unistd.h>
#include <err.h>
#include <errno.h>

#define PORT "9421"

/* function prototypes */
void die(const char*);

int main(int argc, char **argv)
{
    int sockfd, new, maxfd, on = 1, nready, i;
    struct addrinfo *res0, *res, hints;
    char buffer[BUFSIZ];
    fd_set master, readfds;
    int error;
    ssize_t nbytes;

    (void)memset(&hints, '\0', sizeof(struct addrinfo));
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_protocol = IPPROTO_TCP;
    hints.ai_flags = AI_PASSIVE;

    if (0 != (error = getaddrinfo(NULL, PORT, &hints, &res0)))
        errx(EXIT_FAILURE, "%s", gai_strerror(error));

    for (res = res0; res; res = res->ai_next) {
        if (-1 == (sockfd = socket(res->ai_family, res->ai_socktype, res->ai_protocol))) {
            perror("socket()");
            continue;
        }

        if (-1 == (setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, (char*)&on, sizeof(int)))) {
            perror("setsockopt()");
            continue;
        }

        if (-1 == (bind(sockfd, res->ai_addr, res->ai_addrlen))) {
            perror("bind()");
            continue;
        }

        break;
    }

    if (-1 == sockfd)
        exit(EXIT_FAILURE);

    freeaddrinfo(res0);

    if (-1 == (listen(sockfd, 32)))
        die("listen()");

    /* F_SETFL (not F_SETFD) puts the socket into non-blocking mode */
    if (-1 == (fcntl(sockfd, F_SETFL, O_NONBLOCK)))
        die("fcntl()");

    FD_ZERO(&master);
    FD_ZERO(&readfds);
    FD_SET(sockfd, &master);
    maxfd = sockfd;

    while (1) {
        memcpy(&readfds, &master, sizeof(master));

        (void)printf("running select()\n");

        if (-1 == (nready = select(maxfd + 1, &readfds, NULL, NULL, NULL)))
            die("select()");

        (void)printf("Number of ready descriptors: %d\n", nready);

        for (i = 0; i <= maxfd && nready > 0; i++) {
            if (FD_ISSET(i, &readfds)) {
                nready--;

                if (i == sockfd) {
                    (void)printf("Trying to accept() new connection(s)\n");

                    if (-1 == (new = accept(sockfd, NULL, NULL))) {
                        if (EWOULDBLOCK != errno)
                            die("accept()");

                        break;
                    } else {
                        if (-1 == (fcntl(new, F_SETFL, O_NONBLOCK)))
                            die("fcntl()");

                        FD_SET(new, &master);

                        if (maxfd < new)
                            maxfd = new;
                    }
                } else {
                    (void)printf("recv() data from one of descriptors(s)\n");

                    /* leave room for the terminating '\0' below */
                    nbytes = recv(i, buffer, sizeof(buffer) - 1, 0);
                    if (nbytes <= 0) {
                        if (EWOULDBLOCK != errno)
                            die("recv()");

                        break;
                    }

                    buffer[nbytes] = '\0';
                    printf("%s", buffer);
                    (void)printf("%zi bytes received.\n", nbytes);

                    close(i);
                    FD_CLR(i, &master);
                }
            }
        }
    }

    return 0;
}

void die(const char *msg)
{
    perror(msg);
    exit(EXIT_FAILURE);
}

See also Berkeley sockets Polling epoll kqueue Input/output completion port (IOCP) References External links C POSIX library Events (computing) System calls Articles with example C code
4214
https://en.wikipedia.org/wiki/Bioinformatics
Bioinformatics
Bioinformatics () is an interdisciplinary field that develops methods and software tools for understanding biological data, in particular when the data sets are large and complex. As an interdisciplinary field of science, bioinformatics combines biology, chemistry, physics, computer science, information engineering, mathematics and statistics to analyze and interpret biological data. Bioinformatics has been used for in silico analyses of biological queries using mathematical and statistical techniques. Bioinformatics includes biological studies that use computer programming as part of their methodology, as well as specific analysis "pipelines" that are repeatedly used, particularly in the field of genomics. Common uses of bioinformatics include the identification of candidate genes and single nucleotide polymorphisms (SNPs). Often, such identification is made with the aim of better understanding the genetic basis of disease, unique adaptations, desirable properties (esp. in agricultural species), or differences between populations. In a less formal way, bioinformatics also tries to understand the organizational principles within nucleic acid and protein sequences. Image and signal processing allow extraction of useful results from large amounts of raw data. In the field of genetics, it aids in sequencing and annotating genomes and their observed mutations. It plays a role in the text mining of biological literature and the development of biological and gene ontologies to organize and query biological data. It also plays a role in the analysis of gene and protein expression and regulation. Bioinformatics tools aid in comparing, analyzing and interpreting genetic and genomic data and more generally in the understanding of evolutionary aspects of molecular biology. At a more integrative level, it helps analyze and catalogue the biological pathways and networks that are an important part of systems biology. 
In structural biology, it aids in the simulation and modeling of DNA, RNA, proteins as well as biomolecular interactions. History Historically, the term bioinformatics did not mean what it means today. Paulien Hogeweg and Ben Hesper coined it in 1970 to refer to the study of information processes in biotic systems. This definition placed bioinformatics as a field parallel to biochemistry (the study of chemical processes in biological systems). Sequences There has been a tremendous advance in speed and cost reduction since the completion of the Human Genome Project, with some labs able to sequence over 100,000 billion bases each year, and a full genome can be sequenced for a few thousand dollars. Computers became essential in molecular biology when protein sequences became available after Frederick Sanger determined the sequence of insulin in the early 1950s. Comparing multiple sequences manually turned out to be impractical. A pioneer in the field was Margaret Oakley Dayhoff. She compiled one of the first protein sequence databases, initially published as books and pioneered methods of sequence alignment and molecular evolution. Another early contributor to bioinformatics was Elvin A. Kabat, who pioneered biological sequence analysis in 1970 with his comprehensive volumes of antibody sequences released with Tai Te Wu between 1980 and 1991. In the 1970s, new techniques for sequencing DNA were applied to bacteriophage MS2 and øX174, and the extended nucleotide sequences were then parsed with informational and statistical algorithms. These studies illustrated that well known features, such as the coding segments and the triplet code, are revealed in straightforward statistical analyses and were thus proof of the concept that bioinformatics would be insightful. Goals To study how normal cellular activities are altered in different disease states, the biological data must be combined to form a comprehensive picture of these activities. 
Therefore, the field of bioinformatics has evolved such that the most pressing task now involves the analysis and interpretation of various types of data, including nucleotide and amino acid sequences, protein domains, and protein structures. The actual process of analyzing and interpreting data is referred to as computational biology. Important sub-disciplines within bioinformatics and computational biology include:

- Development and implementation of computer programs that enable efficient access to, management of, and use of various types of information.
- Development of new algorithms and statistical measures that assess relationships among members of large data sets. For example, there are methods to locate a gene within a sequence, to predict protein structure and/or function, and to cluster protein sequences into families of related sequences.

The primary goal of bioinformatics is to increase the understanding of biological processes. What sets it apart from other approaches, however, is its focus on developing and applying computationally intensive techniques to achieve this goal. Examples include pattern recognition, data mining, machine learning algorithms, and visualization. Major research efforts in the field include sequence alignment, gene finding, genome assembly, drug design, drug discovery, protein structure alignment, protein structure prediction, prediction of gene expression and protein–protein interactions, genome-wide association studies, and the modeling of evolution and cell division/mitosis. Bioinformatics now entails the creation and advancement of databases, algorithms, computational and statistical techniques, and theory to solve formal and practical problems arising from the management and analysis of biological data.
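One of the example tasks above, locating a gene within a sequence, can be illustrated with a minimal sketch: scanning a DNA string for open reading frames (ORFs) that begin with a start codon and end with a stop codon. This is a toy illustration (forward strand only, simplified thresholds), not a production gene finder.

```python
# Toy ORF scan: forward strand only, one start codon, standard stop codons.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(dna, min_codons=2):
    """Return (start, end) index pairs of ORFs on the forward strand."""
    orfs = []
    for frame in range(3):                      # check all three reading frames
        start = None
        for i in range(frame, len(dna) - 2, 3):
            codon = dna[i:i + 3]
            if codon == "ATG" and start is None:
                start = i                        # open an ORF at the start codon
            elif codon in STOP_CODONS and start is not None:
                if (i - start) // 3 >= min_codons:
                    orfs.append((start, i + 3))  # include the stop codon
                start = None
    return orfs

# Example: one short ORF, ATG AAA TTT TAA, embedded in flanking sequence
print(find_orfs("CCATGAAATTTTAACC"))  # → [(2, 14)]
```

Real gene finders additionally model the reverse strand, codon usage statistics, and (in eukaryotes) intron/exon structure.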
Over the past few decades, rapid developments in genomic and other molecular research technologies, combined with developments in information technologies, have produced a tremendous amount of information related to molecular biology. Bioinformatics is the name given to the mathematical and computing approaches used to glean understanding of biological processes. Common activities in bioinformatics include mapping and analyzing DNA and protein sequences, aligning DNA and protein sequences to compare them, and creating and viewing 3-D models of protein structures.

Relation to other fields

Bioinformatics is a field similar to but distinct from biological computation, while it is often considered synonymous with computational biology. Biological computation uses bioengineering and biology to build biological computers, whereas bioinformatics uses computation to better understand biology. Bioinformatics and computational biology involve the analysis of biological data, particularly DNA, RNA, and protein sequences. The field of bioinformatics experienced explosive growth starting in the mid-1990s, driven largely by the Human Genome Project and by rapid advances in DNA sequencing technology. Analyzing biological data to produce meaningful information involves writing and running software programs that use algorithms from graph theory, artificial intelligence, soft computing, data mining, image processing, and computer simulation. The algorithms in turn depend on theoretical foundations such as discrete mathematics, control theory, system theory, information theory, and statistics.

Sequence analysis

Since the phage ΦX174 was sequenced in 1977, the DNA sequences of thousands of organisms have been decoded and stored in databases. This sequence information is analyzed to determine genes that encode proteins, RNA genes, regulatory sequences, structural motifs, and repetitive sequences.
A comparison of genes within a species or between different species can show similarities between protein functions, or relations between species (the use of molecular systematics to construct phylogenetic trees). With the growing amount of data, it long ago became impractical to analyze DNA sequences manually. Computer programs such as BLAST are used routinely to search sequence databases; as of 2008, these covered more than 260,000 organisms, containing over 190 billion nucleotides.

DNA sequencing

Before sequences can be analyzed, they have to be obtained from a data storage bank such as GenBank. DNA sequencing is still a non-trivial problem as the raw data may be noisy or afflicted by weak signals. Algorithms have been developed for base calling for the various experimental approaches to DNA sequencing.

Sequence assembly

Most DNA sequencing techniques produce short fragments of sequence that need to be assembled to obtain complete gene or genome sequences. The so-called shotgun sequencing technique (which was used, for example, by The Institute for Genomic Research (TIGR) to sequence the first bacterial genome, Haemophilus influenzae) generates the sequences of many thousands of small DNA fragments (ranging from 35 to 900 nucleotides long, depending on the sequencing technology). The ends of these fragments overlap and, when aligned properly by a genome assembly program, can be used to reconstruct the complete genome. Shotgun sequencing yields sequence data quickly, but the task of assembling the fragments can be quite complicated for larger genomes. For a genome as large as the human genome, it may take many days of CPU time on large-memory, multiprocessor computers to assemble the fragments, and the resulting assembly usually contains numerous gaps that must be filled in later. Shotgun sequencing is the method of choice for virtually all genomes sequenced today, and genome assembly algorithms are a critical area of bioinformatics research.
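The overlap-and-merge idea at the heart of shotgun assembly can be sketched with a tiny greedy assembler: repeatedly merge the pair of reads with the longest suffix-to-prefix overlap. This is a deliberately naive sketch; real assemblers use overlap or de Bruijn graphs and must handle sequencing errors and repeats.

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of `a` that matches a prefix of `b`."""
    best = 0
    for k in range(min_len, min(len(a), len(b)) + 1):
        if a[-k:] == b[:k]:
            best = k
    return best

def greedy_assemble(reads):
    """Repeatedly merge the pair of reads with the largest overlap."""
    reads = list(reads)
    while len(reads) > 1:
        best = (0, 0, 1)  # (overlap length, index i, index j)
        for i in range(len(reads)):
            for j in range(len(reads)):
                if i != j:
                    k = overlap(reads[i], reads[j])
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        if k == 0:
            break  # no overlaps left; remaining contigs stay separate
        merged = reads[i] + reads[j][k:]
        reads = [r for n, r in enumerate(reads) if n not in (i, j)] + [merged]
    return reads

# Three overlapping reads reconstruct one contig
print(greedy_assemble(["AGGTC", "GTCAA", "CAATG"]))  # → ['AGGTCAATG']
```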
Genome annotation

In the context of genomics, annotation is the process of marking the genes and other biological features in a DNA sequence. This process needs to be automated because most genomes are too large to annotate by hand, not to mention the desire to annotate as many genomes as possible, as the rate of sequencing has ceased to pose a bottleneck. Annotation is made possible by the fact that genes have recognisable start and stop regions, although the exact sequence found in these regions can vary between genes. The first description of a comprehensive genome annotation system was published in 1995 by the team at The Institute for Genomic Research that performed the first complete sequencing and analysis of the genome of a free-living organism, the bacterium Haemophilus influenzae. Owen White designed and built a software system to identify the genes encoding all proteins, transfer RNAs, and ribosomal RNAs (and other sites), and to make initial functional assignments. Most current genome annotation systems work similarly, but the programs available for analysis of genomic DNA, such as the GeneMark program trained and used to find protein-coding genes in Haemophilus influenzae, are constantly changing and improving. Following the goals that the Human Genome Project left to achieve after its closure in 2003, a new project developed by the National Human Genome Research Institute in the U.S. appeared. The so-called ENCODE project is a collaborative data collection of the functional elements of the human genome that uses next-generation DNA-sequencing technologies and genomic tiling arrays, technologies able to automatically generate large amounts of data at a dramatically reduced per-base cost but with the same accuracy (base call error) and fidelity (assembly error).

Gene function prediction

While genome annotation is primarily based on sequence similarity (and thus homology), other properties of sequences can be used to predict the function of genes.
In fact, most gene function prediction methods focus on protein sequences as they are more informative and more feature-rich. For instance, the distribution of hydrophobic amino acids predicts transmembrane segments in proteins. However, protein function prediction can also use external information such as gene (or protein) expression data, protein structure, or protein–protein interactions.

Computational evolutionary biology

Evolutionary biology is the study of the origin and descent of species, as well as their change over time. Informatics has assisted evolutionary biologists by enabling researchers to:

- trace the evolution of a large number of organisms by measuring changes in their DNA, rather than through physical taxonomy or physiological observations alone,
- compare entire genomes, which permits the study of more complex evolutionary events, such as gene duplication, horizontal gene transfer, and the prediction of factors important in bacterial speciation,
- build complex computational population genetics models to predict the outcome of the system over time, and
- track and share information on an increasingly large number of species and organisms.

Future work endeavours to reconstruct the now more complex tree of life. The area of research within computer science that uses genetic algorithms is sometimes confused with computational evolutionary biology, but the two areas are not necessarily related.

Comparative genomics

The core of comparative genome analysis is the establishment of the correspondence between genes (orthology analysis) or other genomic features in different organisms. It is these intergenomic maps that make it possible to trace the evolutionary processes responsible for the divergence of two genomes. A multitude of evolutionary events acting at various organizational levels shape genome evolution. At the lowest level, point mutations affect individual nucleotides.
At a higher level, large chromosomal segments undergo duplication, lateral transfer, inversion, transposition, deletion and insertion. Ultimately, whole genomes are involved in processes of hybridization, polyploidization and endosymbiosis, often leading to rapid speciation. The complexity of genome evolution poses many exciting challenges to developers of mathematical models and algorithms, who have recourse to a spectrum of algorithmic, statistical and mathematical techniques, ranging from exact, heuristic, fixed-parameter and approximation algorithms for problems based on parsimony models to Markov chain Monte Carlo algorithms for Bayesian analysis of problems based on probabilistic models. Many of these studies are based on the detection of sequence homology to assign sequences to protein families.

Pan genomics

Pan genomics is a concept introduced in 2005 by Tettelin and Medini which eventually took root in bioinformatics. The pan genome is the complete gene repertoire of a particular taxonomic group: although initially applied to closely related strains of a species, it can be applied to a larger context like genus, phylum, etc. It is divided into two parts: the core genome, the set of genes common to all the genomes under study (these are often housekeeping genes vital for survival), and the dispensable/flexible genome, the set of genes present in only one or some of the genomes under study. The bioinformatics tool BPGA can be used to characterize the pan genome of bacterial species.

Genetics of disease

With the advent of next-generation sequencing we are obtaining enough sequence data to map the genes of complex diseases such as infertility, breast cancer or Alzheimer's disease. Genome-wide association studies are a useful approach to pinpoint the mutations responsible for such complex diseases. Through these studies, thousands of DNA variants have been identified that are associated with similar diseases and traits.
Furthermore, the possibility for genes to be used in prognosis, diagnosis or treatment is one of the most essential applications. Many studies discuss both the promising ways to choose the genes to be used and the problems and pitfalls of using genes to predict disease presence or prognosis.

Analysis of mutations in cancer

In cancer, the genomes of affected cells are rearranged in complex or even unpredictable ways. Massive sequencing efforts are used to identify previously unknown point mutations in a variety of genes in cancer. Bioinformaticians continue to produce specialized automated systems to manage the sheer volume of sequence data produced, and they create new algorithms and software to compare the sequencing results to the growing collection of human genome sequences and germline polymorphisms. New physical detection technologies are employed, such as oligonucleotide microarrays to identify chromosomal gains and losses (called comparative genomic hybridization), and single-nucleotide polymorphism arrays to detect known point mutations. These detection methods simultaneously measure several hundred thousand sites throughout the genome, and when used in high throughput to measure thousands of samples, generate terabytes of data per experiment. Again the massive amounts and new types of data generate new opportunities for bioinformaticians. The data is often found to contain considerable variability, or noise, and thus hidden Markov model and change-point analysis methods are being developed to infer real copy number changes. Two important principles apply to the bioinformatic analysis of cancer genomes with respect to the identification of mutations in the exome. First, cancer is a disease of accumulated somatic mutations in genes. Second, cancer contains driver mutations, which need to be distinguished from passenger mutations.
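The change-point methods mentioned above can be illustrated in their simplest form: given noisy log-ratio measurements along a chromosome, find the single split point that minimizes total within-segment squared error. This toy sketch (with made-up probe values) stands in for the far more elaborate multi-segment and HMM-based methods used in practice.

```python
def best_changepoint(values):
    """Find the split index minimizing total within-segment squared error.

    A toy stand-in for the change-point methods used to infer copy-number
    shifts from noisy probe intensities.
    """
    def sse(seg):
        if not seg:
            return 0.0
        m = sum(seg) / len(seg)
        return sum((x - m) ** 2 for x in seg)

    best_i, best_cost = 1, float("inf")
    for i in range(1, len(values)):
        cost = sse(values[:i]) + sse(values[i:])
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i

# Hypothetical noisy log-ratios: copy-neutral (~0.0), then a gain (~0.6)
probes = [0.1, -0.1, 0.0, 0.1, -0.2, 0.7, 0.6, 0.5, 0.7, 0.6]
print(best_changepoint(probes))  # → 5
```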
With the breakthroughs that this next-generation sequencing technology is providing to the field of bioinformatics, cancer genomics could drastically change. These new methods and software allow bioinformaticians to sequence many cancer genomes quickly and affordably. This could create a more flexible process for classifying types of cancer by analysis of cancer driver mutations in the genome. Furthermore, tracking of patients while the disease progresses may be possible in the future with the sequencing of cancer samples. Another type of data that requires novel informatics development is the analysis of lesions found to be recurrent among many tumors.

Gene and protein expression

Analysis of gene expression

The expression of many genes can be determined by measuring mRNA levels with multiple techniques including microarrays, expressed cDNA sequence tag (EST) sequencing, serial analysis of gene expression (SAGE) tag sequencing, massively parallel signature sequencing (MPSS), RNA-Seq, also known as "Whole Transcriptome Shotgun Sequencing" (WTSS), or various applications of multiplexed in-situ hybridization. All of these techniques are extremely noise-prone and/or subject to bias in the biological measurement, and a major research area in computational biology involves developing statistical tools to separate signal from noise in high-throughput gene expression studies. Such studies are often used to determine the genes implicated in a disorder: one might compare microarray data from cancerous epithelial cells to data from non-cancerous cells to determine the transcripts that are up-regulated and down-regulated in a particular population of cancer cells.

Analysis of protein expression

Protein microarrays and high-throughput (HT) mass spectrometry (MS) can provide a snapshot of the proteins present in a biological sample.
Bioinformatics is very much involved in making sense of protein microarray and HT MS data. The former approach faces similar problems as microarrays targeted at mRNA; the latter involves the problem of matching large amounts of mass data against predicted masses from protein sequence databases, as well as the complicated statistical analysis of samples where multiple, but incomplete, peptides from each protein are detected. Cellular protein localization in a tissue context can be achieved through affinity proteomics displayed as spatial data based on immunohistochemistry and tissue microarrays.

Analysis of regulation

Gene regulation is the complex orchestration of events by which a signal, potentially an extracellular signal such as a hormone, eventually leads to an increase or decrease in the activity of one or more proteins. Bioinformatics techniques have been applied to explore various steps in this process. For example, gene expression can be regulated by nearby elements in the genome. Promoter analysis involves the identification and study of sequence motifs in the DNA surrounding the coding region of a gene. These motifs influence the extent to which that region is transcribed into mRNA. Enhancer elements far away from the promoter can also regulate gene expression, through three-dimensional looping interactions. These interactions can be determined by bioinformatic analysis of chromosome conformation capture experiments. Expression data can be used to infer gene regulation: one might compare microarray data from a wide variety of states of an organism to form hypotheses about the genes involved in each state. In a single-cell organism, one might compare stages of the cell cycle, along with various stress conditions (heat shock, starvation, etc.). One can then apply clustering algorithms to that expression data to determine which genes are co-expressed.
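The clustering step can be sketched with a small plain-Python k-means over hypothetical expression profiles (one list of measurements per gene across conditions). Real analyses typically use dedicated libraries, normalization, and cluster-number selection; this is only a minimal illustration of grouping genes with similar profiles.

```python
import random

def kmeans(profiles, k, iters=20, seed=0):
    """Cluster expression profiles (lists of floats) with plain k-means."""
    rng = random.Random(seed)
    centers = rng.sample(profiles, k)            # initial centers from the data
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in profiles:                        # assign to nearest center
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[d.index(min(d))].append(p)
        centers = [                               # recompute centers
            [sum(col) / len(g) for col in zip(*g)] if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return groups

# Hypothetical profiles: two genes induced under stress, two repressed
profiles = [[0.1, 2.0, 2.1], [0.2, 1.9, 2.2], [2.0, 0.1, 0.2], [2.1, 0.0, 0.1]]
clusters = kmeans(profiles, k=2)
print(sorted(len(g) for g in clusters))  # → [2, 2]
```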
For example, the upstream regions (promoters) of co-expressed genes can be searched for over-represented regulatory elements. Examples of clustering algorithms applied in gene clustering are k-means clustering, self-organizing maps (SOMs), hierarchical clustering, and consensus clustering methods.

Analysis of cellular organization

Several approaches have been developed to analyze the location of organelles, genes, proteins, and other components within cells. This is relevant as the location of these components affects the events within a cell and thus helps us to predict the behavior of biological systems. A gene ontology category, cellular component, has been devised to capture subcellular localization in many biological databases.

Microscopy and image analysis

Microscopic pictures allow us to locate both organelles and molecules. They may also help us to distinguish between normal and abnormal cells, e.g. in cancer.

Protein localization

The localization of proteins helps us to evaluate the role of a protein. For instance, if a protein is found in the nucleus it may be involved in gene regulation or splicing. By contrast, if a protein is found in mitochondria, it may be involved in respiration or other metabolic processes. Protein localization is thus an important component of protein function prediction. There are well-developed protein subcellular localization prediction resources available, including protein subcellular location databases and prediction tools.

Nuclear organization of chromatin

Data from high-throughput chromosome conformation capture experiments, such as Hi-C and ChIA-PET, can provide information on the spatial proximity of DNA loci. Analysis of these experiments can determine the three-dimensional structure and nuclear organization of chromatin.
Bioinformatic challenges in this field include partitioning the genome into domains, such as topologically associating domains (TADs), that are organised together in three-dimensional space.

Structural bioinformatics

Protein structure prediction is another important application of bioinformatics. The amino acid sequence of a protein, the so-called primary structure, can be easily determined from the sequence of the gene that codes for it. In the vast majority of cases, this primary structure uniquely determines a structure in its native environment. (Of course, there are exceptions, such as the bovine spongiform encephalopathy (mad cow disease) prion.) Knowledge of this structure is vital in understanding the function of the protein. Structural information is usually classified as one of secondary, tertiary and quaternary structure. A viable general solution to such predictions remains an open problem. Most efforts have so far been directed towards heuristics that work most of the time. One of the key ideas in bioinformatics is the notion of homology. In the genomic branch of bioinformatics, homology is used to predict the function of a gene: if the sequence of gene A, whose function is known, is homologous to the sequence of gene B, whose function is unknown, one could infer that B may share A's function. In the structural branch of bioinformatics, homology is used to determine which parts of a protein are important in structure formation and interaction with other proteins. In a technique called homology modeling, this information is used to predict the structure of a protein once the structure of a homologous protein is known. This currently remains the only way to predict protein structures reliably. One example of this is hemoglobin in humans and the hemoglobin in legumes (leghemoglobin), which are distant relatives from the same protein superfamily. Both serve the same purpose of transporting oxygen in the organism.
Although both of these proteins have completely different amino acid sequences, their protein structures are virtually identical, which reflects their near-identical purposes and shared ancestor. Other techniques for predicting protein structure include protein threading and de novo (from scratch) physics-based modeling. Another aspect of structural bioinformatics is the use of protein structures for virtual screening models such as quantitative structure–activity relationship models and proteochemometric models (PCM). Furthermore, a protein's crystal structure can be used in the simulation of, for example, ligand-binding studies and in silico mutagenesis studies.

Network and systems biology

Network analysis seeks to understand the relationships within biological networks such as metabolic or protein–protein interaction networks. Although biological networks can be constructed from a single type of molecule or entity (such as genes), network biology often attempts to integrate many different data types, such as proteins, small molecules, gene expression data, and others, which are all connected physically, functionally, or both. Systems biology involves the use of computer simulations of cellular subsystems (such as the networks of metabolites and enzymes that comprise metabolism, signal transduction pathways and gene regulatory networks) to both analyze and visualize the complex connections of these cellular processes. Artificial life or virtual evolution attempts to understand evolutionary processes via the computer simulation of simple (artificial) life forms.
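Network analysis of the kind described above often begins with simple graph statistics, such as node degree, which highlight highly connected "hub" proteins. A minimal sketch over a hypothetical protein–protein interaction edge list:

```python
from collections import defaultdict

# Hypothetical protein–protein interaction edges (protein names are made up)
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("D", "E")]

# Degree: number of interaction partners per protein
degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# The hub is the protein with the most interaction partners
hub = max(degree, key=degree.get)
print(hub, degree[hub])  # → A 3
```

In practice, libraries such as NetworkX provide richer measures (betweenness, clustering coefficients, community detection) over the same kind of edge-list data.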
Molecular interaction networks

Tens of thousands of three-dimensional protein structures have been determined by X-ray crystallography and protein nuclear magnetic resonance spectroscopy (protein NMR), and a central question in structural bioinformatics is whether it is practical to predict possible protein–protein interactions based only on these 3D shapes, without performing protein–protein interaction experiments. A variety of methods have been developed to tackle the protein–protein docking problem, though it seems that there is still much work to be done in this field. Other interactions encountered in the field include protein–ligand (including drug) and protein–peptide interactions. Molecular dynamic simulation of the movement of atoms about rotatable bonds is the fundamental principle behind computational algorithms, termed docking algorithms, for studying molecular interactions.

Others

Literature analysis

The growth in the amount of published literature makes it virtually impossible to read every paper, resulting in disjointed sub-fields of research. Literature analysis aims to employ computational and statistical linguistics to mine this growing library of text resources. For example:

- Abbreviation recognition – identify the long form and abbreviation of biological terms
- Named-entity recognition – recognizing biological terms such as gene names
- Protein–protein interaction – identify which proteins interact with which proteins from text

This area of research draws from statistics and computational linguistics.

High-throughput image analysis

Computational technologies are used to accelerate or fully automate the processing, quantification and analysis of large amounts of high-information-content biomedical imagery. Modern image analysis systems augment an observer's ability to make measurements from a large or complex set of images, by improving accuracy, objectivity, or speed. A fully developed analysis system may completely replace the observer.
Although these systems are not unique to biomedical imagery, biomedical imaging is becoming more important for both diagnostics and research. Some examples are:

- high-throughput and high-fidelity quantification and sub-cellular localization (high-content screening, cytohistopathology, bioimage informatics)
- morphometrics
- clinical image analysis and visualization
- determining the real-time air-flow patterns in breathing lungs of living animals
- quantifying occlusion size in real-time imagery from the development of and recovery during arterial injury
- making behavioral observations from extended video recordings of laboratory animals
- infrared measurements for metabolic activity determination
- inferring clone overlaps in DNA mapping, e.g. the Sulston score

High-throughput single cell data analysis

Computational techniques are used to analyse high-throughput, low-measurement single cell data, such as that obtained from flow cytometry. These methods typically involve finding populations of cells that are relevant to a particular disease state or experimental condition.

Biodiversity informatics

Biodiversity informatics deals with the collection and analysis of biodiversity data, such as taxonomic databases or microbiome data. Examples of such analyses include phylogenetics, niche modelling, species richness mapping, DNA barcoding, and species identification tools.

Ontologies and data integration

Biological ontologies are directed acyclic graphs of controlled vocabularies. They are designed to capture biological concepts and descriptions in a way that can be easily categorised and analysed with computers. When categorised in this way, it is possible to gain added value from holistic and integrated analysis. The OBO Foundry was an effort to standardise certain ontologies. One of the most widespread is the Gene Ontology, which describes gene function. There are also ontologies which describe phenotypes.
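Because ontologies are directed acyclic graphs, a common operation is propagating an annotation up the graph: a gene product annotated to a term is implicitly annotated to all of that term's ancestors. A minimal sketch over a toy ontology (the term names here are illustrative, not real ontology identifiers):

```python
# Toy ontology: child term -> parent terms (a directed acyclic graph)
parents = {
    "nucleus": ["organelle"],
    "organelle": ["cellular_component"],
    "mitochondrion": ["organelle"],
    "cellular_component": [],
}

def ancestors(term):
    """All terms implied by an annotation to `term`, including itself."""
    seen = set()
    stack = [term]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(parents.get(t, []))
    return seen

print(sorted(ancestors("nucleus")))  # → ['cellular_component', 'nucleus', 'organelle']
```

This upward closure is what makes integrated queries possible: asking for everything annotated to "organelle" also retrieves gene products annotated only to "nucleus" or "mitochondrion".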
Databases

Databases are essential for bioinformatics research and applications. Many databases exist, covering various information types: for example, DNA and protein sequences, molecular structures, phenotypes and biodiversity. Databases may contain empirical data (obtained directly from experiments), predicted data (obtained from analysis), or, most commonly, both. They may be specific to a particular organism, pathway or molecule of interest. Alternatively, they can incorporate data compiled from multiple other databases. These databases vary in their format, access mechanism, and whether they are public or not. Some of the most commonly used databases are listed below:

- Used in biological sequence analysis: GenBank, UniProt
- Used in structure analysis: Protein Data Bank (PDB)
- Used in finding protein families and motif finding: InterPro, Pfam
- Used for next-generation sequencing: Sequence Read Archive
- Used in network analysis: metabolic pathway databases (KEGG, BioCyc), interaction analysis databases, functional networks
- Used in design of synthetic genetic circuits: GenoCAD

Software and tools

Software tools for bioinformatics range from simple command-line tools to more complex graphical programs and standalone web services available from various bioinformatics companies or public institutions.

Open-source bioinformatics software

Many free and open-source software tools have existed and continued to grow since the 1980s. The combination of a continued need for new algorithms for the analysis of emerging types of biological readouts, the potential for innovative in silico experiments, and freely available open code bases have helped to create opportunities for all research groups to contribute to both bioinformatics and the range of open-source software available, regardless of their funding arrangements.
The open-source tools often act as incubators of ideas, or community-supported plug-ins in commercial applications. They may also provide de facto standards and shared object models for assisting with the challenge of bioinformation integration. The range of open-source software packages includes titles such as Bioconductor, BioPerl, Biopython, BioJava, BioJS, BioRuby, Bioclipse, EMBOSS, .NET Bio, Orange with its bioinformatics add-on, Apache Taverna, UGENE and GenoCAD. To maintain this tradition and create further opportunities, the non-profit Open Bioinformatics Foundation has supported the annual Bioinformatics Open Source Conference (BOSC) since 2000. An alternative method to build public bioinformatics databases is to use the MediaWiki engine with the WikiOpener extension. This system allows the database to be accessed and updated by all experts in the field.

Web services in bioinformatics

SOAP- and REST-based interfaces have been developed for a wide variety of bioinformatics applications, allowing an application running on one computer in one part of the world to use algorithms, data and computing resources on servers in other parts of the world. The main advantages derive from the fact that end users do not have to deal with software and database maintenance overheads. Basic bioinformatics services are classified by the EBI into three categories: SSS (Sequence Search Services), MSA (Multiple Sequence Alignment), and BSA (Biological Sequence Analysis). The availability of these service-oriented bioinformatics resources demonstrates the applicability of web-based bioinformatics solutions, which range from a collection of standalone tools with a common data format under a single, standalone or web-based interface, to integrative, distributed and extensible bioinformatics workflow management systems.
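At their core, the workflow management systems mentioned above execute analysis steps in dependency order. A minimal sketch of that core idea, using a hypothetical variant-calling pipeline (the step names here are illustrative, not any particular system's API):

```python
# Minimal sketch of what a workflow engine does at its core: run steps in
# dependency order via depth-first topological sort.
def run_workflow(deps):
    """deps maps each step to the steps it requires; returns a valid run order."""
    order, done = [], set()

    def visit(step, path=()):
        if step in done:
            return
        if step in path:
            raise ValueError("cycle involving " + step)
        for d in deps.get(step, []):             # run prerequisites first
            visit(d, path + (step,))
        done.add(step)
        order.append(step)

    for step in deps:
        visit(step)
    return order

# Hypothetical pipeline: fetch inputs, align, then call variants
pipeline = {
    "align_reads": ["fetch_reads", "fetch_reference"],
    "call_variants": ["align_reads"],
    "fetch_reads": [],
    "fetch_reference": [],
}
print(run_workflow(pipeline))  # → ['fetch_reads', 'fetch_reference', 'align_reads', 'call_variants']
```

Production systems add what this sketch omits: parallel execution, caching of completed steps, provenance tracking, and dispatch to clusters or clouds.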
Bioinformatics workflow management systems

A bioinformatics workflow management system is a specialized form of workflow management system designed specifically to compose and execute a series of computational or data manipulation steps, or a workflow, in a bioinformatics application. Such systems are designed to:

- provide an easy-to-use environment for individual application scientists themselves to create their own workflows,
- provide interactive tools for the scientists enabling them to execute their workflows and view their results in real time,
- simplify the process of sharing and reusing workflows between the scientists, and
- enable scientists to track the provenance of the workflow execution results and the workflow creation steps.

Some of the platforms providing this service are Galaxy, Kepler, Taverna, UGENE, Anduril and HIVE.

BioCompute and BioCompute Objects

In 2014, the US Food and Drug Administration sponsored a conference held at the National Institutes of Health Bethesda Campus to discuss reproducibility in bioinformatics. Over the next three years, a consortium of stakeholders met regularly to discuss what would become the BioCompute paradigm. These stakeholders included representatives from government, industry, and academic entities. Session leaders represented numerous branches of the FDA and NIH Institutes and Centers, non-profit entities including the Human Variome Project and the European Federation for Medical Informatics, and research institutions including Stanford, the New York Genome Center, and the George Washington University. It was decided that the BioCompute paradigm would be in the form of digital 'lab notebooks' which allow for the reproducibility, replication, review, and reuse of bioinformatics protocols. This was proposed to enable greater continuity within a research group over the course of normal personnel flux while furthering the exchange of ideas between groups.
The US FDA funded this work so that information on pipelines would be more transparent and accessible to its regulatory staff. In 2016, the group reconvened at the NIH in Bethesda and discussed the potential for a BioCompute Object, an instance of the BioCompute paradigm. This work was published both as a "standard trial use" document and as a preprint paper uploaded to bioRxiv. The BioCompute Object allows the JSON-ized record to be shared among employees, collaborators, and regulators.

Education platforms

Software platforms designed to teach bioinformatics concepts and methods include Rosalind and online courses offered through the Swiss Institute of Bioinformatics Training Portal. The Canadian Bioinformatics Workshops provides videos and slides from training workshops on its website under a Creative Commons license. The 4273π (4273pi) project also offers open-source educational materials for free. The course runs on low-cost Raspberry Pi computers and has been used to teach adults and school pupils. 4273π is actively developed by a consortium of academics and research staff who have run research-level bioinformatics using Raspberry Pi computers and the 4273π operating system. MOOC platforms also provide online certifications in bioinformatics and related disciplines, including Coursera's Bioinformatics Specialization (UC San Diego) and Genomic Data Science Specialization (Johns Hopkins), as well as EdX's Data Analysis for Life Sciences XSeries (Harvard). The University of Southern California offers a Master's in Translational Bioinformatics focusing on biomedical applications.

Conferences

There are several large conferences concerned with bioinformatics. Some of the most notable examples are Intelligent Systems for Molecular Biology (ISMB), the European Conference on Computational Biology (ECCB), and Research in Computational Molecular Biology (RECOMB).

See also

References

Further reading

Sehgal et al.
: Structural, phylogenetic and docking studies of D-amino acid oxidase activator (DAOA), a candidate schizophrenia gene. Theoretical Biology and Medical Modelling 2013, 10:3.
Raul Isea, The Present-Day Meaning Of The Word Bioinformatics, Global Journal of Advanced Research, 2015
Achuthsankar S Nair, Computational Biology & Bioinformatics – A Gentle Overview, Communications of Computer Society of India, January 2007
Aluru, Srinivas, ed. Handbook of Computational Molecular Biology. Chapman & Hall/CRC, 2006. (Chapman & Hall/CRC Computer and Information Science Series)
Baldi, P. and Brunak, S., Bioinformatics: The Machine Learning Approach, 2nd edition. MIT Press, 2001.
Barnes, M.R. and Gray, I.C., eds., Bioinformatics for Geneticists, first edition. Wiley, 2003.
Baxevanis, A.D. and Ouellette, B.F.F., eds., Bioinformatics: A Practical Guide to the Analysis of Genes and Proteins, third edition. Wiley, 2005.
Baxevanis, A.D., Petsko, G.A., Stein, L.D., and Stormo, G.D., eds., Current Protocols in Bioinformatics. Wiley, 2007.
Cristianini, N. and Hahn, M., Introduction to Computational Genomics, Cambridge University Press, 2006.
Durbin, R., S. Eddy, A. Krogh and G. Mitchison, Biological Sequence Analysis. Cambridge University Press, 1998.
Keedwell, E., Intelligent Bioinformatics: The Application of Artificial Intelligence Techniques to Bioinformatics Problems. Wiley, 2005.
Kohane, et al., Microarrays for an Integrative Genomics. The MIT Press, 2002.
Lund, O. et al., Immunological Bioinformatics. The MIT Press, 2005.
Pachter, Lior and Sturmfels, Bernd, Algebraic Statistics for Computational Biology. Cambridge University Press, 2005.
Pevzner, Pavel A., Computational Molecular Biology: An Algorithmic Approach. The MIT Press, 2000.
Soinov, L., Bioinformatics and Pattern Recognition Come Together. Journal of Pattern Recognition Research (JPRR), Vol 1 (1) 2006 p.
37–41
Stevens, Hallam, Life Out of Sequence: A Data-Driven History of Bioinformatics, Chicago: The University of Chicago Press, 2013.
Tisdall, James, Beginning Perl for Bioinformatics. O'Reilly, 2001.
Catalyzing Inquiry at the Interface of Computing and Biology (2005), CSTB report
Calculating the Secrets of Life: Contributions of the Mathematical Sciences and Computing to Molecular Biology (1995)
Foundations of Computational and Systems Biology, MIT Course
Computational Biology: Genomes, Networks, Evolution, Free MIT Course

External links

Bioinformatics Resource Portal (SIB)
17279239
https://en.wikipedia.org/wiki/AnywhereTS
AnywhereTS
AnywhereTS is a software thin client solution for Microsoft Windows. AnywhereTS was created in 2005 as a Thinstation configuration tool, and has since grown to become one of the most common ways of creating software thin clients. AnywhereTS uses several third-party software packages under the hood, with Thinstation as the base for the client OS. However, the program itself makes few references to Thinstation, since it is Windows software with a user interface for Windows only. One common use of AnywhereTS is recycling old computers into thin clients. The modest hardware requirements (Pentium II with 32 MB RAM) have made it possible even for many organizations in developing countries to convert PCs to thin clients. AnywhereTS is most frequently used in mid-sized installations with 30–150 computers, but installations with more than a thousand computers have also been reported. The program is freeware, but in 2007 a commercial Pro version was also released.

Features

Converts office PCs to thin clients
Boot via network, using PXE
Boot client from hard disk, CD or USB flash drive
Built-in TFTP and DHCP server
Support for Microsoft DHCP server
Support for Microsoft Remote Desktop Protocol (RDP) and Citrix (ICA)
Redirection of sound and serial ports
Redirection of USB flash drives
Customizable client OS
Customizable boot pictures
International keyboard layouts

History

The first version was released in 2005 and was quickly adopted by many users as a way to make software thin clients and configure Thinstation, as this could now be done without the need for a Linux installation. During 2005 and 2006 several versions were released, adding various functionality. In August 2007, version 2.0 was released, featuring a new user interface and more supported hardware. In November 2007, a commercial version called AnywhereTS Pro was released, the main differences from the free version being runtime configuration and support.
It also included a Windows-based control panel where users could change their settings. In June 2008, version 3.0 was released, adding among other things a new user interface, an integrated help system and more configuration options. With this version the software also started using Windows Installer. AnywhereTS went out of business on September 1, 2009; the final release version was 3.4. After some delay, on May 31, 2010, the developers released AnywhereTS as open source (AnywhereTS on SourceForge).

References

External links

Internet Protocol based network software Windows-only freeware Remote desktop Thin clients
33927215
https://en.wikipedia.org/wiki/U.S.%20Department%20of%20Defense%20Strategy%20for%20Operating%20in%20Cyberspace
U.S. Department of Defense Strategy for Operating in Cyberspace
The 2011 U.S. Department of Defense Strategy for Operating in Cyberspace is a formal assessment of the challenges and opportunities inherent in increasing reliance on cyberspace for military, intelligence, and business operations. Although the complete document is classified and 40 pages long, a 19-page summary was released in July 2011; it explores the strategic context of cyberspace before describing five "strategic initiatives" that set a strategic approach for DoD's cyber mission.

Strategic Context

The strategy for operating in cyberspace first outlines DoD strengths, including rapid communication and information-sharing capabilities as well as knowledge in the global information and communications technology sector, including cybersecurity expertise. These are considered "strategic advantages in cyberspace." Additional emphasis is placed on furthering U.S. international cyberspace cooperation through international engagement, collective self-defense, and the establishment of international cyberspace norms.

Cyber Threats

The DoD begins its discussion of current cyber threats by focusing on threats to DoD daily operations, with a progressively expanding scope that encompasses broader national security concerns. The DoD is aware of the potential for adversaries to use small-scale technology, such as widely available hacking tools, to cause a disproportionate impact and pose a significant threat to U.S. national security. The DoD is concerned with external threat actors, insider threats, supply chain vulnerabilities, and threats to the DoD's operational ability. Additionally, the document mentions the DoD's need to address "the concerted efforts of both state and non-state actors to gain unauthorized access to its networks and systems." The DoD strategy cites the rapidly evolving threat landscape as a complex and vital challenge for national and economic security.

Strategic Initiatives

In light of the risks and opportunities inherent in DoD and U.S.
Government use of cyberspace, the strategy presents five strategic initiatives as a roadmap to "operate effectively in cyberspace, defend national interests, and achieve national security objectives." According to the DoD, pursuit of this strategy will see the DoD capitalize on the opportunities of cyberspace, defend against intrusions and malicious activity, strengthen cybersecurity, and develop robust cyberspace capabilities and partnerships.

Strategic Initiative 1

"Treat cyberspace as an operational domain to organize, train, and equip so that DoD can take full advantage of cyberspace's potential." According to the DoD, this consideration allows it "to organize train and equip for cyberspace as we do in air, land, maritime, and space to support national security interests." Consequently, the DoD established U.S. Cyber Command under U.S. Strategic Command to coordinate the cyber activities of the Army, U.S. Fleet Cyber Command/U.S. Tenth Fleet, the Twenty-Fourth Air Force, Marine Corps cyber command, and Coast Guard cyber command. U.S. Cyber Command is collocated with the National Security Agency, with the head of the NSA also serving as the commander of CYBERCOM. The command coordinates training for operations in a "degraded" environment, including the use of red teams in war games, operating under the presumption of a security breach, and the development of secure networks for redundancy purposes.

Strategic Initiative 2

"Employ new defense operating concepts to protect DoD networks and systems." This includes enhancing best practices and "cyber hygiene," featuring updated software and better configuration management. The DoD will take steps to strengthen workforce communications, accountability, internal monitoring, and information management capabilities to mitigate insider threats. The DoD will also focus on maintaining an active cyber defense to prevent intrusions.
In addition to these reactive concepts, the DoD will develop new defense operating concepts and computing architectures, including mobile media and secure cloud computing, to embrace evolutionary and rapid change.

Strategic Initiative 3

"Partner with other U.S. government departments and agencies and the private sector to enable a whole-of-government cybersecurity strategy." Many critical DoD functions rely on commercial assets such as Internet service providers and global supply chains, constituting a vulnerability that the DoD and DHS will work together to mitigate. A formalized structure of understanding between the DoD and DHS sets limits on each department's policy. Their joint planning will increase the effectiveness of responses to cyber needs while respecting privacy and civil liberties, and will conserve budget resources. The DoD also maintains a partnership with the Defense Industrial Base to protect sensitive information, having launched the Defense Industrial Base Cyber Security and Information Assurance program in 2007. The DoD is also establishing a pilot public-private partnership to enhance information sharing, and will continue to work with interagency partners toward a collaborative national effort to develop solutions that increase cybersecurity. A whole-of-government approach will lead the DoD to continue supporting interagency cooperation with DHS to analyze and mitigate supply chain threats to government and private-sector technology.

Strategic Initiative 4

"Build robust relationships with U.S. allies and international partners to strengthen collective cybersecurity." In support of the U.S. International Strategy for Cyberspace, the DoD will seek "robust" relationships to develop international shared situational awareness and warning capabilities for self-defense and collective deterrence.
The DoD will assist U.S. efforts to develop international cyberspace norms and principles, dissuade and deter malicious actors, and reserve the right to defend vital national assets as necessary and appropriate. The DoD will also advance cooperation with allies to defend allied interests in cyberspace, work to develop shared warning capabilities, build capacity, conduct joint training, share best practices, and develop burden-sharing arrangements.

Strategic Initiative 5

"Leverage the nation's ingenuity through an exceptional cyber workforce and rapid technological innovation." The DoD intends to "catalyze US scientific, academic, and economic resources to build a pool of talented civilian and military personnel to operate in cyberspace and achieve DoD objectives." The DoD will foster rapid innovation and invest in people, technology and R&D to create and sustain cyber capabilities vital to national security. The DoD outlines five principles for the acquisition of information technology:

Speed is a critical priority.
Incremental development and testing.
Sacrifice or defer customization in favor of speedy incremental improvement.
Adopt differing levels of oversight based on prioritization of critical systems.
Improved security measures for hardware and software.

The DoD will also promote opportunities for small and medium businesses, and work with entrepreneurs in technology innovation hubs to develop concepts quickly. Targeted investments and joint ventures will enable the DoD to foster the development of impactful and innovative technologies. The DoD has also developed the National Cyber Range to allow the rapid creation of network models, intended to enable the military to address needs by simulating and testing new technologies and capabilities. Development and retention of the cyber workforce is central to the strategic success outlined in this strategy.
Consequently, the DoD will work to streamline hiring for its cyber workforce and enable the crossflow of professionals between the public and private sectors. As part of this plan, the DoD will also endeavor to develop reserve and National Guard cyber capabilities, as well as continue educating its cyber workforce.

Media Reception

Xinhua News Agency cited the opinion of Li Shuisheng, a research fellow with the top military science academy of the People's Liberation Army, alleging that the document is "fundamentally an attempt of the US to maintain its unparalleled global military superiority." Li noted that the strategy "clearly aims at sovereign nations in retaliating to cyber attacks," which could lead to a mistake in attribution that may provoke war. Furthermore, the president of Beijing University of Posts and Telecommunications, Fang Binxing, alleged that the United States is "more often on the offensive not the defensive side of cyber warfare," and consequently can "fulfill its political and military purposes, including interference in domestic affairs of other countries and military intrusion, by making up technological effects on the Web." In essence, Chinese media coverage treated the 2011 Department of Defense Strategy for Operating in Cyberspace as a clear statement of ambitions for enhancing U.S. hegemony. The day after the DoD strategy document was published, The Voice of Russia published an article citing a recent admission that the Pentagon had been successfully hacked in March 2011. The author suggested "the Pentagon admission could be just a strategic solution to gain support for its new program of cyber defense." The article states that the strategy received "a serious amount of criticism," and concludes by stating that in light of the recently announced March attacks, "the scared public should be much more supportive to the controversial strategy."
CRN News.com cited the opinions of several American cyber security experts who believe the DoD strategy is "too vague, lacks enforcement and likely won't warrant an immediate uptick of future business." Furthermore, security experts cite DoD plans to recruit experts from the private sector as a risk of weakening public technological development. At best, the experts observed, the document "represented a collective growing the issue" and could be "a public affirmation from the government about activities and plans already in progress." CRN News.com Australia covered the strategy's release, focusing on the DoD's consideration of cyberspace as the fifth warfighting domain. The tone of the article suggested the DoD strategy was a reaction to reports of data breaches, and should have been developed sooner.

References

Cyberwarfare in the United States United States Department of Defense doctrine
38049209
https://en.wikipedia.org/wiki/FBI%20Cyber%20Division
FBI Cyber Division
The FBI Cyber Division is a Federal Bureau of Investigation division which heads the national effort to investigate and prosecute internet crimes, including "cyber based terrorism, espionage, computer intrusions, and major cyber fraud." This division of the FBI uses the information it gathers during investigations to inform the public of current trends in cyber crime. It focuses on three main priorities: computer intrusion, identity theft, and cyber fraud. It was created in 2002. In response to billions of dollars lost to cyber crimes, which have had a devastating impact on the United States' economic and national security, the FBI created a main Cyber Division at FBI Headquarters to "address cyber crime in a coordinated and cohesive manner." Branching out from there, specially trained cyber squads have been placed in 56 field offices across the United States, staffed with "agents and analysts who protect against computer intrusions, theft of intellectual property and personal information, child pornography and exploitation, and online fraud." Due to internet threats around the world, the FBI has developed "cyber action teams" that travel globally to help in "computer intrusion cases" and gather information that helps to identify the cyber crimes most dangerous to national security. Keeping its focus not only on national security but also on threats to citizens of the United States, the FBI has long been focused on identity theft, a growing concern for American citizens. From fiscal year 2008 through the middle of fiscal year 2013, identity theft-related crimes investigated by the Bureau across all programs resulted in more than 1,600 convictions, $78.6 million in restitutions, $4.6 billion in recoveries, and $6.8 billion in fines. High priority is given to investigations that involve terrorist organizations or intelligence operations sponsored by foreign governments, which the FBI calls "national security cyber intrusions".
The Cyber Division has primary responsibility for the FBI's efforts to counter national security–related cyber intrusions. The Cyber Division's priorities, in rank order, are: (a) cyber intrusions; (b) child sexual exploitation; (c) intellectual property rights; and (d) internet fraud. The FBI Cyber Division works through the National Cyber Investigative Joint Task Force (NCIJTF) and cyber investigative squads located in each FBI field office. Since 2008, the NCIJTF has been the primary American agency responsible for coordinating cyber threat investigations, and it liaises with the Central Intelligence Agency (CIA), Department of Defense (DOD), Department of Homeland Security (DHS), and National Security Agency (NSA). A large number of cases investigated by the Cyber Division come from the Internet Fraud Complaint Center (IFCC), which in 2002 received about 75,000 complaints. Cases the Cyber Division has investigated include: dismantling a ring of criminals using malware to redirect users to rogue DNS servers (Operation Ghost Click); taking down a botnet based on the Coreflood trojan used for fraud; taking down a group responsible for robbing over 2,000 ATMs at once; taking down a group of about 100 people involved in phishing (Operation Phish Phry); and taking down the DarkMarket cyber crime forum used by criminals.
Organization

Cyber Operations Branch
Cyber Operations Section I
Cyber Operations Section II
Cyber Operations Section III
Cyber Operations Section IV
Cyber Operations Section V
Cyber Readiness, Outreach, and Intelligence Branch
Cyber Readiness Section
Cyber Outreach Section
Cyber Intelligence Section

See also

Cyberterrorism

References

External links

FBI Cyber Division: Cyber Crime stories
FBI Most Wanted Cyber Criminals
Testimony on the activities of the FBI's Cyber Division in relation to the theft of intellectual property
Cyber Security Focusing on Hackers and Intrusions
FBI jobs: Cyber Division
FBI Expands Cybercrime Division

2002 establishments in the United States Computer security organizations Federal Bureau of Investigation
1931976
https://en.wikipedia.org/wiki/Name%20Service%20Switch
Name Service Switch
The Name Service Switch (NSS) connects the computer with a variety of sources of common configuration databases and name resolution mechanisms. These sources include local operating system files (such as /etc/passwd, /etc/group, and /etc/hosts), the Domain Name System (DNS), the Network Information Service (NIS, NIS+), and LDAP. This operating system mechanism, used in billions of computers, including all Unix-like operating systems, is essential for a machine to function as part of a networked organization and the Internet. Among other things, it is invoked every time a computer user clicks on or types a website address in the web browser, or responds to a password challenge to be authorized access to the computer and the Internet. A system administrator usually configures the operating system's name services using the file /etc/nsswitch.conf. This file lists databases (such as passwd, shadow and group) and one or more sources for obtaining that information. Examples of sources are files for local files, ldap for the Lightweight Directory Access Protocol, nis for the Network Information Service, nisplus for NIS+, dns for the Domain Name System (DNS), and wins for Windows Internet Name Service. The nsswitch.conf file has a line entry for each service, consisting of a database name in the first field, terminated by a colon, and a list of possible source databases in the second field. A typical file might look like:

passwd: files ldap
shadow: files
group: files ldap
hosts: dns nis files
ethers: files nis
netmasks: files nis
networks: files nis
protocols: files nis
rpc: files nis
services: files nis
automount: files
aliases: files

The order of the source databases determines the order in which the NSS will attempt to look up those sources to resolve queries for the specified service. A bracketed list of criteria may be specified after each source name to govern the conditions under which the NSS proceeds to query the next source, based on the preceding source's response.
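The lookup order described above is simple enough to parse mechanically. The following sketch reads nsswitch.conf-style text into a mapping from database name to source list; for brevity it skips comments and blank lines and would treat bracketed criteria such as [NOTFOUND=return] as ordinary tokens:

```python
def parse_nsswitch(text: str) -> dict:
    """Parse nsswitch.conf-style lines into {database: [sources...]}."""
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line or ":" not in line:
            continue                          # skip blanks and malformed lines
        db, sources = line.split(":", 1)
        table[db.strip()] = sources.split()
    return table

conf = """
passwd: files ldap
hosts:  dns nis files   # try DNS first
"""
table = parse_nsswitch(conf)
assert table["hosts"] == ["dns", "nis", "files"]
```

The resulting order is exactly the order in which the NSS would consult each back end when resolving a query for that database.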
History

Earlier Unix-like systems either accessed only local files or had hard-coded rules for accessing files or network-stored databases. Ultrix was a notable exception, with nearly identical functionality to the NSS configuration file in /etc/svc.conf. Sun Microsystems first developed the NSS for their Solaris operating system. Solaris' compliance with SVR4, which Sun Microsystems and AT&T Unix System Laboratories jointly developed by merging UNIX System V, BSD and Xenix, required that third parties be able to plug in name service implementations for the transport layer of their choosing (OSI or IP) without rewriting SVR4-compliant Transport-Independent RPC (TI-RPC) applications or rebuilding the operating system. Sun introduced the NIS+ directory service in Solaris to supersede NIS, which required co-existence of the two directory services within an enterprise to ease migration. Sun engineers Thomas Maslen and Sanjay Dani were the first to design and implement the Name Service Switch. They fulfilled the Solaris requirements with the nsswitch.conf file specification and the implementation choice to load database access modules as dynamically loaded libraries, which Sun was also the first to introduce. The Sun engineers' original design of the configuration file and runtime loading of name service back-end libraries has withstood the test of time as operating systems have evolved and new name services have been introduced. Over the years, programmers have ported the NSS configuration file, with nearly identical implementations, to many other operating systems including FreeBSD, NetBSD, Linux, HP-UX, IRIX and AIX. More than two decades after the NSS was invented, GNU libc implements it almost identically.
See also BSD Authentication Group (database) Name server Pluggable Authentication Modules External links Name Service Switch implementation in the GNU C Library NSS module supporting LDAP: nss_ldap Another NSS module supporting LDAP: nss-ldapd NSS module supporting AFS: nss_afs Unix Domain Name System Directory services
25716
https://en.wikipedia.org/wiki/Refreshable%20braille%20display
Refreshable braille display
A refreshable braille display or braille terminal is an electro-mechanical device for displaying characters, usually by means of round-tipped pins raised through holes in a flat surface. Visually impaired computer users who cannot use a standard computer monitor can use it to read text output. Deafblind computer users may also use refreshable braille displays. Speech synthesizers are also commonly used for the same task, and a blind user may switch between the two systems or use both at the same time, depending on circumstances.

Mechanical details

The base of a refreshable braille display often integrates a pure braille keyboard. Similar to the Perkins Brailler, input is performed via two sets of four keys, one set on each side, while output is via a refreshable braille display consisting of a row of electro-mechanical character cells, each of which can raise or lower a combination of eight round-tipped pins. Other variants exist that use a conventional QWERTY keyboard for input and braille pins for output, as well as input-only and output-only devices. The mechanism that raises the dots uses the piezoelectric effect of certain crystals, whereby they expand when a voltage is applied to them. Such a crystal is connected to a lever, which in turn raises the dot. A crystal is required for each dot of the display (i.e., eight per character). Because of the complexity of producing a reliable display that will cope with daily wear and tear, these displays are expensive. Usually, only 40 or 80 braille cells are displayed. Models with between 18 and 40 cells exist in some notetaker devices. On some models the position of the cursor is represented by vibrating the dots, and some models have a switch associated with each cell to move the cursor to that cell directly.

Software

The software that controls the display is called a screen reader. It gathers the content of the screen from the operating system, converts it into braille characters and sends it to the display.
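The eight-pin cell maps naturally onto the Unicode "Braille Patterns" block, which encodes each cell at U+2800 plus one bit per dot (bit 0 for dot 1 through bit 7 for dot 8). Software that has decided which dots to raise can therefore represent a cell as a single character, as this small sketch shows:

```python
# Unicode Braille Patterns: U+2800 is the blank cell, and each dot k
# (numbered 1-8) contributes bit (k - 1) to the code point offset.

def cell_to_char(dots) -> str:
    """Map a set of raised dot numbers (1-8) to its Unicode braille character."""
    bits = 0
    for d in dots:
        bits |= 1 << (d - 1)
    return chr(0x2800 + bits)

assert cell_to_char({1}) == "\u2801"     # dot 1 alone: braille letter 'a'
assert cell_to_char({1, 2}) == "\u2803"  # dots 1 and 2: braille letter 'b'
```

A screen reader driving a physical display performs essentially the same mapping in reverse hardware terms: each set bit becomes a raised pin in the corresponding cell.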
Screen readers for graphical operating systems are especially complex, because graphical elements like windows or slide bars have to be interpreted and described in text form. Modern operating systems usually have an API to help screen readers obtain this information, such as UI Automation (UIA) for Microsoft Windows, VoiceOver for macOS and iOS, and AT-SPI for GNOME.

Rotating-wheel braille display

A rotating-wheel braille display was developed in 2000 by the National Institute of Standards and Technology (NIST), and another at Leuven University in Belgium. In these units, braille dots are placed on the edge of a spinning wheel, which allows the user to read continuously with a stationary finger while the wheel spins at a selected speed. The braille dots are set in a simple scanning-style fashion as the dots on the wheel spin past a stationary actuator that sets the braille characters. As a result, manufacturing complexity is greatly reduced, and rotating-wheel braille displays, when in actual production, should be less expensive than traditional braille displays.

Braille e-book

See also

GNOME accessibility
VoiceOver

References

External links

Information on Bi-directional Refreshable Tactile Display
US Patent 6,692,255

American inventions Computer accessibility Braille technology Haptic technology
66932695
https://en.wikipedia.org/wiki/Dark%20Basin
Dark Basin
Dark Basin is a hack-for-hire group, discovered in 2017 by Citizen Lab. It is suspected to have acted on behalf of companies such as Wirecard and ExxonMobil.

Background

In 2015, Matthew Earl, a managing partner at ShadowFall Capital & Research, began to study Wirecard AG, hoping to short sell the company. Wirecard had just announced the purchase of Great Indian Retail Group for $254 million, which seemed overpriced to Earl. In February 2016, he started to write publicly about his discoveries under the alias Zatarra Research & Investigations, accusing Wirecard of corruption, corporate fraud, and money laundering. Soon after, the identity behind Zatarra Research & Investigations was revealed online, along with surveillance pictures of Earl in front of his house. Earl quickly realized that he was being followed. Employees from Jones Day, a law firm representing Wirecard, visited Earl and handed him a letter accusing him of collusion, conspiracy, defamation, libel, and market manipulation. Earl also started to receive targeted phishing emails appearing to come from his friends and family members. In the spring of 2017, Earl shared those emails with Citizen Lab, a research laboratory specializing in information control.

Citizen Lab's investigation

Initial findings

Citizen Lab discovered that the attackers were using a custom URL shortener that allowed enumeration, giving the researchers access to a list of 28,000 URLs. Some of those URLs redirected to websites that looked like Gmail, Facebook, LinkedIn, Dropbox or various webmail services, each page customized with the name of the victim and asking the user to re-enter their password.
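The enumeration weakness is easy to see when a shortener derives its slugs from sequential database ids. The sketch below uses a hypothetical base-36 encoding, invented here for illustration rather than taken from Dark Basin's actual service, to show how consecutive ids produce predictable, walkable slugs:

```python
import string

# Hypothetical slug alphabet for a URL shortener (illustrative only).
ALPHABET = string.ascii_lowercase + string.digits

def encode_id(n: int) -> str:
    """Encode a sequential database row id as a short-URL slug."""
    if n == 0:
        return ALPHABET[0]
    out = []
    while n:
        n, r = divmod(n, len(ALPHABET))
        out.append(ALPHABET[r])
    return "".join(reversed(out))

# Consecutive ids yield consecutive slugs, so an outsider who sees one
# short URL can walk the whole keyspace and recover every stored link.
slugs = [encode_id(n) for n in range(100, 104)]
assert slugs == ["c2", "c3", "c4", "c5"]
```

Randomized, high-entropy slugs or authenticated access are the usual defenses against this kind of enumeration.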
Citizen Lab dubbed this hacker group 'Dark Basin' and identified several clusters among the victims:

American environmental organizations linked to the #ExxonKnew campaign: Rockefeller Brothers Fund, Climate Investigations Center, Greenpeace, Center for International Environmental Law, Oil Change International, Public Citizen, Conservation Law Foundation, Union of Concerned Scientists, M+R Strategic Services and 350.org
US media outlets
Hedge funds, short sellers and financial journalists
International banks and investment firms
Legal firms in the US, UK, Israel, France, Belgium, Norway, Switzerland, Iceland, Kenya, and Nigeria
Petroleum and energy companies
Eastern European, Central European and Russian oligarchs
Well-resourced people involved in divorces or other legal matters

The variety of targets suggested to Citizen Lab a mercenary activity. The research laboratory confirmed that some of these attacks were successful.

Links to India

Several clues allowed Citizen Lab to assert with high confidence that Dark Basin was based in India.

Working hours: Timestamps in Dark Basin phishing emails were consistent with working hours in India, which has only one time zone: UTC+5:30.
Cultural references: The instances of the URL shortening service used by Dark Basin had names related to Indian culture: Holi, Rongali and Pochanchi.
Phishing kit: Dark Basin left their phishing kit source code, including some log files, available online. The source code was configured to print timestamps in India's time zone, and the log file, which showed some testing activity, included an IP address based in India.

Links to BellTroX

Citizen Lab believes with high confidence that BellTroX, also known as BellTroX InfoTech Services and BellTroX D|G|TAL Security, is the company behind Dark Basin. BellTroX, a Delhi-based company, advertises on its website activities such as penetration testing, certified ethical hacking, and medical transcription.
BellTroX employees are described as noisy and often posted publicly about their illegal activities. BellTroX's founder, Sumit Gupta, had previously been indicted and charged in the United States for a hack-for-hire scheme on behalf of ViSalus. BellTroX used the CV of one of their employees to test Dark Basin's URL shortener, and also publicly posted screenshots of links to Dark Basin's infrastructure. Hundreds of people working in corporate intelligence and private investigation endorsed BellTroX on LinkedIn; some of them are suspected to be possible clients. The endorsements included a Canadian government official, an investigator at the US Federal Trade Commission, law enforcement officers, and private investigators with prior roles in the FBI, police, military and other branches of government. On June 7, 2020, BellTroX took down their website. In December 2021, Meta (Facebook) banned BellTroX as a "cyber-mercenary" group.

Reactions

Both Wirecard and ExxonMobil have denied any involvement with Dark Basin.

References

Cyberattacks Hacker groups Hacking in the 2010s