https://en.wikipedia.org/wiki/Dove%20si%20vola%20%28album%29
Dove si vola (album)
Dove si vola is the first studio EP by Italian singer Marco Mengoni. The EP peaked at number 9 on the Italian Albums Chart and sold more than 70,000 copies in Italy, receiving a Platinum certification from the Federation of the Italian Music Industry.

Background and release

In December 2009, Mengoni won the third series of the Italian talent show X Factor. The 7-track EP was recorded during his time on the singing contest, and it was released on 4 December 2009, two days after he was announced the winner of X Factor. Dove si vola includes the previously unreleased title track, written by Bungaro and Saverio Grandi and released as the lead single. Mengoni performed it for the first time on 25 November 2009, during the semi-final live show of the talent show. The other tracks are "Lontanissimo da te" and five covers of popular Italian and international songs, chosen from those that his judge Morgan assigned him during the competition.

Track listing

Charts

Year-end charts

Personnel

Marco Mengoni – vocals
Pietro Caramelli – mastering
Alessandra Tisato – photography
Daniela Boccadoro – graphics
Luca Rustici – arrangements, production, recording, mixing, computer programming, guitars, keyboards
Valerio Gagliano – studio assistant
Roberto Di Falco – studio assistant
Lorenzo Cazzaniga – mastering
Giancarlo Ippolito – drums
Gaetano Diodato – bass
Luciano Luisi – piano, keyboards
Lucio Fabbri – production, bass, guitar, violin, viola, keyboards
Alessandro Marcantoni – recording, mixing
Roberto Gualdi – drums
Salvo Calvo – guitar
Stefano Cisotto – keyboards, computer programming
Carlo Palmas – keyboards, computer programming
Morgan – arrangements, production
Piero Calabrese – production, arrangements, Pro Tools programming, keyboards
Massimo Calabrese – production, bass
Stella Fabiani – production
Roberto Procaccini – arrangements, Pro Tools programming, keyboards
Stefano Calabrese – recording, electric guitar
Gianluca Vaccaro – mixing
Marco Del Bene – electric guitar
Alessandro Canini – drums

References

2009 debut EPs Marco Mengoni EPs Sony Music Italy EPs Italian-language EPs
https://en.wikipedia.org/wiki/University%20of%20Genoa
University of Genoa
The University of Genoa, also known by the acronym UniGe, is one of the largest universities in Italy. It is located in the city of Genoa and the regional Metropolitan City of Genoa, on the Italian Riviera in the Liguria region of northwestern Italy. The original university was founded in 1481. According to the 2016 Microsoft Academic Search rankings, the University of Genoa holds high positions among European universities in several computer science fields: in machine learning and pattern recognition it is the top-ranked scientific institution in Italy and 36th in Europe; in computer vision it is the top-ranked institution in Italy and 34th in Europe; in computer graphics it is ranked 2nd in Italy and 35th in Europe. The University of Genoa has had a strong collaboration with the Italian Institute of Technology (IIT) since the latter's foundation in 2005. The university is currently preparing a major project for a new Faculty of Engineering within the Erzelli Great Campus science and technology park, on the western side of Genoa. The contracts were signed in October 2018; the final project was expected in 2019, the start of construction works in 2020, and the opening of the new faculty in 2023. Since its foundation, the university has awarded 46 gold medals to Italian students and two gold medals to international students, specifically to the Israeli student Khor Hoksari in 1993 and to the Albanian student Agasi Bledar in 2021. It has conferred 122 honoris causa titles on its alumni and has held continuous public openings over the last 20 years.

Campus

The University of Genoa is organized in several independent campuses located in different city areas.
Notable buildings are the main University premises (Via Balbi, 5), designed by the architect Bartolomeo Bianco and built in 1640; the new complex in Valletta Puggia, built in the 1980s and 1990s and hosting the Departments of Chemistry, Computer Science, Mathematics, and Physics; and the new seat of the Facoltà di Economia, created in 1996 by refurbishing old seaport docks. The University's botanical garden, the Orto Botanico dell'Università di Genova, occupies one hectare in the city center, just above the University's main building. The University of Genoa also has a number of regional campuses in Savona, Imperia, Santa Margherita Ligure, Ventimiglia and La Spezia.

History

Already in the 13th century there were Colleges in Genoa which conferred degrees in law, theology, medicine and the arts. The College of Theology was officially established in 1471 with a Papal Bull of Sixtus IV (Francesco della Rovere). A few years later, in 1481, the Council of the Elders promulgated a Statute of the College of Medicine. In 1569, by a decree of the Senate of the Republic of Genoa, the Colleges were incorporated into the schools run by the Jesuits. The Jesuits settled near the old Church of San Girolamo Del Rosso, and enlarged their premises by buying some land on which to house their College and schools. The building, which is now the main University premises, was designed by the architect Bartolomeo Bianco and came into use in 1640. After the suppression of the Society of Jesus in 1773, a special Committee reorganized the various courses of study, dividing them into higher education (Canon Law, Philosophy, Civil Law, Theology, Logic and Metaphysics, Physics) and primary education (courses in Rhetoric, Reading and Writing). After the establishment of the French Empire, which absorbed the Republic of Genoa, higher education was subdivided into different special Schools: Law, Medicine, Physical and Mathematical Sciences, Commerce, Language and Literature, Chemistry.
The University of Genoa was affiliated to the Imperial University of Paris. It was reinstated as a separate university in 1812. After the fall of Napoleon, the provisional Government of the Republic appointed a new Committee in charge of higher education, and at the Congress of Vienna in 1815 it was decided that the University of Genoa be entrusted to the Kingdom of Sardinia, enjoying the same privileges as those granted to the University of Turin. The university was closed owing to political disturbances between 1821 and 1823 and again between 1830 and 1835. In 1870, the first two technical institutes of higher education were established: the Royal Naval School and the Royal School of Economic Studies, which were absorbed by the Royal University of Genoa in 1936, becoming the Faculties of Engineering and Economics respectively. In the late 20th century, the university expanded rapidly, with new regional campuses. In 1996 some departments were established in Savona within a remodeled army barracks area. That campus hosts the Department of Engineering and also courses in Business. New laboratories have been created in simulation, logistics and industrial engineering, among other fields. In January 2001, an institutional review of the University of Genoa was carried out under the CRE Institutional Evaluation Programme. This evaluation, together with the surveys taken and reports made, informs the university's current efforts to attract outside professors and students.
Organization

As of the academic year 2012–2013 the university is headed by a rector and divided into 5 schools, comprising a total of 23 departments:

School of Natural Sciences, Mathematics and Physics
Department of Chemistry and Industrial Chemistry (DCCI)
Department of Physics (DIFI)
Department of Mathematics (DIMA)
Department of Earth, Environmental and Life Sciences (DISTAV)
Department of Computer Science, Bioengineering, Robotics and System Engineering (DIBRIS)

School of Medical and Pharmaceutical Sciences
Department of Pharmaceutics (DIFAR)
Department of Internal Medicine and Medical Specialties (DIMI)
Department of Experimental Medicine (DIMES)
Department of Neurosciences, Rehabilitation, Ophthalmology and Maternal-Fetal Medicine (DINOGMI)
Department of Surgical and Integrated Diagnostic Sciences (DISC)
Department of Health Sciences (DISSAL)

School of Social Sciences
Department of Economics
Department of Law
Department of Education Sciences (DISFOR)
Department of Political Sciences (DISPO)

School of Humanities
Department of Classics, Philosophy and History (DAFIST)
Department of Italian, Romance, Classical, Arts and Drama Studies (DIRAAS)
Department of Modern Cultures and Languages

Polytechnic School
Department of Computer Science, Bioengineering, Robotics and System Engineering (DIBRIS)
Department of Civil, Chemical and Environmental Engineering (DICCA)
Department of Mechanical, Energy, Management, and Transportation Engineering (DIME)
Department of Naval, Electrical, Electronic and Telecommunications Engineering (DITEN)
Department of Architectural Sciences (DSA)

Rankings

In the ranking of Italian universities, the University of Genoa is ranked 13th by ARWU, 18th by QS, and 18th by THE. The university is ranked 151–200 in Engineering – Civil and Structural in the QS World University Subject Rankings. Times Higher Education gave the university a rank of 150+ in the Law category in its 2020 list of subjects.
Students

Today the university has a student population of around 40,000, including both undergraduate and graduate students. The University of Genoa hosts a branch campus of Florida International University (Miami, Florida, United States) in Genoa. The two universities mutually host students of each other's School of Architecture.

Faculty

In 2004 there were about 1,710 professors and scientific employees and about 2,000 non-scientific employees working for the University of Genoa, making it one of Genoa's biggest employers.

Noted alumni

Giacomo Della Chiesa studied theology at Genoa and later became Pope Benedict XV.

Ornella Barra, graduated as a pharmacist from the University of Genoa; she worked as a manager at a local pharmacy, which she later bought, and went on to create a successful pharmaceutical distribution company known as Di Pharma. She is now chief executive of Alliance Healthcare, the pharmaceutical wholesale division of Alliance Boots
Gianaurelio Cuniberti, Italian scientist and professor
Kostas Georgakis, anti-fascist dissident who set himself ablaze as a protest against the Greek military junta of 1967–1974
Franco Malerba, first Italian astronaut
Sandro Pertini, antifascist dissident, later 7th President of the Italian Republic
Enrico Piaggio, industrialist
Giacomo Della Chiesa, later Pope Benedict XV
Giuseppe Mazzini
Alessandro Riberi, noted physician and surgeon

See also

List of Italian universities
List of Jesuit sites
List of medieval universities

References

External links

Official University of Genoa website
Profile of University of Genoa on the Times Higher Education website

Educational institutions established in the 15th century 15th-century establishments in the Republic of Genoa 1471 establishments in Europe Metropolitan City of Genoa Education in Genoa
https://en.wikipedia.org/wiki/IBM%20System/7
IBM System/7
The IBM System/7 was a computer system designed for industrial control, announced on October 28, 1970 and first shipped in 1971. It was a 16-bit machine and one of the first made by IBM to use then-novel semiconductor memory instead of the magnetic core memory that was conventional at that date. IBM had earlier products in the industrial control market, notably the IBM 1800, which appeared in 1964. However, there was minimal resemblance in architecture or software between the 1800 series and the System/7. The System/7 was designed and assembled in Boca Raton, Florida.

Hardware architecture

The processor designation for the system was IBM 5010. There were 8 registers which were mostly general purpose (capable of being used equally in instructions), although R0 had some extra capabilities for indexed memory access or system I/O. Later models may have been faster, but the versions existing in 1973 had register-to-register operation times of 400 ns, memory read operations at 800 ns, memory write operations at 1.2 µs, and direct I/O operations of generally 2.2 µs. The instruction set would be familiar to a modern RISC programmer, with its emphasis on register operations and few memory operations or fancy addressing modes. For example, the multiply and divide instructions were done in software and needed to be specifically built into the operating system to be used. The machine was physically compact for its day, designed around chassis/gate configurations shared with other IBM machines such as the 3705 communications controller; a typical configuration would take up one or two racks, and the smallest System/7s were smaller still. The usual console device was a Teletype Model 33 ASR (designated as the IBM 5028), which was also how the machine would generally read its boot loader sequence.
Since the semiconductor memory emptied when it lost power (in those days, losing memory when you switched off the power was regarded as a novelty) and the S/7 did not have ROM, the machine had minimal capabilities at startup. It typically would read a tiny bootloader from the Teletype, and that program would in turn read in the full program from another computer, from a high-speed paper tape reader, or from an RPQ interface to a tape cassette player. Although many of the external devices used on the system used the ASCII character set, the internal operation of the system used the EBCDIC character set, which IBM used on most systems.

Specialization

There were various specializations for process control. The CPU had 4 banks of registers, each of different priority, and it could respond to interrupts within one instruction cycle by switching to the higher priority set. Many specialized I/O devices could be configured for things such as analog measurement or signal generation, solid state or relay switching, or TTL digital input and output lines. The machine could be installed in an industrial environment without air conditioning, although there were feature codes available for safe operation in extreme environments.

Standard Hardware Units

A System/7 is typically a combination of the following:

IBM 5010: Processing Module. This module is always present in a System/7. Effectively this is the controller for the System/7, performing arithmetic and logical functions as well as providing control functions.
IBM 5012: Multifunction Module. This module handles both digital and analog I/O. It can also be used to control an IBM 2790.
IBM 5013: Digital Input/Output Module. This module handles digital I/O as well as the attachment for custom products. It can also be used to control an IBM 2790.
IBM 5014: Analog Input Module. This module could take voltage signals and turn them into data inputs.
IBM 5022: Disk Storage Unit. Announced in 1971, it could hold either 1.23 million or 2.46 million 16-bit words.
IBM 5025: Enclosure. This is effectively the rack into which the power supplies and I/O modules are installed.
IBM 5028: Operator Station. This is a stand-alone station that includes a keyboard and a printer. It also includes a paper tape punch and a paper tape reader. In the photo captioned IBM System/7 in use, it is to the left of the operator in the foreground of the photo. When first announced in 1970, one Operator Station was mandatory for each System/7, but in 1971 IBM announced that one 5028 could be shared by several System/7s.

Maritime Application/Bridge System

This is a solution specifically for on-board ship navigation. It consists of the following hardware:

5010E Processing Module. This module is always present.
5022 Disk Storage Unit.
5026 C03 Enclosure. This has been modified to handle extended heavy vibrations and tilting.
5028 Operator Station.
5090 N01 Radar Navigation Interface Module (RNIM). Interfaces with OEM equipment such as radar, gyros and navigation equipment.
5090 N02 Bridge Console. This provides a radar plan position indicator (PPI) that allows the navigator to communicate with and control the system.

There are also RPQs to ruggedize the hardware, provide interfaces to various navigation equipment and provide spares for on board ship.

Software

The operating system would more properly be called a monitor. IBM provided a wide variety of subroutines, mostly written in assembler, that could be configured into a minimum set to support the peripherals and the application. The application-specific code was then written on top of the monitor stack. A minimal useful configuration would run with 8 kilobytes of memory, though in practice the size of the monitor and application program was usually 12 kB and upwards. The maximum configuration had 64 kB of memory.
The advanced (for the time) semiconductor memory made the machine fast but also expensive, so a lot of work went into minimizing the typical memory footprint of an application before deployment. The development tools normally ran on IBM's System/360, and the program image was then downloaded to a System/7 in a development lab by serial link. Until at least 1975 it was rare to use disk overlays for the programs, and there was no support for that in the software tools. Hard disks, in the IBM Dolphin line of sealed cartridges, were available but expensive, and were generally used as file systems storing data and executable programs (thereby eliminating the need to rely on the paper tape reader for system boot-up). Most work was done in a macro assembly language, with a fairly powerful macro language facility allowing great flexibility in code configuration and generation. Static variable binding, as in Fortran, was the norm, and the use of arbitrary subroutine call patterns was rare. The machines were usually deployed for very fixed jobs with a rigidly planned set of software. This often extended to the real-time interrupt latency, using the 4 levels of priority and carefully crafted software paths to ensure guaranteed latencies. Fortran and a PL/I subset (PL/7) compilers were available no later than 1976, as larger configurations became more affordable and more complex data processing was required; System/7 programmers still needed to be aware of the actual instructions that were available for use. Much development work was done on S/360 or S/370 using a variation of the HLASM program geared to the MSP/7 macro language. To provide more flexibility in programming the System/7, a group in the IBM San Jose Research Laboratory in San Jose, California developed the LABS/7 operating environment, which, with its language Event Driven Language (EDL), was ported to the Series/1 environment as the very successful Event Driven Executive (EDX).
Uses

The System/7 was designed to address the needs of specific "real-time" markets which required collecting and reacting to input and output (I/O) from analog devices (e.g. temperature sensors, industrial control devices). This was a very limited market at the time. Specific commercial uses included factory control systems and air conditioning energy control systems. AT&T was also a large customer. However, the main use may have been for what were, at the time, classified military uses.

Example customers

This is an eclectic list of customers intended to show the variety of use cases for which the System/7 could be employed:

In 1971 IBM claimed their first customer delivery of a System/7, made to American Motors Corporation (AMC) in Kenosha, Wisconsin. The system was delivered on September 16, 1971, and installed 24 hours later. It was the first of two that were to be used to measure the emissions of new production automobiles.
In 1972 it was reported that the University of Pennsylvania was using remote terminals with card readers, attached to an IBM System/7, to reduce the incidence of meal contract abuse among 2000 students. It helped ensure students did not exceed their meal limits or eat meals in multiple dining rooms in the same meal period.
In 1978 it was reported that Pfizer Corporation was using a System/7 equipped with audio response to allow around 1,300 sales representatives to remotely enter orders through a mini-terminal that could send touch-tone signals via a telephone. They called the system "Joanne".

Maritime Application/Bridge System

This solution was the first navigational aid that the Control Engineering Department of Lloyd's Register added to their list of Approved Control and Electrical Equipment. The System/7 Maritime Application/Bridge System was designed to make the navigation of large ships safer and more efficient, by reducing the amount of data that bridge personnel needed to correlate while improving how it is presented.
It provides five programmed functions:

Collision assessment: This uses the ship's radar as well as the speed log and gyrocompass to determine where collision risks exist within up to a 16.5 nautical mile radius.
Position fixing: This uses various inputs, including the satellite navigation receiver, Decca Navigator, gyrocompass and ship's speed log, to show the ship's position.
Adaptive autopilot: This constantly adapts the ship's steering in response to sea conditions.
Route planning: This allows forecasting for navigational changes, based on the ship's current position and then either the entered destination or the next turning point. Routes could be stored and retrieved.
Route tracking: This uses boundaries input by the navigator and position fixing data. It then uses the PPI to display channels or lanes. It could sound an alarm if a boundary was being approached.

Withdrawal

The product line was withdrawn from marketing on March 20, 1984. IBM's subsequent product in industrial control was the Series/1, also designed at Boca Raton.

References

System 7 16-bit computers
https://en.wikipedia.org/wiki/Vortex86
Vortex86
The Vortex86 is a computing system-on-a-chip (SoC) based on a core compatible with the x86 microprocessor family. It is produced by DM&P Electronics, but originated with Rise Technology.

History

Vortex86 previously belonged to SiS, which got the basic design from Rise Technology. SiS sold it to DM&P Electronics in Taiwan. Before adopting the Vortex86 series, DM&P manufactured the M6117D, an Intel 386SX-compatible, 25–40 MHz SoC.

CPU

Vortex86 CPUs implement the IA-32 architecture, but which instructions are implemented varies depending on the model. The Vortex86SX and the early versions of the Vortex86 do not have a floating point unit (FPU). Any code that runs on an i586 but does not use floating point instructions will run on these models. Any i586 code will run on the Vortex86DX and later. Some Linux kernels (by build-time option) emulate the FPU on any CPU that is missing one, so a program that uses i586-level floating point instructions will work on any Vortex86 family CPU under such a kernel, albeit more slowly on a model with no FPU. The more advanced models have FPUs with i686-level instructions, such as FUCOMI. Code intended for an i686 may fail on some models because they lack the Conditional Move (CMOV) instruction. Compilers asked to optimize code for a more advanced CPU (for example the GNU Compiler with its -march=i686 option) generate code that uses CMOV. Linux systems intended to run on i686 are generally not compatible with these Vortex86 models because the GNU C Library, when built for i686, uses a CMOV instruction in its assembly language strcmp function, which its dynamic loader (ld.so) uses. Hence, no program that uses shared libraries can execute. Below are the properties of an original Vortex86 CPU as reported by the Linux kernel via /proc/cpuinfo. Note that this CPU is a later version with an FPU.
processor       : 0
vendor_id       : SiS SiS SiS
cpu family      : 5
model           : 0
model name      : 05/00
stepping        : 5
cpu MHz         : 199.978
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
flags           : fpu tsc cx8 mmx up
bogomips        : 399.95
clflush size    : 32
cache_alignment : 32
address sizes   : 32 bits physical, 32 bits virtual
power management:

Compatible components

DM&P maintained an embedded Linux distribution customized to use the SoC's features. Other operating systems may work depending on the SoC model, including various RTOS systems such as QNX and VxWorks, Linux distributions, FreeBSD, and various versions of Microsoft Windows such as Windows Embedded Compact or Windows IoT.

Versions

Vortex86 original

This was developed by SiS and called the SiS55x/Rise mP6, or simply Vortex86. It has three integer/MMX pipelines and branch prediction.

Vortex86SX

This runs at 300 MHz and has 16 KB data + 16 KB instruction L1 cache, no FPU, and no L2 cache. It can use both SD and DDR2 RAM.

Vortex86DX

This runs at 600 MHz to 1 GHz (2.02 W @ 800 MHz), and has 16 KB data + 16 KB instruction L1 cache, an FPU, 256 KB L2 cache, and a 6-stage pipeline. It can address up to 1 GiB of DDR2 RAM. The PDX-600 is a version of the Vortex86DX that differs only in the number of RS-232 ports (3 instead of 5) and has no I²C and servo controllers, targeting the embedded rather than the industrial market. Netbooks similar to the Belco 450R use this chip. The package is a single 581-pin BGA package.

Vortex86MX

This runs at 1 GHz. The CPU core hardly differs from the Vortex86DX, but according to several sources the processor does appear to have implemented SIMD multimedia instructions (MMX). This version drops ISA bus support and integrates a GPU and an HD Audio controller; it also integrates a UDMA/100 IDE controller. The consumer grade version is known as the PMX-1000. Current models of the Gecko Edubook use the Xcore86, a rebadge of the Vortex86MX.
Vortex86MX+

This has a 32 KB write-through 2-way L1 cache, a 256 KB write-through/write-back 4-way L2 cache, a PCI rev. 2.1 32-bit bus interface at 33 MHz, DDR2, a ROM controller, IPC (internal peripheral controllers with DMA and interrupt timer/counter included), Fast Ethernet, a FIFO UART, a USB 2.0 host and an ATA controller. The MX+ adds an on-chip VGA controller with shared memory. The package is a single 720-pin BGA package.

Vortex86DX2

This has a 32 KB write-through 4-way L1 cache (16 K instruction + 16 K data), a 256 KB write-through/write-back 4-way L2 cache, a PCI rev. 2.1 32-bit bus interface at 33 MHz, DDR2, a ROM controller, IPC (internal peripheral controllers with DMA and interrupt timer/counter included), VGA, 100 Mbps Ethernet, a FIFO UART, a USB 2.0 host and an ATA controller. Enhancements over the DX include more COM ports (9), support for 2 GB of RAM, an HD Audio codec, and more GPIO pins. The package is a single 720-pin BGA package.

Vortex86EX

This has a 32 KB write-through 2-way L1 cache, a 128 KB write-through/write-back 2-way L2 cache, a PCI-e bus interface, 300 MHz DDR3, a ROM controller, IPC (internal peripheral controllers with DMA and interrupt timer/counter included), Fast Ethernet, a FIFO UART, a USB 2.0 host and an ATA controller. The package is a single 288-pin TFBGA package.

Vortex86DX3

This has a 1.0 GHz dual-core i686-compatible CPU. It has an eight-way 32 K I-cache, an eight-way 32 K D-cache, a four-way 512 KB L2 cache with a write-through or write-back policy, the ability to use up to 2 GB of DDR3 RAM, a PCI-e bus interface, 100 Mbps Ethernet, a FIFO UART, a USB 2.0 host, an integrated GPU, and an ATA controller with an IDE controller: PATA 100 (2x HDD) or 2x SD on the primary channel, and SATA 1.5 Gbit/s (1 port) on the secondary channel. The package is a single 720-pin BGA package.

Vortex86EX2

The EX2 model has two asymmetrical master/slave CPU cores. The master core runs at 600 MHz and has a 16 K I-cache, a 16 K D-cache, and a four-way 128 KB L2 cache with a write-through or write-back policy.
The slave core operates at 400 MHz and also has a 16 KB I-cache and a 16 KB D-cache, but no L2 cache. Both cores have a built-in FPU. Maximum DDR3 RAM capacity is 2 GB. It can also use ECC memory. It is produced using the 65 nm manufacturing process and uses the 19x19 mm LFBGA-441 package.

See also

Embedded x86
Manufacturers

References

External links

DM&P Electronics official website
Vortex86 Series overview
Change CPU speed in DOS, Linux, Windows

Embedded microprocessors System on a chip X86 microprocessors
https://en.wikipedia.org/wiki/Heap%20overflow
Heap overflow
A heap overflow, heap overrun, or heap smashing is a type of buffer overflow that occurs in the heap data area. Heap overflows are exploitable in a different manner from stack-based overflows. Memory on the heap is dynamically allocated at runtime and typically contains program data. Exploitation is performed by corrupting this data in specific ways to cause the application to overwrite internal structures such as linked list pointers. The canonical heap overflow technique overwrites dynamic memory allocation linkage (such as malloc metadata) and uses the resulting pointer exchange to overwrite a program function pointer. For example, on older versions of Linux, two buffers allocated next to each other on the heap could result in the first buffer overwriting the second buffer's metadata. By setting the second buffer's in-use bit to zero and its length field to a small negative value (which allows null bytes to be copied), when the program calls free() on the first buffer it will attempt to merge the two buffers into a single buffer. When this happens, the buffer that is assumed to be freed is expected to hold two pointers, FD and BK, in the first 8 bytes of the formerly allocated buffer. BK gets written into FD and can be used to overwrite a pointer.

Consequences

An accidental overflow may result in data corruption or unexpected behavior by any process that accesses the affected memory area. On operating systems without memory protection, this could be any process on the system. For example, a Microsoft JPEG GDI+ buffer overflow vulnerability could allow remote execution of code on the affected machine. iOS jailbreaking often uses heap overflows to gain arbitrary code execution.

Detection and prevention

As with buffer overflows, there are primarily three ways to protect against heap overflows. Several modern operating systems, such as Windows and Linux, provide some implementation of all three.
Prevent execution of the payload by separating the code and data, typically with hardware features such as the NX bit.
Introduce randomization so the heap is not found at a fixed offset, typically with kernel features such as ASLR (address space layout randomization).
Introduce sanity checks into the heap manager.

Since version 2.3.6 the GNU libc has included protections that can detect heap overflows after the fact, for example by checking pointer consistency when calling unlink. However, those protections against prior exploits were almost immediately shown to also be exploitable. In addition, Linux has included support for ASLR since 2005, although PaX introduced a better implementation years before. Linux has also included support for the NX bit since 2004. Microsoft has included protections against heap-resident buffer overflows since April 2003 in Windows Server 2003 and August 2004 in Windows XP with Service Pack 2. These mitigations were safe unlinking and heap entry header cookies. Later versions of Windows such as Vista, Server 2008 and Windows 7 include: removal of commonly targeted data structures, heap entry metadata randomization, an expanded role for the heap header cookie, a randomized heap base address, function pointer encoding, termination on heap corruption and algorithm variation. Normal Data Execution Prevention (DEP) and ASLR also help to mitigate this attack.

See also

Heap spraying
Stack buffer overflow
Exploit
Shellcode

References

External links

Smashing The Heap For Fun And Profit
Heap Overflow article at Heise Security
Defeating Microsoft Windows XP SP2 Heap protection and DEP bypass

Computer security exploits Software anomalies
https://en.wikipedia.org/wiki/Express%20Packet%20%281808%20ship%29
Express Packet (1808 ship)
Express Packet (or Express) was built in France in 1807, probably under another name, and taken in prize circa 1808. From 1809 she sailed as a packet for the Post Office Packet Service out of Falmouth, Cornwall. In 1813 an American privateer captured her in a notable single-ship action, but then returned her to her captain and crew after plundering her. Express stopped sailing as a packet in 1817 and then made one more voyage to Spain, after which she disappeared from online records.

Career

Captain John Bullock assumed command of Express Packet on 3 December 1808, and she started sailing for the packet service in 1809. Two French privateers captured Jacob, of Philadelphia, Jellig, master, as she was sailing from Cadiz and Gibraltar. On 30 June 1810 Express Packet recaptured Jacob and carried her into Gibraltar. On 23 March 1811 shares of Express Packet were offered for sale. Four days earlier, she had arrived at Falmouth from Jamaica with 560,000 dollars. Captain John Watkins assumed command of Express Packet on 4 February 1812. Captain John Quick assumed command of Express Packet on 31 December 1812.

On 23 March 1813, Express Packet, John Quick, master, left Rio de Janeiro, bound for Falmouth. She had a crew of 32 men and boys. On 14 April she encountered the American privateer Anaconda. Anaconda was armed with 18 guns and had a crew of 120 men. The ensuing action lasted for an hour and a half before Captain Quick felt he had to strike. Although Express Packet had suffered no casualties, four guns had been dismounted, her rigging was cut to pieces, and holes between wind and water had resulted in her having taken on 3½ feet of water with more coming in. The Americans plundered Express Packet of all her stores and threw her guns overboard. They also took out £10,000 or £12,000 in gold bullion. The Americans restored the passengers' private property and gave Express Packet back to Quick and his crew. Express Packet arrived back at Falmouth on 19 May 1813.
She had sunk her mails before she was captured. The Captains' Enquiry into the action praised Captain Quick for his conduct. The damage to Express reduced her valuation from £3071 to £2270 12s. Quick received a share of that as salvage. The repairs took over two months to complete and cost £2341 14s 9d. Express Packet then returned to normal service.

Fate

Express Packet was no longer listed among the "Falmouth Packets" in the LR volume for 1818. She was listed as Express among the regular merchant vessels, still with Quick as master and trade Falmouth. The last mention of Express, Quick, master, in Lloyd's List's ship arrival and departure data showed her arriving at Corunna on 4 October 1817 from Falmouth.

Notes, citations, and references

Notes

Citations

References

1807 ships
Captured ships
Age of Sail merchant ships of England
Falmouth Packets
https://en.wikipedia.org/wiki/Softeq
Softeq
Softeq Development Corporation is a full-stack development company focusing on low-level programming (drivers, firmware), hardware (from PCBs to full-scale devices), and software apps for web, desktop, and mobile.

History

The company was founded by Christopher A. Howard in Houston, Texas, United States, in 1997. The name “Softeq” originally implied “technical software”, which was the company's core focus area in its early days. In 2008, the company opened a branch in Minsk, Belarus, which became its development center and now has 300 employees. Softeq Development is a resident of High Tech Park, a scientific and business cluster located in Minsk, Belarus. In 2018, Softeq Development completed the acquisition of NearShore Solutions GmbH. It is located in Munich, Germany, and functions as a customer delivery center.

Technology partnerships

Softeq is a Gold Application Development Partner of Microsoft, a Xamarin Authorized Consulting Partner, and an official member of Apple Inc.’s MFi Program.

Key industries served

Software and Technology
Automotive
Industrial Automation
Data Storage
Healthcare and Life Science
Sports
Media and Entertainment
Manufacturing
Consumer Electronics
Retail and e-Commerce
Transportation

Flash memory expertise

Softeq’s Minsk office has been the locus of NAND flash memory research and development for SanDisk for seven years. The company’s embedded software development department was a long-term partner of the world’s second-largest flash memory producer, SK Hynix. In 2014, the chipmaker’s acquisition of the dedicated development center within Softeq led to the foundation of Softeq Flash Solutions LLC. This Softeq spin-off became part of SK Hynix's R&D center for flash memory products.

Affiliates

zGames

zGames LLC, the game development brand of Softeq, was launched in 2008. The company provides full-cycle game design and development services and delivers casual, mixed reality, educational, and gamblified titles.
The key platforms targeted include mobile, desktop (including VR), web, and AR. zGames has also made a number of remakes of classic game titles, including Pong World and QIX Galaxy. In 2012, zGames won the Pong® Indie Developer Challenge by Atari.

Brands

DURATEQ

DURATEQ ATV is a rugged handheld device featuring Disney SyncLink Technology. The solution was built relying on content development services by the WGBH Media Access Group and R&D efforts in the field of media accessibility for disabled people by the Accessible Learning and Assessment Technologies group of NCAM. DURATEQ is installed across multiple venues, including branches of the National Park Service, George Washington's Mount Vernon, Walt Disney Parks and Resorts, the World of Coca-Cola Museum, the Hall at Patriot Place in Foxborough, MA, AT&T Stadium in Dallas, and the National Museum of the American Indian.

Customers

The company works with startups and large enterprises, including Verizon, Intel, Epson, the Women's Tennis Association, Munich Airport, and uQontrol.

References

Companies based in Houston
Companies established in 1997
Outsourcing companies
Companies based in Texas
Software companies of Belarus
Custom software projects
https://en.wikipedia.org/wiki/1955%20USC%20Trojans%20football%20team
1955 USC Trojans football team
The 1955 USC Trojans football team represented the University of Southern California (USC) in the 1955 college football season. In their fifth year under head coach Jess Hill, the Trojans compiled a 6–4 record (3–3 against conference opponents), finished in sixth place in the Pacific Coast Conference, and outscored their opponents by a combined total of 265 to 158. Attendance at seven home games was 464,104, an average of 66,300. Attendance at all 10 games was 615,196.

Jim Contratto led the team in passing, completing 22 of 55 passes for 406 yards with five touchdowns and five interceptions. Jon Arnett led the team in rushing with 141 carries for 672 yards and 11 touchdowns. Arnett was also the team's leading punt returner with 16 returns and an average of 17.6 yards per return, including one returned for a touchdown. With three touchdown catches, Arnett also led the team with 15 touchdowns and 105 points scored. Leon Clarke was the leading receiver with 15 catches for 215 yards and two touchdowns.

Two Trojans received first-team honors from the Associated Press on the 1955 All-Pacific Coast Conference football team: back Jon Arnett and guard Orlando Ferrante. Arnett was also a two-time recipient of the W. J. Voit Memorial Trophy as the outstanding football player on the Pacific Coast, winning the award in both 1954 and 1955. He was inducted into the USC Athletic Hall of Fame in 1994 and the College Football Hall of Fame in 2001.

Schedule

Players

The following players were members of the 1955 USC football team.
Fabian Abram, tackle
Jon Arnett, junior tailback #26 ("led the team in total offense, punt returns, kickoff returns, punting, rushing, scoring and in total playing time")
George Belotti, tackle
Bing Bordier, end
Ron Brown, back
Leon Clarke, end (the first PCC player to be selected in the 1956 NFL Draft, selected 14th overall by the Los Angeles Rams)
Jim Contratto, quarterback
Gordon Duvall, back (offense) and linebacker (defense)
Dick Eldredge, center
Dick Enright, tackle
Orlando Ferrante, guard (All-PCC, 62 unassisted tackles, 41 assists)
Ron Fletcher, tackle
George Galli, guard (co-captain)
Marv Goux (co-captain)
Frank Hall, quarterback
Don Hickman
Bob Isaacson
Ludwig Keehn, end
Ells Kissinger, quarterback
Doug Kranz, halfback
Chuck Leimbach, end
Don McFarland
Ernie Merk (set a USC record with a 93-yard punt return against Minnesota)
John Miller, guard
Chuck Perpich
Fred Pierce, left halfback
C. R. Roberts, fullback
Karl Rubke, center
Vern Sampson, lineman
Roy Smith, tackle
Joe Tisdale
Laird Willott, guard
Dick Westphal
Ernie Zampese, tailback

Coaching staff and other personnel

Head coach: Jess Hill
Assistant coaches: Nick Pappas (defensive backs), Don Clark (line coach), George Ceithaml (offensive backfield coach), Mel Hein (line coach), Bill Fisk (end coach), Jess Mortensen (freshman coach)
Senior manager: Jim Maddux
Yell kings: Bill Hillinck, Kent Blanche, Al Green, Woody Wilmore, Larry Knudsen

References

USC
USC Trojans football seasons
USC Trojans football
https://en.wikipedia.org/wiki/IBM%20Common%20User%20Access
IBM Common User Access
Common User Access (CUA) is a standard for user interfaces to operating systems and computer programs. It was developed by IBM and first published in 1987 as part of their Systems Application Architecture. Used originally in the MVS/ESA, VM/CMS, OS/400, OS/2 and Microsoft Windows operating systems, parts of the CUA standard are now implemented in programs for other operating systems, including variants of Unix. It is also used by Java AWT and Swing.

Motivations and inspirations

IBM wanted a standard way to interact with text-based user interface software, whether the screen was a dumb terminal connected to a mainframe or a PS/2 with VGA graphics. CUA was a detailed specification and set strict rules about how applications should look and function. Its aim was in part to bring about harmony among DOS applications, which until then had independently implemented different user interfaces. Examples:

In WordPerfect, a file was opened with the Retrieve command, Shift+F10.
In Lotus 1-2-3, a file was opened with / (to open the menus), F (for File), R (for Retrieve).
In Microsoft Word, a file was opened with Esc (to open the menus), T (for Transfer), L (for Load).
In WordStar, opening a file used a Ctrl+K prefix sequence.
In emacs, a file is opened with Ctrl+X followed by Ctrl+F (for find-file).

F1 was often the help key (as in Volkswriter (1982)), but in WordPerfect, help was on F3 instead. Some programs used Esc to cancel an action, while some used it to complete one; WordPerfect used it to repeat a character. Some programs used End to go to the end of a line, while some used it to complete filling in a form. Ins sometimes toggled between overtype and inserting characters, but some programs used it for "paste". Thus every program had to be learned individually and its complete user interface memorised.
It was a sign of expertise to have learned the UIs of dozens of applications, since a novice user facing a new program would find their existing knowledge of a similar application either of no use or actively a hindrance to understanding as learned behavior might need to be unlearned for the new application. The detailed CUA specification, published in December 1987, is 328 pages long. It has similarities to Apple Computer's detailed human interface guidelines (139 pages). The Apple HIG is a detailed book specifying how software for the 1984 Apple Macintosh computer should look and function. When it was first written, the Mac was new, and graphical user interface (GUI) software was a novelty, so Apple took great pains to ensure that programs would conform to a single shared look and feel. CUA had a similar aim, but it faced the more difficult task of trying to impose this retroactively on an existing, thriving but chaotic industry, with the much more ambitious goal of unifying all UI, from personal computers to minicomputers to mainframes; and supporting both character and GUI modes, and both batch and interactive designs. By comparison, the Apple HIG only supported interactive GUI on a standalone personal computer. CUA also attempted to be a more measurable standard than the Apple HIG and had large sections formatted as checklists to measure compliance. Description The CUA contains standards for the operation of elements such as dialog boxes, menus and keyboard shortcuts that have become so influential that they are implemented today by many programmers who have never read the CUA. Some of these standards can be seen in the operation of Windows itself and DOS-based applications like the MS-DOS 5 full-screen text editor edit.com. 
CUA hallmarks include:

All operations can be done with either the mouse or the keyboard.
Where applicable to the page/screen in question, F5 provides a refresh function.
Menus are activated/deactivated with the F10 key.
Menus are opened by pressing the Alt key plus the underlined letter of the menu name.
Menu commands that require parameters to proceed are suffixed with an ellipsis ("…").
Options are requested using secondary windows (often called dialog boxes).
Options are divided into sections using notebook tabs.
Navigation within fields in dialog boxes is by cursor key; navigation between fields is by pressing the Tab key; Shift+Tab moves backwards.
Dialog boxes have a 'Cancel' button, activated by pressing the Esc key, which discards changes, and an 'OK' button, activated by pressing Enter, which accepts changes.
Applications have online help accessed by a Help menu, which is the last option on the menu bar; context-sensitive help can be summoned by F1.
The first menu is to be called 'File' and contains operations for handling files (new, open, save, save as) as well as quitting the program; the next menu, 'Edit', has commands for undo, redo, cut, copy, delete, and paste.
The Cut command is Shift+Del; Copy is Ctrl+Ins; Paste is Shift+Ins.
The size of a window can be changed by dragging one of the 8 segments of the border.

CUA not only covers DOS applications, but is also the basis for the Windows Consistent User Interface standard (CUI), as well as that for OS/2 applications — both text-mode and the Presentation Manager GUI — and IBM mainframes which conform to the Systems Application Architecture. CUA was more than just an attempt to rationalise DOS applications — it was part of a larger scheme to bring together, rationalise and harmonise the overall functions of software and hardware across IBM's entire computing range from microcomputers to mainframes. This is perhaps partly why it was not completely successful.
The third edition of CUA took a radical departure from the first two by introducing the object-oriented workplace. This changed the emphasis of the user's interactions to be the data (documents, pictures, and so on) that the user worked on. The emphasis on applications was removed with the intention of making the computer easier to use by matching users' expectations that they would work on documents using programs (rather than operating programs to work on documents). (See also object-oriented user interface.)

Influence

CUA strongly influenced the early Microsoft Windows operating system during the period of joint IBM and Microsoft cooperation on OS/2 Presentation Manager. But later releases of IBM's CUA documents were not used for Microsoft products, and so CUA became less significant in the Windows environment; for instance, the Start menu was introduced. Most of the standard keystrokes and basic GUI widgets specified by the CUA remain available in Windows. The well-known combination for closing a window, Alt+F4, stems from CUA.

CUA never had significant impact on the design of Unix terminal (character-mode) applications, which preceded CUA by more than a decade. However, all major Unix GUI environments/toolkits, whether or not based on the X Window System, have featured varying levels of CUA compatibility, with Motif/CDE explicitly featuring it as a design goal. The current major environments, GNOME and KDE, also feature extensive CUA compatibility. The subset of CUA implemented in Microsoft Windows or OSF/Motif is generally considered a de facto standard to be followed by any new Unix GUI environment.

See also

Table of keyboard shortcuts

References

IBM, Systems Application Architecture: Common User Access: Panel Design and User Interaction, Document SC26-4351-0, 1987.
IBM, Systems Application Architecture: Common User Access: Advanced Interface Design Guide, Document SC26-4582-0, 1990.
IBM, Systems Application Architecture: Common User Access: Basic Interface Design Guide, Document SC26-4583-00 , 1992. (Partial archive) IBM, Systems Application Architecture: Common User Access: Guide to User Interface Design, Document SC34-4289-00 1991 IBM, Systems Application Architecture: Common User Access: Advanced Interface Design Reference, Document SC34-4290-00 1991 External links  , by Richard E. Berry, IBM Systems Journal, Volume 27, Nº 3, 1988. Citations. This link is down, PDF still available at: https://web.archive.org/web/20070927082756/http://www.research.ibm.com/journal/sj/273/ibmsj2703E.pdf  , by Richard E. Berry, Cliff J. Reeves, IBM Systems Journal, Volume 31, Nº 3, 1992. Citations.  , by Richard E. Berry, IBM Systems Journal, Volume 31, Nº 3, 1992. Citations. IBM BookManager SAA CUA bookshelf 1992 CUA Window Emulation for SlickEdit - A table of CUA-based hotkeys provided by a SlickEdit mode Common User Access Human–computer interaction User interface techniques Common User Access
https://en.wikipedia.org/wiki/Superman%20III
Superman III
Superman III is a 1983 superhero film directed by Richard Lester from a screenplay by David Newman and Leslie Newman, based on the DC Comics character Superman. It is the third installment in the Superman film series and a sequel to Superman II (1980). The film features a cast of Christopher Reeve, Richard Pryor, Jackie Cooper, Marc McClure, Annette O'Toole, Annie Ross, Pamela Stephenson, Robert Vaughn, and Margot Kidder. Although the film recouped its budget of $39 million, it proved less successful than the first two Superman films, both financially and critically. While harsh criticism focused on the film's comedic and campy tone, as well as on the casting and performance of Pryor, the special effects and Christopher Reeve's performance as Superman were praised. A sequel, Superman IV: The Quest for Peace, was released in July 1987.

Plot

While Superman protects Metropolis, the Metropolis-based conglomerate Webscoe hires Gus Gorman, a talented computer programmer. Gus embezzles from his employer through salami slicing, which brings him to the attention of CEO Ross Webster. Webster is intrigued by Gus' potential to help him financially, and he, his sister Vera, and his girlfriend Lorelei blackmail Gus into helping him. At the Daily Planet, Clark Kent convinces Perry White to let him and Jimmy Olsen visit Smallville for Clark's high-school reunion, while fellow reporter and Clark's unrequited romantic interest Lois Lane leaves for a Bermuda vacation. En route, as Superman, Kent extinguishes a fire in a chemical plant containing unstable beltric acid, which produces corrosive vapor when superheated. At the reunion, Clark reunites with childhood friend Lana Lang, a divorcée with a young son named Ricky. Clark is harassed by Brad Wilson, his former bully and Lana's ex-boyfriend. While visiting Lana, Superman saves Ricky from being killed by a combine harvester.
Infuriated by Colombia's refusal to do business with him, Webster orders Gus to command Vulcan, an American weather satellite, to create a tornado to destroy Colombia's coffee crop, allowing Webster to corner the market. Gus travels to Smallville to use a Webscoe subsidiary to reprogram the satellite. Although Vulcan creates a devastating storm, Superman neutralizes it and saves the harvest. Seeing Superman as a legitimate threat to his plans, Webster orders Gus to create synthetic Kryptonite. Gus uses Vulcan to locate and analyze Krypton's debris. As one of the elements of Kryptonite is unknown, he substitutes tar after glancing at his pack of cigarettes. Lana convinces Superman to appear at Ricky's birthday party, but Smallville turns it into a town celebration. Gus and Vera, disguised as Army officers, give Superman the flawed Kryptonite as an award. Instead of the Kryptonite killing him as Webster intended, Superman becomes selfish; his desire for Lana causes him to delay rescuing a truck driver from a jackknifed rig hanging from a bridge. The hero commits petty acts of vandalism such as straightening the Leaning Tower of Pisa and blowing out the Olympic Flame. Gus asks Webster to build the world's most sophisticated supercomputer; the CEO agrees, if Gus creates an energy crisis by directing all oil tankers to the middle of the Atlantic Ocean. When the captain of one tanker insists on maintaining his original course, Lorelei seduces Superman, persuading him to waylay the tanker and breach its double hull, causing an oil spill. The villains decamp to the supercomputer's location in Glen Canyon. Superman suffers a nervous breakdown and splits into two beings: the immoral, corrupted dark Superman and the moral, mild-mannered Clark Kent. The two fight in a junkyard, with Clark eventually gaining the upper hand and defeating his evil self. Regaining his sanity, Superman repairs the damage he caused in the oil spill and heads west to deal with the villains. 
After defending himself from exploding rockets and an ASALM missile, Superman confronts Webster, Vera, and Lorelei. The supercomputer quickly identifies Superman's weakness and unleashes a beam of pure Kryptonite. Guilt-ridden and horrified by the notion of "going down in history as the man who killed Superman", Gus destroys the Kryptonite ray with a firefighter's axe. Superman escapes, but the computer becomes self-aware, defending itself against Gus's attempts to disable it. The computer transforms Vera Webster into a cyborg that attacks her brother and Lorelei with beams of energy that immobilize them. Superman returns with beltric acid, which the supercomputer believes is not dangerous. The intense heat emitted by the supercomputer causes the acid to become volatile, destroying it. Superman leaves Webster and his cronies for the authorities and drops Gus off at a West Virginia coal mine, recommending him to the company as a computer programmer. As Clark, Superman visits Lana after she moves to Metropolis. A drunken Brad appears and attacks Clark, but the reporter defeats him without revealing his secret identity. Lana's new job as Perry White's secretary surprises Lois Lane, who returns from her vacation with an article about corruption in Bermuda, and has a newfound respect for Clark after reading his story. Before lunch with Lana, Superman restores the Leaning Tower of Pisa and flies into the sunrise for further adventures. Cast Christopher Reeve as Clark Kent / Superman: After discovering his origins in the earlier films, he sets himself to helping those on Earth. After defeating his arch enemies, Lex Luthor twice and General Zod, Superman comes face-to-face with a new villain: the megalomaniac Ross Webster, who is determined to control the world's coffee and oil supplies. Superman also battles personal demons after an exposure to a synthetic form of kryptonite that corrupts him. 
Richard Pryor as August "Gus" Gorman: A bumbling computer genius who works for Ross Webster and inadvertently gets mixed up in Webster's scheme to destroy Superman. Jackie Cooper as Perry White: The editor of the Daily Planet. Marc McClure as Jimmy Olsen: A photographer for the Daily Planet. Annette O'Toole as Lana Lang: Clark's high school friend who reconciles with Clark after seeing him during their high school reunion. O'Toole later portrayed Martha Kent on the Superman television series Smallville. Robert Vaughn as Ross "Bubba" Webster: A villainous, super-wealthy industrialist and philanthropist. After Superman prevents him from taking over the world's coffee supply, Ross is determined to destroy Superman before he can stop his plan to control the world's oil supply. He is an original character created for the movie. Annie Ross as Vera Webster: Ross' sister and partner in his corporation and villainous plans. Pamela Stephenson as Lorelei Ambrosia: Ross' assistant. Lorelei, a voluptuous blonde bombshell, is well-read, articulate and skilled in computers, but conceals her intelligence from Ross and Vera, to whom she adopts the appearance of a superficial, stereotypical klutz. As part of Ross' plan, she seduces Superman. Margot Kidder as Lois Lane: A reporter at the Daily Planet who has a history with both Clark Kent and Superman. She is away from Metropolis on vacation to Bermuda, which put her in the middle of a front-page story. Gavan O'Herlihy as Brad Wilson: Lana's ex-boyfriend and Clark's high school rival. Film director/puppeteer Frank Oz originally had a cameo in this film as a surgeon, but the scene was ultimately deleted, though it was later included in the TV extended version of the film. Shane Rimmer, who had a role in Superman II as a NASA controller, has a small part as a state police officer. Pamela Mandell, who played a diner waitress in the same film, appears here as the hapless wife of a Daily Planet sweepstakes winner. 
Aaron Smolinski, who had played baby Clark Kent in the first film, appears in this one as the little boy next to the photo booth in which Superman changes. He would later appear in Man of Steel as a communications officer.

Production

Development

Richard Donner confirmed that he had been interested in writing at least two more Superman films, which he intended to allow Tom Mankiewicz to direct, and that he would have included Brainiac as the villain of the third film. However, Donner departed the series during the production of Superman II. The film was formally announced at the 33rd Cannes Film Festival in May 1980, months before the theatrical release of the second film. In December 1980, producer Ilya Salkind wrote a treatment for this film that included Brainiac, Mister Mxyzptlk and Supergirl. The treatment was released online in 2007. The Mr. Mxyzptlk portrayed in the outline varies from his good-humored comic counterpart, as he uses his abilities to cause serious harm. Dudley Moore was the top choice to play the role. Meanwhile, in the same treatment, Brainiac was from Colu and had discovered Supergirl in the same way that Superman was found by the Kents. Brainiac is portrayed as a surrogate father to Supergirl and eventually fell in love with his "daughter", who did not reciprocate his feelings, as she had fallen in love with Superman. Brainiac retaliates by using a personality machine to corrupt and manipulate Superman. The climax of the film would have seen Superman, Supergirl, Brainiac, Jimmy Olsen, and Lana Lang time travel to the Middle Ages for a final confrontation in a fiefdom taken over by Brainiac. After defeating him and leaving him behind as a helpless serf, Superman and Supergirl would have been married either at the end of Superman III or in Superman IV. The treatment was rejected by Warner Bros. Pictures as being too complex and expensive to shoot, and Salkind additionally wanted to save the character of Supergirl for a solo film.
Because of the high budgets required for the series, the Salkinds considered selling the rights to the series to Dino De Laurentiis. The significance of computers to the plot, the villains’ plan to corrupt Superman, and the splitting of him into a good and an evil half would be carried over into the final film. The film was originally intended to be titled Superman vs. Superman, but it was retitled after the producers of Kramer vs. Kramer threatened a lawsuit.

Casting

Both Gene Hackman and Margot Kidder are said to have been angry with the way the Salkinds treated Superman director Richard Donner, with Hackman retaliating by refusing to reprise the role of Lex Luthor. After Margot Kidder publicly criticized the Salkinds for their treatment of Donner, the producers reportedly "punished" the actress by reducing her role in Superman III to a brief appearance. Hackman later denied such claims, stating that he had been busy with other movies and that making Luthor a constant villain would be akin to incessant horror-movie sequels in which a serial killer keeps coming back from the grave. Hackman would reprise his role as Lex Luthor in Superman IV, with which the Salkinds had no involvement. In his commentary for the 2006 DVD release of Superman III, Ilya Salkind denied any ill will between Margot Kidder and his production team and denied the claim that her part was cut in retaliation. Instead, he said, the creative team decided to pursue a different direction for a love interest for Superman, believing the Lois and Clark relationship had been played out in the first two films (but could be revisited in the future). With the choice to give a more prominent role to Lana Lang, Lois' part was reduced for story reasons. Salkind also denied the reports about Gene Hackman being upset with him, stating that Hackman was unable to return because of other film commitments.
After an appearance by Richard Pryor on The Tonight Show, telling Johnny Carson how much he enjoyed seeing Superman II, the Salkinds were eager to cast him in a prominent role in the third film, riding on Pryor's success in films such as Silver Streak, Stir Crazy and The Toy. Pryor accepted a $5 million salary to appear in the film. Following the release of the film, Pryor signed a five-year contract with Columbia Pictures worth $40 million. Filming Principal photography commenced on June 21, 1982. Most of the interior scenes were shot, like the previous Superman films, at Pinewood Studios outside London. The junkyard scene was filmed on Pinewood's backlot. The coal mine scene, where Superman leaves Gus, was filmed at Battersea Power Station, where Richard Lester had previously shot scenes for the Beatles film Help!. Most exteriors were filmed in Calgary, Alberta due to Canada's tax breaks for film companies. Superman's drinking binge was filmed at the St. Louis Hotel in Downtown East Village, Calgary, while other scenes such as the slapstick-comedy opening were shot several blocks to the west. While the supercomputer set was created on Pinewood's 007 Stage, exteriors were shot at Glen Canyon in Utah. Effects and animation The film includes "the same special effects team" from the prior two films. Atari, part of Warner, created the video game computer animation for the missile defense scene. Music As with the previous sequel, the musical score was composed and conducted by Ken Thorne, using the Superman theme and most other themes from the first film composed by John Williams. Giorgio Moroder was hired to create songs for the film, though their use in the film is minimal. Release Theatrical Superman III was shown at the Uptown Theater in Washington, D.C. on June 12, 1983, and then had its New York premiere on June 14, 1983, at Cinema I. It was released in theatres on June 17, 1983, in the United States and July 19, 1983, in the United Kingdom. 
Marketing William Kotzwinkle wrote a novelization of the film published in paperback by Warner Books in the U.S. and by Arrow Books in the United Kingdom to coincide with the film's release; Severn House published a British hardcover edition. Kotzwinkle thought the novelization "a delight the world has yet to find out about." However, writing in Voice of Youth Advocates, Roberta Rogow hoped this would be the final Superman film and said, "Kotzwinkle has done his usual good job of translating the screenplay into a novel, but there are nasty undertones to the film, and there are nasty undertones to the novel as well. Adults may enjoy the novel on its own merits, as a Black Comedy of sorts, but it's not written for kids, and most of the under-15 crowd will either be puzzled or revolted by Kotzwinkle's dour humor." Extended television edition Like the previous films, a separate extended edition was produced. It was aired on ABC. The opening credits were in outer space, featuring the main Superman theme with slight differences. This is followed by a number of scenes, including additional dialogue but not added into any of the official VHS, DVD or Blu-ray cuts of the film. The "Deluxe Edition" of Superman III, released in 2006 on par with the DVD release of Superman Returns, included these scenes in its extra features section as "deleted scenes". Reception Box office Superman III grossed $60 million at the United States box office, and $20.2 million internationally, for a total of $80.2 million worldwide. The film was the 12th highest-grossing film of 1983 in North America. Critical response Superman III holds a 29% approval rating and has an average rating of 4.6/10 on Rotten Tomatoes based on 55 reviews. The website's critical consensus states, "When not overusing sight gags, slapstick and Richard Pryor, Superman III resorts to plot points rehashed from the previous Superman flicks." 
The film has a Metacritic rating of 44, indicating "mixed or average reviews" from 13 professional reviewers. Film critic Leonard Maltin said that Superman III was an "appalling sequel that trashed everything that Superman was about for the sake of cheap laughs and a co-starring role for Richard Pryor". The film was nominated for two Razzie Awards including Worst Supporting Actor for Richard Pryor and Worst Musical Score for Giorgio Moroder. Audiences also saw Robert Vaughn's villainous Ross Webster as an inferior fill-in for Lex Luthor. Christopher John reviewed Superman III in Ares Magazine #16 and commented that "compared to the first film in this series, everything about Superman III is a joke, a harsh cruel joke played on all the people who wanted to see more of the Superman they saw a few years ago." Colin Greenland reviewed Superman III for Imagine magazine, and stated that "What ultimately spoils the fun in Superman III is not the incoherent story or even the technophobia. It is simply overloaded - too many ideas, too many gadgets, too many stars (Pamela Stephenson is completely wasted in a part which would have been too dumb for Goldie Hawn). The wiring all comes loose at the end; an anticlimax, and a rushed one at that." Fans of the Superman series placed a great deal of the blame on director Richard Lester. Lester made a number of popular comedies in the 1960s — including The Beatles' A Hard Day's Night — before being hired by the Salkinds in the 1970s for their successful Three Musketeers series, as well as Superman II which, although better received, was also criticised for unnecessary sight gags and slapstick. Lester broke tradition by setting the opening credits for Superman III during a prolonged slapstick sequence rather than in outer space. The film's screenplay, by David and Leslie Newman, was also criticized. 
When Richard Donner was hired to direct the first two films, he found the Newmans' scripts so distasteful that he hired Tom Mankiewicz for heavy rewrites. Since Donner and Mankiewicz were no longer attached, the Salkinds were free to bring their version of Superman to the screen and once again hired the Newmans for writing duties. Reeve stated in his autobiography that the original script for the first Superman had so many puns and gags that it risked having Superman earn a reputation akin to that of Batman, tied to the campy TV show of the 1960s. "In one scene in this script, Superman would be in pursuit of Lex Luthor, identified by his bald head, and grab him, only to realize he had captured Telly Savalas, who would remark "Who loves ya, baby?" and offer Superman a lollipop. Dick [Donner] had done away with much of that inanity." Reeve's own performance as a corrupted Man of Steel received praise, particularly the junkyard battle between the newly darkened Superman and Clark Kent. One of the film's positive reviews came from the fiction writer Donald Barthelme, who praised Reeve as "perfect" and described Vaughn as "essentially playing William Buckley - all those delicious ponderings, popping of the eyes, licking of the corner of the mouth." References External links Official DC Comics Site Official Warner Bros.
Site 1980s action comedy films 1980s fantasy-comedy films 1980s science fiction comedy films 1980s English-language films 1983 comedy films 1983 films American films American sequel films British films British sequel films Films about computing Films directed by Richard Lester Films produced by Pierre Spengler Films scored by Giorgio Moroder Films set in Colombia Films set in Kansas Films set in West Virginia Films shot in Buckinghamshire Films shot in Calgary Films shot in England Films shot in Italy Films shot in Utah Films shot at Pinewood Studios Films with screenplays by David Newman (screenwriter) Films with screenplays by Leslie Newman Pisa in fiction Superman (1978 film series) Superman films Warner Bros. films Films scored by Ken Thorne
7528242
https://en.wikipedia.org/wiki/Apache%20OFBiz
Apache OFBiz
Apache OFBiz is an open source enterprise resource planning (ERP) system. It provides a suite of enterprise applications that integrate and automate many of the business processes of an enterprise. OFBiz is an Apache Software Foundation top level project. Overview Apache OFBiz is a framework that provides a common data model and a set of business processes. All applications are built around a common architecture using common data, logic and process components. Beyond the framework itself, Apache OFBiz offers functionality including: Accounting (agreements, invoicing, vendor management, general ledger) Asset maintenance Catalogue and product management Facility and warehouse management system (WMS) Manufacturing execution / manufacturing operations management (MES/MOM) Order processing Inventory management, automated stock replenishment etc. Content management system (CMS) Human resources (HR) People and group management Project management Sales force automation Work effort management Electronic point of sale (ePOS) Electronic commerce (eCommerce) Scrum software development support Technology All Apache OFBiz functionality is built on a common framework. The functionality can be divided into the following distinct layers: Presentation layer Apache OFBiz uses the concept of "screens" to represent its pages. Each page is normally represented as a screen. A page in Apache OFBiz consists of components. A component can be a header, footer, etc. When the page is rendered, all the components are combined as specified in the screen definition. Components can be JavaServer Pages (JSPs; deprecated), FTL pages built around the FreeMarker template engine, or form and menu widgets. Widgets are an OFBiz-specific technology. Business layer The business, or application, layer defines services provided to the user. The services can be of several types: Java methods, SOAP, simple services, workflow, etc.
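The service abstraction can be illustrated with a minimal sketch. This is not the actual OFBiz service engine (which is written in Java and also handles transactions and security); it is a hypothetical Python example with invented names, showing only the core idea of invoking services by name with untyped parameter maps.

```python
# Minimal sketch of a name-based service engine (illustrative only;
# OFBiz's real service engine is Java and also manages transactions
# and security). All names here are invented for the example.

class ServiceEngine:
    def __init__(self):
        self._services = {}  # service name -> callable

    def register(self, name, func):
        self._services[name] = func

    def run_sync(self, name, context):
        """Invoke a service by name with an untyped context map."""
        if name not in self._services:
            raise KeyError(f"service not defined: {name}")
        return self._services[name](context)

engine = ServiceEngine()

# A "service" is just a function taking and returning a dict, so
# callers need no compile-time knowledge of its signature.
def create_order(context):
    return {"orderId": "ORD-1", "total": sum(context["itemPrices"])}

engine.register("createOrder", create_order)

result = engine.run_sync("createOrder", {"itemPrices": [10, 5]})
print(result["total"])  # 15
```

Because callers pass and receive plain maps, the same invocation path could front Java methods, SOAP endpoints or scripted "simple services" without changing the caller.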
A service engine is responsible for invocation, transactions and security. Apache OFBiz uses a set of open source technologies and standards such as Java, Java EE, XML and SOAP. Although Apache OFBiz is built around the concepts used by Java EE, many of its concepts are implemented in different ways, either because Apache OFBiz was designed prior to many recent improvements in Java EE or because the Apache OFBiz authors did not agree with those implementations. Data layer The data layer is responsible for database access, storage and providing a common data interface to the business layer. Data is accessed not in an object-oriented fashion but in a relational way. Each entity (represented as a row in the database) is provided to the business layer as a set of generic values. A generic value is not typed, so the fields of an entity are accessed by column name. History The OFBiz project was created by David E. Jones and Andrew Zeneski on April 13, 2001. The project was initially hosted on SourceForge as The Open For Business Project, and is listed as Open For Business Project (Apache OFBiz) at Open HUB. Between September 2003 and May 2006, it was hosted as a java.net project, but the project has since been removed from there. It came into wide use around 2003. After entering the Apache Incubator on January 31, 2006, it became a top-level Apache project on December 20, 2006. See also Comparison of shopping cart software Comparison of accounting software Comparison of project management software List of ERP software packages References External links Official Apache OFBiz website OFBiz Free accounting software Free e-commerce software Free industrial software Free ERP software Free software programmed in Java (programming language) Web applications
81991
https://en.wikipedia.org/wiki/Podalirius
Podalirius
In Greek mythology, Podalirius or Podaleirius or Podaleirios () was a son of Asclepius. Mythology Trojan war With Machaon, his brother, he led thirty ships from Tricca, Thessaly in the Trojan War on the side of the Greeks. Like Machaon, he was a legendary healer. He healed Philoctetes, holder of the bow and arrows of Heracles required to end the war. He was one of those who entered the Trojan Horse. Alongside Amphimachus, Calchas, Leonteus and Polypoetes he traveled to Colophon, where Calchas died. Aftermath Unlike his brother, Podalirius survived the war, and subsequently settled in Caria. Accounts vary as to how he ended up there. According to one version, he returned to Argos after the war but later went on to consult the Delphian oracle about a preferable place for himself to live, and was instructed to stay at a place where he would suffer no harm should the sky fall; thus he chose the Carian peninsula, which was surrounded by mountains. Others relate that on the way back from Troy Podalirius' ship was blown off course, so he landed in Syrnus, Caria, where he settled. In yet another version, he was shipwrecked near the Carian coast but was rescued by a shepherd named Bybassus, the eponym-to-be of a city in Caria. Podalirius was also said to be the founder of Syrnus, a role he gained through the following series of events. Podalirius arrived at the court of the Carian king Damaethus and healed the king's daughter Syrna, who had fallen off a roof. In reward, Damaethus gave him Syrna in marriage and handed power over the peninsula to him. Podalirius founded two cities, one of which he named Syrnus after his wife and the other Bybassus after the shepherd to whom he owed his life. According to Strabo, a heroön of Podalirius, and another of Calchas, were located in Daunia, Italy, on a hill known as Drium. By the hero-shrine of Podalirius there flowed a stream believed to cure animals of any diseases.
Lycophron writes that Podalirius was buried in Italy near the cenotaph of Calchas, but John Tzetzes accuses him of providing false information and defends the versions cited above. See also 4086 Podalirius, a Jovian asteroid Podalyria, a plant genus in Fabaceae, was named for Podalirius. Iphiclides podalirius, the scarce swallowtail butterfly. References External links Children of Asclepius Achaean Leaders Thessalians in the Trojan War Greek mythological heroes Mythological Greek physicians
6889882
https://en.wikipedia.org/wiki/Data%20quality%20firewall
Data quality firewall
A data quality firewall is the use of software to protect a computer system from the entry of erroneous, duplicated or poor quality data. Gartner estimates that poor quality data causes failure in up to 50% of customer relationship management systems. Older technology required the tight integration of data quality software, whereas this can now be accomplished through loosely coupled technology in a service-oriented architecture. Features and functionality A data quality firewall helps guarantee database accuracy and consistency. It ensures that only valid, high quality data enter the system, and so indirectly protects the database from damage; this is extremely important, since database integrity and security are essential. A data quality firewall provides real-time feedback about the quality of the data submitted to the system. The main goal of a data quality process is to capture erroneous and invalid data, process them, eliminate duplicates and, lastly, export valid data to the user while storing a back-up copy in the database. A data quality firewall acts similarly to a network security firewall, which allows packets through only on specified ports: it filters out data that present quality issues and allows the remaining, valid data to be stored in the database. In other words, the firewall sits between the data source and the database and works throughout the extraction, processing and loading of data. Data streams must be subjected to accurate validity checks before they can be considered correct or trustworthy. Such checks are of a temporal, formal, logical and forecasting kind. See also Data validation Data quality
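The capture, deduplicate and export process described above can be sketched in a few lines. This is a hypothetical example, not taken from any particular product; all field names and validation rules are invented.

```python
# Sketch of a data quality firewall sitting between a data source and
# the database: invalid records are quarantined with the reasons they
# failed, duplicates are dropped, and only clean rows pass through.
# All rules and names are invented for illustration.

def quality_firewall(records, validators):
    valid, rejected, seen = [], [], set()
    for record in records:
        errors = [msg for check, msg in validators if not check(record)]
        if errors:
            rejected.append((record, errors))  # back-up copy plus feedback
            continue
        key = record["email"].lower()          # simple duplicate key
        if key in seen:
            continue                           # eliminate duplicates
        seen.add(key)
        valid.append(record)
    return valid, rejected

validators = [
    (lambda r: "@" in r.get("email", ""), "malformed email"),
    (lambda r: r.get("age", -1) >= 0, "negative age"),
]

rows = [
    {"email": "a@example.com", "age": 30},
    {"email": "A@example.com", "age": 41},  # duplicate of the first
    {"email": "bad-address", "age": 22},    # fails validation
]
clean, quarantined = quality_firewall(rows, validators)
print(len(clean), len(quarantined))  # 1 1
```

Keeping rejected records alongside their failure reasons supplies the real-time quality feedback described above while preserving a back-up copy for later correction.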
50939230
https://en.wikipedia.org/wiki/Necurs%20botnet
Necurs botnet
The Necurs botnet is a distributor of many pieces of malware, most notably Locky. Reports Around June 1, 2016, the botnet went offline, perhaps due to a glitch in the command and control server running Necurs. However, three weeks later, Jon French from AppRiver discovered a spike in spam emails, signifying either a temporary burst of activity or a return to the botnet's normal pre-June 1 state. Distributed malware Bart Dridex Locky RockLoader Globeimposter See also Conficker Command and control (malware) Gameover ZeuS Operation Tovar Timeline of computer viruses and worms Tiny Banker Trojan Torpig Zeus (malware) Zombie (computer science) References Botnets
29495415
https://en.wikipedia.org/wiki/1994%20USC%20Trojans%20football%20team
1994 USC Trojans football team
The 1994 USC Trojans football team represented the University of Southern California (USC) in the 1994 NCAA Division I-A football season. In their ninth year under head coach John Robinson, the Trojans compiled an 8–3–1 record (6–2 against conference opponents), finished in second place in the Pacific-10 Conference (Pac-10), and outscored their opponents by a combined total of 356 to 243. Quarterback Rob Johnson led the team in passing, completing 186 of 276 passes for 2,499 yards with 15 touchdowns and six interceptions. Shawn Walters led the team in rushing with 193 carries for 976 yards and 11 touchdowns. Keyshawn Johnson led the team in receiving with 66 catches for 1,362 yards and nine touchdowns. Schedule Roster Season summary Washington Penn State Baylor Shawn Walters 31 rushes, 207 yards Oregon Oregon State Stanford Shawn Walters 31 rushes, 234 yards California Washington State Arizona UCLA Notre Dame Cotton Bowl Classic References USC USC Trojans football seasons Cotton Bowl Classic champion seasons USC Trojans football
58258709
https://en.wikipedia.org/wiki/OS/7
OS/7
OS/7 is a discontinued operating system from Sperry Univac for its 90/60 and 90/70 computer systems. The system was first announced in November 1971 for Univac's 9700 system and was originally scheduled for delivery in March 1973. However, the delivery slipped by nearly a year, which impacted the 9700 marketing effort. It was first demonstrated by Univac on the new 90/60 system in October 1973. The official release was then planned for January 1974. OS/7 was abruptly discontinued in 1975 in favor of VS/9, Univac's name for RCA's VMOS operating system. "OS/7 is a multi-tasking, multi-programming system that utilizes a roll-in, roll-out capability to keep the CPU optimally busy." References Discontinued operating systems UNIVAC mainframe computers
872922
https://en.wikipedia.org/wiki/SCO%20Group%2C%20Inc.%20v.%20Novell%2C%20Inc.
SCO Group, Inc. v. Novell, Inc.
SCO v. Novell was a United States lawsuit in which The SCO Group (SCO), a Linux and Unix vendor, claimed ownership of the source code for the Unix operating system. SCO sought to have the court declare that SCO owned the rights to the Unix code, including the copyrights, and that Novell had committed slander of title by asserting a rival claim to ownership of the Unix copyrights. Separately, SCO was attempting to collect license fees from Linux end-users for Unix code (that they alleged was copied into Linux) through their SCOsource division, and Novell's rival ownership claim was a direct challenge to this initiative. The case hinged upon the interpretation of asset-transfer agreements governing Novell's sale of their Unix business to one of SCO's predecessor companies, the Santa Cruz Operation. The original APA explicitly excluded all copyrights from the assets transferred from Novell to Santa Cruz Operation. The second amendment to the APA amended the agreement to exclude all copyrights, "except for the copyrights required for Santa Cruz Operation to exercise its rights" under the APA. The list of included assets was never amended, which caused ambiguity years after the deal was signed. At trial, Novell successfully argued that they had retained copyrights to protect their 95% ownership of Unix SVRX royalties, and that the amendment to the exclusion clause was merely affirming that Santa Cruz Operation had a license to the Unix code. Novell counter-sued with their own duelling slander of title claim and several additional claims against SCO related to the APA and asked the Court to find that SCO had breached the agreements by signing Unix license agreements with Sun Microsystems and Microsoft without paying Novell the agreed percentage of those agreements. 
Novell further asked the court to find that Novell retained the right to direct SCO to waive rights under existing Unix licenses at "Novell's sole discretion", and that they had the right to take the action on SCO's behalf if SCO refused (a purported right that Novell had used repeatedly in the months leading up to the lawsuit to waive SCO's claims against IBM and others). Additional claims and counterclaims were added by the parties before resolution of the case. Novell was found to be the owner of the Unix copyrights, to have the right to direct SCO to waive its claims against IBM and other Unix licensees (and to do so on behalf of SCO if they refused), and SCO was found to have breached the asset-transfer agreements and to have committed conversion of Novell's property. SCO prevailed on none of their claims against Novell. Background Novell, a vendor of proprietary network operating systems, acquired the rights to the original Unix source code when it purchased Unix System Laboratories from Unix's creator, AT&T Corporation, on June 14, 1993. Novell's rights to parts of the Unix source code were established as part of the settlement in USL v. BSDi. On September 19, 1995, Novell entered into an Asset Purchase Agreement (APA) with the Santa Cruz Operation ("Santa Cruz"), a Unix vendor. The APA transferred certain rights regarding Unix, and Novell's UnixWare version of Unix, from Novell to Santa Cruz. These rights included the right to develop and market new versions of UnixWare, and the right to license SVRX (System V Release X) UNIX incidentally or with Novell's permission. It also required Santa Cruz to act as Novell's agent for the collection for certain royalties due under such licenses. In 2000, Caldera Systems acquired the Server Software and Services divisions of Santa Cruz, as well as the UnixWare and OpenServer Unix technologies. Caldera Systems thus became the legal successor to Santa Cruz for the purposes of the APA. 
Caldera Systems changed its name to Caldera International in 2001 and to The SCO Group (SCO) in 2002. Although the Santa Cruz Operation was colloquially known as "SCO", legally The SCO Group is a different company from the Santa Cruz Operation. SCO goes on the offensive In 2003, SCO initiated a campaign to compel Linux users to pay them software license fees, claiming that unspecified SCO intellectual property had been improperly included in Linux. As part of this campaign, SCO made several statements that they were the owners of Unix, implying that they held the copyright for the original AT&T source code of UNIX, and derivatives of that code. After SCO filed suit against IBM, claiming that IBM had violated SCO's copyrights to Unix, Novell publicly responded to these allegations. On May 28, 2003, Novell claimed that although it had transferred certain Unix assets to SCO's predecessor, the Santa Cruz Operation, it had never transferred the copyrights upon which the IBM case hinged. In separate private letters, Novell exercised rights they claimed under the APA to direct SCO to waive their claims against IBM in the related SCO Group, Inc. v. International Business Machines Corp. litigation. Novell asserts it owns the copyrights On June 6, 2003, SCO held a press conference in which it revealed a second amendment to the "asset purchase agreement between Novell and Santa Cruz Operation". SCO claimed this amendment supported its claim to the Unix copyrights. In response, Novell issued a press release reasserting its ownership of the copyrights. While SCO publicly claimed victory, behind the scenes SCO and Novell traded a series of heated letters. In these letters, Novell continued its claim that Novell was still the legal owner of the Unix copyrights.
In a flurry of private letters later released by Novell, Novell exercised a series of rights under the APA over the following months (including issuing waivers of SCO's claims against certain parties, auditing SCO's collection of Unix SVRX royalties, demanding access to all versions of Unix code under SCO's control per the APA's related Technology License Agreement, and issuing a cease and desist against SCO for seeking information from former Novell executives). The relationship between the companies quickly deteriorated. On October 14, 2003, Novell registered several key Unix copyrights with the United States Copyright Office and submitted declarations for recordation against SCO's own Unix copyright registrations. These declarations stated: "The SCO Group, Inc. ... has failed ... to demonstrate that any of the UNIX copyrights owned by Novell are required for SCO to exercise its rights..." and that "Novell hereby declares that it retains all or substantially all of the ownership of the copyrights in UNIX, including [SCO's] U.S. Copyright Registration referenced above". After the registrations became public knowledge, Novell issued a press release on December 22, 2003. On January 13, 2004, Novell announced that it was indemnifying Linux users—agreeing to protect them from lawsuits by third parties, like SCO, on the basis of violating the copyrights claimed by Novell. The same day, Novell released over 30 letters that SCO and Novell had exchanged in the previous months. SCO immediately responded with a press release reiterating its earlier claim, and announcing that it was preparing to file a lawsuit against Novell. The lawsuit Utah state court SCO filed a Slander of Title lawsuit against Novell on January 20, 2004. Filed in Utah state court, the lawsuit requested both preliminary and permanent injunctions assigning all of Novell's Unix copyright registrations to SCO and forcing Novell to retract all of their claims to the Unix code.
United States District Court for the District of Utah Novell removed the lawsuit to the Federal court system on February 6, 2004. This removal was upheld in the court's June 9 ruling. On February 10, 2004, Novell filed a motion to dismiss the case. Novell requested dismissal for failure to state a claim upon which relief could be granted. Novell argued that: SCO did not show a valid transfer of copyright ownership, because the Asset Purchase Agreement was merely a promise to assign under specific circumstances, and that the Agreement is therefore—by law—not sufficient to transfer the copyrights to SCO; and SCO did not specify specific special damages required for such a claim. In response, SCO filed several memoranda opposing Novell's motion to dismiss the case. Additionally, SCO filed a motion to remand the case back to State court. Novell countered that, because the case would hinge upon interpretation of Federal copyright law, it should be tried in Federal court. On May 9, 2004, Federal Judge Dale A. Kimball heard both parties' arguments and took both motions under advisement. Judge Kimball denied SCO's motion to remand and partially granted Novell's motion to dismiss on June 9, 2004, on a pleading technicality. The case was dismissed without prejudice, which allowed SCO to amend their complaint to include properly pleaded special damages. Novell's countersuit On July 29, 2005, Novell filed a countersuit against SCO claiming slander of title, breach of contract, failure to remit royalties, and failure to conduct audit obligations. Novell sought damages in excess of SCO's net worth, and, as SCO was quickly burning through its assets and cash on hand, Novell asked the court to sequester this money from SCO so that it would not be spent before the resolution of the case. Novell also asked the court to attach SCO's assets pending adjudication of their claims. Had Novell won this motion, SCO would have been forced to file for bankruptcy. 
Novell accused SCO of licensing Unix System V Release 4 to Microsoft and Sun Microsystems without then sending Novell the 95% of the license fees as required by the APA. In the counter-claim, Novell stated that SCO had asked Novell to participate in the SCO's Linux IP Infringement Licensing Plan. When Novell refused, SCO asked Novell to turn the Unix copyrights over to SCO, a request Novell also refused. SuSE arbitration SCO filed a second amended complaint on February 6, 2006, containing the original slander of title claim as well as several new claims, including unfair competition, copyright infringement (for Novell's distribution of SUSE Linux), and breaching a purported non-compete agreement (again, related to SUSE Linux). On April 10, 2006, Novell's SuSE division (a European vendor of Linux operating systems) filed a request for arbitration against SCO with the Secretariat of the International Chamber of Commerce's International Court of Arbitration in Paris, France. Years earlier, while still known as Caldera International, SCO had signed contracts with then-independent SuSE, among others, involving the United Linux product. The United Linux members agreed that each member would have broad licenses to exploit and distribute Linux products that included United Linux technology. These agreements included clauses requiring the members to use an arbitration process to resolve disputes. SuSE's arbitration request was a response to SCO's amended complaint against Novell. The arbitration process has relatively strict timelines, unlike the U.S. courts' procedures. Novell filed a Motion to Stay Claims Raising Issues Subject to Arbitration in the U.S. courts, saying that four of SCO's five claims had been brought to arbitration, including the claim of copyright infringement, and thus should be stayed until the Arbitration Tribunal rendered its decision. 
Novell also filed an Answer to SCO's 2d Amended Complaint and Counterclaims, claiming a large number of affirmative defenses, including a claim that SCO committed fraud upon the U.S. copyright office. Licensing agreements and Novell's motion for summary judgement On September 22, 2006, Novell sought leave to file amended counterclaims. Through discovery, Novell had obtained copies of SCO's Unix licensing agreements with Microsoft and Sun. Upon reviewing the agreements, Novell claimed that they breached the APA. The added claims were conversion and breach of fiduciary duties. SCO stipulated to Novell's motion, and therefore Judge Kimball granted it. Novell filed a motion on September 29, 2006, asking for summary judgment, or if that was rejected, then for a preliminary injunction. Novell alleged that SCO, through their agreements with Sun and Microsoft, licensed Novell's property without paying Novell the royalties it was due under the APA. Novell asked the court to force SCO to turn the royalties over to Novell—or, in the alternative, be forced to put the money into a collective trust, where neither party would be able to access it until the issue was decided by the courts. On August 10, 2007, Judge Kimball ruled that "...Novell is the owner of the UNIX and UnixWare Copyrights." Novell was awarded summary judgment on a number of its claims, and a number of SCO's claims were denied. SCO was instructed to account for, and pass to Novell, an appropriate portion of its income from the Sun and Microsoft licenses. Judge Kimball's ruling stated that "SCO is obligated to recognize Novell's waiver of SCO's claims against IBM and Sequent," referring to other cases SCO had filed against those companies for allegedly violating SCO's intellectual property rights in Unix. After the ruling, Novell announced they had no interest in suing people over Unix, stating "We don't believe there is Unix in Linux." 
SCO's bankruptcy The parties were expected to go to trial on September 17, 2007, in order to determine exactly how much money SCO owed Novell. However, on September 14, the SCO Group filed for bankruptcy under Chapter 11 of the United States Bankruptcy Code. As SCO was a Delaware corporation, the bankruptcy filing was made with the United States Bankruptcy Court for the District of Delaware. The filing caused all pending litigation to be automatically stayed as required by the United States Code. On November 27, 2007, United States Bankruptcy Judge Kevin Gross lifted the automatic stay so as to allow the Utah court to determine how much money SCO owed Novell, but the bankruptcy court retained jurisdiction over any constructive trust that the Federal court might create. Trial to determine damages For the purposes of the trial to determine how much money SCO owed Novell, SCO was named the defendant and Novell was named the plaintiff, because SCO had not prevailed on any of its initial claims. The trial commenced April 30, 2008. Novell sought the recovery of $19,979,561 from SCO based on its licenses to Microsoft, Sun, and others. On July 16, 2008, the Utah court awarded Novell $2,547,817 of that amount, attributable to the 2003 Sun Agreement's modification of the 1994 confidentiality provisions; these modifications had permitted the release of OpenSolaris. SCO was entitled to appeal the decision. On November 20, 2008, Kimball's final judgment in the case affirmed his August 10, 2007 ruling, granting Novell the award plus interest of $918,122, plus $489 additional interest for every day after August 29, 2008 should SCO fail to pay the award by that date. The ruling also ordered a constructive trust of $625,486.90. Judge Kimball dismissed the case with no possibility to re-file the suit with an amended complaint, restricting SCO to pursuing the case only on appeal. Circuit Court appeal On August 24, 2009, the U.S.
Court of Appeals for the Tenth Circuit partially reversed Kimball's August 10, 2007 summary judgment, insofar as Kimball had found that Novell owned the copyright to Unix. The portion dealing with the 2003 Sun agreement was upheld by the appeals court. As a result, SCO could pursue its ownership of the Unix copyrights at trial. However, it remained liable for the $2,547,817 royalty award. Novell filed a petition for a writ of certiorari on March 4, 2010, seeking intervention by the Supreme Court of the United States. Novell argued that there was a circuit split on the correct interpretation of the Copyright Act's transfer requirements, and that the correct requirements are more strict than the Tenth Circuit's holding in this case. The petition was dismissed by the Supreme Court. Trial on remand for copyright issues The jury trial on the remanded copyright issues began on March 8, 2010, before Judge Ted Stewart. It was expected to last three weeks. On March 30, the jury returned a unanimous verdict in favor of Novell. On June 10, Judge Stewart ruled in favor of Novell on all issues, closing the case. The court found that Novell had not committed slander of title, that it was not required to transfer the copyrights under the APA, that its copyright waivers issued to IBM were authorized, and that it had not violated the covenant of good faith. SCO appealed the district court judgment to the United States Court of Appeals for the Tenth Circuit on July 7, 2010. On August 30, 2011, the Appeals Court affirmed the trial decision. See also SCO/Linux controversies Groklaw Caldera OpenLinux References External links Novell's Unique Legal Rights - Novell's official page about this lawsuit SCO v.
Novell at SCO.com - SCO's official page about the lawsuit Novell-SCO Timeline at Groklaw Groklaw's Legal documents section for SCO vs Novell - An archive of court documents related to this lawsuit Novell Files Arbitration Request in Paris -The Jacobs Declaration Novell Files Motion to Stay, Answer with Counterclaims etc. SCO–Linux disputes United States district court cases 2010 in United States case law Novell
60538816
https://en.wikipedia.org/wiki/NationBuilder
NationBuilder
NationBuilder is a Los Angeles-based technology start-up that develops content management and customer relationship management (CRM) software. Although the company initially targeted political campaigns and nonprofit organizations, it later expanded its marketing efforts to include other people and organizations trying to build an online following, such as artists, musicians and restaurants. The software uses voter data such as names, addresses and other information, such as previous voting records in the case of political campaigns, to allow users to centralize, build and manage campaigns by integrating various communication tools like websites, newsletters, text messaging and social media channels under one platform. Among other features, the software enables users to quickly create websites, build databases through registrations, send targeted newsletters, analyze data from multiple sources and leverage micro-donations. The software's appeal to political campaigns comes from its combination of a number of previously separate campaigning services, channels and data sources into a single platform, presented as an easy-to-use solution for non-technical users that enables political campaigners to quickly deploy campaigns and persuade numerous people to donate. History NationBuilder was founded in 2009 in Los Angeles by Jim Gilliam and launched in 2011. In 2012 Joe Green joined NationBuilder as co-founder and president; he left that role 11 months later, in February 2013. Gilliam was previously a movie-maker who co-founded Brave New Films with Robert Greenwald and had sought funding for his films through crowd-sourcing. Green, who studied organizing at Harvard and was Mark Zuckerberg's roommate, is also the co-founder of the Causes Facebook app. Both claim that NationBuilder is strictly nonpartisan. Since its founding, the company has helped campaigns raise $1.2 billion.
In 2012, NationBuilder announced that 1,000 subscribers had used its software to amass 2.5 million supporters and raise $12 million in campaign donations. In 2015, it helped raise $264 million, recruit over one million volunteers and coordinate some 129,000 events. By 2016, the company said its software was used in about 40 percent of all contested elections at the state and national level in the U.S., which included 3,000 political campaigns. Using such software is easier in the U.S. than in Europe, where comprehensive data protection and privacy laws have been in effect since 2018. The Scottish National Party was the first political party to use NationBuilder, prompting astonishment among journalists, who did not expect to "see the voting intentions of millions of people hoarded in extraordinary detail." Funding Investors in NationBuilder include Chris Hughes, the Facebook co-founder; Sean Parker, first president of Facebook and co-founder of Napster and Causes; Dan Senor, the former Republican foreign-policy adviser; and Ben Horowitz, co-founder of Andreessen Horowitz. In 2012, it raised $6.3 million in funding from a number of investors. Notable use cases The software is reported to have played a role in some public elections in Europe, the US and New Zealand, as well as non-profit initiatives, and political parties in Australia. Notable users include Bernie Sanders, Mitch McConnell, Andrew Yang, Theresa May, Amnesty International, the NAACP and Donald Trump. France La République En Marche used NationBuilder in its campaign for the 2017 National Assembly elections. New Zealand NationBuilder's services were used in the campaigns of both the National and Labour parties in the 2017 New Zealand general election. United Kingdom Despite stronger data protection and privacy laws in the UK and EU, NationBuilder was used to significant effect in a number of UK elections, most notably in the 2016 campaign for withdrawal of the United Kingdom from the European Union.
The company later announced publicly that both sides in the Brexit campaign had used its software. United States NationBuilder was used by the Donald Trump presidential campaign to advance his election efforts and eventually win the 2016 presidential race. Jill Stein of the Green Party, Republican Rick Santorum, and supporters of Bernie Sanders all used NationBuilder during their 2016 runs for president, although Sanders' overall campaign used technology from a rival firm. During the 2018 US election cycle, political entities paid more than $1 million for the use of NationBuilder. Among the entities paying the most were Donald J. Trump for President, Prosperity Action and the Republican Party of Tennessee. References External links Official website Political software 2009 establishments in California Companies based in Los Angeles Lists of software Web applications Software companies of the United States
5851467
https://en.wikipedia.org/wiki/Server%20emulator
Server emulator
A server emulator, also called a freeshard, private server or fan clone, is the reimplementation of online game servers, typically as clones of proprietary commercial software by a third party of the game community. The private server is not always made by the original company, but usually attempts to mimic it in some way. Technically, a server emulator does not emulate by the traditional definition. Instead, it is an alternative implementation of the proprietary gaming server that communicates with the same gaming client through the same, reverse-engineered proprietary protocols. Server emulators exist for many online games. If the original proprietary servers were shut down, server emulators can be considered community continuations, a fix for an orphaned software product. Disambiguation Original server software that is stolen, like AEGIS, is also not a server emulator. Reimplementations of standardized protocols or server behavior are not considered to be emulation. Uses According to a study based on Ragnarok Online emulated servers, “Players turn to the illegal private server solution to fulfill their expectations for better means of avatar customization, specific technical features, an improved social environment, and enhanced gamemaster availability.” Other reasons players use server emulators include avoiding the monthly or purchase fees for certain games, and being able to experience an accelerated and/or altered mode of play (sometimes changing the game completely). Some server emulators are created because a game has closed and is no longer playable, like Toontown Online. Players also choose to play on private servers to relive an older version of a game (like vanilla World of Warcraft private servers). Legal issues Emulating the server of a proprietary commercial game often violates the end-user license agreement (EULA), as many commercial MMORPGs require the user to sign a clause not to create or use server emulators.
Additionally, many server emulators retain portions of the original code, and thus violate copyright law. Examples of such violations include the popular RuneScape emulator Winterlove, which retained decompiled, unauthorized portions of the original game client. The server may try to avoid violations by serving from a country where some intellectual property laws may apply differently or not at all. Typically, the locations chosen rarely differ enough in copyright and patent law to protect the individual(s) behind the emulator. Examples of these offshore misconceptions include the popular hosting choice that is the Netherlands. Another issue is possible infringement of the game creator's copyright. If the complete emulator is a work of its own, copyright violation is not as obvious as EULA violation (see the Lotus v. Borland case). However, sometimes the original server software leaks out of the company that created the game, for example AEGIS (Ragnarok Online). Use or distribution of leaked code is widely held to be copyright infringement. There are cases where a game creator has effectively shut down private game servers by threatening lawsuits due to intellectual property violations, such as offering a modified client (see information on NEXON v OdinMS) for download or offering downloads of modified files from the original game package. In August 2010, a California Central District Court awarded Blizzard Entertainment $88 million in a copyright-infringement lawsuit against Scapegaming. Scapegaming's violation involved operating an unauthorized version of World of Warcraft. Scapegaming ran microtransactions encouraging players to donate money to advance in the game, resulting in $3,053,339 in illicit profits. This is one of the first major cases brought against server emulation. In July 2011, Nexon threatened to take the MMORPG development community RaGEZONE to court over users creating and sharing custom emulated servers.
Nexon claimed it would file legal proceedings against all parties involved in the MMORPG development scene. Disney has also fought against server emulators for its MMO Club Penguin, resulting in the closure of iCPv3 in October 2010, which had over 100,000 users when Disney filed a cease and desist notice against the emulator. In late 2011, the online chatbox provider XChat filed a lawsuit after a developer published a copy of the source code to her server emulator. The suit was later dropped, as the developer had not infringed copyright. See also Game engine recreation Examples Ultima Online shard emulation External links Announcement - of a Star Wars Galaxies server emulator on slashdot. Announcement on Nexon and legal proceedings - on RaGEZONE. References Massively multiplayer online role-playing games Software maintenance Software release Unofficial adaptations Video game development Fan labor
13256389
https://en.wikipedia.org/wiki/Ounce%20Labs
Ounce Labs
Ounce Labs (an IBM company) is a Waltham, Massachusetts-based security software vendor. The company was founded in 2002 and created a software analysis product that analyzes source code to identify and remove security vulnerabilities. The security software looks for a range of vulnerabilities that leave an application open to attack. Customers have included GMAC, Lockheed Martin, and the U.S. Navy. On July 28, 2009, Ounce was acquired by IBM, for an undisclosed sum, with the intention of integrating it into IBM's Rational Software business. Platform support Programming languages that are supported by Ounce's security scan include ASP.NET, C, C++, C# and other .NET languages, Java, JSP, VB.NET, classic ASP; and platform support for Windows, Solaris, and Linux. References External links IBM Security AppScan Source Development software companies Software companies of the United States IBM acquisitions
5829027
https://en.wikipedia.org/wiki/Battlespace
Battlespace
Battlespace or battle-space is a term used to signify a unified military strategy to integrate and combine armed forces for the military theatre of operations, including air, information, land, sea, cyber and outer space to achieve military goals. It includes the environment, factors, and conditions that must be understood to successfully apply combat power, protect the force, or complete the mission. This includes enemy and friendly armed forces, infrastructure, weather, terrain, and the electromagnetic spectrum within the operational areas and areas of interest. Concept From "battlefield" to "battle-space" Over the last 25 years, the understanding of the military operational environment has transformed from primarily a time and space-driven linear understanding (a "battlefield") to a multi-dimensional system of systems understanding (a battle-space). This system of systems understanding implies that managing the battle-space has become more complex, primarily because of the increased importance of the cognitive domain, a direct result of the information age. Today, militaries are expected to understand the effects of their actions on the operational environment as a whole, and not just in the military domain of their operational environment. Battle-space agility Battle-space agility refers to the speed at which the war-fighting organization develops and transforms knowledge into actions for desired effects in the battle-space. Essentially it argues that you must be better than the opposition at doing the right actions at the right time and place. Inbuilt into this understanding is that battle-space agility is not just about speed, but it is also about executing the most effective action (ways) in the most efficient manner (means) relative to achieving the desired impact on the system (ends). 
At all times battle-space agility is dependent on the quality of situational awareness and holistic understanding of the battle-space to determine the best actions, a logic that has become a driving force behind a renaissance of interest in the quality of military intelligence. It has been heavily linked to the ability of intelligence analysts and operational planners to understand their battle-space, and their targets, as networks in order to facilitate a faster and more accurate shared situational understanding. This in turn increases targeting efficacy and helps retain the overall initiative. Battle-space agility has its roots solidly in the more generic command and control (C2) research field on C2 agility conducted by NATO, but works specifically with an agility concept within the context of war-fighting only. Hence it is framed by effects-based thinking, system of systems analysis, and competing Observe, Orient, Decide, Act (OODA) loops. Battle-space awareness Battle-space awareness (BA) is a practice of military philosophy that is used as a valuable asset by joint component and force commanders, to predict courses of action before employing troops into a prescribed area of operation (AO). It utilizes the intelligence preparation asset to assist the commander in being 'aware' of recent, current, and near-term events in his battle-space. It is based on knowledge and understanding obtained from the intelligence, surveillance, and reconnaissance (ISR) system. It is another methodical concept used to gain information about the operational area—the environment, factors, and conditions, including the status of friendly and adversary forces, neutrals and noncombatants, weather and terrain—that enables timely, relevant, comprehensive and accurate assessments. It has become an effective concept for conventional and unconventional operations in successfully projecting, or protecting, a military force, and/or completing its mission.
Battle-space digitization Battle-space digitization is designed to improve military operational effectiveness by integrating weapons platforms, sensor networks, ubiquitous command and control (UC2), intelligence, and network-centric warfare. This military doctrine reflects that in the future, military operations will be merged into joint operations rather than take place in separate battle-spaces under the domain of individual armed services. Battlespace intelligence preparation Intelligence preparation Intelligence preparation of the battlespace (IPB) is an analytical methodology employed to reduce uncertainties concerning the enemy, environment, and terrain for all types of operations. Intelligence preparation of the battle-space builds an extensive database for each potential area in which a unit may be required to operate. The database is then analyzed in detail to determine the impact of the enemy, environment and terrain on operations, and the results are presented in graphic form. Intelligence preparation of the battle-space is a continuing process. Joint intelligence preparation Joint intelligence preparation of the battle-space (JIPB) is the analytical process used by joint intelligence organizations to produce intelligence assessments, estimates and other intelligence products in support of the joint force commander's decision-making process. It is a continuous process that includes defining the total battle-space environment; describing the battle-space's effects; evaluating the adversary; and determining and describing adversary potential courses of action. The process is used to analyze the aerial, terrestrial, maritime/littoral, spatial, electromagnetic, cyberspace, and human dimensions of the environment and to determine an opponent's capabilities to operate in each. JIPB products are used by the joint force and component command staffs in preparing their estimates and are also applied during the analysis and selection of friendly courses of action.
Battle-space measures Maneuver control Maneuver control measures are the basic preliminary step in effective clearance of fire support (e.g. artillery, naval gunfire support, and close air support), marked by imaginary boundary lines used by commanders to designate the geographical area for which a particular unit is tactically responsible. They are usually established on identifiable terrain to aid hasty referencing for better lateral advantage in the science of fire support, and are normally orchestrated by a higher echelon of the general staff, mainly the operations staff sections. They are normally designated along terrain features easily recognizable on the ground. An important point on maneuver control graphics: staffs must be knowledgeable regarding the different maneuver control measures and their impact on clearance of fires. For instance, boundaries are both restrictive and permissive; corridors are restrictive, while routes, axes, and directions of attack are neither. Staffs should also bear in mind the effect on clearance of fires if subordinate maneuver units are not given zones or sectors (i.e. no boundaries established). Since boundaries serve as both permissive and restrictive measures, the decision not to employ them has profound effects upon timely clearance of fires at the lowest possible level. The higher echelon may have to coordinate all clearance of fires short of the coordinated fire line (CFL), a very time-intensive process. Employing boundaries allows a unit to maneuver successfully and to swiftly and efficiently engage targets, requiring coordination and clearance only within that organization. Control measures affect fire support in two ways: Restrictive—Restrictive control is established in conjunction with a host nation to preclude damage or destruction to a national asset, population center, or religious structure. Its key role is the protection of an element of tactical importance, such as a fuel storage area.
Restrictive fire area (RFA) is an area with specific restrictions, in which fires that exceed those restrictions will not be delivered without coordination with the establishing headquarters or a higher echelon; occasionally, it may be established to operate independently. No-fire area (NFA) is a designated area into which no fires or effects may be delivered. Exceptions are when the establishing headquarters allows fires on a mission-by-mission basis, and when a friendly force engaged by an enemy located within the NFA returns fire to defend itself; in the latter case, the amount of return fire should not exceed that sufficient to protect the force and continue the mission. Permissive—Permissive control gives the maneuver commander the liberty to announce and engage fire support at his will, unless it is otherwise restricted by a higher echelon. In most cases, a commander will deny the use of fire support coordination measures (FSCM). Free-fire areas (FFA) are areas in which fire support can commence without additional coordination with the establishing headquarters; normally, they are established on identifiable terrain by division or higher headquarters. Battle-space shaping Battle-space shaping is a concept involved in the practice of maneuver warfare that is used to shape a situation on the battlefield, gaining the military advantage for the commander. It forecasts the elimination of the enemy's capability by fighting in a coherent manner before deploying forces of a determined size. See also List of command and control abbreviations Command and control Fog of war Network-centric warfare Psychological warfare References Further reading Mitchell, W. (2013). Battle-space Agility 101. Royal Danish Defense College Publishing House. Mitchell, W. (2013). Battle-space Agility 201. Royal Danish Defense College Publishing House. Mitchell, W. (2012). Battle-space Intelligence. Royal Danish Defense College Publishing House. Mitchell, W. (2012). Battle-space Agility in Helmand.
Royal Danish Defense College Publishing House. Mitchell, W. (2008). Comprehensive Approach Capacity Building.Royal Danish Defense College Publishing House. Blackmore, T. (2005). War X: Human Extensions in Battle-space. University of Toronto Press. Owens, W. (2002). Dominant Battle-space Knowledge. University Press of the Pacific. External links Marine Corps Doctrinal Publication (MCDP) 1-0: Marine Corps Operations' Achieving Dominant Battlespace Awareness Joint Synthetic Battlespace: Cornerstone for Predictive Battlespace Awareness Battlespace Digitization - Coping With Uncertainty In The Command Process Challenges for Joint Battlespace Digitization (JBD) Command and control Military strategy Military terminology
20003
https://en.wikipedia.org/wiki/Multitier%20architecture
Multitier architecture
In software engineering, multitier architecture (often referred to as n-tier architecture) is a client–server architecture in which presentation, application processing and data management functions are physically separated. The most widespread use of multitier architecture is the three-tier architecture. N-tier application architecture provides a model by which developers can create flexible and reusable applications. By segregating an application into tiers, developers acquire the option of modifying or adding a specific tier, instead of reworking the entire application. A three-tier architecture is typically composed of a presentation tier, a logic tier, and a data tier. While the concepts of layer and tier are often used interchangeably, one fairly common point of view is that there is indeed a difference. This view holds that a layer is a logical structuring mechanism for the elements that make up the software solution, while a tier is a physical structuring mechanism for the system infrastructure. For example, a three-layer solution could easily be deployed on a single tier, such as in the case of an extreme database-centric architecture called RDBMS-only architecture, or in a personal workstation. Layers The "Layers" architectural pattern has been described in various publications. Common layers In a logical multilayer architecture for an information system with an object-oriented design, the following four are the most common: Presentation layer (a.k.a. UI layer, view layer, presentation tier in multitier architecture) Application layer (a.k.a. service layer or GRASP controller layer) Business layer (a.k.a. business logic layer (BLL), domain logic layer) Data access layer (a.k.a. persistence layer; also logging, networking, and other services which are required to support a particular business layer) The book Domain-Driven Design describes some common uses for the above four layers, although its primary focus is the domain layer.
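As an illustration, the four common layers above can be sketched as classes where each layer talks only to the layer directly beneath it. This is a minimal, hypothetical sketch in Python; the class names, method names, and the discount rule are illustrative assumptions, not drawn from any particular framework.

```python
class DataAccessLayer:
    """Persistence layer: hides how orders are stored."""
    def __init__(self):
        self._orders = {}          # in-memory dict stands in for a database

    def save_order(self, order_id, total):
        self._orders[order_id] = total

    def load_order(self, order_id):
        return self._orders[order_id]


class BusinessLayer:
    """Domain logic layer: pricing rules live here, not in the UI."""
    def __init__(self, dal):
        self._dal = dal

    def place_order(self, order_id, unit_price, quantity):
        total = unit_price * quantity
        if quantity >= 10:         # illustrative volume-discount rule
            total *= 0.9
        self._dal.save_order(order_id, total)
        return total


class ApplicationLayer:
    """Service layer: thin API surfacing the supported business functionality."""
    def __init__(self, business):
        self._business = business

    def submit_order(self, order_id, unit_price, quantity):
        return self._business.place_order(order_id, unit_price, quantity)


class PresentationLayer:
    """UI layer: formatting only, no domain rules."""
    def __init__(self, app):
        self._app = app

    def render(self, order_id, unit_price, quantity):
        total = self._app.submit_order(order_id, unit_price, quantity)
        return f"Order {order_id}: ${total:.2f}"


# Wire the layers bottom-up; each depends only on the one below.
dal = DataAccessLayer()
ui = PresentationLayer(ApplicationLayer(BusinessLayer(dal)))
print(ui.render("A1", 5.0, 10))    # → Order A1: $45.00 (10% discount applied)
```

Because the whole stack runs in one process here, this is a four-layer but single-tier deployment, which also illustrates the layer/tier distinction drawn above.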
If the application architecture has no explicit distinction between the business layer and the presentation layer (i.e., the presentation layer is considered part of the business layer), then a traditional client-server (two-tier) model has been implemented. The more usual convention is that the application layer (or service layer) is considered a sublayer of the business layer, typically encapsulating the API definition surfacing the supported business functionality. The application/business layers can, in fact, be further subdivided to emphasize additional sublayers of distinct responsibility. For example, if the model–view–presenter pattern is used, the presenter sublayer might be used as an additional layer between the user interface layer and the business/application layer (as represented by the model sublayer). Some also identify a separate layer called the business infrastructure layer (BI), located between the business layer(s) and the infrastructure layer(s). It's also sometimes called the "low-level business layer" or the "business services layer". This layer is very general and can be used in several application tiers (e.g. a CurrencyConverter). The infrastructure layer can be partitioned into different levels (high-level or low-level technical services). Developers often focus on the persistence (data access) capabilities of the infrastructure layer and therefore only talk about the persistence layer or the data access layer (instead of an infrastructure layer or technical services layer). In other words, the other kind of technical services are not always explicitly thought of as part of any particular layer. A layer is on top of another, because it depends on it. Every layer can exist without the layers above it, and requires the layers below it to function. Another common view is that layers do not always strictly depend on only the adjacent layer below. 
For example, in a relaxed layered system (as opposed to a strict layered system) a layer can also depend on all the layers below it. Three-tier architecture Three-tier architecture is a client-server software architecture pattern in which the user interface (presentation), functional process logic ("business rules"), computer data storage and data access are developed and maintained as independent modules, most often on separate platforms. It was developed by John J. Donovan at Open Environment Corporation (OEC), a tools company he founded in Cambridge, Massachusetts. Apart from the usual advantages of modular software with well-defined interfaces, the three-tier architecture is intended to allow any of the three tiers to be upgraded or replaced independently in response to changes in requirements or technology. For example, a change of operating system in the presentation tier would only affect the user interface code. Typically, the user interface runs on a desktop PC or workstation and uses a standard graphical user interface, the functional process logic may consist of one or more separate modules running on a workstation or application server, and an RDBMS on a database server or mainframe contains the computer data storage logic. The middle tier may be multitiered itself (in which case the overall architecture is called an "n-tier architecture"). Presentation tier This is the topmost level of the application. The presentation tier displays information related to such services as browsing merchandise, purchasing and shopping cart contents. It communicates with the other tiers by sending results to the browser/client tier and all other tiers in the network. In simple terms, it is the layer which users can access directly (such as a web page, or an operating system's GUI).
Application tier (business logic, logic tier, or middle tier) The logical tier is pulled out from the presentation tier and, as its own layer, it controls an application's functionality by performing detailed processing. Data tier The data tier includes the data persistence mechanisms (database servers, file shares, etc.) and the data access layer that encapsulates the persistence mechanisms and exposes the data. The data access layer should provide an API to the application tier that exposes methods of managing the stored data without exposing or creating dependencies on the data storage mechanisms. Avoiding dependencies on the storage mechanisms allows for updates or changes without the application tier clients being affected by or even aware of the change. As with the separation of any tier, there are costs for implementation and often costs to performance in exchange for improved scalability and maintainability. Web development usage In the web development field, three-tier is often used to refer to websites, commonly electronic commerce websites, which are built using three tiers: A front-end web server serving static content, and potentially some cached dynamic content. In a web-based application, the front end is the content rendered by the browser. The content may be static or generated dynamically. A middle dynamic content processing and generation level application server (e.g., Symfony, Spring, ASP.NET, Django, Rails, Node.js). A back-end database or data store, comprising both data sets and the database management system software that manages and provides access to the data. Other considerations Data transfer between tiers is part of the architecture. Protocols involved may include one or more of SNMP, CORBA, Java RMI, .NET Remoting, Windows Communication Foundation, sockets, UDP, web services or other standard or proprietary protocols. Often middleware is used to connect the separate tiers.
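The storage independence that the data access layer is meant to provide can be sketched by having the application tier program against an abstract store API, so the persistence mechanism can be swapped without the application tier noticing. A minimal sketch, assuming hypothetical names (`ProductStore`, `InMemoryStore`, `SqliteStore`); only the Python standard library is used.

```python
import sqlite3
from abc import ABC, abstractmethod


class ProductStore(ABC):
    """Data access API the application tier depends on; no storage details leak through."""
    @abstractmethod
    def add(self, name: str, price: float) -> None: ...
    @abstractmethod
    def price_of(self, name: str) -> float: ...


class InMemoryStore(ProductStore):
    """Persistence mechanism 1: a plain dictionary."""
    def __init__(self):
        self._data = {}

    def add(self, name, price):
        self._data[name] = price

    def price_of(self, name):
        return self._data[name]


class SqliteStore(ProductStore):
    """Persistence mechanism 2: an SQLite database (in-memory by default)."""
    def __init__(self, path=":memory:"):
        self._db = sqlite3.connect(path)
        self._db.execute(
            "CREATE TABLE IF NOT EXISTS products (name TEXT PRIMARY KEY, price REAL)"
        )

    def add(self, name, price):
        self._db.execute("INSERT OR REPLACE INTO products VALUES (?, ?)", (name, price))

    def price_of(self, name):
        row = self._db.execute(
            "SELECT price FROM products WHERE name = ?", (name,)
        ).fetchone()
        return row[0]


def application_tier(store: ProductStore) -> float:
    # The application tier sees only the abstract API, so it is unaffected
    # by which mechanism backs the store.
    store.add("widget", 2.50)
    return store.price_of("widget")


print(application_tier(InMemoryStore()))  # 2.5
print(application_tier(SqliteStore()))    # 2.5
```

Swapping `InMemoryStore` for `SqliteStore` changes nothing in `application_tier`, which is exactly the dependency isolation the data tier description above calls for.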
Separate tiers often (but not necessarily) run on separate physical servers, and each tier may itself run on a cluster. Traceability The end-to-end traceability of data flows through n-tier systems is a challenging task which becomes more important when systems increase in complexity. The Application Response Measurement defines concepts and APIs for measuring performance and correlating transactions between tiers. Generally, the term "tiers" is used to describe physical distribution of components of a system on separate servers, computers, or networks (processing nodes). A three-tier architecture then will have three processing nodes. The term "layers" refers to a logical grouping of components which may or may not be physically located on one processing node. See also Abstraction layer Client–server model Database-centric architecture Front-end and back-end Hierarchical internetworking model Load balancing (computing) Open Services Architecture Rich web application Service layer Shearing layers Web application References External links Linux journal, Three Tier Architecture Microsoft Application Architecture Guide Example of free 3-tier system What Is the 3-Tier Architecture? Description of a concrete layered architecture for .NET/WPF Rich Client Applications Distributed computing architecture Software architecture World Wide Web Architectural pattern (computer science) Software design Software engineering terminology Software design patterns
540653
https://en.wikipedia.org/wiki/Super%203D%20Noah%27s%20Ark
Super 3D Noah's Ark
Super 3D Noah's Ark is a Christian video game developed by Wisdom Tree for the Super Nintendo Entertainment System; it was ported to MS-DOS a year later and re-released in 2015 on Steam for Microsoft Windows, Mac OS X, and Linux. The game was an officially licensed id Software Wolfenstein 3D engine title, but was not licensed by Nintendo, so it was sold in Christian bookstores instead of typical video game retailers. Gameplay The game plays similarly to Wolfenstein 3D, but the graphics were changed to reflect a non-violent theme. Instead of killing Nazi soldiers in a castle, the player takes the part of Noah, wandering the Ark, using a slingshot to shoot sleep-inducing food at angry attacking animals, mostly goats, in order to render them unconscious. The animals behave differently: goats, the most common enemy, will only kick Noah, while the other animals such as sheep, ostriches, antelopes and oxen will shoot spittle at him from a distance. Goats are also unable to open doors, while the other animals can. The gameplay is aimed at younger children. Noah's Ark includes secret passages, food, weapons and extra lives. There are secret levels, and shortcut levels as well. The player eventually comes across larger and more powerful slingshots, and flings coconuts and watermelons at the larger boss-like animals, such as Ernie the Elephant and Carl the Camel. History Development The game that would eventually become Super 3D Noah's Ark was originally conceived as a licensed game based on the film Hellraiser, of which Wisdom Tree founder Dan Lawton was a great fan. Wisdom Tree acquired the game rights to Hellraiser for $50,000, along with a license to use the Wolfenstein 3D game engine from id Software, believing that the fast, violent action of Wolfenstein would be a good match for the mood of the film.
Development initially began on the Nintendo Entertainment System, with Wisdom Tree intending to ship the game on a special cartridge that came equipped with a co-processor that could increase the system's RAM and processing speed several times over. Eventually the Hellraiser game concept was abandoned due to several issues: the hardware of the NES was found unsuitable because of its low color palette and the addition of a co-processor would have made the cartridge far too expensive for consumers. According to Vance Kozik of Wisdom Tree, little progress was made on the NES incarnation of the game, which he described as "a barely up-and-running demo". The platform for Hellraiser was then switched to the PC, and the developers were able to make more progress on this version. However, by the time the first prototype was finished, Doom had been released, and Wisdom Tree felt that Hellraiser would not be able to compete. In addition, the management at Wisdom Tree decided that developing and publishing a horror-themed game would clash with their religious, family-friendly image. With these factors in mind, Wisdom Tree decided to let their Hellraiser license expire, transfer development to the Super Nintendo Entertainment System, and redesign the game with a Christian theme, eventually coming up with a game about Noah's Ark. As the game was not officially sanctioned by Nintendo, Wisdom Tree devised a pass-through system similar to the Game Genie to bypass the system's copy protection, where the player had to insert an officially licensed SNES game into the cartridge slot on top of the Super 3D Noah's Ark cartridge. A popular rumor claims that id Software licensed the Wolfenstein 3D engine to Wisdom Tree in retaliation against Nintendo for the content restrictions Nintendo placed on the Super NES version of Wolfenstein 3D. 
In actuality, Wisdom Tree offered id Software very lucrative terms for the Wolfenstein 3D game engine, which id regarded as having already outlived its usefulness, and id staff have stated that they never had any problems with Nintendo in the first place. Re-release In January 2014, the game was re-released for the SNES, initially available only by private email orders, but later through Piko Interactive's website. The game was also updated for the 20th Anniversary Edition and released on itch.io on May 26 the same year for Windows, Mac OS X, and Linux. These modern PC re-releases are based on the ECWolf game engine, a derivative of Wolfenstein 3D and ZDoom. This version was released in digital distribution on Steam in June 2015. A community reconstructed source code variant became available on bitbucket in October 2015. See also Chex Quest Christian media References External links - Wisdom Tree Games - Christian and Family oriented video games and video game products. - Super 3D Noah's Ark of Wisdom Tree Games at itch.io - old official game website of Wisdom Tree Games 1994 video games Christian video games Commercial video games with freely available source code DOS games Linux games Noah's Ark in popular culture North America-exclusive video games MacOS games Piko Interactive games Super Nintendo Entertainment System games Unauthorized video games Video games based on the Bible Video games developed in the United States Windows games Wisdom Tree games Wolfenstein 3D engine games Sprite-based first-person shooters
24095765
https://en.wikipedia.org/wiki/Eco-costs
Eco-costs
Eco-costs are the costs of the environmental burden of a product on the basis of prevention of that burden. They are the costs which should be made to reduce the environmental pollution and materials depletion in our world to a level which is in line with the carrying capacity of our earth. For example: for each 1000 kg CO2 emission, one should invest €116,- in offshore windmill parks (plus in the other CO2 reduction systems at that price or less). When this is done consistently, the total CO2 emissions in the world will be reduced by 65% compared to the emissions in 2008. As a result, global warming will stabilise. In short: "the eco-costs of 1000 kg CO2 are €116,-". Similar calculations can be made on the environmental burden of acidification, eutrophication, summer smog, fine dust, eco-toxicity, and the use of metals, rare earths, fossil fuels, water and land (nature). As such, the eco-costs are 'external costs', since they are not yet integrated into the real-life costs of current production chains (Life Cycle Costs). The eco-costs should be regarded as hidden obligations. The eco-costs of a product are the sum of all eco-costs of emissions and use of resources during the life cycle "from cradle to cradle". The widely accepted method to make such a calculation is called life cycle assessment (LCA), which is basically a mass and energy balance, defined in the ISO 14040 and the ISO 14044 (for the building industry the EN 15804). The practical use of eco-costs is to compare the sustainability of several product types with the same functionality. The advantage of eco-costs is that they are expressed in a standardized monetary value (€) which appears to be easily understood 'by instinct'. Also the calculation is transparent and relatively easy, compared to damage based models which have the disadvantage of extremely complex calculations with subjective weighting of the various aspects contributing to the overall environmental burden.
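The arithmetic behind a statement like "the eco-costs of 1000 kg CO2 are €116" can be sketched in a few lines. Only the CO2 rate (€116 per 1000 kg) is taken from the text; the other substances and rates in this sketch are invented placeholders, not values from the eco-costs tables.

```python
# Eco-costs sketch: each emission (kg) times its marginal prevention
# cost (EUR/kg), summed over the life-cycle inventory. Only the CO2
# rate comes from the text; the other rates are invented placeholders.

PREVENTION_COST_EUR_PER_KG = {
    "co2": 116.0 / 1000.0,  # EUR 116 per 1000 kg CO2 (from the text)
    "so2": 8.0,             # placeholder rate, for illustration only
    "pm2_5": 50.0,          # placeholder rate, for illustration only
}

def eco_costs(inventory_kg):
    """Total eco-costs (EUR) of a life-cycle emission inventory."""
    return sum(mass * PREVENTION_COST_EUR_PER_KG[substance]
               for substance, mass in inventory_kg.items())

# The worked example from the text: 1000 kg CO2 carries EUR 116 of
# "hidden obligations".
print(round(eco_costs({"co2": 1000.0}), 2))  # -> 116.0
```

A real assessment would draw the inventory from an LCA database rather than hand-written dictionaries, but the structure of the calculation is the same.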
The system of eco-costs is part of the bigger model of the ecocosts/value ratio, EVR. Background information The eco-costs system was introduced at conferences in 1999, and published in 2000-2004 in the International Journal of LCA, and in the Journal of Cleaner Production. In 2007 the system was updated, and published in 2010. The next updates were in 2012 and 2017. It is planned to update the system every 5 years to incorporate the latest developments in science. The concept of eco-costs has been made operational with general databases of the Delft University of Technology, and is described at www.ecocostsvalue.com. The method of the eco-costs is based on the sum of the marginal prevention costs (end-of-pipe as well as system-integrated) for toxic emissions related to human health as well as ecosystems, emissions that cause global warming, and resource depletion (metals, rare earths, fossil fuels, water, and land-use). For a visual display of the system see Figure 1. Marginal prevention costs of toxic emissions are derived from the so-called prevention curve as depicted in Figure 2. The basic idea behind such a curve is that a country (or a group of countries, such as the European Union) must take prevention measures to reduce toxic emissions (more than one measure is required to reach the target). From the point of view of the economy, the cheapest measures (in terms of euro/kg) are taken first. At a certain point on the curve, the reduction of the emissions is sufficient to bring the concentration of the pollution below the so-called no-effect-level. The no-effect-level of emissions is the level at which the emissions and the natural absorption of the earth are in equilibrium again, at a maximum temperature rise of 2 degrees C.
The no-effect-level of a toxic emission is the level where the concentration in nature is well below the toxicity threshold (most natural toxic substances have a toxicity threshold, below which they might even have a beneficial effect), or below the natural background level. For human toxicity the 'no-observed-adverse-effect level' is used. The eco-costs are the marginal prevention costs of the last measure of the prevention curve to reach the no-effect-level. See the abovementioned references 4 and 8 for a full description of the calculation method (note that in the calculation 'classes' of emissions with the same 'midpoint' are combined, as explained below). The classical way to calculate a 'single indicator' in LCA is based on the damage of the emissions. Pollutants are grouped in 'classes', multiplied by a 'characterisation' factor to account for their relative importance within a class, and totalised to the level of their 'midpoint' effect (global warming, acidification, eutrophication, etc.). The classical problem is then to determine the relative importance of each midpoint effect. In damage based systems this is done by 'normalisation' (= comparison with the pollution in a country or a region) and 'weighting' (= giving each midpoint a weight, to take the relative importance into account) by an expert panel. The calculation of the eco-costs is based on classification and characterisation tables as well (combining tables from IPCC, the USEtox model (usetox.org), and tables of the ILCD), but it has a different approach to the normalisation and weighting steps. Normalisation is done by calculating the marginal prevention costs for a region (e.g. the European Union), as described above. The weighting step is not required in the eco-costs system, since the total result is the sum of the eco-costs of all midpoints.
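The two-step bookkeeping just described (characterise emissions into midpoint scores, then price each midpoint with its marginal prevention cost, with no weighting step) can be illustrated as follows. Every factor in this sketch is a made-up placeholder for illustration, not a value from the actual characterisation or prevention-cost tables.

```python
# Step 1: characterise each emission into a midpoint score (e.g. kg
# CO2-equivalent). Step 2: price each midpoint with its marginal
# prevention cost and sum. No subjective weighting is needed because
# the priced midpoints are already commensurable in EUR. All numbers
# below are placeholders.

CHARACTERISATION = {  # substance -> (midpoint, equivalence factor)
    "co2": ("global_warming", 1.0),   # reference substance of its class
    "ch4": ("global_warming", 28.0),  # placeholder GWP-style factor
    "so2": ("acidification", 1.0),    # reference substance of its class
    "nox": ("acidification", 0.7),    # placeholder factor
}

PREVENTION_COST = {  # midpoint -> EUR per kg-equivalent (placeholders)
    "global_warming": 0.116,
    "acidification": 8.0,
}

def eco_costs(inventory_kg):
    """Characterise an emission inventory into midpoints, then sum
    the marginal prevention costs of all midpoints (no weighting)."""
    midpoints = {}
    for substance, mass in inventory_kg.items():
        midpoint, factor = CHARACTERISATION[substance]
        midpoints[midpoint] = midpoints.get(midpoint, 0.0) + mass * factor
    return sum(PREVENTION_COST[m] * score for m, score in midpoints.items())
```

Damage-based single indicators replace the final line with a normalisation and expert-panel weighting step; in the eco-costs system that step disappears because each midpoint already carries a monetary value.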
The advantage of such a calculation is that the marginal prevention costs are related to the cost of the most expensive Best Available Technology which is needed to meet the target, and the corresponding level of Tradable Emission Rights which is required in future. From a business point of view, the eco-costs are the costs of non-compliance with future governmental regulations. Example from the past: emissions of Volkswagen diesels. The eco-costs have been calculated for the situation in the European Union. It is expected that the situation in some states in the US, like California and Pennsylvania, gives similar results. It might be argued that the eco-costs are also an indication of the marginal prevention costs for other parts of the globe, under the condition of a level playing field for production companies. Eco-costs 2017 The method of the eco-costs 2017 (version 1.6) comprises tables of over 36,000 emissions, and has been made operational by special databases for SimaPro: Idematapp 2020 and Idemat2020 (based on LCIs from Ecoinvent V3.5), Agri Footprint, and a database for CES (Cambridge Engineering Selector). Over 10,000 materials and processes are covered in total. Excel look-up tables are provided at www.ecocostsvalue.com.
For emissions of toxic substances, the following set of multipliers (marginal prevention costs) is used in the eco-costs 2017 system: The characterisation ('midpoint') tables which are applied in the eco-costs 2017 system are recommended by the ILCD: IPCC 2013, 100 years, for greenhouse gases USETOX 2, for human toxicity (carcinogens), and ecotoxicity ILCD recommended tables for acidification, eutrophication, and photochemical oxidant formation (summer smog) UNEP/SETAC 2016, for fine dust PM2.5 (for PM10 the default factors are used of the ILCD Midpoint+) In addition to the abovementioned eco-costs for emissions, there is a set of eco-costs to characterize the 'midpoints' of resource depletion: eco-costs of abiotic scarcity (metals, including rare earths, and energy carriers) eco-costs of land-use change (based on loss of biodiversity, of vascular plants and mammals, used for eco-costs of tropical hardwood) eco-costs of water scarcity (based on the Baseline Water Stress indicator - BWS - of countries) eco-costs of landfill The abovementioned marginal prevention costs at midpoint level can be combined into 'endpoints' in three groups, plus global warming as a separate group: Since the endpoints have the same monetary unit (e.g. euro, dollar), they are added up to the total eco-costs without applying a 'subjective' weighting system. This is an advantage of the eco-costs system (see also ISO 14044 section 4.4.3.4 and 4.4.5). So-called 'double counting' (ISO 14044 section 4.4.2.2.3) is avoided. The eco-costs system is in compliance with ISO 14008 (“Monetary valuation of environmental impacts and related environmental aspects”), and uses the ‘averting costs method’, also called ‘(marginal) prevention costs method’ (see section 6.3). The issue of the 'plastic soup' is dealt with in the midpoint 'use of energy carriers' (in products). In the calculation of the marginal prevention costs (i.e.
the eco-costs) the price of feedstock for plastics, diesel and gasoline, is based on the system alternative of substitution by 'second generation' oil from biomass (pyrolysis of agricultural waste, wood harvesting waste, or algae), and producing bio-degradable plastics from it. By this substitution, the increase of plastic soup is stopped. However, the problem of the plastic soup that exists already is not resolved by this prevention measure. The eco-costs of global warming (also called eco-costs of carbon footprint) can be used as an indicator for the carbon footprint. The eco-costs of resource scarcity can be regarded as an indicator for 'circularity' in the theory of the circular economy. However, it is advised to include human toxicity and eco-toxicity, and include the eco-costs of global warming in the calculations on the circular economy as well. The eco-costs of global warming are required to reveal the difference between fossil-based products and bio-based products, since biogenic CO2 is not counted in LCA (biogenic CO2 is part of the natural recycle loop in the biosphere). Therefore, total eco-costs can be regarded as a robust indicator for cradle-to-cradle calculations in LCA for products and services in the theory of the circular economy. Since the economic viability of a business model is also an important aspect of the circular economy, the added value of a product-service system should be part of the analysis. This requires the two dimensional approach of Eco-efficient Value Creation as described at the Wikipedia page on the model of the ecocosts/value ratio, EVR. The Delft University of Technology has developed a single indicator for S-LCA as well, the so-called s-eco-costs, to incorporate the sometimes appalling working conditions in production chains (e.g. production of garments, mining of metals). 
Aspects are the low minimum wages in developing countries (the "fair wage deficit"), the aspects of "child labour" and "extreme poverty", the aspect of "excessive working hours", and the aspect of "OSH (Occupational Safety and Health)". The s-eco-costs system has been published in the Journal of Cleaner Production. Prevention costs versus damage costs Prevention measures will decrease the costs of the damage related to environmental pollution. The damage costs are in most cases the same (or a bit higher) compared to the prevention costs. So the total effect of prevention measures on our society is that they result in a better environment at no extra cost. Discussion There are many 'single indicators' for LCA. Basically, they fall into three categories: single issue damage based prevention based The best known 'single issue' indicator is the carbon footprint: the total emissions of kg CO2, or kg CO2 equivalent (taking methane and some other greenhouse gases into account as well). The advantage of a single issue indicator is that its calculation is simple and transparent, without any complex assumptions. It is also easy to communicate to the public. The disadvantage is that it ignores the problems caused by other pollutants and it is not suitable for cradle-to-cradle calculations (because materials depletion is not taken into account). The most common single indicators are damage based. This stems from the period of the 1990s, when LCA was developed to make people aware of the damage of production and consumption. The advantage of damage based single indicators is that they make people aware of the fact that they should consume less, and make companies aware that they should produce cleaner.
The disadvantage is that these damage based systems are very complex, not transparent to anyone other than those who make the computer calculations, need many assumptions, and suffer from the subjective normalization and weighting procedure as a last step, to combine the three scores for human health, ecosystems and resource depletion. Communication of the result is not easy, since the result is expressed in 'points' (scientific attempts to express the results in money have not been very successful so far, because of methodological flaws and uncertainties). Prevention based indicators, like the system of the eco-costs, are relatively new. The advantage, in comparison to the damage based systems, is that the calculations are relatively easy and transparent, and that the results can be explained in terms of money and in measures to be taken. The system is focused on the decision-making processes of architects, business people, designers and engineers. The advantage is that it provides a single endpoint in euros, without the need for normalization and weighting. The disadvantage is that the system is not focused on the fact that people should consume less. The eco-costs are calculated for the situation of the European Union, but are applicable worldwide under the assumption of a level playing field for business, and under the precautionary principle. There are two other prevention based systems, developed after the introduction of the eco-costs, which are based on the local circumstances of a specific country: In the Netherlands, 'shadow prices' were developed in 2004 by TNO/MEP on the basis of a local prevention curve: these are the costs of the most expensive prevention measure required by the Dutch government for each midpoint.
It is obvious that such costs are relevant for the local companies, but such a shadow price system doesn't have any meaning outside the Netherlands, since it is not based on the no-effect-level. In Japan, a group of universities have developed a set of data for maximum abatement costs (MAC, similar to the midpoint multipliers of the eco-costs as given in the previous section), for the Japanese conditions. The development of the MAC method started in 2002 and was published in 2005. The so-called avoidable abatement cost (AAC) in this method is comparable to the eco-costs. Five available databases In line with the policy of the Delft University of Technology to bring LCA calculations within reach of everybody, open-access Excel databases (tables) are made available on the internet, free of charge. Experts on LCA who want to use the eco-costs as a single indicator can download the full database for Simapro (the Eco-costs Method as well as the Idematapp LCIs), when they have a Simapro licence. Engineers, designers and architects can have databases, free of charge, for CES and ArchiCAD software, provided that they have a licence for the software. The following databases are available: Excel tables on www.ecocostsvalue.com, tab data (look-up tables for designers and engineers): an Excel table with data on emissions and materials depletion (more than 35,000 substances), see an Excel table on products and processes, based on LCIs of Ecoinvent, Idemat, and Agri Footprint (more than 10,000 lines), only for students at the campus, see an import SimaPro database for the method and an import SimaPro database for Idemat LCIs (software for LCA specialists.
www.simapro.com) for people who have a Simapro licence a database for Cambridge Engineering Selector, Level 2 (software for designers and engineers who have a software licence) a dataset for ArchiCAD (software for architects) the IdematApp for Sustainable Materials Selection (available in the App Store of Apple and in the Google Play store). See for more information www.idematapp.com. See also Environmental full-cost accounting References Environmental economics Research Industrial ecology
760813
https://en.wikipedia.org/wiki/Epyx
Epyx
Epyx, Inc. was a video game developer and publisher active in the late 1970s and 1980s. The company was founded as Automated Simulations by Jim Connelley and Jon Freeman, originally using Epyx as a brand name for action-oriented games before renaming the company to match in 1983. Epyx published a long series of games through the 1980s. The company is currently owned by Bridgestone Multimedia Group Global. History Formation In 1977, Susan Lee-Merrow invited Jon Freeman to join a Dungeons & Dragons game hosted by Jim Connelley and Jeff Johnson. Connelley later purchased a Commodore PET computer to help with the bookkeeping involved in being a dungeon master, and came up with the idea of writing a computer game for the machine before the end of the year so he could write it off on his taxes. Freeman had written on gaming for several publications, and joined Connelley in the design of a new space-themed wargame. Starting work around August 1978, Freeman wrote the basic rules, mission sets, background stories and the manual, while Connelley coded up the system in PET BASIC. The BASIC era The two formed Automated Simulations around Thanksgiving 1978 to market the game, and released it in December as Starfleet Orion. Examining contemporary magazines (Byte and Creative Computing) suggests this is the first commercial space-themed wargame for a personal computer. As the game was written in BASIC, it was easy to port to other home computers of the era, starting with the TRS-80 and then the Apple II, the latter featuring rudimentary graphics. They followed this game with 1979's Invasion Orion, which included a computer opponent so as not to require two human players. The company's next release, Temple of Apshai, was very successful, selling over 20,000 copies. As the game was not a "simulation" of anything, the company introduced the Epyx brand name for these more action-oriented titles. 
Rated as the best computer game by practically every magazine of the era, Apshai was soon ported from the TRS-80 to additional systems, such as the Atari 400/800 and the Commodore 64. Apshai spawned a number of similar adventure games based on the same game engine, including two direct sequels, branded under the Dunjonquest label. The games were so successful that they were later re-released in 1985 as the Temple of Apshai Trilogy. Using the same BASIC game engine, a series of "semi-action" games followed under the Epyx brand, including Crush, Crumble and Chomp!, Rescue at Rigel, and Star Warrior, each of which added twists to the Apshai engine. Growth and action focus Freeman became increasingly frustrated by Connelley's refusal to update the game engine. He left the company to start Free Fall Associates in 1981, leaving Connelley to lead what was now a large company. A year later, Epyx was starting to have financial difficulties. Jim Connelley wanted and received money through venture capital, and the venture capitalists installed Michael Katz to manage the company. Connelley clashed with new management, left Epyx, and formed his own development team, The Connelley Group with all of the programmers going with him, but continued to work under the Epyx umbrella. With no programmers to develop any games in-house, Michael Katz needed to hire programmers to ensure a steady supply of games. Several venture capital owners involved in Epyx also had ownership of a company called Starpath. While Starpath had several young programmers and hardware engineers, they were facing financial difficulties as well. Around this time, an independent submission to publish a game called Jumpman came through and was a big hit for Epyx. The success of Jumpman made Epyx a lot of money, so Michael Katz had the capital to create a merger between Epyx and Starpath, bringing Starpath's programmers and hardware engineers under the same company. 
Michael Katz left Epyx in 1984 after being hired away by Atari Corporation as their President of Entertainment Electronics Division (and later, became the President of Sega of America), and was replaced by Gilbert Freeman (no relation to Jon Freeman). By 1983 Epyx discontinued its older games because, Jerry Pournelle reported, "its managers tell me that arcade games so outsell strategic games that it just isn't cost-effective to put programmer time on strategy". By early 1984, InfoWorld estimated that Epyx was the world's 16th-largest microcomputer-software company, with $10 million in 1983 sales. Many successful action games followed, including the hits Impossible Mission and Summer Games. The latter created a long run of successful sequels, including Summer Games II, Winter Games, California Games, and World Games. The company produced games based on licenses of Hot Wheels, G.I. Joe, and Barbie. In Europe, U.S. Gold published Epyx games for the Commodore 64, and also ported many of the games to other major European platforms such as the ZX Spectrum and Amstrad CPC. For the Commodore 64, Epyx made the Fast Load cartridge which enables a fivefold speedup of floppy disk drive accesses through Commodore's very slow serial interface. Another hardware product was the Epyx 500XJ Joystick, which uses high-quality microswitches and a more ergonomic form factor than the standard Atari CX40 joystick while remaining compatible. Starting in 1986, Epyx realized that the Commodore 64 was starting to show its age, and needed to think about the future of the company. They hired David Shannon Morse to explore the next generation of consoles and computers and to learn about their strengths. David's son wanted his father to come up with a portable game system, so he had a meeting with former colleagues at Amiga Corporation, R. J. Mical and Dave Needle, to see if there was a way to design a portable gaming system. 
Internally, the handheld gaming system they were working on was called the Handy. Unable to continue due to high costs, it was sold to Atari Corporation, which brought it to market in 1989 as the Atari Lynx. Litigation In 1987, Epyx faced an important copyright infringement lawsuit from Data East USA regarding Epyx's Commodore 64 video game World Karate Championship. Data East thought the whole game, and particularly the depiction of the referee, looked too much like its 1984 arcade game Karate Champ. Data East won at the US District Court level and Judge William Ingram ordered Epyx to recall all copies of World Karate Championship. Epyx appealed the case to the United States Court of Appeals for the Ninth Circuit, which reversed the judgment and ruled in favor of Epyx, stating that copyright protection did not extend to the idea of a tournament karate game, but only to specific artistic choices not dictated by that idea. The Court noted that a "17.5 year-old boy" could see clear differences between the elements of each game actually subject to copyright. Bankruptcy and asset sales Epyx had become heavily dependent on the Commodore 64 market, which accounted for the bulk of its revenues most years, but by 1988 the C64 was an aging machine now in its sixth year and the focus of computer gaming was shifting to PC compatibles and 16-bit machines. Although the console market, dominated by the NES, was highly lucrative, Epyx objected to Nintendo's strict rules and licensing policies and instead initiated a failed attempt to develop its own game console. Epyx was unable to fulfill its contract with Atari to finish developing Lynx hardware and software, and the latter withheld payments that the former needed. By the end of 1989, Epyx discontinued developing computer games, began making only console games, and filed for Chapter 11 bankruptcy protection.
According to Stephen Landrum, a long-time game programmer at Epyx, the company went bankrupt "because it never really understood why it had been successful in the past, and then decided to branch out in a lot of directions, all of which turned out to be failures." Epyx had shrunk from 145 employees in 1988 to fewer than 20 by the end of 1989. After emerging from bankruptcy the company resumed game development but only for the Lynx, with Atari acting as publisher. In 1993, with eight employees left, they decided just to sell off the rest of the company. Bridgestone Media Group eventually acquired the rights to the rest of Epyx's assets. Job offers were extended to the eight remaining employees, but only Peter Engelbrite accepted. In 2006, British publisher System 3 announced it had licensed certain Epyx assets on a time-limited basis to release games such as California Games and Impossible Mission for Nintendo DS, PlayStation Portable, and Wii in 2007. Products Games Other software Hardware Notes References External links Epyx profile on MobyGames "Epyx Journey" – An in-depth history of Epyx Epyx history and game list – GOTCHA on GameSpy. Images of some early Epyx brochures Epyx Consumer Software Catalog Winter 1984 Epyx 500XJ Joystick Brochure Epyx 500XJ Joystick Commercial (1986) 1993 disestablishments in California Defunct computer hardware companies Companies that filed for Chapter 11 bankruptcy in 1989 Video game companies established in 1978 Video game companies disestablished in 1993 Defunct video game companies of the United States
7293328
https://en.wikipedia.org/wiki/MSConfig
MSConfig
MSConfig (officially called System Configuration in Windows Vista and later, including Windows 7, Windows 8, Windows 10, and Windows 11, and Microsoft System Configuration Utility in previous operating systems) is a system utility to troubleshoot the Microsoft Windows startup process. It can disable or re-enable software, device drivers and Windows services that run at startup, or change boot parameters. It is bundled with all versions of Microsoft Windows operating systems since Windows 98 except Windows 2000. Windows 95 and Windows 2000 users can download the utility as well, although it was not designed for them. Uses MSConfig is often used for speeding up the Microsoft Windows startup process of the machine. According to Microsoft, MSConfig was not meant to be used as a startup management program. Features MSConfig is a troubleshooting tool which is used to temporarily disable or re-enable software, device drivers or Windows services that run during the startup process, to help the user determine the cause of a problem with Windows. Some of its functionality varies by Windows version: In Windows 98 and Windows Me, it can configure advanced troubleshooting settings pertaining to these operating systems. It can also launch common system tools. In Windows 98, it can back up and restore startup files. In Windows Me, it has also been updated with three new tabs called "Static VxDs", "Environment" and "International". The Static VxDs tab allows users to enable or disable static virtual device drivers to be loaded at startup, the Environment tab allows users to enable or disable environment variables, and the International tab allows users to set international language keyboard layout settings that were formerly set via the real-mode MS-DOS configuration files. A "Cleanup" button on the "Startup" tab allows cleaning up invalid or deleted startup entries. In Windows Me and Windows XP, it can restore an individual file from the original Windows installation set.
On Windows NT-based operating systems prior to Windows Vista, it can set various BOOT.INI switches. In Windows XP and Windows Vista, it can hide all operating system services for troubleshooting. In Windows Vista and later, the tool allows configuring various switches for Windows Boot Manager and Boot Configuration Data. It also gained additional support for launching a variety of tools, such as system information, other configuration areas, such as Internet options, and the ability to enable/disable UAC. An update is available for Windows XP and Windows Server 2003 that adds the Tools tab. References Further reading Windows components Windows administration Configuration management Windows 98
3234557
https://en.wikipedia.org/wiki/Xcon
Xcon
The R1 (internally called XCON, for eXpert CONfigurer) program was a production-rule-based system written in OPS5 by John P. McDermott of CMU in 1978 to assist in the ordering of DEC's VAX computer systems by automatically selecting the computer system components based on the customer's requirements. Overview In developing the system, McDermott made use of experts from both DEC's PDP/11 and VAX computer systems groups. These experts sometimes even disagreed amongst themselves as to an optimal configuration. The resultant "sorting it out" had an additional benefit in terms of the quality of VAX systems delivered. XCON first went into use in 1980 in DEC's plant in Salem, New Hampshire. It eventually had about 2500 rules. By 1986, it had processed 80,000 orders, and achieved 95–98% accuracy. It was estimated to be saving DEC $25M a year by reducing the need to give customers free components when technicians made errors, by speeding the assembly process, and by increasing customer satisfaction. Before XCON, when ordering a VAX from DEC, every cable, connection, and bit of software had to be ordered separately. (Computers and peripherals were not sold complete in boxes as they are today.) The sales people were not always very technically expert, so customers would find that they had hardware without the correct cables, printers without the correct drivers, a processor without the correct language chip, and so on. This meant delays and caused a lot of customer dissatisfaction and resultant legal action. XCON interacted with the sales person, asking critical questions before printing out a coherent and workable system specification/order slip. XCON's success led DEC to rewrite XCON as XSEL—a version of XCON intended for use by DEC's salesforce to aid a customer in properly configuring their VAX (so they would not, say, choose a computer too large to fit through their doorway or choose too few cabinets for the components to fit in). 
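XCON's OPS5 rules repeatedly matched conditions against the partial order in working memory and added whatever components were missing. A toy analogue of that forward-chaining style, with entirely invented rules and part names (the real system had about 2,500 OPS5 rules), might look like:

```python
# Toy forward-chaining configurer in the spirit of XCON's production
# rules. Rules fire whenever their condition matches the current order,
# until no rule can fire (quiescence). All rules and part names here
# are invented for illustration.

RULES = [
    # (rule name, condition on the order, parts the rule adds)
    ("needs-drive-cable",
     lambda order: "disk_drive" in order and "drive_cable" not in order,
     ["drive_cable"]),
    ("needs-cabinet",
     lambda order: "cpu" in order and "cabinet" not in order,
     ["cabinet"]),
]

def configure(order):
    """Expand an order into a complete configuration by firing rules
    until no rule's condition matches any more."""
    order = set(order)
    changed = True
    while changed:
        changed = False
        for name, condition, additions in RULES:
            if condition(order):
                order.update(additions)
                changed = True
    return sorted(order)

print(configure(["cpu", "disk_drive"]))
# -> ['cabinet', 'cpu', 'disk_drive', 'drive_cable']
```

OPS5 additionally used the Rete algorithm for efficient matching and conflict-resolution strategies to pick among simultaneously matching rules; this sketch simply fires every matching rule in order.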
Location problems and configuration were handled by yet another expert system, XSITE. McDermott's 1980 paper on R1 won the AAAI Classic Paper Award in 1999. Legendarily, the name of R1 comes from McDermott, who supposedly said as he was writing it, "Three years ago I couldn't spell knowledge engineer, now I are one." See also MYCIN References The AI Business: The commercial uses of artificial intelligence, ed. Patrick Winston and Karen A. Prendergast. External links "Configuration with R1/XCon (1978)" "AAAI Classic Paper Award" "R1-SOAR: A Research Experiment in Computer Learning" Expert systems Business software Information systems History of artificial intelligence
40394
https://en.wikipedia.org/wiki/Design%20Patterns
Design Patterns
Design Patterns: Elements of Reusable Object-Oriented Software (1994) is a software engineering book describing software design patterns. The book was written by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, with a foreword by Grady Booch. The book is divided into two parts, with the first two chapters exploring the capabilities and pitfalls of object-oriented programming, and the remaining chapters describing 23 classic software design patterns. The book includes examples in C++ and Smalltalk. It has been influential in the field of software engineering and is regarded as an important source for object-oriented design theory and practice. More than 500,000 copies have been sold in English and in 13 other languages. The authors are often referred to as the Gang of Four (GoF). History The book started at a birds of a feather (BoF) session at OOPSLA '90, "Towards an Architecture Handbook", run by Bruce Anderson, where Erich Gamma and Richard Helm met and discovered their common interest. They were later joined by Ralph Johnson and John Vlissides. The book's original publication date was October 21, 1994, with a 1995 copyright, so it is often cited with a 1995 date despite being published in 1994. The book was first made available to the public at the OOPSLA meeting held in Portland, Oregon, in October 1994. In 2005 the ACM SIGPLAN awarded that year's Programming Languages Achievement Award to the authors, in recognition of the impact of their work "on programming practice and programming language design". As of March 2012, the book was in its 40th printing. Introduction Chapter 1 is a discussion of object-oriented design techniques, based on the authors' experience, which they believe would lead to good object-oriented software design, including: "Program to an interface, not an implementation." 
(Gang of Four 1995:18) Composition over inheritance: "Favor 'object composition' over 'class inheritance'." (Gang of Four 1995:20) The authors claim the following as advantages of interfaces over implementation: clients remain unaware of the specific types of objects they use, as long as the object adheres to the interface clients remain unaware of the classes that implement these objects; clients only know about the abstract class(es) defining the interface Use of an interface also leads to dynamic binding and polymorphism, which are central features of object-oriented programming. The authors refer to inheritance as white-box reuse, with white-box referring to visibility, because the internals of parent classes are often visible to subclasses. In contrast, the authors refer to object composition (in which objects with well-defined interfaces are used dynamically at runtime by objects obtaining references to other objects) as black-box reuse because no internal details of composed objects need be visible in the code using them. The authors discuss the tension between inheritance and encapsulation at length and state that in their experience, designers overuse inheritance (Gang of Four 1995:20). The danger is stated as follows: "Because inheritance exposes a subclass to details of its parent's implementation, it's often said that 'inheritance breaks encapsulation'". (Gang of Four 1995:19) They warn that the implementation of a subclass can become so bound up with the implementation of its parent class that any change in the parent's implementation will force the subclass to change. Furthermore, they claim that a way to avoid this is to inherit only from abstract classes—but then, they point out that there is minimal code reuse. Using inheritance is recommended mainly when adding to the functionality of existing components, reusing most of the old code and adding relatively small amounts of new code. 
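The two maxims above can be illustrated with a short sketch. The Compressor/Archiver names are invented for illustration, not drawn from the book:

```python
import zlib
from abc import ABC, abstractmethod

class Compressor(ABC):
    """The interface that clients program against."""
    @abstractmethod
    def compress(self, data: bytes) -> bytes: ...

class ZlibCompressor(Compressor):
    """One concrete implementation, hidden behind the interface."""
    def compress(self, data: bytes) -> bytes:
        return zlib.compress(data)

class Archiver:
    """Favors object composition: holds a Compressor rather than
    inheriting from one (black-box reuse)."""
    def __init__(self, compressor: Compressor):
        self._compressor = compressor
    def archive(self, data: bytes) -> bytes:
        # The Archiver knows only the interface, not the concrete class.
        return self._compressor.compress(data)

packed = Archiver(ZlibCompressor()).archive(b"pattern" * 20)
```

Because Archiver depends only on the Compressor interface, a different implementation can be substituted at runtime without touching Archiver's code.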
To the authors, 'delegation' is an extreme form of object composition that can always be used to replace inheritance. Delegation involves two objects: a 'sender' passes itself to a 'delegate' to let the delegate refer to the sender. Thus the link between two parts of a system is established only at runtime, not at compile-time. The Callback article has more information about delegation. The authors also discuss so-called parameterized types, which are also known as generics (Ada, Eiffel, Java, C#, VB.NET, and Delphi) or templates (C++). These allow any type to be defined without specifying all the other types it uses—the unspecified types are supplied as 'parameters' at the point of use. The authors admit that delegation and parameterization are very powerful but add a warning: "Dynamic, highly parameterized software is harder to understand and build than more static software." (Gang of Four 1995:21) The authors further distinguish between 'Aggregation', where one object 'has' or 'is part of' another object (implying that an aggregate object and its owner have identical lifetimes) and acquaintance, where one object merely 'knows of' another object. Sometimes acquaintance is called 'association' or the 'using' relationship. Acquaintance objects may request operations of each other, but they are not responsible for each other. Acquaintance is a weaker relationship than aggregation and suggests much looser coupling between objects, which can often be desirable for maximum maintainability in a design. The authors employ the term 'toolkit' where others might today use 'class library', as in C# or Java. In their parlance, toolkits are the object-oriented equivalent of subroutine libraries, whereas a 'framework' is a set of cooperating classes that make up a reusable design for a specific class of software. They state that applications are hard to design, toolkits are harder, and frameworks are the hardest to design. 
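A minimal sketch of this sender-passes-itself style of delegation (the class names are illustrative, not taken from the book's code):

```python
class Rectangle:
    """Delegate: computes an area using state read from its owner."""
    def area(self, owner):
        return owner.width * owner.height

class Window:
    """Rather than inheriting from Rectangle, Window composes one and
    delegates area() to it, passing itself so the delegate can refer
    back to the sender's state."""
    def __init__(self, width, height):
        self.width, self.height = width, height
        self._shape = Rectangle()

    def area(self):
        # The sender passes itself to the delegate.
        return self._shape.area(self)
```

Because the link is established at runtime, Window could swap in a different shape delegate without its own class definition changing.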
Patterns by type Creational Creational patterns create objects, rather than requiring the programmer to instantiate objects directly. This gives the program more flexibility in deciding which objects need to be created for a given case. Abstract factory groups object factories that have a common theme. Builder constructs complex objects by separating construction and representation. Factory method creates objects without specifying the exact class to create. Prototype creates objects by cloning an existing object. Singleton restricts object creation for a class to only one instance. Structural These concern class and object composition. They use inheritance to compose interfaces and define ways to compose objects to obtain new functionality. Adapter allows classes with incompatible interfaces to work together by wrapping its own interface around that of an already existing class. Bridge decouples an abstraction from its implementation so that the two can vary independently. Composite composes zero-or-more similar objects so that they can be manipulated as one object. Decorator dynamically adds/overrides behaviour in an existing method of an object. Facade provides a simplified interface to a large body of code. Flyweight reduces the cost of creating and manipulating a large number of similar objects. Proxy provides a placeholder for another object to control access, reduce cost, and reduce complexity. Behavioral Most of these design patterns are specifically concerned with communication between objects. Chain of responsibility delegates commands to a chain of processing objects. Command creates objects that encapsulate actions and parameters. Interpreter implements a specialized language. Iterator accesses the elements of an object sequentially without exposing its underlying representation. Mediator allows loose coupling between classes by being the only class that has detailed knowledge of their methods. 
Memento provides the ability to restore an object to its previous state (undo). Observer is a publish/subscribe pattern, which allows a number of observer objects to see an event. State allows an object to alter its behavior when its internal state changes. Strategy allows one of a family of algorithms to be selected on-the-fly at runtime. Template method defines the skeleton of an algorithm as an abstract class, allowing its subclasses to provide concrete behavior. Visitor separates an algorithm from an object structure by moving the hierarchy of methods into one object. Criticism Criticism has been directed at the concept of software design patterns generally, and at Design Patterns specifically. A primary criticism of Design Patterns, made by Paul Graham among others, is that its patterns are simply workarounds for missing features in C++, replacing elegant abstract features with lengthy concrete patterns, essentially making the programmer a "human compiler" who is "generating by hand the expansions of some macro". Peter Norvig demonstrates that 16 of the 23 patterns in Design Patterns are simplified or eliminated by language features in Lisp or Dylan. Related observations were made by Hannemann and Kiczales, who implemented several of the 23 design patterns using an aspect-oriented programming language (AspectJ) and showed that code-level dependencies were removed from the implementations of 17 of the 23 design patterns and that aspect-oriented programming could simplify the implementations of design patterns. There has also been humorous criticism, such as a show trial at OOPSLA '99 on 3 November 1999, and a parody of the format by Jim Coplien, entitled "Kansas City Air Conditioner". In an interview with InformIT in 2009, Erich Gamma stated that the book's authors had a discussion in 2005 on how they would have refactored the book and concluded that they would have recategorized some patterns and added a few additional ones. 
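One pattern from the catalog above, Singleton, can be sketched in a few lines. This is a minimal, non-thread-safe illustration, not the book's C++ rendering:

```python
class Config:
    """Singleton: restricts instantiation of the class to one shared
    instance. A minimal Python sketch (not thread-safe)."""
    _instance = None

    def __new__(cls):
        # Create the instance on first use, then reuse it forever.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

a = Config()
b = Config()
assert a is b   # both names refer to the one instance
```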
Gamma wanted to remove the Singleton pattern, but there was no consensus among the authors to do so. See also Software design pattern Enterprise Integration Patterns GRASP (object-oriented design) Pedagogical patterns Notes References Software engineering books Software design patterns 1994 non-fiction books Addison-Wesley books
2425846
https://en.wikipedia.org/wiki/Edu-Ware
Edu-Ware
Edu-Ware Services, Inc. was an educational and entertainment software publisher established in 1979 by Sherwin Steffin and Steven Pederson. It was known for its adventure games, role-playing video games, and flight simulators for the Apple II family of computers. History Edu-Ware founders Sherwin Steffin and Steven Pederson met at UCLA, where Steffin was working as a faculty advisor to the campus radio station at which Pederson worked as a student. When Steffin was laid off in the spring of 1979, he and Pederson decided to form a software publishing company specializing in educational software for the Apple II. In particular, Steffin, who held degrees in experimental psychology and instructional technology, wanted to create computer-aided instruction that encouraged divergent thinking, in contrast to the school curricula of the time, which he believed encouraged convergent thinking. Working out of his Woodland Hills, California apartment, Steffin programmed educational software, while Pederson favored writing games, which he had begun while completing his studies at UCLA. Edu-Ware's first product was Perception, followed by Compu-Read, which Steffin had begun programming before starting Edu-Ware, with the intention of selling it to Programma International. Software store Rainbow Computing, enticed by Pederson's concept for a new role-playing video game called Space, gave him his first Apple II computer, which he also used to write the strategy game Terrorist and the educational program Compu-Spell, for which Pederson wrote the first version of Edu-Ware's EWS graphics engine for generating text on the Apple's high-resolution graphics screen. The company expanded beyond the two founders when it hired Mike Lieberman, who had also worked at the student radio station, as sales manager, and contracted game developer David Mullich, who met Steffin while working at Rainbow Computing. 
After writing several games for Edu-Ware as a freelancer, he joined Edu-Ware after completing his own studies at California State University, Northridge in 1980, and as his first assignment created the ground-breaking adventure game The Prisoner, the product for which Edu-Ware is best remembered today. The game was also a financial success for the company, which moved into actual office space, at 22222 Sherman Way in Canoga Park, California, by the year's end. Sometime later, the company relocated to larger facilities overlooking the 101 Freeway in Agoura Hills, California. Edu-Ware may be most noted for what it failed to publish rather than what it did publish: Ken Williams originally shopped the first graphical adventure, Mystery House, to Edu-Ware in 1980. Unhappy with how the negotiations were proceeding, he formed On-line Systems to publish the game. On-line Systems became Sierra On-line, and Sierra became extremely successful, based largely on its reputation in the graphic adventure genre. While The Prisoner remained Edu-Ware's best-selling individual product during its first two years of business, educational software remained its primary focus. The Compu-Math series, consisting of three programs designed by Steffin and programmed by Mullich for teaching elementary mathematics, unveiled Edu-Ware's vision of teaching by objectives and measuring learning through pretesting and posttesting. The company's educational approach was perfected in 1981 with the release of the first in the Algebra series, in which learners choose the cognitive approach by which they want to learn. The Algebra series greatly surpassed The Prisoner in sales and became Edu-Ware's greatest source of revenue. Despite the company's successes, by 1982 it was obvious to Steffin and Pederson that they could not continue running the company themselves. 
Rapidly climbing marketing costs and heavier competition from rivals like Davidson & Associates and Spinnaker Software were taking their toll. For the $1.5 million software company to survive, Edu-Ware needed more management strength and expertise. In July 1983 Management Sciences America, then the world's largest independent software manufacturer, announced that it was purchasing Edu-Ware for a combination of cash and MSA stock, valued at $1.5 million, plus a percentage of future earnings. Having previously specialized in mainframe computer software, MSA saw the purchase as its entry into educational software, which it saw as a future growth market. However, the relationship soon soured as Edu-Ware's marketing was taken over by MSA's Peachtree Software accounting software division, and the Edu-Ware brand identity was slowly extinguished. The final straw came when Personal Computing hit the newsstands in October 1984. The issue featured a well-publicized peach-scented insert that unfolded into eight pages, 32 inches wide, displaying a shelf of 67 Peachtree Software products, all in identical packaging. This included 45 Edu-Ware products that were virtually indistinguishable from the accounting software packaging, the only difference being that the Edu-Ware products had the word 'Education' on the box, even for games like Prisoner 2. Steffin's protests over how MSA was handling Edu-Ware led to his firing in August 1984. The next month, he filed a lawsuit against MSA, claiming the company had violated securities laws by making fraudulent representations to Edu-Ware's stockholders in order to buy their stock, and that promised future payments had not materialized. Steffin further claimed he was to be employed by Edu-Ware for four years after the sale, and charged that MSA undercut Edu-Ware sales to diminish the payments it had promised. 
He said MSA sabotaged the company by holding some products off the market, eliminating advertising and discontinuing use of the Edu-Ware name. Two months after Steffin filed his lawsuit, MSA announced plans to sell its retail microcomputer software group of Peachtree Software, DesignWare, and Edu-Ware, which together lost $2 million that year. MSA cited the millions of dollars Peachtree Software had spent on advertising and promotion, including the expensive peach-scented insert, as a reason for selling off the group. In March 1985 Encyclopædia Britannica announced that it had purchased DesignWare and Edu-Ware from MSA for an undisclosed sum. The Edu-Ware development team was to be disbanded, and DesignWare would handle both development and marketing of Edu-Ware and DesignWare products. Steffin started another software publishing company, BrainPower, along with sales manager Lieberman, while Pederson, who had left Edu-Ware several months earlier, went on to other ventures. Mullich and a few other remaining Edu-Ware employees acquired two of the computer games in development, an adventure game called Wilderness: A Survival Adventure and a space flight simulator called Tranquility Base, and formed their own game company, Electric Transit. Besides Mullich, other notable Edu-Ware alumni include former Apple Computer evangelist Guy Kawasaki, who was director of marketing at the company, and NASA official Wesley Huntress, who developed Rendezvous: A Spaceflight Simulator. Products Unique software for the unique mind Edu-Ware's initial product line was an eclectic mix of analytical software, educational software and computer games, which it marketed under the slogan "unique software for the unique mind". Its 1979 product list included such diverse titles as the metric conversion calculator Metri-Vert, an E.S.P. program to help determine whether users have extrasensory perception, and a drinking game called Zintar. 
However, the photocopied documentation that was packaged in a zip-lock bag with each of Edu-Ware's early products outlined the company's goal of creating software that fell into two distinct categories: K-12 educational products that aimed to provide computer-aided instruction going beyond "random drill and practice routines", and entertainment products that were "often more intellectually powerful, and educational, than the educational products themselves". While many of the company's initial efforts fell short of that vision and were soon dropped from future catalogs, several early products typified the Edu-Ware experience, including its durable speed reading program Compu-Read, and its science fiction role-playing video game Space. The science of learning In 1981, Edu-Ware formalized the distinction between its educational and entertainment products by creating two separate product lines, each with its own packaging. The "Science of Learning" product line consisted of no-nonsense tutorials such as the Compu-Spell, Compu-Math and Algebra series. In each, the learner is given specific, measurable learning objectives; then pre-tested to assess current skill levels before being presented with sequenced learning modules; after which he is post-tested to determine what he has learned. Several of these products featured a classroom management module, which measured the individual progress of an entire classroom of students and provided teacher control over the learning process. While Edu-Ware's attempts at applying formal learning theory were often praised, its no-nonsense approach to learning had its critics. For example, a review of Compu-Math: Arithmetic Skills complained that the program is "devoid of the fun aspect that makes computerized learning human and inspiring. The sole reinforcement is an ever-increasing complexity of the problems". 
Although most of Edu-Ware's Science of Learning products were developed internally, by 1982 the company was attracting outside educators such as Judith S. Priven, Ed.M., who developed several PSAT/SAT products; Neil Bennett, Ph.D., who created an interactive tutorial for teaching BASIC programming; and M. David Merrill, who created the first of a (never-completed) comprehensive series to teach poetry. Interactive fantasies While educational software was Edu-Ware's bread-and-butter, its innovative games are what the company is remembered for today. The goal of Edu-Ware's games was to "test, challenge and perhaps inspire that closet intellectual in all of us." Dubbed "Interactive Fantasies", they tackled such weighty topics as the oil crisis (Windfall), television programming (Network), and global terrorism (Terrorist). Noted one magazine reviewer, "there is that residual element of reality that makes Edu-Ware stuff so good". Many of Edu-Ware's games were written by game designer David Mullich. The most famous (or notorious) of these was Prisoner 2, an update that added graphics to the earlier game The Prisoner. The game was Mullich's homage to the Patrick McGoohan 1967 TV series The Prisoner, which had recently been rebroadcast in the United States. The game was Edu-Ware's most critically acclaimed title, and was ported to the Atari and IBM PC computers. While the game was one of Edu-Ware's best-selling titles, like most of Edu-Ware's output, it proved too outside the mainstream to be considered a true hit. Interactive simulations In 1982 Edu-Ware introduced a third brand, Interactive Simulations, when it released Rendezvous: A Space Shuttle Simulation, developed by NASA scientist Wesley Huntress. Accompanied by a thick "Spacecraft Operations" manual with a chapter on use in the classroom, this flight simulator was marketed as being as educational as it was fun to play. 
Dragonware While the typical Edu-Ware educational product adopted a very serious tone in its instruction, developer John Conrad had created a series of educational products such as Introduction to Counting and Spelling and Reading Primer for Edu-Ware that were designed for the younger learner and thus more playful than the typical Edu-Ware product. However, two of Conrad's later products, Spelling Bee Games and Webster's Numbers, fell so far into the realm of edutainment that Edu-Ware created a fourth product line for them in 1983. The Dragonware line featured a dragon mascot named Webster, who was to be the child's companion in this series of educational games. Peachtree software Edu-Ware's final products – the comprehensive Learning to Read literacy series, the final chapter in the Empire role-playing video game saga, a Tranquility Base lunar lander simulator, and a children's game called Merry Canned Nightmares and Dreams – would each have fit well into its Science of Learning, Interactive Fantasies, Interactive Simulations, and Dragonware brands, respectively. However, Edu-Ware's new owner, MSA, decided to strip Edu-Ware of all its brands and marketed the entire software line in identical packaging, bearing the logo of its Peachtree Software accounting software division. All of the products were promoted as being educational software – even such games as Prisoner 2 – until the product line was sold to Encyclopædia Britannica in 1985. Published titles References External links Edu-Ware Services, Inc. title list — on the Internet Movie Database. 
01 01 Defunct educational software companies Defunct software companies of the United States Defunct video game companies of the United States Software companies based in California Video game companies based in California Technology companies based in Greater Los Angeles Companies based in Agoura Hills, California American companies established in 1979 Software companies established in 1979 Software companies disestablished in 1985 Video game companies established in 1979 Video game companies disestablished in 1985 1979 establishments in California 1985 disestablishments in California Defunct companies based in Greater Los Angeles
45075900
https://en.wikipedia.org/wiki/PandaDoc
PandaDoc
PandaDoc is an American software company providing SaaS software. The platform provides sales process software. PandaDoc is based in San Francisco, California, with main offices in St. Petersburg, Florida. PandaDoc is document automation software as a service with built-in electronic signatures, workflow management, a document builder, and CPQ functionality. Some Belarusian-born employees of the company were persecuted in Belarus for participating in the 2020 Belarusian protests. History The company was founded in 2012 by Mikita Mikado and Sergey Barysiuk in Minsk, Belarus. In 2014 the company's headquarters were moved to Silicon Valley. Mikado and Barysiuk had initially created Quote Roller in 2011. In 2017 the company opened a new office in St. Petersburg, Florida. In 2015 the company raised $5 million in a Series A round led by Altos Ventures. PandaDoc closed two Series B fundings: B1 in May 2017 with $15 million, and B2 in August 2018 worth $30 million, led by One Peak Partners. In September 2021, PandaDoc closed a Series C round with a $1 billion valuation, thus becoming the first Belarus-originated unicorn. Software PandaDoc proposal and contract software is a SaaS product for sales processes. Features PandaDoc includes features to create, track and execute documents, as well as functionality for electronic signatures. It consists of features in the following categories: proposals, quotes, team management, content management, branding, tracking, workflow, productivity, etc. It integrates with several CRMs, as well as ERP, payment, cloud storage, and other systems. Political activity Several PandaDoc employees were arrested in Belarus in September 2020 in retaliation for the founders personally protesting crackdowns on participants in the 2020–21 Belarusian protests that followed the rigged elections earlier that year; the founders had offered financial aid and professional retraining (in the tech industry) to police officers who had lost their jobs for refusing to illegally suppress protesters. 
Most of the arrested employees were conditionally released later that autumn; the last remaining person under arrest was released in August 2021. On 31 August 2021, the authorities of Belarus announced that the case against PandaDoc was closed after the defendants admitted their guilt and compensated the alleged damage. As of March 2021, the company's office in Minsk was in the process of liquidation, and many employees had moved to an office located in Kyiv. Recognition and awards 2017 - Hot Vendor in Modern Content Management by Aragon Research 2020 - Best Overall SaaS Award Winner by APPEALIE See also Sales quote References External links Software companies of the United States Business software Companies based in San Francisco American companies established in 2013 2013 establishments in California
53951618
https://en.wikipedia.org/wiki/HackTool.Win32.HackAV
HackTool.Win32.HackAV
HackTool.Win32.HackAV or not-a-virus:Keygen (or HackTool:Win32/Keygen, per the Microsoft Malware Protection Center) is the definition from Kaspersky Labs for a program designed to assist hacking. These programs often carry the signatures of potential malware; such a program is not dangerous by itself, but it can interfere with the operation of a PC or be used by a hacker to obtain personal information from a user's computer. According to the Microsoft Malware Protection Center, its first known detection goes back to July 16, 2009. Behaviour This riskware is able to create license keys for illegally downloaded, unregistered software. This kind of tool may appear differently, depending on what software the tool is designed to create a key for. The following security threats were most often found on PCs related to these tools: Blackhole exploit kit Win32/Autorun Win32/Dorkbot Win32/Obfuscator Other aliases RiskWare/HackAV (Fortinet) Troj/Keygen (Sophos) CRCK_KEYGEN or HKTL_HACKAV (Trend Micro) See also Dorkbot HackTool.AutoKMS iframe virus References External links For more about this threat, see Volume 13 of the Security Intelligence Report (.pdf download) Analysis of a file at VirusTotal 2009 in computing Cryptanalytic software Hacking (computer security) Malware Types of malware Hacking in the 2000s
63873249
https://en.wikipedia.org/wiki/Jellyfin
Jellyfin
Jellyfin is a suite of multimedia applications designed to organize, manage, and share digital media files to networked devices. Jellyfin consists of a server application installed on a machine running Microsoft Windows, macOS, Linux or in a Docker container, and another application running on a client device such as a smartphone, tablet, smart TV, streaming media player, game console or in a web browser. Jellyfin can also serve media to DLNA and Chromecast-enabled devices. It is a free and open-source software fork of Emby. Features Jellyfin follows a client–server model that allows multiple users and clients to connect, even simultaneously, and stream digital media remotely. The server is fully self-contained: there is no subscription-based consumption model, and Jellyfin does not use an external connection or third-party authentication for any of its functionality. This enables Jellyfin to work on an isolated intranet in much the same fashion as it does over the Internet. Because it shares a heritage with Emby, some clients for that platform are unofficially compatible with Jellyfin; however, as Jellyfin's codebase diverges from Emby's, this becomes less feasible. Jellyfin does not support a direct migration path from Emby. Jellyfin is extensible, and optional third-party plugins exist to provide additional functionality. The project hosts an official repository; however, plugins need not be hosted in the official repository to be installable. Version 10.6.0 of the server software introduced a feature known as "SyncPlay", which provides functionality for multiple users to consume media content together in a synchronized fashion. Support for reading EPUB ebooks with Jellyfin was also added, along with support for multiple plugin repositories: anyone can now create unofficial plugins for Jellyfin without waiting for them to be added to the official plugin repository. 
The web front end has been split off into a separate system in anticipation of the move towards a SQL backend and high availability with multiple servers. Development The project began on December 8, 2018, when co-founders Andrew Rabert and Joshua Boniface, among other users, agreed to fork Emby as a direct reaction to the closing of open-source development on that project. A reference to streaming, Jellyfin's name was conceived by Rabert the following day. An initial release was made available on December 30, 2018. Version history Jellyfin's unique version numbering began with version 10.0.0 in January 2019. See also Plex (company) Kodi (software) Emby Self-hosting (web services) Home theater PC References External links Official website 2018 software Android (operating system) software Audio player software for Linux Audio streaming software for Linux Cross-platform free software Free software Free and open-source Android software IOS software Linux software MacOS media players Media servers Multimedia software for Linux Open-source cloud applications Software forks Streaming media systems Streaming software TvOS software Windows media players
49111333
https://en.wikipedia.org/wiki/Double%20Ratchet%20Algorithm
Double Ratchet Algorithm
In cryptography, the Double Ratchet Algorithm (previously referred to as the Axolotl Ratchet) is a key management algorithm that was developed by Trevor Perrin and Moxie Marlinspike in 2013. It can be used as part of a cryptographic protocol to provide end-to-end encryption for instant messaging. After an initial key exchange it manages the ongoing renewal and maintenance of short-lived session keys. It combines a cryptographic so-called "ratchet" based on the Diffie–Hellman key exchange (DH) and a ratchet based on a key derivation function (KDF), such as a hash function, and is therefore called a double ratchet. The algorithm is considered self-healing because under certain conditions it prevents an attacker from accessing the cleartext of future messages after having compromised one of the user's keys. New session keys are exchanged after a few rounds of communication. This effectively forces the attacker to intercept all communication between the honest parties, since they lose access as soon as a key exchange occurs that is not intercepted. This property was later named Future Secrecy, or Post-Compromise Security. Etymology "Axolotl" was in reference to the salamander's self-healing properties. The term "ratchet" in cryptography is used in analogy to a mechanical ratchet. In the mechanical sense, a ratchet only allows advancement in one direction; a cryptographic ratchet only allows keys to be generated from the previous key. Unlike a mechanical ratchet, however, each state is unique. Origin The Double Ratchet Algorithm was developed by Trevor Perrin and Moxie Marlinspike (Open Whisper Systems) in 2013 and introduced as part of the Signal Protocol in February 2014. The Double Ratchet Algorithm's design is based on the DH ratchet that was introduced by Off-the-Record Messaging (OTR) and combines it with a symmetric-key ratchet modeled after the Silent Circle Instant Messaging Protocol (SCIMP). 
The ratchet was initially named after the critically endangered aquatic salamander axolotl, which has extraordinary self-healing capabilities. In March 2016, the developers renamed the Axolotl Ratchet as the Double Ratchet Algorithm to better differentiate between the ratchet and the full protocol, because some had used the name Axolotl when referring to the Signal Protocol. Properties The Double Ratchet Algorithm features properties that have long been commonly available in end-to-end encryption systems: encryption of content along the entire transport path as well as authentication of the remote peer and protection against manipulation of messages. As a hybrid of DH and KDF ratchets, it combines several desired features of both principles. From OTR messaging it takes the properties of forward secrecy (past sessions remain protected even if the secret persistent main key is compromised), the automatic reestablishment of secrecy after the compromise of a session key, and plausible deniability for the authorship of messages. Additionally, it enables session key renewal without interaction with the remote peer by using secondary KDF ratchets. An additional key-derivation step is taken to enable retaining session keys for out-of-order messages without endangering the following keys. It is said to detect reordering, deletion, and replay of sent messages, and to improve forward secrecy properties in comparison to OTR messaging. Combined with public key infrastructure for the retention of pregenerated one-time keys (prekeys), it allows for the initialization of messaging sessions without the presence of the remote peer (asynchronous communication). The use of a triple Diffie–Hellman key exchange (3-DH) as the initial key exchange method improves the deniability properties. An example of this is the Signal Protocol, which combines the Double Ratchet Algorithm, prekeys, and a 3-DH handshake. 
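How a triple Diffie–Hellman handshake combines three DH computations into one shared secret can be illustrated with a toy sketch; here an insecure Mersenne-prime group stands in for Curve25519, SHA-256 for a proper KDF, and the optional one-time prekey is omitted, so this is illustrative only:

```python
import hashlib
import secrets

# Toy group parameters for illustration only; real implementations
# use X25519, not this insecure little finite-field group.
P = 2**127 - 1
G = 5

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def dh(priv, peer_pub):
    return pow(peer_pub, priv, P).to_bytes(16, "big")

# Long-term identity keys, Bob's signed prekey, Alice's ephemeral key.
a_ik, A_IK = dh_keypair()
b_ik, B_IK = dh_keypair()
b_spk, B_SPK = dh_keypair()
a_ek, A_EK = dh_keypair()

def combine(dh1, dh2, dh3):
    # Combine the three DH outputs into one master secret.
    return hashlib.sha256(dh1 + dh2 + dh3).digest()

alice_secret = combine(dh(a_ik, B_SPK), dh(a_ek, B_IK), dh(a_ek, B_SPK))
bob_secret = combine(dh(b_spk, A_IK), dh(b_ik, A_EK), dh(b_spk, A_EK))
assert alice_secret == bob_secret
```

Because the long-term identity keys only ever enter DH computations rather than signatures over the session, either party could have computed the whole transcript alone, which is what gives the handshake its deniability.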
The protocol provides confidentiality, integrity, authentication, participant consistency, destination validation, forward secrecy, backward secrecy (aka future secrecy), causality preservation, message unlinkability, message repudiation, participation repudiation, and asynchronicity. It does not provide anonymity preservation, and requires servers for the relaying of messages and storing of public key material. Functioning A client renews session key material in interaction with the remote peer using the Diffie–Hellman ratchet whenever possible, otherwise independently by using a hash ratchet. Therefore, with every message a client using the double ratchet advances one of two hash ratchets (one for sending, one for receiving) which get seeded with a common secret from a DH ratchet. At the same time it tries to use every opportunity to provide the remote peer with a new public DH value and advances the DH ratchet whenever a new DH value from the remote peer arrives. As soon as a new common secret is established, a new hash ratchet gets initialized. As cryptographic primitives, the Double Ratchet Algorithm uses: for the DH ratchet, elliptic-curve Diffie–Hellman (ECDH) with Curve25519; for message authentication codes (MACs), the Keyed-Hash Message Authentication Code (HMAC) based on SHA-256; for symmetric encryption, the Advanced Encryption Standard (AES), partially in cipher block chaining (CBC) mode with padding as per PKCS #5 and partially in counter (CTR) mode without padding; and for the hash ratchet, HMAC. 
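The interaction of the DH ratchet with the sending and receiving hash ratchets can be sketched as follows. This toy (an insecure finite-field group instead of Curve25519, arbitrary HMAC labels, no message headers or out-of-order handling) only illustrates how a DH step seeds a hash chain that both peers can then advance in lockstep:

```python
import hashlib
import hmac
import secrets

P, G = 2**127 - 1, 5  # toy DH group for illustration; Signal uses Curve25519

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def kdf_root(root_key, dh_out):
    # DH ratchet step: mix a fresh DH secret into the root key,
    # producing a new root key and a new chain key.
    out = hmac.new(root_key, dh_out.to_bytes(16, "big"), hashlib.sha512).digest()
    return out[:32], out[32:]

def kdf_chain(chain_key):
    # Hash ratchet step: advance the chain and emit a message key.
    next_ck = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    msg_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return next_ck, msg_key

root = hashlib.sha256(b"secret from the initial handshake").digest()

# Bob publishes a ratchet public key; Alice generates hers and performs
# a DH ratchet step, seeding her sending chain.
b_priv, b_pub = dh_keypair()
a_priv, a_pub = dh_keypair()
root_a, send_chain = kdf_root(root, pow(b_pub, a_priv, P))

# Alice's sending hash ratchet now advances once per message.
send_chain, mk1 = kdf_chain(send_chain)
send_chain, mk2 = kdf_chain(send_chain)

# Bob, on receiving Alice's new public key, performs the same DH ratchet
# step and derives the identical receiving chain, hence the same keys.
root_b, recv_chain = kdf_root(root, pow(a_pub, b_priv, P))
recv_chain, rk1 = kdf_chain(recv_chain)
recv_chain, rk2 = kdf_chain(recv_chain)
assert (mk1, mk2) == (rk1, rk2)
```

When Bob later replies with a fresh ratchet key of his own, the same root-key mixing would run again, which is the "self-healing" step: an attacker who stole one chain key loses access as soon as a DH exchange they did not intercept refreshes the root.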
Applications The following is a list of applications that use the Double Ratchet Algorithm or a custom implementation of it: ChatSecure Conversations Cryptocat Facebook Messenger G Data Secure Chat Gajim GNOME Fractal Google Allo Haven Pond Element Signal Silent Phone Skype Viber WhatsApp Wire Notes References Literature External links Specification by Open Whisper Systems "Advanced cryptographic ratcheting", abstract description by Moxie Marlinspike Olm: implementation in C++ under the Apache license Cryptographic algorithms
65497549
https://en.wikipedia.org/wiki/Ivy%20Hooks
Ivy Hooks
Ivy Fay Hooks (born November 17, 1941) is an American mathematician and engineer who worked for the National Aeronautics and Space Administration (NASA). She joined NASA after graduating from the University of Houston with a master's degree in mathematics and physics in 1965. Her first assignment was with the Apollo program, where she worked on the modeling of lighting on the Moon and the dynamics of the launch escape system, among other projects. She then went on to play an important role in the design and development of the Space Shuttle, being one of only two women engineers assigned to the original design team for the orbiter. Early life Ivy Fay Hooks was born in Houston, Texas, on November 17, 1941, and grew up in Livingston, Texas. She was named after Ivy Parker, one of the founding members of the Society of Women Engineers, and a close friend of her parents. She graduated from Livingston High School and entered Southwestern University in Georgetown, Texas, where she studied mathematics. In June after her first year, she got married, and moved to Lufkin, Texas, where her husband was a reporter for a local newspaper. She then went to Austin College. She entered the University of Houston in her junior year. There, she also became interested in physics. She earned a Bachelor of Science degree in mathematics in 1963. NASA career Project Apollo Jobs for women with mathematics degrees were not common in the early 1960s, and she did not want to become a teacher, so she went to graduate school, where she worked towards her master's degree. A friend's mother drew her attention to an article in the newspaper that said "NASA's looking for women scientists and engineers." When she went for an interview at the Manned Spaceflight Center, she was unimpressed with the building, which was a disused box factory with no windows, and the people, who she thought were strange. However, she met a woman whose husband worked at NASA and was looking for people. 
A second interview was arranged, and this job was more to her liking. She was hired as an "aerospace technologist", which annoyed some of the engineers who had the same job classification. Hooks's first assignment was modeling lighting on the Moon. This was of great importance at the time, as it was vital to know what the view would look like when astronauts attempted to land the lunar module. Most engineers were not much interested; it was not something on the college syllabus. She found that the subject had been researched by Russian physicists in the 1920s, who were interested in the lunar albedo, and they had created a full mathematical treatment of the subject. There were very few women working for NASA at the time in technical roles, and the men often played cruel practical jokes. She became fed up with the behavior of two of the men in her group, and decided to switch to another. She went to work for Humboldt C. Mandell, Jr., who was working on developing cost models. These were projections far into the future. She returned to the University of Houston, where she completed her studies, and was awarded a Master of Science degree in mathematics in 1965. Hooks studied the dynamics of the launch escape system, and the effects of jet plumes coming from the Lunar Module's ascent propulsion system and descent propulsion system. She also investigated the dynamics of the Apollo flight systems. Space Shuttle In April 1969 she became one of two women engineers assigned to the original design team for the Space Shuttle Orbiter. She recalled that Max Faget walked into the room carrying a balsa wood model airplane and declared: "We’re going to build America’s next spacecraft. And it’s going to launch like a spacecraft, it’s going to land like a plane." Hooks studied various configurations for such a spacecraft. 
She was particularly involved with the analysis and management of the mechanism for the separation of the Space Shuttle Orbiter from the Shuttle Carrier Aircraft for the Approach and Landing Tests. Another area in which Hooks made a significant contribution was the means by which the Space Shuttle Solid Rocket Boosters separated from the Space Shuttle external tank. For her work on the design of the Space Shuttle, Hooks received the Arthur S. Flemming Award in 1978, and the NASA Exceptional Service Medal in 1981. Software Foreseeing that computers would become more important, Hooks headed the Aerodynamics Systems Analysis Section of the Aerodynamics Branch in the Engineering Analysis Division from 1973 to 1977. She headed the Spacecraft Software Division in Data Systems and Analysis Directorate from 1978 to 1980, and was the Software Manager in the Spacecraft Software Division in the Data Systems and Analysis Directorate from 1980 to 1981. She was manager of the Shuttle Data Office in the Space Shuttle Program Office from 1981 to 1982, acting head of the Integration and Operations Section, Flight Software Branch in the Spacecraft Software Division from 1982 to 1983, and Chief of the Flight Software Branch of Spacecraft Software Division in the Mission Support Directorate from 1982 to 1984. Later life Hooks left NASA in 1984, and joined Barrios Technology, an aerospace contractor. In 1986 she became President and Chief Executive Officer of Bruce G. Jackson and Associates. She later founded her own software company, Compliance Automation. Notes External links Hooks raw interview (video) 1941 births Living people American engineers American women engineers NASA people Recipients of the NASA Exceptional Service Medal Southwestern University alumni University of Houston alumni 21st-century American women
37665417
https://en.wikipedia.org/wiki/PPSSPP
PPSSPP
PPSSPP (an acronym for "PlayStation Portable Simulator Suitable for Playing Portably") is a free and open-source PSP emulator for Windows, macOS, Linux, iOS, Android, Nintendo Wii U, Nintendo Switch, BlackBerry 10, MeeGo, Pandora, Xbox Series X, Xbox Series S, and Symbian with an increased focus on speed and portability. It was first released to the public on November 1, 2012, licensed under the GNU GPLv2 or later. The PPSSPP project was created by Henrik Rydgård, one of the co-founders of the Dolphin emulator. Features and development PPSSPP supports save states and dynamic recompilation (JIT), and has rudimentary support for ad hoc wireless networking. To decode PSP multimedia data PPSSPP uses the FFmpeg software library, which was enhanced to enable it to handle Sony's proprietary ATRAC3plus audio format as used by the PSP. PPSSPP offers graphical features that are enhancements over the PSP's capabilities, such as higher screen resolutions, antialiasing, image scaling, support for shaders, and linear and anisotropic filtering. The ports of PPSSPP for mobile devices offer additional features specific to each platform, such as 'immersive mode' for Android devices, support for the multimedia buttons on Symbian devices, and screen stretching on BlackBerry 10 devices to support square screens. All ports of PPSSPP for mobile devices support the use of accelerometers, keyboards and gamepads as input devices. PPSSPP also supports the Vulkan API, which was added in the v1.5.4 release and is intended to provide a substantial performance boost on supported devices. Portability Since its inception, PPSSPP has had a focus on portability with support for multiple architectures and operating systems. While it initially supported only Microsoft Windows and Android, support quickly grew to include BlackBerry 10, Symbian, macOS, Linux and later iOS. 
The source code also unofficially supports a wide variety of operating systems and platforms, including Raspberry Pi, Loongson, Maemo, Universal Windows Platform (Microsoft Windows 10 Mobile, Xbox One, Microsoft Windows 10 (X86_32, X86_64, ARM and ARM64)), MeeGo Harmattan and Pandora. There was at one stage a port for Xbox 360. Although the port was abandoned, the support code remains, offering support for big-endian CPUs and DirectX-compatible GPUs. To aid portability, two cross-platform development libraries, SDL and Qt, can be used in addition to the non-portable BlackBerry, Android and Win32 interfaces. The Qt frontend was instrumental in adding support for platforms such as Symbian. The Qt frontend can support all officially supported platforms and is the suggested alternative if no native interface exists. Compatibility As of March 2017, 984 games were fully playable in PPSSPP, 67 games booted to an in-game state without being fully playable, and 4 games could only reach the main menu or introduction sequence. As of July 2020, almost all games are playable in the PPSSPP emulator. See also List of PSP emulators References External links Android emulation software Cross-platform software Free and open-source Android software Free emulation software Free software programmed in C++ Free software projects Free software that uses SDL Linux emulation software MacOS emulation software PlayStation Portable emulators Portable software Windows emulation software
37314
https://en.wikipedia.org/wiki/Cypherpunk
Cypherpunk
A cypherpunk is any individual advocating widespread use of strong cryptography and privacy-enhancing technologies as a route to social and political change. Originally communicating through the Cypherpunks electronic mailing list, informal groups aimed to achieve privacy and security through proactive use of cryptography. Cypherpunks have been engaged in an active movement since at least the late 1980s. History Before the mailing list Until about the 1970s, cryptography was mainly practiced in secret by military or spy agencies. However, that changed when two publications brought it into public awareness: the US government publication of the Data Encryption Standard (DES), a block cipher which became very widely used; and the first publicly available work on public-key cryptography, by Whitfield Diffie and Martin Hellman. The technical roots of Cypherpunk ideas have been traced back to work by cryptographer David Chaum on topics such as anonymous digital cash and pseudonymous reputation systems, described in his paper "Security without Identification: Transaction Systems to Make Big Brother Obsolete" (1985). In the late 1980s, these ideas coalesced into something like a movement. Etymology and the Cypherpunks mailing list In late 1992, Eric Hughes, Timothy C. May and John Gilmore founded a small group that met monthly at Gilmore's company Cygnus Solutions in the San Francisco Bay Area, and was humorously termed cypherpunks by Jude Milhon at one of the first meetings - derived from cipher and cyberpunk. In November 2006, the word was added to the Oxford English Dictionary. The Cypherpunks mailing list was started in 1992, and by 1994 had 700 subscribers. At its peak, it was a very active forum with technical discussion ranging over mathematics, cryptography, computer science, political and philosophical discussion, personal arguments and attacks, etc., with some spam thrown in. 
An email from John Gilmore reports an average of 30 messages a day from December 1, 1996 to March 1, 1999, and suggests that the number was probably higher earlier. The number of subscribers is estimated to have reached 2,000 in 1997. In early 1997, Jim Choate and Igor Chudov set up the Cypherpunks Distributed Remailer, a network of independent mailing list nodes intended to eliminate the single point of failure inherent in a centralized list architecture. At its peak, the Cypherpunks Distributed Remailer included at least seven nodes. By mid-2005, al-qaeda.net ran the only remaining node. In mid-2013, following a brief outage, the al-qaeda.net node's list software was changed from Majordomo to GNU Mailman and subsequently the node was renamed to cpunks.org. The CDR architecture is now defunct, though the list administrator stated in 2013 that he was exploring a way to integrate this functionality with the new mailing list software. For a time, the cypherpunks mailing list was a popular tool with mailbombers, who would subscribe a victim to the mailing list in order to cause a deluge of messages to be sent to him or her. (This was usually done as a prank, in contrast to the style of terrorist referred to as a mailbomber.) This prompted the mailing list sysop(s) to institute a reply-to-subscribe system. Approximately two hundred messages a day was typical for the mailing list, divided between personal arguments and attacks, political discussion, technical discussion, and early spam. The cypherpunks mailing list had extensive discussions of the public policy issues related to cryptography and of the politics and philosophy of concepts such as anonymity, pseudonyms, reputation, and privacy. These discussions continue both on the remaining node and elsewhere as the list has become increasingly moribund. Events such as the GURPS Cyberpunk raid lent weight to the idea that private individuals needed to take steps to protect their privacy. 
In its heyday, the list discussed public policy issues related to cryptography, as well as more practical nuts-and-bolts mathematical, computational, technological, and cryptographic matters. The list had a range of viewpoints and there was probably no completely unanimous agreement on anything. The general attitude, though, definitely put personal privacy and personal liberty above all other considerations. Early discussion of online privacy The list was discussing questions about privacy, government monitoring, corporate control of information, and related issues in the early 1990s that did not become major topics for broader discussion until at least ten years later. Some list participants were highly radical on these issues. Those wishing to understand the context of the list might refer to the history of cryptography; in the early 1990s, the US government considered cryptography software a munition for export purposes. (PGP source code was published as a paper book to bypass these regulations and demonstrate their futility.) In 1992, a deal between NSA and SPA allowed the export of cryptography based on 40-bit RC2 and RC4, which was considered relatively weak (especially after SSL was created, there were many contests to break it). The US government had also tried to subvert cryptography through schemes such as Skipjack and key escrow. It was also not widely known that all communications were logged by government agencies (which would later be revealed during the NSA and AT&T scandals), though this was taken as an obvious axiom by list members. The original cypherpunk mailing list, and the first list spin-off, coderpunks, were originally hosted on John Gilmore's toad.com, but after a falling out with the sysop over moderation, the list was migrated to several cross-linked mail-servers in what was called the "distributed mailing list." The coderpunks list, open by invitation only, existed for a time. 
Coderpunks took up more technical matters and had less discussion of public policy implications. There are several lists today that can trace their lineage directly to the original Cypherpunks list: the cryptography list ([email protected]), the financial cryptography list ([email protected]), and a small group of closed (invitation-only) lists as well. Toad.com continued to run with the existing subscriber list (those who didn't unsubscribe), and was mirrored on the new distributed mailing list, but messages from the distributed list didn't appear on toad.com. As the list faded in popularity, so too did the number of cross-linked subscription nodes. To some extent, the cryptography list acts as a successor to cypherpunks; it has many of the people and continues some of the same discussions. However, it is a moderated list, considerably less zany and somewhat more technical. A number of current systems in use trace to the mailing list, including Pretty Good Privacy, /dev/random in the Linux kernel (the actual code has been completely reimplemented several times since then) and today's anonymous remailers. Main principles The basic ideas can be found in A Cypherpunk's Manifesto (Eric Hughes, 1993): "Privacy is necessary for an open society in the electronic age. ... We cannot expect governments, corporations, or other large, faceless organizations to grant us privacy ... We must defend our own privacy if we expect to have any. ... Cypherpunks write code. We know that someone has to write software to defend privacy, and ... we're going to write it." Some are or were quite senior people at major hi-tech companies and others are well-known researchers (see list with affiliations below). The first mass media discussion of cypherpunks was in a 1993 Wired article by Steven Levy titled Crypto Rebels. The three masked men on the cover of that edition of Wired were prominent cypherpunks Tim May, Eric Hughes and John Gilmore. 
Later, Levy wrote a book, Crypto: How the Code Rebels Beat the Government – Saving Privacy in the Digital Age, covering the crypto wars of the 1990s in detail. "Code Rebels" in the title is almost synonymous with cypherpunks. The term cypherpunk is mildly ambiguous. In most contexts it means anyone advocating cryptography as a tool for social change, social impact and expression. However, it can also be used to mean a participant in the Cypherpunks electronic mailing list described above. The two meanings obviously overlap, but they are by no means synonymous. Documents exemplifying cypherpunk ideas include Timothy C. May's The Crypto Anarchist Manifesto (1992) and The Cyphernomicon (1994), as well as A Cypherpunk's Manifesto. Privacy of communications A very basic cypherpunk issue is privacy in communications and data retention. John Gilmore said he wanted "a guarantee -- with physics and mathematics, not with laws -- that we can give ourselves real privacy of personal communications." Such guarantees require strong cryptography, so cypherpunks are fundamentally opposed to government policies attempting to control the usage or export of cryptography, which remained an issue throughout the late 1990s. The Cypherpunk Manifesto stated "Cypherpunks deplore regulations on cryptography, for encryption is fundamentally a private act." This was a central issue for many cypherpunks. Most were passionately opposed to various government attempts to limit cryptography — export laws, promotion of limited key length ciphers, and especially escrowed encryption. Anonymity and pseudonyms The questions of anonymity, pseudonymity and reputation were also extensively discussed. Arguably, the possibility of anonymous speech and publication is vital for an open society and genuine freedom of speech — this is the position of most cypherpunks. That the Federalist Papers were originally published under a pseudonym is a commonly cited example. 
Privacy and self-revelation A whole set of issues around privacy and the scope of self-revelation were perennial topics on the list. Consider a young person who gets "carded" when he or she enters a bar and produces a driver's license as proof of age. The license includes things like full name and home address; these are completely irrelevant to the question of legal drinking. However, they could be useful to a lecherous member of bar staff who wants to stalk a hot young customer, or to a thief who cleans out the apartment when an accomplice in the bar tells him you look well off and are not at home. Is a government that passes a drinking age law morally obligated to create a privacy-protecting form of ID to go with it, one that only shows you can legally drink without revealing anything else about you? In the absence of that, is it ethical to acquire a bogus driver's license to protect your privacy? For most cypherpunks, the answer to both those questions is "Yes, obviously!" What about a traffic cop who asks for your driver's license and vehicle registration? Should there be some restrictions on what he or she learns about you? Or a company that issues a frequent flier or other reward card, or requires registration to use its web site? Or cards for toll roads that potentially allow police or others to track your movements? Or cameras that record license plates or faces on a street? Or phone company and Internet records? In general, how do we manage privacy in an electronic age? Cypherpunks naturally consider suggestions of various forms of national uniform identification card too dangerous; the risks of abuse far outweigh any benefits. Censorship and monitoring In general, cypherpunks opposed the censorship and monitoring from government and police. 
In particular, the US government's Clipper chip scheme for escrowed encryption of telephone conversations (encryption supposedly secure against most attackers, but breakable by government) was seen as anathema by many on the list. This was an issue that provoked strong opposition and brought many new recruits to the cypherpunk ranks. List participant Matt Blaze found a serious flaw in the scheme, helping to hasten its demise. Steven Schear first suggested the warrant canary in 2002 to thwart the secrecy provisions of court orders and national security letters. Warrant canaries have since gained commercial acceptance. Hiding the act of hiding An important set of discussions concerns the use of cryptography in the presence of oppressive authorities. As a result, Cypherpunks have discussed and improved steganographic methods that hide the use of crypto itself, or that allow interrogators to believe that they have forcibly extracted hidden information from a subject. For instance, Rubberhose was a tool that partitioned and intermixed secret data on a drive with fake secret data, each of which was accessed via a different password. Interrogators, having extracted a password, are led to believe that they have indeed unlocked the desired secrets, whereas in reality the actual data is still hidden. In other words, even its presence is hidden. Likewise, cypherpunks have also discussed under what conditions encryption may be used without being noticed by network monitoring systems installed by oppressive regimes. Activities As the Manifesto says, "Cypherpunks write code"; the notion that good ideas need to be implemented, not just discussed, is very much part of the culture of the mailing list. John Gilmore, whose site hosted the original cypherpunks mailing list, wrote: "We are literally in a race between our ability to build and deploy technology, and their ability to build and deploy laws and treaties. 
Neither side is likely to back down or wise up until it has definitively lost the race." Software projects Anonymous remailers such as the Mixmaster Remailer were almost entirely a cypherpunk development. Among the other projects they have been involved in were PGP for email privacy, FreeS/WAN for opportunistic encryption of the whole net, Off-the-record messaging for privacy in Internet chat, and the Tor project for anonymous web surfing. Hardware In 1998, the Electronic Frontier Foundation, with assistance from the mailing list, built a $200,000 machine that could brute-force a Data Encryption Standard key in a few days. The project demonstrated that DES was, without question, insecure and obsolete, in sharp contrast to the US government's recommendation of the algorithm. Expert panels Cypherpunks also participated, along with other experts, in several reports on cryptographic matters. One such paper was "Minimal Key Lengths for Symmetric Ciphers to Provide Adequate Commercial Security". It suggested 75 bits was the minimum key size to allow an existing cipher to be considered secure and kept in service. At the time, the Data Encryption Standard with 56-bit keys was still a US government standard, mandatory for some applications. Other papers were critical analyses of government schemes. "The Risks of Key Recovery, Key Escrow, and Trusted Third-Party Encryption" evaluated escrowed encryption proposals. "Comments on the Carnivore System Technical Review" looked at an FBI scheme for monitoring email. Cypherpunks provided significant input to the 1996 National Research Council report on encryption policy, Cryptography's Role In Securing the Information Society (CRISIS). This report, commissioned by the U.S. Congress in 1993, was developed via extensive hearings across the nation from all interested stakeholders, by a committee of talented people. It recommended a gradual relaxation of the existing U.S. government restrictions on encryption. 
Like many such study reports, its conclusions were largely ignored by policy-makers. Later events such as the final rulings in the cypherpunks lawsuits forced a more complete relaxation of the unconstitutional controls on encryption software. Lawsuits Cypherpunks have filed a number of lawsuits, mostly suits against the US government alleging that some government action is unconstitutional. Phil Karn sued the State Department in 1994 over cryptography export controls after they ruled that, while the book Applied Cryptography could legally be exported, a floppy disk containing a verbatim copy of code printed in the book was legally a munition and required an export permit, which they refused to grant. Karn also appeared before both House and Senate committees looking at cryptography issues. Daniel J. Bernstein, supported by the EFF, also sued over the export restrictions, arguing that preventing publication of cryptographic source code is an unconstitutional restriction on freedom of speech. He won, effectively overturning the export law. See Bernstein v. United States for details. Peter Junger also sued on similar grounds, and won. Civil disobedience Cypherpunks encouraged civil disobedience, in particular against US law on the export of cryptography. Until 1997, cryptographic code was legally a munition and fell under ITAR, and the key length restrictions in the EAR were not removed until 2000. 
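The key sizes at issue in these disputes (40-bit export ciphers, 56-bit DES, the report's 75-bit floor) differ enormously in brute-force cost, as a quick back-of-the-envelope calculation shows; the search rate below is roughly that of the EFF's 1998 DES cracker and is an assumption for illustration, not a measured figure:

```python
# Exhaustive key search doubles in cost with every added key bit.
RATE = 9e10  # keys per second, roughly the EFF "Deep Crack" rate (assumed)

for bits in (40, 56, 75):
    keyspace = 2 ** bits
    seconds = keyspace / RATE  # worst-case exhaustive search time
    print(f"{bits}-bit key: {keyspace:.3e} keys, "
          f"{seconds / 86400:.2f} days at {RATE:.0e} keys/s")

# A 56-bit DES key falls in days at this rate, while the 75-bit minimum
# is 2**19 (about 500,000) times more work than 56 bits.
assert 2 ** 75 // 2 ** 56 == 2 ** 19
```

At this assumed rate a 40-bit export key falls in seconds and 56-bit DES in about nine days, which is consistent with the EFF machine described above, while 75 bits would take millennia.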
In 1995 Adam Back wrote a version of the RSA algorithm for public-key cryptography in three lines of Perl and suggested people use it as an email signature file:

#!/bin/perl -sp0777i<X+d*lMLa^*lN%0]dsXx++lMlN/dsM0<j]dsj
$/=unpack('H*',$_);$_=`echo 16dio\U$k"SK$/SM$n\EsN0p[lN*1
lK[d2%Sa2/d0$^Ixp"|dc`;s/\W//g;$_=pack('H*',/((..)*)$/)

Vince Cate put up a web page that invited anyone to become an international arms trafficker; every time someone clicked on the form, an export-restricted item — originally PGP, later a copy of Back's program — would be mailed from a US server to one in Anguilla. Cypherpunk fiction In Neal Stephenson's novel Cryptonomicon many characters are on the "Secret Admirers" mailing list. This is fairly obviously based on the cypherpunks list, and several well-known cypherpunks are mentioned in the acknowledgements. Much of the plot revolves around cypherpunk ideas; the leading characters are building a data haven which will allow anonymous financial transactions, and the book is full of cryptography. But, according to the author, the book's title is — in spite of its similarity — not based on the Cyphernomicon, an online cypherpunk FAQ document. Legacy Cypherpunk achievements would later also be used in the Canadian e-wallet, the MintChip, and in the creation of bitcoin. It was an inspiration for CryptoParty decades later to such an extent that the Cypherpunk Manifesto is quoted at the header of its Wiki, and Eric Hughes delivered the keynote address at the Amsterdam CryptoParty on 27 August 2012. Notable cypherpunks Cypherpunks list participants included many notable computer industry figures. Most were list regulars, although not all would call themselves "cypherpunks". 
The following is a list of noteworthy cypherpunks and their achievements:
Marc Andreessen: co-founder of Netscape, which invented SSL
Jacob Appelbaum: former Tor Project employee, political advocate
Julian Assange: WikiLeaks founder, deniable cryptography inventor, journalist; co-author of Underground; author of Cypherpunks: Freedom and the Future of the Internet; member of the International Subversives. Assange has stated that he joined the list in late 1993 or early 1994. An archive of his cypherpunks mailing list posts is at the Mailing List Archives.
Derek Atkins: computer scientist, computer security expert, and one of the people who factored RSA-129
Adam Back: inventor of Hashcash and of NNTP-based Eternity networks; co-founder of Blockstream
Jim Bell: author of Assassination Politics
Steven Bellovin: Bell Labs researcher; later Columbia professor; Chief Technologist for the US Federal Trade Commission in 2012
Matt Blaze: Bell Labs researcher; later professor at University of Pennsylvania; found flaws in the Clipper Chip
Eric Blossom: designer of the Starium cryptographically secured mobile phone; founder of the GNU Radio project
Jon Callas: technical lead on OpenPGP specification; co-founder and Chief Technical Officer of PGP Corporation; co-founder with Philip Zimmermann of Silent Circle
Bram Cohen: creator of BitTorrent
Matt Curtin: founder of Interhack Corporation; first faculty advisor of the Ohio State University Open Source Club; lecturer at Ohio State University
Hugh Daniel (deceased): former Sun Microsystems employee; manager of the FreeS/WAN project (an early and important freeware IPsec implementation)
Suelette Dreyfus: deniable cryptography co-inventor, journalist, co-author of Underground
Hal Finney (deceased): cryptographer; main author of PGP 2.0 and the core crypto libraries of later versions of PGP; designer of RPOW
Eva Galperin: malware researcher and security advocate; Electronic Frontier Foundation activist
John Gilmore*: Sun Microsystems' fifth employee; co-founder of the Cypherpunks and the Electronic Frontier Foundation; project leader for FreeS/WAN
Mike Godwin: Electronic Frontier Foundation lawyer; electronic rights advocate
Ian Goldberg*: professor at University of Waterloo; co-designer of the off-the-record messaging protocol
Rop Gonggrijp: founder of XS4ALL; co-creator of the Cryptophone
Matthew D. Green: influential in the development of the Zcash system
Sean Hastings: founding CEO of Havenco; co-author of the book God Wants You Dead
Johan Helsingius: creator and operator of Penet remailer
Nadia Heninger: assistant professor at University of Pennsylvania; security researcher
Robert Hettinga: founder of the International Conference on Financial Cryptography; originator of the idea of financial cryptography as an applied subset of cryptography
Mark Horowitz: author of the first PGP key server
Tim Hudson: co-author of SSLeay, the precursor to OpenSSL
Eric Hughes: founding member of Cypherpunks; author of A Cypherpunk's Manifesto
Peter Junger (deceased): law professor at Case Western Reserve University
Paul Kocher: president of Cryptography Research, Inc.; co-author of the SSL 3.0 protocol
Ryan Lackey: co-founder of HavenCo, the world's first data haven
Brian LaMacchia: designer of XKMS; research head at Microsoft Research
Ben Laurie: founder of The Bunker; core OpenSSL team member; Google engineer
Jameson Lopp: software engineer, CTO of Casa
Morgan Marquis-Boire: researcher, security engineer, and privacy activist
Matt Thomlinson (phantom): security engineer; leader of Microsoft's security efforts on Windows, Azure and Trustworthy Computing; CISO at Electronic Arts
Timothy C. May (deceased): former Assistant Chief Scientist at Intel; author of A Crypto Anarchist Manifesto and the Cyphernomicon; a founding member of the Cypherpunks mailing list
Jude Milhon (deceased; aka "St. Jude"): a founding member of the Cypherpunks mailing list, credited with naming the group; co-creator of Mondo 2000 magazine
Vincent Moscaritolo: founder of Mac Crypto Workshop; Principal Cryptographic Engineer for PGP Corporation; co-founder of Silent Circle and 4th-A Technologies, LLC
Sameer Parekh: former CEO of C2Net and co-founder of the CryptoRights Foundation human rights non-profit
Vipul Ved Prakash: co-founder of Sense/Net; author of Vipul's Razor; founder of Cloudmark
Runa Sandvik: Tor developer, political advocate
Len Sassaman (deceased): maintainer of the Mixmaster Remailer software; researcher at Katholieke Universiteit Leuven; biopunk
Steven Schear: creator of the warrant canary; street performer protocol; founding member of the International Financial Cryptographer's Association and GNURadio; team member at Counterpane; former Director at data security company Cylink and MojoNation
Bruce Schneier*: well-known security author; founder of Counterpane
Richard Stallman: founder of Free Software Foundation, privacy advocate
Nick Szabo: inventor of smart contracts; designer of bit gold, a precursor to Bitcoin
Wei Dai: creator of b-money, a cryptocurrency system; co-proposed the VMAC message authentication algorithm. The smallest subunit of Ether, the wei, is named after him.
Zooko Wilcox-O'Hearn: DigiCash and MojoNation developer; founder of Zcash; co-designer of Tahoe-LAFS
Jillian C. York: Director of International Freedom of Expression at the Electronic Frontier Foundation (EFF)
John Young: anti-secrecy activist and co-founder of Cryptome
Philip Zimmermann: original creator of PGP v1.0 (1991); co-founder of PGP Inc. (1996); co-founder with Jon Callas of Silent Circle
* indicates someone mentioned in the acknowledgements of Stephenson's Cryptonomicon.
References
Further reading
Andy Greenberg: This Machine Kills Secrets: How WikiLeakers, Cypherpunks, and Hacktivists Aim to Free the World's Information. Dutton Adult, 2012.
Punk Internet privacy
16106348
https://en.wikipedia.org/wiki/Turku%20Trojans
Turku Trojans
Turku Trojans is one of the oldest American football teams in Finland, established in 1982. The Trojans play in the Maple League (Vaahteraliiga in Finnish), operated by the American Football Association of Finland. The team became Finnish champions in 2003, made Maple Bowl appearances in 1984, 1987, 1992, 1993, 1994, 1998, 1999, 2002, 2004 and 2014, and finished third in 1991, 1997, 2000 and 2001. After the 2012 season the Trojans decided to voluntarily drop one level down and play the next season in Division I. In 2013, the Trojans won all the regular season games, the semifinal game and the Spaghetti Bowl. After a perfect season the Trojans were granted a Maple League license, and this season (2014) the team will again play at the highest level. Lineup Roster 2014 References External links Official site Sport in Turku American football teams in Finland 1982 establishments in Finland American football teams established in 1982
10123245
https://en.wikipedia.org/wiki/I/O%20scheduling
I/O scheduling
Input/output (I/O) scheduling is the method that computer operating systems use to decide in which order I/O operations will be submitted to storage volumes. I/O scheduling is sometimes called disk scheduling.

Purpose
I/O scheduling usually has to work with hard disk drives that have long access times for requests placed far away from the current position of the disk head (this operation is called a seek). To minimize the effect this has on system performance, most I/O schedulers implement a variant of the elevator algorithm, which reorders the incoming randomly ordered requests so that the associated data is accessed with minimal arm/head movement.

I/O schedulers can have many purposes depending on the goals; common purposes include the following:
To minimize time wasted by hard disk seeks
To prioritize certain processes' I/O requests
To give a share of the disk bandwidth to each running process
To guarantee that certain requests will be issued before a particular deadline

Disciplines
Common scheduling disciplines include the following:
Random scheduling (RSS)
First In, First Out (FIFO), also known as First Come First Served (FCFS)
Last In, First Out (LIFO)
Shortest seek first, also known as Shortest Seek / Service Time First (SSTF)
Elevator algorithm, also known as SCAN (including its variants, C-SCAN, LOOK, and C-LOOK)
N-Step-SCAN: SCAN of N records at a time
FSCAN: N-Step-SCAN where N equals queue size at the start of the SCAN cycle
Completely Fair Queuing (CFQ) on Linux
Anticipatory scheduling
Noop scheduler
Deadline scheduler
mClock scheduler
Budget Fair Queueing (BFQ) scheduler
Kyber
NONE (used for NVM Express drives)
mq-deadline (used for SSD SATA drives)
cfq, bfq and bfq-mq (used for HDD drives)

See also
Tagged Command Queuing (TCQ)
Native Command Queuing (NCQ)

References

Further reading
Linux I/O schedulers, from Ubuntu Wiki
Operating Systems: Three Easy Pieces, by Remzi H. Arpaci-Dusseau and Andrea C. Arpaci-Dusseau. Arpaci-Dusseau Books, 2014. Relevant chapter: Hard Disk Drives
Love, R. (2005). Linux Kernel Development, Novell Press.
Operating Systems: Internals and Design Principles, seventh edition, by William Stallings.

External links
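The elevator (SCAN) discipline described above can be sketched in a few lines of Python. This is an illustrative toy, not how any production kernel scheduler is implemented: it only computes the service order for a batch of pending cylinder requests (effectively the LOOK variant, since it ignores the travel to the disk edge), and the function and variable names are our own.

```python
def scan_schedule(requests, head, direction="up"):
    """Order pending requests using the elevator (SCAN/LOOK) discipline.

    The head sweeps in one direction, servicing requests in cylinder
    order, then reverses and services the remainder on the way back.
    `requests` is a list of cylinder numbers; `head` is the current
    head position.
    """
    lower = sorted(r for r in requests if r < head)   # requests behind the head
    upper = sorted(r for r in requests if r >= head)  # requests ahead of the head
    if direction == "up":
        # Sweep up through `upper`, then back down through `lower`.
        return upper + lower[::-1]
    return lower[::-1] + upper

# Classic textbook example: head at cylinder 53, sweeping upward.
pending = [98, 183, 37, 122, 14, 124, 65, 67]
print(scan_schedule(pending, head=53))
# [65, 67, 98, 122, 124, 183, 37, 14]
```

Compared with FIFO, which would seek back and forth across the platter, this ordering visits each side of the head in a single pass, which is exactly the seek-minimizing behavior the elevator algorithm is designed for.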
43965881
https://en.wikipedia.org/wiki/Ellis%20Horowitz
Ellis Horowitz
Ellis Horowitz is an American computer scientist and Professor of Computer Science and Electrical Engineering at the University of Southern California (USC). Horowitz is best known for his computer science textbooks on data structures and algorithms, co-authored with Sartaj Sahni. At USC, Horowitz was chairman of the Computer Science Department from 1990 to 1999. During his tenure he significantly improved relations between Computer Science and the Information Sciences Institute (ISI), hired senior faculty, and established the department's first industrial advisory board. From 1983 to 1993 he ran Quality Software Products, which he co-founded with Lawrence Flon; the company designed and built UNIX application software. Their products included two spreadsheet programs (Q-calc and eXclaim), a project management system (MasterPlan), and a floating license server (Maitre D). The company was sold to Island Graphics.

Education
B.S. (Mathematics), Brooklyn College, 1964.
M.S. (Computer Science), University of Wisconsin–Madison, 1967.
Ph.D. (Computer Science), University of Wisconsin–Madison, 1969.

Peer-to-peer systems
Horowitz has been actively engaged as an expert witness in numerous peer-to-peer file sharing legal cases. Generally, he has represented the copyright owner, including individual record companies, the Recording Industry Association of America, and the Motion Picture Association of America. His testimony has been cited numerous times in various decisions and orders: he was cited in the Arista Records LLC v. Lime Group LLC case, in RIAA versus MP3tunes, and in several BitTorrent cases, including MPAA versus isoHunt. More recently, Horowitz has represented Universal Music Group (UMG) and others against the music streaming service Grooveshark.com. Summary judgment was awarded to UMG, with the decision citing Horowitz's expert reports.
Distance education
In 1999, Horowitz was appointed Director of Information Technology and Distance Education in USC's Viterbi School of Engineering. His responsibilities included the school's satellite-based closed-circuit instructional network. He renamed the organization USC's Distance Education Network (DEN) and moved course delivery from satellite to the Web. DEN currently offers numerous graduate-level courses leading to master's degrees, primarily in computer science and electrical engineering. In 2000 he received an outstanding distance education educator award from R1edu.org.

Selected publications
Ellis Horowitz has published numerous technical articles and several books, including: 1975. 1984. 2007.

References

External links
1944 births Living people People from New York City Brooklyn College alumni University of Wisconsin–Madison College of Letters and Science alumni American computer scientists Computer science writers University of Southern California faculty American textbook writers
1633340
https://en.wikipedia.org/wiki/Whosarat.com
Whosarat.com
Whosarat.com is a website which, in its words, allows individuals to "post, share and request any and all information that has been made public at some point to at least 1 person of the public prior to posting it on this site pertaining to local, state and federal Informants and Law Enforcement Officers." The site's extensive disclaimer notes in part that "All posts made by users should be considered as inaccurate opinions unless backed by official documents." It urges members to "Please post informants that are involved with non-violent crimes only." The Department of Homeland Security is said to have issued an advisory about the site, warning law enforcement officers not even to view the site: "Visiting the site could result in the compromise of government IP addresses. Searching the site for a particular name could result in that name being cross-indexed to the IP address of the computer used to make the inquiry. Searching for the names of officers or informants could compromise those individual's identities. Any website is capable of collecting IP address and search information from visitors, but this site is remarkable because it makes visitor information public." The site believes it is protected by legal precedents set in connection with another website, charmichaelcase, which also posts information about informants. See also Stop Snitchin' References Website rouses informants' fear, investigators' ire, Kathleen Burge, The Boston Globe, March 21, 2005 (does not identify the site but names Anthony Capone, spokesman and domain registrant for whosarat.com, as "a spokesman for the site") Ethics Scoreboard External links Who's a Rat? Website SIS Bulletin #137 Whosarat Website Advisory Cryptome website: text of what is described as a Department of Homeland Security advisory https://www.facebook.com/Whosaratcom-502978833196992/ Cop Blaster Snitch List similar to Who's a Rat? Law enforcement websites Internet properties established in 2004
7760098
https://en.wikipedia.org/wiki/Infor
Infor
Infor is a multinational enterprise software company, headquartered in New York City, United States. Infor focuses on business applications for organizations delivered via cloud computing as a service. Originally focused on software ranging from financial systems and enterprise resource planning (ERP) to supply chain and customer relationship management, in 2010 Infor began to focus on software for industry niches, as well as user-friendly software design. Infor deploys its cloud applications through Amazon Web Services, Azure and various open source software platforms. Infor has acquired over forty other software companies since its 2002 founding as Agilisys, for example acquiring GEAC ERP for US$1 billion, Lawson Software for US$2 billion, and GT Nexus for US$675 million. Infor had around 58 million cloud users as of July 2016, as well as around 90,000 corporate customers overall. Infor is active in 200 countries and territories with around 17,300 employees. Infor's customers have included Bausch & Lomb, Ferrari, Heineken, Wyndham Hotels, Hershey Entertainment and Resorts, Boskalis, EBSCO, Legacy Health, The Madern Group and Best Western International. History Founding and early acquisitions (2002–2009) Infor was founded in June 2002 under the name Agilisys in Malvern, Pennsylvania. With 1,300 customers to start and a focus on enterprise software, the company was constructed through a series of acquisitions led by private equity backers Golden Gate Capital and Summit Partners. In December 2002, Agilisys International acquired Brain AG, followed by the acquisition of Future Three the following June. Agilisys moved its headquarters to Alpharetta in the Atlanta metropolitan area, and in February 2004 acquired Infor Business Solutions, a German company headquartered in Friedrichsthal (Saar), Germany. Daly.commerce was acquired by Agilisys in early 2004. In September 2004, Agilisys changed its name to Infor Global Solutions.
On October 20, 2004, Infor announced the acquisition of Lilly Software Associates and its VISUAL Enterprise (now referred to as VISUAL ERP) suite of products. In February 2005, Infor acquired MAPICS for around US$347 million, and subsequently acquired GEAC ERP in 2006 for US$1 billion. After a lawsuit filed in May 2006 attempted to halt SSA Global's merger with Infor unless certain shareholder demands were met, Infor later completed the purchase and integrated the company. Golden Gate's early push for acquisitions resulted in Infor having 17,500 customers by the summer of 2006. Also that year, Infor began rewriting industry-specific applications into up-to-date .Net and Java-based applications. In June 2009, Infor acquired SoftBrands, a hotel, spa and entertainment software business (which had itself acquired MAI Systems Corporation, along with the LodgingTouch property management system of its Hotel Information Systems division, on August 14, 2006), for US$80 million. New management and move (2010–2013) After several acquisitions, in October 2010, Infor's board of directors hired a new management team composed of four senior executives from Oracle Corporation, all four of whom joined Infor on the same day. Charles Phillips was appointed CEO, Pam Murphy was appointed chief operating officer (COO), and Duncan Angove and Stephan Scholl became co-presidents. Phillips established a new strategic direction as CEO. In order to gain market share against Oracle and SAP, Infor also focused on providing applications with user-friendly interfaces, while continuing to rewrite applications into new code. The company acquired ERP rival Lawson Software for US$2 billion in 2011. Infor launched its Infor10 line of products in September 2011. Within the products were the ION middleware suite and a new user interface.
In the summer of 2012, Infor relocated its headquarters from Alpharetta, Georgia to New York City, citing the availability of engineers and designers in “Silicon Alley,” investment in the technology educational sector by a consortium of universities, and the proximity to customers. With several hundred employees remaining in Georgia, Infor's new "loft-style" offices at 641 Avenue of the Americas in Manhattan were designed with the intent of promoting "collaboration and transparency." Infor acquired Groupe Laurier CIM in August 2012. On December 4, 2012, Infor announced it had acquired Orbis Global, which used a software as a service model for its marketing resource management software. Infor integrated Orbis Global's technology into its own customer experience product suite. "Micro-vertical" focus and pivot to cloud (2013–2014) After the 2010 management changes, Infor began specializing in "micro-verticals," purpose building its software to meet the needs of customers in niche industries. Arguing that similar businesses have similar software needs, CEO Phillips explained in January 2012 that the food and beverage industry, for example, is made up of smaller "micro-verticals" such as bakers, butchers, and breweries, each of which have unique needs. In contrast to many of its competitors, Infor designs software with the intent that companies can use the programs immediately, without consultation or customization. As of 2013, Infor continued to rework industry-specific applications from older software languages into more widely used .Net and Java programming languages. On April 2, 2013, Infor announced the acquisition of CERTPOINT Systems, a software as a service learning technologies company. On May 16, 2013, Infor acquired TDCI, Inc., and by late 2013, Infor had launched 300 new products and hired 1,500 new engineers since 2010. Among other companies, Infor acquired PeopleAnswers in January 2014. 
Advancements in cloud computing enabled Infor to integrate applications into suites for specific industries and sell them as a service. Infor debuted its Infor CloudSuite product, which is delivered exclusively through Amazon Web Services (AWS), in 2014. CloudSuite is designed to integrate software solutions intended for "micro-vertical" industries. Infor also formed partnerships with Red Hat and EnterpriseDB to offer an open-source PostgreSQL stack that year, and Infor's net income in 2014 equaled US$121.7 million. GT Nexus acquisition (2015–2016) The company saw around a 300 percent increase in service bookings in 2015, with revenue that year up around 60 percent. By July 2015, Infor had around 45 million users signed up to use its cloud computing software. On August 11, 2015, Infor announced its upcoming acquisition of GT Nexus for US$675 million. At the time, GT Nexus was the world's largest cloud-based global commerce platform, with US$100 billion of trade in direct goods conducted each year through its network. As Bloomberg explained, the acquired technology allows Infor customers to "leave elaborate business-management systems that run at headquarters untouched while supplying those clients' plants with programs specialized by industry and delivered via cloud computing." Infor acquired Predictix in July 2016, at which point Infor related that it had "more than 58 million users" in the "Infor cloud." Investment by Koch Industries (2017) In February 2017, Koch Equity Development LLC invested US$2.68 billion in Infor for a 66.67% equity ownership stake in the company. At the time, Infor was valued at US$10 billion and carried US$6 billion of debt, most of it publicly traded. Purchase by Koch Industries (2020) In February 2020, Koch Equity Development LLC purchased full equity in Infor, buying out Golden Gate Capital's remaining equity and valuing Infor at US$11 billion.
Software Since CEO Charles Phillips joined Infor in 2010, the company has focused on creating applications that result in a positive user experience, among other factors such as efficiency. Infor deploys its applications primarily on the Amazon Web Services cloud and open source platforms. Programs and divisions Hook and Loop Infor created Hook and Loop in 2012 as an internal creative agency of writers, designers, developers, and filmmakers, with the intent of designing user-friendly and aesthetically pleasing software. The Hook & Loop agency was established in New York City with the initial goal of building a new UI to work across the Infor 10x product line. With the overhaul dubbed "SoHo," the goal was establishing a "holistic" user experience with a unified design. The agency was founded with five people, and by 2014 had 80 employees. Infor Partner Network (IPN) The Infor Partner Network (IPN) provides business services to customers using Infor products. In 2016, Infor stated it had 700 partners through the program, with 125 in North America. The program supports members of the "Infor Partner Network" in various ways, including training, certification, and funds. Products sold through the partner network include Infor CloudSuite Industrial and Infor CloudSuite Business. Partners also sell Infor ION and the social collaboration platform Infor Ming.le, as well as Infor ERP products for distribution and management, for example programs such as Infor SyteLine, Infor LN, Infor XA, Infor VISUAL, Infor M3, Infor Distribution SX.e, and Infor Distribution FACTS. Infor formed alliances with companies such as HCL Technologies in 2015. Dynamic Science Labs Infor Dynamic Science Labs is a team of scientists in Kendall Square on the Massachusetts Institute of Technology (MIT) campus who are building applications using predictive analytics and machine learning. The founder of this team and Infor's Chief Scientist is Dr.
Ziad Nejmeldeen, who holds a PhD from MIT and is an expert in data science. Infor Xi, the company's platform for enterprise apps, is powered by Dynamic Science Labs. Digital & Value Engineering Infor's value engineering organization helps customers identify, quantify, realize and measure tangible business value from the use of technology solutions. The program was formally set up in 2013 by Riaz Raihan, SVP, who came to Infor from SAP. By 2015, the team expanded its mandate to digital engineering. Infor Digital Engineering helps customers build and execute digital strategies that include IoT, advanced analytics, artificial intelligence, cloud, mobile, social and machine learning. Education Alliance Program Infor's Education Alliance Program allows college and university students to intern at the company. Student interns are placed on a team in an Infor department, with the opportunity to switch between departments. Acquisitions Philanthropy In 2013, the company was recognized as a ComputerWorld Honors laureate for its work with Habitat for Humanity, partnering with the organization to provide software at free or reduced prices. Through Habitat for Humanity, Infor employees also participate in a Volunteer Build Program, Employee Giving Campaign, and Annual Global Village Build. Infor also sponsors the Leukemia & Lymphoma Society's "Light the Night Walks" events, held in years such as 2012. Infor has furthermore supported the United Negro College Fund (UNCF) by funding its "A Mind Is a Terrible Thing to Waste" event. Accolades Infor has won a number of awards since its founding in 2002. The company has received several SIIA CODiE Awards, including the 2014 Best Healthcare IT Solution and Best Social Business Solution. In May 2015, Infor's XTreme Support won a Confirmit ACE (Achievement in Customer Excellence) Award for the seventh consecutive year.
In August 2015, Infor's CloudSuite, Talent Science, and Ming.le Mobile products won People's Choice for Favorite New Products at the Stevie Awards. In 2016, Plant Engineering named Infor's EAM solutions as "Product of the Year" in the Energy Performance Management and the Maintenance Software categories. See also Comparison of OLAP servers List of ERP software packages and vendors List of Georgia (U.S. state) companies Syspro References External links Privately held companies based in New York City Software companies based in New York City ERP software companies CRM software companies American companies established in 2002 Software companies established in 2002 2002 establishments in Pennsylvania Software companies of the United States 2002 establishments in the United States Companies established in 2002
336968
https://en.wikipedia.org/wiki/Carians
Carians
The Carians (Kares, plural of Kar) were the ancient inhabitants of Caria in southwest Anatolia. Historical accounts It is not clear when the Carians enter into history. The answer depends on identifying Caria and the Carians with the "Karkiya" or "Karkisa" mentioned in the Hittite records. Bronze Age Karkisa are first mentioned as having aided the Assuwa League against the Hittite King Tudhaliya I. Later, in 1323 BC, King Arnuwandas II wrote to Karkiya asking them to provide asylum for the deposed Manapa-Tarhunta of "the land of the Seha River", one of the principalities within the Luwian Arzawa complex in western Anatolia. This they did, allowing Manapa-Tarhunta to take back his kingdom. In 1274 BC, Karkisa are also mentioned among those who fought on the Hittite Empire's side against the Egyptians in the Battle of Kadesh. Taken as a whole, Hittite records seem to point at a Luwian ancestry for the Carians and, as such, they would have lost their literacy through the Dark Age of Anatolia. The relationship between the Bronze Age "Karkiya" or "Karkisa" and the Iron Age Caria and the Carians is complicated, despite having western Anatolia as common ground, by the uncertainties regarding the exact location of the former on the map within Hittite geography. Yet, the identification is plausible from a linguistic point of view, given that the Phoenicians called them "KRK" in their abjad script and they were referred to as in Old Persian. The Carians next appear in records of the early centuries of the first millennium BC; Homer's writing about the golden armour or ornaments of the Carian captain Nastes, the brother of Amphimachus and son of Nomion, reflects a reputation for Carian wealth that may have preceded the Greek Dark Ages and thus been recalled in oral tradition.
In some translations of Biblical texts, the Carians are mentioned in 2 Kings 11:4, 11:19 (כָּרִי, in Hebrew literally "like fat sheep/goat", contextually "noble" or "honored") and perhaps alluded to in 2 Samuel 8:18, 15:18, and 20:23 (כְּרֵתִי, probably unrelated due to the "t", may be Cretans). They are also named as mercenaries in inscriptions found in ancient Egypt and Nubia, dated to the reigns of Psammetichus I and II. They are sometimes referred to as the "Cari" or "Khari". Carian remnants have been found in the ancient city of Persepolis, modern Takht-e-Jamshid in Iran. The Greek historian Herodotus recorded that the Carians believed themselves to be aborigines of Caria, but they were also, by general consensus of ancient sources, a maritime people before being gradually pushed inland. Plutarch mentions the Carians as being referred to as "cocks" by the Persians on account of their wearing crests on their helmets; the epithet was expressed in the form of a Persian privilege when a Carian soldier responsible for killing Cyrus the Younger was rewarded by Artaxerxes II (r. 405/404–359/358 BC) with the honor of leading the Persian army with a golden cock on the point of his spear. According to Thucydides, it was largely the Carians who settled the Cyclades prior to the Minoans. The Middle Bronze Age (MMI–MMII) expansion of the Minoans into this region seems to have come at their expense. Intending to secure revenue in the Cyclades, Minos of Knossos established a navy with which he founded his first colonies by taking control of the Hellenic sea and ruling over the Cyclades. In doing so, Minos expelled the Carians, many of whom had turned to piracy as a way of life. During the Athenian purification of Delos, all graves were exhumed and it was found that more than half were Carians (identified by the style of arms and the method of interment).
According to Strabo, Carians, of all the "barbarians", had a particular tendency to intermingle with the Greeks: "This was particularly the case with the Carians, for, although the other peoples were not yet having very much intercourse with the Greeks nor even trying to live in Hellenic fashion or to learn our language ... yet the Carians roamed throughout the whole of Greece serving on expeditions for pay. ... and when they were driven thence [from the islands] into Asia, even here they were unable to live apart from the Greeks, I mean when the Ionians and Dorians later crossed over to Asia." (Strabo 14.2.28) Indeed, the term barbarian was coined by Homer in reference to the Carians speaking an unintelligible language (Tuplin, C., "Greek Racism? Observations on the Character and Limits of Greek Ethnic Prejudice", in G.R. Tsetskhladze (ed.), Ancient Greeks East and West, Leiden–Boston–Cologne, 1999, 47-75). Carians and Leleges The Carians were often linked by Greek writers to the Leleges, but the exact nature of the relationship between Carians and Leleges remains mysterious. The two groups seem to have been distinct, but later intermingled with each other. Strabo wrote that they were so intermingled that they were often confounded with each other. However, Athenaeus stated that the Leleges stood in relation to the Carians as the Helots stood to the Lacedaemonians. This confusion of the two peoples is found also in Herodotus, who wrote that the Carians, when they were allegedly living amid the Cyclades, were known as Leleges. Language The Carian language belongs to the Luwic group of the Anatolian family of languages. Other Luwic languages besides Luwian proper are Lycian and Milyan (Lycian B). Although the ancestors of Carian and Lycian must have been very close to Luwian, it is probably incorrect to claim that they are linear descendants of Luwian.
It is possible that the speakers of Proto-Carian, or the common ancestor of Carian and Lycian, supplied the elites of the Bronze Age kingdom of Arzawa, the population of which partly consisted of Lydians. Important evidence of the Carians' own belief in their blood ties and cultural affinity with the Lydians and Mysians is that, apart from the Carians themselves, only Lydians and Mysians were admitted to the temple of the "Carian Zeus" in their first capital, Mylasa. Religion One of the Carian ritual centers was Mylasa, where they worshipped their supreme god, called 'the Carian Zeus' by Herodotus. Unlike Zeus, this was a warrior god. It is possible that the goddess Hecate, the patron of pathways and crossroads, originated among the Carians. Herodotus calls her Athena and says that her priestess would grow a beard when disaster impended. On Mount Latmos near Miletus, the Carians worshipped Endymion, who was the lover of the Moon and fathered fifty children. Endymion slept eternally in the sanctuary devoted to him, which lasted into Roman times. There is at least one named priestess known to us from this region, Carminia Ammia, who was priestess of Thea Maeter Adrastos and of Aphrodite. Greek mythology According to Herodotus, the Carians were named after an eponymous Car, a legendary early king and a brother of Lydus and Mysus, the eponymous founders of the Lydians and Mysians respectively, all sons of Atys. Homer records that Miletus (later an Ionian city), together with the mountain of Phthries, the river Maeander and the crests of Mount Mycale, were held by the Carians at the time of the Trojan War, and that the Carians, qualified by the poet as being of incomprehensible speech, joined the Trojans against the Achaeans under the leadership of Nastes, brother of Amphimachos ("he who fights both ways") and son of Nomion. These figures appear only in the Iliad and in a list in Dares of Phrygia's epitome of the Trojan War.
Classical Greeks would often claim that the northern part of Caria was originally colonized by Ionian Greeks before the Dorians. The Greek goddess Hecate possibly originated among the Carians. Indeed, most theophoric names invoking Hecate, such as Hecataeus or Hecatomnus (the father of Mausolus), are attested in Caria.

Archaeology

Throughout the 1950s, J.M. Cook and G.E. Bean conducted exhaustive archaeological surveys in Caria. Cook ultimately concluded that Caria was virtually devoid of any prehistoric remains. According to his reports, third-millennium finds were mostly confined to a few areas on or near the Aegean coast. No finds from the second millennium were known aside from the Submycenaean remains at Asarlik and the Mycenaean remains at Miletus and near Mylasa. Archaeologically, there was nothing distinguishing about the Carians, since the material evidence so far only indicated that their culture was merely a reflection of Greek culture. During the 1970s, further archaeological excavations in Caria revealed Mycenaean buildings at Iasus (with two "Minoan" levels underneath them), as well as Protogeometric and Geometric material remains (i.e. cemeteries and pottery). Archaeologists also confirmed the presence of Carians in Sardis, Rhodes, and in Egypt, where they served as mercenaries of the Pharaoh. In Rhodes specifically, a type of Carian chamber-tomb known as a Ptolemaion may be attributed to a period of Carian hegemony on the island. Despite this period of increased archaeological activity, the Carians still appear not to have been an autochthonous group of Anatolia, since both the coastal and interior regions of Caria were virtually unoccupied throughout prehistoric times. As for the assumption that the Carians descended from Neolithic settlers, this is contradicted by the fact that Neolithic Caria was essentially desolate.
Though a very small Neolithic population may have existed in Caria, the people known as "Carians" may in fact have been a group of Aegean origin that settled in southwestern Anatolia during the second millennium BC.

See also

Caria
Carian language
Carian script
Mysians
Lydians
Lycians

External links

Livius – Caria (Jona Lendering)
Adobe Lightroom
Adobe Lightroom (officially Adobe Photoshop Lightroom) is image organization and image manipulation software developed by Adobe Inc. as part of the Creative Cloud subscription family. It is supported on Windows, macOS, iOS, Android, and tvOS (Apple TV). Its primary uses include importing, saving, viewing, organizing, tagging, editing, and sharing large numbers of digital images. Lightroom's editing functions include white balance, presence, tone, tone curve, HSL, color grading, detail, lens corrections, and calibration manipulation, as well as transformation, spot removal, red-eye correction, graduated filters, radial filters, and adjustment brushing. The name of the software is based on darkrooms used for processing light-sensitive photographic materials.

Overview

Unlike Photoshop, Lightroom is non-destructive editing software that keeps the original image separate from any in-program edits, saving the edited image as a new file. While Photoshop includes doctoring functions such as adding, removing or altering the appearance of individual image items, rendering text or 3D objects on images, or modifying individual video frames, Lightroom is library and development software. Lightroom can store and organize photos once they are imported into the platform database, and is currently compatible with TIFF, JPEG, PSD (Photoshop), PNG, CMYK (edited in the RGB color space) and raw image formats.

Initially, Adobe Lightroom was only available on desktop operating systems. In 2017, however, it was expanded to support mobile operating systems with the release of Lightroom Mobile. Later in 2017, Adobe released a new variant of Lightroom called Lightroom CC, designed to be more cohesive with its mobile software. The existing version of Lightroom was renamed Lightroom Classic CC, and Lightroom Mobile was renamed Lightroom CC to match the name of this new desktop version.
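The non-destructive model described above (original pixels untouched, edits recorded separately and replayed on export) can be sketched as follows. This is a minimal illustration of the concept, not Adobe's implementation; all class and method names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Photo:
    """Original image data; never modified after import."""
    pixels: list  # stand-in for real raster data

@dataclass
class Edit:
    """One recorded adjustment, e.g. an exposure change of +0.1."""
    name: str
    amount: float

@dataclass
class CatalogEntry:
    """A catalog keeps the original plus an ordered list of edits."""
    photo: Photo
    edits: list = field(default_factory=list)

    def adjust(self, name, amount):
        # Editing only appends a record; the original stays intact.
        self.edits.append(Edit(name, amount))

    def export(self):
        # A new image is rendered by replaying the edits over a copy.
        rendered = list(self.photo.pixels)
        for e in self.edits:
            if e.name == "exposure":
                rendered = [p + e.amount for p in rendered]
        return rendered

entry = CatalogEntry(Photo(pixels=[0.2, 0.5, 0.8]))
entry.adjust("exposure", 0.1)
exported = entry.export()
print(entry.photo.pixels)  # original unchanged: [0.2, 0.5, 0.8]
print(exported)
```

Because only the edit list is stored, an edit can be removed or reordered at any time and the photo re-rendered, which is the essence of the non-destructive workflow.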
While similar in some ways, the three Lightroom variants differ significantly in how they store images, how they interact with Adobe's cloud storage offering, and in feature parity. Lightroom CC stores all uploaded photos and raw files on a cloud server, while Lightroom Classic CC stores files locally and has a more comprehensive set of features. Both CC platforms and Lightroom Mobile also allow users to create, upload, and export Lightroom presets, a batch copy of an image's in-program edits. There is currently a large market for Lightroom presets as a tool for both mobile and digital photographers looking for an easy way to apply a stylized look to their images.

Lightroom Classic CC and Lightroom CC feature the following workflow steps:

Library

Similar in concept to the 'Organizer' in Adobe Photoshop Elements and other image organizers, this module imports and exports images, creates image collections, organizes images by their metadata, and allows users to flag, rate, tag, and color-code images. Library is the gateway into Lightroom, and is home to Lightroom extensions, extras, and plug-ins such as focus finder.

Develop

Supports non-destructive editing of images in batch form. This module is geared toward retouching and manipulation, such as enhancing and improving digital photographs by changing color balance, improving tone, sharpening, reducing noise, cropping, straightening, and converting to black-and-white. Lightroom cannot create or edit non-photographic images, such as drawings, symbols, line art, diagrams or maps, nor render text or 3D objects. It has very limited photo doctoring features, including spot removal, brush adjustments, radial and graduated filters, and red-eye removal. Another often-used feature in the Develop module is the ability to synchronize edits from one selected photo to the whole selection.
Upon download, Lightroom provides users with several standard presets for color correction and effects, and supports sharing custom presets online. There is currently a large market for both desktop and mobile image manipulation packages. Photographers and creators with large followings on Instagram and Facebook sell Lightroom presets to their audiences, marketing their ease and versatility after download. Presets are attached to .XMP and .LRTEMPLATE files that can be imported into Lightroom via the presets pane and include all adjustment settings from the originally doctored photo. Presets are around 4 kilobytes in size and can range in price from free to upwards of $200.

Map

Added in Lightroom 4, this module facilitates geographically organizing photos based on embedded or manually added geolocation data (since the end of 2018 this is no longer supported up to Lightroom CC 2015.x / Lightroom 6.x).

Book

Added in Lightroom 4, this module allows users to create and format photo books. Books can be exported to the self-publishing vendor Blurb or printed at any local press as a PDF.

Slideshow

This module creates slideshows from any number of photos, to which music or a background can be added.

Print

Allows users to print images and adjust printing parameters such as layout and orientation.

Web

Allows website owners or editors to create simple or sophisticated HTML5 web galleries from their uploaded images. This module has several templates available to users that provide layout suggestions. The design and HTML can be exported locally to the device or directly to a site's server.

History

In 1999, veteran Photoshop developer Mark Hamburg began a new project, code-named Shadowland (a reference to the 1988 k.d. lang music album of the same name). Hamburg contacted Andrei Herasimchuk, former interface designer for the Adobe Creative Suite, to start the project. It was an intentional departure from many of Adobe's established conventions.
Forty percent of Photoshop Lightroom is written in the scripting language Lua. In 2002, Hamburg left the Photoshop project, and in the fall of the same year he sent a first experimental software sample, named PixelToy, to his former teammate Jeff Schewe for review; in 2003, Hamburg presented Schewe with a first version of Shadowland in a very early UI state. After a few years of research by Hamburg, Herasimchuk, Sandy Alves (the former interface designer on the Photoshop team), and Grace Kim (a product researcher at Adobe), the Shadowland project accelerated around 2004. However, Herasimchuk chose to leave Adobe Systems at that time to start a Silicon Valley design company. Hamburg then chose Phil Clevenger, a former associate of Kai Krause, to design a new look for the application.

Photoshop Lightroom's developers work mostly in Minnesota, comprising the team that had already created the program Adobe ImageReady. Troy Gaul, Melissa Gaul, and the rest of their crew (reportedly known as the "Minnesota Phats"), together with Hamburg, developed the architecture behind the application. George Jardine was the product manager.

Beta development

On January 9, 2006, an early version of Photoshop Lightroom, formerly named only Lightroom, was released to the public as a Macintosh-only public beta on the Adobe Labs website. This was the first Adobe product released to the general public for feedback during its development. This method was later used in developing Adobe Photoshop CS3. On June 26, 2006, Adobe announced that it had acquired the technology of Pixmantec, developers of the RawShooter image processing software. Further beta releases followed. Notable releases included Beta 3 on July 18, 2006, which added support for Microsoft Windows systems. On September 25, 2006, Beta 4 was released, which saw the program merged into the Photoshop product range, followed by a minor update on October 19, which was released as Beta 4.1.
Version 1.0

On January 29, 2007, Adobe announced that Lightroom would ship on February 19, 2007, list priced at $299 US, £199 UK. Lightroom v1.x is not updated when an upgrade to v2 is installed; a new serial number is needed.

Version 2.0

Adobe Photoshop Lightroom 2.0 Beta was advertised in official emails from Adobe in April 2008. New features included:

Localized corrections: edit specific parts of an image
Improved organization tools
Multiple monitor support
Flexible printing options
64-bit support

The official release of Lightroom v2 was on July 29, 2008, along with the release of Adobe Camera Raw v4.5 and DNG Converter 4.5. Adobe Camera Raw allows importing the proprietary raw data images of various camera manufacturers. Adobe added DNG Camera Profiling to both releases. This technology allows custom camera color profiles, or looks, to be created and saved by users. It also allows profiles matching the creative styles built into cameras to be replicated. At the same time as the Lightroom v2 release, Adobe (through Adobe Labs) released a full set of such Camera Profiles for Nikon and Canon models, along with basic Standard Profiles for all supported makes and models. This technology is open to all programs compliant with the DNG file format standard.

Version 3.0

Adobe Photoshop Lightroom 3.0 beta was released on October 22, 2009. New features included:

New chroma noise reduction
Improved sharpening tool
New import pseudo module
Watermarking
Grain
Publish services
Custom package for print

On March 23, 2010, Adobe released a second beta, which added the following features:

New luminance noise reduction
Tethered shooting for selected Nikon and Canon cameras
Basic video file support
Point curve

Although not included in any beta release, version 3 also contains built-in lens correction and perspective control. The final version was released on June 8, 2010 with no major new functions added.
It had all the features included in the betas, added the lens corrections and perspective transformations, and a few more improvements and performance optimizations.

Version 4.0

Adobe Photoshop Lightroom 4.0 was officially released on March 5, 2012, after being available in beta form since January 10, 2012. It dropped support for Windows XP. New features included:

Highlight and shadow recovery to bring out detail in dark shadows and bright highlights
Photo book creation with templates
Location-based organization to find and group images by location, assign locations to images, and display data from GPS-enabled cameras
White balance brush to refine and adjust white balance in specific areas of images
Added local editing controls to adjust noise reduction and remove moiré in targeted areas
Extended video support to organize,

Version 5.0

Adobe Photoshop Lightroom 5.0 was officially released on June 9, 2013, after being available in beta form since April 15, 2013. The program requires Mac OS X 10.7 or later, or Windows 7 or 8. Some of the changes include:

Radial gradient to highlight an elliptical area
Advanced healing-cloning brush to brush the spot removal tool over an area
Smart previews to allow working with offline images
The ability to save custom layouts in the Book module
Support of PNG files
Support of video files in slideshows
Various other updates, including automatic perspective correction and enhancements to smart collections

An update to version 5, 5.4, allows syncing a collection to the Lightroom Mobile app released for iPad on April 8, 2014.

Version 6.0

Adobe Photoshop Lightroom CC 2015 (version 6.0) was officially released on April 21, 2015. The program requires OS X 10.8 or later, or Windows 7 or 8. It is the first release of Lightroom to support only 64-bit operating systems.
New features include:

HDR Merge
Panorama Merge
Performance improvements, GPU acceleration
Facial recognition
Advanced video slideshows
Filter Brush

Lightroom 6.7 increased the minimum version of macOS required to OS X 10.10.

Apple TV

On July 26, 2016, Adobe launched Lightroom on Apple TV, a means of displaying photographs on a large screen using Apple's network appliance and entertainment device.

Development branches

Adobe Photoshop Lightroom Classic CC (unofficially: version 7.0) was officially released on October 18, 2017. It is the first version of Lightroom that is not available with a perpetual license (one-time purchase price); instead, it must be licensed through a monthly subscription model, with the fee initially set at US$9.99/month. Once the user stops paying the monthly fee, the program is limited to viewing existing catalogs, without the ability to apply further changes to images.

Adobe Lightroom CC is the new online cloud-based version of Adobe's Lightroom application and can be installed alongside Lightroom Classic CC. It is included in the same US$9.99/month photography plan, but has limited editing features in comparison to Lightroom Classic CC. It can be installed on desktops, laptops, iPads and mobile devices. Lightroom CC can easily sync developed photos between laptop, iPad and mobile devices, which is the major difference between the two applications. Its user interface is also more similar to that of Adobe's mobile version of the application.
Adobe Lightroom Classic CC (version 8.0+)

Version 8.0 ()
HDR panoramas
Depth Map Masking
Support for the HEIC file format
Better tethered camera support
Support for Process Engine 5.0
New camera and lens support
Bug fixes

Version 8.1 ()
Ability to customize the develop panel order
"Snap to Grid" in the Book module
Ability to show partially compatible presets
Photo Merge improvements
New camera and lens support
Bug fixes

Version 8.2 ()
Enhance Details tool which extracts additional detail from raw files during initial processing
New camera and lens support
Bug fixes

Version 8.2.1 ()
Bug fixes

Version 8.3 ()
Flat-Field Correction tool to reduce shading or lens cast
New Texture slider
Ability to import photos from devices using the Files section
Improved performance of the Auto setting
New camera and lens support
Bug fixes

Version 8.3.1 ()
Bug fixes for issues exporting photos to a network drive

Version 8.4 ()
Advanced GPU improvements
Batch HDR and panorama merges
Book Auto-Create Cell feature
Export as PNG
Color labels for collections
Filmstrip index numbers
New camera and lens support
Bug fixes

Version 8.4.1 ()
Bug fixes

Version 9.0 ()
Updated system requirements on both Windows and macOS
Fill edges for panorama merge
New export presets
Additional filter options
Improved keyword performance
Removed photos shortcut
New camera and lens support
Bug fixes

Version 9.1 ()
New camera and lens support
Bug fixes

Version 9.2 ()
Custom preset defaults per camera upon import
PSB file support
Auto Sync button
Better multiple monitor support
Export dialog updates
GPU updates
Catalog migration from Photoshop Elements 2020
New camera and lens support
Bug fixes

Version 9.2.1 ()
New camera and lens support
Bug fixes

Version 9.3 ()
New icons
Local HSL adjustment
New presets for defaults
ISO adaptive presets
New tone curve user interface
Improved sync activity
Improved performance
HEVC video file format support
New user tutorials
New camera and lens support
Bug fixes

Version 9.4 ()
"Done" button in import dialog New camera and lens support Bug fixes Version 10.0 () Updated system requirements Improved split toning which gives control over midtones in addition to shadows and highlights (renamed tool to Color Grading) Zoom enhancements Major GPU performance improvements Updated font New camera and lens support Bug fixes Version 10.1 () Performance improvements Bug fixes related to macOS Big Sur New camera and lens support Support for other Creative Cloud ecosystem updates Version 10.1.1 () Bug fix for missing lens metadata which caused Creative Cloud sync to fail. Version 10.2 () Performance improvements, bug fixes and new camera/lens support. Improved macOS Big Sur compatibility. The entire Lightroom Cloud ecosystem has also been updated. Version 10.3 () Apple M1 Chip Compatibility Super Resolution Preset Changes Scrolling by Page (Library Grid view) Performance Improvements New camera support Tethering for new cameras New lens correction support Bug fixes Version 10.4 () Duplicate Collection Sets (Classic only) Nikon Tethered Live View support (Classic only) New camera profiles and new supported lenses Bug fixes Version 11 () Masking Metadata Panel Performance Improvements More New Presets Filter by Specific Date Adobe Stock plug-in update Catalog Upgrade New Camera-matching Profiles New camera profiles and new supported lenses Bug fixes Version 11.1 () Auto Save to XMP Android Camera New camera support New lens correction support Bug fixes Version 11.2 () Masking Update Migration from Photoshop Elements 2022 is now supported performance improvements New camera support New lens correction support Lightroom Cloud ecosystem has also been updated Bug fixes Adobe Lightroom CC (version 3.0+ on desktop; 5.0+ on mobile) Version 3.0 on desktop; 5.0 on mobile () Initial release Version 3.1 on desktop; 5.1 on mobile () Contribute photos to Lightroom shared albums Directly import photos from a camera or SD card* Export photos in format of your choice* 
New camera profiles and supported lenses
Bug fixes

Version 3.2 on desktop; 5.2 on mobile ()
Export photos as DNG**
Import presets and profiles from Google Drive*
New camera and lens support
New keyboard shortcuts**

Version 3.2.1 on desktop; 5.2.1 on mobile ()
New camera and lens support
Bug fixes

Version 3.3 on desktop; 5.3 on mobile ()
Share photos to Discover section
Local HSL adjustment
Create edit versions
Customize default settings for raw photos
Add text watermarks to photos**
Send photos to Photoshop for iPad*
New camera and lens support
Bug fixes

Version 3.4 on desktop; 5.4 on mobile ()
New camera and lens support
Bug fixes

Version 4.0 on desktop; 6.0 on mobile ()
Improved split toning which gives control over midtones in addition to shadows and highlights (renamed tool to Color Grading)
Support for graphical watermarks upon export
New "For you" tab in Discover section
"Choose Best Photos" feature
More precise zoom control
Reorganized Photos panel**
New camera and lens support
Bug fixes

Version 4.1 on desktop; 6.1 on mobile ()
Native Apple M1 support**
New camera and lens support
Bug fixes

*Mobile versions only
**Desktop version only

See also

Comparison of raster graphics editors

External links

Current versions
Talk given by Troy Gaul, Adobe's lead Lightroom programmer, in 2009 at the C4 conference, covering Lightroom's history, code and architecture up to version 2.0
NNIT
NNIT A/S is a Danish public IT company that provides IT consultancy, development, implementation and outsourcing of IT services to clients within life sciences in Denmark and internationally, as well as to all types of customers in Denmark. Its clients include Danish and international life science companies, public organizations, financial institutions and large enterprise companies. As of 2017, NNIT is the third-largest IT services provider in Denmark. NNIT's more than 3,000 employees primarily work at the headquarters in Denmark and its offices in Asia, Europe and the USA.

The company was founded as Novo Nordisk IT in 1994 through the merger of Novo Nordisk's two existing information technology units. In 1999, Novo Nordisk IT was established as a private limited company, wholly owned by Novo Nordisk. In 2004, the company changed its name to the current NNIT A/S. In March 2015, NNIT was listed on the NASDAQ OMX.

History

The company was founded as Novo Nordisk IT in 1994 through the merger of Novo Nordisk's two existing information technology units. The company was converted into a wholly owned aktieselskab in 2004. In March 2015, NNIT was floated on the NASDAQ OMX Nordic, and has traded below the listing price ever since, due to lack of growth.

Activities

NNIT A/S offers IT services, primarily in the life sciences sector in Denmark and internationally, and to customers in the public, enterprise and finance sectors in Denmark. As of 30 June 2018, NNIT A/S had 3,122 employees. NNIT has approximately 400 clients, of which around 150 are located outside Denmark. Some 20% are international life sciences clients (June 2018). NNIT is headquartered in Søborg, Denmark, with sales offices in Zurich (Switzerland) and Princeton (United States). NNIT's primary offshore delivery center is in Tianjin (China), from which the company also targets sales to companies in the Chinese life sciences industry.
In addition, NNIT operates delivery centers in Manila (the Philippines) and Prague (the Czech Republic). Many of NNIT's customers operate in the life sciences sector (including NNIT's major customer, the Novo Nordisk Group, a world-leading life sciences group comprising Novo Nordisk A/S and its subsidiaries), but NNIT also provides services to customers in the public, enterprise and finance sectors, among them DSB, Arla Foods and PFA.

See also

Novo Nordisk
Novozymes
Small form-factor pluggable transceiver
The small form-factor pluggable (SFP) is a compact, hot-pluggable network interface module used for both telecommunication and data communications applications. An SFP interface on networking hardware is a modular slot for a media-specific transceiver used to connect a fiber-optic cable or sometimes a copper cable. The advantage of using SFPs compared to fixed interfaces (e.g. modular connectors in Ethernet switches) is that individual ports can be equipped with any suitable type of transceiver as needed.

The form factor and electrical interface are specified by a multi-source agreement (MSA) under the auspices of the Small Form Factor Committee. The SFP replaced the larger gigabit interface converter (GBIC) in most applications, and has been referred to as a Mini-GBIC by some vendors.

SFP transceivers exist supporting synchronous optical networking (SONET), Gigabit Ethernet, Fibre Channel, PON, and other communications standards. At introduction, typical speeds were 1 Gbit/s for Ethernet SFPs and up to 4 Gbit/s for Fibre Channel SFP modules. In 2006, the SFP+ specification brought speeds up to 10 Gbit/s, and the SFP28 iteration is designed for speeds of 25 Gbit/s.

A slightly larger sibling is the four-lane Quad Small Form-factor Pluggable (QSFP). The additional lanes allow for speeds four times those of the corresponding SFP. In 2014, the QSFP28 variant was published, allowing speeds up to 100 Gbit/s. In 2019, the closely related QSFP56 was standardized, doubling the top speed to 200 Gbit/s, with products already selling from major vendors. There are inexpensive adapters allowing SFP transceivers to be placed in a QSFP port. Both an SFP-DD specification, which allows for 100 Gbit/s over two lanes, and a QSFP-DD specification, which allows for 400 Gbit/s over eight lanes, have been published. These use a form factor which is directly backward compatible with their respective predecessors.
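The per-module speeds quoted above follow directly from lane count times per-lane rate. A small sketch of that arithmetic, using only figures given in this section:

```python
# Aggregate data rate = number of electrical lanes x per-lane rate (Gbit/s).
# Lane counts and rates are the ones quoted in the text above.
form_factors = {
    "SFP+":    (1, 10),
    "SFP28":   (1, 25),
    "QSFP+":   (4, 10),
    "QSFP28":  (4, 25),
    "QSFP56":  (4, 50),
    "SFP-DD":  (2, 50),
    "QSFP-DD": (8, 50),
}

for name, (lanes, rate) in form_factors.items():
    print(f"{name}: {lanes} x {rate} = {lanes * rate} Gbit/s")
```

For example, QSFP28 reaches 4 × 25 = 100 Gbit/s, and QSFP-DD reaches 8 × 50 = 400 Gbit/s, matching the speeds stated above.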
An alternative competing solution, the OSFP (Octal Small Form-factor Pluggable), has products released in 2022 capable of 800 Gbit/s links between network equipment. It is slightly larger than the QSFP form factor, allowing for larger power outputs. The OSFP standard was initially announced in 2016, with the 4.0 version released in 2021 allowing for 800 Gbit/s via 8 × 100 Gbit/s electrical data lanes. Its proponents say a low-cost adapter will allow for backwards compatibility with QSFP modules.

SFP types

SFP transceivers are available with a variety of transmitter and receiver specifications, allowing users to select the appropriate transceiver for each link to provide the required optical or electrical reach over the available media type (e.g. twisted pair or twinaxial copper cables, multi-mode or single-mode fiber cables). Transceivers are also designated by their transmission speed. SFP modules are commonly available in several different categories.

100 Mbit/s SFP

Multi-mode fiber, LC connector, with color coding:
SX: 850 nm, for a maximum of 550 m
FX: 1300 nm, for a distance up to 5 km
LFX (name dependent on manufacturer): 1310 nm, for a distance up to 5 km

Single-mode fiber, LC connector, with color coding:
LX: 1310 nm, for distances up to 10 km
EX: 1310 nm, for distances up to 40 km
ZX: 1550 nm, for distances up to 80 km (depending on fiber path loss)
EZX: 1550 nm, for distances up to 160 km (depending on fiber path loss)

Single-mode fiber, LC connector, bi-directional, with color coding:
BX (officially BX10): 1550 nm/1310 nm, single-fiber bi-directional 100 Mbit/s SFP transceivers, paired as BX-U () and BX-D () for uplink and downlink respectively, also for distances up to 10 km. Variations of bidirectional SFPs are also manufactured with higher transmit power, with link length capabilities up to 40 km.
Copper twisted-pair cabling, 8P8C (RJ-45) connector:
100BASE-TX: for distances up to 100 m

1 Gbit/s SFP

1 Gbit/s multi-mode fiber, LC connector, with black or beige extraction lever:
SX: 850 nm, for a maximum of 550 m at 1.25 Gbit/s (gigabit Ethernet). Other multi-mode SFP applications support even higher rates at shorter distances.

1.25 Gbit/s multi-mode fiber, LC connector, extraction lever colors not standardised:
SX+/MX/LSX (name dependent on manufacturer): 1310 nm, for a distance up to 2 km. Not compatible with SX or 100BASE-FX. Based on LX but engineered to work with multi-mode fiber using a standard multi-mode patch cable rather than the mode-conditioning cable commonly used to adapt LX to multi-mode.

1 to 2.5 Gbit/s single-mode fiber, LC connector, with blue extraction lever:
LX: 1310 nm, for distances up to 10 km (originally, LX just covered 5 km; LX10 for 10 km followed later)
EX: 1310 nm, for distances up to 40 km
ZX: 1550 nm, for distances up to 80 km (depending on fiber path loss), with green extraction lever (see GLC-ZX-SM1)
EZX: 1550 nm, for distances up to 160 km (depending on fiber path loss)
BX (officially BX10): 1490 nm/1310 nm, single-fiber bi-directional gigabit SFP transceivers, paired as BX-U and BX-D for uplink and downlink respectively, also for distances up to 10 km. Variations of bidirectional SFPs are also manufactured which use 1550 nm in one direction, and higher transmit power versions with link length capabilities of 1550 nm 40 km (XD), 80 km (ZX), 120 km (EX or EZX)
SFSW: single-fiber single-wavelength transceivers, for bi-directional traffic on a single fiber. Coupled with CWDM, these double the traffic density of fiber links.

Coarse wavelength-division multiplexing (CWDM) and dense wavelength-division multiplexing (DWDM) transceivers operate at various wavelengths achieving various maximum distances. CWDM and DWDM transceivers usually support link distances of 40 km, 80 km and 120 km.
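The "depending on fiber path loss" caveats above can be made concrete with a simple optical power-budget estimate: the reach of a module is roughly the power budget (transmit power minus receiver sensitivity) left over after fixed losses, divided by the per-kilometer fiber attenuation. The transmit power, sensitivity, and loss figures below are illustrative assumptions, not values from any particular SFP datasheet:

```python
def max_reach_km(tx_power_dbm, rx_sensitivity_dbm, fiber_loss_db_per_km,
                 connector_loss_db=1.0, margin_db=3.0):
    """Estimate maximum fiber reach from a simple optical power budget.

    budget = TX power - RX sensitivity; fixed connector losses and a
    safety margin are subtracted, and the remainder is spent on fiber
    attenuation at fiber_loss_db_per_km.
    """
    budget_db = tx_power_dbm - rx_sensitivity_dbm
    usable_db = budget_db - connector_loss_db - margin_db
    return max(usable_db, 0.0) / fiber_loss_db_per_km

# Illustrative long-reach module at 1550 nm (~0.2 dB/km attenuation):
reach = max_reach_km(tx_power_dbm=0.0, rx_sensitivity_dbm=-21.0,
                     fiber_loss_db_per_km=0.2)
print(f"~{reach:.0f} km")  # comparable to an 80 km "ZX"-class link
```

The same arithmetic explains why 1550 nm modules (lower attenuation than 1310 nm) dominate the longest-reach categories, and why the quoted maximum distances always carry the path-loss caveat.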
1 Gbit/s copper twisted-pair cabling, 8P8C (RJ-45) connector:
1000BASE-T: these modules incorporate significant interface circuitry for Physical Coding Sublayer recoding and can be used only for gigabit Ethernet because of the specific line code. They are not compatible with (or rather: do not have equivalents for) Fibre Channel or SONET. Unlike the non-SFP copper 1000BASE-T ports integrated into most routers and switches, 1000BASE-T SFPs usually cannot operate at 100BASE-TX speeds.
100 Mbit/s copper and optical: some vendors have shipped 100 Mbit/s-limited SFPs for fiber-to-the-home applications and as drop-in replacements for legacy 100BASE-FX circuits. These are relatively uncommon and can be easily confused with 100 Mbit/s SFPs.

Although it is not mentioned in any official specification document, the maximum data rate of the original SFP standard is 5 Gbit/s. This was eventually used by both 4GFC Fibre Channel and DDR InfiniBand, especially in its four-lane QSFP form. In recent years, SFP transceivers have been created that allow 2.5 Gbit/s and 5 Gbit/s Ethernet speeds via 2.5GBASE-T and 5GBASE-T.

10 Gbit/s SFP+

The SFP+ (enhanced small form-factor pluggable) is an enhanced version of the SFP that supports data rates up to 16 Gbit/s. The SFP+ specification was first published on May 9, 2006, with version 4.1 published on July 6, 2009. SFP+ supports 8 Gbit/s Fibre Channel, 10 Gigabit Ethernet and the Optical Transport Network standard OTU2. It is a popular industry format supported by many network component vendors. Although the SFP+ standard does not mention 16 Gbit/s Fibre Channel, it can be used at this speed.

SFP+ also introduces direct attach for connecting two SFP+ ports without dedicated transceivers. Direct attach cables (DAC) exist in passive (up to 7 m), active (up to 15 m), and active optical (AOC, up to 100 m) variants.
10 Gbit/s SFP+ modules are exactly the same dimensions as regular SFPs, allowing equipment manufacturers to re-use existing physical designs for 24- and 48-port switches and modular line cards. In comparison to earlier XENPAK or XFP modules, SFP+ modules leave more circuitry to be implemented on the host board instead of inside the module. Through the use of an active electronic adapter, SFP+ modules may be used in older equipment with XENPAK ports and X2 ports.

SFP+ modules can be described as limiting or linear types; this describes the functionality of the inbuilt electronics. Limiting SFP+ modules include a signal amplifier to re-shape the (degraded) received signal, whereas linear ones do not. Linear modules are mainly used with low-bandwidth standards such as 10GBASE-LRM; otherwise, limiting modules are preferred.

25 Gbit/s SFP28

SFP28 is a 25 Gbit/s interface which evolved from the 100 Gigabit Ethernet interface, which is typically implemented with 4 × 25 Gbit/s data lanes. Identical in mechanical dimensions to SFP and SFP+, SFP28 implements one 28 Gbit/s lane accommodating 25 Gbit/s of data with encoding overhead. SFP28 modules exist supporting single- or multi-mode fiber connections, active optical cables and direct attach copper.

cSFP

The compact small form-factor pluggable (cSFP) is a version of SFP with the same mechanical form factor allowing two independent bidirectional channels per port. It is used primarily to increase port density and decrease fiber usage per port.

SFP-DD

The small form-factor pluggable double density (SFP-DD) multi-source agreement is a standard published in 2019 for doubling port density. According to the SFP-DD MSA website: "Network equipment based on the SFP-DD will support legacy SFP modules and cables, and new double density products." SFP-DD uses two lanes to transmit.
Currently the following speeds are supported: SFP-DD: / (2 × and 2 × ) SFP-DD112: (2 × ) QSFP types Quad Small Form-factor Pluggable (QSFP) transceivers are available with a variety of transmitter and receiver types, allowing users to select the appropriate transceiver for each link to provide the required optical reach over multi-mode or single-mode fiber. 4 Gbit/s QSFP The original QSFP document specified four channels carrying Gigabit Ethernet, 4GFC (Fibre Channel), or DDR InfiniBand. 40 Gbit/s QSFP+ QSFP+ is an evolution of QSFP to support four 10 Gbit/s channels carrying 10 Gigabit Ethernet, 10GFC Fibre Channel, or QDR InfiniBand. The four channels can also be combined into a single 40 Gigabit Ethernet link. 50 Gbit/s QSFP14 The QSFP14 standard is designed to carry FDR InfiniBand, SAS-3, or 16G Fibre Channel. 100 Gbit/s QSFP28 The QSFP28 standard is designed to carry 100 Gigabit Ethernet, EDR InfiniBand, or 32G Fibre Channel. Sometimes this transceiver type is also referred to as "QSFP100" or "100G QSFP" for the sake of simplicity. 200 Gbit/s QSFP56 QSFP56 is designed to carry 200 Gigabit Ethernet, HDR InfiniBand, or 64G Fibre Channel. The biggest enhancement is that QSFP56 uses four-level pulse-amplitude modulation (PAM-4) instead of non-return-to-zero (NRZ) signaling. It uses the same physical specifications as QSFP28 (SFF-8665), with electrical specifications from SFF-8024 and revision 2.10a of SFF-8636. Sometimes this transceiver type is referred to as "200G QSFP" for the sake of simplicity. Fanout or breakout Switch and router manufacturers implementing QSFP+ ports in their products frequently allow a single QSFP+ port to be used as four independent 10 Gigabit Ethernet connections, greatly increasing port density. For example, a typical 24-port QSFP+ 1U switch would be able to service 96 × 10GbE connections. 
There also exist fanout cables to adapt a single QSFP28 port to four independent 25 Gigabit Ethernet SFP28 ports (QSFP28-to-4×SFP28), as well as cables to adapt a single QSFP56 port to four independent 50 Gigabit Ethernet SFP56 ports (QSFP56-to-4×SFP56). Applications SFP sockets are found in Ethernet switches, routers, firewalls and network interface cards. They are used in Fibre Channel host adapters and storage equipment. Because of their low cost, low profile, and ability to provide a connection to different types of optical fiber, SFPs provide such equipment with enhanced flexibility. Standardization The SFP transceiver is not standardized by any official standards body, but rather is specified by a multi-source agreement (MSA) among competing manufacturers. The SFP was designed after the GBIC interface and allows greater port density (number of transceivers per given area) than the GBIC, which is why SFP is also known as mini-GBIC. However, as a practical matter, some networking equipment manufacturers engage in vendor lock-in practices whereby they deliberately break compatibility with "generic" SFPs by adding a check in the device's firmware that will enable only the vendor's own modules. Third-party SFP manufacturers have introduced SFPs with EEPROMs which may be programmed to match any vendor ID. Color coding Distinct color-coding conventions exist for SFP, CWDM SFP, BiDi SFP, and QSFP modules. Signals SFP transceivers are 'right-handed': from their perspective, they transmit on the right and receive on the left. When looking into the optical connectors, transmission comes from the left and reception is on the right. The SFP transceiver contains a printed circuit board with an edge connector with 20 pads that mate on the rear with the SFP electrical connector in the host system. The QSFP has 38 pads, including 4 high-speed transmit data pairs and 4 high-speed receive data pairs. 
Mechanical dimensions The physical dimensions of the SFP transceiver (and its subsequent faster variants) are narrower than the later QSFP counterparts, which allows for SFP transceivers to be placed in QSFP ports via an inexpensive adapter. Both are smaller than the XFP transceiver. EEPROM information The SFP MSA defines a 256-byte memory map into an EEPROM describing the transceiver's capabilities, standard interfaces, manufacturer, and other information, which is accessible over a serial I²C interface at the 8-bit address 1010000X (A0h). Digital diagnostics monitoring Modern optical SFP transceivers support standard digital diagnostics monitoring (DDM) functions. This feature is also known as digital optical monitoring (DOM). This capability allows monitoring of the SFP operating parameters in real time. Parameters include optical output power, optical input power, temperature, laser bias current, and transceiver supply voltage. In network equipment, this information is typically made available via Simple Network Management Protocol (SNMP). A DDM interface allows end users to display diagnostics data and alarms for optical fiber transceivers and can be used to diagnose why a transceiver is not working. See also Interconnect bottleneck Optical communication Parallel optical interface Notes References Hot-swappable transceiver Ethernet
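The DDM parameters described above can be illustrated in code. Under the SFF-8472 convention, the real-time diagnostics live in a second 256-byte page (at I²C address A2h, alongside the A0h identification page); the byte offsets and scale factors below follow that convention for internally calibrated modules, but are stated here as an assumption and should be checked against the specification for a given module. A minimal Python sketch:

```python
import struct

def decode_ddm(page_a2: bytes) -> dict:
    """Decode a few real-time DDM values from a transceiver's diagnostics page.

    Assumed offsets and scales (SFF-8472 convention, internally calibrated):
    temperature at bytes 96-97 (signed, 1/256 degC per LSB), supply voltage
    at 98-99 (100 uV), TX bias at 100-101 (2 uA), TX power at 102-103 and
    RX power at 104-105 (0.1 uW each).
    """
    temp, vcc, bias, tx_pwr, rx_pwr = struct.unpack_from(">hHHHH", page_a2, 96)
    return {
        "temperature_c": temp / 256.0,
        "vcc_v": vcc * 100e-6,
        "tx_bias_ma": bias * 2e-3,
        "tx_power_mw": tx_pwr * 0.1e-3,
        "rx_power_mw": rx_pwr * 0.1e-3,
    }
```

With externally calibrated modules, the raw values must additionally be combined with calibration constants stored elsewhere in the page, which this sketch ignores.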
46637906
https://en.wikipedia.org/wiki/Conduit%20%28company%29
Conduit (company)
Conduit Ltd. is an international software company. From its founding in 2005 to 2013, its best-known product was the Conduit toolbar, which was widely described as malware. In 2013, it spun off its toolbar business; today, its main product is a mobile development platform that allows users to create native and web mobile applications for smartphones. Products From 2005 to 2013, the company's best-known product was the Conduit toolbar, which is flagged by most antivirus software as potentially unwanted software and adware. Conduit's toolbar software is often downloaded by malware packages from other publishers. The company spun off the toolbar division that manages the Conduit toolbar in 2013. Today, the company's main product is a mobile development platform that allows users to create native and web mobile applications for smartphones. App creation for its App Gallery is free, but the company charges a monthly subscription fee to place apps on the Apple App Store or Google Play. History Conduit was founded in 2005 by Shilo, Dror Erez, and Gaby Bilcyzk. Between 2005 and 2013, it ran a successful but controversial toolbar platform business. Conduit was part of the so-called Download Valley companies that monetize free software and downloads by bundling adware. The toolbars were criticized by some as being very difficult to uninstall. The toolbar software was referred to as a "potentially unwanted program" by some in the computer industry because it could be used to change browser settings. The company had more than 400 employees in 2013. In September of the same year, Conduit spun off its entire website toolbar business division, which combined with Perion Network. After the deal, Conduit shareholders owned 81% of Perion's existing shares, and both Perion and Conduit remained independent companies. The substantial size of the Conduit user base allowed Perion to immediately surpass AOL in U.S. searches. 
Conduit announced it would purchase Keeprz, a mobile customer loyalty platform, for $45 million. See also Perion Network Download Valley Conduit toolbar References Software companies established in 2005 Software companies of Israel Mobile applications
2471586
https://en.wikipedia.org/wiki/Solaris%20Cluster
Solaris Cluster
Oracle Solaris Cluster (sometimes Sun Cluster or SunCluster) is a high-availability cluster software product for Solaris, originally created by Sun Microsystems, which was acquired by Oracle Corporation in 2010. It is used to improve the availability of software services such as databases, file sharing on a network, electronic commerce websites, or other applications. Sun Cluster operates by having redundant computers or nodes where one or more computers continue to provide service if another fails. Nodes may be located in the same data center or on different continents. Background Solaris Cluster provides services that remain available even when individual nodes or components of the cluster fail. Solaris Cluster provides two types of HA services: failover services and scalable services. To eliminate single points of failure, a Solaris Cluster configuration has redundant components, including multiple network connections and data storage which is multiply connected via a storage area network. Clustering software such as Solaris Cluster is a key component in a Business Continuity solution, and the Solaris Cluster Geographic Edition was created specifically to address that requirement. Solaris Cluster is an example of kernel-level clustering software. Some of the processes it runs are normal system processes on the systems it operates on, but it does have some special access to operating system or kernel functions in the host systems. In June 2007, Sun released the source code to Solaris Cluster via the OpenSolaris HA Clusters community. Solaris Cluster Geographic Edition SCGE is a management framework that was introduced in August 2005. It enables two Solaris Cluster installations to be managed as a unit, in conjunction with one or more Data replication products, to provide Disaster Recovery for a computer installation. 
By ensuring that data updates are continuously replicated to a remote site in near-real time, that site can rapidly take over the provision of a service in the event that the entire primary site is lost as a result of a disaster, either natural or man-made. This is key to minimizing the Recovery point objective (RPO) and Recovery time objective (RTO) for the service. Proxy file system PxFS (Proxy file system) is a distributed, high-availability, POSIX-compliant filesystem internal to Solaris Cluster nodes. Global devices in Sun Cluster are made possible by PxFS. Supported applications Solaris Cluster uses software components called agents, which monitor an application to detect whether it is operating correctly and take action if a problem is detected. Agents for common applications are included, such as Siebel Systems, SAP liveCache, WebLogic Server, Sun Java Application Server, MySQL, Oracle RAC, Oracle E-Business Suite and Samba, among others; there is also a wizard which allows the cluster implementer to create agents for other applications. 
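Conceptually, an agent pairs a periodic health probe with a restart-or-failover policy. The sketch below only illustrates that idea in Python; it is not the actual Solaris Cluster agent interface, and the probe, thresholds, and action names are invented for the example:

```python
import socket

def tcp_probe(host: str, port: int, timeout: float = 2.0) -> bool:
    """Health probe: report whether the monitored service accepts TCP connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def next_action(probe_ok: bool, failures: int, max_local_restarts: int = 3):
    """Tiny fault policy: restart the service locally a few times, then
    declare a failover so a redundant cluster node takes over the service."""
    if probe_ok:
        return "healthy", 0          # service answered; reset the failure count
    failures += 1
    if failures > max_local_restarts:
        return "failover", failures  # hand the service to another node
    return "restart", failures       # try a local restart first
```

A real agent would run the probe on a timer and invoke the cluster framework's own restart and failover mechanisms instead of returning strings.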
Releases Oracle Solaris Cluster 11.2 See also Computer cluster High-availability cluster SunPlex Manager, GUI used to view the status and administer some aspects of Solaris Cluster References External links Solaris Cluster webpage at Oracle OpenSolaris HA Clusters community Sun BluePrint: Using Solaris Cluster and Sun Cluster Geographic Edition with Virtualization Technologies Blogs about Solaris Cluster - Sun Cluster Oasis* Greg Pfister: In Search of Clusters, Prentice Hall, Evan Marcus, Hal Stern: Blueprints for High Availability: Designing Resilient Distributed Systems, John Wiley & Sons, Joseph Bianco, Peter Lees, Kevin Rabito: Sun Cluster 3 Programming: Integrating Applications into the SunPlex Environment, Prentice Hall, Richard Elling, Tim Read: Designing Enterprise Solutions with Sun Cluster 3.0, Prentice Hall, Kristien Hens, Michael Loebmann: Creating Highly Available Database Solutions: Oracle Real Application Clusters (RAC) and Sun Cluster 3.x Software, Prentice Hall, High-availability cluster computing Sun Microsystems software Cluster computing
31265734
https://en.wikipedia.org/wiki/Carole%20Post
Carole Post
Carole Post is the City of Tampa's administrator for development and economic opportunity. She previously served as the chief administrative officer of USF Health at the University of South Florida. She was formerly the executive vice president at New York Law School, and before that, the Commissioner of the New York City Department of Information Technology and Telecommunications (DoITT) and New York City's chief information officer (CIO). She was the first woman to have held such an office in the City of New York. Background Carole Post is a native of Bradenton, Florida. She received a B.S. degree from the University of Florida and a J.D. degree from Seton Hall University School of Law. She is licensed to practice law in New York and Florida. Career Early career After graduating from the University of Florida, Post joined Plan Services, Inc. in Tampa, Florida, a division of Dun & Bradstreet. She rose to a national representative position and was thereafter appointed as an executive director. Post remained at Plan Services, Inc. for five years. Post left her corporate position to attend Seton Hall University School of Law in Newark, New Jersey. Upon graduating, Post joined a private law firm in Palm Beach Gardens, Florida, becoming the first female member of the firm. She worked in the municipal law department, where she represented local municipal governments in Palm Beach County. In 1999, one of her clients, the City of Palm Beach Gardens, hired her as acting city manager, overseeing all city operations. In this position Post first started to deal with matters involving information technology, particularly the operational and technical issues related to the new millennium (the Y2K transition). She served in this capacity until mid-2000. 
New York City Government Department of Buildings In late 2001, Post joined the City of New York, initially as a deputy director in the enforcement division of the New York City Department of Buildings, and later as its executive director of strategic planning. Mayor Bloomberg's Office of Operations In 2006, Post was appointed director of agency services in Mayor Michael Bloomberg's office of operations. Early in her tenure, she launched NYCStat, a website providing access to key municipal reports and statistics. Following the 2008 recession, she led the creation of the NYCStat Stimulus Tracker, which catalogued stimulus funding data to allow NYC agencies and residents to analyze citywide expenses, performance, and job creation metrics. She also led the upgrade of NYC's 311 call center and 311 online service, as well as the NYC.gov website. She also launched the Citywide Performance Reporting (CPR) system, a public dashboard of city agency performance, and developed the Street Conditions Observation Unit (SCOUT), an initiative that directs city inspectors to survey every city street once per month. She also modernized the mayor's management report, the official public record of New York City agencies' annual performance. Department of Information Technology and Telecommunications (DoITT) Post was appointed chief information officer of New York City and commissioner of the NYC Department of Information Technology and Telecommunications (DoITT) in 2009. She unveiled a "technology roadmap" and coordinated the adoption of the mayor's open data law. Following the passage of the law in March 2012, she managed the new data system. Post oversaw the design and rollout of the Citywide IT Infrastructure Services (CITIServ) program, a plan developed in March 2010 to consolidate the IT systems of more than 40 agencies and 50 data centers across the city into one system. The program was projected to save the city up to $100 million in data management costs over five years. 
On March 3, 2011, the first modern data center planned under the CITIServ program opened in Brooklyn. She coordinated with the New York City Economic Development Corporation and private sector sponsors to host the NYC "Big Apps" competition, an annual competition that challenges programmers and developers to use municipal data to build technology products that solve specific city problems. She led negotiations for citywide licensing agreements with software vendors to consolidate dozens of contracts; this was projected to save the city up to $68 million over five years. Post managed DoITT participation in a public-private partnership with Microsoft and SelfHelp Community Services to create the "virtual senior center," which provides home-bound senior citizens in NYC with better access to community services. In late 2011, she launched a citywide program to reduce the "broadband gap" through arrangements with cable franchisees. This enabled deployment of free wireless internet in 30 public parks, upgrades to internet service in community centers and libraries, and installation of expanded fiber cable into commercial and industrial areas of the city. She coordinated a DoITT partnership with the New York City Department of Youth and Community Development and Time Warner Cable to create a learning lab at Harlem's James Weldon Johnson Community Center. The lab, which opened in April 2012, provides free high-speed internet, upgraded computer technology, and e-learning programs for adults and children. Post was named a "Top 50 Government CIO" by InformationWeek Government magazine in March 2011. She was also named "2011 New York State Public Sector CIO of the Year" at the 2011 New York State CIO Academy on April 6, 2011. Post resigned from DoITT to serve as executive vice president and chief strategy officer at New York Law School in April 2012. New York Law School Post joined New York Law School as executive vice president and chief strategy officer on April 12, 2012. 
Soon after joining NYLS, she worked with Dean Anthony Crowell and NYLS faculty on the creation of a new long-term strategic plan for the institution. University of South Florida In October 2016, Post joined the University of South Florida as the deputy chief operating officer for USF Health. References External links Now That Open Data Is Law in New York, Meet Carole Post, the Enforcer. Capital New York. March 21, 2012. Retrieved September 12, 2012. New York City's IT Roadmap. CIO Insight. July 16, 2010. Retrieved September 12, 2012. Carole Post Interviewed at Strata Summit 2011. September 20, 2011. Retrieved September 12, 2012. Living people People from Bradenton, Florida 21st-century women Chief information officers American chief operating officers Year of birth missing (living people)
59881283
https://en.wikipedia.org/wiki/Eric%20Rosenbach
Eric Rosenbach
Eric Brien Rosenbach is an American public servant and retired U.S. Army captain who served as Pentagon Chief of Staff from July 2015 to January 2017 and as Assistant Secretary of Defense for Homeland Defense and Global Security from September 2014 to September 2015. As Chief of Staff, Rosenbach assisted Secretary Ash Carter on the Department of Defense's major challenges of the time, which included increased Russian aggression, the Syrian Civil War, and North Korean missile tests. Born in Colorado Springs, Colorado, Rosenbach received his B.A. in Political Science in 1995 from Davidson College, where he participated in the Army Reserve Officers' Training Corps (ROTC) program. He received a Master in Public Policy from the Harvard Kennedy School of Government in 2004 and a Juris Doctor from Georgetown University Law Center in 2007. He was a Fulbright Scholar from 1995 to 1996. Rosenbach's background is in cybersecurity, in both the public and private sectors. Most recently, he acted as the DoD "cyber czar": from September 2011 to August 2014, he served as the Deputy Assistant Secretary of Defense for Cyber, in which role he oversaw and led the DoD's cybersecurity strategy. Rosenbach continued to oversee cybersecurity as Chief of Staff. From 2000 to 2002 he advised Tiscali, then the largest Internet service provider in Europe, on cybersecurity as its chief security officer, and prior to that, he was an Army intelligence communications officer. Returning to Harvard Kennedy School to teach in 2007, Rosenbach was the executive director of the Belfer Center for Science and International Affairs for three years before going to the DoD. He returned to Harvard again in May 2017 to become co-director of the Belfer Center with Secretary Ash Carter. Early life Eric Rosenbach was born at the US Air Force Academy in Colorado. His father, Dr. William E. Rosenbach, a thirty-year Air Force veteran, served in the U.S. Air Force flying a Lockheed C-130 Hercules in the Vietnam War. 
A professor at the academy, Dr. Rosenbach was an influential force in Eric Rosenbach's decision to join the military through the ROTC program. Education Rosenbach graduated from Gettysburg Area High School in 1991 and played football and basketball while a student there. On an ROTC scholarship at Davidson, Rosenbach was elected student body president, played quarterback for the Davidson football team, and was an ROTC battalion commander. Rosenbach got involved in Davidson's Dean Rusk International Studies Program and found public service appealing. A Davidson article quotes him as saying, "Public service motivated me. It's a rewarding feeling to know you're making a little bit of difference in the world. I figured that I'd be either a history or government teacher, or work in the foreign service." The Dean Rusk program gave Rosenbach a grant to go to Vietnam, the country where his father had flown for the US Air Force and had almost been shot down. The trip inspired him to think deeply about military use of force and US foreign policy. Upon graduation from Davidson, Rosenbach became a Fulbright Scholar and studied privatization in post-communist Bulgaria for a year. He then entered the US Army as an intelligence officer (see Career). Rosenbach attended Harvard Kennedy School from 2002 to 2004 and received a Master in Public Policy. He was a graduate assistant to HKS professors Richard A. Clarke and Graham Allison, both of whom became mentors to him. Upon graduating he enrolled in the Georgetown University Law Center and obtained a Juris Doctor in 2007. Rosenbach learned German at the Volkshochschule Rosenheim in Germany. Career Rosenbach served as the commander of a communications intelligence unit in the US Army for four years, from 1996 to 2000. The unit, which worked closely with the National Security Agency, provided strategic information to support US operations in Bosnia and Kosovo. 
The Central Intelligence Agency named it the top intelligence organization in the U.S. military for two consecutive years. In 2000 Rosenbach left the Army with the rank of captain. He became the chief security officer of Tiscali, an internet telecommunications company that was then the largest Internet service provider in Europe. For two years he was responsible for its cybersecurity. He recollects, "It was such a different environment from the army. I had a fancy company car and flew all over Europe. But then 9/11 happened, and that really jarred me. I realized what I was doing didn't feel rewarding, and went back to school intent on doing something in the public sector." After graduating from HKS in 2004, Rosenbach worked on the core staff of John Kerry's 2004 presidential campaign for Kerry's security advisors Rand Beers and Susan Rice. Rosenbach then served as a professional staff member for the US Senate Select Committee on Intelligence. In that role he led the investigation into whether there were ties between Al-Qaeda and Saddam Hussein in the 9/11 attacks. He also had oversight of the US counterterrorism profile and individual agencies' counterterrorism operations (including those of the CIA and NSA). Concurrently (2005–2007), Rosenbach advised Senator Chuck Hagel as his national security advisor. The Department of Defense Rosenbach became the second-ever Deputy Assistant Secretary of Defense for Cyber in September 2011. In that position he was responsible for creating and implementing the DoD's strategy for US operations in cyberspace. He co-authored Presidential Policy Directive 20, which established principles and processes for US cyber operations and was signed by President Barack Obama in 2012. He also helped design and establish the mission force of US Cyber Command (USCYBERCOM). In September 2014, Rosenbach was confirmed by the US Senate as Assistant Secretary of Defense for Homeland Defense and Global Security. 
In that role he led the DoD's efforts to deter Chinese theft of American intellectual property and to counter Iranian and North Korean cyber attacks against US critical infrastructure. He also dealt with the proliferation of weapons of mass destruction, space operations, and antiterrorism. He helped lead the implementation of the Global Health Security Agenda, with a stress on multi-sector approaches to combating global health threats. Rosenbach led the DoD's domestic response to the Ebola outbreak in 2014 and established new medical safety policies. Rosenbach became Pentagon Chief of Staff in July 2015. As Chief of Staff, Rosenbach was a senior leader of the Department of Defense, an organization with a yearly budget of $550 billion, 2.8 million personnel, and high-stakes operations across the globe. Rosenbach and Carter's many objectives included defeating ISIL, building an effective cyber strategy, and opening all combat positions to female service members. Other major priorities included counterterrorism operations and strategies for the Asia-Pacific, Europe, and Middle East regions. One of Rosenbach's major projects was improving innovation at the DoD. His efforts included the Silicon Valley-based Defense Innovation Unit Experimental (DIUx), the Defense Digital Service (a project Rosenbach conceived of and helped lead), and the Defense Innovation Board. Belfer Center Rosenbach is currently co-director of Harvard Kennedy School's Belfer Center for Science and International Affairs and a Harvard Kennedy School public policy lecturer. With Robby Mook and Matt Rhoades, Rosenbach founded and leads the Belfer Center's Defending Digital Democracy Project, a new initiative aiming to identify and build mitigations for cyber vulnerabilities in democratic elections. Personal life Rosenbach is married and has two children. Distinctions The Meritorious Service Medal and the Knowlton Award. 
The Medal is awarded for "noncombat meritorious achievement or service that is incontestably exceptional and of magnitude that clearly places the individual above his peers". The Award "recognizes individuals who have contributed significantly to the promotion of Army Military Intelligence in ways that stand out in the eyes of the recipients, their superiors, subordinates, and peers[,]... demonstrate the highest standards of integrity and moral character, [and] display an outstanding degree of professional competence". The Director of the Central Intelligence Agency named his intelligence unit the top intelligence organization in the U.S. military for two consecutive years. In February 2015, then Secretary of Defense Chuck Hagel awarded Rosenbach the Secretary of Defense Medal for Outstanding Public Service: "Mr. Rosenbach's professional skill, leadership, and tireless initiative resulted in major contributions to the Department's and the Nation's cyber policy". In November 2016, Secretary Carter awarded Rosenbach the Department of Defense Medal for Distinguished Public Service: "Through his expertise in national security and foreign affairs, he provided invaluable advice and assistance". Works written Find, Fix, Finish: Inside the Counterterrorism Campaigns that Killed Bin Laden and Devastated Al Qaeda details the US transformation from having no cohesive counterterrorism policy pre-9/11 to the all-out war waged against the perpetrators over the following decade. Co-authored with Aki Peritz, a security expert. Military Leadership and Pursuit of Excellence examines the fundamentals of military leadership. Written by Robert L. Taylor; the sixth edition of the book includes fresh perspectives from Rosenbach, who served on the book's editorial team. 
References 1972 births Living people People from Colorado Springs, Colorado Davidson College alumni United States Army officers Recipients of the Meritorious Service Medal (United States) Harvard Kennedy School alumni Georgetown University Law Center alumni Military personnel from Colorado
9008017
https://en.wikipedia.org/wiki/Run%20command
Run command
The Run command on an operating system such as Microsoft Windows and Unix-like systems is used to directly open an application or document whose path is known. Overview The command functions more or less like a single-line command-line interface. In the GNOME desktop environment, the Run command is used to run applications via terminal commands. It can be accessed by pressing . KDE has similar functionality, called KRunner, accessible via the same key binds. The Multics shell includes a run command to run a command in an isolated environment. The DEC TOPS-10 and TOPS-20 Command Processor included a RUN command for running executable programs. In the BASIC programming language, RUN is used to start program execution from direct mode, or to start an overlay program from a loader program. Accessing the Run command Starting with Windows 95, the Run command is accessible through the Start menu and also through the shortcut key . Although the Run command is still present in Windows Vista and later, it no longer appears directly on the Start menu by default, in favor of the new search box and a shortcut to the Run command in the Windows System sub-menu. The Run command is launched in the GNOME and KDE desktop environments by holding . Uses Uses include bringing up webpages; for example, if a user were to bring up the Run command and type in http://www.example.com/, the user's default web browser would open that page. This allows the user to launch not only the http protocol but all URI schemes registered in the OS, together with the applications associated with them, such as mailto and file. In GNOME and KDE, the Run command acts as a location where applications and commands can be executed. 
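As a toy illustration of that dispatch behavior (this is not how any operating system actually implements its Run box; the function and its return strings are invented for the example), the choice between a URI handler and a command launcher can be sketched in Python:

```python
from urllib.parse import urlparse

def dispatch(entry: str) -> str:
    """Decide how a Run-style prompt would treat the typed entry: hand a
    recognized URI scheme to its registered handler (web browser, mail
    client, file manager), otherwise execute the entry as a command."""
    parsed = urlparse(entry)
    if parsed.scheme and (parsed.netloc or parsed.scheme in ("mailto", "file")):
        return "open with handler for " + parsed.scheme + ":"
    return "execute as command"
```

A real implementation would consult the operating system's table of registered URI scheme handlers rather than a hard-coded allow-list.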
See also KDE Plasma 4 Start (command) References External links Essential Windows RUN Commands Customizing Windows Run command 350+ Run Commands for Windows XP, Vista, 7, 8 & 8.1 156 Useful Run Commands Alternative to the standard Windows Run-Dialog. Windows components
684511
https://en.wikipedia.org/wiki/University%20of%20Engineering%20and%20Technology%2C%20Lahore
University of Engineering and Technology, Lahore
The University of Engineering and Technology, Lahore (UET Lahore) is a public university located in Lahore, Punjab, Pakistan, specializing in science, technology, engineering and mathematics (STEM) subjects. It is the oldest and one of the most selective engineering institutions in Pakistan. History and overview Founded in 1921 in Mughalpura, a suburban area of Lahore, as Mughalpura Technical College, it later became the MacLagan Engineering College, a name given to it in 1923 when Sir Edward Douglas MacLagan, the then Governor of the Punjab, laid the foundation stone of the main building, now called the Main Block. In 1932, it was affiliated with the University of the Punjab for the award of bachelor's degrees in electrical and mechanical engineering. In 1939, the name was changed again to Punjab College of Engineering and Technology, and a civil engineering degree was also started at the college. At the time of partition, part of the college relocated to India as the East Punjab College of Engineering. In 1954, a bachelor's degree program in mining engineering was started. In 1962, it was granted a charter and was named the West Pakistan University of Engineering and Technology, Lahore. During the 1960s, bachelor's degree programs were started in chemical engineering, petroleum and gas engineering, metallurgical engineering, architecture, and city and regional planning. In 1972, it was officially renamed the University of Engineering and Technology, Lahore. By the 1970s, it had established over a score of master's degree programs in engineering, architecture, city and regional planning and allied disciplines. Several Ph.D. degree programs were also started. A second campus of the university was established in 1975 in Sahiwal; it was relocated to Taxila in 1978 and became an independent university in 1993, called the University of Engineering and Technology, Taxila. 
Today, it is widely considered one of the best and most prestigious engineering universities in Pakistan; more than 50,000 students apply for admission every year. As of 2016, the university had a faculty of 881, of whom 257 held doctorates, and a total of 9,385 undergraduate and 1,708 postgraduate students. It collaborates closely with the University of South Carolina, the University of Manchester and Queen Mary University of London, and has conducted research funded by Huawei, Cavium Networks, Microsoft and MontaVista. It is one of the highest-ranked universities in Pakistan: domestic rankings place it as the fifth-best engineering school in the country, while the QS World University Rankings placed UET at 701st in the world every year between 2013 and 2016, and again at 701st in 2018, before a later QS ranking moved it into the 801–1000 band. In the QS Asia ranking it is currently placed at No. 200.

Location

The campus is situated on the Grand Trunk Road (GT Road), a few kilometres from the Mughal-era Shalimar Gardens.
Sub campuses and constituent colleges

Faculties and departments

The university consists of the following faculties and departments:

Faculty of Architecture and Planning: Department of Architecture; Department of City and Regional Planning; Department of Product and Industrial Design
Faculty of Chemical, Metallurgical and Polymer Engineering: Department of Chemical Engineering; Department of Metallurgical and Materials Engineering; Department of Polymer and Process Engineering
Faculty of Civil Engineering: Department of Civil Engineering; Department of Transportation Engineering and Management; Department of Architectural Engineering and Design; Institute of Environmental Engineering and Research
Faculty of Earth Sciences and Engineering: Department of Petroleum and Gas Engineering; Department of Mining Engineering; Department of Geological Engineering
Faculty of Electrical Engineering: Department of Electrical Engineering; Department of Computer Engineering; Department of Computer Science
Faculty of Mechanical Engineering: Department of Mechanical Engineering; Department of Automotive Engineering; Department of Industrial and Manufacturing Engineering; Department of Mechatronics and Control Engineering
Faculty of Natural Sciences, Humanities and Islamic Studies: Department of Humanities, Social Sciences and Modern Languages; Department of Mathematics; Department of Physics; Department of Chemistry; Department of Islamic Studies
Institute of Business and Management (IBM)

Research centers

The university consists of the following research centers:

Al-Khawarizmi Institute of Computer Science (KICS)
Huawei – UET Joint TeleComm and IT Center
Center for Language Engineering
ZTE – UET Joint TeleComm Center
Laser and Optronics Center
Energy Research Technologies Development Center
Institute of Environmental Engineering and Research
DSP and Wireless Communication Center
Center of Excellence in Water Resources Engineering Research Center
Software Engineering Center
Manufacturing Technologies Development Center
Automotive Engineering Center
Nano Technology Research Center
Innovation and Technology Development Center
Engineering Services UET Pakistan (Pvt) Limited (ESUPAK)
Center for Energy Research and Development
Bio Medical Engineering Center

More than 870 students are foreign students and more than 1,000 are female students.

Teaching faculty

The faculty consists of 741 people, including 14 international faculty members; around 122 have doctoral degrees. Its faculty holds one Tamgha-e-Imtiaz, one Sitara-i-Imtiaz, one Izaz-e-Kamal Presidential Award and nine HEC Best Teacher Awards. The university has established a Directorate of Research, Extension and Advisory Services, which promotes and organizes research activities. The Al-Khwarizmi Institute of Computer Science is a notable centre of computer science research in Pakistan.

Co-curricular, extracurricular activities and student societies

The university has a sports complex consisting of a swimming pool, tennis court, table tennis court, squash court and a cricket stadium that is also used for athletics, as well as several football grounds. Apart from sports facilities, there are societies to promote co-curricular activities.
These include:

AIChE
Aks UET Photography Society
ASHRAE
ASME Student Section
Blood Donors Society (BDS)
CESA
ACI Student Chapter
Environmental and Horticultural Society (EHS)
ICE Student Chapter UET Lahore
IEEE UET Student Branch
IET UET Lahore Section
Industrial and Manufacturing Engineers' Club
Literary Society
Mechatronics Club
SGE
Society of Mining Engineers (SOME)
Society of Product and Industrial Design (SPID)
Society of Petroleum Engineers (SPE), UET Lahore Student Chapter
UET Photography Club (UPC)
UET Science Society (SS)
SPACE Student Chapter
UET-ACM
UET Debating Society
UET Dramatics Society
UET Media Society (UMS)
UET Tribune

The National Library of Engineering Science

The National Library of Engineering Science, inaugurated by Faisal bin Abdul-Aziz Al Saud, is the central library of the university, with seating for 400 readers and more than 125,000 volumes of books, 22,000 volumes of bound serials and 600 issues of scientific and technical serials in diverse fields. The library was recently chosen by the Higher Education Commission to serve as the primary resource centre for engineering and technical education. It is a three-story building in front of Allah Hu Chowk.
Notable alumni

Fawad Rana, owner of Lahore Qalandars
Parvez Butt, former chairman of the Pakistan Atomic Energy Commission (PAEC)
Junaid Jamshed, pop singer and religious scholar
Jawad Ahmad, pop singer
Junaid Khan, rock singer and actor
Najam Sheraz, pop singer
Faakhir Mehmood, pop singer
Sami Khan, actor
Ahsan Iqbal, politician, member of the Pakistan Muslim League (N)
Mir Nooruddin Mengal, slain leader and former acting president of the Balochistan National Party (Mengal)
Fazal Ahmad Khalid, former vice chancellor of the university
Mehreen Faruqi, Australian politician
Peer Zulfiqar Ahmad Naqshbandi, Islamic scholar
Mosharraf Hossain, Bangladeshi politician

Alumni in foreign university faculties

Adil Najam, dean of the Pardee School of Global Studies at Boston University, former vice chancellor of the Lahore University of Management Sciences, and former associate professor at the International Institute for Sustainable Development and at Tufts University, USA
Ahsan Kareem, Robert M. Moran Professor of Engineering and director of the NatHaz Modeling Laboratory at the University of Notre Dame, USA
Ishfaq Ahmad, fellow of the IEEE, professor at The University of Texas at Arlington, USA

References

External links

UET Lahore official website
Industrial Open House and Career Fair, UET Lahore
Institute of Business and Management UET Lahore
Rachna College of Engineering and Technology Gujranwala
UET Lahore Kala Shah Kaku campus
UET Lahore Faisalabad campus
UET Lahore Narowal campus
Pakistan Engineering Council
10858841
https://en.wikipedia.org/wiki/Video%20game%20content%20rating%20system
Video game content rating system
A video game content rating system is a system used for the classification of video games based on suitability for target audiences. Most of these systems are associated with and/or sponsored by a government, and are sometimes part of the local motion picture rating system. The utility of such ratings has been called into question by studies finding, for example, that 90% of teenagers claim that their parents "never" check the ratings before allowing them to rent or buy video games; as a result, calls have been made to "fix" the existing rating systems. Video game content rating systems can be used as the basis for laws that cover the sale of video games to minors, such as in Australia. Rating checking and approval is part of game localization when games are being prepared for distribution in other countries or locales. These rating systems have also been used by stores to voluntarily restrict sales of certain video games, such as the German retailer Galeria Kaufhof's removal of all video games rated 18+ by the USK following the Winnenden school shooting.

Comparison table

A comparison of current video game rating systems, showing age on the horizontal axis. Note, however, that the specific criteria used in assigning a classification can vary widely from one country to another, so a color code or age range cannot be directly compared between countries.

Key:
White – No restrictions: suitable for all ages / aimed at young audiences / exempt / not rated / no applicable rating.
Yellow – No restrictions: parental guidance is suggested for the designated age range.
Purple – No restrictions: not recommended for a younger audience but not restricted.
Red – Restricted: parental accompaniment required for younger audiences.
Black – Prohibitive: exclusively for an older audience / purchase age-restricted / banned.

In the above table, italics indicate an international organization rather than a single country.
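The colour key above is essentially an ordinal scale of restriction levels, which is what makes a cross-system comparison table possible at all. As a purely illustrative sketch (the rating labels and level assignments below are invented examples, not official equivalences between systems), such a key might be encoded like this:

```python
# Hypothetical encoding of the comparison-table key: each colour maps to an
# ordinal restriction level so ratings from different systems can be lined up
# on a shared axis. The sample entries are illustrative only -- as the text
# notes, real criteria differ by country and cannot be compared directly.
from enum import IntEnum

class Restriction(IntEnum):
    NONE = 0             # white: suitable for all ages / exempt / not rated
    GUIDANCE = 1         # yellow: parental guidance suggested
    NOT_RECOMMENDED = 2  # purple: not recommended, but not restricted
    RESTRICTED = 3       # red: parental accompaniment required
    PROHIBITIVE = 4      # black: adults only / purchase restricted / banned

# Illustrative entries keyed by (system, label):
sample = {
    ("PEGI", "3"): Restriction.NONE,
    ("PEGI", "12"): Restriction.GUIDANCE,
    ("PEGI", "18"): Restriction.PROHIBITIVE,
    ("ESRB", "E"): Restriction.NONE,
}

# Within a single system the levels are comparable:
assert sample[("PEGI", "18")] > sample[("PEGI", "3")]
```

Because the levels are only ordinal, comparisons are meaningful within one system's column; across systems the table can align age ranges visually but not equate their criteria.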
Initial controversy

Like other forms of media, video games have been the subject of debate among leading professionals, and of calls for restriction and prohibition. Criticism often centres on contentious content such as graphic violence, virtual sex, violent and gory scenes, partial or full nudity, drug use, portrayal of criminal behavior, or other provocative and objectionable material. Video games have also been studied for links to addiction and aggression. There have been many studies linking violent video game play with increased aggression. A meta-analysis of studies from both Eastern and Western countries yielded evidence that "strongly suggests that exposure to violent video games is a causal risk factor for increased aggressive behavior, aggressive cognition, and aggressive affect and for decreased empathy and prosocial behavior." Other groups have argued to the contrary that few if any scientifically rigorous studies back up these claims, and that the video game industry has become an easy target for the media to blame for many contemporary issues. While meta-analyses such as the one cited above find a link between violent game play and short-term aggressive behavior, other studies find no concrete link to long-term aggression, bullying or criminal behavior. Researchers have also proposed potential positive effects of video games on aspects of social and cognitive development and psychological well-being. Action video game players have been shown to have better hand-eye coordination and visuo-motor skills than non-players, including resistance to distraction, sensitivity to information in peripheral vision, and the ability to count briefly presented objects.
Rating systems

Argentina

Law 26.043 (passed in 2005) states that the National Council of Children, Youth and Family (Consejo Nacional de la Niñez, Adolescencia y la Familia), in coordination with the National Institute of Cinema and Audiovisual Arts, is the government agency that assigns age ratings. The Argentine Game Developer Association (Asociación de Desarrolladores de Videojuegos Argentina) was critical of the law. There are three ratings: "Suitable for all public", "Suitable for those over 13 years of age" and "Suitable for those over 18 years of age".

Australia

The Australian Classification Board (ACB) is a statutory classification body formed by the Australian Government which has classified films, video games and publications for exhibition, sale or hire in Australia since its establishment in 1970. The Classification Board was originally incorporated in the Office of Film and Literature Classification (OFLC), which was dissolved in 2006. Originally part of the Attorney-General's Department and overseen by the Minister for Justice, the ACB is now a branch of the Department of Communications and the Arts, which provides administrative support to the Board and is overseen by the Minister for Communications and the Arts. Decisions made by the Board may be reviewed by the Australian Classification Review Board.

Austria

There is no uniform ratings system in Austria, and the nine states regulate content in different ways. The two main systems are PEGI (applied in Vienna) and Germany's USK system (applied in Salzburg).

Brazil

The advisory rating system ClassInd (Classificação Indicativa in Portuguese) rates films, games and television shows in Brazil. It is controlled by the Ministry of Justice (Ministério da Justiça).

Chile

Games are classified by the Council of Cinematographic Classification (Consejo de Calificación Cinematográfica), a central agency under the Ministry of Education.
The current age ratings are:

TE (Todo Espectador) – General audience (no objectionable content).
Mayores de 8 años – Not recommended for children younger than 8 years.
Mayores de 14 años – Not recommended for children younger than 14 years.
Mayores de 18 años – Not recommended for children younger than 18 years.

In addition to these ratings, an educational category also exists.

China

China introduced a pilot content rating system in December 2020 called the Online Game Age-Appropriateness Warning, overseen by the governmental agency (CADPA). Games with online components are required to show one of three classifications on websites and registration pages: green for "8+" (appropriate for players 8 years and older), blue for "12+", and yellow for "16+".

Europe

The Pan European Game Information (PEGI) system is a European video game content rating system established to help European parents make informed decisions when buying computer games, using logos on game boxes. It was developed by the Interactive Software Federation of Europe (ISFE) and came into use in April 2003, replacing many national age rating systems with a single European system. The PEGI system is now used in more than thirty-one countries and is based on a code of conduct, a set of rules to which every publisher using the PEGI system is contractually committed. PEGI self-regulation comprises five age categories and seven content descriptors that advise on the suitability and content of a game for a certain age range. The age rating does not indicate the difficulty of the game or the skill required to play it.

Germany

Unterhaltungssoftware Selbstkontrolle (USK) (Entertainment Software Self-Regulation) is Germany's software rating organization, founded in 1994.
USK 0 – Playable for all ages
USK 6 – Ages 6 and over
USK 12 – Ages 12 and over
USK 16 – Ages 16 and over
USK 18 – Ages 18 and over

Indonesia

The Indonesian Game Rating System (IGRS) is the official video game content rating system established by the Indonesian Ministry of Communication and Informatics in 2016. IGRS rates games that are developed and published in Indonesia. There are five rating classifications based on game content, which covers the use of alcohol, cigarettes, drugs, violence, blood, language, sexual content, and so on:

SU ("Semua Umur", All Ages in English) – Playable for all ages.
3+ – Ages 3 and over. No restricted content is shown, including adult content, use of drugs, gambling simulation, and online interactions.
7+ – Ages 7 and over. No restricted content is shown, including adult content, use of drugs, gambling simulation, and online interactions.
13+ – Ages 13 and over. Restricted content is partially shown, including light use of drugs and alcohol by figures/background characters, cartoon violence, mild language, gambling simulation, horror themes, and online interactions.
18+ – Ages 18 and over. Restricted content is mostly if not entirely shown, including use of drugs and alcohol by main characters, realistic violence (blood, gore, mutilation, etc.), crude humor, gambling simulation, horror themes, and online interactions.

As of November 2019, various imported PlayStation titles released since then have been rated by the IGRS, after SIE Asia opened its Indonesian office. Those titles are also marked as "Official Indonesia Products".

Iran

The Entertainment Software Rating Association (ESRA) is a governmental video game content rating system used in Iran. Games that have been exempted from rating are de facto banned from sale in Iran.
+3 – Ages 3 and over
+7 – Ages 7 and over
+12 – Ages 12 and over
+15 – Ages 15 and over
+18 – Ages 18 and over

In practice, the ratings apply largely to PC and mobile games, as console games are not officially released for the Iranian market.

Japan

In Japan, content ratings are not required by law, but most commercial video game publishers follow industry self-regulation. Console manufacturers require that games be rated by CERO. Distributors of PC games (mostly dating sims, visual novels, and eroge) require games to have the approval of EOCS or the Japan Contents Review Center. These ratings are referred to by local governments, and the Ordinance Regarding the Healthy Development of Youths (青少年健全育成条例) prohibits retailers from supplying games rated 18+ to persons under 18. Dōjin soft is not subject to such restrictions, but distribution of obscene material can be punished under Article 175 of the Penal Code of Japan.

Computer Entertainment Rating Organization

The Computer Entertainment Rating Organization (CERO) is an organization that rates video games in Japan, with different rating levels that inform the customer of the nature of the product and the age group it suits. It was established in June 2002 as a branch of the Computer Entertainment Supplier's Association, and became an officially recognized non-profit organization in December 2003. It currently consists of five age categories and nine content descriptors:

A – All ages. Formerly "All."
B – Ages 12 and over. Formerly "12."
C – Ages 15 and over. Formerly "15."
D – Ages 17 and over.
Z – Ages 18 and over only. Formerly "18." This is the only rating that is legally enforced.
CERO – Assigned to free demos and trial versions of games.
審査予定 – Assigned to games which are currently awaiting classification.

Ethics Organization of Computer Software

The Ethics Organization of Computer Software (EOCS, or Sofurin) is an incorporated association that rates PC games in Japan. It was established on November 20, 1992, and was incorporated in 2009.
The association also works to crack down on copyright infringement of PC games for the companies it represents, and sponsors activities to help PC game sales. The current ratings are:

General Software – All ages.
General Software (recommended for ages 12 and over)
General Software (recommended for ages 15 and over)
Software banned from sale to persons under 18

Japan Contents Review Center

The Japan Contents Review Center is a cooperative that reviews adult videos and adult PC games in Japan. The organization was founded on December 1, 2010, after the dissolution of the Content Soft Association (CSA).

Mexico

On November 27, 2020, the Secretariat of the Interior (SEGOB) published a new set of guidelines in the Official Journal of the Federation called Lineamientos Generales del Sistema Mexicano de Equivalencias de Clasificación de Contenidos de Videojuegos (General Guidelines of the Mexican System of Classification Equivalencies for Video Game Content). It states that all games distributed in Mexico will carry Mexico's own set of ratings effective May 27, 2021, replacing the ESRB rating system previously in use, while remaining in accordance with it. The ratings are as follows:

A (Todo Público) – For all ages.
B (+12 Años) – Content for teens 12 and over.
B15 (+15 Años) – Content for ages 15 and over.
C (Adultos +18 Años) – Content not suitable for those under 18.
D (Exclusivo Adultos) – Extreme and adult content.
P (Etiquetado Pendiente) – Content pending classification.

New Zealand

The Office of Film and Literature Classification (OFLC) is the government agency in New Zealand responsible for the classification of all films, videos, publications, and some video games in New Zealand. It was created by the Films, Videos, and Publications Classification Act 1993 (FVPC Act), replacing various film classification acts, and is an independent Crown entity in terms of the Crown Entities Act 2004.
The head of the OFLC is called the Chief Censor, maintaining a title that has described the government officer in charge of censorship in New Zealand since 1916. The current ratings are:

G – Can be shown and sold to anyone.
PG – Films and games with a PG label can be sold, hired, or shown to anyone; guidance from a parent or guardian is recommended for younger viewers.
M – Films and games with an M label can be sold, hired, or shown to anyone, but are more suitable for mature audiences 16 years and over.
R13 – Restricted to persons 13 years and over.
R15 – Restricted to persons 15 years and over.
R16 – Restricted to persons 16 years and over.
R18 – Restricted to persons 18 years and over.
R – Restricted to a particular class of people.

North America

The Entertainment Software Rating Board (ESRB) is a self-regulatory organization that assigns age and content ratings, enforces industry-adopted advertising guidelines, and ensures responsible online privacy principles for computer and video games and other entertainment software in Canada and the United States. PEGI ratings are used on some French-language games sold in Canada. Although the ESRB is self-regulatory, in Canada games are required by law to be rated and/or restricted, with requirements varying at the province and territory level. ESRB ratings can also be found on games for Nintendo systems in Malaysia, Saudi Arabia, Singapore, and the United Arab Emirates. The system was used in Mexico as well until it was replaced by a local rating system on May 27, 2021. A similar system exists for arcade video games, enforced by the American Amusement Machine Association (AAMA) and the Amusement and Music Operators Association (AMOA). Called the Parental Advisory System, it uses three colors for ratings: green (Suitable for All Ages), yellow (Mild Content), and red (Strong Content).
Stickers displaying the ratings are placed on the game marquees, and the rating can also be displayed during attract mode if the game's developer or publisher chooses to do so.

Russia

The age classification of information products is a statutory classification scheme created by the Russian government after the enactment of Federal Law no. 436-FZ of 23 December 2010, "On Protecting Children from Information Harmful to Their Health and Development", which has classified films, video games and publications for exhibition, sale or hire in Russia since 1 September 2012. The Ministry of Culture provides administrative support for the classification.

Saudi Arabia

The General Commission for Audiovisual Media (GCAM) is responsible for the age rating of films, television programs and interactive games.

Singapore

The Info-communications Media Development Authority (IMDA) is a statutory board of the Singapore Government which regulates films, television programs and video games in Singapore.

Slovakia

Jednotný systém označovania (English: Unified System of Age Rating/Labeling) (JSO) is a statutory system of the Ministry of Culture of Slovakia under act 589/2007 which regulates the age restriction of films, television programs and video games in Slovakia. The current age ratings are:

"Teddy bear's head" – Content targeted towards children younger than 12 years.
U – General audience (parental advisory recommended for children younger than 7 years).
7 – Not recommended for children younger than 7 years.
12 – Not recommended for people younger than 12 years.
15 – Not recommended for people younger than 15 years.
18 – Prohibited for minors under 18 years of age.

In addition, the educational game ratings are:

-7 – Targeted towards children younger than 7 years.
7+ – Appropriate for children older than 7 years.
12+ – Appropriate for people 12 years and over.
15+ – Appropriate for people 15 years and over.
The labeling is mandatory for all physical releases (including games redeemable from gift cards), but there is no legislative basis for labeling electronic releases, for which the PEGI rating is shown instead.

South Africa

The South African Film and Publication Board (FPB) is a statutory classification body formed by the South African Government under the Films and Publications Act of 1996 which classifies films, music, television programmes, and video games for exhibition, sale or hire in South Africa. Distributors and exhibitors are legally compelled to comply with the age ratings.

South Korea

The Game Rating and Administration Committee (게임물관리위원회 Geimmul Gwanri Wiwonhoe; GRAC) is the South Korean video game content rating board. A governmental organization, the GRAC rates video and computer games to inform customers of the nature of game content.

Taiwan

The Game Software Rating Regulations (遊戲軟體分級辦法), also translated as Game Software Rating Management Regulations, is the video game content rating system used in Taiwan.

United Arab Emirates

The National Media Council (NMC) is a body of the federal U.A.E. government which regulates all aspects of media production, publication, and media trade in the United Arab Emirates. The body was established under Federal Law (1) of 2006, and by 2013 the NMC had full authority over the media market in the country. In 2018, the NMC introduced local age rating systems for various media, including video games available at retail. In June 2021, the Ministry of Culture & Youth launched the Media Regulatory Office to execute a number of functions and tasks previously under the National Media Council, following a restructure of the federal U.A.E. government approved in July 2020.
United Kingdom

The British Board of Film Classification (BBFC), originally the British Board of Film Censors, is a non-governmental organisation, funded by the film industry, responsible for the national classification of films within the United Kingdom. It has a statutory requirement to classify videos and DVDs, but it no longer has responsibility for rating video games in the UK; this role has passed to the Video Standards Council (formerly known as the VSC Rating Board). In July 2012, the VSC Rating Board became the sole statutory video games regulator for the UK. The VSC Rating Board has been a PEGI administrator since 2003 and accordingly uses the PEGI criteria to classify video games. The UK Interactive Entertainment Association, a UK industry trade group, works with the VSC to help properly label such games and provide informational material to parents. Games featuring strong pornographic content, or ancillary mini-games included with a DVD feature, are still rated by the BBFC.

International

IARC

Some app stores that support IARC use its ratings in countries and regions where there is no local rating system. The classification standard adopted by IARC is the same as that of PEGI. The rating is not recognized in some countries.

Usage

The image below presents outdated usage of various video game content rating systems around the world. Countries filled with gradients use several rating systems.

See also

International Age Rating Coalition
Mobile software content rating system
Motion picture content rating system
Research on the effects of violence in mass media
Television content rating system
Video game controversy

References

External links

Video games ratings face overhaul
http://www.gamesindustry.biz/articles/tiga-responds-to-byron-review
http://www.gamesindustry.biz/articles/ELSPA-concerned-by-Byron-proposals
http://www.esrb.org/ratings/ratings_guide.jsp
6322954
https://en.wikipedia.org/wiki/Keith%20Waters
Keith Waters
Keith Waters (born 1962 in Kent, England) is a British animator who is best known for his work in the field of computer facial animation. He has received international awards from Parigraph, the National Computer Graphics Association and the Computer Animation Film Festival.

Early life

Keith Waters was born in Kent in 1962 and attended Sevenoaks School. He received his PhD from Middlesex University (UK) in 1988 after completing a BA in Graphic Design at Cat Hill, Barnet. His early work on algorithms for face animation in 1986 allowed him to transfer from an MPhil to a PhD while studying under the supervision of Paul Brown and John Vince at Middlesex Polytechnic's Centre for Advanced Studies in Computer Aided Art and Design. His studies required numerous trips to Bounds Green to use the computing facilities within the school of engineering. Waters is best known for his work in computer facial animation, which includes a muscle-based model for facial animation, a physically based skin tissue model, and a visual text-to-speech system called DECface. He is a co-author of the book Computer Facial Animation, a guide to facial animation; the first edition was published in 1995 and a second edition in 2008. His muscle algorithms for face animation were widely used in the computer film industry, most notably by Pixar, which first used the technique in its animated short Tin Toy. Waters was the director of engineering device software at Nexage, prior to that a principal architect at Akamai Technologies Inc., and previously a director of research at Orange Labs, Boston, MA, USA. He has been active in human-computer interaction (HCI) and has published scientific papers on novel applications. He has been engaged with the W3C in the development of Web standards for the mobile Web.
Professional career

After graduating from the Centre for Advanced Studies in Computer Aided Art and Design (now the Lansdown Centre for Electronic Arts) at Middlesex University in 1988 with a PhD in computer graphics, Keith Waters worked for Schlumberger in Palo Alto and then at its Research Lab for Computer Science in Austin, Texas, where he worked on parallel CM-2 data visualization. In 1991 he joined the Cambridge Research Lab of Digital Equipment in Boston, MA, where he continued to work on user interfaces, including DECface, the visual equivalent of the DECtalk text-to-speech engine. Later, he went on to create the FaceWorks product, which was used at Comdex. While at Compaq he invented a variety of user interfaces, including the first Smart Kiosk, the invisible mouse, an image-based touchscreen and a wallable macrodevice. He worked on high-performance face animation techniques for film and television at LifeF/X before joining Orange in 2001, where he became a senior expert in mobile services, developing next-generation mobile Web technologies for high-performance open-source devices. More recently he was a principal architect at Akamai Technologies Inc., assisting them with their mobile strategy.

Awards

Keith Waters received international awards from Parigraph, the National Computer Graphics Association and the Computer Animation Film Festival for his animation shorts on face animation, in particular for the 1986 sequences of the Queen and Margaret Thatcher. The computer-generated characters were generated from the animated puppets of Spitting Image, which were kindly moulded specifically for his research by Roger Law and Peter Fluck.
See also

White Heat Cold Logic (2008)

References

Further reading

A muscle model for animating three-dimensional facial animation
DECface: An automatic lip-synchronization algorithm for synthetic faces
"Japanese Put a Human Face on Computers", New York Times, Andrew Pollack, June 28, 1994
Frederic Parke and Keith Waters, Computer Facial Animation, 1st edition, 1996, A. K. Peters Press Ltd.
A Wallable Macrodevice

External links

Keith Waters at Interaction Design Foundation
MTS system architecture
MTS System Architecture describes the software organization of the Michigan Terminal System, a time-sharing computer operating system in use from 1967 to 1999 on IBM S/360-67, IBM System/370, and compatible computers.

Overview

The University of Michigan Multi-Programming Supervisor (UMMPS) has complete control of the hardware and manages a collection of job programs. One of the job programs is MTS, the job program with which most users interact. MTS operates as a collection of command language subsystems (CLSs), one of which allows the execution of user programs. MTS provides a collection of system subroutines that are available to CLSs, user programs, and MTS itself. Among other things, these system subroutines provide standard access to Device Support Routines (DSRs), the components that perform device-dependent input/output.

Organization

The system is organized as a set of independent components with well-defined interfaces between them. This idea is neither new nor unique, but MTS components are generally larger, the interfaces between components more rigid, and a component communicates with fewer other components than in many systems. As a result, components are more independent of each other and it is easier to replace one component without affecting others. The interface with the supervisor is the same for all components, and very few special cases are allowed; for example, all input/output operations are done using the same supervisor facilities whether the input/output is for a card reader, a paging device, or any other device. Most access to supervisor services is via system subroutines that issue the necessary Supervisor Call instructions (SVCs) rather than by direct use of SVCs. Control blocks are accessed only indirectly, by calls to subroutines within the component that "owns" the control block. The interfaces used by user programs are the cleanest of all.
User programs may never refer directly to any system control block, neither to read nor to change it, because the virtual memory segment(s) that contain system control blocks (the system segments) are removed from a job's virtual address space when a user mode program is running. The subroutine interfaces available to user programs are also used by most other parts of the system (system mode programs, CLSs, ...), even though components running in system mode do have access to the "system" virtual memory segment(s). Transitions from user mode to system mode and back are managed by a special protected set of subroutine interfaces known as "the gate" (initially developed at Wayne State University).

The programming effort for MTS is divided vertically rather than horizontally: one or two individuals are assigned responsibility for a component and then follow it from design through implementation and maintenance. The responsible person has considerable freedom to design the internal structure of the component and even to extend interfaces, so long as all appropriate existing interfaces are maintained unchanged.

Programming languages and system level debugging

The supervisor, most job programs, and large parts of MTS, including many DSRs and CLSs, are written in 360/370 assembler language. A few job programs and portions of MTS, including some DSRs and CLSs, are written in higher-level languages such as Plus or GOM. User programs are written in a wide range of languages, from assembler to any of the higher-level languages that are available. Most components of the system, including user programs, CLSs, and subroutines loaded in shared virtual memory, can be debugged, and new versions of many can be installed, while the system is running without requiring a system shutdown. It is possible to substitute a private copy of all components except the supervisor and parts of some job programs.
A "test" version of the MTS job program (TMTS) is available to allow testing in the regular production environment. SWAT is an interface that allows the Symbolic Debugging System, which is normally used to debug user programs, to be used to debug MTS. $PEEK is a privileged MTS command that uses Program Event Recording (PER) and other facilities to facilitate debugging one job program from another. Components that cannot be debugged in this way can be debugged by running in an MTS virtual machine (a user program). Supervisor University of Michigan Multi-Programming Supervisor (UMMPS) is the name of the MTS supervisor. UMMPS is the only portion of the system that runs in S/360 supervisor state. It runs with virtual memory (relocation) turned off and with hardware interrupts disabled. With multi-processor configurations it may be executing on more than one processor concurrently. UMMPS is what today would be called a microkernel, although UMMPS was developed long before that term was in common use. 
To jobs, UMMPS appears to be an extension of the S/360 or S/370 hardware. It is responsible for:

- allocating all hardware resources (processors, real memory, input/output devices),
- scheduling I/O operations,
- processing all hardware interrupts, including page-faults and program interrupts due to errors in job programs,
- implementing virtual memory, including:
  - the allocation of VM addresses,
  - managing segment and page tables,
  - providing protected or read-only memory by setting storage keys,
  - managing memory reference and change bits,
  - managing named address spaces (NASs),
  - determining when and which pages should be moved between real memory and secondary storage to implement demand paging,
- providing services to job programs that issue Supervisor Call (SVC) and Monitor Call (MC) instructions, including:
  - starting and terminating jobs,
  - initiation of input/output operations (channel programs),
  - scheduling timer interrupts,
  - communication with the system operator,
  - providing inter-task communication services,
  - allowing jobs to acquire and release software locks,
  - allowing jobs to enter and leave user and system mode, where user mode programs do not have access to some virtual memory segments or to the full range of SVCs,
  - providing services to allow the synchronization of job programs,
  - providing shadow segment and page tables and other services that allow job programs to provide virtual machine services,
- simulating a few machine instructions that are present on some, but not all, models of the S/360 or S/370 computers,
- simulating the Branch on Program Interrupt (BPI) pseudo instruction,
- machine check error recovery,
- writing job dumps (making a snapshot of the current execution state of a job by writing out all real memory, all the job's virtual memory, the general registers, and the program status word to magnetic tape),
- tracking the amount of processor time used and the number of page-ins for jobs,
- maintaining the time-of-day clock, and
- assisting in the creation of diagnostic trace tapes.
After initialization UMMPS is entirely interrupt driven. The interrupts may be due to: supervisor (SVC) or monitor (MC) call instructions issued by job programs to request services; page-fault interrupts for virtual memory pages that are not in real memory when referenced by a job program; program interrupts caused by abnormal conditions in job programs; timer interrupts on behalf of job programs or used internally within the supervisor; interrupts from the input/output subsystem; machine check interrupts; external (operator initiated) interrupts; and interrupts from other processors in a multiprocessor configuration. A program interrupt in supervisor state is a system failure that results in a supervisor dump (a "Super Dump", in which the machine state and the contents of all real memory are written to magnetic tape) followed by a system restart (re-IPL).

Branch on Program Interrupt (BPI)

The Branch on Program Interrupt (BPI) pseudo instruction provides a simple way for a sequence of code to retain control following a program interrupt. This can be useful to test for valid addresses in a parameter list, to catch overflow, underflow, and other exceptions during calculations, or in any other situation where a program interrupt is possible. BPIs can be used at very low cost in the usually more common case where no program interrupt occurs. UMMPS implements the BPI pseudo instruction using a special type of NOP instruction. The form of the BPI instruction is:

  BPI  M2,D2(B2)      [RX]
or
  BC   0,D2(M2,B2)    [RX]

   Op Code    Mask1   Mask2   Base   Displacement
  +----------+-------+-------+------+--------------+
  |  x'47'   |   0   |  M2   |  B2  |      D2      |
  +----------+-------+-------+------+--------------+
  0          8       12      16     20            31

where Mask1 is always zero, Mask2 is a name or value as described in the table below, and the base and displacement specify a branch address. Several BPI instructions may be given in succession.
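The mask-matching rule can be illustrated with a short sketch. This is not MTS code: the interrupt categories and condition codes come from the category table that follows, while the function and its representation of interrupts are invented for illustration.

```python
# Hedged sketch (not MTS code) of BPI Mask2 matching.  Per the category
# table: interrupts 1-3 belong to OPCD (mask bit 8), 4-7 to OPND (bit 4),
# 8-11 to OVDIV (bit 2), and 12-15 to FP (bit 1).

CATEGORY_BIT = {n: bit
                for bit, nums in {8: range(1, 4), 4: range(4, 8),
                                  2: range(8, 12), 1: range(12, 16)}.items()
                for n in nums}

def bpi_branch(mask2, interrupt_number):
    """Return the condition code if the BPI branch would be taken, else None."""
    if mask2 & CATEGORY_BIT[interrupt_number]:
        # The condition code identifies which interrupt within the
        # category occurred (per the table: 1 -> 1, 4 -> 0, 8 -> 0, 12 -> 0, ...).
        return interrupt_number % 4
    return None  # no match: normal program-interrupt processing occurs
```

For example, a BPI with Mask2 = OVDIV (2) placed after a divide instruction catches a fixed divide interrupt (interrupt 9) and branches with condition code 1, while a protection interrupt would fall through to normal interrupt processing.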
The BPI instruction is available for use in problem state as well as in supervisor state (that is, within UMMPS itself). When an instruction causes a program interrupt, the following instruction is checked to determine if it is a BPI instruction. If it is, the type of program interrupt that occurred is compared with the type categories specified in the Mask2 portion of the BPI instruction. If there is a match, the condition code is set to reflect the interrupt that occurred and the branch is taken. Otherwise, the next instruction is checked to determine if it is a BPI instruction, and so on. If no BPI transfer is made (either because there was no BPI instruction or because the program interrupt type did not match the mask of any BPI instructions that were present), the normal processing of the program interrupt occurs. When the BPI instruction is executed normally (when there is no program interrupt on the previous instruction), it is a NOP or "branch never" instruction.

BPI interrupt-type categories:

  Mask2   Mask2   Interrupt   Interrupt               Condition code
  name    value   number      name                    on branch
  -----   -----   ---------   ---------------------   --------------
  OPCD      8         1       Operation                     1
                      2       Privileged operation          2
                      3       Execute                       3
  OPND      4         4       Protection                    0
                      5       Addressing                    1
                      6       Specification                 2
                      7       Data                          3
  OVDIV     2         8       Fixed overflow                0
                      9       Fixed divide                  1
                     10       Decimal overflow              2
                     11       Decimal divide                3
  FP        1        12       Exponent overflow             0
                     13       Exponent underflow            1
                     14       Significance                  2
                     15       Floating-point divide         3

Job programs

All job programs run in S/360 problem state, may run with virtual addressing enabled or disabled, and may or may not be reentrant (more than one instance of the job program may or may not
be allowed to execute). In multiprocessor configurations a single job will only execute on one processor at a time, but the supervisor may assign a job to different processors at different times. The MTS job program is the one with which most users interact; it provides command interpretation, execution control, file and device management, and accounting services. Other job programs assist the supervisor (the Paging Device Processor or PDP, the OPERATOR console job, the Disk Manager or DMGR, ...), provide common or shared services (spooled local and remote batch services via HASP and the HASPlings, or later the Resource Manager or RM, which was developed at the University of British Columbia to replace HASP), or allow the system operators to display status and otherwise control the system (JOBS, UNITS, STOP, BLAST, GOOSE, STARTUP, SHUTDOWN, REW, WTM, ...). New jobs, other than the very first job, are started by requests to UMMPS from other jobs, most often the OPERATOR job. The very first job, INIT, is started immediately after IPL and supervisor initialization.

24, 31, and 32-bit addressing

From their start and for much of their lifetime, UMMPS and MTS operated using 24-bit addressing. UMMPS never used the 32-bit virtual memory addresses that were available on the IBM S/360-67. In August 1982 the University of Alberta changed UMMPS to operate in 31-bit addressing mode to allow more than 16 MB of real memory to be used, although real memory above 16 MB was used only to hold virtual memory pages. Job programs and user programs continued to use 24-bit addresses. In 1985 Rensselaer Polytechnic Institute (RPI) made changes to UMMPS to support S/370-XA, which among other things allowed either 24 or 31-bit addressing for job programs and for user programs running under MTS.
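The practical difference between the two addressing modes can be sketched as follows. This is illustrative only: the masks are the architectural 24-bit and 31-bit address widths, not MTS code.

```python
# Illustrative only: effective-address width under S/370 24-bit vs
# 31-bit addressing modes.  In 24-bit mode the high-order byte of an
# address is ignored (a 16 MB address space); in 31-bit mode only the
# top bit is ignored (a 2 GB address space).

MASK_24 = 0x00FFFFFF
MASK_31 = 0x7FFFFFFF

def effective_address(addr, amode31):
    return addr & (MASK_31 if amode31 else MASK_24)

# A pointer with "junk" in its high-order byte is harmless to a program
# running in 24-bit mode but addresses a different location in 31-bit
# mode -- which is why object modules had to be flagged as 31-bit
# capable before the mode switch could safely be applied to them.
```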
Changes were made at the University of Michigan in 1990 to allow user programs using 31-bit addresses to work smoothly: object modules could be flagged as supporting 31-bit addressing (or not), compilers and assemblers were changed to supply the correct flags, and programs would switch between 24 and 31-bit addressing modes as needed when transitioning between system and user modes.

Protection

MTS has a strong protection model that uses the virtual memory hardware and the S/360 and S/370 hardware's supervisor and problem states, and, via software, divides problem state execution into system (privileged or unprotected) and user (protected or unprivileged) modes. Relatively little code runs in supervisor state. For example, Device Support Routines (DSRs, aka device drivers) are not part of the supervisor and run in system mode in problem state rather than in supervisor state.

Virtual memory and paging

Virtual memory (VM) and demand paging support were added to UMMPS in November 1967, making MTS the first operating system to use the Dynamic Address Translation (DAT) features that were added to the IBM S/360-67. UMMPS uses 4096-byte virtual memory pages and 256-page virtual memory segments. UMMPS could be conditionally assembled to use the small (64 page) segments that were available on S/370 hardware, but job programs were always presented with what appeared to be large (256 page) segments. Both 2K and 4K block storage keys are supported. There is a three-level storage hierarchy: (1) real memory, (2) high-speed paging devices, and (3) paging disks. High-speed paging devices include the IBM 2301 Drum, the IBM 2305 Fixed Head File, and various third-party "solid-state" I/O devices such as the STC 4305 and Intel 3805 that simulate spinning disks or, more often, provide more efficient fixed block architecture (FBA) access to external RAM-based storage. The high-speed paging devices are attached using "two-byte" I/O channels operating at up to 3.0 MB per second whenever possible.
The paging disks were separate from the disks used for the file system and were used if the higher speed paging devices became full. Virtual memory pages migrate between real memory and the paging devices. In the early versions of MTS pages did not migrate between individual paging devices. In later versions, less frequently used pages would migrate from the high-speed paging devices to the paging disks, when the high-speed devices were close to being full. Later in its life the system was changed to use IBM S/370-XA Extended Storage as part of the second level of the storage hierarchy and to use the same disks for the file system and for paging. Virtual memory is managed by UMMPS with assistance from the Paging Device Processor (PDP) job program. UMMPS responds to requests to allocate and free VM from job programs, allocates VM addresses, allocates real memory, manages segment and page tables, sets storage keys, manages reference and change bits, determines which virtual memory pages should be paged in or out, and communicates with the PDP. New virtual memory pages are initialized to a "core constant" value of x'81' on first reference. The PDP is a real memory job program. It allocates space on the paging devices, initiates all I/O to the paging devices, is responsible for recovery from I/O errors, and communicates with UMMPS. To reduce the likelihood of thrashing UMMPS uses a "big job mechanism" that identifies jobs with more real pages than a threshold, limits the number of these "big" jobs that are eligible to execute at a given time, and gives the big jobs an extended time slice when they do execute. This allows big jobs to accumulate more real memory pages and to make better use of those pages before they come to time slice end, but big jobs will wait longer between time slices when there are too many big jobs contending for limited real memory pages. 
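The big job mechanism can be sketched as a short eligibility function. The threshold and big-job limit correspond to the BJT and NBJ parameters named in the text; the data structures and function itself are invented for illustration.

```python
# Hedged sketch of the "big job mechanism": jobs holding more real pages
# than a threshold (BJT) are "big"; only NBJ big jobs are eligible to
# run at once, and each eligible big job gets an extended time slice.
# Parameter names follow the text; everything else is illustrative.

def eligible_jobs(real_pages, bjt, nbj, slice_ms, big_slice_ms):
    """real_pages: {job: real page count}.  Returns {job: time slice in
    ms} for the jobs eligible to execute right now."""
    eligible = {}
    big_count = 0
    for job, pages in real_pages.items():
        if pages > bjt:                    # a "big" job
            if big_count < nbj:
                big_count += 1
                eligible[job] = big_slice_ms   # extended time slice
            # otherwise the big job waits for a big-job slot to free up
        else:
            eligible[job] = slice_ms
    return eligible
```

The trade-off described in the text falls out directly: an eligible big job keeps its pages longer and uses them more effectively, while excess big jobs simply wait rather than thrash against each other for real memory.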
The number of pages that a job can have before it is considered big (the big job threshold or BJT) and the number of big jobs (NBJ) that are eligible for execution are external parameters that are reevaluated and set outside of the supervisor every 20 seconds based on the overall system load. Other than the big job mechanism, UMMPS storage, processor, and I/O scheduling are independent, with each area allowed to "take care of itself".

Virtual memory is divided into regions as follows:

  Segment 0:        shared virtual equals real memory (read-only)
  Segments 1 to 4:  shared virtual memory (read-only)
  Segment 5:        private virtual memory (system segment, only available to system mode (unprotected) programs)
  Segments 6 to 12: private virtual memory (user segments, read-write to any program)

Different numbers of segments were assigned to the various regions over time, and with the advent of 31-bit addressing and the ability to use VM segments beyond 16, the regions were expanded as follows:

  Segment 0:         shared virtual equals real memory (read-only)
  Segments 1 to 5:   shared virtual memory (read-only)
  Segments 6 to 7:   private virtual memory (system segments, only available to system mode (unprotected) programs)
  Segment 8:         shared virtual memory for attachment of named address spaces (NASs) (read-only)
  Segments 9 to 55:  private virtual memory (user segments, read-write to any program)
  Segments 56 to 59: private virtual memory (system segments, only available to system mode (unprotected) programs)
  Segments 60 to 63: shared virtual memory for attachment of named address spaces (NASs) (read-only)

Some real memory is not addressable using virtual memory addresses and so is only available to UMMPS or to real memory job programs. Read-only virtual memory may be changed by privileged programs that turn memory protection off (usually for very limited periods of time). Named address spaces (NASs) allow the attachment of named segments of virtual memory.
They are shared virtual memory spaces that may be attached to and detached from a given job's virtual address space, and the same addresses may have different contents depending on which named address spaces are attached. NAS support is mostly used by MTS to attach VM segments preloaded with system components, as a way to extend shared virtual memory without using VM address space below the magic 16 MB line, thus keeping more of this valuable address space available for use by 24-bit user programs.

Signon and project IDs

Everybody who uses MTS is assigned a signon ID (also called a userid or Computing Center ID, CCID). Signon IDs are always 4 characters long. If necessary, shorter IDs are automatically padded on the right using the string ".$.". Thus, the IDs "MTS.", "DAB.", "ME$." and "C.$." could be written as "MTS", "DAB", "ME" and "C", respectively. Signon IDs are protected using passwords, which must be given at the start of each session (as part of, or more often immediately after, the $SIGNON command). Exceptions are jobs submitted via *BATCH*, which run under the same ID that submitted the new job; jobs scheduled to run repeatedly at a particular time or on a particular day, which run under the same ID that scheduled them; and jobs initiated from the operator's console. Passwords are from 1 to 12 characters long; lower case letters are converted to uppercase, and special characters other than comma and blank are allowed. Passwords can be changed using the $SET PW command. Changing a password from a terminal session requires entering the original password, and the new password must be entered twice for verification. Entering an incorrect password is counted and reported to the user at the next successful signon. Too many password failures without a successful entry are reported to the system operator, and still more password failures without a successful entry will cause the signon ID to be "locked out" until it is reset by business office staff.
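Returning to ID padding for a moment: the rule can be sketched as a short function. This is illustrative only; the detail that the pad characters are taken from the end of ".$." is inferred from the four examples given above.

```python
# Sketch of signon-ID padding: IDs shorter than 4 characters are padded
# on the right from the string ".$.", matching the examples in the text
# (MTS -> "MTS.", ME -> "ME$.", C -> "C.$.").  Illustrative only.

PAD = ".$."

def pad_signon_id(ccid):
    if len(ccid) >= 4:
        return ccid[:4]
    return ccid + PAD[len(ccid) - 4:]   # last (4 - len) characters of PAD
```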
A short delay is introduced between failed password entry attempts to prevent large numbers of password "guesses" from being made quickly. Individuals can have multiple signon IDs for use in different courses, different research projects, or with different funding sources (university, government, non-profit, industry, ...). The sharing of signon IDs by individuals is discouraged, but does occur. Signon IDs are grouped into projects. Each signon ID is a member of one and only one project. Project IDs, like signon IDs, are 4 characters long. Many projects are controlled by a "Project Leader" signon ID, which can allocate resources to the accounts that are members of the project (within the resource limits allocated to the project) using the $ACCOUNTING MANAGEMENT command. Signon and project IDs are also used to control access to files and to send e-mail. With one exception there are no signon IDs with "special" privileges by virtue of the ID itself. Instead, flags can be set that allow specific signon IDs to:

- create public files and set public program keys,
- run with a zero or negative account balance, and
- perform privileged operations, including:
  - flagging files to run in system (unprotected) rather than user (protected) mode by default,
  - using the PROT=OFF options on the $SET and $RUN commands,
  - using the test command language subsystem ($#CLS), and
  - using privileged options of the $SYSTEMSTATUS and other command language subsystems (CLSs).

The exception is the signon ID "MTS.", which can read, but not modify or permit, any file in the system regardless of ownership or permit status. The MTS. ID can also use the $SET FILEREF=OFF option, which prevents the file reference dates on files from being updated (useful when recovering from file system problems or investigating security issues). There is no ability for a program or user to assume the privileges of a signon ID other than the one that was used to sign on to the current session.
Instead, programs and files may be permitted to specific signon IDs, projects, and program keys, or to combinations of signon IDs, projects, and program keys.

Terminal, batch, and server sessions

MTS supports terminal, batch, and server sessions; all three use the same command language. Terminal sessions are interactive, with the user able to respond to the output produced, including error messages and prompts. Batch jobs are not interactive, so all input needs to be prepared in advance, with little or no opportunity for the user to alter the input (at least not without programming) once the batch job starts to execute. Server sessions can support user-to-MTS or client-to-MTS interactions, and while there may be interaction with the user, MTS commands are usually read from a command file and the user is not likely to have to know or enter MTS commands. Server sessions can be sponsored, in which case they appear to be free to the user and do not require that the user enter an ID and password. Server sessions can also be charged for, in which case they require a valid ID and password. Server sessions can be initiated from the network or from within an MTS session using the $MOUNT command. The University of Alberta developed a Student Oriented Batch Facility in 1971 to provide quick job turnaround for undergraduate students learning to program in FORTRAN, ALGOL, PL/C, and 360 Assembler. It was a dedicated punched-card input, printer output system that provided 5-minute turnaround and ran several thousand jobs a week at a fixed cost per job (15 cents).

Command language

MTS reads commands from the *SOURCE* pseudo device, which is initially the user's terminal or the batch input stream. Programs may execute MTS commands by calling the CMD, CMDNOE, and COMMAND subroutines. Leading and trailing blanks as well as null and all-blank lines are ignored. Lines that start with an asterisk (* or $*) are treated as comments.
Command lines that end with a continuation character (by default the minus sign) are continued on the next line. Command lines may be up to 255 characters long. MTS uses keyword-oriented commands and command options. The command verb (SIGNON, RUN, EDIT, ...) is the first keyword on the command line. Commands may start with an optional dollar sign ($SIGNON, $RUN, $EDIT, ...). In batch jobs, following invalid commands and some other errors, MTS looks for the next line that starts with a dollar sign ($) in column 1 as the next command to execute. All commands and most command options allow initial sub-string abbreviations (C for COPY, R for RUN, DEB for DEBUG, ...). MTS commands and most command options are case-insensitive. MTS has "one-shot" commands (CREATE, FILESTATUS, SIGNOFF, ...) and commands that have sub-command modes (EDIT, CALC, SYSTEMSTATUS, ...). Most commands with sub-command modes can also be invoked as one-shot commands by giving one or more sub-commands on the command line. All MTS jobs start with a SIGNON command and most end with a SIGNOFF command. Commands may be stored in files and executed using the SOURCE command. Commands may also be stored in signon-files (sigfiles) or project-signon-files (projectsigfiles) that are always executed immediately after the SIGNON command. The execution of sigfiles may be required (SIGFILEATTN=OFF) or optional (SIGFILEATTN=ON, the default).

Global control:

  SIGNON { ccid | * } [ option ... ] [ comment ]
  SIGNOFF [ SHORT | $ | LONG ] [ RECEIPTS | NORECEIPTS ]
  ACCOUNTING [ option ... ]
  ACCOUNTING MANAGEMENT
  COMMENT [ text ]
  DISPLAY item [ OUTPUT=FDname ]
  SET option ...
  SINK [ FDname | PREVIOUS ]
  SOURCE [ FDname | PREVIOUS ]
  SYSTEMSTATUS [ option ]
  #CLS FDname [ options ]     (privileged command that runs a test CLS)

File management:

  CREATE filename [ SIZE={ n | nP } ] [ MAXSIZE={ n | nP } ] [ TYPE={ LINE | SEQ | SEQWL } ]
  DESTROY filelist [ OK | ALLOK | PROMPT ]
  DUPLICATE oldname [ AS | TO ] newname [ options ] [ OK | ALLOK | PROMPT ]
  EDIT [ filename ] [ :edit-command ]
  EMPTY [ filelist ] [ OK | ALLOK | PROMPT ]
  TRUNCATE filelist [ ALLOK | PROMPT ]
  RENAME oldname [ AS ] newname [ OK | ALLOK | PROMPT ]
  RENUMBER filelist [ first [ last [ begin [ increment ] ] ] ] [ ALLOK | PROMPT ]
  FILESTATUS [ filelist ] [ format ] [ items ]
  FILEMENU [ filelist ] [ items ]
  FMENU [ filelist ] [ items ]
  PERMIT filelist [ access [ accessor ] ]
  PERMIT filelist LIKE filelist2 [ EXCEPT access [ accessor ] ]
  LOCK filename [ how ] [ WAIT | NOWAIT ] [ QUIT | NOQUIT ]
  UNLOCK filename
  LOCKSTATUS [ filename | JOB nnnnnn ] [ LOCK ] [ WAIT ]
  LSTATUS [ filename | JOB nnnnnn ] [ LOCK ] [ WAIT ]

File and device management:

  COPY [ FROM ] { FDlist1 | 'string' } [ [ TO ] FDlist2 ]
  CREATE *pdn* TYPE={ PRINT | IMPORT | EXPORT | DUMMY }
  DESTROY *pdn* [ OK | ALLOK | PROMPT ]
  LIST FDlist [ [ ON | TO ] FDname ] [ [ WITH ] option ... ]
  LIST FDlist WITH options [ { ON | TO } FDname ]
  MOUNT [ request [; request ] ... ]
  CANCEL *...* [ [ JOB ] nnnnnn ] [ { ID | CCID }=ccid ]
  RELEASE { *PRINT* | *PUNCH* | *BATCH* | *pdn* }
  LOCATE { SYSTEM | LOCAL | FULL | SHORT | HELP }
  LOCATE { jobnumber | jobname } [ option ... ]
  VIEW [ jobnumber [ ; view-command ] ]
  LOG [ FDname1 ] { [ ON ] FDname2 [ format ] [ options ] | OFF }
  FTP [ hostname ]
  GET FDname                  (old fashioned and obsolete, but sometimes still useful)
  NUMBER                      (old fashioned and obsolete way to enter data into a file)

User program execution and control:

  RUN [ FDname ] [ I/Ounits ] [ option ] ... [ PAR=parameters ]
  RERUN [ ECHO | NOECHO ] [ I/Ounits ] [ option ] ... [ PAR=parameters ]
  DEBUG [ FDname ] [ I/Ounits ] [ option ] ... [ PAR=parameters ]
  SDS [ sds-command ]
  LOAD [ FDname ] [ I/Ounits ] [ option ] ... [ PAR=parameters ]
  START [ [ AT ] [ RF={ hhhhhh | GRx } ] location ] [ I/Ounits ] [ option ] ...
  RESTART [ [ AT ] location ] [ I/Ounits ] [ option ] ...
  UNLOAD [ CLS=clsname ]
  ALTER location value ... ...
  DISPLAY [ format ] location [ OUTPUT=FDname ]
  DUMP [ format ] [ OUTPUT=FDname ]
  IF RUNRC condition integer, MTS-command
  ERRORDUMP                   (obsolete command, causes an automatic dump in batch mode following abnormal termination of a user program)

Miscellaneous:

  CALC [ expression ]
  MESSAGESYSTEM [ message-command ]
  FSMESSAGE [ FSMessage-command ]
  NET [ host | *pdn* ] [ .network-command ]
  HEXADD [ hexnumber1 ] [ hexnumber2 ]   (obsolete, replaced by $Calc)
  HEXSUB [ hexnumber1 ] [ hexnumber2 ]   (obsolete, replaced by $Calc)
  PASSWORD                               (obsolete and removed; allowed changes to public files before true shared file access was available)

File-name patterns

Several MTS commands that use file names or lists of file names allow the use of file-name patterns: COPY, DESTROY, DUPLICATE, EMPTY, EDIT, FILESTATUS, FILEMENU, LIST, LOCKSTATUS, PERMIT, RENAME, RENUMBER, and TRUNCATE. The question mark (?) is the pattern-match character. A single question mark used in a file name will match zero or more characters: "?" matches all files for the current signon ID, "?.S" matches all files that end with ".S", "A?B" matches all files that begin with "A" and end with "B", and "A?B?C" matches all files that start with "A", end with "C", and contain a "B". Two or more consecutive question marks match "n-1" characters: "???.S" matches all four-character file names that end with ".S", and "????" matches all three-character file names. "W163:?" matches all files under the signon ID "W163" to which the current user has some access.

Command Macros

The MTS command macro processor allows users to define their own MTS commands.
It provides a "scripting" language with conditional commands and is available for use with any lines read from *SOURCE* by user programs or command language sub-systems as well with MTS commands. Macro processor lines are usually prefixed with the greater than character (>). The command macro processor is controlled using the $SET command as well as by I/O modifiers on FDnames. Prefix Characters To help users keep track of what command, command subsystem, or program they are working with and when input is expected, MTS displays a prefix character or sometimes a prefix string at the front of each input and output line it writes to the user's terminal. The common prefixes are: # MTS command mode #- MTS command continuation mode ? Prompts > COPY and LIST commands . Program loader blank User programs : Editor + Symbolic Debugging System (SDS) @ Message System ftp> FTP (File-Transfer) Command language subsystems The MTS job program is always executing one of several command language subsystems or CLSs. Many of the MTS commands are built into MTS and execute as part of the MTS CLS. User programs execute as the USER CLS. The USER CLS has a special relationship to the Symbolic Debugging System (SDS CLS) when the debugger is active. Other MTS commands are implemented as separate modules, confusingly also named command language subsystems or CLSs, that may be executed from shared virtual memory or may be loaded from files. These separate CLSs each have their own four character name and they execute as a separate CLS in the original sense of the term. Many, but not all, of these CLSs provide their own separate sub-command language. There are $SET command options to cause old or new versions of CLSs rather than the current versions to be used. There is an option on the $UNLOAD command to unload a CLS (free the virtual memory it is using, close any FDnames and release any devices or pseudo devices that it has open). 
Only one CLS is executing at a time, but one CLS of each type may be active, and it is possible to switch from one CLS to another without exiting or unloading the original CLS, and then later to return to the original CLS and continue working from where one left off. CLSs that have their own sub-commands usually support a STOP command to exit from the CLS and an MTS and/or a RETURN command to return to the calling CLS or to MTS command mode; sub-commands that begin with a dollar sign ($) are executed as MTS commands with an immediate return to the original CLS. All CLSs except the USER CLS execute in system mode in problem state.

Limited-service state

MTS sessions normally operate in "full-service state", but during times of extreme system overload terminal sessions may be placed into "limited-service state" (LSS). The LSS mechanism is manually enabled by the system operator and is normally only used when the hardware system is operating at reduced capacity due to a malfunction. A terminal session is placed into LSS if LSS has been enabled by the system operator and the system is overloaded at signon. LSS sessions may only issue MTS commands and run programs with a short local time limit. Rather than giving all users poor performance, LSS limits the size of the tasks that some users may perform, restricting them to relatively small tasks such as editing files and reading messages, in order to allow other users to receive reasonable performance on larger tasks. Users may request that their session be changed to full-service state ($SET LSS=OFF), and such requests are granted if the system is not overloaded at the time the request is made.

Command statistics

Each MTS command that is issued is recorded, first to a disk file and later to magnetic tape. This information is only available to staff and is used to investigate software problems, security problems, and rebate requests, and to provide statistics about how the command language is used.
User programs

User program refers to a program run by the user, which is not necessarily a program that belongs to or that was created by a user. User programs may be supplied in public files, in files available under the OLD: or NEW: signon IDs, in files belonging to other users and permitted for use by others, or they may be developed by the current user in files that they own. User programs are executed using the $RUN, $RERUN, and $DEBUG commands or, less often, using the $LOAD and $START commands. The $RESTART command may be used to restart execution of a program following an attention interrupt that was not handled by the program, a program interrupt that was not handled by the program (although restarting after a program interrupt usually does not work well), or following an explicit return to MTS from a call to the MTS subroutine. MTS loads programs using a dynamic linking loader (UMLOAD) that reads loader records (ESD, TXT, CSI, RDL, LCS, END, ...) from the file or device specified by the user and will selectively include subroutines from libraries supplied by the user, from system subroutine libraries such as *LIBRARY, and from system subroutines pre-loaded in shared virtual memory. MTS uses standard OS/360 loader records, which makes it fairly easy for MTS to use compilers developed for use under other IBM operating systems. When a program starts execution, a number of logical I/O units are set, either explicitly on the $RUN or other command or by default. Any text string given following the PAR= keyword is passed to the program as a parameter. By default user programs execute with the program key *EXEC, but a different program key may be set using the $CONTROL command. Programs may call a system subroutine to shorten the program key they are using or to switch to the *EXEC program key, thus temporarily giving themselves less access to files, devices, and other services controlled using program keys.
Programs may also call a system subroutine to lengthen or restore their program key according to some pre-established rules. MTS uses the standard S-type and, less often, R-type calling sequences used in OS/360. By default user programs execute in user mode in problem state. User mode programs do not have access to the system virtual memory segment and therefore have no access to system control blocks, may not call privileged system subroutines, and may not issue privileged supervisor calls (SVCs). User mode programs can issue non-privileged SVCs, but few programs do so directly, instead calling system subroutines to obtain system services. User mode programs may call system subroutines that switch to system mode after checking that the protected service is allowed for the particular caller; there is a return to user mode when the system subroutine returns. Selected user programs can be flagged to run in system rather than user mode by staff with privileged signon IDs, or staff with privileges can cause a user program to run in system mode using a keyword on the $RUN or $SET command.

Device independent input/output

All input/output requests, whether by the MTS job program itself or by a program running under MTS, are made using a common set of subroutine calls (GETFD, FREEFD, READ, WRITE, CONTROL, GDINFO, ATTNTRP, ...). The same subroutines are used no matter what program is doing the I/O and no matter what type of file or device is being used (typewriter or graphics terminal, line printer, card punch, disk file, magnetic and paper tape, etc.). No knowledge of the format or contents of system control blocks is required to use these subroutines. Programs may use specific characteristics of a particular device, but such programs will be somewhat less device independent. MTS input/output is record or line oriented. Programs read lines from a terminal, card reader, disk file, or tape and write lines to a terminal, printer, disk file, or tape.
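The idea of a uniform, line-oriented interface can be illustrated with a brief sketch. The Python names and structure below are hypothetical (the actual MTS calls are the GETFD/READ/WRITE subroutine family listed above); the point is that a caller uses the same read/write operations regardless of whether the underlying object is a disk file, terminal, or tape:

```python
from abc import ABC, abstractmethod

class LineDevice(ABC):
    """Any MTS-style file or device: callers always see (line_number, data)."""
    @abstractmethod
    def read(self):
        """Return (line_number, data) or None at End-of-File."""
    @abstractmethod
    def write(self, data):
        """Write one line."""

class MemoryFile(LineDevice):
    """Stands in for a disk file; line numbers count sequentially."""
    def __init__(self, lines):
        self.lines = list(lines)
        self.pos = 0
    def read(self):
        if self.pos >= len(self.lines):
            return None                                   # End-of-File
        self.pos += 1
        return (self.pos * 1000, self.lines[self.pos - 1])  # numbers scaled by 1000
    def write(self, data):
        self.lines.append(data)

def copy(src: LineDevice, dst: LineDevice):
    """A device-independent COPY: works for any source/destination pair."""
    while (rec := src.read()) is not None:
        dst.write(rec[1])

src = MemoryFile(["HELLO", "WORLD"])
dst = MemoryFile([])
copy(src, dst)
print(dst.lines)   # ['HELLO', 'WORLD']
```

Because the copy routine depends only on the abstract interface, a terminal- or tape-backed implementation could be substituted with no change to the caller, which is the sense in which MTS I/O is device independent.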
Conversion to and from ASCII/EBCDIC and end-of-line processing is usually done by a front-end processor or Device Support Routine (DSR) and so is not a concern of most programs. While it is possible to do character I/O to a terminal by reading or writing single-character lines, reading or writing many such very short lines is not very efficient. Each line read or written consists of from 0 to 32,767 bytes of data and an associated line number (a signed integer number scaled by 1000) giving the line's location. The length of each line read or written is given explicitly, so programs do not need to do their own processing of line-ending characters (CR/LF, NL) or other terminators (null). Some devices support zero-length lines, while others do not. For many files and devices the line number is simply a sequential count of the lines read, while some file types explicitly associate a specific line number with each line of the file; in other cases the line number is synthesized from data that appears at the start of an input line or the line number can be prepended to an output line.

File or device names

Input/output is done directly by referencing a file or device by its name (FDname) or indirectly by referencing a logical I/O unit (SCARDS or INPUT, SPRINT or PRINT, SPUNCH or OBJECT, GUSER, SERCOM, 0 to 99). FDnames are assigned to logical I/O units using keywords in the command language or by default. FDnames can be a simple file name such as MYFILE, a simple device name prefixed with a greater-than sign such as >T901, or a pseudo device name such as *PRINT*. All FDnames are converted to uppercase before they are used, so, like MTS commands, FDnames are case independent. I/O modifiers, line number ranges, and explicit concatenation can be used to create complex FDnames from simple FDnames.
For example:

 FILE1@-TRIM                    (I/O modifier that retains trailing blanks)
 FILE2(1,10)                    (line number range that reads lines 1 through 10 inclusive)
 FILE3+*SOURCE*                 (explicit concatenation)
 FILE4(1,10)@-TRIM+*TAPE*@-TRIM (all of the above in a single complex FDname)

Pseudo device names

Pseudo device names (PDNs) begin and end with an asterisk (e.g., *name*). Common pseudo devices include:

 *SOURCE*   standard input (normally a terminal or, for batch jobs, the input queue)
 *SINK*     standard output (normally a terminal or, for batch jobs, a printer)
 *MSOURCE*  master source, not re-assignable, usually a terminal or a card reader
 *MSINK*    master sink, not re-assignable, usually a terminal or a printer
 *BATCH*    spooled input to a new batch job
 *PRINT*    spooled output to a printer, the same as *MSINK* for batch jobs
 *PUNCH*    spooled output to a card punch (until card punches were retired)
 *DUMMY*    all data written is discarded and all reads return an End-of-File (much like /dev/null for UNIX)
 *AFD*      the active file or device as established using the $GET command

The $SOURCE and $SINK commands may be used to reassign the FDnames assigned to *SOURCE* and *SINK*. The $MOUNT command assigns pseudo device names (e.g., *T22*, *NET*) to devices such as magnetic and paper tapes and network connections (including server connections). The $CREATE command can be used to create pseudo device names for use with BITNET import and export, for spooled print jobs, and for dummy devices.

I/O modifiers

I/O modifiers, possibly negated, may be associated with an FDname to modify default behaviors. An I/O modifier is specified by appending an at-sign followed by the modifier's name to an FDname. For example, *SOURCE*@UC would cause lines read from *SOURCE* to be converted to uppercase before they are presented to a program, and MYFILE@UC@-TRIM would cause lines read from the file MYFILE to be converted to uppercase while retaining any trailing spaces at the end of each line.
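The structure of a complex FDname can be made concrete with a toy parser. This is a sketch of the grammar only (the names, regular expression, and dictionary layout are my own, not Computing Center code): each plus-separated part has a base name, an optional line-number range in parentheses, and zero or more @ modifiers, possibly negated with a minus sign.

```python
import re

# Toy grammar for complex FDnames:  part "+" part "+" ...
# part := NAME [ "(" range ")" ] { "@" ["-"] MODIFIER }
PART = re.compile(
    r"(?P<name>[*\w.:>]+?)"          # file, device (>T901), or pseudo device (*PRINT*)
    r"(?:\((?P<range>[^)]*)\))?"     # optional (first,last[,increment])
    r"(?P<mods>(?:@-?\w+)*)$"        # zero or more @MOD or @-MOD
)

def parse_fdname(fdname):
    """Split a complex FDname into its concatenated parts."""
    parts = []
    for chunk in fdname.upper().split("+"):   # FDnames are case-independent
        m = PART.match(chunk)
        mods = re.findall(r"@(-?\w+)", m.group("mods"))
        rng = m.group("range").split(",") if m.group("range") else []
        parts.append({"name": m.group("name"), "range": rng, "modifiers": mods})
    return parts

# Two parts: FILE4 with range (1,10) and modifier -TRIM, then *TAPE* with -TRIM.
print(parse_fdname("FILE4(1,10)@-TRIM+*TAPE*@-TRIM"))
```

A real FDname scanner would also handle symbolic line numbers such as *F and LAST+1; this sketch stops at the simple numeric form shown in the examples above.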
Some commonly used I/O modifiers are: @S (sequential), @I (indexed), @FWD (forward), @BKWD (backward), @EBCD (EBCDIC), @BIN (binary), @UC (uppercase), @CC (logical carriage control), @MCC (machine carriage control), @NOCC (no carriage control), and @TRIM (trim all but the last trailing blank). Some I/O modifiers are processed in a device-independent fashion by MTS and others are device dependent and processed by the Device Support Routines (DSRs). Not all files or devices support all I/O modifiers. Different files and devices have different default I/O modifiers, and a few I/O modifier defaults can be changed using the $SET command.

Line number ranges

Specific parts of a file or device can be referenced by including starting and ending line numbers and possibly a line number increment in parentheses separated by commas. The line numbers and increment are integers scaled by 1000 and can be positive or negative (±nnnnn.nnn). For example, SIMPLE.F(-35,197.5) would open the file SIMPLE.F, starting at the first line with a number greater than or equal to -35 and returning an End-of-File in place of the first line with a number greater than 197.5. One can also include line number increments: for example, SIMPLE.F(2,200,2) would return all (and only) even line numbers between 2 and 200 (inclusive). The symbolic line numbers FIRST or *F, LAST or *L, MIN, and MAX refer to the first, last, minimum possible, and maximum possible lines, respectively. For example, SIMPLE.F(*F,0) would refer to the 'negative' lines of the file SIMPLE.F. This is where programmers might place self-documentation for a (often binary) file; actual data in the file would start at line number 1. One can also do simple addition and subtraction with the symbolic line numbers: FIRST±m, *F±m, LAST±m, *L±m, MIN+m, MAX-m, where m is an integer with or without a decimal point scaled by 1000 (±nnnnn.nnn). So, to add new lines to the end of an existing file, one could use an FDname of the form SIMPLE.F(LAST+1).
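Because line numbers are integers scaled by 1000, fractional numbers such as 197.5 are stored exactly as 197500. The following sketch (my own illustration, not MTS code) models a line file as a mapping from scaled line numbers to lines, and shows how a range such as SIMPLE.F(2,200,2) selects only the even-numbered lines while a default read skips the 'negative' lines:

```python
class LineFile:
    """Line numbers are signed integers scaled by 1000 (line 197.5 -> 197500)."""
    def __init__(self):
        self.lines = {}                     # scaled line number -> text

    def write(self, number, text):
        key = round(number * 1000)
        if text == "":
            self.lines.pop(key, None)       # zero-length rewrite deletes the line
        else:
            self.lines[key] = text

    def read_range(self, first, last, increment=None):
        lo, hi = round(first * 1000), round(last * 1000)
        if increment is None:
            # default: every existing line with lo <= number <= hi, in order
            return [(k / 1000, v) for k, v in sorted(self.lines.items())
                    if lo <= k <= hi]
        step = round(increment * 1000)
        return [(k / 1000, self.lines[k])
                for k in range(lo, hi + 1, step) if k in self.lines]

f = LineFile()
for n in range(1, 6):
    f.write(n, f"line {n}")
f.write(197.5, "fractional")
f.write(-35, "self-documentation")   # negative lines are 'invisible' by default
print(f.read_range(2, 200, 2))       # only the even line numbers that exist
print(f.read_range(1, 99999.999))    # a default read starts at line 1
```

Rewriting an existing line number replaces just that line, and writing a zero-length line deletes it, mirroring the incremental update behavior of MTS line files described below.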
File or device concatenation

Explicit concatenation allows FDnames to be connected using a plus sign, as in NAMEA+NAMEB. In this case MTS transparently returns the contents of NAMEA followed by the contents of NAMEB, or writes to NAMEB after writing to NAMEA reaches an end of file or other error condition. Implicit concatenation occurs when an input line contains the string:

 $CONTINUE WITH FDname

MTS will continue with the FDname given as the new source of data. Or, if a line of the form:

 $CONTINUE WITH FDname RETURN

is read, MTS will return the contents of the new FDname until an End-of-File is reached and then return the next line of the original FDname (note that a file that continues with itself causes an infinite loop, usually a mistake, but sometimes used to good effect). While the line starts with a dollar sign, $CONTINUE WITH is not an MTS command, but rather a delimiter. The @IC I/O modifier and the command $SET IC={ON | OFF} can be used to control implicit concatenation.

$ENDFILE lines

If a line contains the string $ENDFILE, MTS returns a 'soft' end of file. While the line starts with a dollar sign, $ENDFILE is not an MTS command, but rather a delimiter. The @ENDFILE I/O modifier and the command $SET ENDFILE={ALWAYS | SOURCE | NEVER} can be used to control $ENDFILE processing.

$9700 and $9700CONTROL lines

Lines that begin with the strings "$9700" or "$9700CONTROL" may be copied or written to *PRINT* to control print options on the Xerox 9700 page printer. $9700 lines take effect at the point where they occur, while $9700CONTROL lines apply to the entire print job in which they occur. While these lines have a form similar to MTS commands, they are really device commands and not true MTS commands.

Files

MTS files are stored as 4096-byte "pages" on one or more public or private disk volumes. Volumes have volume labels, volume numbers, and volume names (usually MTS001, MTS002, ..., MTSnnn).
Disk volumes are stored on traditional cylinder-track-record and fixed block architecture (FBA) disk drives or at one time on the IBM 2321 Data Cell. Individual files do not span disk volumes. The maximum size of a file is limited to the free space available on the disk volume where it resides. By default, files are created one page in size, but a larger size as well as a maximum size may be specified ($CREATE name SIZE=nP MAXSIZE=nP). Files will automatically expand until they reach their maximum size or the disk space limit for the owner's signon ID is exceeded. Users may request that a file be created on a specific disk volume ($CREATE name VOLUME=name). MTS files fall into one of three categories: public files, user files, and temporary files: Public files are files whose names begin, but do not end, with an asterisk (e.g., *LIBRARY, *USERDIRECTORY). Public files, often called 'star files', are publicly available files that contain programs and data that are widely available to all users. For example, *LIBRARY is a library of commonly used system subroutines. In the earliest days of MTS public files were the only files that could be shared and then only as read-only files. Later, public files could be permitted and shared in the same fashion as any other files. User files are files whose names do not begin with an asterisk or a minus sign. They must be explicitly created ($CREATE) and destroyed ($DESTROY). They are owned by and initially permitted to just the userID that creates them, but they can be permitted for use by other userIDs using the $PERMIT command. To reference a file belonging to another user, the file name is prefixed with the owner's userID followed by a colon (e.g., W163:MYPROGRAM). There are charges for the amount of disk space used and most signon IDs have a maximum disk space limit. Temporary files are files whose names begin with a minus sign (e.g., -TEMP). Their names are unique within a single session. 
They are created implicitly on first use, are not charged for, do not count against a signon ID's disk space limit, and are automatically destroyed when the terminal or batch session ends. MTS does not implement directories, but there is a de facto two-tier grouping of files owing to the inclusion in a file's name of its owner's four-character MTS user ID. File names, like all FDnames, are converted to uppercase before use and so are case-insensitive.

File types

MTS supports three types of files: line files, sequential files, and sequential-with-line-number files. Line files are by far the most common.

Line files

Line files ($CREATE name or $CREATE name TYPE=LINE) are line-oriented files which are indexed (and randomly accessible) by line number. Allowed line numbers are ±2147483.647 (essentially a signed integer value divided by 1000), but command line references are limited to ±99999.999. Regular writes to a file increase the line number by 1. Lines are variable length, and a line can be rewritten to any length between 1 and the line length limit (originally 256, but later changed to 32767) without affecting the surrounding lines. Rewriting a pre-existing line to a length of zero deletes that line without affecting surrounding lines. By default the first line number written to an empty file is 1, and it is incremented by 1 with each subsequent write. By default, reading a file starts with the first line number at or above 1 and continues by reading each line in order of increasing line numbers. This means that negative line numbers are 'invisible' parts of a file, which require specific references to read. There are commands (and system subroutines) to renumber lines. A contiguous set of lines can be renumbered to any combination of start and increment as long as the lines of the file are not re-ordered.
For example, if a file consists of lines 10, 20, 30, 40, and 50, lines 30–40 can be renumbered as 35,36, but not as 135,136, as that would change the sequence of lines. The line index and data are stored on separate disk pages, except for the smallest (one-page) files, where the line index and data are stored together. The $CREATE command creates line files by default. A side effect of the line-based file system is that programs can read and write individual lines incrementally. If one edits a file (usually a text file) with the MTS file editor ($EDIT), any changes made to lines are written immediately, as are insertions and deletions of specific lines. This makes it quite different from most (byte-oriented) file systems, where a file is usually read into and changed in memory, and then saved to disk in bulk. Due to hardware or software problems, line files can become corrupt. The program *VALIDATEFILE checks the structure of line files.

Sequential files

Sequential files ($CREATE name TYPE=SEQ) are line-oriented files with the first line number being implicitly 1 and incremented by 1 for each line. Once written, the length of a line (other than the last line of a file) cannot be changed, although any line can be replaced by a line of the same length. Sequential files are generally only readable sequentially from start to end, or written by appending to the end. One can, however, request a reference for the current line of a sequential file, and use that reference to jump to that specific location again. Sequential files are somewhat more efficient in terms of space than line files and can be more efficient in terms of CPU time as well when compared with large disorganized line files. But the main reason for the existence of sequential files is that they supported long lines (up to 32,767 characters) before line files did. Sequential files became less common once line files could support long lines.
Sequential files are also used to force new lines to be appended to the end of the file without the need to give the line number range (LAST+1).

Sequential with line number files

Sequential-with-line-number files ($CREATE name TYPE=SEQWL) are similar to sequential files, except that their line numbers are explicitly stored. They have all the restrictions of sequential files, except that the line number can be specifically supplied when writing to a file (as long as it is greater than the last line number written to the file). Unlike line files, the first read of a SEQWL file returns the first line of the file, even if it is negative. SEQWL files were rarely used, and were not officially supported or were removed from the documentation by some MTS sites. Because line files did not work well with the Data Cell, SEQWL files were implemented as a way to allow the Data Cell to be used for longer-term, less expensive storage of files while still preserving line numbers.

Shared files

Over time the sharing of files between MTS users evolved in four stages. Stage one allowed for limited file sharing, where public or library files (files whose names start with an asterisk) were readable by all users and all other files (user files) could only be accessed by their owners. Public files were owned and maintained by Computing Center staff members, so at this stage only Computing Center files were shared. Stage two allowed for limited file sharing, where the program *PERMIT could be used to (i) make a file read-only (RO) to the file's owner and all other MTS users, (ii) make a file available for copying by members of the same project as the file's owner using the program *COPY, or (iii) make a file available for copying by all other users using the program *COPY. As for stage one, by default owners had unlimited access to their own files and the files were not accessible to other users.
Stage three allowed for "really shared files", where the $PERMIT command or the PERMIT subroutine can be used to share a file in a variety of ways with lists of other users, projects, all other users, or a combination of these. The types of access that can be allowed are read, write-extend, write-change or empty, renumber or truncate, destroy, and permit. As for stages one and two, by default a user file is permitted with unlimited access for its owner and no access for others. A file's owner's access can also be changed, although an owner always retains permit access. The $FILESTATUS command or the FILEINFO and GFINFO subroutines can be used to obtain a file's permit status. Stage four added program keys (PKeys) to the list of things to which a file can be permitted. Thus files can be permitted to users, projects, all other users, program keys, or a combination of these. Program keys were associated with MTS commands and files, which allowed files to be permitted to specific programs or to specific MTS commands. Among other things this allowed the creation of execute-only or run-only programs in MTS. Files can also be permitted to the initial sub-string of a userID, projectID, or program key. As a result, a single combination of userID, projectID, and program key may potentially match more than one type of access. In such cases, the actual access is resolved according to the following rules: (i) userIDs, alone or in combination with program keys, take precedence over projectIDs and program keys, (ii) projectIDs, alone or in combination with program keys, take precedence over program keys, (iii) longer sub-string matches take precedence over shorter sub-string matches, and (iv) if there is no specific userID, projectID, or program key match, the access specified for "others" is used.
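The four precedence rules above can be made concrete with a small resolver. This is a sketch with invented structure (the real PERMIT implementation is not shown here): each permit entry names initial sub-strings (prefixes) for any of userID, projectID, and program key, plus the access it grants, and the most specific matching entry wins.

```python
def resolve_access(permits, user, project, pkey, others="none"):
    """Resolve access for (user, project, pkey) against a permit list.

    Rules, paraphrasing the text: userID entries (alone or combined with a
    program key) beat projectID entries, which beat program-key-only entries;
    longer prefix matches beat shorter ones; otherwise 'others' applies."""
    best, best_rank = others, (-1, -1)
    for p in permits:
        u, pr, k = p.get("user"), p.get("project"), p.get("pkey")
        # every specified field must match as an initial sub-string
        if u is not None and not user.startswith(u):
            continue
        if pr is not None and not project.startswith(pr):
            continue
        if k is not None and not pkey.startswith(k):
            continue
        if u is not None:
            cls = 2          # userID match (possibly combined with a key)
        elif pr is not None:
            cls = 1          # projectID match
        elif k is not None:
            cls = 0          # program-key-only match
        else:
            continue         # entry names nothing: ignore it
        rank = (cls, len(u or "") + len(pr or "") + len(k or ""))
        if rank > best_rank:
            best, best_rank = p["access"], rank
    return best

permits = [
    {"user": "W163", "access": "write"},
    {"project": "ENGR", "access": "read"},
    {"pkey": "*EXEC", "access": "none"},
]
print(resolve_access(permits, "W163", "ENGR", "*EXEC"))   # 'write': userID wins
```

A user not covered by the userID entry would fall through to the projectID entry, then to the program-key entry, and finally to the "others" default, mirroring rules (i) through (iv).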
The PKEY subroutine can be used to shorten the program key of the currently running program, or to switch it to *EXEC and later restore it, allowing a program to voluntarily limit the access it has to files by virtue of its program key.

File locking

As part of "really shared files" (stage three above), file locking was introduced to control simultaneous access to shared files between active MTS sessions (that is, between separate running tasks or processes). File locking does not limit or block access to files within a single MTS session (between command language subsystems or user programs running as part of the same MTS session). File locking in MTS is mandatory rather than advisory. Files are locked implicitly on first use of a particular type of access or explicitly using the $LOCK command or the LOCK subroutine. Files are unlocked implicitly when the last use of a file within a task is closed or explicitly using the $UNLOCK command or the UNLK subroutine. The $LOCKSTATUS command or the LSFILE and LSTASK subroutines can be used to obtain a file's or a task's current lock status. A file may be "open", "not open", or "waiting for open", and "not locked", "locked for read", "locked for modify", "locked for destroy", "waiting for read", "waiting for modify", or "waiting for destroy". A file's open status is independent of its lock status. Locking a file for modification also locks the file for reading, and locking a file for destroying also locks the file for modification and reading. Any number of tasks can have a file locked for reading at any given time, but only one task can have a file locked for modification at any given time, and then only if no task has the file locked for reading or locked for destroying. Only one task can have a file locked for destroying at any given time, and then only if no task has the file open, locked for reading, or locked for modification.
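The lock-compatibility rules, together with a Warshall-style deadlock test over the wait-for graph of tasks, can be sketched as follows. The Python names and data structures are invented for illustration; only the rules themselves come from the text:

```python
# Lock strengths: "modify" implies "read"; "destroy" implies "modify" and "read".
STRENGTH = {"read": 0, "modify": 1, "destroy": 2}

def can_lock(holders, task, wanted):
    """holders maps task -> lock class it currently holds on the file.
    Any number of readers may coexist, but "modify" and "destroy" each
    require that no *other* task holds any lock.  (A file's open status
    also matters for "destroy"; it is not modeled in this sketch.)"""
    others = {t: m for t, m in holders.items() if t != task}
    if wanted == "read":
        return all(STRENGTH[m] < STRENGTH["modify"] for m in others.values())
    if wanted in ("modify", "destroy"):
        return not others
    raise ValueError(wanted)

def deadlocked(wait_for):
    """wait_for[a] = set of tasks that task a is waiting on.
    Warshall's transitive closure over the wait-for graph: if any task
    can reach itself, the waits form a cycle, i.e. a deadlock."""
    tasks = sorted(set(wait_for) | {t for s in wait_for.values() for t in s})
    reach = {a: {b: b in wait_for.get(a, set()) for b in tasks} for a in tasks}
    for k in tasks:                      # Warshall's algorithm
        for i in tasks:
            if reach[i][k]:
                for j in tasks:
                    reach[i][j] = reach[i][j] or reach[k][j]
    return any(reach[t][t] for t in tasks)

holders = {"T1": "read", "T2": "read"}
print(can_lock(holders, "T3", "read"))            # True: readers share
print(can_lock(holders, "T3", "modify"))          # False: other readers present
print(deadlocked({"T1": {"T2"}, "T2": {"T1"}}))   # True: mutual wait is a cycle
```

In MTS a failed lock attempt either waits or, when the closure shows that waiting would close a cycle, returns an error immediately without waiting.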
When an attempt to lock a file cannot be satisfied, the calling task waits either indefinitely or for a specific period for another task to unlock the file, or until an attention interrupt is received. If the file cannot be locked, an error indicating this is returned. The file locking software detects deadlocks between tasks using Warshall's algorithm and returns an error indication without locking the file and without waiting. Locking a file is in effect locking the name of that file. For example, the following sequence of commands can be executed while leaving FILE1 locked even though a file with the name FILE1 does not always exist:

 $lock FILE1 RENAME
 $rename FILE1 as FILE2
 $create FILE1

At a later date this capability to lock names allowed the "file" locking routines to be used to implement record-level locking between tasks accessing the centrally managed file *MESSAGES that was used by the MTS $MESSAGESYSTEM to hold mailboxes and messages for individual users. The addition of file locking allowed removal of the restriction that a single userID could only be signed on once. Instead, the number of simultaneous signons was controlled by a maximum that could be set in the user's accounting record by the project manager or a site's business office.

File save and restore

Files are regularly backed up to tape unless they have been marked as NOSAVE. The file save process includes full and partial backups. Full saves are typically done once a week with no users signed on to the system. Partial saves save just the files that have changed since the last full or partial save and are typically done once each day in the late evening or early morning during normal operation with users signed on to the system. At the University of Michigan two copies of the full save tapes were made and one copy was stored "off-site". Save tapes were kept for six weeks and then reused. The tapes from every sixth full save were kept "forever".
Files are saved to allow recovery from "disk disasters" in which the file system becomes damaged or corrupt, usually due to a hardware failure, but users can also restore individual files using the program *RESTORE.

Terminal support

At its peak, MTS at the University of Michigan simultaneously supported more than 600 terminal sessions as well as several batch jobs. Terminals are attached to MTS over dial-in modems, leased or dedicated data circuits, and network connections. The Michigan Communications Protocol (MCP), a simple framing protocol for use with asynchronous connections that provides error detection and retransmission, was developed to improve the reliability of terminal-to-MTS and computer-to-MTS connections. A very wide range of terminals are supported, including the 10 character per second (cps) Teletype Model 33, the 30 cps LA-36 and 120 cps LA-120 DECWriter, the 14 cps IBM 2741, and, at ever increasing speeds up to 56,000 bits per second, the VT100 display, the Visual 550 display, the Ontel OP-1 and OP-1/R displays, the Tektronix 4000 series of graphic displays, and personal computers from Apple (AMIE for the Apple ][), IBM (PCTie for DOS), and others running terminal emulation programs, including some specifically developed for use with MTS. Most terminals that are compatible with any of these models are also supported. MTS also supports access from 10- or 12-button touch-tone telephones via the IBM 7772 Audio Response Unit and later the Votrax Audio Response Unit,<ref>"The University of Michigan Audio Response System and Speech Synthesis Facility", Edward J. Fronczak, Second USA-Japan Computer Conference, Proceedings, pp. 380–384, 1975</ref> from IBM 1052 consoles, IBM 3066 console displays, and the IBM 3270 family of locally attached displays (IBM 3272 and 3274 control units, but not remote 3270 displays).
Front-end communication processors

MTS can and does use communication controllers such as the IBM 2703 and the Memorex 1270 to support dial-in terminals and remote batch stations over dial-in and dedicated data circuits, but these controllers proved to be fairly inflexible and unsatisfactory for connecting large numbers of diverse terminals and, later, personal computers running terminal emulation software at ever higher data rates. Most MTS sites chose to build their own front-end processors or to use a front-end processor developed by one of the other MTS sites to provide terminal support. These front-end processors, usually DEC PDP-8, PDP-11, or LSI-11 based with locally developed custom hardware and software, would act as IBM control units attached to the IBM input/output channels on one side and to modems and phone lines on the other. At the University of Michigan the front-end processor was known as the Data Concentrator (DC). The DC was developed as part of the CONCOMP project by Dave Mills and others and was the first non-IBM device developed for attachment to an IBM I/O channel. Initially a PDP-8 based system, the DC was upgraded to use PDP-11 hardware, and a Remote Data Concentrator (RDC) was developed that used LSI-11 hardware connected back to a DC over a synchronous data circuit. The University of British Columbia (UBC) developed two PDP-11 based systems: the Host Interface Machine (HIM) and the Network Interface Machine (NIM). The University of Alberta used a PDP-11 based front-end processor. These front-end systems support their own command language of "device commands", usually lines prefixed with a special character such as a percent sign (%), to allow the user to configure and control the connections. The $CONTROL command and programs running on MTS can use the CONTROL subroutine to issue device commands to front-end and network control units.
Network support

Over time some front-ends evolved to provide true network support rather than just providing support for connections to MTS. At the University of Michigan (UM) and Wayne State University (WSU) there was a parallel development effort by the Merit Network to develop network support. The Merit nodes were PDP-11 based and used custom hardware and software to provide host to host interactive connections between MTS systems and between MTS and the CDC SCOPE/HUSTLER system at Michigan State University (MSU). The Merit nodes were known as Communication Computers (CCs) and acted as IBM Control Units on the one side while providing links to other CCs on the other side. The initial host to host interactive connections were supplemented a bit later by terminal to host (TL) connections, and later still by host to host batch connections which allowed remote jobs submitted from one system to be executed (EX) on another with printed (PR) and punched card output (PU) returned to the submitting system or to another host on the network. The remote batch jobs could be submitted from a real card reader or via *BATCH* using a #NET "card" at the front of the job. Merit renamed its Communication Computers to be Primary Communication Processors (PCPs) and created LSI-11 based Secondary Communication Processors (SCPs). PCPs formed the core of the network and were attached to each other over Ethernet and dedicated synchronous data circuits. SCPs were attached to PCPs over synchronous data circuits. PCPs and SCPs would eventually include Ethernet interfaces and support local area network (LAN) attachments. PCPs would also serve as gateways to commercial networks such as GTE's Telenet (later SprintNet), Tymnet, and ADP's Autonet, providing national and international network access to MTS. Later still the PCPs provided gateway services to the TCP/IP networks that became today's Internet.
The Merit PCPs and SCPs eventually replaced the Data Concentrators and Remote Data Concentrators at the University of Michigan. At their peak there were more than 300 Merit PCPs and SCPs installed, supporting more than 10,000 terminal ports.

Virtual environments

UMMPS provides facilities that allow the creation of virtual environments, either virtual machines or virtual operating systems. Both are implemented as user programs that run under MTS. The initial work on the first MTS virtual machine was done at the University of Michigan to simulate the IBM S/360-67 and allow debugging of UMMPS and MTS. Later the University of British Columbia did the initial work to create a S/370 MTS virtual machine. In theory these virtual machines could be used to run any S/360 or S/370 system, but in practice the virtual machines were only used to debug MTS, so subtle features that are not used by MTS may not be completely or correctly implemented. The MTS virtual machine was never updated to support the S/370-XA architecture (instead other tools such as SWAT and PEEK were used to debug MTS, and IBM's VM/XA or VM/ESA was used to debug UMMPS). In the early 1970s work was done at Wayne State University to run a version of OS/MVT in a modified virtual machine (VOS) under MTS as a production service. "Student" virtual machines have also been created in MTS as teaching tools. Here the OS running in the virtual machine (written by the student) uses simulated devices and has no connection to the "real" outside world at all (except possibly a console). In addition to virtual machines, MTS provides two programs that implement virtual operating system environments. *FAKEOS, developed at the University of Michigan, allows programs from OS/360 to run as user programs in MTS. *VSS, developed at the University of British Columbia, allows programs from OS/VS1 and MVS/370 to run as user programs in MTS.
Neither program actually runs the IBM operating system; instead each simulates enough of the operating environment to allow individual programs developed for those operating systems to run. Both programs can be run directly, but often they are run from driver files that give an end user the impression of running a regular MTS user program. Electronic mail At least three different implementations of e-mail were available under MTS at different times: *MAIL from NUMAC, but not available at all MTS sites; CONFER, the computer conferencing system written by Robert Parnes at UM; and $MESSAGESYSTEM from the University of Michigan Computing Center (MTS Volume 23: Messaging and Conferencing in MTS, University of Michigan Computing Center, Ann Arbor, Michigan). CONFER and *MAIL only sent and received mail to and from "local" users. Available to users in July 1981, $MESSAGESYSTEM was the last of the three systems to be implemented and became the most widely used. Between 1981 and 1993 it was used to send and receive more than 18 million messages at the University of Michigan. It can send: local and network e-mail messages; dispatches (immediate messages displayed at another user's terminal, unless dispatches are blocked by that user); bulletins (messages sent by the system operator to particular users, delivered automatically at the beginning of an MTS session); and signon messages (messages sent by the system operator to all users, delivered automatically before the start of an MTS session).
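The four $MESSAGESYSTEM message types differ mainly in audience and delivery time. A minimal Python model of those delivery rules (the class, field, and function names are invented for illustration; this is not the actual MTS implementation):

```python
from dataclasses import dataclass

# Hypothetical model of the four $MESSAGESYSTEM message types;
# names and fields are invented for illustration.
@dataclass
class Message:
    kind: str        # "mail", "dispatch", "bulletin", or "signon"
    sender: str
    recipient: str   # a signon ID, or "*ALL*" for signon messages
    text: str

def delivered_now(msg, user, event, dispatches_blocked=False):
    """Decide whether `msg` is shown to `user` at a given session event.

    event is "signon" (before the MTS session starts),
    "session_start" (beginning of the session), or "immediate".
    """
    if msg.kind == "dispatch":
        # Dispatches appear at the other user's terminal immediately,
        # unless that user has blocked dispatches.
        return event == "immediate" and msg.recipient == user and not dispatches_blocked
    if msg.kind == "bulletin":
        # Operator bulletins to particular users, at the start of a session.
        return event == "session_start" and msg.recipient == user
    if msg.kind == "signon":
        # Operator messages to all users, before the session starts.
        return event == "signon"
    # Ordinary mail waits in the mailbox until the user reads it.
    return False

d = Message("dispatch", "OPER", "ABCD", "System going down at 5pm")
print(delivered_now(d, "ABCD", "immediate"))                           # True
print(delivered_now(d, "ABCD", "immediate", dispatches_blocked=True))  # False
```

The point of the sketch is only that delivery time and audience, not content, distinguish the four kinds.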
Some notable features of $MESSAGESYSTEM include the ability: to send to individuals by signon ID or name, to groups of individuals by signon ID, project ID, or group name, or to the system operator; to send to a list stored in a file; to use the program *USERDIRECTORY to create and maintain a database of e-mail names for individuals and for groups, including names and groups that include remote or network users; to recall/delete messages that have not already been read; to add or remove recipients after a message has been sent; to display a history of messages in an e-mail chain without the need to include the text from older messages in each new message; to set expiration and hold-until dates and times for e-mail messages; to display the status of incoming and outgoing messages; to retrieve incoming and outgoing messages using a database model (incoming, outgoing, new, old/seen, to recipients, from recipients, message number, date sent, expiration date, ...); to permit use of a mailbox by signon IDs other than the mailbox owner's; to automatically forward messages from one mailbox to another; to archive older messages; and to send and receive messages using a subroutine interface in addition to commands. An application for the Apple Macintosh, InfoX (aka MacHost), was developed to provide a modern interface to the MTS Message System and *USERDIRECTORY. In 1984 MTS could be used to send and receive remote e-mail to and from over 300 sites around the world. The first ability to send and receive e-mail messages to and from users on remote systems (remote messages or network mail) was implemented in 1982 as part of the MAILNET project, a joint effort of 16 universities and EDUCOM (later EDUCAUSE) supported with funding from the Carnegie Corporation. MIT served as a relay hub between the MAILNET sites and as a gateway to CSNET, ARPANET, and BITNET.
MTS at the University of Michigan used its connections to the Merit Network, and through Merit to GTE's commercial X.25 network, Telenet (later SprintNet), to communicate with MIT. MTS at the University of Michigan served as a relay site for other sites on the UM campus and for other MTS sites that did not have direct access to the MAILNET relay at MIT. The remote e-mail addresses for an MTS user at the University of Michigan were: [email protected] (from MAILNET and BITNET sites) name%[email protected] (from CSNET and ARPANET sites) name@UM (from other UM or MTS sites) To send e-mail to a remote site, MTS users at the University of Michigan used addresses of the form: name@CARNEGIE (to Carnegie-Mellon University, a MAILNET site) [email protected] (the more official, but longer, name for CMU) name@WSU (to Wayne State University, an MTS site) [email protected] (the more official but longer name for WSU) name%[email protected] (to Brown University, a CSNET Phonenet site) (to Cornell University, a CSNET or ARPANET site) [email protected] (to Stanford University, a BITNET site) Over time, as more and more computers had direct connections to the Internet, the MAILNET relay approach was replaced with the more direct and more reliable peer-to-peer e-mail delivery and the Internet domain style of e-mail addresses in use today (name@um.cc.umich.edu). InfoX InfoX (pronounced "info-ex", originally InfoDisk) is a program for the Apple Macintosh developed by the Information Technology Division at the University of Michigan. It provides a modern user interface (menus, icons, windows, and buttons) that can be used to check MTS electronic mail, participate in CONFER II conferences, access the MTS User Directory, and create, edit, and manipulate files. InfoX adds Macintosh-style word processing features to the more traditional editing functions available from the MTS, $Message, $Edit, and CONFER command-line interfaces.
One can use the standard Cut, Copy, and Paste commands under the Macintosh Edit menu to move text from any Macintosh file. Accounting and charging Each signon ID is allocated resource limits (money, disk space, connect time, ...) which control the amount and types of work that can be done by the ID. IDs can be limited to using just terminal sessions or just batch jobs, or restricted to working during times of the day or days of the week when the rates charged are lower. Each signon ID is assigned an expiration date. Resources that can be charged for include: CPU time—charged in seconds of CPU time Memory usage—charged as a CPU-VM integral ... e.g. 40 pages of virtual memory used for 10 seconds is charged as 400 page-seconds Printer usage—charged as pages of paper and lines of output (for line printers) or pages and sheets (for page printers) Disk space used—charged in page-months (one page=4096 bytes) Terminal or network connect time—charged in minutes Cards read and punched—charged by the card Paper tape punched—charged by the foot Tapes mounted and tape drive usage time—charged by number of tapes mounted and minutes of usage Program product surcharges (charged on a program-by-program basis for certain licensed program products) Other resources (e.g. plotters, photo-typesetters, etc.) Note that while there is a charge for virtual memory used, there is no charge for real memory used. Note too that there is no charge for page-in operations, although they are included in the session summary information reported at sign off. Different rates can be charged for different classes of projects (internal, external, commercial, ...) and for different times of the day or days of the week. Depending on the policies and practices at different sites, charges can be for "real money" or "soft money" (soft money is sometimes called "funny money", although just how funny it is usually depends on who is or isn't paying the bills).
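The "CPU-VM integral" charge described above is simple arithmetic: pages of virtual memory held, multiplied by the seconds they are held. A minimal Python sketch (the dollar rates are invented for illustration; real MTS rates varied by site, project class, and time of day):

```python
# Hypothetical rates, for illustration only; actual MTS rates varied
# by site, project class, and time of day.
RATE_PER_CPU_SECOND = 0.10     # dollars per CPU second
RATE_PER_PAGE_SECOND = 0.0001  # dollars per page-second of the CPU-VM integral

def memory_integral(pages, seconds):
    """The charge unit from the text: pages of VM held times seconds held."""
    return pages * seconds  # page-seconds

def session_cost(cpu_seconds, vm_pages):
    """Combine two of the charged resources into a dollar amount."""
    integral = memory_integral(vm_pages, cpu_seconds)
    return cpu_seconds * RATE_PER_CPU_SECOND + integral * RATE_PER_PAGE_SECOND

# The example from the text: 40 pages held for 10 seconds = 400 page-seconds.
print(memory_integral(40, 10))  # 400
```

A full billing routine would add the other charged resources (connect minutes, pages printed, cards, tape mounts, surcharges) as further terms of the same form.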
Users can display the cost of a session using the $DISPLAY COST command and their account balances using the $ACCOUNTING command; the costs of a session and the account's remaining balance are also displayed when the job or session ends. There is also an option ($SET COST=ON) that causes the incremental and cumulative session cost to be displayed after each MTS command is executed. To prevent a user from overdrawing their account, the money limit is checked when the user attempts to sign on. If the account balance is zero or negative, the sign on is not allowed. For batch jobs, if the account balance is not sufficient to cover the charges estimated for the job, the job is not run. For terminal sessions, when an account's balance falls below one dollar, a warning "You have run out of money" followed by the current balance is printed. This "out of money" message is repeated at regular intervals until the user signs off. Signon IDs can run a negative balance, but usually not a large one, and not by accident. Depending on the administrative policies at a particular site, projects often have to pay for resources used even if they are beyond the amount authorized. To provide additional protection against accidents that might quickly use more resources than desired, users may also set global and local limits on CPU time usage. Global time limits ($SIGNON ccid T=maxtime) apply to an entire job or session. Local time limits apply to running individual programs ($RUN program T=maxtime). Global and local limits on the number of pages to be printed and the number of cards to be punched can also be set ($SIGNON ccid P=maxpages C=maxcards and $RUN program P=maxpages C=maxcards). A default local CPU time limit can be established using the $SET TIME=maxtime command. References 1960s software Discontinued operating systems IBM mainframe operating systems Operating systems by architecture Time-sharing operating systems
https://en.wikipedia.org/wiki/Operation%20Newscaster
Operation Newscaster
"Operation Newscaster", as labelled by American firm iSIGHT Partners in 2014, is a cyber espionage covert operation directed at military and political figures using social networking, allegedly done by Iran. The operation has been described as "creative", "long-term" and "unprecedented". According to iSIGHT Partners, it is "the most elaborate cyber espionage campaign using social engineering that has been uncovered to date from any nation". ISight's perceptions On 29 May 2014, Texas-based cyber espionage research firm iSIGHT Partners released a report, uncovering an operation it labels "Newscaster" since at-least 2011, has targeted at least 2,000 people in United States, Israel, Britain, Saudi Arabia, Syria, Iraq and Afghanistan. The victims who are not identified in the document due to security reasons, are senior U.S. military and diplomatic personnel, congresspeople, journalists, lobbyists, think tankers and defense contractors, including a four-star admiral. The firm couldn’t determine what data the hackers may have stolen. According to the iSIGHT Partners report, hackers used 14 "elaborated fake" personas claiming to work in journalism, government, and defense contracting and were active in Facebook, Twitter, LinkedIn, Google+, YouTube and Blogger. To establish trust and credibility, the users fabricated a fictitious journalism website, NewsOnAir.org, using content from the media like Associated Press, BBC, Reuters and populated their profiles with fictitious personal content. They then tried to befriend target victims and sent them "friendly messages" with Spear-phishing to steal email passwords and attacks and infecting them to a "not particularly sophisticated" malware for data exfiltration. The report says NewsOnAir.org was registered in Tehran and likely hosted by an Iranian provider. 
The Persian word "Parastoo" (meaning swallow) was used as a password for malware associated with the group, which appeared to work during business hours in Tehran, as its members took Thursday and Friday off. iSIGHT Partners could not confirm whether the hackers had ties to the Iranian government. Analysis According to Al Jazeera, the Chinese army's cyber unit has carried out scores of similar phishing schemes. Morgan Marquis-Boire, a researcher at the University of Toronto, stated that the campaign "appeared to be the work of the same actors performing malware attacks on Iranian dissidents and journalists for at least two years". Franz-Stefan Gady, a senior fellow at the EastWest Institute and a founding member of the Worldwide Cybersecurity Initiative, stated: "They're not doing this for a quick buck, to extrapolate data and extort an organization. They're in it for the long haul. Sophisticated human engineering has been the preferred method of state actors." Reactions A Facebook spokesman said the company discovered the hacking group while investigating suspicious friend requests and removed all of the fake profiles. A LinkedIn spokesman said the company was investigating the report, though none of the 14 fake profiles uncovered was still active. Twitter declined to comment. The Federal Bureau of Investigation told Al Jazeera "it was aware of the report but that it had no comment". References External links NEWSCASTER – An Iranian Threat Inside Social Media Cyberwarfare in Iran Cyberwarfare in the United States Cyberattacks Hacking in the 2010s Social engineering (computer security)
https://en.wikipedia.org/wiki/CAPTCHA
CAPTCHA
A CAPTCHA (a contrived acronym for "Completely Automated Public Turing test to tell Computers and Humans Apart") is a type of challenge–response test used in computing to determine whether the user is human. The term was coined in 2003 by Luis von Ahn, Manuel Blum, Nicholas J. Hopper, and John Langford. The most common type of CAPTCHA (displayed as Version 1.0) was first invented in 1997 by two groups working in parallel. This form of CAPTCHA requires someone to correctly evaluate and enter a sequence of letters or numbers perceptible in a distorted image displayed on their screen. Because the test is administered by a computer, in contrast to the standard Turing test that is administered by a human, a CAPTCHA is sometimes described as a reverse Turing test. This user identification procedure has received many criticisms, especially from people with disabilities, but also from other people who feel that their everyday work is slowed down by distorted words that are difficult to read. It takes the average person approximately 10 seconds to solve a typical CAPTCHA. History Since the early days of the Internet, users have wanted to make text illegible to computers. The first such people were hackers, posting about sensitive topics to Internet forums they thought were being automatically monitored for keywords. To circumvent such filters, they replaced a word with look-alike characters. HELLO could become "H3LL0" or "|-|3|_|_0", as well as numerous other variants, such that a filter could not possibly detect all of them. This later became known as leetspeak. One of the earliest commercial uses of CAPTCHAs was in the Gausebeck–Levchin test. In 2000, idrive.com began to protect its signup page with a CAPTCHA and prepared to file a patent on this seemingly novel technique. In 2001, PayPal used such tests as part of a fraud prevention strategy in which they asked humans to "retype distorted text that programs have difficulty recognizing."
PayPal cofounder and CTO Max Levchin helped commercialize this early use. A popular deployment of CAPTCHA technology, reCAPTCHA, was acquired by Google in 2009. In addition to preventing bot fraud for its users, Google used reCAPTCHA and CAPTCHA technology to digitize the archives of The New York Times and books from Google Books in 2011. Inventorship claims Two teams have claimed to be the first to invent the CAPTCHAs used widely on the web today. The first team, with Mark D. Lillibridge, Martín Abadi, Krishna Bharat, and Andrei Broder, used CAPTCHAs in 1997 at AltaVista to prevent bots from adding Uniform Resource Locators (URLs) to their web search engine. Looking for a way to make their images resistant to optical character recognition (OCR) attack, the team looked at the manual of their Brother scanner, which had recommendations for improving OCR's results (similar typefaces, plain backgrounds, etc.). The team created puzzles by attempting to simulate what the manual claimed would cause bad OCR. The second team, with Luis von Ahn, Manuel Blum, Nicholas J. Hopper, and John Langford, first described CAPTCHAs in a 2003 publication and subsequently received much coverage in the popular press. Their notion of CAPTCHA covers any program that can distinguish humans from computers. The controversy over inventorship has been resolved by the existence of a 1997 priority date patent application by Eran Reshef, Gili Raanan and Eilon Solan (second group), who worked at Sanctum on its Application Security Firewall. Their patent application details that "The invention is based on applying human advantage in applying sensory and cognitive skills to solving simple problems that prove to be extremely hard for computer software. Such skills include, but are not limited to processing of sensory information such as identification of objects and letters within a noisy graphical environment".
Lillibridge, Abadi, Bharat, and Broder (first group) published their patent in 1998. Both patents predate other publications by several years. Though they do not use the term CAPTCHA, the patents describe the ideas in detail and precisely depict the graphical CAPTCHAs used on the Web today. Characteristics CAPTCHAs are, by definition, fully automated, requiring little human maintenance or intervention to administer, producing benefits in cost and reliability. The algorithm used to create the CAPTCHA must be made public, though it may be covered by a patent. This is done to demonstrate that breaking it requires the solution to a difficult problem in the field of artificial intelligence (AI) rather than just the discovery of the (secret) algorithm, which could be obtained through reverse engineering or other means. Modern text-based CAPTCHAs are designed such that they require the simultaneous use of three separate abilities—invariant recognition, segmentation, and parsing—to correctly complete the task with any consistency. Invariant recognition refers to the ability to recognize the large amount of variation in the shapes of letters. There is an overwhelmingly large number of versions of each character that a human brain can successfully identify. The same is not true for a computer, and teaching it to recognize all those differing formations is a challenging task. Segmentation, the ability to separate one letter from another, is also made difficult in CAPTCHAs, as characters are crowded together with no white space in between. Context is also critical. The CAPTCHA must be understood holistically to correctly identify each character. For example, in one segment of a CAPTCHA, a letter might look like an "m". Only when the whole word is taken in context does it become clear that it is a u and an n. Each of these problems poses a significant challenge for a computer, even in isolation. The presence of all three at the same time is what makes CAPTCHAs difficult to solve.
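The segmentation difficulty described above can be illustrated with a toy example: once two glyphs are crowded together with no blank column between them, a naive "split at white space" segmenter sees only a single blob. The 3×3 glyph bitmaps below are invented for illustration and are not real CAPTCHA rendering:

```python
# Toy 3x3 "glyphs" as row strings ("#" = ink, "." = blank);
# the bitmaps are invented for illustration.
GLYPHS = {
    "u": ["#.#",
          "#.#",
          "###"],
    "n": ["###",
          "#.#",
          "#.#"],
}

def render(text, gap):
    """Place glyphs left to right with `gap` blank columns between them."""
    rows = ["", "", ""]
    for i, ch in enumerate(text):
        sep = "." * gap if i else ""
        for r in range(3):
            rows[r] += sep + GLYPHS[ch][r]
    return rows

def segments(rows):
    """Naive segmentation: split at columns that are entirely blank."""
    width = len(rows[0])
    blank = [all(row[c] == "." for row in rows) for c in range(width)]
    count, inside = 0, False
    for c in range(width):
        if not blank[c] and not inside:
            count += 1
        inside = not blank[c]
    return count

print(segments(render("un", gap=1)))  # 2: a blank column separates the glyphs
print(segments(render("un", gap=0)))  # 1: crowded glyphs merge into one blob
```

Real CAPTCHA solvers face the harder, distorted version of this problem, but the failure mode is the same: without separable columns, recognition and segmentation can no longer be treated as independent steps.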
Unlike computers, humans excel at this type of task. While segmentation and recognition are two separate processes necessary for understanding an image for a computer, they are part of the same process for a person. For example, when an individual understands that the first letter of a CAPTCHA is an a, that individual also understands where the contours of that a are, and also where it melds with the contours of the next letter. Additionally, the human brain is capable of dynamic thinking based upon context. It is able to keep multiple explanations alive and then pick the one that is the best explanation for the whole input based upon contextual clues. This also means it will not be fooled by variations in letters. Relation to AI While used mostly for security reasons, CAPTCHAs also serve as a benchmark task for artificial intelligence technologies. According to an article by Ahn, Blum and Langford, "any program that passes the tests generated by a CAPTCHA can be used to solve a hard unsolved AI problem." They argue that the advantages of using hard AI problems as a means for security are twofold. Either the problem goes unsolved and there remains a reliable method for distinguishing humans from computers, or the problem is solved and a difficult AI problem is resolved along with it. In the case of image and text based CAPTCHAs, if an AI were capable of accurately completing the task without exploiting flaws in a particular CAPTCHA design, then it would have solved the problem of developing an AI that is capable of complex object recognition in scenes. Accessibility CAPTCHAs based on reading text — or other visual-perception tasks — prevent blind or visually impaired users from accessing the protected resource. However, CAPTCHAs do not have to be visual. Any hard artificial intelligence problem, such as speech recognition, can be used as the basis of a CAPTCHA. 
Some implementations of CAPTCHAs permit users to opt for an audio CAPTCHA, though a 2011 paper demonstrated a technique for defeating the popular schemes at the time. For non-sighted users (for example blind users, or color blind people on a color-using test), visual CAPTCHAs present serious problems. Because CAPTCHAs are designed to be unreadable by machines, common assistive technology tools such as screen readers cannot interpret them. Since sites may use CAPTCHAs as part of the initial registration process, or even every login, this challenge can completely block access. In certain jurisdictions, site owners could become targets of litigation if they are using CAPTCHAs that discriminate against certain people with disabilities. For example, a CAPTCHA may make a site incompatible with Section 508 in the United States. In other cases, those with sight difficulties can choose to identify a word being read to them. While providing an audio CAPTCHA allows blind users to read the text, it still hinders those who are both blind and deaf. According to sense.org.uk, about 4% of people over 60 in the UK have both vision and hearing impairments. There are about 23,000 people in the UK who have serious vision and hearing impairments. According to The National Technical Assistance Consortium for Children and Young Adults Who Are Deaf-Blind (NTAC), the number of deafblind children in the USA increased from 9,516 to 10,471 during the period 2004 to 2012. Gallaudet University quotes 1980 to 2007 estimates which suggest upwards of 35,000 fully deafblind adults in the USA. Deafblind population estimates depend heavily on the degree of impairment used in the definition. The use of CAPTCHA thus excludes a small number of individuals from using significant subsets of such common Web-based services as PayPal, Gmail, Orkut, Yahoo!, many forum and weblog systems, etc. 
Even for perfectly sighted individuals, new generations of graphical CAPTCHAs, designed to overcome sophisticated recognition software, can be very hard or impossible to read. A method of improving CAPTCHA usability was proposed by ProtectWebForm under the name "Smart CAPTCHA". Developers are advised to combine the CAPTCHA with JavaScript: since it is hard for most bots to parse and execute JavaScript, a combined method was proposed in which JavaScript fills in the CAPTCHA field and hides both the image and the field from human eyes. One alternative method involves displaying to the user a simple mathematical equation and requiring the user to enter the solution as verification. Although these are much easier to defeat using software, they are suitable for scenarios where graphical imagery is not appropriate, and they provide a much higher level of accessibility for blind users than the image-based CAPTCHAs. These are sometimes referred to as MAPTCHAs (M = "mathematical"). However, these may be difficult for users with a cognitive disorder. Other kinds of challenges, such as those that require understanding the meaning of some text (e.g., a logic puzzle, trivia question, or instructions on how to create a password) can also be used as a CAPTCHA. Again, there is little research into their resistance against countermeasures. Circumvention There are a few approaches to defeating CAPTCHAs: using cheap human labor to recognize them, exploiting bugs in the implementation that allow the attacker to completely bypass the CAPTCHA, and finally using machine learning to build an automated solver. According to former Google "click fraud czar" Shuman Ghosemajumder, there are numerous services which solve CAPTCHAs automatically. Machine learning-based attacks In its earliest iterations there was not a systematic methodology for designing or evaluating CAPTCHAs.
As a result, there were many instances in which CAPTCHAs were of a fixed length, and therefore automated tasks could be constructed to successfully make educated guesses about where segmentation should take place. Other early CAPTCHAs contained limited sets of words, which made the test much easier to game. Still others made the mistake of relying too heavily on background confusion in the image. In each case, algorithms were created that were successfully able to complete the task by exploiting these design flaws. These methods proved brittle, however, and slight changes to the CAPTCHA were easily able to thwart them. Modern CAPTCHAs like reCAPTCHA no longer rely just on fixed patterns but instead present variations of characters that are often collapsed together, making segmentation almost impossible. These newest iterations have been much more successful at warding off automated tasks. In October 2013, artificial intelligence company Vicarious claimed that it had developed a generic CAPTCHA-solving algorithm that was able to solve modern CAPTCHAs with character recognition rates of up to 90%. However, Luis von Ahn, a pioneer of early CAPTCHA and founder of reCAPTCHA, expressed skepticism, stating: "It's hard for me to be impressed since I see these every few months." He pointed out that 50 similar claims to that of Vicarious had been made since 2003. In August 2014, at the Usenix WoOT conference, Bursztein et al. presented the first generic CAPTCHA-solving algorithm based on reinforcement learning and demonstrated its efficiency against many popular CAPTCHA schemas. They concluded that text-distortion-based CAPTCHA schemes should be considered insecure moving forward. In October 2018, at the ACM CCS'18 conference, Ye et al. presented a deep learning-based attack that could successfully solve all 11 text CAPTCHA schemes used by the top 50 most popular websites in 2018 with a high success rate.
Their work shows that an effective CAPTCHA solver can be trained using as few as 500 real CAPTCHAs, showing that it is possible to quickly launch an attack on a new text CAPTCHA scheme. Cheap or unwitting human labor It is possible to subvert CAPTCHAs by relaying them to a sweatshop of human operators who are employed to decode CAPTCHAs. A 2005 paper from a W3C working group stated that such an operator could verify hundreds per hour. In 2010, the University of California at San Diego conducted a large-scale study of CAPTCHA farms and found that the retail price for solving one million CAPTCHAs was as low as $1,000. Another technique that has been described consists of using a script to re-post the target site's CAPTCHA as a CAPTCHA to a site owned by the attacker, which unsuspecting humans visit and solve within a short while, providing the answers for the script to use. This technique is likely to be economically unfeasible for most attackers due to the cost of attracting enough users and running a popular site. Outsourcing to paid services There are multiple Internet companies like 2Captcha and DeathByCaptcha that offer human- and machine-backed CAPTCHA solving services for as low as US$0.50 per 1000 solved CAPTCHAs. These services offer APIs and libraries that enable users to integrate CAPTCHA circumvention into the tools that CAPTCHAs were designed to block in the first place.
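Integration with such solving services typically follows a submit-then-poll pattern. The sketch below is purely illustrative: the class, method names, and responses are invented (they are not the API of 2Captcha, DeathByCaptcha, or any real service), and the network transport is replaced by a stub so the example is self-contained:

```python
import time

# Hypothetical transport. A real client would POST the image to the
# service's endpoint and poll a result URL; all names here are invented.
class StubTransport:
    def submit(self, image_bytes):
        return "task-42"   # pretend task ID returned by the service

    def poll(self, task_id):
        return "W9H5P"     # pretend answer from a human or ML worker

def solve_captcha(transport, image_bytes, poll_interval=5.0, max_polls=12):
    """Submit a CAPTCHA image, then poll until an answer arrives."""
    task_id = transport.submit(image_bytes)
    for _ in range(max_polls):
        answer = transport.poll(task_id)
        if answer is not None:
            return answer
        time.sleep(poll_interval)  # the worker needs time to type the answer
    raise TimeoutError("no answer for " + task_id)

print(solve_captcha(StubTransport(), b"...image bytes..."))  # W9H5P
```

The asymmetry this illustrates is the economic one from the text: the defender's puzzle costs fractions of a cent to outsource, so the CAPTCHA only raises the attacker's price, it does not block the attack.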
Insecure implementation Howard Yeend has identified two implementation issues with poorly designed CAPTCHA systems: Some CAPTCHA protection systems can be bypassed without using OCR simply by reusing the session ID of a known CAPTCHA image CAPTCHAs residing on shared servers also present a problem; a security issue on another virtual host may leave the CAPTCHA issuer's site vulnerable Sometimes, if part of the software generating the CAPTCHA is client-side (the validation is done on a server but the text that the user is required to identify is rendered on the client side), then users can modify the client to display the un-rendered text. Some CAPTCHA systems use MD5 hashes stored client-side, which may leave the CAPTCHA vulnerable to a brute-force attack. Notable attacks Some notable attacks against various CAPTCHA schemas include: Mori et al. published a paper in IEEE CVPR'03 detailing a method for defeating one of the most popular CAPTCHAs, EZ-Gimpy, which was tested as being 92% accurate in defeating it. The same method was also shown to defeat the more complex and less widely deployed Gimpy program 33% of the time. However, the existence of implementations of their algorithm in actual use is indeterminate at this time. PWNtcha has made significant progress in defeating commonly used CAPTCHAs, which has contributed to a general migration towards more sophisticated CAPTCHAs. Podec, a trojan discovered by the security company Kaspersky, forwards CAPTCHA requests to an online human translation service that converts the image to text, fooling the system. Podec targets Android mobile devices. Alternative CAPTCHA schemas With the demonstration that text-distortion-based CAPTCHAs are vulnerable to machine learning-based attacks, some researchers have proposed alternatives, including image recognition CAPTCHAs which require users to identify simple objects in the images presented.
The argument in favor of these schemes is that tasks like object recognition are typically more complex to perform than text recognition and therefore should be more resilient to machine learning-based attacks. Some notable alternative CAPTCHA schemas include: Chew et al. published their work in the 7th International Information Security Conference, ISC'04, proposing three different versions of image recognition CAPTCHAs and validating the proposal with user studies. It is suggested that one of the versions, the anomaly CAPTCHA, is best, with 100% of human users being able to pass an anomaly CAPTCHA with at least 90% probability in 42 seconds. Datta et al. published their paper in the ACM Multimedia '05 Conference, named IMAGINATION (IMAge Generation for INternet AuthenticaTION), proposing a systematic way to create image recognition CAPTCHAs. Images are distorted in such a way that state-of-the-art image recognition approaches (which are potential attack technologies) fail to recognize them. Microsoft (Jeremy Elson, John R. Douceur, Jon Howell, and Jared Saul) claim to have developed Animal Species Image Recognition for Restricting Access (ASIRRA), which asks users to distinguish cats from dogs. Microsoft had a beta version of this for websites to use. They claim "Asirra is easy for users; it can be solved by humans 99.6% of the time in under 30 seconds. Anecdotally, users seemed to find the experience of using Asirra much more enjoyable than a text-based CAPTCHA." This solution was described in a 2007 paper in the Proceedings of the 14th ACM Conference on Computer and Communications Security (CCS). However, the project was closed in October 2014 and is no longer available. See also Defense strategy (computing) NuCaptcha Proof-of-work system reCAPTCHA References Further references von Ahn, L; M. Blum and J. Langford. (2004) "Telling humans and computers apart (automatically)". Communications of the ACM, 47(2):57–60.
External links Verification of a human in the loop, or Identification via the Turing Test, Moni Naor, 1996. Inaccessibility of CAPTCHA: Alternatives to Visual Turing Tests on the Web, a W3C Working Group Note. CAPTCHA History from PARC. Reverse Engineering CAPTCHAs Abram Hindle, Michael W. Godfrey, Richard C. Holt, 2009-08-24 Turing tests Internet forum terminology Computer vision 2003 neologisms 20th-century inventions
https://en.wikipedia.org/wiki/Faceware%20Technologies
Faceware Technologies
Faceware Technologies is an American company that designs facial animation and motion capture technology. The company was established under Image Metrics and became its own company at the beginning of 2012. Faceware produces software used to capture an actor's performance and transfer it onto an animated character, as well as the hardware needed to capture the performances. The software line includes Faceware Analyzer, Faceware Retargeter, and Faceware Live. Faceware software is used by film studios and video game developers including Rockstar Games, Bungie, Cloud Imperium Games, and 2K in games such as Grand Theft Auto V, Destiny, Star Citizen, and Halo: Reach. Through its application in the video game industry, Faceware won the Develop Award for Technical Innovation in 2008, while it was still part of Image Metrics. It won the Develop Award again for Creative Contribution: Visuals in 2014. Faceware received Best of Show recognition at the Game Developers Conference 2011 in San Francisco, as well as Computer Graphics World's Silver Edge Award at SIGGRAPH 2014 and 2016. Finally, Faceware won the XDS Gary Award in 2016 for its contributions to the Faceware-EA presentation at the 2016 XDS Summit. History Image Metrics, founded in 2000, is a provider of facial animation and motion capture technology within the video game and entertainment industries. In 2008, Image Metrics offered a beta version of its facial animation technology to visual effects and film studios. The technology captured an actor's performance on video, analyzed it, and mapped it onto a CG model. The release of the beta allowed studios to incorporate the facial animation technology into internal pipelines rather than going to the Image Metrics studio as they had in the past. The first studio to beta test Image Metrics' software in 2009 was the visual effects studio Double Negative of London. In 2010, Image Metrics launched the facial animation technology platform Faceware.
Faceware focused on increasing creative control, efficiency and production speed for animators. The software could be integrated into any pipeline or used with any game engine. Image Metrics provided training to learn the Faceware platform. The first studio to sign on as a Faceware customer was Bungie, which incorporated the software into its in-house production. In 2010, Image Metrics acquired FacePro, a company that provided automated lip synchronization which could be adjusted for accurate results, and integrated the acquired technology into its facial animation software. Also in 2010, Image Metrics bought Character-FX, a character animation company. Character-FX produced tools for use in Autodesk’s Maya and 3DS Max which aid in the creation of character facial rigs using an automated weighting transfer system that rapidly shifts facial features on a character to create lifelike movement. Image Metrics raised $8 million in funding and went public through a reverse merger in 2010 with International Cellular Industries. Image Metrics became wholly owned by International Cellular Industries, which changed its name and took on facial animation technology as its sole line of business. Faceware 3.0 was announced in March 2011. The upgrade included auto-pose, a shared pose database, and curve refinement. Image Metrics led a workshop and presentation about Faceware 3.0 at the CTN Animation Expo 2011 titled "Faceware: Creating an Immersive Experience through Facial Animation." Faceware's technology was displayed at Edinburgh Interactive in August 2011 to show its ability to add player facial animation from a webcam or Kinect sensor into a game in real time. Image Metrics sold the Faceware software to its spinoff company, Faceware Technologies, in January 2012. Following the spinoff, Faceware Technologies focused on producing and distributing its technology to professional animators.
The technology was tested at universities, including the University of Portsmouth. Faceware launched its 3D facial animation tools, the software packages Faceware Analyzer and Faceware Retargeter, alongside the Head-Mounted Camera System (HMCS). Analyzer tracks and processes live footage of an actor and Retargeter transfers that movement onto the face of a computer-generated character. The Head-Mounted Camera System is not required to use the software. Six actors can be captured simultaneously. Faceware Live was shown for the first time at SIGGRAPH 2013. It was created to enable the real-time capture and retargeting of facial movements. The live capture of facial performance can use any video source to track and translate facial expressions into a set of animation values and transfer the captured data onto a 3D animated character in real time. In 2014, Faceware released Faceware Live 2.0. The update included the option to stream multiple characters simultaneously, instant calibration, improved facial tracking, consistent calibration, and support for high-frame-rate cameras. In 2015, Faceware launched a plugin for Unreal Engine 4 called Faceware Live. The company co-developed the plugin with Australia-based Opaque Multimedia. It makes motion capture of expressions and other facial movements possible with any video camera through Faceware's markerless 3D facial motion capture software. In 2016, Faceware announced the launch of Faceware Interactive, which is focused on the development of software and hardware that can be used in the creation of digital characters with whom real people can interact. Partners Faceware Technologies partnered with Binari Sonori in 2014 to develop a video-based localization service. Also in 2014, Faceware Technologies entered a global partnership with Vicon, a company focused on motion capture. The partnership would focus on developing new technology to expand into full-body motion capture data.
The first step of the integration was to make the Faceware software compatible with Vicon's head rig, Cara, to allow data acquired from Cara to be processed and transferred into Faceware products. Overview Faceware Technologies' facial animation software has two main components. Faceware Analyzer is a stand-alone single-camera facial tracking software that converts videos of facial motion into files that can be used with Faceware Retargeter. The Lite version of the software can automatically track facial movements, which can then be applied to 3D models with Faceware Retargeter. The Pro version can perform shot-specific custom calibrations, import and export actor data, automatically indicate tracking regions, and has server and local licensing options. The data captured by Faceware Analyzer is then processed in Faceware Retargeter. Faceware Retargeter 4.0 was announced in 2014. Faceware Retargeter uses facial tracking data created in Analyzer to create facial animation in a pose-based workflow. The upgrade has a plug-in for Autodesk animation tools, advanced character expression sets, visual tracking data, shared pose thumbnails, and batch processing. The Lite version of the Retargeter software transfers actors' performances onto animated characters and reduces and smooths key frames. The Pro version includes custom poses, intelligent pose suggestions, shared pose libraries, and the ability to back up and restore jobs. Faceware Live aims to create natural-looking faces and facial expressions in real time. Any video source can be used with the software's one-button calibration. The captured video is transferred onto a 3D animated character. This process combines image processing and data streaming to translate facial expressions into a set of animation values. Faceware has hardware options that can be rented or purchased. Available hardware includes the entry-level GoPro Headcam Kit and the Professional Headcam System.
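The core idea of translating tracked facial measurements into animation values can be sketched in a few lines. The sketch below is purely illustrative and is not Faceware's algorithm: the measurement names, calibration poses and numbers are all invented, and real products use far richer models. It only shows the calibrate-then-normalize step that retargeting of this kind relies on.

```python
# Illustrative sketch: map raw facial-tracker measurements onto 0..1
# animation values using a per-actor calibration (neutral and extreme
# poses). All names and numbers here are hypothetical.

NEUTRAL = {"jaw_open": 0.05, "brow_raise": 0.10}   # actor's resting pose
EXTREME = {"jaw_open": 0.80, "brow_raise": 0.60}   # actor's maximum pose

def calibrate(measure, neutral, extreme):
    """Normalize one raw measurement into a clamped 0..1 animation value."""
    span = extreme - neutral
    if span == 0:
        return 0.0
    return min(1.0, max(0.0, (measure - neutral) / span))

def retarget(frame):
    """Convert one frame of raw measurements into rig control weights."""
    return {name: calibrate(frame[name], NEUTRAL[name], EXTREME[name])
            for name in frame}

weights = retarget({"jaw_open": 0.425, "brow_raise": 0.35})
print(weights)  # roughly {'jaw_open': 0.5, 'brow_raise': 0.5}
```

A real pipeline would stream such values per frame to a character rig's blendshape or joint controls; the normalization step is what makes one actor's performance reusable across differently proportioned characters.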
The Indie Facial Mo-cap package includes hardware, a camera and head mount, and the tools to use it. Selected works Faceware software is used by companies such as Activision-Blizzard, Bethesda, Ubisoft, Electronic Arts, Sony, Cloud Imperium Games, and Microsoft. Rockstar Games used the software in games such as Grand Theft Auto V and Red Dead Redemption, and Bungie used Faceware in games including Destiny and Halo: Reach. Faceware has also been used in other games like XCOM 2, Dying Light: The Following, Hitman, EA Sports UFC 2, Fragments for Microsoft's HoloLens, DOOM, Mirror's Edge Catalyst, Kingsglaive, F1 2016, ReCore, Destiny: Rise of Iron, Mafia III, Call of Duty: Infinite Warfare, Killzone: Shadow Fall, NBA 2K10-2K17, Sleeping Dogs, Crysis 2 and 3, Star Citizen, and in movies like The Curious Case of Benjamin Button and Robert Zemeckis's The Walk. References External links Official site Animation technology Software development Organizations established in 2012 3D graphics software
34035753
https://en.wikipedia.org/wiki/Arfa%20Software%20Technology%20Park
Arfa Software Technology Park
Arfa Software Technology Park (previously known as Software Technology Park) is an information technology park in Lahore, Punjab, Pakistan, built in 2009. It is home to the Information Technology University and the Punjab Information Technology Board (PITB). The main building consists of 17 floors and is 106 meters tall. On 15 January 2012, Chief Minister of Punjab Mian Shahbaz Sharif announced that the Software Technology Park would be renamed Arfa Software Technology Park after the world's youngest Microsoft Certified Professional, Arfa Karim, who had died at the age of 16. See also List of tallest buildings in Pakistan List of parks and gardens in Lahore List of parks and gardens in Pakistan References Science and technology in Punjab, Pakistan 2009 establishments in Pakistan Buildings and structures in Lahore Economy of Lahore Science parks in Pakistan Information technology in Pakistan Skyscrapers in Lahore Office buildings in Lahore
3173429
https://en.wikipedia.org/wiki/VM2000
VM2000
VM2000 is a hypervisor from Fujitsu (formerly Siemens Nixdorf Informationssysteme) designed specifically for use with the BS2000 operating system, an EBCDIC-based operating system. It allows multiple images of BS2000 and Linux to operate on an S-series computer, which is based on the IBM System/390 architecture. It also supports BS2000, Linux and Microsoft Windows on x86-based SQ-series mainframes. Additionally, it can virtualize BS2000 guests on SR- and SX-series mainframes, based on MIPS and SPARC respectively. See also Paravirtualization References External links Virtualization VM2000 Virtualization software MIPS operating systems
57624874
https://en.wikipedia.org/wiki/Jimson%20Olufuye
Jimson Olufuye
Jimson Owodunni Olufuye (born December 16, 1966) is a Nigerian chief executive officer. Olufuye has more than 30 years of experience in the national and global ICT industry. He is a strategic thinker, analyst and multi-tasking development specialist focusing on people, process and technology. He is passionate about the use of technology to transform lives, create wealth and accelerate development in Nigeria and in Africa generally. He is the CEO of Kontemporary, a leading ICT solution company based in Abuja, Nigeria. He is also the founder and first chair of the private sector-led Africa ICT Alliance (AfICTA), which spans more than 30 nations. He has distinguished himself at the United Nations Commission for Science and Technology for Development Working Group on Improvements to the Internet Governance Forum (2011-2012) and on the Working Group on Enhanced Cooperation on public policy matters pertaining to the Internet (2013-2014 & 2016-2018). He was the ICT Consultant for the World Bank-Office of the Auditor General for the Federation, Nigeria (WB-OAuGF) Economic Reform and Governance Project 2012-2013. He thereafter continued to consult for the OAuGF. He is actively involved in policies for cybersecurity assurance, broadband as a right, intellectual property rights, internet freedom and the management of critical internet resources. He is directly involved in energizing and transforming the Nigerian information technology industry through diverse ICT policy framework advocacy. Some of his efforts, in collaboration with other stakeholders, resulted in the establishment of the National Information Technology Development Agency (NITDA), Galaxy Backbone Plc and the Ministry of Communications and Digital Economy, Nigeria. He was formerly the vice-chairman of the World Information Technology and Services Alliance (WITSA) and the president of the Information Technology (Industry) Association of Nigeria (ITAN).
He is a member of the International Chamber of Commerce (ICC) Business Action in Support of the Information Society (BASIS) and was formerly the chair, finance and operations, of the Business Constituency of the Internet Corporation for Assigned Names and Numbers (BC-ICANN) (2014-2020). He was also, until 2012, the chair of the National IT Public-Private Forum – a peer review and development forum of IT administrators and CEOs in the public and private sectors in Nigeria. He was a council member of the Nigeria Computer Society (NCS) and the Computer Professionals’ (Registration Council) of Nigeria (CPN). He has authored four books, including Sincere But Sincerely Wrong, Mohammed My Brother, The Transformation and Information Technology Applications. Olufuye is a recipient of the United Nations Youth Ambassador Award for Peace in 2007. He was inducted into the fellowships of the Nigeria Computer Society (NCS) in 2003 and the Institute of Chartered Management Auditors in 2004. He is a member of the Information System Audit and Control Association (ISACA), United States. Olufuye has attended advanced leadership and professional training courses in Sun City, South Africa; in Silicon Valley, Virginia, Las Vegas and Hawaii in the USA; and at the United Kingdom Telecom Academy. He has also attended professional conferences in London, Senegal, Malaysia, UAE, India, Germany, Canada, Switzerland, South Korea, Brazil, Argentina, Singapore, China, Ireland, and in Nigeria. He has served on many Federal Government of Nigeria committees, including Y2K (1999), IT Think Tank (1999), IT Policy (2000 & 2008/2009) and IT Harmonization (2004), and on the boards of many companies. Early life and education Olufuye graduated in Applied Mathematics and Statistics from the University of Lagos in 1988, where he was awarded the Vice-Chancellor’s Prize and the college/faculty prize for the best all-round performance. He was also awarded full colours for outstanding performance in chess.
He obtained a Master of Technology degree in Computing from the Federal University of Technology, Minna in 2000. In 2007, he was awarded a Ph.D. in Business Administration (Strategic Management) by the Irish University Business School, Dublin. Olufuye is a PRINCE2 certified Project Management Professional, a Certified Information Systems Auditor (CISA), a Certified Information Security Manager (CISM) and a Certified in Risk and Information Systems Control (CRISC) professional. References External links Jimson's Profile 1966 births Living people Federal University of Technology, Minna alumni Nigerian chief executives Nigerian politicians University of Lagos alumni
14483963
https://en.wikipedia.org/wiki/UltraDefrag
UltraDefrag
UltraDefrag is a disk defragmentation utility for Microsoft Windows. Prior to version 8.0.0 it was released under the GNU General Public License. The only other Windows-based defragmentation utility licensed under the GNU GPL was JkDefrag, discontinued in 2008. In 2018, the UltraDefrag source code was transferred to Green Gate Systems. Their enhanced 8.0.0 version, released under a proprietary license, features automatic defragmentation and is said to have much faster disk processing algorithms. UltraDefrag uses the defragmentation part of the Windows API and works on Windows NT 4.0 and later. It supports the FAT12, FAT16, FAT32, exFAT, and NTFS file systems. Jean-Pierre André, one of the developers of NTFS-3G, has created a fork of UltraDefrag 5 that runs on Linux. It only has a command-line interface. Features Automatic defragmentation Defragmentation of individual files and folders Defragmentation of locked system files Defragmentation of NTFS metafiles (including the MFT) and streams Exclusion of files by path, size and number of fragments Optimization of disks Disk processing time limit Defragmentation of disks having a certain fragmentation level Automatic hibernation or shutdown after job completion Multilingual graphical interface (over 60 languages available) One-click defragmentation via Windows Explorer's context menu Command line interface Portable edition Full support of 64-bit editions of Windows See also Comparison of defragmentation software File system fragmentation References External links Free defragmentation software Free software programmed in C Free software programmed in Lua (programming language) Windows-only free software Lua (programming language)-scripted software
23497920
https://en.wikipedia.org/wiki/Arthur%20Humphreys
Arthur Humphreys
Arthur L. C. Humphreys (1917–2003) was a managing director of International Computers Limited and a long-time member of the British computer industry. He joined the British Tabulating Machine Company in 1940, and was involved in the negotiations with Powers-Samas that led to the formation of International Computers and Tabulators in 1959. In 1968, on the formation of ICL, he became its first Managing Director. When Geoff Cross became managing director in 1972, Humphreys was moved to the post of Deputy Chairman, where he remained until his retirement in 1983. External links Oral history interview with Arthur L. C. Humphreys, Charles Babbage Institute, University of Minnesota. Humphreys, a former managing director of International Computers, Limited (ICL), reviews the history of the British computer industry. Topics include: the termination in 1949 of the trade agreement between IBM and the British Tabulating Machine Company, the merger in 1959 of British Tabulating and the Powers-Samas Company into International Computers and Tabulators, Ltd. (ICT), and the merger in 1968 of English Electric Computers Limited and ICT into ICL. Humphreys explains how the last merger was enacted by the government to establish a single national computer company. Humphreys also discusses the strengths and weaknesses of the British computer industry, and compares the management of the British and American computer industries. He mentions the European Economic Community's efforts to establish Unidata, a multinational computer company, and the problems associated with conducting business across Europe's linguistic and cultural boundaries. References International Computers Limited people 1917 births 2003 deaths
16138360
https://en.wikipedia.org/wiki/Symantec%20Endpoint%20Protection
Symantec Endpoint Protection
Symantec Endpoint Protection, developed by Broadcom Inc., is a security software suite that consists of anti-malware, intrusion prevention and firewall features for server and desktop computers. It has the largest market share of any product for endpoint security. Version history The first release of Symantec Endpoint Protection was published in September 2007 and was called version 11.0. Endpoint Protection is the result of a merger of several security software products, including Symantec Antivirus Corporate Edition 10.0, Client Security, Network Access Control, and Sygate Enterprise Edition. Endpoint Protection also included new features. For example, it can block data transfers to unauthorized device types, such as USB flash drives or Bluetooth devices. At the time, Symantec Antivirus Corporate Edition was widely criticized as having become bloated and unwieldy. Endpoint Protection 11.0 was intended to address these criticisms. The disk footprint of Symantec Corporate Edition 10.0 was almost 100 MB, whereas Endpoint Protection's was projected to be 21 MB. In 2009, Symantec introduced a managed service, whereby Symantec staff deploy and manage Symantec Endpoint Protection installations remotely. A Small Business Edition with a faster installation process was released in 2010. In February 2011, Symantec announced version 12.0 of Endpoint Protection. Version 12 incorporated a cloud-based database of malicious files called Symantec Insight. Insight was intended to combat malware that generates mutations of its files to avoid detection by signature-based anti-malware software. In late 2012, Symantec released version 12.1.2, which supports VMware vShield. A cloud version of Endpoint Protection was released in September 2016. This was followed by version 14 that November. Version 14 incorporates machine learning technology to find patterns in digital data that may be indicative of the presence of a cyber-security threat.
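The general idea behind a cloud file-reputation database of the kind Insight represents can be sketched as follows. This is an illustrative toy, not Symantec's implementation: the database contents, verdicts and prevalence figures are invented. The point is that a file's hash is looked up in a shared database, and a file nobody has seen before is itself a signal, since mutated malware produces unique files.

```python
# Toy sketch of hash-based file reputation lookup. The "cloud database"
# here is a local dict with hypothetical entries.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical reputation database: digest -> (prevalence, verdict).
REPUTATION = {
    sha256_of(b"well-known installer"): (5_000_000, "trusted"),
}

def classify(data: bytes) -> str:
    prevalence, verdict = REPUTATION.get(sha256_of(data), (0, None))
    if verdict is not None:
        return verdict
    # Mutated malware yields files nobody else has, so being unknown
    # to the cloud is itself worth flagging.
    return "unproven" if prevalence == 0 else "low-prevalence"

print(classify(b"well-known installer"))   # trusted
print(classify(b"freshly mutated sample"))  # unproven
```

Because the lookup keys on a cryptographic hash of the file contents, even a one-byte mutation produces a completely different digest, which is exactly why prevalence data complements traditional signatures.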
It also incorporates memory exploit mitigation and performance improvements. Features Symantec Endpoint Protection is a security software suite that includes intrusion prevention, firewall, and anti-malware features. According to SC Magazine, Endpoint Protection also has some features typical of data loss prevention software. It is typically installed on a server running Windows, Linux, or macOS. As of 2018, version 14 is the only currently supported release. Endpoint Protection scans computers for security threats. It is used to prevent unapproved programs from running, and to apply firewall policies that block or allow network traffic. It attempts to identify and block malicious traffic in a corporate network or coming from a web browser. It uses aggregate information from users to identify malicious software. As of 2016, Symantec claims to use data from 175 million devices that have installed Endpoint Security in 175 countries. Endpoint Protection has an administrative console that allows the IT department to modify security policies for each department, such as which programs or files to exclude from antivirus scans. It does not manage mobile devices directly, but treats them as peripherals when connected to a computer and protects the computer from any malicious software on the mobile device. Vulnerabilities In early 2012, source code for Symantec Endpoint Protection was stolen and published online. A hacker group called "The Lords of Dharmaraja" claimed credit, alleging the source code was stolen from Indian military intelligence. The Indian government requires vendors to submit the source code of any computer program being sold to the government, to ensure that they are not being used for espionage. In July 2012, an update to Endpoint Protection caused compatibility issues, triggering a Blue Screen of Death on Windows XP machines running certain third-party file system drivers.
In 2014, Offensive Security discovered an exploit in Symantec Endpoint Protection during a penetration test of a financial services organization. The exploit in the Application and Device Control driver allowed a logged-in user to get system access. It was patched that August. In 2019, security researcher Ofir Moskovitch discovered a race condition bug involving two core Symantec Endpoint Protection client components, Client Management and Proactive Threat Protection, which directly results in a protection mechanism failure that can lead to a self-defense bypass, dubbed "SEMZTPTN" (Symantec Endpoint Minimized Timed Protection). Reception According to Gartner, Symantec Endpoint Protection 14 is one of the more comprehensive endpoint security products available and regularly scores well in independent tests. However, a common criticism is that customers are "fatigued" by "near constant changes" in the product and company direction. SC Magazine said Endpoint Protection 14 was the "most comprehensive tool of its type . . . with superb installation and documentation." The review said Endpoint Protection had a "no-brainer setup and administration," but it does have a "wart" in that support fees are "a bit steep." Forrester said version 12.1 was the most complete endpoint security software product on the market, but the different IT security functions of the software were not well integrated. The report speculated that the lack of integration would be addressed in version 14. Network World ranked Symantec Endpoint Protection sixth among endpoint security products, based on data from NSS Labs testing. References External links NortonLifeLock software Security software Antivirus software Firewall software Proprietary software Windows security software MacOS security software Linux security software
23764543
https://en.wikipedia.org/wiki/WANdisco
WANdisco
WANdisco, plc., dually headquartered in Sheffield, England and San Ramon, California in the US, is a public software company specializing in distributed computing. It has development offices in San Ramon, California; Sheffield, England; and Belfast, Northern Ireland. WANdisco is a corporate contributor to Hadoop, Subversion and other open source projects. History The name WANdisco is an acronym for wide area network distributed computing. Initially offering a replication solution for distributed teams using the Concurrent Versions System (CVS), this was expanded to include Apache Subversion with SVN MultiSite Plus in 2006, Git with Git MultiSite in 2013 and Gerrit with Gerrit MultiSite in 2014. In 2012, WANdisco acquired AltoStor, and entered the Big Data market with its Non-Stop Hadoop product. AltoStor's founders, Dr. Konstantin Shvachko and Jagane Sundar, joined WANdisco as part of the acquisition, and helped develop the company's next-generation Hadoop product released in 2015, WANdisco Fusion. Technology WANdisco's Distributed Coordination Engine (DConE) is the shared component of WANdisco's clustering products. The DConE system allows multiple instances of the same application to operate on independent hardware without sharing any resources. All of the application servers are kept in synchronisation by DConE regardless of whether the servers are on the same LAN or globally separated and accessible only over a wide area network (WAN). WANdisco's replication technology was the work of Yeturu Aahlad, who had previously worked for Sun, Netscape and IBM, and was involved in developing the CORBA framework. Aahlad theorized a model for effective active replication over a WAN. In the development of DConE, WANdisco took the Paxos algorithm as a baseline and added innovations relevant to mission-critical, high-transaction-volume distributed environments.
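The single-decree core of Paxos, the baseline DConE builds on, can be sketched briefly. The code below is an illustrative Python reduction of the textbook algorithm, not WANdisco's implementation: a proposer must first win promises from a majority of acceptors, adopt any value an acceptor has already accepted, and only then ask a majority to accept.

```python
# Minimal single-decree Paxos sketch (illustrative only; a production
# engine adds leadership, batching, failure handling, and much more).

class Acceptor:
    def __init__(self):
        self.promised = -1      # highest ballot promised so far
        self.accepted = None    # (ballot, value) or None

    def prepare(self, ballot):
        if ballot > self.promised:
            self.promised = ballot
            return True, self.accepted
        return False, None

    def accept(self, ballot, value):
        if ballot >= self.promised:
            self.promised = ballot
            self.accepted = (ballot, value)
            return True
        return False

def propose(acceptors, ballot, value):
    """Run one Paxos round; return the chosen value, or None on failure."""
    # Phase 1: gather promises from a majority.
    promises = [a.prepare(ballot) for a in acceptors]
    granted = [acc for ok, acc in promises if ok]
    if len(granted) <= len(acceptors) // 2:
        return None
    # Safety rule: if any acceptor already accepted a value, propose
    # the one with the highest ballot instead of our own.
    prior = [acc for acc in granted if acc is not None]
    if prior:
        value = max(prior)[1]
    # Phase 2: ask a majority to accept.
    votes = sum(a.accept(ballot, value) for a in acceptors)
    return value if votes > len(acceptors) // 2 else None

acceptors = [Acceptor() for _ in range(3)]
print(propose(acceptors, 1, "replicate-op"))  # replicate-op
```

The safety rule in phase 1 is what keeps replicas consistent: once a value is chosen, any later proposer learns it and re-proposes it, so all servers agree on the same sequence of operations.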
WANdisco provides replication products for CVS, Apache Subversion, Git, Gerrit, Apache Hadoop, Amazon Web Services, Microsoft Azure and Google Cloud Platform. In addition, the company offers support, consultancy and training services. The company's website lists customers such as ARM, Avaya, Bally Technologies, Barclays, BlackRock, Bosch, Cisco, Dell EMC, Disney, Fujitsu, General Electric, Honda, Juniper Networks and Pitney Bowes. IBM OEM In April 2016, WANdisco announced that IBM had signed a deal to OEM WANdisco Fusion. The deal allows IBM to rebrand Fusion as "IBM Big Replicate" and plays an important role in the IBM Big Data and Cloud Computing strategy, including the movement of data between on-premises software and the Cloud. Blockchain In July 2018, WANdisco announced that it had filed a new patent in Blockchain. The company claims that the patent "enables effective permissioned blockchain transactions with an underlying algorithmic mechanism. This mechanism enables throughput to be achieved that is orders of magnitude higher than public blockchains." Defunct products In 2011 WANdisco announced uberSVN, a deployment of Apache Subversion which included a web-based management console and the ability to add additional application lifecycle management features. The uberSVN download was available through mid-2013. Open source contributions In September 2013 WANdisco announced it is an official sponsor of the UC Berkeley AMPLab, a five-year collaborative effort at the University of California, Berkeley. Hadoop WANdisco has one Apache Hadoop committer on staff: Jagane Sundar. In February 2013 WANdisco released a free distribution of Hadoop containing additional components developed by WANdisco. Subversion WANdisco was involved in the Apache Subversion open source project from 2008 through 2015. They employed several contributors to work on the Subversion project during that time.
Server and client binaries WANdisco provides Subversion binary downloads for Windows, CentOS, Debian, Oracle Linux, RHEL, SUSE Linux, Ubuntu, Mac OS X and Solaris via its website. These binaries use the default package management system for each Linux distribution. Project announcements In December 2010, WANdisco announced its intention to develop some features for the Subversion project, specifically aimed at improving branching and merging functionality. The Apache Foundation and some Subversion developers said the announcement contained unfounded claims and insinuations about community involvement and the lack of development on these features. According to Apache, these features were already being worked on at the time. David Richards from WANdisco clarified this position to the Subversion community and followed up by announcing WANdisco's sponsorship and ongoing support for the work of the Apache Software Foundation. References External links WANdisco web site Software companies of England Software companies based in the San Francisco Bay Area Companies based in San Ramon, California Software companies established in 2005 2005 establishments in California Companies listed on the Alternative Investment Market Companies based in Sheffield Big data companies Cloud computing providers Hadoop 2012 initial public offerings Software companies of the United States
51265085
https://en.wikipedia.org/wiki/PCem
PCem
PCem (short for PC Emulator) is an IBM PC emulator for Windows and Linux that specializes in running old operating systems and software designed for IBM PC compatibles. Originally developed as an IBM PC XT emulator, it later added support for other IBM PC compatible computers as well. A fork known as 86Box is also available, which includes a number of added features, such as support for SCSI and additional boards. On 14 June 2021, lead developer Sarah Walker announced her departure from the project. A new maintainer, Michael Manley, was appointed on 18 December 2021. While there was no maintainer, the project's forums were closed. Features Hardware PCem is capable of emulating Intel processors (and their respective clones from AMD, IDT and Cyrix) from the Intel 8088 through the Pentium Tillamook MMX/Mobile MMX processors of 1997 to 1999. A recompiler was added in v10.1; it is mandatory for P5 Pentium and Cyrix processors and optional for i486 and IDT WinChip processors. A rather fast host processor (such as an Intel Core i5 at 4 GHz) is nonetheless needed for full emulation speed. However, PCem's developer has noted that the recompiler is not yet fast enough to emulate the Intel Pentium Pro/Pentium II processors. PCem emulates various IBM PC compatible systems/motherboards from 1981 until 1996; these include almost all IBM PC models (including the IBM PS/1 model 2121 and the IBM PS/2 model 2011), some American Megatrends BIOS clones (from 1989 until 1994), Award BIOS systems (Award 286 clone, Award SiS 496/497 and Award 430VX PCI), and Intel Premiere/PCI and Intel Advanced/EV motherboards. However, unofficial builds of PCem (PCem-X and PCem-unofficial) also support IBM PC compatible systems/motherboards (from 1996 until 2000) that take Intel Pentium Pro/Pentium II processors. PCem simulates the BIOS cache, which relies on the processor rather than on system memory.
PCem can emulate different graphics modes, including text mode, Hercules, CGA (including some composite modes and the 160 × 100 × 16 tweaked modes), Tandy, EGA, VGA (including Mode X and other tweaks) and VESA, as well as various video APIs such as DirectX and 3Dfx's Glide. PCem can also emulate various video cards such as the ATI Mach64 GX and the S3 Trio32/64/Virge series. PCem also emulates some sound cards, such as the AdLib, Sound Blaster (including the Game Blaster), Sound Blaster Pro, Sound Blaster 16, Sound Blaster AWE32, Gravis UltraSound, Innovation SSI-2001, Aztech Sound Galaxy Pro 16, Windows Sound System, Ensoniq AudioPCI 64V/ES1371, and Sound Blaster PCI 128. Voodoo cards are also emulated since PCem v10 and PCem v12, which added support for the Voodoo 2 and various optimizations. However, there are some shortcomings in the Voodoo emulation, such as the lack of mip-mapping, slightly wobbling triangles, the lack of speed limiting, and wrong refresh rates at almost every resolution (except 640 × 480@60 Hz). As of PCem v11, a separate recompiler has been added for Voodoo emulation, making it faster to emulate the Voodoo graphics card. An unofficial build of PCem allows the use of SLiRP/WinPcap as a networking interface, plus emulated NE2000 and Realtek RTL8029AS Ethernet cards. However, starting with PCem v13, emulation of the NE2000 was officially added. Operating system support Similar to Virtual PC, Bochs and QEMU, PCem runs almost all versions of Microsoft Windows up to Windows Vista (including Service Pack 2); MS-DOS, FreeDOS and CP/M-86 are also supported. Earlier versions of OS/2 require the hard drive to be formatted prior to installation, while OS/2 Warp 3 through Warp 4.5 require an unaccelerated video card to run. Other operating systems are also supported on PCem, such as versions of Linux that support the Pentium processor, BSD derivatives (e.g. FreeBSD), and BeOS 5, which only works on the Award SiS 497 motherboard.
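The interpreter half of an emulator like PCem is, at its core, a fetch-decode-execute loop over guest instructions (the recompiler instead translates them to host code). The toy below illustrates that structure only; its opcodes and register set are invented for clarity and are not real 8088 encodings or anything from PCem's source.

```python
# Toy fetch-decode-execute loop, showing the basic structure of an
# interpreting CPU emulator. Opcodes here are hypothetical, not x86.

def run(program):
    regs = {"ax": 0, "bx": 0}
    ip = 0                                  # instruction pointer
    while ip < len(program):
        op = program[ip]; ip += 1           # fetch
        if op == 0x01:                      # LOAD ax, imm
            regs["ax"] = program[ip]; ip += 1
        elif op == 0x02:                    # LOAD bx, imm
            regs["bx"] = program[ip]; ip += 1
        elif op == 0x03:                    # ADD ax, bx (16-bit wraparound)
            regs["ax"] = (regs["ax"] + regs["bx"]) & 0xFFFF
        elif op == 0xFF:                    # HALT
            break
        else:
            raise ValueError(f"illegal opcode {op:#x}")
    return regs

print(run([0x01, 40, 0x02, 2, 0x03, 0xFF]))  # {'ax': 42, 'bx': 2}
```

A real 8088-to-Pentium emulator does the same thing with hundreds of opcodes, flags, segmented memory and device I/O, which is why a dynamic recompiler becomes necessary for the faster guest CPUs.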
Version history Versions of PCem from v0.5 until v8 have been removed from the official webpage, due to the use of the MAME OPL2/OPL3 emulation code from when it was not yet licensed under a GPL-compatible license. See also DOSBox DOSEMU QEMU Bochs Parallels VirtualBox VMware Fusion VMware Workstation Windows Virtual PC References 2007 software DOS emulators DOS on IBM PC compatibles Free emulation software Free software programmed in C Linux emulation software Windows emulation software X86 emulators
11898409
https://en.wikipedia.org/wiki/Pilot%20%28operating%20system%29
Pilot (operating system)
Pilot is a single-user, multitasking operating system designed by Xerox PARC in early 1977. Pilot was written in the Mesa programming language, totalling about 24,000 lines of code. Overview Pilot was designed as a single-user system in a highly networked environment of other Pilot systems, with interfaces designed for inter-process communication (IPC) across the network via the Pilot stream interface. Pilot combined virtual memory and file storage into one subsystem, and used the manager/kernel architecture for managing the system and its resources. Its designers considered a non-preemptive multitasking model, but later chose a preemptive (run until blocked) system based on monitors. Pilot included a debugger, Co-Pilot, that could debug a frozen snapshot of the operating system, written to disk. A typical Pilot workstation ran three operating systems at once on three different disk volumes: Co-Co-Pilot (a backup debugger in case the main operating system crashed), Co-Pilot (the main operating system, running under Co-Co-Pilot and used to compile and bind programs) and an inferior copy of Pilot running in a third disk volume, which could be booted to run test programs (that might crash the main development environment). The debugger was written to read and write variables for a program stored on a separate disk volume. This architecture was unique because it allowed the developer to single-step even operating system code with semaphore locks, stored on an inferior disk volume. However, as the memory and source code of the D-series Xerox processors grew, the time to checkpoint and restore the operating system (known as a "world swap") grew very long. It could take 60–120 seconds to run just one line of code in the inferior operating system environment. Eventually, a co-resident debugger was developed to take the place of Co-Pilot. Pilot was used as the operating system for the Xerox Star workstation. 
See also Timeline of operating systems References Further reading Horsley, T.R., and Lynch, W.C. Pilot: A software engineering case history. In Proc. 4th Int. Conf. Software Engineering, Munich, Germany, Sept. 1979, pp. 94-99. External links Pilot: An Operating System for a Personal Computer Computer-related introductions in 1981 History of human–computer interaction Proprietary operating systems Window-based operating systems Pilot 1981 software
3686796
https://en.wikipedia.org/wiki/Capitalization%20of%20Internet
Capitalization of Internet
Conventions for the capitalization of Internet (versus internet) when referring to the global system of interconnected computer networks have varied over time, and vary among publishers, authors, and regional preferences. Increasingly, the proper noun sense of the word takes a lowercase i, in orthographic parallel with similar examples of how the proper names for the Sun (the sun), the Moon (the moon), the Universe (the universe), and the World (the world) are variably capitalized. The term Internet was originally coined as a shorthand for internetwork in the first specification of the Transmission Control Program, by Vint Cerf, Yogen Dalal, and Carl Sunshine in 1974. Because of the widespread deployment of the Internet protocol suite in the 1980s by educational and commercial networks beyond the ARPANET, the core network became increasingly known as the Internet, treated as a proper noun. The Oxford English Dictionary says that the global network is usually "the internet", but most of the American historical sources it cites use the capitalized form. The lowercase spelling internet has become common, as the word almost always refers to the global network; the generic sense of the word has become rare in non-technical writing. As a result, The Chicago Manual of Style and the Associated Press (AP) both revised their formerly capitalized stylization of the word to lowercase internet in 2016. The New York Times, which followed suit in adopting the lowercase style, said that such a change is common practice when "newly coined or unfamiliar terms" become part of the lexicon. The Internet versus generic internets The Internet standards community historically differentiated between an internet, as a short form of an internetwork, and the Internet: treating the latter as a proper noun with a capital letter, and the former as a common noun with a lower-case first letter. An internet is any set of interconnected Internet Protocol (IP) networks. 
The distinction is evident in Request for Comments documents from the early 1980s, when the transition from the ARPANET, funded by the U.S. Department of Defense, to the Internet, with broad commercial support, was in progress, although it was not applied with complete uniformity. Another example from that period is IBM's TCP/IP Tutorial and Technical Overview from 1989, which stated that: In the Request for Comments documents that define the evolving Internet Protocol standards, the term was introduced as a noun adjunct, apparently a shortening of "internetworking" and is mostly used in this way. As the impetus behind IP grew, it became more common to regard the results of internetworking as entities of their own, and internet became a noun, used both in a generic sense (any collection of computer networks connected through internetworking) and in a specific sense (the collection of computer networks that internetworked with ARPANET, and later NSFNET, using the IP standards, and that grew into the connectivity service we know today). In its generic sense, "internet" is a common noun, a synonym for internetwork; therefore, it has a plural form (first appearing in the RFC series RFC 870, RFC 871 and RFC 872) and is not capitalized. In a 1991 court case, Judge Jon O. Newman used it as a mass noun: "Morris released the worm into INTERNET, which is a group of national networks that connect university, governmental, and military computers around the country." Argument for common noun usage In 2002, a New York Times column said that Internet has been changing from a proper noun to a generic term. Words for new technologies, such as phonograph in the 19th century, are sometimes capitalized at first, later becoming uncapitalized. In 1999, another column said that Internet might, like some other commonly used proper nouns, lose its capital letter. Capitalization of the word as an adjective (specifically, a noun adjunct) also varies. 
Some guides specify that the word should be capitalized as a noun but not capitalized as an adjective, e.g., "internet resources." Usage Increasingly, organizations that formerly capitalized Internet have switched to the lowercase form, whether to minimize distraction (The New York Times) or to reflect growing trends as the term became generic (Associated Press Stylebook). According to Oxford Dictionaries Online, in 2016 Internet remained more usual in the US, while internet had become predominant in the UK. Organizations and style guides that capitalize Internet include the Modern Language Association. Organizations and style guides that use lowercase internet include Apple, Microsoft, Google, Wired News (since 2004), the United States Government Publishing Office, CNN (since 2010), the Associated Press (since 2016), The New York Times and Wall Street Journal (both since 2016), The Chicago Manual of Style (since 2017), APA style (since 2019), The Economist, the Financial Times, The Times, The Guardian, The Observer, The Sydney Morning Herald, the BBC, BuzzFeed and Vox Media. References External links Internet, Web, and Other Post-Watergate Concerns, The Chicago Manual of Style Internet terminology Capitalization Linguistic controversies English usage controversies
93483
https://en.wikipedia.org/wiki/Web%20service
Web service
The term Web service (WS) is either: a service offered by an electronic device to another electronic device, communicating with each other via the World Wide Web, or a server running on a computer device, listening for requests at a particular port over a network, serving web documents (HTML, JSON, XML, images). In a Web service, a Web technology such as HTTP is used for transferring machine-readable file formats such as XML and JSON. In practice, a web service commonly provides an object-oriented Web-based interface to a database server, utilized for example by another Web server, or by a mobile app, that provides a user interface to the end-user. Many organizations that provide data in formatted HTML pages will also provide that data on their server as XML or JSON, often through a Web service to allow syndication, for example, Wikipedia's Export. Another application offered to the end-user may be a mashup, where a Web server consumes several Web services at different machines and compiles the content into one user interface. Web services (generic) Asynchronous JavaScript and XML Asynchronous JavaScript and XML (AJAX) is a dominant technology for Web services. Developing from the combination of HTTP servers, JavaScript clients and Plain Old XML (as distinct from SOAP and W3C Web Services), it is now frequently used with JSON as well as, or instead of, XML. REST Representational State Transfer (REST) is an architecture for well-behaved Web services that can function at Internet scale. In a 2004 document, the W3C identified REST as a key distinguishing feature of Web services. Web services that use markup languages There are a number of Web services that use markup languages: JSON-RPC. 
JSON-WSP Representational state transfer (REST) versus remote procedure call (RPC) Web Services Conversation Language (WSCL) Web Services Description Language (WSDL), developed by the W3C Web Services Flow Language (WSFL), superseded by BPEL Web template WS-MetadataExchange XML Interface for Network Services (XINS), provides a POX-style web service specification format Web API A Web API is a development in Web services where emphasis has been moving to simpler representational state transfer (REST) based communications. RESTful APIs do not require XML-based Web service protocols (SOAP and WSDL) to support their interfaces. W3C Web services In relation to W3C Web services, the W3C defined a Web service as: W3C Web Services may use SOAP over HTTP protocol, allowing less costly (more efficient) interactions over the Internet than via proprietary solutions like EDI/B2B. Besides SOAP over HTTP, Web services can also be implemented on other reliable transport mechanisms like FTP. In a 2002 document, the Web Services Architecture Working Group defined a Web services architecture, requiring a standardized implementation of a "Web service." Explanation The term "Web service" describes a standardized way of integrating Web-based applications using the XML, SOAP, WSDL and UDDI open standards over an Internet Protocol backbone. XML is the data format used to contain the data and provide metadata around it, SOAP is used to transfer the data, WSDL is used for describing the services available and UDDI lists what services are available. A Web service is a method of communication between two electronic devices over a network. It is a software function provided at a network address over the Web with the service always-on as in the concept of utility computing. Many organizations use multiple software systems for management. 
Different software systems often need to exchange data with each other, and a Web service is a method of communication that allows two software systems to exchange this data over the Internet. The software system that requests data is called a service requester, whereas the software system that would process the request and provide the data is called a service provider. Different software may use different programming languages, and hence there is a need for a method of data exchange that doesn't depend upon a particular programming language. Most types of software can, however, interpret XML tags. Thus, Web services can use XML files for data exchange. Rules for communication between different systems need to be defined, such as: How one system can request data from another system. Which specific parameters are needed in the data request. What would be the structure of the data produced. (Normally, data is exchanged in XML files, and the structure of the XML file is validated against a .xsd file.) What error messages to display when a certain rule for communication is not observed, to make troubleshooting easier. All of these rules for communication are defined in a file called WSDL (Web Services Description Language), which has a .wsdl extension. (Proposals for Autonomous Web Services (AWS) seek to develop more flexible Web services that do not rely on strict rules.) A directory called UDDI (Universal Description, Discovery, and Integration) defines which software system should be contacted for which type of data. So when one software system needs one particular report/data, it would go to the UDDI and find out which other systems it can contact for receiving that data. Once the software system finds out which other systems it should contact, it would then contact that system using a special protocol called SOAP (Simple Object Access Protocol). 
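The request/validate/respond cycle described above can be sketched with the standard library alone. The element names, the required-parameter check, and the in-process "provider" below are invented for illustration; a real service would validate the request against the WSDL/XSD rather than a hand-written check, and the two sides would live on separate systems with a transport such as SOAP over HTTP between them:

```python
# Sketch of a requester/provider XML exchange (hypothetical element names).
import xml.etree.ElementTree as ET

def build_request(report_type, year):
    """Service requester: serialize the request parameters as XML."""
    req = ET.Element("request")
    ET.SubElement(req, "reportType").text = report_type
    ET.SubElement(req, "year").text = str(year)
    return ET.tostring(req, encoding="unicode")

def handle_request(xml_text):
    """Service provider: check the agreed rules, then answer in XML."""
    req = ET.fromstring(xml_text)
    missing = [p for p in ("reportType", "year") if req.find(p) is None]
    if missing:  # the error-message rule, normally defined in the WSDL
        return "<error>missing: " + ", ".join(missing) + "</error>"
    resp = ET.Element("response")
    ET.SubElement(resp, "status").text = "ok"
    ET.SubElement(resp, "rows").text = "3"
    return ET.tostring(resp, encoding="unicode")

reply = ET.fromstring(handle_request(build_request("sales", 2007)))
print(reply.find("status").text)  # -> ok
```

The point of the sketch is that both sides agree only on the message format: either function could be rewritten in another language without affecting the other, which is the language independence the passage describes.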
The service provider system would first validate the data request by referring to the WSDL file, and then process the request and send the data under the SOAP protocol. Automated design methods Automated tools can aid in the creation of a Web service. For services using WSDL, it is possible to either automatically generate WSDL for existing classes (a bottom-up model) or to generate a class skeleton given existing WSDL (a top-down model). A developer using a bottom-up model writes implementing classes first (in some programming language) and then uses a WSDL generating tool to expose methods from these classes as a Web service. This is simpler to develop but may be harder to maintain if the original classes are subject to frequent change. A developer using a top-down model writes the WSDL document first and then uses a code generating tool to produce the class skeleton, to be completed as necessary. This model is generally considered more difficult but can produce cleaner designs and is generally more resistant to change. As long as the message formats between the sender and receiver do not change, changes in the sender and receiver themselves do not affect the Web service. The technique is also referred to as contract first since the WSDL (or contract between sender and receiver) is the starting point. A developer using a Subset WSDL (SWSDL) (i.e. a WSDL containing a subset of the operations in the original WSDL) can perform Web service testing and top-down development. Criticism Critics of non-RESTful Web services often complain that they are too complex and based upon large software vendors or integrators, rather than typical open source implementations. There are also concerns about performance due to Web services' use of XML as a message format and SOAP/HTTP in enveloping and transporting. Regression testing of Web services Functional and non-functional testing of Web services is done with the help of WSDL parsing. 
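The bottom-up model described above can be illustrated with a toy stand-in for a WSDL generating tool: the implementing class is written first, and a description of its operations is then derived mechanically from its public methods. The class, its methods, and the dictionary description format are all invented for this sketch; a real tool would emit WSDL XML:

```python
# Toy bottom-up model: implementing class first, description derived from it.
import inspect

class StockQuoteService:
    """Hypothetical implementing class, written before any service contract."""
    def get_price(self, symbol: str) -> float:
        return 42.0  # placeholder lookup

    def list_symbols(self) -> list:
        return ["IBM", "MSFT"]

def describe_service(cls):
    """Stand-in for a WSDL generator: map each public operation to its
    parameter names, by introspecting the class."""
    ops = {}
    for name, member in inspect.getmembers(cls, inspect.isfunction):
        if name.startswith("_"):
            continue
        params = [p for p in inspect.signature(member).parameters if p != "self"]
        ops[name] = params
    return ops

print(describe_service(StockQuoteService))
# {'get_price': ['symbol'], 'list_symbols': []}
```

This also shows the maintenance hazard the passage mentions: renaming a method or parameter silently changes the derived contract, which is why frequent changes to the classes make the bottom-up model harder to maintain.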
Regression testing is performed by identifying the changes made to upgrade software. Web service regression testing needs can be categorized in three different ways, namely, changes in WSDL, changes in the code, and selective re-testing of operations. We can capture the above three needs in three intermediate forms of Subset WSDL, namely, Difference WSDL (DWSDL), Unit WSDL (UWSDL), and Reduced WSDL (RWSDL), respectively. These three Subset WSDLs are then combined to form Combined WSDL (CWSDL) that is further used for regression testing of the Web service. This will help in Automated Web Service Change Management (AWSCM), by performing the selection of the relevant test cases to construct a reduced test suite from the old test suite. Web services testing can also be automated using several test automation tools like SOAP UI, Oracle Application Testing Suite (OATS), Unified Functional Testing, Selenium, etc. Web service change management Work in this area relates to the capture and visualization of changes made to a Web service. Visualization and computation of changes can be done in the form of intermediate artifacts (Subset WSDL). The insight on the computation of change impact is helpful in testing, in top-down development and in reducing regression testing. AWSCM is a tool that can identify subset operations in a WSDL file to construct a subset WSDL. See also List of Web service frameworks List of Web service protocols List of Web service specifications Middleware Service-oriented architecture (SOA) Web Map Service Web API Notes References External links Messaging Design Pattern documentation at SOA Patterns The Web Services Activity page at W3C Web Services Architecture, the W3C Working Group Note (11 February 2004) Investigating Web Services on the World Wide Web, the analysis presented at the WWW2008 conference Guide to Secure Web Services (SP 800-95) at NIST Web services
59828688
https://en.wikipedia.org/wiki/Leisure%20Suit%20Larry%3A%20Wet%20Dreams%20Don%27t%20Dry
Leisure Suit Larry: Wet Dreams Don't Dry
Leisure Suit Larry: Wet Dreams Don't Dry is an adventure video game developed by German studio CrazyBunch and published by Assemble Entertainment for Microsoft Windows, macOS, Nintendo Switch and PlayStation 4. The game is set during the 21st century and follows Larry as he attempts to navigate the world of online dating in order to meet up with his latest dream girl. An Xbox One version was released on 15 September 2020. A sequel, Leisure Suit Larry: Wet Dreams Dry Twice, was released on October 23, 2020 for Microsoft Windows and macOS, with releases for Nintendo Switch, PlayStation 4 and Xbox One also planned for 2021. Plot The game begins with Larry waking up in a dark room, unaware of where he is or what is happening. He leaves the room via an elevator that places him in front of Lefty's, where he realizes that the landscape has dramatically changed. Inside the bar Larry talks to Lefty, who tells him that he has been missing for about thirty years and that much has changed in his absence, even as Larry has seemingly not changed at all, with the exception of him being thinner. Briefly taken aback, Larry nevertheless resolves himself to chasing women. Soon after, Larry discovers a Pi phone in Lefty's bathroom, which tells him that it is an experimental prototype and that it must be returned to Bill "BJ" Jobs at Prune headquarters. In doing so he meets and attempts to woo Faith, BJ's beautiful assistant, who states that she only dates men with a perfect score on the dating app Timber. Gaining a new Pi phone as a reward for returning the prototype, Larry sets about meeting various people via the Timber app in search of a perfect score. This task leads him to various people such as a lawyer seeking access to Prune's files and eventually results in Larry following Faith and BJ to Cancún, where he breaks into BJ's mansion. 
He then discovers that Faith is the true genius behind Prune and that she hired BJ to serve as its male figurehead, as she states that the business would not have thrived if it was known a woman was behind everything. The game ends with Larry escaping capture and accidentally blowing up the mansion, resulting in Faith getting shipped out to sea. Prior to the close of the game, Larry receives more matches on Timber. Development The game is not a parody of any known title; it is commonly assumed that the game's name is a parody of the 2014 adventure game D4: Dark Dreams Don't Die, but the developers did not confirm this when asked. Al Lowe, creator of the original Leisure Suit Larry games, was not involved with the game and prior to its release stated that he was unimpressed with the game's title. In anticipation of the game's release, CrazyBunch released a three-part documentary about the making of Wet Dreams Don't Dry to YouTube. Reception Leisure Suit Larry: Wet Dreams Don't Dry received mixed reviews. The PC version has a score of 71% on Metacritic based on 16 reviews, while the Switch version has a score of 50% based on 8 reviews, indicating "mixed or average reviews" for both. The game received praise from outlets such as Adventure Gamers and Destructoid, both of which felt that the game was an enjoyable addition to the Leisure Suit Larry series. ScreenRant was generally favorable towards the game, but commented that the game's puzzles could occasionally confuse players. Player Attack and Common Sense Media were both critical of Wet Dreams Don't Dry, with the former criticizing the game's story and gameplay. References External links Point-and-click adventure games Leisure Suit Larry games MacOS games Nintendo Switch games PlayStation 4 games Video games developed in Germany Windows games 2018 video games Xbox One games Single-player video games
60321809
https://en.wikipedia.org/wiki/Mahara%20%28software%29
Mahara (software)
Mahara is a free and open-source web-based electronic portfolio (eportfolio) management system written in PHP and distributed under the GNU General Public License. The name comes from the Māori word meaning "to think about or consider". History Mahara began in 2006 as a collaboration between Massey University, Auckland University of Technology, the Open Polytechnic of New Zealand and Victoria University of Wellington, funded by the New Zealand Tertiary Education Commission. Mahara was initially developed by Catalyst IT Limited, a New Zealand open-source software company, and first released in April 2008. Development of Mahara has since expanded to include a community of contributors, including the New Zealand Ministry of Education. The software was designed to be an open-source electronic portfolio platform to support the student learning and personal learning environment goals of educational institutions. Mahara allows students to select their own work and prepare an online portfolio, to both share in a university classroom context and show to future employers. Language support Mahara supports translation into different languages using language packs, and contributions of complete or near-complete coverage have been provided for the Japanese, Basque, French, Māori, Slovenian, German, Czech, and Danish languages. References External links Cross-platform software Free educational software Free learning support software Free software programmed in PHP Free content management systems Classroom management software
13757963
https://en.wikipedia.org/wiki/Troy%20%28novel%29
Troy (novel)
Troy is a young adult novel by Adèle Geras, published in 2000. It is based on events in The Iliad, incorporating original stories set in the heart of the city towards the end of the Trojan War. The novel was shortlisted for the Carnegie Medal, the Whitbread Award and the Guardian Award. Plot summary The story starts ten years into the Trojan War. Xanthe and Marpessa are sisters living in Troy, which is besieged by the Greeks. After Paris swept Helen away from her husband in Sparta to his home in Troy, Menelaus started a war to win her back. The deities have already decided the outcome of the war. The goddess Aphrodite, who started it all when she promised Paris the love of the most beautiful woman in the world, is tired of the war. Therefore, she turns her attention to the two sisters. When her son Eros, the god of love, aims his love arrow, neither of the sisters can escape its power. They both fall in love with Alastor, a handsome fallen soldier with power. The story is filled with encounters with Greek deities, whom only Marpessa can remember. See also Troy (film) References External links 2000 British novels British fantasy novels Young adult fantasy novels British young adult novels Novels set during the Trojan War Novels set in ancient Troy Novels based on the Iliad
48546809
https://en.wikipedia.org/wiki/Windows%2010%20version%20history
Windows 10 version history
Windows 10 is a series of operating systems developed by Microsoft. Microsoft described Windows 10 as an "operating system as a service" that would receive ongoing updates to its features and functionality, augmented with the ability for enterprise environments to receive non-critical updates at a slower pace or use long-term support milestones that will only receive critical updates, such as security patches, over their five-year lifespan of mainstream support. It was first released in July 2015. Channels Windows 10 Insider Preview builds are delivered to Insiders in three different channels (previously "rings"). Insiders in the Dev Channel (previously Fast Ring) receive updates prior to those in the Beta Channel (previously Slow Ring), but might experience more bugs and other issues. Insiders in the Release Preview Channel (previously Release Preview Ring) do not receive updates until the version is almost available to the public, but are comparatively more stable. PC version history Mainstream builds of Windows 10 are labeled "YYMM", with YY representing the two-digit year and MM representing the month of planned release (for example, version 1507 refers to builds which initially released in July 2015). Starting with version 20H2, Windows 10 release nomenclature changed from the year and month pattern to a year and half-year pattern (YYH1, YYH2). Version 1507 Version 1511 (November Update) The second stable build of Windows 10 is version 1511 (build number 10586), known as the November Update. It was codenamed "Threshold 2" (TH2) during development. This version was distributed via Windows Update on November 12, 2015. It contains various improvements to the operating system, its user interface, and bundled services, as well as the introduction of Skype-based universal messaging apps, and the Windows Store for Business and Windows Update for Business features. On November 21, 2015, the November Update was temporarily pulled from public distribution. 
The upgrade was re-instated on November 24, 2015, with Microsoft stating that the removal was due to a bug that caused privacy and data collection settings to be reset to defaults when installing the upgrade. Version 1607 (Anniversary Update) The third stable build of Windows 10 is called version 1607, known as the Anniversary Update. It was codenamed "Redstone 1" (RS1) during development. This version was released on August 2, 2016, a little over one year after the first stable release of Windows 10. The Anniversary Update was originally the first of two feature updates planned for 2016; the second was moved into 2017 so that it would be released in concert with that year's wave of Microsoft first-party devices. The Anniversary Update introduces new features such as the Windows Ink platform, which eases the ability to add stylus input support to Universal Windows Platform apps and provides a new "Ink Workspace" area with links to pen-oriented apps and features, enhancements to Cortana's proactive functionality, a dark user interface theme mode, a new version of Skype designed to work with the Universal Windows Platform, improvements to the Universal Windows Platform intended for video games, and offline scanning using Windows Defender. The Anniversary Update also supports Windows Subsystem for Linux, a new component that provides an environment for running Linux-compatible binary software in an Ubuntu-based user mode environment. On new installations of Windows 10 on systems with Secure Boot enabled, all kernel-mode drivers issued after July 29, 2015 must be digitally signed with an Extended Validation Certificate issued by Microsoft. This version is the basis for "LTSB 2016", the first upgrade to the LTSB since Windows 10's release. The first LTSB release, based on RTM (version 1507), has been retroactively named "LTSB 2015". 
Version 1703 (Creators Update) The fourth stable build of Windows 10 is called version 1703, known as the Creators Update. It was codenamed "Redstone 2" (RS2) during development. This version was announced on October 26, 2016, and was released for general availability on April 11, 2017, and for manual installation via the Windows 10 Upgrade Assistant and Media Creation Tool on April 5, 2017. This update primarily focuses on content creation, productivity, and gaming features—with a particular focus on virtual and augmented reality (including HoloLens and virtual reality headsets) and on aiding the generation of three-dimensional content. It supports a new virtual reality workspace designed for use with headsets; Microsoft announced that several OEMs planned to release VR headsets designed for use with the Creators Update. Controls for the Game Bar and Game DVR feature have moved to the Settings app, while a new "Game Mode" option allows resources to be prioritized towards games. Integration with Microsoft acquisition Mixer (formerly Beam) was added for live streaming. The themes manager moved to the Settings app, and custom accent colors are now possible. The new app Paint 3D allows users to produce artwork using 3D models; the app is designed to make 3D creation more accessible to mainstream users. Windows 10's privacy settings have more detailed explanations of data that the operating system may collect. Additionally, the "enhanced" level of telemetry collection was removed. Windows Update notifications may now be "snoozed" for a period of time, the "active hours" during which Windows will not try to install updates may now extend up to 18 hours in length, and updates may be paused for up to seven days. Windows Defender has been replaced by the universal app Windows Defender Security Center. Devices may optionally be configured to prevent use of software from outside of Microsoft Store, or warn before installation of apps from outside of Microsoft Store. 
"Dynamic Lock" allows a device to automatically lock if it is outside of the proximity of a designated Bluetooth device, such as a smartphone. A "Night Light" feature was added, which allows the user to change the color temperature of the display to the red part of the spectrum at specific times of day (similarly to the third-party software f.lux). Version 1709 (Fall Creators Update) The fifth stable build of Windows 10 is called version 1709, known as the Fall Creators Update. It was codenamed "Redstone 3" (RS3) during development. This version was released on October 17, 2017. Version 1709 introduces a new feature known as "My People", where shortcuts to "important" contacts can be displayed on the taskbar. Notifications involving these contacts appear above their respective pictures, and users can communicate with the contact via either Skype, e-mail, or text messaging (integrating with Android and Windows 10 Mobile devices). Support for additional services, including Xbox, Skype for Business, and third-party integration, was to be added in the future. Files can also be dragged directly to the contact's picture to share them. My People was originally announced for the Creators Update, but was ultimately held over to the next release, and made its first public appearance in Build 16184 in late April 2017. A new "Files-on-Demand" feature for OneDrive serves as a partial replacement for the previous "placeholders" function. It also introduces a new security feature known as "controlled folder access", which can restrict the applications allowed to access specific folders. This feature is designed mainly to defend against file-encrypting ransomware. Version 1803 (April 2018 Update) The sixth stable build of Windows 10 is called version 1803, known as the April 2018 Update. It was codenamed "Redstone 4" (RS4) during development. This version was released as a manual download on April 30, 2018, with a broad rollout on May 8, 2018. 
This update was originally meant to be released on April 10, but was delayed because of a bug which could increase chances of a "Blue Screen of Death" (Stop error). The most significant feature of this build is Timeline, which is displayed within Task View. It allows users to view a list of recently-used documents and websites from supported applications ("activities"). When users consent to Microsoft data collection via Microsoft Graph, activities can also be synchronized from supported Android and iOS devices. Version 1809 (October 2018 Update) The seventh stable build of Windows 10 is called version 1809, known as the October 2018 Update. It was codenamed "Redstone 5" (RS5) during development. This version was released on October 2, 2018. Highlighted features on this build include updates to the clipboard function (including support for clipboard history and syncing with other devices), SwiftKey virtual keyboard, Snip & Sketch, and File Explorer supporting the dark color scheme mode. On October 6, 2018, the build was pulled by Microsoft following isolated reports of the update process deleting files from user directories. It was re-released to Windows Insider channel on October 9, with Microsoft citing a bug in OneDrive's Known Folder Redirection function as the culprit. On November 13, 2018, Microsoft resumed the rollout of 1809 for a small percentage of users. The long term servicing release, Windows 10 Enterprise 2019 LTSC, is based on this version and is equivalent in terms of features. Version 1903 (May 2019 Update) The eighth stable build of Windows 10, version 1903, codenamed "19H1", was released for general availability on May 21, 2019 after being on the Insider Release Preview branch since April 8, 2019. Because of new practices introduced after the problems affecting the 1809 update, Microsoft used an intentionally slower Windows Update rollout process. 
New features in the update include a redesigned search tool (separated from Cortana and oriented towards textual queries), a new "Light" theme (set as default on Windows 10 Home) using a white-colored taskbar with dark icons, the addition of symbols and kaomoji to the emoji input menu, the ability to "pause" system updates, automated "Recommended troubleshooting", integration with Google Chrome on Timeline via an extension, support for SMS-based authentication on accounts linked to Microsoft accounts, and the ability to run Windows desktop applications within the Windows Mixed Reality environment (previously restricted to universal apps and SteamVR only). A new feature on Pro, Education, and Enterprise editions known as Windows Sandbox allows users to run applications within a secured Hyper-V environment. A revamped version of Game Bar was released alongside 1903, which redesigns it into a larger overlay with a performance display, an Xbox friends list and social functionality, and audio and streaming settings.

Version 1909 (November 2019 Update)
The ninth stable build of Windows 10, version 1909, codenamed "19H2", was released to the public on November 12, 2019, after being on the Insider Release Preview branch since August 26, 2019. Unlike previous updates, this one was released as a minor service update without major new features.

Version 2004 (May 2020 Update)
The tenth stable build of Windows 10, version 2004, codenamed "20H1", was released to the public on May 27, 2020, after being on the Insider Release Preview branch since April 16, 2020.
New features included faster and easier access to Bluetooth settings and pairing, improved kaomoji, renamable virtual desktops, DirectX 12 Ultimate, a chat-based UI for Cortana, greater integration with Android phones in the Your Phone app, Windows Subsystem for Linux 2 (WSL 2), which, unlike the original WSL, includes a custom Linux kernel, the ability to use Windows Hello without the need for a password, improved Windows Search with File Explorer integration, a cloud download option to reset Windows, accessibility improvements, and the ability to view disk drive type and discrete graphics card temperatures in Task Manager.

Version 20H2 (October 2020 Update)
The eleventh stable build of Windows 10, version 20H2, was released to the public on October 20, 2020, after being on the Beta Channel since June 16, 2020. New features include new theme-aware tiles in the Start Menu, new features and improvements to Microsoft Edge (such as a price comparison tool, integration for tab switching, and easy access to pinned tabs), a new out-of-box experience with more personalization for the taskbar, notification improvements, improvements to tablet mode, improvements to Modern Device Management, and the move of the System tab in Control Panel to the About page in Settings. This is the first version of Windows 10 to include the new Chromium-based Edge browser by default.

Version 21H1 (May 2021 Update)
The Windows 10 May 2021 Update (codenamed "21H1") is the twelfth update to Windows 10, a cumulative update to the October 2020 Update. It carries the build number 10.0.19043. The first preview was released to Insiders who opted in to the Beta Channel on February 17, 2021. The update began rolling out on May 18, 2021.
Notable changes in the May 2021 Update include:
Added multi-camera support for Windows Hello
New "News and Interests" feature on the taskbar
Performance improvements to Windows Defender Application Guard and the WMI Group Policy Service

Version 21H2 (November 2021 Update)
The Windows 10 November 2021 Update (codenamed "21H2") is the thirteenth and current major update to Windows 10, a cumulative update to the May 2021 Update. It carries the build number 10.0.19044. The first preview was released on July 15, 2021, to Insiders in the Release Preview Channel whose devices failed to meet the minimum system requirements for Windows 11. The update began rolling out on November 16, 2021.

Notable changes in the November 2021 Update include:
Support for Wi-Fi 6E
GPU compute support in the Windows Subsystem for Linux (WSL) and Azure IoT Edge for Linux on Windows (EFLOW) deployments
New simplified passwordless deployment models for Windows Hello for Business
Support for WPA3 Hash-to-Element (H2E) standards

Fast Ring / Dev Channel

Fast Ring
On December 16, 2019, Microsoft announced that Windows Insiders in the Fast Ring would receive builds directly from the RS_PRERELEASE branch, which are not matched to a specific Windows 10 release. The first build released under the new strategy, build 19536, was made available to Insiders on the same day. The MN_RELEASE branch was available from May 13, 2020 to June 17, 2020. The branch was mandatory for Insiders in the Fast Ring.

Dev Channel
As of June 15, 2020, Microsoft has introduced the "channels" model to its Windows Insider Program, succeeding its "ring" model. All future builds starting from build 10.0.20150 would therefore be released to Windows Insiders in the Dev Channel. The FE_RELEASE branch was available from October 29, 2020 to January 6, 2021. The branch was mandatory for Insiders until December 10. Afterward, Insiders could choose to move back to the RS_PRERELEASE branch.
The CO_RELEASE branch was available from April 5 to June 14, 2021. The branch was mandatory for Insiders. As of June 28, 2021, the Dev Channel has transitioned to Windows 11.

Mobile version history

See also
Windows Server 2016 version history
Windows Server 2019 version history
Windows Phone version history
Windows 10 Mobile version history
Xbox OS version history
Windows 11 version history

External links
Windows release health
Flight Hub
https://en.wikipedia.org/wiki/Valentin%20Goranko
Valentin Goranko
Valentin Feodorov Goranko (born 22 September 1959 in Sofia, Bulgaria) is a Bulgarian-Swedish logician, Professor of Logic and Theoretical Philosophy at the Department of Philosophy, Stockholm University.

Education and academic career
Goranko studied mathematics (M.Sc. 1984) and obtained a Ph.D. in Mathematical Logic at the Faculty of Mathematics and Informatics of the Sofia University "St. Kliment Ohridski" in 1988. He has held academic positions at universities in Bulgaria (until 1992), South Africa (1992-2009), Denmark (2009-2014) and Sweden (since 2014, when he joined Stockholm University), and has taught a wide variety of courses in Mathematics, Computer Science, and Logic.

Research fields
Goranko has a broad range of research interests in the theory and applications of logic to artificial intelligence, multi-agent systems, philosophy, computer science, and game theory, in which he has published three books and over 120 research papers and chapters in handbooks and other research collections.

Professional service
President (since 2018) of the Scandinavian Logic Society
Senior member and past president (2016-2020) of the Management Board of the Association for Logic, Language and Information (FoLLI)
Editor-in-chief (Logic) of the FoLLI Publications series on Logic, Language and Information, a sub-series of Springer LNCS
Executive member of the Board of the European Association for Computer Science Logic (EACSL)
Associate Editor of the ACM Transactions on Computational Logic and member of the editorial boards of several other scientific journals

Published books
2015 Logic and Discrete Mathematics: A Concise Introduction
2016 Temporal Logics in Computer Science
2016 Logic as a Tool: A Guide to Formal Logical Reasoning
https://en.wikipedia.org/wiki/Timeline%20of%20Electronic%20Frontier%20Foundation%20actions
Timeline of Electronic Frontier Foundation actions
The Electronic Frontier Foundation (EFF) is an international non-profit advocacy and legal organization based in the United States.

1990–1995
July 10, 1990: EFF is founded and the groundwork is laid for the successful representation of Steve Jackson Games in a federal court case to prosecute the United States Secret Service for unlawfully raiding their offices and seizing computers.
1991: Steve Jackson Games v. United States Secret Service. EFF files in federal court.
1992: EFF gives the first annual Pioneer Awards at the 2nd Computers, Freedom and Privacy Conference in Washington, D.C.
March 12, 1993: Steve Jackson Games, Inc. v. United States Secret Service. Steve Jackson Games wins its case against the United States Secret Service.
1994: Center for Democracy and Technology is formed by Jerry Berman.
1994: Scientology v. the Internet.
1995: Bernstein v. United States. The Ninth Circuit Court of Appeals rules in favor of Daniel J. Bernstein, holding that software source code is speech protected by the First Amendment and that the government's regulations preventing its publication were unconstitutional.
1995: EFF moves to San Francisco.

1995–1999
1995–1996: EFF opens its "Blue Ribbon Campaign" in direct response to the Communications Decency Act.
1996: EFF mounts its "Golden Key Campaign" to back calls for liberalisation of cryptography export laws.
1996: EFF co-founds TRUSTe, the first Privacy Seal company, with CommerceNet, a non-profit industry consortium.
1998: EFF builds Deep Crack, a machine that decrypts a DES-encrypted message after only 56 hours of work, winning RSA Security's DES Challenge II-2.
1999: EFF and Anonymizer launch the Kosovo Privacy Project, an anonymous and secure email and Web surfing service conceived by Alex Fowler and Patrick Ball to protect Kosovars, Serbs, and others reporting on the Kosovo War within the region from reprisal by Serbian officials.

2001–2004
2001: Felten v. RIAA.
EFF supports Edward Felten in suing the RIAA after the Secure Digital Music Initiative (SDMI), the RIAA, and Verance Corporation threaten Felten with legal action under the terms of the Digital Millennium Copyright Act (DMCA) over his plan to present a paper about methods for defeating the SDMI watermarks. The case is dismissed.
April 2002: EFF supports the Chilling Effects Clearinghouse effort to organize a database of IP law abuse and educate potential victims.
November 2002: Universal v. Reimerdes ("2600 magazine case"). EFF loses its appeal before the Second Circuit Court of Appeals, establishing a legal precedent to permit prior restraint. 2600 magazine is restrained from publishing links to the DeCSS code under provisions of the DMCA and declines to appeal to the Supreme Court.
December 2003: RIAA v. Verizon, D.C. Cir. EFF supports Verizon Communications in a successful challenge to a lower court ruling holding that the company must reveal the identity of a Verizon customer accused of copyright infringement using the peer-to-peer file-sharing software Kazaa. The DC Circuit Court of Appeals agrees with Verizon and EFF that the special subpoena provisions in the DMCA apply to potentially infringing material stored on an ISP server, not material stored on an individual's own computer.
2004: DirecTV v. Treworgy, 11th Circuit. EFF helps defend "smart card" technology owner Mike Treworgy after DirecTV sued him based on the fact that he purchased hardware that could be used to intercept the company's satellite TV signals. Treworgy prevails in the 11th Circuit Court of Appeals, which finds that DirecTV cannot sue individuals for "mere possession" of smart-card technology. In separate negotiations with DirecTV, EFF succeeds in getting the company to drop its "guilt-by-purchase" litigation strategy altogether.
February 18, 2004: EFF files an amicus curiae brief in the 1-800 Contacts, Inc. v. WhenU.Com and Vision Direct, Inc.
appeal (see 1-800 Contacts).
April 19, 2004: EFF initiates the Patent Busting Project to challenge "illegitimate patents that suppress non-commercial and small business innovation or limit free expression online".
May 2004: Doe v. Ashcroft. EFF files an amicus brief supporting the ACLU's challenge to the constitutionality of 18 U.S.C. § 2709, which authorizes the FBI to compel the production of subscriber and communications records in the possession of a broad range of ISPs, potentially covering billions of records from tens of thousands of entities. These demands, known as National Security Letters, were issued without judicial oversight of any kind, yet allowed the FBI to obtain a vast amount of constitutionally protected information. In September 2004, Judge Victor Marrero of the Southern District of New York issues a landmark decision striking down the NSL statute and the associated gag provision.
August 2004: Chamberlain v. Skylink. EFF helps defend Skylink, winning a Federal Circuit ruling that puts limits on the controversial "anti-circumvention" provision of the DMCA. Chamberlain, a manufacturer of garage door openers, invoked the provision to stop Skylink from selling a "universal" remote control that works with Chamberlain garage doors. The court rejected Chamberlain's claims, noting that if it adopted the company's interpretation of the DMCA, it would threaten many legitimate uses of software within electronic and computer products — something the law aims to protect.
August 19, 2004: MGM v. Grokster. EFF prevails before the Ninth Circuit Court of Appeals with a decision affirming the "Betamax doctrine" — the rule, following the Supreme Court's 1984 holding, that a company that creates a technology cannot be held liable for copyright violations by users if the technology has substantial legal uses. The Ninth Circuit Court of Appeals rules that neither Grokster nor co-defendant StreamCast was liable for infringements by people using their software to distribute copyrighted works. However, on June 27, 2005, the U.S.
Supreme Court reverses, finding the defendants liable for copyright infringement, though the Court preserved the Betamax doctrine. Grokster eventually settles with MGM and shuts the company down.
October 6, 2004: EFF submits a brief in cooperation with eight other public interest organizations, challenging the FCC's authority to impose the broadcast flag mandate, which was to go into effect in July 2005.
October 15, 2004: EFF successfully represents the non-profit ISP Online Policy Group (OPG) and two Swarthmore College students who published major security flaws in Diebold Election Systems (now Premier Election Solutions) voting machines. From the press release: "Diebold is the first company to be held liable for violating section 512(f) of the DMCA, which makes it unlawful to use DMCA takedown threats when the copyright holder knows that infringement has not actually occurred."
2004: JibJab Media v. Ludlow Music, N.D. Cal. EFF successfully defends JibJab, the creators of a parody flash animation piece using Woody Guthrie's "This Land Is Your Land", uncovering evidence that the classic folk song is in fact already part of the public domain.
November 2004: EFF files a brief opposing the FCC's proposal to expand CALEA to broadband Internet access providers and VoIP systems.
December 2004: EFF starts promoting and supporting Tor, a second-generation onion routing network that allows people to communicate anonymously.

2005–present
June 2005: EFF issues a Legal Guide for Bloggers, designed to be a basic roadmap to the legal issues one may confront as a blogger, to let bloggers know their rights.
October 2005: EFF investigates and documents how the Xerox DocuColor printer's serial number, as well as the date and time of the printout, are encoded in a repeating 15-by-8 dot pattern in the yellow channel on printed pages. EFF is working to reverse engineer additional printers.
(see Printer steganography)
November 2005: EFF files suit against Sony BMG — along with two leading national class action law firms — demanding that the company repair the damage done by Fortium Technologies (then First 4 Internet)'s XCP and SunnComm MediaMax CD-3 software included on over 24 million music CDs (see 2005 Sony BMG CD copy prevention scandal).
January 2006: Hepting v. AT&T. EFF files a class action lawsuit against AT&T alleging that AT&T allowed the NSA to tap the entirety of its clients' Internet and Voice over IP communications.
August 2006: EFF files a complaint with the Federal Trade Commission, accusing AOL of violating the Federal Trade Commission Act and asking the FTC to take action. EFF alleges that AOL engaged in deceptive and unfair trade practices with the disclosure of 20 million search queries of 650,000 anonymized users, intended for research purposes (see AOL for details), and that AOL failed to "implement reasonable and appropriate measures to protect personal consumer information from public disclosure". Beyond that, EFF argues, AOL's failure to "employ proper security measures ... to protect personal consumer information from public disclosure" constitutes an unfair trade practice.
November 2006: EFF files a lawsuit, on behalf of blogger Jeffrey Diehl, against Michael Crook, a webmaster who Diehl claimed had filed false DMCA claims against his and other websites. The case is settled in early 2007.
May 2007: EFF files Sapient v. Geller.
October 2007: The United States Patent and Trademark Office accepts EFF's request for the reexamination of NeoMedia's patent #6,199,048, which EFF alleges threatens mobile information access.
August 2008: United States v. Arnold
September 2008: Jewel v. NSA
October 2008: Hepting v.
AT&T
December 2010: Following the United States diplomatic cables leak, the EFF offers support to WikiLeaks, with John Perry Barlow saying the EFF was "trying to make sure they have plenty of mirror sites, back-ups, we're organising donations for them and generally doing everything we can to see that Wikileaks is not assailable by the methods that have been used against it so far".
June 2012: EFF starts the Defend Innovation patent reform project.
April 2015: EFF starts the Fight215.org website to stop the mass surveillance enabled by the Patriot Act.
November 2020: EFF represents the developer of youtube-dl, a popular video archive utility, filing a DMCA counter-notice with GitHub against the RIAA.
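The 1998 Deep Crack entry above pairs a 56-bit keyspace with a 56-hour search. A back-of-the-envelope check of the implied key-search rate (illustrative arithmetic only, not EFF's published hardware figures):

```python
# Sanity-check the Deep Crack result: DES uses a 56-bit key, so an
# exhaustive search covers 2**56 keys; the timeline says the winning
# key was found after 56 hours of work.

KEYSPACE = 2 ** 56          # total DES keys: 72,057,594,037,927,936
SECONDS = 56 * 3600         # 56 hours = 201,600 seconds

# Rate needed to sweep the entire keyspace in that time:
worst_case_rate = KEYSPACE / SECONDS

# A brute-force search finds the key, on average, after half the keyspace:
average_rate = (KEYSPACE / 2) / SECONDS

print(f"worst case: {worst_case_rate:.2e} keys/s")   # ~3.57e+11
print(f"average:    {average_rate:.2e} keys/s")      # ~1.79e+11
```

So the machine searched on the order of hundreds of billions of keys per second, which is why 56-bit DES was considered broken by 1998.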
https://en.wikipedia.org/wiki/Sarkar%20%28film%20series%29
Sarkar (film series)
Sarkar is a series of Indian political crime thriller films set in the world of Marathi politics and crime, co-produced and directed by Ram Gopal Varma. The first part, Sarkar, was released in 2005, the second part, Sarkar Raj, in 2008, and the third installment, Sarkar 3, in 2017.

Overview

Sarkar (2005)
Subhash Nagre (Amitabh Bachchan), known by his followers as Sarkar, lives in Mumbai. The opening scenes show a rape victim's father (Veerendra Saxena) approaching Sarkar for justice (which the corrupt law and order system has failed to deliver), which Sarkar promptly establishes by having the rapist beaten up by his henchmen. His son Vishnu (Kay Kay Menon) is a sleazy film producer who is more interested in the film actress Sapna (Nisha Kothari) than in his wife Amrita (Rukhsar). Sarkar's other, more upright son, Shankar (Abhishek Bachchan), returns from the United States with his love Pooja (Katrina Kaif) after completing his education there. Pooja's doubts about Sarkar's image cause Shankar, who firmly believes in his father's righteousness, to break up with her later in the film. One day, a Dubai-based don, Rasheed (Zakir Hussain), tries to strike a deal with Sarkar; Sarkar promptly refuses on moral grounds and forbids Rasheed from pursuing it himself. Rasheed tries to end Sarkar's supremacy with the help of Sarkar's former associate Selvar Mani (Kota Srinivasa Rao), Vishram Bhagat and Swami Virendra (Jeeva). Meanwhile, they frame Sarkar by assassinating Motilal Khurana (Anupam Kher), a righteous, non-violent political leader and an outspoken critic of Sarkar. Everyone, including Vishnu, believes that Sarkar is guilty, but Shankar has deep faith in his father. Sarkar is arrested, and Shankar temporarily takes over the position of Sarkar. On learning of a plot to murder his father in prison, he approaches the police commissioner (Anant Jog), who mocks him and his father and refuses to provide protection.
By the time he reaches the prison and appropriate action is taken, the attempt on Sarkar's life has already been made. Sarkar is later acquitted. He remains bedridden as Shankar takes on Sarkar's enemies. Meanwhile, Selvar Mani, Swami, Vishram and Rasheed try to convince Vishnu to murder Sarkar. Vishnu had previously been thrown out of Sarkar's house because he had murdered the actor who was having an affair with Sapna. Vishnu returns home pretending to have repented. When he approaches Sarkar in the dark of the night with the intent of murdering him, Shankar foils his plan and later kills him (establishing justice in the way of his father). Shankar eliminates Rasheed, Vishram and Selvar Mani. He also succeeds in making Swami his puppet. Shankar also realises that Chief Minister Madan Rathore (Deepak Shirke) had a part in the attempt to end Sarkar and his rule. This results in legal action against the Chief Minister. The closing scenes show people approaching Shankar for justice, with his father apparently retired.

Sarkar Raj (2008)
The sequel is chronologically set two years after the first film. Anita Rajan (Aishwarya Rai Bachchan), CEO of an international electrical power firm based in London, holds a meeting with Mike Rajan (Victor Banerjee), her father and boss, and Hassan Qazi, a seemingly shady adviser and facilitator, regarding an ambitious proposal to set up a multimillion-dollar power plant in rural parts of the state of Maharashtra in India. Qazi states that this project will be impossible due to possible political entanglements. When Anita asks him for a solution, Qazi states that enlisting the support of Subhash Nagre (Amitabh Bachchan) (commonly referred to by his title of Sarkar), whom he describes as a criminal in the garb of a popular and influential political leader, might help their cause. The resulting socio-political drama forms the crux of the story.
Sarkar 3 (2017)
In 2009, Ram Gopal Varma stated that he had no plans finalised for the third instalment in the series, and Sarkar 3 was shelved. In 2012, however, it was reported that the sequel would go ahead after all and was in pre-production, with the script being written. The film was expected to go on floors at the end of 2013, primarily with the same cast of Amitabh and Abhishek Bachchan (although Abhishek's character dies at the end of Sarkar Raj), with Aishwarya Rai left out. In August 2016, director Ram Gopal Varma confirmed Sarkar 3, announcing on Twitter that Abhishek and Aishwarya would not be part of the third installment.

Cast and characters

Crew

Production
The first film, Sarkar (2005), is often said to be a remake of The Godfather (1972). Debutant Rajesh Shringarpure's character of Sanjay Somji in its sequel Sarkar Raj (2008) was also reportedly based on Raj Thackeray, the estranged nephew of political leader Bal Thackeray, furthering the general view that the series is based on Bal Thackeray and his family. Varma reportedly even showed Raj Thackeray rushes of the film to allay his fears of being wrongly portrayed.
Release and revenue

Awards and nominations

Sarkar
Filmfare Best Supporting Actor Award for Abhishek Bachchan
Zee Cine Award Best Actor in a Supporting Role - Male for Abhishek Bachchan
IIFA Award for Best Supporting Actor for Abhishek Bachchan

Sarkar Raj
Star Screen Awards (nominated)
Screen Award for Best Film (2009)
Screen Award for Best Director (2009) - Ram Gopal Varma
Screen Award for Best Actor (2009) - Amitabh Bachchan
Screen Award for Best Actor in a Supporting Role (2009) - Abhishek Bachchan
Screen Award for Best Actor in a Negative Role (Male/Female) (2009) - Dilip Prabhawalkar
Screen Award for Best Background Music (2009) - Amar Mohile
Stardust Awards (nominated)
Stardust Award for Star of the Year – Male (2009) - Amitabh Bachchan
Stardust Award for The New Menace (2009) - Rajesh Shringarpure
Stardust Award for Best Director (2009) - Ram Gopal Varma
Stardust Award for Star of the Year – Male (2009) - Abhishek Bachchan
Stardust Award for Star of the Year – Female (2009) - Aishwarya Rai
Filmfare Awards (nominated)
Filmfare Award for Best Actor in a Supporting Role (2009) - Abhishek Bachchan
IIFA Awards (nominated)
IIFA Award for Best Actor in a Supporting Role (2009) - Abhishek Bachchan

Remake
The Telugu remake, titled Rowdy, set against the backdrop of South Indian factionalism, was released on 4 April 2014. Rowdy also received equally positive reviews from critics but was a moderate commercial success, grossing approximately crores in its full run.
https://en.wikipedia.org/wiki/Andrew%20Yao
Andrew Yao
Andrew Chi-Chih Yao (born December 24, 1946) is a Chinese computer scientist and computational theorist. He is currently a Professor and the Dean of the Institute for Interdisciplinary Information Sciences (IIIS) at Tsinghua University. Yao used the minimax theorem to prove what is now known as Yao's principle. Yao was a naturalized U.S. citizen and worked for many years in the United States. In 2015, together with Yang Chen-Ning, he renounced his U.S. citizenship and became an academician of the Chinese Academy of Sciences.

Early life
Yao was born in Shanghai, China. He completed his undergraduate education in physics at the National Taiwan University, before completing a Doctor of Philosophy in physics at Harvard University in 1972 and a second PhD, in computer science, from the University of Illinois at Urbana–Champaign in 1975.

Academic career
Yao was an assistant professor at MIT (1975–1976), assistant professor at Stanford University (1976–1981), and professor at the University of California, Berkeley (1981–1982). From 1982 to 1986, he was a full professor at Stanford University. From 1986 to 2004, Yao was the William and Edna Macaleer Professor of Engineering and Applied Science at Princeton University, where he continued to work on algorithms and complexity. In 2004, Yao became a Professor of the Center for Advanced Study, Tsinghua University (CASTU) and the Director of the Institute for Theoretical Computer Science (ITCS), Tsinghua University in Beijing. Since 2010, he has served as the Dean of the Institute for Interdisciplinary Information Sciences (IIIS) at Tsinghua University. In 2010, he initiated the Conference on Innovations in Theoretical Computer Science (ITCS). Yao is also a Distinguished Professor-at-Large at the Chinese University of Hong Kong.

Awards
In 1996, Yao was awarded the Knuth Prize.
Yao also received the Turing Award in 2000, one of the most prestigious awards in computer science, "in recognition of his fundamental contributions to the theory of computation, including the complexity-based theory of pseudorandom number generation, cryptography, and communication complexity". In 2021, Yao received the Kyoto Prize in Advanced Technology. Yao is a member of the U.S. National Academy of Sciences, a fellow of the American Academy of Arts and Sciences, a fellow of the American Association for the Advancement of Science, a fellow of the Association for Computing Machinery, and an academician of the Chinese Academy of Sciences. His wife, Frances Yao, is also a theoretical computer scientist.

See also
Yao's principle
Dolev-Yao model
Important publications in cryptography
Yao's test
Yao's Millionaires' Problem
Yao graph
Garbled circuit

External links
Andrew Yao at CASTU
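Yao's principle, mentioned above, is commonly stated as the following minimax inequality (a standard textbook formulation, not taken from this article): the worst-case expected cost of any randomized algorithm is bounded below by the expected cost of the best deterministic algorithm against any fixed input distribution.

```latex
% Yao's principle (standard form). For any distribution p over inputs,
% any randomized algorithm R, and c(A, x) the cost of algorithm A on
% input x, with A ranging over deterministic algorithms:
\[
  \max_{x} \; \mathbb{E}\bigl[ c(R, x) \bigr]
  \;\ge\;
  \min_{A} \; \mathbb{E}_{x \sim p}\bigl[ c(A, x) \bigr]
\]
```

In practice the inequality is used to prove lower bounds on randomized algorithms: one exhibits a hard input distribution p and bounds the right-hand side for every deterministic algorithm.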
https://en.wikipedia.org/wiki/Outline%20of%20computing
Outline of computing
The following outline is provided as an overview of and topical guide to computing:

Computing – activity of using and improving computer hardware and computer software.

Branches of computing
Computer science (see also Outline of computer science)
Information technology – refers to the application (especially in businesses and other organisations) of computer science, that is, its use by mankind (see also Outline of information technology)
Information systems – refers to the study of the application of IT to business processes
Computer engineering (see also Outline of computer engineering)
Software engineering (see also Outline of software engineering)

Computer science
Computer science – (outline)
Theory of computation
Scientific computing
Metacomputing
Autonomic computing

Computers
See information processor for a high-level block diagram.
Computer
Computer hardware
History of computing hardware
Processor design
Computer network
Computer performance by orders of magnitude

Instruction-level taxonomies
After the commoditization of memory, attention turned to optimizing CPU performance at the instruction level.
Various methods of speeding up the fetch-execute cycle include:
designing instruction set architectures with simpler, faster instructions: RISC as opposed to CISC
Superscalar instruction execution
VLIW architectures, which make parallelism explicit

Software
Software engineering
Computer programming
Computational
Software patent
Firmware
System software
Device drivers
Operating systems
Utilities
Application Software
Databases
Geographic information system
Spreadsheet
Word processor
Programming languages
Interpreters
Compilers
Assemblers
Speech recognition
Speech synthesis

History of computing
History of computing
History of computing hardware from the tally stick to the quantum computer
History of computer science
History of computer animation
History of computer graphics
History of computer networking
History of computer vision
Punched card
Unit record equipment
IBM 700/7000 series
IBM 1400 series
IBM System/360
History of IBM magnetic disk drives

Business computing
Accounting software
Computer-aided design
Computer-aided manufacturing
Computer-aided dispatch
Customer relationship management
Data warehouse
Decision support system
Electronic data processing
Enterprise resource planning
Geographic information system
Hospital information system
Human resource management system
Management information system
Material requirements planning
Product Lifecycle Management
Strategic enterprise management
Supply chain management
Utility Computing

Human factors
Accessible computing
Computer-induced medical problems
Computer user satisfaction
Human-computer interaction (outline)
Human-centered computing

Computer network
Wired and wireless computer network
Types
Wide area network
Metropolitan area network
City Area Network
Village Area Network
Local area network
Wireless local area network
Mesh networking
Collaborative workspace
Internet
Network management

Computing technology based wireless networking (CbWN)
The main goal of CbWN is to optimize the system performance of the
flexible wireless network. Source coding Codebook design for side information based transmission techniques such as Precoding Wyner-Ziv coding for cooperative wireless communications Security Dirty paper coding for cooperative multiple antenna or user precoding Intelligence Game theory for wireless networking Cognitive communications Flexible sectorization, Beamforming and SDMA Software Software defined radio (SDR) Programmable air-interface Downloadable algorithm: e.g., downloadable codebook for Precoding Computer security Cryptology – cryptography – information theory Cracking – demon dialing – Hacking – war dialing – war driving Social engineering – Dumpster diving Physical security – Black bag job Computer security Computer surveillance Defensive programming Malware Security engineering Data Numeric data Integral data types – bit, byte, etc. Real data types: Floating point (Single precision, Double precision, etc.) Fixed point Rational number Decimal Binary-coded decimal (BCD) Excess-3 BCD (XS-3) Biquinary-coded decimal representation: Binary – Octal – Decimal – Hexadecimal (hex) Computer mathematics – Computer numbering formats Character data storage: Character – String – text representation: ASCII – Unicode – Multibyte – EBCDIC (Widecharacter, Multicharacter) – FIELDATA – Baudot Other data topics Data compression Digital signal processing Image processing Data management Routing Data Protection Act Classes of computers There are several terms which describe classes, or categories, of computers: Analog computer Calculator Desktop computer Desktop replacement computer Digital computer Embedded computer Home computer Laptop Mainframe Minicomputer Microcomputer Personal computer Portable computer Personal digital assistant (aka PDA, or Handheld computer) Programmable logic controller or PLC Server Smartphone Supercomputer Tablet computer Video game console Workstation Organizations Companies – current Apple Asus Avaya Dell Fujitsu Gateway Computers Groupe Bull 
HCL Hewlett-Packard Hitachi, Ltd. Intel Corporation IBM Lenovo Microsoft NEC Corporation Novell Panasonic Red Hat Silicon Graphics Sun Microsystems Unisys Companies – historic Acorn, bought by Olivetti Amdahl Corporation, bought by Fujitsu Bendix Corporation Burroughs Corporation, merged with Sperry to become Unisys Compaq, bought by Hewlett-Packard Control Data Cray Data General Digital Equipment Corporation, bought by Compaq, later bought by Hewlett-Packard Digital Research – produced system software for early Intel microprocessor-based computers Elliott Brothers English Electric Company Ferranti General Electric, computer division bought by Honeywell, then Bull Honeywell, computer division bought by Bull ICL Leo Lisp Machines, Inc. Marconi Micro Instrumentation and Telemetry Systems produced the first widely sold microcomputer system (kit and assembled) Nixdorf Computer, bought by Siemens Norsk Data Olivetti Osborne Packard Bell PERQ Prime Computer Raytheon Royal McBee RCA Scientific Data Systems, sold to Xerox Siemens Sinclair Research, created the Sinclair ZX Spectrum, ZX80 and ZX81 Southwest Technical Products Corporation produced microcomputer systems (kit and assembled), peripherals, and software based on Motorola 6800 and 6809 microcomputer chips Sperry, which bought UNIVAC, and later merged with Burroughs to become Unisys Symbolics UNIVAC Varian Data Machines, a division of Varian Associates which was bought by Sperry Wang Professional organizations Association for Computing Machinery (ACM) Association for Survey Computing (ASC) British Computer Society (BCS) Canadian Information Processing Society (CIPS) Computer Measurement Group (CMG) Institute of Electrical and Electronics Engineers (IEEE), in particular the IEEE Computer Society Institution of Electrical Engineers International Electrotechnical Commission (IEC) Standards bodies International Electrotechnical Commission (IEC) International Organization for Standardization (ISO) Institute of
Electrical and Electronics Engineers (IEEE) Internet Engineering Task Force (IETF) World Wide Web Consortium (W3C) Open standards bodies See also Open standard Apdex Alliance – Application Performance Index Application Response Measurement (ARM) Computing publications Digital Bibliography & Library Project – lists over 910,000 bibliographic entries on computer science and several thousand links to the home pages of computer scientists. Persons influential in computing Major figures associated with making personal computers popular. Microsoft Bill Gates Paul Allen Apple Inc. Steve Jobs Steve Wozniak External links FOLDOC: the Free On-Line Dictionary Of Computing Computing
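As a worked example of the number formats named in the Data section above (binary, octal, hexadecimal, binary-coded decimal, and excess-3), here is a short Python sketch; the two encoder functions are illustrative helpers, not part of any standard library.

```python
# The same integer in the positional bases listed above.
n = 42
print(bin(n), oct(n), hex(n))  # 0b101010 0o52 0x2a

def to_bcd(n):
    """Binary-coded decimal: pack each decimal digit into 4 bits."""
    bcd = 0
    for shift, digit in enumerate(reversed(str(n))):
        bcd |= int(digit) << (4 * shift)
    return bcd

def to_xs3(n):
    """Excess-3 (XS-3): each decimal digit is stored as digit + 3."""
    xs3 = 0
    for shift, digit in enumerate(reversed(str(n))):
        xs3 |= (int(digit) + 3) << (4 * shift)
    return xs3

assert to_bcd(42) == 0x42   # BCD of 42 reads back as hex 0x42
assert to_xs3(42) == 0x75   # digits 4,2 become 7,5
```

BCD's defining property is visible in the first assertion: the hexadecimal rendering of the packed value spells out the decimal digits.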
57082943
https://en.wikipedia.org/wiki/Live%20Home%203D
Live Home 3D
Live Home 3D is virtual home design software for macOS, Windows 10 and iOS. The app allows design in both 2D and 3D, and the creation of high-resolution interior and exterior renderings, video walkthroughs, or 360-degree panoramic images. Features Detailed 2D floor plans. 3D mode that renders the design live. Project Gallery with house projects and sample rooms. Room tool, to draw complete rooms. Arc and Straight Wall tools for drawing walls. Measurement units (inches, feet, meters, etc.). Dimension tool, to set the distance between underlying objects or walls. More than 2,100 materials and 1,500 objects. Import from Trimble 3D Warehouse. 3D view export to JPEG, TIFF, PNG, and BMP. 360° Panorama JPEG images. Stereo 3D Video and 360° Video. Export of projects or selected objects to COLLADA, VRML Version 2.0 or X3D format. MacBook Pro Touch Bar support. Universal Windows Platform. Physically-based materials. House movement and rotation. Terrain editing tools. Version history Version 1, named Live Interior 3D, was released on April 4, 2007 (Mac only). Version 2: macOS version released on October 13, 2008; Windows 8 version released on March 15, 2014; Windows 10 version released on July 21, 2015. Version 3: macOS and Windows 10 versions released on September 19, 2016; iOS version released on October 23, 2018. Version 4: macOS, iOS and Windows 10 versions released on April 20, 2021. References External links Windows multimedia software MacOS software Computer-aided design software Computer-aided design Design engineering Software 3D graphics software 3D imaging Architectural design Interior design
622973
https://en.wikipedia.org/wiki/Trusted%20operating%20system
Trusted operating system
Trusted Operating System (TOS) generally refers to an operating system that provides sufficient support for multilevel security and evidence of correctness to meet a particular set of government requirements. The most common set of criteria for trusted operating system design is the Common Criteria combined with the Security Functional Requirements (SFRs) for the Labeled Security Protection Profile (LSPP) and mandatory access control (MAC). The Common Criteria is the result of a multi-year effort by the governments of the U.S., Canada, United Kingdom, France, Germany, the Netherlands and other countries to develop harmonized security criteria for IT products. Examples Examples of certified trusted operating systems are: Apple Mac OS X 10.6 (Rated EAL 3+) HP-UX 11i v3 (Rated EAL 4+) Some Linux distributions (Rated up to EAL 4+) Microsoft Windows 7 and Microsoft Server 2008 R2 (Rated EAL 4+) AIX 5L with PitBull Foundation (Rated EAL 4+) Trusted Solaris Trusted UNICOS 8.0 (Rated B1) XTS-400 (Rated EAL5+) IBM VM (SP, BSE, HPO, XA, ESA, etc.)
with RACF Examples of operating systems that might be certifiable are: FreeBSD with the TrustedBSD extensions SELinux (see FAQ) Companies that have created trusted operating systems include: Addamax (BSD, SVR3, SVR4, HP/UX) Argus Systems Group (Solaris, AIX, Linux) AT&T (System V) BAE Systems (XTS Unix) Bull (AIX) Data General (DG/UX) Digital Equipment Corporation (Ultrix) Forcepoint (Hardened SELinux) Gemini Computers (GEMSOS) General Dynamics C4 Systems (Linux) Harris Corporation (SVR3, SVR4) Hewlett-Packard (HP/UX) Honeywell (Multics) IBM (OS/390, AIX) SCO (SCO Unix) Secure Computing Corporation (LOCK, Mach, BSD) SecureWare (Apple A/UX, HP/UX, SCO) Sequent Computer Systems (Dynix/ptx) Silicon Graphics (IRIX) Sun Microsystems (SunOS, Solaris) Trusted Information Systems (Xenix, Mach) See also Common Criteria Comparison of operating systems Security-evaluated operating system Security-focused operating system References External links Common Criteria Portal - certified products NSA FAQ on SELinux Argus Systems Operating system security
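The multilevel-security policy behind these systems is commonly described by the classic Bell-LaPadula model: "no read up, no write down." The following is a simplified Python sketch of that model only; the label names and level ordering are illustrative and do not reflect any listed product's actual implementation.

```python
# Simplified Bell-LaPadula mandatory access control check (illustrative).
# A total ordering of sensitivity labels, lowest to highest.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(subject, obj):
    """'No read up': a subject may only read objects at or below its level."""
    return LEVELS[subject] >= LEVELS[obj]

def can_write(subject, obj):
    """'No write down': a subject may only write objects at or above its level."""
    return LEVELS[subject] <= LEVELS[obj]

assert can_read("secret", "confidential")       # reading down is allowed
assert not can_read("confidential", "secret")   # reading up is denied
assert can_write("confidential", "secret")      # writing up is allowed
assert not can_write("secret", "confidential")  # writing down is denied
```

The write rule is what distinguishes mandatory access control from ordinary discretionary permissions: it prevents a high-clearance process from leaking data into a lower-classified object even deliberately.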
71916
https://en.wikipedia.org/wiki/List%20of%20operating%20systems
List of operating systems
This is a list of operating systems. Computer operating systems can be categorized by technology, ownership, licensing, working state, usage, and by many other characteristics. In practice, many of these groupings may overlap. The criterion for inclusion is notability, as shown either through an existing Wikipedia article or a citation to a reliable source. Proprietary Acorn Computers Arthur ARX MOS RISC iX RISC OS Amazon Fire OS Amiga Inc. AmigaOS AmigaOS 1.0-3.9 (Motorola 68000) AmigaOS 4 (PowerPC) Amiga Unix (a.k.a. Amix) Amstrad AMSDOS Contiki CP/M 2.2 CP/M Plus SymbOS Apple Inc. Apple II family Apple DOS Apple Pascal ProDOS GS/OS GNO/ME Contiki Apple III Apple SOS Apple Lisa Apple Macintosh Classic Mac OS A/UX (UNIX System V with BSD extensions) Copland MkLinux Pink Rhapsody macOS (formerly Mac OS X and OS X) macOS Server (formerly Mac OS X Server and OS X Server) Apple Network Server IBM AIX (Apple-customized) Apple MessagePad Newton OS iPhone and iPod Touch iOS (formerly iPhone OS) iPad iPadOS Apple Watch watchOS Apple TV tvOS Embedded operating systems A/ROSE bridgeOS iPod software (unnamed embedded OS for iPod) Unnamed NetBSD variant for AirPort Extreme and Time Capsule Apollo Computer, Hewlett-Packard Domain/OS – one of the first network-based systems; ran on Apollo/Domain hardware. Later bought by Hewlett-Packard. Atari Atari DOS (for 8-bit computers) Atari TOS Atari MultiTOS Contiki (for 8-bit, ST, Portfolio) BAE Systems XTS-400 Be Inc.
BeOS BeIA BeOS r5.1d0 magnussoft ZETA (based on BeOS r5.1d0 source code, developed by yellowTAB) Bell Labs Unix ("Ken's new system," for its creator (Ken Thompson), officially Unics and then Unix, the prototypic operating system created in Bell Labs in 1969 that formed the basis for the Unix family of operating systems) UNIX Time-Sharing System v1 UNIX Time-Sharing System v2 UNIX Time-Sharing System v3 UNIX Time-Sharing System v4 UNIX Time-Sharing System v5 UNIX Time-Sharing System v6 MINI-UNIX PWB/UNIX USG CB Unix UNIX Time-Sharing System v7 (It is from Version 7 Unix (and, to an extent, its descendants listed below) that almost all Unix-based and Unix-like operating systems descend.) Unix System III Unix System IV Unix System V Unix System V Releases 2.0, 3.0, 3.2, 4.0, and 4.2 UNIX Time-Sharing System v8 UNIX Time-Sharing System v9 UNIX Time-Sharing System v10 Non-Unix Operating Systems: BESYS Plan 9 from Bell Labs Inferno Burroughs Corporation, Unisys Burroughs MCP Commodore International GEOS AmigaOS AROS Research Operating System Control Data Corporation Lower 3000 series SCOPE (Supervisory Control Of Program Execution) Upper 3000 series SCOPE (Supervisory Control Of Program Execution) Drum SCOPE 6x00 and related Cyber Chippewa Operating System (COS) MACE (Mansfield and Cahlander Executive) Kronos (Kronographic OS) NOS (Network Operating System) NOS/VE NOS Virtual Environment SCOPE (Supervisory Control Of Program Execution) NOS/BE NOS Batch Environment SIPROS (Simultaneous Processing Operating System) CloudMosa Puffin OS Convergent Technologies Convergent Technologies Operating System – later acquired by Unisys Cromemco Cromemco DOS (CDOS) – a Disk Operating system compatible with CP/M Cromix – a multitasking, multi-user, Unix-like OS for Cromemco microcomputers with Z80A and/or 68000 CPU Data General AOS for 16-bit Data General Eclipse computers and AOS/VS for 32-bit (MV series) Eclipses, MP/AOS for microNOVA-based computers DG/UX RDOS Real-time Disk 
Operating System, with variants: RTOS and DOS (not related to PC DOS, MS-DOS etc.) Datapoint CTOS Cassette Tape Operating System for the Datapoint 2200 DOS Disk Operating System for the Datapoint 2200, 5500, and 1100 DDC-I, Inc. Deos – Time & Space Partitioned RTOS, Certified to DO-178B, Level A since 1998 HeartOS – POSIX-based Hard Real-Time Operating System Digital Research, Inc. CP/M CP/M CP/M for Intel 8080/8085 and Zilog Z80 Personal CP/M, a refinement of CP/M CP/M Plus with BDOS 3.0 CP/M-68K CP/M for Motorola 68000 CP/M-8000 CP/M for Zilog Z8000 CP/M-86 CP/M for Intel 8088/8086 CP/M-86 Plus Personal CP/M-86 MP/M Multi-user version of CP/M-80 MP/M II MP/M-86 Multi-user version of CP/M-86 MP/M 8-16, a dual-processor variant of MP/M for 8086 and 8080 CPUs. Concurrent CP/M, the successor of CP/M-80 and MP/M-80 Concurrent CP/M-86, the successor of CP/M-86 and MP/M-86 Concurrent CP/M 8-16, a dual-processor variant of Concurrent CP/M for 8086 and 8080 CPUs. Concurrent CP/M-68K, a variant for the 68000 DOS Concurrent DOS, the successor of Concurrent CP/M-86 with PC-MODE Concurrent PC DOS, a Concurrent DOS variant for IBM compatible PCs Concurrent DOS 8-16, a dual-processor variant of Concurrent DOS for 8086 and 8080 CPUs Concurrent DOS 286 Concurrent DOS XM, a real-mode variant of Concurrent DOS with EEMS support Concurrent DOS 386 Concurrent DOS 386/MGE, a Concurrent DOS 386 variant with advanced graphics terminal capabilities Concurrent DOS 68K, a port of Concurrent DOS to Motorola 68000 CPUs with DOS source code portability capabilities FlexOS 1.0 – 2.34, a derivative of Concurrent DOS 286 FlexOS 186, a variant of FlexOS for terminals FlexOS 286, a variant of FlexOS for hosts Siemens S5-DOS/MT, an industrial control system based on FlexOS IBM 4680 OS, a POS operating system based on FlexOS IBM 4690 OS, a POS operating system based on FlexOS Toshiba 4690 OS, a POS operating system based on IBM 4690 OS and FlexOS FlexOS 386, a later variant of FlexOS for hosts IBM 
4690 OS, a POS operating system based on FlexOS Toshiba 4690 OS, a POS operating system based on IBM 4690 OS and FlexOS FlexOS 68K, a derivative of Concurrent DOS 68K Multiuser DOS, the successor of Concurrent DOS 386 CCI Multiuser DOS Datapac Multiuser DOS Datapac System Manager, a derivative of Datapac Multiuser DOS IMS Multiuser DOS IMS REAL/32, a derivative of Multiuser DOS IMS REAL/NG, the successor of REAL/32 DOS Plus 1.1 – 2.1, a single-user, multi-tasking system derived from Concurrent DOS 4.1 – 5.0 DR-DOS 3.31 – 6.0, a single-user, single-tasking native DOS derived from Concurrent DOS 6.0 Novell PalmDOS 1.0 Novell "Star Trek" Novell DOS 7, a single-user, multi-tasking system derived from DR DOS Caldera OpenDOS 7.01 Caldera DR-DOS 7.02 and higher Digital Equipment Corporation, Compaq, Hewlett-Packard, Hewlett Packard Enterprise Batch-11/DOS-11 OS/8 RSTS/E – multi-user time-sharing OS for PDP-11s RSX-11 – multiuser, multitasking OS for PDP-11s RT-11 – single user OS for PDP-11 TOPS-10 – for the PDP-10 TENEX – an ancestor of TOPS-20 from BBN, for the PDP-10 TOPS-20 – for the PDP-10 DEC MICA – for the DEC PRISM Digital UNIX – derived from OSF/1, became HP's Tru64 UNIX Ultrix VMS (originally by DEC and HP, now by VMS Software Inc.) for the VAX mini-computer range, Alpha and Intel Itanium i2 and i4; later renamed OpenVMS WAITS – for the PDP-6 and PDP-10 ENEA AB OSE – Flexible, small footprint, high-performance RTOS for control processors Fujitsu Towns OS XSP OS/IV MSP MSP-EX General Electric, Honeywell, Bull Real-Time Multiprogramming Operating System GCOS Multics Google Chromium OS is the open-source development version of Chrome OS. Both operating systems are based on the Linux kernel. Chrome OS is designed to work exclusively with web applications. Announced on July 7, 2009, Chrome OS is currently publicly available and was released in summer 2011. The Chrome OS source code was released on November 19, 2009, under the BSD license as Chromium OS.
Container-Optimized OS (COS) is an operating system that is optimized for running Docker containers, based on Chromium OS. Android is an operating system for mobile devices. It consists of Android Runtime (userland) with Linux (kernel), with its Linux kernel modified to add drivers for mobile device hardware and to remove unused vanilla Linux drivers. gLinux, a Linux distribution that Google uses internally Fuchsia is a capability-based, real-time operating system (RTOS) in early development, scalable to universal devices, from the tiniest embedded hardware, wristwatches and tablets to the largest personal computers. Unlike Chrome OS and Android, it is not based on the Linux kernel, but instead began on a new microkernel called "Zircon", derived from "Little Kernel". Wear OS, a version of Google's Android operating system designed for smartwatches and other wearables. Green Hills Software INTEGRITY – Reliable operating system INTEGRITY-178B – A DO-178B certified version of INTEGRITY. µ-velOSity – A lightweight microkernel. Harris Corporation Vulcan O/S – Proprietary O/S for Harris' Computer Systems (HCX) Harris UNIX – Proprietary UNIX based OS for Harris' Computers (MCX) Heathkit, Zenith Data Systems HDOS – ran on the H8 and Heath/Zenith Z-89 series HT-11 – a modified version of RT-11 that ran on the Heathkit H11 Hewlett-Packard, Hewlett Packard Enterprise HP Multi-Programming Executive (MPE, MPE/XL, and MPE/iX) – runs on HP 3000 and HP e3000 mini-computers HP-UX – runs on HP9000 and Itanium servers (from small to mainframe-class computers) Honeywell CP-6 Huawei Harmony OS LiteOS Intel Corporation iRMX – real-time operating system originally created to support the Intel 8080 and 8086 processor families in embedded applications. ISIS, ISIS-II – "Intel Systems Implementation Supervisor" was an environment for development of software within the Intel microprocessor family in the early 1980s on their Intellec Microcomputer Development System and clones.
ISIS-II worked with 8 inch floppy disks and had an editor, cross-assemblers, a linker, an object locator, debugger, compilers for PL/M, a BASIC interpreter, etc. and allowed file management through a console. IBM On early mainframes: 1410, 7010, 704, 709, 7090, 7094, 7040, 7044, 7030 BESYS – for the IBM 7090 Compatible Time-Sharing System (CTSS) – developed at MIT's Computation Center for use on a modified IBM 7094 FORTRAN Monitor System (FMS) – for the IBM 709 and 7090 GM OS & GM-NAA I/O – for the IBM 704 IBSYS – tape based operating system for IBM 7090 and IBM 7094 7040/7044 Operating System (16/32K) - 7040-PR-150 IJMON – A bootable serial I/O monitor for loading programs for the IBM 1400 series 1410 Processor Operating System (PR-155) for the 1410 and 7010 SHARE Operating System (SOS) – for the IBM 704 and 709 University of Michigan Executive System (UMES) – for the IBM 704, 709, and 7090 On S/360, S/370, and successor mainframes OS/360 and successors on IBM S/360, S/370, and successor mainframes OS/360 (first official OS targeted for the System/360 architecture) PCP (Primary Control Program, a kernel and a ground breaking automatic space allocating file system) MFT (original Multi-programming with a Fixed number of Tasks, replaced by MFT II) MFT II (Multi-Programming with a Fixed number of Tasks, had up to 15 fixed size application partitions, plus partitions for system tasks, initially defined at boot time but redefinable by operator command) MVT (Multi-Programming with a Variable number of Tasks, had up to 15 application regions defined dynamically, plus additional regions for system tasks) M65MP (MVT with support for a multiprocessor 360/65) OS/VS (port of OS/360 targeted for the System/370 virtual memory architecture; OS/370 is not the correct name for OS/VS1 and OS/VS2.)
OS/VS has the following variations: OS/VS1 (Operating System/Virtual Storage 1, Virtual-memory version of OS/360 MFT II) OS/VS1 Basic Programming Extensions (BPE) adds device support and VM handshaking OS/VS2 (Operating System/Virtual Storage 2, Virtual-memory version of OS/360 MVT) OS/VS2 R1 (Called Single Virtual Storage (SVS), Virtual-memory version of OS/360 MVT but without multiprocessing support) OS/VS2 R2 through R3.8 (called Multiple Virtual Storage, MVS, eliminated most need for VS1). MVS/SE (MVS System Extensions) MVS/SP (MVS System Product) V1 MVS/370 refers to OS/VS2 MVS, MVS/SE and MVS/SP Version 1 MVS/XA (MVS/SP V2, supports S/370 Extended Architecture, 31-bit addressing) MVS/ESA (MVS supported Enterprise Systems Architecture, horizontal addressing extensions: data only address spaces called Dataspaces) MVS/SP V3 MVS/ESA SP V4 (a Unix environment was available for MVS/ESA SP V4R3) MVS/ESA SP V5 (the UNIX environment was bundled in this and all subsequent versions) OS/390 replacement for MVS/ESA SP V5 with some products bundled z/OS z/Architecture replacement for OS/390 with 64-bit virtual addressing Phoenix/MVS (Developed at Cambridge University) DOS/360 and successors on IBM S/360, S/370, and successor mainframes BOS/360 (early interim version of DOS/360, briefly available at a few Alpha & Beta System/360 sites) TOS/360 (similar to BOS above and more fleeting, able to boot and run from 2x00 series tape drives) DOS/360 (Disk Operating System (DOS), multi-programming system with up to 3 partitions, first commonly available OS for System/360) DOS/360/RJE (DOS/360 with a control program extension that provided for the monitoring of remote job entry hardware (card reader & printer) connected by dedicated phone lines) DOS/VS (First DOS offered on System/370 systems, provided virtual storage) DOS/VSE (also known as VSE, upgrade of DOS/VS, up to 14 fixed size processing partitions ) VSE/Advanced Functions (VSE/AF) - Additional functionality for DOS/VSE 
VSE/SP (program product including DOS/VSE and VSE/AF) VSE/ESA, replaces VSE/SP, supports ESA/370 and ESA/390 with 31-bit addresses z/VSE (latest version of the four decades old DOS lineage, supports 64-bit addresses, multiprocessing, multiprogramming, SNA, TCP/IP, and some virtual machine features in support of Linux workloads) CP/CMS (Control Program/Cambridge Monitor System) and successors on IBM S/360, S/370, and successor mainframes CP-40/CMS (for System/360 Model 40) CP-67/CMS (for System/360 Model 67) Virtual Machine Facility/370 (VM/370) - the CP virtual machine hypervisor, Conversational Monitor System (CMS) operating system and supporting facilities for System/370 (24-bit addresses) VM/370 Basic System Extensions Program Product (VM/BSE, AKA BSEPP) is an enhancement to VM/370 VM/370 System Extensions Program Product (VM/SE, AKA SEPP) is an enhancement to VM/370 that includes the facilities of VM/BSE Virtual Machine/System Product (VM/SP) replaces VM/370, VM/BSE and VM/SE. Virtual Machine/Extended Architecture (VM/XA) refers to three versions of VM that support System/370 Extended Architecture (S/370-XA) with 31-bit virtual addresses Virtual Machine/Extended architecture Migration Aid (VM/XA MA) - Intended for MVS/370 to MVS/XA migration Virtual Machine/Extended Architecture Systems Facility (VM/XA SF) - new release of VM/XA MA with additional functionality Virtual Machine/Extended Architecture System Product (VM/XA SP) - Replaces VM/SP, VM/SP HPO and VM/XA SF VM/ESA (Virtual Machine/Enterprise Systems Architecture, supports S/370, ESA/370 and ESA/390) z/VM (z/Architecture version of the VM OS with 64-bit addressing) TPF Line (Transaction Processing Facility) on IBM S/360, S/370, and successor mainframes (largely used by airlines) ACP (Airline Control Program) TPF (Transaction Processing Facility) z/TPF (z/Architecture extension) Unix-like on IBM S/360, S/370, and successor mainframes AIX/370 (IBM's Advanced Interactive eXecutive, a System V Unix version) 
AIX/ESA (IBM's Advanced Interactive eXecutive, a System V Unix version) OpenSolaris for System z UTS (developed by Amdahl) Linux on IBM Z Others on IBM S/360, S/370, and successor mainframes: BOS/360 (Basic Operating System) Distributed Processing Programming Executive/370 (DPPX/370) a port of DDPX from 8100 to S/370. MTS (Michigan Terminal System, developed by a group of universities in the US, Canada, and the UK for the IBM System/360 Model 67, System/370 series, and compatible mainframes) RTOS/360 (IBM's Real Time Operating System, ran on 5 NASA custom System/360-75s) TOS/360 (Tape Operating System) TSS/360 (IBM's Time Sharing System) MUSIC/SP (developed by McGill University for IBM System/370) ORVYL and WYLBUR (developed by Stanford University for IBM System/360) On PC and Intel x86 based architectures PC DOS, IBM DOS PC DOS 1.x, 2.x, 3.x (developed jointly with Microsoft) IBM DOS 4.x, 5.0 (developed jointly with Microsoft) PC DOS 6.1, 6.3, 7, 2000, 7.10 OS/2 OS/2 1.x (developed jointly with Microsoft) OS/2 2.x OS/2 Warp 3 (ported to PPC via Workplace OS) OS/2 Warp 4 eComStation (Warp 4.5/Workspace on Demand, rebundled by Serenity Systems International) ArcaOS (Warp 4.52 based system sold by Arca Noae, LLC) IBM 4680 OS version 1 to 4, a POS operating system based on Digital Research's Concurrent DOS 286 and FlexOS 286 1.xx IBM 4690 OS version 1 to 6.3, a successor to 4680 OS based on Novell's FlexOS 286/FlexOS 386 2.3x Toshiba 4690 OS version 6.4, a successor to 4690 OS 6.3 Unix-like on PS/2 AIX (IBM's Advanced Interactive eXecutive, a System V Unix version) On other hardware platforms IBM Series/1 EDX (Event Driven Executive) RPS (Realtime Programming System) CPS (Control Programming Support, subset of RPS) SerIX (Unix on Series/1) IBM 1130 DMS (Disk Monitor System) IBM 1800 TSX (Time Sharing eXecutive) MPX (Multi Programming eXecutive) IBM 8100 DPCX (Distributed Processing Control eXecutive) DPPX (Distributed Processing Programming Executive) IBM System/3 DMS 
(Disk Management System) IBM System/34, IBM System/36 SSP (System Support Program) IBM System/38 CPF (Control Program Facility) IBM System/88 Stratus VOS (developed by Stratus, and used for IBM System/88, Original equipment manufacturer from Stratus) IBM AS/400, iSeries, System i, IBM Power Systems IBM i (previously known as OS/400 and i5/OS, descendant of System/38 CPF, includes System/36 SSP and AIX environment) UNIX on IBM RT PC AOS (a BSD Unix version, not related to Data General AOS) AIX (Advanced Interactive eXecutive, a System V Unix version) UNIX on POWER ISA, PowerPC, and Power ISA AIX (Advanced Interactive eXecutive, a System V Unix version) Others Workplace OS (a microkernel based operating system including OS/2, developed and canceled in the 1990s) K42 (open-source research operating system on PowerPC or x86 based cache-coherent multiprocessor systems) Dynix (developed by Sequent, and used for IBM NUMA-Q too) International Computers Limited J and MultiJob – for the System 4 series mainframes GEORGE 2/3/4 GEneral ORGanisational Environment – used by ICL 1900 series mainframes Executive – used on the 1900 and 290x range of minicomputers. A modified version of Executive was also used as part of GEORGE 3 and 4. TME – used on the ME29 minicomputer ICL VME – including early variants VME/B and VME/2900, appearing on the ICL 2900 Series and Series 39 mainframes, implemented in S3 VME/K – on early smaller 2900s Jide Remix OS Jolla Sailfish OS KaiOS KaiOS Lynx Real-time Systems, LynuxWorks, Lynx Software Technologies LynxOS Meizu Flyme OS Micrium Inc. 
MicroC/OS-II – a small pre-emptive priority based multi-tasking kernel MicroC/OS-III – a small pre-emptive priority based multi-tasking kernel, with unlimited number of tasks and priorities, and round-robin scheduling Microsoft Corporation Xenix (licensed version of Unix; licensed to SCO in 1987) MS-DOS (developed jointly with IBM, versions 1.0–6.22) MSX-DOS (developed by MS Japan for the MSX 8-bit computer) DOS/V OS/2 1.x (developed jointly with IBM until version 1.3) Windows (16-bit and 32-bit preemptive and cooperative multitasking, running atop MS-DOS) Windows 1.0 (Windows 1) Windows 2.0 (Windows 2 – separate version for i386 processor) Windows 3.0 (Windows 3) Windows 3.1x (Windows 3.1) Windows for Workgroups 3.1 (Codename Snowball) Windows 3.2 (Chinese-only release) Windows for Workgroups 3.11 Windows 95 (codename Chicago – Windows 4.0) Windows 98 (codename Memphis – Windows 4.1) Windows Millennium Edition (Windows ME – Windows 4.9) Windows NT (Full 32-bit or 64-bit kernel, not dependent on MS-DOS) Windows NT 3.1 Windows NT 3.5 Windows NT 3.51 Windows NT 4.0 Windows 2000 (Windows NT 5.0) Windows XP (Windows NT 5.1) Windows Server 2003 (Windows NT 5.2) Windows Fundamentals for Legacy PCs (based on Windows XP) Windows Vista (Windows NT 6.0) Windows Azure (Cloud OS Platform) 2009 Windows Home Server (based on Windows Server 2003) Windows Server 2008 (based on Windows Vista) Windows 7 (Windows NT 6.1) Windows Phone 7 Windows Server 2008 R2 (based on Windows 7) Windows Home Server 2011 (based on Windows Server 2008 R2) Windows 8 (Windows NT 6.2) Windows RT Windows Phone 8 Windows Server 2012 (based on Windows 8) Windows 8.1 (Windows NT 6.3) Windows Phone 8.1 Windows Server 2012 R2 (based on Windows 8.1) Windows 10 (Windows NT 10) Windows 10 Mobile Windows Server 2016 Windows Server 2019 Windows 11 Windows CE (OS for handhelds, embedded devices, and real-time applications that is similar to other versions of Windows) Windows CE 3.0 Windows CE 5.0 Windows Embedded CE 
6.0 Windows Embedded Compact 7 Windows Embedded Compact 2013 Windows Mobile (based on Windows CE, but for a smaller form factor) Singularity – A research operating system written mostly in managed code (C#) Midori – A managed code operating system Xbox system software Xbox 360 system software Xbox One system software Azure Sphere ThreadX MITS Altair DOS – An early disk operating system for the Altair 8800 machine. MontaVista MontaVista Mobilinux NCR Corporation TMX – Transaction Management eXecutive IMOS – Interactive Multiprogramming Operating System (circa 1978), for the NCR Century 8200 series minicomputers VRX – Virtual Resource eXecutive Nintendo ES is a computer operating system developed originally by Nintendo and since 2008 by Esrille. It is open source and runs natively on x86 platforms. NeXT NeXTSTEP Novell NetWare – network operating system providing high-performance network services. It has been superseded by the Open Enterprise Server line, which can be based on NetWare or Linux to provide the same set of services. UnixWare Novell "SuperNOS" – a never released merge of NetWare and UnixWare Novell "Corsair" Novell "Exposé" Open Enterprise Server – the successor to NetWare Open Mobile Platform Aurora OS – the successor to Sailfish OS (not to be confused with a different Aurora OS) Quadros Systems RTXC Quadros RTOS – proprietary C-based RTOS used in embedded systems RCA Time Sharing Operating System (TSOS) – first OS supporting virtual addressing of the main storage and support for both timeshare and batch interface RoweBots DSPnano RTOS – 8/16 Bit Ultra Tiny Embedded Linux Compatible RTOS Samsung Electronics Bada Tizen is an operating system based on the Linux kernel, a project within the Linux Foundation and is governed by a Technical Steering Group (TSG) while controlled by Samsung and backed by Intel. Tizen works on a wide range of Samsung devices including smartphones, tablets, smart TVs, PCs and wearables.
Orsay One UI - Android skin Sinclair Research Sinclair BASIC was used in the 8-bit home computers from Sinclair Research and Timex Sinclair. It was included in the ROM, and the computers booted to the Basic interpreter. Various versions exist, with the latter ones supporting disk drive operations. SCO, SCO Group Xenix, Unix System III based distribution for the Intel 8086/8088 architecture Xenix 286, Unix System V Release 2 based distribution for the Intel 80286 architecture Xenix 386, Unix System V Release 2 based distribution for the Intel 80386 architecture SCO Unix, SCO UNIX System V/386 was the first volume commercial product licensed by AT&T to use the UNIX System trademark (1989). Derived from AT&T System V Release 3.2 with an infusion of Xenix device drivers and utilities plus most of the SVR4 features SCO Open Desktop, the first 32-bit graphical user interface for UNIX Systems running on Intel processor-based computers. Based on SCO Unix SCO OpenServer 5, AT&T UNIX System V Release 3 based SCO OpenServer 6, SVR5 (UnixWare 7) based kernel with SCO OpenServer 5 application and binary compatibility, system administration, and user environments UnixWare UnixWare 2.x, based on AT&T System V Release 4.2MP UnixWare 7, UnixWare 2 kernel plus parts of 3.2v5 (UnixWare 2 + OpenServer 5 = UnixWare 7). Referred to by SCO as SVR5 Scientific Data Systems (SDS) Berkeley Timesharing System for the SDS 940 SYSGO PikeOS – a certified real time operating system for safety and security critical embedded systems Tandem Computers, Compaq, Hewlett-Packard, Hewlett Packard Enterprise NonStop OS – runs on HP's NonStop line of Itanium servers Tandy Corporation TRSDOS – A floppy-disk-oriented OS supplied by Tandy/Radio Shack for their TRS-80 Z80-based line of personal computers. Eventually renamed as LS-DOS or LDOS. Color BASIC – A ROM-based OS created by Microsoft for the TRS-80 Color Computer. NewDos/80 – A third-party OS for Tandy's TRS-80 personal computers. 
- DeskMate – Operating system created by Tandy Corporation and introduced with the Tandy 1000 computer.

TCSC (later NCSC)
- Edos – enhanced version of IBM's DOS/360 (and later DOS/VS and DOS/VSE) operating system for System/360 and System/370 IBM mainframes

Texas Instruments
- TI-RTOS Kernel – Real-time operating system for TI's embedded devices.

TRON Project
- TRON – open real-time operating system kernel
- T-Kernel

UNIVAC, Unisys
- EXEC I
- EXEC II
- EXEC 8/OS 1100/OS 2200
- VS/9, successor to RCA TSOS

Wang Laboratories
- WPS – Wang Word Processing System. Micro-code based system.
- OIS – Wang Office Information System. Successor to the WPS. Combined the WPS and VP/MVP systems.

Wind River Systems
- VxWorks – Small footprint, scalable, high-performance RTOS for embedded microprocessor based systems.

Zilog
- Z80-RIO

Other

Lisp-based
- Lisp Machines, Inc. (also known as LMI) used an operating system written in MIT's Lisp Machine Lisp.
- Symbolics Genera, written in a systems dialect of the Lisp programming language called ZetaLisp and Symbolics Common Lisp. Genera was ported to a virtual machine for the DEC Alpha line of computers.
- Texas Instruments' Explorer Lisp machine workstations also had systems code written in Lisp Machine Lisp.
- The Xerox 1100 series of Lisp machines used an operating system written in Interlisp, which was also ported to a virtual machine called "Medley."

For Elektronika BK
- ANDOS
- CSI-DOS
- MK-DOS

Non-standard language-based
- Pilot operating system – written in the Mesa language and used on Xerox Star workstations.
- PERQ Operating System (POS) – written in PERQ Pascal.

Other proprietary non-Unix-like
- Эльбрус-1 (Elbrus-1) and Эльбрус-2 – used for application, job control and system programming; implemented in Эль-76 (AL-76).
- EOS – developed by ETA Systems for use in their ETA-10 line of supercomputers
- EMBOS – developed by Elxsi for use on their mini-supercomputers
- GCOS – a proprietary operating system originally developed by General Electric
- MAI Basic Four – an OS implementing Business Basic from MAI Systems.
- Michigan Terminal System – developed by a group of universities in the US, Canada, and the UK for use on the IBM System/360 Model 67, the System/370 series, and compatible mainframes
- MUSIC/SP – an operating system developed for the S/370, running normally under VM
- OS ES – an operating system for ES EVM
- PC-MOS/386 – DOS-like, but multiuser/multitasking
- Prolog-Dispatcher – used to control the Soviet Buran space shuttle.
- SINTRAN III – an operating system used with Norsk Data computers.
- SkyOS – commercial desktop OS for PCs
- SODA – used by the Odra 1204 computers.
- THEOS
- TSX-32 – a 32-bit operating system for the x86 platform.
- TX990/TXDS, DX10 and DNOS – proprietary operating systems for TI-990 minicomputers

Other proprietary Unix-like and POSIX-compliant
- Aegis (Apollo Computer)
- Amiga Unix (Amiga ports of Unix System V Release 3.2 with the Amiga A2500UX and SVR4 with the Amiga A3000UX. Started in 1990; the last version was released in 1992)
- Coherent (Unix-like OS from Mark Williams Co. for PC-class computers)
- DC/OSx (DataCenter/OSx – an operating system developed by Pyramid Technology for its MIPS-based systems)
- DG/UX (Data General Corp)
- DNIX from DIAB
- DSPnano RTOS (POSIX nanokernel, DSP optimized, open source)
- HeliOS – developed and sold by Perihelion Software, mainly for transputer-based systems
- Interactive Unix (a port of the UNIX System V operating system for Intel x86 by Interactive Systems Corporation)
- IRIX from SGI
- MeikOS
- NeXTSTEP (developed by NeXT; a Unix-based OS based on the Mach microkernel)
- OS-9, Unix-like RTOS (OS from Microware for Motorola 6809 based microcomputers)
- OS9/68K, Unix-like RTOS (OS from Microware for Motorola 680x0 based microcomputers; based on OS-9)
- OS-9000, Unix-like RTOS (OS from Microware for Intel x86 based microcomputers; based on OS-9, written in C)
- OSF/1 (developed into a commercial offering by Digital Equipment Corporation)
- OPENSTEP
- QNX (POSIX, microkernel OS; usually a real-time embedded OS)
- Rhapsody (an early form of Mac OS X)
- RISC iX – derived from BSD 4.3, by Acorn Computers, for their ARM family of machines
- RISC/os (a port by MIPS Technologies of 4.3BSD for its MIPS-based computers)
- RMX
- SCO UNIX (from SCO, bought by Caldera, who renamed themselves SCO Group)
- SINIX (a port by SNI of Unix to the MIPS architecture)
- Solaris (from Sun, bought by Oracle; a System V-based replacement for SunOS)
- SunOS (BSD-based Unix system used on early Sun hardware)
- SUPER-UX (a port of System V Release 4.2MP with features adopted from BSD and Linux for NEC SX architecture supercomputers)
- System V (a release of AT&T Unix; 'SVR4' was the 4th minor release)
- System V/AT, 386 (the first version of AT&T System V UNIX on the IBM 286 and 386 PCs, ported and sold by Microport)
- Trusted Solaris (Solaris with kernel and other enhancements to support multilevel security)
- UniFLEX (Unix-like OS from TSC for DMA-capable, extended-address, Motorola 6809 based computers, e.g. SWTPC, GIMIX and others)
- Unicos (the version of Unix designed for Cray supercomputers, mainly geared to vector calculations)
- UTX-32 (developed by Gould CSD (Computer System Division), a Unix-based OS that included both BSD and System V characteristics. It was one of the first Unix-based systems to receive NSA's C2 security level certification.)
- Zenix, Zenith Corporation's Unix (Zenith was a popular USA electronics maker at the time)

Non-proprietary Unix or Unix-like
- MINIX (study OS developed by Andrew S. Tanenbaum in the Netherlands)
- BSD (Berkeley Software Distribution, a variant of Unix for DEC VAX hardware)
- FreeBSD (one of the outgrowths of UC Regents' abandonment of CSRG's 'BSD Unix')
- DragonFlyBSD, forked from FreeBSD 4.8
- MidnightBSD, forked from FreeBSD 6.1
- GhostBSD
- TrueOS (previously known as PC-BSD)
- NetBSD (an embedded device BSD variant)
- OpenBSD, forked from NetBSD
- Bitrig, forked from OpenBSD
- Darwin, created by Apple using code from NeXTSTEP, FreeBSD, and NetBSD
- GNU (also known as GNU/Hurd)
- Linux (see also List of Linux distributions; alleged to be GNU/Linux, see GNU/Linux naming controversy)
- Android
- Android-x86
- Remix OS
- Redox (written in Rust)
- OpenSolaris
- illumos, contains original Unix (SVR4) code derived from OpenSolaris (discontinued by Oracle in favor of Solaris 11 Express)
- OpenIndiana, operates under the illumos Foundation. Uses the illumos kernel, which is a derivative of OS/Net, which is basically an OpenSolaris/Solaris kernel with the bulk of the drivers, core libraries, and basic utilities.
- Nexenta OS, based on the illumos kernel with Ubuntu packages
- SmartOS, an illumos distribution for cloud computing with Kernel-based Virtual Machine integration.
- RTEMS (Real-Time Executive for Multiprocessor Systems)
- Syllable Desktop
- VSTa
- Plurix (or Tropix) (by the Federal University of Rio de Janeiro – UFRJ)
- TUNIS (University of Toronto)
- Xv6 – a simple Unix-like teaching operating system from MIT
- SerenityOS – aims to be a modern Unix-like operating system, yet with a look and feel that emulates 1990s operating systems such as Microsoft Windows and Mac OS.
Non-Unix
- Cosmos – written in C#
- FreeDOS – open source DOS variant
- Genode – operating system framework for microkernels (written in C++)
- Ghost OS – written in assembly, C/C++
- Haiku – open source, inspired by BeOS; in development
- Incompatible Timesharing System (ITS) – written in the MIDAS macro assembler language for the PDP-6 and PDP-10 by MIT students
- osFree – OS/2 Warp open source clone
- OSv – written in C++
- Phantom OS – persistent, object-oriented
- ReactOS – open source OS designed to be binary compatible with Windows NT and its variants (Windows XP, Windows 2000, etc.); in development
- SharpOS – written in .NET C#
- TempleOS – written in HolyC
- Visopsys – written in C and assembly by Andy McLaughlin

Research

Unix or Unix-like
- Plan 9 from Bell Labs – distributed OS developed at Bell Labs, based on original Unix design principles yet functionally different and going much further
- Inferno – distributed OS derived from Plan 9, originally from Bell Labs
- Research Unix

Non-Unix
- Amoeba – research OS by Andrew S. Tanenbaum
- Barrelfish
- Croquet
- EROS – microkernel, capability-based
- CapROS – microkernel, EROS successor
- Harmony – realtime, multitasking, multiprocessing message-passing system developed at the National Research Council of Canada.
- HelenOS – research and experimental operating system
- House – Haskell User's Operating System and Environment, research OS written in Haskell and C
- ILIOS – research OS designed for routing
- L4 – second-generation microkernel
- Mach – from OS kernel research at Carnegie Mellon University; see NeXTSTEP
- Nemesis – Cambridge University research OS with detailed quality-of-service abilities
- Singularity – experimental OS from Microsoft Research, written in managed code to be highly dependable
- Spring – research OS from Sun Microsystems
- THE multiprogramming system – by Dijkstra in 1968, at the Eindhoven University of Technology in the Netherlands; introduced the first form of software-based memory segmentation, freeing programmers from being forced to use actual physical locations
- Thoth – realtime, multiprocess message-passing system developed at the University of Waterloo.
- V – from Stanford, early 1980s
- Verve – OS designed by Microsoft Research to be verified end-to-end for type safety and memory safety
- Xinu – study OS developed by Douglas E. Comer in the United States

Disk operating systems (DOS)
- 86-DOS (developed at Seattle Computer Products by Tim Paterson for the new Intel 808x CPUs; licensed to Microsoft, it became PC DOS/MS-DOS. Also known by its working title QDOS.)
- PC DOS (IBM's DOS variant, developed jointly with Microsoft; versions 1.0–7.0, 2000, 7.10)
- MS-DOS (Microsoft's now-abandoned DOS variant for OEMs, developed jointly with IBM; versions 1.x–6.22)
- Concurrent CP/M-86 3.1 (BDOS 3.1) with PC-MODE (Digital Research's successor of CP/M-86 and MP/M-86)
- Concurrent DOS 3.1-4.1 (BDOS 3.1-4.1)
- Concurrent PC DOS 3.2 (BDOS 3.2) (Concurrent DOS variant for IBM compatible PCs)
- DOS Plus 1.1, 1.2 (BDOS 4.1), 2.1 (BDOS 5.0) (single-user, multi-tasking system derived from Concurrent DOS 4.1-5.0)
- Concurrent DOS 8-16 (dual-processor variant of Concurrent DOS for 8086 and 8080 CPUs)
- Concurrent DOS 286 1.x
- FlexOS 1.00-2.34 (derivative of Concurrent DOS 286)
- FlexOS 186 (variant of FlexOS for terminals)
- FlexOS 286 (variant of FlexOS for hosts)
- Siemens S5-DOS/MT (industrial control system based on FlexOS)
- IBM 4680 OS (POS operating system based on FlexOS)
- IBM 4690 OS (POS operating system based on FlexOS)
- Toshiba 4690 OS (POS operating system based on IBM 4690 OS and FlexOS)
- FlexOS 386 (later variant of FlexOS for hosts)
- IBM 4690 OS (POS operating system based on FlexOS)
- Toshiba 4690 OS (POS operating system based on IBM 4690 OS and FlexOS)
- Concurrent DOS 386 1.0, 1.1, 2.0, 3.0 (BDOS 5.0-6.2)
- Concurrent DOS 386/MGE (Concurrent DOS 386 variant with advanced graphics terminal capabilities)
- Multiuser DOS 5.0, 5.01, 5.1 (BDOS 6.3-6.6) (successor of Concurrent DOS 386)
- CCI Multiuser DOS 5.0-7.22 (up to BDOS 6.6)
- Datapac Multiuser DOS
- Datapac System Manager 7 (derivative of Datapac Multiuser DOS)
- IMS Multiuser DOS 5.1, 7.0, 7.1 (BDOS 6.6-6.7)
- IMS REAL/32 7.50, 7.51, 7.52, 7.53, 7.54, 7.60, 7.61, 7.62, 7.63, 7.70, 7.71, 7.72, 7.73, 7.74, 7.80, 7.81, 7.82, 7.83, 7.90, 7.91, 7.92, 7.93, 7.94, 7.95 (BDOS 6.8 and higher) (derivative of Multiuser DOS)
- IMS REAL/NG (successor of REAL/32)
- Concurrent DOS XM 5.0, 5.2, 6.0, 6.2 (BDOS 5.0-6.2) (real-mode variant of Concurrent DOS with EEMS support)
- DR DOS 3.31, 3.32, 3.33, 3.34, 3.35, 5.0, 6.0 (BDOS 6.0-7.1) (single-user, single-tasking native DOS derived from Concurrent DOS 6.0)
- Novell PalmDOS 1 (BDOS 7.0)
- Novell DR DOS "StarTrek"
- Novell DOS 7 (single-user, multi-tasking system derived from DR DOS, BDOS 7.2)
- Novell DOS 7 updates 1-10 (BDOS 7.2)
- Caldera OpenDOS 7.01 (BDOS 7.2)
- Enhanced DR-DOS 7.01.0x (BDOS 7.2)
- Dell Real Mode Kernel (DRMK)
- Novell DOS 7 updates 11-15.2 (BDOS 7.2)
- Caldera DR-DOS 7.02-7.03 (BDOS 7.3)
- DR-DOS "WinBolt"
- OEM DR-DOS 7.04-7.05 (BDOS 7.3)
- OEM DR-DOS 7.06 (PQDOS)
- OEM DR-DOS 7.07 (BDOS 7.4/7.7)
- FreeDOS (open source DOS variant)
- ProDOS (operating system for the Apple II series computers)
- PTS-DOS (DOS variant by the Russian company Phystechsoft)
- TurboDOS (Software 2000, Inc.) for Z80 and Intel 8086 processor-based systems

Multi-tasking user interfaces and environments for DOS
- DESQview + QEMM 386 – multi-tasking user interface for DOS
- DESQView/X (X-windowing GUI for DOS)

Network operating systems
- Banyan VINES – by Banyan Systems
- Cambridge Ring
- Cisco IOS – by Cisco Systems
- Cisco NX-OS – previously SAN-OS
- CTOS – by Convergent Technologies, later acquired by Unisys
- Data ONTAP – by NetApp
- ExtremeWare – by Extreme Networks
- ExtremeXOS – by Extreme Networks
- Fabric OS – by Brocade
- JunOS – by Juniper
- NetWare – networking OS by Novell
- Network operating system (NOS) – developed by CDC for use in their Cyber line of supercomputers
- Novell Open Enterprise Server – open source networking OS by Novell. Can incorporate either SUSE Linux or Novell NetWare as its kernel
- Plan 9 – distributed OS developed at Bell Labs, based on Unix design principles but not functionally identical
- Inferno – distributed OS derived from Plan 9, originally from Bell Labs
- SONiC
- TurboDOS – by Software 2000, Inc.
Generic, commodity, and other
- BLIS/COBOL
- A2 – formerly named Active Object System (AOS), and then Bluebottle (a concurrent and active object update to the Oberon operating system)
- BS1000 by Siemens AG
- BS2000 by Siemens AG, now BS2000/OSD from Fujitsu-Siemens Computers (formerly Siemens Nixdorf Informationssysteme)
- BS3000 by Siemens AG (functionally similar to OS-IV and MSP from Fujitsu)
- Contiki for various, mostly 8-bit systems, including the Apple II series, the Atari 8-bit family, and some Commodore machines.
- FLEX9 (by Technical Systems Consultants (TSC) for Motorola 6809 based machines; successor to FLEX, which was for Motorola 6800 CPUs)
- Graphics Environment Manager (GEM) (windowing GUI for CP/M, DOS, and Atari TOS)
- GEOS (popular windowing GUI for PC, Commodore, and Apple computers)
- JavaOS
- JNode (Java New Operating System Design Effort), written 99% in Java (native compiled), provides its own JVM and JIT compiler. Based on GNU Classpath.
- JX – Java operating system that focuses on a flexible and robust operating system architecture, developed as an open source system by the University of Erlangen.
- KERNAL (default OS on Commodore 64)
- MERLIN for the Corvus Concept
- MorphOS (Amiga compatible)
- MSP by Fujitsu (successor to OS-IV), now MSP/EX, also known as Extended System Architecture (EXA), for 31-bit mode
- NetWare (networking OS by Novell)
- Oberon (developed at ETH-Zürich by Niklaus Wirth et al.) for the Ceres and Chameleon workstation projects
- OSD/XC by Fujitsu-Siemens (BS2000 ported to an emulation on a Sun SPARC platform)
- OS-IV by Fujitsu (based on early versions of IBM's MVS)
- Pick (often licensed and renamed)
- PRIMOS by Prime Computer (sometimes spelled PR1MOS and PR1ME)
- Sinclair QDOS (multitasking for the Sinclair QL computer)
- SSB-DOS (by Technical Systems Consultants (TSC) for Smoke Signal Broadcasting; a variant of FLEX in most respects)
- SymbOS (GUI based multitasking operating system for Z80 computers)
- Symobi (GUI based modern micro-kernel OS for x86, ARM and PowerPC processors, developed by Miray Software; used and developed further at the Technical University of Munich)
- TripOS, 1978
- TurboDOS (Software 2000, Inc.)
- UCSD p-System (portable complete programming environment/operating system/virtual machine developed by a long-running student project at UCSD; directed by Prof. Kenneth Bowles; written in Pascal)
- VOS by Stratus Technologies, with strong influence from Multics
- VOS3 by Hitachi for its IBM-compatible mainframes, based on IBM's MVS
- VM2000 by Siemens AG
- Visi On (first GUI for early PC machines; not commercially successful)
- VPS/VM (IBM based; the main operating system at Boston University for over 10 years.)
Hobby
- AROS – AROS Research Operating System (formerly known as Amiga Research Operating System)
- AtheOS – branched to become Syllable Desktop
- Syllable Desktop – a modern, independently originated OS; see AtheOS
- BareMetal
- DSPnano RTOS
- EmuTOS
- EROS – Extremely Reliable Operating System
- HelenOS – based on a preemptible microkernel design
- LSE/OS
- MenuetOS – extremely compact OS with GUI, written entirely in FASM assembly language
- KolibriOS – a fork of MenuetOS
- SerenityOS
- ToaruOS
- PonyOS

Embedded

Mobile operating systems
- DIP DOS on Atari Portfolio
- Embedded Linux (see also Linux for mobile devices)
- Android
- Flyme OS
- Replicant
- LineageOS
- See also: List of custom Android distributions
- Firefox OS
- Ångström distribution
- Familiar Linux
- Maemo, based on Debian, deployed on Nokia's Nokia 770, N800 and N810 Internet Tablets.
- OpenZaurus
- webOS from Palm, Inc., later Hewlett-Packard via acquisition, and most recently at LG Electronics through acquisition from Hewlett-Packard
- Access Linux Platform
- bada
- Openmoko Linux
- OPhone
- MeeGo (from the merger of Maemo & Moblin)
- Mobilinux
- MotoMagx
- Qt Extended
- Sailfish OS
- Tizen (earlier called LiMo Platform)
- Ubuntu Touch
- PostmarketOS
- Inferno (distributed OS originally from Bell Labs)
- Magic Cap
- MS-DOS on Poqet PC, HP 95LX, HP 100LX, HP 200LX, HP 1000CX, HP OmniGo 700LX
- NetBSD
- Newton OS on Apple MessagePad
- Palm OS from Palm, Inc; now spun off as PalmSource
- PEN/GEOS on HP OmniGo 100 and 120
- PenPoint OS
- Plan 9 from Bell Labs
- PVOS
- Symbian OS
- EPOC
- Windows CE, from Microsoft
- Pocket PC from Microsoft, a variant of Windows CE
- Windows Mobile from Microsoft, a variant of Windows CE
- Windows Phone from Microsoft
- DSPnano RTOS
- iOS
- watchOS
- tvOS
- iPod software
- iPodLinux
- iriver clix OS
- RockBox
- BlackBerry OS
- PEN/GEOS, GEOS-SC, GEOS-SE
- Palm OS
- Symbian platform (successor to Symbian OS)
- BlackBerry 10

Routers
- CatOS – by Cisco Systems
- Cisco IOS – originally Internetwork Operating System, by Cisco Systems
- Inferno – distributed OS originally from Bell Labs
- IOS-XR – by Cisco Systems
- JunOS – by Juniper Networks
- LCOS – by LANCOM Systems
- Linux
- OpenWrt
- DD-WRT
- LEDE
- Gargoyle
- LibreCMC
- Zeroshell
- RTOS – by Force10 Networks
- FreeBSD
- m0n0wall
- OPNsense
- pfSense
- See also: List of wireless router firmware projects

Other embedded
- Apache Mynewt
- ChibiOS/RT
- Contiki
- ERIKA Enterprise
- eCos
- NetBSD
- Nucleus RTOS
- NuttX
- Minix
- NCOS
- freeRTOS, openRTOS, safeRTOS
- OpenEmbedded (or Yocto Project)
- pSOS (Portable Software On Silicon)
- QNX – Unix-like real-time operating system, aimed primarily at the embedded systems market.
- REX OS – microkernel; usually an embedded cell phone OS
- RIOT
- ROM-DOS
- TinyOS
- ThreadX
- RT-Thread
- DSPnano RTOS
- Windows IoT – formerly Windows Embedded
- Windows CE
- Windows IoT Core
- Windows IoT Enterprise
- Wind River VxWorks RTOS
- Wombat – microkernel; usually real-time embedded
- Zephyr

LEGO Mindstorms
- brickOS
- leJOS

Capability-based
- Cambridge CAP computer – its operating system demonstrated the use of security capabilities, both in hardware and software, and was also a useful fileserver; implemented in ALGOL 68C
- Flex machine – custom microprogrammable hardware, with an operating system, (modular) compiler, editor, garbage collector and filing system all written in ALGOL 68.
- HYDRA – running on the C.mmp computer at Carnegie Mellon University; implemented in the programming language BLISS
- KeyKOS nanokernel
- EROS microkernel
- CapROS, EROS successor
- V – from Stanford, early 1980s

See also
- Comparison of operating systems
- Comparison of real-time operating systems
- Timeline of operating systems

Category links
- Operating systems
- Embedded operating systems
- Real-time operating systems

References

External links
- "List of Operating Systems". www.operating-system.org.

List of operating systems Computing-related lists Operating
735452
https://en.wikipedia.org/wiki/McEliece%20cryptosystem
McEliece cryptosystem
In cryptography, the McEliece cryptosystem is an asymmetric encryption algorithm developed in 1978 by Robert McEliece. It was the first such scheme to use randomization in the encryption process. The algorithm has never gained much acceptance in the cryptographic community, but it is a candidate for "post-quantum cryptography", as it is immune to attacks using Shor's algorithm and – more generally – measuring coset states using Fourier sampling. The algorithm is based on the hardness of decoding a general linear code (which is known to be NP-hard). For a description of the private key, an error-correcting code is selected for which an efficient decoding algorithm is known, and which is able to correct t errors. The original algorithm uses binary Goppa codes (subfield codes of geometric Goppa codes of a genus-0 curve over finite fields of characteristic 2); these codes can be efficiently decoded, thanks to an algorithm due to Patterson. The public key is derived from the private key by disguising the selected code as a general linear code. For this, the code's generator matrix G is perturbed by two randomly selected invertible matrices S and P (see below). Variants of this cryptosystem exist, using different types of codes. Most of them were proven less secure; they were broken by structural decoding. McEliece with Goppa codes has resisted cryptanalysis so far. The most effective attacks known use information-set decoding algorithms. A 2008 paper describes both an attack and a fix. Another paper shows that for quantum computing, key sizes must be increased by a factor of four, due to improvements in information set decoding. The McEliece cryptosystem has some advantages over, for example, RSA: encryption and decryption are faster. For a long time, it was thought that McEliece could not be used to produce signatures. However, a signature scheme can be constructed based on the Niederreiter scheme, the dual variant of the McEliece scheme.
One of the main disadvantages of McEliece is that the private and public keys are large matrices. For a standard selection of parameters, the public key is 512 kilobits long.

Scheme definition
McEliece consists of three algorithms: a probabilistic key generation algorithm which produces a public and a private key, a probabilistic encryption algorithm, and a deterministic decryption algorithm. All users in a McEliece deployment share a set of common security parameters: n, k, t.

Key generation
The principle is that Alice chooses a linear code C from some family of codes for which she knows an efficient decoding algorithm, and makes C public knowledge but keeps the decoding algorithm secret. Such a decoding algorithm requires not just knowing C, in the sense of knowing an arbitrary generator matrix, but requires one to know the parameters used when specifying C in the chosen family of codes. For instance, for binary Goppa codes, this information would be the Goppa polynomial and the code locators. Therefore, Alice may publish a suitably obfuscated generator matrix of C. More specifically, the steps are as follows:
- Alice selects a binary (n, k)-linear code C capable of (efficiently) correcting t errors from some large family of codes, e.g. binary Goppa codes. This choice should give rise to an efficient decoding algorithm A. Let G be any generator matrix for C. Any linear code has many generator matrices, but often there is a natural choice for this family of codes. Knowing this would reveal A, so it should be kept secret.
- Alice selects a random binary k×k non-singular matrix S.
- Alice selects a random n×n permutation matrix P.
- Alice computes the k×n matrix G' = SGP.
- Alice's public key is (G', t); her private key is (S, G, P). Note that G could be encoded and stored as the parameters used for selecting C.

Message encryption
Suppose Bob wishes to send a message m to Alice whose public key is (G', t):
- Bob encodes the message m as a binary string of length k.
- Bob computes the vector c' = mG'.
- Bob generates a random n-bit vector z containing exactly t ones (a vector of length n and weight t).
- Bob computes the ciphertext as c = c' + z.

Message decryption
Upon receipt of c, Alice performs the following steps to decrypt the message:
- Alice computes the inverse of P (i.e. P⁻¹).
- Alice computes ĉ = cP⁻¹.
- Alice uses the decoding algorithm A to decode ĉ to m̂.
- Alice computes m = m̂S⁻¹.

Proof of message decryption
Note that ĉ = cP⁻¹ = mSG + zP⁻¹, and that P is a permutation matrix, thus zP⁻¹ has weight t. The Goppa code C can correct up to t errors, and the word mSG is at distance at most t from ĉ. Therefore, the correct code word m̂ = mS is obtained, and multiplying with the inverse of S gives m = m̂S⁻¹, which is the plaintext message.

Key sizes
McEliece originally suggested security parameter sizes of n = 1024, k = 524, t = 50, resulting in a public key size of 524 × (1024 − 524) = 262,000 bits. Recent analysis suggests parameter sizes of n = 2048, k = 1751, t = 27 for 80 bits of security when using standard algebraic decoding, or n = 1632, k = 1269, t = 34 when using list decoding for the Goppa code, giving rise to public key sizes of 520,047 and 460,647 bits respectively. For resiliency against quantum computers, sizes of n = 6960, k = 5413, t = 119 with a Goppa code were proposed, giving a public key size of 8,373,911 bits. In its round 3 submission to the NIST post-quantum standardization, the highest level of security, level 5, is given for parameter sets 6688128, 6960119, and 8192128. The parameters are n = 6688, t = 128; n = 6960, t = 119; and n = 8192, t = 128, respectively.

Attacks
An attack consists of an adversary, who knows the public key (G', t) but not the private key, deducing the plaintext from some intercepted ciphertext c. Such attempts should be infeasible. There are two main branches of attacks for McEliece:

Brute-force / unstructured attacks
The attacker knows G', the generator matrix of an (n, k) code which is combinatorially able to correct t errors. The attacker may ignore the fact that G' is really the obfuscation of a structured code chosen from a specific family, and instead just use an algorithm for decoding with any linear code.
Several such algorithms exist, such as going through each codeword of the code, syndrome decoding, or information set decoding. Decoding a general linear code, however, is known to be NP-hard, and all of the above-mentioned methods have exponential running time. In 2008, Bernstein, Lange, and Peters described a practical attack on the original McEliece cryptosystem, using the information set decoding method by Stern. Using the parameters originally suggested by McEliece, the attack could be carried out in 2^60.55 bit operations. Since the attack is embarrassingly parallel (no communication between nodes is necessary), it can be carried out in days on modest computer clusters.

Structural attacks
The attacker may instead attempt to recover the "structure" of C, thereby recovering the efficient decoding algorithm A or another sufficiently strong, efficient decoding algorithm. The family of codes from which C is chosen completely determines whether this is possible for the attacker. Many code families have been proposed for McEliece, and most of them have been completely "broken" in the sense that attacks which recover an efficient decoding algorithm have been found, such as Reed-Solomon codes. The originally proposed binary Goppa codes remain one of the few suggested families of codes that have largely resisted attempts at devising structural attacks.

Post-quantum encryption candidate
A variant of this algorithm combined with NTS-KEM was entered into and selected during the second round of the NIST post-quantum encryption competition.

References

External links
(Submission to the NIST Post-Quantum Cryptography Standardization Project)

Public-key encryption schemes Code-based cryptography Post-quantum cryptography
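The key generation, encryption, and decryption steps described above can be made concrete with a toy sketch. The code below substitutes the tiny [7,4] Hamming code (n = 7, k = 4, t = 1) for a large binary Goppa code, so it is purely illustrative and offers no security whatsoever; all the matrix helpers and function names are our own, hand-rolled over GF(2), not from any cryptographic library.

```python
# Toy McEliece over GF(2) with the [7,4] Hamming code (n=7, k=4, t=1).
# Illustrative only: a real deployment uses large binary Goppa codes.
import random

def mul(A, B):  # matrix product over GF(2)
    return [[sum(a & b for a, b in zip(row, col)) % 2 for col in zip(*B)] for row in A]

def vecmul(v, M):  # row vector times matrix over GF(2)
    return [sum(a & b for a, b in zip(v, col)) % 2 for col in zip(*M)]

def inv(M):  # Gauss-Jordan inversion over GF(2); raises StopIteration if singular
    n = len(M)
    A = [row[:] + [int(i == j) for j in range(n)] for i, row in enumerate(M)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c])
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c and A[r][c]:
                A[r] = [x ^ y for x, y in zip(A[r], A[c])]
    return [row[n:] for row in A]

# Systematic generator matrix G = [I | A] and matching parity-check H = [A^T | I]
G = [[1,0,0,0,1,1,0],
     [0,1,0,0,1,0,1],
     [0,0,1,0,0,1,1],
     [0,0,0,1,1,1,1]]
H = [[1,1,0,1,1,0,0],
     [1,0,1,1,0,1,0],
     [0,1,1,1,0,0,1]]

def decode(c):  # syndrome decoding: fix at most one flipped bit, return message bits
    s = [sum(h & x for h, x in zip(row, c)) % 2 for row in H]
    if any(s):  # nonzero syndrome equals the column of H at the error position
        c = c[:]
        c[[list(col) for col in zip(*H)].index(s)] ^= 1
    return c[:4]  # systematic code: first k bits are the message

def keygen():
    while True:  # random invertible k-by-k scrambler S
        S = [[random.randint(0, 1) for _ in range(4)] for _ in range(4)]
        try:
            Sinv = inv(S)
            break
        except StopIteration:  # singular, try again
            pass
    perm = random.sample(range(7), 7)  # random n-by-n permutation P
    P = [[int(j == perm[i]) for j in range(7)] for i in range(7)]
    Gpub = mul(mul(S, G), P)  # public key G' = S G P (published together with t = 1)
    return Gpub, (Sinv, inv(P))

def encrypt(m, Gpub):
    c = vecmul(m, Gpub)          # c' = m G'
    c[random.randrange(7)] ^= 1  # add an error vector of weight t = 1
    return c

def decrypt(c, Sinv, Pinv):
    c1 = vecmul(c, Pinv)         # c P^-1 = m S G + (error of weight 1)
    mS = decode(c1)              # Hamming decoding yields m S
    return vecmul(mS, Sinv)      # (m S) S^-1 = m

Gpub, priv = keygen()
m = [1, 0, 1, 1]
assert decrypt(encrypt(m, Gpub), *priv) == m  # round trip recovers the message
```

A real parameter set such as n = 6960, k = 5413, t = 119 follows exactly the same algebra; only the Hamming syndrome table is replaced by an efficient Goppa decoder.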
41734688
https://en.wikipedia.org/wiki/Malware%20analysis
Malware analysis
Malware analysis is the study or process of determining the functionality, origin and potential impact of a given malware sample, such as a virus, worm, trojan horse, rootkit, or backdoor. Malware, or malicious software, is any computer software intended to harm the host operating system or to steal sensitive data from users, organizations or companies. Malware may include software that gathers user information without permission.

Use cases
There are three typical use cases that drive the need for malware analysis:
- Computer security incident management: If an organization discovers or suspects that some malware may have gotten into its systems, a response team may wish to perform malware analysis on any potential samples that are discovered during the investigation process to determine if they are malware and, if so, what impact that malware might have on the systems within the target organization's environment.
- Malware research: Academic or industry malware researchers may perform malware analysis simply to understand how malware behaves and the latest techniques used in its construction.
- Indicator of compromise extraction: Vendors of software products and solutions may perform bulk malware analysis in order to determine potential new indicators of compromise; this information may then feed the security product or solution to help organizations better defend themselves against attack by malware.

Types
The method by which malware analysis is performed typically falls under one of two types:
- Static malware analysis: Static or code analysis is usually performed by dissecting the different resources of the binary file without executing it and studying each component. The binary file can also be disassembled (or reverse engineered) using a disassembler such as IDA or Ghidra.
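The simplest form of this resource dissection is "static properties" collection: hashing the sample and pulling printable strings out of it, without ever executing the file. The sketch below is a minimal, hedged illustration of that idea; the function name and output layout are our own, not part of any particular analysis tool.

```python
# Minimal static-properties sketch: hash a sample and extract its printable
# strings without executing it. Illustrative only, not a full triage tool.
import hashlib
import re

def static_properties(path, min_len=4):
    with open(path, "rb") as f:
        data = f.read()
    return {
        "size": len(data),
        "md5": hashlib.md5(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
        # runs of printable ASCII, like the Unix `strings` utility
        "strings": [s.decode("ascii")
                    for s in re.findall(rb"[ -~]{%d,}" % min_len, data)],
    }
```

The resulting hashes can be looked up in known-malware databases, while the extracted strings often expose URLs, file paths, registry keys, or mutex names that guide the later, deeper stages of analysis.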
The machine code can sometimes be translated into assembly code, which can be read and understood by humans: the malware analyst can then read the assembly as it is correlated with specific functions and actions inside the program, make sense of the assembly instructions, and build a clearer picture of what the program is doing and how it was originally designed. Viewing the assembly allows the malware analyst/reverse engineer to get a better understanding of what is supposed to happen versus what is really happening, and to start to map out hidden actions or unintended functionality. Some modern malware is authored using evasive techniques to defeat this type of analysis, for example by embedding syntactic code errors that will confuse disassemblers but that will still function during actual execution.
- Dynamic malware analysis: Dynamic or behavioral analysis is performed by observing the behavior of the malware while it is actually running on a host system. This form of analysis is often performed in a sandbox environment to prevent the malware from actually infecting production systems; many such sandboxes are virtual systems that can easily be rolled back to a clean state after the analysis is complete. The malware may also be debugged while running, using a debugger such as GDB or WinDbg, to watch the malware's behavior and its effects on the host system step by step while its instructions are being processed. Modern malware can exhibit a wide variety of evasive techniques designed to defeat dynamic analysis, including testing for virtual environments or active debuggers, delaying execution of malicious payloads, or requiring some form of interactive user input.

Stages
Examining malicious software involves several stages, including, but not limited to, the following:
- Manual Code Reversing
- Interactive Behavior Analysis
- Static Properties Analysis
- Fully-Automated Analysis

References

Analysis Computer forensics
5182671
https://en.wikipedia.org/wiki/Il%20Filostrato
Il Filostrato
"Il Filostrato" is a poem by the Italian writer Giovanni Boccaccio, and the inspiration for Geoffrey Chaucer's Troilus and Criseyde and, through Chaucer, the Shakespeare play Troilus and Cressida. It is itself loosely based on Le Roman de Troie, by the 12th-century poet Benoît de Sainte-Maure. Il Filostrato is a narrative poem on a classical topic written in "royal octaves" (ottava rima) and divided into eight cantos. The title, a combination of Greek and Latin words, can be translated approximately as "laid prostrate by love". The poem has a mythological plot: it narrates the love of Troilo (Troilus), a younger son of Priam of Troy, for Criseida (Cressida), daughter of Calcas (Calchas). Although its setting is Trojan, Boccaccio's story is not taken from Greek myth, but from the Roman de Troie, a twelfth-century French medieval re-elaboration of the Trojan legend by Benoît de Sainte-Maure, known to Boccaccio through the Latin prose version by Guido delle Colonne (Historia destructionis Troiae). The plot of the Filostrato can be read as a roman à clef of Boccaccio's love for "Fiammetta"; indeed, the proem suggests as much. The atmosphere of the poem is reminiscent of that of the court of Naples, and the psychology of the characters is portrayed with subtle touches. There is no agreement on the date of its composition: according to some, it may have been written in 1335, whereas others consider it to date from 1340. Boccaccio also used the name for one of the three male narrators in The Decameron.

Plot summary
Calcas, a Trojan prophet, has foreseen the fall of the city and joined the Greeks. His daughter, Criseida, is protected from the worst consequences of her father's defection by Hector alone. Troilo sees the lovelorn glances of other young men attending a festival in the Palladium, but almost immediately he notices a young widow in mourning. This is Criseida.
Troilo falls in love with her but sees no sign that she shares his feelings, despite his efforts to attract attention by excelling in the battles before Troy. Troilo's close friend Pandaro (Pandarus), a cousin of Criseida, senses something is distressing him. He calls on Troilo, finding him in tears. Eventually Pandaro finds out the reason and agrees to act as go-between. Troilo, with Pandaro's help, eventually wins Criseida's hand. During a truce, Calcas persuades the Greeks to propose a hostage exchange: Criseida for Antenor. When the two lovers meet again, Troilo suggests elopement, but Criseida argues that he should not abandon Troy and that she should protect her honour. Instead she promises to meet him in ten days' time. The Greek hero Diomedes, supervising the hostage exchange, sees the parting looks of the two lovers and guesses the truth. But he falls in love with Criseida, and seduces her. She misses the appointment with Troilo, who dreams of a boar which he recognises as a symbol of Diomedes. Troilo rightly interprets the dream to mean that Criseida has switched her affections to the Greek, but Pandaro persuades him that this is his imagination. Criseida, meanwhile, sends letters professing a continuing love for Troilo. Troilo has his fears confirmed when his brother Deífobo (Deiphobus) returns to the city with the clothes that he has snatched in battle from Diomedes; on the garment is a clasp that belonged to Criseida. Troilo, infuriated, goes into battle to seek out Diomedes, killing a thousand men. He and Diomedes fight many times, but never manage to kill each other. Instead Troilo's life and his suffering are ended by Achilles. References This article incorporates material from the Spanish Wikipedia article Giovanni Boccaccio 14th-century poems Medieval Italian literature Italian poems Works by Giovanni Boccaccio
34694752
https://en.wikipedia.org/wiki/SS%20Peveril%20%281884%29
SS Peveril (1884)
SS (RMS) Peveril (I) No. 76307 – the first vessel in the company's history to be so named – was a packet steamer which was operated by the Isle of Man Steam Packet Company until she sank off Douglas following a collision with the steamer Monarch in 1899. Construction and dimensions Constructed in 1884 by the Barrow Shipbuilding Company, Barrow-in-Furness, Peveril was launched on Thursday 24 May 1884. The Barrow Shipbuilding Company also supplied Peveril's engines and boilers. The Peveril was, like her sister Fenella, schooner rigged. The wheelhouse was situated amidships and there was a flying bridge for the captain. Four repeating telegraphs by Chadburn were installed allowing direct communication with the engine room. Length 207'; beam 26'; depth 13'; with an i.h.p. of 1,200. Peveril had a design speed of 13.5 knots, but is recorded as reaching 15 knots on her acceptance sailing. Her passenger accommodation was well appointed, with the upholstering carried out by Messrs Townsend & Ward, Barrow. Peveril's lower saloon and ladies' cabins were heated by steam. Passenger capacity is recorded at 559, which was 55 more than her older sister, Fenella. Peveril had crew accommodation for 30. Service life Sister ship to Fenella, Peveril was intended for general cargo work in the main season and for passenger relief service in winter. In addition to this, Peveril also performed numerous summer cruises and excursions between Douglas and Ramsey. Peveril made her acceptance sailing from Barrow to Douglas on Saturday 21 June 1884, under the command of Capt. Keig. She left the Hilpsford Buoy at Ramsden Dock at 09:49, arriving at Douglas at 12:51, covering the 44 nautical miles at an average speed of about 14.5 knots. On nearing Douglas a gun was fired from the Peveril, and guns were also fired in celebration from the Fort Anne Hotel, with large cheering crowds reported to have assembled on the Victoria Pier. On board the Peveril were members of the Isle of Man Steam Packet Company Board and also Mr W. John, manager of the Barrow Shipbuilding Company.
Shortly after 14:00, having embarked a further group of dignitaries, the Peveril departed Douglas Harbour for a trial run to Maughold Head. During the course of the run, luncheon was served, and upon reaching Maughold Head a gun was fired from the Peveril, and she then continued into Ramsey Bay. On Wednesday 14 December 1887, the body of a woman was discovered between the casing and the boiler on the Peveril's port side. It was believed that the woman had secreted herself in the narrow passage for warmth and consequently suffocated. On the night of Wednesday 13 September 1893, the Peveril was involved in a collision with a small boat as she was making her way from the Victoria Pier to the inner harbour at Douglas. The small boat, named the Daisy, was on its way to put a light on the yacht Vision when she cut across the Peveril's path, and was cut in two. The solitary person on board the Daisy, John "Kitty" O'Neil, jumped clear just before impact and was subsequently picked out of the water by three dockers (David "Dawsey" Kewley, Paul Bridson and another man named Higgin), who took to a small boat in order to carry out the rescue. On Saturday 12 January 1895, the Peveril sustained damage whilst in the process of docking at Douglas. Under the command of Captain Hill, the Peveril had departed Liverpool on schedule bound for Douglas, but during the course of the passage she encountered severe weather in the form of a south-easterly gale. Challenging conditions awaited the Peveril as she approached Douglas, and as a consequence of the wind direction coupled with a large swell in the harbour, the decision was made for the Peveril to dock at the Battery Pier as opposed to the Victoria Pier. Although it was low water, Captain Hill then decided to proceed to Peel on the west coast of the Isle of Man so as to receive maximum shelter.
As she was breaking away from the pier, the Peveril swung round against the pier and broke one of her propellers, so that she then had to be taken into the inner harbour at Douglas for shelter. However, as she again proceeded to break away from the Battery Pier she was involved in another mishap. Being less manoeuvrable because of her disabled propeller, she struck her stern against the pier with such force that one of the plates on her stern was stove in and two of the piles of the fender of the pier were broken away by the impact. Finally the Peveril was positioned into the inner harbour, where she was moored at the North Quay. The damage sustained was promptly repaired and she was able to resume her schedule on Monday morning with only a minor delay. It was also during the course of this storm that the Douglas Lifeboat, Civil Service No 6, broke from her moorings at the Fort Anne Jetty and was discovered on the Sunday morning completely wrecked. When ten years old, she was fitted with electric lighting. Fifty-seven points were installed, and these installations were considered so successful that it was decided to install a similar lighting system to , and . On Thursday 23 January 1896, the Isle of Man's new Lieutenant Governor, Lord Henniker, was conveyed to the Isle of Man on board the Peveril. Mail and cargo Peveril was designed to carry a mixture of passengers and cargo. Her designation as a Royal Mail Ship (RMS) indicated that she carried mail under contract with the Royal Mail. A specified area was allocated for the storage of letters, parcels and specie (bullion, coins and other valuables). In addition, there was a considerable quantity of regular cargo, ranging from furniture to foodstuffs. Sinking After 15 years' service with the company's fleet, she was sunk off Douglas on the night of 16 September 1899, following a collision with the Monarch. The weather for the passage was fine, with a clear night sky, good visibility and a calm sea.
The Peveril, under the command of Capt. William Woods, departed Queen's Dock, Port of Liverpool at 19:50 and passed the Bar Lightship at 21:17, when she set a course bound for Douglas. Capt. Woods left the bridge shortly after this course was set, leaving First Officer Thomas Webb on the bridge. First Officer Webb was subsequently replaced by Second Officer J. Collister, but returned at 00:10, by which time the Peveril was maintaining her course and proceeding at full speed. At 00:25, as the Peveril was nearing Douglas, both First Officer Webb and the Peveril's lookout, A.B. Joseph Corris, observed the masthead light and then the port navigation lights of another vessel off the Peveril's starboard quarter, with the range decreasing and the bearing remaining constant. These were the lights of the steamer Monarch, making passage from Workington to Swansea. The Monarch (No. 90117) was an iron-built schooner-rigged steamer of 113 tons. She was built by McIlwaine, Lewis & Co., Belfast in 1885, and was owned and operated by Alexander King Ltd, Belfast. She was sailing under the command of her Master, Captain Alexander McCullough, with a crew of 10. At the time of the incident, Captain McCullough had been in command of the Monarch for 18 months, and had been in the employ of the Belfast Steam Ship Company for three and a half years. The Monarch had departed Workington at 19:30, carrying 360 tons of flue-ash (a valuable ore-bearing material) for the Villiers Spelter Company, Swansea. She arrived off St Bees Head at 20:35 and set a course for Skerries. As both vessels neared a position southeast of Douglas, Isle of Man, the Monarch's helmsman, F. Burns, and her lookout, George Caddell, spotted the light on the Peveril's masthead away to port. The starboard light of the Peveril and the port light of the Monarch maintained a constant bearing, and neither ship appeared to alter course.
Approximately two minutes before the collision, First Officer Webb ordered the Peveril's helm hard to starboard and gave two blasts on the ship's whistle. At the same time, Captain McCullough ordered "full astern" on the Monarch's ship's telegraph. At 01:00, 14 miles southeast of Douglas, the Monarch rammed the Peveril amidships, just abreast of the funnel, flooding the engine room. On the bridge of the Peveril at the time of collision, were First Officer Thomas Webb, Second Officer J. Collister, Corris the lookout and a helmsman. Upon collision, the Monarch rebounded clear of the Peveril, and as the Peveril shot ahead, First Officer Webb stopped engines. Following the collision all hands were immediately on deck, and Captain Woods, who was below at the time, took command. It was clear to Capt. Woods that the vessel would founder, and the necessary provisions were made to abandon ship. The Monarch stood by whilst the Peveril's lifeboats were lowered, which then made their way towards the Monarch. There were 30 crew members on board the Peveril and one passenger (Mr. Robert Henry Pitts, of Johannesburg, South Africa). The Peveril was carrying a full complement of cargo, valued at £7 per ton. On carrying out a muster upon reaching the Monarch, it was discovered that the ship's two Firemen (J. Crellin and J. Crowe), together with an engineer (Matthew Ruthen) were missing. First Officer Webb returned to the Peveril and was successful in assisting all three crew members to safety, clearing the lower part of the ship just as the stokehold became flooded. Also thought to be missing was Mr. John Howe, who was described as "an old blind fiddler, who earned his living by musically entertaining passengers onboard." However, after making his way to the stern of the vessel, he was able to lower himself into a lifeboat with the aid of a crew member. The Peveril sank stem first in 40 minutes. The position of the wreck of Peveril is given as . 
Aftermath The Monarch then made her way to Douglas Harbour with the Peveril's solitary passenger and her ship's company aboard, towing two of her lifeboats astern. Monarch arrived at the Victoria Pier at 04:00. However, the Monarch had also sustained serious damage. Her stem was stove in, and, had it not been for an extra-strong collision bulkhead, she might well also have foundered. On discharging the Peveril's crew and passenger, the Monarch moved across the harbour to the Red Pier, and then to the South Quay in order for repairs to be effected, where she attracted considerable attention from the public, with several thousand people reported to have visited the quay to view the damage. A report in the Ramsey Courier of Tuesday 19 September 1899 stated that the Monarch's bow was covered by canvas in order to obscure the result of the impact, but the damage could not be fully hidden. It could be seen in the shape of a hole extending several feet below the waterline, as well as damage to her plating stretching back approximately 20 feet as a consequence of striking the Peveril's belting. Mr. T. P. Ellison, Manager of the Isle of Man Steam Packet Company, was approached by several journalists, but declined to make any statement regarding the incident. He also refused to give his permission when asked if either Capt. Woods or First Officer Webb would be allowed to give an interview regarding the collision. Capt. Woods was described in a local paper as:- In accordance with the provisions of The Merchant Shipping Act 1894, both First Officer Webb of the Peveril and Captain McCullough of the Monarch were summoned to appear before an inquiry held at the Custom House, Douglas, on Monday 18 September 1899, presided over by the Receiver of Wrecks, Mr. M. J. Cahill, as to the events surrounding the loss of the Peveril. During the course of the hearing, unsurprisingly, Mr.
Webb blamed the Monarch stating:- As would be expected, during the course of his deposition, Capt. McCullough made a different assertion:- From the International Regulations for Preventing Collisions at Sea, it would appear that the crew of the Peveril were to blame. International Regulations for Preventing Collisions at Sea; Part B – Steering and sailing; Section II (for vessels in sight of one another); Article 15, on crossing situations, states: "When two power-driven vessels are crossing so as to involve risk of collision, the vessel which has the other on her starboard side shall keep out of the way and shall, if the circumstances of the case admit, avoid crossing ahead of the other vessel." The following Wednesday (20 September) wreckage from the RMS Peveril was found washed up on the beach at Lytham St Annes. The items included barrels of oil, cases of fish and butter and numerous deck chairs. Trivia Capt. William Woods was first officer on board the Mona when she was involved in a collision and sank in the Mersey in 1883. He was also the first officer on board Peveril's older sister Fenella when she went aground on the Half Tide Rock in the Menai Strait, on 9 September 1884. The Peveril had initially been scheduled to leave Liverpool at 08:00 on the morning of 16 September. However, owing to a technical fault with another steamer tasked to operate a schedule, Peveril's departure was re-arranged. In the event, the other vessel was subsequently ready to depart in time and took her own sailing, the Peveril leaving later. Even with the rescheduled timing, it was intended for the Peveril to depart Liverpool at 19:00, but owing to the tidal conditions in the Mersey, the sailing was delayed until 19:50. Had the Peveril sailed at the time for which she had originally been scheduled, or even the rescheduled time, the collision would not have occurred. Mr.
John Thomas Howe, the "old, blind fiddler," had been a seafarer, and had worked for the Harrison Line, rising to the rank of Chief Steward. However, as a consequence of cataracts, his eyesight began to fail him, and he took to playing music in an effort to "maintain himself and his family," which he had been doing for the previous 16 years. With the sinking of the Peveril, Mr. Howe lost his watch and chain, his clothes, his concertina and all the money he had earned that summer - his "hard-earned savings." First Officer Thomas Webb was the son of the then Mayor of Douglas, Mr. Samuel Webb. Amongst the cargo consignment on board the Peveril were several pictures belonging to the renowned Manx art nouveau designer, Archibald Knox. Knox subsequently brought a civil action against the Isle of Man Steam Packet Company in the Common Law Division of the High Court, Douglas, Isle of Man, on Monday 4 December 1899, in respect of a claim for the loss of his property in the sinking of the Peveril. The Isle of Man Steam Packet Company received a sum of £13,500 from their underwriters in respect of the loss of the Peveril. This was lodged with their bankers, and was subsequently lost, along with a large proportion of their cash reserves, in the Dumbell's Bank Crash of 1900. References Bibliography Chappell, Connery (1980). Island Lifeline. T. Stephenson & Sons Ltd Ships of the Isle of Man Steam Packet Company 1884 ships Shipwrecks in the Irish Sea Ferries of the Isle of Man Steamships Steamships of the United Kingdom Merchant ships of the United Kingdom Ships built in Barrow-in-Furness
26245730
https://en.wikipedia.org/wiki/College%20of%20Engineering%20Karunagappally
College of Engineering Karunagappally
The Government College of Engineering Karunagappally (CEK) is a public institute of engineering and technology in Karunagappally, in the north-west of Kollam district, Kerala, India. Established in 1999 by the Government of Kerala, it is the second engineering college in Kollam district and the fourth engineering college under the aegis of the state government's Institute of Human Resources Development in Electronics. The institute is affiliated to the A P J Abdul Kalam Technological University, recognised by the AICTE and accredited by the National Board of Accreditation (NBA). It is the second engineering college in the IEEE Kerala Section to win the prestigious IEEE Region 10 (Asia-Pacific) Exemplary Student Branch Award, and the only student branch in the Asia-Pacific region to win the IEEE MGA Regional Exemplary Student Branch Award twice in a row. The college offers four undergraduate programmes and two postgraduate programmes in the field of engineering and technology. Since 2012 it has been aided by the World Bank under the Government of India's TEQIP programme. History On 28 January 1987, the Government of Kerala established the Institute of Human Resources Development in Electronics (IHRDE) (now known as IHRD) in Thiruvananthapuram for the purpose of establishing a central institution which would work for the development of technical education in the state. The governing body of the IHRDE was headed by the then education minister of Kerala, K. Chandrasekharan. The first institute established under the IHRDE was the Model Polytechnic in Vadakara in 1988. In 1989 the Government Model Engineering College became the first engineering college established under the IHRDE. Former Minister of Food, Tourism, Law & Civil Supplies E.
Chandrasekharan Nair, who represented Karunagappally in the Kerala Legislative Assembly, put forward a proposal to start an engineering institute in Karunagappally under the auspices of the IHRDE; in 1999 CEK became the fourth engineering institute established under the IHRDE. The college started with three undergraduate courses in engineering: Information Technology, Computer Engineering and Electronics Engineering. The college received affiliation from the AICTE and the Cochin University of Science and Technology in 1999, and commenced teaching from the 2000 academic year. The college initially functioned from temporary premises in three rooms of a hall in Karunagappally town; it later shifted to a government godown in Karunagappally. The college was officially inaugurated by the Chief Minister of Kerala, E. K. Nayanar, two years after its establishment, on 4 February 2001. The college moved to a permanent campus in 2006 at Thodiyoor Gramapanchayathu, with the foundation stone of the administrative block laid by Minister of Education Nalakath Soopy on 12 February 2004 in the presence of Rajan Babu MLA. The college was selected in 2012 to participate in Phase II of the Technical Education Quality Improvement Programme, a project organised by the Government of India with assistance from the World Bank. The college was selected under component 1, sub-component 1.1 of the programme, for "strengthening institutions to improve learning outcomes and employability of graduates". In 2011 a Department of Electrical and Electronics Engineering was established, and a postgraduate programme was commenced by the Department of Computer Science and Engineering in the same year. In 2012 a second postgraduate programme, in electronics and communication engineering, was introduced.
In 2015 the college became affiliated with the A P J Abdul Kalam Technological University. In 2020 a Department of Mechanical Engineering was established, with a bachelor's course with an approved intake of 60 seats. Two of the college's bachelor's programmes (Computer Science, and Electronics and Communication) were accredited by the NBA in 2018 for three years. Administration The college functions under Kerala's Institute of Human Resources Development. The Chairman is the state's Education Minister and the Vice-Chairman is the Principal Secretary of the Higher Education Department. The college's principal is appointed by the IHRD Director. Department of Electrical and Electronics Engineering The Department of Electrical and Electronics Engineering was formed in 2011 and has an annual intake of 60 students. The department has the following laboratories: Basic Electrical Engineering Laboratory Electrical Machines I Laboratory (DC Machines) Electrical Machines II Laboratory (AC Machines) Electrical Measurements Laboratory Power Electronics Laboratory Advanced Electrical Engineering Laboratory. Department of Electronics and Communication Engineering The Department of Electronics and Communication Engineering was formed as the Electronics Engineering Department in 1999 and commenced teaching from the college's first batch of students in 2000. In 2012 the department started a Master of Technology course with a specialisation in signal processing. In 2020 the department's intake was reduced to 30.
The laboratories under the Department of Electronics & Communication Engineering are: Electronic Circuits and Digital Laboratory Microprocessor and Advanced Microcontroller Laboratory Digital Laboratory Communication and Microwave Laboratory Digital Signal Processing Laboratory EC Project Laboratory PG and Research Lab PG Signal Processing Lab Department of Computer Science and Engineering The Department of Computer Science and Engineering was formed as the Computer Engineering Department in 1999 and commenced teaching from the college's first batch of students in 2000. In 2011 the department started a Master of Technology course with a specialisation in image processing. The Department of Computer Science and Engineering is equipped with the following laboratories: Programming Laboratory Internet Laboratory Hardware/Networking Laboratory Project Laboratory - 2 PG and Research Lab PG Image Processing Lab Department of Mechanical Engineering The Department of Mechanical Engineering was formed in 2020 and has an annual intake of 60 students. The department has the following laboratories: Basic Mechanical Workshop Graphics Drawing Hall Machine Workshop Department of Information Technology The Department of Information Technology was formed in 1999 and commenced teaching from the college's first batch of students in 2000 with an intake of 45 students. Later the intake was reduced to 30 students. The department closed in 2015. The Department of Information Technology laboratories are: Systems and Application Laboratory Internet Laboratory Multimedia Laboratory Department of Applied Science The Department of Applied Science teaches mathematics, physics, chemistry and humanities. The Department of Applied Science laboratories are: Language Lab Basic Physics Laboratory Basic Chemistry Laboratory Department of General Engineering The Department of General Engineering is engaged in teaching basic engineering subjects and applied science subjects.
This department covers subjects such as civil engineering and technical communication. The Department of General Engineering laboratories/workshops are: Basic Civil Workshop Department of Physical Education The Department of Physical Education supports the physical and mental health of students. It maintains a main ground near the Administration Block and a smaller ground near the MLA Block. Admission Undergraduate Programmes Admission under both the Merit Quota and the Management Quota is based on the rank secured in the All Kerala Engineering Entrance Examination conducted by the Commissioner for Entrance Examinations, Government of Kerala. The difference between the merit quota and the management quota is in the amount of fees that have to be paid by the candidates. 50% of seats are under the Merit Quota and 45% of seats are under the Management Quota; the fees for these seats are fixed by the fee regulatory committee of the Government of Kerala. The remaining 5% of seats are reserved for the children of Non-Resident Indians (NRIs); these seats are filled purely on the basis of the marks obtained in the qualifying examination. Annual intake to the Bachelor of Technology course is as follows: Computer Science and Engineering: 60 seats (+10% lateral entry students) Electronics and Communication Engineering: 30 seats (+10% lateral entry students) Electrical and Electronics Engineering: 60 seats (+10% lateral entry students) Mechanical Engineering: 60 seats (+10% lateral entry students) Postgraduate Programmes Admission to the college's postgraduate programmes is through the Graduate Aptitude Test in Engineering exam, administered and conducted jointly by the Indian Institute of Science and the Indian Institutes of Technology. Admission to sponsored seats is as per the Government of Kerala and All India Council for Technical Education rules.
Annual intake to the Master of Technology course is as follows: Electronics Engineering (with specialisation in signal processing): 24 seats Computer Science Engineering (with specialisation in image processing): 24 seats Campus The college's campus is located 3 km from Karunagappally, in the northwest of Kollam district, near Veluthamanal and Driver Junction. The Government renamed the Veluthamanal - Driver Junction road as "Engineering College Road" following the college's relocation to the site. The 28-acre campus is the largest of those institutions under the IHRD. The design of the administration building was inspired by Hindu temple architecture. Initially, it had two floors with eight classrooms, one seminar hall, one computer lab and an office. A third floor was later added for first-year students' classrooms. The campus consists of an Administration Block, MLA Block, MP Block and NABARD Block, with a separate laboratory for physics and chemistry, and a mechanical workshop with a drawing hall. Technical Paper Conference The IEEE Industrial Application Society approved India's first IEEE student-led technical paper conference at the College of Engineering Karunagappally, named the 2020 IEEE International Power and Renewable Energy Conference (IPRECON 2020), technically and financially sponsored by the IEEE Industrial Application Society. Awards and Global Achievements 1. IEEE MGA Regional Exemplary Student Branch Award 2018 (declared at the Asia Pacific Student Young Professional Women in Engineering and Life Members Congress at Bali, Indonesia) 2. IEEE MGA Regional Exemplary Student Branch Award 2019 3. IEEE MGA Darrel Chong Student Activities Award - Gold 2019 (for organising the innovative technical event Luxathon 1.O) 4. IEEE MGA Global Outstanding Counselor Award 2019 - Prof. Sabeena K (for outstanding contribution towards IEEE technical activities) 5. IEEE Industrial Application Society Global Website Contest first prize 2019 - Webmaster: Mr. Akshay Krishnan 6.
IEEE Industrial Application Society Outstanding Member Award 2020 (Global) - Chapter Advisor: Prof. Sabeena K CEK Student Senate The Students Executive Senate, simply known as the CEK Senate, is the supreme student body. The members of the senate are elected by and from the students, with one representative and one girl representative from each year. The objectives of the senate are to train the students of the college in their duties, responsibilities and rights; to organise debates, seminars, group discussions, work squads and tours; and to encourage sports, arts and other cultural, social or recreational activities. The first Senate was formed in 2000 with six members. Department Associations ELEKTOR ELEKTOR is the association of Electrical & Electronics Engineering at the College of Engineering Karunagappally. The main motive of this association is to nurture talents and ideas in electrical engineering, providing a platform for students to share their ideas and to interact with others. It was inaugurated by Dr Jayaraju, Director of ANERT, on 31 July 2013. CECIA It is the association of the Computer Science Engineering and Information Technology departments. ASECTRON It is the association of the Electronics and Communication Engineering department. Student Organizations IEEE Student Branch IEEE Student Branch College of Engineering Karunagappally (STB 03951) is one of the oldest IEEE student branches in the Kerala Section, the fourth branch in the Travancore hub after CET, TKMCE and CE Adoor, and the second IEEE Student Branch in Kollam (Quilon) district. The group came into function on 11 September 2000, early in the history of the institution, and was officially formed as an organizational unit on 13 May 2009. Student members of CEK take active part in IEEE meetings and workshops, earning many prizes and awards. In 2010 the group started a WIE (Women in Engineering) affinity group.
Today IEEE SB CEK has five technical society student branch chapters and one affinity group. IEEE Women in Engineering CEK The IEEE Women in Engineering affinity group of CEK, working under IEEE SB CEK, started in 2010. It is one of the oldest WIE AGs of the Travancore hub (former Hub No 1). It works to bring more women into the engineering field and develop their abilities in technology. The AG started with the help of former active IEEE member Seena Susan George. IEEE Computer Society CEK The IEEE Computer Society student chapter of CEK, working under IEEE SB CEK, started in 2015 with 20 student members, at the request of Computer Science students wishing to learn more about computing and its designated field. IEEE Power & Energy Society CEK IEEE Power & Energy Society CEK started in 2016, at the request of electrical engineering students; the official inauguration was performed by the IEEE PES Chair of the Kerala Section. IEEE Industrial Application Society CEK IEEE Industrial Application Society CEK started in 2017, at the request of electrical engineering students. IEEE Robotics and Automation Society CEK IEEE Robotics and Automation Society CEK started in 2018, at the request of EC and CS students. IEEE Nuclear Plasma Science Society/Power Electronics Society Joint Chapter CEK IEEE Nuclear Plasma Science Society/Power Electronics Society Joint Chapter CEK started in 2019, the first joint student branch chapter in the IEEE Kerala Section. ISTE Student Chapter CEK In 2009 the college established an Indian Society for Technical Education chapter. CEK NSS Unit The National Service Scheme (NSS) is an Indian government-sponsored public service programme conducted by the Department of Youth Affairs and Sports of the Government of India. Popularly known as the NSS, the scheme was launched in Gandhiji's centenary year, 1969. Aimed at developing students' personalities through community service, the NSS is a voluntary association of young people in colleges, universities and at the +2 level, working for a campus-community linkage.
The CEK started its NSS unit in 2000; it is a government-funded unit under CUSAT with unit number KL 02 004. Later, in 2015, a self-financing unit was started under the NSS Technical Cell. Today CEK has two active NSS units. CEK IEDC The Kerala Startup Mission (KSUM) has been actively initiating various programmes for developing student entrepreneurship in the state. The Government of Kerala declared its startup policy with the aim of accelerating the growth of student entrepreneurs, and KSUM, as the nodal agency for implementing the policy, has come up with various schemes for its effective implementation. The schemes cover a broad area, from schools and colleges to young entrepreneurs. With the help of the Startup Mission, CEK started an IEDC in 2016. Alumni Alumni interaction is maintained through the Alumni Network under the aegis of the staff of Alumni Affairs and International Relations, office staff and student representatives. It also helps in conducting the annual alumni meets. The students of CEK are known as CEKians, and the alumni are known as XCEKians. References Engineering colleges in Kollam district Educational institutions established in 1999 1999 establishments in Kerala
3830615
https://en.wikipedia.org/wiki/Bootcfg
Bootcfg
In computing, bootcfg is a command on Microsoft Windows NT-based operating systems which acts as a wrapper for editing the boot.ini file.

Overview
The command is used to configure, query, or change Boot.ini file settings. A similar command exists in the Recovery Console for repairing or rebuilding boot configuration files. Though NTLDR and boot.ini are no longer used to boot Windows Vista and later versions of Windows NT, those versions ship with the bootcfg utility regardless. This is to handle boot.ini in the case that a multi-boot configuration with previous versions of Windows exists and needs troubleshooting from within the later operating system. Windows Vista and later versions will warn users who run bootcfg that BCDEdit is the correct command to modify their booting options.

Syntax
The command-syntax is:

bootcfg <parameter> [arguments...]

Parameters
addsw – Add operating system load options
copy – Make a copy of an existing boot entry
dbg1394 – Configure 1394 port debugging
debug – Add or change debug settings
default – Specify the default operating system entry
delete – Delete an operating system entry
ems – Add or change settings for redirection of the Emergency Management Services console
query – Query and display [boot loader] and [operating systems] section entries
raw – Add operating system load options
rmsw – Remove operating system load options
timeout – Change the operating system time-out value

References

Further reading

External links
bootcfg | Microsoft Docs

Windows administration
Windows commands
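As an illustration (not part of the original article), two typical invocations might look like the following; on Windows XP and Server 2003 the parameters listed above are given as slash-prefixed switches:

```bat
REM Display the [boot loader] and [operating systems] entries from Boot.ini
bootcfg /query

REM Change the boot menu time-out to 10 seconds
bootcfg /timeout 10
```

Consult the Microsoft documentation for the exact switch names and arguments on a given Windows version.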
87966
https://en.wikipedia.org/wiki/The%20Age%20of%20Spiritual%20Machines
The Age of Spiritual Machines
The Age of Spiritual Machines: When Computers Exceed Human Intelligence is a non-fiction book by inventor and futurist Ray Kurzweil about artificial intelligence and the future course of humanity. First published in hardcover on January 1, 1999 by Viking, it has received attention from The New York Times, The New York Review of Books and The Atlantic. In the book Kurzweil outlines his vision for how technology will progress during the 21st century. Kurzweil believes evolution provides evidence that humans will one day create machines more intelligent than they are. He presents his law of accelerating returns to explain why "key events" happen more frequently as time marches on. It also explains why the computational capacity of computers is increasing exponentially. Kurzweil writes that this increase is one ingredient in the creation of artificial intelligence; the others are automatic knowledge acquisition and algorithms like recursion, neural networks, and genetic algorithms. Kurzweil predicts machines with human-level intelligence will be available from affordable computing devices within a couple of decades, revolutionizing most aspects of life. He says nanotechnology will augment our bodies and cure cancer even as humans connect to computers via direct neural interfaces or live full-time in virtual reality. Kurzweil predicts the machines "will appear to have their own free will" and even "spiritual experiences". He says humans will essentially live forever as humanity and its machinery become one and the same. He predicts that intelligence will expand outward from earth until it grows powerful enough to influence the fate of the universe. Reviewers appreciated Kurzweil's track record with predictions, his ability to extrapolate technology trends, and his clear explanations. However, there was disagreement on whether computers will one day be conscious. 
Philosophers John Searle and Colin McGinn insist that computation alone cannot possibly create a conscious machine. Searle deploys a variant of his well-known Chinese room argument, this time tailored to computers playing chess, a topic Kurzweil covers. Searle writes that computers can only manipulate symbols which are meaningless to them, an assertion which, if true, subverts much of the vision of the book.

Background
Ray Kurzweil is an inventor and serial entrepreneur. When The Age of Spiritual Machines was published he had already started four companies: Kurzweil Computer Products, Inc., which created optical character recognition and image scanning technology to assist the blind; Kurzweil Music Systems, which developed music synthesizers with high-quality emulation of real instruments; Kurzweil Applied Intelligence, which created speech recognition technology; and Kurzweil Educational Systems, which made print-to-speech reading technology. Critics say predictions from his previous book The Age of Intelligent Machines "have largely come true" and "anticipated with uncanny accuracy most of the key computer developments" of the 1990s. After this book was published he went on to expand upon its ideas in a follow-on book, The Singularity Is Near. Today Ray Kurzweil works at Google, where he is attempting to "create a truly useful AI [artificial intelligence] that will make all of us smarter".

Content

Law of accelerating returns
Kurzweil opens by explaining that the frequency of universe-wide events has been slowing down since the Big Bang while evolution has been reaching important milestones at an ever-increasing pace. This is not a paradox, he writes: entropy (disorder) is increasing overall, but local pockets of increasing order are flourishing. Kurzweil explains how biological evolution leads to technology, which leads to computation, which leads to Moore's law.
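The exponential growth Kurzweil describes can be illustrated with a toy calculation. This is a sketch, not taken from the book; the 1.5-year doubling period is an illustrative assumption, not Kurzweil's figure.

```python
# Toy model of exponentially growing price-performance of computing.
# The doubling period below is an illustrative assumption.

def compute_per_dollar(years: float, start: float = 1.0,
                       doubling_years: float = 1.5) -> float:
    """Compute power per dollar after `years`, doubling every `doubling_years`."""
    return start * 2 ** (years / doubling_years)

# Over 30 years with a 1.5-year doubling period, capacity per dollar
# grows by a factor of 2**20 -- roughly a million.
print(compute_per_dollar(30) / compute_per_dollar(0))  # 1048576.0
```

The point of the sketch is only that, under steady doubling, modest spans of time produce enormous multiplicative gains, which is the intuition behind the law of accelerating returns.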
Kurzweil unveils several laws of his own related to this progression, leading up to his law of accelerating returns, which says time speeds up as order increases. He believes Moore's law will end "by the year 2020", but that the law of accelerating returns mandates progress will continue to accelerate; therefore some replacement technology will be discovered or perfected to carry on the exponential growth. As in The Age of Intelligent Machines, Kurzweil argues here that evolution has an intelligence quotient just slightly greater than zero. He says it is not higher than that because evolution operates so slowly, and intelligence is a function of time. Kurzweil explains that humans are far more intelligent than evolution, based on what we have created in the last few thousand years, and that in turn our creations will soon be more intelligent than us. The law of accelerating returns predicts this will happen within decades, Kurzweil reveals.

Philosophy of mind
Kurzweil introduces several thought experiments related to brain implants and brain scanning; he concludes we are not a collection of atoms, instead we are a pattern which can manifest itself in different mediums at different times. He tackles the mystery of how self-awareness and consciousness can arise from mere matter, but without resolution. Based partly on his Unitarian religious education, Kurzweil feels "all of these views are correct when viewed together, but insufficient when viewed one at a time", while at the same time admitting this is "contradictory and makes little sense". Kurzweil defines the spiritual experience as "a feeling of transcending one's everyday physical and mortal bounds to sense a deeper reality". He elaborates that "just being—experiencing, being conscious—is spiritual, and reflects the essence of spirituality".
In the future, Kurzweil believes, computers will "claim to be conscious, and thus to be spiritual", and he concludes that "twenty-first-century machines" will go to church, meditate, and pray to connect with this spirituality.

Artificial intelligence
Kurzweil says Alan Turing's 1950 paper Computing Machinery and Intelligence launched the field of artificial intelligence. He admits that early progress in the field led to wild predictions of future successes which did not materialize. Kurzweil feels intelligence is the "ability to use optimally limited resources" to achieve goals. He contrasts recursive solutions with neural nets; he likes both, but specifically mentions how valuable neural nets are since they destroy information during processing, which, if done selectively, is essential to making sense of real-world data. A neuron either fires or not, "reducing the babble of its inputs to a single bit". He also greatly admires genetic algorithms, which mimic biological evolution to great effect. Recursion, neural nets and genetic algorithms are all components of intelligent machines, Kurzweil explains.

Beyond algorithms
Kurzweil says the machines will also need knowledge. The emergent techniques, neural nets and genetic algorithms, require significant training effort above and beyond creating the initial machinery. While hand-coded knowledge is tedious and brittle, acquiring knowledge through language is extremely complex.

Building new brains
To build an artificial brain requires formulas, knowledge and sufficient computational power, explains Kurzweil. He says "by around the year 2020" a $1,000 personal computer will have enough speed and memory to match the human brain, based on the law of accelerating returns and his own estimates of the computational speed and memory capacity of the brain.
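The single-bit neuron behavior Kurzweil describes in the artificial intelligence section can be sketched as a simple threshold unit. The weights and threshold below are arbitrary illustrative values, not an example from the book.

```python
# A McCulloch-Pitts-style threshold neuron: it collapses many weighted
# inputs into a single bit (fire / don't fire), discarding everything
# else -- the selective "information destruction" described above.

def neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs reaches the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Three noisy real-valued inputs are reduced to one bit of output.
print(neuron([0.9, 0.2, 0.7], [1.0, 0.5, 1.0], threshold=1.5))  # 1
print(neuron([0.1, 0.2, 0.1], [1.0, 0.5, 1.0], threshold=1.5))  # 0
```

A trainable neural net adjusts the weights from examples; this sketch only shows the thresholding step that turns a "babble" of inputs into a single bit.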
Kurzweil predicts Moore's law will last until 2020, so current integrated circuits should come close to human brain levels of computation, but he says three-dimensional chips will be the next big technology, followed potentially by optical computing, DNA computing, nanotubes, or quantum computing. Kurzweil feels the best model for an artificial brain is a real human brain, and suggests slicing up and digitizing preserved human brains or examining them non-invasively as technology permits. Kurzweil differentiates between scanning the brain to understand it, in a generic fashion, and scanning a particular person's brain in order to preserve it in exact detail, for "uploading" into a computer for example. The latter is much harder to do, he notes, because it requires capturing much more detail, but it will eventually happen as well. When it does "we will be software, not hardware" and our mortality will become a function of our ability to "make frequent backups".

Building new bodies
Kurzweil notes that many thoughts people have are related to their bodies, and reassures that he does not believe in creating a disembodied brain. He reviews all the various body implants that existed when the book was published, explaining that our bodies are already becoming more synthetic over time. Kurzweil says this trend will continue and that the technology will advance from macroscopic implants, to cellular-sized insertions, and finally to nanotechnology. Nanotechnology has the potential to reshape the entire world, Kurzweil exclaims. Assembling materials molecule by molecule could solve energy problems, cure cancer and other diseases, strengthen our bodies, and produce self-assembling food, clothing, and buildings. Kurzweil admits that nanotechnology carries a big risk; a self-replicating substance, without the constraints of a living organism, might grow out of control and consume everything.
However, he points out that today there are already technologies which pose grave risks, for example nuclear power and weapons, and we have managed to keep them relatively safe, so he feels we can probably do the same with nanotechnology. Finally Kurzweil says there is the prospect of virtual bodies, where direct neural implants would give us the sensation of having bodies and a way to exert control, without any physical manifestation at all. He quickly brings things back to nanotechnology, though, by pointing out that sufficiently advanced nanotechnology will be like having a virtual world, since "utility fog" will appear to be entirely absent and then instantly morph into functional physical shapes. Kurzweil broaches the topic of sex in futuristic times, reminding us that every new technology "adopts sexual themes". Kurzweil envisions virtual sex and sexbots, as well as more chaste activities like strolling along a "virtual Cancún beach".

State of the art
Kurzweil explains that in 1999 computers are essential to most facets of life, yet he predicts no major disruption related to the then-pending Y2K problem. He says computers are narrow-minded and brittle so far, but suggests in specific domains they are showing signs of intelligence. As examples Kurzweil cites computer-generated or assisted music, and tools for the automatic or semi-automatic production of literature or poetry. He shows examples of paintings by AARON, as programmed by Harold Cohen, which can be automatically created. Kurzweil reviews some of his predictions from The Age of Intelligent Machines and various past presentations, and is very pleased with his record. Finally he predicts a new Luddite movement as intelligent machines take away jobs, although he predicts a net gain of new and better jobs.

Predictions
Kurzweil has a dense chapter of predictions for each of these years: 2009, 2019, 2029, 2099.
For example, when discussing the year 2009 he makes many separate predictions related to computer hardware, education, people with disabilities, communication, business and economics, politics and society, the arts, warfare, health and medicine, and philosophy. As one example he predicts a 2009 computer will be a tablet or smaller-sized device with a high-quality but somewhat conventional display, while in 2019 computers are "largely invisible" and images are mostly projected directly into the retina, and by 2029 he predicts computers will communicate through direct neural pathways. Similarly, in 2009 he says there is interest and speculation about the Turing test; by 2019 there are "prevalent reports" of computers passing the test, but not rigorously; while by 2029 machines "routinely" pass the test, although there is still controversy about how machine and human intelligence compare. In 2009 he writes it will take a supercomputer to match the power of one human brain, in 2019 $4,000 will accomplish the same thing, while in 2029 $1,000 will buy the equivalent of 1,000 human brains. Dollar figures are in 1999 dollars. Kurzweil predicts life expectancy will rise to "over one hundred" by 2019, to 120 by 2029, and will be indefinitely long by 2099, as humans and computers will have merged.

Molly
The book features a series of sometimes humorous dialogs between an initially unnamed character, later revealed to be a young woman named Molly, and the author. For most of the book she serves as proxy for the reader, asking the author for clarification, challenging him, or otherwise eliciting additional commentary about the current chapter. For example:

So I'll be able to download memories of experiences I've never had?
Yes, but someone has probably had the experience. So why not have the ability to share it?
I suppose for some experiences, it might be safer to just download the memories. Less time-consuming also.
Do you really think that scanning a frozen brain is feasible today?
Sure, just stick your head in my freezer here.

Later in the book, during the prediction chapters, Molly seems to inhabit whatever year the predictions are about, while Kurzweil remains in the present. So Kurzweil starts questioning her about how things are in the future, and her lines serve as additional predictions or commentary. For example:

No, I'm talking about real reality now. For example, I can see that Jeremy is two blocks away, headed in this direction.
An embedded chip?
That's a reasonable guess. But it's not a chip exactly. It's one of the first useful nanotechnology applications. You eat this stuff.
Stuff?
Yeah, it's a paste, tastes pretty good, actually. It has millions of little computers — we call them trackers — which work their way into your cells.

The rest of the universe
Kurzweil says life in the universe is "both rare and plentiful", meaning that for vast stretches there is nothing, and then, piled into a small space, it is everywhere. He suggests any form of life that invents technology will, if it survives, relatively quickly reach the point of merging with that technology, the same thing he predicts will happen to humans. Therefore, Kurzweil explains, if we ever met another civilization, we would really be meeting with its technology. The technology would likely be microscopic in size, because that is all that would be necessary for exploration. The civilization would not be looking for anything except knowledge, therefore we would likely never notice it. Kurzweil feels intelligence is the strongest force in the universe, because intelligence can cause sweeping physical changes like defending a planet from an asteroid. Kurzweil predicts that as the "computational density" of the universe increases, intelligence will rival even "big celestial forces".
There is disagreement about whether the universe will end in a big crunch or a long slow expansion; Kurzweil says the answer is still up in the air, because intelligence will ultimately make the decision.

Reception

Analysis
Kurzweil uses Deep Blue, the IBM computer that beat the world chess champion in 1997, as an example of fledgling machine intelligence. John Searle, author and professor of philosophy at the University of California, Berkeley, reviewing The Age of Spiritual Machines in The New York Review of Books, disagrees with Kurzweil's interpretation. Searle argues that while Kasparov was "quite literally, playing chess", the computer in contrast was doing "nothing remotely like it"; instead, it was merely manipulating "a bunch of meaningless symbols". Searle offers a variant of his Chinese room argument, calling it the Chess Room Argument, where instead of answering questions in Chinese, the man in a room is playing chess. Or rather, as Searle explains, he is inside the room manipulating symbols which are meaningless to him, while his actions result in winning chess games outside the room. Searle concludes that, like a computer, the man has no understanding of chess. Searle compares Deep Blue's victory to the manner in which a pocket calculator can beat humans at arithmetic; he adds that it is no more significant than a steel robot which is too tough for human beings to tackle during a game of American football. Kurzweil counters that the very same argument could be made of the human brain, since the individual neurons have no true understanding of the bigger problem the brain is working on but, added together, they produce what is known as consciousness. Searle continues by contrasting simulation of something with "duplication or recreation" of it. Searle points out a computer can simulate digestion, but it will not be able to digest actual pizza.
In the same way, he says, computers can simulate the processes of a conscious brain, but that does not mean they are conscious. Searle has no objection to constructing an artificial consciousness-producing brain "using some chemistry different from neurons", so long as it duplicates "the actual causal powers of the brain", which he says precludes computation by itself, since that only involves symbol manipulation. Searle concludes by saying the increased computational power that Kurzweil predicts "moves us not one bit closer to creating a conscious machine"; instead, he says, the first step to building conscious machines is to understand how the brain produces consciousness, something we are only in the infancy of doing. Colin McGinn, an author and philosophy professor at the University of Miami, wrote in The New York Times that machines might eventually exhibit external behavior at a human level, but it would be impossible to know if they have an "inner subjective experience" as people do. If they do not, then "uploading" someone into a computer is equivalent to letting their mind "evaporate into thin air", he argues. McGinn is skeptical of the Turing test, claiming it smacks of the long-abandoned doctrine of behaviorism, and agreeing with the validity of Searle's "quite devastating" Chinese room argument. He believes minds do compute, but that it does not follow that computation alone can create a mind; instead he says minds have phenomenological properties, perhaps originating from organic tissue. Therefore, he insists that neither silicon chips nor any future technology Kurzweil mentions will ever be conscious.

Reviews
McGinn says The Age of Spiritual Machines is "detailed, thoughtful, clearly explained and attractively written" as well as having "an engaging discussion of the future of virtual sex", and that the book is for "anyone who wonders where human technology is going next".
However, Diane Proudfoot, philosophy professor at the University of Canterbury, wrote in Science that Kurzweil's historical details are inaccurate and his philosophical understanding is flawed, and that these transgressions inspired "little confidence in his imaginings about the future". Chet Raymo, physics professor at Stonehill College, writes that "Ray Kurzweil has a better record than most at foreseeing the digital future" and "Kurzweil paints a tantalizing — and sometimes terrifying — portrait of a world where the line between humans and machines has become thoroughly blurred". He says the book is a "welcome challenge to beliefs we hold dear" and feels we can only shape the future if we anticipate it first. Jim Bencivenga, staff writer for The Christian Science Monitor, says Kurzweil "possesses a highly refined and precise ability to think exponentially about technology over time". Bencivenga also says we should take Kurzweil's predictions very seriously because of his "proven track record". Lyle Feisel, former electrical engineering professor, writes that the predictions from Kurzweil's The Age of Intelligent Machines "have largely come true" and so "engineers and computer scientists would do well to give [this book] a read".

In other media
The Canadian rock band Our Lady Peace based their 2000 concept album Spiritual Machines on The Age of Spiritual Machines. They recruited Kurzweil to voice several tracks, on which he read select passages from the book. On October 29, 2021, Our Lady Peace released a sequel album, the aptly titled Spiritual Machines 2, as an NFT; it was made available in traditional formats on January 28, 2022. The 2013 film The Congress by Ari Folman, based on the Stanislaw Lem novel The Futurological Congress, references Kurzweil's book in a fictional trailer for the protagonist's upcoming film.
See also
Algorithms
Analytical Engine
Antimatter
Artificial life
Cosmological constant
Cochlear implant
Eric Drexler
Thomas Edison
Albert Einstein
The Emperor's New Mind
Encryption
Facial recognition system
Richard Feynman
Gödel's incompleteness theorems
Douglas Hofstadter
Holography
Human Genome Project
Image processing
Integrated circuits
Ted Kaczynski
Garry Kasparov
Lisp (programming language)
Marvin Minsky
Gordon Moore
Hans Moravec
Parallel processing
Pattern recognition
Roger Penrose
Recursion
Bertrand Russell
Thermodynamics
Tractatus Logico-Philosophicus
Alan Turing
Virtual reality

Notes

References

External links

Books by Ray Kurzweil
1999 non-fiction books
Futurology books
Transhumanist books
Viking Press books
Books about cognition
3198391
https://en.wikipedia.org/wiki/Interarchy
Interarchy
Interarchy is an FTP client for macOS supporting FTP, SFTP, SCP, WebDAV and Amazon S3. It is made by Nolobe and supports many advanced features for transferring, syncing and managing files over the Internet. Interarchy was created by Mac programmer Peter N Lewis in 1993 for Macintosh System 7. Lewis went on to form Stairways Software in 1995 to continue development of Interarchy. In 2007 Lewis sold Interarchy to Matthew Drayton of Nolobe, who continues to develop Interarchy to this day. Drayton was an employee of Stairways Software, having worked as a developer of Interarchy alongside Lewis since 2001. Interarchy was originally called Anarchie because it was "an Archie" client. The name was changed to Interarchy in 2000 due to a conflict with a cybersquatter.

See also
Comparison of FTP client software

References

External links

MacOS Internet software
Utilities for macOS
Macintosh-only software
FTP clients
SFTP clients
MacOS-only software
54300654
https://en.wikipedia.org/wiki/Carolina%20Falkholt
Carolina Falkholt
Carolina Alexandra Falkholt, known by the pseudonym Blue (born 4 March 1977 in Gothenburg), is a Swedish artist, graffiti writer and musician. She sometimes uses her own coined term grafitta to describe her art. It is a play on the two words graffiti and fitta (the latter means "pussy" in Swedish), as well as on the Swedish grammatical habit of gendering work titles, where an "a" denotes a female role.

Biography
Carolina Falkholt grew up in Dals Långed, Dalsland, Sweden. As a teenager, she moved to Stockholm to attend the Waldorf school Kristofferskolan. At the same time she began painting graffiti under the pseudonym Blue. By the mid-1990s, she had moved to New York City. There, as the only Swedish artist, she became a member of the two crews The Fantastic Partners and Hardcore Chickz. She worked with graffiti writers such as Sento and Lady Pink while making paintings around New York for the record company Rawkus to earn a living. Around the turn of the century, she was one of Sweden's most famous graffiti writers. After four years in New York, she moved back to Sweden and settled in Gothenburg, where she is active today.

Artistic practice
In addition to spray paint and drawing, Carolina Falkholt's practice involves collage, sculpture, installation, performance, film and photography. She often builds up her drawings with endless amounts of circles creating a web, sometimes over vibrant colors. Typical motifs she has been investigating in her art are connected to the body, like eyes, ears, mouth, hands and the vagina. In many of her paintings of hands, the hands actually form letters, since she uses Swedish Sign Language in her art. She is also a musician and has released records as part of her artistic practice. In several projects she has initiated various forms of collaboration with other artists, musicians, the public and organizations. In 2010, Falkholt realized the big project Graffiti Mariestad, which centered on a now-demolished silo in Mariestad harbor.
During the months before the building was to be dismantled, the façade was painted while activities in and around the silo were ongoing. The project involved about 30 graffiti writers, including Nug, Rubin and Dwane, as well as musicians, dancers, artists and hundreds of young people. The project resulted in one of the world's largest graffiti paintings. Graffiti Mariestad also resulted in Falkholt being commissioned to create a public sculpture in Mariestad, which was built, among other things, from material from the demolished silo. The twelve-meter-high sculpture T.E.S.T. was inaugurated in June 2011. The whole process is documented in the book SILO. Carolina Falkholt has had solo exhibitions at Gothenburg Museum of Art, Eskilstuna Museum of Art, Ystad Art Museum, Steneby Konsthall in Dalsland and Klippans konsthall in Skåne, among others. In 2013, she was one of the participants in Swedish Television's (SVT) program Konstkuppen, and that same year she curated and participated as an artist in the exhibition Mynningsladdare (Muzzleloader) at Röda Sten Konsthall in Gothenburg. In the same year, she participated in the X-Border biennial with the project Firewall, with paintings in the three towns of Severomorsk, Rovaniemi and Luleå, and made the 16-meter-long piece Wet Paint at Kulturhuset in Stockholm. Falkholt is represented by, among others, Gothenburg Museum of Art, Skövde Museum of Art and Halland Museum of Cultural History. In 2014 she made the painting Övermålning (Overpainting) as a commission for a high school in Nyköping. She started by writing derogatory words towards women on the wall and then painted over them with a stylized motif of the lower part of a woman's naked body. The artwork created a heated debate, which was covered in media both in Sweden and internationally. At one point politicians took the decision to build a wall in front of the painting. But Falkholt and the painting had many strong supporters.
The principal of the school said: "I see many pedagogic advantages to having her art in the school". After much debate the wall in front of the painting was taken down. In connection with this debate she made a performance at Konstakademien in Stockholm, where she invited politicians to Skenbröllop, a sham marriage with her. Two politicians said yes: Marita Ulvskog (Swedish Social Democratic Party) and Sissela Nordling Blanco (Feminist Initiative). In 2015 she made a huge mural as a commission for the high school Parkskolan in Ystad. The painting, Untitled (Firewall), depicts yet another stylized naked woman hanging upside down with her legs in an unnaturally contorted position. In 2017 she was invited as one of the artists for the exhibition SculptureMotion at Wanås sculpture park, together with William Forsythe, Henrik Plenge Jakobsen, Sonia Khurana and Éva Mag. For this exhibition she made the work Train of thoughts, a railroad car moved to the woods, first painted white and then filled with her black circles, creating an organic web over the whole surface. In December 2017 she painted a 40-foot erect human penis on a building at 303 Broome Street, New York, NY. She signed the artwork on the lower right-hand side. The mural was painted over by the building's owner three days later.

Gallery

Public works (selection)
T.E.S.T.
, Mariestad, Sweden, 2011–12
Wall painting on the facade of Bengtsfors Sports Hall, Sweden, 2012
Freedom of expression, Järnvägsgatan in Alingsås, Sweden, 2012
Facade painting on a day-care building in Durres, Albania, 2013
Untitled, Bergslagsvägen 43, Avesta, Sweden, 2013
Pi, on the facade of the student house Jakten, Halmstad, Sweden, 2013
Untitled (Firewall), Borås, Sweden, 2014
Övermålning, Nyköping High School, Nyköping, Sweden, 2014
Untitled (Firewall), wall painting at Parkskolan in Ystad, Sweden, 2015
Untitled (Firewall), wall painting, Södra Dragongatan in Ystad, Sweden, 2015
Fountain, GrEEK Campus, Cairo, Egypt, 2015
TECHNE, Mimers house of Culture, Kungälv, Sweden, 2016
Fuck the World, Kungsholmen, Stockholm, Sweden, 2018

Fuck the World
Fuck the World is a mural by Falkholt which depicts an erect blue penis and covers five storeys of an apartment block () in Stockholm, Sweden. The work was unveiled in April 2018 on Kronobergsvägen in the Kungsholmen district of Stockholm. Falkholt, whose work explores human sexuality, said of it, "They should consider what it is they are so upset about and then talk about it" and "Sex is so important, but it's always been too dirty to discuss." Falkholt was confident that Stockholm residents would be receptive to the work and that it would avoid the fate of an earlier work, a pink and orange penis painted on the side of a four-storey building in lower Manhattan in December 2017 but painted over after a few weeks. However, following complaints from neighbouring residents, it was announced a week after the unveiling that the work would be painted over.

References

External links
CV (pdf) - grossestreffen.org

Swedish artists
Living people
1977 births
Swedish graffiti artists
2271324
https://en.wikipedia.org/wiki/Satkhira%20District
Satkhira District
Satkhira is a district in southwestern Bangladesh and is part of Khulna Division. It lies along the border with West Bengal, India, and is on the bank of the Arpangachhia River. The largest city and headquarters of the district is Satkhira.

Administration
The district consists of two municipalities, seven upazilas, 79 union porishods, eight thanas (police stations) and 1,436 villages. The upazilas are:
Satkhira Sadar Upazila
Assasuni Upazila
Debhata Upazila
Tala Upazila
Kalaroa Upazila
Kaliganj Upazila
Shyamnagar Upazila

The two municipalities are Satkhira and Kalaroa.
Chairman of Zila Porishod: Nazrul Islam
Deputy Commissioner (DC): Mohammad Humayun Kabir

Geography
Satkhira District has an area of about . It is bordered to the north by Jessore District, to the south by the Bay of Bengal, to the east by Khulna District, and to the west by the 24 Parganas district of West Bengal, India. The annual average maximum temperature reaches 35.5 °C (95.9 °F); the minimum temperature is 12.5 °C (54.5 °F). The annual rainfall is 1,710 mm (67 in). The main rivers are the Kopotakhi River (which crosses Dorgapur union of Assasuni Upazila), Morichap River, Kholpetua River, Betna River, Raimangal River, Hariabhanga River, Ichamati River, Betrabati River and Kalindi-Jamuna River.

Climate
Tropical savanna climates have a monthly mean temperature above 18 °C (64 °F) in every month of the year and typically a pronounced dry season, with the driest month having less than 60 mm (2.36 in) of precipitation. The Köppen climate classification subtype for this climate is "Aw" (tropical savanna climate).

Demographics
According to the 2011 Bangladesh census, Satkhira District had a population of 1,985,959. Males constituted 49.49% of the population and females 50.51%. 90.05% of the population lived in rural areas and 9.95% lived in urban areas.
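The simplified "Aw" rule stated in the Climate section (every monthly mean above 18 °C, driest month under 60 mm) can be expressed as a short check. This is an illustrative sketch of that stated rule only, using made-up sample figures rather than measured Satkhira data; a full Köppen classification uses additional thresholds.

```python
def is_tropical_savanna(monthly_temps_c, monthly_precip_mm):
    """Simplified 'Aw' test per the rule stated above: every monthly mean
    temperature above 18 C and the driest month under 60 mm of precipitation."""
    return min(monthly_temps_c) > 18 and min(monthly_precip_mm) < 60

# Hypothetical 12-month figures for a monsoon-influenced climate (not real data).
temps = [19, 21, 26, 29, 30, 30, 29, 29, 29, 28, 24, 20]
precip = [10, 25, 40, 70, 180, 320, 330, 300, 250, 120, 30, 5]
print(is_tropical_savanna(temps, precip))  # True
```

A location with any monthly mean at or below 18 °C, or with no month under 60 mm, would fail the test and fall under a different Köppen group.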
Satkhira District had a literacy rate of 52.07% for the population 7 years and above: 56.11% for males and 48.15% for females. Muslims formed 81.86% of the population, Hindus 17.70%, Christians 0.31% and others 0.12%. The Muslim population has increased continuously, while the Hindu population has remained relatively constant and at times fallen.

Economy
Most people in the southern part of Satkhira depend on pisciculture, locally called gher. The main fruits are aam (mango), jaam (blackberry), kathal (jackfruit), kola (banana), pepe (papaya), lichoo (litchi), naarikel (coconut) and peyara (guava). The district has 86 dairies, 322 poultry farms, 3,046 fisheries, 3,650 shrimp farms, 66 hatcheries and one cattle breeding centre. The main exports are shrimp, paddy, jute, wheat, betel leaf, leather and jute goods. The district is said to contribute 18.5% of Bangladesh's economy. More recently, widespread crab fattening has been contributing heavily to Satkhira's economy.

Points of interest
The Sundarbans, the largest single block of tidal halophytic mangrove forest in the world, is a World Heritage Site and covers an area of . The region is home to many ancient buildings and temples, such as the Sultanpur Shahi Mosque (500 years old). Pir-e-Kamel Kari Hafez Sah-Sufi Jonab Hozrat Maolana Azizur Rahman (Rh) was a Muslim Sufi saint and local ruler of Kalimakhali, Assasuni Upazila, in Satkhira (now in Bangladesh). Attractions also include the mangrove forest at Kaligonj Upazila. This forest, named Basjharia Joarar Ban, is popularly known as the forest of BADHA. The Joarar Ban is a cause of friction along the Bangladesh–India border.

Infrastructure
Land ports
India–Bangladesh (Bhomra land port): a BGP camp lies about 200 yards from the main port. The Bhomra land port is the second largest land port in Bangladesh. The Bhomra land customs station was inaugurated in 1996.

Transport
The main roads and highways are Satkhira–Khulna, Satkhira–Jessore, Satkhira–Assasuni–Ghola and Satkhira–Kaligonj–Shyamnagar.
The Satkhira–Kaligonj–Shyamnagar road is reported to be in very poor condition.

Education
The district has one recently established medical college, 79 colleges, one primary teachers' training institute, 421 high schools, 41 junior high schools, 259 madrassas and 822 government primary schools. Some of the notable educational institutions are:
Satkhira Government College
Satkhira City College
Satkhira Medical College
Satkhira Government Girls’ High School
Satkhira Government High School
Satkhira Government Mahila College
Kaliganj Govt. College
Satkhira Day-Night College
Kalaroa Government College
Jhaudanga High School

Digital Satkhira
In 1994, a few young people started a computer training centre as a business and trained students who later started other computer businesses, ultimately creating the idea of a digital Satkhira. Slowly the computer replaced the manual typewriter in offices, banks and other institutions. Schools and colleges started recruiting computer teachers. Many more young people started computer businesses. Manual (letterpress) printing presses switched to computer-based offset printing.
The first local daily newspaper published was "Doinik Satkhira Chitra".
The first computer sales and service centre was "Mitul Computer Services".
The first computer training centre was "Cosmos Computer".
The first offset printing press was "Zahan Offset Printing Press".
In 1999, the Computer Association of Satkhira was established with 30 members. The first president was Mitul Md. Moniruzzaman and the general secretary was Nityananda Sarkar, with vice presidents Faruque ul-Islam and Sayed Iqbal Babu. The Computer Association of Satkhira regularly organizes computer fairs in various locations for ICT awareness. Some of the computer fairs were supported by Bangladesh Computer Samity with the presence of Mustafa Jabbar, the current ICT minister.

Notable people
A. F. M. Entaz Ali
Habibul Islam Habib – ex-Member of Parliament-105, Satkhira-1, Bangladesh National Parliament; Publicity and Publication Affairs Secretary, Bangladesh Nationalist Party
Khan Bahadur Ahsanullah
AFM Ruhal Haque
Soumya Sarkar
Mustafizur Rahman
Nilufar Yasmin
Sabina Yasmin
Amin Khan (actor)
Moushumi
Moushumi Hamid
Falguni Hamid
Tariq Anam Khan
Afzal Hossain
Rani Sarker
Muhammad Wajed Ali
Shimul Hossain
Sikandar Abu Jafar
S. M. Alauddin
M. R. Khan

Health
Satkhira Government Hospital/Medical College
Private Hospital/Clinic
Private Diagnostic Centre

See also
Districts of Bangladesh
Khulna Division

References

Districts of Bangladesh
3350021
https://en.wikipedia.org/wiki/Racket%20%28programming%20language%29
Racket (programming language)
Racket is a general-purpose, multi-paradigm programming language and a multi-platform distribution that includes the Racket language, compiler, large standard library, IDE, development tools, and a set of additional languages including Typed Racket (a sister language of Racket with a static type-checker), Swindle, FrTime, Lazy Racket, R5RS & R6RS Scheme, Scribble, Datalog, Racklog, Algol 60 and several teaching languages. The Racket language is a modern dialect of Lisp and a descendant of Scheme. It is designed as a platform for programming language design and implementation. In addition to the core Racket language, Racket is also used to refer to the family of programming languages and set of tools supporting development on and with Racket. Racket is also used for scripting, computer science education, and research. The Racket platform provides an implementation of the Racket language (including a runtime system, libraries, and compiler supporting several compilation modes: machine code, machine-independent, interpreted, and JIT) along with the DrRacket integrated development environment (IDE) written in Racket. Racket is used by the ProgramByDesign outreach program, which aims to turn computer science into "an indispensable part of the liberal arts curriculum". The core Racket language is known for its extensive macro system which enables creating embedded and domain-specific languages, language constructs such as classes or modules, and separate dialects of Racket with different semantics. The platform distribution is free and open-source software distributed under the Apache 2.0 and MIT licenses. Extensions and packages written by the community may be uploaded to Racket's package catalog. History Development Matthias Felleisen founded PLT Inc. in the mid 1990s, first as a research group, soon after as a project dedicated to producing pedagogic materials for novice programmers (lectures, exercises/projects, software). 
In January 1995, the group decided to develop a pedagogic programming environment based on Scheme. Matthew Flatt cobbled together MrEd, the original virtual machine for Racket, from libscheme, wxWidgets, and a few other free systems. In the years that followed, a team including Flatt, Robby Findler, Shriram Krishnamurthi, Cormac Flanagan, and many others produced DrScheme, a programming environment for novice Scheme programmers and a research environment for soft typing. The main development language that DrScheme supported was named PLT Scheme. In parallel, the team began conducting workshops for high school teachers, training them in program design and functional programming. Field tests with these teachers and their students provided essential clues for directing the development. Over the following years, PLT added teaching languages, an algebraic stepper, a transparent read–eval–print loop, a constructor-based printer, and many other innovations to DrScheme, producing an application-quality pedagogic program development environment. By 2001, the core team (Felleisen, Findler, Flatt, Krishnamurthi) had also written and published their first textbook, How to Design Programs, based on their teaching philosophy. The Racket Manifesto details the principles driving the development of Racket, presents the evaluation framework behind the design process, and details opportunities for future improvements. Version history The first generation of PLT Scheme revisions introduced features for programming in the large with both modules and classes. Version 42 introduced units – a first-class module system – to complement classes for large scale development. The class system gained features (e.g. Java-style interfaces) and also lost several features (e.g. multiple inheritance) throughout these versions. 
The language evolved through a number of successive versions. Version 53 brought milestone popularity, leading to extensive work and the following Version 100, which would be equivalent to a "1.0" release in current popular versioning systems. The next major revision was named Version 200, which introduced a new default module system that cooperates with macros. In particular, the module system ensures that run-time and compile-time computation are separated to support a "tower of languages". Unlike units, these modules are not first-class objects. Version 300 introduced Unicode support, foreign library support, and refinements to the class system. Later on, the 300 series improved the performance of the language runtime with the addition of a JIT compiler and a switch to generational garbage collection by default. By the next major release, the project had switched to a more conventional sequence-based version numbering. Version 4.0 introduced the #lang shorthand to specify the language that a module is written in. Further, the revision introduced immutable pairs and lists, support for fine-grained parallelism, and a statically-typed dialect. On 7 June 2010, PLT Scheme was renamed Racket. The renaming coincided with the release of Version 5.0. Subsequently, the graphical user interface (GUI) backend was rewritten in Racket from C++ in Version 5.1 using native UI toolkits on all platforms. Version 5.2 included a background syntax checking tool, a new plotting library, a database library, and a new extended REPL. Version 5.3 included a new submodule feature for optionally loaded modules, new optimization tools, a JSON library, and other features. Version 5.3.1 introduced major improvements to DrRacket: the background syntax checker was turned on by default and a new documentation preview tool was added. In version 6.0, Racket released its second-generation package management system.
As part of this development, the principal DrRacket and Racket repository was reorganized and split into a large set of small packages, making it possible to install a minimal Racket and to install only those packages needed. Version 7 of Racket was released with a new macro expander written in Racket as part of the preparations for moving to the Chez Scheme runtime system and supporting multiple runtime systems. On 19 November 2019, Racket 7.5 was released. The license of Racket 7.5 is less restrictive: the distribution is now licensed under either the Apache 2.0 license or the MIT license. On 13 February 2021, Racket 8.0 was released. Racket 8.0 marks the first release where Racket with the Chez Scheme runtime system, known as Racket CS, is the default implementation. Racket CS is faster, easier to maintain and develop, backward-compatible with existing Racket programs, and has better parallel garbage collection.

Features
Racket's core language includes macros, modules, lexical closures, tail calls, delimited continuations, parameters (fluid variables), software contracts, green threads and OS threads, and more. The language also comes with primitives, such as eventspaces and custodians, which control resource management and enable the language to act like an operating system for loading and managing other programs. Further extensions to the language are created with the powerful macro system, which together with the module system and custom parsers can control all aspects of a language. Most language constructs in Racket are implemented as macros in the base language. These include a mixin class system, a component (or module) system as expressive as opaque ascription in the ML module system, and pattern matching. Further, the language features the first contract system for a higher-order programming language.
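A minimal sketch of such a higher-order contract, using Racket's define/contract (the function name apply-twice is illustrative, not part of any standard library):

```racket
#lang racket

;; A contract on a higher-order function: apply-twice must receive
;; a function from integers to integers, plus an integer, and must
;; itself return an integer.
(define/contract (apply-twice f x)
  (-> (-> integer? integer?) integer? integer?)
  (f (f x)))

(apply-twice add1 5)   ; evaluates to 7
;; (apply-twice sqrt 5) would raise a contract violation when sqrt
;; returns a non-integer, with blame assigned to the caller.
```

The contract is checked lazily: the function argument f is only tested at the moments it is actually applied, which is how Racket extends Eiffel-style contracts to first-class functions.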
Racket's contract system is inspired by the Design by Contract work for Eiffel and extends it to work for higher-order values such as first-class functions, objects, reference cells, and so on. For example, an object that is checked by a contract can be ensured to make contract checks when its methods are eventually invoked. Racket includes both bytecode and just-in-time (JIT) compilers. The bytecode compiler produces an internal bytecode format run by the Racket virtual machine, and the JIT compiler translates bytecode to machine code at runtime. Since 2004, the language has also shipped with PLaneT, a package manager that is integrated into the module system so that third-party libraries can be transparently imported and used. Also, PLaneT has a built-in versioning policy to prevent dependency hell. At the end of 2014, much of Racket's code was moved into a new packaging system separate from the main code base. This new packaging system is serviced by a client program named raco. The new package system provides fewer features than PLaneT; a blog post by Jay McCarthy on the Racket blog explains the rationale for the change and how to duplicate the older system.

Integrated Language Extensibility and Macros
The features that most clearly distinguish Racket from other languages in the Lisp family are its integrated language extensibility features that support building new domain-specific and general-purpose languages. Racket's extensibility features are built into the module system to allow context-sensitive and module-level control over syntax. For example, the #%app syntactic form can be overridden to change the semantics of function application. Similarly, the #%module-begin form allows arbitrary static analysis of the entire module. Since any module can be used as a language, via the #lang notation, this effectively means that virtually any aspect of the language can be programmed and controlled.
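As a small illustration of this extensibility, a new construct can be added with a macro so that it reads like a built-in form; a minimal sketch (the swap! name is illustrative, not a standard Racket form):

```racket
#lang racket

;; A hypothetical swap! construct: exchanges the values of two variables.
;; define-syntax-rule creates a hygienic macro, so the temporary binding
;; tmp cannot capture or collide with the caller's own variable names.
(define-syntax-rule (swap! a b)
  (let ([tmp a])
    (set! a b)
    (set! b tmp)))

(define x 1)
(define y 2)
(swap! x y)   ; after this, x is 2 and y is 1
```

Because the macro expands before compilation, swap! is indistinguishable to its users from a primitive language construct, which is the mechanism the dialects mentioned below are built on.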
The module-level extensibility features are combined with a Scheme-like hygienic macro system, which provides more features than Lisp's s-expression manipulation system, Scheme 84's hygienic extend-syntax macros, or R5RS's syntax-rules. Indeed, it is fair to say that the macro system is a carefully tuned application programming interface (API) for compiler extensions. Using this compiler API, programmers can add features and entire domain-specific languages in a manner that makes them completely indistinguishable from built-in language constructs. The macro system in Racket has been used to construct entire language dialects. This includes Typed Racket, which is a gradually typed dialect of Racket that eases the migration from untyped to typed code, Lazy Racket—a dialect with lazy evaluation, and Hackett, which combines Haskell and Racket. The pedagogical programming language Pyret was originally implemented in Racket. Other dialects include FrTime (functional reactive programming), Scribble (documentation language), Slideshow (presentation language), and several languages for education. Racket's core distribution provides libraries to aid the development of programming languages. Such languages are not restricted to s-expression based syntax. In addition to conventional readtable-based syntax extensions, the directive #lang enables the invocation of arbitrary parsers, which can be implemented using the parser tools library. See Racket logic programming for an example of such a language. Programming environment The language platform provides a self-hosted IDE named DrRacket, a continuation-based web server, a graphical user interface, and other tools. As a viable scripting tool with libraries like common scripting languages, it can be used for scripting the Unix shell. It can parse command line arguments and execute external tools. 
DrRacket IDE
DrRacket (formerly DrScheme) is widely used among introductory computer science courses that teach Scheme or Racket and is lauded for its simplicity and appeal to beginner programmers. The IDE was originally built for use with the TeachScheme! project (now ProgramByDesign), an outreach effort by Northeastern University and a number of affiliated universities for attracting high school students to computer science courses at the college level. The editor provides highlighting for syntax and run-time errors, parenthesis matching, a debugger and an algebraic stepper. Its student-friendly features include support for multiple "language levels" (Beginning Student, Intermediate Student and so on). It also has integrated library support, and sophisticated analysis tools for advanced programmers. Further, module-oriented programming is supported with the module browser, a contour view, integrated testing and coverage measurements, and refactoring support. It provides integrated, context-sensitive access to an extensive hyper-linked help system named "Help Desk". DrRacket is available for Windows, macOS, Unix, and Linux with the X Window System, and programs behave similarly on all these platforms.

Code examples
Here is a trivial hello world program:

#lang racket
"Hello, World!"

Running this program produces the output:

"Hello, World!"

Here is a slightly less trivial program:

#lang racket
(require 2htdp/image)

(let sierpinski ([n 8])
  (if (zero? n)
      (triangle 2 'solid 'red)
      (let ([t (sierpinski (- n 1))])
        (freeze (above t (beside t t))))))

This program, taken from the Racket website, draws a Sierpinski triangle, nested to depth 8.

Using the #lang directive, a source file can be written in different dialects of Racket. Here is an example of the factorial program in Typed Racket, a statically typed dialect of Racket:

#lang typed/racket

(: fact (Integer -> Integer))
(define (fact n)
  (if (zero? n)
      1
      (* n (fact (- n 1)))))

Applications and practical use
Apart from having a basis in programming language theory, Racket was designed as a general-purpose language for production systems. Thus, the Racket distribution features an extensive library that covers systems and network programming, web development, a uniform interface to the underlying operating system, a dynamic foreign function interface, several flavours of regular expressions, lexer/parser generators, logic programming, and a complete GUI framework. Racket has several features useful for a commercial language, among them an ability to compile standalone executables under Windows, macOS, and Unix, a profiler and debugger included in the integrated development environment (IDE), and a unit testing framework. Racket has been used for commercial projects and web applications. A notable example is the Hacker News website, which runs on Arc, a language developed in Racket. Naughty Dog has used it as a scripting language in several of their video games. Racket is used to teach students algebra through game design in the Bootstrap program.

References

Further reading
Felleisen et al., 2013. Realm of Racket. No Starch Press.
Felleisen et al., 2003. How to Design Programs. MIT Press.

External links

Functional languages
Object-oriented programming languages
Extensible syntax programming languages
Scheme (programming language) implementations
Scheme (programming language) compilers
Scheme (programming language) interpreters
R6RS Scheme
Academic programming languages
Educational programming languages
Pedagogic integrated development environments
Cross-platform free software
Free compilers and interpreters
Programming languages created in 1995
Pattern matching programming languages
Articles with example Racket code
Scheme (programming language)
Software development
Language workbench
Lisp programming language family
57635741
https://en.wikipedia.org/wiki/34746%20Thoon
34746 Thoon
34746 Thoon, provisional designation 2001 QE91, is a dark Jupiter trojan from the trailing Trojan camp, approximately 62 kilometers in diameter. It was discovered on 22 August 2001, by astronomers with the Lincoln Near-Earth Asteroid Research at Lincoln Lab's ETS in Socorro, New Mexico. The possibly elongated Jovian asteroid is one of the 70 largest Jupiter trojans and has a rotation period of 19.6 hours. It was named after the Trojan warrior Thoön from Greek mythology.

Orbit and classification
Thoon is a dark Jupiter trojan in a 1:1 orbital resonance with Jupiter. It is located in the trailing Trojan camp at the gas giant's L5 Lagrangian point, 60° behind its orbit. It is also a non-family asteroid of the Jovian background population. It orbits the Sun at a distance of 5.0–5.4 AU once every 11 years and 9 months (4,287 days; semi-major axis of 5.16 AU). Its orbit has an eccentricity of 0.04 and an inclination of 27° with respect to the ecliptic. The body's observation arc begins with its first observation at the Goethe Link Observatory in November 1945, almost 56 years prior to its official discovery observation at Socorro.

Numbering and naming
This minor planet was numbered on 28 January 2002. On 14 May 2021, the object was named by the Working Group Small Body Nomenclature (WGSBN), after Thoön, a Lycian warrior and ally of the Trojans, who was killed in battle by Odysseus during the Trojan War.

Physical characteristics
Thoon is an assumed C-type asteroid. Its V–I color index of 0.95 is typical for most D-type asteroids, the dominant spectral type among the Jupiter trojans.

Rotation period
In April 2007, a rotational lightcurve of Thoon was obtained from photometric observations by Lawrence Molnar at Calvin University, using the Calvin-Rehoboth Robotic Observatory in New Mexico. Lightcurve analysis gave a well-defined rotation period of 19.6 hours with a high brightness variation of 0.56 magnitude. A high amplitude is indicative of a non-spherical shape.
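The quoted orbital figures, and the diameter given in the following section, can be cross-checked with two standard relations: Kepler's third law (P² = a³ with P in years and a in AU), and the usual asteroid size relation D ≈ (1329/√p)·10^(−H/5) km relating diameter to geometric albedo p and absolute magnitude H. A quick sketch in Python, using the values quoted in this article:

```python
from math import sqrt

# Kepler's third law: P^2 = a^3 with P in years and a in AU.
a = 5.16                          # semi-major axis from the text
P_days = a ** 1.5 * 365.25
print(round(P_days))              # ≈ 4281 days, close to the quoted 4,287

# Standard H/albedo size relation: D = 1329 / sqrt(p) * 10**(-H/5), in km.
p_v = 0.0580                      # albedo assumed by CALL (see below)
H = 9.8                           # absolute magnitude
D = 1329 / sqrt(p_v) * 10 ** (-H / 5)
print(round(D, 1))                # ≈ 60.5 km, matching CALL's 60.51 km
```

The small discrepancy in the period reflects rounding of the semi-major axis to three significant figures.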
Diameter and albedo According to the surveys carried out by the Japanese Akari satellite and the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Thoon measures between 61.68 and 63.63 kilometers in diameter and its surface has an albedo between 0.061 and 0.091. The Collaborative Asteroid Lightcurve Link assumes an albedo of 0.0580 and calculates a diameter of 60.51 kilometers based on an absolute magnitude of 9.8. References External links Asteroid Lightcurve Database (LCDB), query form (info ) Dictionary of Minor Planet Names, Google books Discovery Circumstances: Numbered Minor Planets (30001)-(35000) – Minor Planet Center Asteroid (34746) 2001 QE91 at the Small Bodies Data Ferret 034746 034746 Minor planets named from Greek mythology Named minor planets 20010822
301240
https://en.wikipedia.org/wiki/RSA%20Security
RSA Security
RSA Security LLC, formerly RSA Security, Inc. and doing business as RSA, is an American computer and network security company with a focus on encryption and encryption standards. RSA was named after the initials of its co-founders, Ron Rivest, Adi Shamir and Leonard Adleman, after whom the RSA public key cryptography algorithm was also named. Among its products is the SecurID authentication token. The BSAFE cryptography libraries were also initially owned by RSA. RSA is known for incorporating backdoors developed by the NSA in its products. It also organizes the annual RSA Conference, an information security conference. Founded as an independent company in 1982, RSA Security was acquired by EMC Corporation in 2006 for US$2.1 billion and operated as a division within EMC. When EMC was acquired by Dell Technologies in 2016, RSA became part of the Dell Technologies family of brands. On 10 March 2020, Dell Technologies announced that it would sell RSA Security to a consortium led by Symphony Technology Group (STG), Ontario Teachers’ Pension Plan Board (Ontario Teachers’) and AlpInvest Partners (AlpInvest) for US$2.1 billion, the same price EMC had paid for it back in 2006. RSA is based in Bedford, Massachusetts, with regional headquarters in Bracknell (UK) and Singapore, and numerous international offices.

History
Ron Rivest, Adi Shamir and Leonard Adleman, who developed the RSA encryption algorithm in 1977, founded RSA Data Security in 1982. In 1994, RSA campaigned against the Clipper Chip during the Crypto Wars. In 1995, RSA sent a handful of people across the hall to found Digital Certificates International, better known as VeriSign. The company, then called Security Dynamics, acquired RSA Data Security in July 1996 and DynaSoft AB in 1997. In January 1997, it proposed the first of the DES Challenges, which led to the first public breaking of a message based on the Data Encryption Standard.
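The RSA public-key algorithm after which the company is named can be illustrated with textbook-sized numbers; a minimal sketch in Python (tiny primes, deliberately insecure, purely illustrative):

```python
# Textbook RSA with toy primes; real keys use primes of 1024+ bits.
p, q = 61, 53
n = p * q                   # modulus: 3233
phi = (p - 1) * (q - 1)     # Euler's totient: 3120
e = 17                      # public exponent, coprime to phi
d = pow(e, -1, phi)         # private exponent via modular inverse (Python 3.8+)

m = 65                      # a message, encoded as a number smaller than n
c = pow(m, e, n)            # encrypt: m^e mod n
assert pow(c, d, n) == m    # decrypt: c^d mod n recovers m
print(n, d, c)              # → 3233 2753 2790
```

Security rests on the difficulty of recovering d from (n, e) without knowing the factorization of n.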
In February 2001, it acquired Xcert International, Inc., a privately held company that developed and delivered digital certificate-based products for securing e-business transactions. In May 2001, it acquired 3-G International, Inc., a privately held company that developed and delivered smart card and biometric authentication products. In August 2001, it acquired Securant Technologies, Inc., a privately held company that produced ClearTrust, an identity management product. In December 2005, it acquired Cyota, a privately held Israeli company specializing in online security and anti-fraud solutions for financial institutions. In April 2006, it acquired PassMark Security. On September 14, 2006, RSA stockholders approved the acquisition of the company by EMC Corporation for $2.1 billion. In 2007, RSA acquired Valyd Software, a Hyderabad-based Indian company specializing in file and data security. In 2009, RSA launched the RSA Share Project. As part of this project, some of the RSA BSAFE libraries were made available for free. To promote the launch, RSA ran a programming competition with a US$10,000 first prize. In 2011, RSA introduced a new CyberCrime Intelligence Service designed to help organizations identify computers, information assets and identities compromised by trojans and other online attacks. In July 2013, RSA acquired Aveksa, a leader in the identity and access governance sector. On September 7, 2016, RSA was acquired by and became a subsidiary of Dell EMC Infrastructure Solutions Group through the acquisition of EMC Corporation by Dell Technologies in a cash and stock deal led by Michael Dell. On February 18, 2020, Dell Technologies announced their intention to sell RSA for $2.075 billion to Symphony Technology Group. In anticipation of the sale of RSA to Symphony Technology Group, Dell Technologies made the strategic decision to retain the BSAFE product line.
To that end, RSA transferred BSAFE products (including the Data Protection Manager product) and customer agreements, including maintenance and support, to Dell Technologies on July 1, 2020. On September 1, 2020, Symphony Technology Group (STG) completed its acquisition of RSA from Dell Technologies. RSA became an independent company, one of the world’s largest cybersecurity and risk management organizations.

Controversy
SecurID security breach
On March 17, 2011, RSA disclosed an attack on its two-factor authentication products. The attack was similar to the Sykipot attacks, the July 2011 SK Communications hack, and the NightDragon series of attacks. RSA called it an advanced persistent threat. Today, SecurID is more commonly used as a software token rather than as the older physical tokens.

Relationship with NSA
RSA's relationship with the NSA has changed over the years. Reuters' Joseph Menn and cybersecurity analyst Jeffrey Carr have noted that the two once had an adversarial relationship. In its early years, RSA and its leaders were prominent advocates of strong cryptography for public use, while the NSA and the Bush and Clinton administrations sought to prevent its proliferation. In the mid-1990s, RSA and Jim Bidzos led a "fierce" public campaign against the Clipper Chip, an encryption chip with a backdoor that would allow the U.S. government to decrypt communications. The Clinton administration pressed telecommunications companies to use the chip in their devices, and relaxed export restrictions on products that used it. (Such restrictions had prevented RSA Security from selling its software abroad.) RSA joined civil libertarians and others in opposing the Clipper Chip by, among other things, distributing posters with a foundering sailing ship and the words "Sink Clipper!" RSA Security also created the DES Challenges to show that the widely used DES encryption was breakable by well-funded entities like the NSA.
The relationship shifted from adversarial to cooperative after Bidzos stepped down as CEO in 1999, according to Victor Chan, who led RSA's engineering department until 2005: "When I joined there were 10 people in the labs, and we were fighting the NSA. It became a very different company later on." For example, RSA was reported to have accepted $10 million from the NSA in 2004 in a deal to use the NSA-designed Dual EC DRBG random number generator in their BSAFE library, despite many indications that Dual_EC_DRBG was both of poor quality and possibly backdoored. RSA Security later released a statement about the Dual_EC_DRBG kleptographic backdoor: In March 2014, it was reported by Reuters that RSA had also adopted the extended random standard championed by NSA. Later cryptanalysis showed that extended random did not add any security, and it was rejected by the prominent standards group Internet Engineering Task Force. Extended random did, however, make NSA's backdoor for Dual_EC_DRBG tens of thousands of times faster to use for attackers with the key to the Dual_EC_DRBG backdoor (presumably only NSA), because the extended nonces in extended random made part of the internal state of Dual_EC_DRBG easier to guess. Only RSA Security's Java version was hard to crack without extended random, since the caching of Dual_EC_DRBG output in, for example, RSA Security's C programming language version already made the internal state fast enough to determine. And indeed, RSA Security only implemented extended random in its Java implementation of Dual_EC_DRBG.
The backdoor could have made data encrypted with these tools much easier to break for the NSA, which would have had the secret private key to the backdoor. Scientifically speaking, the backdoor employs kleptography, and is, essentially, an instance of the Diffie–Hellman kleptographic attack published in 1997 by Adam Young and Moti Yung. RSA Security employees should have been aware, at least, that Dual_EC_DRBG might contain a backdoor. Three employees were members of the ANSI X9F1 Tool Standards and Guidelines Group, to which Dual_EC_DRBG had been submitted for consideration in the early 2000s. The possibility that the random number generator could contain a backdoor was "first raised in an ANSI X9 meeting", according to John Kelsey, a co-author of the NIST SP 800-90A standard that contains Dual_EC_DRBG. In January 2005, two employees of the cryptography company Certicom—who were also members of the X9F1 group—wrote a patent application that described a backdoor for Dual_EC_DRBG identical to the NSA one. The patent application also described three ways to neutralize the backdoor. Two of these—ensuring that the two arbitrary elliptic curve points P and Q used in Dual_EC_DRBG are independently chosen, and a smaller output length—were added to the standard as an option, though NSA's backdoored version of P and Q and large output length remained as the standard's default option. Kelsey said he knew of no implementers who actually generated their own non-backdoored P and Q, and there have been no reports of implementations using the smaller output length. Nevertheless, NIST included Dual_EC_DRBG in its 2006 NIST SP 800-90A standard with the default settings enabling the backdoor, largely at the behest of NSA officials, who had cited RSA Security's early use of the random number generator as an argument for its inclusion.
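The structure of the backdoor described here can be illustrated with a toy analogue. The sketch below replaces the elliptic-curve points P and Q with exponentiation in a multiplicative group and omits the output truncation that a real attacker would have to brute-force; all constants and names are illustrative, and this is not the real Dual_EC_DRBG construction:

```python
# Toy analogue of a Dual_EC_DRBG-style trapdoor (NOT the real construction):
# modular exponentiation stands in for elliptic-curve scalar multiplication,
# and outputs are untruncated, so a single output suffices for the attacker.
p = 2**61 - 1            # a Mersenne prime; the group is integers mod p
Q = 5                    # public constant
d = 123456789            # secret trapdoor, known only to the designer
P = pow(Q, d, p)         # public constant, secretly P = Q^d mod p

def step(state):
    """One generator step: (next state, output block)."""
    return pow(P, state, p), pow(Q, state, p)

s = 42                                  # honest user's seed
s, r1 = step(s)                         # attacker observes output r1
s, r2 = step(s)

s_recovered = pow(r1, d, p)             # r1^d = Q^(s*d) = P^s = next state
_, r2_predicted = step(s_recovered)
assert r2_predicted == r2               # all later output is now predictable
```

Choosing P and Q independently at random, as the Certicom patent application suggested, destroys the relation P = Q^d that the trapdoor relies on.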
The standard also did not fix the unrelated (to the backdoor) problem that the CSPRNG was predictable, which Gjøsteen had pointed out earlier in 2006, and which led Gjøsteen to call Dual_EC_DRBG not cryptographically sound. ANSI standard group members and Microsoft employees Dan Shumow and Niels Ferguson made a public presentation about the backdoor in 2007. Commenting on Shumow and Ferguson's presentation, prominent security researcher and cryptographer Bruce Schneier called the possible NSA backdoor "rather obvious", and wondered why NSA bothered pushing to have Dual_EC_DRBG included, when the general poor quality and possible backdoor would ensure that nobody would ever use it. Until the Snowden leak, there does not seem to have been a general awareness that RSA Security had made it the default in some of its products in 2004. In September 2013, the New York Times, drawing on the Snowden leaks, revealed that the NSA worked to "Insert vulnerabilities into commercial encryption systems, IT systems, networks, and endpoint communications devices used by targets" as part of the Bullrun program. One of these vulnerabilities, the Times reported, was the Dual_EC_DRBG backdoor. With the renewed focus on Dual_EC_DRBG, it was noted that RSA Security's BSAFE used Dual_EC_DRBG by default, which had not previously been widely known. After the New York Times published its article, RSA Security recommended that users switch away from Dual_EC_DRBG, but denied that they had deliberately inserted a backdoor. RSA Security officials have largely declined to explain why they did not remove the dubious random number generator once the flaws became known, or why they did not implement the simple mitigation that NIST added to the standard to neutralize the suggested and later verified backdoor. On 20 December 2013, Reuters' Joseph Menn reported that the NSA had secretly paid RSA Security $10 million in 2004 to set Dual_EC_DRBG as the default CSPRNG in BSAFE.
The story quoted former RSA Security employees as saying that "no alarms were raised because the deal was handled by business leaders rather than pure technologists". Interviewed by CNET, Schneier called the $10 million deal a bribe. RSA officials responded that they have not "entered into any contract or engaged in any project with the intention of weakening RSA's products." Menn stood by his story, and media analysis noted that RSA's reply was a non-denial denial, which denied only that company officials knew about the backdoor when they agreed to the deal, an assertion Menn's story did not make. In the wake of the reports, several industry experts cancelled their planned talks at RSA's 2014 RSA Conference. Among them was Mikko Hyppönen, a Finnish researcher with F-Secure, who cited RSA's denial of the alleged $10 million payment by the NSA as suspicious. Hyppönen announced his intention to give his talk, "Governments as Malware Authors", at a conference quickly set up in reaction to the reports: TrustyCon, to be held on the same day and one block away from the RSA Conference. At the 2014 RSA Conference, former RSA Security Executive Chairman Art Coviello defended RSA Security's choice to keep using Dual_EC_DRBG by saying "it became possible that concerns raised in 2007 might have merit" only after NIST acknowledged the problems in 2013. Products RSA is best known for its SecurID product, which provides two-factor authentication to hundreds of technologies utilizing hardware tokens that rotate keys on timed intervals, software tokens, and one-time codes. In 2016, RSA re-branded the SecurID platform as RSA SecurID Access. This release added single sign-on capabilities and cloud authentication for resources using SAML 2.0 and other types of federation. The RSA SecurID Suite also contains the RSA Identity Governance and Lifecycle software (formerly Aveksa).
The software provides visibility of who has access to what within an organization and manages that access with various capabilities such as access review, request, and provisioning. RSA enVision is a security information and event management (SIEM) platform, with a centralised log-management service that claims to "enable organisations to simplify compliance process as well as optimise security-incident management as they occur." On April 4, 2011, EMC purchased NetWitness and added it to the RSA group of products. NetWitness was a packet capture tool aimed at gaining full network visibility to detect security incidents. This tool was re-branded RSA Security Analytics and was a combination of RSA enVision and NetWitness as a SIEM tool that did log and packet capture. The RSA Archer GRC platform is software that supports business-level management of governance, risk management, and compliance (GRC). The product was originally developed by Archer Technologies, which EMC acquired in 2010. See also Hardware token RSA Factoring Challenge RSA Secret-Key Challenge BSAFE RSA SecurID Software token References Cryptography organizations American companies established in 1982 Software companies based in Massachusetts Software companies established in 1982 Former certificate authorities Computer security companies Companies based in Bedford, Massachusetts 1982 establishments in Massachusetts 2020 mergers and acquisitions Software companies of the United States Private equity portfolio companies
VocalTec
VocalTec Communications Inc. is an Israeli telecom equipment provider. The company was founded in 1985 by Alon Cohen and Lior Haramaty, who patented the first Voice over IP audio transceiver. VocalTec has supplied major customers such as Deutsche Telekom, Telecom Italia, and many others. History VocalTec was founded in 1985 by Alon Cohen and Lior Haramaty while still serving together in the IDF, and was officially incorporated in 1989. Its initial operations were devoted to research, development and commercialization of products which provided audio and voice capabilities to personal computers and over computer networks. Cohen and Haramaty developed and manufactured a PC sound card (SpeechBoard TM) that was sold mainly to the local market for various uses such as education, advertising, radio broadcasting, and to the visually-impaired community in Israel with unique text-to-speech software enabling blind people to use a computer in Hebrew as well as English. As text-to-speech was not available in Hebrew at all, and rarely available in English, they developed both from scratch, utilizing Haramaty's voice and a user-updatable dictionary of words that was periodically merged between all users. Other projects during the mid-to-late 1980s included audio-editing software, an external audio card (mainly for laptops) that connected to the parallel (printer) port, a standalone digital audio playback device for frozen dessert trucks, multimedia presentations with audio (based on IBM's Story Board presentation software), a text-to-speech system for mute users which enabled "talking" and conducting phone calls, a broadcast recording, editing and transfer system for an offshore radio station, automated IVR information systems, voice messaging over LAN, and many other projects utilizing digital audio. In 1990, technology entrepreneur Elon Ganor joined the company to manage international sales and marketing, and was later appointed CEO.
In 1993, VocalTec introduced The CAT, a peripheral device that provided audio capability for personal computers, to the international market. In 1993 and 1994, the company introduced additional products, including CATBoard, a full-duplex audio card, and an internal audio card that provided high-level compression. Net sales of these products totaled approximately $0.3 million, $0.4 million and $0.2 million in 1993, 1994 and the first nine months of 1995, respectively, and the company did not expect to recognize significant revenues from sales of these products in the future. In early 1993 the company partnered with ClassX, a group of innovative teenagers led by Ofer Shem-Tov (a childhood friend of Haramaty), to develop MS Windows audio drivers and related software. ClassX was acquired by VocalTec in 1993 and became the core of VocalTec's software and network development team, and the company recruited Rami Amit as a hardware engineer (also a childhood friend of Haramaty and Shem-Tov). Despite Ganor's initial objection, VocalTec's management decided to shift the company's focus to software, and in 1993 VocalChat was born: software that enabled voice communication from one PC to another on a local and wide area network, alongside VocalChat LAN/WAN, hardware and software products that enabled real-time voice conversations over local and wide area computer networks. The software was developed, based on Cohen and Haramaty's audio transceiver design, by a group of developers including Ofer Shem Tov, Ofer Kahana, Elad Sion (who died young in a car accident), Dror Tirosh, Rami Amit and others. The software was presented in Atlanta in May 1993 at the Network InterOp trade show. In 1994 support for Internet Protocol was added, and on Friday, February 10, 1995, "Internet Phone" was launched with a near-full-page Wall Street Journal article by WSJ Boston correspondent Bill Bulkeley; "Hello World! Audible Chats on the Internet" was the header.
VocalTec Internet Phone VocalTec released the first ever Internet VoIP application in February 1995. The product was named Internet Phone, though according to Wired magazine many people simply called it iPhone, and it was the world's first VoIP software. The software was invented by Alon Cohen and Lior Haramaty, the two co-founders of VocalTec Ltd. At the base of the Internet Phone was Cohen and Haramaty's invention named the "Audio Transceiver", which managed the dynamic jitter buffer that was critical for achieving the lowest possible audio latency while handling packet loss, packet re-ordering, and receiver/transmitter sample rate adjustments. The first implementation of the "Audio Transceiver" was carried out by Elad Sion. Initial Public Offering VocalTec had an initial public offering on the NASDAQ on February 6, 1996. The company sold 2,500,000 shares for $19 a share. 1,750,000 shares were sold by the company and 750,000 were sold by selling shareholders, including Elon Ganor, VocalTec's CEO, and his brother-in-law, Ami Tal, through their holding of La Cresta International Trading Inc. VocalTec's leadership, who managed its successful IPO, included: Elon Ganor - Chairman of the Board and CEO; Ami Tal - Director and Chief Operating Officer; Alon Cohen - Director and Chief Technology Officer; Lior Haramaty - Director and Vice President Technical Marketing; Yahal Zilka - Chief Financial Officer and Secretary; Daniel Nissan - Vice President Marketing; and Ohad Finkelstein - Vice President International Sales. In 1997, Deltathree, an American company engaged in the business of voice over IP telephony services, launched an Internet-based international low-cost calling service using VocalTec's VoIP technology and VocalTec Internet Phone "PC to Phone" system.
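The internals of the original Audio Transceiver were not published, but the jitter-buffer idea it relied on can be sketched. The toy class below (its name and `depth` parameter are invented for illustration) shows the playout side only: packets are reordered by sequence number, playback waits until a few packets have been buffered to absorb network jitter, and a gap in the sequence is flagged for packet-loss concealment ("PLC") instead of stalling the audio stream.

```python
import heapq

class JitterBuffer:
    """Minimal playout-side jitter buffer sketch (fixed depth, not adaptive)."""

    def __init__(self, depth=3):
        self.depth = depth      # packets to accumulate before playout starts
        self.heap = []          # min-heap of (sequence_number, payload)
        self.next_seq = None    # next sequence number due for playout

    def receive(self, seq, payload):
        """Called as packets arrive from the network, possibly out of order."""
        heapq.heappush(self.heap, (seq, payload))

    def play(self):
        """Called once per playout tick. Returns the payload due now,
        "PLC" when the due packet was lost, or None while buffering/underrun."""
        if self.next_seq is None:
            if len(self.heap) < self.depth:
                return None                   # still filling the buffer
            self.next_seq = self.heap[0][0]   # start from the oldest packet
        while self.heap and self.heap[0][0] < self.next_seq:
            heapq.heappop(self.heap)          # duplicate or too-late packet: drop
        if not self.heap:
            return None                       # underrun: wait for the network
        if self.heap[0][0] == self.next_seq:
            self.next_seq += 1
            return heapq.heappop(self.heap)[1]
        self.next_seq += 1                    # gap: packet deemed lost
        return "PLC"

# Packets 0 and 2 arrive before packet 1; playout still comes out in order.
jb = JitterBuffer(depth=2)
jb.receive(0, "p0")
jb.receive(2, "p2")
jb.receive(1, "p1")
```

A real implementation would additionally grow or shrink `depth` from observed inter-arrival jitter and resample to track clock drift between sender and receiver, which is what made the latency "adaptive".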
The same year, Europe's largest telecommunications company, Deutsche Telekom, bought a 21.1 percent stake in VocalTec for $48.3 million, in addition to purchasing $30 million in telephony products, services, and support over the following two-and-a-half years. During the dot-com bubble the company's share price peaked at $3,363 per share on March 3, 2000 (split adjusted). In 2005, VocalTec completed a business combination with Tdsoft, a provider of VoIP gateways, and refocused on providing carrier-class multimedia and voice-over-IP systems for communication service providers. The company's Essentra suite comprised the essential building blocks required to develop a next-generation network, addressing customers' specific requirements in trunking, peering and residential/enterprise VoIP applications. Reverse merger with magicJack On July 16, 2010, magicJack took over VocalTec in a reverse takeover. ITXC In 1997, VocalTec founded ITXC Corporation, a US-based wholesale provider of Internet-based phone calls. The ITXC voice-over-IP network was powered by VocalTec technology. The company was founded after VocalTec's CEO at the time, Elon Ganor, met Tom Evslin from AT&T (who at the time led AT&T's WorldNet ISP initiative) at a conference; Evslin became ITXC's co-founder and CEO. VocalTec invested an initial $500,000 in ITXC Corporation and extended a credit of $1 million in VoIP gateway equipment in exchange for 19.9% of the company; AT&T followed with an additional investment. ITXC became the world's largest VoIP carrier, reaching a market cap of about $8 billion as a Nasdaq company in 2000 (prior to the March 2000 dot-com crash). In 2003 ITXC was acquired by Teleglobe. See also List of VoIP companies Silicon Wadi Voice over IP References Electronics companies of Israel VoIP companies of the United States Companies listed on the Nasdaq VoIP companies of Israel Companies based in Netanya
1932 Pittsburgh Panthers football team
The 1932 Pittsburgh Panthers football team was an American football team that represented the University of Pittsburgh as an independent during the 1932 college football season. In its ninth season under head coach Jock Sutherland, the team compiled an 8–1–2 record, shut out eight of its eleven opponents, suffered its sole loss to USC in the 1933 Rose Bowl, and outscored all opponents by a total of 182 to 60. The team played its home games at Pitt Stadium in Pittsburgh. Although there was no AP Poll to determine a national champion in 1932, the Knute K. Rockne Trophy was presented at the end of the season to the team deemed to be the national champion using the Dickinson System, a rating system developed by Frank G. Dickinson, a professor of economics at the University of Illinois. Michigan won the Rockne Trophy. Pittsburgh was ranked third. Halfback Warren Heller and end Joe Skladany were both consensus first-team selections to the 1932 All-America team, and center Joseph Tormey earned third-team United Press All-America honors. Schedule Preseason Paul Reider was elected by his teammates to captain the Panthers, and W. Don Harrison announced that Jack McParland and Elmer Rosenblum were selected to be co-student managers for the varsity team through the 1932 football season. On March 18, Coach Sutherland welcomed 60 candidates to spring practice. The Post-Gazette reported: "The Pitt coach is anxious to get under way, as he has the task of building an entirely new varsity line, as well as finding a punter...ahead of him." The spring session came to an end on April 23 with a regulation football game between the varsity and second team. Sophomore Henry Weisenbaugh scored a touchdown in the second quarter for the second teamers. Varsity back Warren Heller tied the score in the third period, but Clarence Hasson of the second team broke the tie in the last stanza with his touchdown and Zora Alpert added the point after. The second team triumphed 13 to 6.
On September 8, 50 Pitt players reported to Camp Hamilton for two weeks of preseason training. The Sun-Telegraph observed: "Despite the fact that the hardest football schedule in Pitt history lies ahead, and the entire varsity wall of 1931 rated by many the best in Pitt history is missing, Sutherland is so well pleased by the condition of his team and by the mental attitude of the players that he is almost able to forget his troubles for five or ten minutes at a time." Due to the Depression, the athletic department lowered the ticket prices for the 1932 schedule. Box seat season tickets were lowered from $23.00 to $18.50; sideline seats from $15.00 to $14.00; and end seats from $9.50 to $7.60. Individual game tickets were lowered fifty cents to a dollar depending on the section. Boys under 16 were admitted to the Ohio Northern game for a dime, and the special boys' price was a quarter for the remaining home games. Coaching staff Roster Game summaries Ohio Northern The Panthers opened their season at home on September 24 against the Ohio Northern Polar Bears, who last played Pitt in 1913. Pitt led the series 6–0 and had outscored the Polar Bears 179–0. Second-year head coach Harris Lamb led the 1931 team to a record-setting 6–2 season. The Pitt News noted: "The Ohio boys are not exactly hopeful of beating the Panthers, but expect to offer more opposition than was afforded by Miami U., which faded before Pitt's attack in last year's opener." Chester L. Smith of The Pittsburgh Press wrote: "This afternoon's match with the Polar Bears from Ada, O., is counted on by Dr. Jock Sutherland, the Pitt coach, to enable him to try out his first, second and third elevens. Captain Reider and his varsity mates will be on the field at the kickoff, but they will not remain in action long." Projected starter, fullback John Luch (appendicitis), was the only Panther on the roster unable to play. Pitt drubbed the Polar Bears 47–0, as forty-three Panthers saw action.
The Panthers scored seven touchdowns. Warren Heller and James Simms each scored two. Isadore Weinstock, Paul Reider and Richard Matesic contributed one apiece. Weinstock kicked three extra points and Tarciscio Onder converted two. Pitt accumulated 120 yards in penalties, which cost them three more scoring chances. The Pitt defense was also impressive, as it only allowed one first down and held the Northern offense to negative 74 yards for the game. The Polar Bears finished the season with a 4–2–1 record. The Pittsburgh Panthers and Ohio Northern Polar Bears would not meet on the gridiron again. The Pitt starting lineup for the game against Ohio Northern was Theodore Dailey (left end), Paul Cuba (left tackle), Charles Hartwig (left guard), Joseph Tormey (center), Tarciscio Onder (right guard), Robert Hoel (right tackle), Joseph Skladany (right end), Bob Hogan (quarterback), Warren Heller (left halfback), Paul Reider (right halfback) and Isadore Weinstock (fullback). Substitutes appearing in the game for Pitt were Louis Wojcihovski, Harvey Rooker, Arthur Craft, John Meredith, John Valenti, Ken Ormiston, Marwood Stark, George Shindehutte, George Shotwell, Leslie Wilkins, Francis Seigel, Frank Tiernan, Robert Timmons, Frank Walton, John Love, Stanley Oleojnicsak, Rocco Cutri, Miller Munjas, Howard O'Dell, James Simms, Melvin Brown, Walter Balasia, Mike Sebastian, Richard Matesic, Nicolas Kliskey, Arthur Sekay, Henry Weisenbaugh and Clarence Hasson. at West Virginia On October 1, the 28th edition of the Backyard Brawl was played at Mountaineer Field in Morgantown, WV. The Panthers led the series 18–8–1. Second-year coach Greasy Neale's squad was 0–1 after being upset at Forbes Field by Duquesne (3–0) in their opening game. The Mountaineers were optimistic playing on their home turf, but their lineup was missing two starters due to injuries – end Will Sortet and fullback Patsy Slate.
The Pitt Weekly noted: "When West Virginia wins a football game, that's news for Morgantown, but when West Virginia happens to beat Pitt, the football season for the Mountaineers is regarded as a howling success." On Friday, September 30, the Panther entourage bussed to Uniontown, PA for an overnight stay. The team arrived in Morgantown on Saturday morning. They lunched at the Morgantown Country Club prior to suiting up for the game. Coach Sutherland's lineup was missing two starters. Captain Paul Reider was injured in the Ohio Northern game, and center Joseph Tormey contracted a severe cold. Mike Sebastian replaced Reider and George Shotwell replaced Tormey. Home field was no advantage, as the Panthers manhandled the Mountaineers. Mike Sebastian, Warren Heller and Isadore Weinstock each scored a touchdown in the first quarter, and Weinstock added two placements for a 20 to 0 lead. Sutherland enlisted the second team for the second period in which fullback Henry Weisenbaugh scampered 6 yards for his first touchdown and Dick Matesic added the point after. The Panthers led 27 to 0 at halftime. The Pitt lineup of second and third stringers added a touchdown in both the third and fourth quarters. First Weisenbaugh, and then Howard Gelini carried the ball across the goal line, while Matesic added one placement to finalize the score at 40 to 0. West Virginia finished the season with a 5–5 record. Statistically, the Panthers dominated for the second week in a row. Offensively, Pitt earned 16 first downs and netted 419 yards. Defensively, they held West Virginia to 2 first downs and a net of 26 yards. 
The Pitt starting lineup for the game against West Virginia was Theodore Dailey (left end), Paul Cuba (left tackle), Charles Hartwig (left guard), George Shotwell (center), Tarciscio Onder (right guard), Robert Hoel (right tackle), Joseph Skladany (right end), Robert Hogan (quarterback), Warren Heller (left halfback), Mike Sebastian (right halfback) and Isadore Weinstock (fullback). Substitutes appearing in the game for Pitt were Louis Wojcihovski, Harvey Rooker, John Meredith, John Love, Robert Timmons, Ken Ormiston, Marwood Stark, John Valenti, Joseph Tormey, Francis Seigel, Stanley Oleojnicsak, Frank Walton, Karl Seiffert, Rocco Cutri, Miller Munjas, James Simms, Melvin Brown, Richard Matesic, Howard O'Dell, Henry Weisenbaugh and John Luch. Duquesne On October 8, the Duquesne Dukes and Pitt Panthers met on the gridiron for the first time since 1901, when Duquesne was known as Pittsburgh College. In 1927 Duquesne hired Elmer Layden, a member of Notre Dame's famous “Four Horsemen”, as head coach to upgrade their football program. After Layden led the Dukes to the Tri-State Conference title in 1928 and 1929, Duquesne became independent and upgraded their schedule. Layden's unbeaten team (3–0) came into this game with no injuries. The Dukes had outscored their opponents 49–0. The Pitt News mused: "It seems hardly possible that Duquesne will defeat the Panthers, but Duquesne followers have not lost hope. Considering the affair from all angles, it appears that Pitt will likely be the first team to defeat the Dukes by more than twenty points, a feat that has not been accomplished since Layden took charge." The Panthers were too strong for the Dukes, as they prevailed 33–0. Warren Heller led the offense with 2 short touchdown runs in the first quarter and Isadore Weinstock added an extra point. The second quarter was scoreless and Pitt led 13 to 0 at halftime. After Mike Sebastian raced 33 yards on a punt return to the Duquesne 6-yard line, he scored from the 3-yard line.
Weinstock added the point for a 20 to 0 lead at the end of three periods. Substitute backs Mike Nicksick and Richard Matesic added two touchdowns in the final stanza and Matesic converted a point after to end the scoring. Pitt earned 18 first downs and gained 440 total yards. Duquesne had 6 first downs and 171 total yards. The Dukes were only able to complete 7 of 23 pass attempts, and the Panther defense had 4 interceptions. Both coaches spoke with Les Biederman of The Pittsburgh Press. Coach Sutherland said: "If we hadn't got our share of the breaks, it might have been a much closer game. I had to get my players 'up there' for Duquesne. That's how much I thought of Elmer Layden's boys. The Dukes had plenty of heart." Coach Layden declared: "Those boys certainly charge as though they were the 'Light Brigade'...But I am certainly proud of my boys...They don't know the meaning of the word quit...It's no disgrace losing to Pitt this year. They're much better than last fall." Duquesne finished the season with a 7–2–1 record. The Pitt starting lineup for the game against Duquesne was Theodore Dailey (left end), Paul Cuba (left tackle), Charles Hartwig (left guard), Joseph Tormey (center), Tarciscio Onder (right guard), Robert Hoel (right tackle), Joe Skladany (right end), Robert Hogan (quarterback), Warren Heller (left halfback), Mike Sebastian (right halfback) and Isadore Weinstock (fullback). Substitutes appearing in the game for the Panthers were Louis Wojcihovski, Harvey Rooker, Frank Walton, John Love, Ken Ormiston, George Shotwell, John Valenti, Francis Seigel, John Meredith, Stanley Olejnicsak, Rocco Cutri, Miller Munjas, James Simms, Mike Nicksick, Howard O'Dell, Richard Matesic and Henry Weisenbaugh. at Army On October 15, Pitt and West Point met at Michie Stadium. Third-year Coach Ralph Sasse, who had led the Cadets to a 19–3–2 mark in two-plus campaigns, was retiring at the end of the season.
The Cadets had revenge on their minds, after being drubbed 26 to 0 the previous year, and wanted a victory for their coach against a phenomenal Pitt club. Army opened their season with victories over Carleton and Furman. The Cadets had three All-Americans in their lineup – guard Milton Summerfelt, end Richard King and quarterback Felix Vidal. Last year's starting quarterback, Edward Herb, had a broken leg and was replaced by Vidal. The Panthers had Captain Paul Reider back in the lineup at halfback, but George Shotwell replaced the injured Joseph Tormey at center. The Sun-Telegraph predicted that the Panthers would run a double wing-back offense, try more forward passes and use some trick plays to beat the Army. The Panthers eked out a hard-fought 18–13 victory. Jess Carver of the Sun-Telegraph summarized: "The Panthers lived up to their rating as favorites, but to the Army, a team that played its heart out for a victory that barely eluded its grasp, must go the lion's share of the laurels. The Army outplayed Pitt today, and don't you forget it." The Panthers opened the scoring late in the first period with a 55-yard scamper by Warren Heller. Isadore Weinstock's extra point attempt was blocked. The Panther offense regained possession and scored on a 29-yard pass from Heller to Joseph Skladany. Weinstock missed the point after, but Pitt led 12 to 0. The Army offense countered with an 8-play, 44-yard drive that ended with a 5-yard touchdown run by Thomas Kilday. Travis Brown missed the extra point and the halftime score was 12 to 6. The Cadets advanced the ball inside the Panther 5-yard line early in the third quarter, but the Pitt defense held. The Army offense regained possession on the Pitt 35-yard line. Kenneth Fields completed a 27-yard pass to Vidal. Vidal picked up 6 yards on first down and Fields scored on the next play. Charles Broshous place-kicked the extra point and Pitt trailed 12–13. The Panther offense responded with a 73-yard drive.
Heller completed a 48-yard pass to Skladany from his own 27-yard line to the Army 25-yard line. Six plays later Weinstock scored from the one. He missed the point after, but Pitt was back in the lead 18–13. Pitt moved the ball to the Army 11-yard line in the final period, but lost possession on downs. The Cadet offense then advanced the ball into Pitt territory, but the Panther defense kept them out of the end zone. Army finished the season with an 8–2 record. The Pitt starting lineup for the game against Army was Theodore Dailey (left end), Paul Cuba (left tackle), Charles Hartwig (left guard), George Shotwell (center), Tarciscio Onder (right guard), Robert Hoel (right tackle), Joe Skladany (right end), Robert Hogan (quarterback), Warren Heller (left halfback), Paul Reider (right halfback) and Isadore Weinstock (fullback). Substitutes appearing in the game for Pitt were Harvey Rooker, John Meredith, Ken Ormiston, John Valenti, Joseph Tormey, Frank Walton, Robert Timmons, Francis Seifert, Rocco Cutri, Miller Munjas, Mike Nicksick, Mike Sebastian, Richard Matesic and Henry Weisenbaugh. Ohio State The Homecoming match-up was against the Buckeyes of Ohio State. Fourth-year coach Sam Willaman's team was 1–1–1 on the season. The Buckeyes beat Ohio Wesleyan, tied Indiana and lost to Michigan. The Ohio State lineup boasted four All-Americans – end Sid Gilman, tackle Ted Rosequist, guard Joe Gailus and halfback Lew Hinchman. The Sun-Telegraph warned: "The Ohioans are due for a good game and Pitt for a letdown, and almost anything can happen this afternoon." The Panthers and Buckeyes had played twice, with each team winning one game. The Panthers won at home 18–2 in 1929 and lost at Columbus 16–7 the following year. Coach Sutherland started the same line-up as in the Army game except for Frank Walton, who replaced Robert Hoel at right tackle.
The Cincinnati Enquirer summed it up best: "An underrated Ohio State football eleven refused to respect pregame predictions here today and fought the Pitt Panther to a standstill in its own lair, holding Jock Sutherland's highly touted team to a scoreless tie." The Panther offense spent the first half in Ohio territory but could not score. The Buckeyes threatened the Panther goal twice in the third stanza, but came up short each time. The Panthers made a valiant final offensive effort with three minutes remaining in the game. They gained possession on their 14-yard line. Warren Heller completed a 52-yard pass play to Mike Sebastian, who was tackled on the State 34-yard line. Sebastian raced 20 yards for another first down on the State 14-yard line. Buckeye end Sidney Gilman threw Sebastian for a 13-yard loss to the 27-yard line. Sebastian threw an incomplete pass to Theodore Dailey in the end zone, but State halfback Thomas Keefe was called for interference, and Pitt had first down on the 1-yard line. Three futile line bucks and an incomplete pass turned the ball over to the Buckeyes. The game ended seconds later. David Finoli noted in When Pitt Ruled the Gridiron that Coach Willaman instructed his defense to repeatedly jump offsides when the ball was on the one yard line - "which at that point and time in college football history allowed the clock to run, giving Pitt little time to score. The Buckeye offside ploy proved to be successful, running the clock down as Sebastian failed to score on a third attempt, leaving Pitt with a fourth and inches and seconds left. They decided to pass; Sebastian thought he had completed the winning pass only to see it fall harmlessly to the ground." Ohio State finished the season with a 4–1–3 record.
The Pitt starting lineup for the Ohio State game was Theodore Dailey (left end), Paul Cuba (left tackle), Tarciscio Onder (left guard), George Shotwell (center), Charles Hartwig (right guard), Frank Walton (right tackle), Joseph Skladany (right end), Robert Hogan (quarterback), Warren Heller (left halfback), Mike Sebastian (right halfback) and Isadore Weinstock (fullback). Substitutes appearing in the game for Pitt were Harvey Rooker, Rocco Cutri, Joseph Tormey, John Valenti and Mike Sebastian. Notre Dame On October 29 the Fighting Irish of Notre Dame, sporting a 3–0 record on the season and owning a 4–0–1 all-time record against the Panthers, arrived at Pitt Stadium as a 3 1/2 to 1 favorite. Coach Heartley Anderson brought 37 players east and opted to start his first string line and second string backfield. The Irish squad had five All-Americans – tackle Joe Kurth, end Edwin Kasky, tackle Edward “Moose” Krause, fullback George Melinkovich and guard James Harris. Coach Anderson told The Pittsburgh Press: "I think we'll win alright, but I'm predicting no score...This is one game we're determined to win – we're pointed for it, realizing it will be one of the hardest tests we will have all fall." Coach Sutherland adjusted his starting lineup - Joseph Tormey returned to the lineup at center; Miller Munjas started the game at quarterback for the injured Robert Hogan; and Mike Sebastian replaced Captain Paul Reider at right halfback. "This news has further strengthened the odds on the visitors." Earlier in the week Pitt halfback Mike Nicksick was declared ineligible due to scholastic problems. Edward J. Neil wrote in The South Bend Tribune: "The panther, regal jungle cat, and football team alike, is most dangerous when wounded. Cornered, it bares its fangs for the last fight to the death. 
A mighty Notre Dame eleven, hailed the greatest in the land, found that out for the first time today as the Panthers of Pittsburgh, battered and groggy, lashed out, in a dying fourth quarter effort that stunned the green grenadiers from South Bend, sent them reeling down to a 12 to 0 defeat, and chalked on the pages of football history one of the greatest upsets of all times." The Irish took the opening kick-off and advanced the ball to the Pitt 25-yard line. The Pitt defense stiffened and Notre Dame lost the ball on downs. In the second period, the Irish offense sustained a 50-yard drive to the Pitt 19-yard line, but the Panthers held again and took the ball on downs. A 40-yard march in the third stanza put the Irish within 9 yards of the Panther goal. The Panther defense stopped the Irish a few feet short on fourth down, and Pitt quarterback Bob Hogan punted out of danger. In the final quarter the Irish sustained another 35-yard march, which was stopped when Hogan intercepted Mike Koken's pass on the Pitt 27-yard line. Pitt earned two first downs to the Notre Dame 45-yard line before Mike Sebastian broke free around left end for the first score of the game. Isadore Weinstock's extra point attempt was blocked and Pitt led 6 to 0. Notre Dame received the kick-off and on second down Irish back McGuff's pass was intercepted by Theodore Dailey, who raced 36 yards unmolested for the second touchdown in less than two minutes. Weinstock's kick was again blocked and the final score read 12 to 0. The Irish finished the season with a 7–2 record. The Pitt starting lineup for the game against Notre Dame was Theodore Dailey (left end), Paul Cuba (left tackle), Charles Hartwig (left guard), Joseph Tormey (center), Tarciscio Onder (right guard), Frank Walton (right tackle), Joseph Skladany (right end), Miller Munjas (quarterback), Warren Heller (left halfback), Mike Sebastian (right halfback) and Isadore Weinstock (fullback).
Substitutes appearing in the game for Pitt were John Meredith, Ken Ormiston, Robert Hoel, Robert Hogan, Paul Reider and Henry Weisenbaugh. at Penn Pitt's third road trip was across the state to Philadelphia to play the undefeated Penn Quakers. Penn, with a record of 5–0, had outscored their opposition 153 to 13. The schools last met in 1925, and the Panthers led the all-time series 8–1–1. Second-year Penn coach Harvey Harman played tackle on the Pitt teams of 1919–1921, and his line coach, Alec Fox, played guard for Coach Sutherland in 1927 and 1928. Penn was injury-free after their previous game against Navy, so Harman used the same starting lineup against the Panthers. The Quaker line was anchored by All-America tackle Howard Colehower. The Panthers arrived in Philadelphia on Friday morning and were housed at the Merchant's Country Club at Oreland. Sutherland held a scrimmage on the grounds Friday afternoon. They traveled to Franklin Field on Saturday just before game time. The Panthers were in the best shape of the season, but were still without their injured Captain, Paul Reider. Mike Nicksick regained his eligibility and was back on the team. Coach Sutherland used the same starting lineup as the Notre Dame game, except that Bob Hogan replaced Miller Munjas at quarterback. In a sign of the times, The Pittsburgh Press published driving directions from Pittsburgh to Franklin Field: "Motorists planning to drive to Philadelphia to see the Pitt-Penn football game tomorrow were advised today by the Pittsburgh Motor Club to use Route 30, the Lincoln Highway. Route 22, the William Penn Highway, has two detours. The trip is 294 miles over highways that are reported in good condition. To reach Franklin Field in Philadelphia drivers should continue on the Lincoln Highway, which becomes Lancaster Avenue, to the intersection of Chestnut Street. Franklin Field is one block South of that intersection."
Perry Lewis of The Philadelphia Inquirer reported: "Penn is no longer an undefeated team. Treading on the heels of Notre Dame, the Quakers yesterday joined the lengthening procession of distinguished elevens that have been crushed beneath Pitt's 1932 gridiron juggernaut. The score was 19 to 12 – three touchdowns to two...In the neighborhood of 70,000 worshippers at the shrine of King Football framed the emerald arena where the mightiest gridiron gladiators of the Keystone State battled to a finish in one of the most savagely fought imbroglios these arch football rivals have ever waged." Early in the second period the Panther offense capped a 67-yard, thirteen-play drive with Warren Heller scoring from four yards out on fourth down. Isadore Weinstock's placement was perfect and Pitt led 7 to 0. Penn countered after blocking a Pitt punt attempt from the end zone. The ball was recovered by Pitt on their 10-yard line and Penn took possession. After a 4-yard loss on first down, Penn halfback Don Kellett completed a 14-yard touchdown pass to end John Powell. Monroe Smith missed the point after and Pitt led 7–6 at halftime. Coach Sutherland was unhappy and, after the break, started the second string. Henry Weisenbaugh intercepted an errant Penn pass on the Pitt 40-yard line and raced 47 yards to the Penn 13-yard line. Four plays later Weisenbaugh bulled his way into the end zone and Pitt led 13 to 6. Joe Matesic missed the point after. In the final period Pitt gained possession on their 20-yard line. Runs by Heller, Mike Sebastian and Weisenbaugh moved the ball to the Penn 11-yard line. After a penalty and a 6-yard loss, Heller threw a 30-yard pass to Sebastian for Pitt's final touchdown. Weinstock failed to convert the extra point and Pitt led 19–6. Penn countered with a 57-yard Kellett punt return for a touchdown. Late in the game Pitt back John Luch fumbled a punt and Penn recovered the ball on the Pitt 15-yard line.
Kellett's pass to the end zone was incomplete and the Panthers went back to Pittsburgh victorious. Penn finished the season with a 7–2 record. The statistics were lopsided: Pitt earned 13 first downs to the Quakers' 5; Pitt gained 354 yards and Penn 132; Pitt lost 2 fumbles and Penn 1; each team intercepted 3 passes; Pitt was penalized 95 yards and Penn 55 yards. The Pitt starting lineup for the game against Penn was Theodore Dailey (left end), Paul Cuba (left tackle), Charles Hartwig (left guard), Joseph Tormey (center), Tarciscio Onder (right guard), Frank Walton (right tackle), Joseph Skladany (right end), Bob Hogan (quarterback), Warren Heller (left halfback), Mike Sebastian (right halfback) and Isadore Weinstock (fullback). Substitutes appearing in the game for Pitt were Harvey Rooker, John Meredith, Ken Ormiston, Frank Kutz, George Shotwell, Francis Seigel, Robert Hoel, Robert Timmons, Miller Munjas, Howard O'Dell, James Simms, Henry Weisenbaugh and John Luch. at Nebraska The Panthers' train delivered the squad home from Philadelphia on Sunday morning. On Wednesday night 38 Panthers (the largest Pitt traveling squad to that time) reboarded at Penn Station for the western trip to play the Nebraska Cornhuskers in Lincoln, NE. On Thursday the team had a 12-hour layover in Chicago and worked out at Stagg Field. Friday was spent in Omaha, NE with a workout at Ak-Sar-Ben (Nebraska spelled backward) Pavilion. The team departed for Memorial Stadium on Saturday morning. The Panthers led the all-time series 3–1–2. This was Pitt's fourth trip to Lincoln and two of the previous visits ended in scoreless ties. Coach Sutherland started his second string to give the varsity some rest. But, according to the Sun-Telegraph: "The varsity will be ready for instantaneous relief duty." Fourth-year coach Dana X. Bible's Cornhuskers were 4–1. Their only blemish was a one-point loss to Minnesota.
The Lincoln Star noted: "A win would be unusually sweet in view of the 40–0 rampage the Pittsburghers staged at Cornhusker expense on the Smoky City gridiron last Thanksgiving day....With the squad 100 per cent physical condition for the first time since the start of the season, the Scarlet and Cream is prepared to meet the Panthers in a give-and-take affair." John Bentley of The Lincoln Star reported: "Nebraska and Pittsburgh, the team that beat Notre Dame, fought a scoreless tie at the stadium Saturday afternoon as 27,000 spectators watched one of the toughest toe-to-toe gridiron battles that has ever been fought here...Nebraska outplayed the Panthers from first to last...Nebraska outdowned the Panthers 13 to 7, outrushed what has been termed the greatest backfield in America, 283 yards to 183 and in net yards gained, which includes passes had the edge of 277 yards to 198." Both defenses were the deciding factor in the frigid conditions. In the third quarter the Pitt offense advanced the ball 64 yards to the Husker 11-yard line and lost the ball on downs. In the fourth period Pitt moved the ball to the Nebraska 26-yard line. Husker quarterback Bernie Masterson stopped the drive by intercepting a Warren Heller pass. The Pitt defense had to thwart three Husker drives. In the second quarter the Husker offense was on the Pitt 3-yard line, when Heller broke up a pass play on fourth down in the end zone. Early in the fourth quarter the Huskers advanced to the Pitt 27-yard line and lost the ball on downs. Later, they advanced to the Pitt 19-yard line and the Pitt defense stiffened. Masterson attempted a field goal from the 30-yard line that was short, and Pitt escaped with a scoreless tie. The Huskers won the Big Six Conference title and finished the season with a 7–1–1 record. 
The Pitt starting lineup for the game against Nebraska was Harvey Rooker (left end), John Meredith (left tackle), Ken Ormiston (left guard), George Shotwell (center), Francis Siegel (right guard), Robert Hoel (right tackle), Robert Timmons (right end), Miller Munjas (quarterback), Howard O'Dell (left halfback), Paul Reider (right halfback) and Henry Weisenbaugh (fullback). Substitutes appearing in the game for Pitt were Paul Cuba, Charles Hartwig, Joseph Tormey, Tarciscio Onder, Frank Walton, Joseph Skladany, Robert Hogan, Warren Heller, Mike Sebastian and Isadore Weinstock. Carnegie Tech On November 19 the nineteenth edition of the "City Game" was held at Pitt Stadium. Three trophies – the City of Pittsburgh, the Chamber of Commerce and the Warner Brothers awards – were presented to the victor. It was Homecoming Day at Carnegie Tech. The Skibos hoped the bonfire, pep rally and visiting grads would help the team upset the Panthers. The Tartans were 3–1–2 on the season for coach Walter Steffen, who was in his 18th year at Tech. Tech's lone loss was against Notre Dame (42 to 0). Star halfback Bill Spisak was injured in the previous game against Xavier and was replaced by Tech track star Tom Coulter. Pitt led the all-time series 14–4, but Tech had won four of the past nine games. Theodore Dailey, left end, and Paul Reider, right halfback, were healthy and back in the starting lineup. John Meredith replaced Paul Cuba at left tackle and Miller Munjas replaced Bob Hogan at quarterback. Coach Sutherland was worried his team was taking the Tartans too lightly, because his present squad had never lost to Tech. For the second game in a row the Panthers played in terrible weather. Harry Keck of the Sun-Telegraph described the scene: "All through the night and right up to a little before game time, the rain had poured down. And when the rain ended, it snowed until after they got the rain cover off the gridiron and everything had been rendered nice and gooey.
And then, the weather gods just sat back and hee-hawed themselves silly as the teams mud-horsed it up and down the field through four dragging periods to a 6–0 victory for Pitt." Despite the adverse weather conditions, both defensive units and punters kept the offenses from sustaining any drives. Even though Pitt earned 11 first downs to Tech's 3, outgained the Tartans 214 yards from scrimmage to 76, intercepted 3 passes and outpunted the Techsters by 10 yards per punt, the Panthers needed a fumble recovery deep in Carnegie territory to score. Late in the third period Pitt quarterback Miller Munjas punted. Tech quarterback Stuart Dueger fumbled, and Pitt end Harvey Rooker recovered on the Carnegie 4-yard line. On fourth down Isadore Weinstock plunged into the end zone for the only score of the game. His extra point attempt was blocked and Pitt survived 6 to 0, and kept their undefeated season intact. Carnegie Tech finished the season with a 4–3–2 record. Walter Steffen resigned with an 18-year record of 88–53–8. He was 4–10 versus Pitt. The Pitt starting lineup for the game against Carnegie Tech was Ted Dailey (left end), John Meredith (left tackle), Charles Hartwig (left guard), Joseph Tormey (center), Tarciscio Onder (right guard), Frank Walton (right tackle), Joseph Skladany (right end), Miller Munjas (quarterback), Warren Heller (left halfback), Paul Reider (right halfback) and Isadore Weinstock (fullback). Substitutes appearing in the game for Pitt were Harvey Rooker, Paul Cuba, George Shotwell, Robert Hogan, Mike Sebastian, Howard O'Dell and Henry Weisenbaugh. Stanford On November 26 Glenn Warner brought his Stanford Indians east to attempt to stymie Pitt's championship aspirations. Since taking the job at Stanford in 1924, Warner's eleven won three Pacific Coast Conference titles (1924, 1926, 1927), went to the Rose Bowl Game three times (1925, 1927, 1928) and won a national title (1926).
His present team came to Pitt Stadium with a 6–3–1 overall record and a 1–3–1 record in the Pacific Coast Conference. Consensus All-American guard Bill Corbus anchored the Stanford line. Pitt was 1–1 all-time against Stanford. In 1922 the Warner-led Panthers beat Stanford 16 to 7, and in the 1928 Rose Bowl the Warner-led Indians bested the Panthers 7 to 6. With an invitation to the Rose Bowl and a possible national title on the line, the Panthers had to beat a team that had not lost to an eastern squad during Warner's tenure. Harry G. Scott noted in his book Jock Sutherland, Architect of Men: “In all fairness, it must be stated that his (Warner's) 1932 team did not measure up to his famous teams of the two preceding years which came east to slaughter Army and Dartmouth.” Since both Captain Paul Reider and his backup Mike Sebastian were injured, coach Sutherland started Richard Matesic at right halfback. Otherwise, the Panthers were healthy. Eleven seniors played in their last home game: Paul Reider, Warren Heller, Ted Dailey, Joe Tormey, Paul Cuba, John Luch, Francis Seigel, Mel Brown, Art Sekay, Rocco Cutri and George Shindehuette. The Pitt Panthers finished the season undefeated by shutting out the Stanford eleven 7 to 0. Early in the first period Pitt quarterback Bob Hogan punted from his own 37-yard line and Ted Dailey downed the ball on the Stanford 1-yard line. Stanford tried to punt out of danger, but their attempt into the strong wind was downed on their 30-yard line. Mike Sebastian gained eight yards around end. Isadore Weinstock added nine through the middle. Warren Heller completed a pass to Dailey for a first down on the Stanford 2-yard line. On third down Heller pushed through for the score. Weinstock split the uprights for the extra point and Pitt led 7 to 0. Hogan's punts and the Pitt defense kept the Stanford offense deep in their own territory for three-plus quarters. In the fourth quarter Stanford faked a punt and managed their initial first down.
Two completed passes advanced the Indians to the Panther 25-yard line. The Panther defense stiffened, and Stanford had to punt. The score did not reflect how thoroughly Pitt dominated play. The Panthers gained 211 yards from scrimmage to 44 for Stanford. Pitt earned 11 first downs to 3 for the Indians. The Panthers ran 68 plays to 26 for Stanford. In what turned out to be Pop Warner's final season as coach of Stanford, his team finished with a 6–4–1 record. The Pitt starting lineup for the game against Stanford was Ted Dailey (left end), Paul Cuba (left tackle), Charles Hartwig (left guard), Joseph Tormey (center), Tarciscio Onder (right guard), Frank Walton (right tackle), Joseph Skladany (right end), Robert Hogan (quarterback), Warren Heller (left halfback), Richard Matesic (right halfback) and Isadore Weinstock (fullback). Substitutes appearing in the game for Pitt were Harvey Rooker, Miller Munjas, Mike Sebastian and Henry Weisenbaugh. vs. USC (Rose Bowl) By virtue of winning the Pacific Coast Conference title for the second season in a row, the undefeated Southern Cal Trojans (10–0) were selected to represent the West in the 1933 Rose Bowl Game. Four teams were deemed worthy opponents by the sportswriters – Michigan, Auburn, Colgate and Pittsburgh. Since Southern teams had represented the East for five of the previous seven years, Auburn (9–0–1) was eliminated from consideration. USC wanted to play unbeaten Michigan (8–0), but the Big Ten Conference could not get all members to agree to send the Wolverines west. Colgate was unbeaten, untied and unscored upon with a 9–0 record, but USC extended the offer to Pitt with their unbeaten 8–0–2 record. USC Athletic Director Willis O. Hunter told The San Francisco Examiner: “In selecting Pitt we feel that we have invited a team that has had a more representative schedule than Colgate. Pitt defeated both Notre Dame and Army. They were twice tied but unbeaten.
We feel that Pitt is entitled to another crack at us because when we played them two years ago I do not believe they were at their best.” Practice prior to the game was difficult due to the harsh winter in Pittsburgh. In addition, Coach Sutherland had to coach the North squad in a charity all-star game on December 7 in Baltimore, MD. The Panthers ended up practicing indoors at the Hunt Armory. For the final practice before heading west, Coach Sutherland arranged a December 17 game against a Pitt alumni squad. He had the alumni team run the Southern Cal offense and defense. The weather remained frigid and the game was played indoors in front of 2,000 die-hard fans. The makeshift field was 80 yards in length and the quarters were shortened to 10 minutes. The Panther varsity scored a late touchdown on a 55-yard scamper by Henry Weisenbaugh. Tarciscio Onder converted the point after and the varsity won, 7–0. On December 18, The Pittsburgh Press reported: “Thirty-six football players, three coaches, a team physician, a trainer, a custodian of equipment, four managers and the assistant director of athletics, will comprise the Pitt football party when it leaves here tonight at 11:20 o'clock headed for California...” At noon on Monday, the Panther train had a short layover in St. Louis. To the delight of some curious onlookers, Coach Sutherland had the team do some calisthenics on the Union Station platform. Tuesday, they arrived in Dallas and had a scheduled workout with Southern Methodist University on Ownby Field. The weather in Dallas was similar to Pittsburgh – rain, snow and freezing temperatures – but the Panthers were happy to get off the train and work out. On the ride from Dallas to Tucson, Coach Sutherland mused: “We are gambling our chances on the ten-day stop-over at Tucson. My team is in good shape but it will have to improve.
The lack of decent practice weather has hindered us.” The next morning they arrived in Tucson, which was blanketed in the heaviest snowfall of the past twelve years. Coach Sutherland contemplated moving camp to California, but the weatherman promised sunny days for the remainder of the Panthers' stay in Arizona. On the evening of December 31, the Panthers boarded the train for the thirteen-hour trip to Pasadena. Coach Sutherland admitted “his team is as ready as it will ever be, that his players are physically fit and mentally eager for the fray.” USC coach Howard Jones was in his eighth year and had two previous Rose Bowl victories – 1930 over Pitt, and 1932 over Tulane. His Trojans were the defending National Champs and were on a 19-game winning streak. The USC line featured three All-Americans – tackle Ernie Smith, tackle Tay Brown and guard Aaron Rosenberg. The team was healthy and Coach Jones emphasized the importance of not being overconfident. Since the USC line was heavier than Pitt's and Pitt had lost on their two prior trips to the Rose Bowl, the oddsmakers favored the Trojans by as much as 2 to 1. In front of the largest Rose Bowl crowd in history (83,000), USC beat Pitt handily (35–0) to capture the Rose Bowl Championship for the fourth time and the National Title for the second consecutive season. USC kicked off and forced the Panthers to punt. The Trojan offense proceeded to advance the ball 62 yards for the opening touchdown. The Pitt offense countered with a drive to the USC 32-yard line, but lost the ball on a fumble by Mike Sebastian. In the second quarter, the Panther offense penetrated to the Trojan 23-yard line but lost the ball on downs. The halftime score was 7 to 0. USC added a touchdown in the third period. Pitt botched a center snap and Trojan tackle Tay Brown recovered on the Pitt 7-yard line. Four plays later the score read USC 14 to Pitt 0.
To open the fourth quarter, the Trojans sustained a 62-yard drive, culminating in a touchdown to extend the score to 21 to 0. Another bad pass from center and a blocked punt led to the final two USC touchdowns of the game. USC totaled 22 first downs to Pitt's 9 and outgained the Panthers 278 yards to 193. The Trojan defense intercepted two passes and recovered two Pitt fumbles. George H. Beale wrote: “As the Trojans thus earned the best record of any Rose Bowl competitor, the defeat gave Pitt the worst record: three defeats in as many games.” Jock Sutherland admitted the Trojans should be the National Champs. “It was a smart, aggressive and versatile team,” he said. “It took advantage of the breaks. The score was not a real indication of the strength of the two teams for intercepted passes and fumbles played a large part of the scoring spree.” Trojan coach Howard Jones stated: “It was a great finish in a great season. The Trojan seniors playing their last game especially turned in fine performances. As to Pittsburgh, the Steel City eleven is one of the strongest we have met and during most of the game it gave us all we could handle. All Pittsburgh players lived up to the reputations which they brought to the coast.” The Pitt starting lineup for the Rose Bowl game was Ted Dailey (left end), Paul Cuba (left tackle), Charles Hartwig (left guard), Joseph Tormey (center), Tarciscio Onder (right guard), Frank Walton (right tackle), Joseph Skladany (right end), Robert Hogan (quarterback), Mike Sebastian (left halfback), Warren Heller (right halfback) and Isadore Weinstock (fullback). Substitutes appearing in the game for Pitt were Harvey Rooker, John Meredith, Ken Ormiston, George Shotwell, Francis Seigel, Robert Hoel, Miller Munjas, Paul Reider, Mike Nicksick, Henry Weisenbaugh and Louis Wojcihovski.
The Pasadena Post reported that while USC and Pitt were battling it out on the gridiron, “five hundred men and boys, armed with stones from the Arroyo Seco, attacked Pasadena police who were guarding the Rose Bowl yesterday afternoon, and after tearing down a portion of the high wire fence, engaged in the worst riot in history of Tournament of Roses East-West games.” Police had to use tear gas to disperse the mob. Thirty of the hooligans were arrested. Two policemen and numerous instigators were injured in the fracas. Postseason On their last day in L.A., the Panthers were given movie studio tours by Hollywood notables Joe E. Brown and Kay Francis. The next morning they headed east for the Grand Canyon and a donkey ride down the Bright Angel Trail. Their final sight-seeing stop was in Albuquerque, N.M., to visit the Isleta Indian village. On Sunday January 8, the Panthers arrived back in Pittsburgh, where they were greeted by a throng of 2,000 well-wishers. On January 18, the Pittsburgh Athletic Council awarded letters to the following members of the 1932 Pitt varsity football team: Paul Reider, Paul Cuba, Theodore Dailey, John Meredith, Kenneth Ormiston, Arthur Sekay, Joseph Tormey, Tarciscio Onder, Robert Hoel, Joseph Skladany, Robert Hogan, Warren Heller, Francis Seigel, Howard O'Dell, George Shotwell, Isadore Weinstock, Charles Hartwig, Michael Sebastian, Harvey Rooker, Henry Weisenbaugh, Elmer Rosenblum and John McParland. On Friday February 10, the athletic board of the University of Pittsburgh appointed James Hagan to the office of graduate manager of student athletics. Leroy Lewis (Col. '34) was named varsity manager for the 1933 football season. On February 26, senior fullback John Luch died from peritonitis, which he contracted after having his appendix removed. In September, at Camp Hamilton, he had been stricken with a severe case of appendicitis.
The doctor advised against the operation at that time, but his recovery was slow and he only played in 2 games. John was a three-letter athlete (football, track and boxing) at Pitt.
Amstrad PCW
The Amstrad PCW series is a range of personal computers produced by British company Amstrad from 1985 to 1998, and also sold under licence in Europe as the "Joyce" by the German electronics company Schneider in the early years of the series' life. The PCW, short for Personal Computer Word-processor, was targeted at the word processing and home office markets. When it was launched the cost of a PCW system was under 25% of the cost of almost all IBM-compatible PC systems in the UK, and as a result the machine was very popular both in the UK and in Europe, persuading many technophobes to venture into using computers. However the last two models, introduced in the mid-1990s, were commercial failures, being squeezed out of the market by the falling prices, greater capabilities and wider range of software for IBM-compatible PCs. In all models, including the last, the monitor's casing included the CPU, RAM, floppy disk drives and power supply for all of the system's components. All except the last included a printer in the price. Early models used 3-inch floppy disks, while those sold from 1991 onwards used 3½-inch floppies, which became the industry standard around the time the PCW series was launched. A variety of inexpensive products and services were launched to copy 3-inch floppies to the 3½-inch format so that data could be transferred to other machines. All models except the last included the Locoscript word processing program, the CP/M Plus operating system, Mallard BASIC and the Logo programming language at no extra cost. A wide range of other CP/M office software and several games became available, some commercially produced and some free. Although Amstrad supplied all but the last model as text-based systems, graphical user interface peripherals and the supporting software also became available. The last model had its own unique GUI operating system and set of office applications, which were included in the price.
However none of the software for previous PCW models could run on this system. Development and launch In 1984, Tandy Corporation executive Steve Leininger, designer of the TRS-80 Model I, admitted that "as an industry we haven't found any compelling reason to buy a computer for the home" other than for word processing. Amstrad's founder Alan Sugar realised that most computers in the United Kingdom were used for word processing at home, and allegedly sketched an outline design for a low cost replacement for typewriters during a flight to the Far East. This design featured a single "box" containing all the components, including a portrait-oriented display, which would be more convenient for displaying documents than the usual landscape orientation. However the portrait display was quickly eliminated because it would have been too expensive, and the printer also became a separate unit. To reduce the cost of the printer, Amstrad commissioned an ASIC (custom circuit) from MEJ Electronics, which had developed the hardware for Amstrad's earlier CPC-464. Two other veterans of the CPC-464's creation played important roles, with Roland Perry managing the PCW project and Locomotive Software producing the Locoscript word processing program and other software. The CP/M operating system was added at the last minute. During development the PCW 8256 / 8512 project was code-named "Joyce" after Sugar's secretary. For the launch the product name "Zircon" was jointly suggested by MEJ Electronics and Locomotive Software, as both companies had been spun off from Data Recall, which had produced a word processing system called "Diamond" in the 1970s. Sugar, preferring a more descriptive name, suggested "WPC" standing for "Word Processing Computer", but Perry pointed out that this invited jokes about Women Police Constables. Sugar reshuffled the initials and the product was launched as the "Personal Computer Word-processor", abbreviated to "PCW". 
The advertising campaign featured trucks unloading typewriters to form huge scrap heaps, with the slogan "It's more than a word processor for less than most typewriters". In Britain the system was initially sold exclusively through Dixons, whose chairman shared Sugar's dream that computers would cease to be exclusive products for the technologically adept and would become consumer products. Impact on the computer market In 1986, John Whitehead described the Amstrad PCW as "the bargain of the decade", and technology writer Gordon Laing said in 2007, "It represented fantastic value at a time when an IBM compatible or a Mac would cost a comparative fortune." At its United Kingdom launch in September 1985, the basic PCW model was priced at £399 plus value added tax, which included a printer, word processor program, the CP/M operating system and associated utilities, and a BASIC interpreter. Software vendors quickly made a wide range of additional applications available, including accounting, spreadsheet and database programs, so that the system was able to support most of the requirements of a home or small business. Shortly afterwards the Tandy 1000 was introduced in the UK with the MS-DOS operating system and a similar suite of business applications and became the only other IBM-compatible personal computer system available for less than £1,000 in Britain. At the time the cheapest complete systems from Apricot Computers cost under £2,000 and the cheapest IBM PC system cost £2,400. Although competitors' systems generally had more sophisticated features, including colour monitors, Whitehead thought the Amstrad PCW offered the best value for money. In the US the PCW was launched at a price of $799, and its competitors were initially the Magnavox Videowriter and Smith Corona PWP, two word-processing systems whose prices also included a screen, keyboard and printer. 
The magazine Popular Science thought that the PCW could not compete as a general-purpose computer, because its use of non-standard 3-inch floppy disk drives and the rather old CP/M operating system would prevent the range of available software from expanding beyond the spreadsheet, typing tutor and cheque book balancing programs already on sale. However, the magazine predicted that the PCW's large screen and easy-to-use word processing software would make it a formidable competitor for dedicated word processors in the home and business markets. The system was sold in the US via major stores, business equipment shops and electronics retailers. The PCW redefined the idea of "best value" in computers by concentrating on reducing the price, which totally disrupted the personal computer market. The low price encouraged home users to trade up from simpler systems like the Sinclair Spectrum, whose sales had passed their peak. According to Personal Computer World, the PCW "got the technophobes using computers". In the first two years over 700,000 PCWs were sold, gaining Amstrad 60% of the UK home computer market, and 20% of the European personal computer market, second only to IBM's 33.3% share. Having gained credibility as a computer supplier, Amstrad launched IBM-compatible PCs, once again focussing on low prices, with its PC1512 surpassing the IBM PC on performance and beating even the Taiwanese clones on price. Amstrad became the dominant British personal computer company, buying all the designs, marketing rights and product stocks of Sinclair Research Ltd's computer division in April 1986, while Apricot later sold its manufacturing assets to Mitsubishi and became a software company. In the PCW's heyday the magazines 8000 Plus (later called PCW Plus) and PCW Today were published specifically for PCW users.
In addition to the usual product reviews and technical advice, they featured other content such as articles by science fiction writer and software developer Dave Langford on his experiences of using the PCW. By 1989, units had been sold. When the PCW line was retired in 1998, 8 million machines had been sold. The Daily Telegraph estimated in 2000 that 100,000 were still in use in the UK, and said that the reliability of the PCW's hardware and software and the range of independently produced add-on software for its word processing program were factors in its continued popularity. Laing says the PCW line's downfall was that "proper PCs became affordable". IBM, Compaq and other vendors of more expensive computers had reduced prices drastically in an attempt to increase demand during the recession of the early 1990s. In 1993 the PCW still cost under £390 while a PC system with a printer and word processing software cost over £1,000. However, after adjustment for inflation the retail price of a multimedia IBM-compatible PC in 1997 was about 11% more than that of a PCW 8256 in 1985, and many home PCs were cast-offs, sometimes costing as little as £50, from large organisations that had upgraded their systems. Users of Windows, Unix or macOS systems who wish to run programs that were developed for the PCW 8256, 8512, 9256, 9512 and 9512+ can use an emulator called "Joyce". Another emulator, "CP/M Box", is available only for Windows.

Models and features

PCW 8256 and 8512

The PCW 8256 was launched in September 1985, and had 256 KB of RAM and one floppy disk drive. Launched a few months later, the PCW 8512 had 512 KB of RAM and two floppy disk drives. Both systems consisted of three units: a printer; a keyboard; and a monochrome CRT monitor whose casing included the processor, memory, motherboard, one or two floppy disk drives, the power supply for all the units and the connectors for the printer and keyboard. The monitor displayed green characters on a black background.
It measured diagonally, and showed 32 lines of 90 characters each. The designers preferred this to the usual personal computer display of 25 lines of 80 characters, as the larger size would be more convenient for displaying a whole letter. The monitor could also display graphics well enough for the bundled graphics program and for some games. The floppy disk drives on these models were in the unusual 3-inch "compact floppy" format, which was selected as it had a simpler electrical interface than 3½-inch drives. In the range's early days supplies of 3-inch floppies occasionally ran out, but by 1988 the PCW's popularity encouraged suppliers to compete for this market. There are several techniques for transferring data from a PCW to an IBM-compatible PC, some of which can also transfer in the opposite direction, and service companies that will do the job for a fee. While all the 3-inch disks were double-sided, the PCW 8256's 3-inch drive and the PCW 8512's upper one were single-sided, while the 8512's lower one was double-sided and double-density. Hence there were two types of disk: single-density, which could store 180 KB of data per side, equivalent to about 70 pages of text each; and double-density, which could store twice as much per side. The double-density drive could read single-density disks, but it was inadvisable to write to them using this drive. Users of single-sided drives had to flip the disks over to use the full capacity. The dot matrix printer had a sheet feed for short documents and a tractor attachment for long reports on continuous stationery. This unit could print 90 characters per second at draft quality and 20 characters per second at higher quality, and could also produce graphics. However, it had only 9 printing pins and even its higher quality did not match that of 24-pin printers. The dot matrix printer was not very robust as its chassis was made entirely of plastic.
Users who needed to support higher print volumes could buy a daisy-wheel printer from Amstrad, while those who needed to produce graphics could buy a graph plotter; the daisy-wheel printer could not produce graphics. The keyboard had 82 keys, some of which were designed for word processing, especially with the bundled Locoscript software – for example to cut, copy, and paste. Non-English characters such as Greek could be typed by holding down the ALT or EXTRA key, along with the SHIFT key if capitals were required. Other special key combinations activated caps lock, num lock and reboot. A wide range of upgrades became available. The PCW 8256's RAM could be expanded to 512 KB for a hardware cost of about £50. An additional internal floppy disk drive for the 8256 would cost about £100, and installation was fairly easy. Alternatively one could add external drives, for example if a 3½-inch drive was needed. Graphical user interface devices such as light pens, mice and graphic tablets could be attached to the expansion socket at the back of the monitor. Adding a serial interface connector, which cost about £50, made it possible to attach a modem or non-Amstrad printer. The designs were licensed to the German consumer electronics company Schneider, which slightly modified their appearance and sold them as "Joyce" and "Joyce Plus". The partnership between Amstrad and Schneider had been formed to market the Amstrad CPC range of computers, and broke up when Amstrad launched the PCW 9512.

PCW 9512 and 9256

The PCW 9512, introduced in 1987 at a price of £499 plus VAT, had a white-on-black screen instead of green-on-black, and the bundled printer was a daisy-wheel model instead of a dot-matrix printer. These models also had a parallel port, allowing non-Amstrad printers to be attached. The 9512 was also supplied with version 2 of the Locoscript word processor program, which included spellchecker and mail merge facilities. In all other respects the 9512's facilities were the same as the 8512's.
In 1991 the 9512 was replaced by the PCW 9256 and 9512+, both equipped with a single 3½-inch disk drive that could access 720 KB. The 9512+ had 512 KB of RAM, and two printer options, the Amstrad daisy-wheel unit and a series of considerably more expensive Canon inkjet printers: initially the BJ10e, later the BJ10ex and finally the BJ10sx. The 9256 had 256 KB of RAM and the same dot matrix printer as the 8256 and 8512, as well as the older Locoscript version 1.

PCW 10

This was a PCW 9256 with 512 KB of RAM, a parallel printer port, and Locoscript 1.5 instead of Locoscript 1. The PCW 10 was not a success, and few were produced. By this time other systems offered much better print quality, and the PCW was a poor choice as a general-purpose computer, because of its slow CPU and incompatibility with MS-DOS systems.

PcW16

This model, whose display labelled it "PcW16", was introduced in 1995 at a price of £299. Despite its name it was totally incompatible with all previous PCW systems. Instead of having two operating environments, Locoscript for word processing and CP/M for other uses, it had its own GUI operating system, known as "Rosanne". This could only run one application at a time, and starting another application made the previous one save all the files it had changed and then close. The bundled word processor was produced by Creative Technology, and could read Locoscript files but saved them in its own format. The package also included a spreadsheet, address book, diary, calculator and file manager. Amstrad never provided other applications, and very little third-party software was written for the machine. The display unit, which also contained the processor, motherboard and RAM, was the standard 640×480 pixels in size and worked in VGA mode. The PcW16 included a standard 1.4 MB floppy drive. While competitors included hard disk drives with capacities of a few hundred MB to a few GB, the PcW16 used a 1 MB flash memory to store the programs and user files.
Like previous PCW models, the PcW16 used the 8-bit Zilog Z-80 CPU, which first appeared in 1976, while other personal computers used 16-bit CPUs or the more recent 32-bit CPUs. The price included a mouse for use with the GUI, but did not include a printer. In the magazine PCW Plus Dave Langford expressed a series of concerns about the PcW16: the operating system could not run the many CP/M programs available for previous PCW models; the flash RAM was too small for a large collection of programs, but programs could not be run from the floppy disk, which was designed for backing up files; and a second-hand IBM PC with Locoscript Pro looked like a more sensible upgrade path for users of earlier PCWs. Few PcW16s were sold.

Software

This section covers the PCW 8xxx, 9xxx and 10 series; software for the PcW16 is described above.

Bundled Locoscript word processor

The word processing software Locoscript was included in the price of the hardware. The manual provided both a reference and a tutorial that could enable users to start work within 20 minutes, and some users found the tutorial provided as much information as they ever needed. The program enabled users to divide documents into groups, display the groups on a disk and then the documents in the selected group, and set up a template for each group. The "limbo file" facility enabled users to recover accidentally deleted documents until the disk ran out of space, when the software would permanently delete files to make room for new ones. Layout facilities included setting and using tab stops, production of page headers and footers, with automated page numbering; typographical effects including proportional spacing, a range of font sizes, and bold, italic and underline effects. The cut, copy and paste facility provided 10 paste buffers, each designated by a number, and these could be saved to a disk. The menu system had two layouts, one for beginners and the other for experienced users.
Locoscript supported 150 characters and, if used with the dot matrix printer, could print European letters including Greek and Cyrillic, as well as mathematical and technical symbols. The program allowed the user to work on one document while printing another, so that the relative slowness of the basic printer seldom caused difficulties. Locoscript did not run under the control of a standard operating system but booted directly from a floppy disk. Users had to reboot if they wanted to switch between Locoscript and a CP/M application, unless they used a utility called "Flipper", which could allocate separate areas of RAM to Locoscript and CP/M. Locoscript version 1, which was bundled with the PCW 8256 and 8512, had no spell checker or mail merge facilities. Version 2, which was bundled with the PCW 9512, included a spellchecker and could provide mail merge by interfacing to other products from Locomotive Software, such as LocoMail and LocoFile. Locoscript 2 also expanded the character set to 400.

CP/M operating system and applications

The PCW included a version of CP/M known as "CP/M Plus". This provided a range of facilities comparable to those of MS-DOS, but imposed a significant limitation: it could not address more than 64 KB of RAM. Since CP/M took 3 KB of this, the most that CP/M applications could use was 61 KB. The rest of the RAM was used as a RAM disk (exposed under the drive letter "M:" for "memory"), which was much faster than a floppy disk but lost all its data when the machine was powered off. On the other hand, the standalone Locoscript word processor program was able to use 154 KB as normal memory, and the rest as a RAM disk. Mallard BASIC, like LocoScript, was a Locomotive Software product, but ran under CP/M. This version of BASIC lacked built-in graphics facilities, but included JetSAM, an implementation of ISAM that supported multiple indexes per file, so that programs could access records directly by specifying values of key fields.
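The multiple-index idea behind an ISAM-style file such as JetSAM can be sketched in a few lines. The following Python sketch is purely illustrative: the class and method names are invented for this example and bear no relation to Mallard BASIC's actual JetSAM interface.

```python
# Illustrative model of an ISAM-style store with one index per key
# field, allowing direct access to records by key value rather than
# a sequential scan. (Hypothetical names; not the JetSAM API.)

class IsamFile:
    def __init__(self, key_fields):
        self.records = []
        self.indexes = {f: {} for f in key_fields}  # one index per key field

    def insert(self, record):
        pos = len(self.records)
        self.records.append(record)
        for field, index in self.indexes.items():
            index[record[field]] = pos              # key value -> record position

    def find(self, field, value):
        # Direct access by specifying a key field and value
        pos = self.indexes[field].get(value)
        return self.records[pos] if pos is not None else None

customers = IsamFile(["account", "surname"])
customers.insert({"account": "A001", "surname": "Sugar", "balance": 399})
print(customers.find("surname", "Sugar")["account"])  # A001
```

The same record is reachable through either index, which is the property the text describes: programs specify a value for any key field and get the record directly.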
The CP/M software bundle also included the Digital Research implementation of Seymour Papert's LOGO programming language and a graphics program that could produce pie charts and bar charts.

Sold separately

Many software vendors supplied versions of their products to run with CP/M Plus, making a wide range of software available for the PCW, often very cheaply:
- Alternative word processors included Superwriter and WordStar.
- Several spreadsheet programs became available, including Supercalc II and Microsoft's Multiplan.
- Database programs adapted for the PCW included Sage Database, Cardbox and dBase II.
- The MicroDesign, Desk Top Publisher, Newsdesk and Stop Press desktop publishing packages were used by groups of authors for newsletters.
- The Sage Group's Popular Accounts and Payroll, and the Camsoft payroll and accounting software.
- Other programming languages, including C.
- Many games for the PCW. Most were text adventures but there were also graphical games like Batman, Bounder and Head over Heels.

Free software

Many free packages could run under CP/M but required careful setting of options to run on the PCW series, although a significant number had installer programs that made this task easier. Programs that were already configured for the PCW covered a broad range of requirements including word processors, databases, graphics, personal accounts, programming languages, games, utilities and a full-featured bulletin board system. Many of these were at least as good as similar commercial offerings, but most had poor documentation.

Technical design

All PCW models, including the PcW16, used the Zilog Z80 range of CPUs. A 4 MHz Z80A was used in the 8256, 8512, 9512, 9256, 9512+ and PCW 10; and a 16 MHz Z80 in the PcW16. The Z80 could only access 64 KB of RAM at a time. Software could work round this by bank switching, accessing different banks of memory at different times, but this made programming more complex and slowed the system down.
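Bank switching of this kind can be modelled in a few lines. This Python sketch is a simplified illustration of the general technique (16 KB windows in the CPU's 64 KB address space, each mapped onto one of a larger pool of physical banks), not a cycle-accurate model of the PCW's actual gate array or port addresses.

```python
# Toy model of Z80-style bank switching: the CPU sees four 16 KB
# windows, and software remaps which physical 16 KB bank each window
# shows. (Simplified sketch; port numbers and bank layout invented.)

BANK_SIZE = 16 * 1024

class BankedMemory:
    def __init__(self, physical_banks=16):          # 16 x 16 KB = 256 KB total
        self.physical = [bytearray(BANK_SIZE) for _ in range(physical_banks)]
        self.mapping = [0, 1, 2, 3]                 # bank shown in each window

    def select(self, window, bank):
        self.mapping[window] = bank                 # the "bank switch"

    def read(self, addr):                           # addr is a 16-bit Z80 address
        window, offset = divmod(addr & 0xFFFF, BANK_SIZE)
        return self.physical[self.mapping[window]][offset]

    def write(self, addr, value):
        window, offset = divmod(addr & 0xFFFF, BANK_SIZE)
        self.physical[self.mapping[window]][offset] = value

mem = BankedMemory()
mem.write(0xC000, 42)   # window 3 currently maps physical bank 3
mem.select(3, 7)        # switch window 3 to physical bank 7
print(mem.read(0xC000)) # 0 -- a different physical bank is now visible
```

The cost the text mentions is visible here: the same 16-bit address can refer to different physical bytes depending on the current mapping, so software must track which bank is selected at every point.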
The PCW divided the Z80 memory map into four 16 KB banks. In CP/M, the memory used for the display was switched out while programs were running, giving more than 60 KB of usable RAM. While the Joyce architecture was designed with configurations of 128 KB and 256 KB of RAM in mind, no PCW was ever sold with 128 KB of RAM. Each PCW's CP/M application could not use more than 64 KB so the system used the rest of the RAM for a RAM drive. On the other hand, the standalone Locoscript word processor program was reported as using up to 154 KB as normal memory and the rest as a RAM disk. Unusually, the Z80 CPU in the PCW 8256, 8512, 9512, 9256 and 9512+ had no directly connected ROM, which most computers used to start the boot process. Instead, at startup, the ASIC (customised circuit) at the heart of the PCW provided access to part of the 1k ROM within the Intel 8041 microcontroller used to drive the printer. The Z80 would copy 256 bytes via the ASIC into RAM, providing sufficient instructions to load the first sector from a floppy. The ROM-based code cannot display text, being too small to support character generation; instead, it displays a bright screen which is progressively filled by black stripes as the code is loaded from floppy. To make the printer cheap enough to be included with every PCW, Amstrad placed the majority of its drive electronics inside the PCW cabinet. The printer case contained only electromechanical components and high current driver electronics; its power was supplied via a coaxial power connector socket on the monitor casing, and rather than using a traditional parallel interface, pin and motor signals were connected directly by a 34-wire ribbon cable to an 8041 microcontroller on the PCW's mainboard. Most models of PCW were bundled with a 9-pin dot matrix printer mechanism, with the later 9512 and 9512+ models using a daisywheel (with a different cable; the printers were not interchangeable with the dot matrix models). 
These PCW printers could not, of course, be used on other computers, and the original PCW lacked a then-standard Centronics printer port. Instead, the Z80 bus and video signals were brought to an edge connector socket at the back of the cabinet. Many accessories including parallel and serial ports were produced for this interface. Some of the later models included a built-in parallel port; these could be bundled with either the dedicated Amstrad printer or a Canon Bubblejet model. The PCWs were not designed to play video games, although some software authors considered this a minor detail, releasing games like Batman, Head Over Heels, and Bounder. The PCW video system was not at all suited to games. So that it could display a full 80-column page plus margins, the display's addressable area was 90 columns and the display had 32 lines. The display was monochrome and bitmapped with a resolution of 720 by 256 pixels. At 1 bit per pixel, this occupied 23 KB of RAM, which was far too large for the Z80 CPU to scroll in software without ripple and tearing of the display. Instead, the PCW implemented a Roller RAM consisting of a 512-byte area of RAM that held the address of each line of display data. The screen could then be scrolled either by changing the Roller RAM contents or by writing to an I/O port that set the starting point in Roller RAM for the screen data. This allowed for very rapid scrolling. The video system also fetched data in a special order designed so that plotting a character eight scan lines high would touch eight contiguous addresses. This meant that the Z80's concise block copy instructions, such as LDIR, could be used. Unfortunately, it also meant that drawing lines and other shapes could be very complicated. The PcW16 does not share any hardware with the original PCW series, other than the Z80 CPU, and should be considered a completely different machine.
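The Roller RAM scheme described above can be illustrated with a toy model: scrolling changes a small table of per-line addresses (or a start offset written to an I/O port) rather than block-copying the 23 KB framebuffer. The names and data shapes below are invented for illustration and do not reflect the PCW's real address format.

```python
# Toy model of Roller RAM: the video hardware fetches each scan line
# through a table of line addresses, so scrolling is O(1) -- only the
# table (or its start offset) changes, never the pixel data itself.

LINES = 256

framebuffer = {addr: addr for addr in range(LINES)}  # fake line data keyed by address
roller = list(range(LINES))                          # one address entry per scan line
start = 0                                            # start offset, as set via an I/O port

def scanline(n):
    # The display fetches line n indirectly, via the table
    return framebuffer[roller[(start + n) % LINES]]

start = 8   # scroll up one 8-pixel character row: no 23 KB block copy needed
print(scanline(0))  # 8 -- the top of the screen now shows what was line 8
```

The real hardware's table held 256 two-byte entries (hence 512 bytes), but the principle is the same: redirecting the fetch is vastly cheaper than moving the bitmap.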
See also
- Amstrad CP/M Plus character set
- Amstrad CPC
- SymbOS
- List of Amstrad PCW games
- IBM Displaywriter System

External links
- Amstrad PCW 16 page at www.old-computers.com
- Amstrad PCW 8256/8512 at www.old-computers.com
- PCW Joyce Computer Club
- Screen shots of the PcW16's Rosanne GUI
- PCW nostalgia (BBC web page)
- PCW Emulator
- CP/M Box
Programming team
A programming team is a team of people who develop or maintain computer software. They may be organised in numerous ways, but the egoless programming team and chief programmer team have been common structures.

Description

A programming team comprises people who develop or maintain computer software.

Programming team structures

Programming teams may be organised in numerous ways, but the egoless programming team and chief programmer team are two commonly used structures. The main determinants when choosing a programming team structure typically include: difficulty, size, duration, modularity, reliability, time, and sociability.

Egoless programming

According to Marilyn Mantei, individuals who are part of a decentralized programming team report higher job satisfaction. An egoless programming team contains groups of ten or fewer programmers. Code is exchanged and goals are set amongst the group members. Leadership is rotated within the group according to the needs and abilities required at a specific time. The lack of structure in the egoless team can result in weaknesses of efficiency, effectiveness, and error detection for large-scale projects. Egoless programming teams work best for tasks that are very complex.

Chief programmer team

A chief programmer team will usually consist of a three-person team: a chief programmer, a senior-level programmer, and a program librarian. Additional programmers and analysts are added to the team when necessary. The weaknesses of this structure include a lack of communication across team members, limited task cooperation, and difficulty with complex task completion. The chief programmer team works best for tasks that are simpler and more straightforward, since the flow of information in the team is limited. Individuals who work in this team structure typically report lower work morale.

Shared workstation teams

Pair programming: a development technique where two programmers work together at one workstation.
Mob programming: a software development approach where the whole team works on the same thing, at the same time, in the same space, and at the same computer.

Programming Models

Programming models allow software development teams to develop, deploy, and test projects using different methodologies.

Waterfall Model

The waterfall model, the more traditional approach, is a linear model of production. Its sequence of events is as follows:
- Gather and document requirements
- Design
- Code and unit test
- Perform system testing
- Perform user acceptance testing (UAT)
- Fix any issues
- Deliver the finished product

Each stage is distinct during the software development process, and each stage generally finishes before the next one can begin. Programming teams using this model are able to design the project early in the development process, allowing them to focus on coding and testing during the bulk of the work instead of constantly reiterating the design. This also allows teams to design completely and more carefully, so that they have a full understanding of all software deliverables.

Agile Model

The Agile development model is a more team-based approach to development than the waterfall model. Teams work in rapid delivery/deployment cycles, splitting work into phases called "sprints". A sprint is usually defined as two weeks of planned software deliverables assigned to each team or team member. After each sprint, work is reprioritized and the information learned from the previous sprint is used for future sprint planning. As sprint work is completed, it can be reviewed and evaluated by the programming team and sent back for another iteration (i.e. the next sprint), or closed if complete. The general principles of the Agile Manifesto are as follows:
- Satisfy the customer and continually develop software.
- Changing requirements are embraced for the customer's competitive advantage.
- Concentrate on delivering working software frequently.
- Delivery preference will be placed on the shortest possible time span.
- Developers and business people must work together throughout the entire project.
- Projects must be based on people who are motivated. Give them the proper environment and the support that they need. They should be trusted to get their jobs done.
- Face-to-face communication is the best way to transfer information to and from a team.
- Working software is the primary measurement of progress.
- Agile processes will promote development that is sustainable. Sponsors, developers, and users should be able to maintain an indefinite, constant pace.
- Constant attention to technical excellence and good design will enhance agility.
- Simplicity is considered to be the art of maximizing the work that is not done, and it is essential.
- Self-organized teams usually create the best designs.
- At regular intervals, the team will reflect on how to become more effective, and they will tune and adjust their behavior accordingly.

See also
- Cross-functional team
- Scrum (software development)
- Software development process
- Team software process
Isometric video game graphics
Isometric video game graphics are graphics employed in video games and pixel art that use a parallel projection, but which angle the viewpoint to reveal facets of the environment that would otherwise not be visible from a top-down perspective or side view, thereby producing a three-dimensional effect. Despite the name, isometric computer graphics are not necessarily truly isometric—i.e., the x, y, and z axes are not necessarily oriented 120° to each other. Instead, a variety of angles are used, with dimetric projection and a 2:1 pixel ratio being the most common. The terms "3/4 perspective", "3/4 view", "2.5D", and "pseudo 3D" are also sometimes used, although these terms can bear slightly different meanings in other contexts. Once common, isometric projection became less so with the advent of more powerful 3D graphics systems, and as video games began to focus more on action and individual characters. However, video games using isometric projection—especially computer role-playing games—have seen a resurgence in recent years within the indie gaming scene.

Overview

Advantages

In the fields of computer and video games and pixel art, the technique has become popular because of the ease with which 2D sprite- and tile-based graphics can be made to represent 3D gaming environments. Because objects drawn with a parallel projection do not change in size as they move about an area, there is no need for the computer to scale sprites or do the complex calculations necessary to simulate visual perspective. This allowed 8-bit and 16-bit game systems (and, more recently, handheld and mobile systems) to portray large game areas quickly and easily. And, while parallel projection can sometimes cause depth confusion, good game and level design can alleviate this.
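The "no scaling needed" property of parallel projection is easy to demonstrate with the standard 2:1 tile transform used by many isometric games. This Python sketch is generic, not taken from any particular engine; the tile dimensions are arbitrary example values.

```python
# The classic 2:1 "isometric" (dimetric) tile-to-screen transform.
# Because the mapping is linear, the screen offset between neighbouring
# tiles is the same everywhere -- so sprites never need to be scaled.

TILE_W, TILE_H = 64, 32   # 2:1 pixel ratio

def world_to_screen(tx, ty):
    sx = (tx - ty) * (TILE_W // 2)
    sy = (tx + ty) * (TILE_H // 2)
    return sx, sy

a = world_to_screen(0, 0)
b = world_to_screen(1, 0)
c = world_to_screen(9, 9)
d = world_to_screen(10, 9)
# Adjacent tiles are the same distance apart on screen whether they
# are near the viewer or far away:
print(b[0] - a[0], b[1] - a[1])   # 32 16
print(d[0] - c[0], d[1] - c[1])   # 32 16
```

Under a true perspective projection the second offset would be smaller than the first; here both are identical, which is exactly why 8-bit and 16-bit hardware could draw such scenes from unscaled sprites.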
Further, though not limited strictly to isometric video game graphics, pre-rendered 2D graphics can possess a higher fidelity and use more advanced graphical techniques than may be possible on commonly available computer hardware, even with 3D hardware acceleration. Similarly to modern CGI used in motion pictures, graphics can be rendered once on a powerful supercomputer or render farm, and then displayed many times on less powerful consumer hardware, such as television sets, tablet computers and smartphones. This means that static pre-rendered isometric graphics often look better than their contemporary real-time-rendered counterparts, and may age better over time compared to their peers. However, this advantage may be less pronounced today than it was in the past, as developments in graphical technology equalize or produce diminishing returns, and current levels of graphical fidelity become "good enough" for many people. Lastly, there are also gameplay advantages to using an isometric or near-isometric perspective in video games. For instance, compared to a purely top-down game, such a perspective adds a third dimension, opening up new avenues for aiming and platforming. Secondly, compared to a first- or third-person video game, it allows the player to field and control a large number of units more easily, such as a full party of characters in a computer role-playing game, or an army of minions in a real-time strategy game. Further, it may alleviate situations where a player becomes distracted from a game's core mechanics by having to constantly manage an unwieldy 3D camera. That is, the player can focus on playing the game itself, and not on manipulating the game's camera. In the present day, rather than being purely a source of nostalgia, the revival of isometric projection is the result of real, tangible design benefits.
Disadvantages

Some disadvantages of pre-rendered isometric graphics are that, as display resolutions and display aspect ratios continue to evolve, static 2D images need to be re-rendered each time in order to keep pace, or potentially suffer from the effects of pixelation and require anti-aliasing. Re-rendering a game's graphics is not always possible, however; as was the case in 2012, when Beamdog remade BioWare's Baldur's Gate (1998). Beamdog lacked the original developers' creative art assets (the original data was lost in a flood) and opted for simple 2D graphics scaling with "smoothing", without re-rendering the game's sprites. The result was a certain "fuzziness", or lack of "crispness", compared to the original game's graphics. This does not affect real-time rendered polygonal isometric video games, however, as changing their display resolutions or aspect ratios is trivial, in comparison.

Differences from "true" isometric projection

The projection commonly used in video games deviates slightly from "true" isometric due to the limitations of raster graphics. Lines in the x and y directions would not follow a neat pixel pattern if drawn in the required 30° to the horizontal. While modern computers can eliminate this problem using anti-aliasing, earlier computer graphics did not support enough colors or possess enough CPU power to accomplish this. Instead, a 2:1 pixel pattern ratio would be used to draw the x and y axis lines, resulting in these axes following a ≈26.565° (arctan 1/2) angle to the horizontal. (Game systems that do not use square pixels could, however, yield different angles, including "true" isometric.) Therefore, this form of projection is more accurately described as a variation of dimetric projection, since only two of the three angles between the axes are equal to each other, i.e., approximately 116.57°, 116.57°, and 126.87°.
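The angles produced by the 2:1 pixel pattern follow directly from the arctangent, and can be checked in a couple of lines:

```python
import math

# A 2:1 pixel pattern (two pixels across for one pixel up) puts the
# projected x and y axes at arctan(1/2) above the horizontal, rather
# than the 30 degrees of true isometric projection:
angle = math.degrees(math.atan(1 / 2))
print(round(angle, 3))              # 26.565

# The angle between the projected x and y axes under 2:1 dimetric:
print(round(180 - 2 * angle, 2))    # 126.87
```

Since 26.565° differs from 30°, two of the three inter-axis angles match each other while the third does not, which is the defining property of a dimetric rather than isometric projection.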
History of isometric video games

While the history of video games saw some three-dimensional games as early as the 1970s, the first video games to use the distinct visual style of isometric projection in the meaning described above were arcade games in the early 1980s.

1980s

The use of isometric graphics in video games began with the appearance of Data East's DECO Cassette System arcade game Treasure Island, released in Japan in September 1981, but it was not released internationally until June 1982. The first isometric game to be released internationally was Sega's Zaxxon, which was significantly more popular and influential; it was released in Japan in December 1981 and internationally in April 1982. Zaxxon is an isometric shooter where the player flies a space plane through scrolling levels. It is also one of the first video games to display shadows. Another early isometric game is Q*bert. Warren Davis and Jeff Lee began programming the concept around April 1982, with the game's production beginning in the summer and then released in October or November 1982. Q*bert shows a static pyramid in an isometric perspective, with the player controlling a character which can jump around on the pyramid. The following year in February 1983, the isometric platformer arcade game Congo Bongo was released, running on the same hardware as Zaxxon. It allows the player character to move around in bigger isometric levels, including true three-dimensional climbing and falling. The same is possible in the arcade title Marble Madness, released in 1984. In 1983, isometric games were no longer exclusive to the arcade market and also entered home computers, with the release of Blue Max for the Atari 8-bit family and Ant Attack for the ZX Spectrum. In Ant Attack, the player could move forward in any direction of the scrolling game, offering complete free movement rather than being fixed to one axis as with Zaxxon. The view could also be rotated in 90-degree steps.
The ZX Spectrum magazine, Crash, consequently awarded it 100% in the graphics category for this new technique, known as "Soft Solid 3-D". A year later the ZX Spectrum saw the release of Knight Lore, which is generally regarded as a revolutionary title that defined the subsequent genre of isometric adventure games. Following Knight Lore, many isometric titles were seen on home computers – to an extent that it once was regarded as being the second most cloned piece of software after WordStar, according to researcher Jan Krikke. Other examples include Highway Encounter (1985), Batman (1986), Head Over Heels (1987) and La Abadía del Crimen (1987). Isometric perspective was not limited to arcade/adventure games, though; for example, the 1989 strategy game Populous used isometric perspective.

1990s

[Image: a television set drawn in near-isometric 2:1 pixel art, enlarged to show the pixel structure. Image: a 3D rendering mimicking the video game Fallout's use of trimetric projection and a hexagonal grid.]

Throughout the 1990s several successful games such as Syndicate (1993), SimCity 2000 (1994), Civilization II (1996), X-COM (1994), and Diablo (1996) used a fixed isometric perspective. But with the advent of 3D acceleration on personal computers and gaming consoles, games previously using a 2D perspective generally started switching to true 3D (and perspective projection) instead.
This can be seen in the successors to the above games: for instance SimCity (2013), Civilization VI (2016), XCOM: Enemy Unknown (2012) and Diablo III (2012) all use 3D polygonal graphics; and while Diablo II (2000) used a fixed 2D perspective like its predecessor, it optionally allowed perspective scaling of the sprites in the distance to lend it a "pseudo-3D" appearance. Also during the 1990s, isometric graphics began being used for Japanese role-playing video games (JRPGs) on console systems, particularly tactical role-playing games, many of which still use isometric graphics today. Examples include Front Mission (1995), Tactics Ogre (1995) and Final Fantasy Tactics (1997), the latter of which used 3D graphics to create an environment where the player could freely rotate the camera. Other titles such as Vandal Hearts (1996) and Breath of Fire III (1997) carefully emulated an isometric or parallel view, but actually used perspective projection.

Infinity Engine

Black Isle Studios and BioWare helped popularize the use of isometric projection in computer role-playing games in the late 1990s and early 2000s. These studios used the Infinity Engine, developed by BioWare for Baldur's Gate (1998), in several of their titles. This engine gained significant traction among players, and many developers have since tried to emulate and improve upon it in various ways. The Infinity Engine itself was also revamped and modernized by Beamdog in preparation for Baldur's Gate: Enhanced Edition (2012), as well as their remakes of several other classic Infinity Engine titles. Two other titles by Black Isle Studios, Fallout (1997) and Fallout 2 (1998), used trimetric projection.

Kickstarter

Isometric projection has seen continued relevance in the new millennium with the release of several newly crowdfunded role-playing games on Kickstarter.
These include the Shadowrun Returns series (2013–2015) by Harebrained Schemes; the Pillars of Eternity series (2015–2018) and Tyranny (2016) by Obsidian Entertainment; and Torment: Tides of Numenera (2017) by inXile Entertainment. Both Obsidian Entertainment and inXile Entertainment have employed, or were founded by, former members of Black Isle Studios and Interplay Entertainment. Obsidian Entertainment in particular wanted to "bring back the look and feel of the Infinity Engine games like Baldur's Gate, Icewind Dale, and Planescape: Torment". Lastly, several pseudo-isometric 3D RPGs, such as Divinity: Original Sin (2014), Wasteland 2 (2014) and Dead State (2014), have been crowdfunded using Kickstarter in recent years. These titles differ from the above games, however, in that they use perspective projection instead of parallel projection.

Use of related projections and techniques

The term "isometric perspective" is frequently misapplied to any game with an angled, usually fixed, overhead view that appears at first to be "isometric". These include the aforementioned dimetrically projected video games; games that use trimetric projection, such as Fallout (1997) and SimCity 4 (2003); games that use oblique projection, such as Ultima Online (1997) and Divine Divinity (2002); and games that use a combination of perspective projection and a bird's-eye view, such as Silent Storm (2003), Torchlight (2009) and Divinity: Original Sin (2014). Also, not all "isometric" video games rely solely on pre-rendered 2D sprites.
There are, for instance, titles which use polygonal 3D graphics throughout but render their graphics using parallel projection instead of perspective projection, such as Syndicate Wars (1996), Dungeon Keeper (1997) and Depths of Peril (2007); games which use a combination of pre-rendered 2D backgrounds and real-time rendered 3D character models, such as The Temple of Elemental Evil (2003) and Torment: Tides of Numenera (2017); and games which combine real-time rendered 3D backgrounds with hand-drawn 2D character sprites, such as Final Fantasy Tactics (1997) and Disgaea: Hour of Darkness (2003). One advantage of top-down oblique projection over other near-isometric perspectives is that objects fit more snugly within non-overlapping square graphical tiles, thereby potentially eliminating the need for an additional Z-order in calculations and requiring fewer pixels.

Mapping screen to world coordinates

One of the most common problems in programming games that use isometric (or more likely dimetric) projections is mapping between events that happen on the 2D plane of the screen and the actual location in the isometric space, called world space. A common example is picking the tile that lies right under the cursor when a user clicks. One method is to use the same rotation matrices that originally produced the isometric view, in reverse, to turn a point in screen coordinates into a point that would lie on the game board surface before it was rotated. The world x and y values can then be calculated by dividing by the tile width and height. Another way, which is less computationally intensive and gives good results if called on every frame, rests on the assumption that a square board was rotated by 45 degrees and then squashed to half its original height. A virtual grid is overlaid on the projection as shown on the diagram, with axes virtual-x and virtual-y.
Clicking any tile on the central axis of the board, where (x, y) = (tileMapWidth / 2, y), produces the same tile value for both world-x and world-y, which in this example is 3 (0-indexed). Selecting the tile that lies one position to the right on the virtual grid actually moves one tile less on world-y and one tile more on world-x. Thus world-x is calculated by taking virtual-y and adding the virtual-x offset from the center of the board; likewise, world-y is calculated by taking virtual-y and subtracting that offset. These calculations measure from the central axis, as shown, so the results must be translated by half the board. For example, in the C programming language:

float virtualTileX = screenx / virtualTileWidth;
float virtualTileY = screeny / virtualTileHeight;
// Some display systems have their origin at the bottom left, while the tile map
// has its origin at the top left, so the y coordinate must be reversed.
float inverseTileY = numberOfTilesInY - virtualTileY;
float isoTileX = inverseTileY + (virtualTileX - numberOfTilesInX / 2);
float isoTileY = inverseTileY - (virtualTileX - numberOfTilesInX / 2);

This method might seem counterintuitive at first, since the coordinates of the virtual grid are taken rather than those of the original isometric world, and there is no one-to-one correspondence between virtual tiles and isometric tiles. A tile on the grid will contain more than one isometric tile, and depending on where it is clicked it should map to different coordinates. The key to this method is that the virtual coordinates are floating point numbers rather than integers. A virtual-x and y value can be (3.5, 3.5), which means the center of the third tile. In the diagram on the left, this falls within the 3rd tile on the y axis. When virtual-x and virtual-y add up to 4, the world-x will also be 4.
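The first method described above, inverting the projection itself, can be sketched as follows. This is an illustrative example rather than code from any particular game: the 2:1 tile dimensions and the function names are assumptions. For a standard 2:1 dimetric view, a world tile (wx, wy) projects to screen coordinates ((wx − wy)·w/2, (wx + wy)·h/2), and because this map is linear it is easily inverted:

```c
/*
 * Sketch of the matrix-inversion approach for a 2:1 dimetric view.
 * TILE_W and TILE_H are assumed tile dimensions, not values taken
 * from any specific engine.
 */
#define TILE_W 64.0f
#define TILE_H 32.0f

/* Forward projection: world tile coordinates -> screen coordinates. */
static void world_to_screen(float wx, float wy, float *sx, float *sy)
{
    *sx = (wx - wy) * (TILE_W / 2.0f);
    *sy = (wx + wy) * (TILE_H / 2.0f);
}

/*
 * Inverse projection: screen coordinates -> fractional world tile,
 * obtained by solving the two linear equations above for wx and wy.
 */
static void screen_to_world(float sx, float sy, float *wx, float *wy)
{
    *wx = sx / TILE_W + sy / TILE_H;
    *wy = sy / TILE_H - sx / TILE_W;
}
```

The picked tile is then the integer part of wx and wy. Because the projection is linear, no per-tile search is needed; the division by tile width and height mentioned above is already folded into the two inverse formulas.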
Examples

Dimetric projection
Oblique projection
Perspective projection

See also

Clipping
Filmation engine
:Category:Video games with isometric graphics: listing of isometric video games
:Category:Video games with oblique graphics: listing of oblique video games
:Commons:Category:Isometric video game screenshots: gallery of isometric video game screenshots

References

External links

The classic 8-bit isometric games that tried to break the mould at Eurogamer.com
The Best-Looking Isometric Games at Kotaku.com
The Best Isometric Video Games at Kotaku.com
https://en.wikipedia.org/wiki/Sorcim
Sorcim
Sorcim was an early start-up company in Silicon Valley, founded in June 1980 by Richard Frank, Paul McQuesten, Martin Herbach, Anil Lakhwara, and Steve Jasik, all former Control Data Corporation employees working in the Language Group in Sunnyvale, CA. Jasik left the company early on to develop the MacNosy product for the Macintosh. Sorcim was best known for SuperCalc, a spreadsheet the company developed for the Osborne Computer Corporation portable computer. The company made many other products, including SuperWriter and SuperProject, before its acquisition by Computer Associates in 1985. Although the company continued as a largely autonomous division of CA, it never again achieved prominence after the acquisition. The company was named "Sorcim" after Richard Frank saw a reflection of the word "micros" in an airplane window.

Early history

The company was founded to expand the microcomputer products from Digicom, a company formed by Frank in 1978; McQuesten joined in 1979. The Digicom software programs ran on the CP/M operating system using the Intel 8080, 8085 and later the 8086, Zilog Z80 and the Z8000. The company's early products included Pascal/M and ACT, a set of cross assemblers including one for the Atari (8080) and the Commodore Pet (6502). In these early days of the company, before the introduction of the IBM PC and MS-DOS, Sorcim used Godbout S100 bus CP/M machines for development; these machines were fast, and the people at Godbout were competent hardware developers. Bill Godbout was one of the first commercial accounts for Sorcim, supporting the company's cross assemblers and Pascal/M. In fact, at one time Godbout helped relieve a short-term cash flow problem by doing a one-time buy of development tool products. "Bill was one of those people who always provided you an honest opinion (sometimes to the dismay of Sorcim managers) and great Friday lunch meetings."
The birth of SuperCalc

In 1980, at one of the local monthly computer industry poker parties, Bill Godbout introduced Richard Frank to Adam Osborne. Lee Felsenstein was developing the industry's first portable computer for Adam's new company, and he needed a CP/M BIOS. This computer was released as the Osborne I. In the late fall of 1980, Adam was looking for a spreadsheet for the Osborne I. His efforts to acquire rights to VisiCalc were disappointing, so he asked Sorcim if it would be interested in developing a spreadsheet competitive with VisiCalc, in time to showcase it at the West Coast Computer Faire in April 1981. The company accepted the challenge, working days on contract programming (a CHILL compiler for Siemens) and nights on the Osborne BIOS and SuperCalc. With Martin Herbach as the lead architect, the company hired Gary Balleisen as the lead developer to implement a demo version of the application. Someone selected the name SuperCalc. The product was introduced in April 1981 at the West Coast Computer Faire in the Osborne booth. The enthusiastic reception surprised the Sorcim folks. SuperCalc was written in assembly code using Sorcim's ACT assembler. Eventually SuperCalc was ported to over 150 different hardware platforms, from the Osborne I to the Zenith Z89. By the 18-month mark, the company had sold over 250,000 copies of the original SuperCalc. During this period, the company estimated that VisiCalc's market share was about 85% and SuperCalc's about 15%. There were some other early spreadsheet programs, but these two programs shared essentially the entire market.

Growing pains

Toward the end of 1982, the founders had become dissatisfied with company management.
They removed the company president and each founder took on an acting VP role: Martin Herbach ran sales, Anil Lakhwara was responsible for software development, Paul McQuesten supervised finance, and Richard, despite having a business card that read "Programmer", was CEO and chairman. But the founders were unhappy in these roles (for most of them, outside their core competencies), and they were actively looking for a new president to guide the company's growth. SuperCalc2, SuperWriter, and SuperChart, which were all announced in November 1982, had no concrete ship dates, and a census of projects in the company showed that the development staff of about 20 people was working on over 100 projects. This was whittled down to the main applications and development tools. At this time, the company made its first serious effort to establish control over the products: Greg Resnick (who came aboard with the SpellGuard acquisition) became product manager of SuperWriter, while new hire Walter Feigenson came in to manage SuperCalc. The founders were still actively looking for a new leader during the early part of 1983, since they were out of their elements and getting more frustrated as time moved on. They had interviewed a number of candidates for CEO, but none was acceptable. Finally they coalesced around Jim Pelkey, who had started consulting for the founders in late 1982. Pelkey was introduced to the company by Jack Melchor of Melchor Venture Management to help management create a strategic plan. Melchor was an early investor in ROLM, Software Publishing, 3Com, and The Learning Company, and he was the only outside investor in Sorcim (and a member of the board). Pelkey was appointed president in May 1983, followed by George Wikle as CFO. Bill Ferguson was hired from MicroPro (the makers of WordStar) as VP of Sales, and Steve Goldsworthy joined from HP as VP Engineering.
Ron Grubman was recruited as VP Corporate Development in the summer of 1983, and the team was rounded out by Hal King, who was hired as VP Marketing.

The company makes plans to move beyond CP/M

SuperCalc2 shipped April 15, 1983 for CP/M machines, and a month later for CP/M-86 machines. Sales continued at a healthy upward pace, despite strong competition from Lotus 1-2-3. VisiCalc sales essentially dried up, and Sorcim management believed that SuperCalc maintained its 15% market share in a rapidly growing market. With the new management team coming together, the team focused on aggressively growing the company to maintain market visibility and power, responding to the phenomenon of Lotus 1-2-3, creating a "killer app" for the IBM PC, solving the constraints of a thin capitalization, and remaining profitable. The marketing challenge was to create a solid relationship with IBM while generating as much revenue as possible from existing products. Simultaneously, the staff was tasked with evolving beyond a CP/M company. Work began on a new "killer" product that was to become SuperCalc3. Although the company was well known because SuperCalc was one of the three products packaged with the wildly successful Osborne I CP/M "luggable" computer, Sorcim had no new and innovative product offerings for the breakthrough PC success, the IBM PC. SuperCalc for MS-DOS was functionally the same product as the CP/M version, which was typical for all established products at this time; Lotus 1-2-3 was the most notable exception. Failure to change focus from CP/M, where the company had almost 100% market share, to DOS, where SuperCalc merely maintained its market share, was a big mistake.

New management establishes future directions

The engineering organization was divided into three major efforts: maintaining the current products, including ports to new OEM computers; creating SuperCalc3; and investing in a skunkworks effort that would lead to products beyond SuperCalc3.
The capital structure constraints required the company to become profitable, again attain market growth, and create an exciting business plan for the future, all aimed at raising a new capital round in the early part of 1984. It seemed important to demonstrate that the new team was in control, since so many startups falter when the founders don't hand over full control to their new management teams. So while the new team came together, Richard Frank and Paul McQuesten moved a few blocks away to an office called "The Farm." Nobody knew the location or their phone number. There were two purposes: 1) to give the new team some breathing room, and 2) to start work on a new version of a multifunction product that was code-named Oyster. Richard, Paul, and Jeff McKenna bought a Symbolics LISP machine so they could start rapid prototyping of new products. This work had previously been done on paper and at whiteboards. In fact, there was another project going on to define this product: a kind of skunkworks team composed of Martin Herbach, Dave Montagna (also of CDC Fortran compiler fame), and Walter Feigenson. This team got pretty far into defining what would have been a windowing system based on technology they had acquired from Payment Pouladdej and Peter Fiore, a system that appeared very similar to GEM, which was being developed by Digital Research (much to the chagrin of Payment and Peter, who had shown it to Digital Research before joining Sorcim). This project died when Computer Associates acquired Sorcim. By the time SuperCalc2 shipped in April 1983, Sorcim knew that its competitor was no longer VisiCalc but Lotus 1-2-3, which became an instant best seller in February 1983. Besides being technically excellent, 1-2-3 also had a substantially larger marketing budget than Sorcim's. As a marketing reply to this juggernaut, Sorcim crafted plans to add the features of SuperChart to the DOS version of SuperCalc, and this became SuperCalc3, which shipped in September 1983.
SC3 was introduced at the CP/M show in Boston in 1983. Although some thought this venue an odd choice, Sorcim still thought at that time that it could make a "universal" version of SuperCalc3 for any CP/M machine. This turned out to be impractical because CP/M-86 did not hide the hardware level from the application software. At the Boston show, many industry people paid attention to Sorcim's booth, including Mitch Kapor, the founder of Lotus Development, the 1-2-3 company. SuperCalc was effectively the only competition to 1-2-3 at that point, and SC3 was vastly superior to 1-2-3 in its graphics. When product manager Walter Feigenson showed Kapor the product for the first time, Mitch was astounded that SC3 could do everything it did from a single disk. He even remarked that he had had to reprogram 1-2-3 in assembler to get its speed, and he wanted to know how Martin Herbach had managed to get the C-coded graphics engine to work in the middle of a non-relocatable assembler program. (That remains a bit of unknown magic to this day.) By all accounts, Martin had achieved the impossible. SuperCalc's graphics were on a par with dedicated graphics programs (it won 3rd place in the National Software Testing Labs graphics programs competition in 1984). But good graphics weren't enough to supplant 1-2-3, and in fact the company learned that 1-2-3 users weren't even printing their graphics, since the cable for the only low-cost pen plotter was wired incorrectly. It's interesting to note that Microsoft also had a spreadsheet (MultiPlan) at this time, but the main competitors for the king of spreadsheets remained SuperCalc3 and later versions, and 1-2-3 in its upgraded versions. Microsoft eventually abandoned MultiPlan in favor of Excel.

SuperCalc3 porting sales

Sorcim was very successful at selling an OEM CP/M version of SuperCalc2, and sales for 1983 zoomed to $7M including the upfront OEM payments.
These were not "porting contracts", since CP/M machines all executed the same code and used one of the 100+ standard terminals Sorcim products supported. The only difference between versions was the disk size and recording format. After SC3 shipped, the company began a successful campaign to port this graphical version to IBM "compatibles" (which mostly weren't 100% compatible at that time). At the same time, the company created a corporate sales organization. By early 1984, DOS sales dominated and CP/M sales had eroded, and the company's efforts to get IBM to minimally endorse it, as IBM had endorsed Microsoft and Lotus, failed. Sales to businesses were not advancing fast enough to fund the company's efforts. Management concluded that the company needed additional financial resources. Retail sales remained relatively steady, and the company sold some ports to other platforms that generated significant OEM revenue (some ports ran as high as $500,000). But at retail, the company was never able to make a significant dent in the 1-2-3 juggernaut. Sorcim did find a "sweet spot" in the US government and some large companies that refused to buy software with copy protection, which was included in every copy of 1-2-3. Eventually, Lotus lost big chunks of its government business to Sorcim, and Sorcim started selling unlimited site licenses for SuperCalc and SuperWriter to firms like Ernst & Young (one of the "Big 8" accounting firms).

Additional financing

Throughout this time, the company continued to increase headcount to get to the "critical mass" required to be a major player in the industry. Newly acquired products, as well as home-built efforts, failed to achieve much sales success. These included SuperProject, a project management program using "drop down menus" that was licensed from its creator Alan Cooper, and Paul McQuesten's SuperCalc3 for the Apple IIc (in native 6502 code). Revenues from compilers and products like the company's Pascal/M interpreter were drying up fast.
SuperWriter, when it shipped, never sold in substantial quantities, and was limited by its ability to edit only what it could hold in memory. The company probably diluted its efforts by agreeing to ports of SuperCalc3 to Unix machines (AT&T machines: the UNIX PC, and the 3B2, which Sorcim employees referred to as the world's most expensive paperweight). These non-standard efforts defocused the company in its predominant market, especially on contracts for computers for which the company could not complete an effective port. In those days, the "gold standard" for compatibility was Compaq; everything else had differences, sometimes trivial (AT&T had additional graphics capabilities), sometimes massive, but every company wanted to claim IBM compatibility, and that could only be proven through software. By this time Osborne, which never established a foothold in the DOS market, was no longer a factor in portable computers. But there were others in the works, and Sorcim worked with many of these startups. The burden of generating revenue always fell on SuperCalc, no matter how the company tried to branch out. Starting with SuperCalc2, the product life cycle was tightened to 9 months; the objective was to catch up to and pass 1-2-3. By the time SuperCalc4 shipped, in 1985, the software was so refined that it was runner-up for product of the year at PC Magazine's annual Comdex bash. PC Magazine, in its "Best of 1986" review, had this to say: "If market dominance were based on rational criteria, Computer Associates' SuperCalc 4 would certainly replace 1-2-3 as the leading spreadsheet program. After all, it can do anything that 1-2-3 can do and adds some notable features of its own." By early 1984 InfoWorld estimated that Sorcim was the world's 13th-largest microcomputer-software company, with $12 million in 1983 sales. In the fall of 1983 (first closing January 1984), Sorcim raised over $9 million in private financing through Alex.
Brown & Sons, but soon after concluded that Microsoft and Lotus had such dominant market shares that even more resources were required to be competitive. The company also funded a million-dollar print advertising campaign in the Wall Street Journal and other national papers that failed to increase sales. In the early part of 1984, it became clear that the revenue bubble that Sorcim and substantially all of the other companies in the PC marketplace had experienced was bursting. Consequently, management re-hired Alex. Brown & Sons to find a corporate partner. In the spring of 1984, Computer Associates purchased Sorcim.

See also

Sorcim TRANS86

References
https://en.wikipedia.org/wiki/LibreSSL
LibreSSL
LibreSSL is an open-source implementation of the Transport Layer Security (TLS) protocol. The implementation is named after Secure Sockets Layer (SSL), the deprecated predecessor of TLS, for which support was removed in release 2.3.0. The OpenBSD project forked LibreSSL from OpenSSL 1.0.1g in April 2014 as a response to the Heartbleed security vulnerability, with the goals of modernizing the codebase, improving security, and applying development best practices.

History

After the Heartbleed security vulnerability was discovered in OpenSSL, the OpenBSD team audited the codebase and decided it was necessary to fork OpenSSL to remove dangerous code. The libressl.org domain was registered on 11 April 2014; the project announced the name on 22 April 2014. In the first week of development, more than 90,000 lines of C code were removed, along with unused code and support for obsolete operating systems. LibreSSL was initially developed as an intended replacement for OpenSSL in OpenBSD 5.6, and was ported to other platforms once a stripped-down version of the library was stable. At the time, the project was seeking a "stable commitment" of external funding. On 17 May 2014, Bob Beck presented "LibreSSL: The First 30 Days, and What The Future Holds" during the 2014 BSDCan conference, in which he described the progress made in the first month. On 5 June 2014, several OpenSSL bugs became public. While several projects were notified in advance, LibreSSL was not; Theo de Raadt accused the OpenSSL developers of intentionally withholding this information from OpenBSD and LibreSSL. On 20 June 2014, Google created another fork of OpenSSL called BoringSSL and promised to exchange fixes with LibreSSL. Google has already relicensed some of its contributions under the ISC license, as requested by the LibreSSL developers. On 21 June 2014, Theo de Raadt welcomed BoringSSL and outlined the plans for LibreSSL-portable.
Starting on 8 July, code porting for macOS and Solaris began, while the initial porting to Linux began on 20 June. As of 2021, OpenBSD uses LibreSSL as the primary SSL library. Alpine Linux supported LibreSSL as its primary TLS library for three years, until release 3.9.0 in January 2019. Gentoo supported LibreSSL until February 2021. Python 3.10 dropped LibreSSL support, which had existed since Python 3.4.3 (2015).

Adoption

LibreSSL is the default provider of TLS for:
Dragonfly BSD
OpenBSD
OpenELEC
TrueOS packages
Hyperbola GNU/Linux-libre
macOS

LibreSSL is a selectable provider of TLS for:
FreeBSD packages
Gentoo packages (support dropped as of February 2021)
OPNsense packages

Changes

Memory-related

Changes include the replacement of custom memory calls with ones from a standard library (for example, strlcpy, calloc, asprintf, reallocarray, etc.). This process may help later on to catch buffer overflow errors with more advanced memory analysis tools or by observing program crashes (via ASLR, use of the NX bit, stack canaries, etc.). Fixes for potential double-free scenarios have also been cited in the VCS commit logs (including explicit assignments of null pointer values), along with extra sanity checks related to ensuring length arguments, unsigned-to-signed variable assignments, pointer values, and method returns.

Proactive measures

In order to maintain good programming practice, a number of compiler options and flags designed for safety have been enabled by default to help in spotting potential issues so they can be fixed earlier (-Wall, -Werror, -Wextra, -Wuninitialized). There have also been code readability updates which help future contributors in verifying program correctness (KNF, white-space, line-wrapping, etc.). Modification or removal of unneeded method wrappers and macros also helps with code readability and auditing (Error and I/O abstraction library references).
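The overflow-checked allocation idiom behind reallocarray, one of the memory-call replacements mentioned above, can be sketched as follows. This is an illustrative reimplementation modeled on OpenBSD's reallocarray(3), not LibreSSL's actual source; the name checked_reallocarray is an assumption:

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/*
 * A plain realloc(ptr, nmemb * size) silently wraps around when the
 * multiplication overflows, yielding a too-small buffer.  OpenBSD's
 * reallocarray(3) rejects such requests instead; this sketch follows
 * its approach.  The fast path skips the division when both factors
 * are small enough that the product cannot overflow.
 */
#define MUL_NO_OVERFLOW ((size_t)1 << (sizeof(size_t) * 4))

static void *checked_reallocarray(void *ptr, size_t nmemb, size_t size)
{
    if ((nmemb >= MUL_NO_OVERFLOW || size >= MUL_NO_OVERFLOW) &&
        nmemb > 0 && SIZE_MAX / nmemb < size) {
        errno = ENOMEM;     /* the product would wrap; refuse */
        return NULL;
    }
    return realloc(ptr, nmemb * size);
}
```

A call such as checked_reallocarray(NULL, n, sizeof(struct item)) then fails cleanly with ENOMEM when the product would overflow, instead of returning an undersized buffer that later invites a heap overrun.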
Changes were made to ensure that LibreSSL will be year 2038 compatible, along with maintaining portability for other similar platforms. In addition, explicit_bzero and bn_clear calls were added to prevent the compiler from optimizing them out and to prevent attackers from reading previously allocated memory.

Cryptographic

There were changes to help ensure proper seeding of random number generator-based methods, replacing insecure seeding practices with features offered natively by the kernel itself. Notable additions include support for newer and more reputable algorithms (the ChaCha stream cipher and Poly1305 message authentication code) along with a safer set of elliptic curves (the Brainpool curves from RFC 5639, up to 512 bits in strength).

Added features

The initial release of LibreSSL added a number of features: the ChaCha and Poly1305 algorithms, the Brainpool and ANSSI elliptic curves, and the AES-GCM and ChaCha20-Poly1305 AEAD modes. Later versions added the following:

2.1.0: Automatic ephemeral EC keys.
2.1.2: Built-in arc4random implementation on macOS and FreeBSD.
2.1.2: Reworked GOST cipher suite support.
2.1.3: ALPN support.
2.1.3: Support for SHA-256 and Camellia cipher suites.
2.1.4: TLS_FALLBACK_SCSV server-side support.
2.1.4: certhash as a replacement for the c_rehash script.
2.1.4: X509_STORE_load_mem API for loading certificates from memory (enhances chroot support).
2.1.4: Experimental Windows binaries.
2.1.5: Minor update, mainly for improving Windows support; first working 32- and 64-bit binaries.
2.1.6: Declared stable and enabled by default.
2.2.0: AIX and Cygwin support.
2.2.1: Addition of EC_curve_nid2nist and EC_curve_nist2nid from OpenSSL; initial Windows XP/2003 support.
2.2.2: Defines LIBRESSL_VERSION_NUMBER; added TLS_* methods as a replacement for the SSLv23_* method calls; cmake build support.

Old insecure features

The initial release of LibreSSL disabled a number of features by default.
Some of the code for these features was later removed, including Kerberos, US export ciphers, TLS compression, DTLS heartbeat, SSL v2 and SSL v3. Later versions disabled more features:

2.1.1: Following the discovery of the POODLE vulnerability in the legacy SSL 3.0 protocol, LibreSSL now disables the use of SSL 3.0 by default.
2.1.3: GOST R 34.10-94 signature authentication.
2.2.1: Removal of Dynamic Engine and MDC-2DES support.
2.2.2: Removal of SSL 3.0 from the openssl binary; removal of Internet Explorer 6 workarounds; RSAX engine.
2.3.0: Complete removal of SSL 3.0, SHA-0 and DTLS1_BAD_VER.

Code removal

The initial release of LibreSSL removed a number of features that were deemed insecure, unnecessary or deprecated as part of OpenBSD 5.6. In response to Heartbleed, the heartbeat functionality was one of the first features to be removed. Also removed were:

Support for unneeded platforms (Classic Mac OS, NetWare, OS/2, OpenVMS, 16-bit Windows, etc.).
Support for platforms that do not exist, such as big-endian i386 and amd64.
Support for old compilers.
The IBM 4758, Broadcom ubsec, Sureware, Nuron, GOST, GMP, CSwift, CHIL, CAPI, Atalla and AEP engines, due to irrelevance of the hardware or dependency on non-free libraries.
The OpenSSL PRNG (replaced with a ChaCha20-based implementation of arc4random).
Preprocessor macros deemed unnecessary or insecure, or that had already been deprecated in OpenSSL for a long time (e.g. des_old.h).
Older unneeded files for assembly language, C, and Perl (e.g. EGD).
MD2 and SEED functionality.
SSL 3.0, SHA-0 and DTLS1_BAD_VER.
The Dual EC DRBG algorithm, which is suspected of having a back door, along with support for the FIPS 140-2 standard that required it.

Unused protocols and insecure algorithms have also been removed, including the support for FIPS 140-2, MD4/MD5, J-PAKE, and SRP.
Bug backlog

One of the complaints about OpenSSL was the number of open bugs reported in the bug tracker that had gone unfixed for years. Older bugs are now being fixed in LibreSSL.

See also

Comparison of TLS implementations
OpenSSH
wolfSSH

References

External links

LibreSSL and source code (OpenGrok)