13400601
https://en.wikipedia.org/wiki/1990%20Rose%20Bowl
1990 Rose Bowl
The 1990 Rose Bowl was the 76th edition of the college football bowl game, played at the Rose Bowl in Pasadena, California, on Monday, January 1. The game was a rematch of the previous year, won by Michigan, 22–14. Gaining a measure of revenge, the USC Trojans upset the third-ranked Michigan Wolverines, 17–10. USC junior running back Ricky Ervins was named the Player of the Game. This was Bo Schembechler's final game as Michigan's head coach, and he finished with a 2–8 record in Rose Bowls.
Pre-game activities
On Tuesday, October 24, 1989, Tournament of Roses President Don W. Fedde chose 17-year-old Yasmine Begum Delawari, a senior at La Cañada High School and a resident of La Cañada Flintridge, California. She became the 72nd Rose Queen to reign over the 101st Rose Parade and the 76th Rose Bowl Game on New Year's Day. The game was presided over by the 1990 Tournament of Roses Royal Court and Rose Parade Grand Marshal John Glenn, U.S. Senator from Ohio and one of the original Mercury astronauts. Members of the court included princesses Kristin Gibbs, South Pasadena, Pasadena City College; Inger Miller, Altadena, John Muir High School; Marisa Stephenson, Arcadia, Arcadia High School; Joanne Ward, Arcadia, Arcadia High School; Kandace Watson, Pasadena, John Muir High School; and Peggy Ann Zazueta, Temple City, Maranatha High School.
Teams
Michigan Wolverines
The Wolverines lost their opening game, at home, to Notre Dame 24–19. The UCLA Bruins under Terry Donahue and the Michigan Wolverines under Bo Schembechler met for the only time since the 1983 Rose Bowl in a UCLA home game at the Rose Bowl on September 23, 1989. The fifth-ranked Michigan Wolverines defeated #24 UCLA by a point, 24–23. This began a ten-game winning streak for Michigan; the biggest of these wins was a 24–10 victory at Illinois that ultimately gave the Big Ten title to Michigan over the runner-up Illini.
USC Trojans
USC lost their opener to Illinois 14–13, but won the rest with the exception of a 28–24 mid-season loss at Notre Dame and a 10–10 tie in their regular season finale with rival UCLA. They won the Pac-10 title by 2½ games over Washington, who had struggled early in the season. It was a third straight berth in the Rose Bowl for the Trojans, but they had lost the previous two, the only such streak in USC history; no Pac-12 team has done so since. The previous western team to lose consecutively was California, which dropped three straight (1949–1951) while representing the Pacific Coast Conference (PCC).
Game summary
The game was expected to be a tight, physical defensive struggle, and it was. USC scored first on a touchdown run by quarterback Todd Marinovich and led 10–3 at halftime, but Michigan came back to tie the score in the third quarter. Midway through the fourth quarter, Michigan faced a 4th-and-2 at its own 46-yard line. The normally conservative Schembechler called for a fake punt, and it worked to perfection as punter Chris Stapleton rambled 24 yards for what would have been a first down, but Michigan was called for holding. On the resultant drive, USC scored the winning touchdown with just over a minute to play. At the end of the game, Schembechler walked off the field as head coach for the last time, refusing interview requests; he remained briefly as athletic director, a post he had held concurrently since 1988. A few days later, he announced he was leaving Michigan to become the president of the Detroit Tigers of Major League Baseball.
Scoring
First quarter: no scoring (0–0 tie)
Second quarter:
USC: Todd Marinovich, 1-yard run (Quin Rodriguez kick), USC 7–0
Mich: J.D. Carlson, 19-yard field goal, USC 7–3
USC: Rodriguez, 34-yard field goal, USC 10–3
Third quarter:
Mich: Allen Jefferson, 2-yard run (Carlson kick), 10–10 tie
Fourth quarter:
USC: Ricky Ervins, 14-yard run (Rodriguez kick), USC 17–10
External links
Summary at Bentley Historical Library, University of Michigan Athletics History
22845262
https://en.wikipedia.org/wiki/Real-time%20locating%20system
Real-time locating system
Real-time locating systems (RTLS), also known as real-time tracking systems, are used to automatically identify and track the location of objects or people in real time, usually within a building or other contained area. Wireless RTLS tags are attached to objects or worn by people, and in most RTLS, fixed reference points receive wireless signals from tags to determine their location. Examples of real-time locating systems include tracking automobiles through an assembly line, locating pallets of merchandise in a warehouse, or finding medical equipment in a hospital. The physical layer of RTLS technology is often radio frequency (RF) communication. Some systems use optical (usually infrared) or acoustic (usually ultrasound) technology with, or in place of, RF. RTLS tags and fixed reference points can be transmitters, receivers, or both, resulting in numerous possible technology combinations. RTLS are a form of local positioning system and do not usually refer to GPS or to mobile phone tracking. Location information usually does not include speed, direction, or spatial orientation. Origin The term RTLS was created (circa 1998) at the ID EXPO trade show by Tim Harrington (WhereNet), Jay Werb (PinPoint), and Bert Moore (Automatic Identification Manufacturers, Inc., AIM). It was created to describe and differentiate an emerging technology that not only provided the automatic identification capabilities of active RFID tags, but also added the ability to view the location on a computer screen. It was at this show that the first examples of a commercial radio based RTLS system were shown by PinPoint and WhereNet. Although this capability had been utilized previously by military and government agencies, the technology had been too expensive for commercial purposes. In the early 1990s, the first commercial RTLS were installed at three healthcare facilities in the United States and were based on the transmission and decoding of infrared light signals from actively transmitting tags. Since then, new technology has emerged that also enables RTLS to be applied to passive tag applications. Locating concepts RTLS are generally used in indoor and/or confined areas, such as buildings, and do not provide global coverage like GPS. RTLS tags are affixed to mobile items to be tracked or managed. RTLS reference points, which can be either transmitters or receivers, are spaced throughout a building (or similar area of interest) to provide the desired tag coverage. In most cases, the more RTLS reference points that are installed, the better the location accuracy, until the technology limitations are reached. A number of disparate system designs are all referred to as "real-time locating systems". Two primary system design elements are locating at choke points and locating in relative coordinates. Locating at choke points The simplest form of choke point locating is where short range ID signals from a moving tag are received by a single fixed reader in a sensory network, thus indicating the location coincidence of reader and tag. Alternately, a choke point identifier can be received by the moving tag and then relayed, usually via a second wireless channel, to a location processor. Accuracy is usually defined by the sphere spanned with the reach of the choke point transmitter or receiver. The use of directional antennas, or technologies such as infrared or ultrasound that are blocked by room partitions, can support choke points of various geometries. 
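The choke-point idea above, and the range-based locating described in the next subsection, can be sketched in a few lines of code. The fragment below is a minimal illustration, assuming hypothetical reader IDs, zone names, and anchor coordinates; the linearized least-squares step is one common way of turning measured ranges into a position estimate, not a prescribed RTLS algorithm.

```python
import numpy as np

# Choke-point locating: a tag heard by a single short-range reader is assigned
# that reader's zone; accuracy is bounded by the reader's read range.
READER_ZONES = {          # hypothetical reader IDs and zone names
    "reader-01": "loading dock",
    "reader-02": "operating room 3",
}

def locate_by_choke_point(reader_id):
    return READER_ZONES.get(reader_id, "unknown zone")

# Locating in relative coordinates (next subsection): estimate a tag position
# from measured ranges to fixed reference points via linearized least squares.
def trilaterate(anchors, ranges):
    """anchors: (n, 2) array of reference-point coordinates; ranges: (n,) distances."""
    x0, y0 = anchors[0]
    r0 = ranges[0]
    # Subtracting the first range equation from the rest removes the quadratic
    # terms and leaves a linear system A @ [x, y] = b.
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - x0**2 - y0**2)
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

if __name__ == "__main__":
    print(locate_by_choke_point("reader-02"))        # -> operating room 3
    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    true_position = np.array([3.0, 4.0])
    ranges = np.linalg.norm(anchors - true_position, axis=1)
    ranges += np.random.normal(0.0, 0.1, size=len(ranges))   # simulated range noise
    print(trilaterate(anchors, ranges))               # approximately [3, 4]
```

Adding more reference points, as noted above, gives the least-squares step more equations to average over, improving the estimate until the technology's own limits are reached.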
Locating in relative coordinates ID signals from a tag are received by a multiplicity of readers in a sensory network, and a position is estimated using one or more locating algorithms, such as trilateration, multilateration, or triangulation. Equivalently, ID signals from several RTLS reference points can be received by a tag and relayed back to a location processor. Localization with multiple reference points requires that distances between reference points in the sensory network be known in order to precisely locate a tag, and the determination of distances is called ranging. Another way to calculate relative location is via mobile tags communicating with one another. The tag(s) will then relay this information to a location processor. Location accuracy RF trilateration uses estimated ranges from multiple receivers to estimate the location of a tag. RF triangulation uses the angles at which the RF signals arrive at multiple receivers to estimate the location of a tag. Many obstructions, such as walls or furniture, can distort the estimated range and angle readings leading to varied qualities of location estimate. Estimation-based locating is often measured in accuracy for a given distance, such as 90% accurate for 10-meter range. Some systems use locating technologies that can't pass through walls, such as infrared or ultrasound. These require line of sight (or near line of sight) to communicate properly. As a result, they tend to be more accurate in indoor environments. Applications RTLS can be used in numerous logistical or operational areas to: locate and manage assets within a facility, such as finding a misplaced tool cart in a warehouse or medical equipment in a hospital create notifications when an object moves, such as an alert if a tool cart left the facility combine identity of multiple items placed in a single location, such as on a pallet locate customers, for example in a restaurant, for delivery of food or service maintain proper staffing levels of operational areas, such as ensuring guards are in the proper locations in a correctional facility quickly and automatically account for all staff after or during an emergency evacuation Toronto General Hospital is looking at RTLS to reduce quarantine times after an infectious disease outbreak. After a recent SARS outbreak, 1% of all staff were quarantined. With RTLS, they would have more accurate data regarding who had been exposed to the virus, potentially reducing the need for quarantines. aid in process improvement efforts by automatically tracking and time stamping the progress of people or assets through a process, such as following a patient's emergency room wait time, time spent in the operating room, and total time until discharge aid in acute care capacity management through clinical-care locating Privacy concerns RTLS may be seen as a threat to privacy when used to determine the location of people. The newly declared human right of informational self-determination gives the right to prevent one's identity and personal data from being disclosed to others and also covers disclosure of locality, though this does not generally apply to the workplace. Several prominent labor unions have spoken out against the use of RTLS systems to track workers, calling them "the beginning of Big Brother" and "an invasion of privacy". Current location-tracking technologies can be used to pinpoint users of mobile devices in several ways. 
First, service providers have access to network-based and handset-based technologies that can locate a phone for emergency purposes. Second, historical location can frequently be discerned from service provider records. Third, other devices such as Wi-Fi hotspots or IMSI catchers can be used to track nearby mobile devices in real time. Finally, hybrid positioning systems combine different methods in an attempt to overcome each individual method's shortcomings.
Types of technologies used
There is a wide variety of system concepts and designs that provide real-time locating:
Active radio frequency identification (active RFID)
Active radio frequency identification - infrared hybrid (active RFID-IR)
Infrared (IR)
Optical locating
Low-frequency signpost identification
Semi-active radio frequency identification (semi-active RFID)
Passive RFID RTLS locating via steerable phased array antennae
Radio beacon
Ultrasound identification (US-ID)
Ultrasonic ranging (US-RTLS)
Ultra-wideband (UWB)
Wide-over-narrow band
Wireless local area network (WLAN, Wi-Fi)
Bluetooth
Clustering in noisy ambience
Bivalent systems
A general model for selecting the best solution for a locating problem has been constructed at Radboud University Nijmegen. Many of these approaches do not comply with the definitions given in the international standards ISO/IEC 19762-5 and ISO/IEC 24730-1, although they do serve some aspects of real-time performance and address locating in terms of absolute coordinates.
Ranging and angulating
Depending on the physical technology used, at least one and often a combination of the following ranging and/or angulating methods is used to determine location:
Angle of arrival (AoA)
Angle of departure (AoD) (e.g., Bluetooth direction finding features a mobile-centric RTLS architecture; see US 7376428 B1)
Line of sight (LoS)
Time of arrival (ToA)
Multilateration (time difference of arrival, TDoA)
Time of flight (ToF)
Two-way ranging (TWR), according to Nanotron's patents
Symmetrical double-sided two-way ranging (SDS-TWR)
Near-field electromagnetic ranging (NFER)
Errors and accuracy
Real-time locating is affected by a variety of errors. Many of the major causes relate to the physics of the locating system and often cannot be reduced simply by improving the technical equipment.
None or no direct response
Many RTLS systems require a direct and clear line of sight. For those systems, where there is no visibility between the mobile tags and the fixed nodes, the locating engine produces either no result or an invalid one. This applies to satellite locating as well as to other RTLS approaches such as angle of arrival and time of arrival. Fingerprinting is a way to overcome the visibility issue: if the locations in the tracking area have distinct measurement fingerprints, line of sight is not necessarily needed. For example, if each location contains a unique combination of signal strength readings from transmitters, the location system will function properly. This is true, for example, with some Wi-Fi based RTLS solutions. However, having distinct signal strength fingerprints in each location typically requires a fairly high saturation of transmitters.
False location
The measured location may appear entirely faulty. This is generally the result of overly simple operational models that fail to compensate for the many sources of error; a proper location cannot be delivered if those errors are ignored.
Locating backlog
"Real time" is not a protected term and implies no particular level of performance.
A variety of offerings are marketed under it. Because motion changes an object's location, the latency needed to compute a new position can become significant relative to that motion: either users of the chosen RTLS must wait for new results, or an operational concept that demands faster location updates simply cannot be met by the system's approach.
Temporary location error
Location will never be reported exactly: in terms of measurement theory, real-time operation and precision pull against each other, just as precision and cost do economically. Precision is not ruled out, but its limits become unavoidable at higher update rates.
Steady location error
A reported location that sits consistently apart from the object's physical position generally indicates insufficient over-determination or missing visibility on at least one link between the fixed anchors and the mobile transponders. The same effect can also be caused by inadequate compensation for calibration needs.
Location jitter
Noise from various sources has an erratic influence on the stability of results. Smoothing the output to give a steadier appearance increases latency, which conflicts with real-time requirements.
Location jump
Since physical objects cannot jump instantaneously, such effects are mostly beyond physical reality: jumps in the reported location that do not correspond to any movement of the object itself generally indicate improper modeling in the location engine. The effect is caused by a change in which of the various secondary responses dominates the measurement.
Location creep
A stationary object may be reported as slowly moving when the measurements become increasingly biased by secondary path reflections over time. The effect is caused by simple averaging and indicates insufficient discrimination of the first echoes.
Standards
ISO/IEC
The basic issues of RTLS are standardized by the International Organization for Standardization and the International Electrotechnical Commission in the ISO/IEC 24730 series. Within this series, the base standard ISO/IEC 24730-1 defines terms describing a form of RTLS used by a set of vendors, but it does not encompass the full scope of RTLS technology.
Currently, several standards are published:
ISO/IEC 19762-5:2008 Information technology — Automatic identification and data capture (AIDC) techniques — Harmonized vocabulary — Part 5: Locating systems
ISO/IEC 24730-1:2014 Information technology — Real-time locating systems (RTLS) — Part 1: Application programming interface (API)
ISO/IEC 24730-2:2012 Information technology — Real time locating systems (RTLS) — Part 2: Direct Sequence Spread Spectrum (DSSS) 2,4 GHz air interface protocol
ISO/IEC 24730-5:2010 Information technology — Real-time locating systems (RTLS) — Part 5: Chirp spread spectrum (CSS) at 2,4 GHz air interface
ISO/IEC 24730-21:2012 Information technology — Real time locating systems (RTLS) — Part 21: Direct Sequence Spread Spectrum (DSSS) 2,4 GHz air interface protocol: Transmitters operating with a single spread code and employing a DBPSK data encoding and BPSK spreading scheme
ISO/IEC 24730-22:2012 Information technology — Real time locating systems (RTLS) — Part 22: Direct Sequence Spread Spectrum (DSSS) 2,4 GHz air interface protocol: Transmitters operating with multiple spread codes and employing a QPSK data encoding and Walsh offset QPSK (WOQPSK) spreading scheme
ISO/IEC 24730-61:2013 Information technology — Real time locating systems (RTLS) — Part 61: Low rate pulse repetition frequency Ultra Wide Band (UWB) air interface
ISO/IEC 24730-62:2013 Information technology — Real time locating systems (RTLS) — Part 62: High rate pulse repetition frequency Ultra Wide Band (UWB) air interface
These standards do not stipulate any particular method of computing or measuring locations. Such methods may be defined in specifications for trilateration, triangulation, or hybrid trigonometric approaches for planar or spherical models of a terrestrial area.
INCITS
INCITS 371.1:2003, Information Technology - Real Time Locating Systems (RTLS) - Part 1: 2.4 GHz Air Interface Protocol
INCITS 371.2:2003, Information Technology - Real Time Locating Systems (RTLS) - Part 2: 433-MHz Air Interface Protocol
INCITS 371.3:2003, Information Technology - Real Time Locating Systems (RTLS) - Part 3: Application Programming Interface
Limitations and further discussion
For RTLS applications in the healthcare industry, various studies have discussed the limitations of the currently adopted systems. The commonly used technologies (RFID, Wi-Fi, and UWB) are all radio-based and can interfere with sensitive equipment. A study carried out by Dr Erik Jan van Lieshout of the Academic Medical Centre of the University of Amsterdam, published in JAMA (the Journal of the American Medical Association), claimed that "RFID and UWB could shut down equipment patients rely on", as "RFID caused interference in 34 of the 123 tests they performed". The first Bluetooth RTLS provider in the medical industry supports this view in its article: "The fact that RFID cannot be used near sensitive equipment should in itself be a red flag to the medical industry". The RFID Journal responded to the study not by disputing it but by pointing to practical deployments: "The Purdue study showed no effect when ultrahigh-frequency (UHF) systems were kept at a reasonable distance from medical equipment. So placing readers in utility rooms, near elevators and above doors between hospital wings or departments to track assets is not a problem". However, what counts as "keeping a reasonable distance" may still be an open question for RTLS adopters and providers in medical facilities.
In many applications it is difficult, yet important, to choose properly among the various communication technologies (e.g., RFID, Wi-Fi) that an RTLS may include. Wrong design decisions made at an early stage can lead to catastrophic results for the system and significant cost for fixes and redesign. To address this, a dedicated methodology for RTLS design space exploration was developed; it combines modelling, requirements specification, and verification into a single process.
See also
Context awareness
Indoor positioning system
Location awareness
Omlox
Positioning technologies
Track and trace
Vehicle tracking system
Wireless triangulation
Further reading
Indoor Geolocation Using Wireless Local Area Networks (Berichte aus der Informatik), Michael Wallbaum (2006)
Local Positioning Systems: LBS Applications and Services, Krzysztof Kolodziej & Johan Hjelm, CRC Press (2006)
8641308
https://en.wikipedia.org/wiki/Disk%20buffer
Disk buffer
In computer storage, a disk buffer (often ambiguously called a disk cache or cache buffer) is the embedded memory in a hard disk drive (HDD) acting as a buffer between the rest of the computer and the physical hard disk platter that is used for storage. Modern hard disk drives come with 8 to 256 MiB of such memory, and solid-state drives come with up to 4 GB of cache memory.
Since the late 1980s, nearly all disks sold have embedded microcontrollers and either an ATA, Serial ATA, SCSI, or Fibre Channel interface. The drive circuitry usually has a small amount of memory, used to store the data going to and coming from the disk platters.
The disk buffer is physically distinct from, and is used differently from, the page cache typically kept by the operating system in the computer's main memory. The disk buffer is controlled by the microcontroller in the hard disk drive, and the page cache is controlled by the computer to which that disk is attached. The disk buffer is usually quite small, ranging between 8 and 256 MiB, while the page cache is generally all unused main memory. While data in the page cache is reused multiple times, the data in the disk buffer is rarely reused. In this sense, the terms disk cache and cache buffer are misnomers; the embedded controller's memory is more appropriately called the disk buffer. Note that disk array controllers, as opposed to disk controllers, usually have normal cache memory of around 0.5–8 GiB.
Uses
Read-ahead/read-behind
When a disk's controller executes a physical read, the actuator moves the read/write head to (or near to) the correct cylinder. After some settling and possibly fine-actuating, the read head begins to pick up track data, and all that is left to do is wait until platter rotation brings the requested data. The data read ahead of the request during this wait is unrequested but free, so it is typically saved in the disk buffer in case it is requested later. Similarly, data can be read for free behind the requested data if the head can stay on track, because there is no other read to execute or because the next actuating can start later and still complete in time. If several requested reads are on the same track (or close by on a spiral track), most unrequested data between them will be read both ahead and behind.
Speed matching
The speed of the disk's I/O interface to the computer almost never matches the speed at which the bits are transferred to and from the hard disk platter. The disk buffer is used so that both the I/O interface and the disk read/write head can operate at full speed.
Write acceleration
The disk's embedded microcontroller may signal the main computer that a disk write is complete immediately after receiving the write data, before the data is actually written to the platter. This early signal allows the main computer to continue working even though the data has not actually been written yet. This can be somewhat dangerous, because if power is lost before the data is permanently fixed in the magnetic media, the data will be lost from the disk buffer, and the file system on the disk may be left in an inconsistent state. On some disks, this vulnerable period between signaling the write complete and fixing the data can be arbitrarily long, as the write can be deferred indefinitely by newly arriving requests. For this reason, the use of write acceleration can be controversial. Consistency can be maintained, however, by using a battery-backed memory system for caching data, although this is typically only found in high-end RAID controllers.
Alternatively, the caching can simply be turned off when the integrity of data is deemed more important than write performance. Another option is to send data to disk in a carefully managed order and to issue "cache flush" commands in the right places, which is usually referred to as the implementation of write barriers. Command queuing Newer SATA and most SCSI disks can accept multiple commands while any one command is in operation through "command queuing" (see NCQ and TCQ). These commands are stored by the disk's embedded controller until they are completed. One benefit is that the commands can be re-ordered to be processed more efficiently, so that commands affecting the same area of a disk are grouped together. Should a read reference the data at the destination of a queued write, the to-be-written data will be returned. NCQ is usually used in combination with enabled write buffering. In case of a read/write FPDMA command with Force Unit Access (FUA) bit set to 0 and enabled write buffering, an operating system may see the write operation finished before the data is physically written to the media. In case of FUA bit set to 1 and enabled write buffering, write operation returns only after the data is physically written to the media. Cache control from the host Cache flushing Data that was accepted in write cache of a disk device will be eventually written to disk platters, provided that no starvation condition occurs as a result of firmware flaw, and that disk power supply is not interrupted before cached writes are forced to disk platters. In order to control write cache, ATA specification included FLUSH CACHE (E7h) and FLUSH CACHE EXT (EAh) commands. These commands cause the disk to complete writing data from its cache, and disk will return good status after data in the write cache is written to disk media. In addition, flushing the cache can be initiated at least to some disks by issuing Soft reset or Standby (Immediate) command. Mandatory cache flushing is used in Linux for implementation of write barriers in some filesystems (for example, ext4), together with Force Unit Access write command for journal commit blocks. Force Unit Access (FUA) Force Unit Access (FUA) is an I/O write command option that forces written data all the way to stable storage. FUA write commands (WRITE DMA FUA EXT 3Dh, WRITE DMA QUEUED FUA EXT 3Eh, WRITE MULTIPLE FUA EXT CEh), in contrast to corresponding commands without FUA, write data directly to the media, regardless of whether write caching in the device is enabled or not. FUA write command will not return until data is written to media, thus data written by a completed FUA write command is on permanent media even if the device is powered off before issuing a FLUSH CACHE command. FUA appeared in the SCSI command set, and was later adopted by SATA with NCQ. FUA is more fine-grained as it allows a single write operation to be forced to stable media and thus has smaller overall performance impact when compared to commands that flush the entire disk cache, such as the ATA FLUSH CACHE family of commands. Windows (Vista and up) supports FUA as part of Transactional NTFS, but only for SCSI or Fibre Channel disks where support for FUA is common. It is not known whether a SATA drive that supports FUA write commands will actually honor the command and write data to disk platters as instructed; thus, Windows 8 and Windows Server 2012 instead send commands to flush the disk write cache after certain write operations. 
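From an application's point of view, these drive-level mechanisms are normally reached indirectly through the operating system rather than by issuing ATA or SCSI commands directly. The sketch below is a minimal illustration for a POSIX-like system (the file name is arbitrary): os.fsync asks the kernel to push its page cache to the drive and, on filesystems with write barriers enabled, to follow up with a disk cache flush or an FUA write so the data reaches stable media rather than sitting only in the drive's volatile write buffer.

```python
import os

def durable_write(path, data):
    """Write data and ask the OS to force it to stable storage."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        # fsync flushes the OS page cache for this file to the device; with
        # write barriers enabled, the filesystem follows up with a disk cache
        # flush (or FUA write) so the data is not left in the volatile buffer.
        os.fsync(fd)
    finally:
        os.close(fd)

durable_write("journal.log", b"commit record\n")
```

Where available, opening the file with os.O_DSYNC requests synchronized writes on every call instead of relying on an explicit flush at the end.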
Although the Linux kernel gained support for NCQ around 2007, SATA FUA remains disabled by default because of regressions that were found in 2012 when the kernel's support for FUA was tested. The Linux kernel supports FUA at the block layer level.
See also
Hybrid array
Hybrid drive
4820337
https://en.wikipedia.org/wiki/Computer%20repair%20technician
Computer repair technician
A computer repair technician is a person who repairs and maintains computers and servers. The technician's responsibilities may extend to include building or configuring new hardware, installing and updating software packages, and creating and maintaining computer networks.
Overview
Computer technicians work in a variety of settings, encompassing both the public and private sectors. Because of the relatively brief existence of the profession, institutions offer certificate and degree programs designed to prepare new technicians, but computer repairs are frequently performed by experienced and certified technicians who have little formal training in the field. Private sector computer repair technicians can work in corporate information technology departments, central service centers, or retail computer sales environments. Public sector computer repair technicians might work in the military, national security or law enforcement communities, the health or public safety fields, or an educational institution. Despite the wide variety of work environments, all computer repair technicians perform similar physical and investigative processes, including technical support and often customer service. Experienced computer repair technicians might specialize in fields such as data recovery, system administration, networking, or information systems. Some computer repair technicians are self-employed or own a firm that provides services in a regional area. Some are subcontracted as freelancers or consultants. This type of computer repair technician ranges from hobbyists and enthusiasts to those who work professionally in the field. Computer malfunctions can range from a minor incorrect setting to spyware and viruses, and may require anything up to replacing hardware or an entire operating system. Some technicians provide on-site services, usually at an hourly rate. Others provide services off-site, where the client can drop their computers and other devices off at the repair shop; some offer pick-up and drop-off services for convenience. Some technicians may also take back old equipment for recycling, which is required in the EU under WEEE rules.
Hardware repair
While computer hardware configurations vary widely, a computer repair technician who works on OEM equipment will work with five general categories of hardware: desktop computers, laptops, servers, computer clusters, and smartphones/mobile computing devices. Technicians also work with and occasionally repair a range of peripherals, including input devices (like keyboards, mice, webcams and scanners), output devices (like displays, printers, and speakers), and data storage devices such as internal and external hard drives and disk arrays. Technicians involved in system administration might also work with networking hardware, including routers, switches, cabling, fiber optics, and wireless networks.
Software repair
When possible, computer repair technicians protect the computer user's data and settings. Following a repair, an ideal scenario will give the user access to the same data and settings that were available to them prior to the repair. To address a software problem, the technician could take an action as minor as adjusting a single setting, or they may employ more involved techniques such as installing, uninstalling, or reinstalling various software packages. Advanced software repairs often involve directly editing keys and values in the Windows Registry or running commands directly from the command prompt.
A reliable, but somewhat more complicated, procedure for addressing software issues is known as a system restore (also referred to as imaging or reimaging), in which the computer's original installation image (including the operating system and original applications) is reapplied to a formatted hard drive. Anything unique, such as settings or personal files, will be destroyed if not backed up on external media, as this reverts everything to its original unused state. The technician can only reimage if there is an image of the hard drive for that computer, either in a separate partition or stored elsewhere. On a Microsoft Windows system, if a restore point was saved (normally on the computer's hard drive), then the installed applications and Windows Registry can be restored to that point. This procedure may solve problems that have arisen after the time the restore point was created. Finally, if no image or system restore point is available, a fresh installation of the operating system is recommended. Formatting and reinstalling the operating system will require the license information from the initial purchase; if none is available, the operating system may require a new license to be used.
Data recovery
One of the most common tasks performed by computer repair technicians, after software updates and screen repairs, is data recovery: the process of recovering lost data from a corrupted or otherwise inaccessible hard drive. In most cases third-party data recovery software is used to retrieve the data and transfer it to a new hard drive. Specialists say that in about 15% of cases the data cannot be recovered because the hard disk is damaged to the point where it will no longer function. Backblaze's annual report indicates that the hard drive failure rate for the first quarter of 2020 was 1.07%.
Education
Education requirements vary by company and individual proprietor. The entry-level requirement is generally based on the extent of the work expected. Often a four-year degree will be required for a more specialized technician, whereas a general support technician may only require a two-year degree or some post-secondary classes.
Certification
Common Certifications
The most common certifications for computer repair technicians are the CompTIA A+ and Network+ certifications.
Additional Certifications
Additional certifications are useful when technicians are expanding their skill set and when seeking advanced, higher-paying positions. These are generally offered by specific software or hardware providers and give the technician in-depth knowledge of the systems related to that software or hardware. For instance, the Microsoft Technology Associate and Microsoft Certified Solutions Associate certifications give the technician proof that they have mastered PC fundamentals.
Additional Computer Technician Certifications
Microsoft (MCSE, MCITP, MCTS)
Apple (ACSP, ACTC)
International Information Systems Security Certification Consortium (CISSP)
Information Systems Audit and Control Association (ISACA)
Project Management Professional (PMP)
Additional Network Technician Certifications
Cisco CCNA and CCNP
Cisco CCIE Enterprise Infrastructure and CCIE Enterprise Wireless
SolarWinds Certified Professional
Wireshark WCNA
License
In Texas, computer companies and professionals are required to have private investigators' licenses if they access computer data for purposes other than diagnosis or repair.
Texas Occupations Code, Chapter 1702, Section 104, Subsection 4(b).
See also
Information systems technician
Rework (electronics)
3-Pronged Parts Retriever
300006
https://en.wikipedia.org/wiki/Use%20case
Use case
In software and systems engineering, the phrase use case is a polyseme with two senses: A usage scenario for a piece of software; often used in the plural to suggest situations where a piece of software may be useful. A potential scenario in which a system receives an external request (such as user input) and responds to it. This article discusses the latter sense. A use case is a list of actions or event steps typically defining the interactions between a role (known in the Unified Modeling Language (UML) as an actor) and a system to achieve a goal. The actor can be a human or other external system. In systems engineering, use cases are used at a higher level than within software engineering, often representing missions or stakeholder goals. The detailed requirements may then be captured in the Systems Modeling Language (SysML) or as contractual statements. History In 1987, Ivar Jacobson presented the first article on use cases at the OOPSLA'87 conference. He described how this technique was used at Ericsson to capture and specify requirements of a system using textual, structural, and visual modeling techniques to drive object oriented analysis and design. Originally he had used the terms usage scenarios and usage case – the latter a direct translation of his Swedish term användningsfall – but found that neither of these terms sounded natural in English, and eventually he settled on use case. In 1992 he co-authored the book Object-Oriented Software Engineering - A Use Case Driven Approach, which laid the foundation of the OOSE system engineering method and helped to popularize use cases for capturing functional requirements, especially in software development. In 1994 he published a book about use cases and object-oriented techniques applied to business models and business process reengineering. At the same time, Grady Booch and James Rumbaugh worked at unifying their object-oriented analysis and design methods, the Booch method and Object Modeling Technique (OMT) respectively. In 1995 Ivar Jacobson joined them and together they created the Unified Modelling Language (UML), which includes use case modeling. UML was standardized by the Object Management Group (OMG) in 1997. Jacobson, Booch and Rumbaugh also worked on a refinement of the Objectory software development process. The resulting Unified Process was published in 1999 and promoted a use case driven approach. Since then, many authors have contributed to the development of the technique, notably: Larry Constantine developed in 1995, in the context of usage-centered design, so called "essential use-cases" that aim to describe user intents rather than sequences of actions or scenarios which might constrain or bias the design of user interface; Alistair Cockburn published in 2000 a goal-oriented use case practice based on text narratives and tabular specifications; Kurt Bittner and Ian Spence developed in 2002 advanced practices for analyzing functional requirements with use cases; Dean Leffingwell and Don Widrig proposed to apply use cases to change management and stakeholder communication activities; Gunnar Overgaard proposed in 2004 to extend the principles of design patterns to use cases. In 2011, Jacobson published with Ian Spence and Kurt Bittner the ebook Use Case 2.0 to adapt the technique to an agile context, enriching it with incremental use case "slices", and promoting its use across the full development lifecycle after having presented the renewed approach at the annual IIBA conference. 
General principle
Use cases are a technique for capturing, modelling and specifying the requirements of a system. A use case corresponds to a set of behaviours that the system may perform in interaction with its actors, and which produces an observable result that contributes to its goals. Actors represent the roles that human users or other systems have in the interaction.
During requirements analysis, when it is identified, a use case is named according to the specific user goal that it represents for its primary actor. The case is further detailed with a textual description or with additional graphical models that explain the general sequence of activities and events, as well as variants such as special conditions, exceptions, or error situations.
According to the Software Engineering Body of Knowledge (SWEBOK), use cases belong to the scenario-based requirement elicitation techniques as well as the model-based analysis techniques. But use cases also support narrative-based requirements gathering, incremental requirement acquisition, system documentation, and acceptance testing.
Variations
There are different kinds of use cases and variations in the technique:
System use cases specify the requirements of a system to be developed. They identify in their detailed description not only the interactions with the actors but also the entities that are involved in the processing. They are the starting point for further analysis models and design activities.
Business use cases focus on a business organisation instead of a software system. They are used to specify business models and business process requirements in the context of business process reengineering initiatives.
Essential use cases, also called abstract use cases, describe the potential intents of the actors and how the system addresses these, without defining any sequence or describing a scenario. This practice was developed with the aim of supporting user-centric design and of avoiding bias about the user interface in the early stages of the system specification.
Use Case 2.0 adapts the technique for the context of agile development methods. It enriches the requirements-gathering practice with support for user-story narratives, and provides use case "slices" to facilitate incremental elicitation of requirements and enable incremental implementation.
Scope
The scope of a use case can be defined by subject and by goals:
The subject identifies the system, sub-system or component that will provide the interactions.
The goals can be structured hierarchically, taking into account the organisational level interested in the goal (e.g. company, department, user) and the decomposition of the user's goal into sub-goals. The decomposition of the goal is performed from the point of view of the users, and independently of the system, which differs from traditional functional decomposition.
Usage
Use cases are known to be applied in the following contexts:
Object-Oriented Software Engineering (OOSE), as the driving element;
Unified Modeling Language (UML), as a behavioral modelling instrument;
the Unified Software Development Process (UP) and its forerunner, the IBM Rational Unified Process (RUP);
up-front documentation of a software requirements specification (SRS), as an alternative structure for the functional requirements;
deriving the design from the requirements using the entity-control-boundary approach; and
agile development.
Templates
There are many ways to write a use case in text, ranging from a use case brief through casual and outline forms to fully dressed use cases, and with varied templates. Writing use cases in templates devised by various vendors or experts is a common industry practice for obtaining high-quality functional system requirements.
Cockburn style
The template defined by Alistair Cockburn in his book Writing Effective Use Cases has been one of the most widely used writing styles for use cases.
Design scopes
Cockburn suggests annotating each use case with a symbol to show the "Design Scope", which may be black-box (internal detail is hidden) or white-box (internal detail is shown). Five symbols are available: black-box and white-box versions of the organization and system scopes, plus a component scope. Other authors sometimes call use cases at the organization level "business use cases".
Goal levels
Cockburn also suggests annotating each use case with a symbol to show the "Goal Level"; the preferred level is "User-goal" (or colloquially "sea level"). Sometimes in text writing, a use case name followed by an alternative text symbol (!, +, -, etc.) is a more concise and convenient way to denote levels, e.g. place an order!, login-.
Fully dressed
Cockburn describes a more detailed structure for a use case, but permits it to be simplified when less detail is needed. His fully dressed use case template lists the following fields:
Title: "an active-verb goal phrase that names the goal of the primary actor"
Primary Actor
Goal in Context
Scope
Level
Stakeholders and Interests
Precondition
Minimal Guarantees
Success Guarantees
Trigger
Main Success Scenario
Extensions
Technology & Data Variations List
In addition, Cockburn suggests using two devices to indicate the nature of each use case: icons for design scope and goal level. Cockburn's approach has influenced other authors; for example, Alexander and Beus-Dukic generalize Cockburn's "fully dressed use case" template from software to systems of all kinds, with the following fields differing from Cockburn:
Variation scenarios "(maybe branching off from and maybe returning to the main scenario)"
Exceptions "i.e. exception events and their exception-handling scenarios"
Casual
Cockburn recognizes that projects may not always need detailed "fully dressed" use cases. He describes a casual use case with the fields:
Title (goal)
Primary Actor
Scope
Level
Story: the body of the use case is simply a paragraph or two of text, informally describing what happens.
Fowler style
Martin Fowler states "There is no standard way to write the content of a use case, and different formats work well in different cases." He describes "a common style to use" as follows:
Title: "goal the use case is trying to satisfy"
Main Success Scenario: numbered list of steps
Step: "a simple statement of the interaction between the actor and a system"
Extensions: separately numbered lists, one per extension
Extension: "a condition that results in different interactions from .. the main success scenario". An extension from main step 3 is numbered 3a, etc.
The Fowler style can also be viewed as a simplified variant of the Cockburn template.
Actors
A use case defines the interactions between external actors and the system under consideration to accomplish a goal. Actors must be able to make decisions, but need not be human: "An actor might be a person, a company or organization, a computer program, or a computer system—hardware, software, or both."
Actors are always stakeholders, but not all stakeholders are actors, since they "never interact directly with the system, even though they have the right to care how the system behaves." For example, "the owners of the system, the company's board of directors, and regulatory bodies such as the Internal Revenue Service and the Department of Insurance" could all be stakeholders but are unlikely to be actors. Similarly, a person using a system may be represented as different actors because of playing different roles. For example, user "Joe" could be playing the role of a Customer when using an Automated Teller Machine to withdraw cash from his own account, or playing the role of a Bank Teller when using the system to restock the cash drawer on behalf of the bank. Actors are often working on behalf of someone else. Cockburn writes that "These days I write 'sales rep for the customer' or 'clerk for the marketing department' to capture that the user of the system is acting for someone else." This tells the project that the "user interface and security clearances" should be designed for the sales rep and clerk, but that the customer and marketing department are the roles concerned about the results. A stakeholder may play both an active and an inactive role: for example, a Consumer is both a "mass-market purchaser" (not interacting with the system) and a User (an actor, actively interacting with the purchased product). In turn, a User is both a "normal operator" (an actor using the system for its intended purpose) and a "functional beneficiary" (a stakeholder who benefits from the use of the system). For example, when user "Joe" withdraws cash from his account, he is operating the Automated Teller Machine and obtaining a result on his own behalf. Cockburn advises to look for actors among the stakeholders of a system, the primary and supporting (secondary) actors of a use case, the system under design (SuD) itself, and finally among the "internal actors", namely the components of the system under design. Business use case In the same way that a use case describes a series of events and interactions between a user (or other type of Actor) and a system, in order to produce a result of value (goal), a business use case describes the more general interaction between a business system and the users/actors of that system to produce business results of value. The primary difference is that the system considered in a business use case model may contain people in addition to technological systems. These "people in the system" are called business workers. In the example of a restaurant, a decision must be made whether to treat each person as an actor (thus outside the system) or a business worker (inside the system). If a waiter is considered an actor, as shown in the example below, then the restaurant system does not include the waiter, and the model exposes the interaction between the waiter and the restaurant. An alternative would be to consider the waiter as a part of the restaurant system (a business worker), while considering the client to be outside the system (an actor). Visual modeling Use cases are not only texts, but also diagrams, if needed. In the Unified Modeling Language, the relationships between use cases and actors are represented in use case diagrams originally based upon Ivar Jacobson's Objectory notation. SysML uses the same notation at a system block level. 
In addition, other behavioral UML diagrams such as activity diagrams, sequence diagrams, communication diagrams and state machine diagrams can also be used to visualize use cases. Specifically, a System Sequence Diagram (SSD) is a sequence diagram often used to show the interactions between the external actors and the system under design (SuD), usually for visualizing a particular scenario of a use case. Use case analysis usually starts by drawing use case diagrams. For agile development, a requirements model of UML diagrams depicting use cases, plus some textual descriptions, notes or use case briefs, is lightweight and often just enough for small or simple projects. As complements to use case texts, the visual diagram representations of use cases are also effective tools for better understanding, communicating, and designing complex system behavioral requirements.
Examples
Below is a sample use case written with a slightly modified version of the Cockburn-style template. Note that there are no buttons, controls, forms, or any other UI elements or operations in the basic use case description; only user goals, subgoals or intentions are expressed in every step of the basic flow or extensions. This practice makes the requirement specification clearer and maximizes the flexibility of the design and implementation.
Use Case: Edit an article
Primary Actor: Member (Registered User)
Scope: a Wiki system
Level: ! (User goal or sea level)
Brief: (equivalent to a user story or an epic) The member edits any part (the entire article or just a section) of an article they are reading. Preview and changes comparison are allowed during the editing.
Stakeholders: ...
Postconditions
Minimal Guarantees:
Success Guarantees: The article is saved and an updated view is shown. An edit record for the article is created by the system, so watchers of the article can be informed of the update later.
Preconditions: The article, with editing enabled, is presented to the member.
Triggers: The member invokes an edit request (for the full article or just one section) on the article.
Basic flow:
1. The system provides a new editor area/box filled with all the article's relevant content, with an informative edit summary for the member to edit. If the member just wants to edit a section of the article, only the original content of the section is shown, with the section title automatically filled out in the edit summary.
2. The member modifies the article's content until the member is satisfied.
3. The member fills out the edit summary, tells the system whether they want to watch this article, and submits the edit.
4. The system saves the article, logs the edit event, and finishes any necessary post-processing.
5. The system presents the updated view of the article to the member.
Extensions:
2–3a. Show preview: The member selects Show preview, which submits the modified content. The system reruns step 1 with the addition of the rendered updated content for preview, informs the member that the edits have not been saved yet, and then continues.
2–3b. Show changes: The member selects Show changes, which submits the modified content. The system reruns step 1 with the addition of a comparison of the differences between the member's current edits and the most recent saved version of the article, and then continues.
2–3c. Cancel the edit: The member selects Cancel. The system discards any changes the member has made, then goes to step 5.
4a. Timeout: ...
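As a complement to the textual template, a use case such as the one above can also be captured as structured data so that tooling can list goals, check that fields are filled in, or enumerate flow paths. The sketch below is only one possible representation: the class and its field names are hypothetical and simply mirror the Cockburn-style fields used in the example; they are not part of UML or of any published tool.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class UseCase:
    # Field names mirror the Cockburn-style template used in the example above.
    title: str
    primary_actor: str
    scope: str
    level: str
    preconditions: List[str] = field(default_factory=list)
    success_guarantees: List[str] = field(default_factory=list)
    trigger: str = ""
    main_success_scenario: List[str] = field(default_factory=list)
    extensions: Dict[str, str] = field(default_factory=dict)  # step label -> alternative flow

edit_article = UseCase(
    title="Edit an article",
    primary_actor="Member (registered user)",
    scope="a Wiki system",
    level="User goal (sea level)",
    preconditions=["The article, with editing enabled, is presented to the member."],
    success_guarantees=[
        "The article is saved and an updated view is shown.",
        "An edit record is created so watchers of the article can be informed.",
    ],
    trigger="The member invokes an edit request on the article or a section.",
    main_success_scenario=[
        "The system provides an editor filled with the article's relevant content.",
        "The member modifies the content until satisfied.",
        "The member fills out the edit summary and submits the edit.",
        "The system saves the article and logs the edit event.",
        "The system presents the updated view of the article.",
    ],
    extensions={
        "2-3a": "Show preview",
        "2-3b": "Show changes",
        "2-3c": "Cancel the edit",
        "4a": "Timeout",
    },
)

# Each main-scenario step and extension can then be enumerated, for example to
# seed one functional test case per flow path.
for label, flow in edit_article.extensions.items():
    print(f"{edit_article.title}: extension {label} ({flow})")
```

Enumerating the main scenario and extensions in this form also makes it straightforward to derive one candidate test case per flow path.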
Advantages
Since the inception of the agile movement, the user story technique from Extreme Programming has been so popular that many think it is the only and best solution for the agile requirements of all projects. Alistair Cockburn lists five reasons why he still writes use cases in agile development:
The list of goal names provides the shortest summary of what the system will offer (even shorter than user stories). It also provides a project planning skeleton, to be used to build initial priorities, estimates, team allocation and timing.
The main success scenario of each use case provides everyone involved with an agreement as to what the system will basically do and what it will not do. It provides the context for each specific line-item requirement (e.g. fine-grained user stories), a context that is very hard to get anywhere else.
The extension conditions of each use case provide a framework for investigating all the little, niggling things that somehow take up 80% of the development time and budget. They provide a look-ahead mechanism, so the stakeholders can spot issues that are likely to take a long time to get answers for. These issues can and should then be put ahead of the schedule, so that the answers can be ready when the development team gets around to working on them.
The use case extension scenario fragments provide answers to the many detailed, often tricky and ignored business questions: "What are we supposed to do in this case?" It is a thinking and documentation framework that matches the if...then...else statement and helps the programmers think through issues, except that it is done at investigation time, not programming time.
The full use case set shows that the investigators have thought through every user's needs, every goal they have with respect to the system, and every business variant involved.
In summary, specifying system requirements in use cases has these apparent benefits compared with traditional or other approaches:
User focused
Use cases constitute a powerful, user-centric tool for the software requirements specification process. Use case modeling typically starts from identifying key stakeholder roles (actors) interacting with the system, and the goals or objectives the system must fulfill for them (an outside perspective). These user goals then become the ideal candidates for the names or titles of the use cases, which represent the desired functional features or services provided by the system. This user-centered approach ensures that what is developed has real business value and is what users really want, rather than trivial functions speculated about from a developer or system (inside) perspective. Use case authoring has been an important and valuable analysis tool in the domain of user-centered design (UCD) for years.
Better communication
Use cases are often written in natural language with structured templates. This narrative textual form (legible requirement stories), understandable by almost everyone and complemented by visual UML diagrams, fosters better and deeper communication among all stakeholders, including customers, end users, developers, testers and managers. Better communication results in quality requirements and thus in quality systems delivered.
Quality requirements by structured exploration
One of the most powerful things about use cases resides in the formats of the use case templates, especially the main success scenario (basic flow) and the extension scenario fragments (extensions, exceptional and/or alternative flows).
Analyzing a use case step by step from preconditions to postconditions, exploring and investigating every action step of the use case flows, from basic to extensions, to identify those tricky, normally hidden and ignored, seemingly trivial but realistically often costly requirements (as Cockburn mentioned above), is a structured and beneficial way to obtain clear, stable and quality requirements systematically. Minimizing and optimizing the action steps of a use case to achieve the user goal also contributes to better interaction design and user experience of the system. Facilitate testing and user documentation With content based upon an action or event flow structure, a model of well-written use cases also serves as excellent groundwork and valuable guidance for the design of test cases and user manuals of the system or product, which makes the up-front effort a worthwhile investment. There are obvious connections between the flow paths of a use case and its test cases. Deriving functional test cases from a use case through its scenarios (running instances of a use case) is straightforward (a minimal sketch of this derivation follows the Tools section below). Limitations Limitations of use cases include: Use cases are not well suited to capturing non-interaction based requirements of a system (such as algorithm or mathematical requirements) or non-functional requirements (such as platform, performance, timing, or safety-critical aspects). These are better specified declaratively elsewhere. As there are no fully standard definitions of use cases, each project must form its own interpretation. Some use case relationships, such as extends, are ambiguous in interpretation and can be difficult for stakeholders to understand, as pointed out by Cockburn (Problem #6). Use case developers often find it difficult to determine the level of user interface (UI) dependency to incorporate in a use case. While use case theory suggests that UI not be reflected in use cases, it can be awkward to abstract out this aspect of design, as it makes the use cases difficult to visualize. In software engineering, this difficulty is resolved by applying requirements traceability, for example with a traceability matrix. Another approach to associating UI elements with use cases is to attach a UI design to each step in the use case; this is called a use case storyboard. Use cases can be over-emphasized. Bertrand Meyer discusses issues such as driving system design too literally from use cases, and using use cases to the exclusion of other potentially valuable requirements analysis techniques. Use cases are a starting point for test design, but since each test needs its own success criteria, use cases may need to be modified to provide separate post-conditions for each path. Though use cases include goals and contexts, whether these goals and the motivations behind them (stakeholder concerns and their assessments, including non-interaction aspects) conflict with or negatively/positively affect other system goals is the subject of goal-oriented requirements modelling techniques (such as BMM, i*, KAOS and ArchiMate ARMOR). Misconceptions Common misunderstandings about use cases are: User stories are agile; use cases are not. Agile and Scrum are neutral on requirement techniques. As the Scrum Primer states, Product Backlog items are articulated in any way that is clear and sustainable. Contrary to popular misunderstanding, the Product Backlog does not contain "user stories"; it simply contains items. 
Those items can be expressed as user stories, use cases, or any other requirements approach that the group finds useful. But whatever the approach, most items should focus on delivering value to customers. Use case techniques have evolved to take Agile approaches into account by using use case slices to incrementally enrich a use case. Use cases are mainly diagrams. Craig Larman stresses that "use cases are not diagrams, they are text". Use cases have too much UI-related content. As some put it, use cases will often contain a level of detail (i.e. naming of labels and buttons) which makes them poorly suited for capturing the requirements for a new system from scratch. This is a novice misunderstanding. Each step of a well-written use case should present actor goals or intentions (the essence of functional requirements); normally it should not contain any user interface details, e.g. naming of labels and buttons or UI operations, which is a bad practice that unnecessarily complicates use case writing and constrains the implementation. As for capturing requirements for a new system from scratch, use case diagrams plus use case briefs are often used as handy and valuable tools, at least as lightweight as user stories. Writing use cases for large systems is tedious and a waste of time. As some put it, the format of the use case makes it difficult to describe a large system (e.g. a CRM system) in fewer than several hundred pages. It is time consuming and you will find yourself spending time doing an unnecessary amount of rework. Spending a great deal of time writing tedious use cases that add little or no value and result in a lot of rework is a bad smell indicating that the writers lack the skill and knowledge to write quality use cases efficiently and effectively. Use cases should be authored in an iterative, incremental and evolutionary (agile) way. Applying use case templates does not mean that all the fields of a use case template should be used and filled out comprehensively up-front or during a special dedicated stage, i.e. the requirements phase of the traditional waterfall development model. In fact, the use case formats formulated by the popular template styles, e.g. RUP's and Cockburn's (also adopted by the OUM method), have proven in practice to be valuable and helpful tools for capturing, analyzing and documenting the complex requirements of large systems. The quality of a use case model should not be judged largely or solely by its size. A quality, comprehensive use case model of a large system may well evolve into hundreds of pages, mainly because of the inherent complexity of the problem at hand rather than the poor writing skills of its authors. Tools Text editors and/or word processors with template support are often used to write use cases. For large and complex system requirements, dedicated use case tools are helpful. Some of the well-known use case tools include: CaseComplete Enterprise Architect MagicDraw Rational Software's RequisitePro - one of the early, well-known use case and requirement management tools in the 1990s. Software Ideas Modeler Wiki software - good tools for teams to author and manage use cases collaboratively. Most UML tools support both the text writing and visual modeling of use cases. 
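As noted under "Facilitate testing and user documentation" above, the flow paths of a use case map naturally onto test cases. Where no dedicated tool is available, even a small script can turn a structured use case, such as the hypothetical UseCase sketch shown after the worked example, into test-case skeletons. This is an illustrative sketch only; it does not correspond to the API of any tool listed above.

```python
# Hypothetical sketch only: derive test-case skeletons from a use case's
# scenarios. Assumes the UseCase structure sketched after the worked example;
# nothing here corresponds to a real tool's API.

def test_case_skeletons(uc):
    """Return one skeleton for the main success scenario plus one per extension."""
    skeletons = [{
        "name": f"{uc.title} - main success scenario",
        "preconditions": list(uc.preconditions),
        "steps": list(uc.main_flow),
        "expected": list(uc.success_guarantees),
    }]
    for step_id, alternative in uc.extensions.items():
        skeletons.append({
            "name": f"{uc.title} - extension {step_id}",
            "preconditions": list(uc.preconditions),
            "steps": list(uc.main_flow) + [f"At step {step_id}: {alternative}"],
            "expected": ["(to be defined per extension)"],
        })
    return skeletons

for skeleton in test_case_skeletons(edit_article):
    print(skeleton["name"])
```

Each skeleton still needs its own concrete data and expected results, which is consistent with the limitation noted earlier that every test requires its own success criteria.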
See also Abuse case Business case Entity-control-boundary Event partitioning Feature List of UML tools Misuse case Requirement Requirements elicitation Scenario Storyboard Test case Use case points References Further reading Alexander, Ian, and Beus-Dukic, Ljerka. Discovering Requirements: How to Specify Products and Services. Wiley, 2009. Alexander, Ian, and Maiden, Neil. Scenarios, Stories, Use Cases. Wiley, 2004. Armour, Frank, and Granville Miller. Advanced Use Case Modeling: Software Systems. Addison-Wesley, 2000. Bittner, Kurt, and Ian Spence. Use Case Modeling. Addison-Wesley Professional, 20 August 2002. Cockburn, Alistair. Writing Effective Use Cases. Addison-Wesley, 2001. Constantine, Larry, and Lucy Lockwood. Software for Use: A Practical Guide to the Essential Models and Methods of Usage-Centered Design. Addison-Wesley, 1999. Denney, Richard. Succeeding with Use Cases: Working Smart to Deliver Quality. Addison-Wesley, 2005. Fowler, Martin. UML Distilled (Third Edition). Addison-Wesley, 2004. Jacobson, Ivar, Christerson M., Jonsson P., Övergaard G. Object-Oriented Software Engineering - A Use Case Driven Approach. Addison-Wesley, 1992. Jacobson, Ivar, Spence I., Bittner K. Use Case 2.0: The Guide to Succeeding with Use Cases. IJI SA, 2011. Leffingwell, Dean, and Don Widrig. Managing Software Requirements: A Use Case Approach. Addison-Wesley Professional, 7 December 2012. Kulak, Daryl, and Eamonn Guiney. Use cases: requirements in context. Addison-Wesley, 2012. Meyer, Bertrand. Object Oriented Software Construction (2nd edition). Prentice Hall, 2000. Schneider, Geri, and Winters, Jason P. Applying Use Cases 2nd Edition: A Practical Guide. Addison-Wesley, 2001. Wazlawick, Raul S. Object-Oriented Analysis and Design for Information Systems: Modeling with UML, OCL, and IFML. Morgan Kaufmann, 2014. External links Use case column and Basic Use Case Template by Alistair Cockburn Use Cases at GSA's Usability.gov Application of use cases for stakeholder analysis "Project Icarus" at Academia.edu Search for "use case" at IBM Developer Software project management Software requirements Unified Modeling Language Systems Modeling Language 1986 establishments in Sweden 1986 in computing Swedish inventions Agile software development
58710893
https://en.wikipedia.org/wiki/Notion%20%28productivity%20software%29
Notion (productivity software)
Notion is project management and note-taking software. It is designed to help members of a company or organization coordinate deadlines, objectives, and assignments for the sake of efficiency and productivity. History Notion Labs Inc is a startup based in San Francisco, founded in 2013 by Ivan Zhao. At that time, the founders declined to meet any venture capitalists or discuss acquiring a higher valuation. In March 2018, Notion 2.0 was released. It was positively received by Product Hunt and rated #1 Product of the Month. At that point, the company had fewer than 10 employees. In June 2018, an official Android app was released. In September 2019, the company announced it had reached 1 million users; less than seven months later, in April 2020, Notion became a unicorn after being valued at two billion dollars. Notion received $50 million in investments from Index Ventures and other investors in January 2020. The Korean version of Notion was officially released in 2020. Korean startups known to use Notion include Toss, StyleShare, Sandbox, Fast Five, Class 101, Zigzag, Karrot, and Ridibooks. On September 7, 2021, Notion acquired a Hyderabad-based startup called Automate.io. In October of that year, a new round of funding led by Coatue Management and Sequoia Capital helped Notion raise $275 million. The investment valued Notion at $10 billion, and the company had a total of 20 million users. Software Notion is a collaboration platform with modified markdown support that integrates kanban boards, tasks, wikis, and databases. The software is an all-in-one workspace for note-taking, knowledge and data management, and project and task management. It is a file management tool offering a unified workspace, allowing users to comment on ongoing projects, participate in discussions, and receive feedback. In addition to cross-platform apps, it can be accessed via most web browsers. The software includes a tool for "clipping" content from webpages. Notion helps users schedule tasks, manage files, save documents, set reminders, keep agendas, and organize their work. Notion supports LaTeX, allowing users to write and paste equations as blocks or inline. Users can also embed online content in their Notion pages using Embed.ly; in this way, notes can be typed alongside an embedded video playing in picture-in-picture mode. Pricing Notion has a four-tiered subscription model: free, personal, team, and enterprise. It offers an account credit system where users can earn credit via referrals. Users are not charged if they have a remaining balance in their accounts. An academic email address entitles users to a free personal plan. As of May 2020, the company upgraded the Personal Plan to allow unlimited blocks, a change from the previous cap in the Personal Plan. See also Wiki software Collaborative software Collaborative real-time editor Document collaboration References External links Note-taking software Project management software Task management software Android (operating system) software IOS software Collaborative software Wiki software Software companies based in California Business software
18558517
https://en.wikipedia.org/wiki/IPv6%20deployment
IPv6 deployment
Deployment of Internet Protocol Version 6 (IPv6), the latest generation of the Internet Protocol, has been in progress since the mid-2000s. IPv6 was designed as a replacement for IPv4. IPv4 has been in use since 1982, and is in the final stages of exhausting its unallocated address space, but still carries most Internet traffic. Google's statistics show IPv6 availability among its users at around 32–37% depending on the day of the week (greater on weekends), as of November 2021. Adoption is uneven across countries and Internet service providers. Many countries have 0% use while a few have over 50% use, such as India and Germany. In November 2016, 1,491 (98.2%) of the 1,519 top-level domains (TLDs) in the Internet supported IPv6 to access their domain name servers, 1,485 (97.8%) zones contained IPv6 glue records, and approximately 9.0 million domains (4.6%) had IPv6 address records in their zones. Of all networks in the global BGP routing table, 29.2% had IPv6 protocol support. By 2011, all major operating systems in use on personal computers and server systems had production-quality IPv6 implementations. Cellular telephone systems present a large deployment field for Internet Protocol devices as mobile telephone service continues to make the transition from 3G to 4G technologies, in which voice is provisioned as a voice over IP (VoIP) service. In 2009, the US cellular operator Verizon released technical specifications for devices to operate on its 4G networks. The specification mandates IPv6 operation according to the 3GPP Release 8 Specifications (March 2009), and relegates IPv4 to an optional capability. Deployment evaluation tools Google publishes statistics on IPv6 adoption among Google users. A graph of IPv6 adoption since 2008 and a map of IPv6 deployment by country are available. Akamai publishes by-country and by-network statistics on IPv6 adoption for traffic it sees on its global Content Distribution Network (CDN). This set of data also shows graphs for each country and network over time. A global view into the history of the growing IPv6 routing tables can be obtained with the SixXS Ghost Route Hunter. This tool provided a list of all allocated IPv6 prefixes until 2014 and marked with colors the ones that were actually being announced in the Internet BGP tables. An announced prefix means that the ISP can at least receive IPv6 packets for that prefix. The integration of IPv6 on existing network infrastructure may be monitored from other sources, for example: Regional Internet registries (RIR) IPv6 prefix allocation IPv6 transit services Japan ISP IPv6 services IPv6 testing, evaluation, and certification A few organizations are involved with international IPv6 test and evaluation, ranging from the United States Department of Defense to the University of New Hampshire. The US DoD Joint Interoperability Test Command DoD IPv6 Product Certification Program University of New Hampshire InterOperability Laboratory involvement in the IPv6 Ready Logo Program SATSIX Major milestones Operating system support By 2011, all major operating systems in use on personal computers and server systems had production-quality IPv6 implementations. Microsoft Windows has supported IPv6 since Windows 2000, and in a production-ready state beginning with Windows XP. Windows Vista and later have improved IPv6 support. macOS since Panther (10.3), Linux 2.6, FreeBSD, and Solaris also have mature production implementations. 
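Because production IPv6 stacks are now standard in mainstream operating systems, an end user or operator can probe their own IPv6 reachability directly rather than relying only on published adoption statistics. The sketch below uses only Python's standard library; the probe host and port are arbitrary assumptions chosen purely for illustration.

```python
# Hypothetical sketch only: check whether this host can actually reach a
# dual-stacked site over IPv6, using nothing but Python's standard library.
# The probe host and port are arbitrary assumptions for illustration.
import socket

def has_ipv6_path(host="www.google.com", port=443, timeout=3.0):
    try:
        # Ask only for AAAA-derived (IPv6) addresses.
        candidates = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no IPv6 address could be resolved for the host
    for family, socktype, proto, _canonname, sockaddr in candidates:
        try:
            with socket.socket(family, socktype, proto) as sock:
                sock.settimeout(timeout)
                sock.connect(sockaddr)  # succeeds only if a usable IPv6 route exists
                return True
        except OSError:
            continue
    return False

print("IPv6 reachable:", has_ipv6_path())
```

A host without a usable IPv6 route typically fails at the connect step even when a AAAA record resolves, which is why the sketch checks both name resolution and an actual connection.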
Some implementations of the BitTorrent peer-to-peer file transfer protocol make use of IPv6 to avoid NAT issues common for IPv4 private networks. Government encouragement In the early 2000s, governments increasingly required support for IPv6 in new equipment. The US government, for example, specified in 2005 that the network backbones of all federal agencies had to be upgraded to IPv6 by June 30, 2008; this was completed before the deadline. In addition, the US government in 2010 required federal agencies to provide native dual-stacked IPv4/IPv6 access to external/public services by 2012, and internal clients were to utilize IPv6 by 2014. Progress on the US government's external-facing IPv6 services is tracked by NIST. The government of the People's Republic of China implemented a five-year plan for deployment of IPv6 called the China Next Generation Internet (see below). Coexistence with IPv4 On 7 March 2013, the Internet Engineering Task Force created a working group for IPv4 sunset in preparation for protocol changes that could be used to support the sunset or shutdown of remnant IPv4 networks. However, in May 2018 this working group was closed as no immediate work could be identified due to the slow transition to IPv6. The Internet Engineering Task Force expects IPv6 to coexist with IPv4, as it is considered impractical to transition to IPv6 in the short term. The coexistence is expected to be based on dual-stack, tunneling or translation mechanisms. Dual-stack implementations require two parallel logical networks, increasing the cost and complexity of the network. IPv4 networks are expected to slowly transition into segmented subnetworks using IPv4 Residual Deployment. The slow transition to IPv6 has caused significant resentment in the Internet community. As a result, many larger enterprises, such as Microsoft, are phasing out IPv4 and moving towards IPv6 single-stack within the company. In a blog post, the company described its heavily translated IPv4 network as "potentially fragile", "operationally challenging", and, with regard to dual-stack operations (i.e. those running IPv4 and IPv6 simultaneously), "complex". Deployment by country and region Algeria AnwarNet (www.anwarnet.dz); AfriNIC has allocated a range of IPv6 address space to AnwarNet, which started IPv6 services in 2011. Australia AARNet completed network AARNet 3, a high-speed network connecting academic and research customers in the major metropolitan centres, with international links to major ISPs in the US, Asia, and Europe. One of the design goals was to support both IPv4 and IPv6 protocols equally. It also supports multicast routing and jumbo frames. IPv6 Now Pty Ltd introduced the first commercial-grade IPv6 tunnel broker service in Australia on April 30, 2008. Also, in June 2008, IPv6Now introduced the first dual-stacked (IPv4 & IPv6) web hosting service. Internode was the first commercial ISP in Australia to have full IPv6 connectivity and make IPv6 available to customers. The availability to customers was officially announced on Whirlpool on July 18, 2008. The Victorian State government granted A$350,000 to establish an IPv6 testbed network (VIC6) freely available to industry to evaluate their IPv6 products and strategies. 
Telstra announced on 5 September 2011 that their backbone network was fully dual-stacked and that they had commenced providing enterprise, government and wholesale customers with IPv6 connectivity and helping customers through the transition; they activated IPv6 addressing for their mobile network on 12 September 2016. Aussie Broadband offers native IPv6 as an opt-in beta feature as of November 2018; however, it had to be withdrawn briefly due to bugs in their Cisco-supplied equipment. Bangladesh Infolink successfully tested and started commercial IPv6 beta deployment to end users for the first time in Bangladesh on May 22, 2017. SpeedLinks successfully tested and started commercial IPv6 deployment to end users for the first time in Rangpur Division, Bangladesh on January 25, 2022. Belgium On July 13, 2010, Logica Netherlands (operating within the SPITS project in cooperation with Mobistar Belgium) successfully tested native IPv6 over UMTS/GPRS in Belgium and the Netherlands within a vehicle platform as an Intelligent transportation system solution. The test was performed both in GSM and in tethering mode using a Nokia smartphone. Since September 2013, research and government ISP Belnet offers native IPv6 to all customers. VOO, a large residential (cable) ISP, started its transition in April 2013, leading to impressive growth in IPv6 in Belgium. Telenet started its transition in February 2014, helping to push the Belgian average of IPv6 usage to almost 30% by September 2014 and putting them in the top 10 of worldwide ISPs whose customers visit websites over IPv6. According to APNIC, IPv6 penetration is 52% as of January 2019; penetration briefly peaked around 70% in August 2017. Brazil As of April 2021, Brazil has 38.4% IPv6 adoption. IPv6 adoption in the country was boosted in 2015 when the Brazilian telecommunications agency, Anatel, announced that all Internet operators and service providers would be required to provide IPv6 addresses to consumers. This was one of a number of initiatives to increase the speed of deployment. Bulgaria The country has constructed a research center to study the possibilities of adopting IPv6. The center will operate alongside another facility, which is equipped with an IBM Blue Gene/P supercomputer. Since 2015, the ISP Blizoo has enabled IPv6 for many home customers. At the end of 2016, the ISP ComNet Bulgaria Holding Ltd. provided complete IPv6 support for all customers and households within the company network in Bulgaria. Canada IPv6 deployment is slow but ongoing, with major Canadian ISPs (notably Bell Canada) lacking support for residential customers and for the majority of their business customers (including server packages). According to Google's statistics, Canada reached an IPv6 adoption rate of 34.69% as of August 2021. Rogers Communications has deployed native IPv6 network-wide, including their DOCSIS 3.0/3.1 wireline broadband network and their HSPA/LTE mobile network. In 2018, it appeared that all wireless LTE devices in the network had only IPv6 addresses, with no IPv4 gateway, IP address or DNS servers. Shaw Communications has offered IPv6, including on DOCSIS 3.1, for residential customers using the latest XB6 cable modems since 2018. Fibrenoire, a Canadian Metro Ethernet fibre network operating in Quebec and Ontario, has been providing native IPv6 connectivity since 2009. Aptum Technologies (formerly Cogeco Peer 1) has provided IPv6 backbones to Canadian data centres since 2011, as well as in its peering centers. 
TekSavvy has deployed its own IPv6 network to its customers on DSL in Alberta, British Columbia, Ontario, and Quebec, as well as for cable customers serviced by Rogers Communications. Vidéotron has deployed its own IPv6 network to customers as a beta service. SaskTel has deployed IPv6 support for business customers subscribing to its Dedicated Internet or LANSpan IP product. Telus has deployed IPv6 support for business services and residential customers, with 61.23% IPv6 usage in August 2021 according to World IPv6 Launch measurements. Origen Telecom is a Canadian internet service provider operating in Montreal and Toronto, and supports IPv6 connectivity for its business clients. Belair Technologies operates in Montreal, Laval and the surrounding area, as well as Cornwall and Toronto, and fully supports IPv6 connectivity for all its clients. TelKel, an ISP which offers FTTH only, in Montreal, Quebec and its suburbs, has supported native dual-stack IPv4 and IPv6 from the beginning. Cogeco provides IPv6 to customers. Beanfield Metroconnect provides IPv6 for business customers. EBOX has provided IPv6 to its customers since 2013 on fiber and DSL/FTTN last-mile technologies. GemsTelecom has provided IPv6 to its customers since 2009 on fiber and DSL/FTTN last-mile technologies. China The China Next Generation Internet (CNGI, 中国下一代互联网) project is an ongoing plan initiated by the Chinese government with the purpose of gaining a significant position in the development of the Internet through the early adoption of IPv6. China showcased CNGI's IPv6 infrastructure during the 2008 Summer Olympics, the first time a major world event had a presence on the IPv6 Internet. At the time of the event, it was believed that the Olympics provided the largest showcase of IPv6 technology since the inception of IPv6. The deployment of IPv6 was widespread in all related applications, from data networking and camera transmissions for sporting events, to civil applications such as security cameras and taxis. The events were streamed live over the Internet, networked cars were able to monitor traffic conditions readily, and all network operations of the Games were conducted using IPv6. Also, CERNET (China Education and Research NETwork, 中国教育和科研计算机网, 教育网) set up native IPv6 (CERNET2), and since then many academic institutions in China have joined CERNET2 for IPv6 connectivity. CERNET2 is probably the widest deployment of IPv6 in China. It is managed and operated jointly by 25 universities. Students in Shanghai Jiao Tong University and Beijing University of Posts and Telecommunications, for example, get native IPv6. In 2017, China issued an "Action Plan for Promoting Large-scale Deployment of Internet Protocol Version 6" encouraging nationwide adoption of IPv6. As outlined in the plan, China set goals to develop a next-generation internet technical system and industrial ecosystem with independent intellectual property rights in 5 to 10 years, and aimed at having the largest IPv6 network in the world by the end of 2025. In 2018, US researchers from the Georgia Institute of Technology categorised China as being part of a group of 169 countries that had little IPv6 traffic. As of 2021, Akamai's latest State of the Internet report asserts an IPv6 adoption rate of 23.5% among Chinese internet connections. In July 2021, China announced plans to complete a national IPv6 rollout by 2030. 
China is the only country known to advocate for a single-stack network, and in May 2021 it overtook India to become the country with the most IPv6 addresses in the world, with 528 million. Czech Republic As of September 2019, the country has a deployment ratio of around 11%, according to both Google and APNIC statistics. O2 Czech Republic has deployed IPv6 on residential xDSL lines since 2012. It uses dual-stack PPPoE with CGN for IPv4. Only a /64 prefix size is available via DHCP-PD. T-Mobile Czech Republic has deployed IPv6 on residential xDSL lines since 2014. It uses dual-stack PPPoE with one public static IPv4 address and a /56 IPv6 prefix delegated via DHCP-PD. UPC Czech Republic has deployed IPv6 on residential DOCSIS lines since 2017. An IPv6-only network with IPv4 over DS-Lite is used. Customers are forced to terminate the connection in carrier-provided CPE with limited customization options. IPv6 is generally available in datacenters and web hosting companies. As of 2019, no mobile network supports IPv6. Denmark As of July 2020, the country has only 4% IPv6 traffic, according to Google statistics. A web page (in Danish) tracks national IPv6 deployment. The ISP Fullrate has begun offering IPv6 to its customers, on the condition that their router (provided by the ISP itself) is compatible. If the router is of a different version, the customer has to request a new router. Several other small ISPs have already begun implementing the protocol, as has 3, the smallest mobile provider. Estonia Estonian Telekom has been providing native IPv6 access on residential and business broadband connections since September 2014. According to Google's statistics, Estonia reached an IPv6 adoption rate of 28% by July 2020. Finland FICORA (Finnish Communications Regulatory Authority), the NIC for the .fi top-level domain, has added IPv6 addresses to DNS servers, and allows entering IPv6 addresses when registering domains. The registration service domain.fi for new domains is also available over IPv6. A small Finnish ISP, Nebula, has offered IPv6 access since 2007. FICORA held a national IPv6 day on June 9, 2015. At that time Elisa and DNA Oyj started providing IPv6 on mobile subscriptions, and Telia Company (via 6rd) and DNA Oyj (native) started providing IPv6 on fixed-line connections. According to Google's statistics, Finland has reached an IPv6 adoption rate of 40%. France AFNIC, the NIC for (among others) the .fr top-level domain, has implemented IPv6 operations. Renater, the French national academic network, offers IPv6 connectivity, including multicast support, to its members. Free, a major French ISP, rolled out IPv6 as an opt-in at the end of 2007. In 2020, it removed the possibility to opt out, effectively reaching 99% coverage. Free also activated IPv6 on its mobile network just after Christmas 2020. Nerim, a small ISP, has provided native IPv6 for all its clients since March 2003. Orange (formerly France Telecom), a major ISP, is currently rolling out IPv6 on its wired network, with an estimated Q2 2016 for FTTH and VDSL, and 2017 for ADSL. OVH has implemented IPv6. FDN, a small associative ISP, has been providing native IPv6 since November 2008. SFR, a major ISP, rolled out IPv6 as an opt-in on its wired network. Bouygues Telecom planned deployment for 2017. All mobile operators in France support IPv6 (December 2020). As of January 2021, France has 42.91% IPv6 traffic according to Google, and 40% according to APNIC. 
Germany According to Google's statistics, Germany reached an IPv6 adoption rate of 52% by April 2021. The DFN backbone network offers full native IPv6 support for its participants. Many scientific networks in Germany, like the Munich Scientific Network (MWN) operated by Leibniz-Rechenzentrum, are connected to this network. Deutsche Telekom started rolling out IPv6 for new All-IP DSL customers in September 2012. Telekom started to roll out IPv6 (dual-stack) in its mobile network in August 2015. In January 2020 Deutsche Telekom announced a new APN for IPv6-only. The overall deployment rate for both mobile and fixed networks was 76% as of 31 December 2020. Vodafone Kabel Deutschland and Unitymedia offer native IPv6 to their new customers. The adoption rate was 63% for both Vodafone Kabel and Unitymedia as of 31 December 2018. , a regional carrier and ISP, offers native IPv6 for its customers; its adoption rate was 72% as of 31 December 2020. Regional carrier and ISP NetCologne has begun offering native IPv6 to its customers; its deployment rate was 68% as of 31 December 2018. Primacom (now part of PŸUR) offers IPv6 for its customers. (former Tele Columbus) has offered IPv6 connectivity since the end of 2014. Deutsche Glasfaser offers IPv6 via DHCPv6 or 6rd; IPv4 connectivity is provided to its customers via CGN. O2 introduced IPv6 for new DSL customers in 2018. Vodafone started with IPv6 in its mobile network at the end of 2019. O2 Germany started to roll out IPv6 in its mobile network, first only for new contracts, then step by step for all customers until the end of June 2021. Hong Kong Hungary In Hungary, Externet was the first ISP to start deploying IPv6 on its network, in August 2008. The service has been commercially available since May 2009. Magyar Telekom had been running tests in its production environments since the beginning of 2009. Free customer trials started on November 2, 2009, for those on ADSL or fiber optic connections. Customers are given a /128 via DHCP-ND unless they register their DUID, in which case they receive a /56; using a static configuration results in a single /64. According to information on telecompaper.com, UPC Hungary was to start deploying IPv6 in mid-2013 and finish during 2013; the plan had not materialized by the end of 2015. In December 2015, RCS&RDS (Digi) enabled native dual-stack IPv6 (customers receive dynamic /64 prefixes) for its FTTB/H customers. In November of the same year, UPC Hungary introduced DS-Lite (with private IPv4 addresses), which can be enabled on a per-customer basis if the customer asks for it. Magyar Telekom deployed dual-stack IPv6 (using dynamic /56 prefixes on DSL and GPON and static /56 prefixes on DOCSIS) for all of its wired (and all of its compatible mobile) customers in October 2016. According to APNIC statistics, IPv6 use in Hungary as of December 2018 had reached around 20%. According to Google's IPv6 statistics, the adoption rate in Hungary as of July 2020 is 26%. India According to Google's statistics, India reached an IPv6 adoption rate of around 55.8% in April 2021. APNIC places India at more than 70% preferring IPv6. The Department of Telecommunications of the government of India has run workshops on IPv6, on 13 February 2015 at Silvassa and on 11 February 2015 at DoT headquarters, New Delhi. It has also released roadmaps on IPv6 deployment. Sify Technologies Limited, a private Internet service provider, rolled out IPv6 in 2005. Sify has a dual-stack network that supports commercial services on IPv6 transport for its enterprise customers. 
Sify is a sponsored member of 6Choice, a project of India–Europe cooperation to promote IPv6 adoption. Sify.com was the first to launch a dual-stack commercial portal. ERNET, the Indian Education and Research Network of the Department of Electronics & IT of the government of India, has been providing dual-stack networks since 2006 and has been part of many EU-funded initiatives such as 6Choice, 6lowpan, Myfire and GEANT. ERNET's own websites, and those it hosts for other organisations, all run on dual stack. ERNET provides consultancy and turnkey project implementation to organisations migrating to IPv6, along with fulfilling their training needs. ERNET has an IPv6 central facility aimed at system and network administrators to provide hands-on training in the use and configuration of web, mail, proxy, DNS and other such servers on IPv6, spearheaded by Praveen Misra, an IPv6 evangelist. Reliance Jio has deployed and offered IPv6 services in India since September 2016, and had migrated 200 million of its Internet users to its IPv6-only mobile network by the end of 2017. Ireland eir, dual-stack, VDSL2 & FTTH Virgin Media, DS-Lite, DOCSIS Growth of IPv6 in Ireland as seen by Google. Italy Fastweb announced in 2015 the initial availability of IPv6 addresses for its residential customers. TIM, the largest Italian ISP, has offered since 2017 a basic pilot service allowing its customers to connect using IPv6. However, at the beginning of 2022 several users began to report the removal of the PPPoE profile used to provide IPv6, suggesting that TIM has abandoned IPv6 on its network. Sky Wifi provides IPv6 service. Sky Wifi started using dual stack, but will switch to an IPv6-only network using MAP-T for IPv4 connectivity. Dimensione provides IPv6 by assigning a /48 via DHCPv6 Prefix Delegation with IPv4 in dual stack. Pianeta Fibra provides IPv6 by assigning a /56 via DHCPv6 Prefix Delegation with IPv4 in dual stack. Navigabene provides IPv6 by assigning a /56 via DHCPv6 Prefix Delegation with IPv4 in dual stack. Iliad provides IPv6 by assigning a /60 via DHCPv6 Prefix Delegation with MAP-E for IPv4 connectivity. Aruba.it Fibra provides IPv6 by assigning a /56 via IPv6 over PPPoE with IPv4 in dual stack. Spadhausen provides IPv6 by assigning a /60 via DHCPv6 Prefix Delegation with IPv4 in dual stack. Convergenze provides IPv6 by assigning a /64 via DHCPv6 Prefix Delegation with IPv4 in dual stack. Ehiweb provides IPv6 by assigning a /64 via DHCPv6 Prefix Delegation with IPv4 in dual stack. According to Google's statistics, Italy had an IPv6 adoption rate of 5.29% by January 2022. Japan Telecommunications company NTT announced itself as the world's first ISP to offer public availability of IPv6 services, in March 2000. NTT's NGN allows for native IPv6-over-Ethernet connections to various ISPs. Some ISPs provide the option to use IPv6 transition mechanisms such as DS-Lite or MAP-E as an alternative to IPv4 PPPoE. By March 2021, 80% of NTT NGN users had IPv6 internet access through an ISP operating on the NGN. NTT Docomo, the largest cell phone operator in Japan, started providing IPv6 dual-stack service to devices sold since summer 2017. In June 2021, it announced that it would transition to a NAT64/DNS64-based single-stack IPv6 network starting in spring 2022. According to Google's statistics, Japan had an IPv6 adoption rate of 38.46% by April 2021. 
Lebanon Telecommunications company Ogero has enabled IPv6 support for DSL users and for private operators since July 2018. Lithuania The LITNET academic and research network has supported IPv6 since 2001. Most commercial ISPs have not publicly deployed IPv6 yet. Luxembourg RESTENA, the national research and education network, has been running IPv6 for a number of years. It is connected to the European GEANT2 network. In addition, it runs one of the country's Internet exchanges, which supports IPv6 peering. RESTENA also runs the .lu top-level domain, which also supports IPv6. P&T Luxembourg, the main telecom and Internet service provider, has had production-quality IPv6 connectivity since January 2009, with the first professional customers connected as of September 2009. Deployment of IPv6 to residential customers was expected to take place in 2010. According to Google's statistics, Luxembourg reached an IPv6 adoption rate of 36% by July 2020. Netherlands SURFnet, maintainer of the Dutch academic network SURFnet, introduced IPv6 to its network in 1997, initially using IPv6-to-IPv4 tunnels. Its backbone runs entirely dual-stack, supporting both native IPv4 and IPv6 for most of its users. XS4All is a major Dutch ISP. In 2002 XS4All became the first Dutch broadband provider to introduce IPv6 to its network, though only experimentally. In May 2009 the provider provided the first native IPv6 DSL connections. As of August 2010 native IPv6 DSL connections became available to almost all their customers. Since June 2012 native IPv6 has been enabled by default for all new customers. Business-oriented Internet provider BIT BV has been providing IPv6 to all its customers (DSL, FTTH, colocated) since 2004. SixXS had two private Dutch founders and partnered with IPv6 Internet service providers in many countries to provide IPv6 connectivity via IP tunnels to users worldwide, beginning in 2000. It started out as IPng.nl with a predominantly Dutch user base and reorganized as SixXS to be able to reach users internationally and be diversified in ISP support. SixXS also provided various other related services and software which contributed significantly to IPv6 adoption and operation globally. It ceased operation on 6 June 2017. Business ISP Introweb provides an IPv6-only 8 Mbit/s ADSL connection for 6 euro per month to 100 customers as a pilot, both for companies to learn how to adapt to IPv6 and for Introweb itself to gain experience operating a fully IPv6-enabled network. Signet was the first ISP in the country to provide IPv6 connectivity together with IPv4 on multiple national fiber networks (Eurofiber, Glasvezel Eindhoven, BRE, Glasnet Veghel, Ziggo, and Fiber Port). Most Dutch hosting companies, including the biggest one, Leaseweb, support IPv6, but customers by default get only an IPv4 address. Several government sites (such as Rijksoverheid.nl) are available via IPv6. On July 13, 2010, native IPv6 over UMTS/GPRS was successfully tested in Belgium and the Netherlands within a vehicle platform as an Intelligent transportation system solution. The test was performed both in GSM and in tethering mode using a Nokia smartphone. This test was performed by Logica Netherlands within the SPITS project, in cooperation with Mobistar Belgium. In 2018 KPN started issuing address blocks to its business clients for a one-time fee. T-Mobile does not yet have plans to deploy IPv6. 
New Zealand Surveys conducted by the New Zealand IPv6 Task Force indicated that awareness of IPv6 had reached a near-universal level among New Zealand's large public- and private-sector organisations, with adoption mostly occurring as part of normal network refresh cycles. Most of New Zealand's ISP and carrier community have a test environment for IPv6 and many have started bringing IPv6 products and services on-stream. An increasing number of New Zealand government websites are available over IPv6, including those of the Ministry of Defence (New Zealand), Ministry for Primary Industries (New Zealand) and the Department of Internal Affairs. Massey University has enabled IPv6 on its border and core campus routers. Its central network services, including DNS, external email and NTP, are also enabled. Massey's main website is IPv6-enabled, and remote login to some servers and network equipment also supports IPv6 for systems administration and networking staff. IPv6 has been enabled on 15 websites hosted at Tauranga City Council (TCC). Changes to equipment on the council's internal LAN have also been made to enable IPv6. Some internal networks across the organisation have been enabled for IPv6, and dual-stack technology is being used to enable both IPv4 and IPv6 use. A number of internal servers and client devices communicate via IPv6, and a Teredo relay and a 6to4 relay ensure users of these two transition technologies are well served when accessing IPv6 addresses. The University of Auckland IT Services team has partially deployed IPv6, in collaboration with the Science Faculty and the Computer Science Department. It has IPv6 connectivity via KAREN and its commercial ISP. Computer Science is fully dual-stacked; IPv6 has been used in undergraduate laboratory assignments and for post-graduate projects. KAREN, New Zealand's R&E network, is an IPv6-native network and has provided IPv6 as a standard service offering to its members since 2006. Auckland-based ISP WorldxChange Communications has had dual-stack since 2008. It has started providing residential customers with dual (IPv4 and IPv6) service using DHCPv6, on a trial basis. Government Technology Services, a business group of the Department of Internal Affairs (DIA), has an IPv6 website as a proof of concept to demonstrate how New Zealand government websites can be made accessible to the IPv6 Internet. South Island-based Internet service provider Snap Internet provides native IPv6 connectivity for all its customers. Its network is fully IPv6-enabled, with the IPv6 service running alongside Snap's normal IPv4 connectivity. Palmerston North-based ISP Inspire Net has had native IPv6 transit since late 2009. Internet service provider DTS's transit, managed and hosting services are fully IPv6-capable. Trans-Tasman service provider Vocus Communications offers full dual-stack IP transit services and also supports IPv6 transport on its private IP WAN service in NZ. Philippines The government is in the process of upgrading its facilities. Globe Telecom has already set in motion the transition of its core IP network to IPv6, noting that it is fully prepared even as the Internet runs out of IPv4 addresses. Globe claims it is the first local telecommunications company to test IPv6 with the Department of Science and Technology (Philippines). In some cases, such as test networks or test users, IPv6 or both protocols may be present. Poland The Polish national research and education network began an IPv6 trial period in 2002. 
Currently, native IPv6 connectivity is available to numerous educational and private clients connected via citywide networks operated by local universities. The Polish Internet Exchange, a commercial and carrier-neutral Internet traffic exchange point, has facilitated IPv6 peering between numerous operators since 2008. Orange Polska In March 2013, the mobile operator launched mobile access to the Internet via the IPv6 protocol for its subscribers. In September 2013, the Sony Xperia Z1 became the first IPv6-compliant device commercially available from Orange Poland. Romania As of June 2012, the ISP RCS&RDS offers dual-stack IPv4/IPv6 PPPoE services to home users using modern versions of Microsoft Windows, Mac OS X, Linux and other IPv6-ready devices. More than 1 million RCS & RDS residential customers can now use native IPv6 on a dual-stack PPPoE connection, and 16% already do. Russian Federation ER-Telecom has offered native IPv6 to customers since 10 October 2013, using dual-stack PPPoE and DHCPv6 Prefix Delegation. MTS has provided native IPv6 for mobile customers since April 2017. Serbia The Supernova network started an IPv6 beta test in December 2021. Sri Lanka The LEARN network has deployed IPv6 since 2008. Dialog Axiata has provided native IPv6 to customers by default since 2018. SLT has provided native IPv6 to fixed LTE customers since 2018. Mobitel has provided native IPv6 since 2020. Sudan The Sudanese IPv6 Task Force (SDv6TF) was formed in 2010 to follow the implementation of the IPv6 migration plan (2011–2015). By November 2012, all telecom operators were becoming IPv6-enabled; this was tested for the first time at the AFRINIC-17 meeting held in Khartoum. SudREN (Sudanese Research and Education Network) was the first ISP to provide native IPv6 connectivity to its member institutions. By August 2014, SudREN.edu.sd was fully IPv6-enabled, and it received two certifications from the IPv6 Forum, the WWW and ISP Enabled logos. Sweden Bahnhof offers IPv6 to businesses. Tele2 has begun an IPv6 rollout to mobile customers (both consumers and businesses). Tre rolled out IPv6 in 2018. Com Hem offers IPv6 to businesses and to consumers in some locations. Operators offering native IPv6 access for business clients and collocation customers include Tele2 and Phonera. Switzerland The Data Center Light is the first commercial IPv6-only data center in Switzerland. Swisscom offers IPv6 over 6rd to private customers. Init7 offers native IPv6 on all of its offerings. iway offers native IPv6 on customer lines. Sunrise provides IPv6 for some of its products; private customers can enable 6rd. UPC Switzerland offers native IPv6 with DS-Lite to new customers. Tunisia Tunisia started deploying IPv6 in 2010. In 2011, ATI (the Tunisian Internet Agency) obtained a new IPv6 block from AFRINIC (2c0f:fab0::/28). In 2013–2015, Gnet (Global Net) and CIMSP (the computing department of the Ministry of Health) received IPv6 prefixes from AFRINIC. An IPv6 tunnel was deployed between ATI and HE (Hurricane Electric). In 2016, CCK (Centre de Calcul El Khawarizmi) obtained its own IPv6 /32 block from AFRINIC. In 2016, ISET Charguia (Higher Institute of Technologies in Tunisia) deployed its IPv6 network as an end user. Ukraine Some IPv6 implementation has taken place. United Kingdom JANET, the UK's education and research network, introduced IPv6 unicast support into its service level agreement in 2008. Several major UK universities and colleges (e.g., Cambridge and Esher College) upgraded their campus routing infrastructure to provide IPv6 unicast support to their users. 
Andrews & Arnold launched a native (non-tunneled) IPv6 service in October 2005 and offer IPv6 by default. The UK government started to replace much of its Government Secure Intranet (a wide area network) with a new Public Services Network (PSN) in late 2009. The aspiration was to deploy using IPv6 and support IPv4. The implementation is based on IPv4 but suppliers must be capable of supporting IPv6. BT Group announced in August 2016 that most of its customers can expect IPv6 connectivity in early 2017. Zen Internet enabled IPv6 for all customers in December 2015, after a successful trial earlier that year. Spitfire Network Services offer native dual-stack IPv6 on broadband and Ethernet services. Sky Broadband enabled IPv6 for a majority of their customers in the first half of 2016. EE Limited enabled IPv6 on the Radio access network for most consumers by Autumn 2018. Their home broadband services currently do not support IPv6 however. Aquiss enabled IPv6 for their broadband customers in 2015. In February 2020 they completed their IPv6 rollout for services and systems i.e. web hosting platform, VOIP and other services. According to Google's statistics, United Kingdom has reached an IPv6 adoption rate of 34.5% as of January 2021. United States In the United States the majority of smartphones use IPv6, but only a small percent of computers and tablets use IPv6. As of April 2021, 44.8% of Google users in the US use IPv6. Further countries As of January 2021 Malaysia 50.06% IPv6 adoption Vietnam 44.6% IPv6 adoption Mexico 37.89% IPv6 adoption Portugal 37.12% IPv6 adoption Events World IPv6 Day The Internet Society promoted June 8, 2011, as "World IPv6 Day". The event was described as a "test drive" for full IPv6 rollouts. World IPv6 Launch The Internet Society declared June 6, 2012, to be the date for "World IPv6 Launch", with participating major websites enabling IPv6 permanently, participating ISPs offering IPv6 connectivity, and participating router manufacturers offering devices enabled for IPv6 by default. See also IPv6 brokenness and DNS whitelisting References External links World IPV6 Launch Measurements Hurricane Electric - Global IPv6 Deployment Progress Report IPv6 Technological change
40928093
https://en.wikipedia.org/wiki/Infoblox
Infoblox
Infoblox, formerly listed on the NYSE under the ticker BLOX, is a privately held IT automation and security company based in California's Silicon Valley. The company focuses on managing and identifying devices connected to networks—specifically for the Domain Name System (DNS), Dynamic Host Configuration Protocol (DHCP), and IP address management (collectively, "DDI"). According to Gartner, by 2015 the Infoblox market share was 49.9 percent of the $533 million enterprise DDI market. In June 2016, IDC, a market intelligence firm, named Infoblox as the market share leader in DNS, DHCP and IP address management. No other competitor had a market share greater than 15 percent. History Infoblox was founded in 1999 in Chicago, Illinois, by Stuart Bailey, who was at the University of Illinois. The company moved to Santa Clara, California, in 2003. In 2007, Infoblox acquired French startup Ipanto, which led to the development of IPAM Win Connect appliances. In 2010, Infoblox acquired Netcordia, which provided technologies for network task automation. Later in the same year, the company integrated Infoblox IP address management technology with Netcordia's network configuration and change management technologies. As virtualization and cloud computing became increasingly prevalent in data centers, automation was marketed using the term distributed virtual infrastructure. The company added DNS security products, and it also supplied hardware appliances to host its software. Infoblox joined commercial and government groups and independent research efforts, and made Tapestry, its open-source software, generally available. Network management became increasingly crucial after a sharp rise in computer crime, especially attacks that exploit DNS servers, such as DNS spoofing and distributed denial-of-service attacks. In 2012, 7.8 million new malware threats emerged. Mobile threats grew by 1,000 percent, and 865 successful breaches compromised 174 million records. DNS servers in particular are vulnerable to hacking, and are often used in destructive attacks such as the Syrian Electronic Army (SEA) attack that hit The New York Times and Twitter in 2013. In December 2013, the company had an estimated 6,000 customers, including government organizations as well as businesses. In February 2016, the company acquired IID, a cyberthreat intelligence company. On September 19, 2016, Vista Equity Partners announced its intent to purchase Infoblox for approximately $1.6 billion. The acquisition closed in November. In June 2017, Infoblox announced an expansion of its Tacoma, Washington, office, which focuses on cybersecurity research, threat intelligence and engineering. In 2019, Infoblox introduced new updates to its Network Identity Operating System (NIOS) platform, including support for Google Cloud Platform and the option for single sign-on. In December 2019, Infoblox was included in the list of the Top 25 Cybersecurity Companies of 2019 by The Software Report. On September 8, 2020, Infoblox announced a significant investment from Warburg Pincus. Following necessary approvals, the investment closed on December 1, 2020, with Vista Equity Partners and Warburg Pincus holding 50% ownership in Infoblox. Financial Results Infoblox received $80 million in five rounds of financing (2000, 2001, 2003, 2004 and 2005). The company's main investor was Sequoia Capital. The company had its initial public offering on April 20, 2012. Shares were listed on the New York Stock Exchange under the symbol BLOX. The stock price advanced 40 percent in the first day of trading. 
After adding 250 employees that year, Infoblox moved to Santa Clara. Earnings leading up through the fourth quarter of fiscal 2013 showed financial as well as physical growth. Total net revenue for the fourth quarter of fiscal 2013 was $63.1 million, an increase of 40 percent on a year-over-year basis. Total net revenue for fiscal 2013 was a record $225.0 million, an increase of 33 percent compared with total net revenue of $169.2 million in fiscal 2012. First-quarter results for 2014 fell short of expectations, causing a drop in share price. The Infoblox market share in DDI rose to 49.9 percent in 2015 from 46.7 percent in 2014, while the overall DDI market grew 18.3 percent in the same period to $533 million. No other competitor had a market share greater than 15 percent. Products A sample of Infoblox products, platforms and network services includes: Actionable Network Intelligence Platform Private Cloud/Virtualization Public/Hybrid Cloud Reporting and Analytics Network Insight DNS, DHCP and IPAM (DDI) IPAM for Microsoft DNS Appliance DNS Firewall Threat Insight Advanced DNS Security ActiveTrust NetMRI References Companies based in Santa Clara, California Technology companies based in the San Francisco Bay Area Companies formerly listed on the New York Stock Exchange Information technology management 1999 establishments in Illinois Computer companies established in 1999 2012 initial public offerings Private equity portfolio companies 2016 mergers and acquisitions
8667910
https://en.wikipedia.org/wiki/PMAC%20%28cryptography%29
PMAC (cryptography)
PMAC, which stands for parallelizable MAC, is a message authentication code algorithm. It was created by Phillip Rogaway. PMAC is a method of taking a block cipher and creating an efficient message authentication code that is reducible in security to the underlying block cipher. PMAC is similar in functionality to the OMAC algorithm. Patents PMAC is no longer patented and can be used royalty-free. It was originally patented by Phillip Rogaway, but he has since abandoned his patent filings. References External links Phil Rogaway's page on PMAC Changhoon Lee, Jongsung Kim, Jaechul Sung, Seokhie Hong, Sangjin Lee. "Forgery and Key Recovery Attacks on PMAC and Mitchell's TMAC Variant", 2006. (ps) Rust implementation Message authentication codes
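The practical appeal of PMAC over chained constructions such as CBC-MAC is that its per-block cipher calls are independent and can run in parallel. The toy sketch below illustrates only that parallelizable structure: it substitutes HMAC-SHA256 for the block-cipher-based pseudorandom function, omits PMAC's secret offsets and final-block handling, and is neither secure for real use nor interoperable with actual PMAC.

```python
# Toy illustration of a *parallelizable* MAC structure (NOT real PMAC).
# Each block is processed independently with its index mixed in, the results
# are XOR-combined, and a final PRF call produces the tag. Real PMAC instead
# XORs each block with a secret, index-dependent offset before one block-cipher
# call per block, then enciphers the checksum.
import hashlib
import hmac
from functools import reduce

def toy_parallel_mac(key: bytes, message: bytes, block_size: int = 16) -> bytes:
    blocks = [message[i:i + block_size] for i in range(0, len(message), block_size)] or [b""]
    prf = lambda index, block: hmac.new(
        key, index.to_bytes(8, "big") + block, hashlib.sha256
    ).digest()
    # This loop is trivially parallelizable: no per-block result depends on
    # any other block's result.
    partials = [prf(i, b) for i, b in enumerate(blocks)]
    checksum = reduce(lambda x, y: bytes(a ^ b for a, b in zip(x, y)), partials)
    return hmac.new(key, b"final" + checksum, hashlib.sha256).digest()

tag = toy_parallel_mac(b"sixteen byte key", b"attack at dawn")
```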
16756445
https://en.wikipedia.org/wiki/List%20of%20NJCAA%20Division%20I%20schools
List of NJCAA Division I schools
There are 221 Division I teams in the National Junior College Athletic Association (NJCAA) that play in 24 different regions. Members Alabama Bevill State Community College Bears in Sumiton Bishop State Community College Wildcats in Mobile Calhoun Community College Warhawks in Tanner Chattahoochee Valley Community College Pirates in Phenix City Central Alabama Community College Trojans in Alexander City Coastal Alabama Community College Sun Chiefs in Bay Minette Coastal Alabama Community College Brewton Warhawks in Brewton Coastal Alabama Community College Monroeville Eagles in Monroeville Enterprise-Ozark Community College Boll Weevils in Enterprise Gadsden State Community College Cardinals in Gadsden Lawson State Community College Cougars in Birmingham Lurleen B. Wallace Community College Saints in Andalusia Marion Military Institute Tigers in Marion Shelton State Community College Buccaneers in Tuscaloosa Snead State Community College Parsons in Boaz Southern Union State Community College Bison in Wadley Wallace Community College Governors in Dothan Wallace Community College Selma Patriots in Selma Wallace State Community College Lions in Hanceville Arizona Arizona Western College Matadors in Yuma Central Arizona College Vaqueros/Vaqueras in Coolidge Cochise College Apaches in Douglas Eastern Arizona College Gila Monsters in Thatcher Mesa Community College Thunderbird in Mesa Pima Community College Aztecs in Tucson Yavapai College Roughriders in Prescott Arkansas Arkansas Baptist College Buffaloes in Little Rock Colorado Colorado Northwestern Community College Spartan in Rangely Lamar Community College Runnin' Lopes in Lamar Northeastern Junior College Plainsmen in Sterling Otero College Rattlers in La Junta Trinidad State College Trojans in Trinidad Florida ASA College Silver Storm in Miami Broward College Seahawks in Fort Lauderdale Chipola College Indians in Marianna College of Central Florida Patriots in Ocala Daytona State College Falcons in Daytona Beach Eastern Florida State College Titans in Brevard County (formerly known as Brevard Community College) Florida SouthWestern State College Buccaneers in Fort Myers (formerly known as Edison Community College) Florida State College at Jacksonville Blue Wave in Jacksonville Gulf Coast State College Commodores in Panama City Hillsborough Community College Hawks in Tampa Indian River State College Pioneers in Fort Pierce Lake–Sumter State College Lakehawks in Leesburg Miami-Dade College Sharks in Miami North Florida Community College Sentinels in Madison Northwest Florida State College Raiders in Niceville Palm Beach State College Panthers in Lake Worth Pensacola State College Pirates in Pensacola Polk State College Eagles in Winter Haven Santa Fe College Saints in Gainesville Seminole State College of Florida Raiders in Sanford South Florida State College Panthers in Avon Park State College of Florida, Manatee–Sarasota Manatees in Bradenton (formerly known as Manatee Community College) St. Johns River State College Vikings in Palatka St. Petersburg College Titans in St. 
Petersburg Tallahassee Community College Eagles in Tallahassee Georgia Abraham Baldwin College Golden Stallions in Tifton Albany Technical College Titans in Albany Atlanta Metropolitan College Trailblazers in Atlanta Chattahoochee Technical College Golden Eagles in Marietta East Georgia College Bobcats in Swainsboro Georgia Highlands College Chargers in Rome Georgia Military College Bulldogs in Milledgeville Gordon State College Highlanders in Barnesville South Georgia Technical College Jets in Americus Waycross College Swamp Foxes in Waycross Idaho College of Southern Idaho Eagles in Twin Falls Illinois Kishwaukee College Kougars in Malta Olive-Harvey College Purple Panthers in Chicago Triton College Trojans in River Grove South Suburban College Bulldogs in South Holland John A. Logan College Volunteers in Carterville Kaskaskia College Blue Devils (Boys)/Blue Angels (Girls) in Centralia Lake Land College Lakers in Mattoon Lincoln Trail College Statesman in Robinson Olney Central College Blue Knights in Olney Shawnee Community College Saints in Ullin Southeastern Illinois College Falcons in Harrisburg Southwestern Illinois College Blue Storm in Belleville Wabash Valley College Warriors in Mount Carmel Indiana Vincennes University Trailblazers in Vincennes Iowa Ellsworth Community College Panthers in Iowa Falls Hawkeye Community College Redtails in Waterloo Indian Hills Community College-Ottumwa Warriors in Ottumwa (Falcons in Baseball) Iowa Central Community College Tritons in Fort Dodge Iowa Lakes Community College Lakers in Estherville Iowa Western Community College Reivers in Council Bluffs Marshalltown Community College Tigers in Marshalltown Kansas Barton County Community College Cougars in Great Bend Butler County Community College (Kansas) Grizzly Bears in El Dorado Cloud County Community College Thunderbirds/Lady Thunderbirds in Concordia Coffeyville Community College Ravens in Coffeyville Colby Community College Trojans in Colby Cowley County Community College Tigers in Arkansas City Dodge City Community College Conquistadors in Dodge City Fort Scott Community College Greyhounds in Fort Scott Garden City Community College Broncbusters in Garden City Hutchinson Community College Blue Dragons in Hutchinson Independence Community College Pirates in Independence Kansas City Kansas Community College Blue Devils in Kansas City Labette Community College Cardinals in Parson Neosho County Community College Panthers in Chanute Northwest Kansas Technical College Mavericks in Goodland Pratt Community College Beavers in Pratt Seward County Community College Saints in Liberal Kentucky Simmons College of Kentucky Panthers in Louisville Louisiana Baton Rouge Community College Bears in Baton Rouge Bossier Parish Community College Cavaliers in Bossier Delgado Community College Dolphins in New Orleans Southern University at Shreveport Jaguars in Shreveport Maryland Allegany College of Maryland Trojans in Cumberland Carroll Community College Lynx in Westminster Chesapeake College Skipjacks in Wye Mills Garrett College Lakers in McHenry Frederick Community College Cougars in Frederick Hagerstown Community College Hawks in Hagerstown Montgomery College Raptors in Germantown, Rockville, and Takoma Park/Silver Spring Mississippi Coahoma Community College Tigers in Clarksdale Copiah-Lincoln Community College Wolves in Wesson East Central Community College Warriors in Decatur East Mississippi Community College Lions in Scooba Hinds Community College Eagles in Raymond Holmes Community College Bulldogs in 
Goodman Itawamba Community College Indians in Fulton Jones County Junior College Bobcats in Ellisville Meridian Community College Eagles in Meridian Mississippi Delta Community College Trojans in Moorhead Mississippi Gulf Coast Community College Bulldogs in Perkinston Northeast Mississippi Community College Tigers in Booneville Northwest Mississippi Community College Rangers in Senatobia Pearl River Community College Wildcats in Poplarville Southwest Mississippi Community College Bears in Summit Missouri Crowder College Roughriders in Neosho Mineral Area College Cardinals in Park Hills Missouri State University-West Plains Grizzlies in West Plains Moberly Area Community College Greyhounds in Moberly State Fair Community College Roadrunners in Sedalia Three Rivers Community College Raiders in Poplar Bluff St. Charles Community College Cougars in Saint Charles Montana Dawson Community College Buccaneers in Glendive Little Big Horn College Rams in Crow Agency Miles Community College Pioneers in Miles City Nebraska McCook Community College Indians in McCook North Platte Community College Knights in North Platte Northeast Community College Hawks in Norfolk Western Nebraska Community College Cougars in Scotts Bluff Nevada College of Southern Nevada Coyotes in Henderson Western Nevada College Wildcats in Carson City New Mexico New Mexico Junior College Thunderbirds in Hobbs New Mexico Military Institute Broncos in Roswell New York Globe Institute of Technology Knights in New York City Monroe College Mustangs in Bronx ASA College Avengers in Brooklyn North Carolina Brunswick Community College Dolphins in Bolivia Cape Fear Community College Sea Devils in Wilmington Guilford Technical Community College Titans in Jamestown Lenoir Community College Lancers in Kinston Louisburg College Hurricanes in Louisburg Pitt Community College Bulldogs in Winterville Roanoke-Chowan Community College Waves in Ahoskie Rockingham Community College Eagles in Wentworth Surry Community College Knights in Dobson Wake Technical Community College Eagles in Raleigh Wilkes Community College Cougars in Wilkesboro North Dakota Lake Region State College Royals in Devils Lake North Dakota State College of Science Wildcats in Wahpeton Williston State College Tetons in Williston Ohio Hocking College Eagles in Nelsonville Oklahoma Carl Albert State College Vikings in Poteau Connors State College Cowboys in Conner Eastern Oklahoma State College Mountaineers in Wilburton Murray State College Aggies in Tishomingo Northeastern Oklahoma A&M College Golden Norseman in Miami Northern Oklahoma College Enid Jets in Enid Northern Oklahoma College-Tonkawa Mavericks in Tonkawa Redlands Community College Cougars in El Reno Seminole State College Trojans in Seminole Western Oklahoma State College Pioneers in Altus Pennsylvania Lackawanna College Falcons in Scranton South Carolina Aiken Technical College Knights in Aiken Clinton Junior College Golden Bears in Rock Hill Denmark Technical College Panthers in Denmark Spartanburg Methodist College Pioneers in Spartanburg University of South Carolina Lancaster Lancers in Lancaster University of South Carolina Salkehatchie Indians in Allendale Tennessee Chattanooga State Technical Community College Tigers in Chattanooga Cleveland State Community College Cougars in Cleveland Columbia State Community College Chargers in Columbia Dyersburg State Community College Eagles in Dyersburg Jackson State Community College Green Jays in Jackson Motlow State Community College Bucks in Lynchburg Roane State 
Community College Raider in Harriman Southwest Tennessee Community College Salquis in Memphis Volunteer State Community College Pioneers in Gallatin Walters State Community College Senators in Morristown Texas Angelina College Roadrunners in Lufkin Blinn College Buccaneers in Brenham Clarendon College Bulldogs in Clarendon Cisco College Wranglers in Cisco Coastal Bend College Cougars in Beeville Collin College Cougars in Collin County Frank Phillips College Plainsmen in Borger Grayson County College Vikings in Denison Hill College Rebels in Hillsboro Howard College Hawks in Big Spring Jacksonville College Jaguars/Lady Jaguars in Jacksonville Kilgore College Rangers in Kilgore Lee College Runnin' Rebels in Baytown McLennan Community College Highlanders in Waco Midland College Chaparrals in Midland Navarro College Bulldogs in Corsicana Northeast Texas Community College Eagles in Mount Pleasant Odessa College Wranglers in Odessa Panola College Ponies in Carthage Paris Junior College Dragons in Paris Ranger College Rangers in Ranger San Jacinto College-Central Gators in Pasadena South Plains College Texans in Lubbock Southwestern Christian College Rams in Terrell Temple College Leopards in Temple Trinity Valley Community College Cardinals in Athens Tyler Junior College Apaches in Tyler Western Texas College Westerners in Snyder Wharton County Junior College Pioneers in Wharton Utah Salt Lake Community College Bruin Bears in Salt Lake Snow College Badgers in Ephraim Utah State University Eastern Eagles in Price West Virginia Potomac State College of West Virginia University Catamounts in Keyser Wyoming Casper College Thunderbirds in Casper Central Wyoming College Rustlers in Riverton Eastern Wyoming College Lancers in Torrington Laramie County Community College Golden Eagles in Cheyenne Northwest College Trappers in Powell Sheridan College Generals in Gillette Western Wyoming Community College Mustangs in Rock Springs Note The schools listed above may not compete in Division I in all sports. For instance, many schools in Kansas compete in Division I basketball while competing in Division II in softball and volleyball. Highland (Kan.) and Johnson County compete in Division I baseball but have Division II teams in all other sports (except Highland football because NJCAA football is not split into divisions). See also List of NJCAA Division II schools List of NJCAA Division III schools List of community college football programs List of USCAA institutions List of NCCAA institutions List of NAIA institutions List of NCAA Division I institutions List of NCAA Division II institutions List of NCAA Division III institutions References Sources NJCAA Kansas Jayhawk Community College Conference website Division 1 NJCAA athletics
35683261
https://en.wikipedia.org/wiki/Anti-Subversion%20Software
Anti-Subversion Software
Software subversion is the process of making software perform unintended actions, either by tampering with program code or by altering its behavior in some other fashion. For example, code tampering could be used to change program code to load malicious rules or heuristics; SQL injection is a form of subversion aimed at data corruption or theft; and buffer overflows are a form of subversion used to gain unauthorised access. These attacks are examples of computer hacking. Anti-Subversion Software detects subversion and attempts to stop the effects of the attack. Software applications are vulnerable to the effects of subversion throughout their lifecycle from development to deployment, but particularly in operation and maintenance. Anti-subversion protection can be accomplished in both a static and a dynamic manner:
Static anti-subversion is performed during the construction of the code. The code is statically tested and verified against various attack types by examining the program source code. Examples of static anti-subversion include security auditing, code verification, and fuzzing. Static anti-subversion is generally seen as good coding practice, and is deemed necessary in some compliance regimes. However, static solutions cannot prevent all types of subversion attacks.
Dynamic anti-subversion is performed during code execution. The code is dynamically protected against subversion by continuously checking for unintended program behaviours. Examples of dynamic anti-subversion include application firewalls, security wrappers, and protection embedded in the software.
Software applications running on desktops, corporate servers, mobile devices and embedded devices are all at risk from subversion.
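As a minimal illustration of "protection embedded in the software", the sketch below performs a runtime self-integrity check: it hashes the program's own source file and compares the digest against a value fixed at build time, refusing to continue if the code appears to have been tampered with. The EXPECTED_SHA256 constant is a placeholder for this example; a real product would compute it during the build, store it somewhere harder to alter, and combine such checks with the static and dynamic measures described above.

# Sketch of a dynamic anti-subversion check embedded in the software itself.
# EXPECTED_SHA256 is a placeholder; in practice it would be generated at build
# time and protected (signed, or stored outside the shipped artifact).
import hashlib
import sys

EXPECTED_SHA256 = "0" * 64  # placeholder digest for illustration only

def source_digest(path):
    # Hash the program's own code so any tampering changes the digest.
    with open(path, "rb") as handle:
        return hashlib.sha256(handle.read()).hexdigest()

def verify_self():
    if source_digest(__file__) != EXPECTED_SHA256:
        # A real application might log the event, alert an operator, or
        # disable sensitive functionality instead of exiting outright.
        sys.exit("integrity check failed: program code may have been subverted")

if __name__ == "__main__":
    print("current digest:", source_digest(__file__))
    verify_self()
    print("integrity check passed")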
40324370
https://en.wikipedia.org/wiki/Jumping-Jupiter%20scenario
Jumping-Jupiter scenario
The jumping-Jupiter scenario specifies an evolution of giant-planet migration described by the Nice model, in which an ice giant (Uranus, Neptune, or an additional Neptune-mass planet) is scattered inward by Saturn and outward by Jupiter, causing their semi-major axes to jump, quickly separating their orbits. The jumping-Jupiter scenario was proposed by Ramon Brasser, Alessandro Morbidelli, Rodney Gomes, Kleomenis Tsiganis, and Harold Levison after their studies revealed that the smooth divergent migration of Jupiter and Saturn resulted in an inner Solar System significantly different from the current Solar System. During this migration secular resonances swept through the inner Solar System exciting the orbits of the terrestrial planets and the asteroids, leaving the planets' orbits too eccentric, and the asteroid belt with too many high-inclination objects. The jumps in the semi-major axes of Jupiter and Saturn described in the jumping-Jupiter scenario can allow these resonances to quickly cross the inner Solar System without altering orbits excessively, although the terrestrial planets remain sensitive to its passage. The jumping-Jupiter scenario also results in a number of other differences with the original Nice model. The fraction of lunar impactors from the core of the asteroid belt during the Late Heavy Bombardment is significantly reduced, most of the Jupiter trojans are captured during Jupiter's encounters with the ice giant, as are Jupiter's irregular satellites. In the jumping-Jupiter scenario, the likelihood of preserving four giant planets on orbits resembling their current ones appears to increase if the early Solar System originally contained an additional ice giant, which was later ejected by Jupiter into interstellar space. However, this remains an atypical result, as is the preservation of the current orbits of the terrestrial planets. Background Original Nice model In the original Nice model a resonance crossing results in a dynamical instability that rapidly alters the orbits of the giant planets. The original Nice model begins with the giant planets in a compact configuration with nearly circular orbits. Initially, interactions with planetesimals originating in an outer disk drive a slow divergent migration of the giant planets. This planetesimal-driven migration continues until Jupiter and Saturn cross their mutual 2:1 resonance. The resonance crossing excites the eccentricities of Jupiter and Saturn. The increased eccentricities create perturbations on Uranus and Neptune, increasing their eccentricities until the system becomes chaotic and orbits begin to intersect. Gravitational encounters between the planets then scatter Uranus and Neptune outward into the planetesimal disk. The disk is disrupted, scattering many of the planetesimals onto planet-crossing orbits. A rapid phase of divergent migration of the giant planets is initiated and continues until the disk is depleted. Dynamical friction during this phase dampens the eccentricities of Uranus and Neptune stabilizing the system. In numerical simulations of the original Nice model the final orbits of the giant planets are similar to the current Solar System. Resonant planetary orbits Later versions of the Nice model begin with the giant planets in a series of resonances. This change reflects some hydrodynamic models of the early Solar System. In these models, interactions between the giant planets and the gas disk result in the giant planets migrating toward the central star, in some cases becoming hot Jupiters. 
However, in a multiple-planet system, this inward migration may be halted or reversed if a more rapidly migrating smaller planet is captured in an outer orbital resonance. The Grand Tack hypothesis, which posits that Jupiter's migration is reversed at 1.5 AU following the capture of Saturn in a resonance, is an example of this type of orbital evolution. The resonance in which Saturn is captured, a 3:2 or a 2:1 resonance, and the extent of the outward migration (if any) depends on the physical properties of the gas disk and the amount of gas accreted by the planets. The capture of Uranus and Neptune into further resonances during or following this outward migration results in a quadruply resonant system, with several stable combinations having been identified. Following the dissipation of the gas disk, the quadruple resonance is eventually broken due to interactions with planetesimals from the outer disk. Evolution from this point resembles the original Nice model with an instability beginning either shortly after the quadruple resonance is broken or after a delay during which planetesimal-driven migration drives the planets across a different resonance. However, there is no slow approach to the 2:1 resonance as Jupiter and Saturn either begin in this resonance or cross it rapidly during the instability. Late escape from resonance The stirring of the outer disk by massive planetesimals can trigger a late instability in a multi-resonant planetary system. As the eccentricities of the planetesimals are excited by gravitational encounters with Pluto-mass objects, an inward migration of the giant planets occurs. The migration, which occurs even if there are no encounters between planetesimals and planets, is driven by a coupling between the average eccentricity of the planetesimal disk and the semi-major axes of the outer planets. Because the planets are locked in resonance, the migration also results in an increase in the eccentricity of the inner ice giant. The increased eccentricity changes the precession frequency of the inner ice giant, leading to the crossing of secular resonances. The quadruple resonance of the outer planets can be broken during one of these secular-resonance crossings. Gravitational encounters begin shortly afterward due to the close proximity of the planets in the previously resonant configuration. The timing of the instability caused by this mechanism, typically occurring several hundred million years after the dispersal of the gas disk, is fairly independent of the distance between the outer planet and the planetesimal disk. In combination with the updated initial conditions, this alternative mechanism for triggering a late instability has been called the Nice 2 model. Planetary encounters with Jupiter Encounters between Jupiter and an ice giant during the giant planet migration are required to reproduce the current Solar System. In a series of three articles Ramon Brasser, Alessandro Morbidelli, Rodney Gomes, Kleomenis Tsiganis, and Harold Levison analyzed the orbital evolution of the Solar System during giant planet migration. The first article demonstrated that encounters between an ice giant and at least one gas giant were required to reproduce the oscillations of the eccentricities of the gas giants. The other two demonstrated that if Jupiter and Saturn underwent a smooth planetesimal-driven separation of their orbits the terrestrial planets would have orbits that are too eccentric and too many of the asteroids would have orbits with large inclinations. 
They proposed that the ice giant encountered both Jupiter and Saturn, causing the rapid separation of their orbits, thereby avoiding the secular resonance sweeping responsible for the excitation of orbits in the inner Solar System. Exciting the oscillations of the eccentricities of the giant planets requires encounters between planets. Jupiter and Saturn have modest eccentricities that oscillate out of phase, with Jupiter reaching maximum eccentricity when Saturn reaches its minimum and vice versa. A smooth migration of the giant planets without resonance crossings results in very small eccentricities. Resonance crossings excite their mean eccentricities, with the 2:1 resonance crossing reproducing Jupiter's current eccentricity, but these do not generate the oscillations in their eccentricities. Recreating both requires either a combination of resonance crossings and an encounter between Saturn and an ice giant, or multiple encounters of an ice giant with one or both gas giants. During the smooth migration of the giant planets the ν5 secular resonance sweeps through the inner Solar System, exciting the eccentricities of the terrestrial planets. When planets are in a secular resonance the precessions of their orbits are synchronized, keeping their relative orientations and the average torques exerted between them fixed. The torques transfer angular momentum between the planets causing changes in their eccentricities and, if the orbits are inclined relative to one another, their inclinations. If the planets remain in or near secular resonances these changes can accumulate resulting in significant changes in eccentricity and inclination. During a ν5 secular resonance crossing this can result in the excitation of the terrestrial planet's eccentricity, with the magnitude of the increase depending on the eccentricity of Jupiter and the time spent in the secular resonance. For the original Nice model the slow approach to Jupiter's and Saturn's 2:1 resonance results in an extended interaction of the ν5 secular resonance with Mars, driving its eccentricity to levels that can destabilize the inner Solar System, potentially leading to collisions between planets or the ejection of Mars. In later versions of the Nice model Jupiter's and Saturn's divergent migration across (or from) the 2:1 resonance is more rapid and the nearby ν5 resonance crossings of Earth and Mars are brief, thus avoiding the excessive excitation of their eccentricities in some cases. Venus and Mercury, however, reach significantly higher eccentricities than are observed when the ν5 resonance later crosses their orbits. A smooth planetesimal-driven migration of the giant planets also results in an asteroid belt orbital distribution unlike that of the current asteroid belt. As it sweeps across the asteroid belt the ν16 secular resonance excites asteroid inclinations. It is followed by the ν6 secular resonance which excites the eccentricities of low-inclination asteroids. If the secular resonance sweeping occurs during a planetesimal driven migration, which has a timescale of 5 million years or longer, the remaining asteroid belt is left with a significant fraction of asteroids with inclinations greater than 20°, which are relatively rare in the current asteroid belt. The interaction of the ν6 secular resonance with the 3:1 mean-motion resonance also leaves a prominent clump in the semi-major-axis distribution that is not observed. 
The secular resonance sweeping would also leave too many high-inclination asteroids if the giant planet migration occurred early, with all of the asteroids initially in low-eccentricity, low-inclination orbits, and also if the orbits of the asteroids were excited by Jupiter's passage during the Grand Tack. Encounters between an ice giant and both Jupiter and Saturn accelerate the separation of their orbits, limiting the effects of secular resonance sweeping on the orbits of the terrestrial planets and the asteroids. To prevent the excitation of the orbits of the terrestrial planets and asteroids, the secular resonances must sweep rapidly through the inner Solar System. The small eccentricity of Venus indicates that this occurred on a timescale of less than 150,000 years, much shorter than in a planetesimal-driven migration. The secular resonance sweeping can be largely avoided, however, if the separation of Jupiter and Saturn was driven by gravitational encounters with an ice giant. These encounters must drive the Jupiter–Saturn period ratio quickly from below 2.1 to beyond 2.3, the range where the secular resonance crossings occur. This evolution of the giant planets' orbits has been named the jumping-Jupiter scenario after a similar process proposed to explain the eccentric orbits of some exoplanets.
Description
The jumping-Jupiter scenario replaces the smooth separation of Jupiter and Saturn with a series of jumps, thereby avoiding the sweeping of secular resonances through the inner Solar System as their period ratio crosses the 2.1–2.3 range. In the jumping-Jupiter scenario an ice giant is scattered inward by Saturn onto a Jupiter-crossing orbit and then scattered outward by Jupiter. Saturn's semi-major axis is increased in the first gravitational encounter and Jupiter's is reduced by the second, with the net result being an increase in their period ratio. In numerical simulations the process can be much more complex: while the trend is for Jupiter's and Saturn's orbits to separate, depending on the geometry of the encounters, individual jumps of Jupiter's and Saturn's semi-major axes can be either up or down. In addition to numerous encounters with Jupiter and Saturn, the ice giant can encounter other ice giant(s) and in some cases cross significant parts of the asteroid belt. The gravitational encounters occur over a period of 10,000–100,000 years, and end when dynamical friction with the planetesimal disk dampens the ice giant's eccentricity, raising its perihelion beyond Saturn's orbit, or when the ice giant is ejected from the Solar System. A jumping-Jupiter scenario occurs in a subset of numerical simulations of the Nice model, including some done for the original Nice model paper. The chance of Saturn scattering an ice giant onto a Jupiter-crossing orbit increases when the initial Saturn–ice giant distance is less than 3 AU, and with the 35-Earth-mass planetesimal belt used in the original Nice model this scattering typically results in the ejection of the ice giant.
Fifth giant planet
The frequent loss in simulations of the giant planet that encounters Jupiter has led some to propose that the early Solar System began with five giant planets. In numerical simulations of the jumping-Jupiter scenario the ice giant is often ejected following its gravitational encounters with Jupiter and Saturn, leaving planetary systems that began with four giant planets with only three.
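The 2.1–2.3 period-ratio window quoted above maps onto the planets' semi-major axes through Kepler's third law (P proportional to a to the 3/2 power), so comparatively small jumps in the semi-major axes of Jupiter and Saturn are enough to carry the ratio across it. The sketch below makes this explicit; the semi-major axes used are illustrative round numbers, not values taken from any particular simulation.

# Kepler's third law in units where G*M_sun = 1: P is proportional to a**1.5,
# so only the ratio of semi-major axes matters.
def period_ratio(a_inner_au, a_outer_au):
    return (a_outer_au / a_inner_au) ** 1.5

print(round(period_ratio(5.4, 8.90), 2))   # ~2.12, just inside the 2.1-2.3 window
print(round(period_ratio(5.3, 9.25), 2))   # ~2.31, just beyond it
print(round(period_ratio(5.2, 9.58), 2))   # ~2.50, near the current Jupiter-Saturn value of about 2.48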
Although beginning with a higher-mass planetesimal disk was found to stabilize four-planet systems, the massive disk either resulted in excess migration of Jupiter and Saturn after the encounters between an ice giant and Jupiter or prevented these encounters by damping eccentricities. This problem led David Nesvorný to investigate planetary systems beginning with five giant planets. After conducting thousands of simulations he reported that simulations beginning with five giant planets were 10 times as likely to reproduce the current orbits of the outer planets. A follow-up study by David Nesvorny and Alessandro Morbidelli sought initial resonant configurations that would reproduce the semi-major axis of the four outer planets, Jupiter's eccentricity, and a jump from <2.1 to >2.3 in Jupiter's and Saturn's period ratio. While less than 1% of the best four-planet models met these criteria roughly 5% of the best five-planet models were judged successful, with Jupiter's eccentricity being the most difficult to reproduce. A separate study by Konstantin Batygin and Michael Brown found similar probabilities (4% vs 3%) of reproducing the current outer Solar System beginning with four or five giant planets using the best initial conditions. Their simulations differed in that the planetesimal disk was placed close to the outer planet resulting in a period of migration before planetary encounters began. Criteria included reproducing the oscillations of Jupiter's and Saturn's eccentricities, a period when Neptune's eccentricity exceeded 0.2 during which hot classical Kuiper belt objects were captured, and the retention of a primordial cold classical Kuiper belt, but not the jump in Jupiter's and Saturn's period ratio. Their results also indicate that if Neptune's eccentricity exceeded 0.2, preserving a cold classical belt may require the ice giant to be ejected in as little as 10,000 years. Migration of Neptune before instability Neptune's migration into the planetesimal disk before planetary encounters begin allows Jupiter to retain a significant eccentricity and limits its migration after the ejection of the fifth ice giant. Jupiter's eccentricity is excited by resonance crossings and gravitational encounters with the ice giant and is damped due to secular friction with the planetesimal disk. Secular friction occurs when the orbit of a planet suddenly changes and results in the excitation of the planetesimals' orbits and the reduction of the planet's eccentricity and inclination as the system relaxes. If gravitational encounters begin shortly after the planets leave their multi-resonant configuration, this leaves Jupiter with a small eccentricity. However, if Neptune first migrates outward disrupting the planetesimal disk, its mass is reduced and the eccentricities and inclinations of the planetesimals are excited. When planetary encounters are later triggered by a resonance crossing this lessens the impact of secular friction allowing Jupiter's eccentricity to be maintained. The smaller mass of the disk also reduces the divergent migration of Jupiter and Saturn following the ejection of the fifth planet. This can allow Jupiter's and Saturn's period ratio to jump beyond 2.3 during the planetary encounters without exceeding the current value once the planetesimal disk is removed. 
Although this evolution of the outer planets' orbits can reproduce the current Solar System, it is not the typical result in simulations that begin with a significant distance between the outermost planet and the planetesimal disk, as in the Nice 2 model. An extended migration of Neptune into the planetesimal disk before planetary encounters begin can occur if the disk's inner edge was within 2 AU of Neptune's orbit. This migration begins soon after the protoplanetary disk dissipates, resulting in an early instability, and is most likely if the giant planets began in a 3:2, 3:2, 2:1, 3:2 resonance chain. A late instability can occur if Neptune first underwent a slow dust-driven migration towards a more distant planetesimal disk. For a five-planet system to remain stable for 400 million years the inner edge of the planetesimal disk must be several AU beyond Neptune's initial orbit. Collisions between planetesimals in this disk create debris that is ground down to dust in a collisional cascade. The dust drifts inward due to Poynting–Robertson drag, eventually reaching the orbits of the giant planets. Gravitational interactions with the dust cause the giant planets to escape from their resonance chain roughly 10 million years after the dissipation of the gas disk. The gravitational interactions then result in a slow dust-driven migration of the planets until Neptune approaches the inner edge of the disk. A more rapid planetesimal-driven migration of Neptune into the disk then ensues until the orbits of the planets are destabilized following a resonance crossing. The dust-driven migration requires 7–22 Earth masses of dust, depending on the initial distance between Neptune's orbit and the inner edge of the dust disk. The rate of the dust-driven migration slows with time as the amount of dust the planets encounter declines. As a result, the timing of the instability is sensitive to the factors that control the rate of dust generation, such as the size distribution and the strength of the planetesimals.
Implications for the early Solar System
The jumping-Jupiter scenario results in a number of differences from the original Nice model. The rapid separation of Jupiter's and Saturn's orbits causes the secular resonances to quickly cross the inner Solar System. The number of asteroids removed from the core of the asteroid belt is reduced, leaving an inner extension of the asteroid belt as the dominant source of rocky impactors. The likelihood of preserving the low eccentricities of the terrestrial planets increases to above 20% in a selected jumping-Jupiter model. Since the modification of orbits in the asteroid belt is limited, its depletion and the excitation of its orbits must have occurred earlier. However, asteroid orbits are modified enough to shift the orbital distribution produced by a Grand Tack toward that of the current asteroid belt, to disperse collisional families, and to remove fossil Kirkwood gaps. The ice giant crossing the asteroid belt allows some icy planetesimals to be implanted into the inner asteroid belt. In the outer Solar System icy planetesimals are captured as Jupiter trojans when Jupiter's semi-major axis jumps during encounters with the ice giant. Jupiter also captures irregular satellites via three-body interactions during these encounters. The orbits of Jupiter's regular satellites are perturbed, but in roughly half of the simulations they remain in orbits similar to those observed.
Encounters between an ice giant and Saturn perturb the orbit of Iapetus and may be responsible for its inclination. The dynamical excitation of the outer disk by Pluto-mass objects and the disk's lower mass reduce the bombardment of Saturn's moons. Saturn's tilt is acquired when it is captured in a spin-orbit resonance with Neptune. A slow and extended migration of Neptune into the planetesimal disk before planetary encounters begin leaves the Kuiper belt with a broad inclination distribution. When Neptune's semi-major axis jumps outward after it encounters the ice giant, objects captured in its 2:1 resonance during its previous migration escape, leaving a clump of low-inclination objects with similar semi-major axes. The outward jump also releases objects from the 3:2 resonance, reducing the number of low-inclination plutinos remaining at the end of Neptune's migration.
Late Heavy Bombardment
Most of the rocky impactors of the Late Heavy Bombardment originate from an inner extension of the asteroid belt, yielding a smaller but longer-lasting bombardment. The innermost region of the asteroid belt is currently sparsely populated due to the presence of the ν6 secular resonance. In the early Solar System, however, this resonance was located elsewhere and the asteroid belt extended farther inward, ending at Mars-crossing orbits. During the giant planet migration the ν6 secular resonance first rapidly traversed the asteroid belt, removing roughly half of its mass, much less than in the original Nice model. When the planets reached their current positions the ν6 secular resonance destabilized the orbits of the innermost asteroids. Some of these quickly entered planet-crossing orbits, initiating the Late Heavy Bombardment. Others entered quasi-stable higher-inclination orbits, later producing an extended tail of impacts, with a small remnant surviving as the Hungarias. The increase in the orbital eccentricities and inclinations of the destabilized objects also raised impact velocities, resulting in a change in the size distribution of lunar craters and in the production of impact melt in the asteroid belt. The innermost (or E-belt) asteroids are estimated to have produced nine basin-forming impacts on the Moon between 4.1 and 3.7 billion years ago, with three more originating from the core of the asteroid belt. The pre-Nectarian basins, part of the LHB in the original Nice model, are thought to be due to the impacts of leftover planetesimals from the inner Solar System. The magnitude of the cometary bombardment is also reduced. The giant planets' outward migration disrupts the outer planetesimal disk, causing icy planetesimals to enter planet-crossing orbits. Some of them are then perturbed by Jupiter onto orbits similar to those of Jupiter-family comets. These spend a significant fraction of their orbits crossing the inner Solar System, raising their likelihood of impacting the terrestrial planets and the Moon. In the original Nice model this results in a cometary bombardment with a magnitude similar to the asteroid bombardment. However, while low levels of iridium detected in rocks dating from this era have been cited as evidence of a cometary bombardment, other evidence, such as the mix of highly siderophile elements in lunar rocks and the oxygen isotope ratios in fragments of impactors, is not consistent with a cometary bombardment. The size distribution of lunar craters is also largely consistent with that of the asteroids, leading to the conclusion that the bombardment was dominated by asteroids.
The bombardment by comets may have been reduced by a number of factors. The stirring of the orbits by Pluto-mass objects excites the inclinations of the orbits of the icy planetesimals, reducing the fraction of objects entering Jupiter-family orbits from 1/3 to 1/10. The mass of the outer disk in the five-planet model is roughly half that of the original Nice model. The magnitude of the bombardment may have been reduced further due to the icy planetesimals undergoing significant mass loss, or their having broken up as they entered the inner Solar System. The combination of these factors reduces the estimated largest impact basin to the size of Mare Crisium, roughly half the size of the Imbrium basin. Evidence of this bombardment may have been destroyed by later impacts by asteroids. A number of issues have been raised regarding the connection between the Nice model and the Late Heavy Bombardment. Crater counts using topographic data from the Lunar Reconnaissance Orbiter find an excess of small craters relative to large impact basins when compared to the size distribution of the asteroid belt. However, if the E-belt was the product of collisions among a small number of large asteroids, it may have had a size distribution that differed from that of the asteroid belt, with a larger fraction of small bodies. A recent work has found that the bombardment originating from the inner band of asteroids would yield only two lunar basins and would be insufficient to explain ancient impact spherule beds. It suggests instead that debris from a massive impact was the source, noting that this would better match the size distribution of impact craters. A second work concurs, finding that the asteroid belt was probably not the source of the Late Heavy Bombardment. Noting the lack of direct evidence of cometary impactors, it proposes that leftover planetesimals were the source of most impacts and that the Nice model instability may have occurred early. If a different crater scaling law is used, however, the Nice model is more likely to produce the impacts attributed to the Late Heavy Bombardment and more recent impact craters.
Terrestrial planets
A giant-planet migration in which the ratio of the periods of Jupiter and Saturn quickly crosses from below 2.1 to greater than 2.3 can leave the terrestrial planets with orbits similar to their current orbits. The eccentricities and inclinations of a group of planets can be represented by the angular momentum deficit (AMD), a measure of the differences of their orbits from circular, coplanar orbits. A study by Brasser, Walsh, and Nesvorny found that when a selected jumping-Jupiter model was used, the current angular momentum deficit has a reasonable chance (~20%) of being reproduced in numerical simulations if the AMD was initially between 10% and 70% of the current value. The orbit of Mars is largely unchanged in these simulations, indicating that its initial orbit must have been more eccentric and inclined than those of the other planets. The jumping-Jupiter model used in this study was not typical, however, having been selected from the roughly 5% of models in which Jupiter's and Saturn's period ratio jumped beyond 2.3 while other aspects of the outer Solar System were reproduced. The overall success rate of jumping-Jupiter models with a late instability reproducing both the inner and outer Solar System is small.
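A minimal sketch of one common normalization of the angular momentum deficit (units with G M_sun = 1, masses in solar masses, semi-major axes in AU) is given below. The orbital elements are approximate present-day values, with inclinations measured from the invariable plane and rounded, so the printed number should be read only as an order-of-magnitude illustration of the quantity the simulations try to reproduce.

from math import cos, radians, sqrt

# AMD = sum over planets of m * sqrt(a) * (1 - sqrt(1 - e**2) * cos(i)),
# in units with G*M_sun = 1. Elements are approximate present-day values;
# inclinations are relative to the invariable plane.
terrestrial_planets = {
    #            mass [M_sun], a [AU], e,     i [deg]
    "Mercury": (1.66e-7, 0.387, 0.206, 6.3),
    "Venus":   (2.45e-6, 0.723, 0.007, 2.2),
    "Earth":   (3.00e-6, 1.000, 0.017, 1.6),
    "Mars":    (3.23e-7, 1.524, 0.093, 1.7),
}

def angular_momentum_deficit(planets):
    total = 0.0
    for mass, a, e, inc_deg in planets.values():
        total += mass * sqrt(a) * (1.0 - sqrt(1.0 - e * e) * cos(radians(inc_deg)))
    return total

print(f"terrestrial-planet AMD ~ {angular_momentum_deficit(terrestrial_planets):.1e} (code units)")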
When Kaib and Chambers conducted a large number of simulations starting with five giant planets in a resonance chain and Jupiter and Saturn in a 3:2 resonance, 85% resulted in the loss of a terrestrial planet, less than 5% reproduced the current AMD, and only 1% reproduced both the AMD and the giant planet orbits. In addition to the secular-resonance crossings, the jumps in Jupiter's eccentricity when it encounters an ice giant can also excite the orbits of the terrestrial planets. This led them to propose that the Nice model migration occurred before the formation of the terrestrial planets and that the LHB had another cause. However, the advantage of an early migration is significantly reduced by the requirement that the Jupiter–Saturn period ratio jump to beyond 2.3 to reproduce the current asteroid belt. An early instability could be responsible for the low mass of Mars. If the instability occurs early, the eccentricities of the embryos and planetesimals in the Mars region are excited, causing many of them to be ejected. This deprives Mars of material, ending its growth early and leaving Mars small relative to Earth and Venus. The jumping-Jupiter model can reproduce the eccentricity and inclination of Mercury's orbit. Mercury's eccentricity is excited when it crosses a secular resonance with Jupiter. When relativistic effects are included, Mercury's precession rate is faster, which reduces the impact of this resonance crossing and results in a smaller eccentricity similar to its current value. Mercury's inclination may be the result of it or Venus crossing a secular resonance with Uranus.
Asteroid belt
The rapid traverse of resonances through the asteroid belt can leave its population and the overall distribution of its orbital elements largely preserved. In this case the asteroid belt's depletion, the mixing of its taxonomical classes, and the excitation of its orbits, yielding a distribution of inclinations peaked near 10° and eccentricities peaked near 0.1, must have occurred earlier. These may be the product of Jupiter's Grand Tack, provided that an excess of higher-eccentricity asteroids is removed due to interactions with the terrestrial planets. Gravitational stirring by planetary embryos embedded in the asteroid belt could also produce its depletion, mixing, and excitation. However, most if not all of the embryos must have been lost before the instability. A mixing of asteroid types could be the product of asteroids being scattered into the belt during the formation of the planets. An initially low-mass asteroid belt could have its inclinations and eccentricities excited by secular resonances that hopped across the asteroid belt if Jupiter's and Saturn's orbits became chaotic while in resonance. The orbits of the asteroids could be excited during the instability if the ice giant spent hundreds of thousands of years on a Jupiter-crossing orbit. Numerous gravitational encounters between the ice giant and Jupiter during this period would cause frequent variations in Jupiter's semi-major axis, eccentricity and inclination. The forcing exerted by Jupiter on the orbits of the asteroids, and the semi-major axes where it was strongest, would also vary, causing a chaotic excitation of the asteroids' orbits that could reach or exceed their present level. The highest-eccentricity asteroids would later be removed by encounters with the terrestrial planets.
The eccentricities of the terrestrial planets are excited beyond their current values during this process, however, requiring that the instability occur before their formation in this case. Gravitational stirring by embryos during the instability could increase the number of asteroids entering unstable orbits, resulting in the loss of 99–99.9% of the belt's mass. The sweeping of resonances and the penetration of the ice giant into the asteroid belt results in the dispersal of asteroid collisional families formed during or before the Late Heavy Bombardment. A collisional family's inclinations and eccentricities are dispersed by the sweeping secular resonances, including those inside mean-motion resonances, with the eccentricities being most affected. Perturbations by close encounters with the ice giant result in the spreading of a family's semi-major axes. Most collisional families would thus become unidentifiable by techniques such as the hierarchical clustering method, and V-type asteroids originating from impacts on Vesta could be scattered to the middle and outer asteroid belt. However, if the ice giant spent a short time crossing the asteroid belt, some collisional families may remain recognizable by identifying the V-shaped patterns in plots of semi-major axis versus absolute magnitude produced by the Yarkovsky effect. The survival of the Hilda collisional family, a subset of the Hilda group thought to have formed during the LHB because of the current low collision rate, may be due to its creation after Hilda's jump-capture in the 3:2 resonance as the ice giant was ejected. The stirring of semi-major axes by the ice giant may also remove fossil Kirkwood gaps formed before the instability. Planetesimals from the outer disk are embedded in all parts of the asteroid belt, remaining as P- and D-type asteroids. While Jupiter's resonances sweep across the asteroid belt, outer disk planetesimals are captured by its inner resonances, evolve to lower eccentricities via secular resonances within these resonances, and are released onto stable orbits as Jupiter's resonances move on. Other planetesimals are implanted in the asteroid belt during encounters with the ice giant, either directly, leaving them with aphelia beyond the ice giant's perihelion, or by being removed from a resonance. Jumps in Jupiter's semi-major axis during its encounters with the ice giant shift the locations of its resonances, releasing some objects and capturing others. Many of those remaining after its final jump, along with others captured by the sweeping resonances as Jupiter migrates to its current location, survive as parts of the resonant populations such as the Hildas, Thule, and those in the 2:1 resonance. Objects originating in the asteroid belt can also be captured in the 2:1 resonance, along with a few among the Hilda population. The excursions the ice giant makes into the asteroid belt allow the icy planetesimals to be implanted farther into the asteroid belt, with a few reaching the inner asteroid belt with semi-major axes less than 2.5 AU. Some objects later drift into unstable resonances due to diffusion or the Yarkovsky effect and enter Earth-crossing orbits, with the Tagish Lake meteorite representing a possible fragment of an object that originated in the outer planetesimal disk.
Numerical simulations of this process can roughly reproduce the distribution of P- and D-type asteroids and the size of the largest bodies, with differences, such as an excess of objects smaller than 10 km, being attributed to losses from collisions or the Yarkovsky effect and to the specific evolution of the planets in the model.
Trojans
Most of the Jupiter trojans are jump-captured shortly after a gravitational encounter between Jupiter and an ice giant. During these encounters Jupiter's semi-major axis can jump by as much as 0.2 AU, displacing the L4 and L5 points radially and releasing many existing Jupiter trojans. New Jupiter trojans are captured from the population of planetesimals with semi-major axes similar to Jupiter's new semi-major axis. The captured trojans have a wide range of inclinations and eccentricities, the result of their being scattered by the giant planets as they migrated from their original location in the outer disk. Some additional trojans are captured, and others lost, during weak-resonance crossings as the co-orbital regions become temporarily chaotic. Following its final encounters with Jupiter the ice giant may pass through one of Jupiter's trojan swarms, scattering many and reducing its population. In simulations, the orbital distribution of the captured Jupiter trojans and the asymmetry between the L4 and L5 populations are similar to those of the current Solar System and are largely independent of Jupiter's encounter history. Estimates of the planetesimal disk mass required for the capture of the current population of Jupiter trojans range from 15 to 20 Earth masses, consistent with the mass required to reproduce other aspects of the outer Solar System. Planetesimals are also captured as Neptune trojans during the instability when Neptune's semi-major axis jumps. The broad inclination distribution of the Neptune trojans indicates that the inclinations of their orbits must have been excited before they were captured. The number of Neptune trojans may have been reduced due to Uranus and Neptune being closer to a 2:1 resonance in the past.
Irregular satellites
Jupiter captures a population of irregular satellites, and the relative size of Saturn's population is increased. During gravitational encounters between planets, the hyperbolic orbits of unbound planetesimals around one giant planet are perturbed by the presence of the other. If the geometry and velocities are right, these three-body interactions leave the planetesimal in a bound orbit when the planets separate. Although this process is reversible, and loosely bound satellites, including any primordial satellites, can also escape during these encounters, tightly bound satellites remain, and the number of irregular satellites increases over a series of encounters. Following the encounters, satellites with inclinations between 60° and 130° are lost due to the Kozai resonance, and the more distant prograde satellites are lost to the evection resonance. Collisions among the satellites result in the formation of families, in a significant loss of mass, and in a shift of their size distribution. The populations and orbits of Jupiter's irregular satellites captured in simulations are largely consistent with observations. Himalia, which has a spectrum similar to asteroids in the middle of the asteroid belt, is somewhat larger than the largest satellites captured in simulations. If it was a primordial object, its odds of surviving the series of gravitational encounters range from 0.01 to 0.3, with the odds falling as the number of encounters increases.
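One simple way to see why the survival odds fall as the number of encounters grows is to treat each encounter as an independent trial, so that the retention probability compounds multiplicatively. The sketch below shows this compounding for a few assumed per-encounter retention probabilities; the 0.9 and 0.8 figures are illustrative assumptions, not values taken from the studies cited above.

# Toy compounding model: if a primordial satellite independently survives each
# planetary encounter with probability p, its chance of surviving n encounters
# is p**n. The per-encounter probabilities below are illustrative assumptions.
def survival_probability(p_per_encounter, n_encounters):
    return p_per_encounter ** n_encounters

for p in (0.9, 0.8):
    for n in (5, 10, 20):
        print(f"p={p}, n={n}: survival ~ {survival_probability(p, n):.2f}")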
Saturn has more frequent encounters with the ice giant in the jumping-Jupiter scenario, and Uranus and Neptune have fewer encounters if the scattered ice giant was a fifth giant planet. This increases the size of Saturn's population relative to those of Uranus and Neptune when compared to the original Nice model, producing a closer match with observations.
Regular satellites
The orbits of Jupiter's regular satellites can remain dynamically cold despite encounters between the giant planets. Gravitational encounters between planets perturb the orbits of their satellites, exciting inclinations and eccentricities and altering semi-major axes. If these encounters led to results inconsistent with the observations (for example, collisions between satellites, ejections of satellites, or the disruption of the Laplace resonance of Jupiter's moons Io, Europa and Ganymede), this could provide evidence against jumping-Jupiter models. In simulations, collisions between satellites or their ejection were found to be unlikely, requiring an ice giant to approach within 0.02 AU of Jupiter. More distant encounters that disrupted the Laplace resonance were more common, though tidal interactions often led to its recapture. A sensitive test of jumping-Jupiter models is the inclination of Callisto's orbit, which is not damped by tidal interactions. Callisto's inclination remained small in six out of ten 5-planet models tested in one study (including some where Jupiter acquired irregular satellites consistent with observations), and another study found the likelihood of Jupiter ejecting a fifth giant planet while leaving Callisto's orbit dynamically cold to be 42%. Callisto is also unlikely to have been part of the Laplace resonance, because encounters that raise it to its current orbit leave it with an excessive inclination. The encounters between planets also perturb the orbits of the moons of the other outer planets. Saturn's moon Iapetus could have been excited to its current inclination if the ice giant's closest approach was out of the plane of Saturn's equator. If Saturn acquired its tilt before the encounters, Iapetus's inclination could also be excited by multiple changes of its semi-major axis, because the inclination of Saturn's Laplace plane would vary with the distance from Saturn. In simulations, Iapetus was excited to its current inclination in five of ten of the jumping-Jupiter models tested, though three left it with excessive eccentricity. The preservation of the small inclination (0.1°) of Uranus's moon Oberon favors the 5-planet models, in which Uranus has only a few encounters with an ice giant, over 4-planet models in which Uranus encounters Jupiter and Saturn: Oberon's low inclination was preserved in nine out of ten 5-planet models, while its preservation was found to be unlikely in 4-planet models. The encounters between planets may also have been responsible for the absence of regular satellites of Uranus beyond the orbit of Oberon. The loss of ices from the inner satellites due to impacts is reduced. Numerous impacts of planetesimals onto the satellites of the outer planets occur during the Late Heavy Bombardment. In the bombardment predicted by the original Nice model, these impacts generate enough heat to vaporize the ices of Mimas, Enceladus and Miranda. The lower-mass planetesimal belt in the five-planet models reduces this bombardment. Furthermore, the gravitational stirring by Pluto-mass objects in the Nice 2 model excites the inclinations and eccentricities of planetesimals.
This increases their velocities relative to the giant planets, decreasing the effectiveness of gravitational focusing and thereby reducing the fraction of planetesimals impacting the inner satellites. Combined, these effects reduce the bombardment by an order of magnitude. Estimates of the impacts on Iapetus are also less than 20% of those of the original Nice model. Some of the impacts are catastrophic, resulting in the disruption of the inner satellites. In the bombardment of the original Nice model this may result in the disruption of several of the satellites of Saturn and Uranus. An order-of-magnitude reduction in the bombardment avoids the destruction of Dione and Ariel, but Miranda, Mimas, Enceladus, and perhaps Tethys would still be disrupted. These may be second-generation satellites formed from the re-accretion of disrupted satellites. In this case Mimas would not be expected to be differentiated, and the low density of Tethys may be due to it forming primarily from the mantle of a disrupted progenitor. Alternatively they may have accreted later from a massive Saturnian ring, or even as recently as 100 Myr ago, after the last generation of moons was destroyed in an orbital instability.
Giant planet tilts
Jupiter's and Saturn's tilts can be produced by spin-orbit resonances. A spin-orbit resonance occurs when the precession frequency of a planet's spin axis matches the precession frequency of another planet's ascending node. These frequencies vary during the planetary migration with the semi-major axes of the planets and the mass of the planetesimal disk. Jupiter's small tilt may be due to a quick crossing of a spin-orbit resonance with Neptune while Neptune's inclination was small, for example, during Neptune's initial migration before planetary encounters began. Alternatively, if that crossing occurred when Jupiter's semi-major axis jumped, it may be due to its current proximity to a spin-orbit resonance with Uranus. Saturn's large tilt can be acquired if it is captured in a spin-orbit resonance with Neptune as Neptune slowly approaches its current orbit at the end of the migration. The final tilts of Jupiter and Saturn are very sensitive to the final positions of the planets: Jupiter's tilt would be much larger if Uranus migrated beyond its current orbit, while Saturn's would be much smaller if Neptune's migration ended earlier or if the resonance crossing was more rapid. Even in simulations where the final positions of the giant planets are similar to those in the current Solar System, Jupiter's and Saturn's tilts are reproduced less than 10% of the time.
Kuiper belt
A slow migration of Neptune covering several AU results in a Kuiper belt with a broad inclination distribution. As Neptune migrates outward it scatters many objects from the planetesimal disk onto orbits with larger semi-major axes. Some of these planetesimals are then captured in mean-motion resonances with Neptune. While in a mean-motion resonance, their orbits can evolve via processes such as the Kozai mechanism, reducing their eccentricities and increasing their inclinations, or via apsidal and nodal resonances, which alter eccentricities and inclinations respectively. Objects that reach low-eccentricity, high-perihelion orbits can escape from the mean-motion resonance and are left behind in stable orbits as Neptune's migration continues.
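Simulations of this kind usually parameterize Neptune's smooth migration as an exponential approach to a target semi-major axis, a(t) = a_final - (a_final - a_start) * exp(-t / tau). The sketch below evaluates that law for the first leg of the migration discussed in the following text (24 AU toward 28 AU with a 10-million-year e-folding time); treating this simple single-stage form as representative is an assumption made for illustration.

from math import exp

def smooth_migration(t_myr, a_start_au, a_final_au, tau_myr):
    # Exponential approach commonly used to parameterize smooth planetesimal-
    # driven migration: a(t) = a_final - (a_final - a_start) * exp(-t / tau).
    return a_final_au - (a_final_au - a_start_au) * exp(-t_myr / tau_myr)

# Neptune drifting from 24 AU toward 28 AU with a 10-million-year e-folding time.
for t in (0, 10, 20, 30, 50):
    print(f"t = {t:3d} Myr: a ~ {smooth_migration(t, 24.0, 28.0, 10.0):.2f} AU")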
The inclination distribution of the hot classical Kuiper belt objects is reproduced in numerical simulations where Neptune migrates smoothly from 24 AU to 28 AU with an exponential timescale of 10 million years before jumping outward when it encounters the fifth giant planet, and with a 30-million-year exponential timescale thereafter. The slow pace and extended distance of this migration provide sufficient time for inclinations to be excited before the resonances reach the region of the Kuiper belt where the hot classical objects are captured and later deposited. If Neptune reaches an eccentricity greater than 0.12 following its encounter with the fifth giant planet, hot classical Kuiper belt objects can also be captured due to secular forcing. Secular forcing causes the eccentricities of objects to oscillate, allowing some to reach lower-eccentricity orbits that become stable once Neptune reaches a low eccentricity. The inclinations of Kuiper belt objects can also be excited by secular resonances outside the mean-motion resonances, however, preventing the inclination distribution from being used to definitively determine the speed of Neptune's migration. The objects that remain in the mean-motion resonances at the end of Neptune's migration form the resonant populations such as the plutinos. Few low-inclination objects resembling the cold classical objects remain among the plutinos at the end of Neptune's migration. The outward jump in Neptune's semi-major axis releases the low-inclination, low-eccentricity objects that were captured as Neptune's 3:2 resonance initially swept outward. Afterwards, the capture of low-inclination plutinos was largely prevented by the excitation of inclinations and eccentricities as secular resonances slowly swept ahead of it. The slow migration of Neptune also allows objects to reach large inclinations before capture in resonances and to evolve to lower eccentricities without escaping from resonance. The number of planetesimals with initial semi-major axes beyond 30 AU must have been small to avoid an excess of objects in Neptune's 5:4 and 4:3 resonances. Encounters between Neptune and Pluto-mass objects reduce the fraction of Kuiper belt objects in resonances. Velocity changes during the gravitational encounters with planetesimals that drive Neptune's migration cause small jumps in its semi-major axis, yielding a migration that is grainy instead of smooth. The shifting locations of the resonances produced by this rough migration increase the libration amplitudes of resonant objects, causing many to become unstable and escape from resonances. The observed ratio of hot classical objects to plutinos is best reproduced in simulations that include 1000–4000 Pluto-mass objects (i.e. large dwarf planets), or about 1000 bodies twice as massive as Pluto, making up 10–40% of the 20-Earth-mass planetesimal disk, with roughly 0.1% of this initial disk remaining in various parts of the Kuiper belt. The grainy migration also reduces the number of plutinos relative to objects in the 2:1 and 5:2 resonances with Neptune, and results in a population of plutinos with a narrower distribution of libration amplitudes. A large number of Pluto-mass objects would require the Kuiper belt's size distribution to have multiple deviations from a constant slope. The kernel of the cold classical Kuiper belt objects is left behind when Neptune encounters the fifth giant planet.
The kernel is a concentration of Kuiper belt objects with small eccentricities and inclinations and with semi-major axes of 44–44.5 AU, identified by the Canada–France Ecliptic Plane Survey. As Neptune migrates outward, low-inclination, low-eccentricity objects are captured by its 2:1 mean-motion resonance. These objects are carried outward in this resonance until Neptune reaches 28 AU. At this time Neptune encounters the fifth ice giant, which had been scattered outward by Jupiter. The gravitational encounter causes Neptune's semi-major axis to jump outward. The objects that were in the 2:1 resonance, however, remain in their previous orbits and are left behind as Neptune's migration continues. Those objects that were pushed out a short distance have small eccentricities and are added to the local population of cold classical KBOs. Others that were carried longer distances had their eccentricities excited during this process. While most of these are released on higher-eccentricity orbits, a few have their eccentricities reduced by a secular resonance within the 2:1 resonance and are released as part of the kernel, or earlier due to Neptune's grainy migration. Among these are objects from regions no longer occupied by dynamically cold objects that formed in situ, such as between 38 and 40 AU. Being pushed out while in resonance allows the loosely bound, neutrally colored or 'blue' binaries from these regions to be implanted without encountering Neptune. The kernel has also been reproduced in a simulation in which a more violent instability occurred without a preceding migration of Neptune and the disk was truncated at ~44.5 AU. The low eccentricities and inclinations of the cold classical belt objects place constraints on the evolution of Neptune's orbit. They would be preserved if the eccentricity and inclination of Neptune following its encounter with another ice giant remained small (e < 0.12 and i < 6°) or were damped quickly. This constraint may be relaxed somewhat if Neptune's precession is rapid due to strong interactions with Uranus or a high-surface-density disk. A combination of these may allow the cold classical belt to be reproduced even in simulations with more violent instabilities. If Neptune's rapid precession is temporarily slowed, a 'wedge' of missing low-eccentricity objects can form beyond 44 AU. The appearance of this wedge can also be reproduced if the size of objects initially beyond 45 AU declined with distance. A more extended period of slow precession by Neptune could allow low-eccentricity objects to remain in the cold classical belt if its duration coincided with that of the oscillations of the objects' eccentricities. A slow sweeping of resonances, with an exponential timescale of 100 million years, while Neptune has a modest eccentricity can remove the higher-eccentricity, low-inclination objects, truncating the eccentricity distribution of the cold classical belt objects and leaving a step near the current position of Neptune's 7:4 resonance.
Scattered disk
In the scattered disk, a slow and grainy migration of Neptune leaves detached objects with perihelia greater than 40 AU clustered near its resonances. Planetesimals scattered outward by Neptune are captured in resonances, evolve onto lower-eccentricity, higher-inclination orbits, and are released onto stable higher-perihelion orbits. Beyond 50 AU this process requires a slower migration of Neptune for the perihelia to be raised above 40 AU.
As a result, fossilized high-perihelion objects in the scattered disk are left behind only during the latter parts of Neptune's migration, yielding short trails (or fingers) on a plot of eccentricity vs. semi-major axis, near but just inside the current locations of Neptune's resonances. The extent of these trails depends on the timescale of Neptune's migration; they extend farther inward if the timescale is longer. The release of these objects from resonance is aided by a grainy migration of Neptune, which may be necessary for an object like to have escaped from Neptune's 8:3 resonance. If the encounter with the fifth planet left Neptune with a large eccentricity, the semi-major axes of the high-perihelion objects would be distributed more symmetrically about Neptune's resonances, unlike the objects observed by OSSOS. The dynamics of the scattered disk left by Neptune's migration vary with distance. During Neptune's outward migration, many objects are scattered onto orbits with semi-major axes greater than 50 AU. As in the Kuiper belt, some of these objects are captured by and remain in a resonance with Neptune, while others escape from resonance onto stable orbits after their perihelia are raised. Other objects with perihelia near Neptune's orbit also remain at the end of its migration. The orbits of these scattering objects vary with time as they continue to interact with Neptune, with some of them entering planet-crossing orbits and briefly becoming centaurs or comets before they are ejected from the Solar System. Roughly 80% of the objects between 50 and 200 AU have stable resonant or detached orbits, with semi-major axes that vary by less than 1.5 AU per billion years. The remaining 20% are actively scattering objects with semi-major axes that vary significantly due to interactions with Neptune. Beyond 200 AU, most objects in the scattered disk are actively scattering. The total mass deposited in the scattered disk is about twice that of the classical Kuiper belt, with roughly 80% of the objects surviving to the present having semi-major axes less than 200 AU. Lower-inclination detached objects become scarcer with increasing semi-major axis, possibly because stable mean-motion resonances, or the Kozai resonance within these resonances, require a minimum inclination that increases with semi-major axis.
Planet Nine cloud
If the hypothetical Planet Nine exists and was present during the giant planet migration, a cloud of objects with similar semi-major axes would be formed. Objects scattered outward to semi-major axes greater than 200 AU would have their perihelia raised by the dynamical effects of Planet Nine, decoupling them from the influence of Neptune. The semi-major axes of the objects dynamically controlled by Planet Nine would be centered on its semi-major axis, ranging from 200 AU to ~2000 AU, with most objects having semi-major axes greater than that of Planet Nine. Their inclinations would be roughly isotropic, ranging up to 180 degrees. The perihelia of these objects would cycle over periods of over 100 Myr, returning many to the influence of Neptune. The estimated mass remaining at the current time is 0.3–0.4 Earth masses.
Oort cloud
Some of the objects scattered onto very distant orbits during the giant planet migration are captured in the Oort cloud. The outer Oort cloud, with semi-major axes greater than 20,000 AU, forms quickly as the galactic tide raises the perihelia of objects beyond the orbits of the giant planets.
The inner Oort cloud forms more slowly, from the outside in, due to the weaker effect of the galactic tide on objects with smaller semi-major axes. Most objects captured in the outer Oort cloud are scattered outward by Saturn without encountering Jupiter, with some being scattered outward by Uranus and Neptune. Those captured in the inner Oort cloud are primarily scattered outward by Neptune. Roughly 6.5% of the planetesimals beyond Neptune's initial orbit, approximately 1.3 Earth masses, are captured in the Oort cloud, with roughly 60% in the inner cloud. Objects may also have been captured earlier and from other sources. As the Sun left its birth cluster, objects could have been captured into the Oort cloud from other stars. If the gas disk extended beyond the orbits of the giant planets when they cleared their neighborhoods, comet-sized objects would have been slowed by gas drag, preventing them from reaching the Oort cloud. However, if Uranus and Neptune formed late, some of the objects cleared from their neighborhoods after the gas disk dissipated may have been captured in the Oort cloud. If the Sun remained in its birth cluster at this time, or during the planetary migration if that occurred early, the Oort cloud formed would be more compact.
See also
Grand tack hypothesis
References
Solar System dynamic theories
1636593
https://en.wikipedia.org/wiki/Collaboratory
Collaboratory
A collaboratory, as defined by William Wulf in 1989, is a “center without walls, in which the nation’s researchers can perform their research without regard to physical location, interacting with colleagues, accessing instrumentation, sharing data and computational resources, [and] accessing information in digital libraries” (Wulf, 1989). Bly (1998) refines the definition to “a system which combines the interests of the scientific community at large with those of the computer science and engineering community to create integrated, tool-oriented computing and communication systems to support scientific collaboration” (Bly, 1998, p. 31). Rosenberg (1991) considers a collaboratory to be an experimental and empirical research environment in which scientists work and communicate with each other to design systems, participate in collaborative science, and conduct experiments to evaluate and improve systems. A simplified form of these definitions would describe the collaboratory as an environment where participants make use of computing and communication technologies to access shared instruments and data, as well as to communicate with others. However, a wide-ranging definition is provided by Cogburn (2003), who states that “a collaboratory is more than an elaborate collection of information and communications technologies; it is a new networked organizational form that also includes social processes; collaboration techniques; formal and informal communication; and agreement on norms, principles, values, and rules” (Cogburn, 2003, p. 86). This concept has a lot in common with the notions of Interlock research, Information Routing Groups and Interlock diagrams introduced in 1984.
Other meaning
The word “collaboratory” is also used to describe an open space and creative process where a group of people work together to generate solutions to complex problems. This meaning of the word originates from the visioning work of a large group of people – including scholars, artists, consultants, students, activists, and other professionals – who worked together on the 50+20 initiative aimed at transforming management education. In this context, by fusing two elements, “collaboration” and “laboratory”, the word “collaboratory” suggests the construction of a space where people explore collaborative innovations. It is, as defined by Dr. Katrin Muff, “an open space for all stakeholders where action learning and action research join forces, and students, educators, and researchers work with members of all facets of society to address current dilemmas.” The concept of the collaboratory as a creative group process and its application are further developed in the book “The Collaboratory: A co-creative stakeholder engagement process for solving complex problems”. Examples of collaboratory events are provided on the website of the Collaboratory community as well as by Business School Lausanne, a Swiss business school that has adopted the collaboratory method to harness collective intelligence.
Background
Problems of geographic separation are especially present in large research projects. The time and cost of traveling, the difficulties in keeping contact with other scientists, the control of experimental apparatus, the distribution of information, and the large number of participants in a research project are just a few of the issues researchers are faced with. Therefore, collaboratories have been put into operation in response to these concerns and restrictions.
However, the development and implementation of collaboratories have proven costly. From 1992 to 2000, financial budgets for scientific research and development of collaboratories ranged from US$447,000 to US$10,890,000, and total use ranged from 17 to 215 users per collaboratory (Sonnenwald, 2003). Particularly high costs occurred when software packages were not available for purchase and direct integration into the collaboratory, or when requirements and expectations were not met. Chin and Lansing (2004) state that the research and development of scientific collaboratories had, thus far, taken a tool-centric approach. The main goal was to provide tools for shared access and manipulation of specific software systems or scientific instruments. Such an emphasis on tools was necessary in the early development years of scientific collaboratories due to the lack of basic collaboration tools (e.g. text chat, synchronous audio or videoconferencing) to support rudimentary levels of communication and interaction. Today, however, such tools are available in off-the-shelf software packages such as Microsoft NetMeeting, IBM Lotus Sametime, and Mbone Videoconferencing (Chin and Lansing, 2004). Therefore, the design of collaboratories may now move beyond developing general communication mechanisms to evaluating and supporting the very nature of collaboration in the scientific context (Chin & Lansing, 2004).
The evolution of the collaboratory
As stated in Chapter 4 of the 50+20 "Management Education for the World" book, "the term collaboratory was first introduced in the late 1980s to address problems of geographic separation in large research projects related to travel time and cost, difficulties in keeping contact with other scientists, control of experimental apparatus, distribution of information, and the large number of participants. In their first decade of use, collaboratories were seen as complex and expensive information and communication technology (ICT) solutions supporting 15 to 200 users per project, with budgets ranging from 0.5 to 10 million USD. At that time, collaboratories were designed from an ICT perspective to serve the interests of the scientific community with tool-oriented computing requirements, creating an environment that enabled systems design and participation in collaborative science and experiments. The introduction of a user-centered approach provided a first evolutionary step in the design philosophy of the collaboratory, allowing rapid prototyping and development cycles. Over the past decade the concept of the collaboratory expanded beyond that of an elaborate ICT solution, evolving into a “new networked organizational form that also includes social processes, collaboration techniques, formal and informal communication, and agreement on norms, principles, values, and rules”. The collaboratory shifted from a tool-centric to a data-centric approach, enabling data sharing beyond a common repository for storing and retrieving shared data sets. These developments have led to the evolution of the collaboratory towards a globally distributed knowledge work that produces intangible goods and services capable of being both developed and distributed around the world using traditional ICT networks. Initially, the collaboratory was used in scientific research projects with variable degrees of success. In recent years, collaboratory models have been applied to areas beyond scientific research and the national context.
The wide acceptance of collaborative technologies in many parts of the world opens promising opportunities for international cooperation in critical areas where societal stakeholders are unable to work out solutions in isolation, providing a platform for large multidisciplinary teams to work on complex global challenges. The emergence of open-source technology transformed the collaboratory into its next evolution. The term open-source was adopted by a group of people in the free software movement in Palo Alto in 1998 in reaction to the source code release of the Netscape Navigator browser. Beyond providing a pragmatic methodology for free distribution and access to an end product's design and implementation details, open-source represents a paradigm shift in the philosophy of collaboration. The collaboratory has proven to be a viable solution for the creation of a virtual organization. Increasingly, however, there is a need to expand this virtual space into the real world. We propose another paradigm shift, moving the collaboratory beyond its existing ICT framework to a methodology of collaboration beyond the tool- and data-centric approaches, and towards an issue-centered approach that is transdisciplinary in nature."
Characteristics and considerations
A distinctive characteristic of collaboratories is that they focus on data collection and analysis, hence the interest in applying collaborative technologies to support data sharing as opposed to tool sharing. Chin and Lansing (2004) explore the shift of collaboratory development from traditional tool-centric approaches to more data-centric ones, to effectively support data sharing. This means more than just providing a common repository for storing and retrieving shared data sets. Collaboration, Chin and Lansing (2004) state, is driven both by the need to share data and by the need to share knowledge about data. Shared data is only useful if sufficient context is provided about the data such that collaborators may comprehend and effectively apply it. It is therefore imperative, according to Chin and Lansing (2004), to know and understand how data sets relate to aspects of the overall data space, applications, experiments, projects, and the scientific community, identifying critical features or properties, among which we can mention:
General data set properties (owner, creation date, size, format);
Experimental properties (conditions of the scientific experiment that generated that data);
Data provenance (relationship with previous versions);
Integration (relationship of data subsets within the full data set);
Analysis and interpretation (notes, experiences, interpretations, and knowledge produced);
Scientific organization (scientific classification or hierarchy);
Task (research task that generated or applies the data set);
Experimental process (relationship of data and tasks to the overall process);
User community (application of data set to different users).
Henline (1998) argues that communication about experimental data is another important characteristic of a collaboratory. By focusing attention on the dynamics of information exchange, the study of the Zebrafish Information Network Project (Henline, 1998) concluded that the key challenges in creating a collaboratory may be social rather than technical. “A successful system must respect existing social conventions while encouraging the development of analogous mechanisms within the new electronic forum” (Henline, 1998, p. 69).
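The kind of context Chin and Lansing (2004) enumerate above can be pictured as a metadata record that travels with each shared data set. The following is a minimal, hypothetical sketch; the class and field names are illustrative and do not reproduce any actual collaboratory schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SharedDataSet:
    """Hypothetical context record for a shared scientific data set."""
    owner: str                          # general properties
    creation_date: str
    size_bytes: int
    data_format: str
    experimental_conditions: str        # experimental properties
    parent_versions: List[str] = field(default_factory=list)   # data provenance
    member_of: Optional[str] = None     # integration: enclosing data set, if any
    analysis_notes: List[str] = field(default_factory=list)    # interpretation
    classification: Optional[str] = None        # scientific organization
    generating_task: Optional[str] = None       # task / experimental process
    user_communities: List[str] = field(default_factory=list)  # user community
```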
Similar observations were made in the Computer-supported collaborative learning (CSCL) case study (Cogburn, 2003). The author (Cogburn, 2003) is investigating a collaboratory established for researchers in education and other related domains from United States of America and southern Africa. The main finding was that there have been important intellectual contributions on both sides, although the context was that of a developed country working together with a developing one and there have been social as well as cultural barriers. He further develops the idea that a successful CSCL would need to draw the best lessons learned on both sides in computer-mediated communication (CMC) and computer-supported cooperative work (CSCW). Sonnenwald (2003) conducted seventeen interviews with scientists and revealed important considerations. Scientists expect a collaboratory to “support their strategic plans; facilitate management of the scientific process; have a positive or neutral impact on scientific outcomes; provide advantages and disadvantages for scientific task execution; and provide personal conveniences when collaborating across distances” (Sonnenwald, 2003, p. 68). Many scientists looked at the collaboratory as means to achieve strategic goals that were organizational and personal in nature. Other scientists anticipated that the scientific process would speed up when they had access to the collaboratory. Design philosophy Finholt (1995), based on the case studies of the Upper Atmospheric Research Collaboratory (UARC) and the Medical Collaboratory, establishes a design philosophy: a collaboratory project must be dedicated to a user-centered design (UCD) approach. This means a commitment to develop software in programming environments that allow rapid prototyping, rapid development cycles (Finholt, 1995). A consequence of the user-centered design in the collaboratory is that the system developers must be able to distinguish when a particular system or modification has positive impact on users’ work practices. An important part of obtaining this understanding is producing an accurate picture of how work is done prior to the introduction of technology. Finholt (1995) explains that behavioral scientists had the task of understanding the actual work settings for which new information technologies were developed. The goal of a user-centered design effort was to inject those observations back into the design process to provide a baseline for evaluating future changes and to illuminate productive directions for prototype development (Finholt, 1995). A similar viewpoint is expressed by Cogburn (2003) who relates the collaboratory to a globally distributed knowledge work, stating that human-computer interaction (HCI) and user-centered design (UCD) principles are critical for organizations to take advantage of the opportunities of globalization and the emergence of an Information society. He (Cogburn, 2003) refers to distributed knowledge work as being a set of “economic activities that produce intangible goods and services […], capable of being both developed and distributed around the world using the global information and communication networks” (Cogburn, 2003, p. 81). Through the use of these global information and communications networks, organizations are able to take part in globally disarticulated production, which means they can locate their research and development facilities almost anywhere in the world, and engineers can collaborate across time zones, institutions and national boundaries. 
Evaluation
Meeting expectations is a factor that influences the adoption of innovations, including scientific collaboratories. Some of the collaboratories implemented thus far have not been entirely successful. The Waterfall Glen collaboratory of the Mathematics and Computer Science Division of Argonne National Laboratory (Henline, 1998) is an illustrative example: it had its share of problems, including occasional technical and social disasters, but most importantly it did not meet all of the collaboration and interaction requirements. The vast majority of evaluations performed thus far concentrate mainly on usage statistics (e.g. total number of members, hours of use, amount of data communicated) or on the immediate role in the production of traditional scientific outcomes (e.g. publications and patents). Sonnenwald (2003), however, argues that we should instead look for longer-term and intangible measures, such as new and continued relationships among scientists and the subsequent, longer-term creation of new knowledge. Regardless of the criteria used for evaluation, we must focus on understanding the expectations and requirements defined for a collaboratory. Without such understanding, a collaboratory runs the risk of not being adopted.
Success factors
Olson, Teasley, Bietz, and Cogburn (2002) identify some of the success factors of a collaboratory: collaboration readiness, collaboration infrastructure readiness, and collaboration technology readiness. Collaboration readiness is the most basic prerequisite for an effective collaboratory, according to Olson, Teasley, Bietz, and Cogburn (2002). Often the critical component of collaboration readiness is based on the concept of “working together in order to achieve a science goal” (Olson, Teasley, Bietz, & Cogburn, 2002, p. 46). Incentives to collaborate, shared principles of collaboration, and experience with the elements of collaboration are also crucial. Successful interaction between users requires a certain amount of common ground. Interactions require a high degree of trust or negotiation, especially when they involve areas where there is a cultural difference. “Ethical norms tend to be culturally specific, and negotiations about ethical issues require high levels of trust” (Olson, Teasley, Bietz, & Cogburn, 2002, p. 49). When analyzing collaboration infrastructure readiness, Olson, Teasley, Bietz, and Cogburn (2002) state that modern collaboration tools require adequate infrastructure to operate properly. Many off-the-shelf applications will run effectively only on state-of-the-art workstations. An important piece of the infrastructure is the technical support necessary to ensure version control, to get participants registered, and to recover in case of disaster. Communications cost is another element that can be critical for collaboration infrastructure readiness (Olson, Teasley, Bietz, & Cogburn, 2002). Pricing structures for network connectivity can affect the choices that users make and therefore have an effect on the collaboratory's final design and implementation. Collaboration technology readiness, according to Olson, Teasley, Bietz, and Cogburn (2002), refers to the fact that collaboration does not involve only technology and infrastructure, but also requires a considerable investment in training. Thus, it is essential to assess the state of technology readiness in the community to ensure success.
If the level is too primitive more training is required to bring the users’ knowledge up-to-date. Examples Biological Sciences Collaboratory A comprehensively described example of a collaboratory, the Biological Sciences Collaboratory (BSC) at the Pacific Northwest National Laboratory (Chin & Lansing, 2004), enables the sharing and analysis of biological data through metadata capture, electronic laboratory notebooks, data organization views, data provenance tracking, analysis notes, task management, and scientific workflow management. BSC supports various data formats, has data translation capabilities, and can interact and exchange data with other sources (external databases, for example). It offers subscription capabilities (to allow certain individuals to access data) and verification of identities, establishes and manages permissions and privileges, and has data encryption capabilities (to ensure secure data transmission) as part of its security package. BSC also provides a data provenance tool and a data organization tool. These tools allow a hierarchical tree to display the historical lineage of a data set. From this tree-view the scientist may select a particular node (or an entire branch) to access a specific version of the data set (Chin & Lansing, 2004). The task management provided by BSC allows users to define and track tasks related to a specific experiment or project. Tasks can have deadlines assigned, levels of priority, and dependencies. Tasks can also be queried and various reports produced. Related to task management, BSC provides workflow management to capture, manage, and supply standard paths of analyses. The scientific workflow may be viewed as process templates that captures and semi-automate the steps of an analysis process and its encompassing data sets and tools (Chin & Lansing, 2004). BSC provides project collaboration by allowing scientists to define and manage members of their group. Security and authentication mechanisms are therefore applied to limit access to project data and applications. Monitoring capability allows for members to identify other members that are online working on the project (Chin & Lansing, 2004). BSC offers community collaboration capabilities: scientists may publish their data sets to a larger community through the data portal. Notifications are in place for scientists interested in a particular set of data - when that data changes, the scientists get notification via email (Chin & Lansing, 2004). Diesel Combustion Collaboratory Pancerella, Rahn, and Yang (1999) analyzed the Diesel Combustion Collaboratory (DCC) which was a problem-solving environment for combustion research. The main goal of DCC was to make the information exchange for the combustion researchers more efficient. Researchers would collaborate over the Internet using various DCC tools. These tools included “a distributed execution management system for running combustion models on widely distributed computers (distributed computing), including supercomputers; web accessible data archiving capabilities for sharing graphical experimental or modeling data; electronic notebooks and shared workspaces for facilitating collaboration; visualization of combustion data; and videoconferencing and data conferencing among researchers at remote sites” (Pancerella, Rahn, & Yang, 1999, p. 1). 
The collaboratory design team defined the requirements as follows (Pancerella, Rahn, & Yang, 1999):
Ability to share graphical data easily;
Ability to discuss modeling strategies and exchange model descriptions;
Ability to archive collaborative information;
Ability to run combustion models at widely separated locations;
Ability to analyze experimental data and modeling results in a web-accessible format;
Videoconferencing and group meeting capabilities.
Each of these requirements had to be met securely and efficiently across the Internet. Resource availability was a major concern because many of the chemistry simulations could run for hours or even days on high-end workstations and produce kilobytes to megabytes of data. These data sets had to be visualized using simultaneous 2-D plots of multiple variables (Pancerella, Rahn, & Yang, 1999). The deployment of the DCC was done in a phased approach. The first phase was based on iterative development, testing, and deployment of individual collaboratory tools. Once collaboratory team members had adequately tested each new tool, it was deployed to combustion researchers. The deployment of the infrastructure (videoconferencing tools, multicast routing capabilities, and data archives) was done in parallel (Pancerella, Rahn, & Yang, 1999). The next phase was to implement full security in the collaboratory. The primary focus was on two-way synchronous and multi-way asynchronous collaborations (Pancerella, Rahn, & Yang, 1999). The challenge was to balance the increased access to data that was needed with the security requirements. The final phase was the broadening of the target research to multiple projects involving a broader range of collaborators. The collaboratory team found that the highest impact was perceived by the geographically separated scientists who truly depended on each other to achieve their goals. One of the team's major challenges was to overcome the technological and social barriers in order to meet all of the objectives (Pancerella, Rahn, & Yang, 1999). Collaboratories that are open to users yet require little security maintenance are hard to achieve; therefore, user feedback and evaluation are constantly required.
Other collaboratories
Other collaboratories that have been implemented and can be further investigated are: the Marine Biological Laboratory (MBL), an international center for research and education in biology, biomedicine and ecology; the Biological Collaborative Research Environment (BioCoRE), developed at the University of Illinois at Urbana–Champaign – a collaboration tool for biologists (Chin and Lansing, 2004); the CTQ Collaboratory, a virtual community of teacher leaders and those who value teacher leadership, run by the Center for Teaching Quality, a national education nonprofit (Berry, Byrd, & Wieder, 2013); HASTAC (Humanities, Arts, Science, and Technology Alliance and Collaboratory), founded in 2002 by Cathy N. Davidson, then Vice Provost for Interdisciplinary Studies at Duke University, and David Theo Goldberg, Director of the University of California Humanities Research Institute (UCHRI), after contacting scholars across the humanities (including digital humanities), social sciences, media studies, the arts, and technology sectors who shared these convictions and wanted to envision a new kind of organization—an academic social network—that would allow anyone to join and enable any member of the community to contribute.
They began working with a team of developers at Stanford University to code and design a participatory community site, originally a display website and a wiki for open contribution, and a community-based publishing and networking platform. Molecular Interactive Collaborative Environment (MICE), developed at the San Diego Supercomputer Center – provides collaborative access and manipulation of complex, three-dimensional molecular models as captured in various scientific visualization programs (Chin and Lansing, 2004); Molecular Modeling Collaboratory (MMC), developed at the University of California, San Francisco – allows remote biologists to share and interactively manipulate three-dimensional molecular models in applications such as drug design and protein engineering (Chin and Lansing, 2004); Collaboratory for Microscopic Digital Anatomy (CMDA) – a computational environment providing biomedical scientists remote access to a specialized research electron microscope (Henline, 1998); The Collaboratory for Strategic Partnerships and Applied Research at Messiah College – an organization of Christian students, educators, and professionals affiliated with Messiah College, aspiring to fulfill Biblical mandates to foster justice, empower the poor, reconcile adversaries, and care for the earth, in the context of academic engagement; Waterfall Glen – a multi-user object-oriented (MOO) collaboratory at Argonne National Laboratory (Henline, 1998); The International Personality Item Pool (IPIP) – a scientific collaboratory for the development of advanced measures of personality and other individual differences (Henline, 1998); TANGO – a set of collaborative applications for education and distance learning, command and control, health care, and computer steering (Henline, 1998). Special consideration should be given to TANGO (Henline, 1998) because it is a step forward in implementing collaboratories, having distance learning and health care as its main domains of operation. Henline (1998) mentions that the collaboratory has been successfully used to implement applications for distance learning, a command and control center, a telemedical bridge, and a remote consulting tool suite. Other examples include Collaborative architecture and Interactive architecture, the work of Adam Somlai-Fischer and Usman Haque, and The Internet & Society Collaboratory, supported by Google in Germany.
Summary
To date, most collaboratories have been applied largely in scientific research projects, with varying degrees of success and failure. Recently, however, collaboratory models have been applied to additional areas of scientific research in both national and international contexts. As a result, a substantial knowledge base has emerged, helping us understand their development and application in science and industry (Cogburn, 2003). Extending the collaboratory concept to include both social and behavioral research as well as more scientists from the developing world could potentially strengthen the concept and provide opportunities to learn more about the social and technical factors that support a distributed knowledge network (Cogburn, 2003). The use of collaborative technologies to support geographically distributed scientific research is gaining wide acceptance in many parts of the world. Such collaboratories hold great promise for international cooperation in critical areas of scientific research and beyond.
As the frontiers of knowledge are pushed back the problems get more and more difficult, often requiring large multidisciplinary teams to make progress. The collaboratory is emerging as a viable solution, using communication and computing technologies to relax the constraints of distance and time, creating an instance of a virtual organization. The collaboratory is both an opportunity with very useful properties, but also a challenge to human organizational practices (Olson, 2002). See also Information and communication technologies Human–computer interaction User-centered design Participatory design Footnotes References Berry, B., Byrd, A., & Wieder, A. (2013). Teacherpreneurs: Innovative teachers who lead but don't leave. San Francisco: Jossey-Bass. Bly, S. (1998). Special section on collaboratories, Interactions, 5(3), 31, New York: ACM Press. Bos, N., Zimmerman, A., Olson, J., Yew, J., Yerkie, J., Dahl, E. and Olson, G. (2007), From Shared Databases to Communities of Practice: A Taxonomy of Collaboratories. Journal of Computer-Mediated Communication, 12: 652–672. Chin, G., Jr., & Lansing, C. S. (2004). Capturing and supporting contexts for scientific data sharing via the biological sciences collaboratory, Proceedings of the 2004 ACM conference on computer supported cooperative work, 409-418, New York: ACM Press. Cogburn, D. L. (2003). HCI in the so-called developing world: what's in it for everyone, Interactions, 10(2), 80-87, New York: ACM Press. Cosley, D., Frankowsky, D., Kiesler, S., Terveen, L., & Riedl, J. (2005). How oversight improves member-maintained communities, Proceedings of the SIGCHI conference on Human factors in computing systems, 11-20. Finholt, T. A. (1995). Evaluation of electronic work: research on collaboratories at the University of Michigan, ACM SIGOIS Bulletin, 16(2), 49–51. Finholt, T.A. Collaboratories. (2002). In B. Cronin (Ed.), Annual Review of Information Science and Technology (pp. 74–107), 36. Washington, D.C.: American Society for Information Science. Finholt, T.A., & Olson, G.M. (1997). From laboratories to collaboratories: A new organizational form for scientific collaboration. Psychological Science, 8, 28-36. Henline, P. (1998). Eight collaboratory summaries, Interactions, 5(3), 66–72, New York: ACM Press. Olson, G.M. (2004). Collaboratories. In W.S. Bainbridge (Ed.), Encyclopedia of Human-Computer Interaction. Great Barrington, MA: Berkshire Publishing. Olson, G.M., Teasley, S., Bietz, M. J., & Cogburn, D. L. (2002). Collaboratories to support distributed science: the example of international HIV/AIDS research, Proceedings of the 2002 annual research conference of the South African institute of computer scientists and information technologists on enablement through technology, 44–51. Olson, G.M., Zimmerman, A., & Bos, N. (Eds.) (2008). Scientific collaboration on the Internet. Cambridge, MA: MIT Press. Pancerella, C.M., Rahn, L. A., Yang, C. L. (1999). The diesel combustion collaboratory: combustion researchers collaborating over the internet, Proceedings of the 1999 ACM/IEEE conference on supercomputing, New York: ACM Press. Rosenberg, L. C. (1991). Update on National Science Foundation funding of the “collaboratory”, Communications of the ACM, 34(12), 83, New York: ACM Press. Sonnenwald, D.H. (2003). Expectations for a scientific collaboratory: A case study, Proceedings of the 2003 international ACM SIGGROUP conference on supporting group work, 68–74, New York: ACM Press. Sonnenwald, D.H., Whitton, M.C., & Maglaughlin, K.L. (2003). 
Scientific collaboratories: evaluating their potential, Interactions, 10(4), 9–10, New York: ACM Press. Wulf, W. (1989, March). The national collaboratory. In Towards a national collaboratory. Unpublished report of a National Science Foundation invitational workshop, Rockefeller University, New York. Wulf, W. (1993) The collaboratory opportunity. Science, 261, 854-855. 1989 introductions Technology systems Collaboration Laboratories
1367430
https://en.wikipedia.org/wiki/IBM%20RT%20PC
IBM RT PC
The IBM RT PC (RISC Technology Personal Computer) is a family of workstation computers from IBM introduced in 1986. These were the first commercial computers from IBM that were based on a reduced instruction set computer (RISC) architecture. The RT PC uses IBM's proprietary ROMP microprocessor, which commercialized technologies pioneered by IBM Research's 801 experimental minicomputer (the 801 was the first RISC design). The RT PC runs one of three operating systems: AIX, the Academic Operating System (AOS), or Pick. The RT PC's performance is relatively poor compared to contemporary workstations, and it had little commercial success as a result; IBM responded by introducing the RS/6000 workstations in 1990, which used a new IBM-proprietary RISC processor, the POWER1. All RT PC models were discontinued by May 1991.
Hardware
Two basic types were produced: a floor-standing desk-side tower and a table-top desktop. Both types featured a special board slot for the processor card, as well as machine-specific RAM cards. Each machine had one processor slot, one co-processor slot, and two RAM slots. There were three versions of the processor card:
The Standard Processor Card, or 032 card, had a 5.88MHz clock rate (170ns cycle time) and 1MB of standard memory (expandable via 1, 2, or 4MB memory boards). It could be accompanied by an optional Floating-Point Accelerator (FPA) board, which contained a 10MHz National Semiconductor NS32081 floating-point coprocessor. This processor card was used in the original RT PC models (010, 020, 025, and A25) announced on January 21, 1986.
The Advanced Processor Card had a 10MHz clock (100ns) and either 4MB of memory on the processor card or external 4MB ECC memory cards, and featured a built-in 20MHz Motorola 68881 floating-point processor. It could be accompanied by an optional Advanced Floating-Point Accelerator (AFPA) board, which was based around the Analog Devices ADSP-3220 FP multiplier and ADSP-3221 FP ALU. Models 115, 125, and B25 used these cards and were announced on February 17, 1987.
The Enhanced Advanced Processor Card had a 12.5MHz clock (80ns) and 16MB of on-board memory, and an enhanced Advanced Floating-Point Accelerator was standard. Models 130, 135, and B35 used these cards and were announced on July 19, 1988.
All RT PCs supported up to 16MB of memory. Early models were limited to 4MB of memory because of the capacity of the DRAM ICs used; later models could have up to 16MB. I/O was provided by eight ISA bus slots. Storage was provided by a 40 or 70MB hard drive, upgradeable to 300MB. External SCSI cabinets could be used to provide more storage. Also standard were a mouse, either a 720×512 or 1024×768 pixel-addressable display, and a 4Mbit/s Token Ring network adapter or 10BASE2 Ethernet adapter. For running CADAM, a computer-aided design (CAD) program, an IBM 5080 or 5085 graphics processor could be attached. The 5080 and 5085 were contained in a large cabinet that would have been positioned alongside the RT PC. The 5080 was used with a 1,024- by 1,024-pixel IBM 5081 display.
6152 Academic System
The 6152 Academic System was a PS/2 Model 60 with a RISC Adapter Card, a Micro Channel board containing a ROMP, its support ICs, and up to 8MB of memory. It allowed the PS/2 to run ROMP software compiled for the AOS. AOS was downloaded from an RT PC running AOS via a LAN TCP/IP interface.
Software
One of the novel aspects of the RT design was the use of a microkernel.
The keyboard, mouse, display, disk drives, and network were all controlled by a microkernel called the Virtual Resource Manager (VRM), which allowed multiple operating systems to be booted and run at the same time. One could "hotkey" from one operating system to the next using the Alt-Tab key combination; each OS in turn would get possession of the keyboard, mouse, and display. Both AIX version 2 and the Pick operating system were ported to this microkernel. Pick was unique in being a unified operating system and database, and ran various accounting applications. It was popular with retail merchants, and accounted for about 4,000 units of sales. The primary operating system for the RT was AIX version 2. Much of the AIX v2 kernel was written in a variant of the PL/I programming language, which proved troublesome during the migration to AIX v3. AIX v2 included full TCP/IP networking support, as well as SNA, and two networking file systems: NFS, licensed from Sun Microsystems, and IBM Distributed Services (DS). DS had the distinction of being built on top of SNA, and thereby being fully compatible with DS on the IBM midrange AS/400 and mainframe systems. For graphical user interfaces, AIX v2 came with the X10R3, and later the X10R4 and X11, releases of the X Window System from MIT, together with the Athena widget set. Compilers for the C and Fortran programming languages were available. Some RT PCs were also shipped with the Academic Operating System (AOS), an IBM port of 4.3BSD Unix to the RT PC. It was offered as an alternative to AIX, the usual RT PC operating system, to US universities eligible for an IBM educational discount. AOS added a few extra features to 4.3BSD, notably NFS, and an almost ANSI C-compliant C compiler. A later version of AOS derived from 4.3BSD-Reno existed, but it was not widely distributed. The RT provided an important stepping-stone in the development of the X Window System when a group at Brown University ported X version 9 to the system. Problems with reading unaligned data on the RT forced an incompatible protocol change, leading to version 10 in late 1985.
Sales and market reception
The IBM RT had a varied life even from its initial announcement. Most industry watchers summed up the RT as "not enough power, too high a price, and too late." Many thought that the RT was part of IBM's Personal Computer line of computers, a confusion that started with its initial name, "IBM RT PC". At first it seemed that even IBM regarded it as a high-end Personal Computer, given the stunning lack of support the system received. This could be explained by the sales commission structure IBM gave the system: salesmen received commissions similar to those for the sale of a PC. With typically configured models priced at $20,000, it was a hard sell, and the lack of any reasonable commission lost the interest of IBM's sales force. Both MIT's Project Athena and Brown University's Institute for Research in Information and Scholarship found the RT inferior to other computers. The performance of the RT, in comparison with other contemporaneous Unix workstations, was not outstanding. In particular, the floating-point performance was poor, and its reputation suffered mid-life with the discovery of a bug in the floating-point square root routine. With the RT system's modest processing power (when first announced), and with announcements later that year by some other workstation vendors, industry analysts questioned IBM's directions.
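The alignment issue mentioned above is a general one: a wire protocol that places a multi-byte field at an odd byte offset forces clients on alignment-restricted processors such as the ROMP either to assemble the field a byte at a time or to change the protocol so every field falls on its natural boundary. A hypothetical sketch of the idea in Python's struct notation (this is not the actual X protocol layout):

```python
import struct

# Hypothetical 8-byte message header: a 2-byte opcode followed by a 4-byte
# length. Packed without padding ('>HI'), the length starts at byte offset 2,
# which a C client reading it with a single 32-bit load would access
# unaligned; two pad bytes ('>HxxI') move it to offset 4, a natural boundary.
UNALIGNED = ">HI"    # 6 bytes, 32-bit field at offset 2
ALIGNED = ">HxxI"    # 8 bytes, 32-bit field at offset 4

msg = struct.pack(ALIGNED, 7, 1024)
opcode, length = struct.unpack(ALIGNED, msg)
print(opcode, length)  # 7 1024
```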
AIX for the RT was IBM's second foray into UNIX (its first was PC/IX for the IBM PC in September 1984). The lack of software packages and IBM's sometimes lackluster support of AIX, in addition to occasionally unusual departures from traditional, de facto UNIX operating system standards, caused most software suppliers to be slow in embracing the RT and AIX. The RT found its home mostly in the CAD/CAM and CATIA markets, with some inroads into the scientific and educational areas, especially after the announcement of AOS and substantial discounts for the educational community. The RT running the Pick OS also found use in retail store control systems, given the strong database, accounting, and general business support in the Pick OS. The RT also did well, thanks to its SNA and DS support, as an interface system between IBM's larger mainframes and some of its point-of-sale terminals, store control systems, and machine shop control systems. Approximately 23,000 RTs were sold over the product's lifetime, with some 4,000 going into IBM's development and sales organizations. Pick OS sales accounted for about 4,000 units. When the RT PC was introduced in January 1986, it competed with several workstations from established providers: the Apollo Computer Domain Series 3000, the DEC MicroVAX II, and the Sun Microsystems Sun-3.
As part of the NSFNET backbone
According to one history of the Internet in the 1980s, in 1987 "the NSF starts to implement its T1 backbone between the supercomputing centers with 24 RT-PCs in parallel implemented by IBM as ‘parallel routers’. The T1 idea is so successful that proposals for T3 speeds in the backbone begin." The National Science Foundation Network (NSFNET) was the forerunner of the Internet. From July 1988 to November 1992, the NSFNET's T1 backbone network used routers built from multiple RT PCs (typically nine) interconnected by a Token Ring LAN.
References
Further reading
Contains 4 significant technical articles about the machine, processor and architecture. IBM Pub SA23-1057-00
Chapter 5 describes the origins of the PowerPC architecture in the IBM 801 and RT PC.
Contains an in-depth description of the origins of the RT PC, its development, and subsequent commercial failure.
External links
IBM RT PC-page
The IBM RT Information Page
JMA Systems's FAQ
Archive video in operation
This entry incorporates text from the RT/PC FAQ.
RT PC Computer-related introductions in 1986 32-bit computers
577235
https://en.wikipedia.org/wiki/James%E2%80%93Younger%20Gang
James–Younger Gang
The James–Younger Gang was a notable 19th-century gang of American outlaws that revolved around Jesse James and his brother Frank James. The gang was based in the state of Missouri, the home of most of the members. Membership fluctuated from robbery to robbery, as the outlaws' raids were usually separated by many months. As well as the notorious James brothers, at various times it included the Younger brothers (Cole, Jim, John, and Bob), John Jarrett (married to the Youngers' sister Josie), Arthur McCoy, George Shepherd, Oliver Shepherd, William McDaniel, Tom McDaniel, Clell Miller, Charlie Pitts (born Samuel A. Wells), and Bill Chadwell (alias Bill Stiles). The James–Younger Gang had its origins in a group of Confederate bushwhackers that participated in the bitter partisan fighting that wracked Missouri during the American Civil War. After the war, the men continued to plunder and murder, though the motive shifted to personal profit rather than in the name of the Confederacy. The loose association of outlaws did not truly become the "James–Younger Gang" until 1868 at the earliest, when the authorities first named Cole Younger, John Jarrett, Arthur McCoy, George Shepherd and Oliver Shepherd as suspects in the robbery of the Nimrod Long bank in Russellville, Kentucky. The James–Younger Gang dissolved in 1876, following the capture of the Younger brothers in Minnesota during the unsuccessful attempt to rob the Northfield First National Bank. Three years later, Jesse James organized a new gang, including Clell Miller's brother Ed and the Ford brothers (Robert and Charles), and renewed his criminal career. This career came to an end in 1882 when Robert Ford shot James from behind, killing him. For nearly a decade following the Civil War, the James–Younger Gang was among the most feared, most publicized, and most wanted confederations of outlaws on the American frontier. Though their crimes were reckless and brutal, many members of the gang commanded a notoriety in the public eye that earned the gang significant popular support and sympathy. The gang's activities spanned much of the central part of the country; they are suspected of having robbed banks, trains, and stagecoaches in at least eleven states: Missouri, Kentucky, Tennessee, Iowa, Kansas, Minnesota, Texas, Arkansas, Louisiana, Alabama, and West Virginia. History Origins From the beginning of the American Civil War, the state of Missouri had chosen not to secede from the Union but not to fight for it or against it either: its position, as determined by an 1861 constitutional convention, was officially neutral. Missouri, however, had been the scene of much of the agitation leading up to the outbreak of the war, and was home to dedicated partisans from both sides. In the mid-1850s, local Unionists and Secessionists had begun to battle each other throughout the state, and by the end of 1861, guerrilla warfare erupted between Confederate partisans known as "bushwhackers" and the more organized Union forces. The Missouri State Guard and the newly elected Governor of Missouri, Claiborne Fox Jackson, who maintained implicit Southern sympathies, were forced into exile as Union troops under Nathaniel Lyon and John C. Frémont took control of the state. Still, pro-Confederate guerrillas resisted; by early 1862, the Unionist provisional government mobilized a state militia to fight the increasingly organized and deadly partisans. 
This conflict (fought largely, though not exclusively, between Missourians themselves) raged until after the fall of Richmond and the surrender of General Robert E. Lee, costing thousands of lives and devastating broad swathes of the Missouri countryside. The conflict rapidly escalated into a succession of atrocities committed by both sides. Union troops often executed or tortured suspects without trial and burned the homes of suspected guerrillas and those suspected of aiding or harboring them. Where credentials were suspect, the accused guerrilla was often executed, as in the case of Lt. Col. Frisby McCullough after the Battle of Kirksville. Bushwhackers, meanwhile, frequently went house to house, executing Unionist farmers. The James and Younger brothers belonged to families from an area known as "Little Dixie" in western Missouri with strong ties to the South. Zerelda Samuel, the mother of Frank and Jesse James, was an outspoken partisan of the South, though the Youngers' father, Henry Washington Younger, was believed to be a Unionist. Cole Younger's initial decision to fight as a bushwhacker is usually attributed to the death of his father at the hands of Union forces in July 1862. He and Frank James fought under one of the most famous Confederate bushwhackers, William Clarke Quantrill, though Cole eventually joined the regular Confederate Army. Jesse James began his guerrilla career in 1864, at the age of sixteen, fighting alongside Frank under the leadership of Archie Clement and "Bloody Bill" Anderson. At the war's end, Frank James surrendered in Kentucky; Jesse James attempted to surrender to Union militia but was shot through the lung outside of Lexington, Missouri. He was nursed back to health by his cousin, Zerelda "Zee" Mimms, whom he eventually married. When Cole Younger returned from a mission to California, he learned that Quantrill and Anderson had both been killed. The James brothers, however, continued to associate with their old guerrilla comrades, who remained together under the leadership of Archie Clement. It was likely Clement who, amid the tumult of Reconstruction in Missouri, turned the guerrillas into outlaws. Early years: 1866 to 1870 On February 12, 1866, a group of gunmen carried out one of the first daylight, peacetime, armed bank robberies in U.S. history when they held up the Clay County Savings Association in Liberty, Missouri. The outlaws stole some $60,000 in cash and bonds and killed a bystander named George Wymore on the street outside the bank. State authorities suspected Archie Clement of leading the raid, and promptly issued a reward for his capture. In later years, the list of suspects grew to include Jesse and Frank James, Cole Younger, John Jarrett, Oliver Shepherd, Bud and Donny Pence, Frank Greg, Bill and James Wilkerson, Joab Perry, Ben Cooper, Red Mankus, and Allen Parmer (who later married Susan James, Frank and Jesse's sister). Four months later, on June 13, 1866, two members of Quantrill's Raiders were freed from prison in Independence, Missouri; the jailer, Henry Bugler, was killed. The James brothers are believed to have been involved. The crime began a string of robberies, many of which were linked to Clement's group of bushwhackers. The hold-up most clearly linked to the group was of Alexander Mitchell and Company in Lexington, Missouri, on October 30, 1866, which netted $2,011.50. Clement was also linked to violence and intimidation against officials of the Republican government that now held power in the state. 
On election day, Clement led his men into Lexington, where they drove Republican voters away from the polls, thereby securing a Republican defeat. A detachment of state militiamen was dispatched to the town. They convinced the bushwhackers to disperse, then attempted to capture Clement, who still had a price on his head. Clement refused to surrender and was shot down in a wild gunfight on the streets of Lexington. Despite the death of Clement, his old followers remained together, and robbed a bank across the Missouri River from Lexington in Richmond, Missouri, on May 22, 1867, in which the town mayor John B. Shaw and two lawmen [Barry and George Griffin] were killed. This was followed on March 20, 1868, by a raid on the Nimrod Long bank in Russellville, Kentucky. In the aftermath of the two raids, however, the more senior bushwhackers were killed, captured or simply left the group. This set the stage for the emergence of the James and Younger brothers, and the transformation of the old crew into the James–Younger Gang. John Jarrett and Arthur McCoy were mentioned in numerous newspaper accounts, so they were likely active in gang activities up to 1875. On December 7, 1869, Frank and Jesse James are believed to have robbed the Daviess County Savings Association in Gallatin, Missouri. Jesse is suspected of having shot down the cashier, John W. Sheets, in the mistaken belief that he was Samuel P. Cox, the Union militia officer who had ambushed and killed "Bloody Bill" Anderson during the Civil War. The James brothers were unknown up to this point; this may have been their first robbery. Their names were later added to previous robberies as an afterthought. 1871 to 1873 John Younger was almost arrested in Dallas County, Texas in January 1871. He killed two lawmen [Nichols and Mcmahan] during the attempt and escaped. On June 3, 1871, the gang robbed a bank in Corydon, Iowa; the James and Younger brothers were suspects. The bank contacted the Pinkerton National Detective Agency in Chicago, the first involvement of the famous agency in the pursuit of the James–Younger Gang. Agency founder Allan Pinkerton dispatched his son, Robert Pinkerton, who joined a county sheriff in tracking the gang to a farm in Civil Bend, Missouri. A short gunfight ended indecisively as the gang escaped. On June 24, Jesse James wrote a letter to the Kansas City Times, claiming Republicans were persecuting him for his Confederate loyalties by accusing him and Frank of carrying out the robberies. "But I don't care what the degraded Radical party thinks about me," he wrote, "I would just as soon they would think I was a robber as not." On April 29, 1872, the gang robbed a bank in Columbia, Kentucky. One of the outlaws shot the cashier, R. A. C. Martin, who had refused to open the safe. On September 23, 1872, three men (identified by former bushwhacker Jim Chiles as Jesse James and Cole and John Younger) robbed a ticket booth of the Second Annual Kansas City Industrial Exposition, amid thousands of people. They took some $900, and accidentally shot a little girl in the ensuing struggle with the ticket-seller. Apart from Chiles' testimony, there is no other evidence this crime was committed by the James or Younger brothers, and Jesse later wrote a letter denying his or the Youngers' involvement. Cole was furious over this, because neither he nor brother John had been linked to the crime before the letter. 
The crime was praised by Kansas City Times editor John Newman Edwards in a famous editorial entitled "The Chivalry of Crime." Edwards soon published an anonymous letter from one of the outlaws (believed to be Jesse) that referred to the approaching presidential election: "Just let a party of men commit a bold robbery, and the cry is hang them. But [President Ulysses S.] Grant and his party can steal millions and it is all right," the outlaw wrote. "They rob the poor and rich, and we rob the rich and give to the poor." On May 27, 1873, the James–Younger Gang robbed the Ste. Genevieve Savings Association in Ste. Genevieve, Missouri. As they rode off they fired in the air and shouted, "Hurrah for Hildebrand!" Samuel S. Hildebrand was a famous Confederate bushwhacker from the area who had recently been shot dead in Illinois. Arthur McCoy had lived in this area and knew it quite well; he was likely involved and may have been the planner and leader. On July 21, 1873, the gang carried out what was arguably the first train robbery west of the Mississippi River, derailing a locomotive of the Rock Island Railroad near Adair, Iowa. Engineer John Rafferty died in the crash. The outlaws took $2,337 from the express safe in the baggage car, having narrowly missed a transcontinental express shipment of a large amount of cash. On November 24, John Newman Edwards published a lengthy glorification of the James brothers, Cole and John Younger, and Arthur McCoy, in a twenty-page special supplement to his newspaper the St. Louis Dispatch (Edwards had moved from the Kansas City Times to the Dispatch in 1873). Most of the supplement, entitled "A Terrible Quintet," was devoted to Jesse James, the gang's public face, and the article stressed their Confederate loyalties. 1874 to 1876 In January 1874, the outlaws were suspected of holding up a stagecoach in Bienville Parish, Louisiana. Later, another suspected stage robbery took place between Malvern and Hot Springs, Arkansas. There, the gang returned a pocket watch to a Confederate veteran, saying that Northern men had driven them to outlawry and that they intended to make them pay for it. On January 31, the gang robbed a southbound train on the Iron Mountain Railway at Gads Hill, Missouri. For the first of only two times in all their train robberies, the outlaws robbed the passengers. In both train robberies, their usual target, the safe in the baggage car belonging to an express company, held an unusually small amount of money. On this occasion, the outlaws reportedly examined the hands of the passengers to ensure that they did not rob any working men. Many newspapers reported this was actually done by the "Arthur McCoy" gang. To correct such errors, the gang telegraphed a report of the Gads Hill robbery to the St. Louis Dispatch newspaper for publication. The Adams Express Company, which owned the safe robbed at Gads Hill, hired the Pinkerton National Detective Agency. On March 11, 1874, John W. Whicher, the agent who was sent to investigate the James brothers, was found shot to death alongside a rural road in Jackson County, Missouri. Two other agents, John Boyle and Louis J. Lull, accompanied by Deputy Sheriff Edwin B. Daniels, posed as cattle buyers to track the Youngers. On March 17, 1874, the trio was stopped and attacked by John and Jim Younger on a rural stretch of road near Monegaw Springs, Missouri. Daniels was killed instantly; Lull and John Younger shot each other, Younger dying at the scene and Lull suffering mortal wounds, while Boyle and Jim Younger escaped. 
Lull lived long enough to testify before a coroner's inquest before succumbing to his wounds a few days later. The Pinkerton deaths added to the growing embarrassment suffered by Missouri's first post-war Democratic governor, Silas Woodson. He issued a $2,000 reward offer for the Iron Mountain robbers (the reward usually offered for criminals was $300). He also persuaded the state legislature to provide $10,000 for a secret fund to track down the famous outlaws. The first agent, J. W. Ragsdale, was hired on April 9, 1874. On August 30, three of the gang held up a stagecoach across the Missouri River from Lexington, Missouri, in view of hundreds of onlookers on the bluffs of the town. A passenger identified two of the robbers as Frank and Jesse James. The acting governor, Charles P. Johnson, dispatched an agent selected from the St. Louis police department to investigate. The gang next robbed a train on the Kansas Pacific Railroad near Muncie, Kansas, on December 8, 1874. It was one of the outlaws' most successful robberies, gaining them $30,000. William "Bud" McDaniel was captured by a Kansas City police officer after the robbery, and later was shot during an escape attempt. On the night of January 25, 1875, Pinkerton agents surrounded the James farm in Kearney, Missouri. Frank and Jesse James had been there earlier but had already left. When the Pinkertons threw an iron incendiary device into the house, it exploded when it rolled into a blazing fireplace. The blast nearly severed the right arm of Zerelda Samuel, the James boys' mother (the arm had to be amputated at the elbow that night), and killed their 9-year-old half-brother, Archie Samuel. On April 12, 1875, an unknown gunman shot dead Daniel Askew, a neighbor and former Union militiaman who may have been suspected of providing the Pinkertons with a base for their raid. Allan Pinkerton then abandoned the chase for the James–Younger Gang. By September 1875, at least part of the gang had ventured east to Huntington, West Virginia, where they robbed a bank on September 7. Two new members participated: Tom McDaniel (brother of Bud) and Tom Webb (a Confederate veteran who had been at Lawrence with Frank and Cole). McDaniel was killed by a posse and Webb was caught. The other two robbers, Frank and Cole, escaped. Also in 1875, the two James brothers moved to the outskirts of Nashville, Tennessee, probably to save their mother from further raids by detectives. Once there, Jesse James began to write letters to the local press, asserting his place as a Confederate hero and a martyr to Radical Republican vindictiveness. On July 7, 1876, Frank and Jesse James, Cole and Bob Younger, Clell Miller, Charlie Pitts, Bill Chadwell and Hobbs Kerry robbed the Missouri Pacific Railroad at the "Rocky Cut" near Otterville, Missouri. The new man, Kerry, was arrested soon after and he readily identified his accomplices. Northfield, Minnesota Raid The Rocky Cut raid set the stage for the final act of the James–Younger Gang: the famous Northfield, Minnesota raid on September 7, 1876. The target was the First National Bank of Northfield, which was far outside the gang's usual territory. The idea for the raid came from Jesse and Bob Younger. Cole tried to talk his brother out of the plan, but Bob refused to back down. Reluctantly, Cole agreed to go, writing to his brother Jim in California to come home. Jim Younger had never wanted anything to do with Cole's outlaw activities, but he agreed to go out of family loyalty. The Northfield bank was not unusually rich. 
According to public reports, it was a perfectly ordinary rural bank, though rumors persisted that General Adelbert Ames, son of the owner of the Ames Mill in Northfield, had deposited $50,000 there. Shortly after the robbery, Bob Younger declared that they had selected it because of its connection to two Union generals and Radical Republican politicians: Benjamin Butler and his son-in-law Adelbert Ames. General Ames had just stepped down as Governor of Mississippi, where he had been strongly identified with civil rights for freedmen. He had recently moved to Northfield, where his father owned the mill on the Cannon River and had a large amount of stock in the bank. One of the outlaws "had a spite" against Ames, Bob said. Cole Younger said much the same thing years later and recalled greeting "General Ames" on the street in Northfield just before the robbery. Cole, Jim and Bob Younger, Frank and Jesse James, Charlie Pitts, Clell Miller and Bill Chadwell (also known as Bill Stiles) took the train to St. Paul, Minnesota, in early September 1876. After a layover in St. Paul they divided into two groups, one going to Mankato, the other to Red Wing, on either side of Northfield. They purchased expensive horses and scouted the terrain around the towns, agreeing to meet south of Northfield along the Cannon River near Dundas on the morning of September 7, 1876. The gang attempted to rob the bank about 2:00 p.m. on September 7. Northfield residents had seen the gang leave a local restaurant near the mill shortly after noon, where they dined on fried eggs. They testified at the Younger brothers' trial that the group smelled of alcohol and that the gang was obviously under the influence when they greeted General Ames. Three of the outlaws (Bob Younger, Frank James and Charlie Pitts) crossed the bridge by the Ames Mill and entered the bank; the other five (Jesse James, Cole and Jim Younger, Bill Stiles and Clell Miller) stood guard outside. Two were standing outside the bank’s front door and the other three were waiting in Mills Square to guard the gang's escape route. According to some reports, J. S. Allen shouted to the townspeople, “Get your guns, boys, they’re robbing the bank!” Once local citizens realized a robbery was in progress, several took up arms from local hardware stores. Shooting from behind cover, they poured deadly fire on the outlaws. During the gun battle, medical student Henry Wheeler killed Miller, shooting from a third-floor window of the Dampier House Hotel, across the street from the bank. Another civilian named A. R. Manning, who took cover at the corner of the Sciver building down the street, killed Stiles. Other civilians wounded the Younger brothers (Cole was shot in his left hip, Bob suffered a shattered elbow, and Jim was shot in the jaw). The only civilian fatality on the street was 30-year-old Nicolaus Gustafson, an unarmed recent Swedish immigrant, who was killed by Cole Younger at the corner of 5th Street and Division. Thirteen Swedish families lived west of Northfield in the Millersburg area in 1876, including Peter Gustafson, who had recently been joined by his brother Nicolaus and nephew Ernst from Sweden. West of Millersburg that morning, Peter Youngquist harnessed his mules and headed for Northfield to sell farm produce, accompanied by Gustafson and three others. The Swedes arrived in Northfield about 1:00 p.m. and set up their vegetable wagon along the Cannon River near 5th Street. About 2:00 p.m., they heard gunshots. 
Nicolaus Gustafson ran to the intersection of Division and 5th a block away, where he was shot in the head as the bank was being robbed. Gustafson died four days later. Another Swede named John Olson was an eyewitness to the Gustafson shooting and later testified against Cole Younger. Inside the bank, the assistant cashier Joseph Lee Heywood refused to open the safe and was murdered for resisting. The two other employees in the bank were teller Alonzo Bunker and assistant bookkeeper Frank Wilcox. Bunker escaped from the bank by running out the back door despite being wounded in the right shoulder by Pitts as he ran. The three robbers then ran out of the bank after hearing the shooting outside and mounted their horses to make a run for it, having taken only several bags of nickels from the bank. Every year in September, Northfield hosts "Defeat of Jesse James Days", a celebration of the town's victory over the James–Younger Gang. In addition to the death of Miller and Stiles, every one of the rest of the gang was wounded, including Frank James and Pitts, both shot in their right legs. Jesse James was the last one to be shot, taking a bullet in the thigh as the gang escaped. The six surviving outlaws rode out of town on the Dundas Road toward Millersburg where four of them had spent the night before. Aftermath Minnesotans joined posses and set up picket lines by the hundreds. After several days the gang had only reached the western outskirts of Mankato when they decided to split up (despite persistent stories to the contrary, Cole Younger told interviewers that they all agreed to the decision). The Youngers and Pitts remained on foot, moving west, until finally they were cornered in a swamp called Hanska Slough, just south of La Salle, Minnesota, on September 21, two weeks after the Northfield raid. In the gunfight that followed, Pitts was killed and the Youngers were again wounded. The Youngers surrendered and pleaded guilty to murder in order to avoid execution. Frank and Jesse secured horses and fled west across southern Minnesota, turning south just inside the border of the Dakota Territory. In the face of hundreds of pursuers and a nationwide alarm, Frank and Jesse escaped, but the infamous James–Younger Gang was no more. On September 23, 1876, the Younger brothers were taken to the Rice County jail in Faribault. On November 16, a grand jury issued four indictments—one each for the first-degree murders of Joseph Heywood and Nicolaus Gustafson, one for bank robbery, and one for assault with deadly weapons on the wounded bank clerk, Bunker. The three brothers pleaded guilty on November 20, 1876, and were sentenced to life terms in the Minnesota Territorial Prison at Stillwater. Nicolaus Gustafson was buried in Northfield because the Millersburg Swedes had no cemetery in 1876. After his death, the Millersburg Swedes determined to establish their own church and burial ground. Peter Youngquist and Carl Hirdler donated an acre of land adjacent to their homes overlooking Circle Lake and in 1877 John Olson was hired to build the Christdala Evangelical Swedish Lutheran Church west of Millersburg. Today the church is listed on the National Register of Historic Places and historical markers in front of the church tell the story of Nicolaus Gustafson and the founding of Christdala. Final years Having successfully escaped, Frank James joined Jesse in Nashville, Tennessee, where they spent the next three years living peacefully. 
Frank in particular seems to have thrived in his new life farming in the Whites Creek area. Jesse, however, did not adapt well to peace. Accordingly, he gathered up new recruits, formed a new gang and returned to a life of crime. On October 8, 1879, Jesse and his gang robbed the Chicago and Alton Railroad near Glendale, Missouri. Unfortunately for Jesse, one of the men, Tucker Basham, was captured by a posse. He told authorities he had been recruited by Bill Ryan. On September 3, 1880, Jesse James and Bill Ryan robbed a stagecoach near Mammoth Cave, Kentucky. On October 5, 1880, they robbed the store of John Dovey in Mercer, Kentucky. On March 11, 1881, Jesse, Ryan, and Jesse's cousin Wood Hite robbed a federal paymaster at Muscle Shoals, Alabama, taking $5,240. Shortly afterward, a drunk and boastful Ryan was arrested in Whites Creek, near Nashville, and both Frank and Jesse James fled back to Missouri. On July 15, 1881, Frank and Jesse James, Wood and Clarence Hite, and Dick Liddil robbed the Rock Island Railroad near Winston, Missouri, of $900. Train conductor William Westfall and passenger John McCullough were killed. On September 7, 1881, Jesse James carried out his last train robbery, holding up the Chicago and Alton Railroad. The gang held up the passengers when the express safe proved to be nearly empty. With this new outbreak of train robberies, the new Governor of Missouri, Thomas T. Crittenden, convinced the state's railroad and express executives to put up the money for a large reward for the capture of the James brothers. Creed Chapman and John Bugler were arrested for participating in the robbery on September 7, 1881. Though they were confirmed as having participated in the robbery by convicted members of the gang, neither was ever convicted. In December 1881, Wood Hite was killed by Liddil in an argument over Martha Bolton, the sister of the Fords. Bob Ford, not yet a member of the gang, assisted Liddil in his gunfight. Ford and Liddil, with Bolton as an intermediary, made deals with Governor Crittenden. On February 11, 1882, James Timberlake arrested Wood Hite's brother Clarence, who made a confession but died of tuberculosis in prison. Ford, on the other hand, agreed to bring down Jesse James in return for the reward. On April 3, 1882, Ford fatally shot Jesse James behind the ear at James's rented apartment in St. Joseph, Missouri. Bob and his brother Charley surrendered to the authorities, pleaded guilty, and were promptly pardoned by Crittenden. On October 4, 1882, Frank James surrendered to Crittenden. Accounts say that Frank surrendered with the understanding that he would not be extradited to Northfield, Minnesota. Only two cases ever came to trial: one in Gallatin, Missouri, for the July 15, 1881, robbery of the Rock Island Line train at Winston, Missouri in which a train crewman and a passenger were killed, and one in Huntsville, Alabama, for the March 11, 1881, robbery of a United States Army Corps of Engineers payroll at Muscle Shoals, Alabama. Frank James was found not guilty by juries in both cases (July 1883 at Gallatin and April 1884 at Huntsville). Missouri kept jurisdiction over him with other charges but they never came to trial and they kept him from being extradited to Minnesota. Frank James died on February 8, 1915, at the age of 72. The Youngers remained loyal to the Jameses when they were in prison and never informed on them. They ended up being model prisoners and in one incident helped keep other prisoners from escaping during a fire at the prison. 
Cole Younger also founded the longest-running prison newspaper in the United States during his stay at the Minnesota Territorial Prison in Stillwater. Bob Younger died in prison of tuberculosis on September 15, 1889, at the age of 36. After much legal dispute, Cole and Jim Younger were paroled in 1901 on the condition they remain in Minnesota. Jim committed suicide on October 19, 1902, while on parole in St. Paul, at the age of 54. Cole Younger received a pardon in 1903 on the condition that he leave Minnesota and never return. He traveled to Missouri where he joined a "Wild West" show with Frank James and died there on March 21, 1916, at the age of 72. Legacy Bill Ayers and Diana Oughton headed a splinter group of the Students for a Democratic Society (SDS) that called itself the "Jesse James Gang" and evolved into the Weather Underground. In popular culture See also Film The James Boys in Missouri (1908) The Younger Brothers (1908) Jesse James (1939) Days of Jesse James (1939) Bad Men of Missouri (1941) Jesse James at Bay (1941) The Younger Brothers (1949) Kansas Raiders (1950) The Great Missouri Raid (1951) The True Story of Jesse James (1957) Young Jesse James (1960) The Great Northfield Minnesota Raid (1972) The Long Riders (1980) Frank and Jesse (1994) American Outlaws (2001) The Assassination of Jesse James by the Coward Robert Ford (2007) Literature The James and Younger brothers are major characters in Wildwood Boys (William Morrow, 2000; New York), a biographical novel of "Bloody Bill" Anderson by James Carlos Blake The Story of Cole Younger, by Himself (Cole Younger, 1903; Chicago) See also History of Missouri Bushwhackers Reconstruction Era List of Old West gangs References Other sources Further reading McLachlan, Sean (2012) The Last Ride of the James–Younger Gang; Jesse James and the Northfield Raid 1876. Osprey Raid Series #35. Osprey Publishing. External links Northfield Bank Raid in MNopedia, the Minnesota Encyclopedia Website for the American Experience documentary on Jesse James, broadcast on PBS, with transcript and additional material Website for T. J. Stiles's biography of Jesse James, with excerpts of primary sources and additional essays Official website for the family of Frank & Jesse James: Stray Leaves, A James Family in America Since 1650 John Koblas, author of several Jesse James books Yesterday's News blog 1901 newspaper interview with Cole and Jim Younger upon their release from a Minnesota prison Northfield (Minnesota) Historical Society Bank Raid Wiki Defeat of Jesse James Days, held annually the weekend after Labor Day in Northfield, Minnesota The Younger Brothers: After the Attempted Robbery, a podcast by the Minnesota Historical Society on the Younger brothers' time in Stillwater State Prison Newspapers report the rise, exploits, and fall of Jesse James and the James–Younger Gang Today's James Younger Gang 19th century in Missouri Crime families Gangs in Missouri Missouri in the American Civil War Outlaw gangs in the United States
9418
https://en.wikipedia.org/wiki/Epic%20poetry
Epic poetry
An epic poem, or simply an epic, is a lengthy narrative poem typically about the extraordinary deeds of extraordinary characters who, in dealings with gods or other superhuman forces, gave shape to the mortal universe for their descendants. Etymology The English word epic comes from the Latin epicus, which itself comes from the Ancient Greek adjective (epikos), from (epos), "word, story, poem." In ancient Greek, 'epic' could refer to all poetry in dactylic hexameter (epea), which included not only Homer but also the wisdom poetry of Hesiod, the utterances of the Delphic oracle, and the strange theological verses attributed to Orpheus. Later tradition, however, has restricted the term 'epic' to heroic epic, as described in this article. Overview Originating before the invention of writing, primary epics, such as those of Homer, were composed by bards who used complex rhetorical and metrical schemes by which they could memorize the epic as received in tradition and add to the epic in their performances. Later writers like Virgil, Apollonius of Rhodes, Dante, Camões, and Milton adopted and adapted Homer's style and subject matter, but used devices available only to those who write. The oldest epic recognized is the Epic of Gilgamesh (), which was recorded in ancient Sumer during the Neo-Sumerian Empire. The poem details the exploits of Gilgamesh, the king of Uruk. Although recognized as a historical figure, Gilgamesh, as represented in the epic, is a largely legendary or mythical figure. The longest epic written is the ancient Indian Mahabharata (c. 3rd century BC–3rd century AD), which consists of 100,000 ślokas or over 200,000 verse lines (each shloka is a couplet), as well as long prose passages, so that at ~1.8 million words it is roughly twice the length of Shahnameh, four times the length of the Rāmāyaṇa, and roughly ten times the length of the Iliad and the Odyssey combined. Famous examples of epic poetry include the Sumerian Epic of Gilgamesh, the ancient Indian Mahabharata and Rāmāyaṇa in Sanskrit and Silappatikaram in Tamil, the Persian Shahnameh, the Ancient Greek Odyssey and Iliad, Virgil's Aeneid, the Old English Beowulf, Dante's Divine Comedy, the Finnish Kalevala, the Estonian Kalevipoeg, the German Nibelungenlied, the French Song of Roland, the Spanish Cantar de mio Cid, the Portuguese Os Lusíadas, the Armenian Daredevils of Sassoun, and John Milton's Paradise Lost. Epic poems of the modern era include Derek Walcott’s Omeros, Mircea Cărtărescu's The Levant and Adam Mickiewicz's Pan Tadeusz. Paterson by William Carlos Williams published in five volumes from 1946 to 1958, was inspired in part by another modern epic, The Cantos by Ezra Pound. Oral epics The first epics were products of preliterate societies and oral history poetic traditions. Oral tradition was used alongside written scriptures to communicate and facilitate the spread of culture. In these traditions, poetry is transmitted to the audience and from performer to performer by purely oral means. Early 20th-century study of living oral epic traditions in the Balkans by Milman Parry and Albert Lord demonstrated the paratactic model used for composing these poems. What they demonstrated was that oral epics tend to be constructed in short episodes, each of equal status, interest and importance. This facilitates memorization, as the poet is recalling each episode in turn and using the completed episodes to recreate the entire epic as he performs it. 
Parry and Lord also contend that the most likely source for written texts of the epics of Homer was dictation from an oral performance. Milman Parry and Albert Lord have argued that the Homeric epics, the earliest works of Western literature, were fundamentally an oral poetic form. These works form the basis of the epic genre in Western literature. Nearly all of Western epic (including Virgil's Aeneid and Dante's Divine Comedy) self-consciously presents itself as a continuation of the tradition begun by these poems. Composition and conventions In his work Poetics, Aristotle defines an epic as one of the forms of poetry, contrasted with lyric poetry and with drama in the form of tragedy and comedy. Epic poetry agrees with Tragedy in so far as it is an imitation in verse of characters of a higher type. They differ in that Epic poetry admits but one kind of meter and is narrative in form. They differ, again, in their length: for Tragedy endeavors, as far as possible, to confine itself to a single revolution of the sun, or but slightly to exceed this limit, whereas the Epic action has no limits of time. This, then, is a second point of difference; though at first the same freedom was admitted in Tragedy as in Epic poetry. Of their constituent parts some are common to both, some peculiar to Tragedy: whoever, therefore knows what is good or bad Tragedy, knows also about Epic poetry. All the elements of an Epic poem are found in Tragedy, but the elements of a Tragedy are not all found in the Epic poem. – Aristotle, Poetics Part V Harmon & Holman (1999) define an epic: Epic A long narrative poem in elevated style presenting characters of high position in adventures forming an organic whole through their relation to a central heroic figure and through their development of episodes important to the history of a nation or race. — Harmon & Holman (1999) An attempt to delineate ten main characteristics of an epic: Begins in medias res (“in the thick of things”). The setting is vast, covering many nations, the world or the universe. Begins with an invocation to a muse (epic invocation). Begins with a statement of the theme. Includes the use of epithets. Contains long lists, called an epic catalogue. Features long and formal speeches. Shows divine intervention in human affairs. Features heroes that embody the values of the civilization. Often features the tragic hero's descent into the underworld or hell. The hero generally participates in a cyclical journey or quest, faces adversaries that try to defeat him in his journey and returns home significantly transformed by his journey. The epic hero illustrates traits, performs deeds, and exemplifies certain morals that are valued by the society the epic originates from. Many epic heroes are recurring characters in the legends of their native cultures. Conventions of Indo-Aryan Epic The above passages convey the experience of epic poetry in the West, but somewhat different conventions have historically applied in India. In the Indo-Aryan mahākāvya epic genre, more emphasis was laid on description than on narration. 
Indeed, the traditional characteristics of a mahākāvya are listed as: It springs from a historical incident or is otherwise based on some fact; it turns upon the fruition of the fourfold ends and its hero is clever and noble; By descriptions of cities, oceans, mountains, seasons and risings of the moon or the sun; through sportings in garden or water, and festivities of drinking and love; Through sentiments-of-love-in-separation and through marriages, by descriptions of the birth-and-rise of princes, and likewise through state-counsel, embassy, advance, battle, and the hero’s triumph; Embellished; not too condensed, and pervaded all through with poetic sentiments and emotions; with cantos none too lengthy and having agreeable metres and well-formed joints, And in each case furnished with an ending in a different metre – such a poem possessing good figures-of-speech wins the people’s heart and endures longer than even a kalpa. It must take its subject matter from the epics (Ramayana or Mahabharata), or from history, It must help further the four goals of man (Purusharthas), It must contain descriptions of cities, seas, mountains, moonrise and sunrise, and "accounts of merrymaking in gardens, of bathing parties, drinking bouts, and love-making. It should tell the sorrow of separated lovers and should describe a wedding and the birth of a son. It should describe a king's council, an embassy, the marching forth of an army, a battle, and the victory of a hero". Themes Classical epic poetry recounts a journey, either physical (as typified by Odysseus in the Odyssey) or mental (as typified by Achilles in the Iliad) or both. Epics also tend to highlight cultural norms and to define or call into question cultural values, particularly as they pertain to heroism. Conventions Proem The poet may begin by invoking a Muse or similar divinity. The poet prays to the Muses to provide him with divine inspiration to tell the story of a great hero. Example opening lines with invocations: Sing goddess the baneful wrath of Achilles son of Peleus – Iliad 1.1 Muse, tell me in verse of the man of many wiles – Odyssey 1.1 From the Heliconian Muses let us begin to sing – Hesiod, Theogony 1.1 Beginning with thee, Oh Phoebus, I will recount the famous deeds of men of old – Argonautica 1.1 Muse, remember to me the causes – Aeneid 1.8 Sing Heav'nly Muse, that on the secret top of Oreb, or of Sinai, didst inspire – Paradise Lost 1.6–7 An alternative or complementary form of proem, found in Virgil and his imitators, opens with the performative verb "I sing". Examples: I sing arms and the man – Aeneid 1.1 I sing pious arms and their captain – Gerusalemme liberata 1.1 I sing ladies, knights, arms, loves, courtesies, audacious deeds – Orlando Furioso 1.1–2 This Virgilian epic convention is referenced in Walt Whitman's poem title / opening line "I sing the body electric". Compare the first six lines of the Kalevala: Mastered by desire impulsive, By a mighty inward urging, I am ready now for singing, Ready to begin the chanting Of our nation’s ancient folk-song Handed down from by-gone ages. These conventions are largely restricted to European classical culture and its imitators. The Epic of Gilgamesh, for example, or the Bhagavata Purana do not contain such elements, nor do early medieval Western epics that are not strongly shaped by the classical traditions, such as the Chanson de Roland or the Poem of the Cid. In medias res Narrative opens "in the middle of things", with the hero at his lowest point. 
Usually flashbacks show earlier portions of the story. For example, the Iliad does not tell the entire story of the Trojan War, starting with the judgment of Paris, but instead opens abruptly on the rage of Achilles and its immediate causes. So too, Orlando Furioso is not a complete biography of Roland, but picks up from the plot of Orlando Innamorato, which in turn presupposes a knowledge of the romance and oral traditions. Enumeratio Epic catalogues and genealogies are given, called enumeratio. These long lists of objects, places, and people place the finite action of the epic within a broader, universal context, such as the catalog of ships. Often, the poet is also paying homage to the ancestors of audience members. Examples: In The Faerie Queene, the list of trees I.i.8–9. In Paradise Lost, the list of demons in Book I. In the Aeneid, the list of enemies the Trojans find in Etruria in Book VII. Also, the list of ships in Book X. In the Iliad: Catalogue of Ships, the most famous epic catalogue Trojan Battle Order Stylistic features In the Homeric and post-Homeric tradition, epic style is typically achieved through the use of the following stylistic features: Heavy use of repetition or stock phrases: e.g., Homer's "rosy-fingered dawn" and "wine-dark sea". Epic similes Form Many verse forms have been used in epic poems through the ages, but each language's literature typically gravitates to one form, or at least to a very limited set. Ancient Sumerian epic poems did not use any kind of poetic meter and lines did not have consistent lengths; instead, Sumerian poems derived their rhythm solely through constant repetition and parallelism, with subtle variations between lines. Indo-European epic poetry, by contrast, usually places strong emphasis on the importance of line consistency and poetic meter. Ancient Greek epics were composed in dactylic hexameter. Very early Latin epicists, such as Livius Andronicus and Gnaeus Naevius, used Saturnian meter. By the time of Ennius, however, Latin poets had adopted dactylic hexameter. Dactylic hexameter has been adapted by a few anglophone poets such as Longfellow in "Evangeline", whose first line is as follows: This is the | forest pri | meval. The | murmuring | pines and the | hemlocks Old English, German and Norse poems were written in alliterative verse, usually without rhyme. The alliterative form can be seen in the Old English “Finnsburg Fragment” (alliterated sounds are in bold): Ac onwacnigeað nū, wīgend mīne "But awake now, my warriors", ealra ǣrest eorðbūendra, of all first the men While the above classical and Germanic forms would be considered stichic, Italian, Spanish and Portuguese long poems favored stanzaic forms, usually written in terza rima or especially ottava rima. Terza rima is a rhyming verse stanza form that consists of an interlocking three-line rhyme scheme. An example is found in the first lines of the Divine Comedy by Dante, who originated the form: In ottava rima, each stanza consists of three alternate rhymes and one double rhyme, following the ABABABCC rhyme scheme. Example: From the 14th century, English epic poems were written in heroic couplets, and rhyme royal, though in the 16th century the Spenserian stanza and blank verse were also introduced. The French alexandrine is currently the heroic line in French literature, though in earlier literature – such as the chanson de geste – the decasyllable grouped in laisses took precedence. 
In Polish literature, couplets of Polish alexandrines (syllabic lines of 7+6 syllables) prevail. In Russian, iambic tetrameter verse is the most popular. In Serbian poetry, the decasyllable is the only form employed. Balto-Finnic (e.g. Estonian, Finnish, Karelian) folk poetry uses a form of trochaic tetrameter that has been called the Kalevala meter. The Finnish and Estonian national epics, Kalevala and Kalevipoeg, are both written in this meter. The meter is thought to have originated during the Proto-Finnic period. In Indic epics such as the Ramayana and Mahabharata, the shloka form is used. Genres and related forms The primary form of epic, especially as discussed in this article, is the heroic epic, including such works as the Iliad and Mahabharata. Ancient sources also recognized didactic epic as a category, represented by such works as Hesiod's Works and Days and Lucretius's De Rerum Natura. A related type of poetry is the epyllion (plural: epyllia), a brief narrative poem with a romantic or mythological theme. The term, which means "little epic," came into use in the nineteenth century. It refers primarily to the erudite, shorter hexameter poems of the Hellenistic period and the similar works composed at Rome from the age of the neoterics; to a lesser degree, the term includes some poems of the English Renaissance, particularly those influenced by Ovid. The most famous example of classical epyllion is perhaps Catullus 64. Epyllion is to be understood as distinct from mock epic, another light form. Romantic epic is a term used to designate works such as Morgante, Orlando Innamorato, Orlando Furioso and Gerusalemme Liberata, which freely lift characters, themes, plots and narrative devices from the world of prose chivalric romance. See also Footnotes References Bibliography External links "The Epic", BBC Radio 4 discussion with John Carey, Karen Edwards and Oliver Taplin (In Our Time, 3 Feb. 2003) "Epic Poem", Main Features and Conventions of the Epic Fiction forms Adventure fiction
30634770
https://en.wikipedia.org/wiki/MMH-Badger%20MAC
MMH-Badger MAC
Badger is a Message Authentication Code (MAC) based on the idea of universal hashing and was developed by Boesgaard, Scavenius, Pedersen, Christensen, and Zenner. It is constructed by strengthening the ∆-universal hash family MMH using an ϵ-almost strongly universal (ASU) hash function family after the application of ENH (see below), where the value of ϵ is . Since Badger is a MAC function based on the universal hash function approach, the conditions needed for the security of Badger are the same as those for other universal hash functions such as UMAC. Introduction The Badger MAC processes a message of length up to bits and returns an authentication tag of length bits, where . According to the security needs, the user can choose the value of , that is, the number of parallel hash trees in Badger. One can choose larger values of u, but those values do not further influence the security of the MAC. The algorithm uses a 128-bit key and the limited message length to be processed under this key is . The key setup has to be run only once per key in order to run the Badger algorithm under a given key, since the resulting internal state of the MAC can be saved to be used with any other message that will be processed later. ENH Hash families can be combined in order to obtain new hash families. For the ϵ-AU, ϵ-A∆U, and ϵ-ASU families, the latter are contained in the former. For instance, an A∆U family is also an AU family, an ASU is also an A∆U family, and so forth. On the other hand, a stronger family can be reduced to a weaker one, as long as a performance gain can be reached. A method to reduce a ∆-universal hash function family to a universal hash function family is described in the following. Theorem 2 Let be an ϵ-AΔU hash family from a set A to a set B. Consider a message . Then the family H consisting of the functions is ϵ-AU. If , then the probability that is at most ϵ, since is an ϵ-A∆U family. If but , then the probability is trivially 0. The proof of Theorem 2 is given in the original Badger paper. The ENH-family is constructed based on the universal hash family NH (which is also used in UMAC): Where '' means 'addition modulo ', and . It is a -A∆U hash family. Lemma 1 The following version of NH is -A∆U: Choosing w=32 and applying Theorem 1, one can obtain the -AU function family ENH, which will be the basic building block of the Badger MAC: where all arguments are 32 bits long and the output has 64 bits. Construction Badger is constructed using a strongly universal hash family and can be described as where an -AU universal function family H* is used to hash messages of any size onto a fixed size and an -ASU function family F is used to guarantee the strong universality of the overall construction. NH and ENH are used to construct H*. The maximum input size of the function family H* is and the output size is 128 bits, split into 64 bits each for the message and the hash. The collision probability for the H*-function ranges from to . To construct the strongly universal function family F, the ∆-universal hash family MMH* is transformed into a strongly universal hash family by adding another key. Two steps on Badger There are two steps that have to be executed for every message: the processing phase and the finalize phase. Processing phase In this phase, the data is hashed to a 64-bit string. A core function is used in this processing phase that hashes a 128-bit string to a 64-bit string as follows: for any n, means addition modulo . 
Given a 2n-bit string x, L(x) means least significant n bits, and U(x) means most significant n bits. A message can be processed by using this function. Denote level_key [j][i] by . Pseudo-code of the processing phase is as follows. L = |M| if L = 0 M^1 = ⋯ = M^u = 0 Go to finalization r = L mod 64 if r≠0: M = 0^(64-r)∥M for i = 1 to u: M^i = M v^' = max{1, ⌈log_2 L⌉-6} for j = 1 to v^': divide M^i into 64-bit blocks, M^i = m_t^i∥⋯∥m_1^i if t is even: M^i = h(k_j^i, m_t^i, m_(t-1)^i) ∥⋯∥ h(k_j^i, m_2^i, m_1^i) else M^i = m_t^i∥h(k_j^i, m_(t-1)^i, m_(t-2)^i) ∥⋯∥ h(k_j^i, m_2^i, m_1^i) Finalize phase In this phase, the 64-bit string resulting from the processing phase is transformed into the desired MAC tag. This finalization phase uses the Rabbit stream cipher and uses both key setup and IV setup by taking the finalization key final_key[j][i] as . Pseudo-code of the finalization phase is as follows. RabbitKeySetup(K) RabbitIVSetup(N) for i = 1 to u: Q^i =0^7∥L∥M^i divide Q^i into 27-bit blocks, Q^i=q_5^i∥⋯∥q_1^i S^i =(Σ_(j=1)^5 (q_j^i K_j^i))+K_6^i mod p S = S^u∥⋯∥S^1 S = S ⨁ RabbitNextbit(u∙32) return S Notation: From the pseudocode above, k denotes the key in the Rabbit Key Setup(K) which initializes Rabbit with the 128-bit key k. M denotes the message to be hashed and |M| denotes the length of the message in bits. q_i denotes a message M that is divided into i blocks. For a given 2n-bit string x, L(x) and U(x) respectively denote its least significant n bits and most significant n bits. Performance Boesgaard, Christensen and Zenner report the performance of Badger measured on a 1.0 GHz Pentium III and on a 1.7 GHz Pentium 4 processor. The speed-optimized versions were programmed in assembly language inlined in C and compiled using the Intel C++ 7.1 compiler. The following table presents Badger's properties for various restricted message lengths. "Memory req." denotes the amount of memory required to store the internal state including key material and the inner state of the Rabbit stream cipher. "Setup" denotes the key setup, and "Fin." denotes finalization with IV-setup. MMH (Multilinear Modular Hashing) The name MMH stands for Multilinear-Modular-Hashing. One application in multimedia, for example, is verifying the integrity of an on-line multimedia title. The performance of MMH is based on the improved support of integer scalar products in modern microprocessors. MMH uses single precision scalar products as its most basic operation. It consists of a (modified) inner product between the message and a key modulo a prime . The construction of MMH works in the finite field for some prime integer . MMH* MMH* involves a construction of a family of hash functions consisting of multilinear functions on for some positive integer . The family MMH* of functions from to is defined as follows. where x, m are vectors, and the functions are defined as follows. = = In the case of MAC, is a message and is a key where and . MMH* should satisfy the security requirements of a MAC, enabling, say, Ana and Bob to communicate in an authenticated way. They have a secret key . Say Charles listens to the conversation between Ana and Bob and wants to change the message into his own message to Bob, which should pass as a message from Ana. So, his message and Ana's message will differ in at least one bit (e.g. ). 
Assume that Charles knows that the function is of the form and he knows Ana's message but he does not know the key x then the probability that Charles can change the message or send his own message can be explained by the following theorem. Theorem 1:The family MMH* is ∆-universal. Proof: Take , and let be two different messages. Assume without loss of generality that . Then for any choice of , there is To explain the theorem above, take for prime represent the field as . If one takes an element in , let say then the probability that is So, what one actually needs to compute is But, From the proof above, is the collision probability of the attacker in 1 round, so on average verification queries will suffice to get one message accepted. To reduce the collision probability, it is necessary to choose large p or to concatenate such MACs using independent keys so that the collision probability] becomes . In this case the number of keys are increased by a factor of and the output is also increased by . MMH*32 Halevi and Krawczyk construct a variant called . The construction works with 32-bit integers and with the prime integer . Actually the prime p can be chosen to be any prime which satisfies . This idea is adopted from the suggestion by Carter and Wegman to use the primes or . is defined as follows: where means (i.e., binary representation) The functions are defined as follows. where , By theorem 1, the collision probability is about ϵ = , and the family of can be defined as ϵ-almost ∆ Universal with ϵ = . The value of k The value of k that describes the length of the message and key vectors has several effects: Since the costly modular reduction over k is multiply and add operations increasing k should decrease the speed. Since the key x consist of k 32-bit integers increasing k will results in a longer key. The probability of breaking the system is and so increasing k makes the system harder to break. Performance Below are the timing results for various implementations of MMH in 1997, designed by Halevi and Krawczyk. A 150 MHz PowerPC 604 RISC machine running AIX A 150 MHz Pentium-Pro machine running Windows NT A 200 MHz Pentium-Pro machine running Linux See also UMAC VMAC Poly1305-AES References Articles with example pseudocode Message authentication codes
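As a rough illustration of the inner-product-modulo-a-prime idea behind MMH*, the following Python sketch hashes a fixed-length message vector with a random key vector and reduces the result modulo a prime. The prime and vector length are toy values chosen for readability rather than the actual MMH*32 or Badger parameters, and the function names are invented for this example; it is a minimal sketch of the technique, not a conforming implementation.

    import secrets

    # Toy parameters for illustration only; the real MMH*32 works with 32-bit words
    # and a prime slightly larger than 2^32, and Badger post-processes the result
    # with the Rabbit stream cipher.
    P = 2**31 - 1        # small prime modulus
    K = 8                # number of words per message block

    def keygen(k=K, p=P):
        """Draw a random key vector x = (x_1, ..., x_k) with entries in [0, p)."""
        return [secrets.randbelow(p) for _ in range(k)]

    def mmh_star(key, message, p=P):
        """Multilinear hash: the inner product of message and key, reduced mod p."""
        assert len(key) == len(message)
        return sum(m * x for m, x in zip(message, key)) % p

    x = keygen()
    msg = [3, 1, 4, 1, 5, 9, 2, 6]        # one k-word message block
    forged = [3, 1, 4, 1, 5, 9, 2, 7]     # differs from msg in one word
    print(mmh_star(x, msg), mmh_star(x, forged))   # collide with probability about 1/p

Because the hash is linear in the key, two distinct messages collide (or differ by any fixed amount) with probability about 1/p over the choice of key, which is the ∆-universality property the security argument above relies on; a full MAC additionally masks the output, for example with a second key or, in Badger's case, with the Rabbit keystream.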
52751822
https://en.wikipedia.org/wiki/List%20of%20British%20computers
List of British computers
Computers designed or built in Britain include: Acorn Computers Acorn Eurocard systems Acorn System 1 Acorn Atom BBC Micro Acorn Electron BBC Master Acorn Archimedes RiscPC Acorn Network Computer Amstrad Amstrad CPC Amstrad PCW Amstrad NC100 PC1512 PPC 512 and 640 Amstrad PC2286 Amstrad Mega PC Apricot Computers Apricot PC Apricot Portable Apricot Picobook Pro Bear Microcomputer Systems Newbear 77-68 Bywood Electronics SCRUMPI 2 SCRUMPI 3 Cambridge Computer Cambridge Z88 CAP computer Compukit UK101 Dragon 32/64 Enterprise (computer) Ferranti MRT Flex machine GEC GEC 2050 GEC 4000 series GEC Series 63 Grundy NewBrain ICL ICL 2900 Series ICL Series 39 Jupiter Ace Nascom Nascom 1 Nascom 2 Plessey System 250 Raspberry Pi Research Machines Research Machines 380Z LINK 480Z RM Nimbus SAM Coupé Science of Cambridge MK14 Sinclair Research ZX80 ZX81 ZX Spectrum Sinclair QL Systime Computers Ltd Systime 1000, 3000, 5000, 8750, 8780 Systime Series 2, Series 3 Tangerine Computer Systems Tangerine Microtan 65 Oric-1 Oric Atmos Tatung Einstein Transam Triton Tuscan Mechanical computers Difference engine Analytical Engine Bombe Early British computers AEI 1010 APEXC Atlas (computer) Automatic Computing Engine Colossus computer CTL Modular One Digico Micro 16 EDSAC EDSAC 2 Elliott Brothers (computer company) Elliott 152 Elliott 503 Elliott 803 Elliott 4100 Series EMIDEC 1100 English Electric English Electric DEUCE English Electric KDF8 English Electric KDF9 English Electric KDP10 English Electric System 4 Ferranti Ferranti Argus Ferranti Mark 1, or Manchester Electronic Computer Ferranti Mercury Ferranti Orion Ferranti Pegasus Ferranti Perseus Ferranti Sirius Nimrod (computer) Harwell computer Harwell CADET Hollerith Electronic Computer ICS Multum ICT ICT 1301 ICT 1900 series LEO (computer) Luton Analogue Computing Engine Manchester computers Manchester Mark 1 Manchester Baby Marconi Marconi Transistorised Automatic Computer (T.A.C.) Marconi Myriad Metrovick 950 Pilot ACE Royal Radar Establishment Automatic Computer SOLIDAC ICL mainframe computers References Computer Early British computers Lists of computer hardware Computers designed in the United Kingdom
40753902
https://en.wikipedia.org/wiki/Business%20process%20validation
Business process validation
Business Process Validation (BPV) is the act of verifying that a set of end-to-end business processes function as intended. If there are problems in one or more business applications that support a business process, or in the integration or configuration of those systems, then the consequences of disruption to the business can be serious. A company might be unable to take orders or ship product – which can directly impact company revenue, reputation, and customer satisfaction. It can also drive additional expenses, as defects in production are much more expensive to fix than if identified earlier. For this reason, a key aim of Business Process Validation is to identify defects early, before new enterprise software is deployed in production so that there is no business impact and the cost of repairing defects is kept to a minimum. Note: BPV is unrelated to Validation (drug manufacture) by the U.S. Food and Drug Administration. During Business Process Validation (BPV), the business process is checked step-by-step using representative data to confirm that all business rules are working correctly and that all underlying transactions are performing properly across every enterprise application used in the business process. When defects are identified, these problems are logged for repair by IT personnel, business analysts or the software vendor, as appropriate. Business Process Validation can be performed on various timescales, including the following: Project basis, when new enterprise software systems (such as mobile, cloud, or web applications) are being deployed for the first time Periodic basis, when there are regular monthly, quarterly, or annual updates to enterprise software Continuous basis, when companies want to validate the readiness of their processes and enterprise systems 24/7/365 Business Process Validation Methods Manual Manual Business Process Validation is where one or more people (typically a cross-functional team) work at keyboards or mobile devices to execute the various business process steps directly in the enterprise software by hand. Defects are manually noted and typically logged in a defect tracking system. There are several shortcomings to the manual approach. First, since all data is entered by hand, it can be time consuming for subject matter experts and business analysts. These are expensive staff resources that could be deployed on other higher value activities. Second, manual testing extends project timelines. This slows the deployment of innovation and makes business users wait longer for cost saving and revenue generating new technology. Third, the manual process is often incomplete, since the time-intensive nature means that IT teams cannot test all business processes, given their resource constraints. This lack of coverage introduces technology risk in a company’s business processes. Finally, if business process validation is done manually by IT teams, then business requirements and processes have to be unambiguously documented in advance, which is a time-consuming task. Automated Automated Business Process Validation relies on software to execute the various business process steps directly in the enterprise software systems in an automated fashion. BPV software automatically uses standard business process data during the validation, and interprets the correctness of each transaction and result. Defects are automatically noted and logged. 
With BPV software, the business process must first be "captured" in the BPV software system so that it can be automatically executed. This amounts to performing the business process once in the enterprise system with the BPV software running in the background to capture the process. Once the business process is captured, BPV software allows the business process automation to be modified with very little effort. Some BPV software have object-oriented designs that allow sub-processes to be shared among different end-to-end business processes, and business process automation can be easily copied and modified. This last feature is particularly helpful when companies have variations of business processes across geographies or business units. For example, an order shipment process in Europe may differ slightly from the same process in North America because there are differences in compliance requirements and Bills of Lading. There is no need to capture the second process end-to-end for automation purposes. The first process can easily be copied and modified in the BPV software. This enables companies to efficiently and quickly build their portfolio of business processes for automated validation. An ancillary benefit is that once the business process is correctly captured, BPV software allows a complete and accurate description of the business process to be printed. This documentation is generated automatically with no additional effort and is useful for training, regulatory compliance, and other purposes. It is available on-demand and is typically very accurate because it is based on the most recent versions of systems and business practices. BPV software largely avoids the shortcomings of manual business process validation. The automation software can be configured to validate business processes on a 24/7/365 basis, if desired by the user. The frequency of automation enables any defects in underlying business systems and interfaces to be detected and repaired quickly, before business users are impacted. Automated business process validation is a way to ensure that a company’s business processes continue to work, even when mission critical enterprise systems change. See also Process validation References Software testing Formal methods Software quality Enterprise modelling Business process management
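As a loose sketch of the capture-and-replay approach described above, the snippet below models a captured business process as an ordered list of steps, each pairing an action with the result captured from a known-good run, and replays them while logging any mismatches as defects. All names here (Step, validate_process, the stubbed actions) are hypothetical and stand in for whatever recording and execution interface a given BPV product actually exposes.

    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class Step:
        name: str                    # e.g. "Create sales order"
        action: Callable[[], Any]    # executes the step against the enterprise system
        expected: Any                # result captured from a known-good run

    def validate_process(process_name, steps):
        """Replay every step of a captured process and return the logged defects."""
        defects = []
        for step in steps:
            try:
                actual = step.action()
            except Exception as exc:                 # the step failed outright
                defects.append(f"{process_name}/{step.name}: error {exc!r}")
                continue
            if actual != step.expected:              # the step ran but the result changed
                defects.append(f"{process_name}/{step.name}: expected {step.expected!r}, got {actual!r}")
        return defects

    # Stubbed steps standing in for calls into ERP/CRM systems.
    order_to_cash = [
        Step("Create sales order", lambda: "ORDER-CREATED", "ORDER-CREATED"),
        Step("Ship product", lambda: "SHIPPED", "SHIPPED"),
        Step("Invoice customer", lambda: "INVOICED", "INVOICE-POSTED"),  # logged as a defect
    ]
    for defect in validate_process("Order to Cash", order_to_cash):
        print("DEFECT:", defect)

Scheduling such a replay nightly, or after every configuration change, is what turns the one-time capture into the periodic or continuous validation discussed earlier.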
52798571
https://en.wikipedia.org/wiki/Czech%20Game%20of%20the%20Year%20Awards
Czech Game of the Year Awards
Czech Game of the Year Awards () are annual awards that recognize accomplishments in video game development in the Czech Republic. The awards began as part of Gameday Festival in 2010, but became independent from the festival in 2017. The awards are organised by the České Hry association. 2010 The awards for the first year were presented on 7 May 2011. Awards were given in three categories. Mafia II was awarded as the best Czech game in Czech. The jury also expressed recognition to Centauri Production for making their games in Czech. Samurai II: Vengeance was awarded as the best Czech game for Mobile devices and best Czech artistic achievement in game creation. Best Czech artistic achievement in game creation - Madfinger Games for Samurai II: Vengeance Best Czech Game in Czech - 2K Czech for Mafia II Best Czech Game for Mobile Devices - Madfinger Games for Samurai II:Vengeance 2011 The 2011 awards were presented on 3 May 2012. Best Czech artistic achievement in game creation - Allodium for Infinitum Best Czech Game in Czech - Hammerware for Family Farm Best Czech Game for Mobile Devices - Madfinger Games for Shadowgun 2012 The 2012 awards were presented on 3 May 2013. Only two categories were awarded this time. The Technical Contribution to Czech Video Game Creation - Madfinger Games for Dead Trigger and Shadowgun: Deadzone Nominated: Keen Software House for Miner Wars 2081, SCS Software for Euro Truck Simulator 2 and Scania Truck Driving Simulator The Artistic Contribution to Czech Video Game Creation - Amanita Design for Botanicula Nominated: Hammerware for Good Folks, Rake in Grass for Northmark: Hour of the Wolf, and Lonely Sock for Coral City 2013 The 2013 awards were presented on 10 May 2014. The Technical Contribution to Czech Video Game Creation - Bohemia Interactive for ArmA III Nominated: Hyperbolic Magnetism for Lums: The Game of Light and Shadows, Keen Software House for Space Engineers and Madfinger Games for Dead Trigger 2 The Artistic Contribution to Czech Video Game Creation - Hyperbolic Magnetism for Lums: The Game of Light and Shadows Nominated: Hexage for Reaper: Tale of a Pale Swordsman, Silicon Jelly for Mimpi and Trickster Arts for Hero of Many 2014 The 2014 awards were presented on 8 May 2015. There were four categories. The Technical Contribution to Czech Video Game Creation - Keen Software House for Medieval Engineers Nominated: Allodium for Infinitum: Battle for Europe, Keen Software House for Medieval Engineers and Cinemax for The Keep The Artistic Contribution to Czech Video Game Creation - Dreadlocks Ltd for Dex Nominated: ARK8 for Coraabia, Icarus Games for Time Treasury and CBE Software for J.U.L.I.A. Among the Stars The Best Original Game - Madfinger Games for Monzo Nominated: Dreadlocks Ltd for Dex, Czech Games Edition for Galaxy Trucker and Allodium for Infinitum: Battle for Europe The Best Debut Game - ARK8 for Coraabia Nominated: Czech Games Edition for Galaxy Trucker, Digital Life productions for Soccerinho and CZ.NIC for Tablexia 2015 The 2015 awards were presented on 6 May 2016. 
The Technical Contribution to Czech Video Game Creation - Wube Software for Factorio Nominated: BadFly Interactive for Dead Effect 2, McMagic Productions for Novus Inceptio and Madfinger Games for Unkilled The Artistic Contribution to Czech Video Game Creation - Hangonit for Rememoried Nominated: Fiolasoft Studio for Blackhole, Silicon Jelly for Mimpi Dreams and Lukáš Navrátil for Toby: The Secret Mine The Best Original Game - Wube Software for Factorio Nominated: Fiolasoft Studio for Blackhole, Charles University for Czechoslovakia 38–89: The Assassination and Lipa Learning for Lipa Theater The Best Debut Game - Charles University for Czechoslovakia 38–89: The Assassination Nominated: Wube Software for Factorio, McMagic Productions for Novus Inceptio and Lukáš Navrátil for Toby: The Secret Mine 2016 The 2016 awards were presented on 10 February 2017. It was held in Prague for the first time and wasn't part of Gameday. There were 9 categories this time. Nominations were scheduled to be announced on 24 January 2017 but it was pushed to 27 January 2017. Dark Train and Samorost 3 garnered the most nominations at 6 categories each. Chameleon Run was nominated in 5 categories and American Truck Simulator earned 4 nominations. Samorost 3 won the top award as well as two others. Czech game of the year - Samorost 3 by Amanita Design Nominated: American Truck Simulator, Chameleon Run and Dark Train Czech game of the year for PC/Consoles - Samorost 3 by Amanita Design Nominated: American Truck Simulator, Dark Train and Void Raiders Czech game of the year for Mobile Devices - Chameleon Run by Hyperbolic Magnetism Nominated: Hackers, Redcon (video game) and Tiny Miners Best technological solution - American Truck Simulator by SCS Software Nominated: Chameleon Run, Killing Room and Space Merchants: Arena Best audio - Samorost 3 by Amanita Design Nominated: American Truck Simulator, Dark Train and The Solus Project Best Game Design - Chameleon Run by Hyperbolic Magnetism Nominated: Dark Train, Samorost 3 and Trupki Best Story - 7 Mages by Napoleon Games Nominated: Dark Train, Samorost 3 and The Solus Project Best Visual - Dark Train by Paperash Studio Nominated: Chameleon Run, Samorost 3 and The Solus Project Biggest Hope - WarFriends by About Fun Nominated: Blue Effect, Legends of Azulgar and Project ARGO Hall of Fame - František Fuka, Tomáš Rylek and Miroslav Fídler 2017 The 2017 awards were be presented on 23 May 2018. Educational game by Charles University Attentat 1942 has won 3 categories including the main award. Czech game of the year - Attentat 1942 Nominated: Skylar & Plux: Adventure on Clover Island, Smashing Four, WarFriends Czech game of the year for PC/Consoles - Attentat 1942 Nominated: Blue Effect, Skylar & Plux: Adventure on Clover Island, Take On Mars Czech game of the year for Mobile Devices - WarFriends Nominated: AirportPRG, Smashing Four, What the Hen! 
Best technological solution - Shadowgun Legends
Nominated: Blue Effect, Mashinky, Ylands
Best audio - Blue Effect
Nominated: Attentat 1942, Skylar & Plux: Adventure on Clover Island, Under Leaves
Best Game Design - Mashinky
Nominated: AirportPRG, Smashing Four, Through the Ages
Best Story - Attentat 1942
Nominated: Erin: The Last Aos Sí, Ghostory, The Naked Game
Best Visual - Under Leaves
Nominated: Shadowgun Legends, Tragedy of Prince Rupert, Ylands
Biggest Hope - Mashinky
Nominated: Children of the Galaxy, Shadowgun Legends, Ylands
Hall of Fame - Martin Klíma

2018

The 2018 awards were presented on 5 April 2019, hosted by Tomáš Hanák. Nominations were announced on 28 March 2019. Kingdom Come: Deliverance and Beat Saber received the highest number of nominations. Kingdom Come: Deliverance was seen as the front runner and won the most awards, taking three categories (the Youtubers Award, Best Game Design and Best technological solution), but lost to Beat Saber in the main award category. Beat Saber also won the Game Journalists Award, and Chuchel won the Audiovisual Execution award.

Developer's Award - Main Award: Beat Saber
Nominated: Chuchel, DayZ, Kingdom Come: Deliverance
Game Journalists Award: Beat Saber
Nominated: Heroes of Flatlandia, Aggressors: Ancient Rome, Kingdom Come: Deliverance
Youtubers Award: Kingdom Come: Deliverance
Nominated: Band of Defenders, Beat Saber, DayZ
Audiovisual Execution: Chuchel
Nominated: Project Hospital, Beat Saber, Kingdom Come: Deliverance
Best Game Design: Kingdom Come: Deliverance
Nominated: The Apartment, Beat Saber, DayZ
Best technological solution: Kingdom Come: Deliverance
Nominated: Mothergunship, Frontier Pilot Simulator, Beat Saber
Hall of Fame: Tomáš Smutný and Eduard Smutný

2019

The 2019 awards were set for 20 March 2020, but were postponed due to the COVID-19 pandemic. A new date was announced on 13 July 2020, with the show scheduled to be held on 25 September 2020. Pilgrims by Amanita Design eventually won the main award. Ylands by Bohemia Interactive won two awards, for Best technological solution and Best Free to Play game.

Czech game of the Year - Main Award: Pilgrims
Nominated: Ylands, Vigor, Planet Nomads
Audiovisual Execution: Feudal Alloy
Nominated: Little Mouse's Encyclopedia, Pilgrims, Zeminátor
Best Game Design: Monolisk
Nominated: Ylands, Jim is Moving Out!, Time, Space and Matter
Best technological solution: Ylands
Nominated: Vigor, Flippy Friends AR Multiplayer, Planet Nomads
Free to Play: Ylands
Nominated: Ritual: Sorcerer Angel, Vigor, Idle Quest Heroes
Student game: Silicomrades
Hall of Fame: Andrej Anastasov

2020

The 2020 awards were held on 8 December 2021 and were linked with a survey on the best Czech game of the 2010s, which was won by Kingdom Come: Deliverance. Creaks won the main award.

Czech game of 2020: Creaks
Best technological solution: Mafia: Definitive Edition
Best Game Design: Someday You'll Return
Audiovisual Execution: Creaks
Free to Play: Shadowgun War Games
Hall of Fame: Lukáš Ladra
Czech game of the Decade: Kingdom Come: Deliverance

References

External links
Official website

Video gaming in the Czech Republic
Czech awards
Awards established in 2010
Video game awards
Anifilm
273279
https://en.wikipedia.org/wiki/Debits%20and%20credits
Debits and credits
Debits and credits in double entry bookkeeping are entries made in account ledgers to record changes in value resulting from business transactions. A debit entry in an account represents a transfer of value to that account, and a credit entry represents a transfer from the account. Each transaction transfers value from credited accounts to debited accounts. For example, a tenant who writes a rent cheque to a landlord would enter a credit for the bank account on which the cheque is drawn, and a debit in a rent expense account. Similarly, the landlord would enter a credit in the rent income account associated with the tenant and a debit for the bank account where the cheque is deposited. Debits and credits are traditionally distinguished by writing the transfer amounts in separate columns of an account book. Alternately, they can be listed in one column, indicating debits with the suffix "Dr" or writing them plain, and indicating credits with the suffix "Cr" or a minus sign. Despite the use of a minus sign, debits and credits do not correspond directly to positive and negative numbers. When the total of debits in an account exceeds the total of credits, the account is said to have a net debit balance equal to the difference; when the opposite is true, it has a net credit balance. For a particular account, one of these will be the normal balance type and will be reported as a positive number, while a negative balance will indicate an abnormal situation, as when a bank account is overdrawn. Debit balances are normal for asset and expense accounts, and credit balances are normal for liability, equity and revenue accounts. History The first known recorded use of the terms is in Venetian Luca Pacioli's 1494 work, Summa de Arithmetica, Geometria, Proportioni et Proportionalita (All about Arithmetic, Geometry, Proportions and Proportionality). Pacioli devoted one section of his book to documenting and describing the double-entry bookkeeping system in use during the Renaissance by Venetian merchants, traders and bankers. This system is still the fundamental system in use by modern bookkeepers. Indian merchants had developed a double-entry bookkeeping system, called bahi-khata, predating Pacioli's work by many centuries, and which was likely a direct precursor of the European adaptation. It is sometimes said that, in its original Latin, Pacioli's Summa used the Latin words debere (to owe) and credere (to entrust) to describe the two sides of a closed accounting transaction. Assets were owed to the owner and the owners' equity was entrusted to the company. At the time negative numbers were not in use. When his work was translated, the Latin words debere and credere became the English debit and credit. Under this theory, the abbreviations Dr (for debit) and Cr (for credit) derive directly from the original Latin. However, Sherman casts doubt on this idea because Pacioli uses Per (Italian for "by") for the debtor and A (Italian for "to") for the creditor in the Journal entries. Sherman goes on to say that the earliest text he found that actually uses "Dr." as an abbreviation in this context was an English text, the third edition (1633) of Ralph Handson's book Analysis or Resolution of Merchant Accompts and that Handson uses Dr. as an abbreviation for the English word "debtor." (Sherman could not locate a first edition, but speculates that it too used Dr. for debtor.) The words actually used by Pacioli for the left and right sides of the Ledger are "in dare" and "in havere" (give and receive). 
Geijsbeek the translator suggests in the preface: 'if we today would abolish the use of the words debit and credit in the ledger and substitute the ancient terms of "shall give" and "shall have" or "shall receive", the personification of accounts in the proper way would not be difficult and, with it, bookkeeping would become more intelligent to the proprietor, the layman and the student.' As Jackson has noted, "debtor" need not be a person, but can be an abstract party: "...it became the practice to extend the meanings of the terms ... beyond their original personal connotation and apply them to inanimate objects and abstract conceptions..." This sort of abstraction is already apparent in Richard Dafforne's 17th-century text The Merchant's Mirror, where he states "Cash representeth (to me) a man to whom I … have put my money into his keeping; the which by reason is obliged to render it back." Aspects of transactions There are three kinds of accounts: Real accounts relate to the assets of a company, which may be tangible (machinery, buildings etc.) or intangible (goodwill, patents etc.) Personal accounts relate to individuals, companies, creditors, banks etc. Nominal accounts relate to expenses, losses, incomes or gains. To determine whether to debit or credit a specific account, we use either the accounting equation approach (based on five accounting rules), or the classical approach (based on three rules). Whether a debit increases or decreases an account's net balance depends on what kind of account it is. The basic principle is that the account receiving benefit is debited, while the account giving benefit is credited. For instance, an increase in an asset account is a debit. An increase in a liability or an equity account is a credit. The classical approach has three golden rules, one for each type of account: Real accounts: Debit whatever comes in and credit whatever goes out. Personal accounts: Receiver's account is debited and giver's account is credited. Nominal accounts: Expenses and losses are debited and incomes and gains are credited. The complete accounting equation based on the modern approach is very easy to remember if you focus on Assets, Expenses, Costs, Dividends (highlighted in chart). All those account types increase with debits or left side entries. Conversely, a decrease to any of those accounts is a credit or right side entry. On the other hand, increases in revenue, liability or equity accounts are credits or right side entries, and decreases are left side entries or debits. Debits and credits occur simultaneously in every financial transaction in double-entry bookkeeping. In the accounting equation, Assets = Liabilities + Equity, so, if an asset account increases (a debit (left)), then either another asset account must decrease (a credit (right)), or a liability or equity account must increase (a credit (right)). In the extended equation, revenues increase equity and expenses, costs & dividends decrease equity, so their difference is the impact on the equation. For example, if a company provides a service to a customer who does not pay immediately, the company records an increase in assets, Accounts Receivable with a debit entry, and an increase in Revenue, with a credit entry. When the company receives the cash from the customer, two accounts again change on the company side, the cash account is debited (increased) and the Accounts Receivable account is now decreased (credited). 
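The company-side entries just described can be made concrete with a small Python sketch. This is an illustration only: the Ledger class, the account names and the 1,000-unit amount are invented for the example and are not part of any accounting standard or library.

```python
from collections import defaultdict

class Ledger:
    """Toy double-entry ledger: every entry is a (debit account, credit account, amount) triple."""
    def __init__(self):
        self.entries = []

    def post(self, debit_account, credit_account, amount):
        # One debit and one credit of equal size, so the books always balance.
        self.entries.append((debit_account, credit_account, amount))

    def balances(self):
        # Net debit balance per account: debits increase it, credits decrease it.
        totals = defaultdict(float)
        for debit_account, credit_account, amount in self.entries:
            totals[debit_account] += amount
            totals[credit_account] -= amount
        return dict(totals)

ledger = Ledger()
# Service provided on credit: debit Accounts Receivable, credit Revenue.
ledger.post("Accounts Receivable", "Revenue", 1000)
# Customer later pays: debit Cash, credit Accounts Receivable.
ledger.post("Cash", "Accounts Receivable", 1000)

print(ledger.balances())
# {'Accounts Receivable': 0.0, 'Revenue': -1000.0, 'Cash': 1000.0}
# Total debits equal total credits by construction, so the net of all balances is zero.
assert abs(sum(ledger.balances().values())) < 1e-9
```

Here a negative number simply means a net credit balance, which, as noted above, is the normal balance for a revenue account.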
When the cash is deposited to the bank account, two things also change, on the bank side: the bank records an increase in its cash account (debit) and records an increase in its liability to the customer by recording a credit in the customer's account (which is not cash). Note that, technically, the deposit is not a decrease in the cash (asset) of the company and should not be recorded as such. It is just a transfer to a proper bank account of record in the company's books, not affecting the ledger. To make it more clear, the bank views the transaction from a different perspective but follows the same rules: the bank's vault cash (asset) increases, which is a debit; the increase in the customer's account balance (liability from the bank's perspective) is a credit. A customer's periodic bank statement generally shows transactions from the bank's perspective, with cash deposits characterized as credits (liabilities) and withdrawals as debits (reductions in liabilities) in depositor's accounts. In the company's books the exact opposite entries should be recorded to account for the same cash. This concept is important since this is why so many people misunderstand what debit/credit really means. Commercial understanding When setting up the accounting for a new business, a number of accounts are established to record all business transactions that are expected to occur. Typical accounts that relate to almost every business are: Cash, Accounts Receivable, Inventory, Accounts Payable and Retained Earnings. Each account can be broken down further, to provide additional detail as necessary. For example: Accounts Receivable can be broken down to show each customer that owes the company money. In simplistic terms, if Bob, Dave, and Roger owe the company money, the Accounts Receivable account will contain a separate account for Bob, and Dave and Roger. All 3 of these accounts would be added together and shown as a single number (i.e. total 'Accounts Receivable' – balance owed) on the balance sheet. All accounts for a company are grouped together and summarized on the balance sheet in 3 sections which are: Assets, Liabilities and Equity. All accounts must first be classified as one of the five types of accounts (accounting elements) ( asset, liability, equity, income and expense). To determine how to classify an account into one of the five elements, the definitions of the five account types must be fully understood. The definition of an asset according to IFRS is as follows, "An asset is a resource controlled by the entity as a result of past events from which future economic benefits are expected to flow to the entity". In simplistic terms, this means that Assets are accounts viewed as having a future value to the company (i.e. cash, accounts receivable, equipment, computers). Liabilities, conversely, would include items that are obligations of the company (i.e. loans, accounts payable, mortgages, debts). The Equity section of the balance sheet typically shows the value of any outstanding shares that have been issued by the company as well as its earnings. All Income and expense accounts are summarized in the Equity Section in one line on the balance sheet called Retained Earnings. This account, in general, reflects the cumulative profit (retained earnings) or loss (retained deficit) of the company. The Profit and Loss Statement is an expansion of the Retained Earnings Account. It breaks-out all the Income and expense accounts that were summarized in Retained Earnings. 
The Profit and Loss report is important in that it shows the detail of sales, cost of sales, expenses and ultimately the profit of the company. Most companies rely heavily on the profit and loss report and review it regularly to enable strategic decision making. Terminology The words debit and credit can sometimes be confusing because they depend on the point of view from which a transaction is observed. In accounting terms, assets are recorded on the left-hand side (debit) of asset accounts, because they are typically shown on the left-hand side of the accounting equation (A=L+SE). Likewise, an increase in liabilities and shareholder's equity are recorded on the right-hand side (credit) of those accounts, thus they also maintain the balance of the accounting equation. In other words, if "assets are increased with left-hand entries, the accounting equation is balanced only if increases in liabilities and shareholder’s equity are recorded on the opposite or right-hand side. Conversely, decreases in assets are recorded on the right-hand side of asset accounts, and decreases in liabilities and equities are recorded on the left-hand side". Similar is the case with revenues and expenses, what increases shareholder's equity is recorded as credit because they are in the right side of equation and vice versa. Typically, when reviewing the financial statements of a business, Assets are Debits and Liabilities and Equity are Credits. For example, when two companies transact with one another say Company A buys something from Company B then Company A will record a decrease in cash (a Credit), and Company B will record an increase in cash (a Debit). The same transaction is recorded from two different perspectives. This use of the terms can be counter-intuitive to people unfamiliar with bookkeeping concepts, who may always think of a credit as an increase and a debit as a decrease. This is because most people typically only see their personal bank accounts and billing statements (e.g., from a utility). A depositor's bank account is actually a Liability to the bank, because the bank legally owes the money to the depositor. Thus, when the customer makes a deposit, the bank credits the account (increases the bank's liability). At the same time, the bank adds the money to its own cash holdings account. Since this account is an Asset, the increase is a debit. But the customer typically does not see this side of the transaction. On the other hand, when a utility customer pays a bill or the utility corrects an overcharge, the customer's account is credited. This is because the customer's account is one of the utility's accounts receivable, which are Assets to the utility because they represent money the utility can expect to receive from the customer in the future. Credits actually decrease Assets (the utility is now owed less money). If the credit is due to a bill payment, then the utility will add the money to its own cash account, which is a debit because the account is another Asset. Again, the customer views the credit as an increase in the customer's own money and does not see the other side of the transaction. Debit cards and credit cards Debit cards and credit cards are creative terms used by the banking industry to market and identify each card. From the cardholder's point of view, a credit card account normally contains a credit balance, a debit card account normally contains a debit balance. A debit card is used to make a purchase with one's own money. 
A credit card is used to make a purchase by borrowing money. From the bank's point of view, when a debit card is used to pay a merchant, the payment causes a decrease in the amount of money the bank owes to the cardholder. From the bank's point of view, your debit card account is the bank's liability. A decrease to the bank's liability account is a debit. From the bank's point of view, when a credit card is used to pay a merchant, the payment causes an increase in the amount of money the bank is owed by the cardholder. From the bank's point of view, your credit card account is the bank's asset. An increase to the bank's asset account is a debit. Hence, using a debit card or credit card causes a debit to the cardholder's account in either situation when viewed from the bank's perspective. General ledgers General ledger is the term for the comprehensive collection of T-accounts (it is so called because there was a pre-printed vertical line in the middle of each ledger page and a horizontal line at the top of each ledger page, like a large letter T). Before the advent of computerised accounting, manual accounting procedure used a ledger book for each T-account. The collection of all these books was called the general ledger. The chart of accounts is the table of contents of the general ledger. Totaling of all debits and credits in the general ledger at the end of a financial period is known as trial balance. "Daybooks" or journals are used to list every single transaction that took place during the day, and the list is totalled at the end of the day. These daybooks are not part of the double-entry bookkeeping system. The information recorded in these daybooks is then transferred to the general ledgers. Modern computer software allows for the instant update of each ledger account; for example, when recording a cash receipt in a cash receipts journal a debit is posted to a cash ledger account with a corresponding credit to the ledger account from which the cash was received. Not every single transaction needs to be entered into a T-account; usually only the sum (the batch total) of the book transactions for the day is entered in the general ledger. The five accounting elements There are five fundamental elements within accounting. These elements are as follows: Assets, Liabilities, Equity (or Capital), Income (or Revenue) and Expenses. The five accounting elements are all affected in either a positive or negative way. A credit transaction does not always dictate a positive value or increase in a transaction and similarly, a debit does not always indicate a negative value or decrease in a transaction. An asset account is often referred to as a "debit account" due to the account's standard increasing attribute on the debit side. When an asset (e.g. an espresso machine) has been acquired in a business, the transaction will affect the debit side of that asset account illustrated below: The "X" in the debit column denotes the increasing effect of a transaction on the asset account balance (total debits less total credits), because a debit to an asset account is an increase. The asset account above has been added to by a debit value X, i.e. the balance has increased by £X or $X. Likewise, in the liability account below, the X in the credit column denotes the increasing effect on the liability account balance (total credits less total debits), because a credit to a liability account is an increase. 
All "mini-ledgers" in this section show standard increasing attributes for the five elements of accounting. Summary table of standard increasing and decreasing attributes for the accounting elements: Attributes of accounting elements per real, personal, and nominal accounts Real accounts are assets. Personal accounts are liabilities and owners' equity and represent people and entities that have invested in the business. Nominal accounts are revenue, expenses, gains, and losses. Accountants close out accounts at the end of each accounting period. This method is used in the United Kingdom, where it is simply known as the Traditional approach. Transactions are recorded by a debit to one account and a credit to another account using these three "golden rules of accounting": Real account: Debit what comes in and credit what goes out Personal account: Debit who receives and Credit who gives. Nominal account: Debit all expenses & losses and Credit all incomes & gains Principle Each transaction that takes place within the business will consist of at least one debit to a specific account and at least one credit to another specific account. A debit to one account can be balanced by more than one credit to other accounts, and vice versa. For all transactions, the total debits must be equal to the total credits and therefore balance. The general accounting equation is as follows: Assets = Equity + Liabilities, A = E + L. The equation thus becomes A – L – E = 0 (zero). When the total debts equals the total credits for each account, then the equation balances. The extended accounting equation is as follows: Assets + Expenses = Equity/Capital + Liabilities + Income, A + Ex = E + L + I. In this form, increases to the amount of accounts on the left-hand side of the equation are recorded as debits, and decreases as credits. Conversely for accounts on the right-hand side, increases to the amount of accounts are recorded as credits to the account, and decreases as debits. This can also be rewritten in the equivalent form: Assets = Liabilities + Equity/Capital + (Income − Expenses), A = L + E + (I − Ex), where the relationship of the Income and Expenses accounts to Equity and profit is a bit clearer. Here Income and Expenses are regarded as temporary or nominal accounts which pertain only to the current accounting period whereas Asset, Liability, and Equity accounts are permanent or real accounts pertaining to the lifetime of the business. The temporary accounts are closed to the Equity account at the end of the accounting period to record profit/loss for the period. Both sides of these equations must be equal (balance). Each transaction is recorded in a ledger or "T" account, e.g. a ledger account named "Bank" that can be changed with either a debit or credit transaction. In accounting it is acceptable to draw-up a ledger account in the following manner for representation purposes: Accounts pertaining to the five accounting elements Accounts are created/opened when the need arises for whatever purpose or situation the entity may have. For example, if your business is an airline company they will have to purchase airplanes, therefore even if an account is not listed below, a bookkeeper or accountant can create an account for a specific item, such as an asset account for airplanes. In order to understand how to classify an account into one of the five elements, a good understanding of the definitions of these accounts is required. 
Below are examples of some of the more common accounts that pertain to the five accounting elements: Asset accounts Asset accounts are economic resources which benefit the business/entity and will continue to do so. They are Cash, bank, accounts receivable, inventory, land, buildings/plant, machinery, furniture, equipment, supplies, vehicles, trademarks and patents, goodwill, prepaid expenses, prepaid insurance, debtors (people who owe us money, due within one year), VAT input etc. Two types of basic asset classification: Current assets: Assets which operate in a financial year or assets that can be used up, or converted within one year or less are called current assets. For example, Cash, bank, accounts receivable, inventory (people who owe us money, due within one year), prepaid expenses, prepaid insurance, VAT input and many more. Non-current assets: Assets that are not recorded in transactions or hold for more than one year or in an accounting period are called Non-current assets. For example, land, buildings/plant, machinery, furniture, equipment, vehicles, trademarks and patents, goodwill etc. Liability accounts Liability accounts record debts or future obligations a business or entity owes to others. When one institution borrows from another for a period of time, the ledger of the borrowing institution categorises the argument under liability accounts.<ref>Financial Accounting, Horngren, Harrison, Bamber, Best, Fraser Willet, pp. '14, 45, Pearson/PrenticeHall 2006.</ref> The basic classifications of liability accounts are: Current liability, when money only may be owed for the current accounting period or periodical. Examples include accounts payable, salaries and wages payable, income taxes, bank overdrafts, accrued expenses, sales taxes, advance payments (unearned revenue), debt and accrued interest on debt, customer deposits, VAT output, etc. Long-term liability, when money may be owed for more than one year. Examples include trust accounts, debenture, mortgage loans and more. Equity accounts Equity accounts record the claims of the owners of the business/entity to the assets of that business/entity. Capital, retained earnings, drawings, common stock, accumulated funds, etc. Income/revenue accounts Income accounts record all increases in Equity other than that contributed by the owner/s of the business/entity. Services rendered, sales, interest income, membership fees, rent income, interest from investment, recurring receivables, donation etc. Expense accounts Expense accounts record all decreases in the owners' equity which occur from using the assets or increasing liabilities in delivering goods or services to a customer – the costs of doing business. Telephone, water, electricity, repairs, salaries, wages, depreciation, bad debts, stationery, entertainment, honorarium, rent, fuel, utility, interest etc. Example Quick Services business purchases a computer for £500, on credit, from ABC Computers. Recognize the following transaction for Quick Services in a ledger account (T-account): Quick Services has acquired a new computer which is classified as an asset within the business. According to the accrual basis of accounting, even though the computer has been purchased on credit, the computer is already the property of Quick Services and must be recognised as such. Therefore, the equipment account of Quick Services increases and is debited: As the transaction for the new computer is made on credit, the payable "ABC Computers" has not yet been paid. 
As a result, a liability is created within the entity's records. Therefore, to balance the accounting equation the corresponding liability account is credited: The above example can be written in journal form: The journal entry "ABC Computers" is indented to indicate that this is the credit transaction. It is accepted accounting practice to indent credit transactions recorded within a journal. In the accounting equation form: A = E + L, 500 = 0 + 500 (the accounting equation is therefore balanced). Further examples A business pays rent with cash: You increase rent (expense) by recording a debit transaction, and decrease cash (asset) by recording a credit transaction. A business receives cash for a sale: You increase cash (asset) by recording a debit transaction, and increase sales (income) by recording a credit transaction. A business buys equipment with cash: You increase equipment (asset) by recording a debit transaction, and decrease cash (asset) by recording a credit transaction. A business borrows with a cash loan: You increase cash (asset) by recording a debit transaction, and increase loan (liability) by recording a credit transaction. A business pays salaries with cash: You increase salary (expenses) by recording a debit transaction, and decrease cash (asset) by recording a credit transaction. The totals show the net effect on the accounting equation and the double-entry principle, where the transactions are balanced. T-accounts The process of using debits and credits creates a ledger format that resembles the letter "T". The term "T-account" is accounting jargon for a "ledger account" and is often used when discussing bookkeeping. The reason that a ledger account is often referred to as a T-account is due to the way the account is physically drawn on paper (representing a "T"). The left column is for debit (Dr) entries, while the right column is for credit (Cr) entries. Contra account All accounts also can be debited or credited depending on what transaction has taken place. For example, when a vehicle is purchased using cash, the asset account "Vehicles" is debited and simultaneously the asset account "Bank or Cash" is credited due to the payment for the vehicle using cash. Some balance sheet items have corresponding "contra" accounts, with negative balances, that offset them. Examples are accumulated depreciation against equipment, and allowance for bad debts (also known as allowance for doubtful accounts) against accounts receivable. United States GAAP utilizes the term contra'' for specific accounts only and does not recognize the second half of a transaction as a contra, thus the term is restricted to accounts that are related. For example, sales returns and allowance and sales discounts are contra revenues with respect to sales, as the balance of each contra (a debit) is the opposite of sales (a credit). To understand the actual value of sales, one must net the contras against sales, which gives rise to the term net sales (meaning net of the contras). A more specific definition in common use is an account with a balance that is the opposite of the normal balance (Dr/Cr) for that section of the general ledger. An example is an office coffee fund: Expense "Coffee" (Dr) may be immediately followed by "Coffee – employee contributions" (Cr). Such an account is used for clarity rather than being a necessary part of GAAP (generally accepted accounting principles). 
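To tie the worked example and the "Further examples" above to the trial balance mentioned earlier, here is a short Python sketch. It is illustrative only; the amounts and account names are made up, and the posting logic is the same one-debit/one-credit pattern used throughout this article.

```python
# Each transaction is (description, debit account, credit account, amount).
transactions = [
    ("Buy computer on credit", "Equipment", "Accounts Payable", 500),
    ("Pay rent with cash",     "Rent expense", "Cash", 200),
    ("Cash sale",              "Cash", "Sales", 800),
    ("Borrow cash from bank",  "Cash", "Loan payable", 1000),
]

balances = {}
for _, debit_acct, credit_acct, amount in transactions:
    balances[debit_acct] = balances.get(debit_acct, 0) + amount    # debit side
    balances[credit_acct] = balances.get(credit_acct, 0) - amount  # credit side

# Trial balance: net debit balances in one column, net credit balances in the other.
debit_total = sum(b for b in balances.values() if b > 0)
credit_total = -sum(b for b in balances.values() if b < 0)

for account, balance in balances.items():
    side = "Dr" if balance >= 0 else "Cr"
    print(f"{account:20s} {abs(balance):8.2f} {side}")
print(f"Totals: debits {debit_total:.2f}, credits {credit_total:.2f}")
assert debit_total == credit_total  # the books balance
```

Because every posting adds the same amount to a debit side and a credit side, the two totals of the trial balance necessarily agree.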
Accounts classification Each of the following accounts is either an Asset (A), Contra Account (CA), Liability (L), Shareholders' Equity (SE), Revenue (Rev), Expense (Exp) or Dividend (Div) account. Account transactions can be recorded as a debit to one account and a credit to another account using the modern or traditional approaches in accounting and following are their normal balances: References External links Accounting systems Accounting terminology Accounting journals and ledgers
57070883
https://en.wikipedia.org/wiki/DNS%20over%20TLS
DNS over TLS
DNS over TLS (DoT) is a network security protocol for encrypting and wrapping Domain Name System (DNS) queries and answers via the Transport Layer Security (TLS) protocol. The goal of the method is to increase user privacy and security by preventing eavesdropping and manipulation of DNS data via man-in-the-middle attacks. While DNS-over-TLS is applicable to any DNS transaction, it was first standardized for use between stub or forwarding resolvers and recursive resolvers, in May 2016. Subsequent IETF efforts specify the use of DoT between recursive and authoritative servers ("Authoritative DNS-over-TLS" or "ADoT") and a related implementation between authoritative servers (Zone Transfer-over-TLS or "xfr-over-TLS").

Server software

BIND supports DoT connections as of version 9.17. Earlier versions offered DoT capability by proxying through stunnel. Unbound has supported DNS over TLS since 22 January 2018. Unwind has supported DoT since 29 January 2019. With Android Pie's support for DNS over TLS, some ad blockers now support the encrypted protocol as a relatively easy way to reach their services, compared with the workarounds typically used otherwise, such as VPNs and proxy servers. Simple DNS Plus, a resolving and authoritative DNS server for Windows, added support for DoT in version 9.0, released 28 September 2021.

Client software

Android clients running Android 9 (Pie) or newer support DNS over TLS and will use it by default if the network infrastructure, for example the ISP, supports it. In April 2018, Google announced that Android Pie would include support for DNS over TLS, allowing users to set a DNS server phone-wide on both Wi-Fi and mobile connections, an option that was historically only possible on rooted devices. DNSDist, from PowerDNS, also announced support for DNS over TLS in version 1.3.0. Linux and Windows users can use DNS over TLS as a client through the NLnet Labs stubby daemon or Knot Resolver. Alternatively, they may install getdns-utils to use DoT directly with the getdns_query tool. The unbound DNS resolver by NLnet Labs also supports DNS over TLS. Apple's iOS 14 introduced OS-level support for DNS over TLS (and DNS over HTTPS). iOS does not allow manual configuration of DoT servers, and requires the use of a third-party application to make configuration changes. systemd-resolved is a Linux-only implementation that can be configured to use DNS over TLS by editing /etc/systemd/resolved.conf and enabling the setting DNSOverTLS. Most major Linux distributions have systemd installed by default. personalDNSfilter is an open source DNS filter with support for DoT and DNS over HTTPS (DoH) for Java-enabled devices including Android. Nebulo is an open source DNS changer application for Android which supports both DoT and DoH.

Public resolvers

DNS-over-TLS was first implemented in a public recursive resolver by Quad9 in 2017. Other recursive resolver operators such as Google and Cloudflare followed suit in subsequent years, and it is now a broadly supported feature generally available in most large recursive resolvers.

Criticisms and implementation considerations

DoT can impede analysis and monitoring of DNS traffic for cybersecurity purposes. DoT has been used to bypass parental controls which operate at the (unencrypted) standard DNS level; Circle, a parental control router which relies on DNS queries to check domains against a blocklist, blocks DoT by default for this reason. 
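From an implementation standpoint, a DoT client needs nothing DNS-specific beyond what plain DNS over TCP already uses: the same wire-format messages, each preceded by a two-byte length prefix, sent through a TLS session to TCP port 853. The following Python sketch is an illustration only; the resolver name dns.google, the query name, and the helper functions are example choices, not part of any standard DoT library.

```python
import socket
import ssl
import struct

def build_query(name, qtype=1, qclass=1):
    """Build a minimal DNS query (header + one question) in wire format."""
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)  # ID, RD flag set, QDCOUNT=1
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, qclass)

def recv_exact(sock, n):
    """Read exactly n bytes from a (TLS-wrapped) socket."""
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("connection closed early")
        data += chunk
    return data

def dot_query(server, name, port=853):
    """Send one DNS query over TLS (DoT) and return the raw DNS response bytes."""
    context = ssl.create_default_context()               # verifies the resolver's certificate
    with socket.create_connection((server, port), timeout=5) as raw:
        with context.wrap_socket(raw, server_hostname=server) as tls:
            query = build_query(name)
            tls.sendall(struct.pack("!H", len(query)) + query)  # 2-byte length prefix, as in DNS over TCP
            length = struct.unpack("!H", recv_exact(tls, 2))[0]
            return recv_exact(tls, length)

if __name__ == "__main__":
    # "dns.google" is used here purely as an example of a public DoT resolver.
    response = dot_query("dns.google", "example.com")
    print(f"received {len(response)} bytes of DNS response over TLS")
```

Parsing the returned answer records is ordinary DNS message parsing and is omitted here; the point of the sketch is that only the transport (TLS on port 853) differs from conventional DNS over TCP.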
Some DNS providers do, however, offer filtering and parental controls along with support for both DoT and DoH. In that scenario, DNS queries are checked against block lists once they are received by the provider rather than before they leave the user's router. Encryption by itself does not protect privacy; it only protects against third-party observers and does not guarantee what the endpoints do with the (then decrypted) data. DoT clients do not necessarily query any authoritative name servers directly. The client may rely on the DoT server using traditional (port 53 or 853) queries to finally reach authoritative servers. Thus, DoT does not qualify as an end-to-end encrypted protocol; it is only hop-to-hop encrypted, and only if DNS over TLS is used consistently on every hop.

Alternatives

DNS over HTTPS (DoH) is a similar protocol standard for encrypting DNS queries, differing from DoT only in the methods used for encryption and delivery. Whether either protocol is superior on privacy and security grounds is debated; some argue that the merits of each depend on the specific use case. DNSCrypt is another network protocol that authenticates and encrypts DNS traffic, although it was never proposed to the Internet Engineering Task Force (IETF) with a Request for Comments (RFC).

See also
DNSCurve
Public recursive name server

References

External links
– Specification for DNS over Transport Layer Security (TLS)
– Usage Profiles for DNS over TLS and DNS over DTLS
DNS Privacy Project: dnsprivacy.org

Domain Name System
Internet protocols
Application layer protocols
Internet security
Transport Layer Security
41052931
https://en.wikipedia.org/wiki/Stephanie%20Forrest
Stephanie Forrest
Stephanie Forrest (born circa 1958) is an American computer scientist and director of the Biodesign Center for Biocomputing, Security and Society at the Biodesign Institute at Arizona State University. She was previously Distinguished Professor of Computer Science at the University of New Mexico in Albuquerque. She is best known for her work in adaptive systems, including genetic algorithms, computational immunology, biological modeling, automated software repair, and computer security. Biography After earning her BA from St. John's College in 1977, Forrest studied Computer and Communication Sciences at the University of Michigan, where she received her MS in 1982, and in 1985 her PhD, with a thesis entitled "A study of parallelism in the classifier system and its application to classification in KL-ONE semantic networks." After graduation Forrest worked for Teknowledge Inc. and at the Center for Nonlinear Studies of the Los Alamos National Laboratory. In 1990 she joined the University of New Mexico, where she was appointed Professor of Computer Science and directs the Computer Immune Systems Group, and the Adaptive Computation Laboratory. From 2006 to 2011 she chaired the Computer Science Department. In the 1990s she was also affiliated with the Santa Fe Institute, where she was Interim Vice President for the 1999–2000 term. In 1991, Forrest was awarded the NSF Presidential Young Investigator Award, and in 2009 she received the IFIP TC2 Manfred Paul Award for Excellence in Software. In 2011, she was awarded the ACM - AAAI Allen Newell Award. Work Forrest's research interests are in the field of "adaptive systems, including genetic algorithms, computational immunology, biological modeling, automated software repair, and computer security." According to the National Academies her research since the 1990s has included "developing the first practical anomaly intrusion-detection system; designing automated responses to cyberattacks; writing an early influential paper proposing automatic software diversity and introducing instruction-set randomization as a particular implementation; developing noncryptographic privacy-enhancing data representations; agent-based modeling of large-scale computational networks; and recently, work on automated repair of security vulnerabilities. She has conducted many computational modeling projects in biology, where her specialties are immunology and evolutionary diseases, such as Influenza and cancer." Selected bibliography Forrest has authored and co-authored many publications in her field of expertise. A selection: Forrest, Stephanie, et al. "Self-nonself discrimination in a computer." Research in Security and Privacy, 1994. Proceedings., 1994 IEEE Computer Society Symposium on. Ieee, 1994. Forrest, Stephanie, et al. "A sense of self for unix processes." Security and Privacy, 1996. Proceedings., 1996 IEEE Symposium on. IEEE, 1996. Hofmeyr, Steven A., Stephanie Forrest, and Anil Somayaji. "Intrusion detection using sequences of system calls." Journal of computer security 6.3 (1998): 151–180. Warrender, Christina, Stephanie Forrest, and Barak Pearlmutter. "Detecting intrusions using system calls: Alternative data models." Security and Privacy, 1999. Proceedings of the 1999 IEEE Symposium on. IEEE, 1999. Hofmeyr, Steven A., and Stephanie Forrest. "Architecture for an artificial immune system." Evolutionary computation 8.4 (2000): 443–473. 
References External links Stephanie Forrest at the University of New Mexico Stephanie Forrest at Arizona State University Year of birth missing (living people) Living people American computer scientists Complex systems scientists St. John's College (Annapolis/Santa Fe) alumni University of Michigan alumni University of New Mexico faculty 1950s births Computer security academics American women computer scientists Los Alamos National Laboratory personnel Santa Fe Institute people Researchers of artificial life American women academics 21st-century American women
29447553
https://en.wikipedia.org/wiki/PTC%20Creo%20Elements/Direct%20Drafting
PTC Creo Elements/Direct Drafting
Creo Elements/Direct Drafting, now owned by PTC and formerly called ME10, is a CAD software application exclusively for 2D drawings, especially in mechanical engineering and electrical engineering. The program was first developed by Hewlett Packard in Germany, and HP released the first version in 1986. Hewlett Packard MDD (Mechanical Design Division) continued the ME10 development. The first product designed using ME10 was the original HP DeskJet printer at the HP Vancouver Division. Creo Elements/Direct Drafting was originally developed for the Hewlett-Packard 98xx workstation family (also referred to as the Series 200) on HP's proprietary Pascal-based operating system and development environment, followed a few years later by a move to the HP-UX operating system. With the success of Microsoft Windows, a version was offered for this operating system, and some versions have also been developed for Linux. Today, Microsoft Windows is the standard platform for Creo Elements/Direct Drafting. In 2010 the product was renamed to Creo Elements/Direct Drafting (as opposed to the 3D product Creo Elements/Direct Modeling). Creo Elements/Direct Drafting is one of the most common 2D CAD programs for mechanical engineering in Germany, behind the market leader AutoCAD.

External links
Parametric Technology Corporation website
Creo Elements/Direct Modeling Personal Edition (free 3D CAD)

Computer-aided design software
Computer-aided design software for Windows
35075236
https://en.wikipedia.org/wiki/Vermont%20Information%20Processing
Vermont Information Processing
Vermont Information Processing, Inc. (VIP) is a small business in the Route Accounting software industry with a large software suite for soft drink bottlers, wine and beer distributors, and brewers. VIP's Route Accounting Software runs on IBM i in the Cloud and is accessed through Microsoft Windows; Mobile Solutions run on Windows Mobile, iOS, and Android. History Vermont Information Processing was founded in 1973 by Howard Aiken. The company purchased third shift computer time from a local bus company and focused on providing sales analysis services to distributors of soft drinks, wine, and beer. Distributors mailed retailer invoices to VIP where they were keyed into a mainframe computer for processing, the sales analysis reports were returned to the distributors via United States Postal Service mail. The transition from a service provider to a software development firm started in 1975 with the advent of affordable IBM mini computers for distributors. The original VIP software applications had an emphasis on route accounting functions as well as the backbone sales analysis suite. The mainstream use of handheld computers started in 1978 with beverage sales reps using the devices to enter retail orders in the field and transmit them to the minicomputers. In subsequent years, handheld computers were heavily used by warehouse and delivery personnel. In 1979, VIP grew to 15 employees, and moved to new headquarters in Colchester, Vermont. The business expanded to over 30 people by 1991 and the company built and occupied a 32,000 square foot office building. 2001 was a pivotal year for VIP. The company branched from its solid base in distributor software and started offering services for beverage suppliers. The VIP employee stock ownership plan (ESOP) also started in 2001 with 30% of the company being owned by the employees. VIP welcomed the change back to its roots of providing services to both distributors and suppliers, this time via a Cloud computing solution named ASP rather than the 1970s method of using the US Postal Service. VIP is a Provider for IBM. Axces Systems, a supplier services firm from New York, was acquired in 2002. In 2003 and 2005, VIP acquired beverage software competitors MicroVane of Kalamazoo, MI and Wholesaler Computer Systems (WCS) of St. Louis, MO. In 2015, VIP acquired BDN Systems from Neilsen to complement the growth of the Supplier Services. In 2007, VIP implemented regression testing to expedite the testing of new features and versions. VIP, now 100% employee owned, counts over 1,000 distributors and 800 suppliers as customers who use its software suite. The company has offices in Charlotte, Novato, St. Louis, and Kalamazoo in addition to the headquarters in Colchester. Route Accounting Software The core of VIP's software is the Route Accounting Software. Route accounting software is used by companies to keep track of product as it moves through the distribution process from the time it is received from the supplier to the time it is delivered to the retailer. Warehouse Management System VIP has a Warehouse Management System, WMS, that incorporates handheld technology that can transmit information about inventory movements and transactions in real-time to the Route Accounting Software. Day care VIP has had an on-site daycare facility providing education services to children of employees of the Colchester office since 1991. 
User's Conference VIP holds a User's Conference annually in the fall and it is attended by VIP Route Accounting System customers, business partners, hardware vendors, and other interested parties. References Companies based in Vermont Software companies of the United States
66830387
https://en.wikipedia.org/wiki/Android%2012
Android 12
Android 12 is the twelfth major release and 19th version of Android, the mobile operating system developed by the Open Handset Alliance led by Google. The first beta was released on May 18, 2021. Android 12 was released publicly on October 4, 2021, through Android Open Source Project (AOSP) and was released to supported Google Pixel devices on October 19, 2021. History Android 12 (internally codenamed Snow Cone) was announced in an Android blog posted on February 18, 2021. A developer preview was released immediately, with two additional ones planned the following two months. After that, four monthly beta releases were planned, beginning in May, the last one of them reaching platform stability in August, with general availability coming shortly after that. The second developer preview was released on March 17, 2021, followed by a third preview on April 21, 2021. The first beta build was then released on May 18, 2021. It was followed by beta 2 on June 9, 2021, which got a bugfix update to 2.1 on June 23. Then beta 3 was released on July 14, 2021, getting a bugfix update to beta 3.1 on July 26. Beta 4 was released on August 11, 2021. A fifth beta, not planned in the original roadmap, was released on September 8, 2021. Android 12 stable got released on the Android Open Source Project on October 4, getting its public over-the-air rollout on October 19, coinciding with the launch event for the Pixel 6. Android 12L In October 2021, Google announced Android 12L, an interim release of Android 12 including improvements specific for foldable phones, tablets, desktop-sized screens and Chromebooks, and modifications to the user interface to tailor it to larger screens. It is planned to launch in early 2022. Developer Preview 1 of Android 12L was released in October 2021, followed by Beta 1 in December 2021, Beta 2 in January 2022, and Beta 3 in February 2022. Features User interface Android 12 introduces a major refresh to the operating system's Material Design language branded as "Material You", which features larger buttons, an increased amount of animation, and a new style for home screen widgets. A feature, internally codenamed "monet", allows the operating system to automatically generate a color theme for system menus and supported apps using the colors of the user's wallpaper. The smart home and Wallet areas added to the power menu on Android 11 have been relocated to the notification shade, while Google Assistant is now activated by holding the power button. Android 12 also features native support for taking scrolling screenshots. In addition to the user interface, widgets on Android 12 are also updated with the new Material You design language. Platform Performance improvements have been made to system services such as the WindowManager, PackageManager, system server, and interrupts. It also adds accessibility improvements for those who are visually impaired. The Android Runtime has been added to Project Mainline, allowing it to be serviced via Play Store. Android 12 adds support for spatial audio, and MPEG-H 3D Audio, and will support transcoding of HEVC video for backwards compatibility with apps which do not support it. A "rich content insertion" API eases the ability to transfer formatted text and media between apps, such as via the clipboard. Third party app stores now have the ability to update apps without constantly asking the user for permission. 
Privacy OS-level machine learning functions are sandboxed within the "Android Private Compute Core", which is expressly prohibited from accessing networks. Apps requesting location data can now be restricted to having access only to "approximate" location data rather than "precise". Controls to prevent apps from using the camera and microphone system-wide have been added to the quick settings toggles. An indicator will also be displayed on-screen if they are active. See also iOS 15 macOS Monterey Windows 11 References External links Video: 60+ changes in Android 12 Android (operating system) 2021 software
2095183
https://en.wikipedia.org/wiki/Trapped%20ion%20quantum%20computer
Trapped ion quantum computer
A trapped ion quantum computer is one proposed approach to a large-scale quantum computer. Ions, or charged atomic particles, can be confined and suspended in free space using electromagnetic fields. Qubits are stored in stable electronic states of each ion, and quantum information can be transferred through the collective quantized motion of the ions in a shared trap (interacting through the Coulomb force). Lasers are applied to induce coupling between the qubit states (for single qubit operations) or coupling between the internal qubit states and the external motional states (for entanglement between qubits). The fundamental operations of a quantum computer have been demonstrated experimentally with the currently highest accuracy in trapped ion systems. Promising schemes in development to scale the system to arbitrarily large numbers of qubits include transporting ions to spatially distinct locations in an array of ion traps, building large entangled states via photonically connected networks of remotely entangled ion chains, and combinations of these two ideas. This makes the trapped ion quantum computer system one of the most promising architectures for a scalable, universal quantum computer. As of April 2018, the largest number of particles to be controllably entangled is 20 trapped ions.

History

The first implementation scheme for a controlled-NOT quantum gate was proposed by Ignacio Cirac and Peter Zoller in 1995, specifically for the trapped ion system. The same year, a key step in the controlled-NOT gate was experimentally realized at the NIST Ion Storage Group, and research in quantum computing began to take off worldwide. In 2021, researchers from the University of Innsbruck presented a quantum computing demonstrator that fits inside two 19-inch server racks, the world's first compact trapped ion quantum computer meeting quality standards.

Paul ion trap

The electrodynamic ion trap currently used in trapped ion quantum computing research was invented in the 1950s by Wolfgang Paul (who received the Nobel Prize for his work in 1989). Charged particles cannot be trapped in 3D by just electrostatic forces because of Earnshaw's theorem. Instead, an electric field oscillating at radio frequency (RF) is applied, forming a potential with the shape of a saddle spinning at the RF frequency. If the RF field has the right parameters (oscillation frequency and field strength), the charged particle becomes effectively trapped at the saddle point by a restoring force, with the motion described by a set of Mathieu equations. This saddle point is the point of minimized energy magnitude for the ions in the potential field. The Paul trap is often described as a harmonic potential well that traps ions in two dimensions (assume x and y without loss of generality) and does not trap ions in the z direction. When multiple ions are at the saddle point and the system is at equilibrium, the ions are only free to move in z. Therefore, the ions will repel each other and create a vertical configuration in z, the simplest case being a linear strand of only a few ions. Coulomb interactions of increasing complexity will create a more intricate ion configuration if many ions are initialized in the same trap. Furthermore, the additional vibrations of the added ions greatly complicate the quantum system, which makes initialization and computation more difficult. Once trapped, the ions should be cooled such that the Lamb-Dicke condition is satisfied (see Lamb Dicke regime). This can be achieved by a combination of Doppler cooling and resolved sideband cooling. 
At this very low temperature, vibrational energy in the ion trap is quantized into phonons by the energy eigenstates of the ion strand, which are called the center of mass vibrational modes. A single phonon's energy is given by the relation E = ħω, where ω is the angular frequency of the vibrational mode. These quantum states occur when the trapped ions vibrate together and are completely isolated from the external environment. If the ions are not properly isolated, noise can result from ions interacting with external electromagnetic fields, which creates random movement and destroys the quantized energy states.

Requirements for quantum computation

The full requirements for a functional quantum computer are not entirely known, but there are many generally accepted requirements. David DiVincenzo outlined several of these criteria for quantum computing.

Qubits

Any two-level quantum system can form a qubit, and there are two predominant ways to form a qubit using the electronic states of an ion:
Two ground state hyperfine levels (these are called "hyperfine qubits")
A ground state level and an excited level (these are called the "optical qubits")

Hyperfine qubits are extremely long-lived (decay time of the order of thousands to millions of years) and phase/frequency stable (traditionally used for atomic frequency standards). Optical qubits are also relatively long-lived (with a decay time of the order of a second), compared to the logic gate operation time (which is of the order of microseconds). The use of each type of qubit poses its own distinct challenges in the laboratory.

Initialization

An ion's qubit can be prepared in a specific state using a process called optical pumping. In this process, a laser couples the ion to some excited states which eventually decay to one state which is not coupled to the laser. Once the ion reaches that state, it has no excited levels to couple to in the presence of that laser and, therefore, remains in that state. If the ion decays to one of the other states, the laser will continue to excite the ion until it decays to the state that does not interact with the laser. This initialization process is standard in many physics experiments and can be performed with extremely high fidelity (>99.9%). The system's initial state for quantum computation can therefore be described by the ions in their hyperfine and motional ground states, resulting in an initial center of mass phonon state of |0⟩ (zero phonons).

Measurement

Measuring the state of the qubit stored in an ion is quite simple. Typically, a laser is applied to the ion that couples only one of the qubit states. When the ion collapses into this state during the measurement process, the laser will excite it, resulting in a photon being released when the ion decays from the excited state. After decay, the ion is continually excited by the laser and repeatedly emits photons. These photons can be collected by a photomultiplier tube (PMT) or a charge-coupled device (CCD) camera. If the ion collapses into the other qubit state, then it does not interact with the laser and no photon is emitted. By counting the number of collected photons, the state of the ion may be determined with a very high accuracy (>99.9%).

Arbitrary single qubit rotation

One of the requirements of universal quantum computing is to coherently change the state of a single qubit. For example, this can transform a qubit starting out in |0⟩ into any arbitrary superposition of |0⟩ and |1⟩ defined by the user. 
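As a purely numerical illustration of such a rotation (generic qubit linear algebra, not a simulation of the ion-laser interaction described next; numpy is assumed to be available), the following sketch applies a π/2 rotation about the y axis to a qubit prepared in |0⟩ and produces an equal superposition of |0⟩ and |1⟩:

```python
import numpy as np

# Pauli Y matrix; rotations about the y axis of the Bloch sphere are generated by it.
sigma_y = np.array([[0, -1j], [1j, 0]])

def rotation(sigma, theta):
    """R(theta) = exp(-i*theta/2 * sigma) = cos(theta/2)*I - i*sin(theta/2)*sigma."""
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sigma

ket0 = np.array([1, 0], dtype=complex)        # qubit initialised in |0>

# A pi/2 rotation about y takes |0> to (|0> + |1>)/sqrt(2).
state = rotation(sigma_y, np.pi / 2) @ ket0
print(np.round(state, 3))                     # [0.707+0.j  0.707+0.j]
print(np.round(np.abs(state) ** 2, 3))        # measurement probabilities: [0.5  0.5]
```

Physically, the rotation angle is set by how long the ion is exposed to the driving field, and the rotation axis by the field's phase, as discussed in the next paragraph.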
In a trapped ion system, this is often done using magnetic dipole transitions or stimulated Raman transitions for hyperfine qubits and electric quadrupole transitions for optical qubits. The term "rotation" alludes to the Bloch sphere representation of a qubit pure state. Gate fidelity can be greater than 99%. Rotation operators about the axes of the Bloch sphere can be applied to individual ions by manipulating the frequency of an external electromagnetic field and exposing the ions to the field for specific amounts of time. These controls create a Hamiltonian of the form H = (ħΩ/2)(S+ e^{iφ} + S− e^{−iφ}), where S+ and S− are the raising and lowering operators of spin (see Ladder operator). These rotations are the universal building blocks for single-qubit gates in quantum computing. To obtain the Hamiltonian for the ion-laser interaction, apply the Jaynes–Cummings model. Once the Hamiltonian is found, the formula for the unitary operation performed on the qubit can be derived using the principles of quantum time evolution. Although this model utilizes the rotating wave approximation, it proves to be effective for the purposes of trapped-ion quantum computing. Two qubit entangling gates Besides the controlled-NOT gate proposed by Cirac and Zoller in 1995, many equivalent, but more robust, schemes have been proposed and implemented experimentally since. Recent theoretical work by J. J. García-Ripoll, Cirac, and Zoller has shown that there are no fundamental limitations to the speed of entangling gates, but gates in this impulsive regime (faster than 1 microsecond) have not yet been demonstrated experimentally. The fidelity of these implementations has been greater than 99%. Scalable trap designs Quantum computers must be capable of initializing, storing, and manipulating many qubits at once in order to solve difficult computational problems. However, as previously discussed, only a finite number of qubits can be stored in each trap while still maintaining their computational abilities. It is therefore necessary to design interconnected ion traps that are capable of transferring information from one trap to another. Ions can be separated from a shared interaction region into individual storage regions and brought back together without losing the quantum information stored in their internal states. Ions can also be made to turn corners at a "T" junction, allowing a two-dimensional trap array design. Semiconductor fabrication techniques have also been employed to manufacture the new generation of traps, making the 'ion trap on a chip' a reality. An example is the quantum charge-coupled device (QCCD) designed by D. Kielpinski, C. Monroe, and D. J. Wineland. QCCDs resemble mazes of electrodes with designated areas for storing and manipulating qubits. The variable electric potential created by the electrodes can both trap ions in specific regions and move them through the transport channels, which removes the need to contain all ions in a single trap. Ions in the QCCD's memory region are isolated from any operations, so the information contained in their states is kept for later use. Gates, including those that entangle two ion states, are applied to qubits in the interaction region by the method already described in this article. Decoherence in scalable traps When an ion is transported between regions of an interconnected trap and is subjected to a nonuniform magnetic field, decoherence can occur in the form of the equation below (see Zeeman effect). This effectively changes the relative phase of the quantum state.
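A hedged sketch of the state transformation in question; the precise expression for the accumulated phase α depends on the qubit's magnetic moment and on the field along the transport path, which are not specified in the text.

```latex
% An equal superposition acquires a path-dependent relative phase alpha during transport
% through a nonuniform magnetic field B(t), via the Zeeman shift delta-omega of the qubit splitting.
\frac{1}{\sqrt{2}}\bigl(|\uparrow\rangle + |\downarrow\rangle\bigr)
\;\longrightarrow\;
\frac{1}{\sqrt{2}}\bigl(|\uparrow\rangle + e^{i\alpha}|\downarrow\rangle\bigr),
\qquad
\alpha = \int \delta\omega\bigl(B(t)\bigr)\,dt
```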
The up and down arrows correspond to a general superposition qubit state, in this case the ground and excited states of the ion. Additional relative phases could arise from physical movements of the trap or the presence of unintended electric fields. If the user could determine the parameter α, accounting for this decoherence would be relatively simple, since known quantum information processes exist for correcting a relative phase. However, because the α acquired from the interaction with the magnetic field is path-dependent, the problem is highly complex. Considering the multiple ways that decoherence of a relative phase can be introduced in an ion trap, re-expressing the ion state in a new basis that minimizes decoherence is one way to mitigate the issue. One way to combat decoherence is to represent the quantum state in a decoherence-free subspace (DFS), with basis states |↑↓⟩ and |↓↑⟩. The DFS is the subspace of two-ion states with the property that if both ions acquire the same relative phase, the total quantum state in the DFS is unaffected. Challenges Trapped ion quantum computers theoretically meet all of DiVincenzo's criteria for quantum computing, but implementation of the system can be quite difficult. The main challenges facing trapped ion quantum computing are the initialization of the ion's motional states and the relatively brief lifetimes of the phonon states. Decoherence also proves challenging to eliminate, and arises when the qubits interact undesirably with the external environment. CNOT gate implementation The controlled NOT gate is a crucial component of quantum computing, as any quantum gate can be created by a combination of CNOT gates and single-qubit rotations. It is therefore important that a trapped-ion quantum computer can perform this operation by meeting the following three requirements. First, the trapped ion quantum computer must be able to perform arbitrary rotations on qubits, which were already discussed in the "Arbitrary single qubit rotation" section. The next component of a CNOT gate is the controlled phase-flip gate, or controlled-Z gate (see quantum logic gate). In a trapped ion quantum computer, the state of the center-of-mass phonon functions as the control qubit, and the internal atomic spin state of the ion is the working qubit. The phase of the working qubit is therefore flipped if the phonon qubit is in the state |1⟩. Lastly, a SWAP gate must be implemented, acting on both the ion state and the phonon state. Two alternative schemes for realizing CNOT gates are presented in Michael Nielsen and Isaac Chuang's Quantum Computation and Quantum Information and in Cirac and Zoller's Quantum Computation with Cold Trapped Ions. References Additional Resources Trapped ion computer on arxiv.org Quantum information science
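As a basis-independent check of the decomposition outlined in the CNOT gate implementation section (single-qubit rotations plus a controlled phase flip), the following Python sketch verifies numerically that sandwiching a controlled-Z between Hadamard rotations on the target reproduces the CNOT matrix. It is generic linear algebra, not trapped-ion control code.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard (a single-qubit rotation)
I = np.eye(2)
CZ = np.diag([1.0, 1.0, 1.0, -1.0])               # controlled phase-flip (controlled-Z)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Hadamard on the target qubit, then controlled-Z, then Hadamard again yields CNOT.
constructed = np.kron(I, H) @ CZ @ np.kron(I, H)
print(np.allclose(constructed, CNOT))             # True
```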
10663011
https://en.wikipedia.org/wiki/Panda3D
Panda3D
Panda3D is a game engine that includes graphics, audio, I/O, collision detection, and other abilities relevant to the creation of 3D games. Panda3D is free software under the revised BSD license. Panda3D's intended game-development language is Python. The engine itself is written in C++, and utilizes an automatic wrapper-generator to expose the complete functionality of the engine in a Python interface. This approach gives a developer the advantages of Python development, such as rapid development and advanced memory management, but keeps the performance of a compiled language in the engine core. For instance, the engine is integrated with Python's garbage collector, and engine structures are automatically managed. The manual and the sample programs use Python by default, with C++ available as an alternate. Both languages are fully supported. Python is the most commonly used language by developers, but C++ is also common. The users of Panda3D include the developers of several large commercial games, a few open source projects, and a number of university courses that leverage Panda3D's short learning curve. The community is small but active, and questions on the forum are generally answered quickly. History The Disney VR studio is a branch of Disney that was created to build 3D attractions for Disney theme parks. They built an attraction called "Aladdin's Magic Carpet," and the engine they created for that eventually became Panda3D. The engine in its current form bears little resemblance to those early years. Over time, Panda3D was used for additional VR rides at Disney theme parks, and was eventually used in the creation of Toontown Online, an online game set in a cartoon world, and later for the second MMORPG, Pirates of the Caribbean Online. In 2002, the engine was released as open source. According to the authors, this was so that they "could more easily work with universities on Virtual Reality research projects." However, it took some time for Panda3D to take off as an open-source project. From the article: The system, although quite usable by the team that developed it, was not quite "open source ready." There were several interested users, but building and installing the system was incredibly complex, and there was little in the way of documentation or sample code, so there was no significant open source community right away. However, the open-sourcing of the engine allowed Carnegie Mellon's Entertainment Technology Center to join in the development of the engine. While Disney engineers continued to do the bulk of the development, the Carnegie-Mellon team built a role for itself polishing the engine for public consumption, writing documentation, and adding certain high-end features such as shaders. Panda3D's name was once an acronym: "Platform Agnostic Networked Display Architecture." However, since that phrase has largely lost its meaning, the word "Panda3D" is rarely thought of as an acronym any more. Design Panda3D is a scene graph engine. This means that the virtual world is initially an empty Cartesian space into which the game programmer inserts 3D models. Panda3D does not distinguish between "large" 3D models, such as the model of an entire dungeon or island, and "small" 3D models, such as a model of a table or a sword. Both large and small models are created using a standard modeling program such as Blender, 3ds Max, or Maya. The models are then loaded into Panda3D and inserted into the Cartesian space. 
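A minimal sketch of the workflow just described, loading a model and inserting it into the scene graph, based on the pattern used in Panda3D's introductory examples; the model path and the transform values are illustrative.

```python
from direct.showbase.ShowBase import ShowBase

class MyApp(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        # Load a model and attach it to the scene graph root ("render").
        scene = self.loader.loadModel("models/environment")  # illustrative path
        scene.reparentTo(self.render)
        # Position and scale it within the Cartesian space of the scene.
        scene.setScale(0.25)
        scene.setPos(-8, 42, 0)

app = MyApp()
app.run()
```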
The Panda3D scene graph exposes the functionality of OpenGL and DirectX in a fairly literal form. For instance, OpenGL and DirectX both have fog capabilities. To enable fog in Panda3D, one simply stores the fog parameters on a node in the scene graph. The fog parameters exactly match the parameters of the equivalent calls in the underlying APIs. In this way, Panda3D can be seen as a thin wrapper around the lower-level APIs. Where it differs from them is that it stores the scene, whereas OpenGL and DirectX do not. Of course, it also provides higher-level operators, such as loading models, executing animations, detecting collisions, and the like. Panda3D was first engineered before the existence of vertex and pixel shaders. It acquired support for manually written shaders in 2005. However, users have been slow to leverage modern per-pixel lighting techniques in their games. The developers theorize that this is because shader programming can be quite difficult, and that many game developers want the engine to handle it automatically. To remedy this situation, the Panda3D developers have recently given Panda3D the ability to synthesize shaders automatically. This synthesis occurs if the 3D modeler marks a model for per-pixel lighting, or if the modeler applies a normal map, gloss map, self-illumination map, or other capability that exceeds the capabilities of the fixed-function pipeline. The intent of the synthesis is to render the model as the modeler intended, without any intervention from the programmer. Non-graphical capabilities Panda3D provides capabilities other than 3D rendering. Chief among these are: Performance analysis tools Scene graph exploration tools Debugging tools A complete art export/import pipeline 3D Audio, using either FMOD, OpenAL or Miles Sound System Collision detection Physics system, and full integration for the Open Dynamics Engine and Bullet integration Keyboard and Mouse support Support for I/O devices Finite state machines GUI Networking Artificial intelligence Software license Summary Panda3D is open source and is, as of May 28, 2008, free software under the revised BSD license. Releases prior to that date are not considered free software due to certain errors in the design of the old Panda3D license. Despite this, those older releases of Panda3D can also be used for both free and commercial game development at no financial cost. Evolution In 2002, when the engine was open sourced, the goal of the developers was to create a free software license. However, the license had a few flaws that made it non-free: it arguably required submitting changes to [email protected], and it explicitly prohibited the export of the software to various nations against which the United States had trade embargoes. On May 28, 2008, the trunk of Panda3D development switched to the BSD license. However, old releases still use the old license. Panda3D makes use of several third-party libraries whose licenses are not free software, including FMOD, Nvidia Cg, DirectX, and MFC. Most of these modules can be easily excluded from the installation, however. 
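The fog example from the Design section can be sketched as follows; the pattern of creating a Fog object, setting its parameters, and assigning it to a scene-graph node follows Panda3D's documented API, while the particular model path, color, and density values are arbitrary.

```python
from direct.showbase.ShowBase import ShowBase
from panda3d.core import Fog

class FoggyApp(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        model = self.loader.loadModel("models/environment")  # illustrative path
        model.reparentTo(self.render)
        # The fog parameters are stored on a scene-graph node, mirroring the
        # underlying OpenGL/DirectX fog state as described in the Design section.
        fog = Fog("scene-fog")
        fog.setColor(0.5, 0.5, 0.5)
        fog.setExpDensity(0.04)
        self.render.setFog(fog)

FoggyApp().run()
```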
Projects employing Panda3D Toontown Online (defunct) and their private servers Pirates of the Caribbean Online (defunct) and their private servers Ghost Pirates of Vooju Island A Vampyre Story See also Blender Game Engine Pygame VRPN References External links Cross-platform free software Cross-platform software Disney technology Free 3D graphics software Free game engines Free software programmed in C++ Game engines for Linux Python (programming language)-scriptable game engines Software using the BSD license
29417595
https://en.wikipedia.org/wiki/Samba%20TV
Samba TV
Samba TV (formerly Flingo) is an advertising and analytics company headquartered in San Francisco, California. It was founded in 2008 by early employees of BitTorrent, including Samba TV's current CEO, Ashwin Navin. It develops software for televisions, set-top boxes, smartphones and tablets to enable interactive television through personalization. Through its portfolio of applications and TV platform technologies, Samba TV is built directly into the TV or set-top box and can recognize onscreen content, live or time-shifted, and make relevant information available to users at their request. The service is available only after a user activates it on a device, and is supported by interest-based advertising delivered on the television or on devices within the household. Through APIs and SDKs for mobile application developers, Samba TV is usable on a second screen or on the TV itself. Samba TV has a global addressable footprint of 46 million devices, 28 million of which are in the United States. Its applications are available on over 30 million screens in 118 countries. The company has raised over $40 million in capital from Disney, Time Warner, Interpublic Group, Liberty Global, MDC Partners, A+E Networks and Union Grove Venture Partners. History The company was founded as Flingo in 2008 to help media companies like Showtime, FOX, A+E Networks, TMZ, Revision3, PBS and CBS develop apps on smart TVs that synchronize with linear broadcast programming and non-linear media. In January 2012, Samba TV launched one-click sharing from the television on social networks like Facebook and Twitter. In January 2013, the company announced further developments and features and named its interactive TV platform Samba. Acquisitions In September 2013, the company adopted the Samba TV name. In October 2017, The Walt Disney Company became a strategic investor in Samba TV to better understand TV viewership and engagement with advertising. In December 2018, Samba TV acquired Screen6, a firm offering real-time, cross-device identity resolution. In August 2019, Samba TV acquired Wove to help direct-to-consumer brands with TV ad targeting. In the same month, Samba TV acquired Axwave, an international software development company. Partnerships In 2016, Samba TV partnered with MediaMath. In 2017, Samba TV partnered with video ad serving platform SpotX and media/advertising research company Kantar. In 2019, Samba TV partnered with Twitter to help measure the social network's effectiveness in driving tune-in. That same year, Samba TV also partnered with video streaming aggregator Reelgood. In 2020, Samba TV partnered with global programmatic media partner MiQ, global technology company The Trade Desk, media measurement and analytics company Comscore, Amazon Web Services and TiVo. That same year, Samba TV also partnered with point-of-sale data provider Catalina and paper product manufacturer Georgia-Pacific to measure cross-channel media spend. In 2021, Samba TV partnered with online advertising software company PubMatic, as well as television measurement and analytics company 605. Products In July 2021, Samba TV launched its global Real-time TV Viewership Dashboard, an interactive TV analytics dashboard featuring geographic and demographic analysis of viewership in real time across the world, starting with four of the largest media markets: the U.S., U.K., Germany, and Australia.
Also in 2021, Samba TV announced its proprietary identifier, SambaID, as part of its Samba TV Identity solution, giving marketers, publishers and platforms the ability to transact with each other using a variety of different currencies to optimize against. Programmatic research technology firm Lucid later announced its use of SambaID for clients like Digitas and Wavemaker. Customers Samba TV's technology is integrated into 24 Smart TV brands globally. These brands are listed on their website as LG, Philips, Sony, Toshiba, beko, Magnavox, TCL, Grundig, Sanyo, AOC, Seiki, Element, Sharp, Westinghouse, Vestel, Panasonic, Hitachi, Finlux, Telefunken, Digihome, JVC, Luxor, Techwood, and Regal. Features Samba TV tracks what appears on the users' TV by reading pixels and utilizing this data for personalized recommendations on the TV or mobile apps connected to the television. This capability extends to streaming programs and even video games played on the television. In January 2021, Samba TV  introduced Picture Perfect℠ – an artificial intelligence (AI) technology that optimizes picture quality in real time for gaming, live sports, movies and more. Designed to be embedded within the TV, Picture Perfect will recognize and optimize the quality of the content playing on the screen in real time, with or without an internet connection. Awards In 2021, Samba TV was chosen by Adweek readers as "Best in Measurement Solutions" during the 2021 Adweek Readers' Choice: Best of Tech Partner Awards. Ownership Samba TV was incorporated as Free Stream Media Corp. in 2008, a company founded by Ashwin Navin, David Harrison, Alvir Navin, Omar Zennadi and Todd Johnson. According to the public source code repository, Flingo's open source client was written by David Harrison and Omar Zennadi in Python, and is free software licensed under the GPL. In February 2012, Flingo announced a $7 million Series A round investment from August Capital. In May 2012, Flingo added additional investors, closing at $8 million, including entrepreneur Mark Cuban and Gary Lauder. Cuban discovered Flingo at CES 2012 when he saw a crowd in the Flingo booth watching a demonstration of its SyncApps technology. The funding helped Flingo expand its presence with smart TV and device manufacturers, building on its existing partnerships. In April 2015, Interpublic Group announced a strategic investment in Samba TV. Subsequently, Samba TV announced more strategic investors totaling $30 million in a Series B round coming from Liberty Global, Disney, Warner Media, Interpublic Group, MDC Partners, A+E Networks and Union Grove Venture Partners. See also Smart TV Interactive television Social television Enhanced TV Digital Video Fingerprinting Second screen Automatic content recognition References External links Company website Official Website for Flingo Open Source project Official Website for Free Stream Media Corp. Free software programmed in Python MacOS multimedia software Television technology
31868028
https://en.wikipedia.org/wiki/CumFreq
CumFreq
In statistics and data analysis the application software CumFreq is a tool for cumulative frequency analysis of a single variable and for probability distribution fitting. Originally the method was developed for the analysis of hydrological measurements of spatially varying magnitudes (e.g. hydraulic conductivity of the soil) and of magnitudes varying in time (e.g. rainfall, river discharge) to find their return periods. However, it can be used for many other types of phenomena, including those that contain negative values. Software features CumFreq uses the plotting position approach to estimate the cumulative frequency of each of the observed magnitudes in a data series of the variable. The computer program allows determination of the best fitting probability distribution. Alternatively it provides the user with the option to select the probability distribution to be fitted. The following probability distributions are included: normal, lognormal, logistic, loglogistic, exponential, Cauchy, Fréchet, Gumbel, Pareto, Weibull, Generalized extreme value distribution, Laplace distribution, Burr distribution (Dagum mirrored), Dagum distribution (Burr mirrored), Gompertz distribution, Student distribution and other. Another characteristic of CumFreq is that it provides the option to use two different probability distributions, one for the lower data range, and one for the higher. The ranges are separated by a break-point. The use of such composite (discontinuous) probability distributions can be useful when the data of the phenomenon studied were obtained under different conditions. During the input phase, the user can select the number of intervals needed to determine the histogram. He may also define a threshold to obtain a truncated distribution. The output section provides a calculator to facilitate interpolation and extrapolation. Further it gives the option to see the Q–Q plot in terms of calculated and observed cumulative frequencies. ILRI provides examples of application to magnitudes like crop yield, watertable depth, soil salinity, hydraulic conductivity, rainfall, and river discharge. Generalizing distributions The program can produce generalizations of the normal, logistic, and other distributions by transforming the data using an exponent that is optimized to obtain the best fit. This feature is not common in other distribution-fitting software which normally include only a logarithmic transformation of data obtaining distributions like the lognormal and loglogistic. Generalization of symmetrical distributions (like the normal and the logistic) makes them applicable to data obeying a distribution that is skewed to the right (using an exponent <1) as well as to data obeying a distribution that is skewed to the left (using an exponent >1). This enhances the versatility of symmetrical distributions. Inverting distributions Skew distributions can be mirrored by distribution inversion (see survival function, or complementary distribution function) to change the skewness from positive to negative and vice versa. This amplifies the number of applicable distributions and increases the chance of finding a better fit. CumFreq makes use of that opportunity. Shifting distributions When negative data are present that are not supported by a probability distribution, the model performs a distribution shift to the positive side while, after fitting, the distribution is shifted back. 
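The plotting-position approach described above can be sketched in a few lines of Python. This is an independent illustration using SciPy, not CumFreq's own code; the Weibull plotting position r/(n+1), the choice of a Gumbel candidate distribution, and the sample values are assumptions made for the example.

```python
import numpy as np
from scipy import stats

def plotting_positions(data):
    """Cumulative frequency estimates from ranks (Weibull plotting position r/(n+1))."""
    data = np.sort(np.asarray(data, dtype=float))
    ranks = np.arange(1, len(data) + 1)
    return data, ranks / (len(data) + 1.0)

# Illustrative annual-maximum rainfall values (mm); not real observations.
sample = [41, 55, 38, 67, 49, 73, 58, 90, 44, 62, 51, 80]
x, cumfreq = plotting_positions(sample)

# Fit a candidate distribution and compare its CDF with the empirical frequencies.
loc, scale = stats.gumbel_r.fit(sample)
fitted = stats.gumbel_r.cdf(x, loc=loc, scale=scale)
for xi, emp, fit in zip(x, cumfreq, fitted):
    print(f"{xi:5.1f}  empirical={emp:.3f}  fitted={fit:.3f}")
```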
Confidence belts The software employs the binomial distribution to determine the confidence belt of the corresponding cumulative distribution function. The prediction of the return period, which is of interest in time series analysis, is also accompanied by a confidence belt. The construction of confidence belts is not found in most other software. The figure to the right shows the variation that may occur when obtaining samples of a variate that follows a certain probability distribution. The data were provided by Benson. The confidence belt around an experimental cumulative frequency or return period curve gives an impression of the region in which the true distribution may be found. It also clarifies that the experimentally found best-fitting probability distribution may deviate from the true distribution. Goodness of fit CumFreq produces a list of distributions ranked by goodness of fit. Histogram and density function From the cumulative distribution function (CDF) one can derive a histogram and the probability density function (PDF). Calculator The software offers the option to use a probability distribution calculator. The cumulative frequency and the return period are given as a function of the data value entered as input. In addition, the confidence intervals are shown. Conversely, the data value is presented when the cumulative frequency or the return period is given. See also Distribution fitting References Statistical software Regression and curve fitting software Freeware
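One way to reproduce the binomial-based confidence belt idea described above is the Clopper-Pearson interval for the cumulative probability at each ranked observation. This Python sketch is illustrative only; the text does not give the exact formula CumFreq uses, so the plotting position and interval construction here are assumptions.

```python
import numpy as np
from scipy import stats

def confidence_belt(n, level=0.95):
    """Clopper-Pearson bounds on the cumulative probability at each rank r of n observations."""
    alpha = 1.0 - level
    r = np.arange(1, n + 1)
    lower = stats.beta.ppf(alpha / 2, r, n - r + 1)
    upper = stats.beta.ppf(1 - alpha / 2, r + 1, n - r)
    upper[-1] = 1.0  # degenerate case at r = n: the upper bound is 1 by definition
    return r / (n + 1.0), lower, upper

positions, lo, hi = confidence_belt(12)
for p, l, u in zip(positions, lo, hi):
    print(f"plotting position {p:.3f}: belt [{l:.3f}, {u:.3f}]")
```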
3594813
https://en.wikipedia.org/wiki/Dynamic%20Multipoint%20Virtual%20Private%20Network
Dynamic Multipoint Virtual Private Network
Dynamic Multipoint Virtual Private Network (DMVPN) is a dynamic tunneling form of a virtual private network (VPN) supported on Cisco IOS-based routers, Huawei AR G3 routers, and Unix-like operating systems. Benefits DMVPN provides the capability for creating a dynamic-mesh VPN network without having to pre-configure (statically) all possible tunnel end-point peers, including IPsec (Internet Protocol Security) and ISAKMP (Internet Security Association and Key Management Protocol) peers. DMVPN is initially configured to build out a hub-and-spoke network by statically configuring the hubs (VPN headends) on the spokes; no change in the configuration on the hub is required to accept new spokes. Using this initial hub-and-spoke network, tunnels between spokes can be dynamically built on demand (dynamic mesh) without additional configuration on the hubs or spokes. This dynamic-mesh capability relieves the hub of the load of routing data between the spoke networks. Technologies DMVPN combines: Generic Routing Encapsulation (GRE), or multipoint GRE (mGRE) if spoke-to-spoke tunnels are desired; NHRP (Next Hop Resolution Protocol); IPsec (Internet Protocol Security) using an IPsec profile, which is associated with a virtual tunnel interface in IOS software, so that all traffic sent via the tunnel is encrypted per the configured policy (IPsec transform set); and an IP-based routing protocol: EIGRP, OSPF, RIPv2, BGP or ODR (DMVPN hub-and-spoke only). Internal routing Routing protocols such as OSPF, EIGRP v1 or v2, or BGP are generally run between the hub and spoke to allow for growth and scalability. Both EIGRP and BGP allow a higher number of supported spokes per hub. Encryption As with GRE tunnels, DMVPN allows for several encryption schemes (including none) for the encryption of data traversing the tunnels. For security reasons Cisco recommends that customers use AES. Phases DMVPN has three phases that route data differently. Phase 1: All traffic flows from the spokes to and through the hub. Phase 2: Starts with Phase 1, then allows spoke-to-spoke tunnels to be built on demand based on triggers. Phase 3: Starts with Phase 1 and improves the scalability of, and has fewer restrictions than, Phase 2. References External links Cisco Systems DMVPN Management Cisco DMVPN Design Guide What Is Double VPN And How To Use It Expired Internet Draft Describing DMVPN Open Source NHRP Protocol Implementation Dynamic Multipoint IPsec VPNs (Using Multipoint GRE/NHRP to Scale IPsec VPNs) Network architecture Virtual private networks Cisco protocols
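A hedged sketch of the hub-and-spoke pattern described above, in Cisco IOS syntax. The interface names, IP addresses, tunnel key, and the IPsec profile name DMVPN-PROFILE are placeholders, and a real deployment also needs the ISAKMP/IPsec policy plus a routing protocol running over the tunnel.

```
! Hub: multipoint GRE tunnel, acts as the NHRP server, learns spoke mappings dynamically
interface Tunnel0
 ip address 10.0.0.1 255.255.255.0
 ip nhrp map multicast dynamic
 ip nhrp network-id 1
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 tunnel key 1
 tunnel protection ipsec profile DMVPN-PROFILE

! Spoke: statically points at the hub (its NHS); spoke-to-spoke tunnels come up on demand
interface Tunnel0
 ip address 10.0.0.2 255.255.255.0
 ip nhrp map 10.0.0.1 192.0.2.1
 ip nhrp map multicast 192.0.2.1
 ip nhrp nhs 10.0.0.1
 ip nhrp network-id 1
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 tunnel key 1
 tunnel protection ipsec profile DMVPN-PROFILE
```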
31081235
https://en.wikipedia.org/wiki/List%20of%20JBoss%20software
List of JBoss software
This is a list of articles about JBoss software and projects from the JBoss Community and Red Hat. This open-source software, written in Java, is developed in community projects and productized with commercial-level support by Red Hat. JBoss productized software JBoss projects and software See also Comparison of application servers Comparison of business integration software Comparison of integrated development environments Comparison of network monitoring systems Comparison of object-relational mapping software Comparison of web server software References JBoss software
2218083
https://en.wikipedia.org/wiki/Gecos%20field
Gecos field
The gecos field, or GECOS field is a field of each record in the /etc/passwd file on Unix and similar operating systems. On UNIX, it is the 5th of 7 fields in a record. It is typically used to record general information about the account or its user(s) such as their real name and phone number. Format The typical format for the GECOS field is a comma-delimited list with this order: User's full name (or application name, if the account is for a program) Building and room number or contact person Office telephone number Home telephone number Any other contact information (pager number, fax, external e-mail address, etc.) In most UNIX systems non-root users can change their own information using the chfn or chsh command. History Some early Unix systems at Bell Labs used GECOS machines for print spooling and various other services, so this field was added to carry information on a user's GECOS identity. Other uses On Internet Relay Chat (IRC), the real name field is sometimes referred to as the gecos field. IRC clients are required to supply this field when connecting. Hexchat, an X-Chat fork, defaults to 'realname', TalkSoup.app on GNUstep defaults to 'John Doe', and irssi reads the operating system user's full name, replacing it with 'unknown' if not defined. Some IRC clients use this field for advertising; for example, ZNC defaulted to "Got ZNC?", but changed it to "RealName = " to match its configuration syntax in 2015. See also General Comprehensive Operating System References Unix
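A short Python sketch of the comma-delimited layout described above. The /etc/passwd record is a made-up example; on a live system the same string is available through the standard pwd module (for instance pwd.getpwnam(user).pw_gecos).

```python
# Illustrative /etc/passwd record; the GECOS field is the 5th of the 7 colon-separated fields.
record = "jdoe:x:1000:1000:John Doe,Room 101,555-0100,555-0199:/home/jdoe:/bin/sh"

gecos = record.split(":")[4]
labels = ["full_name", "office", "work_phone", "home_phone", "other"]
subfields = gecos.split(",")
print(dict(zip(labels, subfields)))
# {'full_name': 'John Doe', 'office': 'Room 101', 'work_phone': '555-0100', 'home_phone': '555-0199'}
```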
2835524
https://en.wikipedia.org/wiki/Metrowerks
Metrowerks
Metrowerks was a company that developed software development tools for various desktop, handheld, embedded, and gaming platforms. Its flagship product, CodeWarrior, comprised an IDE, compilers, linkers, debuggers, libraries, and related tools. In 1999 it was acquired by Motorola and in 2005 it was spun-off as part of Freescale, which continues to sell these tools. In 2015, Freescale Semiconductor was absorbed into NXP. History Founded by Greg Galanos in 1985 as Metropolis Computer Networks in Hudson, Quebec, Metrowerks originally developed software development tools for the Apple Macintosh and UNIX workstations. Its first product was a Modula-2 compiler originally developed by Niklaus Wirth, the creator of the ALGOL W, Pascal and Modula-2 programming languages. It had limited success with this product. In 1992, it began an effort to develop development tools for Macintosh computers based on the newly announced PowerPC processor as well as legacy support for 68k chipsets. It shipped the first commercial release of CodeWarrior in May 1994 at Apple's Worldwide Developers Conference. The release was a great success. Metrowerks received much credit for helping Apple succeed in its risky transition to a new processor. In March 1994 Metrowerks had its initial public offering, trading under the symbol MTWKF (NASDAQ foreign exchange) and continued to trade on Canadian exchanges. Also in 1994, Metrowerks opened a small sales and R&D office in Austin, Texas to be closer to the manufacturers of the new PowerPC chips, IBM and Motorola. Metrowerks later moved its corporate headquarters to Austin along with Greg Galanos (Founder/President/CTO) and Jean Belanger (Chairman/CEO). By 1996 Metrowerks had begun expanding its CodeWarrior product line to target platforms besides Macintosh computers, including: Mac OS PowerPC Mac OS 68k General Magic's Magic Cap OS BeOS Microsoft Windows x86 NEC v8xx, VRxxxx General MIPS (ISA I-IV) General PowerPC embedded General 68k embedded General Coldfire embedded General ARM embedded PlayStation, PS2 and PSP Nintendo 64, GameCube and Wii Sega Saturn Java tools Nokia SymbianOS (toolchain sold to Nokia in late 2004) PalmPilot In 1997, Metrowerks acquired the principal assets of The Latitude Group Inc., a software compatibility layer to port Macintosh applications to UNIX systems, with the intent to use it to port CodeWarrior to run on Solaris, and to extend it to facilitate porting MacOS software to Rhapsody. This will result in the creation of CodeWarrior Latitude. In August 1999, Motorola's semiconductor sector (Motorola Semiconductor Products Sector, or SPS) acquired Metrowerks for roughly $100 million in cash. After the acquisition, Jean Belanger moved to become VP of business development in SPS and after a short stint as Director of Software Strategy for SPS, Greg Galanos left to become a General Partner and Managing Director at SOFTBANK Venture Capital, known as Mobius Venture Capital since December 2001. David Perkins, previously SVP of Business Development at Metrowerks, assumed the title of President and CEO. Metrowerks subsequently acquired a small number of other companies including HIWARE AG, Embedix and Applied Microsystems Corp. in November 2002 for US$40-Million. In 2002, David Perkins assumed the role of Corporate Vice President of NCSG at Motorola SPS; Jim Welch (previously the CFO of Metrowerks) assumed the role of CEO. 
In late 2003, Jim Welch left to become CEO of Wireless Valley Communications and Matt Harris (who was previously CEO of Lineo and Embedix) became the new CEO of Metrowerks. In 2003, Motorola spun off its semiconductor group as a separate company named Freescale Semiconductor. In late 2004, Nokia purchased the SymbianOS development tools, including members of the engineering, for US$30-Million. In early 2005, Matt R. Harris left Metrowerks to become CEO of Volantis at which time Freescale management decided to absorb Metrowerks completely and not treat it as a wholly owned subsidiary. CodeWarrior for Mac OS had successfully made the transition to Apple's new Mac OS X operating system, supporting the Carbon development environment. However, Apple invested heavily in their own development tools for OS X (Xcode), distributed free of charge and always up to date. The increasing prominence of the Cocoa development environment marginalized CodeWarrior, and finally the surprise announcement of the Mac's switch to Intel processors – mere weeks after Freescale had sold the Metrowerks Intel compiler tools to Nokia – signalled the end of CodeWarrior on the Mac. In July 2005, Freescale discontinued CodeWarrior for Mac OS, as the same time it was also divesting from any tools targeting non-Freescale silicon. In October 2005, Freescale retired the Metrowerks name but continues to develop CodeWarrior and other developer technologies as part of Freescale's Developer Technology Organization. Metrowerks' logo of the iconic factory worker and other visual branding was created by illustrator Bill Russell Addendum: Freescale's website now says, "CodeWarrior for Mac OS has been discontinued and is no longer sold or supported." It has several downloadable updates, but the most recent modification date is 15 August 2005. Former Corporate Addresses 8920 Business Park Drive, Austin, TX 78759, USA 3925 West Braker Lane, Austin, TX, 78759, USA 2201 Donley Drive Suite 310, Austin, TX 78758, USA 2601 McHale Court, Austin, TX 78758, USA (Warehouse) 9801 Metric Boulevard, Austin, TX 78758, USA 7700 West Parmer Lane, Austin, TX 78753, USA CodeWarrior Starting in 1994, CodeWarrior was the main product from Metrowerks. It was an Integrated Development Environment for Classic MacOS, that offered C/C++ and Pascal, targeting both 68k and PowerPC. Java support was added in 1996. CodeWarrior for PalmPilot was the IDE in the early days of the device, limited to the C compiler with partial C++ support. Constructor for PalmPilot looked familiar to PowerPlant Constructor users. CodeWarrior was the default toolchain for BeOS. Initially, developers on a BeBox could either use the command line tool provided, or cross-compile using the Macintosh IDE, until Metrowerks developed the BeIDE as part of CodeWarrior for BeOS. Later CodeWarrior was ported to run on Windows for Win32 development (with MFC), and compilers started targeting embedded platforms. References Defunct software companies of Canada Defunct companies of Quebec Software companies established in 1985 Software companies disestablished in 2005 1985 establishments in Quebec
13858716
https://en.wikipedia.org/wiki/1952%20Pittsburgh%20Pirates%20season
1952 Pittsburgh Pirates season
The 1952 Pittsburgh Pirates season was the team's 71st season in Major League Baseball, and their 66th season in the National League. The Pirates posted a record of 42 wins and 112 losses, their worst record since 1890, and one of the worst in major league history. Offseason The Pirates were led in 1952 by 70-year-old general manager Branch Rickey and 60-year-old manager Billy Meyer. Meyer had led Pittsburgh to a last-place finish in the National League in 1950. After Rickey was installed as general manager, the Pirates were second-to-last in 1951. Tension was high as the two-year contract of their star slugger, Ralph Kiner, expired before the 1952 season. Kiner was the premier power hitter in baseball, having won the previous six National League home run titles. Rickey voiced what he viewed as inconsistent levels of commitment by Kiner when talking to the media. Kiner received permission to instead negotiate directly with owner John W. Galbreath and agreed to a reported one-year, $90,000 contract, making him the highest-paid player in the National League. Kiner was signed, but the most famous Pirate of all, 78-year-old Hall of Fame member Honus Wagner, decided to retire from his part-time coaching duties with the team. His number was retired, and he was given a lifetime pass to Forbes Field. Rickey wanted to hold a tryout for dozens of kids from the low minor league levels, and his plan was largely supported by Bing Crosby and the rest of the team's ownership. Rickey hired his former scout and coach Clyde Sukeforth, who had scouted Jackie Robinson for Rickey in the 1940s. Several top young prospects, like Vern Law and Danny O'Connell, were called to military service for the Korean War, and the more experienced Danny Murtaugh retired to accept a minor league managing position. Expectations were high for 23-year-old outfielder Gus Bell to support Kiner in the lineup. Murry Dickson, who had won 21 games in 1951, nearly a third of the entire team's win total, was once again expected to be the anchor of the pitching rotation. Notable transactions Prior to 1952 season: Sonny Senerchia was signed as a free agent by the Pirates. Regular season Season summary A season to forget The Pirates struggled throughout spring training in 1952. Gus Bell missed training time due to family-related car problems and illness and was sent to the minor leagues. Towards the end of spring training, pitcher Bill Werle was suspended indefinitely and fined $500, only the third player fined in over two decades of Billy Meyer's managing career. Werle professed his innocence and was reinstated before Opening Day but he was traded to the St. Louis Cardinals two weeks later. Thirteen rookies made the Pirates' Opening Day roster, including four teenagers: Bobby Del Greco, Tony Bartirome, Jim Waugh and Lee Walls. After four games, Pittsburgh's record was 2–2 but they quickly tumbled to the bottom of the majors by losing 16 of their next 17 games. The early two-game winning streak matched the longest they would see all year. Their top three pitchers combined to win just one of their first nine games started. Kiner's hitting was affected by the lack of support as well as back problems and his batting average was under .220 several weeks into the season. Kiner's difficulties and a club earned run average over five resulted in a 5–28 record in mid-May. Gus Bell returned from the minors on May 12 and hit for some power but Kiner hit only .241 with 13 home runs and 31 RBIs in the first half which ended with Pittsburgh at 21–59. 
21-year-old Dick Groat was one of the Pirates' few bright spots in the first half with four hits in his first three games, but others went into long slumps like Jack Merson's 0-for-35, Clyde McCullough's 0-for-24 and Tony Bartirome's 0-for-29. The second half soon resembled the first with a 2–11 stretch in mid-July. They were mathematically eliminated from pennant contention on August 6 with more than six weeks left to play. In early August, Pittsburgh called up 20-year-old pitcher Ron Necciai from the minors. Necciai had pitched a legendary 27-strikeout game in the minors but gave up five runs in his first inning in the majors. Necciai not only finished the season with poor numbers but also injured his arm and never again pitched in the majors. Branch Rickey's youth movement, derided as "Operation Peach Fuzz", continued unabated. On August 20, the average age of Pittsburgh's starting lineup was only 23 with Kiner and Garagiola being the only non-rookies. On September 5, pitcher Bill Bell made his major league debut at age 18. Including Bell, seven of the eight youngest players in the National League in 1952 were Pittsburgh Pirates. The "Rickey Dinks", as they were sometimes called, were not only young but small. In one game, the entire infield was less than six feet tall. The Pirates difficulties reached off the field as well. Ralph Kiner, enduring his worst season to-date, received a death threat in an attempt to extort $6,200. Rather than pay, he contacted the authorities and was kept under guard for a time. Financially, Pittsburgh's attendance was the lowest since World War II, falling more than 30% short of the one million budgeted. Branch Rickey sometimes saved money by sending only 21 players on road trips. The final losses for the franchise, including minor leagues and bonuses, were $800,000. Billy Meyer resigned as manager on September 27, the second-to-last day of the season. Final results When the season mercifully ended, Pittsburgh's final record was 42–112. The winning percentage and number of losses were the worst for the franchise since the 1890 season (which was greatly affected by the inclusion of the Players' League) and the worst for any franchise since the 1935 Boston Braves. Since 1952, the only non-expansion team to finish worse has been the 2003 Detroit Tigers. A few individuals came away with positive notes. A late-season home run surge by Ralph Kiner brought him his seventh consecutive home run championship (he finished tied with Hank Sauer with 37 on the year). It was also his last. Dick Groat finished at .284 and was third in National League Rookie of the Year voting. Joe Garagiola logged the most playing time of his career and hit .273 with a career-high 54 RBIs, third most on the team behind only Kiner and Gus Bell. On the flipside, teenagers Tony Bartirome and Bobby Del Greco were regulars but neither hit over .220. Seven other players had at least 40 at-bats but hit under .200. Kiner's home run total (37) was more than the next four highest on the team combined (16, 8, 7, 5). As a team, Pittsburgh was last in the National League in runs, hits, doubles, triples, home runs, RBIs, batting average, slugging percentage, complete games, ERA, walks allowed, home runs allowed, fielding percentage and errors committed. Murry Dickson, who won 21 games in 1951, lost 20 games in 1952, going 14–20. Only three other pitchers won more than two games. 
The pitching staff walked 615 opposing batters while striking out only 564, with 16 different players starting a game during the season. Among their young players, only Jim Waugh – the youngest – played in the majors again before 1955. Waugh played in 1953, his last year; Ron Necciai and Tony Bartirome never played in the majors after 1952; Bill Bell pitched one inning in 1955, his last; and Bobby Del Greco, Lee Walls and Ron Kline had longer careers but not until several years later. Dick Groat and pitcher Bob Friend were the only players to endure the 1952 season who also played with the 1960 World Series champion Pirates. Anecdotes, etc. The failure of the 1952 Pirates was the source of several anecdotes and side-stories. Pittsburgh Press writer Len Biederman recalled an earlier humorous practice by giving Dick Groat a dime while he was in an 0-for-19 slump. When Groat broke out of the slump with a 5-for-5 game, Biederman gave Kiner a quarter with similar positive results so Biederman continued giving coins to various Pirates. Joe Garagiola, the regular catcher for the 1952 Pirates, frequently used the team's struggles in his later career as a baseball sportscaster with lines like, "They talk about Pearl Harbor being something; they should have seen the 1952 Pittsburgh Pirates" and "In an eight-team league, we should've finished ninth." Season standings Record vs. opponents Game log |- bgcolor="ffbbbb" | 1 || April 15 || @ Cardinals || 2–3 || Staley || Dickson (0–1) || Brazle || 15,850 || 0–1 |- bgcolor="ffbbbb" | 2 || April 16 || @ Cardinals || 5–6 || Chambers || Pollet (0–1) || Brazle || 4,324 || 0–2 |- bgcolor="ccffcc" | 3 || April 17 || @ Cardinals || 5–3 || Muir (1–0) || Yuhas || Wilks (1) || 4,907 || 1–2 |- bgcolor="ccffcc" | 4 || April 18 || Reds || 3–0 || Friend (1–0) || Blackwell || — || 29,874 || 2–2 |- bgcolor="ffbbbb" | 5 || April 19 || Reds || 3–9 || Wehmeier || Queen (0–1) || — || 10,271 || 2–3 |- bgcolor="ffbbbb" | 6 || April 20 || Reds || 6–8 || Perkowski || Dickson (0–2) || Byerly || || 2–4 |- bgcolor="ffbbbb" | 7 || April 20 || Reds || 2–12 || Hiller || Pollet (0–2) || — || 23,732 || 2–5 |- bgcolor="ffbbbb" | 8 || April 21 || Cubs || 1–7 || Minner || Kline (0–1) || — || 12,378 || 2–6 |- bgcolor="ffbbbb" | 9 || April 22 || Cubs || 2–13 || Rush || Friend (1–1) || — || 9,321 || 2–7 |- bgcolor="ffbbbb" | 10 || April 25 || Cardinals || 4–6 || Staley || Muir (1–1) || Brazle || 1,945 || 2–8 |- bgcolor="ffbbbb" | 11 || April 26 || @ Reds || 2–9 || Wehmeier || Dickson (0–3) || Smith || 4,239 || 2–9 |- bgcolor="ffbbbb" | 12 || April 27 || @ Reds || 2–8 || Raffensberger || Friend (1–2) || — || || 2–10 |- bgcolor="ffbbbb" | 13 || April 27 || @ Reds || 0–1 || Hiller || Pollet (0–3) || — || 16,427 || 2–11 |- bgcolor="ffbbbb" | 14 || April 29 || Braves || 1–5 || Spahn || Friend (1–3) || — || 10,008 || 2–12 |- bgcolor="ccffcc" | 15 || April 30 || Braves || 11–5 || Dickson (1–3) || Cole || Wilks (2) || 2,861 || 3–12 |- |- bgcolor="ffbbbb" | 16 || May 1 || Giants || 5–13 || Hearn || Queen (0–2) || — || 4,801 || 3–13 |- bgcolor="ffbbbb" | 17 || May 2 || Giants || 3–5 (10) || Wilhelm || Wilks (0–1) || Spencer || 17,111 || 3–14 |- bgcolor="ffbbbb" | 18 || May 3 || Giants || 2–3 || Maglie || Kline (0–2) || — || 7,451 || 3–15 |- bgcolor="ffbbbb" | 19 || May 4 || Dodgers || 0–6 || Erskine || Dickson (1–4) || — || 19,322 || 3–16 |- bgcolor="ffbbbb" | 20 || May 5 || Dodgers || 1–5 (8) || Branca || Friend (1–4) || — || 3,652 || 3–17 |- bgcolor="ffbbbb" | 21 || May 6 || Phillies || 0–6 || Roberts 
|| Carlsen (0–1) || — || 9,008 || 3–18 |- bgcolor="ccffcc" | 22 || May 7 || Phillies || 5–1 || Pollet (1–3) || Meyer || — || 7,291 || 4–18 |- bgcolor="ffbbbb" | 23 || May 10 || @ Cubs || 1–3 || Rush || Dickson (1–5) || — || 7,438 || 4–19 |- bgcolor="ffbbbb" | 24 || May 11 || @ Cubs || 2–8 || Minner || Kline (0–3) || — || || 4–20 |- bgcolor="ccffcc" | 25 || May 11 || @ Cubs || 11–2 || Friend (2–4) || Klippstein || — || 14,845 || 5–20 |- bgcolor="ffbbbb" | 26 || May 13 || @ Braves || 1–3 || Bickford || Pollet (1–4) || — || 2,831 || 5–21 |- bgcolor="ffbbbb" | 27 || May 14 || @ Braves || 3–4 (10) || Surkont || Main (0–1) || — || 1,105 || 5–22 |- bgcolor="ffbbbb" | 28 || May 15 || @ Dodgers || 0–2 || Loes || Dickson (1–6) || — || 14,402 || 5–23 |- bgcolor="ffbbbb" | 29 || May 16 || @ Dodgers || 4–6 || Labine || Main (0–2) || — || 3,385 || 5–24 |- bgcolor="ffbbbb" | 30 || May 17 || @ Dodgers || 7–12 || Wade || Kline (0–4) || — || 11,067 || 5–25 |- bgcolor="ffbbbb" | 31 || May 19 || @ Giants || 0–4 || Maglie || Pollet (1–5) || — || 4,461 || 5–26 |- bgcolor="ffbbbb" | 32 || May 21 || @ Phillies || 3–7 || Roberts || Dickson (1–7) || — || 6,202 || 5–27 |- bgcolor="ffbbbb" | 33 || May 22 || @ Phillies || 0–6 || Simmons || Munger (0–1) || — || 3,065 || 5–28 |- bgcolor="ccffcc" | 34 || May 23 || Cubs || 6–5 (13) || Wilks (1–1) || Hacker || — || 8,496 || 6–28 |- bgcolor="ffbbbb" | 35 || May 24 || Cubs || 5–7 || Minner || Pollet (1–6) || Klippstein || 3,118 || 6–29 |- bgcolor="ffbbbb" | 36 || May 25 || Cubs || 4–5 || Hacker || Wilks (1–2) || Leonard || 5,111 || 6–30 |- bgcolor="ccffcc" | 37 || May 26 || Reds || 6–3 || Friend (3–4) || Hiller || — || 6,171 || 7–30 |- bgcolor="ffbbbb" | 38 || May 27 || Reds || 4–5 (14) || Smith || Main (0–3) || — || 2,150 || 7–31 |- bgcolor="ffbbbb" | 39 || May 28 || Reds || 2–5 || Raffensberger || Munger (0–2) || — || 6,186 || 7–32 |- bgcolor="ccffcc" | 40 || May 29 || Reds || 4–2 || Dickson (2–7) || Perkowski || — || 1,070 || 8–32 |- bgcolor="ffbbbb" | 41 || May 30 || Cardinals || 2–3 || Yuhas || Friend (3–5) || Brazle || || 8–33 |- bgcolor="ccffcc" | 42 || May 30 || Cardinals || 4–3 || LaPalme (1–0) || Staley || — || 19,546 || 9–33 |- bgcolor="ccffcc" | 43 || May 31 || Phillies || 5–3 || Muir (2–1) || Possehl || Main (1) || 6,425 || 10–33 |- |- bgcolor="ffbbbb" | 44 || June 1 || Phillies || 1–5 || Simmons || Dickson (2–8) || — || || 10–34 |- bgcolor="ccffcc" | 45 || June 1 || Phillies || 2–1 || Wilks (2–2) || Drews || — || 15,529 || 11–34 |- bgcolor="ffbbbb" | 46 || June 3 || Dodgers || 4–6 || Branca || Munger (0–3) || Rutherford || 19,452 || 11–35 |- bgcolor="ffbbbb" | 47 || June 4 || Dodgers || 4–7 || Erskine || Friend (3–6) || Loes || 14,421 || 11–36 |- bgcolor="ffbbbb" | 48 || June 5 || Dodgers || 0–2 || Wade || Main (0–4) || — || 6,328 || 11–37 |- bgcolor="ccffcc" | 49 || June 6 || Giants || 8–1 || Dickson (3–8) || Maglie || — || 20,163 || 12–37 |- bgcolor="ffbbbb" | 50 || June 7 || Giants || 5–7 || Spencer || Main (0–5) || Lanier || 7,656 || 12–38 |- bgcolor="ffbbbb" | 51 || June 8 || Giants || 1–9 || Jansen || Pollet (1–7) || — || 13,942 || 12–39 |- bgcolor="ffbbbb" | 52 || June 9 || Braves || 2–3 || Wilson || Friend (3–7) || — || 6,973 || 12–40 |- bgcolor="ccffcc" | 53 || June 10 || Braves || 7–5 || Wilks (3–2) || Spahn || — || 10,934 || 13–40 |- bgcolor="ccffcc" | 54 || June 11 || Braves || 5–0 || Dickson (4–8) || Surkont || — || 9,415 || 14–40 |- bgcolor="ffbbbb" | 55 || June 12 || Braves || 2–11 || Burdette || Muir (2–2) || — || 3,223 || 14–41 |- 
bgcolor="ffbbbb" | 56 || June 14 || @ Phillies || 2–4 || Meyer || Friend (3–8) || Konstanty || 5,033 || 14–42 |- bgcolor="ccffcc" | 57 || June 15 || @ Phillies || 6–0 || Pollet (2–7) || Drews || — || || 15–42 |- bgcolor="ffbbbb" | 58 || June 15 || @ Phillies || 3–6 || Fox || Dickson (4–9) || Konstanty || 12,525 || 15–43 |- bgcolor="ffbbbb" | 59 || June 16 || @ Phillies || 4–5 || Konstanty || LaPalme (1–1) || — || 2,210 || 15–44 |- bgcolor="ccffcc" | 60 || June 17 || @ Giants || 6–2 || Main (1–5) || Gregg || — || 11,317 || 16–44 |- bgcolor="ffbbbb" | 61 || June 18 || @ Giants || 2–5 || Hearn || Friend (3–9) || — || 3,346 || 16–45 |- bgcolor="ccffcc" | 62 || June 19 || @ Giants || 8–1 || Dickson (5–9) || Jansen || — || 6,369 || 17–45 |- bgcolor="ffbbbb" | 63 || June 20 || @ Dodgers || 4–5 || Labine || Wilks (3–3) || — || 4,679 || 17–46 |- bgcolor="ffbbbb" | 64 || June 21 || @ Dodgers || 4–14 || Loes || Main (1–6) || Erskine || 13,335 || 17–47 |- bgcolor="ffbbbb" | 65 || June 23 || @ Braves || 3–9 || Johnson || Friend (3–10) || — || 2,654 || 17–48 |- bgcolor="ffbbbb" | 66 || June 24 || @ Braves || 3–4 || Wilson || Dickson (5–10) || — || 3,736 || 17–49 |- bgcolor="ffbbbb" | 67 || June 25 || @ Braves || 2–5 || Surkont || Pollet (2–8) || — || 1,414 || 17–50 |- bgcolor="ffbbbb" | 68 || June 27 || Cardinals || 4–6 || Yuhas || Muir (2–3) || Brazle || 16,133 || 17–51 |- bgcolor="ffbbbb" | 69 || June 28 || Cardinals || 3–4 || Yuhas || Dickson (5–11) || — || 5,417 || 17–52 |- bgcolor="ccffcc" | 70 || June 29 || Cardinals || 2–1 (5) || Pollet (3–8) || Boyer || — || 14,870 || 18–52 |- bgcolor="ffbbbb" | 71 || June 30 || @ Cubs || 4–5 || Klippstein || Friend (3–11) || — || 5,983 || 18–53 |- |- bgcolor="ccffcc" | 72 || July 1 || @ Cubs || 3–2 || Main (2–6) || Ramsdell || Wilks (3) || 9,935 || 19–53 |- bgcolor="ffbbbb" | 73 || July 2 || @ Cubs || 3–8 || Minner || Dickson (5–12) || — || || 19–54 |- bgcolor="ffbbbb" | 74 || July 2 || @ Cubs || 0–3 (8) || Hacker || Kline (0–5) || — || 16,543 || 19–55 |- bgcolor="ffbbbb" | 75 || July 3 || @ Reds || 1–5 || Church || Pollet (3–9) || — || 1,807 || 19–56 |- bgcolor="ccffcc" | 76 || July 4 || @ Reds || 4–2 || Friend (4–11) || Perkowski || — || || 20–56 |- bgcolor="ccffcc" | 77 || July 4 || @ Reds || 5–2 || Fisher (1–0) || Nuxhall || Wilks (4) || 8,253 || 21–56 |- bgcolor="ffbbbb" | 78 || July 5 || @ Cardinals || 0–5 || Brazle || Main (2–7) || — || 15,625 || 21–57 |- bgcolor="ffbbbb" | 79 || July 6 || @ Cardinals || 5–6 || Yuhas || Dickson (5–13) || — || || 21–58 |- bgcolor="ffbbbb" | 80 || July 6 || @ Cardinals || 4–6 || Brecheen || Friend (4–12) || — || 17,048 || 21–59 |- bgcolor="ccffcc" | 81 || July 10 || Giants || 6–4 (12) || Wilks (4–3) || Spencer || — || 15,226 || 22–59 |- bgcolor="ccffcc" | 82 || July 11 || Giants || 6–2 || Dickson (6–13) || Maglie || — || 4,482 || 23–59 |- bgcolor="ffbbbb" | 83 || July 12 || Braves || 2–5 || Bickford || Friend (4–13) || — || 4,999 || 23–60 |- bgcolor="ffbbbb" | 84 || July 13 || Braves || 2–4 || Surkont || Fisher (1–1) || — || || 23–61 |- bgcolor="ffbbbb" | 85 || July 13 || Braves || 1–2 || Jester || Wilks (4–4) || — || 12,373 || 23–62 |- bgcolor="ffbbbb" | 86 || July 15 || Phillies || 3–10 || Simmons || Pollet (3–10) || — || 10,244 || 23–63 |- bgcolor="ffbbbb" | 87 || July 16 || Phillies || 7–8 || Roberts || Dickson (6–14) || Hansen || 2,569 || 23–64 |- bgcolor="ccffcc" | 88 || July 17 || Phillies || 2–1 || Hogue (1–0) || Meyer || — || || 24–64 |- bgcolor="ccffcc" | 89 || July 17 || Phillies || 4–2 || Wilks (5–4) || Drews 
|| — || 5,304 || 25–64 |- bgcolor="ffbbbb" | 90 || July 18 || Dodgers || 2–6 || Loes || Friend (4–14) || Black || 19,681 || 25–65 |- bgcolor="ffbbbb" | 91 || July 19 || Dodgers || 1–9 || Erskine || Pollet (3–11) || — || 5,662 || 25–66 |- bgcolor="ffbbbb" | 92 || July 20 || Dodgers || 5–8 || Wade || Dickson (6–15) || Black || 14,490 || 25–67 |- bgcolor="ffbbbb" | 93 || July 22 || @ Phillies || 4–14 || Meyer || Hogue (1–1) || — || || 25–68 |- bgcolor="ffbbbb" | 94 || July 22 || @ Phillies || 1–8 || Drews || Main (2–8) || Hansen || 11,213 || 25–69 |- bgcolor="ffbbbb" | 95 || July 23 || @ Phillies || 1–4 || Ridzik || Friend (4–15) || Roberts || 4,611 || 25–70 |- bgcolor="ccffcc" | 96 || July 25 || @ Braves || 3–2 || Dickson (7–15) || Spahn || — || 4,126 || 26–70 |- bgcolor="ccffcc" | 97 || July 26 || @ Braves || 6–4 || Pollet (4–11) || Jester || — || 2,006 || 27–70 |- bgcolor="ffbbbb" | 98 || July 27 || @ Braves || 2–5 || Bickford || Hogue (1–2) || Burdette || || 27–71 |- bgcolor="ffffff" | 99 || July 27 || @ Braves || 3–3 (11) || || || — || 3,719 || 27–71 |- bgcolor="ccffcc" | 100 || July 29 || @ Dodgers || 7–1 || Dickson (8–15) || Loes || — || 11,807 || 28–71 |- bgcolor="ffbbbb" | 101 || July 30 || @ Dodgers || 3–4 (10) || Black || Friend (4–16) || — || 5,110 || 28–72 |- bgcolor="ffbbbb" | 102 || July 31 || @ Dodgers || 6–7 (11) || Black || LaPalme (1–2) || — || || 28–73 |- bgcolor="ffbbbb" | 103 || July 31 || @ Dodgers || 1–4 || Landrum || Main (2–9) || — || || 28–74 |- |- bgcolor="ffbbbb" | 104 || August 1 || @ Giants || 3–7 || Hearn || Fisher (1–2) || — || 10,458 || 28–75 |- bgcolor="ffbbbb" | 105 || August 2 || @ Giants || 3–4 (6) || Wilhelm || Dickson (8–16) || — || 4,174 || 28–76 |- bgcolor="ccffcc" | 106 || August 3 || @ Giants || 7–0 || Dickson (9–16) || Lanier || — || || 29–76 |- bgcolor="ccffcc" | 107 || August 3 || @ Giants || 10–8 (6) || Pollet (5–11) || Jansen || Main (2) || 17,965 || 30–76 |- bgcolor="ffbbbb" | 108 || August 5 || Cardinals || 3–4 (12) || Presko || Hogue (1–3) || — || 10,235 || 30–77 |- bgcolor="ffbbbb" | 109 || August 6 || Cardinals || 2–7 (10) || Brazle || Wilks (5–5) || — || || 30–78 |- bgcolor="ffbbbb" | 110 || August 6 || Cardinals || 2–3 || Boyer || Main (2–10) || Yuhas || 11,999 || 30–79 |- bgcolor="ccffcc" | 111 || August 8 || Cubs || 1–0 (10) || Dickson (10–16) || Rush || — || 8,503 || 31–79 |- bgcolor="ccffcc" | 112 || August 9 || Cubs || 4–3 || Waugh (1–0) || Kelly || — || 4,196 || 32–79 |- bgcolor="ffbbbb" | 113 || August 10 || Cubs || 5–9 || Hacker || Necciai (0–1) || Leonard || || 32–80 |- bgcolor="ffbbbb" | 114 || August 10 || Cubs || 3–4 || Minner || Pollet (5–12) || Leonard || 17,773 || 32–81 |- bgcolor="ffbbbb" | 115 || August 11 || Reds || 4–10 || Wehmeier || Hogue (1–4) || Smith || 9,304 || 32–82 |- bgcolor="ccffcc" | 116 || August 14 || @ Cardinals || 5–3 (10) || Dickson (11–16) || Presko || — || 9,524 || 33–82 |- bgcolor="ffbbbb" | 117 || August 15 || @ Cardinals || 4–5 || Brazle || Main (2–11) || — || 6,115 || 33–83 |- bgcolor="ccffcc" | 118 || August 16 || @ Cubs || 2–1 || Pollet (6–12) || Minner || — || 12,256 || 34–83 |- bgcolor="ffbbbb" | 119 || August 17 || @ Cubs || 2–5 || Rush || Waugh (1–1) || — || || 34–84 |- bgcolor="ccffcc" | 120 || August 17 || @ Cubs || 5–2 || Friend (5–16) || Kelly || — || 26,635 || 35–84 |- bgcolor="ffbbbb" | 121 || August 18 || @ Cubs || 3–4 || Schultz || Dickson (11–17) || — || 4,911 || 35–85 |- bgcolor="ffbbbb" | 122 || August 19 || Phillies || 5–10 || Roberts || Necciai (0–2) || — || 11,207 || 35–86 |- 
bgcolor="ffbbbb" | 123 || August 20 || Phillies || 1–3 || Meyer || Hogue (1–5) || — || 2,755 || 35–87 |- bgcolor="ffbbbb" | 124 || August 22 || Dodgers || 2–9 || Black || Pollet (6–13) || — || || 35–88 |- bgcolor="ccffcc" | 125 || August 22 || Dodgers || 3–2 || Dickson (12–17) || Landrum || — || 21,845 || 36–88 |- bgcolor="ffbbbb" | 126 || August 23 || Dodgers || 2–3 || Labine || Waugh (1–2) || Black || 8,844 || 36–89 |- bgcolor="ccffcc" | 127 || August 24 || Braves || 4–3 || Necciai (1–2) || Jester || Dickson (1) || || 37–89 |- bgcolor="ffbbbb" | 128 || August 24 || Braves || 3–5 (10) || Burdette || Kline (0–6) || — || 12,349 || 37–90 |- bgcolor="ffbbbb" | 129 || August 26 || Giants || 7–14 || Wilhelm || Dickson (12–18) || Lanier || 14,011 || 37–91 |- bgcolor="ffbbbb" | 130 || August 27 || Giants || 4–5 || Connelly || Pollet (6–14) || Jansen || 4,069 || 37–92 |- bgcolor="ffbbbb" | 131 || August 28 || Giants || 7–14 || Koslo || Waugh (1–3) || — || 3,561 || 37–93 |- bgcolor="ffbbbb" | 132 || August 30 || Cardinals || 2–12 || Staley || Necciai (1–3) || — || 10,500 || 37–94 |- bgcolor="ccffcc" | 133 || August 31 || Cardinals || 4–2 || Dickson (13–18) || Miller || — || 7,871 || 38–94 |- |- bgcolor="ffbbbb" | 134 || September 1 || Cubs || 0–6 || Klippstein || Pollet (6–15) || — || || 38–95 |- bgcolor="ccffcc" | 135 || September 1 || Cubs || 5–4 (11) || Dickson (14–18) || Leonard || — || 13,031 || 39–95 |- bgcolor="ffbbbb" | 136 || September 3 || @ Reds || 0–1 || Raffensberger || Necciai (1–4) || — || 4,230 || 39–96 |- bgcolor="ffbbbb" | 137 || September 4 || @ Reds || 2–7 || Wehmeier || Waugh (1–4) || — || 1,519 || 39–97 |- bgcolor="ffbbbb" | 138 || September 5 || @ Cardinals || 0–4 || Mizell || Bell (0–1) || — || 4,327 || 39–98 |- bgcolor="ffbbbb" | 139 || September 6 || @ Cardinals || 4–7 (10) || Brazle || Dickson (14–19) || — || 7,329 || 39–99 |- bgcolor="ffbbbb" | 140 || September 7 || @ Cardinals || 3–4 || Brazle || Waugh (1–5) || — || 9,298 || 39–100 |- bgcolor="ffbbbb" | 141 || September 9 || @ Giants || 6–11 || Connelly || Hogue (1–6) || Spencer || 2,894 || 39–101 |- bgcolor="ffbbbb" | 142 || September 10 || @ Giants || 2–3 (13) || Wilhelm || Dickson (14–20) || — || 3,742 || 39–102 |- bgcolor="ffbbbb" | 143 || September 11 || @ Giants || 4–5 || Maglie || Pollet (6–16) || Wilhelm || 3,094 || 39–103 |- bgcolor="ccffcc" | 144 || September 12 || @ Braves || 8–1 || Friend (6–16) || Jester || — || || 40–103 |- bgcolor="ffbbbb" | 145 || September 12 || @ Braves || 0–16 || Johnson || Necciai (1–5) || — || 2,608 || 40–104 |- bgcolor="ffbbbb" | 146 || September 13 || @ Braves || 0–8 || Spahn || Kline (0–7) || — || 1,957 || 40–105 |- bgcolor="ffbbbb" | 147 || September 14 || @ Phillies || 2–5 || Simmons || Hogue (1–7) || — || || 40–106 |- bgcolor="ffbbbb" | 148 || September 14 || @ Phillies || 1–2 || Meyer || Waugh (1–6) || — || 7,238 || 40–107 |- bgcolor="ffbbbb" | 149 || September 16 || @ Dodgers || 2–4 || Hughes || Dickson (14–21) || Black || 13,422 || 40–108 |- bgcolor="ccffcc" | 150 || September 17 || @ Dodgers || 4–1 || Pollet (7–16) || Wade || Dickson (2) || 5,895 || 41–108 |- bgcolor="ffbbbb" | 151 || September 19 || Reds || 3–4 || Wehmeier || Friend (6–17) || — || 5,435 || 41–109 |- bgcolor="ffbbbb" | 152 || September 21 || Reds || 3–4 || Podbielan || Necciai (1–6) || — || 22,398 || 41–110 |- bgcolor="ffbbbb" | 153 || September 26 || @ Reds || 0–5 || Podbielan || Hogue (1–8) || — || 3,893 || 41–111 |- bgcolor="ccffcc" | 154 || September 27 || @ Reds || 9–6 || Friend (7–17) || Perkowski || 
— || 2,084 || 42–111 |- bgcolor="ffbbbb" | 155 || September 28 || @ Reds || 2–3 || Raffensberger || Main (2–12) || — || 7,354 || 42–112 |- |- | Legend:      = Win      = Loss      = TieBold = Pirates team member Opening Day lineup Notable transactions May 17, 1952: Bill Howerton was selected off waivers from the Pirates by the New York Giants. June 16, 1952: Dick Groat was signed as an amateur free agent by the Pirates. Roster Player stats Batting Starters by position Note: Pos = Position; G = Games played; AB = At bats; H = Hits; Avg. = Batting average; HR = Home runs; RBI = Runs batted in Other batters Note: G = Games played; AB = At bats; H = Hits; Avg. = Batting average; HR = Home runs; RBI = Runs batted in Pitching Starting pitchers Note: G = Games pitched; IP = Innings pitched; W = Wins; L = Losses; ERA = Earned run average; SO = Strikeouts Other pitchers Note: G = Games pitched; IP = Innings pitched; W = Wins; L = Losses; ERA = Earned run average; SO = Strikeouts Relief pitchers Note: G = Games pitched; W = Wins; L = Losses; SV = Saves; ERA = Earned run average; SO = Strikeouts Farm system LEAGUE CHAMPIONS: Hollywood, Denver Bartlesville franchise transferred to Pittsburg (Kansas), July 7, 1952 See also List of worst Major League Baseball season records Notes References 1952 Pittsburgh Pirates at Baseball Reference 1952 Pittsburgh Pirates at Baseball Almanac Pittsburgh Pirates seasons Pittsburgh Pirates season Pittsburg Pir
421579
https://en.wikipedia.org/wiki/Tiki%20Wiki%20CMS%20Groupware
Tiki Wiki CMS Groupware
Tiki Wiki CMS Groupware or simply Tiki, originally known as TikiWiki, is a free and open source Wiki-based content management system and online office suite written primarily in PHP and distributed under the GNU Lesser General Public License (LGPL-2.1-only) license. In addition to enabling websites and portals on the internet and on intranets and extranets, Tiki contains a number of collaboration features allowing it to operate as a Geospatial Content Management System (GeoCMS) and Groupware web application. Tiki includes all the basic features common to most CMSs such as the ability to register and maintain individual user accounts within a flexible and rich permission / privilege system, create and manage menus, RSS-feeds, customize page layout, perform logging, and administer the system. All administration tasks are accomplished through a browser-based user interface. Tiki features an all-in-one design, as opposed to a core+extensions model followed by other CMSs. This allows for future-proof upgrades (since all features are released together), but has the drawback of an extremely large codebase (more than 1,000,000 lines). Tiki can run on any computing platform that supports both a web server capable of running PHP 5 (including Apache HTTP Server, IIS, Lighttpd, Hiawatha, Cherokee, and nginx) and a MySQL database to store content and settings. Major components Tiki has four major categories of components: content creation and management tools, content organization tools and navigation aids, communication tools, and configuration and administration tools. These components enable administrators and users to create and manage content, as well as letting them communicate to others and configure sites. In addition, Tiki allows each user to choose from various visual themes. These themes are implemented using CSS and the open source Smarty template engine. Additional themes can be created by a Tiki administrator for branding or customization as well. Internationalization Tiki is an international project, supporting many languages. The default interface language in Tiki is English, but any language that can be encoded and displayed using the UTF-8 encoding can be supported. Translated strings can be included via an external language file, or by translating interface strings directly, through the database. As of 29 September 2005, Tiki had been fully translated into eight languages and reportedly 90% or more translated into another five languages, as well as partial translations for nine additional languages. Tiki also supports interactive translation of actual wiki pages and was the initial wiki engine used in the Cross Lingual Wiki Engine Project. This allows Tiki-based web sites to have translated content — not just the user interface. Implementation Tiki is developed primarily in PHP with some JavaScript code. It uses MySQL as a database. It will run on any server that provides PHP 5, including Apache and Microsoft's IIS. Tiki components make extensive use of other open source projects, including Zend Framework, Smarty, jQuery, HTML Purifier, FCKeditor, Raphaël, phpCAS, and Morcego. When used with Mapserver Tiki can become a Geospatial Content Management System. Project team Tiki is under active development by a large international community of over 300 developers and translators, and is one of the largest open-source teams in the world. Project members have donated the resources and bandwidth required to host the tiki.org website and various subdomains. 
The project members refer to this dependence on their own product as "eating their own dogfood", which they have been doing since the early days of the project. Tiki community members also participate in various related events such as WikiSym and the Libre Software Meeting. History Tiki has been hosted on SourceForge.net since its initial release (Release 0.9, named Spica) in October 2002. It was developed primarily by Luis Argerich (Buenos Aires, Argentina), Eduardo Polidor (São Paulo, Brazil), and Garland Foster (Green Bay, WI, United States). In July 2003, Tiki was named the SourceForge.net July 2003 Project of the Month. In late 2003, a fork of Tiki was used to create Bitweaver. In 2006, Tiki was named to CMS Report's Top 30 Web Applications. In 2008, Tiki was named to EContent magazine's Top 100. In 2009, Tiki adopted a six-month release cycle, announced the selection of a Long Term Support (LTS) version, and the Tiki Software Community Association was formed as the legal steward for Tiki. The Tiki Software Community Association is a not-for-profit entity established in Canada; previously, the project had been run entirely by volunteers. In 2010, Tiki received the Best of Open Source Software Applications Award (BOSSIE) from InfoWorld, in the Applications category. In 2011, Tiki was named to CMS Report's Top 30 Web Applications. In 2012, Tiki was named "Best Web Tool" by WebHostingSearch.com, and "People's Choice: Best Free CMS" by CMS Critic. In 2016, Tiki was named one of the "10 Best Open Source Collaboration Software Tools" by Small Business Computing. Name The name TikiWiki is written in CamelCase, a common Wiki syntax indicating a hyperlink within the Wiki. It is most likely a compound word combining two Polynesian terms, Tiki and Wiki, to create a self-rhyming name similar to wikiwiki, a common variant of wiki. A backronym has also been formed for Tiki: Tightly Integrated Knowledge Infrastructure. See also Comparison of wiki software List of content management systems Comparison of office suites List of spreadsheet software References Further reading External links Free wiki software Semantic wiki software Free content management systems Blog software 2002 software Cross-platform free software Free groupware Free software programmed in PHP Free project management software Bug and issue tracking software Groupware Content management systems Web applications
291312
https://en.wikipedia.org/wiki/Svet%20kompjutera
Svet kompjutera
Svet kompjutera (World of Computers), started in October 1984, is a computer magazine published in Serbia. It has the highest circulation in the country (for example, from January to February 2002 its circulation was 43,000 copies). Svet kompjutera covers home computers, PCs, tablet computers, smartphones (mobile phones), and video game consoles, as well as their use for work and entertainment. Its aim is to inform readers about the latest events on the Serbian and international computer scene and to present products of interest to them. Its editorial staff sees its main task as advising computer users on how to make the best use of their hardware and software. It is one of the publications of Politika AD, one of the biggest media companies in the Balkans. It is published monthly and can be purchased at newsstands throughout Serbia. It can also be found in North Macedonia, Slovenia, Bosnia and Herzegovina, Croatia, Montenegro and many other European countries, and subscriptions are available from anywhere in the world. The magazine consists of 132 pages, with commercial advertisements making up 35% to 40% of its content. It is printed in full-colour offset. Four issues of the magazine were printed with multiple covers: the October 2004 issue with three different covers, the October 2006 issue with two, and the December 2011 and October 2014 issues with four each. The editorial staff has always consisted of young people, with an average age of 26, while the average age of contributors is 20. Its readers are mainly young and middle-aged people, mostly from Serbia and some from other ex-Yugoslav countries. As of October 2011, the editor-in-chief of Svet kompjutera is Nenad Vasovic, and the executive editors are Miodrag Kuzmanovic and Tihomir Stancevic. History The first issue of Svet kompjutera was printed in October 1984. Ever since, the magazine has covered small computers, from the ZX Spectrum and Commodore 64, via the Amiga, to today's PCs. Many people prominent in the Yugoslav, Serbian and Belgrade computer scenes have worked for Svet kompjutera. The first editor-in-chief was Milan Misic, later Politika's correspondent in India and Japan, then its foreign policy column editor, and a former editor-in-chief of that newspaper. Before moving on to other businesses, contributors to the development of Svet kompjutera included the following individuals: Stanko Popović (working independently in the computer business), Stanko Stojiljković (editor at the Ekspres daily newspaper), Sergej Marcenko (marketing editor at the political weekly NIN), Andrija Kolundžić (working independently in the computer business in Tokyo, Japan), Aleksandar Radovanovic (now working at various universities around the world), Voja Antonić, Dragoslav Jovanović (working at the University of Belgrade), Jovan Puzovic (working at the University of Belgrade), Nenad Balint (working at an IT company in the United Kingdom), Aleksandar Petrović (manager of a software company in Canada), Dalibor Lanik (working as a programmer in the Czech Republic) and many others. During 1986, when home computers were booming, the games subsection of Svet kompjutera began to evolve into a special issue, Svet igara (Games World). This issue has been published from time to time as a supplement to the magazine's games column; to date, 14 issues have been published.
The same year, Svet kompjutera published a special edition in Russian that was distributed in the former Soviet Union. The "Computer Grand Prix", organized by "ComputerWorld", is a contest for the best hardware and software products on the domestic market. During the period of UN sanctions, however, organized import of such products was not allowed, so the contest could not be held. In 1988 Svet kompjutera also organized "Computer '88", a small computer fair in downtown Belgrade, consisting of an exhibition, presentations, lectures and special broadcasts in the Belgrade media. In August 2005 Svet kompjutera launched its official web forum, "Forum Sveta kompjutera". As of February 2011 it had over 26,000 users and over 1,200,000 posts in over 56,000 topics. Logos The magazine has had two different logos since October 1984. The first was used from October 1984 to October 1991; the second, current logo has been in use since October 1991. External links Official site (in Serbian) SK forums (in Serbian) English section of the site "Svet kompjutera" slavi 20. godišnjicu (in Serbian) Magazines established in 1984 Mass media in Belgrade Monthly magazines Svet kompjutera 1984 establishments in Yugoslavia Magazines published in Yugoslavia
44933473
https://en.wikipedia.org/wiki/Lightning%20Memory-Mapped%20Database
Lightning Memory-Mapped Database
Lightning Memory-Mapped Database (LMDB) is a software library that provides an embedded transactional database in the form of a key-value store. LMDB is written in C with API bindings for several programming languages. LMDB stores arbitrary key/data pairs as byte arrays, has a range-based search capability, supports multiple data items for a single key and has a special mode for appending records (MDB_APPEND) without checking for consistency. LMDB is not a relational database; it is strictly a key-value store, like Berkeley DB and dbm. LMDB may also be used concurrently in a multi-threaded or multi-processing environment, with read performance scaling linearly by design. LMDB databases may have only one writer at a time; however, unlike many similar key-value databases, write transactions do not block readers, nor do readers block writers. LMDB is also unusual in that multiple applications on the same system may simultaneously open and use the same LMDB store, as a means to scale up performance. Also, LMDB does not require a transaction log (thereby increasing write performance by not needing to write data twice) because it maintains data integrity inherently by design. History LMDB's design was first discussed in a 2009 post to the OpenLDAP developer mailing list, in the context of exploring solutions to the cache management difficulty caused by the project's dependence on Berkeley DB. A specific goal was to replace the multiple layers of configuration and caching inherent to Berkeley DB's design with a single, automatically managed cache under the control of the host operating system. Development subsequently began, initially as a fork of a similar implementation from the OpenBSD ldapd project. The first publicly available version appeared in the OpenLDAP source repository in June 2011. The project was known as MDB until November 2012, after which it was renamed in order to avoid conflicts with existing software. Technical description Internally LMDB uses B+ tree data structures. The efficiency of its design and small footprint had the unintended side-effect of providing good write performance as well. LMDB has an API similar to Berkeley DB and dbm. LMDB treats the computer's memory as a single address space, shared across multiple processes or threads using shared memory with copy-on-write semantics (known historically as a single-level store). Because most earlier computing architectures had 32-bit memory address spaces, which impose a hard limit of 4 GB on the size of any database using such techniques, directly mapping a database into a single-level store was of strictly limited usefulness. However, today's 64-bit processors mostly implement 48-bit address spaces, giving access to 47-bit addresses or 128 terabytes of database size, making databases using shared memory useful once again in real-world applications. Specific noteworthy technical features of LMDB are: Its use of a B+ tree: with an LMDB instance in shared memory and the B+ tree block size set to the OS page size, access to an LMDB store is extremely memory efficient. New data is written without overwriting or moving existing data. This results in guaranteed data integrity and reliability without requiring transaction logs or cleanup services. The provision of a unique append-write mode (MDB_APPEND), implemented by allowing the new record to be added directly to the end of the B+ tree.
This reduces the number of reads and write page operations, resulting in greatly-increased performance but requiring that the programmer is responsible for ensuring keys are already in sorted order when storing into the DB. Copy-on-write semantics help ensure data integrity as well as providing transactional guarantees and simultaneous access by readers without requiring any locking, even by the current writer. New memory pages required internally during data modifications are allocated through copy-on-write semantics by the underlying OS: the LMDB library itself never actually modifies older data being accessed by readers because it simply cannot do so: any shared-memory updates automatically create a completely independent copy of the memory-page being written to. As LMDB is memory-mapped, it can return direct pointers to memory addresses of keys and values through its API, thereby avoiding unnecessary and expensive copying of memory. This results in greatly-increased performance (especially when the values stored are extremely large), and expands the potential use cases for LMDB. LMDB also tracks unused memory pages, using a B+ tree to keep track of pages freed (no longer needed) during transactions. By tracking unused pages the need for garbage-collection (and a garbage collection phase which would consume CPU cycles) is completely avoided. Transactions which need new pages are first given pages from this unused free pages tree; only after these are used up will it expand into formerly unused areas of the underlying memory-mapped file. On a modern filesystem with sparse file support this helps minimise actual disk usage. The file format of LMDB is, unlike that of Berkeley DB, architecture-dependent. This means that a conversion must be done before moving a database from a 32-bit machine to a 64-bit machine, or between computers of differing endianness. Concurrency LMDB employs multiversion concurrency control (MVCC) and allows multiple threads within multiple processes to coordinate simultaneous access to a database. Readers scale linearly by design . While write transactions are globally serialized via a mutex, read-only transactions operate in parallel, including in the presence of a write transaction, and are entirely wait free except for the first read-only transaction on a thread. Each thread reading from a database gains ownership of an element in a shared memory array, which it may update to indicate when it is within a transaction. Writers scan the array to determine the oldest database version the transaction must preserve, without requiring direct synchronization with active readers. Performance In 2011 Google published software which allowed users to generate micro-benchmarks comparing LevelDB's performance to SQLite and Kyoto Cabinet in different scenarios. In 2012 Symas added support for LMDB and Berkeley DB and made the updated benchmarking software publicly available. The resulting benchmarks showed that LMDB outperformed all other databases in read and batch write operations. SQLite with LMDB excelled on write operations, and particularly so on synchronous/transactional writes. The benchmarks showed the underlying filesystem as having a big influence on performance. JFS with an external journal performs well, especially compared to other modern systems like Btrfs and ZFS. Zimbra has tested back-mdb vs back-hdb performance in OpenLDAP, with LMDB clearly outperforming the BDB based back-hdb. Many other OpenLDAP users have observed similar benefits. 
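The single-writer, non-blocking-reader model described under Concurrency above is visible directly through LMDB's language bindings. The following is a minimal illustrative sketch using the third-party Python binding (the py-lmdb package, imported as lmdb); the database path is an arbitrary placeholder, and the snippet is not taken from the LMDB distribution or its documentation:

import lmdb

# Open (or create) an environment; map_size reserves the maximum size of the
# memory map that backs the database (64 MiB here).
env = lmdb.open("/tmp/example-lmdb", map_size=64 * 1024 * 1024)

# A single write transaction; keys and values are byte strings.
# The transaction commits when the "with" block exits without an exception.
with env.begin(write=True) as txn:
    txn.put(b"alpha", b"1")
    txn.put(b"beta", b"2")

# Read-only transactions see a consistent snapshot of the data and do not
# block, even if another thread or process is writing concurrently (MVCC).
with env.begin() as txn:
    print(txn.get(b"alpha"))        # b'1'
    cursor = txn.cursor()
    for key, value in cursor:       # keys are returned in sorted order
        print(key, value)

# Bulk-loading keys that are already in sorted order can use append mode,
# which corresponds to the MDB_APPEND write mode described above.
with env.begin(write=True) as txn:
    for i in range(3, 6):
        txn.put(("key%04d" % i).encode(), b"payload", append=True)

env.close()

Because of the copy-on-write design, a reader's snapshot remains valid for the lifetime of its transaction even while later write transactions commit.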
Since the initial benchmarking work done in 2012, multiple follow-on tests have been conducted with additional database engines for both in-memory and on-disk workloads, characterizing the performance across multiple CPUs and record sizes. These tests show that LMDB performance is unmatched on all in-memory workloads, and excels in all disk-bound read workloads, as well as disk-bound write workloads using large record sizes. The benchmark driver code was subsequently published on GitHub and further expanded in database coverage. Reliability LMDB was designed from the start to resist data loss in the face of system and application crashes. Its copy-on-write approach never overwrites currently-in-use data. Avoiding overwrites means the structure on disk/storage is always valid, so application or system crashes can never leave the database in a corrupted state. In its default mode, at worst a crash can lose data from the last not-yet-committed write transaction. Even with all asynchronous modes enabled, it is only a catastrophic OS failure or hardware power-loss event, rather than merely an application crash, that could potentially result in any data corruption. Two academic papers from the USENIX OSDI Symposium covered failure modes of DB engines (including LMDB) under a sudden power loss or system crash. The paper by Pillai et al. did not find any failure in LMDB that would occur in the real-world file systems considered; the single failure identified by the study in LMDB relates only to hypothetical file systems. The paper by Mai Zheng et al. claims to point out failures in LMDB, but the conclusion depends on whether fsync or fdatasync is utilised; using fsync ameliorates the problem. The selection of fsync or fdatasync is a compile-time switch; using fsync is not the default behavior in current Linux builds of LMDB, but it is the default on macOS, *BSD, Android, and Windows. Default Linux builds of LMDB are therefore the only ones vulnerable to the problem discovered by Zheng et al.; however, Linux users may simply rebuild LMDB to utilise fsync instead. When provided with a corrupt database, such as one produced by fuzzing, LMDB may crash. LMDB's author considers the case unlikely to be concerning, but has nevertheless produced a partial fix in a separate branch. Open source license In June 2013, Oracle changed the license of Berkeley DB (a related project) from the Sleepycat license to the Affero General Public License, thus restricting its use in a wide variety of applications. This caused the Debian project to exclude the library from version 6.0 onwards. The new license was also criticized as unfriendly to commercial redistributors, and the change sparked discussion over whether the same thing could happen to LMDB. Author Howard Chu made clear that LMDB is part of the OpenLDAP project, which had its BSD-style license before he joined, and that it will stay that way; no copyright is transferred to anybody by checking in, which makes a move similar to Oracle's impossible. The Berkeley DB license issue has caused major Linux distributions such as Debian to completely phase out their use of Berkeley DB, with a preference for LMDB. API and uses There are wrappers for several programming languages, such as C++, Java, Python, Lua, Go, Ruby, Objective C, Javascript, C#, Perl, PHP, Tcl and Common Lisp. A complete list of wrappers may be found on the main web site. Howard Chu ported SQLite 3.7.7.1 to use LMDB instead of its original B-tree code, calling the end result SQLightning.
One cited insert test of 1,000 records was 20 times faster than the original SQLite with its B-tree implementation. LMDB is available as a backing store for other open source projects including Cyrus SASL, Heimdal Kerberos, and OpenDKIM. It is also available in some other NoSQL projects like MemcacheDB and Mapkeeper. LMDB was used to make the in-memory store Redis persist data on disk. The existing back-end in Redis showed pathological behaviour in rare cases, and a replacement was sought. LMDB's baroque API was criticized, however, for forcing a lot of coding to get simple things done, but its performance and reliability during testing were considerably better than those of the alternative back-end stores that were tried. An independent third-party software developer utilised the Python bindings to LMDB in a high-performance environment and published, on the technical news site Slashdot, how the system managed to successfully sustain 200,000 simultaneous read, write and delete operations per second (a total of 600,000 database operations per second). An up-to-date list of applications using LMDB is maintained on the main web site. Application support Many popular free software projects distribute or include support for LMDB, often as the primary or sole storage mechanism. The Debian, Ubuntu, Fedora, and OpenSuSE operating systems. OpenLDAP, for which LMDB was originally developed, via the back-mdb backend. Postfix, via an LMDB adapter. PowerDNS, a DNS server. CFEngine, which uses LMDB by default since version 3.6.0. Shopify, which uses LMDB in its SkyDB system. Knot DNS, a high-performance DNS server. Monero, an open source cryptocurrency created in April 2014 that focuses on privacy, decentralisation and scalability. Enduro/X middleware, which uses LMDB for an optional XATMI microservices (SOA) cache: the first request invokes the actual service, while subsequent requests read the saved result directly from LMDB. Samba Active Directory Domain Controller. Technical reviews of LMDB LMDB makes novel use of well-known computer science techniques such as copy-on-write semantics and B+ trees to provide atomicity and reliability guarantees as well as performance that can be hard to accept, given the library's relative simplicity and that no other similar key-value store database offers the same guarantees or overall performance, even though the authors explicitly state in presentations that LMDB is read-optimised, not write-optimised. Additionally, as LMDB was primarily developed for use in OpenLDAP, its developers are focused mainly on the development and maintenance of OpenLDAP, not on LMDB per se. The developers' limited time spent presenting the first benchmark results was therefore criticized for not stating limitations and for giving a "silver bullet impression" not adequate to address an engineer's attitude (the concerns raised were, however, later addressed to the reviewer's satisfaction by the key developer behind LMDB; "LMDB: The Leveldb Killer?", retrieved 2014-10-20). The presentation did spark other database developers into dissecting the code in depth to understand how and why it works. Reviews run from brief to in-depth. Database developer Oren Eini wrote a 12-part series of articles on his analysis of LMDB, beginning July 9, 2013. The conclusion was along the lines of "impressive codebase ... dearly needs some love", mainly because of overly long methods and code duplication.
This review, conducted by a .NET developer with no prior experience of C, concluded on August 22, 2013, with "beyond my issues with the code, the implementation is really quite brilliant. The way LMDB manages to pack so much functionality by not doing things is quite impressive... I learned quite a lot from the project, and it has been frustrating, annoying and fascinating experience". Multiple other reviews cover LMDB in various languages, including Chinese. References C (programming language) libraries Embedded databases Free software programmed in C Key-value databases NoSQL Structured storage
25084036
https://en.wikipedia.org/wiki/USB%20image
USB image
A USB image is a bootable image of an operating system (OS) or other software in which the boot loader is located on a USB flash drive or another USB storage device instead of a conventional CD or DVD. The computer boots from the USB device either to run the OS or other software directly from the drive, much like a live CD, or to install the OS onto the computer; in the first case, the USB image runs off the USB device the whole time. A USB image is easier to carry and can be stored more safely than a conventional CD or DVD. Drawbacks are that some older devices may not support USB booting and that the lifespan of the USB storage device may be shortened. Ubuntu has included a utility for installing an operating system image file to a USB flash drive since version 9.10. Microsoft also provides step-by-step guidance on how to set up a USB device as a bootable drive for Windows. Software Both graphical applications and command line utilities are available for authoring bootable operating system images. dd is a utility commonly found in Unix operating systems that allows creation of bootable images; a sketch of the equivalent block-copy operation appears at the end of this article. Benefits and limitations Benefits In contrast to a live CD, a USB image is easier to transport and to store (e.g. in a pocket, attached to a key chain, carried in a bag, or locked away in a safe), whereas a CD is more easily damaged or corrupted and less convenient to carry. After OS installation, the USB device can be removed and the operating system will run without the USB stick inserted into the computer, allowing installation on multiple devices with a single USB drive (this is the case for Windows 8.1 and newer Microsoft Windows versions, which fully support installation from a USB image). The absence of moving parts in USB flash devices allows true random access, avoiding the rotational latency and seek time of spinning media, meaning small programs will start faster from a USB flash drive than from a local hard disk or live CD. However, as USB devices typically achieve lower data transfer rates than internal hard drives, booting from older computers that lack USB 2.0 or newer can be very slow. Limitations Some older systems have limited support for USB, since their BIOSes were not designed for this purpose at the time. Other devices may not boot from USB if the BIOS is set to 'Legacy mode'. Due to the additional write cycles that occur with a full installation, the lifespan of the USB device may be shortened. To mitigate this, a USB hard drive can be used, as it gives better performance than a USB stick regardless of the connector. See also UEFI Live USB References Booting
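As an illustration of what a utility such as dd does when it writes a bootable image to a USB device, the following minimal Python sketch copies an image file onto a raw device node in fixed-size blocks and then forces the data out to the hardware. Both paths are hypothetical placeholders, and running this against a real device node is destructive and requires appropriate privileges.

import os

IMAGE_PATH = "/path/to/bootable.img"   # hypothetical image file
DEVICE_PATH = "/dev/sdX"               # hypothetical USB device node (all data on it is overwritten)
BLOCK_SIZE = 4 * 1024 * 1024           # copy in 4 MiB blocks, analogous to dd's bs= operand

def write_image(image_path, device_path, block_size=BLOCK_SIZE):
    """Copy an image file onto a block device, block by block, and sync it."""
    copied = 0
    with open(image_path, "rb") as src, open(device_path, "wb") as dst:
        while True:
            block = src.read(block_size)
            if not block:
                break
            dst.write(block)
            copied += len(block)
        dst.flush()
        os.fsync(dst.fileno())  # make sure the data actually reaches the device
    return copied

if __name__ == "__main__":
    print("wrote %d bytes" % write_image(IMAGE_PATH, DEVICE_PATH))

The corresponding dd invocation simply names the same image file, output device and block size through its if=, of= and bs= operands.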
33091221
https://en.wikipedia.org/wiki/West%20Career%20and%20Technical%20Academy
West Career and Technical Academy
West Career and Technical Academy (WCTA, West Tech) is a magnet high school located in Las Vegas, Nevada, United States. The school opened in August 2010 as the first magnet school in Summerlin, a community in the western Las Vegas Valley. It is administered by the Clark County School District. As of 2019, the school had an enrollment of 1,397 students and 61 classroom teachers on a FTE basis, for a student-teacher ratio of 23:1. West Tech offers nine programs to prepare students for a career in the field selected. History After one and a half years of construction costing $83.5 million, West Tech opened to students on August 30, 2010. It is the last high school built under a 1998 bond program to revitalize schools in the Clark County School District (CCSD), which involved the construction, replacement, and rehabilitation of more than 100 schools. For its first year of operation, West Tech admitted 750 freshmen and sophomores. In December 2014, student Angelique Clark applied to begin a pro-life club at the school; however, the administration denied her application, claiming the subject was too controversial. She proceeded to send two demand letters to CCSD, but when she failed to receive a response, she sued West Tech in August 2015. The following month, the school agreed to the formation of the club. Facilities The school is built on of land at the foothills of the Red Rock Canyon National Conservation Area. The buildings of the school occupy . A rotating solar panel and a ground heat source exchange system help to power the campus. There is also a computerized weather station and four greenhouses, which are used to facilitate students' learning in horticulture, biotechnology, and other fields. West Tech also includes multiple computer labs and a student WiFi system. Academics West Tech offers the following nine programs: . Biomedical Sciences: prepares students with the knowledge and skills in disease exploration, human body systems, and biomedical engineering. Areas of study include infectious and genetic diseases, molecular biology, oncology, metabolism, homeostasis, and exercise physiology. Biotechnology: explores molecular biology and genetics through industry-standard equipment. Areas of exploration include applied biomedical engineering, molecular biology, pathogen defense, infectious diseases, genetic diseases and preventing, detecting, and treating cancer. Topics of investigation include biomedical problems, community health, and the roles and responsibilities of various biomedical professions. Business Management: prepares students with the overall principles of business management. Areas of study include economics, budgeting, human resource management, operations, strategic management and financial-based decision making. Students will learn how to file taxes and have opportunities for certification. Cybersecurity: prepares students with knowledge and skills in computer maintenance and repair, the cybersecurity life cycle, incident handling and networking. Successful students will be prepared to take certification exams for CompTIA’s A+ and Networking +, which are required baseline certifications for careers in IT and Cybersecurity. Digital Media: introduces students to the principles of creating graphic works. Areas of study include elements and principles of design, production aspects, legal and ethical issues, and portfolio development. The Digital Media program breaks into either graphic design, photography, or video production/broadcast subgroups. 
Engineering: discusses architecture and civil engineering through areas of study including safety, construction documentation, the engineering design process, and the impacts of engineering on society. Students will explore robotic systems, determine its components, and construct a robotic system for automation. Environmental Science: prepares students with the information and skills necessary for success in environmental management. Areas of study include ecology, environmental quality, sustainable use, GIS and GPS, energy, hydrology and hydrogeology, law and public policy, and environmental site analysis. Nursing: provides students with the knowledge and skills required for entry into the healthcare field. Students who complete the didactic and clinical practicum have the opportunity to become licensed as a Certified Nursing Assistant. Sports Medicine: prepares students with an introduction to sports medicine techniques and processes. The program provides the primary skills and knowledge in athletic training, and sports medicine related fields. The areas of study include physical fitness, human anatomy and physiology, injury evaluation and prevention, and rehabilitation. Students select one of these programs to study throughout high school, designed to assist them in future studies and a career in the field. A student must be enrolled in West Tech by their sophomore year so that all required classes for their program can be completed in a timely manner; for this reason a student may not change their chosen program after sophomore year. West Tech utilizes the Google Apps for Education, which grant each student a free e-mail account, access to Google Sites, and other services. Clubs and activities The school offers several career and technical student organizations (CTSOs), including DECA, FBLA, Skills USA, and HOSA. There are no NIAA sports teams. The school also offers two honors societies, including NHS and Mu Alpha Theta. Some students choose to start their own clubs with the assistance of faculty and classmates, including clubs like Mock Trial and Speech and Debate. Project-Based Learning West Tech incorporates one school-wide project-based learning (PBL) event per semester which lasts two days. Topics for the PBL vary, including exploring different cultures, researching new technologies, and improving school spirit. Notable faculty Yvonne Caples Former Computer Based Projects teacher Monte Bay Original principal (2010-2012) Brian Boyars Current Chemistry Teacher known for carrying a lightsaber around campus if a student dares to cheat on a lab report References External links School website Clark County School District website High schools in Clark County, Nevada Educational institutions established in 2010 Public high schools in Nevada Magnet schools in Nevada Buildings and structures in Summerlin, Nevada 2010 establishments in Nevada
9057232
https://en.wikipedia.org/wiki/ParaView
ParaView
ParaView is an open-source multi-platform application for interactive, scientific visualization. It has a client–server architecture to facilitate remote visualization of datasets, and generates level of detail (LOD) models to maintain interactive frame rates for large datasets. It is an application built on top of the Visualization Toolkit (VTK) libraries. ParaView is an application designed for data parallelism on shared-memory or distributed-memory multicomputers and clusters. It can also be run as a single-computer application. Summary ParaView is an open-source, multi-platform data analysis and visualization application. ParaView is known and used in many different communities to analyze and visualize scientific data sets. It can be used to build visualizations to analyze data using qualitative and quantitative techniques. The data exploration can be done interactively in 3D or programmatically using ParaView's batch processing capabilities. ParaView was developed to analyze extremely large datasets using distributed memory computing resources. It can be run on supercomputers to analyze terascale datasets as well as on laptops for smaller data. ParaView is an application framework as well as a turn-key application. The ParaView code base is designed in such a way that all of its components can be reused to quickly develop vertical applications. This flexibility allows ParaView developers to quickly develop applications that have specific functionality for a specific problem domain. ParaView runs on distributed and shared memory parallel and single processor systems. It has been successfully tested on Windows, macOS, Linux, IBM Blue Gene, Cray Xt3 and various Unix workstations, clusters and supercomputers. Under the hood, ParaView uses the Visualization Toolkit (VTK) as the data processing and rendering engine and has a user interface written using Qt. The goals of the ParaView team include the following: Develop an open-source, multi-platform visualization application. Support distributed computation models to process large data sets. Create an open, flexible, and intuitive user interface. Develop an extensible architecture based on open standards. History The ParaView project started in 2000 as a collaborative effort between Kitware, Inc. and Los Alamos National Laboratory through funding provided by the US Department of Energy ASCI Views program. The first public release was announced in October 2002. Independently of ParaView, Kitware developed a web-based visualization system in December 2001. This project was funded by Phase I and II SBIRs from the US Army Research Laboratory and eventually became the ParaView Enterprise Edition. PVEE significantly contributed to the development of ParaView's client/server architecture. In September 2005, Kitware, Sandia National Labs and CSimSoft (now Coreform LLC) started the development of ParaView 3.0, which was released in May 2007. In June 2013, ParaView 4.0 was released; this version was based on VTK 6.0. Version 5.0 was released in January 2016; this version included a new rendering back-end. Features Visualization capabilities Handles structured (uniform rectilinear, non-uniform rectilinear, and curvilinear grids), unstructured, polygonal, image, multi-block and AMR data types. All processing operations (filters) produce datasets. This allows the user to either further process the result of every operation or save the results as a data file.
For example, the user can extract a cut surface, reduce the number of points on this surface by masking and apply glyphs (i.e. vector arrows) to the result. Vectors fields can be inspected by applying glyphs (arrows, cones, lines, spheres, and various 2D glyphs) to the points in a dataset. The glyphs can be scaled by scalars, vector component or vector magnitude and can be oriented using a vector field. Contours and isosurfaces can be extracted from all data types using scalars or vector components. The results can be colored by any other variable or processed further. When possible, structured data contours/isosurfaces are extracted with fast and efficient algorithms which make use of the efficient data layout. A sub-region of a dataset can be extracted by cutting or clipping with an arbitrary plane (all data types), specifying a threshold criteria to exclude cells (all data types) and/or specifying a VOI (volume of interest - structured data types only). Streamlines can be generated using constant step or adaptive integrators. The results can be displayed as points, lines, tubes, ribbons, etc., and can be processed by a multitude of filters. Particle paths can be extracted from temporal datasets. The points in a dataset can be warped (displaced) with scalars (given a user defined displacement vector) or with vectors (unavailable for non-linear rectilinear grids). With the array calculator, new variables can be computed using existing point or cell field arrays. A multitude of scalar and vector operations are supported. Advanced data processing can be done using the Python Programmable filter with VTK, NumPy, SciPy and other Python modules. Data can be probed at a point or along a line. The results are displayed either graphically or as text and can be exported for further analysis. Data can also be extracted over time (including statistical information such as minimum, maximum and standard deviation). Data can be inspected quantitatively using the powerful selection mechanism and the spreadsheet view: The selection mechanism allows the user to focus on an important subset of a dataset using either interactive selection by picking a point or selecting a rectangular area as well quantitative selection mechanisms. The spreadsheet view allows the user to inspect either the whole dataset or the selected subset as raw numbers. ParaView provides many other data sources and filters by default. Any VTK source or filter can be added by providing a simple XML description. Input/output and file format Supports a variety of file formats including: VTK (new and legacy, all types including parallel, ASCII and binary, can be read and written). EnSight 6 and EnSight Gold (all types including parallel, ASCII and binary; multiple parts are supported -each part is loaded separately and can be processed individually) (read only). CGNS (support for multiple blocks, unsteady solutions and mesh deformation, based on HDF5 low level format) (read only). Various polygonal file formats including STL and BYU (by default, read only, other VTK writers can be added by writing XML description). Many other file formats are supported. Any VTK source or filter can be added by providing a simple XML description (VTK provides many readers). Since ParaView is open source, the user can provide their own readers and writers. User interaction Intuitive and flexible interface based on the Qt application framework. 
Allows changing the parameters of many filters by directly interacting with the 3D view using 3D widgets (manipulators). For example, the user can manipulate the seed line of a streamline filter by clicking on a control point and dragging the line to the new location. Compact user interface design. By default, all important tools are located in the main window. This eliminates the need for a large number of windows, which are often difficult to locate on a cluttered desktop. It is also possible to tear off inspectors from the main window. Maintains interactive frame rates even when working with large data through the use of level-of-detail (LOD) models. The user determines the threshold (number of points) beyond which a reduced version of the model is displayed during interaction (the size of the model can also be adjusted). Once the interaction is over, the large model is rendered. Large data and distributed computing Runs in parallel on distributed and shared memory systems using MPI. These include workstation clusters, visualization systems, large servers, supercomputers, etc. The user interface is run on a separate computer using the client/server mode. ParaView uses the data parallel model, in which the data is broken into pieces to be processed by different processes. Most of the visualization algorithms function without any change when running in parallel. ParaView also supports ghost levels used to produce piece-invariant results. Ghost levels are points/cells shared between processes and are used by algorithms which require neighborhood information. Supports distributed rendering (where the results are rendered on each node and composited later using the depth buffer), local rendering (where the resulting polygons are collected on one node and rendered locally) and a combination of both (for example, the level-of-detail models can be rendered locally whereas the full model is rendered in a distributed manner). This provides scalable rendering for large data without sacrificing performance when working with smaller data. Distributed rendering and tiled display are done using Sandia's Ice-T library. Scripting and extensibility ParaView is fully scriptable using the simple but powerful Python language; a short scripting sketch appears at the end of this article. ParaView's data engine, called the server manager, is fully accessible through the Python interface. All changes made to the engine through Python are automatically reflected in the user interface. ParaView can be run as a batch application using the Python interface, and it has been successfully run in batch mode on supercomputers including IBM Blue Gene and Cray Xt3. Distributed data processing can be done in Python using the Python Programmable Filter. This filter functions seamlessly with NumPy and SciPy. Additional modules can be added by either writing an XML description of the interface or by writing C++ classes. The XML interface allows users/developers to add their own VTK filters to ParaView without writing any special code and/or re-compiling. ParaView in use In 2005 Sandia National Laboratories, Nvidia and Kitware issued multiple press releases on the scalable visualization and rendering work done on ParaView. The releases announced breakthroughs in scalable performance, attaining rendering rates of over 8 billion polygons per second using ParaView. ParaView is used as the visualization platform for the modeling software OpenFOAM. ParaView is used in the University of North Carolina at Chapel Hill course on Visualization in the Sciences.
The National Center for Computational Sciences at Oak Ridge National Laboratory uses ParaView for visualizing large datasets. SimScale uses ParaView as an alternative to its integrated post-processing environment and is offering several tutorials and webinars on post-processing with ParaView. The FEATool Multiphysics simulation toolbox features one-click export to ParaView Glance interactive web plots. See also CMake ITK Scientific visualization VisIt VTK References External links Paraview's use in different areas ParaView Gallery ParaView Publications Flickr page of Paraview visualizations Kitware videos on Vimeo Computer-aided engineering software for Linux Engineering software that uses Qt Free data visualization software Software using the BSD license Software that uses VTK
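As a concrete illustration of the Python scripting interface described above under Scripting and extensibility, the following minimal sketch can be run with ParaView's pvpython or pvbatch interpreters; it builds a small pipeline, colors the result and saves a screenshot. The output path is an arbitrary placeholder, and the snippet is only a sketch of typical paraview.simple usage rather than an official ParaView example.

from paraview.simple import *

# Create a source and attach a filter: a sphere whose points are given an
# 'Elevation' scalar by the Elevation filter.
sphere = Sphere(ThetaResolution=64, PhiResolution=64)
elevation = Elevation(Input=sphere)

# Show the filter output, color it by the generated point array,
# then render the scene and write an image to disk.
display = Show(elevation)
ColorBy(display, ("POINTS", "Elevation"))
Render()
SaveScreenshot("/tmp/elevation.png")

Run under pvbatch, the same script executes without the graphical interface, which is how batch processing on clusters is typically done.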
1861497
https://en.wikipedia.org/wiki/Computerworld
Computerworld
Computerworld (abbreviated as CW) is a decades-old professional publication that "went digital" in 2014. Its audience is information technology (IT) and business technology professionals, and it is available via its website and as a digital magazine. As a printed weekly during the 1970s and into the 1980s, Computerworld was the leading trade publication in the data processing industry. Indeed, based on circulation and revenue it was one of the most successful trade publications in any industry. Later in the 1980s it began to lose its dominant position. It is published in many countries around the world under the same or similar names. Each country's version of Computerworld includes original content and is managed independently. The parent company of Computerworld US is IDG Communications. History The first issue was published in 1967. Going international IDG offers the "Computerworld" brand in 47 countries worldwide, though the name and publication frequency differ slightly from country to country. When IDG established the Swedish edition in 1983, for example, the title "Computerworld" was already registered in Sweden by another publisher, which is why the Swedish edition is named Computer Sweden. The corresponding German publication is called Computerwoche (which translates to "computer week") instead. The Swedish edition was distributed as a morning newspaper in tabloid format (41 cm) in 51,000 copies (2007), with an estimated 120,000 readers. From 1999 to 2008 it was published three days a week, but since 2009 it has been published only on Tuesdays and Fridays. Going digital In June 2014, Computerworld US abandoned its print edition, becoming an exclusively digital publication. In late July 2014, Computerworld debuted the monthly Computerworld Digital Magazine. In 2017, Computerworld celebrated its 50th year in tech publishing with a number of features and stories highlighting the publication's history. Computerworld's website premiered in 1996, nearly two decades before its last printed issue. Ongoing Computerworld US serves IT and business management with coverage of information technology, emerging technologies and analysis of technology trends. Computerworld also publishes several notable special reports each year, including the 100 Best Places to Work in IT, the IT Salary Survey, the DATA+ Editors' Choice Awards and the annual Forecast research report. Computerworld has in the past published stories highlighting the effects of immigration to the U.S. (e.g. the H-1B visa program) on software engineers. Staff The executive editor of Computerworld in the U.S. is Ken Mingis, who leads a small staff of editors, writers and freelancers who cover a variety of enterprise IT topics (with a concentration on Windows, Mobile and Apple/Enterprise). See also Patrick Joseph McGovern References Further reading External links 1967 establishments in the United States 1983 establishments in Norway 1983 establishments in Sweden Defunct computer magazines published in the United States International Data Group Magazines established in 1967 Magazines disestablished in 2014 Magazines published in Boston Monthly magazines published in the United States Online magazines published in the United States Online magazines with defunct print editions
36465256
https://en.wikipedia.org/wiki/Mahdi%20%28malware%29
Mahdi (malware)
Mahdi is computer malware that was initially discovered in February 2012 and was reported in July of that year. According to Kaspersky Lab and Seculert (an Israeli security firm which discovered the malware), the software has been used for targeted cyber espionage since December 2011, infecting at least 800 computers in Iran and other Middle Eastern countries. Mahdi is named after files used in the malware and refers to the Muslim figure. See also Operation High Roller References Windows trojans 2012 in computing Cyberwarfare Cyberwarfare in Iran
1721497
https://en.wikipedia.org/wiki/Audrey%20Tang
Audrey Tang
Audrey Tang (; born 18 April 1981) is a Taiwanese free software programmer and Digital Minister of Taiwan, who has been described as one of the "ten greatest Taiwanese computing personalities". In August 2016, Tang was invited to join Taiwan's Executive Yuan as a minister without portfolio, making her the first transgender and the first non-binary official in the top executive cabinet. Tang has identified as "post-gender" and accepts "whatever pronoun people want to describe me with online." Tang is a community leader of Haskell and Perl and the core member of G0v. Early life Tang was born to father Kuang-hua Tang and mother Ya-ching Lee. Ya-ching Lee helped develop Taiwan's first consumer co-operative, and co-developed an experimental primary school employing indigenous teachers. Tang was a child prodigy, reading works of classical literature before the age of five, advanced mathematics before six, and programming before eight, and she began to learn Perl at age 12. Tang spent part of her childhood in Germany. Two years later, she dropped out of junior high school, unable to adapt to student life. By the year 2000, at the age of 19, Tang had already held positions in software companies, and worked in California's Silicon Valley as an entrepreneur. In late 2005, Tang began transitioning to female, including changing her English and Chinese names, citing a need to reconcile her outward appearance with her self-image. In 2017, Tang said, "I've been shutting reality off, and lived almost exclusively on the net for many years, because my brain knows for sure that I am a woman, but the social expectations demand otherwise." In 2019, Tang identified as "post-gender" or non-binary, responding to a request regarding pronoun preferences with "What’s important here is not which pronouns you use, but the experience...about those pronouns... I’m not just non-binary. I’m really whatever, so do whatever." The television news channel ETToday reported that Tang has an IQ of 180. Tang has been a vocal proponent for autodidacticism and individualist anarchism. Free software contributions Tang initiated and led the Pugs project, a joint effort from the Haskell and Perl communities to implement the Perl 6 language; Tang also made contributions to internationalization and localization efforts for several Free Software programs, including SVK (a version-control software written in Perl for which Tang also wrote a large portion of the code), Request Tracker, and Slash, created Ethercalc, building on Dan Bricklin's work on WikiCalc and their work together on SocialCalc, as well as heading Traditional Chinese translation efforts for various open source-related books. On CPAN, Tang initiated over 100 Perl projects between June 2001 and July 2006, including the popular Perl Archive Toolkit (PAR), a cross-platform packaging and deployment tool for Perl 5. Tang is also responsible for setting up smoke test and digital signature systems for CPAN. In October 2005, Tang was a speaker at O'Reilly Media's European Open Source Convention in Amsterdam. Political career Tang became involved in politics during Taiwan's 2014 Sunflower Student Movement demonstrations, in which Tang volunteered to help the protesters occupying the Taiwanese parliament building broadcast their message. The prime minister invited Tang to build media literacy curricula for Taiwan's schools, which was implemented in late 2017. 
Following this work, Tang was appointed minister without portfolio for digital affairs in the Lin Chuan cabinet in August 2016, and took office as the digital minister on October 1, being placed in charge of helping government agencies communicate policy goals and of managing information published by the government, both via digital means. At age 35, Tang was the youngest minister without portfolio in Taiwanese history and was given this role to bridge the gap between the older and younger generations. As a conservative anarchist, Tang ultimately desires the abolition of Taiwan and all states, and justifies working for the state by the opportunity it affords to promote worthwhile ends. Tang's conservatism stems from wanting to preserve free public spaces independent from the state, such as Internet properties, and wanting technological advances to be applied humanistically so that all, rather than a few to the exclusion of others, can reap their benefits. Tang's department does not follow hierarchical or bureaucratic relationships. As of 2017, Tang's staff of 15 had all chosen to work in the department. The group produces a weekly roadmap collaboratively rather than through top-down orders. Tang was quoted as saying, "My existence is not to become a minister for a certain group, nor to broadcast government propaganda. Instead, it is to become a 'channel' to allow greater combinations of intelligence and strength to come together." One early initiative, the g0v project, involved swapping out the "o" for a zero in the government's "gov.tw" domain to reach more accessible and interactive versions of those governmental websites. The project was open source, in line with Tang's principles, and very popular, accessed millions of times each month. Another initiative, vTaiwan, uses social media paradigms for citizens to create digital petitions. Those with 5,000 signatures are brought to the premier and government ministries to be addressed. Changes implemented through this system include access to income tax software for non-Windows computers, and changes to cancer treatment regulations. The Taiwanese parliament complained that citizens had better access to influence regulation than legislators did. As of 2017, Tang was working on sharing-economy software that would facilitate the free exchange of resources in abundance, rather than the ride-sharing and peer hotel applications for which the technology is better known. As a general practice of "radical transparency", all of Tang's meetings are recorded, transcribed, and uploaded to a public website. Tang also publicly responds to questions sent through another website. References Publications Further reading "The Frontiers of Digital Democracy" – Nathan Gardels interviews Tang in Noema External links Audrey's Pugs Journal and Personal Blog Audrey's Medium page An interview with Autrijus by Debby (in Mandarin) Podcast interview with Audrey on Perlcast Perl Archive Toolkit Audrey's contributions on CPAN "SocialCalc" Free software programmers Perl writers Taiwanese computer programmers Transgender and transsexual computer programmers LGBT people from Taiwan 1981 births Living people Taiwanese computer scientists Government ministers of Taiwan 21st-century Taiwanese scientists LGBT scientists from Taiwan Transgender and transsexual politicians Transgender and transsexual scientists Transgender non-binary people 21st-century LGBT people Individualist anarchists Taiwanese anarchists Non-binary politicians LGBT politicians from Taiwan
55365
https://en.wikipedia.org/wiki/Streaming%20SIMD%20Extensions
Streaming SIMD Extensions
In computing, Streaming SIMD Extensions (SSE) is a single instruction, multiple data (SIMD) instruction set extension to the x86 architecture, designed by Intel and introduced in 1999 in their Pentium III series of central processing units (CPUs) shortly after the appearance of Advanced Micro Devices' (AMD) 3DNow!. SSE contains 70 new instructions, most of which work on single-precision floating-point data. SIMD instructions can greatly increase performance when exactly the same operations are to be performed on multiple data objects. Typical applications are digital signal processing and graphics processing. Intel's first IA-32 SIMD effort was the MMX instruction set. MMX had two main problems: it re-used the existing x87 floating-point registers, making the CPU unable to work on both floating-point and SIMD data at the same time, and it only worked on integers. SSE's floating-point instructions operate on a new, independent register set, the XMM registers, and SSE adds a few integer instructions that work on the MMX registers. SSE was subsequently expanded by Intel to SSE2, SSE3, and SSE4. Because it supported floating-point math, SSE had wider applications than MMX and became more popular. The addition of integer support in SSE2 made MMX largely redundant, though further performance increases can be attained in some situations by using MMX in parallel with SSE operations. SSE was originally called Katmai New Instructions (KNI), Katmai being the code name for the first Pentium III core revision. During the Katmai project Intel sought to distinguish it from their earlier product line, particularly their flagship Pentium II. It was later renamed Internet Streaming SIMD Extensions (ISSE), then SSE. AMD eventually added support for SSE instructions, starting with its Athlon XP and Duron (Morgan core) processors. Registers SSE originally added eight new 128-bit registers known as XMM0 through XMM7. The AMD64 extensions from AMD (originally called x86-64) added a further eight registers, XMM8 through XMM15, and this extension is duplicated in the Intel 64 architecture. There is also a new 32-bit control/status register, MXCSR. The registers XMM8 through XMM15 are accessible only in 64-bit operating mode. SSE used only a single data type for the XMM registers: four 32-bit single-precision floating-point numbers. SSE2 would later expand the usage of the XMM registers to include: two 64-bit double-precision floating-point numbers or two 64-bit integers or four 32-bit integers or eight 16-bit short integers or sixteen 8-bit bytes or characters. Because these 128-bit registers are additional machine state that the operating system must preserve across task switches, they are disabled by default until the operating system explicitly enables them. This means that the OS must know how to use the FXSAVE and FXRSTOR instructions, the extended pair of instructions that can save all x86 and SSE register state at once. This support was quickly added to all major IA-32 operating systems. The first CPU to support SSE, the Pentium III, shared execution resources between SSE and the floating-point unit (FPU). While a compiled application can interleave FPU and SSE instructions side-by-side, the Pentium III will not issue an FPU and an SSE instruction in the same clock cycle. This limitation reduces the effectiveness of pipelining, but the separate XMM registers do allow SIMD and scalar floating-point operations to be mixed without the performance hit from explicit MMX/floating-point mode switching.
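In application code, these packed single-precision operations are usually reached through compiler intrinsics rather than hand-written assembly. The following minimal sketch in C assumes a compiler that provides the <xmmintrin.h> SSE intrinsics header (as GCC, Clang and MSVC do); the function and variable names are illustrative only.

#include <xmmintrin.h>  /* SSE intrinsics: __m128, _mm_loadu_ps, _mm_add_ps, _mm_storeu_ps */

/* Add two arrays of four floats with a single packed SSE addition. */
static void add4(const float *a, const float *b, float *out)
{
    __m128 va = _mm_loadu_ps(a);     /* load four floats into an XMM register */
    __m128 vb = _mm_loadu_ps(b);
    __m128 vr = _mm_add_ps(va, vb);  /* ADDPS: four single-precision additions at once */
    _mm_storeu_ps(out, vr);          /* store the four results back to memory */
}

Compiled for an SSE-capable target, the body maps to the same kind of packed load, ADDPS, and packed store sequence shown in the assembly example in the Example section below.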
SSE instructions SSE introduced both scalar and packed floating-point instructions. Floating-point instructions Memory-to-register/register-to-memory/register-to-register data movement Scalar – MOVSS Packed – MOVAPS, MOVUPS, MOVLPS, MOVHPS, MOVLHPS, MOVHLPS, MOVMSKPS Arithmetic Scalar – ADDSS, SUBSS, MULSS, DIVSS, RCPSS, SQRTSS, MAXSS, MINSS, RSQRTSS Packed – ADDPS, SUBPS, MULPS, DIVPS, RCPPS, SQRTPS, MAXPS, MINPS, RSQRTPS Compare Scalar – CMPSS, COMISS, UCOMISS Packed – CMPPS Data shuffle and unpacking Packed – SHUFPS, UNPCKHPS, UNPCKLPS Data-type conversion Scalar – CVTSI2SS, CVTSS2SI, CVTTSS2SI Packed – CVTPI2PS, CVTPS2PI, CVTTPS2PI Bitwise logical operations Packed – ANDPS, ORPS, XORPS, ANDNPS Integer instructions Arithmetic PMULHUW, PSADBW, PAVGB, PAVGW, PMAXUB, PMINUB, PMAXSW, PMINSW Data movement PEXTRW, PINSRW Other PMOVMSKB, PSHUFW Other instructions MXCSR management LDMXCSR, STMXCSR Cache and memory management MOVNTQ, MOVNTPS, MASKMOVQ, PREFETCHT0, PREFETCHT1, PREFETCHT2, PREFETCHNTA, SFENCE Example The following simple example demonstrates the advantage of using SSE. Consider an operation like vector addition, which is used very often in computer graphics applications. To add two single-precision, four-component vectors together using x86 requires four floating-point addition instructions. vec_res.x = v1.x + v2.x; vec_res.y = v1.y + v2.y; vec_res.z = v1.z + v2.z; vec_res.w = v1.w + v2.w; This corresponds to four x86 FADD instructions in the object code. On the other hand, as the following pseudo-code shows, a single 128-bit 'packed-add' instruction can replace the four scalar addition instructions. movaps xmm0, [v1] ;xmm0 = v1.w | v1.z | v1.y | v1.x addps xmm0, [v2] ;xmm0 = v1.w+v2.w | v1.z+v2.z | v1.y+v2.y | v1.x+v2.x movaps [vec_res], xmm0 ;store the four results to vec_res Later versions SSE2, Willamette New Instructions (WNI), introduced with the Pentium 4, is a major enhancement to SSE. SSE2 adds two major features: double-precision (64-bit) floating-point for all SSE operations, and MMX integer operations on 128-bit XMM registers. In the original SSE instruction set, conversion to and from integers placed the integer data in the 64-bit MMX registers. SSE2 enables the programmer to perform SIMD math on any data type (from 8-bit integer to 64-bit float) entirely with the XMM vector-register file, without the need to use the legacy MMX or FPU registers. It offers an orthogonal set of instructions for dealing with common data types. SSE3, also called Prescott New Instructions (PNI), is an incremental upgrade to SSE2, adding a handful of DSP-oriented mathematics instructions and some process (thread) management instructions. It also allows addition or multiplication of two numbers that are stored in the same register, which was not possible in SSE2 and earlier. This capability, known as a horizontal operation in Intel terminology, was the major addition to the SSE3 instruction set. AMD's 3DNow! extension could do the latter too. SSSE3, Merom New Instructions (MNI), is an upgrade to SSE3, adding 16 new instructions which include permuting the bytes in a word, multiplying 16-bit fixed-point numbers with correct rounding, and within-word accumulate instructions. SSSE3 is often mistaken for SSE4 as this term was used during the development of the Core microarchitecture. SSE4, Penryn New Instructions, is another major enhancement, adding a dot product instruction, additional integer instructions, a popcnt instruction (population count: count the number of bits set to 1, used extensively e.g. in cryptography), and more.
XOP, FMA4 and CVT16 are instruction set extensions announced by AMD in August 2007 and revised in May 2009. Advanced Vector Extensions (AVX), Gesher New Instructions (GNI), is an advanced version of SSE announced by Intel, featuring a data path widened from 128 bits to 256 bits and three-operand instructions (up from two). Intel released processors with AVX support in early 2011. AVX2 is an expansion of the AVX instruction set. AVX-512 is a family of 512-bit extensions to the 256-bit Advanced Vector Extensions SIMD instructions for the x86 instruction set architecture. Software and hardware issues As with all x86 instruction set extensions, it is up to the BIOS, operating system and application programmer to test and detect their existence and proper operation. Intel and AMD offer applications to detect what extensions a CPU supports. The CPUID opcode is a processor supplementary instruction (its name derived from CPU IDentification) for the x86 architecture. It was introduced by Intel in 1993 when it introduced the Pentium and SL-Enhanced 486 processors. User application uptake of the x86 extensions has been slow, with even bare minimum baseline MMX and SSE support being, in some cases, non-existent in applications some 10 years after these extensions became commonly available. Distributed computing has accelerated the use of these extensions in the scientific community, and many scientific applications refuse to run unless the CPU supports SSE2 or SSE3. The use of multiple revisions of an application to cope with the many different sets of extensions available is the simplest way around the x86 extension optimization problem. Software libraries and some applications have begun to support multiple extension types, hinting that full use of available x86 instructions may finally become common some 5 to 15 years after the instructions were initially introduced. Identifying The following programs can be used to determine which, if any, versions of SSE are supported on a system: Intel Processor Identification Utility CPU-Z – CPU, motherboard, and memory identification utility. lscpu – provided by the util-linux package in most Linux distributions. References External links Intel Intrinsics Guide SIMD computing X86 instructions
2350026
https://en.wikipedia.org/wiki/ILUG-Delhi
ILUG-Delhi
ILUG-Delhi is a Linux user group and the Delhi chapter of the India Linux User Group community (ILUG). ILUG-D regularly organises meetings to discuss Free and Open Source Software. Meetings are usually organised once a month and announced on the linux-delhi mailing list. Linux-Delhi has also conducted events such as the Linux Demo Day for the popularisation of FOSS. In 2005 and 2006, ILUGD organised the event "FreeDel" to popularise FOSS tools and philosophy. The first of these events took place on 17 and 18 September 2005. In 2007 the event was renamed "Freed.in"; it was held until 2009. In the summer of 2006, a bioinformatics workshop was run in collaboration with the JNU Bio Informatics Centre to develop FOSS solutions in the bioinformatics field and to nurture talent in FOSS tools and languages. Most of the group's discussion now takes place in its Telegram group, where the community is very active. During the COVID-19 pandemic, the weekly meetups (held every Friday) have been run online on the Jitsi platform; these meetups are themselves discussed in the Telegram group. External links Indian Linux User Group Delhi Telegram Group meetups page Indian Linux Community at Help See also Bangalore Linux User Group ILUG-Cochin Raj Mathur Bharat Operating System Solutions Free Software Users Group, Thiruvananthapuram Linux user groups
7463171
https://en.wikipedia.org/wiki/PowerPC%20applications
PowerPC applications
Microprocessors belonging to the PowerPC/Power ISA architecture family have been used in numerous applications. Personal Computers Apple Computer was the dominant player in the market of personal computers based on PowerPC processors until 2006 when it switched to Intel-based processors. Apple used PowerPC processors in the Power Mac, iMac, eMac, PowerBook, iBook, Mac mini, and Xserve. Classic Macintosh accelerator boards using PowerPCs were made by DayStar Digital, Newer Technology, Sonnet Technologies, and TotalImpact. There have been several attempts to create PowerPC reference platforms for computers by IBM and others: The IBM PReP (PowerPC Reference Platform) is a system standard intended to ensure compatibility among PowerPC-based systems built by different companies; IBM POP (PowerPC Open Platform) is an open and free standard and design of PowerPC motherboards. Pegasos Open Desktop Workstation (ODW) is an open and free standard and design of PowerPC motherboards based on Marvell Discovery II (MV64361) chipset; PReP standard specifies the PCI bus, but will also support ISA, MicroChannel, and PCMCIA. PReP-compliant systems will be able to run OS/2, AIX, Solaris, Taligent, and Windows NT; and the CHRP (Common Hardware Reference Platform) is an open platform agreed on by Apple, IBM, and Motorola. All CHRP systems will be able to run Mac OS, OS/2-PPC, Windows NT, AIX, Solaris, Novell Netware. CHRP is a superset of PReP and the PowerMac platforms. Power.org has defined the Power Architecture Platform Reference (PAPR) that provides the foundation for development of computers based on the Linux operating system. List of computers based on PowerPC: Amiga accelerator boards: Phase5 Blizzard PPC. Phase5 CyberStorm PPC. Apple iMac PowerMac Xserve Mac mini iBook PowerBook Eyetech AmigaOne Genesi Pegasos Open Desktop Workstation (ODW). EFIKA IBM RS/6000 AIX workstations ACube Systems Srl Sam440 (Samantha) Sam460ex (Samantha) Servers Apple Xserve Rack server. Genesi Open Server Workstation (OSW) with dual IBM PowerPC 970MP CPU. High density blade server (rack server). IBM Rack server. Supercomputers IBM Blue Gene/L and Blue Gene/P Supercomputer, keeping the top spots of supercomputers since 2004, also being the first systems to performa faster than one Petaflops. System p with POWER5 processors are used as the base for many supercomputers as they are made to scale well and have powerful CPUs. All supercomputers of Spanish Supercomputing Network, built using PowerPC 970 based blade servers. Magerit and Marenostrum are the most powerful supercomputers of the network. Roadrunner is a new Cell/Opteron based supercomputer that will be operational in 2008, pushing the 1 PetaFLOPS mark. Summit and Sierra, currently the world's first and second fastest supercomputers, respectively. Apple System X of Virginia Tech is a supercomputer based on 1100 Xserves (PowerPC 970) running Mac OS X. First built using stock PowerMac G5s making it one of the cheapest and most powerful supercomputer in its day. Cray The XT3, XT4 and XT5 supercomputers have Opteron CPUs but PowerPC 440 based SeaStar communications processors connecting the CPUs to a very high bandwidth communications grid. Sony The PlayStation 3 is the base of Cell based supercomputer grids running Yellow Dog Linux. Personal digital assistants (smartphones and tablets) IBM released a Personal Digital Assistant (PDA) reference platform ("Arctic") based on PowerPC 405LP (Low Power). This project is discontinued after IBM sold PowerPC 4XX design to AMCC. 
Game consoles All three major seventh-generation game consoles contain PowerPC-based processors. Sony's PlayStation 3 console, released in November 2006, contains a Cell processor, including a 3.2 GHz PowerPC control processor and eight closely threaded DSP-like accelerator processors, seven active and one spare; Microsoft's Xbox 360 console, released in 2005, includes a 3.2 GHz custom IBM PowerPC chip with three symmetrical cores, each core SMP-capable at two threads, and Nintendo's Wii console, also released in November 2006, contains an extension of the PowerPC architecture found in their previous system, the GameCube. TV Set Top Boxes/Digital Recorder IBM, Sony, and Zarlink Semiconductor had released several Set Top Box (STB) reference platforms based on IBM PowerPC 405 cores and IBM Set Top Box (STB) System-On-Chip (SOC) Sony Set top box (STB). Motorola Set top box. Dreambox Set Top Box. TiVo (Series1) personal TV/video digital recorder (VDR). Printers/Graphics Global Graphics, YARC Raster Image Processing (RIP) system for professional printers. Hewlett-Packard, Kyocera, Konica-Minolta, Lexmark, Xerox laser and inkjet printers. Network/USB Devices Buffalo Technology Kuro Box/LinkStation/TeraStation network-attached storage devices Cisco routers Culturecom - VoIP in China. Realm Systems BlackDog Plug-in USB mobile Linux Server Automotive Ford, Daimler Benz cars and other car manufacturers. Medical Equipment Horatio - patient simulator for training doctor and nurse. Matrox image processing subsystem for medical equipment: MRI, CAT, PET, USG Military and Aerospace The RAD750 (234A510, 234A511, 244A325) radiation-hardened processors, used in several spacecraft. Maxwell radiation hardened Single-board computer (SBC) for space and military projects. U.S. Navy submarine sonar systems. Canadarm for International Space Station (ISS) created by MacDonald, Detwiller & Associates (MDA). Leclerc main battle tank fire control Point of Sales Culturecom - Tax Point of Sales terminal in China. Test and Measurement Equipment LeCroy digital oscilloscopes (certain series). References External links The OpenPOWER Foundation PowerPC PowerPC implementations
9925110
https://en.wikipedia.org/wiki/Command%20Post%20of%20the%20Future
Command Post of the Future
The United States Army's Command Post of the Future (CPOF) is a C2 software system that allows commanders to maintain topsight over the battlefield; collaborate with superiors, peers and subordinates over live data; and communicate their intent. Originally a DARPA technology demonstration, in 2006 CPOF became an Army Program of Record. It is managed by the Product Manager Tactical Mission Command at Aberdeen Proving Ground, Maryland, and integrated with the Army's Maneuver Control System and other products. The prime contractor on the CPOF program is General Dynamics C4 Systems, which purchased the original developer of the software (MAYA Viz Ltd) in 2005. Overview CPOF began as a DARPA investigation to improve mission command using networked information visualization systems, with the goal of doubling the speed and quality of command decisions. The system was developed in a research setting by Global Infotek, Inc.; ISX Corporation (now part of Lockheed Martin Advanced Technology Laboratories); Oculus Info, Inc. (now called Uncharted Software Inc.); SYS Technologies, Inc.; and MAYA Viz (now part of General Dynamics C4 Systems) with the active participation of military personnel as subject matter experts. CPOF is one of several examples of collaborative software, but intended specifically for use in a mission command. A shared workspace is the main interface, in which every interface element in CPOF is a shared piece of data in a networked repository. Shared visual elements in CPOF include iconic representations of hard data, such as units, events, and tasks; visualization frameworks such as maps or schedule charts on which those icons appear; and brush-marks, ink-strokes, highlighting, notes and other annotation. All visual elements in CPOF are interactive via drag-and-drop gestures. Users can drag data-elements and annotation from any visualization framework into any other (i.e., from a chart to a table), which reveal different data-attributes in context depending on the visualization used. Most data-elements can be grouped and nested via drag-and-drop to form associations that remain with the data in all of its views. Drag-and-drop composition on live visualizations is CPOF's primary mechanism for editing data values, such as locations on a map or tasks on a schedule (for example, moving an event-icon on a map changes the lat/lon values of that event in the shared repository; moving a task icon on a schedule changes its time-based values in the shared repository). The results of editing gestures are conveyed in real-time to all observers and users of a visualization; when one user moves an event on a map, for example, that event-icon moves on all maps and shared views, such that all users see its new location immediately. Data inputs from warfighters are conveyed to all collaborators as the "natural" result of a drop-gesture in-situ, requiring no explicit publishing mechanism. CPOF is also used as a live-data alternative to PowerPoint briefings. During a CPOF briefing, commanders can drill into any data element in a high-level view to see details on demand, and view outliers or other elements of interest in different visual contexts without switching applications. Annotations and editing-gestures made during briefings become part of the shared repository. The commander's topsight is based on ground-truth at the moment of the briefing; the commander can then communicate intent on live data. 
CPOF users at any level can assemble workspaces out of smaller tool-and-appliance primitives, allowing members of a collaborating group to organize their workflows according to their needs, without affecting or disrupting the views of other users. CPOF's Tool-and-appliance primitives are designed to let users create quick, throw-away mini-applications to meet their needs in-situ, supporting on-the-fly uses of the software that no developer or designer could have anticipated. The CPOF software is based on the CoMotion platform, a proprietary commercial framework for building collaborative information visualization systems and domain-independent "decision communities". CoMotion's design principles originated as a research program at Carnegie Mellon University led by Steven Roth, and was subsequently developed at MAYA Viz Ltd and General Dynamics C4 Systems. Operational details CPOF uses a proprietary navigational style database based on U-forms to store, represent, and operate upon a wide variety of types of data. CPOF can receive real-time or near-real-time data from a variety of standard sources—such as GCCS-A, C2PC, and ABCS—and display them using MIL-STD-2525B symbols on maps and charts. Plans, schedules, notes, briefings, and other battle-related information can be composed and shared between warfighters. All maps, charts, pasteboards, and other work products can be marked up with permanent and/or fading ink, and annotated with text or "stickies" to provide further context. A VOIP solution is included, although it can integrate with a pre-existing voice solution. Fault tolerance for low bandwidth, high latency, and/or error-prone TCP/IP networks is supported by CPOF's multi-tiered client-server architecture. It can thus be deployed on systems from a two-hop geosynchronous satellite link to a radio network such as JNN while remaining collaborative. The software is largely Java-based, but is only currently deployed on a Microsoft Windows platform. Deployment CPOF was first deployed operationally in a handful of locations in Baghdad, Iraq by the 1st Cavalry Division of the US Army in 2004, and was subsequently deployed throughout Iraq and Afghanistan and used by coalition forces. Variants of CPOF have participated in United States Joint Forces Command's Urban Resolve 2015, the United States Air Force's Joint Expeditionary Force Experiment 06 and 08, and has been in use by the Marines in Combat Operation Centers since 2007. CPOF became an official US Army program of record in 2006. See also Collaboration Collaborative software Project Manager Battle Command References 2. Jacob Mowry, Lead Trainer External links General Dynamics Mission Systems Command Post of the Future (CPOF) United States Army equipment Military technology Groupware Communication software
17104801
https://en.wikipedia.org/wiki/Whole-life%20cost
Whole-life cost
Whole-life cost is the total cost of ownership over the life of an asset. The concept is also known as life-cycle cost (LCC) or lifetime cost, and is commonly referred to as "cradle to grave" or "womb to tomb" costs. Costs considered include the financial cost, which is relatively simple to calculate, and also the environmental and social costs, which are more difficult to quantify and to assign numerical values to. Typical areas of expenditure which are included in calculating the whole-life cost include planning, design, construction and acquisition, operations, maintenance, renewal and rehabilitation, depreciation and cost of finance, and replacement or disposal. Financial Whole-life cost analysis is often used for option evaluation when procuring new assets and for decision-making to minimize whole-life costs throughout the life of an asset. It is also applied to comparisons of actual costs for similar asset types and as feedback into future design and acquisition decisions. The primary benefit is that costs which occur after an asset has been constructed or acquired, such as maintenance, operation, and disposal, become an important consideration in decision-making. Previously, the focus had been on the up-front capital costs of creation or acquisition, and organisations may have failed to take account of the longer-term costs of an asset. It also allows an analysis of business function interrelationships: low development costs may lead to high maintenance or customer service costs in the future. When making this calculation, the depreciation cost on the capital expense should not be included. Environmental and social The use of environmental costs in a whole-life analysis allows a true comparison of options, especially where both options are quoted as "good" for the environment. For a major project such as the construction of a nuclear power station, it is possible to calculate the environmental impact of making the concrete containment, the water required for refining the copper for the power plants, and all the other components. Only by undertaking such an analysis is it possible to determine whether one solution carries a lower or higher environmental cost than another. Almost all major projects have some social impact. This may be the compulsory relocation of people living on land about to be submerged under a reservoir, or a threat to the livelihood of small traders from the development of a hypermarket nearby. Whole-life cost topics Project appraisal Whole-life costing is a key component in the economic appraisal associated with evaluating asset acquisition proposals. An economic appraisal is generally a broader-based assessment, considering benefits and indirect or intangible costs as well as direct costs. In this way, the whole-life costs and benefits of each option are considered and usually converted using discount rates into net present value costs and benefits. This results in a benefit-cost ratio for each option, usually compared to the "do-nothing" counterfactual. Typically the option with the highest benefit-cost ratio is chosen as the preferred option. Historically, asset investments have been based on expedient design and lowest-cost construction. If such investment has been made without proper analysis of the standard of service required and the maintenance and intervention options available, the initial saving may result in increased expenditure throughout the asset's life. Using whole-life costs avoids decisions being made solely on the short-term costs of design and construction.
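The discounting step described above can be written compactly; the notation below is generic and not drawn from any particular appraisal standard:

\[
\mathrm{PV}(\text{costs}) = \sum_{t=0}^{T} \frac{C_t}{(1+r)^t}, \qquad
\mathrm{PV}(\text{benefits}) = \sum_{t=0}^{T} \frac{B_t}{(1+r)^t}, \qquad
\mathrm{BCR} = \frac{\mathrm{PV}(\text{benefits})}{\mathrm{PV}(\text{costs})},
\]

where C_t and B_t are the costs and benefits arising in year t, r is the discount rate, and T is the appraisal period (typically the asset's life). The option with the highest benefit-cost ratio relative to the "do-nothing" counterfactual is then normally preferred, as noted above.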
Often the longer-term maintenance and operation costs can be a significant proportion of the whole-life cost. Asset management During the life of the asset, decisions about how to maintain and operate the asset need to be taken in context with the effect these activities might have on the residual life of the asset. If by investing 10% more per annum in maintenance costs the asset life can be doubled, this might be a worthwhile investment. Other issues which influence the lifecycle costs of an asset include: site conditions, historic performance of assets or materials, effective monitoring techniques, appropriate intervention strategies. Although the general approach to determining whole-life costs is common to most types of asset, each asset will have specific issues to be considered and the detail of the assessment needs to be tailored to the importance and value of the asset. High cost assets (and asset systems) will likely have more detail, as will critical assets and asset systems. Maintenance expenditure can account for many times the initial cost of the asset. Although an asset may be constructed with a design life of 30 years, in reality it will possibly perform well beyond this design life. For assets like these a balanced view between maintenance strategies and renewal/rehabilitation is required. The appropriateness of the maintenance strategy must be questioned, the point of intervention for renewal must be challenged. The process requires proactive assessment which must be based on the performance expected of the asset, the consequences and probabilities of failures occurring, and the level of expenditure in maintenance to keep the service available and to avert disaster. IT industry usage Whole-life cost is often referred to as "total cost of ownership (TCO)" when applied to IT hardware and software acquisitions. Use of the term "TCO" appears to have been popularised by Gartner Group in 1987 but its roots are considerably older, dating at least to the first quarter of the twentieth century. It has since been developed as a concept with a number of different methodologies and software tools. A TCO assessment ideally offers a final statement reflecting not only the cost of purchase but all aspects in the further use and maintenance of the equipment, device, or system considered. This includes the costs of training support personnel and the users of the system, costs associated with failure or outage (planned and unplanned), diminished performance incidents (i.e. if users are kept waiting), costs of security breaches (in loss of reputation and recovery costs), costs of disaster preparedness and recovery, floor space, electricity, development expenses, testing infrastructure and expenses, quality assurance, boot image control, marginal incremental growth, decommissioning, e-waste handling, and more. When incorporated in any financial benefit analysis (e.g., ROI, IRR, EVA, ROIT, RJE) TCO provides a cost basis for determining the economic value of that investment. Understanding and familiarity with the term TCO has been somewhat facilitated as a result of various comparisons between the TCO of open source and proprietary software. Because the software cost of open source software is often zero, TCO has been used as a means to justify the up-front licensing costs of proprietary software. Studies which attempt to establish the TCO and provide comparisons have as a result been the subject of many discussions regarding the accuracy or perceived bias in the comparison. 
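As a worked sketch of the kind of comparison described under Project appraisal above, the following C program computes the discounted whole-life cost of two hypothetical options: one cheaper to build but dearer to maintain, the other the reverse. The figures, the constant annual maintenance model, and the 5% discount rate are illustrative assumptions only, not data from any real appraisal.

#include <math.h>
#include <stdio.h>

/* Present value of an asset's whole-life cost: initial capital cost plus a
   constant annual maintenance cost discounted over the appraisal period. */
static double whole_life_cost(double capital, double annual_maintenance,
                              int years, double discount_rate)
{
    double pv = capital;
    for (int t = 1; t <= years; ++t)
        pv += annual_maintenance / pow(1.0 + discount_rate, t);
    return pv;
}

int main(void)
{
    const double rate = 0.05;  /* assumed 5% discount rate */
    const int years = 30;      /* assumed 30-year appraisal period */

    /* Option A: low capital cost, high maintenance cost. */
    double a = whole_life_cost(1000000.0, 80000.0, years, rate);
    /* Option B: higher capital cost, lower maintenance cost. */
    double b = whole_life_cost(1300000.0, 40000.0, years, rate);

    printf("Option A whole-life cost: %.0f\n", a);  /* about 2.23 million */
    printf("Option B whole-life cost: %.0f\n", b);  /* about 1.91 million */
    return 0;
}

With these assumed figures the dearer-to-build option comes out roughly 14% cheaper over the whole life, which is exactly the kind of result that a comparison of up-front capital costs alone would miss.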
Automobile industry, finances Total cost of ownership is also common in the automobile industry. In this context, the TCO denotes the cost of owning a vehicle from the purchase, through its maintenance, and finally its sale as a used car. Comparative TCO studies between various models help consumers choose a car to fit their needs and budget. TCO can and often does vary dramatically against TCA (total cost of acquisition), although TCO is far more relevant in determining the viability of any capital investment, especially with modern credit markets and financing. TCO also directly relates to a business's total costs across all projects and processes and, thus, its profitability. Some instances of "TCO" appear to refer to "total cost of operation", but this may be a subset of the total cost of ownership if it excludes maintenance and support costs. See also Benefits Realisation Management Infrastructure Asset management Life Cycle Thinking Design life Durability Maintainability Planned obsolescence Repairability Product life Source reduction Throwaway society References Further reading Riggs, James L., (1982), Engineering economics. McGraw-Hill, New York, 2nd edition, 1982. Norris, G. A. (2001): Integrating Life Cycle Cost Analysis and LCA, in: The International Journal of Life Cycle Assessment, Jg. 6, H. 2, p. 118–120. Schaltegger, S. & Burritt, R. (2000): Contemporary Environmental Accounting. Issues, Concepts and Practice. Sheffield: Greenleaf Publ. Kicherer, A.; Schaltegger, S.; Tschochohei, H. & Ferreira Pozo, B.: Eco-Efficiency. Combining Life Cycle Assessment and Life Cycle Costs via Normalization, International Journal of LCA, 2007, Vol 12, No 7, 537–543. External links Whole-life cost forum Whole-life costing for sustainable drainage article: "What is whole life cost analysis?" Role of depreciation Cost Structure and Life Cycle Cost (LCC) for Military Systems – Papers presented at the RTO Studies, Analysis and Simulation Panel (SAS) Symposium held in Paris, France, 24–25 October 2001 Brand management Costs Infrastructure investment Product management
51094126
https://en.wikipedia.org/wiki/NanoSat%20MO%20Framework
NanoSat MO Framework
The NanoSat MO Framework (NMF) is a software framework for nanosatellites based on CCSDS Mission Operations services. It facilitates not only the monitoring and control of the nanosatellite software applications, but also the interaction with the nanosatellite platform. This is achieved by using the latest CCSDS standards for monitoring and control, and by exposing services for common peripherals among nanosatellite platforms. Furthermore, it is capable of managing the software on-board by exposing a set of services for software management. In simple terms, it introduces the concept of apps in space that can be installed, and then simply started and stopped from ground. Apps can retrieve data from the nanosatellite platform through a set of well-defined Platform services. Additionally, it includes CCSDS standardized services for monitoring and control of apps. An NMF App can be easily developed, distributed, and deployed on a spacecraft. There is a Software Development Kit (SDK) in order to facilitate the development of software based on the NanoSat MO Framework. This SDK allows quick development of software that is capable of running on ground and/or in space. The reference implementation of the NanoSat MO Framework will be used in ESA's OPS-SAT mission. Architecture Specifications The NanoSat MO Framework is built upon the CCSDS Mission Operations services Architecture and therefore it inherits its properties such as being transport-agnostic, multi-domain, and programming language independent. Additionally, it is independent from any specific nanosatellite platform. The software framework includes 5 sets of MO services. The first 3 are Standardized by the CCSDS and the other 2 are bespoke interfaces: COM services Common services Monitor and Control services Platform services Software Management services The NanoSat MO Framework is split in two segments. First, the “Ground Segment” just like in any traditional spacecraft system. Second, the “NanoSat Segment” which is the equivalent of the space segment but because the target of the framework are nanosatellites, it contains a more specialized name. An NMF Composite is a software component that consists of interconnected services specialized for a certain purpose and to be deployed on the NanoSat segment or Ground segment. The NMF Composites are based on SOA’s service composability design principle that encourages reusing existing services and combine them together to build an advanced solution. The naming convention for the NMF Composites is: <Segment> MO <Purpose> The defined set of NMF Composites are: NanoSat MO Monolithic NanoSat MO Supervisor NanoSat MO Connector Ground MO Adapter Ground MO Proxy The objective of the NMF Composites is to provide prebuilt components that allow quick development of new software solutions that are interoperable in end-to-end scenarios. The NanoSat MO Framework defines an NMF App as an on-board software application based on the NanoSat MO Framework. An NMF App can be developed by integrating the NanoSat MO Connector component into the software application. NMF Apps are expected to be started, monitored, stopped, and/or killed by the NanoSat MO Supervisor component. Reference Implementation in Java The reference implementation provides a concrete implementation of the specifications of the NanoSat MO Framework in the Java programming language. It was used to discover problems, errors and ambiguities in the interfaces. The implementation is mature and the first version is available online. 
This reference implementation also serves as the basis for the tools of the Software Development Kit which can be used by other developers. The reference implementation in Java is currently maintained by the European Space Agency and it is available online for free (on GitHub) under an open-source license. This license allows anyone to reuse the software for the nanosatellite mission without any major restrictions. NMF SDK The NanoSat MO Framework Software Development Kit (NMF SDK) is a set of development tools and software source code that facilitate the creation of applications with the NanoSat MO Framework. It is composed of: Demos for NMF Ground software development Demos of NMF Apps Consumer Test Tool (CTT) NMF Package Assembler NMF Playground (with a satellite simulator) Documentation The NMF SDK is the starting point for a software developer willing to develop applications with the NMF. NMF Missions An NMF Mission is a concrete implementation of the NanoSat MO Framework for a specific mission. The NMF Mission development includes activities such as implementing the Platform services and the NanoSat MO Supervisor for the specific platform. If a custom or tailored transport is used for the mission, then the transport binding must be implemented and additionally, integrated with the Ground MO Proxy for protocol bridging. The following NMF Mission implementations were implemented: Software Simulator, and OPS-SAT Software Simulator The Software Simulator was developed to be part of the NMF SDK in order to provide simulated data towards the NMF Apps during the development and testing phases. OPS-SAT An implementation for ESA's OPS-SAT mission was developed in order to validate the software framework in-flight. OPS-SAT is a CubeSat built by the European Space Agency (ESA) and launched in December 2019, and it is intended to demonstrate the improvements in mission control capabilities that will arise when satellites can fly more powerful on-board computers. For example, OPS-SAT experimenters can use the NMF SDK for quick development of software capable of running on ground and/or in space. The NanoSat MO Framework apps are able to publish telemetry, receive telecommands or access the GPS device on OPS-SAT. References External links Consultative Committee for Space Data Systems (CCSDS) at http://www.ccsds.org Spaceflight technology European Space Agency Free software programmed in Java (programming language) Java development tools
12791204
https://en.wikipedia.org/wiki/ISO/IEC%2027005
ISO/IEC 27005
ISO/IEC 27005 "Information technology — Security techniques — Information security risk management" is an international standard published by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) providing good practice guidance on managing risks to information. It is a core part of the ISO/IEC 27000-series of standards, commonly known as ISO27k. The standard offers advice on systematically identifying, assessing, evaluating and treating information security risks - processes at the very heart of an ISO27k Information Security Management System (ISMS). It aims to ensure that organizations design, implement, manage, monitor and maintain their information security controls and other arrangements rationally, according to their information security risks. The current third edition of ISO/IEC 27005 was published in 2018. A fourth edition is being drafted and is due to be published at the end of 2022. Overview ISO/IEC 27005 does not specify or recommend specific risk management methods in detail. Instead it discusses the process in more general/overall terms, drawing on the generic risk management method described by ISO 31000 i.e.: Identify and assess the risks; Decide what to do about the risks (how to 'treat' them) ... and do it; Monitor the risks, risk treatments etc., identifying and responding appropriately to significant changes, issues/concerns or opportunities for improvement; Keep stakeholders (principally the organization's management) informed throughout the process. Within that broad framework, organizations are encouraged to select/develop and use whichever information risk management methods, strategies and/or approaches best suit their particular needs - for example: Identifying the possibility of various incidents, situations or scenarios that would compromise or harm the confidentiality, integrity and/or availability of information; Assessing threats to, vulnerabilities within and business impacts potentially arising from incidents involving IT systems and networks, plus manual information processing, information on paper or expressed in words and pictures, plus intangible information such as knowledge, intellectual property etc.; Considering factors that are wholly within the organization's control, entirely outside its control, or partially controllable; Determining the absolute or relative values of various forms, types or categories of information to the organization, in particular information and information processing that is critical to the achievement of important business objectives; Sizing-up information risks using quantitative or qualitative/comparative methods to estimate/determine the probability/likelihood of various types of incident and the organizational impacts if they were to occur; Considering and managing information risks in relation to other kinds (e.g. strategic, commercial/market, product, IT, health and safety, and legal/regulatory compliance risks); Applying/adapting risk management methods and approaches already used by the organization, adopting good practices, or developing new/hybrid approaches; Deciding whether to avoid the risks (typically by not starting or pulling out of risky activities), share them with third parties (e.g. 
through cyber-insurance or contractual clauses), mitigate them using information security controls, or retain/accept them, applying risk appetite/tolerance criteria; Prioritizing according to the significance or nature of the risks, and the cost-effectiveness or other implications of the risk treatments under consideration, planning to treat them accordingly, allocating resources etc.; Mitigating information risks by reducing their probability and/or impact in various ways e.g. selecting automated, manual, physical or administrative controls that are preventive, detective or corrective; Dealing with uncertainties, including those within the risk management process itself (e.g. the occurrence of unanticipated incidents, unfortunate coincidences, errors of judgment and partial or complete failure of controls); Gaining assurance through testing, assessment, evaluation, reviews, audits etc. that the chosen risk treatments are appropriate and remain sufficiently effective in practice; Complying with relevant requirements or obligations that are imposed on, or voluntarily accepted by, the organization through various laws, regulations, contracts, agreements, standards, codes etc. (e.g. privacy laws, PCI-DSS, ethical and environmental considerations); Learning from experience (including incidents experienced by the organization plus near-misses, and those affecting comparable organizations) and continuously improving. Objectives The ISO/IEC 27000-series of standards are applicable to all types and sizes of organization - a very diverse group, hence it would not be appropriate to mandate specific approaches, methods, risks or controls for them all. Instead, the standards provide general guidance under the umbrella of a management system. Managers are encouraged to follow structured methods that are relevant to and appropriate for their organization's particular situation, rationally and systematically dealing with their information risks. Identifying and bringing information risks under management control helps ensure that they are treated appropriately, in a way that responds to changes and takes advantage of improvement opportunities leading over time to greater maturity and effectiveness of the ISMS. Structure and content of the standard ISO/IEC 27005:2018 has the conventional structure common to other ISO/IEC standards, with the following main sections: Background; Overview of the information security risk management process; Context establishment; Information security risk assessment; Information security risk treatment; Information security risk acceptance; Information security risk communication and consultation; and Information security risk monitoring and review; and six appendices: Defining the scope and boundaries of the information security risk management process; Identification and valuation of assets and impact assessment; Examples of typical threats; Vulnerabilities and methods for vulnerability assessment; Information security risk assessment approaches; and Constraints for risk modification. References Information assurance standards 27005
59543039
https://en.wikipedia.org/wiki/Amyntor%20%28son%20of%20Ormenus%29
Amyntor (son of Ormenus)
In Greek mythology, Amyntor (Ancient Greek: Ἀμύντωρ, translit. Amýntor, lit. 'defender') was the son of Ormenus, and a king of Eleon or Ormenium. Amyntor's son Phoenix, at his mother's urging, had sex with his father's concubine, Clytia or Phthia. Amyntor, discovering this, called upon the Erinyes to curse him with childlessness. In a later version of the story, Phoenix was falsely accused by Amyntor's mistress and was blinded by his father, but Chiron restored his sight. Amyntor was also the father of a son, Crantor, and a daughter, Astydamia. When Amyntor lost a war with Achilles' father Peleus, king of Phthia, Amyntor gave Crantor to Peleus as a pledge of peace. Strabo reports a genealogy for Amyntor which made him the grandson of Cercaphus, the son of Aeolus, and the brother of Euaemon, the father of Eurypylus. When Amyntor refused Heracles permission to pass through his kingdom, Heracles killed Amyntor and fathered a son, Ctesippus, by Astydamia. During the Trojan War, Odysseus received a helmet that had originally belonged to Amyntor. Mythology According to the Iliad, Amyntor, the son of Ormenus, was a king in Hellas, and the father of Phoenix, who became a tutor of Achilles, whom he accompanied to the Trojan War. In a speech addressed to Achilles, Phoenix tells of the conflict between himself and his father. When Amyntor forsook his wife, Phoenix's mother, for a concubine, Phoenix, at the urging of his jealous mother, had sex with Amyntor's concubine. To punish this crime Amyntor called upon the Erinyes to curse Phoenix with childlessness. Outraged, Phoenix intended to kill Amyntor, but was finally dissuaded. Instead, fleeing through Hellas, Phoenix went to Peleus in Phthia, where he became king of the Dolopians. Also according to the Iliad, the thief Autolycus broke into Amyntor's house in Eleon and stole a helmet, which Meriones gave to Odysseus during the Trojan War. The mythographer Apollodorus gives a different version of Phoenix's story, probably drawn from a lost play by the tragedian Euripides. In this account Phoenix was falsely accused of having sex with Amyntor's concubine Phthia, and was blinded by Amyntor. Peleus brought Phoenix to the centaur Chiron, who restored his sight, after which Peleus made him king of the Dolopians. According to Apollodorus, Amyntor was a king of Ormenium, and one day when Heracles wished to pass through his land, Amyntor took up arms and opposed him, and was killed by Heracles, who then fathered a son, Ctesippus, by Amyntor's daughter Astydamia. Brief references to Amyntor are found in the poems of the third-century BC poets Callimachus and Lycophron. Callimachus mentions the sons of Ormenus inviting Erysichthon to games associated with the cult of Athena at Itone in Thessaly, while Lycophron refers to Amyntor blinding Phoenix. According to Ovid, in his Metamorphoses, Amyntor had a son Crantor, whom he gave to Peleus when he sued for peace, and who died fighting alongside Peleus in the Centauromachy, the battle between the Lapiths and the Centaurs at the wedding feast of Pirithous. Strabo reports that, according to the Greek grammarian Demetrius of Scepsis, Amyntor's father Ormenus was the eponymous founder of the city of Ormenium (which Strabo identifies with a village called Orminium which he located at the foot of Mount Pelion, near the Pegasitic Gulf).
According to this account, Ormenus was the son of Cercaphus, the son of Aeolus; Ormenus had two sons, Amyntor and Euaemon; and Amyntor had a son Phoenix, while Euaemon had a son Eurypylus, who succeeded to the throne because Phoenix had fled to Peleus in Phthia. Scholia name Phoenix's mother as either Cleobule or Hippodameia, and the concubine as either Clytia or Phthia. Notes References Apollodorus, Apollodorus, The Library, with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1921. Online version at the Perseus Digital Library. Callimachus, Callimachus and Lycophron with an English translation by A. W. Mair; Aratus, with an English translation by G. R. Mair, London: W. Heinemann, New York: G. P. Putnam 1921. Internet Archive. Diodorus Siculus, Diodorus Siculus: The Library of History. Translated by C. H. Oldfather. Twelve volumes. Loeb Classical Library. Cambridge, Massachusetts: Harvard University Press; London: William Heinemann, Ltd. 1989. Online version by Bill Thayer Gantz, Timothy, Early Greek Myth: A Guide to Literary and Artistic Sources, Johns Hopkins University Press, 1996, Two volumes: (Vol. 1), (Vol. 2). Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library. Leaf, Walter, The Iliad: Edited, with Apparatus Criticus, Prolegomena, Notes, and Appendices, Walter Leaf, Vol. I, Books 1-12, Second edition, Macmillan and Company, limited, 1900. Internet Archive Lycophron, Alexandra (or Cassandra) in Callimachus and Lycophron with an English translation by A. W. Mair; Aratus, with an English translation by G. R. Mair, London: W. Heinemann, New York: G. P. Putnam 1921. Internet Archive. Ovid, Metamorphoses, Brookes More. Boston. Cornhill Publishing Co. 1922. Online version at the Perseus Digital Library. Pindar, Odes, Diane Arnson Svarlien. 1990. Online version at the Perseus Digital Library. Smith, William; Dictionary of Greek and Roman Biography and Mythology, London (1873). Online version at the Perseus Digital Library Strabo, Geography, translated by Horace Leonard Jones; Cambridge, Massachusetts: Harvard University Press; London: William Heinemann, Ltd. (1924). Online version at the Perseus Digital Library, Books 6–14 Tripp, Edward, Crowell's Handbook of Classical Mythology, Thomas Y. Crowell Co; First edition (June 1970). Tzetzes, John, Allegories of the Iliad translated by Goldwyn, Adam J. and Kokkini, Dimitra. Dumbarton Oaks Medieval Library, Harvard University Press, 2015. Kings in Greek mythology Characters in Greek mythology
1147236
https://en.wikipedia.org/wiki/Chris%20Wallace%20%28computer%20scientist%29
Chris Wallace (computer scientist)
Christopher Stewart Wallace (26 October 1933 – 7 August 2004) was an Australian computer scientist and physicist. Wallace is notable for having devised: The minimum message length principle — an information-theoretic principle in statistics, econometrics, machine learning, inductive inference and knowledge discovery which can be seen both as a mathematical formalisation of Occam's Razor and as an invariant Bayesian method of model selection and point estimation, The Wallace tree form of binary multiplier (1964), a variety of random number generators, a theory in physics and philosophy that entropy is not the arrow of time, a refrigeration system (from the 1950s, whose design is still in use in 2010), hardware for detecting and counting cosmic rays, design of computer operating systems, the notion of universality probability in mathematical logic, and a vast range of other works - see, e.g., and its Foreword re C. S. Wallace , pp 523-560. He was appointed Foundation Chair of Information Science at Monash University in 1968 at the age of 34 (before the Department was re-named Computer Science), and Professor Emeritus in 1996. Wallace was a fellow of the Australian Computer Society and in 1995 he was appointed a fellow of the ACM "For research in a number of areas in Computer Science including fast multiplication algorithm, minimum message length principle and its applications, random number generation, computer architecture, numerical solution of ODE's, and contribution to Australian Computer Science." Wallace received his PhD (in Physics) from the University of Sydney in 1959. He was married to Judy Ogilvie, the first secretary and programme librarian of SILLIAC, which was launched on the 12 of September 1956 at the University of Sydney and which was one of Australia's first computers. He also engineered one of the world's first Local Area Networks in the mid-1960s. References External links Tribute to IT pioneer Chris Wallace — 13 October 2004 Remembering Emeritus Professor Chris Wallace (Information Technology), 2008 Innovative studios honour Monash pioneer — 2 November 2011 Christopher S. Wallace publications, and searchable publications database Wallace, C.S. (posthumous, 2005), Statistical and Inductive Inference by Minimum Message Length, Springer (Series: Information Science and Statistics), 2005, XVI, 432 pp., 22 illus., Hardcover, . (Links to chapter headings, table of contents and sample pages.) (and here). (As far as we know, this cites and includes references to every paper which Chris Wallace ever wrote [and every thesis he ever supervised].) Chris Wallace Award for Outstanding Research Contribution — established by CORE (The Computing Research and Education Association of Australasia) - see also The Chris Wallace Award for Outstanding Research (for 2015) and CORE brief Chris Wallace bio' 1933 births 2004 deaths Australian computer scientists Australian physicists Fellows of the Association for Computing Machinery Information theorists Monash University faculty Computer science educators University of Sydney alumni Australian statisticians Scientists from Melbourne
26282021
https://en.wikipedia.org/wiki/Florida%20Automatic%20Computer
Florida Automatic Computer
FLAC, the Florida Automatic Computer, was an early digital electronic computer built for the United States Air Force at Patrick Air Force Base (PAFB) in Brevard County, Florida, to perform missile data reduction. The computer began service in 1953. The system's architecture resembled that of many machines of the period that used the von Neumann architecture, and its design was most closely related to SEAC. It was operated by RCA's Data Reduction Group, a subcontractor to Pan American Airways. Three FLACs were ultimately built, with two upgraded FLAC systems (dubbed "FLAC II") entering service in the fall of 1956. FLAC computations supported the flight tests of early ballistic missiles and air-breathing cruise missiles such as the Redstone, Juno, Snark, Matador, Bomarc, Navaho, Atlas, and Thor. History Design of the computer began in December 1950 at PAFB's Atlantic Missile Range. The Air Force civilian engineering team assembled to design and build the computer consisted of seven key members: Thomas G. Holmes, Charlie West, John MacNeill, Jim Bellinger, Steve Batchelor, Bruce Smith and Harlan Manweiler. Thomas G. Holmes was responsible for the overall logical design of the computer, ensuring all of the components worked together. He determined how to interconnect the modules to provide the control and numeric functions of the computer. Charlie West was the director of the project. John MacNeill and Jim Bellinger were the mechanical engineers responsible for designing all of the system mechanisms. Bellinger also designed the input-output system. His punch design increased existing punch speeds dramatically: existing punch systems operated at around 10 characters per second, but his design was capable of over 400 characters per second. He also developed a reader for the paper tape input system. Steve Batchelor was in charge of purchasing and manufacturing. Bruce Smith was in charge of designing the building-block modules used in the design, and Harlan Manweiler was the comptroller. Specifications Like the ENIAC, EDVAC, and other early computers, FLAC's basic electronic element was the vacuum tube, but it also used crystal diodes for gating. The complete system comprised 1,050 vacuum tubes of 5 different types and 18,000 crystal diodes, but the computer proper used only 420 6AN5 tubes and 15,000 diodes. FLAC's electronic components were built into 7 different kinds of exchangeable plug-in units which could be inserted into or removed from 6 separate cabinets (excluding those for power and air conditioning), permitting faulty units to be replaced quickly to restore the machine to functionality following, for example, the burn-out of a vacuum tube. FLAC consumed 7.5 kW of power (plus another 7.5 kW for the air conditioning needed to cool the computer) and occupied over 65 square feet of floor space (plus additional space for air conditioning). It weighed 1,000 pounds. The basic system cost the USAF at PAFB approximately $500,000. The system was fixed-point binary and used 45 binary digits per word (44 numerical, plus one for the sign). Instruction words were the same length as data words, and the computer had an instruction set of 19 instructions using a three-address format. All numbers were scaled to less than 1 in absolute value. It had built-in automatic decimal-to-binary and binary-to-decimal number conversion that worked at 500 words/second. The system clock ran at 1 MHz. 
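To illustrate the word format described above, here is a minimal sketch of how a real number with |x| < 1 could be packed into a 45-bit word of 44 fractional bits plus a sign bit. The sign-and-magnitude layout is an assumption for illustration; the article does not specify FLAC's actual bit-level encoding.

```cpp
#include <cstdint>
#include <cmath>
#include <cstdio>

// Illustrative only: 1 sign bit + 44 magnitude bits, mirroring the
// "44 numerical digits plus sign" description. Layout is assumed.
constexpr int kFractionBits = 44;

uint64_t encode_word(double x) {
    // Scale the fraction up by 2^44 and keep the sign separately.
    uint64_t sign = (x < 0.0) ? 1ULL : 0ULL;
    uint64_t magnitude = static_cast<uint64_t>(
        std::llround(std::fabs(x) * std::ldexp(1.0, kFractionBits)));
    return (sign << kFractionBits) | (magnitude & ((1ULL << kFractionBits) - 1));
}

double decode_word(uint64_t w) {
    double magnitude = static_cast<double>(w & ((1ULL << kFractionBits) - 1)) /
                       std::ldexp(1.0, kFractionBits);
    return ((w >> kFractionBits) & 1ULL) ? -magnitude : magnitude;
}

int main() {
    double x = -0.3141592653589793;   // must satisfy |x| < 1, as on FLAC
    uint64_t w = encode_word(x);
    std::printf("encoded: %011llx  decoded: %.15f\n",
                static_cast<unsigned long long>(w), decode_word(w));
    return 0;
}
```

The requirement that all values satisfy |x| < 1 is why FLAC programmers had to scale their problems by hand, a common burden on fixed-point machines of the era.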
Addition operations took, on average, 850 microseconds, whereas multiplications and divisions took 3300 microseconds. The system used both a 512-word mercury delay line and magnetic tape for memory, and for data input, the system was equipped to process Flexowriter paper tape (at a rate of 1 word/second), magnetic wire (40 words/second), Raytheon magnetic tape (250 words/second), and paper tape (150 characters/second). The system could output to Flexowriter paper tape (at a rate of 1 word/second), magnetic wire (20 words/second), or paper tape (180 characters/second). Use All programming for FLAC was written in machine language, as the machine lacked any high-level language, assembler or compiler. Typical programs transformed missile tracking data from missile tests, recorded to rolls of seven-hole Flexowriter punched paper tape, cartridges of magnetic wire, and reels of magnetic tape, into missile trajectory and performance data. During its service life, FLAC was operated by an engineer or technician and one operator for two 8-hour shifts. It had an operational uptime of about 90%. Other features of the computer included insertion of short words, automatic truncation, automatic zero suppression, automatic scaling, and printed format control. FLAC I was housed in a three-story wooden building south of the cafeteria at PAFB, while the two FLAC II systems were built in the South Wing of the Tech Lab in summer and fall of 1956. FLAC II abandoned mercury delay-line memory in favor of a faster and more versatile 4096-word magnetic core memory. The FLAC machines' service life ended in 1960, whereupon they were replaced by IBM 709 scientific computers. Some of the USAF personnel involved in the construction of FLAC, including Thomas G. Holmes, Charlie West, John MacNeill, Jim Bellinger, Steve Batchelor and Harlan Manweiler, together with Jim Allen, later went on to form Soroban Engineering, Inc. in Melbourne, Florida. See also References 1953 establishments in Florida 1960 disestablishments in Florida 1950s computers Military computers Computer-related introductions in 1953 20th-century history of the United States Air Force History of Brevard County, Florida Science and technology in Florida Military history of Florida
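From the average operation times quoted above, a rough back-of-the-envelope throughput estimate can be made; real programs interleaved memory access and input-output, so actual rates were lower.

```latex
% Rough throughput estimate from the quoted average operation times.
\frac{1}{850\ \mu\text{s}} \approx 1{,}180 \text{ additions per second},
\qquad
\frac{1}{3300\ \mu\text{s}} \approx 300 \text{ multiplications or divisions per second}
```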
4329299
https://en.wikipedia.org/wiki/Maharashtra%20Navnirman%20Sena
Maharashtra Navnirman Sena
The Maharashtra Navnirman Sena (ISO: Mahārāṣṭra Navanirmāṇa Sēnā) (translation: Maharashtra Reformation Army; MNS) is a regionalist, far-right Indian political party based in the state of Maharashtra that operates on the ideology of "Marathi Manus". It was founded on 9 March 2006 in Mumbai by Raj Thackeray after he left the Shiv Sena party due to differences with his cousin Uddhav Thackeray, who is the 19th Chief Minister of Maharashtra, and due to his sidelining by the Shiv Sena in major decisions such as the distribution of election tickets. MNS won 13 assembly seats (out of 288) in the 2009 assembly elections, which was the first Maharashtra Legislative Assembly election that the party contested. In the most recent Maharashtra Legislative Assembly elections, held in 2019, the MNS won only one seat. In January 2020, the MNS unveiled a new flag; however, the symbol on the flag was not used for elections. Foundation The party was founded by Raj Thackeray, nephew of the late Shiv Sena leader Bal Thackeray and grandson of Prabodhankar Thackeray. Raj Thackeray resigned from his uncle's party in January 2006 and announced his intention to start a new political party. The reason he gave for breaking away from the Shiv Sena was that the latter was "run by petty clerks", because of which it had "fallen from its former glory". Thackeray also declared his aim of building political awareness of the state's development-related issues and giving them centre stage in national politics. At the time of the party's foundation, Raj Thackeray stated that he did not want hostilities with his uncle, who "was, is and always will be (his) mentor". Although the MNS is a break-away group from the Shiv Sena, the party is still based on Marathi and Bhumiputra ideologies. When unveiling the party at an assembly at Shivaji Park, he said that everyone was anxious to see what would happen to Hindutva. He also said, "I shall elaborate on the party's stance on issues like Sons of Soil and Marathi, its agenda for development of Maharashtra and the significance of the party flag colours at the March 19 public meeting." Raj Thackeray considers himself an Indian nationalist. The party also recognises secularism as one of its core tenets. Maharashtra Development Blueprint In September 2014, the MNS unveiled the first look at its "Maharashtra's Development Blueprint" with the slogan "Yes, it's possible". The blueprint discusses the party's stance and key ideas on infrastructure, governance, quality of life, growth opportunities and Marathi pride. Controversies 2008 violence against North Indians in Maharashtra In February 2008, some MNS activists clashed with Samajwadi Party (SP) workers in Mumbai when SP supporters attended a rally at Shivaji Park, Dadar, Mumbai, a stronghold of the MNS, where SP leader Abu Asim Azmi made a fiery speech. After the clashes, 73 MNS activists and 19 SP workers were arrested by Mumbai Police for violence. On 6 February 2008, about 200 Congress and NCP party workers reportedly quit their parties and joined the Maharashtra Navnirman Sena to support the MNS's pro-Marathi agenda. A petition was filed in the Patna civil court on 8 February against Thackeray for his alleged remarks over Chhath, the most popular festival of Bihar and Eastern Uttar Pradesh (Purvanchal). Thackeray maintained that he was not against Chhath Puja, but against the "show of arrogance" and "politicization of Chhath Puja" displayed by some people from Bihar and Eastern Uttar Pradesh on the occasion. 
On 10 February 2008, MNS workers attacked vendors and shopkeepers from North India in various parts of Maharashtra, and destroyed government property to vent their anger against the reported move to arrest Raj Thackeray. Nashik police detained 26 MNS workers for violence. In February 2008, Raj Thackeray's speech on the issue of migration into Mumbai from other parts of India created a well-publicised controversy. MNS supporters clashed with activists of the Samajwadi Party, leading to street violence. Thackeray also criticised noted film actor turned politician Amitabh Bachchan, a native of Allahabad in Uttar Pradesh, for directing his business interests towards Uttar Pradesh because of Amar Singh. Bachchan came into fame and fortune in Mumbai's film industry, Bollywood. On 8 September 2008, Infosys Technologies announced that 3,000 employee positions had been shifted from Pune due to construction delays caused earlier that year by MNS attacks on North Indian construction workers in Maharashtra. On 15 October 2008, Thackeray threatened to shut down Jet Airways operations in Maharashtra if they did not rehire probationary employees who had been laid off in a cost-cutting move forced by the economic downturn. In October 2008, MNS activists beat up North Indian candidates appearing for the all-India Railway Recruitment Board entrance exam for the Western region in Mumbai. One Bihari died in a train accident, and rioting ensued following coverage in the Hindi media, with support from the NCP and Congress. In retaliation for the MNS' attack on Biharis and North Indians in general, the Bharatiya Bhojpuri Sangh attacked the residence of a Marathi official of Tata Motors in Jamshedpur. Following the uproar in the Indian parliament, and amid claims that there was no pressure to arrest the MNS chief, Raj Thackeray was arrested in the early hours of 21 October. He was produced before a court the same day and returned the next day, after spending the night in jail. Following the arrest, however, MNS party activists took out their anger on parts of Mumbai city and the region at large. The arrest resulted in applause, fear and calls for a ban on the MNS. The Shiv Sena, however, maintained a cool response, although senior party leader Manohar Joshi said they were close to supporting the MNS in their agitation against the non-Marathi candidates for the railway board exam. Clash with Shiv Sena On 10 October 2006, clashes erupted between supporters of the Shiv Sena and the Maharashtra Navnirman Sena headed by Raj Thackeray. It was alleged that MNS workers had torn posters bearing photographs of Shiv Sena supremo Bal Thackeray near the SIES college in Mumbai. Later, in retaliation, Shiv Sena workers allegedly brought down hoardings bearing Raj Thackeray's photo near the Sena Bhavan at Dadar. As news of the incident spread, groups gathered near the Sena Bhavan and began pelting stones at each other. A policeman and many supporters of both parties were injured. To restore order, the police fired tear gas shells at the mob. Order was eventually restored following police action and the appearance of Uddhav Thackeray and his cousin Raj Thackeray at the venue. Uddhav appealed to Sena workers to go back home. He said: "The police will take necessary action. This is happening because many people are joining us from MNS. The defections have started and that is why they are resorting to such actions". 
The Shiv Sena's division chief, Milind Vaidya, said that the party had lodged a complaint with the local police against an MNS worker who was involved in the incident. MNS general secretary Pravin Darekar, however, pinned the cause down to local elections in the SIES college. He alleged that the Sena was concerned about losing its hold over the colleges and was therefore trying to colour the issue, adding that the Sena's allegations had no merit. Raj Thackeray asserted that the MNS could not have vandalised the pictures, since he and his members revere Bal Thackeray. A parliamentary committee was set up to examine breach-of-privilege notices from some MPs for remarks made by Bal Thackeray against Uttar Bharatiyas (North Indians). Reacting to this, MNS chief Raj Thackeray said he would not allow any politician from UP and Bihar to enter Mumbai if the parliamentary panel insisted on summoning Bal. Bal Thackeray countered this by terming his nephew Raj a backstabber and reacted to the MNS chief with a "big no thank you." Shiv Sena (SS) and MNS workers also clashed at Anand Nagar in Oshiwara over the display of Navratri posters during the holiday season. SS corporator Rajul Patel said "The MNS activists had put up huge hoardings and were demanding money from people to remove them. People complained to us and we objected. This led to a scuffle." MNS Vibhag Pramukh (Division Leader) Manish Dhuri retorted that "the Sainiks are jealous of our popularity. On Sunday afternoon, a mob of Shiv Sainiks came to the area and started pulling down posters that were put up by us. We objected to this. Unfortunately, one MNS activist sustained severe injuries." Denunciation of MLA Abu Azmi On 9 November 2009, Abu Azmi of the Samajwadi Party was denounced by MLAs of the MNS and prevented from taking his oath in Hindi rather than in the state's official language, Marathi. As a result of this incident, the speaker of the Maharashtra Legislative Assembly suspended the 4 MNS MLAs involved in the skirmish for a period of four years. They were also barred from entering Mumbai and Nagpur whenever the assembly met in the two cities. The MLAs suspended were Ram Kadam, Ramesh Wanjale, Shishir Shinde and Vasant Gite. The suspension was later revoked in July 2010. Growth in potency In October 2008, Jet Airways laid off almost 1,000 employees. In the frenzy for reinstatement that followed, numerous political parties took up the probationers' cause. First the MNS and the SS came in, then the established national parties, the Congress and the BJP. Even the CPI(M) rallied in support of the laid-off Kolkata employees. One day after the lay-off, the retrenched former staff flocked to the MNS office, even though the SS's labour arm, the Bharatiya Kamgar Sena, generally dominates the aviation unions. The MNS then led more than 300 former employees to Jet's office in Marol. MNS general secretary Nitin Sardesai said, "We met Jet officials today while a lot of (cabin) crew and MNS workers were protesting outside. While we were talking, Jet chairman Naresh Goyal telephoned Raj Thackeray... He requested us to end the protest and offered to meet Raj in a couple of days. We had an one-point agenda that those laid off should be taken back." Within two days, the MNS march and support got the staff re-hired. The media widely pronounced Raj as having won the game of one-upmanship with the SS, whose mantle of aggressive street politics was seen as having been usurped. 
This was a big boost for the MNS' newly formed trade union, the Maharashtra Navnirman Kamgar Sena, which has been trying to cut into the SS' influence in the aviation, hotel and entertainment sectors. Elected representatives The MNS won 13 assembly seats (out of 288) in the 2009 Maharashtra assembly elections. These included 6 in Mumbai, 2 in Thane, 3 in Nashik, 1 in Pune and 1 in Kannad (Aurangabad), and the party finished second in more than 24 constituencies. This result (4.5% of seats) made the MNS the fourth-largest grouping in the Maharashtra assembly, after the Congress-NCP (144 seats), the BJP-Shiv Sena (90 seats) and the Third Front (14 seats). In the 2014 Assembly elections, the MNS was trounced. It was only able to win 1 seat across the state, losing all 10 seats it had held, including all 6 it had held in Mumbai, among them Mahim, a Shiv Sena stronghold, which it had previously won from the Sena. Its candidates forfeited their deposits in a record 203 of the 218 seats it contested, out of the 288 seats in the state. In the Brihanmumbai Municipal Corporation (BMC) elections held in 2017, the MNS's tally was reduced to 7 seats. In October 2017, 6 councillors defected to the Shiv Sena, taking its representation down to 1 seat. Political criticism Following an attack on North Indians who had turned up for the RRB (Railway Recruitment Board) railway exam in Mumbai, numerous politicians, mainly from the then ruling UPA central government, harshly criticised Raj Thackeray and the MNS. Three UPA ministers demanded tough action, including a call for a ban against the party. Railway Minister Lalu Prasad Yadav demanded a ban on the MNS, and Steel Minister Ram Vilas Paswan, saying its chief was a "mental case", added that he would raise the issue in the next cabinet meeting and wondered why no action was being, or had been, taken against the MNS despite repeated such violent incidents. He said: "I strongly condemn the incident. There should be strong action against that party... MNS should be banned. [The] Thackeray family has become a chronic problem for Maharashtra and Raj Thackeray, in particular, has become a mental case." The Minister for Food Processing Industries and Congress leader Subodh Kant Sahay demanded that the Congress-NCP coalition government in Maharashtra treat those responsible for the attacks as criminals. He said, "I have spoken to Maharashtra Chief Minister Vilasrao Deshmukh and asked him on the goondaism that is going on in the state. As far as the government's action till date is concerned, it has been soft on them. It should take action as too much has already happened there. They are not workers. They are looters. Organisations like MNS, Bajrang Dal, VHP and RSS should be banned." On the first working day following the incident, uproarious scenes were seen in the national parliament. Numerous members of Parliament condemned the attacks. They also indirectly criticised railway minister Lalu Prasad Yadav while noting that even in their regions, the maximum recruitments being made were those of people from Bihar and not from the state where the recruitment drives were held, adding some credence to the MNS' drive. Speaking first on the issue, RJD leader Devendra Prasad Yadav demanded that the Centre take action in the state under Article 355. He noted that despite the attacks, the Maharashtra chief minister had maintained silence on the issue, adding that such actions threatened the unity and integrity of the country. 
Other MPs also demanded the invocation of Article 355 in the light of the attacks. Shahnawaz Hussain of the BJP made such demands in asking if people from Bihar and Uttar Pradesh needed a permit to travel to other parts of the country. The CPI(M)'s Mohammed Salim said that such incidents threatened the country's integrity and sent a wrong signal to the rest of the country. Anant Geete of the Shiv Sena, however, tried to give the other side of the story by noting the 4.2 million educated and unemployed youth in Maharashtra. The CPI(M) strongly condemned the attacks, terming them a "blatant" assault on the Constitution, and demanded the immediate arrest of party chief Raj Thackeray, adding that any leniency shown to "divisive forces" would have far-reaching consequences. The CPI(M) Politburo said the attack on the Constitution reflected poorly on the Maharashtra government, which is duty-bound to protect citizens and take stringent action against the perpetrators of such crimes. "That it has failed to do so and in fact showing leniency to the leader of the outfit shows the utter bankruptcy of the politics of Congress and its coalition partner." The CPI also said such attacks should not be tolerated and Thackeray and his supporters must be "immediately arrested and prosecuted". Maharashtra Chief Minister Vilasrao Deshmukh said his government was responsible for the failure to prevent the attacks and ordered a probe into the incident, which would also inquire into why the job advertisements were not given in Marathi newspapers. He said: "What has happened is not good. Such incidents take place because of loopholes in the law. One can't hold only the Home Ministry responsible for it, it is (entire) Government's responsibility. Such incidents are affecting the image of the state and I have instructed the DGP to take stern action." On Raj Thackeray's accusation that job advertisements were not published in local newspapers to keep out Maharashtrian candidates, he said, "An inquiry would also be conducted about why advertisements about the examination were not given in Marathi newspapers and the number of Marathi candidates invited for the exam." He also assured that such incidents of vandalism would not take place in future. During the 2008 Mumbai terror attacks, NSG commandos bravely defended the city. There were several commandos from North India, whereas the MNS was not seen helping. This led to a great deal of criticism of the MNS since it had vehemently opposed North Indians earlier. In January 2009, artist Pranava Prakash exhibited his painting series "Chal Hat Be Bihari" in Delhi. It showed in pop style the 2008 attacks on North Indians in Maharashtra in the context of xenophobia. Violence and controversies In 2008, the MNS created panic among several shop owners through its diktat on Marathi signboards. In December 2012, MNS corporator Nitin Nikam repeatedly slapped a 65-year-old contractor over an alleged delay in repairing a pipeline that was causing a water shortage. In March 2013, 5 MNS and Shiv Sena MLAs assaulted Assistant Police Inspector Sachin Suryavanshi in the state assembly. In January 2014, a mob of MNS workers attacked toll booths in eight cities across Maharashtra, demanding the closure of toll booths where the construction cost was below ₹2 crore. In 2017, MNS workers forcibly removed the signboards of several Gujarati shops, claiming they belittled the Marathi language. In May 2018, three trans women were brutally attacked by more than 20 MNS workers and sustained severe injuries. 
In June 2018, Kishor Shinde, a former corporator, slapped a multiplex manager repeatedly over high food prices and the ban on food items in multiplex theatres. In September 2018, a mob of MNS workers assaulted a man because of his Facebook post criticising a cartoon drawn by Raj Thackeray. The workers made the man delete his comments and forced him to apologize. MNS leader Avinash Jadhav threatened the public with similar consequences "if such things happened again." In January 2020, on the birth anniversary of Bal Thackeray, his nephew Raj Thackeray changed the party's flag colour to Bhagwa (Bhagwa Dhwaj) and its agenda from Marathi interests to Hindutva, or Hindu nationalism, giving a speech against Indian Muslims about disturbances caused by their prayers. He supported the National Register of Citizens, saying that "only Indians can live in my country and other countries people can't live in India without passport and VISA so what's wrong in this act?". He also stated that Bangladeshis and Pakistanis should go back to their respective countries or they would be thrown back forcefully. See also 2008 All-India Railway Recruitment Board examination attack References External links Official Maharashtra Navnirman Sena Website Official Maharashtra Navnirman Sena student wing Website Election Television Commercials Of Maharashtra Navnirman Sena Political parties in Maharashtra Far-right politics in India Political parties established in 2006 2006 establishments in Maharashtra Political parties in India Conservative parties in India Right-wing populism in India Right-wing populist parties Hindutva Regionalist parties in India
898503
https://en.wikipedia.org/wiki/Steam%20%28service%29
Steam (service)
Steam is a video game digital distribution service by Valve. It was launched as a standalone software client in September 2003 as a way for Valve to provide automatic updates for their games, and expanded to include games from third-party publishers. Steam has also expanded into an online web-based and mobile digital storefront. Steam offers digital rights management (DRM), server hosting, video streaming, and social networking services. It also provides the user with installation and automatic updating of games, and community features such as friends lists and groups, cloud storage, and in-game voice and chat functionality. The software provides a freely available application programming interface (API) called Steamworks, which developers can use to integrate many of Steam's functions into their products, including in-game achievements, microtransactions, and support for user-created content through Steam Workshop. Though initially developed for use on Microsoft Windows operating systems, versions for macOS and Linux were later released. Mobile apps were also released for iOS, Android, and Windows Phone in the 2010s. The platform also offers a small selection of other content, including design software, hardware, game soundtracks, anime, and films. The Steam platform is the largest digital distribution platform for PC gaming, holding around 75% of the market share in 2013. By 2017, purchases of games through Steam totaled roughly US$4.3 billion, representing at least 18% of global PC game sales. By 2019, the service had over 34,000 games with over 95 million monthly active users. The success of Steam has led to the development of a line of Steam Machine microconsoles, which include the SteamOS operating system and Steam Controllers, Steam Link devices for local game streaming, and the Steam Deck, a handheld personal computer system tailored for running Steam games. History Valve had entered into a publishing contract with Sierra Studios in 1997 ahead of the 1998 release of Half-Life. The contract had given some intellectual property (IP) rights to Sierra in addition to publishing control. Valve published additional games through Sierra, including expansions for Half-Life and Counter-Strike. Around 1999, as Valve started work on Half-Life 2 and the new Source engine, they became concerned about their contract with Sierra related to the IP rights, and the two companies renegotiated a new contract by 2001. The new contract eliminated Sierra's IP rights and gave Valve rights to digital distribution of its games. Around this time, Valve had problems updating the published games. They could provide downloadable patches, but for multiplayer games, new patches would result in most of the online user base disconnecting for several days until everyone had implemented the patch. Valve decided to create a platform that would update games automatically and implement stronger anti-piracy and anti-cheat measures. Through user polls at the time of its announcement in 2002, Valve also recognized that at least 75% of their users had access to high-speed Internet connections, which would continue to grow with planned broadband expansion in the following years, and recognized that they could deliver game content faster to players than through retail channels. Valve approached several companies, including Microsoft, Yahoo!, and RealNetworks to build a client with these features, but were declined. Steam's development began in 2002, with working names for the platform being "Grid" and "Gazelle". 
It was publicly announced at the Game Developers Conference event on March 22, 2002, and released for beta testing the same day. To demonstrate the ease of integrating Steam with a game, Relic Entertainment created a special version of Impossible Creatures. Valve partnered with several companies, including AT&T, Acer, and GameSpy. The first mod released on the system was Day of Defeat. In 2002, the president of Valve, Gabe Newell, said he was offering mod teams a game engine license and distribution over Steam for . Prior to the announcement of Steam, Valve found that Sierra had been distributing their games in PC cafes which they claimed was against the terms of the contract, and took Sierra and their owners, Vivendi Games, to court. Sierra countersued, asserting that with the announcement of Steam, Valve had been working to undermine the contract to offer a digital storefront for their games, directly competing with Sierra. The case was initially ruled in Valve's favor, allowing them to leave the contract due to the breach and seek other publishing partners for retail copies of its games while continuing their work on Steam. One such company had been Microsoft, but Ed Fries stated that they turned down the offer due to Valve's intent to continue to sell their games over Steam. Between 80,000 and 300,000 players participated in the beta test before Steam's official release on September 12, 2003. The client and website choked under the strain of thousands of users simultaneously attempting to play the game. At the time, Steam's primary function was streamlining the patch process common in online computer games, and was an optional component for all other games. In 2004, the World Opponent Network was shut down and replaced by Steam, with any online features of games that required it ceasing to work unless they converted over to Steam. Half-Life 2 was the first game to require installation of the Steam client to play, even for retail copies. This decision was met with concerns about software ownership, software requirements, and problems with overloaded servers demonstrated previously by the Counter-Strike rollout. During this time users faced problems attempting to play the game. Beginning in 2005, Valve began negotiating contracts with several third-party publishers to release their products, such as Rag Doll Kung Fu and Darwinia, on Steam. Valve announced that Steam had become profitable because of some highly successful Valve games. Although digital distribution could not yet match retail volume, profit margins for Valve and developers were far larger on Steam. Larger publishers, such as id Software, Eidos Interactive, and Capcom, began distributing their games on Steam in 2007. By May of that year, 13 million accounts had been created on the service, and 150 games were for sale on the platform. By 2014, total annual game sales on Steam were estimated at around $1.5 billion. By 2018, the service had over 90 million monthly active users. Service features and functionality Software delivery and maintenance Steam's primary service is to allow its users to download games and other software that they have in their virtual software libraries to their local computers as game cache files (GCFs). Initially, Valve was required to be the publisher for these games since they had sole access to the Steam's database and engine, but with the introduction of the Steamworks software development kit (SDK) in May 2008, anyone could publish to Steam without Valve's direct involvement. 
Prior to 2009, most games released on Steam had traditional anti-piracy measures, including the assignment and distribution of product keys and support for digital rights management software tools such as SecuROM or non-malicious rootkits. With an update to the Steamworks SDK in March 2009, Valve added its "Custom Executable Generation" (CEG) approach into the Steamworks SDK that removed the need for these other measures. The CEG technology creates a unique, encrypted copy of the game's executable files for the given user, which allows them to install it multiple times and on multiple devices, and make backup copies of their software. Once the software is downloaded and installed, the user must then authenticate through Steam to de-encrypt the executable files to play the game. Normally this is done while connected to the Internet following the user's credential validation, but once they have logged into Steam once, a user can instruct Steam to launch in a special offline mode to be able to play their games without a network connection. Developers are not limited to Steam's CEG and may include other forms of DRM (or none at all) and other authentication services than Steam; for example, some games from publisher Ubisoft require the use of their UPlay gaming service, and prior to its shutdown in 2014, some other games required Games for Windows – Live, though many of these games have since transitioned to using the Steamworks CEG approach. In September 2008, Valve added support for Steam Cloud, a service that can automatically store saved game and related custom files on Valve's servers; users can access this data from any machine running the Steam client. Games must use the appropriate features of Steamworks for Steam Cloud to work. Users can disable this feature on a per-game and per-account basis. Cloud saving was expanded in January 2022 for Dynamic Cloud Sync, allowing games developed with this feature to store saved states to Steam Cloud while a game is running rather than waiting until the user quit out of the game; this was added ahead of the portable Steam Deck unit so that users can save from the Deck and then put the unit into a suspended state. In May 2012, the service added the ability for users to manage their game libraries from remote clients, including computers and mobile devices; users can instruct Steam to download and install games they own through this service if their Steam client is currently active and running. Product keys sold through third-party retailers can also be redeemed on Steam. For games that incorporate Steamworks, users can buy redemption codes from other vendors and redeem these in the Steam client to add the title to their libraries. Steam also offers a framework for selling and distributing downloadable content (DLC) for games. In September 2013, Steam introduced the ability to share most games with family members and close friends by authorizing machines to access one's library. Authorized players can install the game locally and play it separately from the owning account. Users can access their saved games and achievements providing the main owner is not playing. When the main player initiates a game while a shared account is using it, the shared account user is allowed a few minutes to either save their progress and close the game or purchase the game for his or her own account. 
Within Family View, introduced in January 2014, parents can adjust settings for their children's tied accounts, limiting the functionality and accessibility to the Steam client and purchased games. In accordance with its acceptable use policy, Valve retains the right to block customers' access to their games and Steam services when Valve's Anti-Cheat (VAC) software determines that the user is cheating in multiplayer games, selling accounts to others, or trading games to exploit regional price differences. Blocking such users initially removed access to his or her other games, leading to some users with high-value accounts losing access because of minor infractions. Valve later changed its policy to be similar to that of Electronic Arts' Origin platform, in which blocked users can still access their games but are heavily restricted, limited to playing in offline mode and unable to participate in Steam Community features. Customers also lose access to their games and Steam account if they refuse to accept changes to Steam's end user license agreements; this last occurred in August 2012. In April 2015, Valve began allowing developers to set bans on players for their games, but enacted and enforced at the Steam level, which allowed them to police their own gaming communities in a customizable manner. Storefront features The Steam client includes a digital storefront called the Steam Store through which users can purchase computer games. Once the game is bought, a software license is permanently attached to the user's Steam account, allowing them to download the software on any compatible device. Game licenses can be given to other accounts under certain conditions. Content is delivered from an international network of servers using a proprietary file transfer protocol. Steam sells its products in US and Canadian dollars, euros, pounds sterling, Brazilian reais, Russian rubles, Indonesian rupiah and Indian rupees depending on the user's location. In December 2010, the client began supporting the WebMoney payment system, which is popular in many European, Middle Eastern, and Asian countries. From April 2016 until December 2017, Steam accepted payments in Bitcoin with transactions handled by BitPay before dropping support for it due to high fluctuation in value and costly service fees. The Steam storefront validates the user's region; the purchase of games may be restricted to specific regions because of release dates, game classification, or agreements with publishers. Since 2010, the Steam Translation Server project offers Steam users to assist with the translation of the Steam client, storefront, and a selected library of Steam games for twenty-eight languages. Steam also allows users to purchase downloadable content for games, and for some specific games such as Team Fortress 2, the ability to purchase in-game inventory items. In February 2015, Steam began to open similar options for in-game item purchases for third-party games. Users of Steam's storefront can also purchase games and other software as gifts to be given to another Steam user. Prior to May 2017, users could purchase these gifts to be held in their profile's inventory until they opted to gift them. However, this feature enabled a gray market around some games, where a user in a country where the price of a game was substantially lower than elsewhere could stockpile giftable copies of games to sell to others, particularly in regions with much higher prices. 
In August 2016, Valve changed its gifting policy to require that VAC- and Game Ban-enabled games be gifted immediately to another Steam user, which also served to combat players that worked around VAC and Game Bans, while in May 2017, Valve expanded this policy to all games. The changes also placed limitations on gifts between users of different countries if there is a large difference in pricing for the game between the two regions. The Steam store also enables users to redeem store product keys to add software to their library. The keys are sold by third-party providers such as Humble Bundle (in which a portion of the sale is given back to the publisher or distributor), distributed as part of a physical release to redeem the game, or given to a user as part of promotions, often used to deliver Kickstarter and other crowdfunding rewards. A grey market exists around Steam keys, where less reputable buyers purchase a large number of Steam keys for a game when it is offered for a low cost, and then resell these keys to users or other third-party sites at a higher price, generating profit for themselves. This caused some of these third-party sites, such as G2A, to be embroiled in this grey market. It is possible for publishers to have Valve track down where specific keys have been used and cancel them, removing the product from users' libraries and leaving the user to seek recourse from the third party they purchased from. Other legitimate storefronts, like Humble Bundle, have set a minimum price that must be spent to obtain Steam keys so as to discourage mass purchases that would enter the grey market. In June 2021, Valve began limiting how frequently Steam users could change their default region to prevent them from purchasing games from outside their home region more cheaply. In 2013, Steam began to accept player reviews of games. Other users can subsequently rate these reviews as helpful, humorous, or otherwise unhelpful, which are then used to highlight the most useful reviews on the game's Steam store page. Steam also aggregates these reviews and enables users to sort products based on this feedback while browsing the store. In May 2016, Steam further broke out these aggregations between all reviews overall and those made more recently in the last 30 days, a change Valve attributed to the way game updates, particularly those in Early Access, can alter users' impressions of a game. To prevent observed abuse of the review system by developers or other third-party agents, Valve modified the review system in September 2016 to discount review scores for a game from users that activated the product through a product key rather than purchasing it directly from the Steam store, though their reviews remain visible. Alongside this, Valve announced that it would end business relations with any developer or publisher that it has found to be abusing the review system. Separately, Valve has taken actions to minimize the effects of review bombs on Steam. In particular, Valve announced in March 2019 that it would mark reviews it believes are "off-topic" as a result of a review bomb and eliminate their contribution to summary review scores; the first games it took such action on were the Borderlands games, after it was announced that Borderlands 3 would be a timed exclusive on the Epic Games Store. 
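As a rough illustration of the aggregation behaviour described above, and not Valve's actual implementation, a summary score can be computed over all reviews and separately over the last 30 days while excluding key-activated reviews from the score but keeping them visible. The field names and scoring rule below are assumptions for the sketch.

```cpp
#include <cstdio>
#include <ctime>
#include <vector>

// Illustrative data model only; not Valve's schema or algorithm.
struct Review {
    std::time_t posted;     // when the review was written
    bool positive;          // thumbs up or down
    bool key_activated;     // product activated via a key rather than bought on Steam
};

struct Summary { int counted = 0; int positive = 0; };

// Aggregate reviews newer than `since` (0 means "all time"),
// skipping key-activated reviews so they do not influence the score.
Summary aggregate(const std::vector<Review>& reviews, std::time_t since) {
    Summary s;
    for (const Review& r : reviews) {
        if (r.key_activated || r.posted < since) continue;
        ++s.counted;
        if (r.positive) ++s.positive;
    }
    return s;
}

int main() {
    const std::time_t now = std::time(nullptr);
    const std::time_t month_ago = now - 30 * 24 * 60 * 60;
    std::vector<Review> reviews = {
        {now - 5 * 24 * 60 * 60,  true,  false},
        {now - 40 * 24 * 60 * 60, false, false},
        {now - 2 * 24 * 60 * 60,  true,  true},   // visible, but excluded from the score
    };
    Summary overall = aggregate(reviews, 0);
    Summary recent  = aggregate(reviews, month_ago);
    std::printf("overall: %d/%d positive, recent: %d/%d positive\n",
                overall.positive, overall.counted, recent.positive, recent.counted);
    return 0;
}
```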
During mid-2011, Valve began to offer free-to-play games, such as Global Agenda, Spiral Knights and Champions Online; this offer was linked to the company's move to make Team Fortress 2 a free-to-play title. Valve included support via Steamworks for microtransactions for in-game items in these games through Steam's purchasing channels, in a similar manner to the in-game store for Team Fortress 2. Later that year, Valve added the ability to trade in-game items and "unopened" game gifts between users. Steam Coupons, introduced in December 2011, provides single-use coupons that give a discount on the cost of items. Steam Coupons can be provided to users by developers and publishers; users can trade these coupons between friends in a similar fashion to gifts and in-game items. Steam Market, a feature introduced in beta in December 2012 that would allow users to sell virtual items to others via Steam Wallet funds, further extended the idea. Valve levies a transaction fee of 15% on such sales and game publishers that use Steam Market pay a transaction fee. For example, Team Fortress 2, the first game supported at the beta phase, incurred both fees. Full support for other games was expected to be available in early 2013. In April 2013, Valve added subscription-based game support to Steam; the first game to use this service was Darkfall Unholy Wars. In October 2012, Steam introduced non-gaming applications, which are sold through the service in the same manner as games. Creativity and productivity applications can access the core functions of the Steamworks API, allowing them to use Steam's simplified installation and updating process, and incorporate features including cloud saving and Steam Workshop. Steam also allows game soundtracks to be purchased to be played via Steam Music or integrated with the user's other media players. Valve adjusted its approach to soundtracks in 2020, no longer requiring them to be offered as DLC, meaning that users can buy soundtracks to games they do not own, and publishers can offer soundtracks to games not on Steam. Valve has also added the ability for publishers to rent and sell digital movies via the service, with most initially being video game documentaries. Following Warner Bros. Entertainment offering the Mad Max films alongside the September 2015 release of the game based on the series, Lionsgate entered into an agreement with Valve to rent over one hundred feature films from its catalog through Steam starting in April 2016, with more films following later. In March 2017, Crunchyroll started offering various anime for purchase or rent through Steam. However, by February 2019, Valve removed video content from its storefront, save for videos directly related to gaming. While they were available, users could also purchase Steam Machine-related hardware. In conjunction with developers and publishers, Valve frequently provides discounted sales on games on a daily and weekly basis, sometimes oriented around a publisher, genre, or holiday theme, and sometimes allows games to be tried for free during the days of these sales. The site normally offers a large selection of games at discount during its annual Summer and Holiday sales, including gamification of these sales to incentivize users to purchase more games. 
While Steam allows developers to offer demo versions of their games at any time, Valve worked with Geoff Keighley in 2019 in conjunction with The Game Awards to hold a week-long Steam Game Festival to feature a large selection of game demos of current and upcoming games, alongside sales for games already released. This event has since been repeated two or three times a year, typically in conjunction with game expositions or award events, and since has been renamed as the Steam Next Fest. Privacy, security and abuse The popularity of Steam has led to the service's being attacked by hackers. An attempt occurred in November 2011, when Valve temporarily closed the community forums, citing potential hacking threats to the service. Days later, Valve reported that the hack had compromised one of its customer databases, potentially allowing the perpetrators to access customer information; including encrypted password and credit card details. At that time, Valve was not aware whether the intruders actually accessed this information or discovered the encryption method, but nevertheless warned users to be alert for fraudulent activity. Valve added Steam Guard functionality to the Steam client in March 2011 to protect against the hijacking of accounts via phishing schemes, one of the largest support problems Valve had at the time. Steam Guard was advertised to take advantage of the identity protection provided by Intel's second-generation Core processors and compatible motherboard hardware, which allows users to lock their account to a specific computer. Once locked, activity by that account on other computers must first be approved by the user on the locked computer. Support APIs for Steam Guard are available to third-party developers through Steamworks. Steam Guard also offers two-factor, risk-based authentication that uses a one-time verification code sent to a verified email address associated with the Steam account; this was later expanded to include two-factor authentication through the Steam mobile application, known as Steam Guard Mobile Authenticator. If Steam Guard is enabled, the verification code is sent each time the account is used from an unknown machine. In 2015, between Steam-based game inventories, trading cards, and other virtual goods attached to a user's account, Valve stated that the potential monetary value had drawn hackers to try to access user accounts for financial benefit, and continue to encourage users to secure accounts with Steam Guard, when trading was introduced in 2011. Valve reported that in December 2015, around 77,000 accounts per month were hijacked, enabling the hijackers to empty out the user's inventory of items through the trading features. To improve security, the company announced that new restrictions would be added in March 2016, under which 15-day holds are placed on traded items unless they activate, and authenticate with Steam Guard Mobile Authenticator. After a Counter-Strike: Global Offensive gambling controversy, Valve stated it is cracking down on third-party websites using Steam inventory trading for Skin gambling in July 2016. ReVuln, a commercial vulnerability research firm, published a paper in October 2012 that said the Steam browser protocol was posing a security risk by enabling malicious exploits through a simple user click on a maliciously crafted steam:// URL in a browser. This was the second serious vulnerability of gaming-related software following a recent problem with Ubisoft's own game distribution platform Uplay. 
German IT platform Heise online recommended strict separation of gaming and sensitive data, for example using a PC dedicated to gaming, gaming from a second Windows installation, or using a computer account with limited rights dedicated to gaming. In July 2015, a bug in the software allowed anyone to reset the password to any account by using the "forgot password" function of the client. High-profile professional gamers and streamers lost access to their accounts. In December 2015, Steam's content delivery network was misconfigured in response to a DDoS attack, causing cached store pages containing personal information to be temporarily exposed for 34,000 users. In April 2018, Valve added new privacy settings for Steam users, who are able to set if their current activity status is private, visible to friends only, or public; in addition to being able to hide their game lists, inventory, and other profile elements in a similar manner. While these changes brought Steam's privacy settings inline with approaches used by game console services, it also impacted third-party services such as Steam Spy, which relied on the public data to estimate Steam sales count. Valve established a HackerOne bug bounty program in May 2018, a crowdsourced method to test and improve security features of the Steam client. In August 2019, a security researcher exposed a zero-day vulnerability in the Windows client of Steam, which allowed for any user to run arbitrary code with LocalSystem privileges using just a few simple commands. The vulnerability was then reported to Valve via the program, but it was initially rejected for being "out-of-scope". Following a second vulnerability found by the same user, Valve apologized and patched them both, and expanded the program's rules to accept any other similar problems in the future. The Anti-Defamation League published a report that stated the Steam Community platform harbors hateful content in April 2020. User interface Since November 2013, Steam has allowed for users to review their purchased games and organize them into categories set by the user and add to favorite lists for quick access. Players can add non-Steam games to their libraries, allowing the game to be easily accessed from the Steam client and providing support where possible for Steam Overlay features. The Steam interface allows for user-defined shortcuts to be added. In this way, third-party modifications and games not purchased through the Steam Store can use Steam features. Valve sponsors and distributes some modifications free of charge; and modifications that use Steamworks can also use VAC, Friends, the server browser, and any Steam features supported by their parent game. For most games launched from Steam, the client provides an in-game overlay that can be accessed by a keystroke. From the overlay, the user can access his or her Steam Community lists and participate in chat, manage selected Steam settings, and access a built-in web browser without having to exit the game. Since the beginning of February 2011 as a beta version, the overlay also allows players to take screenshots of the games in process; it automatically stores these and allows the player to review, delete, or share them during or after his or her game session. As a full version on February 24, 2011, this feature was reimplemented so that users could share screenshots on websites of Facebook, Twitter, and Reddit straight from a user's screenshot manager. 
Steam's "Big Picture" mode was announced in 2011; public betas started in September 2012 and were integrated into the software in December 2012. Big Picture mode is a 10-foot user interface, which optimizes the Steam display to work on high-definition televisions, allowing the user to control Steam with a gamepad or with a keyboard and mouse. Newell stated that Big Picture mode was a step towards a dedicated Steam entertainment hardware unit. With the introduction of the Steam Deck, Valve stated that they will eventually replace Big Picture mode with the Steam Deck's user interface. In-Home Streaming was introduced in May 2014; it allows users to stream games installed on one computer to anotherregardless of platformon the same home network with low latency. By June 2019, Valve renamed this feature to Remote Play, allowing users to stream games across devices that may be outside of their home network. Steam's "Remote Play Together", added in November 2019 after a month of beta testing, gives the ability for local multiplayer games to be played by people in disparate locations, though will not necessary resolve latency problems typical of these types of games. Remote Play Together was expanded in February 2021 to give the ability to invite non-Steam players to play though a Steam Link app approach. The Steam client, as part of a social network service, allows users to identify friends and join groups using the Steam Community feature. Through the Steam Chat feature, users can use text chat and peer-to-peer VoIP with other users, identify which games their friends and other group members are playing, and join and invite friends to Steamworks-based multiplayer games that support this feature. Users can participate in forums hosted by Valve to discuss Steam games. Each user has a unique page that shows his or her groups and friends, game library including earned achievements, game wishlists, and other social features; users can choose to keep this information private. In January 2010, Valve reported that 10 million of the 25 million active Steam accounts had signed up to Steam Community. In conjunction with the 2012 Steam Summer Sale, user profiles were updated with Badges reflecting the user's participation in the Steam community and past events. Steam Trading Cards, a system where players earn virtual trading cards based on games they own, were introduced in May 2013. Using them, players can trade with other Steam users on the Steam Marketplace and use them to craft "Badges", which grant rewards such as game discount coupons, emoticons, and the ability to customize their user profile page. In 2010, the Steam client became an OpenID provider, allowing third-party websites to use a Steam user's identity without requiring the user to expose his or her Steam credentials. In order to prevent abuse, access to most community features is restricted until a one-time payment of at least 5 is made to Valve. This requirement can be fulfilled by making any purchase of five dollars or more on Steam, or by adding at the same amount to their wallet. Through Steamworks, Steam provides a means of server browsing for multiplayer games that use the Steam Community features, allowing users to create lobbies with friends or members of common groups. Steamworks also provides Valve Anti-Cheat (VAC), Valve's proprietary anti-cheat system; game servers automatically detect and report users who are using cheats in online, multiplayer games. 
In August 2012, Valve added new features to the Community area, including dedicated hub pages for games that highlight the best user-created content, top forum posts, and screenshots. In December 2012, a feature where users can upload walkthroughs and guides detailing game strategy was added. Starting in January 2015, the Steam client allowed players to livestream to Steam friends or the public while playing games on the platform. For the main event of The International 2018 Dota 2 tournament, Valve launched Steam.tv as a major update to Steam Broadcasting, adding Steam chat and Steamworks integration for spectating matches played at the event. It has also been used for other events, such as a pre-release tournament for the digital card game Artifact and for The Game Awards 2018 and Steam Awards award shows. In September 2014, Steam Music was added to the Steam client, allowing users to play through music stored on their computer or to stream from a locally networked computer directly in Steam. An update to the friends and chat system was released in July 2018, allowing for non-peer-to-peer chats integrated with voice chat and other features that were compared to Discord. A standalone mobile app based on this for Android and iOS was released in May 2019. A major visual overhaul of the Library and game profile pages was released in October 2019. These redesigns aim to help users organize their games and to showcase which shared games a user's friends are playing, games that are being live-streamed, and new content that may be available, along with more customization options for sorting games. Associated with that, Valve gave developers means of communicating when special in-game events are approaching through Steam Events, which appear to players on the revamped Library and game profile pages. A Steam Points system and storefront was added in June 2020, which mirrored similar temporary points systems that had been used in prior sales on the storefront. Users earn points through purchases on Steam or by receiving community recognition for helpful reviews or discussion comments. These points do not expire as they had in the prior sales, and can be redeemed in the separate storefront for cosmetics that apply to the user's profile and chat interface. Developer features Valve provides developers the ability to create storefront pages for games ahead of time to help generate interest in their game ahead of release. This is also necessary to fix a release date, which feeds into Valve's "build review", a free service performed by Valve about a week before the release date to make sure the game can be installed and run, along with other checks to make sure the game's launch is otherwise trouble-free. Recent updates related to Discovery queues have given developers more options for customizing their storefront page and how these pages integrate with users' experiences of the Steam client. Valve offers Steamworks, an application programming interface (API) that provides development and publishing tools to take advantage of the Steam client's features, free of charge to game and software developers. Steamworks provides networking and player authentication tools for both server and peer-to-peer multiplayer games, matchmaking services, support for Steam community friends and groups, Steam statistics and achievements, integrated voice communications, and Steam Cloud support, allowing games to integrate with the Steam client. The API also provides anti-cheating devices and digital copy management. 
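As a minimal illustration of the achievements portion of the Steamworks API mentioned above, the sketch below unlocks a single achievement using the public C++ interface. The achievement ID "ACH_EXAMPLE" is a placeholder a developer would define in their Steamworks settings, the program must be launched under Steam (or with a steam_appid.txt file present) for initialization to succeed, and a real game would wait for the UserStatsReceived_t callback before setting achievements rather than calling the functions back-to-back as done here.

```cpp
#include <cstdio>
#include "steam/steam_api.h"   // Steamworks SDK header; link against steam_api

int main() {
    // Fails if Steam is not running or the app ID cannot be determined.
    if (!SteamAPI_Init()) {
        std::fprintf(stderr, "Steam is not running or the app ID is not set.\n");
        return 1;
    }

    ISteamUserStats* stats = SteamUserStats();
    stats->RequestCurrentStats();   // ask Steam for the user's current stats
    SteamAPI_RunCallbacks();        // a real game pumps callbacks every frame

    // "ACH_EXAMPLE" is a hypothetical achievement ID defined on Steamworks.
    if (stats->SetAchievement("ACH_EXAMPLE") && stats->StoreStats()) {
        std::printf("Achievement unlock queued and stats upload requested.\n");
    }

    SteamAPI_Shutdown();
    return 0;
}
```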
After introducing the Steam Controller and improvements to the Steam interface to support numerous customization options, the Steamworks API was also updated to give developers a generic controller library and access to these customization features for other third-party controllers, starting with the DualShock 4. Steam's API has since been updated to include official support for other console controllers such as the Nintendo Switch Pro Controller, the Xbox Wireless Controller for the Xbox Series X and Series S consoles, and the PlayStation 5's DualSense, as well as compatible controllers from third-party manufacturers. Developers of software available on Steam are able to track sales of their games through the Steam store. In February 2014, Valve announced that it would begin to allow developers to set up their own sales for their games independent of any sales that Valve may set. Valve may also work with developers to suggest their participation in sales on themed days. Valve added the ability for developers to sell games under an early access model with a special section of the Steam store, starting in March 2013. This program allows developers to release functional, but not finished, products such as beta versions to the service to allow users to buy the games and help provide testing and feedback towards the final production. Early access also helps to provide funding to the developers to help complete their games. The early access approach allowed more developers to publish games onto the Steam service without the need for Valve's direct curation of games, significantly increasing the number of available games on the service. Developers are able to request Steam keys of their products to use as they see fit, such as to give away in promotions, to provide to selected users for review, or to give to key resellers for other forms of monetization. Valve generally honors all such requests, but clarified that it would evaluate some requests to avoid giving keys to games or other offerings that are designed to manipulate the Steam storefront and other features. For example, Valve said that a request for 500,000 keys for a game that has significantly negative reviews and 1,000 sales on Steam is unlikely to be granted. In June 2021, Valve enabled developers to create bundles of games from their offerings without the need for Valve's staff to create these on their behalf. Steam Workshop The Steam Workshop is a Steam account-based hosting service for video game user-created content. Depending on the title, new levels, art assets, gameplay modifications, or other content may be published to or installed from the Steam Workshop through an automated, online account-based process. The Workshop was originally used for distribution of new items for Team Fortress 2; it was redesigned to extend support to any game in early 2012, including modifications for The Elder Scrolls V: Skyrim. A May 2012 patch for Portal 2 introduced the ability to share user-created levels, enabled by a new map-making tool, through the Steam Workshop. Independently developed games, including Dungeons of Dredmor, are able to provide Steam Workshop support for user-generated content. Dota 2 became Valve's third published title available for the Steam Workshop in June 2012; its features include customizable accessories, character skins, and announcer packs.
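Published Workshop items can also be inspected programmatically. The following sketch assumes the publicly documented `ISteamRemoteStorage/GetPublishedFileDetails` Web API endpoint and looks up basic metadata (title, subscription count, file size) for one item; the item ID in the usage comment is a placeholder, and field availability may vary by item type.

```python
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://api.steampowered.com/ISteamRemoteStorage/GetPublishedFileDetails/v1/"

def get_workshop_item(published_file_id: str) -> dict:
    """Return the metadata record for a single Steam Workshop item."""
    body = urllib.parse.urlencode(
        {"itemcount": 1, "publishedfileids[0]": published_file_id}
    ).encode()
    # The endpoint expects a POST with form-encoded parameters.
    with urllib.request.urlopen(ENDPOINT, data=body) as resp:
        payload = json.load(resp)
    return payload["response"]["publishedfiledetails"][0]

# Hypothetical usage:
# item = get_workshop_item("123456789")
# print(item.get("title"), item.get("subscriptions"), item.get("file_size"))
```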
Workshop content may be monetized; Newell said that the Workshop was inspired by gold farming in World of Warcraft and the search for a way to incentivize both players and content creators in video games, which had informed Valve's approach to Team Fortress 2 and its later multiplayer games. By January 2015, Valve itself had provided some user-developed Workshop content as paid-for features in Valve-developed games, including Team Fortress 2 and Dota 2, with over $57 million paid to content creators through the Workshop. Valve began allowing other developers to use these paid-content features in January 2015, with the developer and the content creator sharing the profits of the sale of these items; the feature went live in April 2015, starting with various mods for Skyrim. This feature was pulled a few days afterward following negative user feedback and reports of pricing and copyright misuse. Six months later, Valve stated it was still interested in offering this type of functionality in the future, but would review the implementation to avoid these previous mistakes. In November 2015, the Steam client was updated with the ability for game developers to offer in-game items for direct sale via the store interface, with Rust being the first game to use the feature. Steam for Schools Steam for Schools (discontinued) was a function-limited version of the Steam client that was available free of charge for use in schools. It was part of Valve's initiative to support gamification of learning for classroom instruction. It was released alongside free versions of Portal 2 and a standalone program called "Puzzle Maker" that allowed teachers and students to create and manipulate levels. It featured additional authentication security that allowed teachers to share and distribute content via a Steam Workshop-type interface while blocking such access for students. SteamVR SteamVR is a virtual reality hardware and software platform developed by Valve, with a focus on allowing "room-scale" experiences using positional tracking base stations, as opposed to those requiring the player to stay in a single location. SteamVR was first introduced for the Oculus Rift headset in 2014, and later expanded to support other virtual reality headsets, such as the HTC Vive and Valve Index. Though released for Windows, macOS, and Linux, Valve dropped macOS support for SteamVR in May 2020. Storefront curation Up until 2012, Valve would handpick games to be included on the Steam service, limiting these to games that either had a major developer supporting them or came from smaller studios with proven track records. Since then, Valve has sought ways to let more games be offered through Steam, while pulling away from manually approving games for the service, short of validating that a game runs on the platforms the publisher had indicated. Alden Kroll, a member of the Steam development team, said that Valve knows Steam holds a near-monopoly on game sales for personal computers, that the company does not want to be in a position to determine what gets sold, and that it had therefore tried to find ways to move the process of adding games to Steam outside of its control. At the same time, Valve recognized that an unfettered flow of games onto the service can lead to discovery problems as well as low-quality games that are put onto the service as a cash grab.
Steam Greenlight Valve's first attempt to streamline game addition to the service was Steam Greenlight, announced in July 2012 and released the following month. Through Greenlight, Steam users would choose which games were added to the service. Developers were able to submit information about their games, as well as early builds or beta versions, for consideration by users. Users would pledge support for these games, and Valve would help to make top-pledged games available on the Steam service. In response to complaints during its first week that finding games to support was made difficult by a flood of inappropriate or false submissions, Valve required developers to pay a fee to list a game on the service in order to reduce illegitimate submissions. Those fees were donated to the charity Child's Play. The fee was met with some concern from smaller developers, who are often already working at a deficit and may not have the money to cover it. A later modification allowed developers to put conceptual ideas on the Greenlight service to garner interest in potential projects free of charge; votes from such projects are visible only to the developer. Valve also allowed non-gaming software to be voted onto the service through Greenlight. The initial process offered by Greenlight was panned because, while developers favored the concept, the rate at which games were eventually approved was low. Valve acknowledged that this was a problem and believed it could be improved upon. In January 2013, Newell stated that Valve recognized that its role in Greenlight was perceived as a bottleneck, something the company was planning to eliminate in the future through an open marketplace infrastructure. On the eve of Greenlight's first anniversary, Valve simultaneously approved 100 games through the Greenlight process to demonstrate this change of direction. While the Greenlight service had helped to bring more and varied games onto Steam without excessive bureaucracy, it also led to an excessively large number of games on the service, making it difficult for any single title to stand out. By 2014, Valve had discussed plans to phase out Greenlight in favor of providing developers with easier means to put their games onto Steam. Steam Direct Steam Greenlight was phased out and replaced with Steam Direct in June 2017. With Steam Direct, a developer or publisher wishing to distribute their game on Steam needs only to complete appropriate identification and tax forms for Valve and then pay a recoupable application fee for each game they intend to publish. Once they apply, a developer must wait thirty days before publishing the game, giving Valve time to review it to make sure it is "configured correctly, matches the description provided on the store page, and doesn't contain malicious content". On announcing its plans for Steam Direct, Valve suggested the fee would be in the range of $100–5,000, meant to encourage earnest software submissions to the service and weed out poor-quality games that amount to shovelware, improving the discovery pipeline to Steam's customers. Smaller developers raised concerns that the Direct fee would harm them and keep potentially good indie games from reaching the Steam marketplace. Valve opted to set the Direct fee at $100 after reviewing concerns from the community, recognizing the need to keep the amount low for small developers, and outlined plans to improve its discovery algorithms and inject more human involvement to help with discovery.
Valve then refunds the fee should the game exceed $1,000 in sales. In the process of transitioning from Greenlight to Direct, Valve mass-approved most of the 3,400 remaining games that were still in Greenlight, though the company noted that not all of these were at a state to be published. Valve anticipated that the volume of new games added to the service would further increase with Direct in place. Some groups, such as publisher Raw Fury and the crowdfunding/investment site Fig, have offered to pay the Direct fee for indie developers who cannot afford it. Discovery updates Without more direct interaction in the curation process, and with hundreds more games allowed onto the service, Valve looked for methods to help players find games they would be more likely to buy based on previous purchase patterns. The September 2014 "Discovery Update" added tools that allow existing Steam users to act as curators who recommend games, along with sorting functions that present more popular games and recommendations specific to the user, so that more games could be introduced on Steam without the need for Steam Greenlight while still providing some means to highlight user-recommended games. This Discovery update was considered successful by Valve, which reported in March 2015 that it had seen increased use of the Steam storefront and an 18% increase in sales by revenue compared to just prior to the update. A second Discovery update was released in November 2016, giving users more control over which games they want to see or ignore within the Steam store, alongside tools for developers and publishers to better customize and present their game within these new user preferences. By February 2017, Valve reported that with the second Discovery update, the number of games shown to users via the store's front page had increased by 42%, with more conversions into sales from that viewership. In 2016, more games were meeting a rough metric of success defined by Valve as selling more than $200,000 in revenue in the first 90 days of release. Valve added a "Curator Connect" program in December 2017. Curators can set up descriptors for the type of games they are interested in, preferred languages, and other tags along with social media profiles, while developers can find and reach out to specific curators from this information and, after review, provide them directly with access to their game. This step, which eliminates the use of a Steam redemption key, is aimed at reducing the reselling of keys, as well as dissuading users who may be trying to game the curator system to obtain free game keys. Prior to October 2018, Valve received a flat 30% revenue share from all direct Steam sales and transactions. After that date, Valve cut its share to 25% once revenue for a game surpasses US$10 million, and to 20% once it surpasses US$50 million. The policy change was seen by journalists as an attempt to entice larger developers to stay with Steam rather than move to other digital storefronts like Origin or Uplay, while the decision was also met with backlash from indie and other small game developers, as their revenue split remained unchanged.
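As a rough illustration of the tiered revenue split introduced in October 2018, the sketch below computes Valve's cut under the assumption that the reduced rates apply marginally, i.e. only to the portion of revenue above each threshold; the exact contractual mechanics are not public, so this is an approximation rather than Valve's actual accounting.

```python
def valve_cut(gross_revenue: float) -> float:
    """Approximate Valve's share of a game's gross Steam revenue (post-October 2018).

    Assumes the published tiers apply marginally: 30% up to $10M,
    25% on the portion between $10M and $50M, and 20% beyond $50M.
    """
    tiers = [
        (10_000_000, 0.30),    # first $10M at 30%
        (50_000_000, 0.25),    # $10M to $50M at 25%
        (float("inf"), 0.20),  # above $50M at 20%
    ]
    cut, lower = 0.0, 0.0
    for upper, rate in tiers:
        if gross_revenue > lower:
            cut += (min(gross_revenue, upper) - lower) * rate
        lower = upper
    return cut

# Example: a game grossing $60M would yield roughly
# 0.30*10M + 0.25*40M + 0.20*10M = $15M for Valve under this reading.
print(valve_cut(60_000_000))  # 15000000.0
```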
Valve has attempted to deal with "fake games", those that are built around reused assets and little other innovation, designed to misuse Steam's features for the benefit of only the developer or a select few users. To help find and remove these games from the service, the company added Steam Explorers atop its existing Steam Curator program, following input from various YouTube personalities who had spoken out about such games in the past and with Valve directly, including Jim Sterling and TotalBiscuit. Any Steam user is able to sign up to be an Explorer, and Explorers are asked to look at under-performing games on the service to either vouch that a game is truly original and simply lost among other releases, or flag it as an example of a "fake game", at which point Valve can take action to remove it. In July 2019, the Steam Labs feature was introduced as a means for Valve to showcase experimental discovery features it has considered including in Steam, seeking public feedback to see whether users want them before fully integrating them into the storefront. For example, an initial experiment released at launch was the Interactive Recommender, which uses artificial intelligence algorithms on data from the user's past gameplay history, compared against that of all other users, to suggest new games that may be of interest to them. As these experiments mature through end-user testing, they have then been brought into the storefront as direct features. The September 2019 Discovery update, which Valve claimed would improve the visibility of niche and lesser-known games, was met with criticism from some indie game developers, who recorded a significant drop in exposure of their games, including new wishlist additions and appearances in the "More Like This" and "Discovery queue" sections of the store. Policies In June 2015, Valve created a formal process to allow purchasers to request full refunds on games they had purchased on Steam for any reason, with refunds guaranteed within the first two weeks as long as the player had not spent more than two hours in the game. Prior to June 2015, Valve had a no-refunds policy but allowed refunds in certain circumstances, such as when third-party content failed to work or improperly reported on certain features. For example, the Steam version of From Dust was originally stated to have a single, post-installation online DRM check with its publisher Ubisoft, but the released version of the game required a DRM check with Ubisoft's servers each time it was used. At the request of Ubisoft, Valve offered refunds to customers who bought the game while Ubisoft worked to release a patch that would remove the DRM check altogether. On The War Z's release, players found that the game was still in an alpha-build state and lacked many of the features advertised on its Steam store page. Though the developer Hammerpoint Interactive altered the description after launch to reflect the current state of the game software, Valve removed the title from Steam and offered refunds to those who had bought it. Valve also removed Earth: Year 2066 from the Early Access program and offered refunds after discovering that the game's developers had reused assets from other games and used developer tools to erase negative complaints about the title. Valve stated it would continue to work on improving the discovery process for users, taking principles it learned in providing transparency for matchmaking in Dota 2 and applying them to Steam storefront procedures to help refine its algorithms with user feedback.
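The headline refund rule described above (a refund available for any reason when requested within two weeks of purchase and with under two hours of playtime) can be expressed as a simple check. This is a sketch of the publicly stated guideline only; the function name is illustrative, and Valve also reviews requests that fall outside the automatic window on a case-by-case basis.

```python
from datetime import datetime, timedelta

def auto_refund_eligible(purchased_at: datetime, hours_played: float,
                         now: datetime | None = None) -> bool:
    """Return True if a purchase falls inside Steam's stated automatic-refund window."""
    now = now or datetime.utcnow()
    within_two_weeks = now - purchased_at <= timedelta(weeks=2)
    under_two_hours = hours_played < 2.0
    return within_two_weeks and under_two_hours

# Example: bought 10 days ago, played 90 minutes -> eligible under the stated rule.
print(auto_refund_eligible(datetime.utcnow() - timedelta(days=10), 1.5))  # True
```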
Valve has full authority to remove games from the service for various reasons; however, games that are removed can still be downloaded and played by those who have already purchased them. One reason for removal is an expired license: games whose licensing deals have lapsed may no longer be sold, such as when a number of Transformers games published by Activision under license from Hasbro were removed from the store in January 2018. Grand Theft Auto: Vice City was removed from Steam in 2012 because of a claim from the Recording Industry Association of America over an expired license for one of the songs on the soundtrack. Around the launch of Electronic Arts' (EA) own digital storefront Origin during the same year, Valve removed Crysis 2, Dragon Age II, and Alice: Madness Returns from Steam because Steam's terms of service prevented games from having their own in-game storefront for downloadable content. In the case of Crysis 2, a "Maximum Edition" that contained all the available downloadable content for the game and removed the in-game storefront was re-added to Steam. Valve also removes games that are reported to be violating copyright or other intellectual property rights when given formal complaints. In 2016, Valve removed Orion by Trek Industries when Activision filed a Digital Millennium Copyright Act (DMCA) complaint about the game after it was discovered that one of the game's artists had taken, among other assets, gun models directly from Call of Duty: Black Ops 3 and Call of Duty: Advanced Warfare. Quality control With the launch of Steam Direct, which effectively removed any curation of games by Valve prior to publication on Steam, there have been several incidents of published games that have attempted to mislead Steam users. Starting in June 2018, Valve has taken action against games and developers that are "trolling" the system; in September 2018, Valve explicitly defined trolls on Steam as developers who "aren't actually interested in good faith efforts to make and sell games to you or anyone" and instead use a "game shaped object" that could be considered a video game but would not be considered "good" by a near-unanimity of users. As an example, Valve's Lombardi stated that the game Active Shooter, which would have allowed the player to play as either a SWAT team member tasked with taking down the shooter in a school shooting incident or as the shooter themselves, was an example of trolling, as he described it as "designed to do nothing but generate outrage and cause conflict through its existence". While Active Shooter had been removed from Steam prior to Valve issuing this policy statement, under the reasoning that the developer had abused the Steam service's terms and conditions, Lombardi asserted that Valve would have removed the game even if it had been offered by any other developer. A day after making this new policy, Valve removed four yet-to-be-released games from the service that appeared to have been created purposely to generate outrage, including AIDS Simulator and ISIS Simulator. Within a month of clarifying its definition of trolling, Valve removed approximately 170 games from Steam. In addition to removing bad actors from the service, Valve has also taken steps to reduce the impact of "fake games" and their misuse on the service.
In May 2017, Valve identified several games on the service with trading card support whose developers distributed game codes to thousands of bot-operated accounts that would run the game to earn trading cards that could then be sold for profit; these games also created false positives that made them appear more popular than they really were, which in turn skewed Steam's Discovery algorithms and affected the games suggested to legitimate players. Valve subsequently changed its approach so that a game must reach some type of confidence factor based on actual playtime before it can generate trading cards, with players credited for their time played towards receiving trading cards before this metric is met. Valve identified a similar situation in June 2018 with "fake games" that offered large numbers of game achievements with little gameplay, which some users would use to artificially raise the global achievement statistics displayed on their profile. Valve plans to use the same approach and algorithms to identify these types of games, limiting them to one thousand total achievements and discounting these achievements towards a user's statistics. These algorithms have resulted in occasional false positives for legitimate games with unusual end-user usage patterns, such as Wandersong, which was flagged in January 2019 for what its developer believed was a nearly unanimous run of positive user reviews. Other actions taken by developers against the terms of service or other policies have prompted Valve to remove games. Some noted examples include: In September 2016, Valve removed Digital Homicide Studios' games from the storefront for being "hostile to Steam customers" following a lawsuit that the developer had issued against 100 unnamed Steam users for leaving negative reviews of its games. Digital Homicide later dropped the lawsuit, in part because the removal of the games from Steam affected its financial ability to proceed. In September 2017, Valve removed 170 games developed by Silicon Echo (operating under several different names) that had been released over a period of a few months in 2017, after the implementation of Steam Direct. Valve cited that these were cheap "fake games" that relied on "asset flipping" with pre-existing Unity game engine assets so that they could be published quickly, and that they were designed to take advantage of the trading card market to allow players and the developers to profit from trading card sales. In February 2018, after discovering that the CEO of Insel Games had asked the company's employees to write positive Steam reviews for its games in order to manipulate the review scores, Valve removed all of Insel's games from the service and banned the company from it. In July 2018, the games Abstractism and Climber offered Steam inventory items that used assets from other Valve games and were used to mislead users looking to trade for items from those games. Valve removed the games and built in additional trade protections, warning users about trades involving recently released games or games they do not own to prevent such scamming. In November 2019, nearly 1,000 games were removed from Steam. Most appeared tied to a Russian publisher that had operated under several different names. A Valve representative stated that they "recently discovered a handful of partners that were abusing some Steamworks tools" as the rationale for the removals.
Developers stated that Valve began warning them in October 2021 about the removal of games that used cryptocurrencies and non-fungible tokens, as such items could have real-world value outside of the game or Steam, which would be against Valve's acceptable use policy. Mature content Valve has also removed or threatened to remove games due to inappropriate or mature content, though there has often been confusion as to what material qualifies, with a number of mature but non-pornographic visual novels among those threatened. For example, Eek Games' House Party included scenes of nudity and sexual encounters in its original release, which drew criticism from the conservative religious organization National Center on Sexual Exploitation, leading Valve to remove the title from the service. Eek Games was later able to satisfy Valve's standards by including censor bars within the game, allowing it to be re-added to Steam, though the studio offered a patch on its website to remove the bars. In May 2018, several developers of anime-styled games that contained some light nudity, such as HuniePop, were told by Valve that they had to address sexual content within their games or face removal from Steam, leading to questions of inconsistent application of Valve's policies. The National Center on Sexual Exploitation took credit for convincing Valve to target these games. However, Valve later rescinded its orders, allowing these games to remain and telling the developers it would re-evaluate the games and inform them of any content that would need to be changed or removed. In June 2018, Valve clarified its policy on content, taking a more hands-off approach rather than deeming what content is inappropriate, outside of illegal material. Rather than trying to make decisions itself on what content is appropriate, Valve enhanced its filtering system to allow developers and publishers to indicate and justify the types of mature content (including violence, nudity, and sexual content) in their games. Users can block games that are marked with this type of content from appearing in the store, and if they have not blocked it, they are presented with the description given by the developer or publisher before they can continue to the store page. Developers and publishers with existing games on Steam have been strongly encouraged to complete these forms for their games, while Valve uses moderators to make sure new games are appropriately marked. Valve also committed to developing anti-harassment tools to support developers who may find their game amid controversy. Until these tools were in place, some adult-themed games were delayed for release. Negligee: Love Stories, developed by Dharker Studios, was one of the first sexually explicit games to be offered after the introduction of the tools in September 2018. Dharker noted that in discussions with Valve it was told it would be liable for any content-related fines or penalties that countries may place on Valve, a clause of its publishing contract for Steam, and the studio took steps to restrict sale of the game in over 20 regions. Games that feature mature themes with primary characters who visually appear to be underage, even if the game's narrative establishes them as adults, have been banned by Valve. In March 2019, Valve faced pressure over Rape Day, a planned game described as a dark comedy and power fantasy where the player would control a serial rapist in the midst of a zombie apocalypse.
Journalists questioned how the hands-off approach would handle this case; Valve ultimately decided against offering the game on Steam, arguing that while it "[respects] developers' desire to express themselves", there were "costs and risks" associated with the game's content, and the developers had "chosen content matter and a way of representing it that makes it very difficult for us to help them [find an audience]". Platforms Microsoft Windows Steam originally released exclusively for Microsoft Windows in 2003, but has since been ported to other platforms. More recent Steam client versions use the Chromium Embedded Framework. To take advantage of some of its features for newer interface elements, Steam uses 64-bit versions of Chromium, which makes it unsupported on older operating systems such as Windows XP and Windows Vista. Steam on Windows also relies on some security features built into later versions of Windows. Steam support for XP and Vista was dropped in 2019. While users still on those operating systems are able to use the client, they do not have access to newer features. Only around 0.2% of Steam users were affected when the change took effect. macOS On March 8, 2010, Valve announced a Steam client for Mac OS X. The announcement was preceded by a change in the Steam beta client to support the cross-platform WebKit web browser rendering engine instead of the Trident engine of Internet Explorer. Before this announcement, Valve teased the release by e-mailing several images to Mac community and gaming websites; the images featured characters from Valve games with Apple logos and parodies of vintage Macintosh advertisements. Valve developed a full video homage to Apple's 1984 Macintosh commercial to announce the availability of Half-Life 2 and its episodes on the service; some concept images for the video had previously been used to tease the Mac Steam client. Steam for Mac OS X was originally planned for release in April 2010, but was pushed back to May 12, 2010, following a beta period. In addition to the Steam client, several features were made available to developers, allowing them to take advantage of the cross-platform Source engine and platform and network capabilities using Steamworks. Through SteamPlay, the macOS client allows players who have purchased compatible products in the Windows version to download the Mac versions at no cost, allowing them to continue playing the game on the other platform. Some third-party games may require the user to re-purchase them to gain access to the cross-platform functionality. The Steam Cloud, along with many multiplayer PC games, also supports cross-platform play, allowing Windows, macOS, and Linux users to play with each other regardless of platform. Linux Valve announced in July 2012 that it was developing a Steam client for Linux and modifying the Source engine to work natively on Linux, based on the Ubuntu distribution. This announcement followed months of speculation, primarily from the website Phoronix, which had discovered evidence of Linux development in recent builds of Steam and other Valve games. Newell stated that getting Steam and games to work on Linux was a key strategy for Valve; he called the closed nature of Microsoft Windows 8 "a catastrophe for everyone in the PC space" and said that Linux would maintain "the openness of the platform".
Valve is extending support to any developers that want to bring their games to Linux, by "making it as easy as possible for anybody who's engaged with us: putting their games on Steam and getting those running on Linux", according to Newell. The team developing the Linux client had been working for a year before the announcement to validate that such a port would be possible. As of the official announcement, a near-feature-complete Steam client for Linux had been developed and successfully run on Ubuntu. Internal beta testing of the Linux client started in October 2012; external beta testing occurred in early November the same year. Open beta clients for Linux were made available in late December 2012, and the client was officially released in mid-February 2013. At the time of the announcement, Valve's Linux division assured that its first game on the OS, Left 4 Dead 2, would run at an acceptable frame rate and with a degree of connectivity with the Windows and Mac OS X versions. From there, it began working on porting other games to Ubuntu and expanding to other Linux distributions. Linux games are also eligible for SteamPlay availability. Versions of Steam working under Fedora and Red Hat Enterprise Linux were released by October 2013. The number of Linux-compatible games on Steam increased from over 500 in June 2014 to over 1,000 by March 2015 and to over 2,000 in March 2016. In February 2019, Steam for Linux had around 5,800 native games, compared with the more than 30,000 games on the service overall, and was described by Engadget as having "the power to keep Linux [gaming] alive". In August 2018, Valve released a beta version of Proton, an open-source Windows compatibility layer for Linux, so that Linux users could run Windows games directly through Steam for Linux, removing the need to install the Windows version of Steam in Wine. Proton is composed of a set of open-source tools including Wine and DXVK, among others. The software allows the use of Steam-supported controllers, even those not compatible with Windows. Valve's handheld computer, the Steam Deck, released in early 2022, runs SteamOS 3.0, which is based on the Arch Linux distribution and uses Proton to support Windows-based games without native Linux ports. To that end, Valve worked with various middleware developers to make sure their tools were compatible with Proton on Linux and to maximize the number of games that the Steam Deck - and by extension other Linux-based installations - would support. This included working with anti-cheat developers such as Easy Anti-Cheat and BattlEye to make sure their solutions worked with Proton. To help with compatibility, Valve developed a classification system that it populates to rank how well any given game works either natively on Linux or through Proton. Support for Nvidia's deep learning super sampling (DLSS) on supported video cards and games was added to Proton in June 2021, though this is not available on the Steam Deck, which is based on AMD hardware. Other platforms At E3 2010, Newell announced that Steamworks would arrive on the PlayStation 3 with Portal 2. It would provide automatic updates, community support, downloadable content and other unannounced features. Steamworks made its debut on consoles with Portal 2's PlayStation 3 release. Several features, including cross-platform play and instant messaging, Steam Cloud for saved games, and the ability for PS3 owners to download Portal 2 from Steam (Windows and Mac) at no extra cost, were offered.
Valve's Counter-Strike: Global Offensive also supports Steamworks and cross-platform features on the PlayStation 3, including using keyboard and mouse controls as an alternative to the gamepad. Valve said it "hope[s] to expand upon this foundation with more Steam features and functionality in DLC and future content releases". The Xbox 360 does not have support for Steamworks. Newell said that Valve would have liked to bring the service to the console through the game Counter-Strike: Global Offensive, which would have allowed Valve to provide the same feature set that it did for the PlayStation 3, but later said that cross-platform play would not be present in the final version of the game. Valve attributes the inability to use Steamworks on the Xbox 360 to limitations in the Xbox Live regulations on delivering patches and new content. Valve's Erik Johnson stated that Microsoft required new content on the console to be certified and validated before distribution, which would limit the usefulness of Steamworks' delivery approach. Mobile Valve released an official Steam client for iOS and Android devices in late January 2012, following a short beta period. The application allows players to log into their accounts to browse the storefront, manage their games, and communicate with friends in the Steam community. The application also incorporates a two-factor authentication system that works with Steam Guard, further enhancing the security of a user's account. Newell stated that the application was a strong request from Steam users and sees it as a means "to make [Steam] richer and more accessible for everyone". A mobile Steam client for Windows Phone devices was released in June 2016. In May 2019, a mobile chat-only client for Steam was released under the name Steam Chat. On May 14, 2018, a "Steam Link" app with remote play features was released in beta to allow users to stream games to Android phones. It was also submitted to the iOS App Store, but was denied by Apple Inc., which cited "business conflicts with app guidelines". Apple later clarified its rules at the Apple Worldwide Developers Conference in early June: iOS apps may not offer an app-like purchasing store, but apps that provide remote desktop support, through which users can purchase content on the remote machine, are not restricted. In response, Valve removed the ability to purchase games or other content through the app and resubmitted it for approval in June 2018; the app was eventually accepted by Apple and allowed on its store in May 2019. Steam Machine Prior to 2013, industry analysts believed that Valve was developing hardware and tuning features of Steam for apparent use on its own hardware. These computers were pre-emptively dubbed "Steam Boxes" by the gaming community and expected to be dedicated machines focused on Steam functionality while maintaining the core functionality of a traditional video game console. In September 2013, Valve unveiled SteamOS, a custom Linux-based operating system it had developed specifically for running Steam and games, along with the final concept of the Steam Machine hardware. Unlike other consoles, the Steam Machine does not have set hardware; its technology is implemented at the discretion of the manufacturer and is fully customizable, much like a personal computer. Steam Link Steam Link was a set-top box that displayed a PC's screen on a television without the need for an HDMI cable, allowing a wireless connection to the TV.
The hardware was discontinued in 2018; "Steam Link" now refers to the Remote Play mobile app that allows users to stream content, such as games, from a PC to a mobile device over a network. Steam Cloud Play Valve added beta support for Steam Cloud Play in May 2020, allowing users to play games from their library that developers and publishers have opted into a cloud gaming service. At launch, Steam Cloud Play only worked through Nvidia's GeForce Now service, with links to other cloud services planned for the future, though whether Valve would run its own cloud gaming service was unclear. Steam China China has strict regulations on video games and Internet use; however, access to Steam is allowed through China's governmental firewalls. A large portion of Steam users are from China: by November 2017, more than half of the Steam userbase was fluent in Chinese, an effect created by the large popularity of Dota 2 and PlayerUnknown's Battlegrounds in the country, and several developers have reported that Chinese players account for up to 30% of the total players of their games. Following a Chinese government-ordered temporary block of many of Steam's functions in December 2017, Valve and Perfect World announced they would help to provide an officially sanctioned version of Steam that meets Chinese Internet requirements. Perfect World has worked with Valve before to help bring Dota 2 and Counter-Strike: Global Offensive to the country through approved government processes. All games released on Steam China are expected to pass through the government approval process and meet other governmental requirements for operation, such as requiring a Chinese company to run any game with an online presence. The platform is known locally as the "Steam Platform" (蒸汽平台) and runs independently from the rest of Steam. It was made to comply with China's strict regulations on video games, featuring only those that have passed approval by the government. Valve does not plan to prevent Chinese users from accessing the global Steam platform and will try to assure that a player's cloud data remains usable between the two. The client launched as an open beta on February 9, 2021, with about 40 games available at launch. As of December 2021, only around 100 games that have been reviewed and licensed by the government are available through Steam China. On December 25, 2021, reports emerged that Steam's global service was the target of a domain name system attack that prevented users in China from accessing its site. The Ministry of Industry and Information Technology (MIIT) later confirmed that Chinese gamers would no longer be able to use Steam's global service, as its international domain name had been designated "illegal" due to unspecified "illicit activities". The block effectively locked Chinese users out of games they had purchased through Steam's international service, leaving them able to access only Steam's China-specific application. Steam Deck In July 2021, Valve revealed the Steam Deck, a handheld gaming computer, with plans to ship in December 2021, although it was then delayed to a February 2022 release. The Deck is designed for the play of Steam games, but can be placed into a separately purchased dock that allows the Deck to output to an external display and use the dock's power, networking, and connected USB accessories. The Deck was released on February 25, 2022.
Market share and impact Users Valve reported that there were 125 million active accounts on Steam by the end of 2015. By August 2017, the company reported that there had been 27 million new active accounts since January 2016, bringing the total number of active users to at least 150 million. While most accounts are from North America and Western Europe, Valve has seen significant growth in accounts from Asian countries in recent years, spurred by its work to localize the client and make additional currency options available to purchasers. Valve also considers the concurrent user count a key indicator of the success of the platform, reflecting how many accounts were logged into Steam at the same time. By August 2017, Valve reported a peak of 14 million concurrent players, up from 8.4 million in 2015, with 33 million players active each day and 67 million each month. By January 2018, the peak online count had reached 18.5 million, with over 47 million daily active users. During the coronavirus pandemic in 2020, in which a large proportion of the world's population was encouraged or forced to stay at home, Steam saw a concurrent player count of over 23 million in March, along with several games seeing similar record-breaking concurrent counts. The figure was broken again in January 2021 with over 25 million users shortly after the release of the highly anticipated game Cyberpunk 2077, itself the first single-player game on the service to have over a million concurrent players. Sales and distribution Steam has grown significantly since its launch in 2003. Whereas the service started with seven games in 2004, it had over 30,000 by 2019, with additional non-gaming products, such as creation software, DLC, and videos, numbering over 20,000. The growth of games on Steam is attributed to changes in Valve's curation approach, which allowed publishers to add games without Valve's direct involvement through the Greenlight and early access models, and to games supporting virtual reality technology. Though Steam provides direct sales data to a game's developer and publisher, it does not provide any public sales data or provide such data to third-party sales groups like NPD Group. In 2011, Valve's Jason Holtman stated that the company felt such sales data was outdated for a digital market, since such data, used in aggregate from other sources, could lead to inaccurate conclusions. Data that Valve does provide cannot be released without permission because of a non-disclosure agreement with Valve. Developers and publishers have expressed the need for some metrics of sales for games on Steam, as this allows them to judge the potential success of a title by reviewing how similar games have performed. This led to the creation of algorithms that work on publicly available data from user profiles to estimate sales with some accuracy, and in turn to the creation of the website Steam Spy in 2015. Steam Spy was credited with being reasonably accurate, but in April 2018, Valve added new privacy settings that hid user game profiles by default, stating this was part of compliance with the General Data Protection Regulation (GDPR) in the European Union. The change broke the method by which Steam Spy had collected data, rendering it unusable. A few months later, another method was developed that used game achievements to estimate sales with similar accuracy, but Valve shortly afterward changed the Steam API in a way that reduced the functionality of this service.
Some have asserted that Valve used the GDPR change as a means to block methods of estimating sales data, though Valve has since promised to provide developers with tools to gain such insights that it says will be more accurate than Steam Spy was. In 2020, Simon Carless revised an approach originally proposed by Mike Boxleiter as early as 2013, with Carless's method estimating a game's sales from the number of reviews it has on Steam, multiplied by a modified "Boxleiter number". Because of Valve's oversight of sales data, estimates of how much of a market share Steam has in the video game market are difficult to compile. However, Stardock, the previous owner of competing platform Impulse, estimated that as of 2009, Steam had a 70% share of the digital distribution market for video games. In early 2011, Forbes reported that Steam sales constituted 50–70% of the market for downloaded PC games and that Steam offered game producers gross margins of 70% of purchase price, compared with 30% at retail. Steam's success has led to some criticism because of its support of DRM and for being an effective monopoly. Free Software Foundation founder Richard Stallman commented on the issue following the announcement that Steam would come to Linux. He said that while he supposed its release could boost GNU/Linux adoption, leaving users better off than with Microsoft Windows, he stressed that he sees nothing wrong with commercial software; the problem, in his view, is that Steam is unethical for not being free software, and that its inclusion in GNU/Linux distributions teaches users that the point is not freedom, thus working against the software freedom that is his goal. In November 2011, CD Projekt, the developer of The Witcher 2: Assassins of Kings, revealed that Steam was responsible for 200,000 (80%) of the 250,000 online sales of the game. Steam was responsible for 58.6% of gross revenue for Defender's Quest during its first three months of release across six digital distribution platforms, comprising four major digital game distributors and two methods of purchasing and downloading the game directly from the developer. In September 2014, 1.4 million accounts belonged to Australian users; this grew to 2.2 million by October 2015. Steam's customer service has been highly criticized, with users citing poor response times or lack of response regarding problems such as being locked out of one's library or having a non-working game redemption key. In March 2015, Valve was given a failing "F" grade by the Better Business Bureau due to a large number of complaints about Valve's handling of Steam, leading Valve's Erik Johnson to state that "we don't feel like our customer service support is where it needs to be right now". Johnson stated that the company planned to better integrate customer support features into the Steam client and be more responsive to such problems. In May 2017, in addition to hiring more staff for customer service, Valve publicized pages that show the number and type of customer service requests it had handled over the prior 90 days, averaging 75,000 per day. Of those, requests for refunds were the largest segment, which Valve could typically resolve within hours, followed by account security and recovery requests. Valve stated at this time that 98% of all service requests were processed within 24 hours of filing.
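As a rough illustration of the review-multiplier estimation approach described earlier in this section, the sketch below turns a game's Steam review count into a band of estimated unit sales. The multiplier range here is an assumption for demonstration only (Carless and others have suggested figures that vary by year and genre), not a published constant, and the function name is illustrative.

```python
def estimate_sales_from_reviews(review_count: int,
                                low_mult: float = 30.0,
                                high_mult: float = 80.0) -> tuple[int, int]:
    """Return a (low, high) band of estimated unit sales.

    Implements the 'reviews x multiplier' heuristic; the default multipliers
    are illustrative placeholders rather than Carless's exact published figures.
    """
    return int(review_count * low_mult), int(review_count * high_mult)

# Example: a game with 1,200 reviews might have sold very roughly
# between 36,000 and 96,000 copies under these assumed multipliers.
print(estimate_sales_from_reviews(1200))  # (36000, 96000)
```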
Curation The addition of Greenlight and Direct has accelerated the number of games present on the service, with almost 40% of the 19,000 games on Steam at the end of 2017 having been released in 2017. By the end of 2018, over 27,000 games had been released on Steam, a figure that reached over 34,000 by the end of 2019. More than 50,000 games were on the service as of February 2021. Prior to Greenlight, Valve saw about five new games published each week. Greenlight expanded this to about 70 per week, which grew to about 180 per week following the introduction of Direct. The accessibility of publishing games on digital storefronts like Steam since its launch has been described as key to the popularity of indie games. As these processes allow developers to publish games on Steam with minimal oversight from Valve, journalists have criticized Valve for lacking curation policies, which makes it difficult to find quality games among poorly produced ones, so-called "shovelware". Following the launch of Steam Direct, allowing games to be published without Valve's curation, members of the video game industry were split on Valve's hands-off approach. Some praised Valve for choosing not to be a moral adjudicator of content and for letting consumers decide what content they want to see, while others felt that this would encourage some developers to publish games on Steam that are purposely hateful or degrading toward some social groups, such as LGBTQ people, and that Valve's reliance on user filters and algorithms may not succeed in blocking undesirable content from certain users. Some further criticized the decision on financial grounds: as Valve collects 30% of all sales through Steam, the company has a reason to avoid blocking any game content, which further compounds the service's existing curation problems. The National Center on Sexual Exploitation issued a statement that "denounces this decision in light of the rise of sexual violence and exploitation games being hosted on Steam", and that "In our current #MeToo culture, Steam made a cowardly choice to shirk its corporate and social responsibility to remove sexually violent and exploitive video games from its platform". Sector competition From its release in 2003 through to nearly 2009, Steam had a mostly uncontested hold over the PC digital distribution market before major competitors emerged; the largest of these past competitors were services like Games for Windows – Live and Impulse, which were shut down in 2013 and 2014, respectively. Sales via the Steam catalog are estimated to be between 50 and 75 percent of the total PC gaming market. With an increasing number of retail copies from major game publishers integrating or requiring Steam, critics often refer to the service as a monopoly and claim that concentrating such a percentage of the overall market in one service can be detrimental to the industry as a whole, and that sector competition can yield only positive results for the consumer. Several developers have also noted that Steam's influence on the PC gaming market is powerful and one that smaller developers cannot afford to ignore or work around, but believe that Valve's corporate practices for the service make it a type of "benevolent dictator", as Valve attempts to make the service as amenable to developers as possible. As Steam has grown in popularity, many other competing services have surfaced trying to emulate its success.
The most notable major competitors are Electronic Arts' (EA) Origin service, Ubisoft's Uplay, Blizzard Entertainment's Battle.net, CD Projekt's GOG.com, and Epic Games' Epic Games Store. Battle.net competes as a publisher-exclusive platform, while GOG.com's catalog includes many of the same games as Steam but offers them on a DRM-free platform. Upon the launch of EA's Origin in 2011, several EA-published games were no longer available for sale, and users feared that future EA games would be limited to Origin's service. Newell expressed an interest in EA games returning to the Steam catalog, though he noted the situation was complicated: "We have to show EA it's a smart decision to have EA games on Steam, and we're going to try to show them that." In 2020, EA started to publish select games on Steam and to offer its rebranded subscription service EA Play on the platform. Ubisoft still publishes its games on the Steam platform; however, most games published since the launch of Uplay require that service to run after launching the game from Steam. Steam has been criticized for its 30% cut of revenue from game sales, a value that is similar to that of other digital storefronts. However, some critics have asserted that the 30% cut no longer reflects the much cheaper costs of serving data a decade after Steam's launch. Epic Games' Tim Sweeney postulated that Valve could reduce its cut to 8%, given that content delivery network costs have dropped significantly. Shortly after Valve announced that it would reduce its cut on games selling over US$10 million, Epic launched its Epic Games Store in December 2018, promoting that Epic would take only a 12% cut of revenue for games sold through it, as well as waiving the normal 5% cut for games that use the Unreal Engine. The chat application Discord followed suit a few days later, promoting only a 10% cut on games sold through its store. Legal disputes Steam's predominance in the gaming market has led to Valve becoming involved in various legal cases. The lack of a formal refund policy led the Australian Competition & Consumer Commission (ACCC) to sue Valve in September 2014 for violating Australian consumer laws that required stores to offer refunds for faulty or broken products. The Commission won the lawsuit in March 2016, though the court recognized that Valve had changed its policy in the interim. The ACCC argued to the court that Valve should be fined 3 million Australian dollars "in order to achieve both specific and general deterrents, and also because of the serious nature of the conduct" prior to the policy changes. Valve argued that the previous court case had made "no finding that Valve's conduct was intended to mislead or deceive consumers", and argued for a smaller fine. In December 2016, the court sided with the ACCC and fined Valve A$3 million, as well as requiring Valve to include proper language for Australian consumers outlining their rights when purchasing games off Steam. Valve sought to appeal the rulings, arguing in part that it did not have a physical presence in Australia, but these appeals were thrown out by higher courts by December 2017. In January 2018, Valve filed for special leave to appeal the decision to the High Court of Australia, but the High Court dismissed this request, affirming that Valve was still bound by Australian law since it sold products directly to Australian citizens.
Later, in September 2018, Valve's Steam refund policy was found to be in violation of France's consumer laws; Valve was fined and required to modify its refund policy appropriately. In December 2015, the French consumer group UFC-Que Choisir initiated a lawsuit against Valve over several of its Steam policies that conflict or run afoul of French law, including the restriction against reselling purchased games, which is legal in the European Union. In September 2019, the Tribunal de grande instance de Paris found that Valve's practice of preventing resales violated the European Union's Information Society Directive of 2001 and the Computer Programs Directive of 2009, and required Valve to allow resales in the future. The decision was primarily based on the court's finding that Steam sells licenses to software titles, despite Valve's claim that it was selling subscriptions, which are not covered by the Directives. The company stated that it would appeal the decision. The Interactive Software Federation of Europe (ISFE) issued a statement that the French court ruling goes against established EU case law related to digital copies and threatened to upend much of the digital distribution systems in Europe should it be upheld. In August 2016, BT Group filed a lawsuit against Valve stating that Steam's client infringes on four of its patents, which it stated were used within Steam's Library, Chat, Messaging, and Broadcasting services. In 2017, the European Commission began investigating Valve and five other publishers—Bandai Namco Entertainment, Capcom, Focus Home Interactive, Koch Media and ZeniMax Media—for anti-competitive practices, specifically the use of geo-blocking through the Steam storefront and Steam product keys to prevent access to software for citizens of certain countries within the European Economic Area. Such practices would be against the Digital Single Market initiative set by the European Union. The French gaming trade group Syndicat National du Jeu Vidéo noted that geo-blocking was a necessary feature to hinder inappropriate product key reselling, where a group buys a number of keys in regions where the cost is low and then resells them in regions of much higher value to profit on the difference, outside of European oversight and tax laws. The Commission found, in January 2021, that Valve and its co-defendants had violated antitrust rules of the European Union, issued combined fines of around €7.8 million, and determined that these companies may be further liable to lawsuits from affected consumers. Valve had chosen "not to cooperate" and was fined over €1.6 million. The publishers' fines, which amounted to more than €6 million, were reduced for cooperation with the EC. A January 2021 class-action lawsuit filed against Valve asserted that the company forced developers into entering a "most favored nation"-type pricing contract to offer games on its storefront, which required the developers to price their games the same on other platforms as they did on Steam, thus stifling competition. Gamasutra's Simon Carless analyzed the lawsuit and observed that Valve's terms only apply to the resale of Steam keys and not the games themselves, and thus the lawsuit may be without merit.
A separate class-action lawsuit filed against Valve by Wolfire Games in April 2021 asserted that Steam is essentially a monopoly, because developers who want to sell games to personal computer users must sell through Steam, and that its 30% cut and its "most favored nation" pricing practices violate antitrust laws as a result of that position. Valve's response to the suit, filed in July 2021, sought to dismiss the complaint, stating that it "has no duty under antitrust law to allow developers to use free Steam Keys to undersell prices for the games they sell on Steam—or to provide Steam Keys at all". Valve further defended its 30% revenue share as meeting the current industry standard. Wolfire's suit was dismissed by the presiding judge in November 2021, with the judge determining that Wolfire had failed to show that Valve had a monopoly on game sales and that the 30% cut was consistent with, if not higher than, that of other vendors. Notes References External links 2003 software Android (operating system) software Digital rights management systems DRM for MacOS DRM for Windows Freeware Internet properties established in 2003 IOS software Multiplayer video game services Proprietary cross-platform software Proprietary freeware for Linux Software based on WebKit
38278780
https://en.wikipedia.org/wiki/Deerwalk%20Institute%20of%20Technology
Deerwalk Institute of Technology
Deerwalk Institute of Technology is a private college established in 2010 as a collaboration between Nepalese entrepreneurs and the United States-based software company Deerwalk Inc. It is affiliated with Tribhuvan University, the oldest university in Nepal. Under this affiliation, DWIT offers two undergraduate programs, B.Sc. CSIT and BCA. History Deerwalk Institute of Technology was founded by Rudra Raj Pandey in 2010. It was established as a collaboration between Nepalese entrepreneurs and the US-based software company Deerwalk Inc. The first batch had a total of eight students. Buildings and infrastructure The DWIT campus is situated in Sifal, Kathmandu. With a garden and canteen in its front yard, the DWIT building stands four storeys tall. The top storey is occupied by Sagarmatha Hall, where all the major sessions are held; it is a spacious establishment with a capacity of over 100 people. Library The DWIT Library holds a significant number of books related to computer science. It is handled solely by student interns at DWIT. All library transactions are done using Gyansangalo, DWIT's library management system. Cafeteria The DWIT cafeteria is situated in the front yard of the DWIT building. An online portal, the Canteen Management System, is used to carry out canteen transactions. All members associated with DWIT can log in to the system using their respective DWIT emails and order from a range of food options. Academics DWIT offers a Bachelor of Science in Computer Science and Information Technology (B.Sc. CSIT) and a Bachelor in Computer Application (BCA), both run under the curriculum of Tribhuvan University. These are among the comprehensive computer science courses offered by Tribhuvan University. The four-year course is categorized into two domains – Computer Science and Mathematics. In the first three semesters the course consists mainly of mathematics and basic programming concepts; in the later semesters it progresses towards computational theory and artificial intelligence. Student life The students at DWIT come from different cities and towns across Nepal. Clubs and activities There are twelve student-run clubs at DWIT, established and run solely by students. Each club has a club president, a club vice-president, and five members at its core. Major activities and fundraising events are organized by the clubs. Internship Deerwalk Services Deerwalk is a privately held company based in Lexington, Massachusetts, with over 300 employees worldwide, including a technology campus, DWIT. DWIT and Deerwalk Services occupy the same premises. Deerhold Nepal Deerwalk Compware, or Deerhold Nepal, is a subsidiary of the Deerwalk Group and was founded in July 2017. It provides IT consulting services and custom software development and distributes IT products in Nepal. Research DWIT Research and Development Unit (R&D Unit) The DWIT Research and Development (R&D) team is the innovative unit of DWIT. The goal of the team is to research new products and services and to contribute to DWIT's facilities and to society. Students of DWIT work in this department as interns. The major task of the team is the production of digital video classes. The video lectures are distributed for free by the Deerwalk Learning Center. The videos are interactive learning resources for Grades 4-12, designed as per the curriculum prescribed by the Curriculum Development Center. The videos have an estimated reach of more than 30 lakh (3 million) people in Nepal.
DWIT Incubation Center DWIT provides a workplace for budding and newly established startups, known as the Incubation Center. It is a space given to student entrepreneurs to develop their businesses in the initial and transitional phases. The facility is provided until the start-ups are moderately stable. References External links Deerwalk Foods DWIT News Deerwalk Learning Center Deerwalk Education in Kathmandu Technical universities and colleges
21032020
https://en.wikipedia.org/wiki/Double%20Tools%20for%20DoubleSpace
Double Tools for DoubleSpace
Double Tools for DoubleSpace is a software utility released in 1993 by the Menlo Park-based company Addstor, Inc. The utility functioned as an add-on to the disk compression software DoubleSpace, supplied with MS-DOS 6.0, adding a number of features not available in the standard version. Features Most of the Double Tools utilities worked from Microsoft Windows, providing a graphical view of, and control panel for, the compressed drives on the computer (the utilities supplied with MS-DOS operated only in DOS mode). Double Tools also contained a number of disk checking and rescue/recovery utilities. Some of the included utilities were called Silent Tools. One feature unique for its time was the capability to defragment a DoubleSpace compressed drive in the background. Some of the features, including the background defragmentation capability, required the user to let Double Tools replace the standard compression driver for MS-DOS (DBLSPACE.BIN) with one developed by Addstor, claimed to be 100% compatible with DoubleSpace and the Microsoft Real-Time Compression Interface introduced in MS-DOS 6.0. This driver added a number of extra features, such as the use of 32-bit code paths when it detected an Intel 80386 or higher CPU and caching capabilities; in addition to supporting the use of the Upper Memory Area, it also permitted the use of Extended Memory for some of its buffers (reducing the driver's total footprint in conventional and upper memory, albeit at the cost of somewhat reduced speed). Other features provided by Double Tools included the ability to have compressed removable media auto-mounted as they were used (instead of the user having to do this manually). Although this capability was later introduced into the standard version of DoubleSpace found in MS-DOS 6.2, Double Tools also had the capability to put a special utility on compressed floppy disks that made it possible to access the compressed data even on computers that did not have DoubleSpace (or Double Tools). Another notable function was the ability to split a compressed volume over multiple floppy disks, with the entire volume visible with only the first disk inserted (and the user prompted to change disks as necessary). It was also possible to share a compressed volume with a remote computer. References Hard disk software Data compression software DOS software Discontinued software
26798826
https://en.wikipedia.org/wiki/Rmetrics
Rmetrics
Rmetrics is a free, open-source and open development software project for teaching computational finance. Rmetrics is based primarily on the statistical R programming language, but does contain contributions in other programming languages, Fortran, C, and C++. The project was started in 2001 by Diethelm Wuertz, based at the Swiss Federal Institute of Technology in Zurich. Rmetrics Packages Most Rmetrics components are distributed as R packages, which are add-on modules for R. Goals The broad goals of the projects are to provide widespread access to a broad range of powerful statistical and graphical methods for the analysis of market data and risk management in finance. to provide a common software platform that enables the rapid development and deployment of extensible, scalable, and interoperable software. to strengthen the scientific understanding by producing high-quality documentation and reproducible research. to train researchers on computational and statistical methods for the analysis of financial data and for financial risk management. R/Rmetrics Project Rmetrics and the R package system provides a broad range of advantages to the Rmetrics project including a high-level interpreted language in which one can easily and quickly prototype new computational methods. It includes a well established system for packaging together software components and documentation. It can address the diversity and complexity of computational finance and financial engineering problems in a common object-oriented framework. It supports a rich set of statistical simulation and modeling activities. It contains cutting edge data and model visualization capabilities. It has been the basis for pathbreaking research in parallel statistical computing. Open Source Commitment The Rmetrics project has a commitment to full open source discipline, with distribution via a SourceForge.net-like platform. All software contributions are expected to exist under an open source license such as GPL2, Artistic 2.0, or BSD. There are many different reasons why open—source software is beneficial to a software project in finance. The reasons include to provide full access to algorithms and their implementation to facilitate software improvements through bug fixing and software extension to encourage good scientific computing and statistical practice by providing appropriate tools and instruction to provide a workbench of tools that allow researchers to explore and expand the methods used to analyze biological data to ensure that the international scientific community is the owner of the software tools needed to carry out research to lead and encourage commercial support and development of those tools that are successful to promote reproducible research by providing open and accessible tools with which to carry out that research (reproducible research is distinct from independent verification) to encourage users to join the Rmetrics project, either by contributing Rmetrics compliant packages or documentation. Rmetrics Repository The Rmetrics Repository is hosted by R-forge. 
The following developers (in alphabetical order) contribute or have contributed to the Rmetrics packages: Andrew Ellis, Christophe Dutang, David Lüthi, David Scott, Diethelm Würtz, Francesco Gochez, Juri Hinz, Marco Perlin, Martin Mächler, Maxime Debon, Petr Savicky, Philipp Erb, Pierre Chausse, Sergio Guirreri, Spencer Graves, Yohan Chalabi Resources See also Computational finance R (programming language) External links Financial software Free R (programming language) software Free science software Science software for Linux Science software for MacOS Science software for Windows
571303
https://en.wikipedia.org/wiki/Position-independent%20code
Position-independent code
In computing, position-independent code (PIC) or position-independent executable (PIE) is a body of machine code that, being placed somewhere in the primary memory, executes properly regardless of its absolute address. PIC is commonly used for shared libraries, so that the same library code can be loaded in a location in each program address space where it does not overlap with other memory in use (for example, other shared libraries). PIC was also used on older computer systems that lacked an MMU, so that the operating system could keep applications away from each other even within the single address space of an MMU-less system. Position-independent code can be executed at any memory address without modification. This differs from absolute code, which must be loaded at a specific location to function correctly, and load-time locatable (LTL) code, in which a linker or program loader modifies a program before execution so it can be run only from a particular memory location. Generating position-independent code is often the default behavior for compilers, but they may place restrictions on the use of some language features, such as disallowing use of absolute addresses (position-independent code has to use relative addressing). Instructions that refer directly to specific memory addresses sometimes execute faster, and replacing them with equivalent relative-addressing instructions may result in slightly slower execution, although modern processors make the difference practically negligible. History In early computers such as the IBM 701 (29 April 1952) or the UNIVAC I (31 March 1951), code was position-dependent: each program was built to load into and run from a particular address. Those early computers did not have an operating system and were not multitasking-capable. Programs were loaded into main storage (or even stored on magnetic drum for execution directly from there) and run one at a time. In such an operational context, position-independent code was not necessary. The IBM System/360 (7 April 1964) was designed with truncated addressing similar to that of the UNIVAC III, with code position independence in mind. In truncated addressing, memory addresses are calculated from a base register and an offset. At the beginning of a program, the programmer must establish addressability by loading a base register; normally the programmer also informs the assembler with a USING pseudo-op. The programmer can load the base register from a register known to contain the entry point address, typically R15, or can use the BALR (Branch And Link, Register form) instruction (with an R2 value of 0) to store the next sequential instruction's address into the base register, which is then coded explicitly or implicitly in each instruction that refers to a storage location within the program. Multiple base registers can be used, for code or for data. Such instructions require less memory because they do not have to hold a full 24-, 31-, 32-, or 64-bit address (4 or 8 bytes), but instead a base register number (encoded in 4 bits) and a 12-bit address offset, requiring only two bytes. This programming technique is standard on IBM S/360 type systems, and it has remained in use through to today's IBM System z. When coding in assembly language, the programmer has to establish addressability for the program as described above and also use other base registers for dynamically allocated storage. Compilers automatically take care of this kind of addressing.
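As a rough sketch of the idea (purely illustrative; the structure and function below are invented for this explanation and are not IBM code), base-displacement addressing can be modeled as computing every storage address at run time from the current contents of a base register plus a small encoded offset:

#include <stdint.h>

/* Illustrative model of S/360-style base-displacement addressing. */
typedef struct {
    uint32_t gpr[16];   /* general-purpose registers R0..R15 */
} Cpu;

/* A storage operand encodes a base register number (4 bits) and a
   displacement (12 bits, 0..4095).  The effective address is formed at
   run time from whatever the base register currently holds. */
static uint32_t effective_address(const Cpu *cpu, unsigned base_reg, unsigned disp)
{
    return cpu->gpr[base_reg & 0xF] + (disp & 0xFFF);
}

If the loader places the program at a different storage location, only the value loaded into the base register changes; every encoded base/displacement operand, and therefore the instruction stream itself, stays the same.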
IBM's early operating system DOS/360 (1966) did not use virtual storage (since the early models of System/360 did not support it), but it did have the ability to place programs at an arbitrary (or automatically chosen) storage location during loading, via the PHASE name,* JCL (Job Control Language) statement. So, on S/360 systems without virtual storage, a program could be loaded at any storage location, but this required a contiguous memory area large enough to hold that program. Sometimes memory fragmentation would occur from loading and unloading differently sized modules. Virtual storage, by design, does not have that limitation. While DOS/360 and OS/360 did not support PIC, transient SVC routines in OS/360 could not contain relocatable address constants and could run in any of the transient areas without relocation. Virtual storage was first introduced on the IBM System/360 Model 67 in 1965 to support IBM's first multitasking, time-sharing operating system, TSS/360. Later versions of DOS/360 (DOS/VS etc.) and later IBM operating systems all utilized virtual storage. Truncated addressing remained part of the base architecture, and it is still advantageous when multiple modules must be loaded into the same virtual address space. By way of comparison, on early segmented systems such as Burroughs MCP on the Burroughs B5000 (1961) and Multics (1964), paging systems such as IBM TSS/360 (1967), or base and bounds systems such as GECOS on the GE 625 and EXEC on the UNIVAC 1107, code was also inherently position-independent, since addresses in a program were relative to the current segment rather than absolute. The invention of dynamic address translation (the function provided by an MMU) originally reduced the need for position-independent code because every process could have its own independent address space (range of addresses). However, multiple simultaneous jobs using the same code created a waste of physical memory. If two jobs run entirely identical programs, dynamic address translation provides a solution by allowing the system simply to map the same address (for example, address 32K) in two different jobs to the same bytes of real memory, containing the single copy of the program. Different programs may share common code. For example, the payroll program and the accounts receivable program may both contain an identical sort subroutine. A shared module (a shared library is a form of shared module) gets loaded once and mapped into the two address spaces. Technical details Procedure calls inside a shared library are typically made through small procedure linkage table stubs, which then call the definitive function. This notably allows a shared library to inherit certain function calls from previously loaded libraries rather than using its own versions. Data references from position-independent code are usually made indirectly, through Global Offset Tables (GOTs), which store the addresses of all accessed global variables. There is one GOT per compilation unit or object module, and it is located at a fixed offset from the code (although this offset is not known until the library is linked). When a linker links modules to create a shared library, it merges the GOTs and sets the final offsets in code. It is not necessary to adjust the offsets when loading the shared library later. Position-independent functions accessing global data start by determining the absolute address of the GOT given their own current program counter value. This often takes the form of a fake function call in order to obtain the return address on the stack (x86), in a specific standard register (SPARC, MIPS), or in a special register (POWER/PowerPC/Power ISA), which can then be moved to a predefined standard register, or to obtain it directly in that standard register (PA-RISC, Alpha, ESA/390 and z/Architecture). Some processor architectures, such as the Motorola 68000, Motorola 6809, WDC 65C816, Knuth's MMIX, ARM and x86-64, allow referencing data by offset from the program counter. This is specifically targeted at making position-independent code smaller, less register-demanding and hence more efficient.
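As a minimal sketch of how this looks in practice (assuming a GCC- or Clang-style toolchain on an ELF platform; the file name, symbol name and commands below are illustrative examples, not taken from this article's sources), a global variable in a shared library compiled as position-independent code is reached through the GOT rather than through an absolute address embedded in the instructions:

/* pic_got_demo.c (hypothetical example)
   Build as a position-independent shared library:
       cc -fPIC -shared -o libpic_got_demo.so pic_got_demo.c
   Inspecting the result shows the mechanism described above:
       readelf -r libpic_got_demo.so    (GOT relocations such as R_X86_64_GLOB_DAT)
       objdump -d libpic_got_demo.so    (PC-relative, e.g. RIP-relative, addressing on x86-64) */

int shared_counter = 0;   /* exported global: addressed through the GOT */

int bump(void)
{
    /* With -fPIC the compiler loads this variable's address from the GOT
       using a program-counter-relative reference, so the library runs
       correctly at whatever address it is loaded. */
    return ++shared_counter;
}

Position-independent executables are produced in the same way; with GCC or Clang this is typically a matter of compiling with -fPIE and linking with -pie.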
Windows DLLs Dynamic-link libraries (DLLs) in Microsoft Windows use variant E8 of the CALL instruction (Call near, relative, displacement relative to next instruction). These instructions need not be fixed up when a DLL is loaded. Some global variables (e.g. arrays of string literals, virtual function tables) are expected to contain the address of an object in the data section or, respectively, in the code section of the dynamic library; therefore, the stored address in the global variable must be updated to reflect the address at which the DLL was loaded. The dynamic loader calculates the address referred to by a global variable and stores the value in that global variable; this triggers copy-on-write of the memory page containing that global variable. Pages with code, and pages with global variables that do not contain pointers to code or global data, remain shared between processes. This operation must be done in any OS that can load a dynamic library at an arbitrary address. In Windows Vista and later versions of Windows, the relocation of DLLs and executables is done by the kernel memory manager, which shares the relocated binaries across multiple processes. Images are always relocated from their preferred base addresses, achieving address space layout randomization (ASLR). Versions of Windows prior to Vista require that system DLLs be prelinked at non-conflicting fixed addresses at link time in order to avoid runtime relocation of images. Runtime relocation in these older versions of Windows is performed by the DLL loader within the context of each process, and the resulting relocated portions of each image can no longer be shared between processes. The handling of DLLs in Windows differs from the earlier OS/2 procedure it derives from. OS/2 presents a third alternative and attempts to load DLLs that are not position-independent into a dedicated "shared arena" in memory, and maps them once they are loaded. All users of the DLL are able to use the same in-memory copy. Multics In Multics each procedure conceptually has a code segment and a linkage segment. The code segment contains only code and the linkage section serves as a template for a new linkage segment. Pointer register 4 (PR4) points to the linkage segment of the procedure. A call to a procedure saves PR4 on the stack before loading it with a pointer to the callee's linkage segment. The procedure call uses an indirect pointer pair with a flag to cause a trap on the first call, so that the dynamic linkage mechanism can add the new procedure and its linkage segment to the Known Segment Table (KST), construct a new linkage segment, put their segment numbers in the caller's linkage section and reset the flag in the indirect pointer pair. TSS In the IBM S/360 Time Sharing System (TSS/360 and TSS/370), each procedure may have a read-only public CSECT and a writable private Prototype Section (PSECT).
A caller loads a V-constant for the routine into General Register 15 (GR15) and copies an R-constant for the routine's PSECT into the 19th word of the save area pointed to by GR13. The Dynamic Loader does not load program pages or resolve address constants until the first page fault. Position-independent executables Position-independent executables (PIE) are executable binaries made entirely from position-independent code. While some systems only run PIC executables, there are other reasons they are used. PIE binaries are used in some security-focused Linux distributions to allow PaX or Exec Shield to use address space layout randomization, preventing attackers from knowing where existing executable code is during a security attack using exploits that rely on knowing the offset of the executable code in the binary, such as return-to-libc attacks. Apple's macOS and iOS fully support PIE executables as of versions 10.7 and 4.3, respectively; a warning is issued when non-PIE iOS executables are submitted for approval to Apple's App Store, but there is no hard requirement yet and non-PIE applications are not rejected. OpenBSD has had PIE enabled by default on most architectures since OpenBSD 5.3, released on 1 May 2013. Support for PIE in statically linked binaries, such as the executables in the /bin and /sbin directories, was added near the end of 2014. openSUSE added PIE as a default in February 2015. Beginning with Fedora 23, Fedora maintainers decided to build packages with PIE enabled as the default. Ubuntu 17.10 has PIE enabled by default across all architectures. Gentoo's new profiles now support PIE by default. Around July 2017, Debian enabled PIE by default. Android enabled support for PIEs in Jelly Bean and removed non-PIE linker support in Lollipop. See also Dynamic linker Object file Code segment Notes References External links Introduction to Position Independent Code Position Independent Code internals Programming in Assembly Language with PIC The Curious Case of Position Independent Executables Operating system technology Computer libraries Computer file formats
18399377
https://en.wikipedia.org/wiki/Dougherty%20Comprehensive%20High%20School
Dougherty Comprehensive High School
Dougherty Comprehensive High School is a four-year secondary school located in Albany, Georgia, United States. It is part of the Dougherty County School System, along with Monroe Comprehensive High School and Westover Comprehensive High School. It was founded in 1963. DCHS enrolls about 869 students. The student body is 94% African-American, 4% Caucasian, and 2% of other races. DCHS is a Title I school, with about 86% economically disadvantaged students and about 7% with disabilities. Dougherty High is the first and only high school in the Dougherty County School System under the charter school system to implement the International Baccalaureate Program, starting in the 2008-2009 school year. The school colors are maroon, silver and white, and its mascot is the Trojan. Early history Dougherty High School was built in an effort to accommodate East Albany and the growing number of students from the two military bases located nearby. One was a SAC Air Force base, Turner AFB, which later became Naval Air Station Albany; the other was a Marine base, Marine Corps Supply Center Albany. The school opened its first year in September 1963 (without a senior class). The first graduating class was in 1965. "Onward, upward we shall strive, senior class of 65" was the class motto. This was before the comprehensive approach to education was adopted. Each grade was divided into three levels of achievement: above average, average and below average. Dougherty High was an excellent school with above-average teachers, most of whom had master's degrees and were tops in their fields. The graduation rate was 98%, which was greater than the state average of the period. There was a broad spectrum of classes, including math, English, physical science, social science, sociology and psychology, biology, industrial arts and home economics, mechanical drawing and shop, music, language arts and business administration. Sports include track, football, basketball, tennis, softball, soccer, cross country, wrestling, golf and cheerleading. Dougherty High has many clubs and organizations, such as Band, Civitans and Civinetts, Key Club, Anchor Club, Interact, Glee Club, Spanish Club, Future Homemakers of America, Future Business Leaders of America, FTA, Audio Visual and Allied Medical Student Council and Beta Club, Science Club, SkillsUSA and the Thespians. The first black students to attend Dougherty High were Brenda Barlow and Shirley Carruthers, both 1966 seniors. The closing of the Air Force base, and later the Navy base, in the late 1960s provided an opportunity for the Dougherty County School System to move a large portion of Monroe High School's largely black student population to the relatively newer Dougherty High School. The three-tier class system could no longer be supported, so the Dougherty County School System changed to the comprehensive method of class dispersal. In other words, all the students of a particular grade were taught the same thing on the same level. Hence Dougherty High School became Dougherty Comprehensive High School. Athletics Football The football team won the GHSA Class AAA State Championship in 1998. Basketball Dougherty won the boys' GHSA State Basketball Championships in 1997 (AAA) and 2001 (AAAA). Extracurricular organizations FCCLA - This group's goal is to promote growth and leadership development through family and consumer sciences education, focusing on the multiple roles of family member, wage earner, and community leader. Its motto is "Toward New Horizons."
JGG - Jobs for Georgia's Graduates is a school-to-work transition program for seniors. It prepares students for the world of work and for continuing on to higher education. Fellowship of Christian Athletes - This group provides Christian fellowship opportunities. FCA members meet on Tuesday mornings for prayer and devotionals, and meet periodically at the flagpole for prayer. Alpha Mentee - This organization has been at Dougherty High since 2000. It is designed to build character education and to familiarize high school students with basic life skills, such as developing self-discipline, having high self-esteem, and promoting positive values. It involves its members in community service projects such as local nursing home visits and seasonal food drives. Mentees take part in college visitations. The organization is networked with and affiliated under the Alpha Phi Alpha fraternity. Future Business Leaders of America - Dougherty High's FBLA is part of Georgia FBLA, a nonprofit student organization committed to preparing today's students for success in business leadership. With over 75 members, it is the premier organization for student leaders. Its mission is to bring business and education together in a positive working relationship through innovative leadership and career development programs. FBLA's motto is "Service, Education, and Progress." MCJROTC - This provides leadership training for students. Phantom Trojan Marching Band The Phantom Trojan Marching Band (PTMB) has participated in marching band festivals and competitions in Georgia, Florida, and Alabama. It has won in both traditional and corps-style competitions. The drumline, known as Phantom Phunq, was crowned grand champion three consecutive years (2001-2003) at the annual Battle of the Drumline in Columbus, Georgia. Members of this band appeared in the movie Drumline. Along with marching band, students participate in other ensembles such as symphonic band, concert band, jazz band, percussion ensemble, and other musical activities. Chorale The DCHS chorale has won local and national awards and achievements and received praise and acknowledgment from the media. They received a Grammy Signature Award, and have won first place and overall winner awards at 15 national music festival competitions throughout the United States. They performed for President Jimmy Carter in January 2005 at the National Annual Black Caucus Convention in Washington, D.C. (for five years straight dating back to September 2003). They have performed with gospel artists Richard Smallwood and Vickie Winans, with classical composers James Mulholland and Moses Hogan, and with the Southeastern Symphony Orchestra. The DCHS Chorale also released a record, ONE WORLD, in 2004, a compilation of choral music which includes a cappella motets, anthems, spirituals, inspirations, and gospel selections. The chorale serves through public performances and charitable contributions throughout their community and country. They have presented benefit concerts for homeless shelters and disaster relief. Events Sponsored by the DCHS Renaissance Club, DCHS Honors Day is a day for recognizing students for their academic accomplishments, college acceptances, scholarships, and positive character traits. In the past, this ceremony was held in each of the four nine-week periods of the school year. This tradition ended in December 2007; Honors Day now occurs at the end of the fall semester, and in May. DCHS Pageant is a female pageant for Miss Dougherty High held every spring.
Candidates must be sophomores or juniors. Each candidate represents her extracurricular organization. The winner is named as Miss DCHS for the next school year term. DCHS Gentillion is a male pageant held in October or November. Each candidate represents his extracurricular organization. Unlike the DCHS pageant, sophomores, juniors, and seniors are allowed to participate as candidates for the contest. The winner is named as Mr. DCHS for the school year. Georgia High School Graduation Test Stop and Drop Rally is a new tradition to Dougherty High since the 2006-2007 school term. This is a pep rally held in March a few days before the juniors' graduation test. Its main purpose is to increase self-esteem and provide guidance and support for test takers. DCHS Senior Week is a reward for seniors in honor of their hard work and dedication throughout their high school career. It is usually held in late April or early May. Seniors express themselves by wearing themed attire every day in the week, for example, nerd day, spirit day, and celebrity day. After school hours, seniors gather for social activities such as bowling and movies. Homecoming - Every day of the week a special tradition is displayed throughout the school. On Friday the traditional homecoming football game is held. During this, homecoming court is introduced before the game or at halftime. The queen and king are announced, based on the vote of the student body. The homecoming dance occurs the day afterward. School Renovations Phase I consisted of the addition of the fine arts hall. This hall features two art rooms (2D and 3D), band room, orchestra room, choral room, backstage, and dance studio. Other additions and renovations include the new gallery area in front of the auditorium, black box theatre, renovated auditorium, special education classrooms and health classrooms. Phase II The work consists of constructing a new central plant, adding to the office and media center, adding a new drafting lab and mechanical spaces to the annex building, converting the breezeway to interior space, and enclosing the space between the main building and the annex building. The existing classrooms and support spaces in the main building will be renovated and modified to include new finishes (floor coverings, ceilings, interior wall finishes/systems, skylights), new mechanical/HVAC, electrical, plumbing, and fire sprinkler systems. The existing bituminous roof will be removed and replaced. The teachers' parking lot on the east wing of the building and service entrance behind the building will be paved. Other renovations and modifications include reconstructing and modernizing the gym, finishing the renovation of the auditorium, converting the auto mechanics garage to a shooting range for MCJROTC, and converting the construction garage to a health occupations lab. Construction began in September 2013, and was expected to be complete in the spring of 2015. Notable alumni Stanley Floyd - champion track and field sprinter, University of Houston Lionel James - former NFL player, running back for the San Diego Chargers, member of the 1978-1980 Dougherty High School Trojans football team Alexander Johnson - former NBA player, player for the Miami Heat Ray Knight - former MLB player (Cincinnati Reds, Houston Astros, New York Mets, Baltimore Orioles, Detroit Tigers) and former manager of the Cincinnati Reds, DHS 1970 graduate. 
Gene Martin - former pinch hitter and left fielder for the Washington Senators and in Nippon Professional Baseball Michael Reid - former NFL linebacker for the Atlanta Falcons, member of the 1980-1982 Dougherty High School Trojans football team Daryl Smith - former NFL linebacker for the Jacksonville Jaguars, member of the 1998 Dougherty High School Trojans football team Montavious Stanley - former NFL defensive tackle for the Dallas Cowboys, Jacksonville Jaguars, and the Atlanta Falcons References External links Dougherty High School http://www.docoschools.org http://www.walb.com/global/story.asp?s=7134592&ClientType=Printable http://www.walb.com/Global/story.asp?S=7186200&nav=5kZQ Public high schools in Georgia (U.S. state) Educational institutions established in 1963 Charter schools in Georgia (U.S. state) 1963 establishments in Georgia (U.S. state) Schools in Dougherty County, Georgia Buildings and structures in Albany, Georgia
54458210
https://en.wikipedia.org/wiki/Princeton%20%28electronics%20company%29
Princeton (electronics company)
Princeton Ltd. is a Japanese company headquartered in Tokyo, Japan, that offers computer hardware and electronics products. Overview The company was established in 1995 as Princeton Technology Ltd. It is essentially a fabless company, designing its products and ordering them from manufacturers in Taiwan, China and elsewhere. The company offers flash memory products (SD cards, USB flash drives), DRAM, LCDs, LED displays, hard disk drives, NAS and other electronics products. Princeton products are sold mostly in the Japanese domestic market, but several products can be found at online retailers such as Amazon.com. The business type and scope are similar to those of Green House, Elecom and Buffalo, which are also Japanese companies. In 2014, the company name was changed from Princeton Technology Ltd. to Princeton Ltd. In the business-to-business market, as a supplier of computer hardware, Princeton has supplied various flash memory and DRAM products to major electronics companies in Japan, such as Sony, Panasonic and Toshiba. Princeton is also known as the official agent of Cisco, Polycom, Edgewater Networks, Proware Technology and Drobo, among others, and has introduced several cloud collaboration systems and SAN systems in Japan. The company has presented IT solutions for education systems by installing Cisco and Edgewater Networks cloud collaboration products and, as another example, SAN systems by installing Princeton, Proware Technology and Drobo NAS products. See also List of companies of Japan References External links Official Website Computer companies established in 1995 Computer hardware companies Computer memory companies Computer peripheral companies Computer storage companies Electronics companies of Japan Japanese brands Japanese companies established in 1995
8740226
https://en.wikipedia.org/wiki/Cathy%20Gillen%20Thacker
Cathy Gillen Thacker
Cathy Gillen Thacker is an American author of over seventy romance novels. Biography Thacker began writing to occupy herself while she was raising small children. She wrote seven books as she taught herself how to be an author. Her eighth attempt was finally published in July 1982. She has written over seventy novels since then, which have been published in seventeen languages and thirty-five countries. Thacker is a charter member of Romance Writers of America. She and her husband Charlie have three children, Julie, David, and Sarah. Bibliography Too Many Dads Baby on the Doorstep (1994) Daddy to the Rescue (1994) Too Many Moms (1994) Wild West Weddings The Cowboy's Bride (1996) The Ranch Stud (1996) The Maverick Marriage (1996) One Hot Cowboy (1997) Spur-Of-The-Moment Marriage (1997) McCabe Family Dr. Cowboy (1999) Wildcat Cowboy (1999) A Cowboy's Woman (1999) A Cowboy Kind of Daddy (1999) Texas Vows (2001) The Ultimate Texas Bachelor (2005) Santa's Texas Lullaby (2005) A Texas Wedding Vow (2006) Blame It On Texas (2006) A Laramie, Texas Christmas (2006) From Texas, With Love (2007) Brides of Holly Springs The Virgin's Secret Marriage (2003) The Secret Wedding Wish (2004) The Secret Seduction (2004) Plain Jane's Secret Life (2004) Her Secret Valentine (2005) Deveraux Legacy Her Bachelor Challenge (2002) His Marriage Bonus (2002) My Secret Wife (2002) Their Instant Baby (2002) The Heiress (2003) Taking Over the Tycoon (2003) Lockhart Women Series The Bride Said, I Did? (2000) The Bride Said, Finally! (2000) The Bride Said, Surprise! (2001) The Virgin Bride Said, Wow! (2001) Texas Legacies: Carrigans The Rancher Next Door (2007) The Rancher's Family Thanksgiving (2007) The Rancher's Christmas Baby (2007) The Gentleman Rancher (2008) Made in Texas Hannah's Baby (2008) The Inherited Twins (2008) A Baby In The Bunkhouse (2008) Found: One Baby (2009) The Lonestar Dad’s Club A Baby for Mommy (2009) A Mommy for Christmas (2009) Wanted: One Mommy (2010) The Mommy Proposal (2010) Texas Legacies: The McCabes A Cowboy Under the Mistletoe (2010) One Wild Cowboy (2011) A Cowboy to Marry (2011) Stand Alone Novels Touch of Fire (1983) Intimate Scoundrels (1983) Wildfire Trace (1984) Heart's Journey (1985) Embrace Me Love (1985) Promise Me Today (1985) A Private Passion (1985) Reach for the Stars (1985) A Family to Cherish (1986) Heaven Shared (1986) The Devlin Dare (1986) Family to Treasure (1987) Rogue's Bargain (1987) Family Affair (1988) Guardian Angel (1988) Fatal Amusement (1988) Natural Touch (1988) Dream Spinners (1988) Perfect Match (1988) One Man's Folly (1989) Lifetime Guarantee (1989) Meant to Be (1990) Slalom to Terror (1990) It's Only Temporary (1990) Father of the Bride (1991) An Unexpected Family (1991) Tangled Web (1992) Home Free (1992) Anything's Possible (1992) Honeymoon for Hire (1993) Beguiled Again (1993) Fiance for Sale (1993) Kidnapping Nick (1993) Guilty as Sin (1994) Baby on the Doorstep (1994) Daddy to the Rescue (1994) Jenny and the Fortune Hunter (1994) Too Many Mums (1994) Love Potion 5 (1994) A Shotgun Wedding (1995) Miss Charlotte Surrenders (1995) Matchmaking Baby (1995) Daddy Christmas (1995) Mathmaking Baby (1996) The Cowboy's Bride (1996) The Ranch Stud (1996) The Maverick Marriage (1996) How to Marry...One Hot Cowboy (1997) Spur-of-the-moment Marriage (1997) Snowbound Bride (1998) Hot Chocolate Honeymoon (1998) Snow Baby (1998) The Cowboy's Mistress (1998) Make Room for Baby (1998) Baby's First Christmas (1998) His Cinderella (1999) A Baby by Chance (2000) Texas Vows 
(2001) Return to Crystal Creek (2002) (with Bethany Campbell and Vicki Lewis Thompson) Lost and Found (2003) Twice and for Always (2003) The Heiress (2003) Blame It on Texas (2006) Christmas Lullaby (2006) A Laramie, Texas Christmas (2006) From Texas, With Love (2007) Omnibus Marriage by Design (1994) Yours, Mine And Ours (1997) (with Marisa Carroll, Penny Jordan) The Cupid Connection (1998) (with Anne Stuart and Vicki Lewis Thompson) In Defense of Love / Her Special Angel / Daddy Christmas / Home for Christmas (1999) (with Kathleen Creighton, Marie Ferrarella) The Baby Game (2000) (with Judy Christenberry) Western Rogues (2002) (with Annette Broadrick) Temporary Santa (2003) (with Leigh Michaels) Her Surprise Baby (2004) (with Paula Detmer Riggs) Special Delivery (2004) (with Maggie Shayne) Her Bachelor Challenge / His Marriage Bonus (2004) My Secret Wife / Their Instant Baby (2004) Be My Baby (2005) (with Adrianne Lee) Married in White (2005) (with Linda O. Johnston) Bride Said, Surprise! / Bride Said, Wow! (2005) Secret Wedding Wish / the Sugar House (2005) (with Christine Flynn) Secret Seduction / Which Child is Mine? (2005) (with Karen Rose Smith) Plain Jane's Secret Life / Beauty and the Black Sheep (2005) (with Jessica Bird) External links Cathy Gillen Thacker Official Website 20th-century American novelists 21st-century American novelists American romantic fiction writers Living people Year of birth missing (living people) American women novelists Women romantic fiction writers 20th-century American women writers Place of birth missing (living people) 21st-century American women writers
36091677
https://en.wikipedia.org/wiki/Kivy%20%28framework%29
Kivy (framework)
Kivy is a free and open source Python framework for developing mobile apps and other multitouch application software with a natural user interface (NUI). It is distributed under the terms of the MIT License, and can run on Android, iOS, Linux, macOS, and Windows. Kivy is the main framework developed by the Kivy organization, alongside Python for Android, Kivy iOS, and several other libraries meant to be used on all platforms. In 2012, Kivy got a $5000 grant from the Python Software Foundation for porting it to Python 3.3. Kivy also supports the Raspberry Pi, which was funded through Bountysource. The framework contains all the elements for building an application, such as: extensive input support for mouse, keyboard, TUIO, and OS-specific multitouch events; a graphics library using only OpenGL ES 2, based on Vertex Buffer Objects and shaders; a wide range of widgets that support multitouch; and an intermediate language (Kv) used to easily design custom widgets. Kivy is the evolution of the PyMT project, and is recommended for new projects. Related projects Buildozer, generic Python packager for Android and iOS. Plyer, platform-independent Python wrapper for platform-dependent APIs. PyJNIus, dynamic access to the Java/Android API from Python. Pyobjus, dynamic access to the Objective-C/iOS API from Python. Python for Android, toolchain for building and packaging Python applications for Android. Kivy for iOS, toolchain for building and packaging Kivy applications for iOS. Audiostream, library for direct access to the microphone and speaker. KivEnt, entity-based game engine for Kivy. Kivy Garden, widgets and libraries created and maintained by the community. Kivy SDK Packager, scripts for Kivy SDK generation on Windows, macOS and Linux. Kivy Remote Shell, remote SSH+Python interactive shell application. KivyPie, Raspbian-based distribution running the latest Kivy framework on the Raspberry Pi. OSCPy, a fast and reliable OSC implementation. Condiment, preprocessor that includes or removes portions of Python code according to environment variables. KivyAuth, social login via Google, Facebook, GitHub and Twitter accounts in Kivy apps. KivMob, AdMob support for Kivy apps. KivyMD, a set of Material Design widgets for Kivy. Code example Here is an example of the Hello world program with just one button:

from kivy.app import App
from kivy.uix.button import Button


class TestApp(App):
    def build(self):
        return Button(text="Hello World")


TestApp().run()

Kv language The Kv language is a language dedicated to describing user interface and interactions in the Kivy framework. As with other user interface markup languages, it is possible to easily create a whole UI and attach interaction. For example, to create a Loading dialog that includes a file browser and Cancel/Load buttons, one could first create the base widget in Python and then construct the UI in Kv. In main.py:

from kivy.uix.floatlayout import FloatLayout


class LoadDialog(FloatLayout):
    def load(self, path, filename):
        pass

    def cancel(self):
        pass

And in the associated Kv:

#:kivy 1.11.1
<LoadDialog>:
    BoxLayout:
        size: root.size
        pos: root.pos
        orientation: "vertical"
        FileChooserListView:
            id: filechooser
        BoxLayout:
            size_hint_y: None
            height: 30
            Button:
                text: "Cancel"
                on_release: root.cancel()
            Button:
                text: "Load"
                on_release: root.load(filechooser.path, filechooser.selection)

Alternatively, the layout (here, a BoxLayout) and the buttons can be loaded directly in the main.py file. Google Summer of Code Kivy participated in Google Summer of Code under the Python Software Foundation. Kivy in GSoC'2014. Kivy in GSoC'2015.
Kivy in GSoC'2016. Kivy in GSoC'2017. See also Pygame, another Python game API, a layer over Simple DirectMedia Layer Cocos2d Panda3D Pyglet Scripting Layer for Android References External links Cross-platform mobile software Cross-platform software Free software programmed in Python Python (programming language) libraries Software using the MIT license
32760230
https://en.wikipedia.org/wiki/Cloudflare
Cloudflare
Cloudflare, Inc. is an American web infrastructure and website security company that provides content delivery network and DDoS mitigation services. Its services occur between a website's visitor and the Cloudflare customer's hosting provider, acting as a reverse proxy for websites. Its headquarters are in San Francisco. History Cloudflare was created in 2009 by Matthew Prince, Lee Holloway, and Michelle Zatlyn, all three of whom worked on Project Honey Pot, an open-source project monitoring internet fraud and abuse. Cloudflare was launched at the TechCrunch Disrupt conference in September 2010. It received media attention in June 2011 for providing security services to the website of LulzSec, a black hat hacking group. From 2009, the company was venture-capital funded. On August 15, 2019, Cloudflare submitted its S-1 filing for IPO on the New York Stock Exchange under the stock ticker NET. It opened for public trading on September 13, 2019 at $15 per share. In February 2014, Cloudflare mitigated what was at the time the largest ever recorded DDoS attack, which peaked at 400 Gigabits per second against an undisclosed customer. In November 2014, it reported another massive DDoS attack with independent media sites targeted at 500 Gbit/s. In March 2013, it defended The Spamhaus Project from a DDoS attack that exceeded 300 Gbit/s. Akamai's chief architect stated that at the time it was "the largest publicly announced DDoS attack in the history of the Internet". Cloudflare has also reportedly absorbed attacks that have peaked over 400Gbit/s from an NTP Reflection attack. In June 2020, it mitigated a DDoS attack that peaked at 754 million packets per second. In August 2021, it announced it had in July stopped a DDoS attack three times larger than any they'd recorded. As of 2020, Cloudflare provides DNS services to over 100,000 customers, covering more than 25 million internet properties. Project Galileo In 2014, Cloudflare launched Project Galileo, an initiative providing free services to protect artists, activists, journalists, and human rights groups from cyber attacks. More than 1,000 users and organizations were participating in Project Galileo as of 2020. Athenian Project In 2017, Cloudflare created the Athenian Project to ensure free protection of online election infrastructures to local and state governments, as well as domestic and foreign political campaigns. WARP On April 1, 2019, Cloudflare announced WARP, a new freemium VPN service that would initially be available through the 1.1.1.1 mobile apps with a desktop app available later. On September 25, 2019, it released WARP to the public. The beta for macOS and Windows was announced on April 1, 2020. Wikimedia On September 6, 2019, Wikipedia was the victim of a DDoS attack. European users were unable to access it for several hours. The attack was mitigated after Wikimedia network engineers used Cloudflare's network and DDoS protection services to reroute and filter internet traffic. The specific Cloudflare product used was Magic Transit. Zatlyn as president In 2020, Cloudflare co-founder and COO Michelle Zatlyn was named president, making her one of the few woman presidents of a publicly traded technology company in the U.S. Project Fair Shot In January 2021, the company established the Project Fair Shot initiative, a free tool that enables global health organizations to maintain a digital queue for COVID-19 vaccinations. Acquisitions Cloudflare has acquired: StopTheHacker (Feb 2014) CryptoSeal (June 2014) Eager Platform Co. 
(December 2016) Neumob (November 2017) S2 Systems (January 2020) Linc (December 2020) Zaraz (December 2021) Vectrix (February 2022) Area 1 Security (February 2022) Products Cloudflare acts as a reverse proxy for web traffic. It supports web protocols including SPDY and HTTP/2, QUIC, and support for HTTP/2 Server Push. DDoS Protection Cloudflare provides DDoS mitigation services that protect customers from distributed denial of service (DDoS) attacks. As of September 2020, it claims to block "an average of 72 billion threats per day, including some of the largest DDoS attacks in history." Content Distribution Network Cloudflare offers a popular content distribution network (CDN) service that it launched in 2010. TechCrunch wrote that its goal was to be "a CDN for the masses". Ten years later, Cloudflare claimed to support over 25 million Internet websites. Teams Cloudflare for Teams is a suite of authentication and security products for business clients, consisting of Gateway, a highly-customizable DNS resolver; and Access, a zero-trust authentication service. Workers In 2017 Cloudflare launched Cloudflare Workers, a serverless computing platform for creating new applications, augmenting existing ones, without configuring or maintaining infrastructure. It has expanded to include Workers KV, a low-latency key-value data store; Cron Triggers, for scheduling cron jobs; and additional tooling for developers to deploy and scale their code across the globe. Pages After being leaked to the press, Cloudflare Pages was launched as a beta in December 2020. It is a Jamstack platform for front-end developers to collaborate and deploy websites on Cloudflare's infrastructure of 200+ data centers worldwide. Security and privacy issues Intrusions The hacker group UGNazi attacked Cloudflare in June 2012 by gaining control over Cloudflare CEO Matthew Prince's voicemail and email accounts, which were hosted on Google. From there, they gained administrative control over Cloudflare's customers and used that to deface 4chan. Prince later acknowledged, "The attack was the result of a compromise that allowed the hacker to eventually access my Cloudflare.com email addresses" and as the media pointed out at the time, "the keys to his business were available to anyone with access to his voicemail." In March 2021, Tillie Kottmann from the hacking collective "Advanced Persistent Threat 69420" demonstrated that the group had gained root shell access to security cameras in Cloudflare offices managed by cloud-based physical security company Verkada after obtaining the credentials of a Verkada superuser account that had been leaked on the Internet. Cloudflare stated that the compromised cameras were in offices that had been officially closed for several months, though the hacking collective also obtained access to Verkada-operated cameras in Cloudflare's offices in New York City, London, Austin and San Francisco. The hacking group told Bloomberg News that it had video archives from all Verkada customers; it accessed footage from Cloudflare's cameras and posted a screenshot of security footage which they said was taken by a Verkada camera in a Cloudflare office. Data leaks From September 2016 until February 2017, a major Cloudflare bug (nicknamed Cloudbleed) leaked sensitive data, including passwords and authentication tokens, from customer websites by sending extra data in response to web requests. 
The leaks resulted from a buffer overflow which occurred, according to numbers provided by Cloudflare at the time, more than 18,000,000 times before the problem was corrected. In May 2017, ProPublica reported that Cloudflare routinely discloses the names and email addresses of persons complaining about hate sites to the operators of those sites, which has led to the complainants being harassed. Cloudflare's general counsel defended the company's policies by saying it is "base constitutional law that people can face their accusers", and noted that there had been a disclaimer on Cloudflare's complaint form since 2015 stating that they "would notify the site owner." Cloudflare's CEO later suggested that, had people not wanted their names shared, they should have provided a false name on the reporting form. In reaction to ProPublica's report, Cloudflare updated its abuse reporting process to provide greater control over disclosure of the complaining party's personally identifying information. Service outages Cloudflare outages can bring down large chunks of the web. There was a major outage, lasting about 30 minutes, on July 2, 2019, attributed to a bad software deployment. In 2020, a misconfiguration of a router caused a data pileup and an outage in major European cities. Controversies Cloudflare has been criticized for not banning websites with hate speech content. The company has said it has a content neutrality policy and that it opposes the policing of its customers on free speech grounds, except in cases where the customers break the law. The company has also faced criticism for not banning websites allegedly connected to terrorism groups, but Cloudflare has maintained that no law enforcement agency has asked the company to discontinue these services and that it closely monitors its obligations under U.S. laws. Free Speech Debate Cloudflare has come under pressure on multiple occasions due to its services being utilized to serve controversial content. As Cloudflare is considered an infrastructure provider, rather than a hosting provider, it is able to maintain broad legal immunity for the content served by its customers. Cloudflare provided DNS routing and DoS protection for the white supremacist and neo-Nazi website The Daily Stormer. In 2017 Cloudflare stopped providing its services to The Daily Stormer after an announcement on the controversial website asserted that the "upper echelons" of Cloudflare were "secretly supporters of their ideology". Previously Cloudflare had refused to take any action regarding The Daily Stormer. As a self-described "free speech absolutist", Cloudflare's CEO Matthew Prince, in a blog post, vowed never to succumb to external pressure again and sought to create a "political umbrella" for the future. Prince further addressed the dangers of large companies deciding what is allowed to stay online, a concern that is shared by a number of civil liberties groups and privacy experts. The Electronic Frontier Foundation, a US digital rights group, said that services such as Cloudflare "should not be adjudicating what speech is acceptable", adding that "when illegal activity, like inciting violence or defamation, occurs, the proper channel to deal with it is the legal system." Terrorism The Huffington Post has documented Cloudflare's services to "at least 7 terrorist groups", as designated by the United States Department of State, including the Taliban, Al-Shabaab, the al-Aqsa Martyrs' Brigades, Hamas, Myanmar's military junta, and the al-Quds Brigades.
Cloudflare has been aware of this since at least 2012 and has taken no action. However, according to Cloudflare's CEO, no law enforcement agency has asked the company to discontinue these services. Two of the top three online chat forums and nearly forty other web sites belonging to the Islamic State of Iraq and the Levant (ISIL) are guarded by Cloudflare. According to Prince, U.S. law enforcement has not asked Cloudflare to discontinue the service, and it has not chosen to do so itself. In November 2015, hacktivist group Anonymous discouraged the use of Cloudflare's services following the ISIL attacks in Paris and additional revelations that Cloudflare aids terrorists. Cloudflare responded by calling the group "15-year-old kids in Guy Fawkes masks", and saying that whenever such concerns are raised it consults anti-terrorism experts and abides by the law. Mass Shootings In 2019, Cloudflare was criticized for providing services to the discussion and imageboard 8chan, which allows users to post and discuss any content with minimal interference from site administrators. The message board has been linked to mass shootings in the United States and the Christchurch mosque shootings in New Zealand. In addition, a number of news organizations, including The Washington Post and The Daily Dot, have reported the existence of child pornography and child sexual abuse discussion boards. A Cloudflare representative has been quoted by the BBC as claiming that the platform "does not host the referenced websites, cannot block websites, and is not in the business of hiding companies that host illegal content". Cloudflare did not terminate service to 8chan until it came under public and legal pressure in the wake of a copycat of the Christchurch mosque shootings in the United States, whose perpetrator similarly used the Cloudflare-protected 8chan to publish the associated manifesto. In an August 3 interview with The Guardian, immediately following the 2019 El Paso shooting, CEO Matthew Prince defended Cloudflare's support of 8chan, stating that he had a "moral obligation" to keep the site online. Crime Cloudflare services have been used by Rescator, a carding website that sells stolen payment card data. Cloudflare has been identified by the European Union's Counterfeit and Piracy Watch List as a "notorious market" which engages in, facilitates or benefits from counterfeiting and piracy. The report notes that Cloudflare hides and anonymizes the operators of 40% of the world's pirate sites, and 62% of the 500 largest such sites, and "does not follow due diligence when opening accounts for websites to prevent illegal sites from using its services." Italian courts have enjoined Cloudflare to cease hosting the pirate television service "IPTV THE BEST" after it was found to be infringing the intellectual property of Sky Italy and the Italian football league, and German courts have similarly found that "Cloudflare and its anonymization services attract structurally copyright infringing websites." Cloudflare is cited in reports by The Spamhaus Project, an international spam tracking organization, for the high number of cybercriminal botnet operations it hosts. An October 2015 report found that Cloudflare provisioned 40% of the SSL certificates used by typosquatting phishing sites, which use deceptive domain names resembling those of banks and payment processors to compromise Internet users' banking and other transactions.
References External links Cloudflare Workers Cloudflare Pages Cloudflare TV 2009 establishments in California 2019 initial public offerings American companies established in 2009 Companies based in San Francisco Companies listed on the New York Stock Exchange Content delivery networks DDoS mitigation companies Domain name registrars Freedom of speech in the United States Internet properties established in 2009 Internet security Internet technology companies of the United States Reverse proxy Technology companies based in the San Francisco Bay Area Virtual private network services
65450882
https://en.wikipedia.org/wiki/Mathias%20Payer
Mathias Payer
Mathias Payer (born 1981) is a Liechtensteiner computer scientist. His research focuses on software and systems security. He is an associate professor at the École Polytechnique Fédérale de Lausanne (EPFL) and head of the HexHive research group. Career Mathias Payer studied computer science at ETH Zurich and received his master's degree in 2006. He then joined the Laboratory for Software Technology of Thomas R. Gross at ETH Zurich as a PhD student and graduated with a thesis on secure execution in 2012, focusing on techniques to mitigate control-flow hijacking attacks. In 2010, he worked at Google as a software security engineer in the anti-malware and anti-phishing team, where he focused on detecting novel malware. In 2012, he joined Dawn Song's BitBlaze group at the University of California, Berkeley as a postdoctoral scholar working on the analysis and classification of memory errors. In 2014, he received an appointment as an assistant professor at Purdue University, where he founded his research laboratory, the HexHive Group. Since 2018 he has been an assistant professor in computer science at EPFL. The HexHive Group is now located on the Lausanne campus of EPFL. Research Payer's research centers on software and systems security. He develops and refines tools that enable software developers to discover and patch software bugs, thereby rendering their programs more resilient to potential software exploits. To reach this goal, Payer employs two strategies. The first is sanitization techniques that point to security issues such as violations of memory safety, type safety, and API flow safety, thereby enabling more resilient software. The second is fuzzing techniques that create input data for programs by combining static and dynamic analysis. The newly generated inputs extend and complement the set of existing test vectors. Using this newly created input data helps to uncover exploitable vulnerabilities. His work on mitigations includes control-flow integrity that makes use of specific language semantics, type integrity checks, and the safeguarding of selected data. Payer's research has led to the discovery of several software vulnerabilities. Among them are the Bluetooth bugs BLURtooth and BLESA, and USBFuzz, a vulnerability that affects the implementation of USB protocol parsing across major operating systems. Payer has contributed to the development of the Decentralized Privacy-Preserving Proximity Tracing (DP-3T) protocol, on which the SwissCovid mobile application is built. The app allows for anonymous contact tracing to mitigate the COVID-19 pandemic. Payer assisted in the creation of the startup company Xorlab, which was founded by a former student of his, Antonio Barresi. He gained recognition beyond his research field through his talks at the Chaos Communication Congress (CCC), Black Hat Europe (BHEU), and other venues. Distinctions He received an SNSF Eccellenza Award and an ERC Starting Grant. Selected works References External links Website of the HexHive Group 1981 births Living people ETH Zurich alumni University of California, Berkeley alumni École Polytechnique Fédérale de Lausanne faculty Liechtenstein people Liechtenstein men by occupation
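The mutation-based fuzzing mentioned in the research section above can be illustrated with a minimal sketch. This is an assumption-laden toy, not HexHive tooling, and it makes no claim about Payer's actual implementations: the seed file and target program are placeholders supplied by the user, and real fuzzers layer coverage feedback, the static and dynamic analysis described above, and sanitizers on top of such a loop.

/* Illustrative sketch only (POSIX C): a toy mutation-based fuzzer.
 * It repeatedly flips random bits of a seed input, runs a target
 * program on each variant, and reports runs that end in a crash
 * (termination by a signal). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char **argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s seed_file target_program\n", argv[0]);
        return 1;
    }

    /* Load the seed input into memory. */
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen seed"); return 1; }
    fseek(f, 0, SEEK_END);
    long n = ftell(f);
    rewind(f);
    unsigned char *seed = malloc(n > 0 ? n : 1);
    if (!seed || fread(seed, 1, n, f) != (size_t)n) {
        fprintf(stderr, "could not read seed file\n");
        return 1;
    }
    fclose(f);

    srand((unsigned)time(NULL));
    for (int iter = 0; iter < 1000; iter++) {
        /* Mutate a copy of the seed: flip a few random bits. */
        unsigned char *m = malloc(n > 0 ? n : 1);
        memcpy(m, seed, n);
        for (int k = 0; k < 4 && n > 0; k++)
            m[rand() % n] ^= (unsigned char)(1u << (rand() % 8));

        FILE *out = fopen("fuzz_input.bin", "wb");
        if (!out) { perror("fopen output"); return 1; }
        fwrite(m, 1, n, out);
        fclose(out);
        free(m);

        /* Run the target on the mutated input and inspect how it exited. */
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }
        if (pid == 0) {
            execl(argv[2], argv[2], "fuzz_input.bin", (char *)NULL);
            _exit(127);                     /* exec failed */
        }
        int status = 0;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status))
            printf("iteration %d: target crashed with signal %d\n",
                   iter, WTERMSIG(status));
    }
    free(seed);
    return 0;
}

Combined with the sanitizers mentioned above, such a loop turns silent memory or type-safety violations into observable crashes that developers can then patch.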
3410878
https://en.wikipedia.org/wiki/Dwayne%20Jarrett
Dwayne Jarrett
Dwayne Jarrett (born September 11, 1986) is a former American football wide receiver who played in the National Football League (NFL) for four seasons. He played college football for the University of Southern California (USC), and was recognized as a consensus All-American twice. The Carolina Panthers selected him in the second round of the 2007 NFL Draft. Early years Jarrett attended New Brunswick High School in New Brunswick, New Jersey. He was a 2003 Parade magazine All-American, Super Prep All-American, Prep Star All-American, Super Prep Elite 50, Prep Star Top 100 Dream Team, Super Prep All-Northeast Offensive MVP, and Prep Star All-East. Jarrett also played in the 2004 U.S. Army All-American Bowl. He was also New Jersey's Offensive Pick of the Year as a senior wideout and defensive back. He scored 26 touchdowns as a senior (with three of those touchdowns coming in New Brunswick's 21-14 state title victory), including five on 15 punt returns (for a 48-yard return average). He also played basketball in high school. College career Jarrett attended the University of Southern California, where he played for coach Pete Carroll's USC Trojans football team from 2004 to 2006. He was a consensus first-team All-American in 2005, and a unanimous first-team All-American in 2006. He was USC's all-time receptions leader with 216 and the Pacific-10 Conference's all-time leader in touchdown receptions with 41. As a freshman, he helped Trojan fans get over the loss of former USC standout wide receiver Mike Williams. He played in all 13 games and started 8 of them. He caught 55 passes for 849 yards and 13 touchdowns. He also made 5 catches for 115 yards in USC's FedEx Orange Bowl victory over Oklahoma in 2004. As a sophomore, he became Matt Leinart's favorite target. He recorded 91 receptions for 1,274 yards including 4 touchdowns while starting 6 games in 2005. He will be remembered by many fans for his catch against Notre Dame. On fourth-and-nine with less than a minute left, Leinart threw a pass down the sideline to him and Jarrett caught it, running for 61 yards to set up the winning touchdown, the famed and controversial "Bush Push" quarterback sneak. In USC's Rose Bowl loss to the Texas Longhorns he had 10 catches for 121 yards and a touchdown. He was a 2005 first-team All-American as a sophomore and one of three finalists for the Biletnikoff Award. He was on the 2006 Maxwell Award watch list as the best player in college football. His height, hands, and quickness made him one of the premier receivers going into the 2006 college football season. With the departure of former Trojan running backs Reggie Bush and LenDale White, he was expected to be a big part of the Trojans' offense. After a brief period of ineligibility due to his apparently inappropriate living situation with Leinart, the NCAA reinstated him on August 9, 2006, making him eligible to play for the 2006 football season. Despite being hampered by injuries, including missing games, Jarrett was named to the rivals.com and Pac-10 Coaches 2006 All-Pac-10 team First Team. He was also second-team All-America at rivals.com and SI and Walter Camp foundation first team All-America. However, because of his lack of playing time, he was left off the 2006 list of Biletnikoff Award finalists, an omission noted by some sports writers. On January 1, 2007, he was named offensive most valuable player of the Rose Bowl Game with a career-high 203 receiving yards and two touchdowns in the 32–18 win over Michigan. 
Jarrett finished the 2006 season as USC's all-time leading receiver with 216 catches. The junior, a two-time All-American, had 70 catches for 1,015 yards and led the Trojans with 12 touchdowns in his final college season. Projected as a first-round pick in the 2007 NFL Draft, on January 10, 2007, Jarrett declared his intent to leave USC early to enter the NFL. At a press conference, the tearful Jarrett noted the best part of his USC career was being with his "teammates" but that he was "definitely doing it for my family, because I wasn't the most fortunate kid growing up." Professional career Carolina Panthers Jarrett was drafted in the second round (45th overall) of the 2007 NFL Draft by the Carolina Panthers; he was the eighth receiver selected. Originally projected as a first-round pick, Jarrett's stock fell due to his unremarkable time in the 40-yard dash. Jarrett often drew comparisons to former Pro Bowl WR Keyshawn Johnson. Both are considered "possession" receivers and both played at USC. Although Johnson had publicly stated that he looked forward to mentoring Jarrett, it did not become a reality as Johnson was released three days after Jarrett was drafted. Johnson had initially stated that he would be best served by staying at USC an extra year and entering the draft as a senior. After being inactive for 7 of the first 8 games, Jarrett was activated for week 9 against the Atlanta Falcons due to an injury to second-string receiver Keary Colbert. He had two catches for 28 yards and recorded a special-teams tackle. On the year he had 6 catches for 73 yards and one tackle. In 2008, Jarrett played in 9 games, starting one. He had 10 receptions for 119 yards. On November 1, 2009, Jarrett made his second career start in place of an injured Muhsin Muhammad, in a victory over the Arizona Cardinals. On January 3, 2010, Jarrett caught his first touchdown pass from Matt Moore, a 30-yard reception against the division rival New Orleans Saints. The Panthers went on to win the game 23-10. Jarrett was cut by the Panthers on October 5, 2010, following his second DUI arrest. He had been pulled over for speeding on I-77 near Charlotte just before 2:00 a.m., according to Charlotte-Mecklenburg police spokesman Bob Fey. Jarrett had declined to take a breath test and was, instead, given a blood test. He was released on a bond of $2,000. During the week of November 21, 2010, Jarrett worked out with the Seattle Seahawks and his former college coach, Pete Carroll. Saskatchewan Roughriders On May 11, 2012, Jarrett was signed by the Saskatchewan Roughriders of the Canadian Football League. On June 7, 2012, the Roughriders placed him on the retired list. Personal life Jarrett credits his ability to growing up playing catch with his uncle, who "forced me to make one-handed catches." During his sophomore year, Jarrett shared a Los Angeles apartment with quarterback Matt Leinart. Jarrett has had some of his best college games against USC's rival, the University of Notre Dame; he credits this to how he was treated during the high school recruiting process: "They came down to recruit me, they talked to my coaches and everything. They didn't think I was intelligent enough to go to their school. That was kind of an insult to me. I've always had a little grudge against them." DWI Jarrett was arrested and charged with DWI on the morning of March 11, 2008, in the Charlotte suburb of Mint Hill. A police officer witnessed Jarrett run a red light and performed a sobriety test, which Jarrett failed. He was released on $1,000 bond.
Jarrett later pleaded guilty to his DWI charge. Jarrett received a 30-day suspended sentence, was ordered to pay $420 in fines and perform 24 hours of community service, and faced possible suspension by the league. On Tuesday, October 5, 2010, he was arrested for his second DWI in less than three years, after being pulled over shortly after 2 a.m. in Charlotte, North Carolina. See also List of NCAA major college football yearly receiving leaders List of NCAA Division I FBS career receiving touchdowns leaders References External links USC Trojans bio 1986 births Living people All-American college football players American football wide receivers Carolina Panthers players New Brunswick High School alumni Sportspeople from New Brunswick, New Jersey Saskatchewan Roughriders players USC Trojans football players
66350108
https://en.wikipedia.org/wiki/List%20of%20games%20included%20with%20Windows
List of games included with Windows
Video games, all published by Microsoft, have been included in versions of the Microsoft Windows line of operating systems since Windows 1.0x. Some games that have appeared in Microsoft Entertainment Pack and Microsoft Plus! have been included in subsequent versions of Windows as well. Solitaire has been included in every version of Windows since Windows 3.0, except Windows 8 and 8.1. History Microsoft planned to include games when developing Windows 1.0 in 1983–1984. Two games were initially developed, Puzzle and Chess, but were scrapped in favor of Reversi, based on the board game of the same name. Reversi was included in Windows versions up to Windows 3.1. Solitaire was developed in 1988 by the intern Wes Cherry. The card deck itself was designed by Macintosh pioneer Susan Kare. Cherry's version was to include a boss key that would have switched the game to a fake Microsoft Excel spreadsheet, but he was asked to remove this from the final release. Microsoft intended Solitaire to "soothe people intimidated by the operating system," and at a time when many users were still unfamiliar with graphical user interfaces, it proved useful in familiarizing them with the use of a mouse, such as the drag-and-drop technique required for moving cards. According to Microsoft telemetry, Solitaire was among the three most-used Windows programs and FreeCell was seventh, ahead of Microsoft Word and Excel. Lost business productivity from employees playing Solitaire has been a common concern since it became standard on Microsoft Windows. The Microsoft Hearts Network was included with Windows for Workgroups 3.1 as a showcase of NetDDE technology, enabling multiple players to play simultaneously across a computer network. The Microsoft Hearts Network would later be renamed Internet Hearts, and included in Windows Me and XP. Support for Internet games on Windows Me and XP ended on July 31, 2019, and on Windows 7 on January 22, 2020. 3D Pinball for Windows – Space Cadet is a version of the "Space Cadet" pinball table from the 1995 video game Full Tilt! Pinball. Several third-party games, such as Candy Crush Saga and Disney Magic Kingdoms, have been included as advertisements on the Start menu in Windows 10, and may also be automatically installed by the operating system. Windows 11 additionally includes the Xbox app, which allows users to access the PC Game Pass video game subscription service. Microsoft Casual Games Starting with Windows 8, updated versions of previously bundled games have been offered under the Microsoft Casual Games brand, alongside several brand-new games. These games include Solitaire Collection, Minesweeper, Mahjong, and Ultimate Word Games. With the exception of the Solitaire Collection, which is included in Windows 10 and 11, these games are not bundled with Windows, and are instead available as ad-supported free downloads in Microsoft Store. Premium monthly and annual subscriptions are available, which remove advertisements and offer several gameplay benefits; this move has been criticized by reviewers as a way to "nickel and dime" users, since previous versions of Solitaire and previously bundled games did not have any advertisements or paid subscriptions. Included games See also Microsoft Entertainment Pack List of Microsoft Windows components Notes References Windows components Windows games Casual games Microsoft franchises Microsoft games Video games developed in the United States
30865034
https://en.wikipedia.org/wiki/Apple%20Disk%20Image
Apple Disk Image
Apple Disk Image is a disk image format commonly used by the macOS operating system. When opened, an Apple Disk Image is mounted as a volume within the Finder. An Apple Disk Image can be structured according to one of several proprietary disk image formats, including the Universal Disk Image Format (UDIF) from Mac OS X and the New Disk Image Format (NDIF) from Mac OS 9. An Apple disk image file's name usually has ".dmg" as its extension. Features Apple Disk Image files are published with a MIME type of application/x-apple-diskimage. Different file systems can be contained inside these disk images, and there is also support for creating hybrid optical media images that contain multiple file systems. Some of the file systems supported include Hierarchical File System (HFS), HFS Plus, File Allocation Table (FAT), ISO9660 and Universal Disk Format (UDF). Apple Disk Images can be created using utilities bundled with Mac OS X, specifically Disk Copy in Mac OS X v10.2 and earlier and Disk Utility in Mac OS X v10.3 and later. These utilities can also use Apple disk image files as images for burning CDs and DVDs. Disk image files may also be managed via the command line interface using the utility. In Mac OS X v10.2.3, Apple introduced Compressed Disk Images and Internet-Enabled Disk Images for use with the Apple utility Disk Copy, which was later integrated into Disk Utility in 10.3. The Disk Copy application had the ability to display a multilingual software license agreement before mounting a disk image. The image will not be mounted unless the user indicates agreement with the license. An Apple Disk Image allows secure password protection as well as file compression, and hence serves both security and file distribution functions; such a disk image is most commonly used to distribute software over the Internet. History Apple originally created its disk image formats because the resource fork used by Mac applications could not easily be transferred over mixed networks such as those that make up the Internet. Even as the use of resource forks declined with Mac OS X, disk images remained the standard software distribution format. Disk images allow the distributor to control the Finder's presentation of the window, which is commonly used to instruct the user to copy the application to the correct folder. A previous version of the format, intended only for floppy disk images, is usually referred to as "Disk Copy 4.2" format, after the version of the Disk Copy utility that was used to handle these images. A similar format that supported compression of floppy disk images is called DART. New Disk Image Format (NDIF) was the previous default disk image format in Mac OS 9, and disk images with this format generally have a .img (not to be confused with raw .img disk image files) or .smi file extension. Files with the .smi extension are actually applications that mount an embedded disk image, thus a "Self Mounting Image", intended only for Mac OS 9 and earlier. Universal Disk Image Format (UDIF) is the native disk image format for Mac OS X. Disk images in this format typically have a .dmg extension. File format Apple has not released any documentation on the format, but attempts to reverse engineer parts of the format have been successful. The encrypted layer was reverse engineered in an implementation called VileFault (a spoonerism of FileVault). Apple disk image files are essentially raw disk images (i.e. 
contain block data) with some added metadata, optionally with one or two layers applied that provide compression and encryption. In , these layers are called CUDIFEncoding and CEncryptedEncoding. UDIF supports ADC (an old proprietary compression format by Apple), zlib, bzip2 (as of Mac OS X v10.4), LZFSE (as of Mac OS X v10.11), and lzma (as of macOS v10.15) compression internally. Metadata The UDIF metadata is found at the end of the disk image, following the data. This trailer can be described using the following C structure. All values are big-endian (PowerPC byte ordering):

typedef struct {
        uint8_t  Signature[4];          // magic 'koly'
        uint32_t Version;               // 4 (as of 2013)
        uint32_t HeaderSize;            // sizeof(this) = 512 (as of 2013)
        uint32_t Flags;
        uint64_t RunningDataForkOffset;
        uint64_t DataForkOffset;        // usually 0, beginning of file
        uint64_t DataForkLength;
        uint64_t RsrcForkOffset;        // resource fork offset and length
        uint64_t RsrcForkLength;
        uint32_t SegmentNumber;         // Usually 1, can be 0
        uint32_t SegmentCount;          // Usually 1, can be 0
        uuid_t   SegmentID;
        uint32_t DataChecksumType;      // Data fork checksum
        uint32_t DataChecksumSize;
        uint32_t DataChecksum[32];
        uint64_t XMLOffset;             // Position of XML property list in file
        uint64_t XMLLength;
        uint8_t  Reserved1[120];
        uint32_t ChecksumType;          // Master checksum
        uint32_t ChecksumSize;
        uint32_t Checksum[32];
        uint32_t ImageVariant;          // Unknown, commonly 1
        uint64_t SectorCount;
        uint32_t reserved2;
        uint32_t reserved3;
        uint32_t reserved4;
} __attribute__((packed, scalar_storage_order("big-endian"))) UDIFResourceFile;

The XML plist contains a (blocks) key, with information about how the preceding data fork is allocated. The main data is stored in a base64 block, using tables identified by the magic . This structure contains a table describing blocks of data, with the position and length of each "chunk" (usually only one chunk, but compression will create more). The data and resource fork information is probably inherited from NDIF.
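As an illustration, the following minimal sketch locates and decodes this trailer by hand. It is not Apple code; the field offsets are derived from the structure shown above, all multi-byte values are read as big-endian as described, and it assumes an unencrypted image whose final 512 bytes are the 'koly' trailer.

/* Minimal sketch: read the 512-byte UDIF 'koly' trailer at the end of a
 * .dmg and print where the data fork and XML property list live.
 * Field offsets follow the UDIFResourceFile structure shown above;
 * all fields are stored big-endian. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

static uint32_t be32(const uint8_t *p) {            /* read big-endian 32-bit */
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

static uint64_t be64(const uint8_t *p) {            /* read big-endian 64-bit */
    return ((uint64_t)be32(p) << 32) | be32(p + 4);
}

int main(int argc, char **argv) {
    if (argc != 2) { fprintf(stderr, "usage: %s image.dmg\n", argv[0]); return 1; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    uint8_t trailer[512];
    if (fseek(f, -512L, SEEK_END) != 0 || fread(trailer, 1, 512, f) != 512) {
        fprintf(stderr, "could not read 512-byte trailer\n");
        fclose(f);
        return 1;
    }
    fclose(f);

    if (memcmp(trailer, "koly", 4) != 0) {           /* Signature field */
        fprintf(stderr, "no 'koly' signature: not a UDIF image?\n");
        return 1;
    }

    /* Byte offsets within the trailer, per the structure above:
     * Version = 4, DataForkOffset = 24, DataForkLength = 32,
     * XMLOffset = 216, XMLLength = 224, SectorCount = 492. */
    printf("version      : %u\n", (unsigned)be32(trailer + 4));
    printf("data fork    : offset %llu, length %llu\n",
           (unsigned long long)be64(trailer + 24),
           (unsigned long long)be64(trailer + 32));
    printf("XML plist    : offset %llu, length %llu\n",
           (unsigned long long)be64(trailer + 216),
           (unsigned long long)be64(trailer + 224));
    printf("sector count : %llu\n", (unsigned long long)be64(trailer + 492));
    return 0;
}

Run against a .dmg, it prints where the data fork and the XML property list sit inside the file, which is essentially the first step a converter such as dmg2img has to perform before decoding the block tables.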
Encryption The encryption layer comes in two versions. Version 1 has a trailer at the end of the file, while version 2 (the default since OS X 10.5) puts it at the beginning. Whether the encryption is a layer outside of or inside of the metadata (UDIF) is unclear from the reverse-engineered documentation, but judging from the demonstration it is probably outside. Utilities There are a few options available to extract files from or mount the proprietary Apple Disk Image format. Some cross-platform conversion utilities are: dmg2img was originally written in Perl; however, the Perl version is no longer maintained, and the project was rewritten in C. It extracts the raw disk image from a DMG, without handling the file system inside. UDIF ADC-compressed images (UDCO) have been supported since version 1.5. DMGEXtractor is written in Java with a GUI, and it supports more advanced features of dmg, including AES-128 encrypted images but not UDCO images. The Sleuth Kit. Only handles the uncompressed DMG format, HFS+, and APFS. Most dmg files are unencrypted. Because the dmg metadata is found at the end, a program that does not understand dmg files can nevertheless read one as if it were a normal disk image, as long as there is support for the file system inside. Tools with this sort of capacity include: Cross-platform: 7-zip (HFS/HFS+), PeaZip (HFS/HFS+). Windows: UltraISO, IsoBuster, MacDrive (HFS/HFS+). Unix-like: cdrecord and (e.g. ). Tools with specific dmg support include: Windows: Transmac can handle both UDIF dmgs and sparsebundles, as well as HFS/HFS+ and APFS. It is unknown whether it handles encryption. It can be used to create bootable macOS installers under Windows. A free Apple DMG Disk Image Viewer also exists, but it is unclear what it actually supports. Unix-like: darling-dmg is a FUSE module enabling easy DMG file mounting on Linux. It supports UDIF and HFS/HFS+. See also cloop DiskImageMounter Installer (macOS) Sparse image References External links Apple Developer Connection A Quick Look at PackageMaker and Installer O'Reilly Mac DevCenter Tip 16-5. Create a Disk Image from a Directory in the Terminal Apple Inc. file systems Archive formats Compression file systems Disk images MacOS
8919856
https://en.wikipedia.org/wiki/Hubert%20Dreyfus%27s%20views%20on%20artificial%20intelligence
Hubert Dreyfus's views on artificial intelligence
Hubert Dreyfus was a critic of artificial intelligence research. In a series of papers and books, including Alchemy and AI (1965), What Computers Can't Do (1972; 1979; 1992) and Mind over Machine (1986), he presented a pessimistic assessment of AI's progress and a critique of the philosophical foundations of the field. Dreyfus' objections are discussed in most introductions to the philosophy of artificial intelligence, including , the standard AI textbook, and in , a survey of contemporary philosophy. Dreyfus argued that human intelligence and expertise depend primarily on unconscious processes rather than conscious symbolic manipulation, and that these unconscious skills can never be fully captured in formal rules. His critique was based on the insights of modern continental philosophers such as Merleau-Ponty and Heidegger, and was directed at the first wave of AI research which used high level formal symbols to represent reality and tried to reduce intelligence to symbol manipulation. When Dreyfus' ideas were first introduced in the mid-1960s, they were met with ridicule and outright hostility. By the 1980s, however, many of his perspectives were rediscovered by researchers working in robotics and the new field of connectionism—approaches now called "sub-symbolic" because they eschew early AI research's emphasis on high level symbols. In the 21st century, statistics-based approaches to machine learning simulate the way that the brain uses unconscious process to perceive, notice anomalies and make quick judgements. These techniques are highly successful and are currently widely used in both industry and academia. Historian and AI researcher Daniel Crevier writes: "time has proven the accuracy and perceptiveness of some of Dreyfus's comments." Dreyfus said in 2007, "I figure I won and it's over—they've given up." Dreyfus' critique The grandiose promises of artificial intelligence In Alchemy and AI (1965) and What Computers Can't Do (1972), Dreyfus summarized the history of artificial intelligence and ridiculed the unbridled optimism that permeated the field. For example, Herbert A. Simon, following the success of his program General Problem Solver (1957), predicted that by 1967: A computer would be world champion in chess. A computer would discover and prove an important new mathematical theorem. Most theories in psychology will take the form of computer programs. The press reported these predictions in glowing reports of the imminent arrival of machine intelligence. Dreyfus felt that this optimism was totally unwarranted. He believed that they were based on false assumptions about the nature of human intelligence. Pamela McCorduck explains Dreyfus position: [A] great misunderstanding accounts for public confusion about thinking machines, a misunderstanding perpetrated by the unrealistic claims researchers in AI have been making, claims that thinking machines are already here, or at any rate, just around the corner. These predictions were based on the success of an "information processing" model of the mind, articulated by Newell and Simon in their physical symbol systems hypothesis, and later expanded into a philosophical position known as computationalism by philosophers such as Jerry Fodor and Hilary Putnam. Believing that they had successfully simulated the essential process of human thought with simple programs, it seemed a short step to producing fully intelligent machines. 
However, Dreyfus argued that philosophy, especially 20th-century philosophy, had discovered serious problems with this information processing viewpoint. The mind, according to modern philosophy, is nothing like a digital computer. Dreyfus' four assumptions of artificial intelligence research In Alchemy and AI and What Computers Can't Do, Dreyfus identified four philosophical assumptions that supported the faith of early AI researchers that human intelligence depended on the manipulation of symbols. "In each case," Dreyfus writes, "the assumption is taken by workers in [AI] as an axiom, guaranteeing results, whereas it is, in fact, one hypothesis among others, to be tested by the success of such work." The biological assumption The brain processes information in discrete operations by way of some biological equivalent of on/off switches. In the early days of research into neurology, scientists realized that neurons fire in all-or-nothing pulses. Several researchers, such as Walter Pitts and Warren McCulloch, argued that neurons functioned similar to the way Boolean logic gates operate, and so could be imitated by electronic circuitry at the level of the neuron. When digital computers became widely used in the early 50s, this argument was extended to suggest that the brain was a vast physical symbol system, manipulating the binary symbols of zero and one. Dreyfus was able to refute the biological assumption by citing research in neurology that suggested that the action and timing of neuron firing had analog components. But Daniel Crevier observes that "few still held that belief in the early 1970s, and nobody argued against Dreyfus" about the biological assumption. The psychological assumption The mind can be viewed as a device operating on bits of information according to formal rules. He refuted this assumption by showing that much of what we "know" about the world consists of complex attitudes or tendencies that make us lean towards one interpretation over another. He argued that, even when we use explicit symbols, we are using them against an unconscious background of commonsense knowledge and that without this background our symbols cease to mean anything. This background, in Dreyfus' view, was not implemented in individual brains as explicit individual symbols with explicit individual meanings. The epistemological assumption All knowledge can be formalized. This concerns the philosophical issue of epistemology, or the study of knowledge. Even if we agree that the psychological assumption is false, AI researchers could still argue (as AI founder John McCarthy has) that it is possible for a symbol processing machine to represent all knowledge, regardless of whether human beings represent knowledge the same way. Dreyfus argued that there is no justification for this assumption, since so much of human knowledge is not symbolic. The ontological assumption The world consists of independent facts that can be represented by independent symbols Dreyfus also identified a subtler assumption about the world. AI researchers (and futurists and science fiction writers) often assume that there is no limit to formal, scientific knowledge, because they assume that any phenomenon in the universe can be described by symbols or scientific theories. This assumes that everything that exists can be understood as objects, properties of objects, classes of objects, relations of objects, and so on: precisely those things that can be described by logic, language and mathematics. 
The study of being or existence is called ontology, and so Dreyfus calls this the ontological assumption. If this is false, then it raises doubts about what we can ultimately know and what intelligent machines will ultimately be able to help us to do. Knowing-how vs. knowing-that: the primacy of intuition In Mind Over Machine (1986), written during the heyday of expert systems, Dreyfus analyzed the difference between human expertise and the programs that claimed to capture it. This expanded on ideas from What Computers Can't Do, where he had made a similar argument criticizing the "cognitive simulation" school of AI research practiced by Allen Newell and Herbert A. Simon in the 1960s. Dreyfus argued that human problem solving and expertise depend on our background sense of the context, of what is important and interesting given the situation, rather than on the process of searching through combinations of possibilities to find what we need. Dreyfus would describe it in 1986 as the difference between "knowing-that" and "knowing-how", based on Heidegger's distinction of present-at-hand and ready-to-hand. Knowing-that refers to our conscious, step-by-step problem-solving abilities. We use these skills when we encounter a difficult problem that requires us to stop, step back and search through ideas one at a time. At moments like this, the ideas become very precise and simple: they become context-free symbols, which we manipulate using logic and language. These are the skills that Newell and Simon had demonstrated with both psychological experiments and computer programs. Dreyfus agreed that their programs adequately imitated the skills he calls "knowing-that." Knowing-how, on the other hand, is the way we deal with things normally. We take actions without using conscious symbolic reasoning at all, as when we recognize a face, drive ourselves to work or find the right thing to say. We seem to simply jump to the appropriate response, without considering any alternatives. This is the essence of expertise, Dreyfus argued: when our intuitions have been trained to the point that we forget the rules and simply "size up the situation" and react. The human sense of the situation, according to Dreyfus, is based on our goals, our bodies and our culture—all of our unconscious intuitions, attitudes and knowledge about the world. This "context" or "background" (related to Heidegger's Dasein) is a form of knowledge that is not stored in our brains symbolically, but intuitively in some way. It affects what we notice and what we don't notice, what we expect and what possibilities we don't consider: we discriminate between what is essential and inessential. The things that are inessential are relegated to our "fringe consciousness" (borrowing a phrase from William James): the millions of things we're aware of, but we're not really thinking about right now. Dreyfus did not believe that AI programs, as they were implemented in the 70s and 80s, could capture this "background" or do the kind of fast problem solving that it allows. He argued that our unconscious knowledge could never be captured symbolically. If AI could not find a way to address these issues, then it was doomed to failure, an exercise in "tree climbing with one's eyes on the moon." History Dreyfus began to formulate his critique in the early 1960s while he was a professor at MIT, then a hotbed of artificial intelligence research. His first publication on the subject was a half-page objection to a talk given by Herbert A. Simon in the spring of 1961.
Dreyfus was especially bothered, as a philosopher, that AI researchers seemed to believe they were on the verge of solving many long standing philosophical problems within a few years, using computers. Alchemy and Artificial Intelligence In 1965, Dreyfus was hired (with his brother Stuart Dreyfus' help) by Paul Armer to spend the summer at RAND Corporation's Santa Monica facility, where he would write Alchemy and Artificial Intelligence, the first salvo of his attack. Armer had thought he was hiring an impartial critic and was surprised when Dreyfus produced a scathing paper intended to demolish the foundations of the field. (Armer stated he was unaware of Dreyfus' previous publication.) Armer delayed publishing it, but ultimately realized that "just because it came to a conclusion you didn't like was no reason not to publish it." It finally came out as RAND Memo and soon became a best seller. The paper flatly ridiculed AI research, comparing it to alchemy: a misguided attempt to change metals to gold based on a theoretical foundation that was no more than mythology and wishful thinking. It ridiculed the grandiose predictions of leading AI researchers, predicting that there were limits beyond which AI would not progress and intimating that those limits would be reached soon. Reaction The paper "caused an uproar", according to Pamela McCorduck. The AI community's response was derisive and personal. Seymour Papert dismissed one third of the paper as "gossip" and claimed that every quotation was deliberately taken out of context. Herbert A. Simon accused Dreyfus of playing "politics" so that he could attach the prestigious RAND name to his ideas. Simon said, "what I resent about this was the RAND name attached to that garbage". Dreyfus, who taught at MIT, remembers that his colleagues working in AI "dared not be seen having lunch with me." Joseph Weizenbaum, the author of ELIZA, felt his colleagues' treatment of Dreyfus was unprofessional and childish. Although he was an outspoken critic of Dreyfus' positions, he recalls "I became the only member of the AI community to be seen eating lunch with Dreyfus. And I deliberately made it plain that theirs was not the way to treat a human being." The paper was the subject of a short in The New Yorker magazine on June 11, 1966. The piece mentioned Dreyfus' contention that, while computers may be able to play checkers, no computer could yet play a decent game of chess. It reported with wry humor (as Dreyfus had) about the victory of a ten-year-old over the leading chess program, with "even more than its usual smugness." In hope of restoring AI's reputation, Seymour Papert arranged a chess match between Dreyfus and Richard Greenblatt's Mac Hack program. Dreyfus lost, much to Papert's satisfaction. An Association for Computing Machinery bulletin used the headline: "A Ten Year Old Can Beat the Machine— Dreyfus: But the Machine Can Beat Dreyfus" Dreyfus complained in print that he hadn't said a computer will never play chess, to which Herbert A. Simon replied: "You should recognize that some of those who are bitten by your sharp-toothed prose are likely, in their human weakness, to bite back ... may I be so bold as to suggest that you could well begin the cooling---a recovery of your sense of humor being a good first step." Vindicated By the early 1990s several of Dreyfus' radical opinions had become mainstream. Failed predictions. As Dreyfus had foreseen, the grandiose predictions of early AI researchers failed to come true. 
Fully intelligent machines (now known as "strong AI") did not appear in the mid-1970s as predicted. HAL 9000 (whose capabilities for natural language, perception and problem solving were based on the advice and opinions of Marvin Minsky) did not appear in the year 2001. "AI researchers", writes Nicolas Fearn, "clearly have some explaining to do." Today researchers are far more reluctant to make the kind of predictions that were made in the early days. (Although some futurists, such as Ray Kurzweil, are still given to the same kind of optimism.) The biological assumption, although common in the forties and early fifties, was no longer assumed by most AI researchers by the time Dreyfus published What Computers Can't Do. Although many still argue that it is essential to reverse-engineer the brain by simulating the action of neurons (such as Ray Kurzweil or Jeff Hawkins), they don't assume that neurons are essentially digital, but rather that the action of analog neurons can be simulated by digital machines to a reasonable level of accuracy. (Alan Turing had made this same observation as early as 1950.) The psychological assumption and unconscious skills. Many AI researchers have come to agree that human reasoning does not consist primarily of high-level symbol manipulation. In fact, since Dreyfus first published his critiques in the 60s, AI research in general has moved away from high level symbol manipulation or "GOFAI", towards new models that are intended to capture more of our unconscious reasoning. Daniel Crevier writes that by 1993, unlike 1965, AI researchers "no longer made the psychological assumption", and had continued forward without it. In the 1980s, these new "sub-symbolic" approaches included: Computational intelligence paradigms, such as neural nets, evolutionary algorithms and so on are mostly directed at simulated unconscious reasoning. Dreyfus himself agrees that these sub-symbolic methods can capture the kind of "tendencies" and "attitudes" that he considers essential for intelligence and expertise. Research into commonsense knowledge has focused on reproducing the "background" or context of knowledge. Robotics researchers like Hans Moravec and Rodney Brooks were among the first to realize that unconscious skills would prove to be the most difficult to reverse engineer. (See Moravec's paradox.) Brooks would spearhead a movement in the late 80s that took direct aim at the use of high-level symbols, called Nouvelle AI. The situated movement in robotics research attempts to capture our unconscious skills at perception and attention. In the 1990s and the early decades of the 21st century, statistics-based approaches to machine learning used techniques related to economics and statistics to allow machines to "guess" – to make inexact, probabilistic decisions and predictions based on experience and learning. These programs simulate the way our unconscious instincts are able to perceive, notice anomalies and make quick judgements, similar to what Dreyfus called "sizing up the situation and reacting", but here the "situation" consists of vast amounts of numerical data. These techniques are highly successful and are currently widely used in both industry and academia. This research has gone forward without any direct connection to Dreyfus' work. Knowing-how and knowing-that. Research in psychology and economics has been able to show that Dreyfus' (and Heidegger's) speculation about the nature of human problem solving was essentially correct. 
Daniel Kahneman and Amos Tversky collected a vast amount of hard evidence that human beings use two very different methods to solve problems, which they named "System 1" and "System 2". System 1, also known as the adaptive unconscious, is fast, intuitive and unconscious. System 2 is slow, logical and deliberate. Their research was collected in the book Thinking, Fast and Slow, and inspired Malcolm Gladwell's popular book Blink. As with AI, this research was entirely independent of both Dreyfus and Heidegger. Ignored Although clearly AI research has come to agree with Dreyfus, McCorduck claimed that "my impression is that this progress has taken place piecemeal and in response to tough given problems, and owes nothing to Dreyfus." The AI community, with a few exceptions, chose not to respond to Dreyfus directly. "He's too silly to take seriously," a researcher told Pamela McCorduck. Marvin Minsky said of Dreyfus (and the other critiques coming from philosophy) that "they misunderstand, and should be ignored." When Dreyfus expanded Alchemy and AI to book length and published it as What Computers Can't Do in 1972, no one from the AI community chose to respond (with the exception of a few critical reviews). McCorduck asks, "If Dreyfus is so wrong-headed, why haven't the artificial intelligence people made more effort to contradict him?" Part of the problem was the kind of philosophy that Dreyfus used in his critique. Dreyfus was an expert in modern European philosophers (like Heidegger and Merleau-Ponty). AI researchers of the 1960s, by contrast, based their understanding of the human mind on engineering principles and efficient problem-solving techniques related to management science. On a fundamental level, they spoke a different language. Edward Feigenbaum complained, "What does he offer us? Phenomenology! That ball of fluff. That cotton candy!" In 1965, there was simply too huge a gap between European philosophy and artificial intelligence, a gap that has since been filled by cognitive science, connectionism and robotics research. It would take many years before artificial intelligence researchers were able to address the issues that were important to continental philosophy, such as situatedness, embodiment, perception and gestalt. Another problem was that he claimed (or seemed to claim) that AI would never be able to capture the human ability to understand context, situation or purpose in the form of rules. But (as Peter Norvig and Stuart Russell would later explain), an argument of this form cannot be won: just because one cannot imagine formal rules that govern human intelligence and expertise, this does not mean that no such rules exist. They quote Alan Turing's answer to all arguments similar to Dreyfus's: "we cannot so easily convince ourselves of the absence of complete laws of behaviour ... The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, 'We have searched enough. There are no such laws.'" Dreyfus did not anticipate that AI researchers would realize their mistake and begin to work towards new solutions, moving away from the symbolic methods that Dreyfus criticized. In 1965, he did not imagine that such programs would one day be created, so he claimed AI was impossible. In 1965, AI researchers did not imagine that such programs were necessary, so they claimed AI was almost complete. Both were wrong. A more serious issue was the impression that Dreyfus' critique was incorrigibly hostile.
McCorduck wrote, "His derisiveness has been so provoking that he has estranged anyone he might have enlightened. And that's a pity." Daniel Crevier stated that "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier." See also Adaptive unconscious Church–Turing thesis Computer chess Hubert Dreyfus Philosophy of artificial intelligence Notes References . . . . . . . . . . Philosophy of artificial intelligence
39991400
https://en.wikipedia.org/wiki/Indian%20Institute%20of%20Information%20Technology%2C%20Sri%20City
Indian Institute of Information Technology, Sri City
Indian Institute of Information Technology, Sri City is an educational institute of national importance located in Sri City, Chittoor, Andhra Pradesh, India, set up by the Ministry of Human Resource Development, Government of India, in partnership with the Andhra Pradesh government and the Sri City consortium. The IIIT campus at Sri City is spread over . The institute is run by the Board of Governors of the IIIT Society. The Board of Governors includes representatives of the MHRD, GoAP, and industry partners, as well as eminent people from academia, industry, and civil society. History The Indian Institute of Information Technology, Sri City admitted its first batch in 2013, with IIIT Hyderabad as the mentor institute. Location Indian Institute of Information Technology – Sri City, Chittoor District, Andhra Pradesh is situated in Sri City, a planned integrated business city located north of Chennai on NH 5, along the border of the Andhra Pradesh (AP) and Tamil Nadu (TN) states of India. Much of the Sri City area is in Chittoor district. The Satish Dhawan Space Centre (SHAR), India's satellite and rocket launching centre, is located at Sriharikota, on the eastern side of Pulicat Lake, which separates Sri City from the launching station. Sri City is the largest industrial park in South India, spread over of land in close proximity to Chennai. Academic programs The institute presently offers a B.Tech in Computer Science and Engineering (CSE), a B.Tech in Electronics and Communication Engineering (ECE), and MS by Research and PhD programmes in CSE and ECE. Admissions Admission to the IIIT is through the Central Seat Allocation Board (CSAB). Students are allotted admission by the CSAB based on their Joint Entrance Examination (JEE-Main) ranks. The Indian Institute of Information Technology, Sri City, Chittoor District, Andhra Pradesh is listed on the CSAB website under the list of participating institutes in Other Central Government / State Government Funded Technical Institutes. References External links Universities and colleges in Chittoor district Sri City 2013 establishments in Andhra Pradesh Educational institutions established in 2013
306236
https://en.wikipedia.org/wiki/FreeWill
FreeWill
FreeWill Co is a company whose website, FreeWill.com, has online software which helps people write wills for free and make charitable contributions, and it reports each person's planned bequests to charities which pay subscription fees. It also helps people write advance healthcare directives and, in California, living trusts. Background FreeWill is a public-benefit corporation founded at Stanford University in 2017 by Jennifer Xia Spradling and Patrick Schmitt. It has two social missions: The first is to create access to estate planning for all individuals regardless of their backgrounds or ability to pay. The second is to make it easy for people to leave money to charity. As a public-benefit corporation, it is legally obligated to pursue its social missions in addition to seeking financial return. The idea for FreeWill stemmed from cofounder Patrick Schmitt's time working on fundraising campaigns for the Democratic National Committee, where he learned how to remove friction points in the donation process and increase individual donations. When it came time for Schmitt to do his own estate planning, he wondered why nobody had made it similarly easy to build charitable donations into one's will. Town & Country magazine ranked the founding of FreeWill as one of the top 50 philanthropic efforts of 2019. Business model Hundreds of charities pay a few thousand dollars to $100,000+ per year in order to have their names included in the software and to receive reports of the name, address, assets, and planned bequest for each donor who agrees. Donors are not required to leave money to charity. For donors who do not release information to the charity, FreeWill can still send the charity aggregate data, and it does not say how much detail these aggregates have. Unlike most estate planning, the software asks users specifically if they want to give to charities, and automatically looks up the accurate EIN and address of subscribing charities, which users would otherwise have to find on their own. The company's products include wills, durable financial power of attorney, qualified charitable distributions, stock donations, and living wills (also known as advance healthcare directives or healthcare power of attorney). Most estate lawyers would prepare a living trust to keep the estate private and out of probate; FreeWill offers a living trust only in California. After entering all will information, users have the option to download the will, get more information, or see a lawyer, in which case the site offers the American Bar Association directory of all lawyers. Users with over $10 million in assets and users in California also receive a suggestion that they see a lawyer. If users ask for more information, the site makes no recommendations, but notes that some people prefer to use a lawyer if they are getting divorced, or have out-of-state property, a business, a dependent with a disability, someone who may contest the will, children from multiple marriages, a premarital agreement, a caregiver as beneficiary, or assets over the estate tax exemption. The site does not give reasons why any of these calls for a lawyer, but others say that having a lawyer involved is good protection against anyone questioning whether the decedent was mentally qualified. Others say that having any house, not just one out of state, calls for a lawyer. Market As of June 2021, 295,000 people have prepared wills on the platform, 19% have included bequests to charities, and bequests average $111,000. The planned bequests total $3 billion.
The largest numbers of donations have been for the American Red Cross, United Way, Defenders of Wildlife and Disabled American Veterans. The average user is 57 years old. FreeWill expects to expand to Canada, Western Europe, Australia, Japan and China. Privacy While lawyers are involved in writing the software, FreeWill is not a law firm and does not have an attorney-client relationship with customers. Privacy statements let the company store information on assets, children's bequests, and medical and religious preferences, and use them to target ads and fundraising appeals. FreeWill explicitly advises charities to use information they receive from the software to build relationships with potential donors and raise more money. For these purposes, FreeWill collects total assets, age and address, as well as information used in writing the will and living will. The company tracks visits and actions elsewhere on the web over time, and ignores Do Not Track requests. FreeWill will transfer this information to any larger company which acquires it. It can amend the privacy statement by posting a notice on the site. FreeWill says it uses modern security protocols. It acknowledges that information can escape in security breaches, for which it does not accept liability. Competition FreeWill is free to users. Its competitors include other online services, some free, some offering trusts and other services. One competitor offers downloadable software, available from libraries, so the software company does not see users' wills and trusts. This downloadable competitor is linked from some nonprofits' websites, as FreeWill is. Competitors also include lawyers, with flat fees of $1,200-$2,000, and there are lawyer rating systems such as Martindale-Hubbell. Members and dependents of the US military have access to lawyers at Judge Advocate offices. Consumer Reports notes that people resist hiring a lawyer for a will, even though they hire professionals for hairdressing, mowing and tax preparation. Lawyers have boilerplate wording, which they adjust for almost every client. The lawyer's work is confidential, under attorney–client privilege, and liability is covered by errors and omissions insurance in case of problems. In Britain, similar services are provided by companies such as FreeWills.co.uk, which offers free wills from solicitors, and by the online firm Bequeathed. Lawyers have said that a will and trust created by software are better and faster than none, though not as good as a custom product and counsel from a lawyer. Dispute resolution FreeWill, like other companies which produce will-writing software, disclaims liability for errors and omissions in its software; it also notes that laws change rapidly. If people nevertheless have disputes with the company, the users and company agree to use small claims courts or individual arbitration in New York City under "Commercial Arbitration Rules that contemplate in-person hearings." The company's offices are in New York City, and it is incorporated in Delaware. References Online companies of the United States Online legal services Legal organizations based in the United States American legal websites Companies based in New York City
https://en.wikipedia.org/wiki/Tony%20Wasserman
Tony Wasserman
Anthony "Tony" I. Wasserman is an American computer scientist. He is a member of the board of directors of the Open Source Initiative, a Professor of the Practice in Software Management at Carnegie Mellon Silicon Valley, and the Executive Director of the CMU Center for Open Source Investigation. As a special faculty member at Carnegie Mellon University, Wasserman teaches classes in cloud computing, open source software, and software product definition. He is a frequent speaker at open source conferences around the world, including the Open World Forum. He was the general chair of the tenth International Conference on Open Source Systems, OSS2014, in Costa Rica. After working as a professor at the University of California, San Francisco and as a lecturer at the University of California, Berkeley, Wasserman founded Interactive Development Environments, a computer-aided software engineering company that became a predecessor of Atego, and served as its CEO from 1983 to 1993. He then became vice president of Bluestone Software before its acquisition by Hewlett Packard. In 1996 he was elected as a fellow of the IEEE "for contributions to software engineering, including the development of computer-aided software engineering (CASE) tools". In the same year he also became a fellow of the Association for Computing Machinery "for technical and professional contributions to the field of software engineering".
https://en.wikipedia.org/wiki/Tomasz%20Imieli%C5%84ski
Tomasz Imieliński
Tomasz Imieliński (born July 11, 1954 in Toruń, Poland) is a Polish-American computer scientist, best known for his work in data mining, mobile computing, data extraction, and search engine technology. He is currently a professor of computer science at Rutgers University in New Jersey, United States. In 2000, he co-founded Connotate Technologies, a web data extraction company based in New Brunswick, NJ. From 2004 to 2010 he held multiple positions at Ask.com, from vice president of data solutions intelligence to executive vice president of global search and answers and chief scientist. From 2010 to 2012 he served as VP of data solutions at IAC/Pronto. Tomasz Imieliński served as chairman of the Computer Science Department at Rutgers University from 1996 to 2003. He also co-founded the company Art Data Laboratories LLC, whose product, Articker, is the largest known database that aggregates dynamic non-price information about visual artists in the global art market. Articker has been under an exclusive partnership with the Phillips auction house. Education Tomasz Imieliński graduated with a B.E./M.E. degree in electrical engineering from Politechnika Gdańska in Gdańsk, Poland, and received his PhD in computer science in 1982 from the Polish Academy of Sciences, under the supervision of Witold Lipski. Career After receiving his PhD, Tomasz Imieliński spent a year on the faculty of the School of Computer Science at McGill University in Montreal. In 1983, he joined the Computer Science Department at Rutgers University in New Brunswick. He served as chairman of the department from 1996 to 2003. In 2000, he co-founded Connotate Technologies based on his data extraction research developed at Rutgers University. While on leave from Rutgers University, from 2004 to 2010, he held multiple positions at Ask.com: vice president of data solutions, executive vice president of global search and answers, and chief scientist. From 2010 to 2012, Tomasz Imieliński served as vice president of data solutions at Pronto. Tomasz Imieliński has received numerous awards, such as the 2018 Tadeusz Sendzimir Applied Sciences Award. Research and recognition Imieliński-Lipski Algebras Imieliński's early work on 'Incomplete Information in Relational Databases' produced a fundamental concept that later became known as Imieliński-Lipski Algebras. Cylindric Algebras According to Van den Bussche, the first people from the database community to recognize the connection between Codd's relational algebra and Tarski's cylindric algebras were Witold Lipski and Tomasz Imieliński, in a talk given at the very first edition of PODS (the ACM Symposium on Principles of Database Systems), in 1982. Their work, "The relational model of data and cylindric algebras", was later published in 1984. Association Rule Mining His joint 1993 paper with Agrawal and Swami, 'Mining Association Rules Between Sets of Items in Large Databases', started the association rule mining research area, and it is one of the most cited publications in computer science, with over 22,000 citations according to Google Scholar; a brief sketch of the support-confidence idea on which the area rests appears after the Mobile Computing paragraph below. This paper received the 2003 ACM SIGMOD Test of Time Award and is included in the list of important publications in computer science. Mobile Computing Imieliński has also been one of the pioneers of mobile computing; for his joint 1992 paper with Badri Nath on 'Querying in highly mobile distributed environments' he received the VLDB Ten Year Award in 2002.
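The support-confidence framework that underlies association rule mining can be illustrated with a short, self-contained sketch. The transaction data, thresholds, and helper functions below are hypothetical and purely illustrative; they are not drawn from the 1993 paper or from any system described in this article.

```python
# Minimal sketch of support/confidence scoring for association rules.
# Example transactions and thresholds are invented for illustration only.
from itertools import combinations

transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer", "eggs"},
    {"milk", "diapers", "beer", "cola"},
    {"bread", "milk", "diapers", "beer"},
    {"bread", "milk", "diapers", "cola"},
]

def support(itemset):
    # Fraction of transactions that contain every item in `itemset`.
    s = set(itemset)
    return sum(s <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    # Estimate of P(consequent | antecedent) over the transaction set.
    return support(set(antecedent) | set(consequent)) / support(set(antecedent))

# Report single-item rules {a} -> {b} that meet both thresholds.
min_support, min_confidence = 0.4, 0.6
items = sorted(set().union(*transactions))
for a, b in combinations(items, 2):
    for ante, cons in ((a, b), (b, a)):
        if (support({ante, cons}) >= min_support
                and confidence({ante}, {cons}) >= min_confidence):
            print(f"{{{ante}}} -> {{{cons}}}  "
                  f"support={support({ante, cons}):.2f}  "
                  f"confidence={confidence({ante}, {cons}):.2f}")
```

An itemset's support is the fraction of transactions containing it, and a rule's confidence is the support of antecedent and consequent together divided by the support of the antecedent; the 1993 paper's contribution was, in essence, a framework and algorithm for finding high-support, high-confidence rules efficiently over large databases.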
Geocast He proposed the idea of Geocast, which would deliver information to a group of destinations in a network identified by their geographical locations. He proposed applications such as geographic messaging, geographic advertising, delivery of geographically restricted services, and presence discovery of a service or mobile network participant in a limited geographic area. Patents Overall, Imieliński has published over 150 papers and is an inventor and co-inventor on multiple patents ranging from search technology to web data extraction as well as multimedia processing, data mining, and mobile computing (e.g. a patent on "Method and system for audio access to information in a wide area computer network"). His papers have been cited over 37,000 times. Tomasz Imieliński has been listed as #3, in the area of databases, on the AMiner Most Influential Scholars List, which tracks the top researchers in computer science and engineering. Other Interests In 2000, Tomasz Imieliński formed System Crash, an avant-garde rock group which combined heavy sound with philosophical and political lyrics and multimedia projection of videos and sounds of the current world, real and virtual. System Crash consisted of three musicians: Tomasz Imieliński (vocals and guitar), James Jeude (bass), and Tomek Unrat (drums). Since January 2006, the band had also gone by the name "The Professor and System Crash", the band title used on their 2006 re-printing of their 2005 CD "War By Remote Control". Internity, the first show of System Crash, focused on the internet revolution and its philosophical consequences – the interplay between the virtual and real world, and the anthropomorphization of machines, programs and files. All lyrics were written by Tomasz Imieliński. Internity was featured at the Knitting Factory in 2001 and received enthusiastic reviews in the Star Ledger and The New York Times. System Crash played to sold-out audiences and quickly achieved cult status in the avant-garde music scene of New York City. The group stopped performing around 2007.
https://en.wikipedia.org/wiki/Decline%20of%20the%20Glass%E2%80%93Steagall%20Act
Decline of the Glass–Steagall Act
This article is about the decline of the effect of Glass–Steagall: legislation, limits, and loopholes. The Glass–Steagall Act was a part of the 1933 Banking Act. It placed restrictions on the activities that commercial banks and investment banks (or other securities firms) could engage in. It effectively separated those activities, so the two types of business could not mix, in order to protect consumers' money from speculative use. The Banking Act of 1935 clarified and otherwise amended Glass–Steagall. Over time, private firms and their regulators found novel ways to weaken the barriers envisioned in the legislation. Eventually, the protections became very weak. From its start, there were many economists, businessmen, and politicians who did not find the restrictions to be productive, and wished to do away with them altogether. It took about 66 years, but the affiliation restrictions at the heart of the legislation were eventually repealed. Subsequent financial crises have resulted in attempts to revive the legislation, and even make it stronger than originally envisioned. Glass–Steagall developments from 1935 to 1991 Commercial banks withdrew from the depressed securities markets of the early 1930s even before the Glass–Steagall prohibitions on securities underwriting and dealing became effective. However, those prohibitions were controversial. A 1934 study of commercial bank affiliate underwriting of securities in the 1920s found such underwriting was not better than the underwriting by firms that were not affiliated with banks. That study disputed Glass–Steagall critics who suggested securities markets had been harmed by prohibiting commercial bank involvement. A 1942 study also found that commercial bank affiliate underwriting was not better (or worse) than nonbank affiliate underwriting, but concluded this meant it was a "myth" that commercial bank securities affiliates had taken advantage of bank customers to sell "worthless securities." Senator Glass's "repeal" effort In 1935 Senator Glass attempted to repeal the Glass–Steagall prohibition on commercial banks underwriting corporate securities. Glass stated Glass–Steagall had unduly damaged securities markets by prohibiting commercial bank underwriting of corporate securities. The first Senate-passed version of the Banking Act of 1935 included Glass's revision to Section 16 of the Glass–Steagall Act to permit bank underwriting of corporate securities subject to limitations and regulations. President Roosevelt opposed this revision to Section 16 and wrote Glass that "the old abuses would come back if underwriting were restored in any shape, manner, or form." In the conference committee that reconciled differences between the House-passed and Senate-passed versions of the Banking Act of 1935, Glass's language amending Section 16 was removed. Comptroller Saxon's Glass–Steagall interpretations President John F. Kennedy's appointee as Comptroller of the Currency, James J. Saxon, was the next public official to seriously challenge Glass–Steagall's prohibitions. As the regulator of national banks, Saxon was concerned with the competitive position of commercial banks. In 1950 commercial banks held 52% of the assets of US financial institutions. By 1960 that share had declined to 38%. Saxon wanted to expand the powers of national banks. In 1963, the Saxon-led Office of the Comptroller of the Currency (OCC) issued a regulation permitting national banks to offer retail customers "commingled accounts" holding common stocks and other securities.
This amounted to permitting banks to offer mutual funds to retail customers. Saxon also issued rulings that national banks could underwrite municipal revenue bonds. Courts ruled that both of these actions violated Glass–Steagall. In rejecting bank sales of accounts that functioned like mutual funds, the Supreme Court explained in Investment Company Institute v. Camp that it would have given "deference" to the OCC's judgment if the OCC had explained how such sales could avoid the conflicts of interest and other "subtle hazards" Glass–Steagall sought to prevent and that could arise when a bank offered a securities product to its retail customers. Courts later applied this aspect of the Camp ruling to uphold interpretations of Glass–Steagall by federal banking regulators. As in the Camp case, these interpretations by bank regulators were routinely challenged by the mutual fund industry through the Investment Company Institute or the securities industry through the Securities Industry Association as they sought to prevent competition from commercial banks. 1966 to 1980 developments Increasing competitive pressures for commercial banks Regulation Q limits on interest rates for time deposits at commercial banks, authorized by the 1933 Banking Act, first became "effective" in 1966 when market interest rates exceeded those limits. This produced the first of several "credit crunches" during the late 1960s and throughout the 1970s as depositors withdrew funds from banks to reinvest at higher market interest rates. When this "disintermediation" limited the ability of banks to meet the borrowing requests of all their corporate customers, some commercial banks helped their "best customers" establish programs to borrow directly from the "capital markets" by issuing commercial paper. Over time, commercial banks were increasingly left with lower credit quality, or more speculative, corporate borrowers that could not borrow directly from the "capital markets." Eventually, even lower credit quality corporations and (indirectly through "securitization") consumers were able to borrow from the capital markets as improvements in communication and information technology allowed investors to evaluate and invest in a broader range of borrowers. Banks began to finance residential mortgages through securitization in the late 1970s. During the 1980s banks and other lenders used securitizations to provide "capital markets" funding for a wide range of assets that previously had been financed by bank loans. In losing "their preeminent status as expert intermediaries for the collection, processing, and analysis of information relating to extensions of credit", banks were increasingly "bypassed" as traditional "depositors" invested in securities that replaced bank loans. In 1977 Merrill Lynch introduced a "cash management account" that allowed brokerage customers to write checks on funds held in a money market account or drawn from a "line of credit" Merrill provided. The Securities and Exchange Commission (SEC) had ruled that money market funds could "redeem" investor shares at a $1 stable "net asset value" despite daily fluctuations in the value of the securities held by the funds. This allowed money market funds to develop into "near money" as "investors" wrote checks ("redemption orders") on these accounts much as "depositors" wrote checks on traditional checking accounts provided by commercial banks. 
Also in the 1970s savings and loans, which were not restricted by Glass–Steagall other than Section 21, were permitted to offer "negotiable order of withdrawal accounts" (NOW accounts). As with money market accounts, these accounts functioned much like checking accounts in permitting a depositor to order payments from a "savings account." Helen Garten concluded that the "traditional regulation" of commercial banks established by the 1933 Banking Act, including Glass–Steagall, failed when nonbanking firms and the "capital markets" were able to provide replacements for bank loans and deposits, thereby reducing the profitability of commercial banking. Richard Vietor agreed that traditional bank regulation was unable to protect commercial banks from nonbank competition. However, he noted that significant economic and financial instability began in the mid-1960s. This slowed economic growth and savings, which reduced the demand for and supply of credit; it also induced financial innovations that undermined commercial banks. Hyman Minsky agreed financial instability had returned in 1966 and had only been constrained in the following 15 years through Federal Reserve Board-engineered "credit crunches" to combat inflation followed by "lender of last resort" rescues of asset prices that produced new inflation. Minsky described ever-worsening periods of inflation followed by unemployment as the cycle of rescues followed by credit crunches was repeated. Minsky, however, supported traditional banking regulation and advocated further controls of finance to "promote smaller and simpler organizations weighted more toward direct financing." Writing from a similar "neo-Keynesian perspective," Jan Kregel concluded that, after World War II, non-regulated financial companies, supported by regulatory actions, developed means to provide bank products ("liquidity and lending accommodation") more cheaply than commercial banks through the "capital markets." Kregel argued this led banking regulators to eliminate Glass–Steagall restrictions to permit banks to "duplicate these structures" using the capital markets "until there was virtually no difference in the activities of FDIC-insured commercial banks and investment banks." Comptroller Saxon had feared for the competitive viability of commercial banks in the early 1960s. The "capital markets" developments in the 1970s increased the vulnerability of commercial banks to nonbank competitors. As described below, this competition would increase in the 1980s. Limited congressional and regulatory developments In 1967 the Senate passed the first of several Senate-passed bills that would have revised Glass–Steagall Section 16 to permit banks to underwrite municipal revenue bonds. In 1974 the OCC authorized national banks to provide "automatic investment services," which permitted bank customers to authorize regular withdrawals from a deposit account to purchase identified securities. In 1977 the Federal Reserve Board staff concluded Glass–Steagall permitted banks to privately place commercial paper. In 1978 Bankers Trust began making such placements. As described below, in 1978, the OCC authorized a national bank to privately place securities issued to sell residential mortgages in a securitization. Commercial banks, however, were frustrated with the continuing restrictions imposed by Glass–Steagall and other banking laws.
After many of Comptroller Saxon's decisions granting national banks greater powers had been challenged or overturned by courts, commercial banking firms had been able to expand their non-securities activities through the "one bank holding company." Because the Bank Holding Company Act only limited nonbanking activities of companies that owned two or more commercial banks, "one bank holding companies" could own interests in any type of company other than securities firms covered by Glass–Steagall Section 20. That "loophole" in the Bank Holding Company Act was closed by a 1970 amendment to apply the Act to any company that owned a commercial bank. Commercial banking firms' continuing desire for greater powers received support when Ronald Reagan became President and appointed banking regulators who shared an "attitude towards deregulation of the financial industry." Reagan Administration developments State non-member bank and nonbank bank "loopholes" In 1982, under the chairmanship of William Isaac, the FDIC issued a "policy statement" that state chartered non-Federal Reserve member banks could establish subsidiaries to underwrite and deal in securities. Also in 1982 the OCC, under Comptroller C. Todd Conover, approved the mutual fund company Dreyfus Corporation and the retailer Sears establishing "nonbank bank" subsidiaries that were not covered by the Bank Holding Company Act. The Federal Reserve Board, led by Chairman Paul Volcker, asked Congress to overrule both the FDIC's and the OCC's actions through new legislation. The FDIC's action confirmed that Glass–Steagall did not restrict affiliations between a state chartered non-Federal Reserve System member bank and securities firms, even when the bank was FDIC insured. State laws differed in how they regulated affiliations between banks and securities firms. In the 1970s, foreign banks had taken advantage of this in establishing branches in states that permitted such affiliations. Although the International Banking Act of 1978 brought newly established foreign bank US branches under Glass–Steagall, foreign banks with existing US branches were "grandfathered" and permitted to retain their existing investments. Through this "loophole" Credit Suisse was able to own a controlling interest in First Boston, a leading US securities firm. After the FDIC's action, commentators worried that large commercial banks would leave the Federal Reserve System (after first converting to a state charter if they were national banks) to free themselves from Glass–Steagall affiliation restrictions, as large commercial banks lobbied states to permit commercial bank investment banking activities. The OCC's action relied on a "loophole" in the Bank Holding Company Act (BHCA) that meant a company only became a "bank holding company" supervised by the Federal Reserve Board if it owned a "bank" that made "commercial loans" (i.e., loans to businesses) and provided "demand deposits" (i.e., checking accounts). A "nonbank bank" could be established to provide checking accounts (but not commercial loans) or commercial loans (but not checking accounts). The company owning the nonbank bank would not be a bank holding company limited to activities "closely related to banking." This permitted Sears, GE, and other commercial companies to own "nonbank banks." Glass–Steagall's affiliation restrictions applied if the nonbank bank was a national bank or otherwise a member of the Federal Reserve System.
The OCC's permission for Dreyfus to own a nationally chartered "nonbank bank" was based on the OCC's conclusion that Dreyfus, as a mutual fund company, earned only a small amount of its revenue through underwriting and distributing shares in mutual funds. Two other securities firms, J. & W. Seligman & Co. and Prudential-Bache, established state chartered non-Federal Reserve System member banks to avoid Glass–Steagall restrictions on affiliations between member banks and securities firms. Legislative response Although Paul Volcker and the Federal Reserve Board sought legislation overruling the FDIC and OCC actions, they agreed bank affiliates should have broader securities powers. They supported a bill sponsored by Senate Banking Committee Chairman Jake Garn (R-UT) that would have amended Glass–Steagall Section 20 to cover all FDIC insured banks and to permit bank affiliates to underwrite and deal in mutual funds, municipal revenue bonds, commercial paper, and mortgage-backed securities. On September 13, 1984, the Senate passed the Garn bill in an 89-5 vote, but the Democratic controlled House did not act on the bill. In 1987, however, the Senate (with a new Democratic Party majority) joined with the House in passing the Competitive Equality Banking Act of 1987 (CEBA). Although primarily dealing with the savings and loan crisis, CEBA also established a moratorium to March 1, 1988, on banking regulator actions to approve bank or affiliate securities activities, applied the affiliation restrictions of Glass–Steagall Sections 20 and 32 to all FDIC insured banks during the moratorium, and eliminated the "nonbank bank" loophole for new FDIC insured banks (whether they took demand deposits or made commercial loans) except industrial loan companies. Existing "nonbank banks", however, were "grandfathered" so that they could continue to operate without becoming subject to BHCA restrictions. The CEBA was intended to provide time for Congress (rather than banking regulators) to review and resolve the Glass–Steagall issues of bank securities activities. Senator William Proxmire (D-WI), the new Chairman of the Senate Banking Committee, took up this topic in 1987. International competitiveness debate Wolfgang Reinicke argues that Glass–Steagall "repeal" gained unexpected Congressional support in 1987 because large banks successfully argued that Glass–Steagall prevented US banks from competing internationally. With the argument changed from preserving the profitability of large commercial banks to preserving the "competitiveness" of US banks (and of the US economy), Senator Proxmire reversed his earlier opposition to Glass–Steagall reform. Proxmire sponsored a bill that would have repealed Glass–Steagall Sections 20 and 32 and replaced those prohibitions with a system for regulating (and limiting the amount of) bank affiliate securities activities. He declared Glass–Steagall a "protectionist dinosaur." By 1985 commercial banks provided 26% of short term loans to large businesses compared to 59% in 1974. While banks cited such statistics to illustrate the "decline of commercial banking," Reinicke argues the most influential factor in Congress favoring Glass–Steagall "repeal" was the decline of US banks in international rankings. In 1960 six of the ten largest banks were US based, by 1980 only two US based banks were in the top ten, and by 1989 none was in the top twenty five. In the late 1980s the United Kingdom and Canada ended their historic separations of commercial and investment banking. 
Glass–Steagall critics scornfully noted only Japanese legislation imposed by Americans during the Occupation of Japan kept the United States from being alone in separating the two activities. As noted above, even in the United States seventeen foreign banks were free from this Glass–Steagall restriction because they had established state chartered branches before the International Banking Act of 1978 brought newly established foreign bank US branches under Glass–Steagall. Similarly, because major foreign countries did not separate investment and commercial banking, US commercial banks could underwrite and deal in securities through branches outside the United States. Paul Volcker agreed that, "broadly speaking," it made no sense that US commercial banks could underwrite securities in Europe but not in the United States. 1987 status of Glass–Steagall debate Throughout the 1980s and 1990s scholars published studies arguing that commercial bank affiliate underwriting during the 1920s was no worse, or was better, than underwriting by securities firms not affiliated with banks and that commercial banks were strengthened, not harmed, by securities affiliates. More generally, researchers attacked the idea that "integrated financial services firms" had played a role in creating the Great Depression or the collapse of the US banking system in the 1930s. If it was "debatable" whether Glass–Steagall was justified in the 1930s, it was easier to argue that Glass–Steagall served no legitimate purpose when the distinction between commercial and investment banking activities had been blurred by "market developments" since the 1960s. Along with the "nonbank bank" "loophole" from BHCA limitations, in the 1980s the "unitary thrift" "loophole" became prominent as a means for securities and commercial firms to provide banking (or "near banking") products. The Savings and Loan Holding Company Act (SLHCA) permitted any company to own a single savings and loan. Only companies that owned two or more savings and loans were limited to thrift-related businesses. Already in 1973 First Chicago Bank had identified Sears as its real competitor. Citicorp CEO Walter Wriston reached the same conclusion later in the 1970s. By 1982, using the "unitary thrift" and "nonbank bank" "loopholes," Sears had built the "Sears Financial Network", which combined "Super NOW" accounts and mortgage loans through a large California-based savings and loan, the Discover Card issued by a "nonbank bank" as a credit card, securities brokerage through Dean Witter Reynolds, home and auto insurance through Allstate, and real estate brokerage through Coldwell Banker. By 1984, however, Walter Wriston concluded "the bank of the future already exists, and it's called Merrill Lynch." In 1986 when major bank holding companies threatened to stop operating commercial banks in order to obtain the "competitive advantages" enjoyed by Sears and Merrill Lynch, FDIC Chairman William Seidman warned that could create "chaos." In a 1987 "issue brief" the Congressional Research Service (CRS) summarized "some of" the major arguments for preserving Glass–Steagall as:
- Conflicts of interest characterize the granting of credit (lending) and the use of credit (investing) by the same entity, which led to abuses that originally produced the Act.
- Depository institutions possess enormous financial power, by virtue of their control of other people's money; its extent must be limited to ensure soundness and competition in the market for funds, whether loans or investments.
- Securities activities can be risky, leading to enormous losses. Such losses could threaten the integrity of deposits. In turn, the Government insures deposits and could be required to pay large sums if depository institutions were to collapse as the result of securities losses.
- Depository institutions are supposed to be managed to limit risk. Their managers thus may not be conditioned to operate prudently in more speculative securities businesses. An example is the crash of real estate investment trusts sponsored by bank holding companies a decade ago.
and against preserving Glass–Steagall as:
- Depository institutions now operate in "deregulated" financial markets in which distinctions between loans, securities, and deposits are not well drawn. They are losing market shares to securities firms that are not so strictly regulated, and to foreign financial institutions operating without much restriction from the Act.
- Conflicts of interest can be prevented by enforcing legislation against them, and by separating the lending and credit functions through forming distinctly separate subsidiaries of financial firms.
- The securities activities that depository institutions are seeking are both low-risk by their very nature, and would reduce the total risk of organizations offering them – by diversification.
- In much of the rest of the world, depository institutions operate simultaneously and successfully in both banking and securities markets. Lessons learned from their experience can be applied to our national financial structure and regulation.
Reflecting the significance of the "international competitiveness" argument, a separate CRS Report stated banks were "losing historical market shares of their major activities to domestic and foreign competitors that are less restricted." Separately, the General Accounting Office (GAO) submitted to a House subcommittee a report reviewing the benefits and risks of "Glass–Steagall repeal." The report recommended a "phased approach" using a "holding company organizational structure" if Congress chose "repeal." Noting Glass–Steagall had "already been eroded and the erosion is likely to continue in the future," the GAO explained "coming to grips with the Glass–Steagall repeal question represents an opportunity to systematically and rationally address changes in the regulatory and legal structure that are needed to better address the realities of the marketplace." The GAO warned that Congress's failure to act was "potentially dangerous" in permitting a "continuation of the uneven integration of commercial and investment banking activities." As Congress was considering the Proxmire Financial Modernization Act in 1988, the Commission of the European Communities proposed a "Second Banking Directive" that became effective at the beginning of 1993 and provided for the combination of commercial and investment banking throughout the European Economic Community. Whereas United States law sought to isolate banks from securities activities, the Second Directive represented the European Union's conclusion that securities activities diversified bank risk, strengthening the earnings and stability of banks. The Senate passed the Proxmire Financial Modernization Act of 1988 in a 94-2 vote. The House did not pass a similar bill, largely because of opposition from Representative John Dingell (D-MI), chairman of the House Commerce and Energy Committee.
Section 20 affiliates In April 1987, the Federal Reserve Board had approved the bank holding companies Bankers Trust, Citicorp, and J.P. Morgan & Co. establishing subsidiaries ("Section 20 affiliates") to underwrite and deal in residential mortgage-backed securities, municipal revenue bonds, and commercial paper. Glass–Steagall's Section 20 prohibited a bank from affiliating with a firm "primarily engaged" in underwriting and dealing in securities. The Board decided this meant Section 20 permitted a bank affiliate to earn 5% of its revenue from underwriting and dealing in these types of securities that were not "bank-eligible securities," subject to various restrictions including "firewalls" to separate a commercial bank from its Section 20 affiliate. Three months later the Board added "asset-backed securities" backed by pools of credit card accounts or other "consumer finance assets" to the list of "bank-ineligible securities" a Section 20 affiliate could underwrite. Bank holding companies, not commercial banks directly, owned these Section 20 affiliates. In 1978 the Federal Reserve Board had authorized bank holding companies to establish securities affiliates that underwrote and dealt in government securities and other bank-eligible securities. Federal Reserve Board Chairman Paul Volcker supported Congress amending Glass–Steagall to permit such affiliates to underwrite and deal in a limited amount of bank-ineligible securities, but not corporate securities. In 1987, Volcker specifically noted (and approved the result) that this would mean only banks with large government securities activities would be able to have affiliates that would underwrite and deal in a significant volume of "bank-ineligible securities." A Section 20 affiliate with a large volume of government securities related revenue would be able to earn a significant amount of "bank-ineligible" revenue without having more than 5% of its overall revenue come from bank-ineligible activities. Volcker disagreed, however, that the Board had authority to permit this without an amendment to the Glass–Steagall Act. Citing that concern, Volcker and fellow Federal Reserve Board Governor Wayne Angell dissented from the Section 20 affiliate orders. Senator Proxmire criticized the Federal Reserve Board's Section 20 affiliate orders as defying Congressional control of Glass–Steagall. The Board's orders meant Glass–Steagall did not prevent commercial banks from affiliating with securities firms underwriting and dealing in "bank-ineligible securities," so long as the activity was "executed in a separate subsidiary and limited in amount." After the Proxmire Financial Modernization Act of 1988 failed to become law, Senator Proxmire and a group of fellow Democratic senior House Banking Committee members (including future Committee Ranking Member John LaFalce (D-NY) and future Committee Chairman Barney Frank (D-MA)) wrote the Federal Reserve Board recommending it expand the underwriting powers of Section 20 affiliates. Expressing sentiments that Representative James A. Leach (R-IA) repeated in 1996, Proxmire declared "Congress has failed to do the job" and "[n]ow it's time for the Fed to step in." Following Senator Proxmire's letter, in 1989 the Federal Reserve Board approved Section 20 affiliates underwriting corporate debt securities and increased from 5% to 10% the percentage of its revenue a Section 20 affiliate could earn from "bank-ineligible" activities. In 1990 the Board approved J.P. Morgan & Co. underwriting corporate stock. 
With the commercial (J.P. Morgan & Co.) and investment (Morgan Stanley) banking arms of the old "House of Morgan" both underwriting corporate bonds and stocks, Wolfgang Reinicke concluded the Federal Reserve Board order meant both firms now competed in "a single financial market offering both commercial and investment banking products," which "Glass–Steagall sought to rule out." Reinicke described this as "de facto repeal of Glass–Steagall." No Federal Reserve Board order was necessary for Morgan Stanley to enter that "single financial market." Glass–Steagall only prohibited investment banks from taking deposits, not from making commercial loans, and the prohibition on taking deposits had "been circumvented by the development of deposit equivalents", such as the money market fund. Glass–Steagall also did not prevent investment banks from affiliating with nonbank banks or savings and loans. Citing this competitive "inequality," before the Federal Reserve Board approved any Section 20 affiliates, four large bank holding companies that eventually received Section 20 affiliate approvals (Chase, J.P. Morgan, Citicorp, and Bankers Trust) had threatened to give up their banking charters if they were not given greater securities powers. Following the Federal Reserve Board's approvals of Section 20 affiliates a commentator concluded that the Glass–Steagall "wall" between commercial banking and "the securities and investment business" was "porous" for commercial banks and "nonexistent to investment bankers and other nonbank entities." Greenspan-led Federal Reserve Board Alan Greenspan had replaced Paul Volcker as Chairman of the Federal Reserve Board when Proxmire sent his 1988 letter recommending the Federal Reserve Board expand the underwriting powers of Section 20 affiliates. Greenspan testified to Congress in December 1987, that the Federal Reserve Board supported Glass–Steagall repeal. Although Paul Volcker "had changed his position" on Glass–Steagall reform "considerably" during the 1980s, he was still "considered a conservative among the board members." With Greenspan as Chairman, the Federal Reserve Board "spoke with one voice" in joining the FDIC and OCC in calling for Glass–Steagall repeal. By 1987 Glass–Steagall "repeal" had come to mean repeal of Sections 20 and 32. The Federal Reserve Board supported "repeal" of Glass–Steagall "insofar as it prevents bank holding companies from being affiliated with firms engaged in securities underwriting and dealing activities." The Board did not propose repeal of Glass Steagall Section 16 or 21. Bank holding companies, through separately capitalized subsidiaries, not commercial banks themselves directly, would exercise the new securities powers. Banks and bank holding companies had already gained important regulatory approvals for securities activities before Paul Volcker retired as Chairman of the Federal Reserve Board on August 11, 1987. Aside from the Board's authorizations for Section 20 affiliates and for bank private placements of commercial paper, by 1987 federal banking regulators had authorized banks or their affiliates to (1) sponsor closed end investment companies, (2) sponsor mutual funds sold to customers in individual retirement accounts, (3) provide customers full service brokerage (i.e., advice and brokerage), and (4) sell bank assets through "securitizations." In 1982 E. Gerald Corrigan, president of the Federal Reserve Bank of Minneapolis and a close Volcker colleague, published an influential essay titled "Are banks special?" 
in which he argued banks should be subject to special restrictions on affiliations because they enjoy special benefits (e.g., deposit insurance and Federal Reserve Bank loan facilities) and have special responsibilities (e.g., operating the payment system and influencing the money supply). The essay rejected the argument that it is "futile and unnecessary" to distinguish among the various types of companies in the "financial services industry." While Paul Volcker's January 1984 testimony to Congress repeated that banks are "special" in performing "a unique and critical role in the financial system and the economy," he still testified in support of bank affiliates underwriting securities other than corporate bonds. In its 1986 Annual Report the Volcker-led Federal Reserve Board recommended that Congress permit bank holding companies to underwrite municipal revenue bonds, mortgage-backed securities, commercial paper, and mutual funds and that Congress "undertake hearings or other studies in the area of corporate underwriting." As described above, in the 1930s Glass–Steagall advocates had alleged that bank affiliate underwriting of corporate bonds created "conflicts of interest." In early 1987 E. Gerald Corrigan, then president of the Federal Reserve Bank of New York, recommended a legislative "overhaul" to permit "financial holding companies" that would "in time" provide banking, securities, and insurance services (as authorized by the GLBA 12 years later). In 1990 Corrigan testified to Congress that he rejected the "status quo" and recommended allowing banks into the "securities business" through financial service holding companies. In 1991 Paul Volcker testified to Congress in support of the Bush Administration proposal to repeal Glass–Steagall Sections 20 and 32. Volcker rejected the Bush Administration proposal to permit affiliations between banks and commercial firms (i.e., non-financial firms) and added that legislation to allow banks greater insurance powers "could be put off until a later date." 1991 Congressional action and "firewalls" Paul Volcker gave his 1991 testimony as Congress considered repealing Glass–Steagall sections 20 and 32 as part of a broader Bush Administration proposal to reform financial regulation. In reaction to "market developments" and regulatory and judicial decisions that had "homogenized" commercial and investment banking, Representative Edward J. Markey (D-MA) had written a 1990 article arguing "Congress must amend Glass–Steagall." As chairman of a subcommittee of the House Commerce and Energy Committee, Markey had joined with Committee Chairman Dingell in opposing the 1988 Proxmire Financial Modernization Act. In 1990, however, Markey stated Glass–Steagall had "lost much of its effectiveness" through market, regulatory, and judicial developments that were "tantamount to an ill-coordinated, incremental repeal" of Glass–Steagall. To correct this "disharmony" Markey proposed replacing Glass–Steagall's "prohibitions" with "regulation." After the House Banking Committee approved a bill repealing Glass–Steagall Sections 20 and 32, Representative Dingell again stopped House action. He reached agreement with Banking Committee Chairman Henry B. Gonzalez (D-TX) to insert into the bill "firewalls" that banks claimed would prevent real competition between banks and securities firms. The banking industry strongly opposed the bill in that form, and the House rejected it.
The House debate revealed that Congress might agree on repealing Sections 20 and 32 while being divided on how bank affiliations with securities firms should be regulated. 1980s and 1990s bank product developments Throughout the 1980s and 1990s, as Congress considered whether to "repeal" Glass–Steagall, commercial banks and their affiliates engaged in activities that commentators later linked to the financial crisis of 2007–2008. Securitization, CDOs, and "subprime" credit In 1978 Bank of America issued the first residential mortgage-backed security that securitized residential mortgages not guaranteed by a government-sponsored enterprise ("private label RMBS"). Also in 1978, the OCC approved a national bank, such as Bank of America, issuing pass-through certificates representing interests in residential mortgages and distributing such mortgage-backed securities to investors in a private placement. In 1987 the OCC ruled that Security Pacific Bank could "sell" assets through "securitizations" that transferred "cash flows" from those assets to investors and also distribute in a registered public offering the residential mortgage-backed securities issued in the securitization. This permitted commercial banks to acquire assets for "sale" through securitizations under what later became termed the "originate to distribute" model of banking. The OCC ruled that a national bank's power to sell its assets meant a national bank could sell a pool of assets in a securitization, and even distribute the securities that represented the sale, as part of the "business of banking." This meant national banks could underwrite and distribute securities representing such sales, even though Glass–Steagall would generally prohibit a national bank underwriting or distributing non-governmental securities (i.e., non-"bank-eligible" securities). The federal courts upheld the OCC's approval of Security Pacific's securitization activities, with the Supreme Court refusing in 1990 to review a 1989 Second Circuit decision sustaining the OCC's action. In arguing that the GLBA's "repeal" of Glass–Steagall played no role in the financial crisis of 2007–2008, Melanie Fein notes courts had confirmed by 1990 the power of banks to securitize their assets under Glass–Steagall. The Second Circuit stated banks had been securitizing their assets for "ten years" before the OCC's 1987 approval of Security Pacific's securitization. As noted above, the OCC had approved such activity in 1978. Jan Kregel argues that the OCC's interpretation of the "incidental powers" of national banks "ultimately eviscerated Glass–Steagall." Continental Illinois Bank is often credited with issuing the first collateralized debt obligation (CDO) when, in 1987, it issued securities representing interests in a pool of "leveraged loans." By the late 1980s Citibank had become a major provider of "subprime" mortgages and credit cards. Arthur Wilmarth argued that the ability to securitize such credits encouraged banks to extend more "subprime" credit. Wilmarth reported that during the 1990s credit card loans increased at a faster pace for lower-income households than higher-income households and that subprime mortgage loan volume quadrupled from 1993–99, before the GLBA became effective in 2000. In 1995 Wilmarth noted that commercial bank mortgage lenders differed from nonbank lenders in retaining "a significant portion of their mortgage loans" rather than securitizing the entire exposure. 
Wilmarth also shared the bank regulator concern that commercial banks sold their "best assets" in securitizations and retained their riskiest assets. ABCP conduits and SIVs In the early 1980s commercial banks established asset backed commercial paper conduits (ABCP conduits) to finance corporate customer receivables. The ABCP conduit purchased receivables from the bank customer and issued asset-backed commercial paper to finance that purchase. The bank "advising" the ABCP conduit provided loan commitments and "credit enhancements" that supported repayment of the commercial paper. Because the ABCP conduit was owned by a third party unrelated to the bank, it was not an affiliate of the bank. Through ABCP conduits banks could earn "fee income" and meet "customers' needs for credit" without "the need to maintain the amount of capital that would be required if loans were extended directly" to those customers. By the late 1980s Citibank had established ABCP conduits to buy securities. Such conduits became known as structured investment vehicles (SIVs). The SIV's "arbitrage" opportunity was to earn the difference between the interest earned on the securities it purchased and the interest it paid on the ABCP and other securities it issued to fund those purchases. OTC derivatives, including credit default swaps In the early 1980s commercial banks began entering into interest rate and currency exchange "swaps" with customers. This "over-the-counter derivatives" market grew dramatically throughout the 1980s and 1990s. In 1996 the OCC issued "guidelines" for national bank use of "credit default swaps" and other "credit derivatives." Banks entered into "credit default swaps" to protect against defaults on loans. Banks later entered into such swaps to protect against defaults on securities. Banks acted both as "dealers" in providing such protection (or speculative "exposure") to customers and as "hedgers" or "speculators" to cover (or create) their own exposures to such risks. Commercial banks became the largest dealers in swaps and other over-the-counter derivatives. Banking regulators ruled that swaps (including credit default swaps) were part of the "business of banking," not "securities" under the Glass–Steagall Act. Commercial banks entered into swaps that replicated part or all of the economics of actual securities. Regulators eventually ruled banks could even buy and sell equity securities to "hedge" this activity. Jan Kregel argues the OCC's approval of bank derivatives activities under bank "incidental powers" constituted a "complete reversal of the original intention of preventing banks from dealing in securities on their own account." Glass–Steagall developments from 1995 to Gramm–Leach–Bliley Act Leach and Rubin support for Glass–Steagall "repeal"; need to address "market realities" On January 4, 1995, the new Chairman of the House Banking Committee, Representative James A. Leach (R-IA), introduced a bill to repeal Glass–Steagall Sections 20 and 32. After being confirmed as Treasury Secretary, Robert Rubin announced on February 28, 1995, that the Clinton Administration supported such Glass–Steagall repeal. Repeating themes from the 1980s, Leach stated Glass–Steagall was "out of synch with reality" and Rubin argued "it is now time for the laws to reflect changes in the world's financial system." Leach and Rubin expressed a widely shared view that Glass–Steagall was "obsolete" or "outdated." 
As described above, Senator Proxmire and Representative Markey (despite their long-time support for Glass–Steagall) had earlier expressed the same conclusion. With his reputation for being "conservative" on expanded bank activities, former Federal Reserve Board Chairman Paul Volcker remained an influential commentator on legislative proposals to permit such activities. Volcker continued to testify to Congress in opposition to permitting banks to affiliate with commercial companies and in favor of repealing Glass–Steagall Sections 20 and 32 as part of "rationalizing" bank involvement in securities markets. Supporting the Leach and Rubin arguments, Volcker testified that Congressional inaction had forced banking regulators and the courts to play "catch-up" with market developments by "sometimes stretching established interpretations of law beyond recognition." In 1997 Volcker testified this meant the "Glass–Steagall separation of commercial and investment banking is now almost gone" and that this "accommodation and adaptation has been necessary and desirable." He stated, however, that the "ad hoc approach" had created "uneven results" that created "almost endless squabbling in the courts" and an "increasingly advantageous position competitively" for "some sectors of the financial service industry and particular institutions." Similar to the GAO in 1988 and Representative Markey in 1990, Volcker asked that Congress "provide clear and decisive leadership that reflects not parochial pleadings but the national interest." Reflecting the regulatory developments Volcker noted, the commercial and investment banking industries largely reversed their traditional Glass–Steagall positions. Throughout the 1990s (and particularly in 1996), commercial banking firms became content with the regulatory situation Volcker described. They feared "financial modernization" legislation might bring an unwelcome change. Securities firms came to view Glass–Steagall more as a barrier to expanding their own commercial banking activities than as protection from commercial bank competition. The securities industry became an advocate for "financial modernization" that would open a "two-way street" for securities firms to enter commercial banking. Status of arguments from 1980s While the need to create a legal framework for existing bank securities activities became a dominant theme for the "financial modernization" legislation supported by Leach, Rubin, Volcker, and others, after the GLBA repealed Glass–Steagall Sections 20 and 32 in 1999, commentators identified four main arguments for repeal: (1) increased economies of scale and scope, (2) reduced risk through diversification of activities, (3) greater convenience and lower cost for consumers, and (4) improved ability of U.S. financial firms to compete with foreign firms. By 1995, however, some of these concerns (which had been identified by the Congressional Research Service in 1987) seemed less important. As Japanese banks declined and U.S.-based banks were more profitable, "international competitiveness" did not seem to be a pressing issue. International rankings of banks by size also seemed less important when, as Alan Greenspan later noted, "Federal Reserve research had been unable to find economies of scale in banking beyond a modest size." Still, advocates of "financial modernization" continued to point to the combination of commercial and investment banking in nearly all other countries as an argument for "modernization", including Glass–Steagall "repeal."
Similarly, the failure of the Sears Financial Network and other nonbank "financial supermarkets" that had seemed to threaten commercial banks in the 1980s undermined the argument that financial conglomerates would be more efficient than "specialized" financial firms. Critics questioned the "diversification benefits" of combining commercial and investment banking activities. Some questioned whether the higher variability of returns in investment banking would stabilize commercial banking firms through "negative correlation" (i.e., cyclical downturns in commercial and investment banking occurring at different times) or instead increase the probability of the overall banking firm failing. Others questioned whether any theoretical benefits in holding a passive "investment portfolio" combining commercial and investment banking would be lost in managing the actual combination of such activities. Critics also argued that specialized, highly competitive commercial and investment banking firms were more efficient in competitive global markets. Starting in the late 1980s, John H. Boyd, a staff member of the Federal Reserve Bank of Minneapolis, consistently questioned the value of size and product diversification in banking. In 1999, as Congress was considering legislation that became the GLBA, he published an essay arguing that the "moral hazard" created by deposit insurance, too big to fail (TBTF) considerations, and other governmental support for banking should be resolved before commercial banking firms could be given "universal banking" powers. Although Boyd's 1999 essay was directed at "universal banking" that permitted commercial banks to own equity interests in non-financial firms (i.e., "commercial firms"), the essay was interpreted more broadly to mean that "expanding bank powers, by, for example, allowing nonbank firms to affiliate with banks, prior to undertaking reforms limiting TBTF-like coverage for uninsured bank creditors is putting the 'cart before the horse.'" Despite these arguments, advocates of "financial modernization" predicted consumers and businesses would enjoy cost savings and greater convenience in receiving financial services from integrated "financial services firms." After the GLBA repealed Sections 20 and 32, commentators also noted the importance of scholarly attacks on the historic justifications for Glass–Steagall as supporting repeal efforts. Throughout the 1990s, scholars continued to produce empirical studies concluding that commercial bank affiliate underwriting before Glass–Steagall had not demonstrated the "conflicts of interest" and other defects claimed by Glass–Steagall proponents. By the late 1990s a "remarkably broad academic consensus" existed that Glass–Steagall had been "thoroughly discredited." Although he rejected this scholarship, Martin Mayer wrote in 1997 that since the late 1980s it had been "clear" that continuing the Glass–Steagall prohibitions was only "permitting a handful of large investment houses and hedge funds to charge monopoly rents for their services without protecting corporate America, investors, or the banks." Hyman Minsky, who disputed the benefits of "universal banking," wrote in 1995 testimony prepared for Congress that "repeal of the Glass–Steagall Act, in itself, would neither benefit nor harm the economy of the United States to any significant extent."
In 1974 Mayer had quoted Minsky as stating a 1971 presidential commission (the "Hunt Commission") was repeating the errors of history when it proposed relaxing Glass–Steagall and other legislation from the 1930s. With banking commentators such as Mayer and Minsky no longer opposing Glass–Steagall repeal, consumer and community development advocates became the most prominent critics of repeal and of financial "modernization" in general. Helen Garten argued that bank regulation became dominated by "consumer" issues, which produced "a largely unregulated, sophisticated wholesale market and a highly regulated, retail consumer market." In the 1980s Representative Fernand St. Germain (D-RI), as chairman of the House Banking Committee, sought to tie any Glass–Steagall reform to requirements for free or reduced cost banking services for the elderly and poor. Democratic Representatives and Senators made similar appeals in the 1990s. During Congressional hearings to consider the various Leach bills to repeal Sections 20 and 32, consumer and community development advocates warned against the concentration of "economic power" that would result from permitting "financial conglomerates" and argued that any repeal of Sections 20 and 32 should mandate greater consumer protections, particularly free or low cost consumer services, and greater community reinvestment requirements. Failed 1995 Leach bill; expansion of Section 20 affiliate activities; merger of Travelers and Citicorp By 1995 the ability of banks to sell insurance was more controversial than Glass–Steagall "repeal." Representative Leach tried to avoid conflict with the insurance industry by producing a limited "modernization" bill that repealed Glass–Steagall Sections 20 and 32, but did not change the regulation of bank insurance activities. Leach's efforts to separate insurance from securities powers failed when the insurance agent lobby insisted any banking law reform include limits on bank sales of insurance. Similar to Senator Proxmire in 1988, Representative Leach responded to the House's inaction on his Glass–Steagall "repeal" bill by writing the Federal Reserve Board in June 1996 encouraging it to increase the limit on Section 20 affiliate bank-ineligible revenue. When the Federal Reserve Board increased the limit to 25% in December 1996, the Board noted the Securities Industry Association (SIA) had complained this would mean even the largest Wall Street securities firms could affiliate with commercial banks. The SIA's prediction proved accurate two years later when the Federal Reserve Board applied the 25% bank-ineligible revenue test in approving Salomon Smith Barney (SSB) becoming an affiliate of Citibank through the merger of Travelers and Citicorp to form the Citigroup bank holding company. The Board noted that, although SSB was one of the largest US securities firms, less than 25% of its revenue was "bank-ineligible." Citigroup could only continue to own the Travelers insurance underwriting business for two (or, with Board approval, five) years unless the Bank Holding Company Act was amended (as it was through the GLBA) to permit affiliations between banks and underwriters of property, casualty, and life insurance. Citigroup's ownership of SSB, however, was permitted without any law change under the Federal Reserve Board's existing Section 20 affiliate rules. In 2003, Charles Geisst, a Glass–Steagall supporter, told Frontline the Federal Reserve Board's Section 20 orders meant the Federal Reserve "got rid of the Glass–Steagall Act." 
Former Federal Reserve Board Vice-Chairman Alan Blinder agreed the 1996 action increasing "bank-ineligible" revenue limits was "tacit repeal" of Glass–Steagall, but argued "that the market had practically repealed Glass–Steagall, anyway." Shortly after approving the merger of Citicorp and Travelers, the Federal Reserve Board announced its intention to eliminate the 28 "firewalls" that required separation of Section 20 affiliates from their affiliated bank and to replace them with "operating standards" based on 8 of the firewalls. The change permitted banks to lend to fund purchases of, and otherwise provide credit support to, securities underwritten by their Section 20 affiliates. This left Federal Reserve Act Sections 23A (which originated in the 1933 Banking Act and regulated extensions of credit between a bank and any nonbank affiliate) and 23B (which required all transactions between a bank and its nonbank affiliates to be on "arm's-length" market terms) as the primary restrictions on banks providing credit to Section 20 affiliates or to securities underwritten by those affiliates. Sections 23A and 23B remained the primary restrictions on commercial banks extending credit to securities affiliates, or to securities underwritten by such affiliates, after the GLBA repealed Glass–Steagall Sections 20 and 32. 1997–98 legislative developments: commercial affiliations and Community Reinvestment Act In 1997 Representative Leach again sponsored a bill to repeal Glass–Steagall Sections 20 and 32. At first the main controversy was whether to permit limited affiliations between commercial firms and commercial banks. Securities firms (and other financial services firms) complained that unless they could retain their affiliations with commercial firms (which the Bank Holding Company Act forbade for a commercial bank), they would not be able to compete equally with commercial banks. The Clinton Administration proposed that Congress either permit a small "basket" of commercial revenue for bank holding companies or that it retain the "unitary thrift loophole" that permitted a commercial firm to own a single savings and loan. Representative Leach, House Banking Committee Ranking Member Henry Gonzalez (D-TX), and former Federal Reserve Board Chairman Paul Volcker opposed such commercial affiliations. Meanwhile, in 1997 Congressional Quarterly reported Senate Banking Committee Chairman Al D'Amato (R-NY) rejected Treasury Department pressure to produce a financial modernization bill because banking firms (such as Citicorp) were satisfied with the competitive advantages they had received from regulatory actions and were not really interested in legislative reforms. Reflecting the process Paul Volcker had described, as financial reform legislation was considered throughout 1997 and early 1998, Congressional Quarterly reported how different interest groups blocked legislation and sought regulatory advantages. The "compromise bill" the House Republican leadership sought to bring to a vote in March 1998 was opposed by the commercial banking industry as favoring the securities and insurance industries. The House Republican leadership withdrew the bill in response to the banking industry opposition, but vowed to bring it back when Congress returned from recess. Commentators described the April 6, 1998, merger announcement between Travelers and Citicorp as the catalyst for the House passing that bill by a single vote (214-213) on May 13, 1998. 
Citicorp, which had opposed the bill in March, changed its position to support the bill along with the few other large commercial banking firms that had supported it in March for improving their ability to compete with "foreign banks." The Clinton Administration issued a veto threat for the House passed bill, in part because the bill would eliminate "the longstanding right of unitary thrift holding companies to engage in any lawful business," but primarily because the bill required national banks to conduct expanded activities through holding company subsidiaries rather than the bank "operating subsidiaries" authorized by the OCC in 1996. On September 11, 1998, the Senate Banking Committee approved a bipartisan bill with unanimous Democratic member support that, like the House-passed bill, would have repealed Glass–Steagall Sections 20 and 32. The bill was blocked from Senate consideration by the Committee's two dissenting members (Phil Gramm (R-TX) and Richard Shelby (R-AL)), who argued it expanded the Community Reinvestment Act (CRA). Four Democratic senators (Byron Dorgan (D-ND), Russell Feingold (D-WI), Barbara Mikulski (D-MD), and Paul Wellstone (D-MN)) stated they opposed the bill for its repeal of Sections 20 and 32. 1999 Gramm–Leach–Bliley Act In 1999 the main issues confronting the new Leach bill to repeal Sections 20 and 32 were (1) whether bank subsidiaries ("operating subsidiaries") or only nonbank owned affiliates could exercise new securities and other powers and (2) how the CRA would apply to the new "financial holding companies" that would have such expanded powers. The Clinton Administration agreed with Representative Leach in supporting "the continued separation of banking and commerce." The Senate Banking Committee approved in a straight party line 11-9 vote a bill (S. 900) sponsored by Senator Gramm that would have repealed Glass–Steagall Sections 20 and 32 and that did not contain the CRA provisions in the Committee's 1998 bill. The nine dissenting Democratic Senators, along with Senate Minority Leader Thomas Daschle (D-SD), proposed as an alternative (S. 753) the text of the 1998 Committee bill with its CRA provisions and the repeal of Sections 20 and 32, modified to provide greater permission for "operating subsidiaries" as requested by the Treasury Department. Through a partisan 54-44 vote on May 6, 1999 (with Senator Fritz Hollings (D-SC) providing the only Democratic Senator vote in support), the Senate passed S. 900. The day before, Senate Republicans defeated (in a 54-43 vote) a Democratic sponsored amendment to S. 900 that would have substituted the text of S. 753 (also providing for the repeal of Glass–Steagall Sections 20 and 32). On July 1, 1999, the House of Representatives passed (in a bipartisan 343-86 vote) a bill (H.R. 10) that repealed Sections 20 and 32. The Clinton Administration issued a statement supporting H.R. 10 because (unlike the Senate passed S. 900) it accepted the bill's CRA and operating subsidiary provisions. On October 13, 1999, the Federal Reserve and Treasury Department agreed that direct subsidiaries of national banks ("financial subsidiaries") could conduct securities activities, but that bank holding companies would need to engage in merchant banking, insurance, and real estate development activities through holding company, not bank, subsidiaries. 
On October 22, 1999, Senator Gramm and the Clinton Administration agreed a bank holding company could only become a "financial holding company" (and thereby enjoy the new authority to affiliate with insurance and securities firms) if all its bank subsidiaries had at least a "satisfactory" CRA rating. After these compromises, a joint Senate and House Conference Committee reported out a final version of S. 900 that was passed on November 4, 1999, by the House in a vote of 362-57 and by the Senate in a vote of 90-8. President Clinton signed the bill into law on November 12, 1999, as the Gramm–Leach–Bliley Financial Modernization Act of 1999 (GLBA). The GLBA repealed Sections 20 and 32 of the Glass–Steagall Act, not Sections 16 and 21. The GLBA also amended Section 16 to permit "well capitalized" commercial banks to underwrite municipal revenue bonds (i.e., non-general obligation bonds), as first approved by the Senate in 1967. Otherwise, Sections 16 and 21 remained in effect, regulating the direct securities activities of banks and prohibiting securities firms from taking deposits. After March 11, 2000, bank holding companies could expand their securities and insurance activities by becoming "financial holding companies." Aftermath of repeal Please see the main article, Glass–Steagall: Aftermath of repeal, which has sections for the following: Section 1, Commentator response to Section 20 and 32 repeal Section 2, Financial industry developments after repeal of Sections 20 and 32 Section 3, Glass–Steagall "repeal" and the financial crisis That article also contains information on proposals to reenact Glass–Steagall, in whole or in part, and on alternative proposals intended to have a similar effect. External links Glass-Steagall Act of 1933 References See also the References list (citations) in the main article, Glass–Steagall Act. 73rd United States Congress Federal Deposit Insurance Corporation 20th century in American law Legal history of the United States United States federal banking legislation United States repealed legislation Financial regulation in the United States Separation of investment and retail banking
426533
https://en.wikipedia.org/wiki/Adaptive%20chosen-ciphertext%20attack
Adaptive chosen-ciphertext attack
An adaptive chosen-ciphertext attack (abbreviated as CCA2) is an interactive form of chosen-ciphertext attack in which an attacker may submit ciphertexts of its choice to a decryption oracle, choosing later queries based on the answers to earlier ones, and then uses the results to distinguish (or decrypt) a target ciphertext without submitting the target itself to the oracle. In an adaptive attack, the attacker is further allowed to make such queries after the target ciphertext has been revealed (although the target query itself remains disallowed). This extends the indifferent (non-adaptive) chosen-ciphertext attack (CCA1), in which the second stage of adaptive queries is not allowed. Charles Rackoff and Dan Simon defined CCA2 and suggested a system building on the non-adaptive CCA1 definition and system of Moni Naor and Moti Yung (which was the first treatment of chosen-ciphertext attack immunity of public-key systems). In certain practical settings, the goal of this attack is to gradually reveal information about an encrypted message, or about the decryption key itself. For public-key systems, adaptive chosen-ciphertext attacks are generally applicable only when the ciphertexts are malleable — that is, when a ciphertext can be modified in specific ways that have a predictable effect on the decryption of that message. Practical attacks Adaptive chosen-ciphertext attacks were largely considered a theoretical concern, not manifested in practice, until 1998, when Daniel Bleichenbacher, then of Bell Laboratories, demonstrated a practical attack against systems using RSA encryption in concert with the PKCS #1 v1 encoding function, including a version of the Secure Sockets Layer (SSL) protocol used by thousands of web servers at the time. The Bleichenbacher attack, also known as the million-message attack, took advantage of flaws within the PKCS #1 function to gradually reveal the content of an RSA-encrypted message. Doing this requires sending several million test ciphertexts to the decryption device (e.g., an SSL-equipped web server). In practical terms, this means that an SSL session key can be exposed in a reasonable amount of time, perhaps a day or less. With slight variations, this vulnerability still exists in many modern servers, under the name "Return Of Bleichenbacher's Oracle Threat" (ROBOT). Preventing attacks To prevent adaptive chosen-ciphertext attacks, an encryption or encoding scheme that limits ciphertext malleability must be used, together with a proof of security for the system. After the theoretical and foundational development of CCA-secure systems, a number of systems were proposed in the random oracle model; the most common standard for RSA encryption is Optimal Asymmetric Encryption Padding (OAEP). Unlike improvised schemes such as the padding used in early versions of PKCS #1, OAEP has been proven secure in the random oracle model. OAEP was incorporated into PKCS #1 as of version 2.0, published in 1998, as the now-recommended encoding scheme, with the older scheme still supported but not recommended for new applications. However, the gold standard for security is to show the system secure without relying on the random oracle idealization. Mathematical model In complexity-theoretic cryptography, security against adaptive chosen-ciphertext attacks is commonly modeled using ciphertext indistinguishability (IND-CCA2). References Cryptographic attacks
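The malleability mentioned above is easiest to see with unpadded ("textbook") RSA. The following is a minimal sketch, not the Bleichenbacher attack itself, using tiny illustrative parameters; real deployments use large keys and padding such as OAEP precisely to destroy this multiplicative structure.

```python
# Minimal sketch: textbook RSA is multiplicatively malleable, the kind of
# property an adaptive chosen-ciphertext attacker can exploit.
# All numbers are tiny, illustrative values only.

p, q = 61, 53                       # toy primes (illustrative)
n = p * q                           # modulus = 3233
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+ modular inverse)

def encrypt(m):                     # textbook RSA, no padding
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

m = 42
c = encrypt(m)

# An attacker who never sees m can alter c in a predictable way:
# E(m) * E(k) mod n decrypts to m * k mod n.
k = 2
c_prime = (c * encrypt(k)) % n

assert decrypt(c_prime) == (m * k) % n
print(decrypt(c_prime))             # 84 — a predictable function of the unknown m
```

Because the decryption of the modified ciphertext is a predictable function of the original plaintext, an attacker with access to a decryption (or padding-check) oracle can submit many such related ciphertexts and gradually learn the message, which is the general strategy behind attacks of this class.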
28687887
https://en.wikipedia.org/wiki/BIW%20Technologies
BIW Technologies
BIW Technologies (BIW) was a privately held British company providing web-based electronic construction collaboration technologies (also sometimes described as project management or project extranet systems), to customers in the construction and property sectors. It was acquired by a German company, Conject, in December 2010, and adopted its parent company's branding in April 2012. In March 2016, the Conject group was acquired by Australia-based rival, Aconex, which, in December 2017, was acquired by Oracle Corporation. History Having purchased the rights to a prototype Software-as-a-Service (SaaS) application already being piloted by Sainsbury's and BAA, CEO Colin Smith and his fellow founder directors established the company in London in early 2000 as interest in construction-oriented dot.com businesses began to peak in the UK. Working with such 'blue chip' clients helped BIW market the platform to both existing project supply chains and to new customers, and the business grew rapidly. By 2003, according to independent research by Compagnia, BIW could claim 26.4% of the UK market, and it was achieving annual revenues of £2.7m, comfortably ahead of its then nearest UK rival, BuildOnline. For a period 2001–2004, BIW also provided its software to Asite which traded as a reseller of BIW's platform until it launched its own collaboration system and became a direct competitor. In 2003, BIW was a founder member of the Network of Construction Collaboration Technology Providers (NCCTP), then managed by CIRIA and later part of Constructing Excellence. From 2000–2005, Sir Michael Latham, author of the influential Latham Report, served as non-executive Deputy Chairman of BIW. In January 2005, the company relocated its head office from London to Woking in Surrey. It has maintained a UK software development centre in Nottingham since 2000, and established an offshore software development centre in Vadodara, Gujarat, India in October 2007. It opened a Middle East office in Dubai in 2006. In 2008, it formed a partnership with Sage to deliver its collaboration solutions to clients in North America. In April 2006, BIW Technologies won the 'Entrepreneur of the Year' category in the industry's annual Building Awards, being described as a "firm that seems intent on wiping paper use out of the industry altogether". BIW's customers included Robertson, Sainsbury's, BAA, Argent Group plc, Mace, Bovis Lend Lease, Land Securities, Defence Estates, Interserve and Marks & Spencer. By 2010, it claimed its software was used by over 190,000 registered users in over 14,000 organisations. Finances BIW remained independent of construction industry investors, with most of its initial funding coming from private investors, who held shares in a holding company, BIW plc. According to Companies House submissions, BIW Technologies Ltd achieved revenues of £7.3m in the year to 30 September 2008, generating a profit of £1.1m. At this time, the company was reported to be planning a £25m to £40m listing on London's AIM. However, the company suffered during the post credit-crunch recession partly due to its exposure to the Dubai market, with revenues cut to £5.9m and BIW declaring a pre-tax loss of £731,000. The company recapitalised in September 2009, with BIW plc being put into administration, a process that rendered the company debt-free. 
Acquisition and merger In December 2010 BIW Technologies was acquired by German-based Conject Holdings GmbH in a deal to unite providers of ILM software applications for the engineering, construction and real estate industries. BIW and Conject AG formed a new group offering applications to support asset-based projects throughout their infrastructure lifecycle: from concept, design and construction through to facilities management. With offices in nine countries, the group had combined revenues of around €18 million, with 180 employees and approximately 170,000 active users. In March 2016, the Conject group was acquired by Australia-based rival, Aconex. In December 2017, Aconex was acquired by Oracle Corporation. External links BIW Technologies website References Software companies of the United Kingdom Technology companies established in 2000
18639424
https://en.wikipedia.org/wiki/Whole%20Earth%20Software%20Catalog%20and%20Review
Whole Earth Software Catalog and Review
The Whole Earth Software Catalog and The Whole Earth Software Review (1984–1985) were two publications produced by Stewart Brand's Point Foundation as an extension of The Whole Earth Catalog. Overview Fred Turner discusses the production and eventual demise of both the Catalog and Review in From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. Turner notes that in 1983, The Whole Earth Software Catalog was proposed by John Brockman as a magazine which "would do for computing what the original [Whole Earth Catalog] had done for the counterculture: identify and recommend the best tools as they emerged." Brand announced the first publication of the quarterly Whole Earth Software Review at the SoftCon trade show at the Louisiana Superdome in New Orleans in March 1984. While both were published as an extension of Whole Earth, the Catalog was a large glossy book sponsored by Doubleday and published in Sausalito, California, while the Review was a small periodical published in San Diego. The Catalog and Review were notable for being "devoid of any industry advertising" and for being "accessible and user friendly - written in a glib, conversational style that takes most of the bugs out of microprocessing." The Whole Earth Software Catalog and Review were both business failures, however. The Catalog was published only twice and The Whole Earth Software Review three times. Meanwhile, another Brand publication, CoEvolution Quarterly, had evolved out of the original Whole Earth Supplement in 1974. In 1985, Brand merged CoEvolution Quarterly with The Whole Earth Software Review to create the Whole Earth Review. This is also indicated in the issues themselves. Fall 1984, Issue No. 43 is titled The Last CoEvolution Quarterly. The cover also states, "Next issue is 'Whole Earth Review': livelier snake, new skin." In January 1985, Issue No. 44 was titled Whole Earth Review: Tools and Ideas for the Computer Age. The cover also reads "The continuation of CoEvolution Quarterly and Whole Earth Software Review." In an article titled "Whole Earth Software Catalog Version 1.1," Stewart Brand states that there are three intended audiences for the new Whole Earth Review: a) the audience of The Whole Earth Software Catalog, b) the audience of The Whole Earth Software Review and c) the audience of CoEvolution Quarterly (Stewart Brand, "Whole Earth Software Catalog Version 1.1", Whole Earth Review, No. 44, Sausalito, CA, January 1985, p. 74). Bibliography Whole Earth Software Review: Spring 1984, Issue 1; Summer 1984, Issue 2; Fall 1984, Issue 3. Whole Earth Software Catalog: Whole Earth Software Catalog, Spring 1984, by Point; Whole Earth Software Catalog for 1986 (2.0 edition), 1984, 1985 by Point (Winter 1986). Notes References Elmer-DeWitt, Philip. "The Stepchild Comes of Age." TIME, Mar. 05, 1984. Lehmann-Haupt, Christopher. Books of the Times: Whole Earth Software Catalog. New York Times, October 3, 1984. Sylva, Bob. SCENE. Sacramento Bee, November 13, 1984, p. CO1 Turner, Fred External links BOOKS OF THE TIMES: Whole Earth Software Catalog Defunct magazines published in the United States Whole Earth Catalog Quarterly magazines published in the United States Magazines established in 1984 Magazines disestablished in 1985
636597
https://en.wikipedia.org/wiki/CNAME%20record
CNAME record
A Canonical Name record (abbreviated as CNAME record) is a type of resource record in the Domain Name System (DNS) that maps one domain name (an alias) to another (the canonical name). This can prove convenient when running multiple services (like an FTP server and a web server, each running on different ports) from a single IP address. One can, for example, point ftp.example.com and www.example.com to the DNS entry for example.com, which in turn has an A record which points to the IP address. Then, if the IP address ever changes, one only has to record the change in one place within the network: in the DNS A record for example.com. CNAME records must always point to another domain name, never directly to an IP address. Details DNS CNAME records are specified in RFC 1034 and clarified in Section 10 of RFC 2181. CNAME records are handled specially in the domain name system, and have several restrictions on their use. When a DNS resolver encounters a CNAME record while looking for a regular resource record, it will restart the query using the canonical name instead of the original name. (If the resolver is specifically told to look for CNAME records, the canonical name (right-hand side) is returned, rather than restarting the query.) The canonical name that a CNAME record points to can be anywhere in the DNS, whether local or on a remote server in a different DNS zone. For example, if there is a DNS zone as follows: NAME TYPE VALUE -------------------------------------------------- bar.example.com. CNAME foo.example.com. foo.example.com. A 192.0.2.23 when an A record lookup for bar.example.com is carried out, the resolver will see a CNAME record and restart the checking at foo.example.com and will then return 192.0.2.23. Possible confusion With a CNAME record, one can point a name such as "bar.example.com" to "foo.example.com." Because of this, during casual discussion the bar.example.com. (left-hand) side of a DNS entry can be incorrectly identified as "the CNAME" or "a CNAME." However, this is inaccurate. The canonical (true) name of "bar.example.com." is "foo.example.com." Because CNAME stands for Canonical Name, the right-hand side is the actual "CNAME"; on the same side as the address "A". This confusion is specifically mentioned in RFC 2181, "Clarifications to the DNS Specification." The left-hand label is an alias for the right-hand side (the RDATA portion), which is (or should be) a canonical name. In other words, a CNAME record like this: bar.example.com. CNAME foo.example.com. may be read as: bar.example.com is an alias for the canonical name (CNAME) foo.example.com. A client will request bar.example.com and the answer will be foo.example.com. Restrictions DNAME record A DNAME record or Delegation Name record is defined by RFC 6672 (original RFC 2672 is now obsolete). A DNAME record creates an alias for an entire subtree of the domain name tree. In contrast, the CNAME record creates an alias for a single name and not its subdomains. Like the CNAME record, the DNS lookup will continue by retrying the lookup with the new name. The name server synthesizes a CNAME record to actually apply the DNAME record to the requested name—CNAMEs for every node on a subtree have the same effect as a DNAME for the entire subtree. For example, if there is a DNS zone as follows: foo.example.com. DNAME bar.example.com. bar.example.com. A 192.0.2.23 xyzzy.bar.example.com. A 192.0.2.24 *.bar.example.com. 
A 192.0.2.25 An A record lookup for foo.example.com will return no data because a DNAME is not a CNAME and there is no A record directly at foo. However, a lookup for xyzzy.foo.example.com will be DNAME mapped and return the A record for xyzzy.bar.example.com, which is 192.0.2.24; if the DNAME record had been a CNAME record, this request would have returned a name-not-found error (NXDOMAIN). Lastly, a request for foobar.foo.example.com would be DNAME mapped and return 192.0.2.25. ANAME record Several managed DNS platforms implement a non-standard ALIAS or ANAME record type. These pseudo-records are managed by DNS administrators like CNAME records, but are published and resolved by (some) DNS clients like A records. ANAME records are typically configured to point to another domain, but when queried by a client, they answer with an IP address. An effort was made to standardize the ANAME record type, but many non-conforming implementations probably exist, so ANAME records can behave however the owner of the DNS platform chooses, including existing at the apex of a zone and existing for domains that receive mail. One possible advantage of ANAME records over CNAME records is speed; a DNS client requires at least two queries to resolve a CNAME, first to the canonical name and then to its A record and IP address, while only one query is necessary to resolve an ANAME to an IP address. The assumption is that the DNS server can resolve the A record and cache the requested IP address more efficiently and with less latency than its DNS clients can. The ANAME record type was a draft standard being considered by the IETF, but the latest draft document expired in January 2020. See also List of DNS record types Internet Assigned Numbers Authority ICANN References External links RFC 1912 is wrong – Meng Weng Wong's analysis of CNAME restrictions – Use of DNS Aliases for Network Services DNS record types
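To see the aliasing described above from a client's point of view, the short sketch below uses Python's standard library resolver interface. It is only illustrative: the query name is an example, the output depends on live DNS data, and the CNAME-following restart is performed by the system resolver, not by this code.

```python
# Minimal sketch: observing CNAME aliasing from a client's perspective.
# socket.gethostbyname_ex() returns (canonical_name, alias_list, ip_addresses).
# If the queried name is an alias (CNAME), it appears in alias_list and the
# canonical name is reported separately; the alias list may be empty if the
# name has its own A records. "www.example.com" is just an example query.

import socket

def show_aliasing(name: str) -> None:
    canonical, aliases, addresses = socket.gethostbyname_ex(name)
    print(f"queried name : {name}")
    print(f"canonical    : {canonical}")   # right-hand side of any CNAME chain
    print(f"aliases      : {aliases}")     # names that are aliases for 'canonical'
    print(f"A records    : {addresses}")

if __name__ == "__main__":
    show_aliasing("www.example.com")
```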
13634965
https://en.wikipedia.org/wiki/Peer-to-peer%20video%20sharing
Peer-to-peer video sharing
Peer-to-peer video sharing is a basic service on top of the IP Multimedia Subsystem (IMS). Early proprietary implementations might also run on a simple SIP infrastructure instead. The GSM Association calls it "Video Share". The peer-to-peer video sharing functionality is defined by Phase 1 of the GSMA Video Share service. For a more detailed description of the full GSMA Video Share service, please see the Wikipedia entry for Video Share. The most basic form is typically connected to a classical circuit-switched (CS) telephone call. While talking on the CS line, the speaker can start a multimedia IMS session in parallel. The session is normally a video stream, with audio being optional (since there is an audio session already open on the CS domain). It is also possible to share photos or files. P2P video sharing does not actually require a full IMS implementation. It could work with a pure IETF Session Initiation Protocol (SIP) infrastructure and simple HTTP Digest authentication. However, mobile operators may want to use it without username/password provisioning and the related fraud problems. One possible solution is the Early IMS Authentication method. In the future, USIM/ISIM-based authentication could be introduced as well. The IMS therefore adds the extra security and management features that a mobile operator normally requires by default. Early implementation by Nokia The early Nokia implementation requires the manual setting of an attribute in the phone book. When the video session is triggered (by simply pulling down the back-side camera cover on a 6680), the video sharing client looks up the destination URI based on the MSISDN number of the B party of the current open CS voice call. Video sharing is possible only if this number has a valid entry in the phone book with a valid URI for the SIP call. However, this method is not really scalable, since the user has to enter very complex strings into the phone book manually. Because this service does not involve any application server, it is difficult to make a good business model for it. Usually, the first commercial services were based on the idea that video sharing would increase the length of voice sessions, and the resulting increased revenue would be enough to cover the costs of the video sharing service. History P2P video sharing was introduced by Nokia in 2004. Two major operators started commercial implementations: "Turbo Call" from Telecom Italia Mobile (TIM) in Italy and Telecomunicações Móveis Nacionais, SA (TMN) in Portugal. The first handsets to support P2P video sharing were the Nokia 6630 and 6680. The 6680 was especially suited to starting video sharing, having a slider on top of the back-side camera. Later the Nokia N70 was added to the commercially supported handsets. Popularity TIM Italy reported about 10% penetration (based on the potentially available customers with appropriate handsets). Supported handsets Nokia 6630, 6680 Nokia N70 Nokia 5230 References External links https://web.archive.org/web/20071011005614/http://gsmworld.com/sip/e2e/videoshare.shtml - GSM Association Video Share homepage http://sw.nokia.com/id/ced67f36-2a98-4f21-9277-209bb4a2429c/Video_Sharing.pdf - Technical description on the Forum Nokia site https://web.archive.org/web/20080821215654/http://press.nokia.com/PR/200502/980522_5.html - Announcement of the "Turbo Call" service from TIM in cooperation with Nokia IMS services
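As a rough illustration of the mechanism described above (the client mapping the CS call's B-party MSISDN to a provisioned SIP URI, then offering a video-only media session while audio stays on the CS call), the following Python sketch uses entirely hypothetical phone-book entries, addresses, ports and codec parameters; it is not Nokia's implementation or any operator's configuration.

```python
# Minimal sketch of two ideas from the article, under assumed data:
# (1) mapping the CS call's B-party MSISDN to a provisioned SIP URI, and
# (2) building a video-only SDP offer (audio remains on the CS call).
# All values below are hypothetical, illustrative placeholders.

from typing import Optional

phone_book = {
    # MSISDN           -> SIP URI entered by the user (hypothetical entry)
    "+391234567890": "sip:alice@ims.example.net",
}

def lookup_sip_uri(b_party_msisdn: str) -> Optional[str]:
    """Return the SIP URI stored for the current CS call's B party, if any."""
    return phone_book.get(b_party_msisdn)

def video_only_sdp(local_ip: str, rtp_port: int) -> str:
    """Build a minimal SDP offer with a single video media line and no audio."""
    return "\r\n".join([
        "v=0",
        f"o=- 0 0 IN IP4 {local_ip}",
        "s=video-share",
        f"c=IN IP4 {local_ip}",
        "t=0 0",
        f"m=video {rtp_port} RTP/AVP 96",
        "a=rtpmap:96 H263-2000/90000",
        "a=sendonly",                     # one-way share from the caller's camera
    ]) + "\r\n"

uri = lookup_sip_uri("+391234567890")
if uri:                                   # sharing is possible only if a URI is provisioned
    print("INVITE target:", uri)
    print(video_only_sdp("192.0.2.10", 49170))
```

In a real deployment, the SIP/IMS stack on the handset would send this offer in an INVITE to the looked-up URI; the sketch only shows how the two pieces of information fit together.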
3389934
https://en.wikipedia.org/wiki/CaBIG
CaBIG
The cancer Biomedical Informatics Grid (caBIG) was a US government program to develop an open-source, open access information network called caGrid for secure data exchange on cancer research. The initiative was developed by the National Cancer Institute (part of the National Institutes of Health) and was maintained by the Center for Biomedical Informatics and Information Technology (CBIIT). In 2011 a report on caBIG raised significant questions about effectiveness and oversight, and its budget and scope were significantly trimmed. In May 2012, the National Cancer Informatics Program (NCIP) was created as caBIG's successor program. History The National Cancer Institute (NCI) of the United States funded the cancer Biomedical Informatics Grid (caBIG) initiative in spring 2004, headed by Kenneth Buetow. Its goal was to connect US biomedical cancer researchers using technology known as grid computing. The program, led by the Center for Bioinformatics and Information Technology (CBIIT), began with a 3-year pilot phase. The pilot phase concluded in March 2007, and a trial was announced. Buetow promoted the program in 2008. In addition to caGrid, the underlying infrastructure for data sharing among organizations, caBIG developed software tools, data sharing policies, and common standards and vocabularies to facilitate data sharing. Software tools targeted: Collection, analysis, and management of basic research data Clinical trials management, from patient enrollment to adverse event reporting and analysis Collection, annotation, sharing, and storage of medical imaging data Biospecimen management caBIG sought to provide foundational technology for an approach to biomedicine it called a “learning healthcare system.” This relies on the rapid exchange of information among all sectors of research and care, so that researchers and clinicians are able to collaboratively review and accurately incorporate the latest findings into their work. The ultimate goal was to speed the biomedical research process. It was also promoted for what is often called Personalized Medicine. caBIG technology was used in adaptive clinical trials such as the Investigation of Serial studies to Predict Your Therapeutic Response with Imaging and molecular AnaLysis 2 (I-SPY2), which was designed to use biomarkers to determine the appropriate therapy for women with advanced breast cancer. Health information technology Health information technology (HIT) was promoted for management and secure exchange of medical information among researchers, health care providers, and consumers. HIT initiatives mentioning caBIG were: NCI and the American Society of Clinical Oncology initiated a collaboration to create an oncology-specific electronic health record system using caBIG standards for interoperability and that will enable oncologists to manage patient information in an electronic format that accurately captures the specific interventional issues unique to oncology. The Nationwide Health Information Network was an initiative to share patient clinical data across geographically disparate sources and create electronically linked national health information exchange. It might be somehow related. Collaborations A BIG Health Consortium was formed in 2008 to promote personalized medicine, but disbanded in 2012. In July 2009, caBIG announced a collaboration with the Dr. Susan Love Research Foundation to build an online cohort of women willing to participate in clinical trials. 
Called the Army of Women, it had a goal of one million in its database; by December 2009 the site was "launched", and about 30,000 women and men had signed up by 2010. The Cancer Genome Atlas aimed to characterize more than 10,000 tumors across at least 20 cancers by 2015. caBIG provided connectivity, data standards, and tools to collect, organize, share, and analyze the diverse research data in its database. From 2007, NCI worked with the UK National Cancer Research Institute (NCRI). The two organizations shared technologies for collaborative research and the secure exchange of research data using caGrid and the NCRI Oncology Information Exchange (ONIX) web portal, announced in August 2009. ONIX shut down in March 2012. The Duke Cancer Institute used caBIG clinical trials tools in their collaboration with the Beijing Cancer Hospital of Peking University. Implementation The project intended to connect 65 NCI-designated cancer centers to enable collaborative research. Participating institutions could either “adopt” caBIG tools to share data directly through caGrid, or “adapt” commercial or in-house developed software to be caBIG-compatible. The caBIG program developed software development kits (SDKs) for interoperable software tools, and instructions on the process of adapting existing tools or developing applications to be caBIG-compatible. The Enterprise Support Network program included domain-specific expertise and support service providers, third-party organizations that provided assistance on a contract-for-services basis. A web portal using the Liferay software was available from 2008 to 2013. Open source Since 2004, the caBIG program used open-source communities, adapted from other public-private partnerships. The caBIG program produced software under contract to software development teams largely within the commercial research community. In general, software developed under US government contracts is the property of the US government and the US taxpayers. Depending on the terms in specific contracts, such software might be accessible only by request under the Freedom of Information Act (FOIA). The timeliness of response to such requests might preclude a requester from ever gaining any secondary value from software released under a FOIA request. The caBIG program placed all caBIG software in a software repository freely accessible for download. Open source means anyone can modify the downloaded software; however, the licensing applied to the downloaded software allows greater flexibility than is typical. An individual or enterprise is allowed to contribute the modified code back to the caBIG program but is not required to do so. Likewise, the modifications can be made available as open source but are not required to be made available as open source. The caBIG licensing even allows the use of the caBIG applications and components, combined with additions and modifications, to be released as commercial products. These aspects of the caBIG program actually encouraged commercialization of caBIG technology. Results In 2008, GlaxoSmithKline announced it would share cancer cell genomic data with caBIG. Some private companies claimed benefits from caBIG technology in 2010. A caGrid community web site was created in 2007. The 1.x version of the core software was added to a GitHub project in mid-2013, under the BSD 3-Clause license. It used version 4.03 of the Globus Toolkit, and the Taverna workbench system to manage workflow and the Business Process Execution Language. 
Software called Introduce was developed around 2006. Contributors included the Ohio State University Center for Clinical and Translational Science, and private companies Ekagra Software Technologies and Semantic Bits. Criticism By 2008, some questioned if the program was benefiting large pharmaceutical companies. By 2011, the project had spent an estimated $350 million. Although the goal was considered laudable, much of the software was unevenly adopted after being developed at great expense to compete with commercial offerings. In March 2011, an NCI working group assessment concluded that caBIG "...expanded far beyond those goals to implement an overly complex and ambitious software enterprise of NCI-branded tools, especially in the Clinical Trial Management System (CTMS) space. These have produced limited traction in the cancer community, compete against established commercial vendors, and create financially untenable long-term maintenance and support commitments for the NCI". In 2012, the NCI announced a new program, the National Cancer Informatics Program (NCIP), as a successor to caBIG. caGrid The caGrid computer network and software supported the cancer Biomedical Informatics Grid (caBIG) initiative of the National Cancer Institute of the US National Institutes of Health. caBIG was a voluntary virtual informatics infrastructure that connected data, research tools, scientists, and organizations. In 2013, the National Cancer Informatics Program (NCIP) re-released caGrid under the BSD 3-Clause license, and migrated the source repository to GitHub. caGrid used version 4.03 of the Globus Toolkit, produced by the Globus Alliance. Portal The caGrid Portal was a Web-based application built on Liferay that enabled users to discover and interact with the services available on the caGrid infrastructure. The Portal served as the primary visualization tool for the caGrid middleware. It also served as a caBIG information source. Through the caGrid Portal, users had access to information about caBIG participants, caGrid points of contact (POCs), and caGrid-related news and events. Workflow caGrid workflow used: Active BPEL Taverna Contributors Ohio State University University of Chicago, Argonne National Laboratory SemanticBits, LLC Ekagra Software Technologies Criticism In March 2011, the NCI published an extensive review of caBIG, the NCI CBIIT program that funded the caGrid software development, which included a long list of problems with the program and recommended that most of the software development projects be discontinued. References Further reading Abernethy AP, Coeytauz R, Rowe K, Wheeler JL, Lyerly HK. Electronic patient-reported data capture as the foundation of a learning health care system. JCO. 2009;27:6522. Buetow KH. caBIG: proof of concept for personalized cancer care. JCO. 2009:27 Suppl 15S:e20712. 
“Health IT gets personal,” InformationWeek (11/13/09) “Health data in the raw,” Government Health IT (11/6/09) “NCI to open research grid to cancer patient 'army',” Government Health IT (10/9/09) “GridBriefing: The future of Healthcare - eHealth and Grid Computing,” GridTalk (9/09) “Collaboration and Sustainability are Front and Center as caBIG Celebrates Fifth Anniversary,” GenomeWeb/BioInform (7/09) “Sharing the Wealth of Data,” Scientific American (5/09) “Translational Research Drives Demand for 'Virtual' Biobanks Built on caBIG Tools,” GenomeWeb/BioInfom (4/3/09) External links caBIG Consumer/User Website (non-technical) caBIG Community Website (technical) caGrid wiki caGrid gforge project caGrid Portal Components Introduce Toolkit, also a Globus Incubator Project Data Services Metadata Security Credential Delegation Service (CDS) Dorian GAARDS Grid Grouper Grid Trust Service (GTS) WebSSO - Web Single Sign-on component, based on JASIG CAS Bioinformatics Cancer research Grid computing products National Institutes of Health Oncology
62378822
https://en.wikipedia.org/wiki/Half-Life%3A%20Alyx
Half-Life: Alyx
Half-Life: Alyx is a virtual reality first-person shooter game developed and published by Valve. It was released in 2020 for Windows and Linux with support for most PC-compatible VR headsets. Set between the events of Half-Life (1998) and Half-Life 2 (2004), players control Alyx Vance on a mission to seize a superweapon belonging to the alien Combine. Like previous Half-Life games, Alyx incorporates combat, puzzles, exploration and survival horror. Players use VR to interact with the environment and fight enemies, using "gravity gloves" to manipulate objects, similarly to the gravity gun from Half-Life 2. The previous Half-Life game, Episode Two, was released in 2007 and ended on a cliffhanger. Valve made several attempts to develop further Half-Life games, but could not settle on a direction, and its flat management structure made it difficult for projects to gather momentum. In the mid-2010s, Valve began experimenting with VR, and released The Lab, a collection of VR minigames, in 2016. Recognizing the demand for a major VR game, they experimented with prototypes using their various intellectual properties such as Portal, and found that Half-Life best suited VR. Half-Life: Alyx entered production using Valve's new Source 2 engine in 2016, with the largest team in Valve's history, including members of Campo Santo, a studio Valve acquired in 2018. VR affected almost every aspect of the design, including combat, movement, level design, and pacing. Valve initially planned to launch Alyx alongside its Index VR headset in 2019, but delayed it following internal feedback about the story by new writer Rob Yescombe; Erik Wolpaw and Jay Pinkerton rejoined Valve to rewrite it. Half-Life: Alyx received acclaim for its graphics, voice acting, narrative, and atmosphere, and has been cited as VR's first killer app. It was nominated for numerous awards and won "Best VR/AR" at the 2020 Game Awards. Gameplay Half-Life: Alyx is a virtual reality game (VR) that supports all SteamVR-compatible VR headsets, which include the Valve Index, HTC Vive, Oculus Rift, Oculus Quest and all Windows Mixed Reality headsets. As the gameplay was designed for VR, Valve said they had no plans for a non-VR version. Half-Life: Alyx also supports user mods via the Steam Workshop. Players control Freeman's ally Alyx Vance as she and her father Eli Vance fight the Combine, an alien empire that has conquered Earth. Designer David Speyrer said Alyx was not an episodic game or side story, but "the next part of the Half-Life story", around the same length as Half-Life 2. Players use VR to get supplies, use interfaces, throw objects, and engage in combat. Like the gravity gun from Half-Life 2, the gravity gloves allow players to pick up objects from a distance. The game includes traditional Half-Life elements such as exploration, puzzles, combat, and story. While the game is primarily a first-person shooter, it adds elements of the survival horror genre, as health and ammo are more scarce, and includes frightening encounters. Players can physically move around in room scale to move Alyx in-game. Alternatively, players can use analog sticks on the VR controllers to move Alyx traditionally, teleport to nearby points, or use an intermediate mode to "glide" to selected points. When teleporting, the game simulates the movement even though the action appears instantaneous, and Alyx may die if attacked or moved from too great a height. Plot Half-Life: Alyx takes place five years before the events of Half-Life 2. 
Earth has been conquered by the alien Combine, who have implemented a brutal police state. In City 17, teenage Alyx and her father Dr. Eli Vance, both members of a human resistance movement, are stealing Combine resources. After they are captured by the Combine, resistance member Russell rescues Alyx and warns her that Eli will be transported to Nova Prospekt for interrogation. To intercept the train carrying Eli, Alyx ventures into the quarantine zone, an area inside City 17 infested with parasitic alien life. Along the way, she meets an eccentric Vortigaunt, who asks Alyx to save his fellow Vortigaunts and foresees that Eli will die. Inside the Quarantine Zone, Alyx derails the train and the Vortigaunt rescues Eli from the wreckage. While in custody, Eli has learned that the Combine are storing a superweapon in a massive floating vault in the quarantine zone; he instructs Alyx to find the vault and steal its contents. Alyx ventures through the Quarantine Zone, contending with aliens and Combine soldiers. She shuts down a power station keeping the vault afloat, and discovers that each station contains enslaved Vortigaunts forced to channel their energies to the vault. The Vortigaunt she rescues promises that he and the others will take down the remaining power stations. Eli contacts Alyx and warns her that the vault does not contain a weapon; it is a prison built around an apartment complex to contain something the Combine discovered. Alyx, Eli and Russell reason that whatever is inside can help them fight the Combine. As she approaches the vault, Alyx overhears a scientist mentioning to Combine superiors that the occupant is a survivor of the Black Mesa incident. Assuming the survivor is Gordon Freeman, Alyx aims to mount a rescue and crashes the vault to the ground. Inside the vault, Alyx finds the apartment complex around which it was built. Physical phenomena permeate the building; objects float in antigravity, and mirrored rooms are stacked on top of one another. Alyx discovers an advanced prison cell in its center. She breaks it open, expecting to find Freeman, but instead releases the mysterious G-Man. As a reward, the G-Man offers his services to Alyx. She requests he remove the Combine from Earth, but the G-Man stresses that this request would contradict the interests of his "employers". He instead transfers her to the future, and offers her the chance to change the outcome of Eli's death at the hands of a Combine Advisor at the end of Half-Life 2: Episode Two. Alyx complies, killing the Advisor and saving her father. The G-Man informs Alyx that she has proven herself capable of replacing Freeman, with whom the G-Man has grown dissatisfied. He traps Alyx in stasis and leaves. Gordon regains consciousness in the Resistance's White Forest base. Eli is alive, but Alyx is missing. Eli realizes what has happened to Alyx, declares his intention to kill the G-Man, and hands Gordon his crowbar. Background After the release of Half-Life 2 in 2004, Valve began developing a trilogy of episodic sequels, planning to release shorter games more frequently. Half-Life 2: Episode One was released in 2006, followed by Episode Two in 2007, which ended on a cliffhanger. Episode Three was scheduled for 2008, but was delayed as its scope expanded. 
Designer Robin Walker said that Valve used the Half-Life series to "solve some interesting collision of technology and art that had reared itself"; when working on Episode Three, Valve failed to find a unifying idea that provided a sense of "wonderment, or opening, or expansion". They abandoned episodic development and made several failed attempts to develop further Half-Life projects. Walker blamed the lack of progress on Valve's flat management structure, whereby employees decide what to work on themselves. He said the team eventually decided that "we would all be happier if we worked on a big thing, even if it’s not exactly what we wanted to work on". Valve decided to complete its new engine, Source 2, before beginning a new game, as developing Half-Life 2 and the original Source engine simultaneously had created problems. In 2016 and 2017, Half-Life writers Marc Laidlaw, Erik Wolpaw, Jay Pinkerton and Chet Faliszek left Valve; coupled with Valve's support for their other franchises, journalists took the departures as an indicator that new Half-Life games were no longer in development. In 2015, Valve collaborated with the electronics company HTC to develop the HTC Vive, a virtual reality headset released in 2016. Valve president Gabe Newell aimed for Valve to become more like Nintendo, which develops games in tandem with hardware and allows them to create innovative games such as Super Mario 64. Valve experimented with VR, and in 2016 released The Lab, a collection of VR minigames. Valve recognized that many players wanted a more ambitious VR AAA game, and began exploring the development of a major VR game. Walker wondered if they could develop a VR "killer app", as the influential FPS Doom had been in 1993. Valve developed several VR prototypes, with three projects under development by 2017. Finding that the portal systems of their puzzle series Portal were disorienting in VR, they settled on Half-Life. Walker said that Half-Life 3 had been a "terrifyingly daunting prospect", and the team saw VR as a way to return to the series. Additionally, Valve anticipated that fans would react negatively if Half-Life 3 were a VR-only game, and felt that a prequel carried less weight. Development Half-Life: Alyx entered development around February 2016, and entered full production later that year. The team, comprising around 80 people, was the largest in Valve's history, and included Campo Santo, a studio Valve acquired in 2018. As Valve had repeatedly failed to see projects through, some staff were reluctant to join and many were skeptical that VR was the right direction. Valve built prototypes using Half-Life 2 assets, and narrowed the gameplay systems to those they felt best fit VR. They found that the Half-Life systems were a "surprisingly natural fit" for VR, but that VR affected almost every aspect of the design, including combat, level design and pacing; for example, shooting in VR, which requires the player to physically position their hand in space, is a different experience from aiming with traditional mouse-and-keyboard controls. Valve did not develop a non-VR version of Alyx, as they were confident that the game would only be possible in VR. They anticipated that fans would modify it to run without VR equipment; though this bothered some on the team, Walker was not concerned, as he believed it would offer an inferior experience that would demonstrate why they had chosen VR. 
In late 2018, Valve held a company-wide playtest of the entire game; the results convinced the team that VR had been the right choice. The final weeks of development took place remotely due to the COVID-19 pandemic. Movement To mitigate the problem of motion sickness in VR, Valve implemented several different movement options. They cited inspiration from the 2018 VR game Budget Cuts, which uses teleporting to move the player between locations. Valve had assumed that teleportation would damage the experience; however, though teleporting appears jarring when watching others use it, they found that players quickly became accustomed to it. According to Walker, "It recedes to the background of your mind, and you become much more focused on what you're doing with it." To disincentivize players from quickly teleporting through levels, Valve filled areas with elements to capture their attention and slow them down, such as threats, collectables or other elements of interest. To solve the problem of taller players having to crouch when moving through some spaces, Valve standardized their virtual body size when they teleport, effectively making every player the same height when teleporting. They found that players did not notice this discrepancy as they were focused on moving to their goal. Combat Every weapon can be used one-handed, as Valve wanted players to have a hand free to interact with the world at all times. The crowbar, an iconic weapon from previous Half-Life games, was omitted as Valve could not make melee combat work in VR, and because players would accidentally catch it on objects in the game world as they moved, creating confusion. Additionally, players associated the crowbar with Gordon Freeman, the protagonist of previous games; Valve wanted to create a different identity for Alyx, portraying her as a "hacker and tinkerer". Other discarded weapon concepts include a trip mine, slingshot, shield, and rocket launcher. As players move at more realistic speeds in VR compared to typical FPS games, Valve had to adjust enemies to make combat fair and fun. Antlions, returning enemies from Half-Life 2, would quickly overwhelm players; the team slowed their movement and added the ability to shoot their legs off to slow them down. Fast zombies and fast headcrabs, enemies introduced in Half-Life 2, were cut as they were too frightening for some players in VR. According to designer Dario Casali, "The shock of having [fast headcrabs] come around the corner and latch onto you before you'd even know what was going on was just too much." Casting Merle Dandridge reprised her role as Alyx for initial recording sessions in March 2019, but after playtests indicated that Alyx needed a younger voice, Ozioma Akagha was cast in September 2019. Akagha had to avoid using irritation in her performance, as "you don't want someone in your head that sounds irritated with you". Additional actors include James Moses Black as Eli, replacing Robert Guillaume, who died in 2017, and Rhys Darby as Russell, who added comedic elements. Returning actors include Tony Todd as the alien Vortigaunts, Mike Shapiro as the G-Man, and Ellen McLain as the voice of the Combine broadcasts. Shapiro recorded his lines in one 20-minute take, with pickups in 2019. Cissy Jones (Olga) and Rich Sommer (Larry, Russell's Drone, and Combine Soldiers) were cast at the suggestion of writer Sean Vanaman, who had worked with them on Campo Santo's Firewatch (2016). 
Music Mike Morasky, composer for Portal 2 and Team Fortress 2, composed for Alyx in consultation with Kelly Bailey, composer for previous Half-Life games. He cited industrial music by acts such as Nine Inch Nails, the Prodigy and Skinny Puppy as inspiration. Writing Former Half-Life writers Erik Wolpaw and Jay Pinkerton turned down invitations to return to Valve early in the Alyx development. Instead, Valve recruited writer Rob Yescombe of The Invisible Hours, who worked on Alyx in 2017 and 2018. Yescombe's narrative was darker than other Half-Life games, with scenes of dread, torture and horror. The antagonist was a female Combine officer named Hahn; in one proposed ending, Alyx would kill Hahn in revenge for torturing her father. Yescombe also proposed an ending in which Alyx and the G-Man would travel back in time to the events of the first Half-Life to prevent Freeman from triggering the alien invasion. After the company-wide playtest in 2018, feedback was overwhelmingly positive but for the story, which employees scored the lowest of any Valve game. Morasky described it as "dark, serious and laborious", likening it to a Zack Snyder superhero film. Designer Corey Peters said the team had a "strong feeling" about the story and that it had been "validating" to get the feedback. Valve initially planned to launch Alyx alongside its Index VR headset in June 2019, but delayed it to address the story. They re-enlisted Wolpaw and Pinkerton to rewrite the plot and dialogue from scratch while preserving the gameplay. They were joined by writers Jake Rodkin and Sean Vanaman, who had joined the company when Valve acquired Campo Santo. Marc Laidlaw, who retired from Valve in 2016, provided consultation. The new writers identified three problems. First, the nature of a prequel meant that players knew the characters would survive, reducing suspense. Second, the story did not have an impact on the overall Half-Life story; the writers did not want it to feel "just like a hermetically sealed short story in the world of Half-Life". Third, the game had to end with an encounter with G-Man, essentially a god, giving Alyx something for freeing him; the writers could not imagine what this could be. Walker said the team did not want the ending to be something "you could just ignore", and knew that fans had been in a "narrative limbo" since Episode Two, which they wanted to change. Having Alyx and the G-Man travel forward in time and rescue Eli at the end of Episode Two was suggested by character artist Jim Murray. The team was reluctant, as this undid the Episode Two cliffhanger, but were intrigued by the questions it raised about the world and how it pushed the Half-Life story forward. The change required Valve to create new assets, such as the Episode Two White Forest base and models for Dog, an older Eli, and Gordon Freeman. The red herring, wherein Alyx believes she is rescuing Gordon Freeman before discovering the G-Man, was conceived by Vanaman late in production; as there was no character model for the Combine scientist Alyx overhears, the scene was animated in shadow play. While previous Valve games use silent protagonists, the writers found that having Alyx speak improved the storytelling. They added radio dialog between Alyx and Russell as a simple way to "bring the energy up" whenever needed. The final script ran to 280 pages, compared to 128 pages for Half-Life 2 and 18 for Half-Life. Release Valve announced Half-Life: Alyx in November 2019. 
It was free to owners of Valve Index headsets or controllers. Valve waited until the game was almost complete before announcing it, aiming to avoid the delays of previous games. They were conscious that players, having waited years for a new Half-Life game, might be disappointed by a VR game, and tightly managed the announcement. To promote Alyx, Valve made all prior Half-Life games free on Steam from January 2020 until its release. Alyx was released on March 23, 2020. Valve released a Linux version on May 15, 2020, along with Vulkan rendering support for both platforms. A pre-release build was mistakenly leaked on Steam that included non-VR developer tools, allowing interactions such as picking up objects and firing weapons. However, most of the basic interactions such as pressing buttons or filling Alyx's backpack could not be completed with the "use" key. When asked about plans for future Half-Life games, designer David Speyrer said the team was willing but were waiting for the reaction to Alyx. According to Walker, "We absolutely see Half-Life: Alyx as our return to this world, not the end of it." Mod support tools for the Source 2 engine and Steam Workshop support for Half-Life: Alyx were released on May 15, 2020. Valve plans to release a new Hammer level editor for Source 2. Valve also plans to release a partial Source 2 software development kit (SDK) for the updated features at a later date, with the focus at launch on shipping and supporting Alyx. Reception Half-Life: Alyx received "universal acclaim", according to review aggregator Metacritic. By April 2020, it was one of the 20 highest-rated PC games on Metacritic. Reviewers at publications such as VG247, Tom's Hardware, and Video Games Chronicle described it as VR's "killer app". The announcement trailer was watched over 10 million times within the first 24 hours of its release. Though most fans expressed excitement, some were disappointed that the game was only available in VR, a small but growing market in 2019. Before the game's release, Vic Hood of TechRadar expressed enthusiasm for Alyx, but wrote that "we forever live in hope for a Half-Life 3". Kevin Webb of Business Insider wrote that Alyx could "spark fresh interest in an industry [VR] that has struggled to win over hardcore gamers". Andrew King of USGamer also suggested that Half-Life: Alyx would be the "make or break VR Jesus Moment" for the modding community, in whether the players would be interested and be capable of using the tools provided by Valve to produce new creations that took advantage of VR space, as modification within VR space had traditionally been difficult to work with prior to this point. Alyx also won the Easy Allies 2020 awards for both Best World Design and Game of the Year. In Polygon, Ben Kuchera wrote of how Alyx used VR to transform traditional FPS systems; for example, he felt that reloading guns, traditionally done with a single button press, was more fun in VR. He wrote: "The magic lies in being inside the world, being able to touch it, and interact with it, directly. The game’s design and pacing would lose all meaning if played as a standard game, even if more players would be able to experience the story for its own sake." Sales On its first day of release, Half-Life: Alyx had 43,000 concurrent players. According to Niko Partners analyst Daniel Ahmad, this was a successful launch by VR standards, matching Beat Sabers peak concurrent users. However, Ahmad noted that it was clear "the numbers are held back due to the VR requirement". 
Valve's Greg Coomer said Valve knew many people would not play the game on launch, and that its audience was "relatively small right now". Newell described it as a "forward investment" into long-term technologies. The Valve Index headset, controllers, and base stations all sold out in the United States, Canada, and Europe within a week of the game's announcement. By mid-January 2020, they were sold out in all 31 regions the units were offered. According to Superdata, Valve sold 103,000 Index units in the fourth quarter of 2019 as a result of the Alyx announcements compared to the total 149,000 sold throughout 2019, and it was the highest-selling VR headset for PCs during that quarter. Though Valve had expected to supply several Index pre-orders in time for the release of Alyx, the COVID-19 pandemic limited their supply chain. Awards Half-Life: Alyx won "Game of the Year" at the 2020 VR Awards. At the Game Awards 2020, it was nominated for "Best Game Direction", "Best Audio Design", and "Best Action", and won for "Best VR/AR" game. At the 17th British Academy Games Awards, it was nominated for "Best Game", "Game Direction", "Audio Achievement", and "Artistic Achievement". References External links 2020 video games Alien invasions in video games Dystopian video games Half-Life (series) HTC Vive games Interquel video games Linux games Oculus Rift games Rebellions in fiction Single-player video games Source 2 games Valve Index games Video game prequels Video games about zombies Video games developed in the United States Video games featuring black protagonists Video games featuring female protagonists Video games scored by Mike Morasky Video games set in Europe Video games with commentaries Video games with Steam Workshop support Video games with user-generated gameplay content Virtual reality games Windows games
1564205
https://en.wikipedia.org/wiki/Real-time%20computer%20graphics
Real-time computer graphics
Real-time computer graphics or real-time rendering is the sub-field of computer graphics focused on producing and analyzing images in real time. The term can refer to anything from rendering an application's graphical user interface (GUI) to real-time image analysis, but is most often used in reference to interactive 3D computer graphics, typically using a graphics processing unit (GPU). One example of this concept is a video game that rapidly renders changing 3D environments to produce an illusion of motion. Computers have been capable of generating 2D images such as simple lines, images and polygons in real time since their invention. However, quickly rendering detailed 3D objects is a daunting task for traditional Von Neumann architecture-based systems. An early workaround to this problem was the use of sprites, 2D images that could imitate 3D graphics. Different techniques for rendering now exist, such as ray-tracing and rasterization. Using these techniques and advanced hardware, computers can now render images quickly enough to create the illusion of motion while simultaneously accepting user input. This means that the user can respond to rendered images in real time, producing an interactive experience. Principles of real-time 3D computer graphics The goal of computer graphics is to generate computer-generated images, or frames, using certain desired metrics. One such metric is the number of frames generated in a given second. Real-time computer graphics systems differ from traditional (i.e., non-real-time) rendering systems in that non-real-time graphics typically rely on ray tracing. In this process, millions or billions of rays are traced from the camera to the world for detailed rendering—this expensive operation can take hours or days to render a single frame. Real-time graphics systems must render each image in less than 1/30th of a second. Ray tracing is far too slow for these systems; instead, they employ the technique of z-buffer triangle rasterization. In this technique, every object is decomposed into individual primitives, usually triangles. Each triangle gets positioned, rotated and scaled on the screen, and rasterizer hardware (or a software emulator) generates pixels inside each triangle. These triangles are then decomposed into atomic units called fragments that are suitable for displaying on a display screen. The fragments are drawn on the screen using a color that is computed in several steps. For example, a texture can be used to "paint" a triangle based on a stored image, and then shadow mapping can alter that triangle's colors based on line-of-sight to light sources. Video game graphics Real-time graphics optimizes image quality subject to time and hardware constraints. GPUs and other advances increased the image quality that real-time graphics can produce. GPUs are capable of handling millions of triangles per frame, and current DirectX 11/OpenGL 4.x class hardware is capable of generating complex effects, such as shadow volumes, motion blurring, and triangle generation, in real-time. The advancement of real-time graphics is evidenced in the progressive improvements between actual gameplay graphics and the pre-rendered cutscenes traditionally found in video games. Cutscenes are typically rendered in real-time—and may be interactive. Although the gap in quality between real-time graphics and traditional off-line graphics is narrowing, offline rendering remains much more accurate. 
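The z-buffer triangle rasterization described above can be made concrete with a short software sketch. The following Python fragment is a minimal, illustrative rasterizer for a single flat-colored triangle; the buffer sizes, the edge-function inside test and the per-triangle color are simplifying assumptions rather than a description of real GPU hardware, which performs the same per-fragment depth comparison massively in parallel and with perspective-correct interpolation.

```python
# Minimal software sketch of z-buffer triangle rasterization (illustrative only).
# Each triangle is scan-converted into candidate fragments; a fragment is kept
# only if it is closer to the camera than whatever was previously drawn there.

WIDTH, HEIGHT = 320, 240
color_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]
depth_buffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]

def edge(ax, ay, bx, by, px, py):
    """Signed edge function of (a -> b) against point p, used for inside tests."""
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax)

def rasterize_triangle(v0, v1, v2, color):
    """v0, v1, v2 are (x, y, z) tuples already mapped to screen space."""
    min_x = max(int(min(v[0] for v in (v0, v1, v2))), 0)
    max_x = min(int(max(v[0] for v in (v0, v1, v2))), WIDTH - 1)
    min_y = max(int(min(v[1] for v in (v0, v1, v2))), 0)
    max_y = min(int(max(v[1] for v in (v0, v1, v2))), HEIGHT - 1)
    area = edge(v0[0], v0[1], v1[0], v1[1], v2[0], v2[1])
    if area == 0:
        return  # degenerate triangle, nothing to draw
    for y in range(min_y, max_y + 1):
        for x in range(min_x, max_x + 1):
            # Barycentric weights from the three edge functions.
            w0 = edge(v1[0], v1[1], v2[0], v2[1], x, y)
            w1 = edge(v2[0], v2[1], v0[0], v0[1], x, y)
            w2 = edge(v0[0], v0[1], v1[0], v1[1], x, y)
            inside = (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
                     (w0 <= 0 and w1 <= 0 and w2 <= 0)
            if inside:
                z = (w0 * v0[2] + w1 * v1[2] + w2 * v2[2]) / area  # interpolate depth
                if z < depth_buffer[y][x]:   # the depth ("z-buffer") test
                    depth_buffer[y][x] = z
                    color_buffer[y][x] = color

# Example: one red triangle given in screen-space coordinates.
rasterize_triangle((20, 30, 0.5), (300, 60, 0.7), (160, 220, 0.3), (255, 0, 0))
print(sum(row.count((255, 0, 0)) for row in color_buffer), "fragments written")
```

The essential point is the depth test: a fragment only replaces the stored color if its interpolated z value is nearer than the value the depth buffer already holds for that pixel.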
Advantages Real-time graphics are typically employed when interactivity (e.g., player feedback) is crucial. When real-time graphics are used in films, the director has complete control of what has to be drawn on each frame, which can sometimes involve lengthy decision-making. Teams of people are typically involved in the making of these decisions. In real-time computer graphics, the user typically operates an input device to influence what is about to be drawn on the display. For example, when the user wants to move a character on the screen, the system updates the character's position before drawing the next frame. Usually, the display's response-time is far slower than the input device—this is justified by the immense difference between the (fast) response time of a human being's motion and the (slow) perspective speed of the human visual system. This difference has other effects too: because input devices must be very fast to keep up with human motion response, advancements in input devices (e.g., the current Wii remote) typically take much longer to achieve than comparable advancements in display devices. Another important factor controlling real-time computer graphics is the combination of physics and animation. These techniques largely dictate what is to be drawn on the screen—especially where to draw objects in the scene. These techniques help realistically imitate real world behavior (the temporal dimension, not the spatial dimensions), adding to the computer graphics' degree of realism. Real-time previewing with graphics software, especially when adjusting lighting effects, can increase work speed. Some parameter adjustments in fractal generating software may be made while viewing changes to the image in real time. Rendering pipeline The graphics rendering pipeline ("rendering pipeline" or simply "pipeline") is the foundation of real-time graphics. Its main function is to render a two-dimensional image in relation to a virtual camera, three-dimensional objects (an object that has width, length, and depth), light sources, lighting models, textures and more. Architecture The architecture of the real-time rendering pipeline can be divided into conceptual stages: application, geometry and rasterization. Application stage The application stage is responsible for generating "scenes", or 3D settings that are drawn to a 2D display. This stage is implemented in software that developers optimize for performance. This stage may perform processing such as collision detection, speed-up techniques, animation and force feedback, in addition to handling user input. Collision detection is an example of an operation that would be performed in the application stage. Collision detection uses algorithms to detect and respond to collisions between (virtual) objects. For example, the application may calculate new positions for the colliding objects and provide feedback via a force feedback device such as a vibrating game controller. The application stage also prepares graphics data for the next stage. This includes texture animation, animation of 3D models, animation via transforms, and geometry morphing. Finally, it produces primitives (points, lines, and triangles) based on scene information and feeds those primitives into the geometry stage of the pipeline. Geometry stage The geometry stage manipulates polygons and vertices to compute what to draw, how to draw it and where to draw it. Usually, these operations are performed by specialized hardware or GPUs. 
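As a rough sketch of the geometry-stage arithmetic detailed in the subsections that follow (model and view transformation, projection, and screen mapping), the Python fragment below carries a single vertex from world space to pixel coordinates. The 4x4 row-major matrix layout, the 60-degree field of view and the right-handed, OpenGL-style convention (camera looking down the negative z-axis) are assumptions chosen for illustration, not a description of any particular hardware pipeline.

```python
# Illustrative geometry-stage arithmetic: transform one vertex by a view matrix,
# apply a perspective projection, then map the result to pixel coordinates.

import math

def mat_mul_vec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-component column vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def perspective(fov_y_deg, aspect, near, far):
    """Standard right-handed perspective projection matrix."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), (2.0 * far * near) / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def to_screen(world_point, view, width, height):
    """Project a world-space point and map it to screen (pixel) coordinates."""
    proj = perspective(60.0, width / height, 0.1, 100.0)
    x, y, z, w = mat_mul_vec(proj, mat_mul_vec(view, [*world_point, 1.0]))
    ndc_x, ndc_y = x / w, y / w                      # perspective divide
    screen_x = (ndc_x * 0.5 + 0.5) * width           # screen mapping
    screen_y = (1.0 - (ndc_y * 0.5 + 0.5)) * height  # flip y for raster convention
    return screen_x, screen_y

# Example: identity view matrix (camera at the origin looking down -z).
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
print(to_screen((0.0, 0.0, -5.0), identity, 640, 480))  # roughly (320.0, 240.0)
```

In a real pipeline the same transform is applied to every vertex of every primitive, usually by dedicated GPU hardware, before clipping and rasterization.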
Variations across graphics hardware mean that the "geometry stage" may actually be implemented as several consecutive stages. Model and view transformation Before the final model is shown on the output device, the model is transformed onto multiple spaces or coordinate systems. Transformations move and manipulate objects by altering their vertices. Transformation is the general term for the four specific ways that manipulate the shape or position of a point, line or shape. Lighting In order to give the model a more realistic appearance, one or more light sources are usually established during transformation. However, this stage cannot be reached without first transforming the 3D scene into view space. In view space, the observer (camera) is typically placed at the origin. If using a right-handed coordinate system (which is considered standard), the observer looks in the direction of the negative z-axis with the y-axis pointing upwards and the x-axis pointing to the right. Projection Projection is a transformation used to represent a 3D model in a 2D space. The two main types of projection are orthographic projection (also called parallel) and perspective projection. The main characteristic of an orthographic projection is that parallel lines remain parallel after the transformation. Perspective projection utilizes the concept that if the distance between the observer and model increases, the model appears smaller than before. Essentially, perspective projection mimics human sight. Clipping Clipping is the process of removing primitives that are outside of the view box in order to facilitate the rasterizer stage. Once those primitives are removed, the primitives that remain will be drawn into new triangles that reach the next stage. Screen mapping The purpose of screen mapping is to find out the coordinates of the primitives during the clipping stage. Rasterizer stage The rasterizer stage applies color and turns the graphic elements into pixels or picture elements. History Computer animation has been around since the 1940s, but it was not until the '70s that 3D techniques were implemented. The first step towards 3D graphics, but not real-time graphics, was taken in 1972 by Edwin Catmull and Fred Parke. Their implementation featured a computer-generated hand that was created using wire-frame imagery, solid shading, and finally smooth shading. In '72 and '74, Parke made a video of a Gouraud-shaded woman's face that changed its expression. 3D graphics have reached the point where animated humans look almost entirely realistic. One obstacle is the uncanny valley. However, human beings are the hardest models to create to the point of photorealism, and so many animated films stick to anthropomorphic animals, monsters, or dinosaurs. For an example of realistic human animation, the 2007 film Beowulf showcases 3D graphics that get close to fooling the human eye. The film was created using 3D motion capture technology. See also Bounding interval hierarchy Demoscene Geometry instancing Optical feedback Quartz Composer Real time (media) Real-time raytracing Video art Video display controller References Bibliography External links RTR Portal – a trimmed-down "best of" set of links to resources Computer graphics Computer graphics
459552
https://en.wikipedia.org/wiki/List%20of%20Macintosh%20software
List of Macintosh software
The following is a list of Macintosh software – notable computer applications for current macOS operating systems. For software designed for the classic Mac OS, see List of old Macintosh software. Anti-malware software The software listed in this section is antivirus software and malware removal software. BitDefender Antivirus for Mac – antivirus software Intego VirusBarrier – antivirus software MacScan – malware removal program Norton Antivirus for Mac – an antivirus program specially made for Mac Sophos – antivirus software VirusScan – antivirus software Archiving, backup, restore, recovery This section lists software for file archiving, backup and restore, data compression and data recovery. Archive Utility – built-in archive file handler Backup – built-in Compact Pro – data compression Disk Drill Basic – data recovery software for macOS iArchiver – handles archives, commercial Stellar Phoenix Mac Data Recovery – data recovery software for Mac computers Stellar Phoenix Video Repair – repairs corrupt or damaged videos Stuffit – data compression The Unarchiver Time Machine (macOS) – built-in backup software BetterZip – file archiver and compressor utility WinZip – file archiver and compressor utility Audio-specific software Ableton Live – digital audio workstation Adobe Soundbooth – music and soundtrack editing Ardour – hard disk recorder and digital audio workstation program Audacity – digital audio editor Audion – media player (development ceased) Audio Hijack – audio recorder baudline – signal analyzer BIAS Peak – mastering Cog – open source audio player, supports multiple formats Cubase – music production program djay – digital music mixing software Digital Performer – MIDI sequencer with audio tracking Final Cut Express/Pro – movie editor Finale – scorewriter program fre:ac – open source audio converter and CD ripper GarageBand – music/podcast production Impro-Visor – educational notation and playback for music improvisation iTunes – audio/video jukebox ixi software – free improvisation and sketching tools Jaikoz – mass tagger LilyPond – scorewriter program Logic Express – prosumer music production Logic Studio – music writing studio package by Apple Inc. Apple Loops Utility – production and organisation of Apple Loops Apple Qmaster and Qadministrator Logic Pro – digital audio workstation Mainstage – program to play software synthesizers live QuickTime Pro – pro version of QuickTime Soundtrack Pro – post production audio editor WaveBurner – CD mastering and production software Mixxx – DJ mix software Max – Cycling 74's visual programming language for MIDI, audio, video; with MSP, Jitter Nuendo – audio and post production editor Overture – scorewriter program ReBirth – virtual synth program that simulates the Roland TR-808 and TB-303 REAPER – digital audio workstation Reason Studios – digital audio workstation Recycle – sample editor Renoise – contemporary digital audio workstation, based upon the heritage and development of tracker software. 
RiffWorks – guitar recording and online song collaboration software CD and DVD authoring Disco DVD Studio Pro – DVD authoring application Adobe Encore iDVD – a basic DVD-authoring application Roxio Toast – DVD authoring application Chat (text, voice, image, video) Active Adium – multi-protocol IM client aMSN ChitChat Colloquy – freeware advanced IRC and SILC client Discord Fire – open source, multiprotocol IM client FaceTime – videoconferencing between Mac, iPhone, iPad and iPod touch iMessage – instant messaging between Mac, and iDevices Ircle Irssi – IrssiX and MacIrssi Kopete LiveChat Microsoft Messenger for Mac Microsoft Teams Palringo Psi (instant messenger) Skype Snak Ventrilo – audio chatroom application X-Chat Aqua Yahoo! Messenger Telegram Discontinued AOL Instant Messenger – discontinued as of December 15, 2017 iChat – instant messaging and videoconferencing (discontinued since OS X 10.8 Mountain Lion in favour of FaceTime and iMessage) Children's software Kid Pix Deluxe 3X – bitmap drawing program Stagecast Creator – programming and internet authoring for kids Developer tools and IDEs Apache Web Server AppCode – an Objective-C IDE by JetBrains for macOS and iOS development Aptana – an open source integrated development environment (IDE) for building Ajax web applications Clozure CL – an open source integrated development environment (IDE) for building Common Lisp applications Code::Blocks – open source IDE for C++ CodeWarrior – development environment, framework Coldstone game engine Dylan Eclipse – open source Java-based IDE for developing rich-client applications, includes SWT library, replaces Swing by using underlying OS native windowing abilities Fink – Debian package manager for ported Unix software Free Pascal – Object Pascal compiler, XCode plugin available GNU Compiler Collection Glasgow Haskell Compiler Helix – relational database IDE Homebrew - Package manager for installing many open source, mostly terminal based, utilities. Includes Apache, PHP, Python and many more. HotSpot – Sun's Java Virtual Machine HyperNext – freeware software development IntelliJ IDEA – a JAVA IDE by JetBrains (free limited community edition) Komodo – commercial multi-language IDE from ActiveState Lazarus – cross-platform IDE to develop software with Free Pascal, specialized in graphical software LiveCode – high-level cross-platform IDE MacApp – application development framework Pascal and C++ Macintosh Programmer's Workshop (MPW) Macports – a package management system that simplifies the installation of free/open source software on the macOS. Macromedia Authorware – application (CBT, eLearning) development, no Mac development environment since version 4, though can still package applications with the 'Mac Packager' for OS 8 through 10 playback Mono – open source implementation of Microsoft .NET Framework with a C# compiler NetBeans – modular, open source, multi-language platform and IDE for Java written in pure Java Omnis Studio – cross-platform development environment for creating enterprise and web applications for macOS, Windows, Linux, Solaris Panorama Perl PHP Python Qt Creator – an IDE for C++ GUI applications, by Trolltech Real Studio – cross-platform compiled REALbasic BASIC programming language IDE ResEdit – resource editor Script Debugger – an AppleScript and Open Scripting Architecture IDE SuperCard – high-level IDE Tcl/tk – scripting shell & GUI utility that allows cross platform development. Included with macOS. 
TextMate – multipurpose text editor that supports Ruby, PHP, and Python Torque (game engine) – game creation software WebKit – open source application framework for Safari (web browser) WebObjects wxPython – API merging Python and wxWidgets Xcode – IDE made by Apple, which comes as a part of macOS and is available as a download; it was formerly called Project Builder Email Email clients Apple Mail – the bundled email client Claris Emailer – classic Mac OS only, no longer available Entourage – email client by Microsoft; analogous to Microsoft Outlook Eudora Foxmail Lotus Notes Mailbird Mailplane – a WebKit-based client for Gmail Microsoft Outlook Mozilla Thunderbird Mulberry – open-source software for e-mail, calendars and contacts Opera Mail Outlook Express Postbox Sparrow – as well as Sparrow Lite Other email software Gmail Notifier FTP clients Classic FTP Cyberduck Fetch Fugu FileZilla ForkLift Interarchy Transmit WebDrive – FTP and cloud client Yummy FTP Games Steam – digital distribution software for video games and related media Graphics, layout, desktop publishing CAD, 3D graphics 3D-Coat Autodesk Alias Ashlar-Vellum – 2D/3D drafting, 3D modeling ArchiCAD AutoCAD Blender BricsCAD Cheetah3D Cinema 4D SketchUp – 3D modeling software Houdini Lightwave Maya Modo PowerCADD ZBrush Distributed document authoring Adobe Acrobat Preview Icon editors, viewers Icon Composer – part of Apple Developer Tools IconBuilder Microsoft Office File conversion and management Active Adobe Bridge – digital asset management app BibDesk – free bibliographic database app that organizes linked files Font Book – font management tool GraphicConverter – graphics editor, opens/converts a wide range of file formats Photos – photo management application Discontinued iPhoto – discontinued photo management application Layout and desktop publishing Active Adobe InDesign – page layout iCalamus – page layout iStudio Publisher – page layout Pages – part of iWork QuarkXPress – page layout Ready, Set, Go! – page layout Scribus – page layout TeX – publishing MacTeX – TeX redistribution of TeX Live for Mac Comparison of TeX Editors The Print Shop – page layout Discontinued iBooks Author – created interactive books for Apple Books Raster and vector graphics This section lists bitmap graphics editors and vector graphics editors. 
Active Adobe Fireworks – supports GIF animation Adobe Illustrator – vector graphics editor Adobe Photoshop – also offers some vector graphics features Affinity Designer – vector graphics editor for Apple macOS and Microsoft Windows Anime Studio – 2D based vector animation Collabora Online - enterprise-ready edition of LibreOffice Corel Painter EazyDraw – vector graphics editor; versions available that can convert old formats such as MacDraw files Fontographer GIMP – free bitmap graphics editor GIMPShop – free open source cross-platform bitmap graphics editor GraphicConverter – displays and edits raster graphics files Inkscape – free vector graphics editor Luminar Macromedia FreeHand – vector graphics editor Paintbrush – free simple bitmap graphics program Photos – official photo management and editing application developed by Apple Photo Booth – photo camera, video recorder Pixelmator – hardware-accelerated integrated photo editor Polarr – photo editing app Seashore – open source, based around the GIMP's technology, but with native macOS (Cocoa) UI Discontinued Aperture – Apple's pro photo management, editing, publishing application MacPaint – painting software by Apple (discontinued) Integrated software technologies Finder Path Finder QuickTime Terminal X11.app Language and reference tools Cram (software) Dictionary (software) Encyclopædia Britannica Rosetta Stone (software) – proprietary language learning software Ultralingua – proprietary electronic dictionaries and language tools World Book Encyclopedia – multimedia Mathematics software Fityk Grapher Maple (software) Mathematica MATLAB MathMagic Octave (software) – open source R (programming language) Sysquake SciLab – open source Media center Boxee – Mac and Apple TV Front Row Mira MythTV SageTV Plex Kodi Multimedia authoring Adobe Director – animation/application development Adobe Flash – vector animation Adobe LiveMotion – a discontinued competitor to Flash, until Adobe bought Macromedia Apple Media Tool – a discontinued multimedia authoring tool published by Apple Dragonframe - stop motion animation and time-lapse iBooks Author – created interactive books for Apple Books (discontinued) iLife – media suite by Apple Unity – 3D authoring Networking and telecommunications Apple Remote Desktop Google Earth iStumbler – find wireless networks and devices Karelia Watson (defunct) KisMAC Little Snitch – network monitor and outgoing connection firewall NetSpot – software tool for wireless network assessment, scanning, and surveys, analyzing Wi-Fi coverage and performance Timbuktu – remote control WiFi Explorer – a wireless network scanner tool News aggregators Feedly – news aggregator, and news aggregator reading application NetNewsWire – news aggregator reading application NewsFire – news aggregator reading application RSSOwl – news aggregator reading application Safari (web browser) - news aggregation via built-in RSS support Apple Mail – news aggregation via (discontinued) built-in RSS support Office and productivity AbiWord Adobe Acrobat Address Book – bundled with macOS AppleWorks – word processor, spreadsheet, and presentation applications (discontinued) Banktivity – personal finance, budgeting, investments Bean (word processor) – free TXT/RTF/DOC word processor Celtx Collabora Online enterprise-ready edition of LibreOffice CricketGraph – graphmaker Delicious Library FileMaker FlowVella Fortora Fresh Finance Helix (database) iBank – personal finance application iCal – calendar management, bundled with macOS iWork – suite: Pages – word 
processor application Numbers – spreadsheet application Keynote – presentation application Journler – diary and personal information manager with personal wiki features KOffice LibreOffice MacLinkPlus Deluxe – file format translation tool for PowerPC-era Mac OS X, converting and opening files created in other operating systems Mellel Microsoft Office – office suite: Microsoft Word – word processor application Microsoft Excel – spreadsheet application Microsoft PowerPoint – presentation application Microsoft Entourage – email application (replaced by Microsoft Outlook) Microsoft Outlook – email application Microsoft OneNote – note-taking application MoneyWiz – personal finance application Montage – screenwriting software NeoOffice Nisus Writer OmniFocus OpenOffice.org WriteNow Taste – word processor (discontinued) Operating systems Darwin – the BSD-licensed core of macOS macOS – originally named "Mac OS X" until 2012 and then "OS X" until 2016 macOS Server – the server computing variant of macOS Outliners and mind-mapping FreeMind Mindjet OmniOutliner OmniGraffle XMind Peer-to-peer file sharing aMule BitTorrent client FrostWire LimeWire Poisoned rTorrent Transmission (BitTorrent) μTorrent Vuze – Bittorrent client, was Azureus Science Celestia – 3D astronomy program SimThyr – Simulation system for thyroid homeostasis Stellarium – 3D astronomy program Text editors ACE Aquamacs BBEdit BBEdit Lite Coda Emacs jEdit iA Writer Komodo Edit Nano SimpleText Smultron SubEthaEdit TeachText TextEdit TextMate TextWrangler vim XEmacs Ulysses Utilities Activity Monitor – default system monitor for hardware and software AppZapper – uninstaller (shareware) Automator – built-in, utility to automate repetitive tasks Butler – free, launcher and utility to automate repetitive tasks CleanGenius – free system optimization tool for macOS, disk cleaner, uninstaller, device ejector, disk monitor. (freeware) CandyBar – system customization software (commercial) CDFinder – disk cataloging software (commercial) DaisyDisk – disk visualization tool Dashboard – built-in macOS widgets Grab (software) – built-in macOS screenshot utility Growl – global notifications system, free iSync – syncing software, bundled with Mac OS X up to 10.6 LaunchBar – provides instant access to local data, search engines and more by entering abbreviations of search item names, commercial Mavis Beacon Teaches Typing – proprietary, typing tutor OnyX – a freeware system maintenance and optimization tool for macOS Quicksilver – a framework for accessing and manipulating many forms of data Screen Sharing SheepShaver – PowerPC emulator, allows, among other things, running Mac OS 9 on Intel Macs Sherlock – file searching (version 2), web services (version 3) Stickies – put Post-It Note-like notes on the desktop System Preferences – default Mac system option application UUTool – uuencoded/uudecode and other transcoding Xsan – storage network utility Yahoo! 
Widget Engine – JavaScript-based widget system Support for non-Macintosh software Bochs Boot Camp – a multi-boot utility built into macOS from 10.5 CrossOver – commercial implementation of Wine DOSBox – DOS emulator Hercules emulator pcAnywhere – VNC-style remote control Parallels Workstation – commercial full virtualization software for desktop and server Q – emulates an IBM-compatible PC on a Mac, allows running PC operating systems VMware – virtualization software Wine – Windows API reimplementation Virtual PC – full virtualization software allows running other operating systems, such as Windows and Linux, on PowerPC Macs (discontinued in 2007) VirtualBox vMac – emulates a Macintosh Plus and can run Apple Macintosh System versions 1.1 to 7.5.5. Video Adobe After Effects Adobe Premiere Pro Adobe Presenter Video Express ArKaos – VJ software Avid DaVinci Resolve – Video Editing Suite DivX DivX Player DVD Player (Apple) – DVD player software built into macOS FFmpeg – audio/video converter Final Cut Express Final Cut Studio – audio-video editing suite: Apple Qmaster Cinema Tools Compressor DVD Studio Pro Final Cut Pro LiveType Motion 2 Soundtrack Pro HandBrake – DVD to MPEG-4 and other formats converter iMovie – basic video editing application Miro Media Player MPlayer Perian QuickTime – including its Player and QuickTime Pro RealPlayer Shotcut - An open source rich video editor Shake Windows Media Player VLC media player 4K Video Downloader – free video downloader Web browsers Amaya – free Camino – open source Flock – free, Mozilla Firefox based Google Chrome – free, proprietary iCab – free Konqueror – open source Lynx – free Mozilla – open source, combines browser, email client, WYSIWYG editor Mozilla Firefox – open source Microsoft Edge – free Netscape Navigator – free, proprietary OmniWeb – free, proprietary Opera – free Safari (web browser) – built-in from Mac OS X 10.3, available as a separate download for Mac OS X 10.2 SeaMonkey – open source Internet application suite Shiira – open source Sleipnir – free, by Fenrir Inc Tor (anonymity network) – free, open source Torch (web browser) – free, by Torch Media Inc. Internet Explorer for Mac – free, by Microsoft WebKit – Safari application framework, also in the form of an application Web design and content management Adobe Contribute Adobe Dreamweaver Adobe GoLive Claris Homepage Coda Freeway iWeb NVu RapidWeaver – a template-based website editor Weblog clients ecto MarsEdit See also List of Macintosh software published by Microsoft List of Unix commands References List of Macintosh software Macintosh
57809290
https://en.wikipedia.org/wiki/CAINE%20Linux
CAINE Linux
CAINE Linux (Computer Aided INvestigative Environment) is an Italian Linux live distribution managed by Giovanni "Nanni" Bassetti. The project began in 2008 as an environment to foster digital forensics and incident response (DFIR), with several related tools pre-installed. Purpose CAINE is a professional open source forensic platform that integrates software tools as modules along with powerful scripts in a graphical interface environment. Its operational environment was designed with the intent of providing the forensic professional with all the tools required to perform the digital forensic investigation process (preservation, collection, examination and analysis). CAINE is a live Linux distribution, so it can be booted from removable media (flash drive) or from an optical disk and run in memory. It can also be installed onto a physical or virtual system. In Live mode, CAINE can operate on data storage objects without having to boot up a supporting operating system. The latest version 11.0 can boot on UEFI/UEFI+Secure and Legacy BIOS, allowing CAINE to be used on information systems that boot older operating systems (e.g. Windows NT) and newer platforms (Linux, Windows 10). Requirements CAINE is based on Ubuntu 18.04 64-bit, using Linux kernel 5.0.0-32. CAINE system requirements to run as a live disc are similar to Ubuntu 18.04. It can run on a physical system or in a virtual machine environment such as VMware Workstation. Supported platforms The CAINE Linux distribution has numerous software applications, scripts and libraries that can be used in a graphical or command line environment to perform forensic tasks. CAINE can perform data analysis of data objects created on Microsoft Windows, Linux and some Unix systems. One of the key forensic features since version 9.0 is that it sets all block devices by default to read-only mode. Write-blocking is a critical methodology to ensure that disks are not subject to writing operations by the operating system or forensic tools. This ensures that attached data objects are not modified, which would negatively impact digital forensic preservation. Tools CAINE provides software tools that support database, memory, forensic and network analysis. File system image analysis of NTFS, FAT/ExFAT, Ext2, Ext3, HFS and ISO 9660 is possible via command line and through the graphic desktop. Examination of Linux, Microsoft Windows and some Unix platforms is built-in. CAINE can import disk images in raw (dd) and expert witness/advanced file format. These may be obtained using tools that are included in CAINE or from another platform such as EnCase or the Forensic Tool Kit. Some of the tools included with the CAINE Linux distribution include: The Sleuth Kit – open source command line tools that support forensic inspection of disk volume and file system analysis. Autopsy – open source digital forensics platform that supports forensic analysis of files, hash filtering, keyword search, email and web artifacts. Autopsy is the graphical interface to The Sleuth Kit. RegRipper – open source tool, written in Perl, that extracts/parses information (keys, values, data) from the Registry database for data analysis. Tinfoleak – open source tool for collecting detailed Twitter intelligence analysis. Wireshark – supports interactive collection of network traffic and non-real-time analysis of data packet captures (*.pcap). PhotoRec – supports recovery of lost files from hard disk, digital camera and optical media. 
Fsstat – displays file system statistical information about an image or storage object. References External links Official website Forensic software Linux Live Linux distributions Digital forensics software Linux distributions
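The write-blocking behaviour described above, in which CAINE treats attached block devices as read-only by default, can be illustrated generically on a Linux system with the util-linux blockdev utility. The sketch below is not CAINE's own implementation; the device path is a hypothetical evidence drive and root privileges are assumed.

```python
# Generic illustration of forensic write-blocking on Linux using the util-linux
# blockdev utility. This is NOT CAINE's own code; /dev/sdb is a hypothetical
# evidence drive, and the commands must be run with root privileges.

import subprocess

def set_read_only(device: str, read_only: bool = True) -> None:
    """Toggle the kernel-level read-only flag on a block device."""
    flag = "--setro" if read_only else "--setrw"
    subprocess.run(["blockdev", flag, device], check=True)

def read_only_status(device: str) -> str:
    """Return '1' if the device is flagged read-only, '0' otherwise."""
    result = subprocess.run(["blockdev", "--getro", device],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    evidence_device = "/dev/sdb"   # hypothetical acquisition target
    set_read_only(evidence_device)
    print(evidence_device, "read-only flag:", read_only_status(evidence_device))
```

An examiner would only lift the read-only flag deliberately, and typically only for a destination disk that receives the forensic image rather than for the evidence itself.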
127058
https://en.wikipedia.org/wiki/Troy%2C%20New%20York
Troy, New York
Troy is a city in the U.S. state of New York and the seat of Rensselaer County. The city is located on the western edge of Rensselaer County and on the eastern bank of the Hudson River. Troy has close ties to the nearby cities of Albany and Schenectady, forming a region popularly called the Capital District. The city is one of the three major centers for the Albany metropolitan statistical area, which has a population of 1,170,483. At the 2020 census, the population of Troy was 51,401. Troy's motto is Ilium fuit, Troja est, which means "Ilium was, Troy is". Today, Troy is home to Rensselaer Polytechnic Institute, the oldest private engineering and technical university in the US, founded in 1824. It is also home to Emma Willard School, an all-girls high school started by Emma Willard, a women's education activist, who sought to create a school for girls equal to their male counterparts. Due to the confluence of major waterways and a geography that supported water power, the American industrial revolution took hold in this area, making Troy reputedly the fourth-wealthiest city in America around the turn of the 20th century. Troy, therefore, is noted for a wealth of Victorian architecture downtown and elaborate private homes in various neighborhoods. Several churches have a concentrated collection of stained-glass windows by Louis Comfort Tiffany. Troy is also home to the world-renowned Troy Music Hall, which dates from the 1870s and is said to have superb acoustics in a combination of restored and well-preserved performance space. The area had long been occupied by the Mahican Indian tribe, but Dutch settlement began in the mid-17th century. The patroon, Kiliaen van Rensselaer, called the region Pafraets Dael, after his mother. The Dutch colony was conquered by the English in 1664, and in 1707, Derick van der Heyden purchased a farm near today's downtown area. In 1771, Abraham Lansing had his farm in today's Lansingburgh laid out into lots. Sixteen years later, Van der Heyden's grandson Jacob had his extensive holdings surveyed and laid out into lots, naming the new village Vanderheyden. In 1789, Troy adopted its present name following a vote of the people. Troy was incorporated as a town two years later, and extended east across the county to the Vermont line, including Petersburgh. In 1796, Troy became a village and in 1816, it became a city. Lansingburgh, to the north, became part of Troy in 1900. History 1500 to 1700: the Mohican and the Skiwia Indians Prior to the arrival of Europeans, the Mohican Indians had a number of settlements along the Hudson River near its confluence with the Mohawk River. The land comprising the Poesten Kill and Wynants Kill areas were owned by two Mohican groups. The land around the Poesten Kill was owned by Skiwias and was called Panhooseck. The area around the Wynants Kill, known as Paanpack, was owned by Peyhaunet. The land between the creeks, which makes up most of downtown and South Troy, was owned by Annape. South of the Wynants Kill and into present-day North Greenbush, the land was owned by Pachquolapiet. These parcels of land were sold to the Dutch between 1630 and 1657, and each purchase was overseen and signed by Skiwias, the sachem at the time. In total, more than 75 individual Mohicans were involved in deed signings in the 17th century. 1700: The Dutch and the British The site of the city was a part of Rensselaerswyck, a patroonship created by Kiliaen van Rensselaer. Dirck Van der Heyden was one of the first settlers. 
In 1707, he purchased a farm, which in 1787 was laid out as a village. The 1800s: Canals, shipping, early industrialization The name Troy (after the legendary city of Troy, made famous in Homer's Iliad) was adopted in 1789, before which it had been known as Ashley's Ferry, and the region was formed into the Town of Troy in 1791 from part of the Manor of Rensselaerswyck. The township included Brunswick and Grafton. Troy became a village in 1801 and was chartered as a city in 1816. In the post-Revolutionary War years, as central New York was first settled, a strong trend toward classical names existed, and Troy's naming fits the same pattern as the New York cities of Syracuse, Rome, Utica, Ithaca, and the towns of Sempronius and Manlius, and dozens of other classically named towns to the west of Troy. Northern and Western New York was a theater of the War of 1812, and militia and regular army forces were led by Stephen Van Rensselaer of Troy. Quartermaster supplies were shipped through Troy. A local butcher and meatpacker named Samuel Wilson supplied the military, and according to an unprovable legend, barrels stamped "The U.S." were jokingly taken by the troops to stand for "Uncle Sam", meaning Wilson. Troy has since claimed to be the historical home of Uncle Sam. On December 23, 1823, The Troy Sentinel was the first publisher of the world-famous Christmas poem "A Visit from St. Nicholas" (also known as "The Night Before Christmas" or "'Twas the Night Before Christmas"). The poem was published anonymously. Its author has long been believed to have been Clement Clarke Moore, though some now attribute it to Henry Livingston, Jr. Scientific and technical proficiency was supported by the presence of Rensselaer Polytechnic Institute (RPI), one of the highest-ranked engineering schools in the country. RPI was originally sponsored by Stephen Van Rensselaer, one of the most prominent members of that family. RPI was founded in 1824, and eventually absorbed the campus of the short-lived, liberal arts-based Troy University, which closed in 1862 during the Civil War. Rensselaer founded RPI for the "application of science to the common purposes of life", and it is the oldest technological university in the English-speaking world. The institute is known for its success in the transfer of technology from the laboratory to the marketplace. Through much of the 19th and into the early 20th century, Troy was one of the most prosperous cities in the United States. Prior to its rise as an industrial center, Troy was the transshipment point for meat and vegetables from Vermont, which were sent by the Hudson River to New York City. The trade was vastly increased after the construction of the Erie Canal, with its eastern terminus directly across the Hudson from Troy at Cohoes in 1825. Another artery constructed was the Champlain Canal. In 1916, Troy Federal Lock opened as one of the first modern locks along the present-day canal system. Troy has been nearly destroyed by fire three times. The Great Troy Fire of 1862 burned the W. & L. E. Gurley, Co. factory, which was later that year replaced by the new W. & L. E. Gurley Building, now a National Historic Landmark: Gurley & Sons remains a worldwide leader in precision instrumentation. Troy's one-time great wealth was produced in the steel industry, with the first American Bessemer converter erected on the Wynantskill, a stream with falls in a small valley at the south end of the city. The industry first used charcoal and iron ore from the Adirondacks. 
Later on, ore and coal from the Midwest were shipped on the Erie Canal to Troy and there processed before being sent on down the Hudson to New York City. The iron and steel were also used by the extensive federal arsenal across the Hudson at Watervliet, New York, then called West Troy. After the American Civil War, the steel production industry moved west to be closer to raw materials. The presence of iron and steel also made it possible for Troy to be an early site in the development of iron storefronts and steel structural supports in architecture, and some significant early examples remain in the city. Troy was an early home of professional baseball and was the host of two major league teams. The first team to call Troy home was the Troy Haymakers, a National Association team in 1871 and 1872. One of their major players was Williams H. "Bill" Craver, a noted catcher and Civil War veteran, who also managed the team. Their last manager was Jimmy Wood, reckoned the first Canadian in professional baseball. The Troy Haymakers folded, and Troy had no team for seven seasons. Then, for four seasons, 1879 to 1882, Troy was home to the National League Troy Trojans. The Trojans were not competitive in the league, but they did field a young Dan Brouthers, who went on to become baseball's first great slugger. In 1892, Robert Ross, a poll watcher, was shot dead (and his brother wounded) by operatives of Mayor Edward Murphy, later a U.S. Senator, after uncovering a man committing voter fraud. The convicted murderer, Bartholomew "Bat" Shea, was executed in 1896, although another man, John McGough, later boasted that he had actually been the shooter. The initial emphasis on heavier industry later spawned a wide variety of highly engineered mechanical and scientific equipment. Troy was the home of W. & L. E. Gurley, Co., makers of precision instruments. Gurley's theodolites were used to survey much of the American West after the Civil War and were highly regarded until laser and digital technology eclipsed the telescope and compass technology in the 1970s. Bells manufactured by Troy's Meneely Bell Company ring all over the world. Troy was also home to a manufacturer of racing shells that used impregnated paper in a process that presaged the later use of fiberglass, Kevlar, and carbon-fiber composites. The 1900s: Industrialization, railroads, Rensselaer Polytechnic Institute In 1900, Troy annexed Lansingburgh, a former town and village whose standing dates back prior to the War of Independence, in Rensselaer County. Lansingburgh is thus often referred to as "North Troy". However, prior to the annexation, that portion of Troy north of Division Street was called North Troy and the neighborhood south of Washington Park is referred to as South Troy. To avoid confusion with streets in Troy following the annexation, Lansingburgh's numbered streets were renamed: its 1st Street, 2nd Street, 3rd Street, etc., became North Troy's 101st Street, 102nd Street, 103rd Street, etc. Lansingburgh was home to the Lansingburgh Academy. In the early 1900s, the New York Central Railroad was formed from earlier railroads and established its "Water Level Route" from New York City to Chicago, via Albany. A beaux-arts station was constructed c. 1903. A short New York Central branch from Rensselaer connected at Troy. Also serving the station was the Boston and Maine Railroad to/from Boston and the Delaware and Hudson Railroad to/from Canada. The railroads quickly made obsolete the 1800s-constructed canals along the Mohawk. 
The former NYC operates today as CSX for freight service and Amtrak for passenger service, the latter operating from Albany–Rensselaer station, directly opposite downtown Albany on the east side of the Hudson River. The end of rail passenger service to Troy occurred when the Boston and Maine dropped its Boston–Troy run in January, 1958. The Troy Union Station was demolished later in 1958. In addition to the strong presence of the early American steel industry, Troy was also a manufacturing center for shirts, shirtwaists, collars, and cuffs. In 1825, a local resident Hannah Lord Montague, was tired of cleaning her blacksmith husband's shirts. She cut off the collars of her husband's shirts since only the collar was soiled, bound the edges and attached strings to hold them in place. (This also allowed the collars and cuffs to be starched separately.) Montague's idea caught on and changed the fashion for American men's dress for a century. Her patented collars and cuffs were first manufactured by Maullin & Blanchard, which eventually was absorbed by Cluett, Peabody & Company. Cluett's "Arrow shirts" are still worn by men across the country. The large labor force required by the shirt manufacturing industry also produced in 1864 the nation's first female labor union, the Collar Laundry Union, founded in Troy by Kate Mullany. On February 23, 1864, 300 members of the union went on strike. After six days, the laundry owners gave in to their demands and raised wages 25%. Further developments arose in the industry, when in 1933, Sanford Cluett invented a process he called Sanforization, a process that shrinks cotton fabrics thoroughly and permanently. Cluett, Peabody's last main plant in Troy, was closed in the 1980s, but the industrial output of the plant had long been transferred to facilities in the South. In 1906, the city supplied itself with water from a 33-inch riveted-steel main from the Tomhannock Reservoir. A 30-inch cast-iron main was added in 1914. When the iron and steel industry moved westward to Pennsylvania around Pittsburgh to be closer to iron ore from Lake Erie and nearby coal and coke needed for the Bessemer process, and with a similar downturn in the collar industry, Troy's prosperity began to fade. After the passage of Prohibition, and given the strict control of Albany by the O'Connell political machine, Troy became a way station for an illegal alcohol trade from Canada to New York City. Likewise, the stricter control of morality laws in the neighboring New England states encouraged the development of openly operating speakeasies and brothels in Troy. Gangsters such as "Legs Diamond" conducted their business in Troy, giving the city a somewhat colorful reputation through World War II. A few of the buildings from that era have since been converted to fine restaurants, such as the former Old Daly Inn. Kurt Vonnegut lived in Troy and the area, and many of his novels include mentions of "Ilium" (an alternate name for Troy) or surrounding locations. Vonnegut wrote Player Piano in 1952, based on his experiences working as a public relations writer at nearby General Electric. His 1963 novel, Cat's Cradle, was written in the city and is set in Ilium. His recurring main character, Kilgore Trout, is a resident of Cohoes, just across the Hudson River from Troy. 2000 to today Like many old industrial cities, Troy has had to deal with the loss of its manufacturing base, loss of population and wealth to the suburbs, and to other parts of the country. 
This led to dilapidation and disinvestment until later efforts were made to preserve Troy's architectural and cultural past. Troy is updating its citywide comprehensive plan for the first time in more than 50 years. The two-year process is known as "Realize Troy" and was initiated by the Troy Redevelopment Foundation (with members from the Emma Willard School, RPI, Russell Sage College, and St. Peter's Health Partners). Urban Strategies Inc. (Toronto) is planning Troy's redevelopment. Geography According to the United States Census Bureau, the city has a total area of which 5.44% is covered by water. Troy is located several miles north of Albany near the junction of the Erie and Champlain canals, via the Hudson River, and is the terminus of the New York Barge Canal. It is the distributing center for a large area. The city is on the central part of the western border of Rensselaer County. The Hudson River makes up the western border of the city and the county's border with Albany County. The city borders within Rensselaer County, Schaghticoke to the north, Brunswick to the east, and North Greenbush to the south; to the west, the city borders the Albany County town of Colonie, the villages of Menands and Green Island, and the cities of Watervliet and Cohoes. To the northwest, Troy borders the Saratoga County village of Waterford within the town of Waterford. The western edge of the city is flat along the river, and then steeply slopes to higher terrain to the east. The average elevation is 50 feet, with the highest elevation being 500 feet in the eastern part of the city. The city is longer than it is wide, with the southern part wider than the northern section of the city (the formerly separate city of Lansingburgh). Several kills (Dutch for creek) pass through Troy and empty into the Hudson. The Poesten Kill and Wynants Kill are the two largest, and both have several small lakes and waterfalls along their routes in the city. Several lakes and reservoirs are within the city, including Ida Lake, Burden Pond, Lansingburgh Reservoir, Bradley Lake, Smarts Pond, and Wright Lake. Demographics At the 2010 census, 50,129 people, 20,121 households and 10,947 families were residing in the city. The population density was 4,840.1 people/sq mi, with 23,474 housing units. The racial makeup of the city was 69.7% White, 16.4% African American, 0.3% Native American, 3.4% Asian, and 4.1% from two or more races. Hispanics or Latinos of any race were 7.9% of the population. The median household income in 2013 was $37,805 (NY average of $57,369), and the median family income was $47,827 (NYS average of $70,485). The median per capita income for the city was $20,872 (NY average of $32,514). About 27.3% of the population were living in poverty as of 2013. By the 2020 census, Troy's population had increased to 51,401, with 19,899 households. The percentage of African American residents increased to 17.5%, while the share of White residents decreased to 63.5%. The rest of the population was reported to be 0.1% Native American, 4.8% Asian, 9.6% Latino or Hispanic, and 7.3% two or more races. The majority of Troy's population consists of women (51.4%), whereas males make up the remaining 48.6%. Residents under the age of 5 made up 5.2% of the population, those under 18 made up 19.6%, and those 65 and over made up 11.4%. People under age 65 with a disability made up 13.3%, and 5.9% of those under 65 had no health insurance. 
In 2020, 1,907 Troy residents were veterans, a group that overlapped with those who had disabilities. The share of foreign-born persons between 2015 and 2019 was 8.0%. By 2020, the city's median household income had increased to $45,728, with an average of 2.25 persons per household. Per capita income over the preceding 12 months (in 2019 dollars, measured 2015–2019) was $25,689, with 24.4% of the population living in poverty. The overall poverty rate has decreased 3.3% since 2013. Among residents aged 25 or older, 86.8% are high school graduates or higher and 26.8% hold a bachelor's degree or higher. Additionally, reflecting growing internet adoption, 88.5% of households had a computer between 2015 and 2019 and 81.5% had a broadband Internet subscription. Religion The city is also home to numerous churches (Orthodox, Catholic, and Protestant), three synagogues, and one mosque. Economy Troy is known as the "Collar City" due to its history in shirt, collar, and other textile production. At one point, Troy was also the second-largest producer of iron in the country, surpassed only by the city of Pittsburgh, Pennsylvania. Troy, like many older industrial cities, has been battered by industrial decline and the migration of jobs to the suburbs. Nevertheless, the presence of RPI is helping Troy develop a small high-technology sector, particularly in video game development. The downtown core also has a smattering of advertising and architecture firms, and other creative businesses attracted by the area's distinctive architecture. Uncle Sam Atrium is an enclosed urban shopping mall, office space, and parking garage in downtown Troy. RPI is the city's largest private employer. Nonprofits The city is home to many nonprofits. Unity House operates a food pantry and provides domestic violence safety and housing support services. Joseph's House helps with sheltering the homeless. Arts and culture Architecture Troy is home to Victorian and Belle Époque architecture. The Hudson and Mohawk Rivers play their part, as does the Erie Canal and its lesser tributary canal systems, and later the railroads that linked Troy to the rest of the Empire State, New York City to the south, and Utica, New York, Syracuse, New York, Rochester, New York, Buffalo, New York, and the myriad of emergent Great Lakes' cities in the burgeoning United States. Notable buildings Rensselaer Polytechnic Institute The Emma Willard School for Girls aka Emma Willard School The Hart-Cluett Mansion Paine Mansion Russell Sage College Troy Public Library Hudson Valley Community College Natives of Troy expressed their passion for building, using the following materials for an array of building features: Iron: cast and structural iron works (facades, gates, railings, banisters, stairwells, rooftop crenellation, window grilles, etc.) Stone: carved hard and soft stone foundations, facades and decorative elements Glass: a vast array of ornate stained and etched glass works Wood: fine woodwork found in many of Troy's buildings. Tiffany and La Farge created magnificent stained-glass windows, transoms and other decorative stained-glass treatments for their customers in Troy. 
Troy's many examples of intact 19th-century architecture, particularly in its Central Troy Historic District, have helped lure several major movies to film in the city, including Ironweed, The Age of Innocence (filmed partially in the Paine mansion), Scent of a Woman, The Bostonians, The Emperor's Club, and The Time Machine. In addition, the television show The Gilded Age filmed in Troy. There are many buildings in a state of disrepair, but community groups and investors are restoring many of them. Troy's downtown historic landmarks include Frear's Troy Cash Bazaar, constructed on a steel infrastructure clad in ornately carved white marble; the Corinthian Courthouse, constructed of gray granite; the Troy Public Library, built in an elaborate Venetian palazzo style with high-relief carved white marble; and the Troy Savings Bank Music Hall, designed in the Second Empire style, whose recital hall has highly regarded acoustic properties. There is a rich collection of Colonial, Federal, Italianate, Second Empire, Greek Revival, Egyptian Revival, Gothic Revival and other Romantic period townhouses surrounding the immediate downtown. The Hart-Cluett Mansion displays a Federal facade executed in white marble quarried in Tuckahoe, New York; such facades often sit on foundations of rusticated granite block. Medina sandstone, a deep mud-red color from Medina, New York, was also used. As with many American cities, several city blocks in downtown Troy were razed during the 1970s as part of an attempted urban renewal plan, which was never successfully executed, leaving still-vacant areas in the vicinity of Federal Street. Since then, however, there have been much more successful efforts to save the remaining historic downtown structures. Part of this effort has been the arrival of the "Antique District" on River Street downtown. Cafes and art galleries are calling the area home. As home to many art, literature, and music lovers, the city hosts many free shows during the summer, on River Street, in parks, and in cafes and coffee shops. Notable landmarks Recurring events Troy Flag Day parade – was the largest Flag Day parade in the US. It started in 1967 and ended in 2017. Troy River Fest – arts, crafts and music festival held every June in the downtown district. Uncle Sam Parade – was held near Samuel Wilson's birthday in mid-September. It was held last in 2015 after 40 years. Bakerloo Theatre Project – classical summer theatre The Victorian Stroll – held annually in December Troy Turkey Trot – Thanksgiving Day run; the oldest race in the Capital District. The Enchanted City – Steampunk festival in downtown Troy Troy Night Out – monthly arts and cultural event in the streets of Downtown Troy Rockin' on the River – outdoor concert series from June to August Troy Pig Out – BBQ competition in Riverfront Park Chowderfest – chowder festival in downtown Troy Troy Farmer's Market – held weekly, during the summer at Monument Square and River Street, and in the winter in the Atrium Government Executive branch The executive branch consists of a mayor who serves as the chief executive officer of the city. The mayor is responsible for the proper administration of all city affairs placed in his/her charge as empowered by the city charter. The mayor enforces the laws of New York State as well as all local laws and ordinances passed by the city council.
She or he exercises control over all executive departments of the city government, including the Departments of Finance, Law, Public Safety, Public Works, Public Utilities, and Parks and Recreation. The mayor's term of office is four years, and an incumbent is prohibited from serving for more than two consecutive terms (eight years). The current mayor of Troy is Patrick Madden (D), who is serving his second term, having been re-elected on November 5, 2019. Electoral history Results from the last seven mayoral elections (an asterisk indicates the incumbent): November 5, 2019 – Patrick Madden *(D, W) defeated Rodney Wiltshire (G, I), Tom Reale (R, C) November 3, 2015 – Patrick Madden (D) defeated Jim Gordon (R, C, G, I, RF), Rodney Wiltshire (W), Jack Cox (REV) November 8, 2011 – Lou Rosamilia (D, W) defeated Carmella Mantello (R, C, I) November 6, 2007 – Harry Tutunjian *(R, I, C) defeated James Conroy (D), Elda Abate (TPP) November 4, 2003 – Harry Tutunjian (R, I, C) defeated Frank LaPosta (D) November 2, 1999 – Mark Pattison *(D, L, W) defeated Carmella Mantello (R, I, C) November 7, 1995 – Mark Pattison (D, C) defeated Kathleen Jimino (R, RtL, Fre), Michael Petruska (I, W), Michael Rourke (L) prior to the November 1995 election, a city-manager form of government was utilized Legislative branch Troy's legislative branch consists of a city council composed of seven elected members: one at-large member who represents the entire city and acts as City Council President, and six district members who represent each of the six districts of Troy. Currently, there are 4 Democrats and 3 Republicans. Each council member serves a two-year term and an incumbent is prohibited from serving for more than four consecutive terms (eight years). The council meets on the first Thursday of every month at 7:00 pm in the City Hall council chambers. All meetings are open to the public and include a public forum period held before official business where residents can address the council on all matters directly pertaining to city government. The current Troy City Council took office on January 1, 2020, and will serve until December 31, 2021. The members are: Carmella Mantello (R – At-Large; President) Jim Gulli (R – District 1) Kim McPherson (R – District 2) Sue Steele (D – District 3) Anasha Cummings (D – District 4) Ken Zalewski (D – District 5; President Pro Tempore) Eileen McDermott (D – District 6) Political boundaries The City of Troy is divided into thirty (30) election districts, also known as EDs. An ED is the finest granularity political district that can be used, from which all other political districts are formed. Other political districts that make use of these EDs include City Council Districts, County Legislative Districts, State Assembly Districts, State Senate Districts, and U.S. Congressional Districts. 
City Council districts The 30 EDs are grouped into six Council Districts, as follows: Council District 1: ED1–ED6 Council District 2: ED7–ED10 Council District 3: ED11–ED15 Council District 4: ED16–ED18 Council District 5: ED19–ED24 Council District 6: ED25–ED30 New York State Senate districts Two New York State Senate Districts, the 43rd and the 44th, each share a portion of their total areas with groups of EDs in Troy as follows: Senate District 43: ED1–ED7 Senate District 44: ED8–ED30 New York State Assembly districts Two New York State Assembly Districts, the 107th and the 108th, each share a portion of their total areas with groups of EDs in Troy as follows: Assembly District 107: ED1–ED8, ED12–ED15, ED23 Assembly District 108: ED9–ED11, ED16–ED22, ED24–ED30 Other districts All other political districts that exist in Troy consist of the entire city — all 30 EDs: U.S. Congressional District 20: ED1–ED30 Rensselaer County Legislative District 1: ED1–ED30 Education The Rensselaer School, which later became RPI, was founded in 1824 with funding from Stephen Van Rensselaer, a descendant of the founding patroon, Kiliaen. In 1821, Emma Willard founded the Troy Female Seminary. It was renamed Emma Willard School (America's first girls' high school and an academically rigorous boarding and day school) in 1895. The former Female Seminary was later reopened (1916) as Russell Sage College (a comprehensive college for women). All of these institutions still exist today. In addition, Troy is home to the 10,000-student Hudson Valley Community College (part of the State University of New York system); two public school districts (Troy and Lansingburgh); three private high schools: La Salle Institute (Catholic military-style), Emma Willard School, and Catholic Central High School (a regional Catholic high school in the Lansingburgh section); and one K-12 charter school system, Troy Prep. Infrastructure Transportation Inter-city buses Buses are operated by the Capital District Transportation Authority. Roads US 4 runs north–south through the city. New York State Route 7 passes east–west through the city, with a bridge west across the Hudson River, as does New York State Route 2. Rail The New York Central Railroad, Delaware and Hudson Railroad, Rutland Railroad and Boston and Maine Railroad provided passenger rail service to Troy. By the late 1950s, only the Boston & Maine passenger service remained. The last Boston and Maine passenger train arrived from Boston, Massachusetts, in 1958. Troy Union Station closed and was demolished later that year. Amtrak serves Albany-Rensselaer station, 8.5 miles to the south of Troy. Fire Department Troy Fire Department's 119 uniformed personnel work out of six fire stations located throughout the city and operate five engine companies, a rescue-engine company, two truck companies, three ambulances, a hazardous materials response unit (Troy Fire Department is the hazardous materials response unit for Rensselaer County), and two rescue boats. Health care Northeast Health is now the umbrella administration of Troy's two large hospitals (Samaritan Hospital and St. Mary's Hospital). Notable people Joe Alaskey (1952–2016), voice actor, known for various Looney Tunes characters Dave Anderson (1929–2018), Pulitzer Prize-winning sportswriter for The New York Times, born in Troy Garnet Douglass Baltimore (1859–1946), distinguished civil engineer and landscape designer, first African-American graduate of Rensselaer Polytechnic Institute Thomas Baker (1916–1944), U.S.
infantryman, received Medal of Honor for Battle of Saipan James A. Barker, Wisconsin State Senator George Packer Berry (1898–1986), Dean of Harvard Medical School, born in Troy Nick Brignola (1936–2002), musician (internationally famous jazz baritone saxophonist), was born in Troy and lived his whole life in the area. Dorothy Lavinia Brown (1919-2004), African American surgeon, legislator and teacher, raised in the Troy Orphan Asylum for much of her childhood and attended Troy High School, where she graduated at the top of her class in 1937. Henry Burden (1791–1871), originally from Scotland, engineer and businessman who built an industrial complex in Troy called the Burden Iron Works that featured the most powerful water wheel in the world Hadden Clark, Cannibal child murderer and suspected serial killer; Born in Troy. James Connolly (1868–1916), a leader of the Irish Easter Rising, lived in Troy 1903 – c. 1910; a statue of Connolly was erected in Troy in 1986 Thomas H. Conway, Wisconsin State Assemblyman Charles Crocker, a railroad executive, a founder of the Central Pacific Railroad, and an associate of Leland Stanford Jeff Daly, architect and designer, former head of design for the Metropolitan Museum of Art Blanche Dayne, an actress in vaudeville from 1890s to 1920s Courken George Deukmejian Jr. (1928–2018), an American politician from the Republican Party who was the 35th Governor of California from 1983 to 1991 and Attorney General of California from 1979 to 1983 John Joseph Evers (1883–1947), baseball Hall of Fame second baseman Mame Faye (1866–1943), brothel mistress Robert Fuller (born 1933), actor, star of TV series Wagon Train, rancher, born in Troy Alice Fulton (born 1952), poet and author, MacArthur "Genius Grant" recipient, was born and raised in Troy; her novel The Nightingales of Troy follows a fictional Irish-American family through the 20th Century in Troy Henry Highland Garnet (1815–1882), African-American abolitionist, minister and orator; editor of The National Watchman and The Clarion Uri Gilbert (July 10, 1809 – June 17, 1888) 19th century mayor and alderman of Troy and owner of Gilbert Car Company. Abba Goddard (1819 - 1873), editor of The Trojan Sketchbook Jay S. Hammond (1922–2005), fourth governor of Alaska from 1974 to 1982 Benjamin Hanks (1755–1824), goldsmith, instrument maker, and first maker of bronze cannons and church bells in America Tim Hauser (1941–2014), singer and founding member of the vocal group The Manhattan Transfer Edward Burton Hughes, the Deputy Superintendent of New York State Department of Public Works from 1952 to 1967 Theodore Judah, a railroad engineer for the Central Pacific Railroad King Kelly (1857–1894), professional baseball player, born in Troy Ida Pulis Lathrop (1859–1937), American painter, born in Troy. Dennis Mahoney (1974–), author, born in Troy William Marcy (1786–1857), governor, U.S. senator, U.S. Secretary of State Edward P. McCabe (1850-1920), African American settler, attorney and land agent, born in Troy Herman Melville (1819–1891), author (Moby Dick), from 1838 to 1847 resided in Lansingburgh John Morrissey (1831–1878), bare-knuckle boxer, U.S. representative, co-founder of Saratoga Race Course Kate Mullany (1845–1906), Irish-born labor organizer, founder of the Collar Laundry Union James Mullowney, Wisconsin State Assemblyman Edward Murphy Jr. (1836–1911), mayor, U.S. 
senator Florence Nash (1888–1950), actress Mary Nash (1884–1976), actress Mary Louise Peebles (1833–1915), author of children's books Cicero Price (1805–1888), United States Navy commodore who fought in American Civil War and was commander of East India Squadron, resided in Troy for 36 years Don Rittner, Historian, author, film maker George G. Rockwood (1832–1911), celebrity photographer Richard Selzer (1928-2016), surgeon and author, was born in Troy; his memoir Down from Troy recounts his experiences there as the son of a physician Bernard Shir-Cliff (1924-2017), editor Jeanie Oliver Davidson Smith (1836-1925), poet, romancist Horatio Spafford (1828–1888), composer of the well-known Christian hymn "It Is Well With My Soul", was born in Lansingburgh (now Troy) Maureen Stapleton (1925–2006), Academy Award-winning actress of film, stage and television Lavinia Stoddard (1787–1820), poet, school founder John J. Taylor, U.S. Congressman Mike Valenti, radio commentator Joseph M. Warren, U.S. Representative for New York Amy Wax (born 1953), law professor Samuel Wilson (1766–1854), a butcher and meatpacker during War of 1812 whose name is believed to be the inspiration for the personification of the United States known as Uncle Sam Russell Wong (born 1963), actor Duke Zeibert (1910–1997), restaurateur David Baddiel (born 1964), comedian References Further reading Rensselaer County histories Troy histories External links City of Troy Homepage Early history of Troy, NY Our Town: Troy Documentary produced by WMHT (TV) Cities in New York (state) Former towns in New York (state) Former villages in New York (state) New York State Heritage Areas Populated places established in 1787 Cities in Rensselaer County, New York New York (state) populated places on the Hudson River 1787 establishments in New York (state) Capital District, New York
30608516
https://en.wikipedia.org/wiki/Ed%20Wright%20%28composer%29
Ed Wright (composer)
Edward Charles Wright (born 4 August 1980 in Hawridge, Buckinghamshire) is a British composer known largely for electronic and mixed media sound art. Biographical information He studied at Bangor University under the supervision of Andrew Lewis, completing a doctorate in mixed electroacoustic and instrumental music in 2010. Edward Wright is actively composing and recording. He lives in North Wales with his partner and daughter. Wright's life was hit by tragedy when his father committed suicide in 2017, deeply affecting his next major work, Space to Think. Performance and broadcast Wright has performed widely throughout the UK and abroad, including performances at SARC (Belfast), Electroacoustic Cymru (Wales), St. James Piccadilly (London), Art Forum (Antwerp), ICMC2012 (Slovenia), the California State University New Music Festival, NYCEMF (USA) and the Toronto Electroacoustic Symposium (Canada). His work is characterised (although not exclusively) by the use of electronic resources, especially surround and octophonic sound diffusion systems. Although his output remains somewhat melodic in comparison to many comparable acousmatic compositions, Wright's work has become increasingly driven by philosophical, rather than specifically musical, content, as demonstrated in more recent pieces such as Thinking inside the Box and Crosswire; the emphasis is more on the multisensory experience of the music and the underlying discourse than on a particular tune or melody. While completing his thesis, Wright worked on a number of broadcast projects, including the tape part and realisation of Stereo Type for Guto Puw, broadcast on S4C television in 2005 as part of the Bangor New Music Festival; an appearance on S4C's Sioe Gelf with a workshop with pupils from Ysgol Pendalar (15 April 2008); and work played on BBC Radio 1 Wales Introducing (5 June 2009). His work first achieved international recognition at the 2008 IMEB Prix Bourges for his piece 'Con-chords'. He has since signed to the Blipfonica record label and has released two CDs in the past two years, as well as contributing to the Journal of Creative Studies, the Composers of Wales Newsletter, and Electronic Music Wales. He also did sound for the Conwy Food Festival. Software development Wright initially developed software within the Max/MSP/Jitter and Csound environments as a way of creating methods to perform otherwise impossible music, as in the case of the 8-channel audio/video mixer devised for 'Harp Set' and the sample/processing/diffusion system of 'Polarities'. These have quickly become outmoded as live/electronic mixed music has become more mainstream. Wright's approach to software design as a method of interfacing with the digital world during live performance brings software design (as in the case of 'Crosswire' and 'Sound Games') closer to instrument building, focusing on methods of physical interaction (e.g. mouse, Wii Remote and motion-tracking controls) which provide a wide range of expression yet demand the precision of physical control expected by an instrumental performer. As well as his performance software, Wright has also released a number of composition tools (Virtual440, Audio Machine VAM 1), which are available to download in the Max (software) environment.
Collaborations Wright has worked collaboratively with a number of artists, most especially the photographer and poet Morlo Bach under the title of Virtual Poetry for: 'Passage, In memory of Thomas/Celtic Cross' and 'Broken Glass', and with the author Graeme Harper on 'Seasons'. He also worked on Stereo Type for Guto Puw see above. As part of the Conwy Blinc digital arts festival he also worked with Tim Pugh & Wendy Dawson, Helen Booth and Dominic McGill. Wright is part of the electronic improvisation trio Accretion Entropy notably performing on BBC Radio 3. Works to October 2019 'Newborough Bluesky', electronic (2001) 'In Pace', harp, violin, cello and flute (2002) 'Triangulations', small orchestra (2002) 'Stereo Type' with Guto Puw (2005) 'Passage', music and image installation (2005) 'The Way I Saw It', violin and tape (2005) 'Enough~?', clarinet and live electronics (2006) 'Botany', chorus (2006) 'En Masse', electronic (2006) 'In memory of Thomas/Celtic Cross', multi-media (2006) 'Postcards From Home', electronic (2007) 'Broken Glass', music with image installation (2007) 'Harp Set', 8 channel surround sound and moving image (2007) 'Seasons', chorus and live 4-channel electroacoustics (2008) 'Con-chords', 8-channel electroacoustics (2008) 'Castell' (with Ysgol Pendalar) 'Y Twr', multi-channel installation (2009) 'Polarities', orchestra and 8-channel surround live electronics (2009) 'Starlight Snowfall', String Ensemble and 4 channel electronics (2010) 'Thinking Inside the Box', Stereo fixed media installation (2010) 'Crosswire', for electric violin and live processing (2010) 'Anatomy of a Mountain Stream', 4 channel fixed media (2011) 'Sound Games', electronics and live controllers (2011) 'Jackdaws', 4 channel fixed media poetry collaboration with Rhys Trimble (2011) 'Who can hear the Sea', Octophonic evolving installation (2012) 'DROP!', Marble run sound game installation (2013) 'Sonic Wavelab' Sound art and sculpture installation with Charles Gershom (2014) 'Xmas-o-lophone' Sound sculpture (2015) 'Ricecar' for electric violin and stochastic step sequencer (2016) 'Like Shooting-stars Over Petrol Pools' for electric violin, FX and modular synthesis (2017) 'Space To Think' acousmatic concert music (2018) '1 Mile 365 Days: Stories from running a mile daily for a year' paperback and eBook (2018) 'Turbo' stereo acousmatic music (2019) Fundraising activities Wright ran at least 1 mile everyday in 2017 to raise money for the Cleft Lip & Palate Association. He completed the challenge running over 500 miles, raising in excess of £2500. It may be relevant to this to note that Edward was born with a bilateral cleft lip and palate. References External links Electroacoustic Wales composers website Bangor New Music Festival CLAPA News 2017 Wright's 2017 Running Blog Living people English composers 1980 births People from Buckinghamshire Musicians from Buckinghamshire
507823
https://en.wikipedia.org/wiki/Andrew%20Project
Andrew Project
The Andrew Project was a distributed computing environment developed at Carnegie Mellon University (CMU) beginning in 1982. It was an ambitious project for its time and resulted in an unprecedentedly vast and accessible university computing infrastructure. History The Information Technology Center, a partnership of Carnegie Mellon and IBM, began work on the Andrew Project in 1982. In its initial phase, the project involved both software and hardware, including wiring the campus for data and developing workstations to be distributed to students and faculty at CMU and elsewhere. The proposed "3M computer" workstations included a million-pixel display and a megabyte of memory, running at a million instructions per second. Unfortunately, a cost on the order of US$10,000 put the computers beyond the reach of students' budgets. The initial hardware deployment in 1985 established a number of university-owned "clusters" of public workstations in various academic buildings and dormitories. The campus was fully wired and ready for the eventual availability of inexpensive personal computers. Early development within the Information Technology Center, originally called VICE (Vast Integrated Computing Environment) and VIRTUE (Virtue Is Reached Through Unix and Emacs), focused on centralized tools, such as a file server, and workstation tools including a window manager, editor, email, and file system client code. Initially the system was prototyped on Sun Microsystems machines, and then moved to IBM RT PC series computers running a special IBM Academic Operating System. People involved in the project included James H. Morris, Nathaniel Borenstein, James Gosling, and David S. H. Rosenthal. The project was extended several times after 1985 in order to complete the software, and was renamed "Andrew" for Andrew Carnegie and Andrew Mellon, the founders of the institutions that eventually became Carnegie Mellon University. Mostly rewritten as a result of experience from early deployments, Andrew had four major software components: the Andrew Toolkit (ATK), a set of tools that allows users to create and distribute documents containing a variety of formatted and embedded objects; the Andrew Messaging System (AMS), an email and bulletin board system based on ATK; the Andrew File System (AFS), a distributed file system emphasizing scalability for an academic and research environment; and the Andrew Window Manager (WM), a tiled (non-overlapping windows) window system which allowed remote display of windows on a workstation display. WM was one of the first network-oriented window managers to run on Unix as a graphical display. As part of CMU's partnership with IBM, IBM retained the licensing rights to WM. WM was meant to be licensed under reasonable terms, which CMU thought would resemble a relatively cheap UNIX license, while IBM sought a more lucrative licensing scheme. WM was later replaced by X11 from MIT. Its developers, Gosling and Rosenthal, would next develop the NeWS (Network extensible Window System). AFS moved out of the Information Technology Center to Transarc in 1988. AMS was fully decommissioned and replaced with the Cyrus IMAP server in 2002. The Andrew User Interface System After IBM's funding ended, Andrew continued as an open-source project named the Andrew User Interface System. AUIS is a set of tools that allows users to create and distribute documents containing a variety of formatted and embedded objects. It is an open-source project run at the Department of Computer Science at CMU.
The Andrew Consortium governs and maintains the development and distribution of the Andrew User Interface System. The Andrew User Interface System encompasses three primary components. The Andrew User Environment (AUE) contains the main editor, help system, user interface, and tools for rendering multimedia and embedded objects. The Andrew Toolkit (ATK) contains all of the formattable and embeddable objects, and allows a method for developers to design their own objects. ATK allows for multi-level object embedding, in which objects can be embedded in one another. For example, a raster image object can be embedded into a spreadsheet object. The Andrew Message System (AMS) provides a mail and bulletin board access, which allows the user to send, receive, and organize mail as well as post and read from message boards. As of version 6.3, the following were components of AUIS: Applications Word processor (EZ) Drawing Editor (Figure) Mail and News Reader (Messages) Mail and News Sender (SendMessage) Font Editor (BDFfont) Documentation Browser (Help) Directory Browser (Bush) Schedule Maintainer (Chump) Shell Interface/Terminal (Console, TypeScript) AUIS Application Menu (Launch) Standard Output Viewer (PipeScript) Preferences Editor (PrefEd) Graphical and interactive editors Equation Insert (EQ) Animation Editor (Fad) Drawing Editor (Figure) Insert Layout Insert (Layout) Display Two Adjacent Inserts (LSet) Extension and String Processing Language (Ness) Display and Edit Hierarchies (Org) Page Flipper (Page) Monochrome BMP Image Editor (Raster) Spreadsheet Insert (Table) Text, Document, and Program Editor (Text) Wireless Andrew Wireless Andrew was the first campus-wide wireless Internet network. It was built in 1993, predating Wi-Fi branding. Wireless Andrew is a 2-megabit-per-second wireless local area network connected through access points to the wired Andrew network, a high-speed Ethernet backbone linking buildings across the CMU campus. Wireless Andrew consists of 100 access points covering six buildings on the campus. The university tested the setup with over 40 mobile units before allowing general use by researchers and students in February 1997. References Further reading Morris, J.H., Van Houweling, D., & Slack, K., The Information Technology Center Carnegie Mellon Technical Report CMU-ITC-025, 1983. External links The Andrew Project - CMU's site chronicling the history of the project and the people involved. The Andrew Consortium - Website of the Andrew User Interface System project. /afs/cs.cmu.edu/project/atk-ftp - AUIS FTP archive. Carnegie Mellon University software Distributed computing architecture
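The multi-level embedding that ATK supports is essentially a recursive composite document model. The toy Python sketch below is purely illustrative and is not the actual ATK C interface: it shows embeddable objects (called "insets" here) that can nest inside one another, such as a raster image held in a spreadsheet cell.

class Inset:
    """Base class for an embeddable object (illustrative only)."""
    def render(self, indent=0):
        raise NotImplementedError

class Text(Inset):
    def __init__(self, content):
        self.content = content
    def render(self, indent=0):
        return " " * indent + self.content

class Raster(Inset):
    def __init__(self, width, height):
        self.width, self.height = width, height
    def render(self, indent=0):
        return " " * indent + f"[raster image {self.width}x{self.height}]"

class Spreadsheet(Inset):
    """A container inset whose cells may themselves hold other insets."""
    def __init__(self, cells):
        self.cells = cells  # dict mapping (row, column) -> Inset
    def render(self, indent=0):
        lines = [" " * indent + "[spreadsheet]"]
        for (row, col), inset in sorted(self.cells.items()):
            lines.append(" " * (indent + 2) + f"cell ({row},{col}):")
            lines.append(inset.render(indent + 4))
        return "\n".join(lines)

# A raster image embedded in a spreadsheet, which could itself be embedded in a document.
sheet = Spreadsheet({(0, 0): Text("Q1 totals"), (0, 1): Raster(64, 64)})
print(sheet.render())

The recursion is the whole point of the model: a container simply delegates rendering to whatever objects it holds, so embedding depth is unbounded.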
55204084
https://en.wikipedia.org/wiki/Deontay%20Burnett
Deontay Burnett
Deontay Burnett (born October 4, 1997) is an American football wide receiver who is a free agent. He was signed as an undrafted free agent by the Tennessee Titans in 2018. He played college football at USC. He has also been a member of the New York Jets, San Francisco 49ers, and Philadelphia Eagles. Early years Burnett attended Junípero Serra High School in Gardena, California. He was rated as a three-star recruit and was not a highly recruited player coming out of high school. He committed to the University of Southern California (USC) to play college football. Burnett was offered a blueshirt scholarship on signing day and chose to commit to the school he had dreamed of playing for as a kid. College career As a true freshman at USC in 2015, Burnett appeared in all but two of the Trojans' games. He was a backup receiver, finishing the year with 10 receptions for 161 yards. He led the team in receiving against California with 3 receptions for 82 yards. In his sophomore season at USC in 2016, Burnett broke out as one of the top three targets for the Trojans. He appeared in all 13 games, started the last 5 as the slot receiver, and finished with 56 receptions for 622 yards and 7 touchdowns. He also added 3 carries for 31 yards. Burnett had a standout performance in the 2017 Rose Bowl Game, where he caught 13 passes for 164 yards and 3 touchdowns. Burnett received numerous postseason accolades, earning first-team honors on the 2016 AP All-Bowl Team, the ESPN All-Bowl Team, and the ESPN Pac-12 All-Bowl Team. Professional career Tennessee Titans Burnett signed with the Tennessee Titans as an undrafted free agent on May 11, 2018. He was waived on September 1, 2018. New York Jets On September 3, 2018, Burnett was signed to the New York Jets' practice squad. He was released on September 21, 2018, but was re-signed four days later. He was promoted to the active roster on October 20, 2018. Burnett made his NFL debut on October 21, 2018, in a 37–17 loss to the Minnesota Vikings, catching one pass for nine yards. In Week 8 against the Chicago Bears, Burnett finished with team highs of 61 receiving yards and 4 receptions as the Jets lost 24–10. On August 31, 2019, Burnett was waived by the Jets. San Francisco 49ers On October 16, 2019, Burnett was signed to the San Francisco 49ers practice squad. He was released on December 10, 2019. Philadelphia Eagles Burnett was signed to the Philadelphia Eagles' practice squad on December 12, 2019. He was promoted to the active roster on December 24, 2019. In Week 17, he caught 2 passes for 48 yards in the 34–17 win against the Giants. Burnett was waived on September 3, 2020, and re-signed to the practice squad three days later. He was elevated to the active roster on September 26 and October 3 for the team's Week 3 and Week 4 games against the Cincinnati Bengals and San Francisco 49ers, and reverted to the practice squad after each game. He was placed on the practice squad/COVID-19 list by the team on November 19, 2020, and restored to the practice squad on December 3. In total, Burnett played in 2 games and had 3 receptions for 19 yards. He signed a reserve/future contract with the Eagles on January 4, 2021. He was waived with a non-football injury designation on March 23, 2021.
References External links USC Trojans bio Philadelphia Eagles bio 1997 births Living people Players of American football from Compton, California American football wide receivers USC Trojans football players Tennessee Titans players New York Jets players San Francisco 49ers players Philadelphia Eagles players Junípero Serra High School (Gardena, California) alumni
1991766
https://en.wikipedia.org/wiki/Robert%20L.%20Cook
Robert L. Cook
Robert L. Cook (born December 10, 1952) is a computer graphics researcher and developer, and the co-creator of the RenderMan rendering software. His contributions are considered to be highly influential in the field of animated arts. In 2009, Cook was elected a member of the National Academy of Engineering for building the motion picture industry's standard rendering tool. Cook was born in Knoxville, Tennessee, and educated at Duke University and Cornell University. While at Cornell, Cook worked with Donald P. Greenberg. Education B.S. in physics, 1973, Duke University, N.C. M.S. in computer graphics, 1981, Cornell University, Ithaca, N.Y. Career Robert Cook was involved with Lucasfilm and later served as Vice President of Software Development at Pixar Animation Studios, which he left in 1989. In November 2016, he became the Commissioner of the Technology Transformation Service of the U.S. General Services Administration. Computer Animation Rendering Star Trek II: The Wrath of Khan (1982) computer graphics: Industrial Light & Magic André and Wally B. (1984) 3D rendering Luxo, Jr. (1986) rendering Red's Dream (1987) reyes / miracle tilt Toy Story (1995) renderman software development Toy Story 2 (1999) rendering software engineer Monsters, Inc. (2001) software team lead Cars (2006) software team lead Up (2009) software development: Pixar studio team Awards 1987, ACM SIGGRAPH Achievement Award in recognition of his contributions to the fields of computer graphics and visual effects. 1992, Scientific and Engineering Award for the development of "RenderMan" software, which produces images used in motion pictures from 3D computer descriptions of shape and appearance. 1999, Fellow of the Association for Computing Machinery. 2000, Academy Award of Merit (Oscar) for significant advancements to the field of motion picture rendering as exemplified in Pixar's RenderMan. Their broad professional influence in the industry continues to inspire and contribute to the advancement of computer-generated imagery for motion pictures. GATF InterTech Award MacWorld World Class Award Seybold Award for Excellence 2009, The Steven Anson Coons Award for Outstanding Creative Contributions to Computer Graphics 2009, Elected to the National Academy of Engineering References 1952 births Living people Computer graphics professionals Cornell University alumni Duke University alumni People from Knoxville, Tennessee Fellows of the Association for Computing Machinery Members of the United States National Academy of Engineering Lucasfilm people Pixar people
22168795
https://en.wikipedia.org/wiki/Cardepia
Cardepia
Cardepia is a genus of moths of the family Noctuidae. Species Cardepia affinis Rothschild, 1913 Cardepia arenaria (Hampson, 1905) Cardepia arenbergeri Pinker, 1974 Cardepia dardistana Boursin, 1967 Cardepia halophila Hacker, 1998 Cardepia hartigi Parenzan, 1981 Cardepia helix Boursin, 1962 Cardepia irrisoria (Erschoff, 1874) Cardepia kaszabi Sukhareva & Varga, 1973 Cardepia legraini Hacker, 1998 Cardepia martoni Hacker, 1998 Cardepia mixta (Pagenstecher, 1907) Cardepia oleagina Hacker, 1998 Cardepia sociabilis (Graslin, 1850) References Cardepia at Markku Savela's Lepidoptera and Some Other Life Forms Natural History Museum Lepidoptera genus database Hadenini Moth genera
1303398
https://en.wikipedia.org/wiki/Gaston%20Means
Gaston Means
Gaston Bullock Means (July 11, 1879 – December 12, 1938) was an American private detective, salesman, bootlegger, forger, swindler, murder suspect, blackmailer, and con artist. While not involved in the Teapot Dome scandal, Means was associated with other members of the so-called Ohio Gang that gathered around the administration of President Warren G. Harding. Means also tried to pull a con associated with the Lindbergh kidnapping, and died in prison following his criminal conviction. Biography Gaston Bullock Means was born in Concord, North Carolina, the son of William Means, a reputable lawyer. He was also a great-nephew of Confederate General Rufus Barringer. He was in the first graduating class of Concord High School in 1896, graduated from the University of North Carolina in 1903, became a schoolteacher, then a travelling salesman. His life avocation, however, was that of a confidence trickster. J. Edgar Hoover once called him "the most amazing figure in contemporary criminal history" because of his ability to weave a believable, albeit fraudulent, story. In 1911, he talked himself into a job with a New York detective firm where he created reports that contained so many clues that they must either be investigated further (at a substantial cost) or denounced utterly. His reputation spread. On the eve of World War I, he was asked to further Germany's interests in the neutral United States. He "uncovered" plots and counterplots rife with secret documents and skulking spies, all of which required investigation at his usual rate of $100 (gold standard dollars) per day. After America declared war on Germany, Means returned to being a private detective. There, he was given a case involving Maude King, the widow of a wealthy lumberman, who had fallen into the clutches of a swindler in Europe. King had been left $100,000 by her late husband, with the remainder of his $3 million estate intended for charity. She sued for more and settled for $600,000 plus the interest on $400,000. Means ingratiated himself into King's life and assisted her with her business affairs. Under the guise of investing her money, Means deposited hundreds of thousands of dollars to his own credit in New York and Chicago, invested in cotton and the stock market and lost heavily. Claiming to find a new will which required "investigation", Means plundered the remainder of the woman's finances until they were nearly all gone. On August 29, 1917, the widow went with Means to a firing range. Means returned with her body, claiming she had killed herself, perhaps accidentally while handling his gun. Means' account was disputed by the coroner; no powder marks were found near the wound in her head, discounting a self-inflicted wound. Maude was fearful of pistols and she was planning to remarry. Means was indicted for murder and after deliberating for only 15 minutes, a jury in his home town acquitted him, after defense counsel cleverly whipped up local jury resentment against New York lawyers who were assisting the prosecution. The will was declared a forgery and Means was prosecuted. Testimony showed that the witnesses to the purported will were out of town on the day it was signed, the typewriter used to type the document had not yet been manufactured when the will was purportedly written and King's signature and those of other witnesses were not genuine. The trial was going badly for Means when he declared that he knew the location of a trunk filled with secret documents obtained from German spies. 
In exchange for a letter to the judge attesting to his good character from the United States Army, he said, he would hand over that trunk. An Army Intelligence officer was assigned to accompany Means to locate the trunk, which he did, handing it over on the condition that it be sent to Washington intact. Then, baggage claim in hand, he hurried to Washington, declared he had kept his end of the bargain and demanded the promised letter attesting to his good service. Alas, the trunk arrived and it was found to contain no documents. Declaring he knew who had done this "despicable thing", Means promised to find the scoundrels and recover the lost papers. The army investigated and discovered the weight of the trunk when sent was identical to its weight when opened. In later years, Means boasted to friends that he had been accused of every felony in the criminal law books, up to and including murder. Although he had a shady reputation as a detective, in October 1921, Means was hired by the Bureau of Investigation and he moved to Washington, D.C. The FBI was then led by William J. Burns, famous ex-Secret Service man, private detective and friend of Harry M. Daugherty, Attorney General in the Harding administration. Burns had employed Means as a detective and thought Means had great skill as an investigator and an extortionist. Despite the protection of his patron, Means was later suspended from the FBI at the insistence of Daugherty, who had become increasingly aware that Means was a loose cannon. Bootleggers' helper Although the United States was officially "dry" during the Harding years as a result of Prohibition, illegal alcohol was common. In the late fall of 1922, Means began selling his services to local Washington bootleggers, with the offer that he could use his connections to "fix" their legal problems with the government. In 1924, following Harding's death, Congress held hearings on the Justice Department's role in failing to oversee their Prohibition duties under the Volstead Act. Means testified against former Attorney General Daugherty. Means "confessed" to handling bribes for senior officials in the former Harding Administration. He declared the country was being bespoiled and that he had the documents to prove it. When asked to produce them, Means readily agreed but returned with a story that "two sergeants-at-arms" had appeared at his home, produced an order signed by the head of the committee and had taken the documents away with them. The committee head examined the "order" and declared his signature a forgery. Means leaped from his chair. "Forgery!" he said. "I've been tricked by my enemies. I'll run them down if it's the last thing I do!" After testifying before the Brookhart-Wheeler committee, Gaston Means signed an affidavit stating that his entire testimony was false and that he had only given it because senator Burton Wheeler had promised to get the then current charges against him dismissed. Means further admitted in his affidavit that he had coached the other witness in the case, Roxie Stinson. The congressional investigation also revealed evidence of Means's role in the illegal issuance of Prohibition-era liquor permits. Means was indicted for perjury and tried before a jury. In intentionally sensational testimony, Means implicated both Harding and Secretary of the Treasury Andrew Mellon as being part of the cover-up. 
Unable to support his own counter-charges and unable to convince the jury of his innocence, Means was found guilty of perjury and sentenced to two years in federal prison. Professional con man During and after his term in the federal penitentiary, Means retained his reputation as the ultimate man who knew all of the secrets. He put this reputation to work in his book, The Strange Death of President Harding (1930). The exposé alleged that Harding had been consciously complicit in all of the major scandals of his administration. The book's status as a bestseller derived in considerable measure from its insinuation that the President had been murdered by his wife, First Lady Florence Harding, with assistance from the couple's personal physician, Charles E. Sawyer. Mrs. Harding's alleged motivation was that she had become aware of her husband's corruption and marital infidelity and wanted to protect his reputation. Means' accusations seemed to some to be true. The writer had learned many facts about Harding's sex life from the rumor mill in Washington. A 1933 counter-exposé, published in Liberty, blew the cover off of the dubious book. Part-time journalistic stringer Mae Dixon Thacker confessed that not only had she ghostwritten the book for Means but also that Means had bilked her out of her share of the profits. Mrs. Thacker further stated that Means had never provided the documents he swore he had that backed his accusations. Upon further investigation, Thacker had found the accounts Means made in the book were completely false and she stated in Liberty Magazine that the book was a “colossal hoax – a tissue of falsehood from beginning to end.” Having collected his royalties, Means cheerfully repudiated his own book. He had moved on to a new set of victims, a group of New York men who were interested in subversive Soviet activities. Means claimed to have the goods on two Soviets intent on wreaking havoc in the United States with $2 million earmarked for that purpose. He took on the case at his usual price of $100 per day. His investigation dragged out for three years as Means promised to bring the secret agents to justice and to capture 24 trunks and 11 suitcases full of secret orders, plans and diaries. He claimed several times that he almost got those trunks and suitcases and once did so, he said, but upon his return to New York, the secret agents stole them back again. Finally, he delivered the news that one of the Russians had murdered the other and all the documents had been burned. He told his story so convincingly that an arrest warrant was sworn out against the killer, for a murder that existed only in Means's imagination. Following the Lindbergh kidnapping of 1932, Means attempted the most audacious con job of his career. Means was contacted by the Washington socialite Evalyn Walsh McLean (owner of the Hope Diamond), who asked him to use his connections in the East Coast underworld to assist in the recovery of the Lindbergh child. Means declared that he knew the whereabouts of the victim. He offered his services as a go-between and asked for $100,000 to pass on to the kidnappers. The credulous McLean sent Means the money and Means promptly disappeared, while a confederate kept McLean apprised of Means's difficulties and the fabulous chase. Means later came to McLean at her home again and said he would need an additional $4,000 to pay the expenses of the kidnappers; she had a $6,000 check cashed at one of the banks in Washington and turned $4,000 over to him. 
Finally, Means met McLean in a southern resort, promising to deliver the baby. Instead, he showed up with a man he introduced as the "King of the Kidnappers", who told her how and when the baby would be delivered. Everyone was given a code, Means was No. 27, the "King" was No. 19, Norman Tweed Whitaker was "The Fox", McLean was No. 11, and the baby was "The Book". The missing baby (who was later found murdered) did not show up and the next thing that McLean heard from Means was a demand for another $35,000. Failing to raise it, the heiress demanded all the money back. Means agreed, hurried to get it – and didn't come back. Confronted about his duplicity, Means expressed astonishment. "Didn't Mrs. McLean get it?", he asked. "She must have it. Her messenger met me at the bridge outside Alexandria as I was returning to Washington. He said 'I am Number 11.' So what was I to do? I gave him the money." This time, the heiress called the police, Means was captured, found guilty of grand larceny, and sentenced to serve 15 years in a federal penitentiary but the money was never recovered. Means was assigned, transported, and imprisoned at the United States Penitentiary at Leavenworth, Kansas, where he died in custody in 1938. In popular culture Gaston Means appears in the third and fourth seasons of the TV series Boardwalk Empire, played by Stephen Root. Means is portrayed as a kind of confidence man who sells information to people like Nucky Thompson, or does the dirty work of politicos like Attorney General Daugherty. He agrees to murder Daugherty's friend and associate Jess Smith only to have Smith commit suicide before he can do the deed. This alludes to the historical ambiguity over whether Smith's death was murder or suicide. The fictional Means was arrested for perjury—just as in real life—in season four of the show. Means is also mentioned in the book Go Set a Watchman, by Harper Lee. See also My Life with Gaston B. Means: As Told by His Wife, Julie P. Means Little Green House on K Street References Sources Dean, John; Schlesinger, Arthur M. Warren Harding (The American President Series), Times Books, 2004. Ferrell, Robert H. The Strange Deaths of President Harding. University of Missouri Press, 1996. Mee, Charles L., Jr. The Ohio Gang: A Historical Entertainment. M. Evans, 1991. "The Amazing Mr. Means" by J. Edgar Hoover, The American Magazine December 1936 reprinted in Reader's Digest March 1937 pg. 30. United States of America vs. Gaston B. Means and Norman T. Whitaker, Criminal No. 53134, May 8, 1933. American confidence tricksters People from Concord, North Carolina 1879 births 1938 deaths Warren G. Harding People acquitted of murder American people who died in prison custody Prisoners who died in United States federal government detention American people convicted of fraud American bootleggers Barringer family
23616030
https://en.wikipedia.org/wiki/Janusz%20Brzozowski%20%28computer%20scientist%29
Janusz Brzozowski (computer scientist)
Janusz (John) Antoni Brzozowski (May 10, 1935 – October 24, 2019) was a Polish-Canadian computer scientist and Distinguished Professor Emeritus at the University of Waterloo's David R. Cheriton School of Computer Science. In 1962, Brzozowski earned his PhD in the field of electrical engineering at Princeton University under Edward J. McCluskey. The topic of the thesis was Regular Expression Techniques for Sequential Circuits. From 1967 to 1996 he was a professor at the University of Waterloo. He is known for his contributions to mathematical logic, circuit theory, and automata theory. Achievements in research Brzozowski worked on regular expressions and on syntactic semigroups of formal languages. The result was Characterizations of Locally Testable Events, written together with Imre Simon, which had an impact on the development of the algebraic theory of formal languages comparable to that of Marcel-Paul Schützenberger's characterization of the star-free languages. In this area, at least three concepts today bear Brzozowski's name in honour of his contributions. The first is the Brzozowski conjecture about the regularity of noncounting classes. The second is Brzozowski's algorithm, a conceptually simple algorithm for performing DFA minimization. The third is the so-called Brzozowski hierarchy inside the star-free languages, also known as the dot-depth hierarchy, to which Eilenberg's reference work on automata theory devotes a chapter. Curiously, Brzozowski was a co-author not only of the paper that defined the dot-depth hierarchy and raised the question of whether this hierarchy is strict, but also, roughly ten years later, of the paper that resolved that problem. The Brzozowski hierarchy gained further importance after Thomas discovered a relation between the algebraic concept of dot-depth and the alternation depth of quantifiers in first-order logic via Ehrenfeucht–Fraïssé games. He received the following academic awards and honours: NSERC Scientific Exchange Award to France (1974–1975) Japan Society for the Promotion of Science Research Fellowship (1984) Computing Research Association Certificate of Appreciation for outstanding contributions and service as a member of the CRA Board of Directors (1992) Distinguished Professor Emeritus, University of Waterloo, Canada (1996) Medal of Merit, Catholic University of Lublin, Poland (2001) IBM Canada Canadian Pioneer in Computing (2005) The Role of Theory in Computer Science, a one-day conference in honour of John Brzozowski's 80th birthday (2015) The Role of Theory in Computer Science: Essays Dedicated to Janusz Brzozowski, World Scientific (2017) Lifetime Achievement Award, Computer Science Canada/Informatique Canada (CS-CAN/INFO-CAN) (2016) CIAA 2017 Sheng Yu Award for Best Paper for Complexity of Proper Prefix-Convex Regular Languages by J. Brzozowski and C. Sinnamon CIAA 2018 Sheng Yu Award for Best Paper for State Complexity of Overlap Assembly by J. Brzozowski, L. Kari, B. Li, M. Szykula Research papers J. A. Brzozowski: Derivatives of regular expressions, Journal of the ACM 11(4): 481–494 (1964) J. A. Brzozowski, I. Simon: Characterizations of Locally Testable Events, FOCS 1971, pp. 166–176 R. S. Cohen, J. A. Brzozowski: Dot-Depth of Star-Free Events. Journal of Computer and System Sciences 5(1): 1–16 (1971) J. A. Brzozowski, R. Knast: The Dot-Depth Hierarchy of Star-Free Languages is Infinite. Journal of Computer and System Sciences 16(1): 37–55 (1978) Books J. A. Brzozowski, M. Yoeli: Digital Networks. Prentice–Hall, 1976 J.A. Brzozowski, C.-J. H.
Seger: Asynchronous Circuits. Springer-Verlag, 1995 Notes References S. Eilenberg, Automata, Languages and Machines, Volume B. W. Thomas, Classifying Regular Events in Symbolic Logic. J. Comput. Syst. Sci. 25(3): 360-376 (1982) J.-E. Pin, Syntactic semigroups, Chapter 10 in "Handbook of Formal Language Theory", Vol. 1, G. Rozenberg and A. Salomaa (eds.), Springer Verlag, (1997) Vol. 1, pp. 679–746 A. de Luca and S. Varicchio, Regularity and Finiteness Conditions, Chapter 11 in "Handbook of Formal Language Theory", Vol. 1, G. Rozenberg and A. Salomaa (eds.), Springer Verlag, (1997) Vol. 1, pp. 747–810 V. Diekert, P. Gastin, M. Kufleitner, A Survey on Small Fragments of First-Order Logic over Finite Words. Int. J. Found. Comput. Sci. 19(3): 513-548 (2008) J. Shallit, A Second Course in Formal Languages and Automata Theory, Cambridge University Press (2009) External links Profile of Janusz Brzozowski, University of Waterloo Brzozowski's personal website at the University of Waterloo The Theory of Computing Hall of Fame Concatenation Hierarchies by Jean-Eric Pin Polish computer scientists Polish emigrants to Canada Canadian computer scientists Theoretical computer scientists 1935 births 2019 deaths
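Of the papers listed above, Derivatives of regular expressions (1964) lends itself to a compact illustration. The following Python sketch is an independent reimplementation of the idea, not code from the paper: the derivative of an expression r with respect to a symbol a describes the words w such that aw is matched by r, so a word is accepted exactly when taking one derivative per input symbol leaves a nullable expression.

from dataclasses import dataclass

class Re:
    """Base class for regular expressions."""

@dataclass(frozen=True)
class Empty(Re):   # matches no word at all
    pass

@dataclass(frozen=True)
class Eps(Re):     # matches only the empty word
    pass

@dataclass(frozen=True)
class Sym(Re):     # matches a single symbol
    c: str

@dataclass(frozen=True)
class Alt(Re):     # union: left | right
    left: Re
    right: Re

@dataclass(frozen=True)
class Cat(Re):     # concatenation: left followed by right
    left: Re
    right: Re

@dataclass(frozen=True)
class Star(Re):    # Kleene star: inner*
    inner: Re

def nullable(r):
    """Return True iff r matches the empty word."""
    if isinstance(r, (Eps, Star)):
        return True
    if isinstance(r, (Empty, Sym)):
        return False
    if isinstance(r, Alt):
        return nullable(r.left) or nullable(r.right)
    if isinstance(r, Cat):
        return nullable(r.left) and nullable(r.right)
    raise TypeError(r)

def deriv(r, a):
    """Brzozowski derivative of r with respect to the symbol a."""
    if isinstance(r, (Empty, Eps)):
        return Empty()
    if isinstance(r, Sym):
        return Eps() if r.c == a else Empty()
    if isinstance(r, Alt):
        return Alt(deriv(r.left, a), deriv(r.right, a))
    if isinstance(r, Star):
        return Cat(deriv(r.inner, a), r)
    if isinstance(r, Cat):
        first = Cat(deriv(r.left, a), r.right)
        return Alt(first, deriv(r.right, a)) if nullable(r.left) else first
    raise TypeError(r)

def matches(r, word):
    """Decide membership by taking one derivative per input symbol."""
    for a in word:
        r = deriv(r, a)
    return nullable(r)

# (ab)*a matched against two sample words:
regex = Cat(Star(Cat(Sym('a'), Sym('b'))), Sym('a'))
print(matches(regex, "aba"), matches(regex, "abab"))  # True False

Brzozowski's minimization algorithm mentioned above is related in spirit: reversing a finite automaton and determinizing it, then reversing and determinizing once more, yields the minimal deterministic automaton for the original language.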
35954361
https://en.wikipedia.org/wiki/Cloud%20computing%20architecture
Cloud computing architecture
Cloud computing architecture refers to the components and subcomponents required for cloud computing. These components typically consist of a front-end platform (fat client, thin client, mobile device), back-end platforms (servers, storage), a cloud-based delivery, and a network (Internet, Intranet, Intercloud). Combined, these components make up cloud computing architecture. Client platforms Cloud computing architectures consist of front-end platforms called clients or cloud clients. These clients are servers, fat (or thick) clients, thin clients, zero clients, tablets and mobile devices that users directly interact with. These client platforms interact with the cloud data storage via an application (middleware), via a web browser, or through a virtual session. Virtual sessions in particular require a secure encryption framework that spans the entire interface. Zero client The zero or ultra-thin client initializes the network to gather the required configuration files that then tell it where its OS binaries are stored. The entire zero client device runs via the network. This creates a single point of failure, in that, if the network goes down, the device is rendered useless. Storage Cloud storage is online network storage where data is stored and accessible to multiple clients. Cloud storage is generally deployed in the following configurations: public cloud, private cloud, community cloud, or some combination of the three, also known as hybrid cloud. In order to be effective, cloud storage needs to be agile, flexible, scalable, multi-tenant, and secure. Delivery Software as a service (SaaS) The software-as-a-service (SaaS) service model involves the cloud provider installing and maintaining software in the cloud and users running the software from the cloud over the Internet (or an intranet). The users' client machines require no installation of any application-specific software, since cloud applications run in the cloud. SaaS is scalable, and system administrators may load the applications on several servers. In the past, each customer would purchase and load their own copy of the application onto each of their own servers, but with SaaS the customer can access the application without installing the software locally. SaaS typically involves a monthly or annual fee. Software as a service provides the equivalent of installed applications in the traditional (non-cloud computing) delivery of applications. Software as a service has four common approaches: single instance, multi-instance, multi-tenant, and flex tenancy. Development as a service (DaaS) Development as a service is a web-based, community-shared tool set. This is the equivalent of locally installed development tools in the traditional (non-cloud computing) delivery of development tools. Data as a service (DaaS) Data as a service is a web-based design construct in which cloud data is accessed through a defined API layer. DaaS services are often considered a specialized subset of a software-as-a-service (SaaS) offering.
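As a concrete illustration of the "defined API layer" just mentioned, the short Python sketch below fetches records from a hosted dataset over HTTP using only the standard library. It is a generic sketch rather than the interface of any real provider: the base URL, the dataset name, and the query parameters are hypothetical placeholders.

import json
import urllib.parse
import urllib.request

def fetch_records(base_url, dataset, **filters):
    """Query a DaaS-style HTTP API layer and return the decoded JSON payload."""
    query = urllib.parse.urlencode(filters)          # e.g. year=2020&limit=50
    url = f"{base_url}/datasets/{dataset}?{query}"   # hypothetical URL scheme
    with urllib.request.urlopen(url, timeout=10) as response:
        return json.load(response)

# Example call against a hypothetical endpoint:
# records = fetch_records("https://data.example.com/api/v1",
#                         "city-budgets", year=2020, limit=50)

The point of the sketch is the separation of concerns that the DaaS model relies on: the client never touches the underlying cloud storage directly, only the published API contract.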
Platform as a service (PaaS) Platform as a service is a cloud computing service which provides users with application platforms and databases as a service. This is equivalent to middleware in the traditional (non-cloud computing) delivery of application platforms and databases. Infrastructure as a service (IaaS) Infrastructure as a service takes the physical hardware and goes completely virtual (e.g. all servers, networks, storage, and system management existing in the cloud). This is the equivalent of the infrastructure and hardware of the traditional (non-cloud computing) method, run in the cloud. In other words, businesses pay a fee (monthly or annually) to run virtual servers, networks, and storage from the cloud. This mitigates the need for a local data center and for heating, cooling, and maintaining hardware on site. Networking Generally, the cloud network layer should offer: High bandwidth and low latency Allowing users to have uninterrupted access to their data and applications. Agile network On-demand access to resources requires the ability to move quickly and efficiently between servers and possibly even clouds. Network security Security is always important, but when dealing with multi-tenancy it becomes much more important because multiple customers' data and workloads must be segregated. See also Cloud collaboration Cloud computing Cloud computing comparison Cloud database Cloud storage Further reading Reese, G. (2009). Cloud Application Architectures: Building Applications and Infrastructure in the Cloud. Sebastopol, CA: O'Reilly Media, Inc. Rhoton, J. and Haukioja, R. (2011). Cloud Computing Architected: Solution Design Handbook. Recursive Limited, 2011. Shroff, Gautam. Enterprise Cloud Computing: Technology, Architecture, Applications. References Cloud computing
20881253
https://en.wikipedia.org/wiki/Tessitura%20%28software%29
Tessitura (software)
Tessitura is an enterprise application used by performing arts and cultural organisations to manage their activities in ticketing, fundraising, customer relationship management, and marketing. It refers to itself as "arts enterprise software". History and business model Tessitura was originally developed by and for the Metropolitan Opera of New York. One of the aspects which distinguishes Tessitura from most other commercial software is the business model chosen by the Metropolitan Opera to commercialize what was originally custom software. The Metropolitan Opera maintains ownership of the intellectual property in the original software, but established a separate organization called Tessitura Network (as a not-for-profit corporation with 501(c)3 status under United States tax law) to manage the ongoing development and support of the system. The Tessitura Network now licenses users, handles management, maintenance and development of the system, and fosters an active exchange of best practices and knowledge sharing within the nonprofit arts and cultural sector. The Tessitura Network is effectively a cooperative enterprise, governed via a board elected by and from, and representative of, the licensees of the system. This business model has an obvious resonance with the not-for-profit and self-governing ethos of the arts community, and is one reason for the dominance Tessitura has rapidly achieved in the (deliberately restricted) market in which it operates–English-speaking, not-for-profit, arts organizations with a need for ticketing and fundraising systems. System functionality The Tessitura system is designed to be flexible, customizable, and open, and therefore can be tailored for each organization. Functional areas include ticketing, fundraising, constituent relationship management, Web API, and marketing tools. Notable users In the United States 5th Avenue Theatre (Seattle) 92nd Street Y (New York City) Academy of Vocal Arts (Philadelphia) ACT Theatre (Seattle) Adrienne Arsht Center for the Performing Arts of Miami-Dade County American Repertory Theater Arena Stage (Washington, D.C.) Arizona Science Center (Phoenix, Arizona) Asolo Repertory Theatre (Sarasota, Florida) Atlanta Opera Atlanta Ballet Atlanta Symphony Orchestra AT&T Performing Arts Center (Dallas) Austin Opera (Austin, Texas) Austin Symphony Orchestra (Austin, Texas) Ballet Austin (Austin, Texas) Baltimore Symphony Orchestra Berkeley Repertory Theatre Boston Ballet Boston Symphony Orchestra Brooklyn Academy of Music Carnegie Hall Center Theatre Group (Los Angeles) Chicago Opera Theater Chicago Shakespeare Theater Chicago Symphony Orchestra Cincinnati Symphony Orchestra Cleveland Museum of Natural History Cleveland Orchestra Cooper Hewitt, Smithsonian Design Museum (New York) Dallas Opera Dallas Symphony Orchestra Florida Studio Theatre Folger Shakespeare Library Fort Worth Symphony Orchestra Georgia Aquarium (Atlanta) Goodman Theatre (Chicago) Grand Opera House (Wilmington, Delaware) Guthrie Theater (Minneapolis) High Museum of Art (Atlanta) Indianapolis Symphony Orchestra Isabella Stewart Gardner Museum (Boston) Ithaca College (Ithaca, New York) James Museum of Western and Wildlife Art (St. 
Petersburg, FL) Jazz at Lincoln Center (New York) Kaufman Music Center (New York City) Laguna Playhouse Lincoln Center for the Performing Arts Los Angeles Master Chorale Los Angeles Opera Los Angeles Philharmonic Lyric Opera of Kansas City Miami City Ballet McCarter Theatre (Princeton, New Jersey) Minnesota Opera (Minneapolis, Minnesota) Minnesota Zoo (Apple Valley, Minnesota) Metropolitan Museum of Art (New York City) Metropolitan Opera (New York City) The Metropolitan Opera Guild (New York) Mondavi Center (Davis, California) Mount Vernon Nashville Opera (Nashville, Tennessee) National Geographic Museum (Washington, D.C.) New Victory Theater (New York City) New York Philharmonic (New York City) Opera Philadelphia Opera Theatre of Saint Louis Oregon Shakespeare Festival Pennsylvania Ballet Perot Museum of Nature and Science (Dallas) Pittsburgh Cultural Trust Philadelphia Orchestra Phoenix Symphony Portland Center Stage (Oregon) Richard B. Fisher Center for the Performing Arts (Annandale-on-Hudson, New York) Ruth Eckerd Hall (Clearwater, Florida) San Diego Symphony San Francisco Ballet San Francisco Museum of Modern Art San Francisco Opera San Francisco Symphony SFJAZZ (San Francisco) San Jose Repertory Theatre Science Museum of Minnesota Seattle Children's Theatre Seattle Opera Seattle Symphony Segerstrom Center for the Arts Shakespeare Theatre Company (Washington, DC) The Shed (Hudson Yards) (New York City) Signature Theatre Company (New York City) Smith Center for the Performing Arts (Las Vegas) Steppenwolf Theatre Company (Chicago) St. Louis Symphony Studio Theatre (Washington, D.C.) Tennessee Performing Arts Center (Nashville, Tennessee) Texas Ballet Theater (Fort Worth) TheatreWorks (Silicon Valley) Tulsa Opera University Musical Society (Ann Arbor, Michigan) University of California, Berkeley – Cal Performances University of California, Santa Barbara – Arts and Lectures University of North Carolina at Chapel Hill Utah Symphony – Utah Opera (Salt Lake City) Westport Country Playhouse (Westport, Connecticut) The Whiting (Flint, Michigan) Wilma Theater (Philadelphia) Witte Museum (San Antonio, Texas) Woolly Mammoth Theatre Company (Washington, DC) Yale Repertory Theatre (New Haven, Connecticut) In Canada Aga Khan Museum (Toronto, Ontario) Alberta Theatre Projects (Calgary, Alberta) Arts Club Theatre Company (Vancouver, British Columbia) Calgary Opera Edmonton Opera Edmonton Symphony Orchestra Harbourfront Centre (Toronto, Ontario) Kitchener–Waterloo Symphony National Ballet of Canada (Toronto, Ontario) Royal Manitoba Theatre Centre Royal Winnipeg Ballet Science North (Sudbury, Ontario) Shaw Festival (Niagara-on-the-Lake, Ontario) Soulpepper Theatre Company Stratford Shakespeare Festival Theatre Calgary Toronto Symphony Orchestra Vancouver Opera Vancouver Playhouse Vancouver Symphony Orchestra Victoria Symphony In the UK Almeida Theatre (London) BBC National Orchestra of Wales (Cardiff) Birmingham Hippodrome (The UK's busiest theatre) The Bridgewater Hall Donmar Warehouse (London) English National Ballet Glyndebourne Festival Opera Grange Park Opera The Hallé (Manchester) The Mayflower (Southampton) The Old Vic (London) The Roundhouse (London) The Royal & Derngate (Northampton) Royal Albert Hall (London) Royal National Theatre (London) Royal Opera House (London) Royal Shakespeare Company (Stratford-upon-Avon) Sage Gateshead Southbank Centre (London) Theatre Royal, Plymouth Wales Millennium Centre (Cardiff) Wigmore Hall (London) Young Vic Company (London) In Ireland Abbey Theatre (Dublin) 
In Australia Adelaide Symphony Orchestra Arts Centre Melbourne The Australian Ballet (Melbourne) Australian Brandenburg Orchestra (Sydney) Australian Centre for the Moving Image (Melbourne) Australian Chamber Orchestra Bell Shakespeare Belvoir Melbourne Festival Melbourne Recital Centre Melbourne Symphony Orchestra Melbourne Theatre Company Museum of Old and New Art Opera Australia (Sydney) Perth Theatre Trust Queensland Symphony Orchestra State Theatre Company of South Australia Sydney Opera House Sydney Theatre Company In New Zealand Auckland War Memorial Museum The Edge Performing Arts & Convention Centre (Auckland) Notable former users In the UK Opera Holland Park (London) References External links Customer relationship management software 2000 software Metropolitan Opera
42617007
https://en.wikipedia.org/wiki/PCKeeper
PCKeeper
PCKeeper is advertised as an optimization services package featuring a set of software utilities for the Windows OS, owned by Essentware S.A. (a company based in Bratislava, Slovakia). It includes two separate products for Windows: PCKeeper Live and PCKeeper Antivirus. PCKeeper was originally developed by Zeobit LLC, which was founded in 2009 by Slava Kolomiychuk. PCKeeper was released in September 2010. Kromtech Alliance acquired PCKeeper and MacKeeper from Zeobit in May 2013. In 2015, PCKeeper changed its legal owner from Kromtech Alliance Corp. to Essentware S.A., as Kromtech Alliance decided to focus on products for Mac users only. Essentware S.A.'s office is registered in Panama, while the principal officers and developers of the company remain in Ukraine. Exaggerated warning messages Two class action lawsuits have been filed against Kromtech over the Mac OS version of PCKeeper, MacKeeper. The first lawsuit, filed in Illinois, alleges that "Contrary to ZeoBIT's marketing and in-software representations, however, neither the free trial nor the full registered versions of MacKeeper perform any credible diagnostic testing of a user's Mac. Instead, ZeoBIT intentionally designed MacKeeper to invariably and ominously report that the consumer's Mac needs repair and is at-risk due to harmful errors, privacy threats, and other problems, regardless of the computer's actual condition." The second complaint, filed by Holly Yencha of Pennsylvania, alleges that the free starter version of MacKeeper identifies harmless programs as "critical" problems. The complaint claims that "under MacKeeper's reporting algorithm, even brand new computers are in 'critical' condition and require repair by purchasing the full version." PCKeeper Live is sometimes installed on users' PCs with other partner programs, which can cause unwanted pop-up windows. PCKeeper Live PCKeeper Live offers 13 different PC services in 4 categories: Human Assistance (Find & Fix, Geek on Demand, Live Support), Security (Anti-Theft, Data Hider, Shredder, Files Recovery), Cleaning (Disk Cleaner, Disk Explorer, Duplicates Finder, Uninstaller) and Optimization (Context Menu Manager, Startup Manager). Popup ads PCKeeper is sometimes advertised with pop-up ads, many of which appear on pornography websites. Reviews PCKeeper Live is rated "Good" (3.5 out of 5) by the PCMag editor, who stated that it "improves PC performance" and is "great for novice users", that "The utility won't clean your PC as well as Iolo System Mechanic, but considering its accessibility and wallet friendliness, it's one to check out", and that "Kromtech PCKeeper Live's human specialists and unique price plans make this tune-up utility worthy of consideration, but rival applications offer better PC improvement". Techradar.pro rates it 3 out of 5 stars, noting that "Overall, PCKeeper is a light-weight tool that won't annoy you with pop-up windows", that "When you consider the fact most of PCKeeper Live's optimization features are easily found on most free tools, and that some were simply not useful at all, the price for the Kromtech solution seems unreasonably high." and that "Kromtech's 24/7 support is what elevates PCKeeper Live from an overpriced utility program to a tool that even a non-geek can use to optimize his/her computer." PCKeeper was reviewed by a wide range of tech experts, including reviews in Turkish, French, Portuguese, Spanish, and German. A review at PCMag described PCKeeper Live as follows: "Improves PC performance. Multiple price points. 
Advice from Microsoft-certified computer specialists. Anti-theft technology." The review gave it 3.5 out of 5 points. PCKeeper Antivirus PCKeeper Antivirus integrates Avira's Secure Anti-Virus API (SAVAPI), the official interface for Avira's anti-malware scanning engine. PCKeeper Antivirus does not block malware-hosting URLs or phishing URLs. According to Virus Bulletin, PCKeeper Antivirus PRO scored 96.1% in RAP tests and achieved a stability rating of Stable. PCKeeper Antivirus received a VB100 award. PCKeeper Antivirus (version 1.x) received two OPSWAT Gold Certifications, in the Antispyware and Antivirus categories. The German organization AV-Test.org tested 25 anti-virus programs and found that AhnLab, Microsoft Windows Defender and PCKeeper Antivirus were the lowest-performing anti-virus applications for Windows 8.1. Reviews PCMag rated PCKeeper Antivirus Good (3 out of 5), noting that PCKeeper Antivirus has a "Streamlined, attractive interface. Easy to use for non-techies. Good score in our hands-on malware blocking testing". However, it scored poorly in independent lab testing, and the chat-based support was "soured by the fact that they served me misinformation." PCMag concluded, "you'd be better off with Panda Free Antivirus 2015, an Editors' Choice for free antivirus. It slightly edged PCKeeper in my own tests and swept the field in lab tests, with high marks across the board. For the same price as PCKeeper, you could have Bitdefender Antivirus Plus 2015 or Kaspersky Anti-Virus (2015), both of which we named Editors' Choice for paid antivirus. Any of these is a better choice." Social campaigns In May 2013 PCKeeper let its customers decide how much to pay for the software, launching a pay-what-you-want campaign. See also Antivirus software Comparison of antivirus software Comparison of computer viruses References External links PCKeeper website Computer performance Computer system optimization software Utilities for Windows Antivirus software Utility software Slovak brands
3174826
https://en.wikipedia.org/wiki/Sony%20BMG%20copy%20protection%20rootkit%20scandal
Sony BMG copy protection rootkit scandal
A scandal erupted in 2005 regarding Sony BMG's implementation of copy protection measures on about 22 million CDs. When inserted into a computer, the CDs installed one of two pieces of software that provided a form of digital rights management (DRM) by modifying the operating system to interfere with CD copying. Neither program could easily be uninstalled, and they created vulnerabilities that were exploited by unrelated malware. One of the programs would install and "phone home" with reports on the user's private listening habits, even if the user refused its end-user license agreement (EULA), while the other was not mentioned in the EULA at all. Both programs contained code from several pieces of copylefted free software in an apparent infringement of copyright, and configured the operating system to hide the software's existence, leading to both programs being classified as rootkits. Sony BMG initially denied that the rootkits were harmful. It then released, for one of the programs, an "uninstaller" that only un-hid the program, installed additional software that could not be easily removed, collected an email address from the user, and introduced further security vulnerabilities. Following public outcry, government investigations, and class-action lawsuits in 2005 and 2006, Sony BMG partially addressed the scandal with consumer settlements, a recall of about 10% of the affected CDs, and the suspension of CD copy protection efforts in early 2007. Background In August 2000, statements by Sony Pictures Entertainment US senior VP Steve Heckler foreshadowed the events of late 2005. Heckler told attendees at the Americas Conference on Information Systems: "The industry will take whatever steps it needs to protect itself and protect its revenue streams ... It will not lose that revenue stream, no matter what ... Sony is going to take aggressive steps to stop this. We will develop technology that transcends the individual user. We will firewall Napster at source – we will block it at your cable company. We will block it at your phone company. We will block it at your ISP. We will firewall it at your PC ... These strategies are being aggressively pursued because there is simply too much at stake." In Europe, BMG created a minor scandal in 2001 when it released Natalie Imbruglia's second album, White Lilies Island, without warning labels stating that the CD had copy protection. The CDs were eventually replaced. BMG and Sony both released copy-protected versions of certain releases in certain markets in late 2001, and a late 2002 report indicated that all BMG CDs sold in Europe would have some form of copy protection. Copy-protection software The two pieces of copy-protection software at issue in the 2005–2007 scandal were included on over 22 million CDs marketed by Sony BMG, the record company formed by the 2004 merger of Sony and BMG's recorded music divisions. About two million of those CDs, spanning 52 titles, contained First 4 Internet (F4I)'s Extended Copy Protection (XCP), which was installed on Microsoft Windows systems after the user accepted the EULA which made no mention of the software. The remaining 20 million CDs, spanning 50 titles, contained SunnComm's MediaMax CD-3, which was installed on either Microsoft Windows or Mac OS X systems after the user was presented with the EULA, regardless of whether the user accepted it. However, Mac OS X prompted the user for confirmation when the software attempted to modify the OS, whereas Windows did not. 
XCP rootkit The scandal erupted on October 31, 2005, when Winternals (later acquired by Microsoft Corporation) researcher Mark Russinovich posted to his blog a detailed description and technical analysis of F4I's XCP software that he ascertained had been recently installed on his computer by a Sony BMG music CD. Russinovich compared the software to a rootkit due to its surreptitious installation and its efforts to hide its existence. He noted that the EULA does not mention the software, and he asserted emphatically that the software is illegitimate and that digital rights management had "gone too far". Anti-virus firm F-Secure concurred: "Although the software isn't directly malicious, the used rootkit hiding techniques are exactly the same used by malicious software to hide. The DRM software will cause many similar false alarms with all AV software that detect rootkits. ... Thus it is very inappropriate for commercial software to use these techniques." After public pressure, Symantec and other anti-virus vendors included detection for the rootkit in their products as well, and Microsoft announced it would include detection and removal capabilities in its security patches. Russinovich discovered numerous problems with XCP: It creates security holes that can be exploited by malicious software such as worms or viruses. It constantly runs in the background and excessively consumes system resources, slowing down the user's computer, regardless of whether there is a protected CD playing. It employs unsafe procedures to start and stop, which could lead to system crashes. It has no uninstaller, and is installed in such a way that inexpert attempts to uninstall it can lead to the operating system failing to recognize existing drives. Soon after Russinovich's first post, there were several trojans and worms exploiting XCP's security holes. Some people even used the vulnerabilities to cheat in online games. Sony BMG quickly released software to remove the rootkit component of XCP from affected Microsoft Windows computers, but after Russinovich analyzed the utility, he reported in his blog that it only exacerbated the security problems and raised further concerns about privacy. Russinovich noted that the removal program merely unmasked the hidden files installed by the rootkit, but did not actually remove the rootkit. He also reported that it installed additional software that could not be uninstalled. In order to download the uninstaller, he found it was necessary to provide an e-mail address (which the Sony BMG Privacy Policy implied was added to various bulk e-mail lists), and to install an ActiveX control containing backdoor methods (marked as "safe for scripting", and thus prone to exploits). Microsoft later issued a killbit for this ActiveX control. On November 18, 2005, Sony BMG provided a "new and improved" removal tool to remove the rootkit component of XCP from affected Microsoft Windows computers. MediaMax CD-3 Legal and financial problems Product recall On November 15, 2005, vnunet.com announced that Sony BMG was backing out of its copy-protection software, recalling unsold CDs from all stores, and offering to exchange consumers' CDs for versions without the software. The Electronic Frontier Foundation compiled a partial list of CDs with XCP. Sony BMG was quoted as maintaining that "there were no security risks associated with the anti-piracy technology", despite numerous virus and malware reports. 
On November 16, 2005, US-CERT, part of the United States Department of Homeland Security, issued an advisory on XCP DRM. They said that XCP uses rootkit technology to hide certain files from the computer user and that this technique is a security threat to computer users. They also said one of the uninstallation options provided by Sony BMG introduces further vulnerabilities to a system. US-CERT advised, "Do not install software from sources that you do not expect to contain software, such as an audio CD." Sony BMG announced that it had instructed retailers to remove any unsold music discs containing the software from their shelves. It was estimated by internet security expert Dan Kaminsky that XCP was in use on more than 500,000 networks. CDs with XCP technology can be identified by the letters "XCP" printed on the back cover of the jewel case for the CD according to SonyBMG's XCP FAQ. On November 18, 2005, Reuters reported that Sony BMG would exchange affected insecure CDs for new unprotected disks as well as unprotected MP3 files. As a part of the swap program, consumers can mail their XCP-protected CDs to Sony BMG and be sent an unprotected disc via return mail. On November 29, then-New York Attorney General Eliot Spitzer found through his investigators that, despite the recall of November 15, Sony BMG CDs with XCP were still for sale in New York City music retail outlets. Spitzer said "It is unacceptable that more than three weeks after this serious vulnerability was revealed, these same CDs are still on shelves, during the busiest shopping days of the year, [and] I strongly urge all retailers to heed the warnings issued about these products, pull them from distribution immediately, and ship them back to Sony." The next day, Massachusetts Attorney General Tom Reilly issued a statement saying that Sony BMG CDs with XCP were still available in Boston despite the Sony BMG recall of November 15. Attorney General Reilly advised consumers not to purchase the Sony BMG CDs with XCP and said that he was conducting an investigation of Sony BMG. As of May 11, 2006, Sony BMG's website offered consumers a link to "Class Action Settlement Information Regarding XCP And MediaMax Content Protection". It had online claim filing and links to software updates/uninstallers. The deadline for submitting a claim was June 30, 2007. As of April 2, 2008, Sony BMG's website offered consumers an explanation of the events, as well as a list of all affected CDs. Texas state action On November 21, 2005, Texas Attorney General Greg Abbott sued Sony BMG. Texas was the first state in the United States to bring legal action against Sony BMG in response to the rootkit. The suit was also the first filed under the state's 2005 spyware law. It alleged that the company surreptitiously installed the spyware on millions of compact music discs (CDs) that compromised computers when consumers inserted them into their computers in order to play. On December 21, 2005, Abbott added new allegations to his lawsuit against Sony BMG, regarding MediaMax. The new allegations claimed that MediaMax violated the state's spyware and deceptive trade practices laws, because the MediaMax software would be installed on a computer even if the user declined the license agreement authorizing the action. 
Abbott stated, "We keep discovering additional methods Sony used to deceive Texas consumers who thought they were simply buying music", and "Thousands of Texans are now potential victims of this deceptive game Sony played with consumers for its own purposes." In addition to violations of the Consumer Protection Against Computer Spyware Act of 2005, which allowed for civil penalties of $100,000 for each violation of the law, the alleged violations added in the updated lawsuit (on December 21, 2005) carried maximum penalties of $20,000 per violation. Sony was ordered to pay $750,000 in legal fees to Texas, accept customer returns of affected CDs, place a conspicuous detailed notice on its homepage, make "keyword buys" to alert consumers by advertising with Google, Yahoo! and MSN, and pay up to $150 per damaged computer, among other remedies. Sony BMG also had to agree that it would not make any claim that the legal settlement in any way constitutes the approval of the court. New York and California class action suits Class action suits were filed against Sony BMG in New York and California. On December 30, 2005, the New York Times reported that Sony BMG had reached a tentative settlement of the lawsuits, proposing two ways of compensating consumers who had purchased the affected recordings. According to the proposed settlement, those who purchased an XCP CD would be paid $7.50 per purchased recording and given the opportunity to download a free album, or could download three additional albums from a limited list of recordings if they gave up their cash incentive. District Judge Naomi Reice Buchwald entered an order tentatively approving the settlement on January 6, 2006. The settlement was designed to compensate those whose computers were infected, but not otherwise damaged. Those who had damages that were not addressed in the class action were able to opt out of the settlement and pursue their own litigation. A fairness hearing was held on May 22, 2006, at 9:15 am at the Daniel Patrick Moynihan United States Courthouse for the Southern District of New York. Claims had to be submitted by December 31, 2006. Class members who wished to be excluded from the settlement must have filed before May 1, 2006. Those who remained in the settlement could attend the fairness hearing at their own expense and speak on their own behalf or be represented by an attorney. Other actions In Italy, an association similar to the EFF also reported the rootkit to the Financial Police, asking for an investigation under various computer crime allegations, along with a technical analysis of the rootkit. The US Department of Justice (DOJ) made no comment on whether it would take any criminal action against Sony. However, Stewart Baker of the Department of Homeland Security publicly admonished Sony, stating, "it's your intellectual property—it's not your computer". On November 21, the EFF announced that it was also pursuing a lawsuit over both XCP and the SunnComm MediaMax DRM technology. The EFF lawsuit also involves issues concerning the Sony BMG end user license agreement. It was reported on December 24, 2005, that then-Florida Attorney General Charlie Crist was investigating Sony BMG spyware. On January 30, 2007, the U.S. Federal Trade Commission (FTC) announced a settlement with Sony BMG on charges that their CD copy protection had violated Federal law—Section 5(a) of the Federal Trade Commission Act, 15 USC 45(a)—by engaging in unfair and deceptive business practices. 
The settlement requires Sony BMG to reimburse consumers up to $150 to repair damage that resulted directly from their attempts to remove the software installed without their consent. The settlement also requires them to provide clear and prominent disclosure on the packaging of future CDs of any limits on copying or restrictions on the use of playback devices, and ban the company from installing content protection software without obtaining consumers' authorization. FTC chairwoman Deborah Platt Majoras added that, "Installations of secret software that create security risks are intrusive and unlawful. Consumers' computers belong to them, and companies must adequately disclose unexpected limitations on the customer use of their products so consumers can make informed decisions regarding whether to purchase and install that content." Copyright infringement Researchers found that Sony BMG and the makers of XCP also apparently infringed copyright by failing to adhere to the licensing requirements of various pieces of free and open-source software whose code was used in the program, including the LAME MP3 encoder, mpglib, FAAC, id3lib, mpg123, and the VLC media player. In January 2006, the developers of LAME posted an open letter stating that they expected "appropriate action" by Sony BMG, but that the developers had no plans to investigate or take action over the apparent violation of LAME's source code license. Company and press reports Russinovich's report was being discussed on popular blogs almost immediately following its release. NPR was one of the first major news outlets to report on the scandal on November 4, 2005. Thomas Hesse, Sony BMG's Global Digital Business President, told reporter Neda Ulaby, "Most people, I think, don't even know what a rootkit is, so why should they care about it?" In a November 7, 2005 article, vnunet.com summarised Russinovich's findings, and urged consumers to avoid buying Sony BMG music CDs for the time being. The following day, The Boston Globe classified the software as spyware, and Computer Associates' Security Management unit VP Steve Curry confirmed that it communicates personal information from consumers' computers to Sony BMG (namely, the CD being played and the user's IP address). The methods used by the software to avoid detection were likened to those used by data thieves. On November 8, 2005, Computer Associates decided to classify Sony BMG's software as "spyware" and provide tools for its removal. Speaking about Sony BMG suspending the use of XCP, independent researcher Mark Russinovich said, "This is a step they should have taken immediately." The first virus which made use of Sony BMG's stealth technology to make malicious files invisible to both the user and anti-virus programs surfaced on November 10, 2005. One day later, Yahoo! News announced that Sony BMG had suspended further distribution of the controversial technology. According to ZDNet News: "The latest risk is from an uninstaller program distributed by SunnComm Technologies, a company that provides copy protection on other Sony BMG releases." The uninstall program obeys commands sent to it allowing others "to take control of PCs where the uninstaller has been used." On December 6, 2005, Sony BMG said that 5.7 million CDs spanning 27 titles were shipped with MediaMax 5 software. The company announced the availability of a new software patch to prevent a potential security breach in consumers' computers. 
Sony BMG in Australia released a press release indicating that no Sony BMG titles manufactured in Australia had copy protection. See also Defective by Design List of compact discs sold with Extended Copy Protection List of compact discs sold with MediaMax CD-3 References Sources "Sony Music CDs Under Fire from Privacy Advocates", National Public Radio, 2005-11-04 Bergstein, Brian (2005-11-18). "Copy protection an experiment in progress". Seattlepi.com. Halderman, J. Alex, and Felten, Edward. "Lessons from the Sony CD DRM Episode" (PDF format), Center for Information Technology Policy, Department of Computer Science, Princeton University, 2006-02-14. Wikinews: Sony's DRM protected CDs install Windows rootkits Gartner: Sony BMG DRM a Public-Relations and Technology Failure Bush Administration to Sony: It's your intellectual property -- it's not your computer - 2005-11-12 MP3 Newswire article External links Academic article examining the market, legal, and technological factors that motivated Sony BMG's DRM strategy List of titles affected by MediaMax List of titles affected by XCP List of titles included in settlement SonySuit.Com - Tracking The Sony BMG XCP and SunComm Lawsuits "Sony anti-customer technology roundup and time-line", Boing Boing. In-depth analysis and references, Groklaw Revisiting Sony BMG Rootkit Scandal 10 years later 2005 scandals Digital rights management Sony Corporate scandals Business ethics cases Corporate crime Rootkits Windows trojans Compact Disc and DVD copy protection
1943822
https://en.wikipedia.org/wiki/Asianux
Asianux
Asianux was a Linux distribution based on Red Hat Enterprise Linux (RHEL), jointly developed by companies in Japan, China, and South Korea. This project was dissolved in September 2015. The Asianux trademark is held by companies in each country. History In December 2003, the Asianux project was started by two companies, Miracle Linux (Japan) and Red Flag Software (China). HaanSoft (Korea) joined the project in October 2004, and development was carried out by these three companies. Asianux Co. was established in December 2007, and Viet Software (Vietnam), WTEC (Thailand) and Enterprise Technology (Sri Lanka) also participated in the Asianux project. The product was sold in each country under the unified worldwide name "Asianux". The Asianux project was disbanded in September 2015. Participating companies Japan - Miracle Linux Co., Ltd. (CyberTrust Japan Co., Ltd.) China - Red Flag Software Co., Ltd. South Korea - Hancom Vietnam - Vietsoftware, Inc Thailand - WTEC Co., Ltd. Sri Lanka - Enterprise Technology Co., Ltd. Release history June 30, 2004 - Asianux 1.0 August 26, 2005 - Asianux 2.0 September 18, 2007 - Asianux Server 3 January 17, 2012 - Asianux Server 4 References External links Asianux website RPM-based Linux distributions X86-64 Linux distributions Linux distributions
4030329
https://en.wikipedia.org/wiki/NorduGrid
NorduGrid
NorduGrid is a collaboration aiming at the development, maintenance and support of the free Grid middleware known as the Advanced Resource Connector (ARC). History The name NorduGrid first became known in 2001 as the short name of a project called "Nordic Testbed for Wide Area Computing and Data Handling", funded by the Nordic Council of Ministers via the Nordunet2 programme. That project's main goal was to set up a prototype of a distributed computing infrastructure (a testbed), aiming primarily at the needs of high-energy physics researchers in the ATLAS experiment. Following evaluation of the then-existing Grid technology solutions, NorduGrid developers came up with an alternative software architecture. It was implemented and demonstrated in May 2002, and soon became known as the NorduGrid Middleware. In 2004 this middleware solution was given a proper name, the Advanced Resource Connector (ARC). Until May 2003, NorduGrid headquarters were at the Niels Bohr Institute; at the 5th NorduGrid Workshop it was decided to move them to the University of Oslo. The present-day formal collaboration was established in 2005 by five Nordic academic institutes (the Niels Bohr Institute in Copenhagen, Denmark, the Helsinki Institute of Physics in Finland, the University of Oslo in Norway, and Lund and Uppsala Universities in Sweden) with the goal of developing, supporting, maintaining and popularizing ARC. Deployment and support of the Nordic Grid infrastructure itself became the responsibility of the NDGF project, launched in June 2006. This marked a clear separation between Grid middleware providers and infrastructure services providers. To further support ARC development, NorduGrid and several other interested partners secured dedicated funding through the EU FP6 project KnowARC. The NorduGrid Collaboration is based upon a non-binding Memorandum of Understanding and is open to new members. Goals The NorduGrid Collaboration is the consortium behind the ARC middleware, and its key goal is to ensure that ARC is further developed, maintained, supported and widely deployed, while remaining free open-source software suitable for a wide variety of high-throughput Grid computational tasks. The ultimate goal is to provide a reliable, scalable, portable and full-featured solution for Grid infrastructures, conformant with open standards, primarily those developed in the framework of the Open Grid Forum. While ARC software development may and does often take place outside NorduGrid, the Collaboration coordinates contributions to the code and maintains the code and software repositories, as well as a build system, an issue tracking system and other necessary software development services. NorduGrid defines strategic directions for the development of ARC and ensures financial support for it. ARC Community The term "ARC Community" is used to refer to various groups of people willing to share their computational resources via ARC. A user group sharing resources on a reciprocal basis is formalized as a virtual organisation (VO), allowing the mutual use of such community resources. Contrary to popular belief, NorduGrid members are not required to provide computing or storage resources; nor does offering such resources grant automatic membership. Still, the ARC community as a whole owns a substantial amount of computing and storage resources. On a voluntary basis, and for the purpose of the open-source development process, community members may donate CPU cycles and some storage space to the developers and testers. Such resources constitute the testbed for the ARC middleware. 
Other than such donated community resources, NorduGrid does not provide or allocate any computational resources and does not coordinate worldwide deployment of ARC. Actual deployment and usage of ARC-based distributed computing infrastructures are coordinated by the respective infrastructure projects, such as NDGF, Swegrid (Sweden), the Material Sciences National Grid Infrastructure (M-grid) (Finland), and NorGrid (Norway). Apart from contributing computational resources, many groups develop higher-level software tools on top of ARC. This kind of development is not coordinated by NorduGrid, but assistance is provided by the Collaboration upon request. NorduGrid Certification Authority The NorduGrid Certification Authority (CA) is currently the only major infrastructure service provided by NorduGrid. This authority issues electronic certificates to users and services so that they can work in Grid environments. Present-day Grid implementations require X.509 certificates to validate the identity of Grid participants. The NorduGrid CA provides such certificates to individuals and machines associated with research and/or academic institutions in Denmark, Finland, Norway and Sweden. The NorduGrid Certification Authority is a member of the European Policy Management Authority for Grid Authentication (EUGridPMA). Related projects EU KnowARC project Nordic Data Grid Facility NGIn: Innovative Tools and Services for NorduGrid Swegrid (Sweden) Dansk Center for Grid Computing (Denmark) Material Sciences National Grid Infrastructure (M-grid) (Finland) Eesti Grid (Estonia) NorGrid (Norway) Swiss National Grid Association See also Advanced Resource Connector KnowARC Nordic Data Grid Facility Enabling Grids for E-sciencE European Grid Initiative European Middleware Initiative Open Science Grid UNICORE Open Grid Forum External links NorduGrid Web site NorduGrid Certification Authority KnowARC EU project contributing to the ARC middleware development Nordic DataGrid Facility, a Nordic project contributing to the ARC middleware development References Grid computing projects Grid computing products Information technology organizations based in Europe
3453905
https://en.wikipedia.org/wiki/General%20Order%20No.%2011%20%281863%29
General Order No. 11 (1863)
General Order No. 11 is the title of a Union Army directive issued during the American Civil War on August 25, 1863, forcing the abandonment of rural areas in four counties in western Missouri. The order, issued by Union General Thomas Ewing, Jr., affected all rural residents regardless of their allegiance. Those who could prove their loyalty to the Union were permitted to stay in the affected area, but had to leave their farms and move to communities near military outposts (see villagization). Those who could not do so had to vacate the area altogether. While intended to deprive pro-Confederate guerrillas of material support from the rural countryside, the severity of the Order's provisions and the nature of its enforcement alienated vast numbers of civilians, and ultimately led to conditions in which guerrillas were given greater support and access to supplies than before. It was repealed in January 1864, as a new general took command of Union forces in the region. Origin and provisions of the order Order No. 11 was issued four days after the August 21 Lawrence Massacre, a retaliatory killing of men and boys led by Confederate bushwhacker leader William Quantrill. The Union Army believed Quantrill's guerrillas drew their support from the rural population of four Missouri counties on the Kansas border, south of the Missouri River. These were: Bates, Cass, Jackson, and part of Vernon. Following the slaughter in Lawrence, Federal forces were determined to end such raiding and insurgency by any means necessary—no matter what the cost might be to innocent civilians. Hence, General Thomas Ewing, who had lost several lifelong friends in the raid, issued Order No. 11. Ewing's decree ordered the expulsion of all residents from these counties except for those living within one mile of the town limits of Independence, Hickman Mills, Pleasant Hill, and Harrisonville. The area of Kansas City, Missouri north of Brush Creek and west of the Blue River, referred to as "Big Blue" in the order, was also spared. President Abraham Lincoln approved Ewing's order, but he cautioned that the military must take care not to permit vigilante enforcement. This warning was almost invariably ignored. Ewing had issued his order a day before he received a nearly identical directive from his superior, Major General John Schofield. Whereas Ewing's decree tried to distinguish between pro-Union and pro-Confederate civilians, Schofield's allowed no exceptions and was significantly harsher. Ewing's order was allowed to stand, and Schofield would later describe it as "wise and just; in fact, a necessity." Text of General Order No. 11 Implementation of the order Order No. 11 was not only intended to retard pro-Southern depredations, but also limit pro-Union vigilante activity, which threatened to spiral out of control, given the immense anger sweeping Kansas following Quantrill's Raid. This meant that Ewing not only had his hands full with Confederate raiders; he equally had troubles with Unionist Jayhawkers, like James Lane and "Doc" Jennison. Convinced that Ewing was not retaliating sufficiently against Missourians, Lane threatened to lead a Kansas force into Missouri, laying waste to the four counties named in Ewing's decree, and more. On September 9, 1863, Lane gathered nearly a thousand Kansans at Paola, Kansas, and marched towards Westport, Missouri, with an eye towards destruction of that pro-slavery town. 
Ewing sent several companies of his old Eleventh Kansas Infantry (now mounted as cavalry) to stop Lane's advance, forcefully if necessary. Faced with this superior Federal force, Lane ultimately backed down. Order No. 11 was partially intended to punish Missourians with pro-rebel sympathies; however, many residents of the four counties named in Ewing's orders were pro-Union or neutralist in sentiment. In reality, the Union troops acted with little deliberation; farm animals were killed, and house property was destroyed or stolen; houses, barns and outbuildings were burned to the ground. Some civilians were summarily executed—a few as old as seventy years of age. Ewing's four counties, Jackson, Cass, Bates and the northern part of Vernon, became a devastated "no man's land," with only charred chimneys (soon nicknamed "Jennison's tombstones", after "Doc" Jennison) and burnt stubble showing where homes and thriving communities had once stood, earning the sobriquet, "The Burnt District." Historian Christopher Philips writes, "The resulting population displacement and destruction of property (lest it fall into rebel hands) prompted the nickname "Burnt District," as an apt description of the region." There are very few remaining antebellum homes in this area due to Order No. 11. Ewing wanted to demonstrate that the Union forces intended to act forcefully against Quantrill and other bushwhackers, thus rendering vigilante actions (such as the one contemplated by Lane) unnecessary—and thereby preventing their occurrence, which Ewing was determined at all costs to do. He ordered his troops not to engage in looting or other depredations, but he was ultimately unable to control them. Most of the troops were Kansas volunteers, who regarded all of the inhabitants of the affected counties as rebels with property subject to military confiscation. Although Federal troops ultimately burned most of the outlying farms and houses, they were unable to prevent Confederates from initially acquiring vast amounts of food and other useful material from abandoned dwellings. Ewing's order had the opposite military effect from what he intended: instead of eliminating the guerrillas, it gave them immediate and practically unlimited access to supplies. For instance, the bushwhackers were able to help themselves to abandoned chickens, hogs and cattle, all of which had been left behind when their owners were forced to flee. Smokehouses were sometimes found to contain hams and bacon, while barns often held feed for horses. Repeal and legacy of the order Ewing eased his order in November, issuing General Order No. 20, which permitted the return of those who could prove their loyalty to the Union. In January 1864, command over the border counties passed to General Egbert Brown, who disapproved of Order No. 11. He almost immediately replaced it with a new directive, one that allowed anyone who would take an oath of allegiance to the Union to return and rebuild their homes. Ewing's controversial order greatly disrupted the lives of thousands of civilians, most of whom were innocent of any guerrilla collaboration. The evidence is not conclusive as to whether Order No. 11 seriously hindered Confederate military operations. No raids into Kansas took place after its issuance, but historian Albert Castel credits this not to Order No. 
11, but rather to strengthened border defenses and a better-organized Home Guard, plus a guerrilla focus on operations in northern and central Missouri in preparation for General Sterling Price's 1864 invasion. The infamous destruction and hatred inspired by Ewing's Order No. 11 would persist throughout western Missouri for many decades as the affected counties slowly tried to recover. Author Caroline Abbot Stanley's 1904 novel Order No. 11 is based on the events surrounding the order. George Bingham and Order No. 11 American artist George Caleb Bingham, who was a Conservative Unionist and bitter enemy of Ewing, called Order No. 11 an "act of imbecility" and wrote letters protesting it. Bingham wrote to Gen. Ewing, "If you execute this order, I shall make you infamous with pen and brush," and in 1868 created his famous painting reflecting the consequences of Ewing's harsh edict (see above). Former guerrilla Frank James, a participant in the Lawrence, Kansas raid, is said to have commented: "This is a picture that talks." Historian Albert Castel described it as "mediocre art but excellent propaganda." Bingham, who was in Kansas City at the time, described the events: It is well-known that men were shot down in the very act of obeying the order, and their wagons and effects seized by their murderers. Large trains of wagons, extending over the prairies for miles in length, and moving Kansasward, were freighted with every description of household furniture and wearing apparel belonging to the exiled inhabitants. Dense columns of smoke arising in every direction marked the conflagrations of dwellings, many of the evidences of which are yet to be seen in the remains of seared and blackened chimneys, standing as melancholy monuments of a ruthless military despotism which spared neither age, sex, character, nor condition. There was neither aid nor protection afforded to the banished inhabitants by the heartless authority which expelled them from their rightful possessions. They crowded by hundreds upon the banks of the Missouri River, and were indebted to the charity of benevolent steamboat conductors for transportation to places of safety where friendly aid could be extended to them without danger to those who ventured to contribute it. Bingham insisted that the real culprits behind most of the depredations committed in western Missouri and eastern Kansas were not the pro-Confederate bushwhackers, but rather pro-Union Jayhawkers and "Red Legs," whom he accused of operating under the protection of General Ewing himself. The Red Legs were a paramilitary group of around 100 men who wore red gaiters and served as scouts during the Union troops' punitive expedition in Missouri; contemporaries accused them of spreading atrocities and destruction. According to Bingham, Union troops might easily have defeated the Bushwhackers if they had tried hard enough, and exercised a requisite amount of personal courage. However, Albert E. Castel refutes Bingham's assertions, demonstrating in his publications that Ewing made conspicuous efforts to rein in the Jayhawkers, and to stop the violence on both sides. He furthermore argues that Ewing issued Order No. 11 at least partly in a desperate attempt to stop a planned Unionist raid on Missouri intended to exact revenge for the Lawrence massacre, to be led by Kansas Senator Jim Lane himself (see above). 
Further scholarship indicates that although Bingham's son used the painting in 1880 to attack Ewing when he ran for Governor of Ohio, it did not prove to be the deciding influence in Ewing's narrow loss. President Rutherford Hayes, a Ewing family friend but a political opponent of Ewing's campaign, urged Ohio Republicans not to use the painting, as it would show Ewing's strong war record against the South, which ran contrary to his effort to portray Ewing as a weak business leader and a repudiationist on hard-money/soft-money issues. This more recent scholarship reviews Ohio newspaper accounts of the 1880 campaign, and indicates that Ewing, running as a Democrat, faced significant third-party challenges, and was trying to oust the Republicans during a time of economic prosperity—always a difficult political task, at best. See also Scorched earth Total war References Further reading Smith, Ronald D. Thomas Ewing Jr., Frontier Lawyer and Civil War General. Columbia: University of Missouri Press, 2008. External links Historic Lone Jack Missouri Partisan Ranger "Order No. 11 and the Civil War on the Border" by Albert Castel American Civil War documents American Civil War in art Missouri in the American Civil War Kansas in the American Civil War 1863 documents